Extracting Sky Router Crash Data Amidst Kernel Panics

Sky Hub

I have a Sky broadband connection with fibre into a Sky Router / Sky Hub. I have noticed very short outages of internet service with increasing frequency recently. The outages are short, maybe 3-5 minutes long, but annoying enough in the middle of an online meeting or some other synchronous activity. Sometimes two or three of these short outages occur in a relatively short time frame.

There did not seem to be a correlation to temperature, or even usage. In other words, the device doesn’t seem to be crashing because it is overheating under load or ambient temperature.

Extracting the Crash Data

It turns out that some enterprising engineer took the decision to embed crash or reset data in what is perhaps a surprising place, or at least a surprising place for easy user access. If one backs up the router configuration settings to a file, the result is an XML file that includes two interesting stanzas towards the end.
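For illustration, the reboot-cause stanza is shaped something like this; the element names are the ones the script below reads, the RebootInfo value is from a real crash, and the remaining values are elided:

```xml
<X_SKY_REBOOT_CAUSE>
  <RebootTime>1662515772</RebootTime> <!-- Unix epoch seconds -->
  <RebootReasonType>...</RebootReasonType>
  <RebootReasonCode>...</RebootReasonCode>
  <RebootInfo>kernel panic marker detected in the flash</RebootInfo>
</X_SKY_REBOOT_CAUSE>
```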


Now the X_SKY_COM_DEVICE_DOCTOR stanza also looks quite interesting, but the piece for us right now is X_SKY_REBOOT_CAUSE. You will note in this case that the RebootInfo data contains the following:

kernel panic marker detected in the flash

Which is interesting, if not encouraging. For those not in the know, a kernel panic indicates a type of crash in the base operating system of the device. It could be caused by faulty software (in this case, the device's firmware) or by some hardware problem. In Sky's case firmware updates are highly automated, so a firmware bug is quite possible, but it would then likely be experienced by many customers. If that's not the case, the smart money might be on faulty hardware.

Importantly, even if there is some problem in the broadband provision coming from the fibre, the router should not panic or crash – it should just deal with the problem, reconnect when possible and move on. That would be obvious in the device logs.

I called Sky to report this, and the first conversation wasn't too productive, if it wasn't surprising either: the request to turn the device off and on again. Not bad advice, but not successful. I was then told to do a total factory reset, which was a time-consuming pain and didn't fix the problem.

By this point, I'd started to write a very small Python script to automate extracting the crash data and time-stamping it, assuming the backup file was downloaded by hand. To complete the script I really need to automate the web request part, which I haven't attempted yet as it looks like I need to handle the session data: not insurmountable, but a bit of work. Here is that short unvarnished script.

import xml.etree.ElementTree as ET
import datetime as dt
import os

# Extract the XML from the settings file and get the root
tree = ET.parse('sky_router_settings.conf')
root = tree.getroot()

# Look for the X_SKY_REBOOT_CAUSE stanza should it exist
for item in root.findall('.//X_SKY_REBOOT_CAUSE'):
    # It does, so let's extract the details
    reboot_time = int(item.find('RebootTime').text)
    reboot_reason_type = item.find('RebootReasonType').text
    reboot_reason_code = item.find('RebootReasonCode').text
    reboot_info = item.find('RebootInfo').text
    # Make an ISO datetime from the Unix epoch timestamp
    reboot_format_time = dt.datetime.utcfromtimestamp(reboot_time).strftime("%Y-%m-%d %H:%M:%S")
    # Write the data if it isn't already there (in the current working directory)
    if not os.path.exists(reboot_format_time):
        print(f"Found new crash data... {reboot_format_time} {reboot_info}")
        try:
            # 'x' mode fails if the file already exists, a belt-and-braces check
            with open(reboot_format_time, 'x') as fh:
                print(f'Time:{reboot_format_time}', file=fh)
                print(f'Type:{reboot_reason_type}', file=fh)
                print(f'Code:{reboot_reason_code}', file=fh)
                print(f'Info:{reboot_info}', file=fh)
        except OSError as e:
            print(f'oops, something went wrong: {e}')

So, to use this, one logs into the Sky router (browse to whatever address your router is on within your network), then go to Maintenance and Backup Settings. Drop the saved file in the same directory as the script, and run it. I was doing this periodically to check for crashes I had not witnessed. It will save any new data into a file with the timestamp as a filename in the same directory. Crude, but it works.
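Since each record's filename is its ISO timestamp, a lexical sort is also a chronological one. A tiny companion helper (my own afterthought, not part of the script above; the '20' prefix check is a crude assumption about the filenames) lists what has been collected so far:

```python
import os

def list_crash_files(directory='.'):
    """Return collected crash-record filenames in chronological order.

    Relies on the records being named with ISO timestamps, so they all
    start with '20' and sort lexically into date order.
    """
    return sorted(f for f in os.listdir(directory) if f.startswith('20'))

for name in list_crash_files():
    print(name)
```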

Armed with a number of crash events I called Sky back. What followed was a highly frustrating conversation for all sides, where I was advised that I had to plug the hub into a different electrical socket. I duly did so, and incidentally noticed that the hub records a different reason for the reset.

Power On Reset detected

In other words, the hub notes when it was power cycled. To the surprise of virtually no-one, changing the socket the hub was plugged into did not prevent the crashes. My script detected another one just before 2 am yesterday.

Time:2022-09-07 01:56:12
Info:kernel panic marker detected in the flash

I called Sky again and finally had a constructive conversation – they are sending me a new hub to test. Hopefully this will solve the problem. I doubt it’s a firmware issue or it would have been more widely reported.


I think I will probably bite the bullet and use Python to download the backup file too. If I can get that bit working I can rig the whole thing up to cron to check for crash data automatically. New hub or not, keeping track of these crash events would be useful.
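For reference, the cron half of the plan would just be a crontab entry along these lines; both paths are placeholders, and it assumes the backup file is already being fetched into that directory:

```
# Hypothetical crontab entry: check the backup for new crash data hourly.
0 * * * * cd /home/me/sky-crashes && /usr/bin/python3 sky_crash_extract.py
```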

It’s curious that the hub obviously does this hard work of storing crash data, but this doesn’t seem to be transmitted to Sky which would really help them diagnose problems when customers call.


Unfortunately a new hub has not solved the problem, so it looks as though it may be a firmware problem after all. I have collected some more messages that can appear in the reboot reason. Here is my list so far:

kernel panic marker detected in the flash
Power On Reset detected
The new firmware image downloaded by FUS is being written to flash. The device may REBOOT
CPE has been software resetted (possibly watchdog timeout, if no other indicators

exim4 upgrades and configuration fragility

Last night I decided I’d catch up on sysadmin tasks. Some of that was trying to tighten up my spam filtering again. I had got in place a per-user Bayesian filter on spamassassin, which essentially should allow it to learn a much more individual pattern of what each user considers spam. I also had configuration for having my mail server (running exim) reject mail within the SMTP session for very egregious examples. That configuration hasn’t been working for a while – I had to revert lots of custom changes to my config for a significant exim4 upgrade a while back, and I haven’t had the time and patience to try and reinstate it all. So I thought I’d look at that.

I ran a standard apt update and upgrade before I started. I noted exim4 and its various binary packages were marked for upgrade, which isn't unusual, and proceeded (the upgrade was between versions 4.95-RC2-1 and 4.95-1). Debian warned me of a change to exim4.conf.template; I examined the diff briefly, didn't see anything extraordinary, and retained my config. In any case, I am using a distributed config in conf.d, so expected to see a more targeted diff on one of those. I didn't. exim restarted without complaint.

I started using tail to follow my exim logs, and could immediately see that every single inbound message was being temporarily rejected, albeit with no information as to why. I spent probably the guts of an hour checking the changes between my configuration files and the .dpkg-dist versions (the newly shipped ones) and couldn't see the problem. I tried copying over the exim4.conf.template and updating the configuration with update-exim4.conf, and still I had the same problem. I checked the changelog and didn't see anything profound that should really be a worry.

In the end I had to downgrade all the exim4 packages, and my mail started to be delivered again. Of course, this only buys me a little time to either find the problem, or hope it’s something upstream in the Debian package. I maybe should report this as a bug, but I feel I don’t yet have enough information.

However, it really got me thinking just how fragile exim4 configuration seems to be. I need to add a few tweaks to the shipped configuration if I want more effective handling of some things, such as my multiple domains, or being able to handle email addresses with extra bits like <name>-<website>@<domain> so I can filter email from various sites (and determine who sold my data). But in the main the challenges are in more effective spam filtering. All of those tweaks can be really easily disrupted by an update. I've been thinking of trying to generate a patch set against the default configuration, so that an upgrade becomes a matter of accepting all the new config files and then applying the patch set, but that will have its own problems. greylist – a package that temporarily rejects initial attempts to deliver mail from unknown servers – uses this patch approach.
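The patch-set idea can be sketched like this, using a scratch directory rather than the live /etc/exim4 tree; the file name and directives are purely illustrative, not my actual config:

```shell
# Keep local tweaks as a diff against the shipped config, accept the
# maintainer's new file on upgrade, then re-apply the diff.
set -e
work=$(mktemp -d); cd "$work"
# Pretend this is the shipped config, and our locally tweaked copy of it:
printf 'acl_smtp_rcpt = acl_check_rcpt\n' > main.conf.dpkg-dist
cp main.conf.dpkg-dist main.conf
printf 'av_scanner = clamd:/var/run/clamav/clamd.ctl\n' >> main.conf
# 1. Record the local deviations (diff exits non-zero when files differ):
diff -u main.conf.dpkg-dist main.conf > local.patch || true
# 2. On upgrade, accept the maintainer's new version wholesale:
cp main.conf.dpkg-dist main.conf
# 3. Re-apply the recorded tweaks:
patch main.conf local.patch
grep 'av_scanner' main.conf
```

The failure mode, of course, is that the patch stops applying cleanly when the shipped config changes underneath it, which is the same problem in a different shape.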

In the meantime, if you upgrade exim4 and suddenly have all inbound messages being temporarily rejected you are not alone, but I can’t yet explain why. For me a temporary downgrade was the only solution.

aptitude install exim4=4.94.2-7 exim4-base=4.94.2-7 exim4-config=4.94.2-7 exim4-daemon-heavy=4.94.2-7

20 years since shodan – reflections on gradings, mastery and imposter syndrome

Shodan Certificate in Aikido

Today (5th August 2021) marks twenty years since I first graded to shodan (the first black belt grade) in a martial art. It might come as a surprise to many non martial artists that there are multiple black belt grades, and that a black belt does not represent the end of a journey but more of a proper beginning.

I dug out my shodan certificate recently when I realised this milestone was approaching. It was carefully stored in a filing cabinet since I’d taken it out of a frame when I replaced it with a nidan (2nd dan) certificate. That was a mistake, but probably born out of the circumstances.

I started training in aikido on the 16th of August 1999, with a bit of a rocky start, but eventually I built up some momentum and attended all the classes and weekend courses I could, taking up iaido at the same time. I was probably taking instruction from around six people across two martial arts, which generally was a good thing, as I learned that different people had different talents and approaches, some of which my own personal style resonated with more than others.

In the August of 2001, I was at the Summer School of our Aiki No Michi association in Galway, intending to grade for 1st Kyu – the last grade before the black belt grades. My uke (partner) for the main part of the grading was a gentleman I'd never met before, but who was in much the same position. We were grading alongside those who were challenging various black belt grades, and the late Alan Ruddock sensei led the grading, under the watchful eye and opinions of the late Henry Kono sensei. About halfway through the grading, Alan made some comment that while he knew we were grading for 1st Kyu he just wanted to watch us for a bit longer. I remember thinking that I couldn't consider the possible implications of that, so I just decided to carry on as if it had not been said.

The grading continued, and eventually we each did our minute or so of multiple attackers, I think with six each. It’s something of a simulation obviously but there are real challenges in dealing with six people when you are already pretty tired and you’re trying to land each person in a safe spot.

Afterwards, as Alan announced the results I was stunned that he'd decided to award me shodan rather than the 1st Kyu I was attempting, and I was in excellent company with many friends who had been grading for shodan that year. The result, however, was that I had an even bigger slice of imposter syndrome than many people who first get their black belt. At the same Summer School I interviewed Alan and Henry, and when speaking to them about grading Alan made some comment about how when one gets to black belt you realise you still don't know that much yet – and I feel he looked pointedly at me when he said it. It provided quite a bit of relief. I told myself many times that Alan had a choice not to award me shodan, but still the feeling of uncertainty persisted.

Just over a year later (18th August 2002) I obtained my shodan in Muso Jikiden Eishen Ryu Iaido in Leeds, England, under a panel led by the late great Nishimoto sensei, though Iwata sensei’s name is on the certificate. I remember that grading keenly – it was a very hot day and drips of sweat fell on the wooden floor when we bowed in at the start. That shodan certificate has stayed on my wall.

But when I got my nidan in aikido a few years later I replaced my shodan certificate in the frame. I think nidan had been important to me to reassure me the shodan had been legitimate, but in retrospect it was a terrible mistake to remove that shodan certificate from the frame and from the wall. Obtaining that grade from Alan Ruddock remains one of the most important moments in my life. So I have placed it freshly in a new frame to go back on a wall somewhere, and today will be a good day to do that.

Shodan Certificate in Aikido

Incidentally, my yondan (4th dan) certificate in aikido, signed by Anita Bonnivert sensei, is on my wall; having seen Anita's aikido I was very honoured to receive that grade. And I've just realised my nidan (2nd dan) in Iaido (from Stephen Bentham et al.) has no certificate on the wall, so I have some homework to do checking the dates and details of that – it was a long time ago.

Martial arts are about a lot more than physical technique; the suffix "do" on the end of many traditional Japanese martial arts means "way" (as "tao" in Chinese). They usually have a deep culture of mental introspection. I have certainly learned a lot about myself in the last 20/22 years, and a lot about other people, and a lot about human interaction in both physical and non-physical spaces. I've applied a lot of that learning in many other facets of my life.

A lot of us wrestle with imposter syndrome. I sometimes feel that it’s a good sign to do so. It can be linked to our expectation of “mastery” as a destination, but the concept of do/tao is that mastery is a continuing journey and never a destination. This concept is explored in a lot of detail in George Leonard’s book: “Mastery: The Keys to Success and Long-Term Fulfillment”, and I’d really recommend this book to anyone interested in the challenges of self improvement in any sphere.

Gradings are funny things, and I've written more about them elsewhere; they shouldn't really define us, but they do mark important milestones on our continued journeys, and so they are worth remembering and celebrating.

Beef in Soy Sauce

This is a version of a recipe my Mum had that I have experimented with for a slow cooker format. I've pared this down to the simplest version I can. I usually run this recipe by eye, but one of my daughters wants to have a go at it, so I decided I'd actually work out the proportions, more or less. This version of the recipe requires very little in the way of process and benefits from being left all day. Slow cookers are very cheap, and you can leave this on the go for many hours – almost the more the better.


Here are indicative amounts of each ingredient; you might want to scale these depending on the number of people to serve. You may also want to play with the proportions a bit. These proportions will serve 4 or more depending on appetite and helpings.

  • 800g diced or strips of beef
  • 170ml white wine
  • 100ml soy sauce (and some more for serving)
  • 30g cornflour
  • Plenty of frozen peas
  • 90g Basmati rice per person
  • Beef stock cube (optional)

The wine doesn’t need to be anything particularly special but some very cheap wine used for cooking can be very sharp and is probably best avoided.

I typically use Kikkoman soy sauce because it’s not as strong as some dark soy sauce and not as light as some light soy sauce. If you’re using something very dark, use less to begin with.


You can coat the meat in the cornflour to start, but you don’t have to if you’re feeling lazy. I increasingly don’t bother. The very simplest version of the recipe is to add the beef into the slow cooker and then the wine and soy sauce. With a slow cooker it is difficult to reduce fluid effectively and some will emerge from the beef, so err on the side of adding too little initially. The beef should not be fully covered.

If you have the time, leave this on "low" in the slow cooker for several hours. Typically the more the better as long as it doesn't dry out, but a minimum of four hours and maybe up to around eight as a guide. If you're about, give the beef a stir at some point to break up the bits that stick together, but this can be done later. If you have less time, put the cooker on high; it will still likely take several hours, and the beef may not be as tender, but it is quicker.

If you didn't start off coating the beef with cornflour, you can add the cornflour the "correct" way, mixed thoroughly with a small amount of water, but as long as you put it in long enough before serving you can just add it in, stir, and leave for 30 minutes or so. If you have your slow cooker on low, turn it to high for a while to cook out the cornflour and thicken the sauce. You can also add a beef stock cube if you like. If you have time and want to thicken the sauce you can also leave the lid slightly ajar. Add a little more cornflour at that point if it's not thick enough. If the sauce is too thick, add a little more wine, or water.

Taste the sauce and add more soy sauce if need be; if the wine has left things with too sharp a taste you can add a pinch of sugar to even it out.

When you are about fifteen minutes away from serving, add some vegetables. The simplest addition would be frozen peas which will heat through, add plenty as per personal taste. Broccoli (not frozen) also works well.

Wash the rice thoroughly in a sieve and place in a saucepan. Add water (1.5 ml for every gram of rice) to the saucepan, and bring just to the boil. Turn the heat off, leave the lid on and let the rice steam for 10 minutes.

Serve the sauce over a bed of rice for each person, serve with more soy sauce. Any remaining beef and sauce can be refrigerated or frozen.

Budgeting for Assessment

Workloads for Academics in Higher Education are often very complex, with teaching loads, research tasks and administration all jostling for our attention, and lots of task switching adding to the complexity. For many academics, teaching loads are a significant part of their work, but explicitly looking at the time spent on assessment could bring better results for staff and students alike.

Contact Hours to Admin Hours

There are usually ad-hoc assumptions about the amount of administration time a module takes above and beyond the contact hours spent in front of a class. For our purposes contact hours could just as easily be delivering synchronous and asynchronous hours on-line as time spent in more traditional on-campus delivery.

A common such assumption is that it takes two hours of this administration time for each hour of contact, sometimes more. That administration time can be subdivided into various tasks such as preparation of teaching materials, delivery of assessment, and other correspondence with students.

Preparation time on teaching materials can obviously be markedly higher for the first presentation of a module, or after significant changes. Many academics are already undertaking substantial additional preparation time to re-factor materials for on-line delivery at the moment.

Assessment Hours

I want to focus on the time spent on assessment because I feel this is a serious time sink for most academics. This is partly because when it comes to reform of learning and teaching in a curriculum, assessment is often the last consideration because we are nervous about the serious consequences of getting things wrong. It is also partly because we can sometimes draw the conclusion that time spent on assessment equates directly to quality.

How often have you attempted to place a budget on your assessment time before delivering a module? I mean the time taken to design an assessment, deliver it to students, assess the submissions and deliver feedback. My guess is that very few of us have done this.

The outcomes of this can be serious. We often design assessments, focusing on the first half of these tasks, that take a tremendous and unquantified amount of labour to fully deliver, often much more than we really expected. This can result in a very stressed academic or team of academics, feedback delivered too late to be of effective use to the students, or feedback whose quality and depth suffers. Any combination of these outcomes is also possible.
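To make the point concrete, here's a back-of-envelope sketch; every number in it is invented for illustration, not a claim about any real module:

```python
# Back-of-envelope assessment budget with made-up numbers.
cohort = 120             # students submitting
design_hours = 10        # designing and setting the assessment
minutes_per_script = 20  # marking one submission
feedback_minutes = 10    # writing individual feedback for one submission

marking_hours = cohort * (minutes_per_script + feedback_minutes) / 60
total_hours = design_hours + marking_hours
print(f"Total: {total_hours:.0f} hours")  # 120 scripts quietly become 70 hours
```

Half an hour per script sounds modest until it is multiplied by the cohort; that multiplication is the part we rarely do in advance.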

Budget influences Design, poor Design blows the Budget

Agreeing a budget with a line manager, or even with yourself, can be informative. If you think the budget is too low you are faced with the choice of making the argument that additional resource is really required, or that you need to re-design the assessment to fit within your budget.

Of course, there are often times when additional resource really is required, but my argument here is that this should be a conscious choice, planned for and if possible agreed with your line manager who may be able to bring practical assistance, or at least balance out the rest of your workload.

Even if you agree a high budget, a good plan, and a good design minimises the risk of blowing that budget.

Design Choices

So what choices can we make to reduce the time burden? Some choices not only have no adverse effect on quality, but can actually deepen the quality of feedback or reflection opportunities for students. Here’s a very non-exhaustive list of thoughts in this direction.

  • Do you really need all those questions to confirm your learning outcomes? Do you have some questions that are just repeating the assessment of the same aspects? Trim them if so. Extra material can be used for tutorials instead.
  • Have you considered the use of a good rubric if you aren’t already using one? This can improve transparency of outcomes to students both before and after assessments and provide some generic feedback, leaving you with more time to give more focused feedback and can hugely improve the speed of marking.
  • Can you partially automate some of the assessment? If assessments are being delivered on-line many Virtual Learning Environments allow you to set assessments with questions with set or calculated answers so some of the marking and feedback can be automated. You can combine these with deeper more free response questions.
  • Can peer assessment accomplish some of your goals? If you are nervous about using peer assessment (and it does need care) what about using it in a formative way as part of your assessment diet. This can also greatly deepen students’ understanding of how their work is marked and assessed.
  • Can self assessment accomplish some of your goals? This can encourage highly reflective learning and allow you to guide the feedback based on the students’ initial assumptions.

What are your ideas to reduce your assessment budget while keeping, or even deepening, the quality?

Even if you don't undertake this formally with your line manager, try setting yourself an assessment budget, and consider how to work within it so that you can deliver authentic assessments and quality feedback in a way that leaves you time, focus and attention for the other parts of your job.

The Tyranny of Resilience and the New Normal

Resilience - you keep using that word. I do not think it means what you think it means.

In the midst of the COVID-19 pandemic there is a word and a short phrase that are both in very common usage. All too often they are used in unhelpful and arguably incorrect ways.

Elasticity has Limits

Resilience has an interesting etymology, coming from the Latin ‘resilire’, ‘to recoil or rebound’. It came to encompass ideas of elasticity, naturally enough, as its meaning evolved over centuries. From an engineering point of view, the most important aspects of elastic behaviour for our purposes are that:

  1. An elastic object returns to its previous state after the load is removed;
  2. Past a certain load, elastic behaviour is no longer seen; permanent change remains after the load is removed.
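In engineering terms, the two behaviours above correspond to Hooke's law in the linear-elastic regime and to plastic deformation beyond the yield point (a standard result, restated here only for the metaphor):

```latex
\sigma = E\,\varepsilon \quad \text{for } \sigma < \sigma_y
```

where \(\sigma\) is stress, \(E\) is the Young's modulus, \(\varepsilon\) is strain, and \(\sigma_y\) is the yield stress; once \(\sigma\) exceeds \(\sigma_y\), some deformation remains after the load is removed.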

You might already see where I am going here.

The Temporary Abnormal

The “New Normal” contains a similar unspoken assumption: that we have passed through a phase of one normality, through some transition, into a new normality.

But the truth is we are still under unusual load, still in the transition, and this is not the new normal. Using that phrase to imply otherwise can be disrespectful, and distressing, to people who are struggling with the everyday circumstances of the pandemic and, in many cases, the constantly changing and/or escalating demands it places on them.

It's like that unhelpful use of the word "resilience" – used by some people allocating load to imply that coping with trying circumstances is purely the responsibility of those under extreme load. That usage reminds me of a quote from the Hitch Hiker's Guide to the Galaxy.

We have normality. I repeat, we have normality. Anything you still can’t cope with is therefore your own problem.

Douglas Adams

But in reality the problems that emerge in the temporary abnormal are a shared responsibility – from recognition to resolution. Individuals should not be left to feel they are on their own and that if they were only resilient enough everything would be fine, regardless of the load.

We must all remain mindful that resilience has its breaking point, its “elastic limit” and we have a shared responsibility to pay attention to this.

The New Normal is yet to come

There will indeed be a new normal, but it’s going to take quite a while to emerge. Some of this will be enforced upon us, some of this will be negotiated, and some of it will be opportunistically seized.

Much innovation will come in the transition phase, "the temporary abnormal", and we can expect a great deal of it to survive into the post-COVID-19 acute phase. Many old assumptions will be revisited and fall away. There's room for optimism and enthusiasm here: while many things will not rebound to how they were before, we have a wonderful opportunity to shape the future that will eventually emerge.

Bread without commercial yeast – Sour-dough

The bubble structure of sourdough is often more varied

At the time of writing, an extraordinary number of people are in some sort of lockdown, trying to go to the shops less often, and finding, when they do, that certain items are in short supply. One of those items is commercial yeast.

If you still have access to flour (preferably strong flour), salt and water, you are still in business.

The bubble structure of sourdough is often more varied. This loaf has cheddar cheese within it too.


Sourdough is the oldest form of leavened bread – it’s basically using the natural yeasts (and bacteria) in our environment instead of commercial yeast. It is in some ways, a bit more tricky than normal bread making and in some ways more straightforward. It’s very in vogue at the moment, and there’s a wealth of resources out there, but I thought I’d summarise what had worked for me, and refer you to some of those if this is something that interests you because the sheer volume out there can be overwhelming.

Starter or Mother Culture

The heart of sour-dough is making a starter, or a culture from which you can always get yeast. To begin with you will need to provide food for the natural yeasts found in the air, or to get started more quickly, some other organic matter.

Try to find a jar with a reasonably wide mouth. A kilner jar works well for me, but if you use one, remove the rubber gasket so you can close the jar without it being 100% airtight. You can get all sorts of culture jars (and indeed whole starters) on-line if you want.

I used this approach with some green grapes to begin my starter. But you can use live yoghurt, or quite a variety of things, to provide some initial yeasts. Keep your starter nice and warm, but not hot, when you are growing and feeding it. You probably need to feed it for a week to get it really going, and even then it might be quite weak until it's a couple of weeks old.

You need to feed your starter periodically. To do this, discard (or use) a little of the starter, and then mix 100 g strong flour and 100 ml of warm water (I use a fork in a measuring jug) and add this to your starter culture. The culture will rise – often dramatically – when fed and fall as the flour is consumed and the growth of yeast slows. How often you need to do this depends on how often you use the culture. If you are getting it going, do this once per day. If, once you have the starter going you don’t have time to do this often, put the culture in the fridge when not in use. It won’t die but will slow its replication. You can feed it once a week or so in this way.

If you get a brown liquid layer on the top of the culture after a few days of neglect, don’t worry. Some people stir this back in. I just discard it and feed. If for whatever reason your starter really fails to take off or goes bad, you can of course start again.

If you plan to bake a sour-dough loaf it’s a good idea to feed your culture some hours in advance. It will bubble and rise as the yeast consumes the flour. You ideally want to use the culture while it’s still rising (because the flour isn’t all spent).

Making bread, first proof

There are as many sourdough recipes for loaves as for bread with commercial yeast. You’ll find it’s much more dependent on your environment than commercial yeast but this makes it quite interesting. Like a lot of bread making, it often follows a pattern of letting the dough rise twice. The difference is you need to allow more time. This can be a nuisance, but it’s also, for me, what makes sourdough fit in with a busy schedule more than regular bread, because you can do one of these stages in the fridge.

I usually use a recipe somewhere between this one and this one.

  • 375 g/ 13 oz strong white flour
  • 250 g/ 9 oz sourdough starter
  • 10 g (2 tsp) salt
  • 10 g (2 tsp) brown sugar
  • 130-175 ml / 4-6 fl oz tepid water
  • olive oil, for kneading

Salt is an important component of making bread, for structure as well as flavour. I sometimes use just a smidgen more than usual, as above (you can use 7 g if you find this too much). Brown sugar is optional but can help the dough along.

Add the water gradually; the dough can go from too dry to awkwardly wet in short order. It's easier to work with drier dough, but I'd recommend having it just a little wet to help the next stages. Remember the starter is already adding a lot of liquid content (and flour).

Sourdough normally takes a bit more time to knead and stretch. I have large hands and sometimes find the proposed times in recipes are too long. Take the guesswork out by doing the windowpane test: cut off a piece of dough and stretch it up to the light. If it can be made translucent without quickly breaking, then the structure is elastic enough and you are ready to let it rise.

Put the dough in a bowl (the one you used to mix it is fine), cover with a wet tea towel, and leave to rise. It will tend to spread out more, and rise less vertically, than regular bread, and be slower. It'll probably take three hours or so, but you will find your own routine with this.

Leave the dough under a wet tea towel to rise.
Leave the sourdough to rise under a wet tea towel; it will tend to spread out more and rise less vertically than dough with commercial yeast.

Second rise

Now take out your dough and gently shape it. Knock out some of the air but don’t be too rough. If you fancy trying to incorporate other ingredients, like bacon, cheese or sundried tomatoes, you can do it now. Don’t add too much at first if you are experimenting. The loaf pictured here had some cubes of cheddar, a little less than 1 cm a side, added at this stage. If you have a banneton (or proving basket), liberally sprinkle it with flour, but you can make do with a bowl and towel. See this video for details, which is also a really good reference anyway.

Try your best to get the dough back into a ball, or at least seal the seam as best you can.

If you have sesame seeds or poppy seeds you’d like to add, just wet your hand, dampen the top of the dough, and sprinkle them on. Grated cheese also works well, used sparingly.

Gently knock back the dough to remove excess air, then reshape, ready to go in a floured bowl.

Place your dough top down into your bowl. You can cover this (I use a clean bin bag that I will next use in my pedal bin after I finish using it for this). This second rise can take quite a while – maybe up to five hours. But for me, this is the best part of sourdough: you can place it all in the fridge. The rise will continue, but very slowly. In the fridge give it 9–12 hours or more; overnight is best.

So you can start your dough around 8 pm, let it rise for three hours, shape it and put it in the fridge, and bake it in the late morning. Again, you will find your own routine.

The dough is ready when you press it slightly and the indentation remains or bounces back only very slowly.

In the Oven

Carefully extract your dough from your bowl or banneton; it may be quite sticky, especially if you didn’t flour the banneton enough.

Make a few slashes in the top of the dough with a very sharp thin knife. These help regulate the rise.

Some people like to bake the bread in a Dutch oven – a cast-iron pot – with an initial bake with the lid on, and then a while with the lid off. I don’t possess such a thing. You can use a baking tray, but I tend to use my invaluable pizza stone. Again, work out what works for you.

I preheat the oven to 230C or 210C fan. You need a good hot start so make sure it is up to heat. You can add a deep tray with cold water in the bottom shelf of your oven to help get a nice deep crust. I bake for 25 minutes at that temperature and then drop the oven temperature by 20 degrees for the last 10 minutes. Again, you will find your own sweet spot depending on your oven.

A finished sesame sourdough loaf
A finished sesame and cheese sourdough loaf

Share and share alike

Once you get your starter culture going, it’s easy to give some to someone else. Just pour some into a clean jar, and feed. Feed the rest of your original culture and you can now easily give half away.

Battleships Server / Client for Education

Game overview for admins

I’ve been teaching a first year introductory module in Python programming for Engineering at Ulster University for a few years now. As part of the later labs I have let the students build a battleships game using Object Oriented Programming – with “Fleet” objects containing a list of “Ships” and so on where they could play on one computer against themselves. But I had often wanted to see if I could get as far as an on-line server application based on similar principles that the students could use. This would teach them client server fundamentals but also let them have fun playing against each other.

Last year I finally attained that goal. I built a very simple web-server using Python Django to manage players, games, and ships. I wanted to use Python partly because it’s become my language of choice for web applications, but also because I wanted the code to be as accessible and understandable to the students as possible. I have initially placed very little emphasis on the User Interface of the server – this is partly deliberate because I wanted the students to have a strong incentive to develop this in the clients. I did however build a very simple admin view to let me see the games in progress. Additionally Django provides a very easy way to build admin views to manipulate objects. I have enabled this to let admins (well me) tweak data if I need to.

The Server

The server provides a very simple interface – an API – to let the students interact with the game. They can do this initially very directly using a web browser. They can simply type in specific web addresses to take certain actions. For instance


will, for a battleships server installed on battleships.server.net (not a real server), list all the current active games. You’ll note the api/1.0/ part of the URL. I did this so that in future years I could change the API and add new version numbers. That’s partly the focus of a later section of this blog.
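The idea of the version segment in the URL can be illustrated with a toy dispatcher. To be clear, this is not the actual Django routing from the repository, just a sketch of why embedding the version in the path helps:

```python
# Toy illustration of version-prefixed API routing (not the real server's
# Django code): the version segment in the URL selects a handler, so a
# future api/2.0/ can be added without breaking existing 1.0 clients.

def games_index_v1():
    # Stand-in for the real view that lists active games.
    return {"games": [], "api": "1.0"}

ROUTES = {
    ("1.0", "games"): games_index_v1,
}

def dispatch(path):
    # Expect paths like "api/1.0/games/"
    parts = [p for p in path.split("/") if p]
    if len(parts) == 3 and parts[0] == "api":
        handler = ROUTES.get((parts[1], parts[2]))
        if handler:
            return handler()
    return {"error": "not found"}

print(dispatch("api/1.0/games/"))  # routed to the 1.0 handler
print(dispatch("api/2.0/games/"))  # unknown version, rejected cleanly
```

In the real server Django’s URL configuration plays the role of the `ROUTES` table, but the principle is the same.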

The output of all the API calls is actually encoded in JSON. This is still fairly human readable, and indeed some web browsers, like Firefox, will render this in a way that is easy to explore. Again, this makes it possible for the students to play the game with nothing more than a web browser and a certain amount of patience. This is good for exploration, but certainly not the easiest way to play. Here’s some sample output from my real games server with my students (poor Brian).

Sample games index JSON output
Sample games index JSON output – this is designed to be read by machines (a client program) but it’s still clear enough for humans so students can read it directly.
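A client can decode this JSON with nothing more than the Python standard library. The endpoint path and field names below are illustrative assumptions rather than the server’s documented schema:

```python
# Sketch of how a client might fetch and decode the games index. The
# host name is the placeholder from the text above, and the JSON field
# names are assumptions, not the server's documented schema.
import json
from urllib.request import urlopen

def fetch_games(base="http://battleships.server.net"):
    # Hypothetical endpoint; the real path is documented on GitHub.
    with urlopen(f"{base}/api/1.0/games/") as resp:
        return json.load(resp)

def summarise(games_json):
    # Pull a human-readable summary out of the decoded JSON.
    return [g.get("name", "unnamed") for g in games_json.get("games", [])]

# Offline demonstration with a canned response:
sample = '{"games": [{"id": 1, "name": "First game"}]}'
print(summarise(json.loads(sample)))  # ['First game']
```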

The code for the server and documentation about the API is all publicly available on my GitHub repository. You’ll also find some basic installation instructions in case you’d like to run your own game server.

I did want to build some very basic views into the server. I built a more human version of the games list, and I also built this view, which was designed only for admins – obviously in the hands of normal players it would make the game a bit pointless.

Game overview for admins
Game overview for admins only.

This allowed me to see a little of what was actually going on in the games my students were playing. As well as showing the current state of ships and who owns what, it also showed a bit more info – here for a different game from the one above:

Game history and ship list
Game history and ship list

As you can see this shows some of the history of the game so far – this information is available to clients in the API – and likewise the list of surviving ships is shown. The API call for the ships still surviving only shows the ships for the player making the call, authenticated with a secret (password) that was generated when the player was created.

You may notice that the server automatically names ships with names taken from the Culture novels from Iain M. Banks.

The Client(s)

In this way the students are using their web browsers as a highly unsophisticated client to access the server. Actually playing the game this way would be frustrating – requiring careful URLs to be typed every time and the output to be noted for further commands. This is quite deliberate. It would be easy to build the web server to allow the whole game to be played seamlessly from a web browser – but this isn’t the point – I want the students to experience building a client themselves and developing that.

For my module last year I gave the students a partially complete client, written in Python, using text only. Their aim for the lab was to complete the client. No client would be exactly the same, but they would all be written to work against the same server specification. In theory a better client will give a player an edge against others – a motivation for improvements. One student built the start of a graphical interface, some needed to be given more working pieces to complete their clients.
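A minimal skeleton for such a text client might look like the following. The commands and their handling here are illustrative; a real client would follow the API documented on GitHub:

```python
# Skeleton of a text-mode client in the spirit of the one described
# above. The command names and responses are illustrative, not the
# actual client code from the repository.

def parse_command(line):
    # Turn user input like "fire 3 4" into a (verb, args) pair.
    parts = line.split()
    return (parts[0], parts[1:]) if parts else ("", [])

def handle(verb, args):
    # In a real client "fire" would trigger an HTTP call to the server.
    if verb == "fire" and len(args) == 2:
        x, y = int(args[0]), int(args[1])
        return f"firing at ({x}, {y})"
    if verb == "quit":
        return None  # signal the main loop to exit
    return "unknown command"

print(handle(*parse_command("fire 3 4")))  # firing at (3, 4)
```

Students can then extend the verb set, add input validation, or replace the text loop with a graphical interface, all against the same server specification.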

I’ve placed a more or less complete client on GitHub as well, but it could certainly do with a number of improvements. I may not make those since the idea of the project is to provide a focus for students to make these improvements.

What Went Well

The server worked reasonably well, and once I got going the code for automatic ship generation and placement worked quite nicely. I built a good number of unit tests for this which flushed out some problems, and which again are intended as a useful teaching resource.

I spent a reasonable amount of time building a server API that couldn’t be too easily exploited. For instance, it’s not possible for any player to get more than one move ahead of another, which prevents brute-force attacks by a client.
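That rule can be sketched like this (a simplified illustration; the real server’s check may be implemented differently):

```python
# A sketch of the "no more than one move ahead" rule - a move is only
# allowed if the player has taken no more moves than the slowest
# opponent. This is an illustration, not the server's actual code.

def may_move(moves_made, player):
    # moves_made maps player name -> number of moves taken so far.
    return moves_made[player] <= min(moves_made.values())

moves = {"alice": 3, "brian": 2}
print(may_move(moves, "alice"))  # False: alice is already a move ahead
print(may_move(moves, "brian"))  # True: brian may catch up
```

A client that tried to spam shots would simply have its extra moves rejected until the other players caught up.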

The server makes it relatively easy to have as many players as one wants in a given game.

What Went Wrong

I made some missteps the first time around the block.

  • the game grid’s default size is too large – I made this configurable, but too large a grid makes the chances of quick hits much more remote – which the students need for motivation. I only really noticed this when I built my admin view shown above.
  • in the initial game, any hit on a ship destroys it, there is no concept of health, and bigger ships are more vulnerable. This is arguably not a mistake, but is a deviation from expected behaviour.

What’s Next

This year I may add another version of the API that adds health to ships so that multiple hits are needed for large ships. By moving all of this to a new version number in the URL as above, e.g.


I hope to allow for improved versions of the game while still supporting (within reason) older clients accessing the original version.

I might also add admin calls to tweak the number of ships per player and the size of the ships. There are defaults in the code, but these can easily be overridden.

Shall We Play A Game?

There’s absolutely no reason the clients need to be written in Python. There’s no reason not to write one in C++ or Java, or to build a client that runs on Android or iOS phones. There’s no reason that first year classes with a text mode Python client can’t play against final year students using an Android client.

If you are teaching in any of these languages and feel like writing new clients against the server please do. If you feel like making your clients open source to help more students (and lecturers) learn more, that would be super too.

If you have some improvements in mind for the server, do feel free to fork it, or put in pull requests.

If you or your class do have a go at this, I’d love to hear from you.

Anatomy of a Puzzle

Recently I was asked to provide a Puzzle For Today for the BBC Radio 4 Today programme which was partially coming as an Outside Broadcast from Ulster University.

I’ve written a post about the puzzle itself, and some of the ramifications of it; this post is really more about the thought process that went into constructing it.

When I was first asked to do this I had a look at the #PuzzleForToday hashtag on Twitter and found that a lot of people found the puzzles pretty hard, so I thought I might try to construct a two part puzzle with one part being relatively easy and the other a bit harder. I also wanted something relatively easy to remember and work out in your head for those people driving, since commuters probably make up a lot of the Today programme audience.

A lot of my students will know I often do puzzles with them in class, but most are quite visual, or are really very classic puzzles, so I needed something else. Trying to find something topical I thought about setting a puzzle around a possible second referendum election, since this was much in the news at the time. My first go at this was coming up with a scenario about an election count, and different ballot counters with different speeds counting a lot of ballots.

I constructed an idea that a Returning Officer had a team of people to count the votes in a ballot. One of the team could count all the votes in two hours, the next in four hours, and the next in eight hours. But how long would they take if they all worked together? The second part would be: if there were more counters available, but each took twice as long again as the one before, what was the least possible time to complete the task?

I liked this idea because I thought there were a lot of formal and informal ways to get to the answer, and indeed the answers I saw on Twitter and Facebook confirmed this. Perhaps the easiest way to approach the puzzle is this: consider how much work each counter can do in an hour. We see the first person gets half of it done, the next a quarter and the next an eighth. All together then:

    \[\frac{1}{2} + \frac{1}{4} + \frac{1}{8} = \frac{7}{8}\]

We need to work out what number we would multiply by this to get one – i.e. the whole job being done. In this case it works out as

    \[1 \div \frac{7}{8} = \frac{8}{7}\]

which works out (a bit imprecisely) with a bit of work, or a decent calculator, as one hour, eight minutes and thirty-four seconds and a bit. So, not great, but the second part of the puzzle works out much more smoothly. If we keep on with our pattern, we get

    \[\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \frac{1}{32} + \cdots\]

Formally, this is called a geometric progression: there is a constant ratio between consecutive terms. Sometimes these infinite sums actually have a finite answer, which might be surprising. If you keep adding these fractions you might see that you are getting ever closer to 1. Therefore this potentially infinite number of counters gets the work done in one hour – it can’t be done in less.
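The awkward time from this first version of the puzzle is easy to verify with exact fractions:

```python
# Checking the first version of the puzzle with exact fractions:
# counters working at 1/2, 1/4 and 1/8 of the job per hour.
from fractions import Fraction

rate = Fraction(1, 2) + Fraction(1, 4) + Fraction(1, 8)
hours = 1 / rate              # total time is 8/7 hours
total_seconds = hours * 3600

print(rate, hours)                     # 7/8 8/7
print(divmod(int(total_seconds), 60))  # 68 whole minutes and 34 seconds, and a bit
```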

So, I was happy the second number worked out nicely, but the first is pretty tricky, and not easily worked out in one’s head. So I wondered what numbers I could use instead that would give an exact number of minutes. This really means that the sum of my three fractions from the first part of the problem must divide evenly into 60. I used some Python to help me with this, with a list comprehension:

# Establish a top size to work with
n = 100

# Find all the triples where a,b,c <= n and 1/a + 1/b + 1/c
# divides evenly into 60

triples = [(a,b,c) for a in range(1,n+1)
           for b in range(1,a)
           for c in range(1,b)
           if (60/(1/a + 1/b + 1/c)).is_integer()]

It turns out that, restricting ourselves to three numbers all under 100 for the first part, there are 902 such sets of numbers. The very smallest numbers are 1, 2 and 6. The problem is that most of these triples don’t have the property of my original choices – which is that there is a common ratio between the first and second numbers, and between the second and third. Without that, the second part of the puzzle would be more difficult.

So, I modified my list comprehension a bit to add the condition that there was a common ratio from a to b to c:

# Establish a top size to work with
n = 100

# Find all the triples where a,b,c <= n and 1/a + 1/b + 1/c
# divides evenly into 60 and where the factor between a and b,
# is the same as that between b and c

geometric_triples = [(a,b,c) for a in range(1,n+1)
                     for b in range(1,a)
                     for c in range(1,b)
                     if (60/(1/a + 1/b + 1/c)).is_integer()
                     and a/b == b/c]

This produced just three triples (with all numbers under 100):

>>> geometric_triples
[(28, 14, 7), (56, 28, 14), (84, 42, 21)]

and you can see these are all quite closely related. So I grabbed the first triple to try to keep the puzzle’s numbers small.

As well as that, the programme team wanted a different focus than an election – they were a bit worried that because it was in the news so much it would be better to have another focus. I considered a computational task divided between processors, but eventually concluded this wouldn’t make a lot of sense to some listeners, so I went with this final configuration of the puzzle.

Part One

A Professor gives her team of three PhD students many calculations to perform. The first student is the most experienced and can complete the job on her own in 7 hours, the next would take 14 hours on his own, and the last would take 28 hours working single handed to complete the task. How long would the task take if they all worked together?

Part Two

If the Professor has more helpers, but which follow the same pattern of numbers to complete the task, what is the absolute minimum time the task can take?

You can probably answer this from the details of the construction above, but if not, you can always cheat here (the BBC programme page) or here.


My Puzzle for the Day

In November 2018 the BBC Radio 4 Today Programme was visiting Ulster University for an outside broadcast. I was asked to write the Puzzle for the Day for the broadcast. Here is my puzzle and some discussion about how it can be solved. The puzzle and a very brief solution is on the BBC page, but I restate it below, together with a more full solution.

Part One

A Professor gives her team of three PhD students many calculations to perform. The first student is the most experienced and can complete the job on her own in 7 hours, the next would take 14 hours on his own, and the last would take 28 hours working single handed to complete the task. How long would the task take if they all worked together?

Part Two

If the Professor has more helpers, but which follow the same pattern of numbers to complete the task, what is the absolute minimum time the task can take?

Solving Part One.

Possibly the easiest way to solve the first part is to consider how much work can be done by each team member in a single hour and adding to get the total amount of work done per hour. For instance, the first team member can do the whole job in seven hours, so they can do \frac{1}{7} of the job in one hour etc. So the team together can do the following in one hour:

    \[\frac{1}{7} + \frac{1}{14} + \frac{1}{28} = \frac{4}{28} + \frac{2}{28} + \frac{1}{28} = \frac{7}{28} = \frac{1}{4}.\]

In other words, one quarter of the job can be done in a single hour by the team of three working together. It follows that to do the whole job, the team needs four hours.
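This can be double-checked with Python’s fractions module, which keeps the arithmetic exact:

```python
# Verifying part one with exact arithmetic: the combined rate is 1/4 of
# the job per hour, so the whole job takes 4 hours.
from fractions import Fraction

rate = Fraction(1, 7) + Fraction(1, 14) + Fraction(1, 28)
print(rate)      # 1/4
print(1 / rate)  # 4
```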

Solving Part Two.

So what about part two of the puzzle? Quite reasonably, some people attempting the puzzle assumed that as more and more team members were added, the time to complete the task would eventually drop to zero. This seems fairly intuitive – if you have potentially infinite team members then the total time must drop to zero.

But imagine we continue our pattern from above, for far more than three team members. This would be the proportion of work done in one hour by even an infinite team.

    \[\frac{1}{7} + \frac{1}{14} + \frac{1}{28} + \frac{1}{56} + \frac{1}{112} + \frac{1}{224} + \cdots \]

If this addition produces an infinite “answer” then an infinite work rate per hour would certainly suggest the task could be done instantly. Surprisingly perhaps this sum of infinitely many numbers does not have an infinite answer. It may be a little easier to see its nature if we take out a factor of \frac{1}{7}.

Then the summation looks like this:

    \[\frac{1}{7} (1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \frac{1}{32} + \cdots)\]

It may be intuitive for some people that the numbers inside the bracket will get ever closer to 2 – setting aside the initial 1, each subsequent number added closes half of the remaining gap towards an additional 1 in total. In other words, this whole sum will be \frac{2}{7}.

Some infinite summations do indeed have finite answers, but it can be quite difficult to prove which are which, or to find the summations if they are finite. However, this example, aside from intuitively having a relatively easy answer, falls into a special category of such summations called a geometric progression. These are series of the form:

    \[a + ar + ar^2 + ar^3 + ar^4 + \cdots = \]

    \[a (1 + r + r^2 + r^3 + r^4 + \cdots)\]

In other words, each item in the sum is the previous item multiplied by some common ratio r. There is a nice formula for the sum of the first n terms of such progressions, which you could use to solve part one – though that would rather be a sledgehammer to crack a nut. But there is a formula for the infinite summation too.

    \[S_\infty = \frac{a}{1-r}\]

(provided |r| < 1, or in other words that r is between -1 and 1 not-inclusive, if not then the summation is infinite).

In this case a=\frac{1}{7} and r=\frac{1}{2} and r passes the test above so the infinite sum is indeed verified to be \frac{2}{7}.
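The partial sums can be checked numerically against this closed form:

```python
# The partial sums of 1/7 + 1/14 + 1/28 + ... creep up towards 2/7,
# matching the closed form a/(1-r) with a = 1/7 and r = 1/2.
from fractions import Fraction

a, r = Fraction(1, 7), Fraction(1, 2)
closed_form = a / (1 - r)  # 2/7

partial = Fraction(0)
term = a
for _ in range(20):  # sum the first twenty terms
    partial += term
    term *= r

print(closed_form)                   # 2/7
print(float(closed_form - partial))  # tiny remaining gap after 20 terms
```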

Finally therefore, if even infinitely many team members can only do \frac{2}{7} of the job in an hour, we need to work out the number to multiply by this to get to the whole job being done, i.e. not “2/7” of the job but the whole “1” of the job.

That number is \frac{7}{2}, so the whole job can be reduced from the four hours of part one to not less than 3.5 hours.

Of course, there are other ways to solve the puzzle. This is just one example pathway. If you are interested in how I went about constructing the puzzle, I detail that in another post.

Incidentally, the fact that summations of infinitely many objects sometimes have finite answers is of vital importance for many real-life applications of mathematics. The whole subject of integral calculus relies on this, and for instance, this is closely related to why it is possible, with finite energy, for a rocket to escape the Earth or an electron to escape an atom.