## Extracting Sky Router Crash Data Amidst Kernel Panics

I have a Sky broadband connection with fibre into a Sky Router / Sky Hub. I have noticed very short outages of internet service with increasing frequency recently. The outages are short, maybe 3-5 minutes long, but annoying enough in the middle of an online meeting or some other synchronous activity. Sometimes two or three of these short outages occur in a relatively short time frame.

There did not seem to be a correlation to temperature, or even usage. In other words, the device doesn’t seem to be crashing because it is overheating under load or ambient temperature.

## Extracting the Crash Data

It turns out that some enterprising engineer took the decision to embed crash or reset data in what is perhaps a surprising place, or at least a surprising place for easy user access. If one backs up the router configuration settings to a file, the result is an XML file that includes two interesting stanzas towards the end.

    <X_SKY_REBOOT_CAUSE>
        <RebootTime>1660120232</RebootTime>
        <RebootReasonType>KLRT</RebootReasonType>
        <RebootReasonCode>1001</RebootReasonCode>
        <RebootInfo>kernel&#32;panic&#32;marker&#32;detected&#32;in&#32;the&#32;flash</RebootInfo>
    </X_SKY_REBOOT_CAUSE>
    <X_SKY_COM_DEVICE_DOCTOR>
        <Enable>TRUE</Enable>
        <CmsLockDuration>10</CmsLockDuration>
        <MemoryThreshold>95</MemoryThreshold>
        <SharedMemoryThreshold>95</SharedMemoryThreshold>
    </X_SKY_COM_DEVICE_DOCTOR>

Now the X_SKY_COM_DEVICE_DOCTOR also looks quite interesting, but the piece for us right now is the X_SKY_REBOOT_CAUSE. You will note in this case that the RebootInfo data contains the following:

    kernel panic marker detected in the flash

That is interesting, if not encouraging. For those not in the know, a kernel panic indicates a type of crash in the base operating system of the device. It could be caused by faulty software – in this case the device's firmware – or it could be caused by some hardware problem. In Sky's case firmware updates are highly automated, so it could easily be caused by a firmware bug, but in that case it would likely be experienced by many customers. If it's not, the smart money might be on faulty hardware.

Importantly, even if there is some problem in the broadband provision coming from the fibre, the router should not panic or crash – it should just deal with the problem, reconnect when possible and move on. That would be obvious in the device logs.

I called Sky to report this, and the first conversation wasn't too productive, if not surprising: the request to turn the device off and on again. Not bad advice, but not successful. I was then told to do a total factory reset – a time-consuming pain that didn't fix the problem.

By this point, I'd started to write a very small Python script to automate extracting the crash data and time-stamping it, provided the backup file was downloaded by hand. To complete the script I really need to automate the web request part – which I haven't attempted yet as it looks like I need to handle the session data. Not insurmountable, but a bit of work. Here is that short, unvarnished script.

    import xml.etree.ElementTree as ET
    import datetime as dt
    import os

    # Extract the XML from the settings file and get the root
    tree = ET.parse('sky_router_settings.conf')
    root = tree.getroot()

    # Look for the X_SKY_REBOOT_CAUSE stanza should it exist
    for item in root.findall('.//X_SKY_REBOOT_CAUSE'):

        # It does, so let's extract the details
        reboot_time = int(item.find('RebootTime').text)
        reboot_reason_type = item.find('RebootReasonType').text
        reboot_reason_code = item.find('RebootReasonCode').text
        reboot_info = item.find('RebootInfo').text

        # Make an ISO datetime from the Unix epoch timestamp
        reboot_format_time = dt.datetime.utcfromtimestamp(reboot_time).strftime("%Y-%m-%d %H:%M:%S")

        # Let's write the data if it isn't already there (in current working directory)
        if not os.path.exists(reboot_format_time):
            print(f"Found new crash data... {reboot_format_time} {reboot_info}")
            try:
                fh = open(reboot_format_time, 'x')

                print(f'Time:{reboot_format_time}', file=fh)
                print(f'Type:{reboot_reason_type}', file=fh)
                print(f'Code:{reboot_reason_code}', file=fh)
                print(f'Info:{reboot_info}', file=fh)

                fh.close()

            except Exception as e:
                print('oops, something went wrong')
                print(e)

So, to use this, one logs into the Sky router (probably by browsing to http://192.168.0.1, or whatever address your router is on within your network), then goes to Maintenance and Backup Settings. Drop the saved file into the same directory as the script, and run it. I was doing this periodically to check for crashes I had not witnessed. It will save any new data into a file with the timestamp as the filename in the same directory. Crude, but it works.

Armed with a number of crash events I called Sky back. What followed was a highly frustrating conversation for all sides, where I was advised that I had to plug the hub into a different electrical socket. I duly did so, and incidentally noticed that the hub records a different reason for the reset.

    Power On Reset detected

In other words, the hub notes when it was power cycled. To the surprise of virtually no-one, changing the socket the hub was plugged into did not prevent the crashes. My script detected another one just before 2 am yesterday.

    Time:2022-09-07 01:56:12
    Type:KLRT
    Code:1001
    Info:kernel panic marker detected in the flash

I called Sky again and finally had a constructive conversation – they are sending me a new hub to test. Hopefully this will solve the problem. I doubt it’s a firmware issue or it would have been more widely reported.

## #TODO

I think I will probably bite the bullet and use Python to download the backup file too. If I can get that bit working I can rig the whole thing up to cron to check for crash data automatically. New hub or not, keeping track of these crash events would be useful.
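As a first step towards that, the parsing half can already be factored into a function that a cron job could call once the download part exists. Here is a minimal sketch – the login/download step is deliberately left out, and the commented URL is a guess, not the hub's real endpoint:

```python
import xml.etree.ElementTree as ET

def parse_reboot_cause(xml_text):
    """Extract the X_SKY_REBOOT_CAUSE details from the backup XML,
    returning None if the stanza is absent."""
    root = ET.fromstring(xml_text)
    item = root.find('.//X_SKY_REBOOT_CAUSE')
    if item is None:
        return None
    return {
        'time': int(item.find('RebootTime').text),
        'type': item.find('RebootReasonType').text,
        'code': item.find('RebootReasonCode').text,
        'info': item.find('RebootInfo').text,
    }

# The missing piece would be fetching the backup itself, something like:
#   import urllib.request
#   xml_text = urllib.request.urlopen('http://192.168.0.1/...').read()
# but the real endpoint and session handling are still to be worked out.

sample = """<Config>
  <X_SKY_REBOOT_CAUSE>
    <RebootTime>1660120232</RebootTime>
    <RebootReasonType>KLRT</RebootReasonType>
    <RebootReasonCode>1001</RebootReasonCode>
    <RebootInfo>kernel panic marker detected in the flash</RebootInfo>
  </X_SKY_REBOOT_CAUSE>
</Config>"""

print(parse_reboot_cause(sample)['type'])  # -> KLRT
```

With the parsing isolated like this, the cron-driven script only needs to fetch the XML and hand it over.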

It’s curious that the hub obviously does the hard work of storing crash data, but this doesn’t seem to be transmitted to Sky, which would really help them diagnose problems when customers call.

## exim4 upgrades and configuration fragility

Last night I decided I’d catch up on sysadmin tasks. Some of that was trying to tighten up my spam filtering again. I had got in place a per-user Bayesian filter on spamassassin, which essentially should allow it to learn a much more individual pattern of what each user considers spam. I also had configuration for having my mail server (running exim) reject mail within the SMTP session for very egregious examples. That configuration hasn’t been working for a while – I had to revert lots of custom changes to my config for a significant exim4 upgrade a while back, and I haven’t had the time and patience to try and reinstate it all. So I thought I’d look at that.

I ran a standard apt update and upgrade before I started. I noted exim4 and its various binary packages were marked for upgrade, which isn’t unusual, and proceeded (the upgrade was between versions 4.95-RC2-1 and 4.95-1). Debian warned me of a change to exim4.conf.template; I examined the diff briefly, didn’t see anything extraordinary, and retained my config. In any case, I am using a distributed config in conf.d, so expected to see a more targeted diff on one of those. I didn’t. exim restarted without complaint.

I started using tail to follow my exim logs, and could immediately see that every single inbound message was being temporarily rejected, albeit with no information as to why. I spent probably the guts of an hour checking the differences between my configuration files and the .dpkg-dist versions (the ones shipped with the new package) and couldn’t see the problem. I tried copying over the exim4.conf.template and updating the configuration with update-exim4.conf, and still I had the same problem. I checked the changelog and didn’t see anything profound that should really be a worry.

In the end I had to downgrade all the exim4 packages, and my mail started to be delivered again. Of course, this only buys me a little time to either find the problem, or hope it’s something upstream in the Debian package. I maybe should report this as a bug, but I feel I don’t yet have enough information.

However, it really got me thinking just how fragile exim4 configuration seems to be. I need to add a few tweaks to the shipped configuration if I want more effective handling of some things, such as my multiple domains, or being able to handle email addresses with extra bits like <name>-<website>@<domain> so I can filter email from various sites (and determine who sold my data). But in the main the challenges are in more effective spam filtering. All of those tweaks can be easily disrupted by an update. I’ve been thinking of generating a patch set against the default configuration, so I could accept all new config files and then apply the patch set, but that will have its own problems. greylist – a package that temporarily rejects initial attempts to deliver mail from unknown servers – uses this patch approach.
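The address-extension idea, incidentally, is simple enough to sketch in a few lines of Python – this is just the tagging convention I use, not anything exim itself provides:

```python
def classify(address):
    """Split a tagged local part like name-website@domain into the
    base user, the site the address was given to, and the domain.
    This is my own addressing convention, not exim syntax."""
    local, _, domain = address.partition("@")
    name, sep, website = local.partition("-")
    return (name, website if sep else None, domain)

print(classify("colin-examplesite@mydomain.example"))
# -> ('colin', 'examplesite', 'mydomain.example')
```

An address with no tag simply comes back with `None` in the website slot, so untagged mail still routes normally.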

In the meantime, if you upgrade exim4 and suddenly have all inbound messages being temporarily rejected you are not alone, but I can’t yet explain why. For me a temporary downgrade was the only solution.

    aptitude install exim4=4.94.2-7 exim4-base=4.94.2-7 exim4-config=4.94.2-7 exim4-daemon-heavy=4.94.2-7

## Battleships Server / Client for Education

I’ve been teaching a first year introductory module in Python programming for Engineering at Ulster University for a few years now. As part of the later labs I have let the students build a battleships game using Object Oriented Programming – with “Fleet” objects containing a list of “Ships” and so on where they could play on one computer against themselves. But I had often wanted to see if I could get as far as an on-line server application based on similar principles that the students could use. This would teach them client server fundamentals but also let them have fun playing against each other.

Last year I finally attained that goal. I built a very simple web-server using Python Django to manage players, games, and ships. I wanted to use Python partly because it’s become my language of choice for web applications, but also because I wanted the code to be as accessible and understandable to the students as possible. I have initially placed very little emphasis on the User Interface of the server – this is partly deliberate because I wanted the students to have a strong incentive to develop this in the clients. I did however build a very simple admin view to let me see the games in progress. Additionally Django provides a very easy way to build admin views to manipulate objects. I have enabled this to let admins (well me) tweak data if I need to.

## The Server

The server provides a very simple interface – an API – to let the students interact with the game. They can do this initially very directly using a web browser. They can simply type in specific web addresses to take certain actions. For instance

    http://battleships.server.net/api/1.0/games/index/

will, for a battleships server installed on battleships.server.net (not a real server), list all the current active games. You’ll note the api/1.0/ part of the URL. I did this so that in future years I could change the API and add new version numbers. That’s partly the focus of a later section of this blog.

The output of all the API calls is actually encoded in JSON. This is still fairly human readable, and indeed some web browsers, like Firefox, will render this in a way that is easy to explore. Again, this makes it possible for the students to play the game with nothing more than a web browser and a certain amount of patience. This is good for exploration, but certainly not the easiest way to play. Here’s some sample output from my real games server with my students (poor Brian).
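The same calls can of course be made programmatically, which is the seed of any client. Here is a minimal standard-library sketch – the hostname is the placeholder one above, and the exact JSON shape returned will depend on the server version:

```python
import json
import urllib.request

def api_url(base, version, path):
    """Build a versioned API URL, e.g. /api/1.0/games/index/."""
    return f"http://{base}/api/{version}/{path.strip('/')}/"

def list_games(base, version="1.0"):
    """Fetch and decode the active games list (network required)."""
    with urllib.request.urlopen(api_url(base, version, "games/index")) as resp:
        return json.load(resp)

print(api_url("battleships.server.net", "1.0", "games/index"))
# -> http://battleships.server.net/api/1.0/games/index/
```

Keeping the version as an explicit parameter mirrors the URL scheme, so a future 2.0 client is a one-argument change.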

The code for the server and documentation about the API is all publicly available on my GitHub repository. You’ll also find some basic installation instructions in case you’d like to run your own game server.

I did want to build some very basic views into the server. I built a more human version of the games list, and I also built this view, which was designed only for admins – obviously in the hands of normal players it would make the game a bit pointless.

This allowed me to see a little of what was actually going on in the games my students were playing. As well as showing the current state of ships and who owns what, it also showed a bit more info – here for a different game than that above:

As you can see, this shows some of the history of the game so far – this information is available to clients in the API – and likewise the list of surviving ships is shown. The API call for the ships still surviving only shows the ships for the player making the call, authenticated with a secret (password) that was generated when the player was created.

You may notice that the server automatically names ships with names taken from the Culture novels by Iain M. Banks.

## The Client(s)

In this way the students are using their web browsers as a highly unsophisticated client to access the server. Actually playing the game this way will be frustrating – requiring careful URLs to be typed every time and noting the output for further commands. This is quite deliberate. It would be easy to build the web server to allow the whole game to be played seamlessly from a web browser – but this isn’t the point – I want the students to experience building a client themselves and developing that.

For my module last year I gave the students a partially complete client, written in Python, using text only. Their aim for the lab was to complete the client. No client would be exactly the same, but they would all be written to work against the same server specification. In theory a better client will give a player an edge against others – a motivation for improvements. One student built the start of a graphical interface, some needed to be given more working pieces to complete their clients.

I’ve placed a more or less complete client on GitHub as well, but it could certainly do with a number of improvements. I may not make those since the idea of the project is to provide a focus for students to make these improvements.

## What Went Well

The server worked reasonably well, and once I got going the code for automatic ship generation and placement worked quite nicely. I built a good number of unit tests for this which flushed out some problems, and which again are intended as a useful teaching resource.

I spent a reasonable amount of time building a server API that couldn’t be too easily exploited. For instance, it’s not possible for any player to get more than one move ahead of another, to prevent brute-force attacks by a client.
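The actual enforcement lives in the server code on GitHub, but as an illustration of the idea only, the lockstep rule can be expressed like this – the function and data shapes here are my invention, not the server's API:

```python
def can_move(move_counts, player):
    """Allow a move only if `player` is not already a full move
    ahead of the player with the fewest moves (lockstep rule)."""
    slowest = min(move_counts.values())
    return move_counts[player] <= slowest

# A player who has moved twice while another has moved once must wait.
counts = {"alice": 2, "bob": 1}
print(can_move(counts, "alice"))  # alice is a move ahead -> False
print(can_move(counts, "bob"))    # bob can catch up -> True
```

A client hammering the move endpoint therefore gains nothing: the server simply refuses until the other players have caught up.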

The server makes it relatively easy to have as many players in a given game as one wants.

## What Went Wrong

I made some missteps the first time around the block.

• the game grid’s default size is too large – I made this configurable, but too large a grid makes quick hits much less likely, and the students need those for motivation. I only really noticed this when I built my admin view shown above.
• in the initial game, any hit on a ship destroys it – there is no concept of health – so bigger ships are more vulnerable. This is arguably not a mistake, but it is a deviation from expected behaviour.

## What’s Next

This year I may add another version of the API that adds health to ships, so that multiple hits are needed to sink large ships. By moving all of this to a new version number in the URL as above, e.g.

    http://battleships.server.net/api/2.0/games/index/

I hope to allow for improved versions of the game while still supporting (within reason) older clients accessing the original version.

I might also add admin calls to tweak the number of ships per player and the size of the ships. There are defaults in the code, but these can easily be overridden.

## Shall We Play A Game?

There’s absolutely no reason the clients need to be written in Python. There’s no reason not to write one in C++ or Java, or to build a client that runs on Android or iOS phones. There’s no reason that first year classes with a text mode Python client can’t play against final year students using an Android client.

If you are teaching in any of these languages and feel like writing new clients against the server please do. If you feel like making your clients open source to help more students (and lecturers) learn more, that would be super too.

If you have some improvements in mind for the server, do feel free to fork it, or put in pull requests.

If you or your class do have a go at this, I’d love to hear from you.

## Anatomy of a Puzzle

Recently I was asked to provide a Puzzle For Today for the BBC Radio 4 Today programme which was partially coming as an Outside Broadcast from Ulster University.

I’ve written a post about the puzzle itself, and some of the ramifications of it; this post is really more about the thought process that went into constructing it.

When I was first asked to do this I had a look at the #PuzzleForToday hashtag on Twitter and found that a lot of people found the puzzles pretty hard, so I thought I might try to construct a two part puzzle with one part being relatively easy and the other a bit harder. I also wanted something relatively easy to remember and work out in your head for those people driving, since commuters probably make up a lot of the Today programme audience.

A lot of my students will know I often do puzzles with them in class, but most are quite visual, or are really very classic puzzles, so I needed something else. Trying to find something topical I thought about setting a puzzle around a possible second referendum election, since this was much in the news at the time. My first go at this was coming up with a scenario about an election count, and different ballot counters with different speeds counting a lot of ballots.

I constructed an idea that a Returning Officer had a ballot count with a team of people to count votes. One of the team could count all the votes in two hours, the next in four hours, and the next in eight hours. But how long would they take if they all worked together? The second part would be: if there were more counters available, but each took twice as long again as the one before, what was the least possible time to complete the task?

I liked this idea because I thought there were a lot of formal and informal ways to get to the answer, and indeed the answers I saw on Twitter and Facebook confirmed this. Perhaps the easiest way to approach the puzzle is to consider how much work each counter can do in an hour. We see the first person gets half of it done, the next one quarter and the next one eighth. All together then:

    1/2 + 1/4 + 1/8 = 7/8

of the job is done in one hour.

We need to work out what number we would multiply by this to get one – i.e. the whole job being done. In this case it works out as

    8/7 hours

which works out (a bit imprecisely) with a bit of work, or a decent calculator, as one hour, eight minutes and thirty-four seconds and a bit. So, not great, but the second part of the puzzle works out much more smoothly. If we keep on with our pattern, we get

    1/2 + 1/4 + 1/8 + 1/16 + ...

Formally, this is called a Geometric Progression: there is a constant factor between each term. Sometimes these infinite sums actually have a finite answer, which might be surprising. If you keep adding these fractions you might see that they sum ever closer to 1. Therefore this potentially infinite number of counters gets the work done in one hour – it can’t be less.
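For completeness, the standard closed form for an infinite geometric progression with first term a and common ratio r (with |r| < 1) gives the same answer:

```latex
S = a + ar + ar^2 + \cdots = \frac{a}{1-r},
\qquad a = \tfrac{1}{2},\; r = \tfrac{1}{2}
\;\Rightarrow\; S = \frac{1/2}{1 - 1/2} = 1.
```

So the combined rate tends to one whole job per hour, confirming the one-hour limit.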

So, I was happy the second number worked out nicely, but the first is pretty tricky, and not easily worked out in one’s head. So I wondered what numbers I could use instead that would work out as an exact number of minutes. This really means that 60 divided by the sum of my three fractions from the first part of the problem must be a whole number. I used some Python to help me with this, with a list comprehension:

    # Establish a top size to work with
    n = 100

    # Find all the triples where a,b,c <= n and 1/a + 1/b + 1/c
    # divides evenly into 60

    triples = [(a,b,c) for a in range(1,n+1)
                       for b in range(1,a)
                       for c in range(1,b)
                       if (60/(1/a + 1/b + 1/c)).is_integer()]

It turns out that, restricting ourselves to three numbers for the first puzzle, all of which are under 100, there are 902 such sets of numbers. The very smallest numbers are 1, 2 and 6. The problem is that most of these triples don’t have the property of my original choices – that there is a common ratio between the first and second, and the second and third. That would make the second part of the puzzle more difficult.

So, I modified my list comprehension a bit to add the condition that there was a common ratio from a to b to c:

    # Establish a top size to work with
    n = 100

    # Find all the triples where a,b,c <= n and 1/a + 1/b + 1/c
    # divides evenly into 60 and where the factor between a and b
    # is the same as that between b and c

    geometric_triples = [(a,b,c) for a in range(1,n+1)
                                 for b in range(1,a)
                                 for c in range(1,b)
                                 if (60/(1/a + 1/b + 1/c)).is_integer()
                                 and a/b == b/c]

This produced just three triples (with all numbers under 100):

    >>> geometric_triples
    [(28, 14, 7), (56, 28, 14), (84, 42, 21)]


and you can see these are all quite related. So I grabbed the first triple to try and keep the numbers in the puzzle small.

As well as that, the programme team wanted a different focus than an election – it was in the news so much that they felt it would be better to have another setting. I considered a computational task divided between processors, but eventually concluded this wouldn’t make a lot of sense to some listeners, so I went with this final configuration of the puzzle.

Part One

A Professor gives her team of three PhD students many calculations to perform. The first student is the most experienced and can complete the job on her own in 7 hours, the next would take 14 hours on his own, and the last would take 28 hours working single handed to complete the task. How long would the task take if they all worked together?

Part Two

If the Professor has more helpers, but which follow the same pattern of numbers to complete the task, what is the absolute minimum time the task can take?

You can probably answer this from the details of the construction above, but if not, you can always cheat here (the BBC programme page) or here.
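As a quick check of the final numbers (this is just the arithmetic above re-expressed in Python): three students taking 7, 14 and 28 hours together complete the job in exactly 4 hours, and the infinite pattern can never beat three and a half hours, since the combined rate approaches 2/7 of the job per hour.

```python
from fractions import Fraction

# Part one: combined rate of the three students (jobs per hour)
rate = Fraction(1, 7) + Fraction(1, 14) + Fraction(1, 28)
print(1 / rate)          # time for the whole job -> 4 hours exactly

# Part two: the rates form a geometric series 1/7 + 1/14 + 1/28 + ...
# with first term 1/7 and ratio 1/2, summing to (1/7) / (1 - 1/2) = 2/7
limit_rate = Fraction(1, 7) / (1 - Fraction(1, 2))
print(1 / limit_rate)    # minimum possible time -> 7/2 hours
```

Using Fraction keeps the arithmetic exact, which is exactly the property the puzzle was constructed to have.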


## Assessment handling and Assessment Workflow in WAM

As is often the way, the scope of the project broadened and I found myself writing in support for handling assessments and the QA processes around them. At some point this will necessitate a new name for WAM – something more general (answers on a postcard please) – but for now, development continues.

Last year I added features to allow Exams, Coursework, and their Moderation and QA documents to be uploaded to WAM. This was generally reasonably successful, but a bit clunky. We gave several External Examiners access to the system and they were able to look in at the modules for which they were an examiner and the feedback was pretty good.

## What Worked

One of the things that worked best about last year’s experiment was that we put in information about the Programmes (Courses) each Module was on. It’s not at all unusual for many Programmes to have the same Module within them.

This can cause a headache for External Examination since an External Examiner is normally assigned to a Programme. In short, the same Module can end up being looked at by several Examiners. While this is OK, it can be wasteful of work, and creates potential problems when two Examiners have a different perspective on the Module.

So within WAM, I encoded an assumption of what we should have been doing in the paper-based system – that every Module should have a “Lead Programme”. The Examiner for that Programme is the one that has primacy; furthermore, where Examiners are presented with other Modules on the Programme for which they aren’t the “lead” Examiner, they should know that this is for information, and they may not be required to delve into it in so much detail – unless they choose to.

This aspect worked well, and the External Examiners have a landing screen that shows which Modules they are examining, and for which they are the lead Examiner.

## What Didn’t Work

I had written code that was intended to look at what assessment artefacts had been uploaded since a user’s last login, and email them the relevant material.

This turned out to be problematic, partly because one had to unpick who should get what, but mostly because I’m using remote authentication with Django (the Python framework in which WAM is written), and it seems that the last login time isn’t always updated properly when you aren’t using Django’s built in authentication.

But the biggest problem was a lack of any workflow. This was a bit deliberate since I didn’t want to hardcode my School or Faculty’s workflow.

You should never design your software product for HE around your own University too tightly. Because your own University will be a different University in two years’ time.

So, I wanted to ponder this a bit. It made visibility of what was going on a little difficult. It looked a bit like this (not exactly, as this is a screenshot from a newer version of an older module):

with items shown from oldest at the bottom to newest at the top. You can kind of infer the workflow state by the top item, and indeed, I used that in the module list.

But staff uploaded files they then wanted to delete (which was disallowed for audit reasons), the workflow wasn’t too clear, and that made notifications more difficult.

## What’s New

So, in a beta version of 2.0 of the software I have implemented a workflow model. I did this by:

• defining a model that represents the potential states a Module can be in; each state defines who can trigger it, what can happen next, and who should be notified;
• defining a model that shows a “sign off” event.
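The real implementation is a pair of Django models (the code is on GitHub), but the shape of the idea can be sketched framework-free. The state names and roles below are illustrative only, not WAM’s actual schema:

```python
# A minimal, framework-free sketch of the workflow model. In WAM these
# are Django models; the states and roles here are made up for illustration.
WORKFLOW = {
    "draft":      {"can_trigger": {"module_coordinator"}, "next": {"moderated"}},
    "moderated":  {"can_trigger": {"moderator"},          "next": {"examined"}},
    "examined":   {"can_trigger": {"external_examiner"},  "next": {"signed_off"}},
    "signed_off": {"can_trigger": set(),                  "next": set()},
}

def sign_off(current_state, new_state, role):
    """Record a sign-off: check the transition is allowed and that
    the role is permitted to trigger the new state."""
    if new_state not in WORKFLOW[current_state]["next"]:
        raise ValueError(f"cannot move from {current_state} to {new_state}")
    if role not in WORKFLOW[new_state]["can_trigger"]:
        raise ValueError(f"{role} cannot trigger {new_state}")
    return new_state

print(sign_off("draft", "moderated", "moderator"))  # -> moderated
```

Because each state carries its own permissions and successors, both the “who can do this next” question and the notification list fall straight out of the data.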

Once it became possible to issue a “sign off” of where we were in the workflow, a lot of things became easier. This screenshot shows how it looks now.

Ok, it’s a bit of a dumb example, since I’m the only user triggering states here (and I can only do that in some cases because I’m a Superuser; otherwise some states can only be triggered by the correct stakeholder – the moderator or examiner).

However, you can see that now we can still have all the assessment resources, but with sign offs at various stages. The sign off could (and likely would) have much more detailed notes in a real implementation.

This in turn has made notification emails much easier to create. Here is the email triggered by the final sign off above.

The detailed notes aren’t shown in the email, in case other eyes are on it and there are sensitive comments.

All of this code is available on GitHub. It’s working now, but I’ll probably do a few more bits before an official 2.0 release.

I will be demoing the system at the Royal Academy of Engineering in London next Monday, although that will focus entirely on WAM’s workload features.

## Migrating Django Migrations to Django 2.x

Django is a Python framework for making web applications, and it’s impressive in its completeness, flexibility and power for speedy prototyping.

It’s also an impressive project for forward planning: it has a kind of built-in “lint” functionality that warns about deprecated code that will be disallowed in future versions.

As a result, when Django 2.0 was released I didn’t have to make many changes to my app code base to get it to work successfully. However, today when I tried to update my oldest Django app (started on Django 1.8.x) I hit an unexpected snag: the old migrations were sometimes invalid. Curiously, I don’t think this problem emerged the last time I tried.

Django uses migrations to move the database schema from one version to the next. Most of the time it’s a wonderful system. In the rare case it goes wrong it can be … tricky. Today’s problem is quite specific, and easier to fix.

Django 2.0 enforces that ForeignKey fields explicitly specify a behaviour to follow on deletion of the object pointed to by the key. In general, whether we Cascade the deletion or set the field to Null, getting the behaviour right can be important, particularly on fields where a Null value has a legitimate meaning.

But a bit of a sting in the tail is that an older Django project may have migrations created automatically by Django which don’t obey this. I discovered this today and found I couldn’t proceed with my project unless I went back and modified the old migrations to be 2.0 compliant.

So if this happens to you, here are some suggestions on fixing the problem.

You will know you have a problem if, when you try to run your test server (or indeed replace runserver with check)

    python3 manage.py runserver

you get an error and output like this

      File "/Users/colin/Development/WAM/WAM/loads/migrations/0024_auto_20160627_1049.py", line 7, in <module>
        class Migration(migrations.Migration):
      File "/Users/colin/Development/WAM/WAM/loads/migrations/0024_auto_20160627_1049.py", line 100, in Migration
    TypeError: __init__() missing 1 required positional argument: 'on_delete'


I would suggest you use runserver whatever you did before, as it will continue to re-run the check each time you save a file.

Open your code with your favourite editor, and open your models.py file (you may have several depending on your project), and the migration file that’s broken as above.

Looking in your migration file you’ll find the offending line. In this case it’s the last (non trivial) line below.

      migrations.AddField(
          model_name='activity',
          name='activity_set',
          field=models.ForeignKey(to='loads.ActivitySet'),
      ),

To ensure that your migrations will be applied consistently with your final model (well, as long as nobody tries to migrate to an intermediate state), look carefully at the correct model (Activity in this case), and see what decision you made for deletion there. In my case I want deletion of the ActivitySet to kill all linked Activities, so replicate that “on_delete” choice here.

      migrations.AddField(
          model_name='activity',
          name='activity_set',
          field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='loads.ActivitySet'),
      ),

Each time you save your new migration file the runserver terminal window will re-run the check, hopefully moving on to the next migration that needs to be fixed. Work your way through methodically until your code checks clean. Check into source control, and you’re done.

## Semi Open Book Exams

A few years ago, I switched one of my first year courses to use what I call a semi-open-book approach.

Open-book exams of course allow students to bring whatever materials they wish into them, but they have the disadvantage that students will often bring in materials that they have not studied in detail, or even at all. In such cases, sifting through materials to help them answer a question could be counterproductive.

On the other hand, the real world is now an increasingly “open-book” environment, with huge amounts of information available to those in the workplace, which is now almost always Internet-connected.

So I decided to look at another approach. Students are allowed to bring in a single, personalised, A4 sheet, on which they can write whatever they wish on both sides. There are a few rules:

• the sheet must be written on “by hand”, that is to say, it cannot be printed to from a computer, or typed;
• the sheet must be “original”, that is to say, it cannot be a photocopy of another sheet (though students may of course copy their original for reference);
• the sheet must be the student’s own work, and they must formally declare as much (with a tick box);
• the sheet must be handed in with the exam paper, although it is not marked.

The purpose of these restrictions is to ensure that each student takes the lead in producing an individual sheet, and to inhibit cottage industries of copied sheets.

In terms of what can go on the sheet? Well anything really. It can be sections from notes, important formulae, sample questions or solutions. The main purpose here is to prompt students to work out what they would individually distill down to an A4 page. So they go through all the module notes, tutorial problems and more, and work out the most valuable material that deserves to go on one A4 page. I believe that this process itself is the greatest value of the sheet, its production rather than its existence in the exam. I’m working on some research to test this.

So I email them each an A4 PDF, which they can print out at home, and on whatever colour paper they may desire. The sheet is individual and has their student number on it with a barcode, for automated processing and analysis afterwards for a project I’m working on, but this is anonymised. The student’s name in particular does not appear, since in Ulster University, it does not appear on the exam booklet.

The top of my sheet looks like this:

So, if you would like to do the same, I am enclosing the Python script, and LaTeX that I use to achieve this. You could of course use any other technology, or not individualise the sheet at all.

For convenience the most recent code will also be placed on a GitHub repository here, feel free to clone away.

My script has just been rewritten for Python 3.x, and I’ve added a lot of command line parameters to decouple it from being useful only to me at Ulster University. It opens a CSV file from my University which contains student ID numbers, student names, and emails in specific columns. These are the defaults for the script but can be changed. For each student it uses LaTeX to generate the page. It actually creates inserts containing each student’s name and student number; you can then edit open-book.tex to lay the page out as you wish. You don’t need to know much LaTeX to achieve this, but ping me if you need help. I am also using a LaTeX package to create the barcodes automatically.

I’ve spent a bit of time adding command line parameters to this script, but you can try using

python3 open-book.py --help

for information. If you run the script without parameters it will enter interactive mode and prompt you.

I’d strongly recommend running with the --test-only option at first to make sure all looks good; opening open-book.pdf will show you the last generated page so you can check it’s what you want.

Anyway, feel free to do your own thing, or mutilate the code. Enjoy!

#!/usr/bin/env python3

#
#
# Free and Open Source Software under GPL v3
#
import argparse

import csv
import re
import subprocess
import smtplib
from email.mime.application import MIMEApplication
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def process_student(args, row):
    """Take a line from the CSV file and kick off the processing of a single student.

    You will likely need to edit aspects of this."""
    student_number = row[args.student_id_column]
    student_name = row[args.student_name_column]
    student_email = row[args.student_email_column]
    print('  Processing:', student_name, ':', student_email)
    create_latex_inserts(student_number, student_name)
    create_pdf()
    send_email(args, student_name, student_email)

def create_latex_inserts(student_number, student_name):
    """Write LaTeX inserts for the barcode and student name

    For each student this will create two tiny LaTeX files:

    * open-book-insert-barcode.tex which contains the LaTeX code for a barcode representing the student number
    * open-book-insert-name.tex which will contain simply the student's name

    These files can be included/inputted from open-book.tex as desired to personalise that document

    student_number is the ID in the students record system for the student
    student_name is the name of the student"""

    # Open a tiny LaTeX file to put this in
    file = open('open-book-insert-barcode.tex', 'w')

    # All the file contains is the LaTeX code to create the bar code
    string = '\\psbarcode{' + student_number + '}{includetext height=0.25}{code39}'
    file.write(string)
    file.close()

    # The same exercise for the second file to contain the student name
    file = open('open-book-insert-name.tex', 'w')
    string = student_name
    file.write(string)
    file.close()

def create_pdf():
    """Calls latex and dvipdf to create the personalised PDF with inserts from create_latex_inserts()"""

    # Suppress stdout, but leave stderr enabled.
    subprocess.call("latex open-book", stdout=subprocess.DEVNULL, shell=True)
    subprocess.call("dvipdf open-book", stdout=subprocess.DEVNULL, shell=True)

def send_email(args, student_name, student_email):
    """Emails a single student with the generated PDF."""
    # TODO: Might be useful to improve the to address
    # TODO: Allow the body text to be tailored.

    subject = args.email_subject
    to_address = student_name + ' <' + student_email + '>'

    msg = MIMEMultipart()
    msg['Subject'] = subject
    msg['From'] = args.email_sender
    msg['To'] = to_address

    text = ('Dear Student\nPlease find enclosed your guide sheet template '
            'for the exam. Read the following email carefully.\n')
    part1 = MIMEText(text, 'plain')
    msg.attach(part1)

    # Open the generated PDF in binary mode and attach it
    # (the attachment and send logic here was damaged in transcription
    # and has been reconstructed).
    fp = open('open-book.pdf', 'rb')
    attachment = MIMEApplication(fp.read(), _subtype='pdf')
    fp.close()
    attachment.add_header('Content-Disposition', 'attachment',
                          filename='open-book.pdf')
    msg.attach(attachment)

    # Send the email via our own SMTP server, if we are not testing.
    if not args.test_only:
        s = smtplib.SMTP(args.smtp_server)
        s.sendmail(args.email_sender, [student_email], msg.as_string())
        s.quit()

def override_arguments(args):
    """If necessary, prompt for arguments and override them

    Takes, as input, args from an ArgumentParser and returns the same after processing or overrides.
    """

    # If the user enabled batch mode, we disable interactive mode
    if args.batch_mode:
        args.interactive_mode = False

    if args.interactive_mode:
        override = input("CSV filename? default=[{}] :".format(args.input_file))
        if len(override):
            args.input_file = override

        override = input("Student ID Column? default=[{}] :".format(args.student_id_column))
        if len(override):
            args.student_id_column = int(override)

        override = input("Student Name Column? default=[{}] :".format(args.student_name_column))
        if len(override):
            args.student_name_column = int(override)

        override = input("Student Email Column? default=[{}] :".format(args.student_email_column))
        if len(override):
            args.student_email_column = int(override)

        override = input("Student ID Regular Expression? default=[{}] :".format(args.student_id_regexp))
        if len(override):
            args.student_id_regexp = override

        override = input("SMTP Server? default=[{}] :".format(args.smtp_server))
        if len(override):
            args.smtp_server = override

        override = input("Email subject? default=[{}] :".format(args.email_subject))
        if len(override):
            args.email_subject = override

        override = input("Email sender address? default=[{}] :".format(args.email_sender))
        if len(override):
            args.email_sender = override

    return args

def parse_arguments():
    """Get all the command line arguments for the file and return the args from an ArgumentParser"""

    parser = argparse.ArgumentParser(
        description="A script to email students study pages for a semi-open book exam",
        epilog="Note that column count arguments start from zero.")

    # NOTE: the option strings below were lost in transcription and have
    # been reconstructed from their dest names; adjust to taste.
    parser.add_argument('--batch-mode',
                        action='store_true',
                        dest='batch_mode',
                        default=False,
                        help='run automatically with values given')

    parser.add_argument('--interactive-mode',
                        action='store_true',
                        dest='interactive_mode',
                        default=True,
                        help='prompt the user for details (default)')

    parser.add_argument('--input-file',
                        dest='input_file',
                        default='students.csv',
                        help='the name of the input CSV file with one row per student')

    parser.add_argument('--student-id-column',
                        type=int,
                        dest='student_id_column',
                        default=1,
                        help='the column containing the student id (default 1)')

    parser.add_argument('--student-name-column',
                        type=int,
                        dest='student_name_column',
                        default=2,
                        help='the column containing the student name (default 2)')

    parser.add_argument('--student-email-column',
                        type=int,
                        dest='student_email_column',
                        default=9,
                        help='the column containing the student email (default 9)')

    parser.add_argument('--student-id-regexp',
                        dest='student_id_regexp',
                        default='B[0-9]+',
                        help='a regular expression for valid student IDs (default B[0-9]+)')

    parser.add_argument('--smtp-server',
                        dest='smtp_server',
                        default='localhost',
                        help='the address of an smtp server')

    parser.add_argument('--email-subject',
                        dest='email_subject',
                        help='the subject of emails that are sent')

    parser.add_argument('--email-sender',
                        dest='email_sender',
                        help='the sender address from which to send emails')

    parser.add_argument('--test-only',
                        action='store_true',
                        dest='test_only',
                        default=False,
                        help='do not send any emails')

    args = parser.parse_args()

    # Allow for any overrides from program logic or interaction with the user
    args = override_arguments(args)
    return args

def main():
    """the main function that kicks everything else off"""

    args = parse_arguments()

    print("Starting open-book...")
    print(args)

    student_count = 0
    # Go through each row of the CSV file (this loop was lost in
    # transcription and has been reconstructed)
    with open(args.input_file, newline='') as csvfile:
        for row in csv.reader(csvfile):
            # Check if the id cell looks like a student number
            if re.match(args.student_id_regexp, row[args.student_id_column]):
                student_count = student_count + 1
                process_student(args, row)
            else:
                print('  Skipping: non matching row')

    print('Stopping open-book...')

if __name__ == '__main__':
    main()


I use a LaTeX template for the base information; this can be easily edited to taste.

\documentclass[12pt,a4paper]{minimal}
\usepackage[latin1]{inputenc}
\usepackage{pst-barcode}
\usepackage[margin=2cm]{geometry}

%
% Does it all have to be Arial now? <sigh>
%
\renewcommand{\familydefault}{\sfdefault}

\author{Professor Colin Turner}
\begin{document}
\begin{centering}
\textbf{EEE122 Examination Guide Sheet}

This sheet, and its contents that you have added, can be brought into
the examination for EEE122. The contents \textbf{must} be compiled
by yourself, be handwritten, and be original (i.e. \textbf{NOT}
photocopied or similar). You may use the
reverse side. You may retain a copy you have made before the examination
but the original must be handed in with your examination scripts at the end.

%\input{open-book-insert-name.tex}
% D'Oh! Not supposed to put a name on anything going in the exam.
\hfill
\begin{pspicture}(7,1in)
%\psbarcode{
%\input{./open_book_insert.tex}
%B00526636
%}{includetext height=0.25}{code39}
\input{open-book-insert-barcode.tex}
\end{pspicture}
\end{centering}

\vfill
\begin{centering}
Please read the following declaration and tick the box to indicate you agree:

I declare this sheet to have been compiled by myself and not by another, and that the student number above is mine.

%\rule[-1 cm]{10 cm}{1 pt}
\framebox[0.3 cm]{ }

\end{centering}

\end{document}


## Pretty Printing C++ Archives from Emails

I’m just putting this here because I nearly managed to lose it. This is part of a pretty unvarnished BASH script for a very specific purpose: taking an email file containing a ZIP of submitted C++ code from students, and producing pretty printed PDFs of the source files, named after each author, to facilitate marking and annotation. It’s not a thing of beauty. I think I’ll probably write a new cleaner version in future.

#!/bin/bash
#
# A script to take C++ files in coursework and produce pretty printed PDF
# listings named with the author information.
#
# It takes a ZIP file of .cpp and .h files and produces a ZIP file of PDFs
#

# Requires
#   enscript
#   ps2pdf
#   munpack

#
# Called for each file to be encoded
#
# NOTE: the shell variable substitutions below were mangled when this was
# pasted; the $(...) expressions are a best-effort reconstruction.
pretty_print_file()
{
    # Extract the Author JavaDoc information
    author=$(sed -n -e 's/^.*@[Aa]uthor \(.*\)$/\1/gp' "$1")

    # How many lines did we get back?
    lines=$(echo "$author" | wc -l)

    # If we got no author info
    if [ -z "$author" ]
    then
        author="no-author"
    fi

    # If we got too many, keep just the first
    if [ "$lines" -gt 1 ]
    then
        author=$(echo "$author" | head -n 1)
    fi

    output="$author"
    output+=".pdf"
    echo "Encoding $1 (Author: $author)"
    # The enscript options here are indicative; tune to taste
    enscript -Ecpp --color -o - "$1" | ps2pdf - "parsed-output/$output"
}

# Check we were given an email file to work on
if [ -z "$1" ]
then
    echo "Usage: unpack_coursework <email_file>"
    exit
fi

# Make a temporary directory and copy the email file into it.
echo "Creating temporary directory..."
dir=$(mktemp -d)
cp "$1" "$dir"
pushd "$dir"

# Unpack the email
echo "Unpacking email..."
munpack "$(basename "$1")"

# Unzip any submitted archives, junking paths
echo "Unzipping archives..."
for f in *.zip
do
    unzip -Cj "$f"
done

# And the same for source files
echo "Parse .cpp files..."
mkdir parsed-output
for f in *.cpp
do
    pretty_print_file "$f"
done

# ZIP up the pretty printed PDFs
cd parsed-output
zip parsed-output *.pdf
cd ..

# Back to the directory we started in.
popd

# Copy the parsed ZIP to the current directory for inspection and marking
cp "$dir/parsed-output/parsed-output.zip" .
rm -rf "$dir"


## OPUS and Assessment 3 – Regime Change

This is the third and final article in a short series on how OPUS, a system for managing placement on-line, handles assessment. You probably want to read the first and second article before getting into this.

## Regime Change

It’s not just in geo-political diplomacy that regime change is a risky proposition. In general you should not change a regime once it has been established and students entered on to it. If you do, there is a risk that marks and feedback will become unavailable for existing assessments, or that marks are calculated incorrectly and so on. Obviously it is also non-ideal practice for the transparency of assessment.

Instead you should create a new regime in advance of a new academic year, change the assessment settings in the relevant programmes of study to indicate that regime will come into force in the new year, and brief all parties appropriately. All of this is done by the techniques covered in the first two articles. If you have done all that, well done, and you can stop reading now.

## TL;DR DON’T DO THIS, TURN BACK NOW!

This shouldn’t ever happen; as noted, you really need to ensure your regime changes are correctly configured and enabled before any students start collecting marks.

And yet, it does happen, or at least it has happened to me twice that I have been asked to make tweaks to a regime where student marks already exist. Indeed it happened to me this week, hence this article.

Even changing small details like titles will affect the displayed data for students from previous years. Tweaking weightings could cause similar or more serious problems.

So what happens if we create a new regime and move our students onto it midstream? Well, the existing marks and feedback are recorded against the old regime, so they will “disappear” unless and until the students are placed back on that regime.

If you want to do this, and copy over the marks from the old regime into the new regime, there is a potential way to do it. It has only been used a handful of times and should be considered dangerous. It also probably won’t work if your original marks use a regime where the same assessment appears more than once for any given student.

But, if you’re here and want to proceed, it will probably be possible using what was deliberately undocumented functionality.

You will need command line, root access (deliberately – this is not a bug) in order to do this. If you haven’t got root access, you will need the help of someone who does. Read all the instructions before starting.

## 0. BACK UP ALL YOUR DATA NOW

Before contemplating this insanity, ensure your OPUS database is backed up appropriately. I’d also extract a broadsheet of all existing collected assessment for good measure from the Information, Reports section of the Admin interface.

That said, this functionality deliberately copies data, it doesn’t delete it – but still.

## 0. NO REALLY, BACK UP ALL YOUR DATA NOW, I REALLY MEAN IT.

Ok, you’re still here.

First of all this approach only makes sense (obviously) if the marks you have already captured are valid. I.e. the assessment(s) you want to change are in the future for the students and haven’t been recorded. If not, then obviously OPUS can’t help you do anything meaningful with the marks you have already collected.

## 1. Make your New Assessment(s)

Maybe you plan to just change from one stock assessment to another, or perhaps you want to adjust a weighting on an existing assessment that hasn’t been undertaken by students in this year. In this case, you can skip this step.

But if needed, create and test any new assessments following the approach laid out in the second article in this series. Do make sure you spend some time testing the form.

## 2. Add and Configure a New Assessment Regime

Create your new assessment regime, as detailed in the first article, but don’t link it to any programmes yet.

Your new regime should be configured as you wish it to be. Remember, for there to be any point in this exercise, the early assessments already undertaken by the students need to be the same (though not necessarily in the same order) – otherwise OPUS can’t help and you need to sort out all the marks in transition entirely manually.

## 3. Note the IDs of the Old and New Regimes

Things start to get clunky at this point. Remember, we are heading off road. You will need the database ID of both the old regime and the new one.

You can obtain these by, for instance, going to Assessment Groups in the Configuration menu and editing the regimes in turn. The URL will show something like this:

At the very end, you will see “id=2”, so 2 is the id we want. Write these down for both regimes, noting carefully which is the old and which the new. It’s almost certain the new id will be larger than the old one.
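If you would rather not eyeball the address bar, a few lines of Python can pull the id out of the URL. This is purely my own sketch – the URL below is an invented example, and only the “id” query parameter matters:

```python
# Hypothetical sketch: extract the regime id from an edit-page URL.
from urllib.parse import urlparse, parse_qs

url = "https://opus.example.ac.uk/admin.php?page=regimes&id=2"  # invented example
regime_id = int(parse_qs(urlparse(url).query)["id"][0])
print(regime_id)  # 2
```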

## 4. Choose your timing well

You want to complete the steps from here on in, smoothly, in a relatively short time period. It is advisable that you switch OPUS into maintenance mode in a scheduled way with prior warning. This can be done from the Superuser, Services menu in the admin interface, if you are a superuser level admin – if you aren’t you shouldn’t be doing this without the help of such a user. You can also enter maintenance mode with the command line tool.

## 5. Use the Command Line Tool with root access

OPUS ships with a command line utility. With luck, typing “opus” from a root command prompt will reveal it. It’s usually installed in /usr/sbin/ and may not require root access in general, but it most certainly will insist on it for this use.

If that didn’t work, go find it in the cron directory of your OPUS install and run it with

php opus.php

If you needed that fallback, use the same “php opus.php” invocation instead of just “opus” in the next command. We need a command called copy_assessment_results, and you’ll note it’s not on the list. It’s not on the dev_help list either, because … did I mention this is a stupid thing to do? You need to enter the command as follows, changing the ids for the old and new regimes to those you wrote down in step 3. All on one line.

opus copy_assessment_results "old_regime_id=1&new_regime_id=2"

Don’t run this more than once; the code isn’t smart enough to avoid copying over an additional set of data, with possibly “exciting” results.

This copies assessment results, marks and feedback from one regime to another, for every student on the old regime. That is potentially wasteful, but the tool cannot identify only the relevant students, and it deliberately does not delete data, as an obvious precaution.
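Since the command must not be run twice, one defensive option is a tiny run-once wrapper. This is entirely my own sketch, not part of OPUS, and the marker-file path is invented:

```python
# Hypothetical run-once guard around a non-idempotent command.
import pathlib
import subprocess

def run_once(marker: pathlib.Path, argv: list) -> bool:
    """Run argv only if marker does not exist; touch marker on success."""
    if marker.exists():
        return False  # already done: refuse to repeat
    subprocess.run(argv, check=True)
    marker.touch()
    return True

# e.g. run_once(pathlib.Path("/var/tmp/opus-copy-1-to-2.done"),
#               ["opus", "copy_assessment_results", "old_regime_id=1&new_regime_id=2"])
```

The second call with the same marker returns False without running anything, which is exactly the behaviour the warning above asks for.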

## 6. Enable the New Regime for Students

Even in maintenance mode, Superuser admins can log in and act. You can switch over your regime now. Maybe do this for one programme and test the results before using the bulk change facility discussed in the previous article.

With luck you will see your shiny new assessment regime with the old marks and feedback for the existing work in the old regime copied over. Older students on the old regime should still show their results and feedback correctly.

If not – well, this is what that backup in step 0 was for, right? And you’ll have to do it manually from the broadsheet you exported as well.

## 7. Re-enable Normal Access

Either from the command line tool with

opus start

or from the Superuser, Services menu, re-open OPUS for formal access.

## 8. Corrective Action

Explain to relevant colleagues the pain and stress of having to do this and that in future all assessment regime changes should be done appropriately, before students begin completing assessments.

## OPUS and Assessment 2 – Adding Custom Assessments

This is a follow on to the previous article on setting up assessment in OPUS, an on-line system for placement learning. You probably want to read that first. This is much more advanced and requires some technical knowledge (or someone that has that).

## Making New Assessments

Suppose OPUS doesn’t have the assessment you want; then you will have to build your own, from scratch or by modifying an existing one. This takes some minor HTML skill, and access to your OPUS code to add a new file, so if you can’t do this yourself, ensure you get appropriate support.

Look at an existing assessment closely first. Go back to Advanced on the OPUS admin menu, and then Assessments.

For each assessment, clicking on Structure allows access to underlying variables that are captured. These can be numeric, text, or checkboxes, and some validation is possible too.
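To make that concrete, here is a sketch (my own illustration, not OPUS’s actual code) of how a per-variable rule – a type plus optional minimum and maximum – might validate a captured value:

```python
# Hypothetical validation of one captured assessment variable.
def validate_field(value, kind, minimum=None, maximum=None):
    """Return (ok, message) for one captured variable."""
    if kind == "checkbox":
        if value in (True, False):
            return (True, "")
        return (False, "must be ticked or unticked")
    if kind == "numeric":
        try:
            number = float(value)
        except (TypeError, ValueError):
            return (False, "must be a number")
        if minimum is not None and number < minimum:
            return (False, "below minimum")
        if maximum is not None and number > maximum:
            return (False, "above maximum")
        return (True, "")
    # text fields: anything non-empty passes in this sketch
    if value:
        return (True, "")
    return (False, "must not be empty")

print(validate_field("7", "numeric", 0, 10))   # (True, '')
print(validate_field("42", "numeric", 0, 10))  # (False, 'above maximum')
```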

you need to work out what things you will capture, and create a skin for the assessment, most usually by modifying one from another. The following snippet from a related Smarty template shows that this is just HTML, into which OPUS, through Smarty, drops the captured values and error markers (the exact syntax may differ slightly in your version):

{$assessment->flag_error("mark5")}
<input type="text" class="data_entry_required" size="2" value="{$mark5}">
{$assessment->flag_error("comment5")}
{include file="general/assessment/textarea.tpl" name="comment5" rows="7" cols="60"}
</td>
</tr>

<tr><td colspan="5">Marking Scheme</td></tr>
<tr>
<td> Uses language to clearly express views concisely </td>
<td> Expresses clearly but with some minor errors </td>
<td> Good expression, logical flow, reasonably concise </td>
<td> Reasonable flow, some contorted expressions, a little verbose </td>
<td> Poor expression, verbose, some colloquialisms </td>
</tr>

This is a representative snippet. You can see this full template here. Note the “special” code in between braces { }. The variables in the template pertain to the names in the structure.

## Create and Save Your Template

Create your template, probably using one of the existing ones to help you understand the format. This provides the layout and skin for your pro-forma and allows you to do anything you can wish with HTML/CSS. Be mindful of security considerations, but you aren’t writing main code, just an included bit. OPUS will top and tail the file for you when it runs.

Save it under the templates/assessments directory in your OPUS install. I recommend you make a subdirectory for your institution.

Avoid using the “uu” directory. This is used for pre-shipped assessments and those used at Ulster University. There is a chance your changes will get clobbered by a new OPUS version if you put your template in there.

## Adding the Assessment variables into OPUS

Then you need to create your new Assessment item itself as at the top of the article. Once you have created it, click on structure and add each variable you will capture in turn, whether it is text, a number, or a checkbox, and any simple validation rules – such as minimum or maximum values.

The description appears in feedback and validation, so make sure it is meaningful to the end user. The name is the variable name as it appears in your template. The weighting field determines whether numeric values contribute to the score: usually use 1 if you want the score to be counted, and 0 if you want it ignored. Finally you can choose whether each field is compulsory or not. Optional fields will be ignored in the total when OPUS creates a percentage.
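As an illustration of how those weighting and compulsory flags interact (again my own sketch, not OPUS’s code), a percentage might be computed like this:

```python
# Hypothetical percentage calculation over captured numeric fields.
def percentage(fields):
    """fields: list of dicts with keys value, max, weighting, compulsory.

    Fields with weighting 1 count towards the score; weighting 0 means
    the mark is captured but ignored. Optional fields left blank are
    skipped entirely."""
    total = 0.0
    possible = 0.0
    for f in fields:
        if f["value"] is None and not f["compulsory"]:
            continue  # optional and unanswered: ignored in the total
        if f["weighting"]:
            total += f["weighting"] * (f["value"] or 0)
            possible += f["weighting"] * f["max"]
    return 100.0 * total / possible if possible else 0.0

marks = [
    {"value": 4, "max": 5, "weighting": 1, "compulsory": True},
    {"value": 3, "max": 5, "weighting": 0, "compulsory": True},      # captured, ignored
    {"value": None, "max": 5, "weighting": 1, "compulsory": False},  # optional, skipped
]
print(percentage(marks))  # 80.0
```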

Once complete, add your new assessment into a test regime as detailed in the first article and do some careful testing before adding the regime to live students.