
Budgeting for Assessment

Workloads for academics in Higher Education are often very complex, with teaching loads, research tasks and administration all jostling for our attention, and constant task switching adding to the strain. For many academics, teaching loads are a significant part of their work, and explicitly examining the time spent on assessment could bring better results for staff and students alike.

Contact Hours to Admin Hours

There are usually ad-hoc assumptions about the amount of administration time a module takes beyond the contact hours spent in front of a class. For our purposes, contact hours could just as easily mean delivering synchronous and asynchronous hours on-line as time spent in more traditional on-campus delivery.

A common assumption is that each hour of contact requires two hours of administration time, sometimes more; a module with 36 contact hours would then carry around 72 hours of administration. That administration time can be subdivided into tasks such as preparation of teaching materials, delivery of assessment, and correspondence with students.

Preparation time for teaching materials is obviously markedly higher for the first presentation of a module, or after significant changes. Many academics are currently putting in substantial additional preparation time to re-factor materials for on-line delivery.

Assessment Hours

I want to focus on the time spent on assessment because I feel it is a serious time sink for most academics. This is partly because, when it comes to reform of learning and teaching in a curriculum, assessment is often the last consideration: we are nervous about the serious consequences of getting things wrong. It is also partly because we can sometimes draw the conclusion that time spent on assessment equates directly to quality.

How often have you attempted to place a budget on your assessment time before delivering a module? I mean the time taken to design an assessment, deliver it to students, assess the submissions and deliver feedback. My guess is that very few of us have done this.

The outcomes of this can be serious. We often design assessments focusing on the first half of these tasks, and they then take a tremendous and unquantified amount of labour to fully deliver, often much more than we really expected. The result can be a very stressed academic or team of academics, feedback delivered too late to be of effective use to students, or feedback whose quality and depth suffers. Any combination of these outcomes is also possible.

Budget influences Design, poor Design blows the Budget

Agreeing a budget with a line manager, or even just with yourself, can be informative. If you think the budget is too low, you are faced with a choice: make the argument that additional resource is really required, or re-design the assessment to fit within your budget.

Of course, there are times when additional resource really is required, but my argument here is that this should be a conscious choice, planned for and, if possible, agreed with your line manager, who may be able to bring practical assistance, or at least balance out the rest of your workload.

Even if you agree a high budget, a good plan and a good design minimise the risk of blowing it.

Design Choices

So what choices can we make to reduce the time burden? Some choices not only have no adverse effect on quality, but can actually deepen the quality of feedback or reflection opportunities for students. Here’s a very non-exhaustive list of thoughts in this direction.

  • Do you really need all those questions to confirm your learning outcomes? Are some questions simply re-assessing the same aspects? Trim them if so; the extra material can be used for tutorials instead.
  • Have you considered a good rubric, if you aren’t already using one? A rubric can improve the transparency of outcomes to students both before and after assessment, provide some generic feedback, and hugely improve the speed of marking, leaving you more time to give focused feedback.
  • Can you partially automate some of the assessment? If assessments are delivered on-line, many Virtual Learning Environments allow you to set questions with fixed or calculated answers, so some of the marking and feedback can be automated. You can combine these with deeper, free-response questions.
  • Can peer assessment accomplish some of your goals? If you are nervous about using peer assessment (and it does need care), what about using it in a formative way as part of your assessment diet? It can also greatly deepen students’ understanding of how their work is marked and assessed.
  • Can self assessment accomplish some of your goals? This can encourage highly reflective learning and allow you to guide the feedback based on the students’ initial assumptions.

What are your ideas for reducing your assessment budget while keeping, or even deepening, the quality?

Even if you don’t undertake this formally with your line manager, try setting yourself an assessment budget, and consider how to work within it so that you can deliver authentic assessments and quality feedback in a way that leaves you time, focus and attention for the other parts of your job.

Assessment handling and Assessment Workflow in WAM

Some time ago I began writing a Workload Allocation Modeller aimed at Higher Education, and I’ve written some previous blog articles about it.

As is often the way, the scope of the project broadened and I found myself writing in support for handling assessments and the QA processes around them. At some point this will necessitate renaming WAM to something more general (answers on a post card please), but for now, development continues.

Last year I added features to allow Exams, Coursework, and their Moderation and QA documents to be uploaded to WAM. This was generally reasonably successful, if a bit clunky. We gave several External Examiners access to the system; they were able to look in at the modules for which they were an examiner, and the feedback was pretty good.

What Worked

One of the things that worked best about last year’s experiment was that we put in information about the Programmes (Courses) each Module was on. It’s not at all unusual for many Programmes to have the same Module within them.

This can cause a headache for External Examination since an External Examiner is normally assigned to a Programme. In short, the same Module can end up being looked at by several Examiners. While this is OK, it can be wasteful of work, and creates potential problems when two Examiners have a different perspective on the Module.

So within WAM, I encoded an assumption about what we should already be doing in paper-based systems – that every Module should have a “Lead Programme”. The Examiner for that Programme is the one with primacy; furthermore, where Examiners are presented with other Modules on the Programme for which they aren’t the “lead” Examiner, they know that these are for information, and that they may not be required to delve into them in as much detail – unless they choose to.
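
In data-model terms the idea is simple. Here is a minimal Django sketch of the relationship; the model and field names are illustrative assumptions rather than WAM’s actual code:

from django.db import models

class Programme(models.Model):
    '''A programme of study, to which an External Examiner is assigned.'''
    name = models.CharField(max_length=200)

class Module(models.Model):
    '''A module, which may appear on several programmes.'''
    name = models.CharField(max_length=200)
    # all of the programmes on which this module appears
    programmes = models.ManyToManyField(Programme, related_name='modules')
    # the programme whose External Examiner has primacy for this module
    lead_programme = models.ForeignKey(Programme, on_delete=models.PROTECT,
                                       related_name='lead_modules')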

This aspect worked well, and the External Examiners have a landing screen that shows which Modules they are examining, and for which of those they are the lead Examiner.

What Didn’t Work

I had written code that was intended to look at which assessment artefacts had been uploaded since a user’s last login, and email them the relevant material.

This turned out to be problematic, partly because one had to unpick who should get what, but mostly because I’m using remote authentication with Django (the Python framework in which WAM is written), and it seems that the last login time isn’t always updated properly when you aren’t using Django’s built-in authentication.

But the biggest problem was the lack of any workflow. This was somewhat deliberate, since I didn’t want to hardcode my School or Faculty’s workflow.

You should never design your software product for HE around your own University too tightly. Because your own University will be a different University in two years’ time.

So I wanted to ponder this a bit, but in the meantime the absence of workflow made visibility of what was going on a little difficult. It looked a bit like this (not exactly, as this is a screenshot from a newer version of an older module):

Old view of Assessment Items

with items shown from oldest at the bottom to newest at the top. You can roughly infer the workflow state from the top item, and indeed, I used that in the module list.

But staff uploaded files that they then wanted to delete (which was disallowed for audit reasons), the workflow wasn’t very clear, and that made notifications more difficult.

What’s New

So, in a beta version of 2.0 of the software I have implemented a workflow model. I did this by:

  • defining a model that represents the potential states a Module can be in; each state defines who can trigger it, what can happen next, and who should be notified;
  • defining a model that records a “sign off” event (both are sketched below).
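
A minimal sketch of what these two models might look like follows; the names and fields are illustrative rather than WAM’s actual code:

from django.db import models
from django.conf import settings

class WorkflowState(models.Model):
    '''A potential state in the assessment workflow.

    Each state records who may trigger it, which states may
    legitimately follow it, and who should be notified on entry.
    '''
    name = models.CharField(max_length=100)
    # the role allowed to trigger this state, e.g. 'moderator'
    triggered_by = models.CharField(max_length=50)
    # the states that can legitimately follow this one
    next_states = models.ManyToManyField('self', symmetrical=False, blank=True)
    # the roles to email when this state is reached
    notify = models.CharField(max_length=100)

class SignOff(models.Model):
    '''Records a "sign off": a user moving a Module into a state.'''
    module = models.ForeignKey('Module', on_delete=models.CASCADE)
    state = models.ForeignKey(WorkflowState, on_delete=models.PROTECT)
    signed_by = models.ForeignKey(settings.AUTH_USER_MODEL,
                                  on_delete=models.PROTECT)
    notes = models.TextField(blank=True)
    created = models.DateTimeField(auto_now_add=True)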

Once it became possible to issue a “sign off” of where we were in the workflow, a lot of things became easier. This screenshot shows how it looks now.

Example of new assessment workflow

OK, it’s a bit of a contrived example, since I’m the only user triggering states here (and I can only do that in some cases because I’m a Superuser; otherwise some states can only be triggered by the correct stakeholder – the moderator or examiner).

However, you can see that we still have all the assessment resources, but now with sign offs at various stages. The sign offs could (and in a real implementation likely would) carry much more detailed notes.

This in turn has made notification emails much easier to create. Here is the email triggered by the final sign off above.

The detailed notes aren’t shown in the email, in case other eyes are on it and there are sensitive comments.

All of this code is available on GitHub. It’s working now, but I’ll probably do a few more bits before an official 2.0 release.

I will be demoing the system at the Royal Academy of Engineering in London next Monday, although that will focus entirely on WAM’s workload features.

Workload Allocation Modelling Update – Scalability

I have been doing some more work on my software for Academic Workload Modelling, developing a roadmap for two future versions: one covering the modifications needed to run real allocations for next year without scrapping existing data, and another adding code to handle the moderation of exams and coursework (which isn’t really anything to do with workload modelling – there’s some more mission creep going on).

Improvements to Task Handling

Speaking of mission creep, I noted in the last article that I’d added some code to capture tasks that staff members would be reminded of and could self-certify as complete. I have since improved this a lot, with richer detail about when tasks are overdue and some UI improvements.

I wanted to automate some batch code to send emails from the system periodically. I discovered that a Django management command provided an elegant way to add this batch code to the project: it can then be called by cron through the usual Django manage.py script that handles a project’s command-line tasks.

#
# Regular cron jobs for the wam package
#
#  m h  dom mon dow user command
# Every week
#
# Each Monday at 7 am, send all reminders
0 7 * * 1 root /usr/bin/python3 /usr/local/share/wam/manage.py email_reminders
# Each Wednesday and Friday at 7 am, send reminders for overdue and less than 7 days to deadline tasks
0 7 * * 3,5 root /usr/bin/python3 /usr/local/share/wam/manage.py email_reminders --urgent-only

It was easy to use this framework to add command switches and to configure verbosity (you might note I haven’t disabled all output at the moment, so that I can monitor execution at this stage). I have set this up to email folks on a Monday morning with all their tasks, and also on Wednesday and Friday if there are urgent tasks still outstanding (less than a week to deadline).
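
The shape of such a command is straightforward. Here is a minimal sketch of what an email_reminders command might look like; the Task model, its fields, and the send_reminder() helper are assumptions for illustration:

# management/commands/email_reminders.py
from datetime import timedelta

from django.core.management.base import BaseCommand
from django.utils import timezone

class Command(BaseCommand):
    help = 'Email staff reminders about their outstanding tasks'

    def add_arguments(self, parser):
        # the --urgent-only switch used by the midweek cron jobs above
        parser.add_argument('--urgent-only', action='store_true',
                            help='only remind about urgent tasks')

    def handle(self, *args, **options):
        from wam.models import Task  # hypothetical model for this sketch
        tasks = Task.objects.filter(complete=False)
        if options['urgent_only']:
            # urgent: overdue, or less than a week to the deadline
            cutoff = timezone.now() + timedelta(days=7)
            tasks = tasks.filter(deadline__lte=cutoff)
        for task in tasks:
            task.send_reminder()  # assumed helper that emails the owner
            if options['verbosity'] > 0:
                self.stdout.write('Reminded: %s' % task)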

I’ve been using this functionality live and it has worked very well. I used Django templates to provide the email bodies, in both HTML and plain text.
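
The pattern for that is the standard Django one: render both bodies from templates and attach the HTML version as an alternative. A sketch, with the template paths and context assumed:

from django.core.mail import EmailMultiAlternatives
from django.template.loader import render_to_string

def send_reminder_email(user, tasks):
    '''Send a task reminder email in both plain text and HTML.'''
    context = {'user': user, 'tasks': tasks}
    text_body = render_to_string('wam/email_reminders.txt', context)
    html_body = render_to_string('wam/email_reminders.html', context)
    email = EmailMultiAlternatives(subject='WAM: task reminders',
                                   body=text_body, to=[user.email])
    email.attach_alternative(html_body, 'text/html')
    email.send()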

Sample Task Reminder Email

Issues of Scale

My early prototype handled data for a single academic year, albeit with fields in the schema intended to address this at a later stage. It also suffered from a related problem: if other Schools wanted to use the system, how would I disaggregate the data, both for security and for convenience?

In the end I hit upon a solution for both issues: a WorkPackage model that specifies a range of dates (usually one academic year) and a collection of Django User Groups. All manually allocated activities and module data are tied to a package, and are therefore invisible to other packages (users in other Schools, or in other Academic Years). I was also able to move the constants I use to model workload into the Django model, making them easier to tweak year on year.

class WorkPackage(models.Model):
    '''Groups workload by user groups and time
    
    A WorkPackage can represent all the users and the time period
    for which activities are relevant. The most usual application
    would be to group activities by School and Academic Year.
    
    name        the name of the package, probably the academic unit
    details     any further details of the package
    startdate   the first date of activities related to the package
    enddate     the end date of the activities related to the package
    draft       indicates the package is still being constructed
    archive     indicates the package is maintained for record only
    groups      a collection of all django groups affected
    created     when the package was created
    modified    when the package was last modified
    nominal_hours
                the considered normal number of load hours in a year
    credit_contact_scaling
                multiplier from credit points to contact hours
    contact_admin_scaling
                multiplier from contact hours to admin hours
    contact_assessment_scaling
                multiplier from contact hours to assessment hours
    
    '''
    
    name = models.CharField(max_length=100)
    details = models.TextField()
    startdate = models.DateField()
    enddate = models.DateField()
    draft = models.BooleanField(default=True)
    archive = models.BooleanField(default=False)
    groups = models.ManyToManyField(Group, blank=True)
    nominal_hours = models.PositiveIntegerField(default=1600)
    credit_contact_scaling = models.FloatField(default=8/20)
    contact_admin_scaling = models.FloatField(default=1)
    contact_assessment_scaling = models.FloatField(default=1)
    created = models.DateTimeField(auto_now_add=True)
    modified = models.DateTimeField(auto_now=True)
    
    def __str__(self):
        return self.name + ' (' + str(self.startdate) + ' - ' + str(self.enddate) + ')'
    
    class Meta:
        ordering = ['name', '-startdate']

I’m pretty much ready to use the system for a real allocation now, without having to purge the test data I used this year. I can simply create a new WorkPackage.

I need to write some functionality to allow one package’s allocations to be automatically rolled over to the next as a starting point, but I reckon that’s maybe two or three more hours’ work.
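
The core of that rollover should be small. A possible sketch, assuming a hypothetical Activity model with a ForeignKey to WorkPackage:

def rollover(old_package, new_package):
    '''Copy every activity from one WorkPackage into another.'''
    from wam.models import Activity  # hypothetical model for this sketch
    for activity in Activity.objects.filter(package=old_package):
        activity.pk = None           # clearing the pk forces an INSERT
        activity.package = new_package
        activity.save()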

Future Plans for the Application

The next piece of planned functionality is the ability to handle coursework, examinations and the moderation process around them. This will be quite a big chunk of new functionality, and will again move the system towards something quite a bit bigger than a workload allocation tool.

This of course means I need a better application name (WAM isn’t so awesome anyway). Suggestions on a post card.

Django Issues

I think I’m getting more to grips with Django all the time – although I often have the nagging feeling that I’m writing several lines of code that would be simpler if I had a better feel for its QuerySet syntax.
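
For instance, totalling a staff member’s allocated hours within a package is the kind of thing that can collapse from a hand-written loop into a single aggregate expression (the model and field names here are illustrative):

from django.db.models import Sum

total = (Activity.objects
         .filter(staff=staff_member, package=package)
         .aggregate(Sum('hours'))['hours__sum'] or 0)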

The big problem I hit, again, was with migrations. I created and executed migrations on my development system (SQLite), but when I moved them over to production (MySQL) it barfed spectacularly.

Once again, the lack of idempotent execution means you have to work out which part of the migration worked, and then tag the migration as “faked” in order to move on to the next one. This was sufficient this time, and I didn’t have to write custom migrations as I did previously, but it’s really not very reassuring.
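
For reference, the faking is done per migration from the command line, something like the following – the app label and migration name here are examples only:

# mark the partially-applied migration as done without re-running it
python3 manage.py migrate wam 0012_workpackage --fake
# then apply the remaining migrations normally
python3 manage.py migrate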

Further Details

As before, the code is on GitHub, and the development website on foss.ulster.ac.uk, if you want more details.