
Battleships Server / Client for Education

I’ve been teaching a first-year introductory module in Python programming for Engineering at Ulster University for a few years now. In the later labs I have let the students build a battleships game using Object Oriented Programming – with “Fleet” objects containing a list of “Ships” and so on – which they could play on one computer against themselves. But I had often wanted to see if I could get as far as an on-line server application based on similar principles that the students could use. This would teach them client-server fundamentals, but also let them have fun playing against each other.
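To give a flavour of that structure, the lab game is built around something like the following – this is only an illustrative sketch, not the actual teaching code:

# Illustrative sketch only - not the real lab code. It just shows the idea of
# a Fleet object holding a list of Ship objects.
class Ship:
    def __init__(self, name, x, y):
        self.name = name
        self.x = x
        self.y = y
        self.sunk = False

class Fleet:
    def __init__(self):
        self.ships = []

    def add_ship(self, ship):
        self.ships.append(ship)

    def take_fire(self, x, y):
        """Return True if a shot at (x, y) hits a surviving ship."""
        for ship in self.ships:
            if not ship.sunk and ship.x == x and ship.y == y:
                ship.sunk = True
                return True
        return False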

Last year I finally attained that goal. I built a very simple web server using Python and Django to manage players, games, and ships. I wanted to use Python partly because it’s become my language of choice for web applications, but also because I wanted the code to be as accessible and understandable to the students as possible. I have initially placed very little emphasis on the user interface of the server – this is partly deliberate, because I wanted the students to have a strong incentive to develop this in the clients. I did, however, build a very simple admin view to let me see the games in progress. Additionally, Django provides a very easy way to build admin views to manipulate objects; I have enabled this to let admins (well, me) tweak data if I need to.

The Server

The server provides a very simple interface – an API – to let the students interact with the game. They can do this initially very directly using a web browser. They can simply type in specific web addresses to take certain actions. For instance

http://battleships.server.net/api/1.0/games/index/

will, for a battleships server installed on battleships.server.net (not a real server), list all the current active games. You’ll note the api/1.0/ part of the URL. I did this so that in future years I could change the API and add new version numbers. That’s partly the focus of a later section of this blog.

The output of all the API calls is actually encoded in JSON. This is still fairly human readable, and indeed some web browsers, like Firefox, will render this in a way that is easy to explore. Again, this makes it possible for the students to play the game with nothing more than a web browser and a certain amount of patience. This is good for exploration, but certainly not the easiest way to play. Here’s some sample output from my real games server with my students (poor Brian).

Sample games index JSON output – this is designed to be read by machines (a client program) but it’s still clear enough for humans, so students can read it directly.
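If you’d rather explore the API from code than from the address bar, a few lines of Python are enough. This is just a sketch using the requests library against the fictional server name above; the exact shape of the JSON is an assumption, so check the API documentation in the repository for the real field names.

# Sketch only: fetch and print the games index. The server name is the
# fictional one used above, and the JSON structure is an assumption - see
# the API documentation for the real field names.
import requests

response = requests.get("http://battleships.server.net/api/1.0/games/index/")
response.raise_for_status()

for game in response.json():
    print(game)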

The code for the server and documentation about the API is all publicly available on my GitHub repository. You’ll also find some basic installation instructions in case you’d like to run your own game server.

I did want to build some very basic views into the server: I built a more human-friendly version of the games list, and I also built the view below, which was designed only for admins – obviously, in the hands of normal players it would make the game a bit pointless.

Game overview for admins only.

This allowed me to see a little of what was actually going on in the games my students were playing. As well as showing the current state of ships and who owns what, it also showed a bit more information – here for a different game than the one above:

Game history and ship list

As you can see, this shows some of the history of the game so far – this information is available to clients in the API – and likewise the list of surviving ships is shown. The API call for the ships still surviving only shows the ships belonging to the player making the call, authenticated with a secret (password) that was generated when the player was created.

You may notice that the server automatically names ships with names taken from the Culture novels by Iain M. Banks.
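Something along these lines is all that takes – a sketch only, seeded with a handful of real Culture ship names; the server’s actual list and naming code live in the repository:

# Sketch of random ship naming - the real name list is in the server code.
import random

CULTURE_SHIP_NAMES = [
    "Grey Area",
    "Sleeper Service",
    "So Much For Subtlety",
    "Of Course I Still Love You",
]

def random_ship_name():
    return random.choice(CULTURE_SHIP_NAMES)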

The Client(s)

In this way the students are using their web browsers as a highly unsophisticated client to access the server. Actually playing the game this way will be frustrating – it requires careful URLs to be typed every time, and the output to be noted for further commands. This is quite deliberate. It would be easy to build the web server to allow the whole game to be played seamlessly from a web browser, but that isn’t the point – I want the students to experience building a client themselves and developing it.

For my module last year I gave the students a partially complete client, written in Python, using text only. Their aim for the lab was to complete the client. No two clients would be exactly the same, but they would all be written to work against the same server specification. In theory a better client gives a player an edge against others – a motivation for improvements. One student built the start of a graphical interface; some students needed to be given more working pieces to complete their clients.

I’ve placed a more or less complete client on GitHub as well, but it could certainly do with a number of improvements. I may not make those since the idea of the project is to provide a focus for students to make these improvements.
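To give a flavour of what such a client actually does on its turn, here is a hedged sketch of a single shot. The endpoint path, parameter names and response fields are illustrative assumptions, not the real API, which is documented alongside the server on GitHub.

# Illustrative only - the URL layout, parameters and response fields here are
# assumptions; the genuine API is documented in the GitHub repository.
import requests

BASE = "http://battleships.server.net/api/1.0"

def fire_shot(game_id, player_secret, x, y):
    """Ask the server to fire at (x, y) in the given game and return the result."""
    url = "%s/games/%s/fire/" % (BASE, game_id)   # hypothetical endpoint
    response = requests.get(url, params={"secret": player_secret, "x": x, "y": y})
    response.raise_for_status()
    return response.json()                        # e.g. {"result": "hit"}

if __name__ == "__main__":
    print(fire_shot(1, "not-a-real-secret", 3, 7))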

What Went Well

The server worked reasonably well, and once I got going the code for automatic ship generation and placement worked quite nicely. I built a good number of unit tests for this which flushed out some problems, and which again are intended as a useful teaching resource.

I spent a reasonable amount of time building a server API that couldn’t be too easily exploited. For instance, to prevent brute-force attacks by a client, it’s not possible for any player to get more than one move ahead of another.
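The idea is simply that a shot is refused if it would leave the shooter more than one move ahead of any other player – something like this sketch, although the real check lives in the Django server code and uses its own field names:

# Sketch of the rate-limiting idea only; the field names are made up.
def can_fire(player, opponents):
    """Allow a shot only if it leaves the player at most one move ahead."""
    return all((player.moves_made + 1) - other.moves_made <= 1
               for other in opponents)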

The server makes it relatively easy to have as many players as one wants in a given game.

What Went Wrong

I made some missteps the first time around the block.

  • the game grid’s default size is too large – I made this configurable, but too large a grid makes the chance of quick hits much more remote, and the students need those early hits for motivation. I only really noticed this when I built the admin view shown above.
  • in the initial game, any hit on a ship destroys it – there is no concept of health, so bigger ships are more vulnerable. This is arguably not a mistake, but it is a deviation from expected behaviour.

What’s Next

This year I may add another version of the API that adds health to ships, so that multiple hits are needed to sink large ships. By moving all of this to a new version number in the URL, as above, e.g.

http://battleships.server.net/api/2.0/games/index/

I hope to allow for improved versions of the game while still supporting (within reason) older clients accessing the original version.
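In Django this kind of side-by-side versioning can be as simple as mounting each API version under its own URL prefix – a sketch only, with made-up module names, and not necessarily how my server actually arranges it:

# urls.py sketch - the module names here are hypothetical.
from django.urls import include, path

urlpatterns = [
    path("api/1.0/", include("battleships.api_v1.urls")),
    # a future "api/2.0/" entry can sit alongside without breaking old clients:
    # path("api/2.0/", include("battleships.api_v2.urls")),
]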

I might also add admin calls to tweak the number of ships per player and the size of the ships. There are defaults in the code, but these can easily be overridden.

Shall We Play A Game?

There’s absolutely no reason the clients need to be written in Python. There’s no reason not to write one in C++ or Java, or to build a client that runs on Android or iOS phones. There’s no reason that first year classes with a text mode Python client can’t play against final year students using an Android client.

If you are teaching in any of these languages and feel like writing new clients against the server please do. If you feel like making your clients open source to help more students (and lecturers) learn more, that would be super too.

If you have some improvements in mind for the server, do feel free to fork it, or put in pull requests.

If you or your class do have a go at this, I’d love to hear from you.

Python script to randomise an m3u playlist

While I’m blogging scripts for playlist manipulation, here is one I use in a nightly cron job to shuffle our playlists, so that various devices playing from them have some daily variety. All disclaimers apply: it’s rough and ready, but it WorksForMe (TM).

I have an entry in my crontab like this

0 4 * * * /home/colin/bin/playlist-shuffle.py -q -i /var/media/mp3/A_Colin.m3u -o /var/media/mp3/A_Colin_Shuffle.m3u

which takes a static playlist and produces a nightly shuffled version.

#!/usr/bin/env python
#
# Simple script to randomise an m3u playlist
# Colin Turner <ct@piglets.com>
# 2013
# GPL v2
#

import random
import re

# We want to be able to process some command line options.
from optparse import OptionParser

def process_lines(options, all_lines):
  'process the list of all playlist lines into three chunks'
  # Eventually we want to support several formats
  m3u = True
  extm3u = False
  if options.verbose:
    print "Read %u lines..." % len(all_lines)
  header = list()
  middle = list()
  footer = list()
  
  # Check first line for #EXTM3U
  if re.match("^#EXTM3U", all_lines[0]):
    if options.verbose:
      print "EXTM3U format file..."
    extm3u = True
    header.append(all_lines[0])
    del all_lines[0]

  loop = 0
  while loop < len(all_lines):
    # Each 'item' may be multiline
    item = list()
    if re.match("^#EXTINF.*", all_lines[loop]):
      item.append(all_lines[loop])
      loop = loop + 1
    # A proper regexp for filenames would be good
    if loop < len(all_lines):
      item.append(all_lines[loop])
      loop = loop + 1
    if options.verbose: print item
    middle.append(item)
            
  return (header, middle, footer)

def load_playlist(options):
  'loads the playlist into an array of arrays'
  if options.verbose:
    print "Reading playlist %s ..." % options.in_filename
  with open(options.in_filename, 'r') as file:
    all_lines = file.readlines()
  (header, middle, footer) = process_lines(options, all_lines)
  return (header, middle, footer)

def write_playlist(options, header, middle, footer):
  'writes the shuffled playlist'
  if options.verbose:
    print "Writing playlist %s ..." % options.out_filename
  with open(options.out_filename, 'w') as file:
    for line in header:
      file.write(line)
    for item in middle:
      for line in item:
        file.write(line)
    for line in footer:
      file.write(line)


def shuffle(options):
  'perform the shuffle on the playlist'
  # read the existing data into three arrays in a tuple
  (header, middle, footer) = load_playlist(options)
  # and shuffle the lines array
  if options.verbose:
    print "Shuffling..."
  random.shuffle(middle)
  # now spit them back out
  write_playlist(options, header, middle, footer)

def print_banner():
  print "playlist-shuffle"

def main():
  'the main function that kicks everything else off'
  
  usage = "usage: %prog [options] arg"
  parser = OptionParser(usage)
  parser.add_option("-i", "--input-file", dest="in_filename",
                    help="read playlist from FILENAME")
  parser.add_option("-o", "--output-file", dest="out_filename",
                    help="write new playlist to FILENAME")
  parser.add_option("-v", "--verbose",
                    action="store_true", dest="verbose")
  parser.add_option("-q", "--quiet", default=False,
                    action="store_true", dest="quiet")
                    
  (options, args) = parser.parse_args()
#  if len(args) == 0:
#      parser.error("use -h for more help")
  
  if not options.quiet:
    print_banner()
  
  shuffle(options)
  
  if not options.quiet:
      print "Playlist shuffle complete..."
  
 
if  __name__ == '__main__':
  main()

Python script to add a file to a playlist

I have a number of playlists on Gondolin, which is a headless machine. I wanted to be able to easily add a given mp3 file to the playlists, which are in m3u format. That means each entry has both the filename and an extended line with some basic metadata – in particular the track length in seconds, the track artist and the title. I wanted a script that could extract this information from the mp3 file and make adding the entry easy, so I wrote this in Python. It’s rough and ready and probably not very Pythonic, but it’s working for me. The script should create the playlist if it doesn’t already exist, and it checks for a newline at the end of the file so that the appended lines really start on a new line. ItWorksForMe (TM).
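For reference, an entry in an extended m3u playlist looks like this – the header appears once at the top, then each track gets an #EXTINF line (length in seconds, then “artist - title”) followed by the file path; the track here is made up purely for illustration:

#EXTM3U
#EXTINF:215,Some Artist - Some Track
Some Artist/Some Album/01 - Some Track.mp3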

This uses the eyeD3 Python library, which on Debian is provided in python-eyed3.

My basic usage is

playlist-append -m the_mp3_file.mp3 -p the_playlist.m3u -r /var/media/mp3

The last parameter is the path relative to which the mp3 filename should be written. This is useful for me because I rsync the whole tree between machines; as you will see, there are options for writing an absolute pathname if you prefer. I should probably rewrite the script to do it relative to the playlist, but that’s another day.

#!/usr/bin/env python

#
# Trivial script to extract meta data from an mp3 file and add
# the mp3 file and data to an existing m3u file
#
# Colin Turner <ct@piglets.com>
# 2014
# GPL v2
#
# v 20140801.0 Initial Version
#
# v 20140802.0
# The mp3 filename is now, by default, written relative to the path of
# the playlist if possible.
#

import eyeD3
import re
import os

# We want to be able to process some command line options.
from optparse import OptionParser

def append(options, artist, title, seconds):
  'append the fetched data to the playlist'
  mp3_filename = resolve_mp3_filename(options)
  # Check if the playlist file exists
  there_is_no_spoon = not os.path.isfile(options.out_filename)

  with open(options.out_filename, 'a+') as playlist:
    # was the file freshly created?
    if there_is_no_spoon:
      # So write the header
      print >> playlist, "#EXTM3U"
    else:
      # There was a file, so check the last character, in case there was no \n
      playlist.seek(-1, os.SEEK_END)
      last_char = playlist.read(1)
      if(last_char != '\n'):
        print >> playlist

    # OK, now able to write
    print >> playlist, "#EXTINF:%u,%s - %s" % (seconds, artist, title)
    print >> playlist, "%s" % mp3_filename

def resolve_mp3_filename(options):
  '''Resolve the mp3 filename appropriately, if we can, and if we are asked to.

  There are three modes, depending on command line parameters:
  -l writes the filename precisely as given on the command line
  -r specifies a base relative to which to write the filename
  otherwise we try to resolve relative to the directory of the playlist.
  The absolute filename is the fall back position if resolution is impossible.
  '''

  if options.leave_filename:
    # we have been specifically told not to resolve the filename
    mp3_filename = options.in_filename
    if options.verbose:
      print "Filename resolution disabled."

  if not options.leave_filename and not len(options.relative_to):
    # Neither argument used, automatically resolve relative to the playlist
    (playlist_path, playlist_name) = os.path.split(os.path.abspath(options.out_filename))
    options.relative_to = playlist_path + os.path.sep
    if options.verbose:
      print "Automatic filename resolution relative to playlist base %s" % options.relative_to

  if len(options.relative_to):
    # We have been told to map the path relative to another path
    mp3_filename = os.path.abspath(options.in_filename)
    # Check that the root is actually present
    if mp3_filename.find(options.relative_to) == 0:
      # It is present and at the start of the line
      mp3_filename = mp3_filename.replace(options.relative_to, '', 1)

  if options.verbose:
    print "mp3 filename will be written as %s..." % mp3_filename
  return mp3_filename

def get_meta_data(options):
  'perform the append on the playlist'
  # read the existing data into three arrays in a tuple
  if options.verbose:
    print "Opening MP3 file %s ..." % options.in_filename
  if eyeD3.isMp3File(options.in_filename):
    # Ok, so it's an mp3
    audioFile = eyeD3.Mp3AudioFile(options.in_filename)
    tag = audioFile.getTag()
    artist = tag.getArtist()
    title = tag.getTitle()
    seconds = audioFile.getPlayTime()
    if not options.quiet:
      print "%s - %s (%s s)" % (artist, title, seconds)
    # OK, we have the required information, now time to write to the playlist
    return artist, title, seconds
  else:
    print "Not a valid mp3 file."
    exit(1)

def print_banner():
  print "playlist-append"

def main():
  'the main function that kicks everything else off'

  usage = "usage: %prog [options] arg"
  parser = OptionParser(usage)
  parser.add_option("-m", "--mp3-file", dest="in_filename",
                    help="the FILENAME of the mp3 file to add")
  parser.add_option("-p", "--playlist-file", dest="out_filename",
                    help="the FILENAME of the playlist to append to")
  parser.add_option("-l", "--leave-filename", dest="leave_filename", default=False, action="store_true",
                    help="leaves the mp3 path as specified on the command line, rather than resolving it")
  parser.add_option("-r", "--relative-to", dest="relative_to", default="",
                    help="resolves mp3 filename relative to this path")
  parser.add_option("-v", "--verbose",
                    action="store_true", dest="verbose")
  parser.add_option("-q", "--quiet", default=False,
                    action="store_true", dest="quiet")

  (options, args) = parser.parse_args()
#  if len(args) == 0:
#      parser.error("use -h for more help")

  if not options.quiet:
    print_banner()

  (artist, title, seconds) = get_meta_data(options)
  append(options, artist, title, seconds)

  if not options.quiet:
      print "Appended to playlist..."


if  __name__ == '__main__':
  main()



Migration from Savane to Redmine

I am the admin for a server at work, foss.ulster.ac.uk, which hosts our open source development work. It used to run on GNU Savane, but despite several efforts, that project is clearly dead in the ditch.

So having to change the underlying system, I decided to move to Redmine (you can see some previous discussion here). I’m recording aspects of the migration here mostly for my own sake.

This install was on Debian Squeeze. I first of all installed the relevant package

aptitude install redmine redmine-pgsql

and followed the prompts for the configuration. The documentation for the Debian install is a little unhelpful about how to actually configure the web server, and while I have good experience with Apache, I have very little with Ruby on Rails.

I installed the Apache Passenger module.

aptitude install libapache2-mod-passenger

and copied the example config

cd /usr/share/doc/redmine/examples/
cp apache2-passenger-alias.conf /etc/apache2/sites-available/redmine

I then edited the newly created redmine file to look like this:

# These modules must be enabled : passenger
# Configuration for http://foss.ulster.ac.uk/redmine

ServerName foss.ulster.ac.uk
# this is the passenger config
RailsEnv production
SetEnv X_DEBIAN_SITEID "default"

#
# This is the example from the Debian package
#
#SetEnv RAILS_RELATIVE_URL_ROOT "/redmine"
# apache2 serves public files
#DocumentRoot /usr/share/redmine/public
#Alias "/redmine/plugin_assets/" /var/cache/redmine/default/plugin_assets/
#Alias "/redmine" /usr/share/redmine/public

#
# And my attempt (CT 20120816)
#
SetEnv RAILS_RELATIVE_URL_ROOT "/redmine"
# apache2 serves public files
DocumentRoot /usr/share/redmine/public
Alias "/plugin_assets/" /var/cache/redmine/default/plugin_assets/
Alias "/" /usr/share/redmine/public

<Directory "/usr/share/redmine/public">
  Order allow,deny
  Allow from all
</Directory>

In my case I wanted Redmine on the web root, so you can see the changes I made.

I then disabled the default config and enabled this:

a2ensite redmine
a2dissite default
a2dissite default-ssl

and restarted Apache

/etc/init.d/apache2 restart

Now you can log in with the default username and password (admin and admin), change them, and start some configuration.

Garbage collecting sessions in PHP

In PHP, sessions are by default stored as files in a directory. Sessions can be explicitly destroyed from within the code, for example when users log out, but frequently they are not. As a result session files tend to hang around, and that raises the problem of how to clean them up. The standard way is to use PHP’s own garbage collection, which is normally enabled out of the box. In this, we set configuration directives that specify the maximum idle lifetime for a session and, essentially, the probability that clean-up runs on any given request.
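For reference, the settings in question are the standard session directives in php.ini (they can also be set per application with ini_set()); the values below are only examples:

; Example values only - tune to your own application
session.gc_maxlifetime = 1440   ; seconds a session may sit idle before it is eligible for clean-up
session.gc_probability = 1      ; together these give a 1 in 100 chance of
session.gc_divisor     = 100    ; garbage collection running on any request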

To make things more interesting, Debian out of the box doesn’t do garbage collection in this way. It has a cron job that regularly erases session files in the default session directory. But if, like me and many others, you put your session files in a different directory for each application – to avoid namespace clashes when two applications run under the same browser from the same server – you have a problem. If you forget Debian’s behaviour, the session files will just accumulate indefinitely. I had forgotten this issue and recently found over a year’s worth of session files in a directory.

Solving this problem optimally is actually quite difficult. I could create a cron job to mirror Debian’s own, but then I’d have to put the maximum lifetime in a cron job somewhere out of the way, difficult for the average sysadmin I’m working with to find and deal with (that is, away from the main configuration of the project). Or I could parse this value out of the main configuration. But this leads to another problem. For some users a 30 minute maximum idle time is acceptable (although in my case, where a suite of applications is really being used as a single gestalt entity, even that can be a problem), but many of my administrator users need huge idle times, since they are used to logging in first thing and periodically working with the application through the day.

In the end I settled on changing our framework to make it easy to pass through garbage collection values. This makes an interface to the configuration really easy, but it doesn’t solve the problem that not all users need long session times, or the problem of long delays before garbage collection runs.

In my last article I talked about a Munin plugin for OPUS, but when you look at its graphs you’ll see these kinds of cliff-edge drops, which are caused by the garbage collection finally kicking in and removing sessions where users have not explicitly logged out. Currently, every ten minutes, OPUS runs through its user database, finds users who are allegedly online but have no active session file, and marks them offline. Then it updates the file with the online user count that Munin reads.

I suspect I will eventually write a more sophisticated script that actually kills sessions depending upon idle time and user class, which would make for a more accurate picture here. Any brighter ideas gratefully accepted.

My first Munin plugin

Munin is a great, really useful project for monitoring all sorts of things on servers over short and long term periods, and it can help identify and even warn of undue server loads. It is also appropriately and poetically named for one of Odin’s ravens (so I suppose I should have written this on a Wednesday).

We’ve been running Munin on one of our production servers at work for quite some time, and it gives us a lot of confidence that, to say the least, the server is running in its comfort zone around the clock. Among other bits and pieces, we run OPUS and the PDSystem on this box, two of our home-grown projects that are available to the students. For some time now I’ve considered writing a plugin for OPUS to show logged-in users, and I finally did this, although the counts are not nearly as reliable as I’d like, for two reasons I’ll probably discuss in another post. Anyway, I arranged for OPUS to drop a simple text file which simply contains counts of online users with the syntax

student: 10
admin: 2

and so on, for each of the categories of users. Then I needed a plugin to deal with this. I decided to write it as a simple shell script, since it’s portable and I’m not much of a Perl fan.

#!/bin/sh

#
# Munin plugin for OPUS showing online users
# Copyright Colin Turner
# GPL V2+
#

# Munin plugins, at their simplest, are run either with "config" or
# no parameters (I plan to add auto configuration later).
case $1 in
  config)
  # In config mode, we spout out details of the graphs we will have
  # I want one graph, with lots of stacked values. The first one is
  # an AREA, and the others are stacked above them. I also (-l 0)
  # make sure the graph shows everything down to zero.
	cat <<'EOM'
graph_title OPUS online users
graph_args -l 0
graph_vlabel online users
graph_info The number of online users on OPUS is shown.
student.label student
student.min 0
student.draw AREA
staff.label academic
staff.min 0
staff.draw STACK
company.label hr staff
company.min 0
company.draw STACK
supervisor.label supervisor
supervisor.min 0
supervisor.draw STACK
admin.label admin
admin.min 0
admin.draw STACK
root.label root
root.min 0
root.draw STACK
application.label application
application.min 0
application.draw STACK
EOM
	exit 0;;
esac

# Now the plugin is being run for data. Bail if the file is unavailable
if [ ! -r /var/lib/opus/online_users ] ; then
     echo Cannot read /var/lib/opus/online_users >&2
     exit 1
fi

# Otherwise, a quick sed converts the default format to what Munin needs
sed -e "s/:/.value/" /var/lib/opus/online_users

The plugin has now been running for several days, and you can see its output here. There are problems with it, but that’s more to do with PHP, Debian and user choice, and I’ll comment on that another time. However, already it gives me a useful feel for a lot of user behaviour.

Writing Munin plugins is easy, and Munin does so much of the hard work of turning your creation into something useful.

Geany and other Development Tools

I’ve tried lots of programming editors and IDEs over the years; obviously in Unix and Linux this is a Holy War, particularly between the advocates of vi and emacs. It is common for both groups to suggest that the other editor is hopelessly over-complex or clumsy. I think there’s some truth in that, because essentially, they both stink.

I tend to be an emacsen user myself, but I just think emacs is slightly less awful than vi. My first action on a new install is usually to use vi to edit my sources.list in Debian, to help me install emacs. Perhaps that’s strange, because I really like sed. So what’s the problem with them? They both share this kind of puritanically awkward interface that works well on a console, but sucks in a GUI. They both use ridiculously arcane sequences of key presses to do anything – and I mean even basic stuff like saving and quitting. Yes, yes, you don’t have to lecture me about old terminals and their limitations; been there, done that, got the t-shirt. I tend to do all my systems maintenance in emacs, but when I’m programming, I’ve started to love the softness of a decent editor that actually makes it plain and simple to edit multiple buffers of source code, even though it’s a pain to use different editors for console and GUI work.

Drupal Login Problems

So, in order to post that rant about PHP and SimpleXML I had to fix a problem that seems to have spontaneously arisen with Drupal (this content management system).

For some reason it wasn’t persisting login information, at least from firefox (sorry – iceweasel here on my Debian system). It’s interesting to note, reading about the bug, that it has been around for literally months and doesn’t seem to have been nailed.

So, anyway, I’ve installed some beta of Drupal, and yes, it now seems to be fixed… If I could only solve the problem that I can’t “uncollapse” parts of the content now.

UPDATE: OK, this seems to be a problem with Firefox version 2, or probably really the CSS file for it. It works with Galeon, or when I tell Firefox to fake being IE.

SimpleXML should be called BloodyAwkwardXML

Another night of coding in PHP, and I’ve officially decided that SimpleXML utterly irritates me.

I’d already discovered, much to my irritation, that it is virtually impossible to handle SimpleXML objects elegantly with the Smarty template engine – but now I discover I can’t even shove them into a PHP session without trouble – when you next visit the site you get stuff like this:

Warning: session_start() [function.session-start]: Node no longer exists

and then more trouble.

As part of a new Web Application Framework I’m working on I wanted to parse XML configuration files one time only, and then cache the results in the session. It looks like I now have to totally redesign my idea :-(. You can see the work in progress at its home page.