Free software and its relationship to Quality

Yesterday I was given an advance preview of a document for comment. It finished with a paragraph warning the audience that, since the document was discussing an open source solution, more in-house expertise would be required to cope with it. I won’t say more about the context because, while this exasperates me, it’s not the fault of the writer, who is simply under the same illusion as many others on this issue.

Suppose I write a product X and make it free software. Suddenly it requires lots of in-house knowledge to deal with it. If, on the other hand, I take the exact same product, make it proprietary and slap a one-thousand-pound price tag on it, now it’s a breeze to install and maintain. <sigh>. I hope that when it’s put this way the fallacy is obvious. Yes, some free software is good quality and some is poor quality, just like every other type of software, but in general I would argue free software is more likely to be of high quality, since it is intrinsically open to peer review and shaped by the gift culture. There is much more reason to care about overall code quality in a free product than in a proprietary one, because the customer can see it! Even if they don’t know much about the code, perceptions of quality can quickly arise from a cursory examination.

The fact that a product is free or open source simply means that you can use in-house expertise to deal with it, not that you have to.

Working in a university, it’s really informative to look at some of the really expensive “gold-standard” software provided at that level. It costs a small fortune, usually with annual fees. Is it slick, beautiful, well documented and easy to set up? Often these products are appallingly difficult to install and maintain, with the companies involved taking hefty consultancy fees for that as well as for the product itself. Many of the free products on the market are much easier to install. Anecdotes, I know, but my experience nonetheless.

Writing this has reminded me of comments that Dirk Eddelbuettel alluded to in his blog, which again make it clear that people have completely the wrong end of the stick about this sort of software, thinking it isn’t appropriate for real-life or mission-critical engineering. On the contrary, it is more appropriate. By the way, the software being discussed in that article (the statistics software R) is slick, more powerful than any proprietary alternative I know of, and has superb documentation and supporting books.

Destroying Hard Drives

Today, the BBC News website reported on a Which? Computing magazine article claiming that physical destruction of a hard drive is essential to protect the data on it, citing the fact that the magazine had retrieved deleted files from second-hand drives.

For most folks in computing, the ability to retrieve deleted files is not surprising, nor is the possible survival of data even after repartitioning and reformatting a disk. Nevertheless, the idea that it is impossible to reliably and easily erase a disk has been contested by The Great Zero Challenge. Quite simply, the challenge to data recovery companies is to retrieve data from a drive after the Unix command dd has been used to overwrite the whole drive, using /dev/zero as the source.
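For anyone curious, the sort of invocation being described would look something like the line below. Treat it as a sketch rather than a recipe: /dev/sdX is just a stand-in for whatever name the target drive has on your system, and the block size is only a sensible example.

# Overwrite every block of the target drive with zeros – this destroys all data on it
dd if=/dev/zero of=/dev/sdX bs=1M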

Given that no one has accepted the challenge, it seems you might not have to reach for the hammer after all.

Crashing Cars, an answer?

Every year I attempt to visit my old PhD supervisor Brian McMaster (old in the sense that my PhD is now a thing of the past; I am making no reference to the man in question!) at Christmas time to have a quick natter and exchange gifts. I was squeezed for time this year since I also had to hire a gown from Queen’s to attend an event for the University of Ulster (long and boring story – getting an award for our work on OPUS). Anyway, just before I left, I asked him about my previous Crashing Cars problem. He was by no means the first PhD I’d asked about it, and I’d even asked a few physicists. I was hoping, of course, that he would immediately say I was being stupid and had missed something obvious, but he found the problem as bothersome as I did.

I continued to mull it over a bit, and even found the problem had some more unsettling properties to do with the masses of the cars, but didn’t make any progress unraveling the mystery. Well, a few days later Brian CC’d me on an email to someone with whom he had clearly been discussing the problem, and it contained a possible solution. Having read it a good few times and thought it over, I find it makes sense, and it doesn’t come as a surprise that Brian was the one who cracked the central nub of the problem. I’m a lot more cheerful about it now, but it goes to show that dark nasties can lurk in surprisingly simple problems. The relevant part of his email reads as follows:

I was brooding some more about the frames of reference thing and maybe beginning to see where the paradox lies. Thing is [perhaps] that the observer in the car is non-accelerated only up to the moment of impact: we can’t use him to assess KE after that moment with the same cavalier abandon that prevailed beforehand. [Especially since he’ll have a headache.] Thus it is not legitimate to say: “from car 1’s POV, KE before = 2mv², KE after = 0 + 0, therefore KE dissipated into crunch = 2mv²”. Which would be very troubling since it would appear to let us distinguish between states of rest/uniform motion by Physics.

What we can say instead is that from the point of view of a non-accelerated observer *travelling initially with car 1*, the KE before = 2mv² and the KE after = ½ · 2m · (−v)² = mv², so the KE going into the impact process = 2mv² − mv² = mv². Which agrees with the observer on the roadside! So maybe the old geezer with the mop of white hair and the century’s most iconic formula was right after all. I’m sufficiently encouraged to copy this email to my tormentor in Jordanstown and see if it allays his apprehensions. Hi Colin!
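To spell the arithmetic out for myself (a sketch, assuming the symmetric set-up the email implies: two cars of equal mass m, one travelling at +v and the other at −v in the road frame, which come to rest together on impact):

\[
\begin{aligned}
\text{Road frame:}\quad & KE_{\text{before}} = \tfrac{1}{2}mv^2 + \tfrac{1}{2}mv^2 = mv^2, \quad KE_{\text{after}} = 0, \quad \Delta KE = mv^2;\\
\text{Inertial frame initially with car 1:}\quad & KE_{\text{before}} = 0 + \tfrac{1}{2}m(2v)^2 = 2mv^2, \quad KE_{\text{after}} = \tfrac{1}{2}(2m)(-v)^2 = mv^2, \quad \Delta KE = mv^2.
\end{aligned}
\]

Both inertial observers agree on the energy that goes into the crunch; the apparent paradox only arises if you let the in-car observer be carried, non-inertially, through the impact itself.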