The new paradigm is that the kernel sets the monitor resolution and X is basically a client application that uses it. This solves a lot of problems for most people, but unfortunately the kernel doesn't really handle the situation where the monitor doesn't respond with a valid EDID. More unfortunately, this actually happens in numerous situations - dodgy monitors and dodgy KVM switches being two obvious ones.
It turns out, however, that there is a workaround. You can give the kernel a (made-up) EDID block that it will pretend came from the monitor. To do this, you have to generate an EDID block - handily explained in the kernel documentation - which requires grabbing the kernel source code and running make in the Documentation/EDID directory. Then put the required file, say 1920x1080.bin, in a new directory /lib/firmware/edid, add the parameter "drm_kms_helper.edid_firmware=edid/1920x1080.bin" to your kernel boot line in GRUB, and away you go.
Well, nearly. Because the monitor literally does not respond, rather than responding with something useless, the kernel doesn't turn that display on - after all, not responding is exactly what the HDMI and DVI ports are doing, because nothing is plugged into them. So you also have to tell the kernel that you really do have a monitor there, by including the parameter "video=VGA-1:e" on the kernel boot line as well.
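Putting those steps together, the whole procedure looks roughly like this (a sketch only - the kernel source location and the GRUB config details vary between distributions):

```shell
# Build the EDID blobs shipped with the kernel source
make -C /usr/src/linux/Documentation/EDID

# Install the one you need where the firmware loader can find it
mkdir -p /lib/firmware/edid
cp /usr/src/linux/Documentation/EDID/1920x1080.bin /lib/firmware/edid/

# Then add both parameters to the kernel boot line, e.g. in /etc/default/grub:
#   GRUB_CMDLINE_LINUX="... drm_kms_helper.edid_firmware=edid/1920x1080.bin video=VGA-1:e"
# and regenerate grub.cfg afterwards.
```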
Once you've done that, you're good to go. Thank you to the people at OSADL for documenting this. Domestic harmony at PaulWay Central is now restored.
The basic process of recording each talk involves recording a video camera, a number of microphones, the video (and possibly audio) of the speaker's laptop, and possibly other video and audio sources. For keynotes we recorded three different cameras plus the speaker's laptop video. In 2013 in the Manning Clark theatres we were able to tie into ANU's own video projection system, which mixed together the audio from the speaker's lapel microphone, the wireless microphone and the lectern microphone, and the video from the speaker's laptop and the document scanner. Llewellyn Hall provided a mixed feed of the audio in the room.
Immediately the problems are: how do you digitise all these things, how do you get them together into one recording system, and how do you produce a final recording of all of these things together? The answer to this at present is DVswitch, a program which takes one or more audio and video feeds and acts as a live mixing console. The sources can be local to the machine or available on other machines on the network, and the DVswitch program itself acts as a source that can then be saved to disk or mixed elsewhere. DVswitch also allows some effects such as picture-in-picture and fades between sources. The aim is for the room editor to start the recording before the start of the talk and cut each recording after the talk finishes so that each file ends up containing an entire talk. It's always better to record too much and cut it out later rather than stop recording just before the applause or questions. The file path gives the room and time and date of recording.
The current system then feeds these final per-room recordings into a system called Veyepar. It uses the programme of the conference to match the time, date and room of each recording with the talk being given in the room at that time. A fairly simple editing system then allows multiple people to 'mark up' the video - choosing which recorded files form part of the talk, and optionally setting the start and/or end times of each segment (so that the video starts at the speaker's introduction, not at the minute of setup beforehand).
When ready, the talk is marked for encoding in Veyepar and a script then runs the necessary programs to assemble the talk title and credits and the files that form the entire video into one single entity and produce the desired output files. These are stored on the main server and uploaded via rsync to mirror.linux.org.au, and are then mirrored or downloaded from there. Veyepar can also email the speakers, tweet the completion of video files, and do other things to announce their existence to the world.
There are a couple of hurdles in this process. Firstly, DVswitch only deals with raw DV files recorded via FireWire. These consume about 13 gigabytes per hour of video, per room - the whole of LCA's raw recorded video for a week comes to about 2.2 terabytes. These are recorded to the hard drive of the master machine in each room; from there they have to be rsync'ed to the main video server before any actual mark-up and processing in Veyepar can begin. It also means that previews must be generated of each raw file before it can be watched normally in Veyepar, a further slow-down to the process of speedily delivering raw video. We tried using a file sink on the main video server that talked to the master laptop's DVswitch program and saved its recordings directly onto the disk in real time, but despite this process having worked perfectly when we tested it in November 2012, during the conference it tended to produce a new file every second or three even when the master laptop was recording single, hour-long files.
Most people these days are wary of "yak shaving" - starting a series of dependent side-tasks that become increasingly irrelevant to solving the main problem. We're also wary of spending a lot of time doing something by hand that can or should be automated. In any large endeavour it is important to strike a balance between these two behaviours - one must work out when to stop work and improve the system as a whole, and when to keep using the system as is because improving it would take too long or risk breaking things irrevocably. I fear in running the AV system at LCA I have tended toward the latter too much - partly because of the desire within the team (and myself) to make sure we got video from the conference at all, and partly because I sometimes prefer a known irritation to the unknown.
The other major hurdle is that Veyepar is not inherently set up for distributed processing. In order to have a second Veyepar machine processing video, one must duplicate the entire Veyepar environment (which is written in Django) and point both at the same database on the main server. Due to a variety of complications, this was not possible without stopping Veyepar and possibly having to rebuild its database from scratch, and the team and I lacked the experience with Veyepar to know how to set it up easily in this configuration. I didn't want to start setting up Veyepar on other machines only to find myself shaving a yak - looking for a piece of glass to mount a piece of 1000-grit wet-and-dry sandpaper on so I could sharpen the razor correctly.
Instead, I wrote a separate system that produced batch files in a 'todo' directory. A script running on each 'slave' encoding machine periodically checked this directory for new scripts; when it found one it would move it to a 'wip' directory, run it, and move it and its dependent file into a 'done' directory when finished. If the processes in the script failed it would be moved into a 'failed' directory and could be resumed manually without having to be regenerated. A separate script (already supplied in Veyepar and modified by me) periodically checked Veyepar for talks that were set to "encode", wrote their encode script and set them to "review". Thus, as each talk was marked up and saved as ready to encode, it would automatically be fed into the pipeline. If a slave saw multiple scripts it would try to execute them all, but would check that each script file existed before trying to execute it in case another encoding machine had got to it first.
That system took me about a week of gradual improvements to refine. It also took giving a talk at the CLUG programming SIG on parallelising work (and the tricks thereof) to realise that instead of each machine trying to allocate work to itself in parallel, it was much more efficient to make each slave script do one thing at a time and then run multiple slave scripts on each encoder to get more parallel processing, thus avoiding the explicit communication of a single work queue per machine. It relies on NFS correctly handling the timing of a file move, so that one slave script cannot execute a script another has already moved into work in progress; but at this granularity of work the window for overlap is very small.
I admit that, really, I was unprepared for just how much could go wrong with the gear during the conference. I had actually prepared; I had used the same system to record a number of CLUG talks in the months leading up to the conference; I'd used the system by myself at home; I'd set it up with others in the team and tested it out for a weekend; I've used similar recording equipment for many years. What I wasn't prepared for was that things that I'd previously tested and found to work perfectly would break in unexpected ways:
But the main lesson to me is that you can only practice setting it up, using it, packing it up and trying again with something different in order to find out all the problems and know how to avoid them. The 2014 team were there in the AV room and they'll know all of what we faced, but they may still find their own unique problems that arise as a result of their location and technology.
There's a lot of interest and effort being put into improving what we have. Tim Ansell has started producing gstswitch, a GStreamer-based program similar to DVswitch which can cope with modern, high-definition, compressed media. There's a lot of interest among the LCA 2014 team and others in producing a better video system, one better suited to distributed processing, distributed storage and cloud computing. I'm hoping to be involved in this process, but my time is already split between many different priorities and I don't have the raw knowledge of the technologies to easily lead or contribute greatly to such a process. All I can do is contribute my knowledge of how this particular LCA worked, and what I would improve.
I had a hiatus in 2012 for various reasons, but this year I've decided to run another similar event. But, as lovely as Yarrangobilly is and as comfortable as the Caves House was to stay in, it's a fair old five-hour drive for people in Sydney, and even Canberrans have to spend the best part of two hours driving to get there. And Peter Miller, who runs the fabulous CodeCon (on which CodeCave was styled) every year, is going to be a lot better off near his health care and preferred hospital. Where to have such an event, then?
One idea that I'd toyed with was the Pittwater YHA: close to Sydney (where many of the attendees of CodeCave and CodeCon come from), still within a reasonable driving distance from Canberra (from where much of the remainder of the attendees hail), and close to Peter's base in Gosford. But there's no road up to it, you literally have to catch the ferry and walk 15 minutes to get there - while this suits the internet-free aesthetic of previous events, for Peter it's probably less practical. I discussed it on Google+ a couple of weeks ago without a firm, obvious answer (Peter is, obviously, reserving his say until he knows what his health will be like, which will probably be somewhere about two to three weeks out I imagine :-) ).
And then Tridge calls me up and says "as it happens, my family has a house up on the Pittwater". To me it sounds brilliant - a house all to ourselves, with several bedrooms, a good kitchen, and best of all on the roads and transport side of the bay; close to local shops, close to public transport, and still within a reasonable drive via ambulance to Gosford Hospital (or, who knows, a helicopter). Tridge was enthusiastic, I was overjoyed, and after a week or so to reify some of my calendar that far out, I picked from Friday 26th July to Sunday 28th July 2013.
Along the way I added a couple of things. For a start, Console_GetoptLong recognises --option=value arguments, as well as -ovalue where 'o' is a single-letter option and doesn't already match a synonym. It also allows combining single-letter options, like tar -tvfz instead of tar -t -v -f -z (provided you've specified that it should do that - it's off by default). It gives you several ways of handling something starting with a dash that isn't a defined synonym - warn, die, ignore, or add it to the unprocessed arguments list.
One recent feature which hopefully will also reduce the amount of boilerplate code is what I call 'ordered unflagged' options. These are parameters that aren't signified by an option but by their position in the argument list. We use commands like this every day - mv and cp are examples. By specifying that '_1' is a synonym for an option, Console_GetoptLong will automatically pick the first remaining argument off the processed list and, if that parameter isn't already set, it will make that first argument the value of that parameter. So you can have a command that takes both '-i input_file' and 'input_file' style arguments, in the one parameter definition.
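To make the 'ordered unflagged' idea concrete, here is a minimal sketch - in Python rather than PHP, and with made-up names, so it is the concept only, not the real Console_GetoptLong API:

```python
def parse_args(argv, spec):
    """Toy illustration of positional-synonym handling.

    spec maps synonyms to parameter names; a key like '_1' names the
    first leftover (unflagged) argument. Only '-o value' flags are
    handled here; unknown flags are silently dropped for brevity.
    """
    params, leftover = {}, []
    it = iter(argv)
    for arg in it:
        if arg.startswith("-"):
            name = spec.get(arg.lstrip("-"))
            if name:
                params[name] = next(it)  # flag consumes the next argument
            continue
        leftover.append(arg)
    # Positional fallbacks: '_1' is a synonym for the first leftover
    # argument, but only if the parameter wasn't already set by a flag.
    for i, arg in enumerate(leftover, start=1):
        name = spec.get("_%d" % i)
        if name and name not in params:
            params[name] = arg
    return params
```

With `{"i": "input_file", "_1": "input_file"}` as the spec, both `-i input_file` and a bare `input_file` end up in the same parameter, and the explicit flag wins if both are given.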
Another way of hopefully reducing the amount of boilerplate is that it can automatically generate your help listing for you. The details are superfluous to this post, but the other convenience here is that your help text and your synonyms for the parameter are all kept in one place, which makes sure that if you add a new option it's fairly obvious how to add help text to it.
As always, I welcome any feedback on this. Patches are even better, of course, but suggestions, bug reports, or critiques are also gladly accepted.
Well, it sort of is. The normal URL doesn't work but Google reveals http://web.aanet.com.au/auric/files2/tv_grab_oztivo. Interestingly, its version number is still at the recognised place - 1.36 - but all other parts of the site seem to be having problems with its database. And since it hasn't been updated since this time in 2010, I think there's a good possibility it may remain unchanged from now on.
A number of years ago I offered to host the script on my home Subversion repository, but got no response. So I've blown the dust off, updated it, added Chris's patch, and it's now up to date at http://tangram.dnsalias.net/repos/tv_grab_oztivo/trunk/tv_grab_oztivo. Please feel free to check that out and send me patches if there are other improvements to make to it.
The cited reason that the Big Six don't sell their own books directly seems to be that they just haven't set up their websites. Bad news for Amazon: that's easy with the budgets the big publishers have - Baen already do sell their own ebooks, for example (without DRM, too). More bad news for Amazon: generating more sales by referrals (the "other readers also bought" stuff) isn't a matter of customers or catalogue, it's just a matter of data. Start selling books and you've got that kind of referral. Each publisher has reams of back catalogue begging to be digitised and sold. They've got the catalogue, they've got the direct access to the readers, they've got the money to set up the web sites, and they've now got the motivation to avoid Amazon and sell direct to the reader. That to me spells disaster for Amazon.
But it also means disaster for us. Because you're going to have multiple different publishers' proprietary e-book readers - each the only one its publisher will bless with their DRM. Each one will have its own little annoyances, peccadilloes and bugs. Some won't let you search. Some won't let you bookmark. Some will make navigation difficult. Some won't remember where you were up to in one book if you open up another. Others might lock up your reader, have back doors into your system, use ugly fonts, be slow, have no 'night' mode, or might invasively scan your device for other free books and move them into their own locked-down storage. And you won't be able to change, because none of your books will work in any reader other than the publisher's own. After all, why would they give another app writer access to their DRM if it means the reader might then go to a different publisher and buy books elsewhere?
We already have this situation. I have to use the Angus & Robertson reader (created by Kobo) for reading some of my eBooks. It doesn't allow me to bookmark places in the text, its library view has one mode (and it's icons, not titles), I can't search for text, and its page view is per chapter (e.g. '24 of 229') not through the entire book. In those ways and more it's inferior to the free FBReader that I read the rest of my books in - mostly from Project Gutenberg - but I have no choice; the only way to get the books from the store is through the app. These are books I paid money for and I'm restricted by what the software company that works for the publishing broker contracted by the retailer wants to implement. This is not a good thing.
What can we, the general public, do about this? Nothing, basically. Write to your government and they'll nod politely, file your name in the "wants to hear more about the arts" mailing list, and not be able to do a thing. Write to a publisher and they'll nod vacantly, file your name in the wastepaper bin, and get back to thinking how they can make more profit. Write to your favourite author and they'll nod politely, wring their hands, say something about how it's out of their control what their editor's manager's manager's manager decides, and be unable to do anything about it. Everyone else is out of the picture.
Occasionally someone suggests that authors could just deal directly with the readers. At this point, everyone else sneers - even fanfic writers look down on self-publishers. And, sadly, they're right - because (as Charlie points out) we do actually need editors, copy-editors and proofers to turn the mass of words an author emits into a really compelling story. (I personally can't imagine Charlie writing bad prose or forgetting a character's name, but I can imagine an editor saying "hey, if you replaced that minor character with this other less minor character in this reference, it'd make the story more interesting", and it's these things that are what we often really enjoy about a story.) I've written fiction, and I've had what I thought was elegantly clear writing shown to be the confusing mess of conflicting ideas and rubbish imagery that it was. Editors are needed in this equation, and by extension publishers, imprints, marketers, cover designers, etc.
Likewise, instead of running your own site, why not get a couple of authors together and share the costs of running a site? Then you get something like Smashwords or any of the other indie book publishers - and then you get common design standards, the requirement to not have a conflicting title with another book on the same site, etc. So either way you're going to end up with publishers. And small publishers tend to get bought up by larger publishers, and so forth; capitalism tends to produce this kind of structure to organisations.
So as far as I can see, it's going to get worse, and then it's going to get even worse than that. I don't think Amazon will win - if nothing else, because they're already looking suspiciously like a monopolist to the US Government (it's just that the publishers and Apple were stupid enough to look like they were being greedier than Amazon). But either way, the people that will control your reading experience have no interest in sharing with anyone else, no interest in giving you free access to the book you've paid to read (and no reason if they can give you a license, call it a book, charge what a book costs, and then screw you later on), and everyone else has no control over what they're going to do with an ebook in the future. If the publisher wants to revoke it, rewrite it, charge you again for it, stop you re-reading it, disallow you reading previous pages, only read it in the publisher's colours of lime green on pink, or whatever, we have absolutely no way of stopping this. The vast majority of people are already happy to shackle themselves to Amazon, to lock themselves into Apple, and tell themselves they're doing just fine.
Sorry to be cynical about this, but I think this is going to be one of those situations where the disruptive technologies just come too little and too late. Even J. K. Rowling putting her books online DRM-free isn't going to change things - most of the commentators I've read just point to this and say "oh well, the rest of us aren't that powerful, we'll just have to co-operate with (Amazon|the publisher I'm already dealing with)". Even the ray of hope that Cory Doctorow offers with his piece on Digital Lysenkoism - that the Humble E-Book Bundle has authors wanting to get their publishers off DRM because there's a new smash-hit to be had with the Humble Bundle phenomenon - is a drop of nectar in the ocean of tears; no publisher's really going to care about the Humble Bundle success if it means facing down the bogey-man of unfettered public copying of ebooks that they themselves have been warning everyone about for the last twenty years.
So publishers are definitely worrying about Amazon's monopsony. But the idea that that will cause them to give up DRM is wishful thinking. They've got too much commitment to preventing people copying their books, they don't have to give up DRM in order to cut Amazon out of the deal, and if DRM then locks readers into a reliance on the publishers it's a three-way win for them. And a total lose for us, but then capitalism has never been about giving the customer what they want.
I argued that, in fact, having to select a gear meant that drivers both new and experienced would occasionally miss a gear change and put the gearbox into neutral by mistake, causing grinding of gears and possible crashes as the car was now out of control. He claimed to have heard of a clever device that would sit over your gearbox and tell you when you weren't in gear, but you couldn't use the car like that all the time because it made the car too slow. So you tested the car with this gearbox-watcher, and once you knew that the car itself wouldn't normally miss a gear you just had to blame the driver if the car blew up, crashed, or had other problems. But he was absolutely consistent in his attitude towards electric motors: you lost any chance to find out that you weren't in the right gear, and therefore the whole invention could be written off as basically misguided.
Now, clever readers will have worked out that at this point my conversation was not real, and was in fact by way of an analogy (from the strain on the examples, for one). The friend was real - Rusty Russell - but instead of electric motors we were discussing the Go programming language and instead of gearboxes we were discussing the state of variables.
In Go, all variables are defined as containing zero unless initialised otherwise. In C, a variable can be declared but undefined - the language standard AFAIK does not specify the state of a variable that is declared but not initialised. From the C perspective, there are several reasons you might not want to automatically pre-initialise a variable when you define it - it's about to be set from some other structure, for example - and pre-initialising it is a waste of time. And being able to detect when a variable has been used without knowing what its state is - using valgrind, for example - means you can detect subtle programming errors that can have hard-to-find consequences when the variable's meaning or initialisation is changed later on. If you can't know whether the programmer is using zero because that's what they really wanted or because it just happened to be the default and they didn't think about it, then how do you know which usage is correct?
From the Go perspective, in my opinion, these arguments are a kludgy way of seeing a bug as a feature. Optimising compilers can easily detect when a variable will be set twice without any intervening examination of state, and simply remove the first initialisation - so the 'waste of time' argument is a non-issue. Likewise, any self-respecting static analysis tool can determine if a variable is tested before it's explicitly defined, and I can think of a couple of heuristics for determining when this usage isn't intended.
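To illustrate the kind of static check I mean, here is a toy use-before-assignment detector for straight-line Python functions (Python rather than C purely for brevity; real tools such as gcc's -Wuninitialized or valgrind do vastly more, including branch and pointer analysis):

```python
import ast
import builtins

class UseBeforeAssign(ast.NodeVisitor):
    """Toy checker: flag names read before any assignment.

    Handles straight-line code only - no if/loop branch analysis,
    which is exactly what the real tools add on top of this idea.
    """
    def __init__(self, params):
        self.assigned = set(params)  # function parameters start defined
        self.problems = []

    def visit_Name(self, node):
        if isinstance(node.ctx, ast.Load) and node.id not in self.assigned \
                and not hasattr(builtins, node.id):
            self.problems.append(node.id)
        elif isinstance(node.ctx, ast.Store):
            self.assigned.add(node.id)

    def visit_Assign(self, node):
        self.visit(node.value)  # reads happen before the write
        for target in node.targets:
            self.visit(target)

def check_function(source):
    """Return names used before assignment in the first function."""
    func = ast.parse(source).body[0]
    checker = UseBeforeAssign(a.arg for a in func.args.args)
    for stmt in func.body:  # statements in source order
        checker.visit(stmt)
    return checker.problems
```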
And one of the most common errors in C is use of undefined variables; this happens to new and experienced programmers alike, and those subtle programming problems happen far more often in real-world code as it evolves over time - it is still rare for people to run valgrind over their code every time before they commit it to the project. It's far more useful to eliminate this entire category of bugs once and for all. As far as I can see, you lose nothing and you gain a lot more security.
To me, the arguments against a default value are a kind of lesser Stockholm Syndrome. C programmers learn from long experience to do things the 'right way', including making sure you initialise your variables explicitly before you use them, because of all the bugs - from brutally obvious to deviously subtle - that are caused by doing things any other way. Tools like valgrind indirectly work around this problem after the fact. People even come to love them - like the people who love being deafened by the sound of growling, blaring petrol engines and associate the feeling of power with that cacophony. They mock those new silent electric motors because they don't have the same warts and the same pain-inducing behaviour as the old petrol engine.
I'm sure C has many good things to recommend it. But I don't think lack of default initialisation is one.
And in my experience, those people often make unrealistic demands on new software, or misuse it - consciously or unconsciously, and with or without learning about it. These people are semi-consciously determined to prove that the new thing is wrong, and everything they do then becomes in some way critical of it. Any success is overlooked as "because I knew what to do", every failure is pounced on as proof that "the thing doesn't work". I've seen this with new hardware, new software, new cars, new clothes, new houses, accommodation, etc. You can see it in the fact that there's almost no correlation between people who complain about wind generator noise and the actual noise levels measured at their property. Human beings all have a natural inclination to believe that they are right and everything else is wrong, and some of us fight past that to be rational and fair.
This is why I didn't get Rusty's post on the topic. It's either completely and brilliantly ironic, or (frankly) misguided. His good reasons are all factual; his 'bad' reasons are all ad-hominem attacks on a person. I'd understand if it was e.g. Microsoft he was criticising - e.g. "I don't trust Microsoft submitting a driver to the kernel; OT1H it's OK code, OTOH it's Microsoft and I don't trust their motives" - because Microsoft has proven so often that their larger motives are anti-competition even if their individual engineers and programmers mean well. But dmesg, PulseAudio, and systemd have all been (IMO) well thought out solutions to clearly defined problems. systemd, for example, succeeds because it uses methods that are simple, already in use and solve the problem naturally. PulseAudio does not pretend to solve the same problems as JACK. I agree that Lennart can be irritating sometimes, but I read an article once by someone clever who pointed out that you don't have to like the person in order to use their code...
So I wrote one.
The result is available from my nascent PHP Subversion library at:
It's released under version 3 of the GPL. It also comes with a simple test framework (written, naturally, in a clearly superior language: Perl).
This is still a work in progress, and there are a number of features I want to add to it - chief amongst them packaging it for use in PEAR. I'm not a PHP hacker, and it still astonishes me that PHP programmers have been content to use the mish-mash of different half-concocted options for command line processing when something clearly better exists - and that many of the PHP programs I have to work with don't use any of those but write their own minimal, failure-prone and ugly command line processing from scratch.
I'd love to hear from people with patches, suggestions or comments. If you want write access to the repository, let me know as well.
The thing that's scared me off is the whole "meeting someone else's standards" thing. So after Rusty's talk at OSDC this year, and finding out that 'ccanlint' can prompt you with what you need to do to make a good package, I decided to give it a go. And after I started having a few minor problems understanding exactly what I needed to do to get it working, I decided to write it down here for other people.
Trying to get a photo that shows what the eye sees of the strip when lit is hard - the camera just thinks it's way too bright. This is the closest I could get with our camera:
With the eye you can see the individual LEDs and they're bright but not so bright as to be difficult to look at. So the strip doesn't make the deck feel too bright or oversaturated. The light is warm without being monochrome or too intense. And the fact that it's a strip means that you don't get shadows or bits of the deck that are dark - the whole deck feels quite evenly lit, even at the corners.
100 watts feels like a lot, but in comparison to even one 18W fluorescent globe per space between beams (12) it is still much more efficient on power. That arrangement of fluorescent bulbs would also mean shadows, single point sources, and having to put an extra beam in the middle of the deck. And let's not even consider spot lights. No, this is a really good layout.
Why? Very simple. Just think of the number of state-level attacks on software and Internet infrastructure in recent years. "Hackers" getting fraudulent SSL certificates issued for *.google.com and other sites. People requesting Mozilla remove CNNIC from the certificate authority list because of the Chinese government's similar faking of SSL certificates. Malware created by the German government for spying on people. British companies selling malware to the Egyptian government. The list goes on.
One can easily imagine any government in the world telling motherboard manufacturers that they need to install the government's own public keys in order to import motherboards into the country. It's obvious in the case of countries like Iran, Syria, and Jordan, and it's no stretch to imagine the US, Australian or any other 'Western' government doing it under the guise of "protecting our citizens". After all, we do want the government to snoop on those evil child molesters, don't we? Or at least, the people the government tells us are child molesters. Or, at least, the people who turn out to have child abuse material on their computers after the government has done their investigation. They wouldn't use those powers to spy on ordinary citizens, right? Right?
Wrong. For state-level actors, it's not about the ordinary citizens. It's about protecting the status quo. It's about protecting their access to information and protecting their powers. The idea that someone can lock government spyware out of their computer has an easy solution - make sure that the computer itself will always install the spyware. And they have the power to go to motherboard manufacturers and get these keys installed. It's a no-brainer for them, really.
I also have no doubt that secure booting to a secure operating system will do little to stop real malware. There's always flaws to be exploited in something as large and kludgy as Microsoft's software. The phenomena Microsoft is allegedly trying to protect against - rootkits that start at boot time - are a relatively small portion of the malware spectrum. And if you're going to let an unsigned binary run - the alternative being to lock all but the large players out of the Windows software market - then malware is already exploiting the user's trust in the system and their lack of knowledge about what is good software and what isn't. "Your PC is already infected" and all that; it's trojan horses all the way down.
I don't think Microsoft is going to care that state-level players can exploit the system they're proposing. It's not like they don't already give the source code to the Chinese government and so forth. But I think the rest of the PC-using world has a right to be very worried about a system that will tell you that it's running signed software without you being able to choose which signatories you trust. And choice is never going to be on the agenda with Microsoft.
Trying to get useful information out of log files that are being continually written is kind of frustrating. The usual Linux method is to tail -f the file and then apply a bunch of grep, cut, sed or awk filters to the pipeline. This is clumsy if you don't know what you're dealing with or looking for yet, and there are a bunch of other limitations to this approach. So my idea is to create an application with these features:
If such a thing even vaguely exists, please email me. Otherwise I'll have to think about learning how to write inotify-based ncurses-driven applications in my copious free time.
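To make the idea concrete, here is a rough sketch of the filtering core such a tool would need - the "give me any new lines matching a pattern since I last looked" step. The function name and interface are my own invention, and a real implementation would call this from an inotify event handler rather than polling in a sleep loop:

```python
import re

def new_matching_lines(path, offset, pattern):
    """Return (matching_lines, new_offset) for log content appended
    to `path` after position `offset`.

    This is essentially what a 'tail -f | grep' pipeline does, but as
    a reusable building block: an interactive ncurses front end could
    change `pattern` on the fly and re-filter from any saved offset.
    """
    matcher = re.compile(pattern)
    with open(path) as fh:
        fh.seek(offset)          # resume where we left off last time
        text = fh.read()         # grab everything appended since then
        new_offset = fh.tell()   # remember where to resume next time
    matches = [line for line in text.splitlines() if matcher.search(line)]
    return matches, new_offset
```

Keeping the offset outside the function means the caller can rewind and re-filter the same stretch of log with a different pattern - one of the things the plain shell pipeline can't do.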
Then I want a piece of software, say on my Android phone, which reads information from the fuel meter and the GPS. It then records how much fuel was used and where the car was at the end of each second. This can be used simply to work out how much fuel is being used, or a kilometres-per-litre or miles-per-gallon figure based on the distance travelled. The software can then show you your average fuel consumption and km/l 'score' per trip.
But what constitutes a trip? Well, the software can work that out fairly easily - the engine consumes fuel constantly while it's on, and people usually start it at the beginning of the trip and turn it off at the end. A fairly simple check of start and end points could then group your trips by their purpose - going to work, going shopping, etc - and report your average and best score for each journey of the same purpose. You could then also compare fuel efficiency when going at different times and using different connecting roads to determine, on average, which paths and times were more efficient uses of your petrol.
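The "group by start and end point" check could be as simple as rounding the first and last GPS fixes of each trip. This is a sketch under my own assumptions - three decimal places of latitude/longitude (roughly 100 metres) is an arbitrary choice for what counts as "the same place":

```python
def trip_purpose(trip):
    """Key a trip by its rounded start and end GPS coordinates.

    trip: a list of (lat, lon) fixes in order.  Trips sharing a key
    start and end in (roughly) the same places, so they can be
    grouped as the same 'purpose' - going to work, going shopping...
    """
    start, end = trip[0], trip[-1]
    # Round to ~100 m so small parking variations map to the same key.
    return tuple(round(coord, 3) for coord in start + end)
```

Feeding each trip's key into a dictionary of score lists then gives the per-purpose averages and bests described above.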
But journeys often start the same way - if you live in a cul-de-sac, for example, you always drive to the end of it to get anywhere. Looking at the recorded paths, the software can break journeys into segments that are common, and you can be scored on your individual performance per segment. This also means that if you drop into the shops on your way to work, this counts as two or more separate segments rather than one. The algorithm could find both short segments - roads you always went along and never deviated from - and long segments that you occasionally deviated from but mostly drove in one go.
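The simplest version of that segment-finding idea is to compare two trips fix-by-fix: everything up to the first point where they diverge is one shared segment. A sketch, again assuming trips are sequences of rounded GPS fixes:

```python
def shared_prefix_length(trip_a, trip_b):
    """Count how many initial fixes two trips share.

    Everything before the first divergence is a common segment (the
    cul-de-sac at the start of every journey, say).  A fuller
    implementation would look for shared stretches anywhere in the
    trips, not just at the start.
    """
    shared = 0
    for a, b in zip(trip_a, trip_b):
        if a != b:
            break
        shared += 1
    return shared
```

Applied pairwise across a history of trips, the fixes that many trips share become the scored segments, and the points where trips diverge become the segment boundaries.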
For many journeys there's more than one way to get there, and after a period of time the software can tell you which route was the most efficient and even, possibly, when to drive it to get the best efficiency. This would have saved a friend of mine, who had to suffer her father going many different ways between two points on a common journey in Brisbane to determine, over time and in varying traffic, what the most efficient way was. Of course, it can tell you what your best time was, and that may be a different route from the most fuel-efficient path.
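Once trips are tagged with a route, picking the winner is just comparing averages. A sketch, where the (route_name, km_per_litre) record format is a stand-in for whatever the app would actually store:

```python
from collections import defaultdict

def best_route(trip_records):
    """Return the route name with the best average km/l.

    trip_records: list of (route_name, km_per_litre) pairs, one per
    past trip over that route.  This does in one line what my
    friend's father did over months of Brisbane driving.
    """
    by_route = defaultdict(list)
    for route, kmpl in trip_records:
        by_route[route].append(kmpl)
    # Higher km/l is better, so take the route with the highest mean.
    return max(by_route, key=lambda r: sum(by_route[r]) / len(by_route[r]))
```

The same grouping with elapsed time instead of km/l gives the fastest route, which, as noted, may well be a different answer.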
And then it can start to challenge you. You want to drive to work? How about doing it using less fuel than your best effort so far? It may even be able to tell you specific segments where you can improve - where your fuel efficiency varies widely, or where it is greater than your average over similar terrain. Once you get something that can actually tell you how to improve your fuel efficiency, I think that'll make a lasting difference to how much money people spend on fuel. Classic positive feedback technique.
Finally, a device which would actually offer to provably improve your fuel efficiency.
Sadly, it joins every other device out there being touted by snake oil salesmen, because - like them - it doesn't exist.
All posts licensed under the CC-BY-NC license. Author Paul Wayper.