Well, as luck would have it I recently bought several LiIon batteries at a good price, and thought I might as well have the working drill with a nice, working battery pack too. And I'd bought a nice Lithium Ion battery balancer/charger, so I can make sure the battery lasts a lot longer than the old one. So I made the new battery fit in the old pack:
First, I opened up the battery pack by undoing the screws in the base of the pack:
There were ten cells inside - NiMH and NiCd are 1.2V per cell, so that makes 12V. The pack contacts were attached to the top cell, which was sitting on its own plinth above the others. The cells were all connected by spot-welded tabs. I really don't care about the cells so I cut the tabs, but I kept the pack contacts as undamaged as possible. The white wires connect to a small temperature sensor, which is presumably used by the battery charger to work out when the battery is charged; the drill doesn't have a central contact there. You could remove it, since we're not going to use it, but there's no need to.
The new battery is going to sit 'forward' out of the case, so I cut a hole for it by marking the outline of the new pack against the side of the old case. I then used a small fretsaw to cut out the sides of the square, cutting through one of the old screw channels in the process.
I use "Tamiya" connectors, which are designed for relatively high DC current and provide good separation between the pins on both connectors. Jaycar sells them as 2-pin miniature Molex connectors; I support buying local. I started with the Tamiya charge cable for my battery charger and plugged the other connector shell into it. Then I could align the positive (red) and negative (black) cables and check the polarity against the charger. I then crimped and soldered the wires for the battery into the connector, so I had the battery connected to the charger. (My battery came with a Deans connector, and the charger didn't have a Deans connector cable, which is why I was putting a new connector on.)
Aside: if you have to change a battery's connector over, cut only one side first. Once that is safely sealed in its connector you can then do the other. Having two bare wires on a 14.8V 3Ah battery capable of 25C (i.e. 75A) is a recipe for either welding something, killing the battery, or both. Be absolutely careful around these things - there is no off switch on them and accidents are expensive.
Then I repeated the same process for the pack contacts, starting by attaching a red wire to the positive contact, since the negative contact already had a black wire attached. The aim here is to make sure that the drill gets the right polarity from the battery, which itself has the right polarity and gender for the charger cable. I then cut two small slots in the top of the pack case to let the connector sit outside the case, with the retaining catch at the top. My first attempt put this underneath, and it was very difficult to undo the battery for recharging once it was plugged in.
The battery then plugs into the pack case, and the wires are just the right length to hold the battery in place.
Then the pack plugs into the drill as normal.
The one thing that had me worried with this conversion was the difference in voltages. Lithium ion cells can range from 3.2V to 4.2V and normally sit around 3.7V. The drill is designed for 12V; with four Lithium Ion cells in series, the new battery sits at a nominal 14.8V and reaches 16.8V when fully charged. Would it damage the drill?
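To put numbers against that worry, the series arithmetic looks like this (a quick Python sketch; the cell voltages are the nominal chemistry figures quoted above, not measurements from this particular pack):

```python
# Rough voltage comparison between the old ten-cell NiMH pack and the
# new 4S Li-ion pack. Cells in series simply sum their voltages.

def pack_voltages(cells, v_min, v_nom, v_max):
    """Return (min, nominal, max) pack voltage for `cells` cells in series."""
    return (cells * v_min, cells * v_nom, cells * v_max)

nimh = pack_voltages(10, 1.0, 1.2, 1.45)   # old pack: ten NiMH cells
liion = pack_voltages(4, 3.2, 3.7, 4.2)    # new pack: four Li-ion cells

print(nimh)    # (10.0, 12.0, 14.5)
print(liion)   # (12.8, 14.8, 16.8)
```

So the new pack's worst case is only about 2.3V above the old pack's fully-charged voltage, which is why I suspected the drill would cope.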
I tested it by connecting the battery to a separate set of thin wires, which I could then touch to the connector on the pack. I touched the battery to the pack, and no smoke escaped. I gingerly started the drill - it has a variable trigger for speed control - and it ran slowly with no smoke or other signs of obvious electric distress. I plugged the battery in and ran the drill - again, no problem. Finally, I put my largest bit in the drill, put a piece of hardwood in the vice, and went for it - the new battery handled it with ease. A cautious approach, perhaps, but it's always better to be safe than sorry.
So the result is that I now have a slightly ugly but much more powerful battery pack for the drill. It's also 3Ah versus the 2Ah of the original pack, so I get more life out of it. And I can swap the batteries over quite easily, and my charger can charge up to four batteries simultaneously, so I have something that will last a long time now.
I'm also writing this article for the ACT Woodcraft Guild, and I know that many of them will not want to buy a sophisticated remote control battery charger. Fortunately, there are many cheap four-cell all-in-one chargers at HobbyKing, such as their own 4S balance charger or an iMAX 35W balance charger for under $10, which do the job well without lots of complicated options. These also run off the same 12V wall wart that runs the old pack charger.
Bringing new life to old devices is quite satisfying.
The other big generalisation is that this works purely on a cost amortisation calculation - i.e. that musicians are trying to cover costs, and therefore the cost of producing physical units and distributing them is the governing factor. Some musicians also look to make a living from their work, and that means setting a time period over which they hope to gain money from selling the product, an amount they expect to live on, a number of units to sell, and so forth - all of which can vary widely and are complicated to fix. (Aside: this is why established artists push for extensions to copyright - because theoretically they're extending the amount of time they gain from selling that product. This is a myth and a fairy story record labels tell them when they want them to support copyright extension.) It used to be that producing the units and distributing them were the major costs - see Courtney Love's calculation, for example - and therefore the label proposed to take that risk for the band (another fairy story); nowadays distribution is free and producing a new unit is cheap (in the case of digital distribution, it's totally free), so the ongoing cost of keeping the musicians alive and producing new music is the major cost for professionally produced music.
But I still think the big points made in the article are true: that the real cost of producing music - even music of reasonable quality - is coming down; that more music than ever is being produced, and hence there's much more competition for listeners' money; and that "hobby" artists who do it in their spare time and don't expect to make money out of their music (I'm one) drive the cost of actually getting music down too. So for professional musicians, who have sort of expected to make money out of music because their heroes of the previous generations did (due, as David points out, to a quirk in history that made the twentieth century great for this kind of oligopoly), it's a rude awakening to find out that people don't care about your twenty years in the industry or your great study of the art form; they care about listening to a catchy tune that's easy to get.
I also like the point that musicians are also inveterate software copiers. It's one reason I use LMMS and free plugins - because free, quality software does exist. I find it intensely hypocritical that professional musicians can criticise people for copying their music, when they may well have not paid a cent for all the proprietary software they use to produce it.
But to me this is really just about getting in touch with your audience. Companies like Magnatune exist to help quality artists find an audience by putting them in touch with an existing large subscriber base who wants new music. Deathmøle's insane success on Kickstarter shows that someone with an established audience can make it really big without having to sell their soul to big record labels. And Jeph himself is a great example of the way things work in the modern world, since Deathmøle is his side project - his main one is Questionable Content, which he also went into without having existing funding or requiring a big backer to grant him some money and take his rights in exchange. As Tim O'Reilly says, obscurity is a far greater threat to authors and creative artists than piracy; it doesn't matter if you're signed to the best record label there is, if they haven't actually publicised your work you might as well have not signed up at all. And, fortunately, these days we have this wonderful thing called the internet which allows artists to be directly in touch with their fans rather than having to hope that the record label will do the right thing by you and not, say, ignore you while promoting another band.
I wish David had made his point without the broad generalisations - I think it stands well without them.
The ever-thoughtful Charlie Stross has written an article about the problems facing the NSA. There's not going to be just one Edward Snowden or Bradley Manning, there are going to be heaps of them - because the Three Letter Acronym security departments are busy getting rid of all the permanent employees who felt loyal to them and replacing them with contractors who have no more loyalty to the agency than the agency has to the contractor.
Now, I personally believe in being loyal to my employer. I (of course) honour the various clauses in my contract that say they get to own all my work for them, and that I won't sell or leak their secrets, and that I won't work for someone else without telling them. I believe in being loyal to the customers I work for and the people I work with. I believe that I am more valuable to an employer the longer I work there because I know the intricacies of the job better and am better at solving problems by recognising them and their underlying causes. These are things that a new employee will always struggle with.
But I believe that the big problem with employers these days is this pernicious idea that their workforce is interchangeable, not to be trusted, and best used by screwing them for as much work as you can get out of them and then throwing them away. It's an "Atlas Shrugged" mindset that believes that somehow the people at the top are being held back by the people at the bottom, and that therefore workers don't deserve any of the benefits of being at the top. It's also fed by the idea that companies poach people - especially "rock star" workers and people high up the ladder - the idea that those people (and their loyalty) can simply be bought for their experience, and that they will (somehow) change their environment just by being in it.
The "glory days" of jobs for life that Charlie talks about in his essay are really the times before the MBA school of management came into being; when people managed companies because they'd worked their way to the top. Those people knew the business intimately, they'd sweated over it for decades, they knew the people - and the employees knew them. There was much more of a feeling of trust in those organisations, because it was about personal relationships more than work relationships or "rightsizing" or "mission statements". Walt Disney was famous for remembering every person in the 700-strong Disney workforce. These days, one gets the impression that the management of some companies consider it a burden to even associate with the people more than a step down the org chart.
At the moment all we're really seeing, IMO, is the 'tit for tat' nature of the Prisoner's Dilemma being played out in corporate workforces. If you want to find the point at which employers started cheating on their workforces, then you have to keep on going back - past the 1980s anti-union laws and workplace deregulation, past the 1880s and the weavers' and miners' unions, past the 1780s and the clearances... in fact, just keep going: it's feudal lords demanding tithes, and high priests demanding donations, and kings demanding tributes. The Greeks famously invented democracy, but even then slaves, women, and other "not our sort" people couldn't actually vote. The process of cheating on the people beneath you for your own gain has a long history - far longer, I would argue, than the history of the workers rebelling and demanding their own rights.
So now the workforce is no longer loyal to their employer, and we see the mistrust and second-guessing that usually accompanies standard Prisoner's Dilemma situations. I think the two are evenly matched - the employer might seem to hold the power (because they write the contract the employee must sign without change) but the employees are many, and their methods of working around the employer's restrictions and exploiting the employer's weaknesses are many and subtle. The employee has much more mobility than the employer, and while there are usually non-competition restrictions in the contract, the number of times I've heard of people subtly, and not so subtly, ignoring these (for example, sales people poaching client lists) makes it difficult for the employer to fight all those battles.
Overall, it's a pity, because I think a situation where employer and employee trust each other and work together is much better than one where each is subtly trying to screw the other. Once you see it as a contest, though, it's all downhill from there. Many organisations try to rebuild trust, but the "team building exercise" is such a cliché for uncaring management that it's boring to repeat it. If you're trying to rebuild trust but not fundamentally changing the management style, and not addressing the needs and issues of the workers, then it's really just an exercise in paying some management consultant to take your money and laugh at you.
I currently work at a company which does have, at least in our Canberra offices, a lot of respect for its workers. It's easy to imagine being paid well for being a "subject matter expert" rather than having to go into management to keep climbing the pay ladder. We have regular functions every fortnight or so where you can speak to just about anyone - not 'town hall' meetings which in my experience are still basically management telling the workers the way it's going to be. And I think that there are many examples of companies that are doing the right things by their workers and seeing a lot of benefits - it's easy to cynically see what Google get out of paying for their sysadmins to have good internet connections, but the sysadmins get a decent deal out of it too, and the trust and understanding that goes with it is not easily bought.
So I do think there's hope. But I think we have to see a profound shift in the employers and their attitudes to staff before that changes. For a start, weed out the psychopaths and bullies in management before you complain about theft of office supplies. Promote people from within rather than always hiring top management from outside. Stop trying to win my trust with company slogans and mission statements, and start actually listening to me when I tell you about the opportunity that I can see right in front of you. Stop treating companies like feudal families, with their fiefdoms and strict hierarchy, and start treating us all like citizens.
Now, I know this is tricky. Once you go smaller than the minimum allocation unit size, you have to do some fairly fancy handling in the file system, and that's not going to be easy unless your file system discards block allocation and goes with byte offsets. The pathological case of inserting one byte at the start of a file is almost certainly going to mean rewriting the entire file on any block-based file system. And I'm sure it offends some people, who would say that the operations we have on files at the moment are just fine and do everything one might efficiently need to do, and that this kind of chopping and changing is up to the application programmer to implement.
That, to me, has always seemed something of a cop-out. But I can see that having file operations that only work on some file systems is a limiting factor - adding specific file system support is usually done after the application works as is, rather than before. So there it sat.
Then a while ago, when I started writing this article, I found myself thinking of another set of operations that could work with the current crop of file systems. I was thinking specifically of the process that rsync has to do when it's updating a target file - it has to copy the existing file into a new, temporary file, add the bits from the source that are different, then remove the old file and substitute the new. In many cases we're simply appending new stuff to the end of the old file. It would be much quicker if rsync could simply copy the appended stuff into a new file, then tell the file system to truncate the old file at a specific byte offset (which would have to be rounded to an allocation unit size) and concatenate the two files in place.
This would be relatively easy for existing file systems to do - once the truncate is done the inodes or extents of the new file are simply copied into the table of the old file, and then the appended file is removed from the directory. It would be relatively quick. It would not take up much more space than the final file would. And there are several obvious uses - rsync, updating some types of archives - where you want to keep the existing file until you really know that it's going to be replaced.
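By way of contrast, here is roughly what rsync (or anything else wanting a safe append-style update) is forced to do today: copy the entire file, append the new tail, then atomically swap. This is a hedged Python sketch of the general pattern, not rsync's actual code, and `append_update` is a made-up name:

```python
import os
import shutil
import tempfile

def append_update(path, new_data):
    """Replace `path` with its old contents plus `new_data`, keeping the
    original file intact until the substitution is complete."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, 'wb') as out, open(path, 'rb') as src:
            shutil.copyfileobj(src, out)   # copy the whole existing file...
            out.write(new_data)            # ...then append the new tail
        os.replace(tmp, path)              # atomic substitution
    except Exception:
        os.unlink(tmp)
        raise

# usage
with open('demo.txt', 'wb') as f:
    f.write(b'hello ')
append_update('demo.txt', b'world')
print(open('demo.txt', 'rb').read())       # b'hello world'
```

The proposed truncate-and-concatenate operation would eliminate the copyfileobj step entirely - and that copy is essentially the whole cost when a few kilobytes are being appended to a multi-gigabyte file.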
And then I thought: what other types of operations are there that could use this kind of technique? Splitting a file into component parts? Removing a block or inserting a block - i.e. the block-wise alternative to my byte offset operations above? All of those would be relatively easy - rewriting the inode or offset map isn't, as I understand it, too difficult. Even limited to operations that are easy to implement in the file system, there are considerably more operations possible than those we currently have to work with.
I have no idea how to start this. I suspect it's a kind of 'chicken and egg' problem - no-one implements new operations in file systems because there are no clients needing them, and no clients use these operations because the file systems don't provide them. Worse, I suspect that there are probably several systems that do weird and wonderful tricks of their own - like allocating a large chunk of file as a contiguous extent of disk and then running their own block allocator on top of it.
Yes, it's not POSIX compliant. But it could easily be a new standard - something better.
In your article for the Sydney Morning Herald on the 31st of July 2013, you say fair use is "theft" in all but name.
And on your blog you have mentioned Bruce Sterling's piece "The Ecuadorian Library". In fact, you've quoted directly from it.
So, is that fair use? Or are you going to hand yourself in for copyright theft now?
Now, you theoretically don't make any money from quoting Bruce, so maybe you think that because it's not a commercial use that therefore you're not "stealing". But I don't think you can have it both ways.
If we follow your argument - that any use that has some kind of commercial gain is, in fact, theft - then it simply becomes a question of what "commercial gain" is. And that's where lawyers come in.
Because you've obviously gained from referencing a quotation from Shakespeare in your article title. You've probably gained by mentioning songs or stories in your books - also copyrighted. And where does that end? Should you be paying the people who wrote the thesaurus every time you look up a synonym? Should you be paying the authors whose work you cribbed on the Russo-Japanese war? Should you be paying Bruce Sterling a proportion of your royalties, as he's clearly influenced your thinking?
You're also presenting a slippery slope that cannot help anyone. An academic quotes your book? Clearly they must pay! Someone satirises it? Clearly they must pay! A student quotes from it? Well, clearly they must pay in proportion to how much they quoted - after all, some people might read your book and not use a thing from it, and others quote entire sections! Someone mentions it on a radio show? They should pay for the privilege! Someone sells your book second-hand? Well, obviously you should get a cut too!
You're also a successful author, having published eight books and translated more. So it's kind of convenient for you to say, now, that you should be paid more for all that work. It doesn't help the new author, struggling to make a living and trying to read and learn from everything they can.
And, let's face it, the spectre of some dread international conglomerate ripping off your work and not giving you any money for it is kind of the wrong way around, isn't it? After all, you've basically been published by them - big printing companies who control distribution, decide who is going to be released where and when, and decide the royalties they will offer you and how they'll pay. They don't need to steal other people's work, they've got authors begging to be published sending them manuscripts all the time. Pretending that you're threatened by hungry companies desperate to rip your work off, and ignoring the one that's already only paying you trivial amounts compared to their own salaries and bonuses, is not a very good distraction.
I have nothing against you personally. I only think that your logic - defending a system that offers a pittance to the people who actually write the words we read, demanding that no-one use your work without paying for it, while at the same time using other people's work without paying for it - seems mixed up.
So imagine I'm going to try to use a particular technology, or I'm going to patent a new invention. As part of my due diligence, I have to provide a certified document that shows what search terms I used to search for patents, and why any patents I found were inapplicable to my use. Then, when a patent troll comes along and says "you're using our patent", my defence is, "Sorry, but your patent did not appear relevant in our searches (documentation attached)."
If my searches are considered reasonable by the court, then I've proved I've done due diligence and the patent troll's patent is unreasonably hard to find. OTOH, if my searches were unreasonable I've shown that I have deliberately looked for the wrong thing in the hopes that I can get away with patent infringement, so damages would increase. If I have no filing of what searches I did, then I've walked into the field ignorant and the question then turns on whether I can be shown to have infringed the patent or whether it's not applicable, but I can be judged as not taking the patent system seriously.
The patent applicant should be the one responsible for writing the patent in the clearest, most useful language possible. If not, why not use Chinese? Arpy-Darpy? Gangster Jive? Why not make up terms: "we define a 'fnibjaw' to be a sequence of bits at least eight bits long and in multiples of eight bits"? Why not define operations in big-endian notation where the actual use is in little-endian notation, so that your constants are expressed differently and your mathematical operations look nothing like the actual ones performed but your patent is still relevant? The language of patents is already obscure enough, and even if you did want to actually use a patent it is already hard enough with some patents to translate their language into the standard terms of art. Patent trolls rely on their patents being deliberately obscure so that lawyers and judges have to interpret them, rather than technical experts.
The other thing this does is to promote actual patent searches and potential usage. If, as patent proponents say, the patent system is there to promote actual use and license of patents before a product is implemented, then they should welcome something that encourages users to search and potentially license existing patents. The current system encourages people to actively ignore the patent system, because unknowing infringement is seen as much less of an offence than knowing infringement - and therefore any evidence of actually searching the patent system is seen as proof of knowing infringement. Designing a system so that people don't use it doesn't say a lot about the system...
This could be phased in - make it apply to all new patents, and give a grace period where searches are encouraged but not required to be filed. Make it also apply so that any existing patent that is used in a patent suit can be queried by the defendant as "too obscure" or "not using the terms of art", and require the patent owner to rewrite it to the satisfaction of the court. That way a gradual clean-up of the current mess of incomprehensible patents that have been deliberately obfuscated can occur.
If the people who say patents are a necessary and useful thing are really serious in their intent, then they should welcome any effort to make more people actually use the patent system rather than try to avoid it.
Personally I'm against patents. Every justification of patents appeals to the myth of the "home inventor", but home inventors are clearly not the beneficiaries of the current system as it stands. The truth is that, far from it being necessary to encourage people to invent, you can't stop people inventing! They'll do it regardless of whether they're sitting on billion-dollar ideas or just a better left-handed cheese grater. They're inventing and improving and thinking of new ideas all the time. And there are plenty of examples of patents not stopping infringement, and plenty of examples of companies with lots of money just steamrollering the "home inventor" regardless of the validity of their patents. Most of the "poster children" for the "home inventor" myth are now running patent troll companies. Nothing in the patent system is necessary for people to invent, and its actual objectives do not match the current reality.
I love watching companies like Microsoft and Apple get hit with patent lawsuits, especially by patent trolls, because they have to sit there with a stupid grin on their face and still admit that the system that is screwing billions of dollars in damages out of them is the one they also support because of their belief that patents actually have value.
So introducing some actual utility into the patent system should be a good thing, yeah?
The new paradigm now is that the kernel sets the monitor resolution and X is basically a client application to use it. This solves a lot of problems for most people, but unfortunately the kernel doesn't really handle the situation when the monitor doesn't actually respond with a valid EDID. More unfortunately, this actually happens in numerous situations - dodgy monitors and dodgy KVM switches being two obvious ones.
It turns out, however, that there is a workaround. You can tell the kernel that you have a (made-up) EDID block to load, which it will then pretend came from the monitor. To do this, you have to generate an EDID block - handily explained in the kernel documentation - which requires grabbing the kernel source code and running make in the Documentation/EDID directory. Then put the required file, say 1920x1080.bin, in a new directory /lib/firmware/edid, and add the parameter "drm_kms_helper.edid_firmware=edid/1920x1080.bin" to your kernel boot line in GRUB, and away you go.
Well, nearly. Because the monitor literally does not respond, rather than responding with something useless, the kernel doesn't turn that display on (because, after all, not responding is what the HDMI and DVI ports are doing too, since nothing is plugged into them). So you also have to tell the kernel that you really do have a monitor there, by including the parameter "video=VGA-1:e" on the kernel boot line as well.
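For reference, the whole procedure boils down to something like this (a sketch only - the kernel source location, the chosen resolution and the update-grub step are assumptions that vary by distribution):

```shell
# Build the EDID binaries shipped with the kernel documentation.
cd linux-source/Documentation/EDID
make                                   # produces 1920x1080.bin among others

# Install the block where the kernel's firmware loader can find it.
sudo mkdir -p /lib/firmware/edid
sudo cp 1920x1080.bin /lib/firmware/edid/

# Add both parameters to the kernel boot line, e.g. in /etc/default/grub:
#   GRUB_CMDLINE_LINUX_DEFAULT="... drm_kms_helper.edid_firmware=edid/1920x1080.bin video=VGA-1:e"
sudo update-grub                       # grub2-mkconfig on some distributions
```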
Once you've done that, you're good to go. Thank you to the people at OSADL for documenting this. Domestic harmony at PaulWay Central is now restored.
Don't give us that claptrap about "this is what women want". Don't give us some excuse about what sells or what your surveys have said. This is so obviously a sexist, demeaning bunch of claptrap that it's insulting to look at. It's shallow, it's boring, and it's painfully one-sided in its portrayal of women. No women scientists, leaders, or workers; no current politics, economics or public interest; nothing, in short, in common with the other readers of your paper.
Please grow a backbone, get rid of your demeaning sexist view of women, and start writing real content. Your women readers will thank you for it.
This post has also been sent to the Daily Mail Online editor.
The basic process of recording each talk involves recording a video camera, a number of microphones, the video (and possibly audio) of the speaker's laptop, and possibly other video and audio sources. For keynotes we recorded three different cameras plus the speaker's laptop video. In 2013 in the Manning Clark theatres we were able to tie into ANU's own video projection system, which mixed together the audio from the speaker's lapel microphone, the wireless microphone and the lectern microphone, and the video from the speaker's laptop and the document scanner. Llewellyn Hall provided a mixed feed of the audio in the room.
Immediately the problems are: how do you digitise all these things, how do you get them together into one recording system, and how do you produce a final recording of all of these things together? The answer to this at present is DVswitch, a program which takes one or more audio and video feeds and acts as a live mixing console. The sources can be local to the machine or available on other machines on the network, and the DVswitch program itself acts as a source that can then be saved to disk or mixed elsewhere. DVswitch also allows some effects such as picture-in-picture and fades between sources. The aim is for the room editor to start the recording before the start of the talk and cut each recording after the talk finishes so that each file ends up containing an entire talk. It's always better to record too much and cut it out later rather than stop recording just before the applause or questions. The file path gives the room and time and date of recording.
The current system then feeds these final per-room recordings into a system called Veyepar. It uses the programme of the conference to match the time, date and room of each recording with the talk being given in the room at that time. A fairly simple editing system then allows multiple people to 'mark up' the video - choosing which recorded files form part of the talk, and optionally setting the start and/or end times of each segment (so that the video starts at the speaker's introduction, not at the minute of setup beforehand).
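The matching step itself is conceptually simple - pair each recording's room and start time with the talk scheduled in that room at that time. A sketch of the idea in Python (the data structures and talk titles here are illustrative, not Veyepar's actual schema):

```python
# Match a recorded file (room, timestamp) against the conference programme.
from datetime import datetime

schedule = [
    # (room, start, end, title) - hypothetical example entries
    ('Room A', datetime(2013, 1, 30, 10, 0), datetime(2013, 1, 30, 10, 45),
     'Example talk A'),
    ('Room B', datetime(2013, 1, 30, 10, 0), datetime(2013, 1, 30, 10, 45),
     'Example talk B'),
]

def talk_for(room, recorded_at):
    """Return the title of the talk running in `room` at `recorded_at`."""
    for r, start, end, title in schedule:
        if r == room and start <= recorded_at < end:
            return title
    return None    # no scheduled talk: probably setup or hallway footage

print(talk_for('Room A', datetime(2013, 1, 30, 10, 5)))   # Example talk A
```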
When ready, the talk is marked for encoding in Veyepar and a script then runs the necessary programs to assemble the talk title and credits and the files that form the entire video into one single entity and produce the desired output files. These are stored on the main server and uploaded via rsync to mirror.linux.org.au, and are then mirrored or downloaded from there. Veyepar can also email the speakers, tweet the completion of video files, and do other things to announce their existence to the world.
There are a couple of hurdles in this process. Firstly, DVswitch only deals with raw DV files recorded via Firewire. These consume about thirteen gigabytes per hour of video, per room - the whole of LCA's raw recorded video for a week comes to about 2.2 terabytes. These are recorded to the hard drive of the master machine in each room; from there they have to be rsync'ed to the main video server before any actual mark-up and processing in Veyepar can begin. It also means that previews must be generated of each raw file before it can be watched normally in Veyepar, a further slow-down to the process of speedily delivering raw video. We tried using a file sink on the main video server that talked to the master laptop's DVswitch program and saved its recordings directly onto the disk in real time, but despite having tested this process in November 2012 and found it working perfectly, during the conference it tended to produce a new file every second or three even when the master laptop was recording single, hour-long files.
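The storage figures fall straight out of the DV format: DV carries a constant 25 Mbit/s of video, which with audio and framing comes to roughly 3.6 MB/s on disk (an approximation from the format's specification, not a measurement of our files):

```python
# Back-of-the-envelope check on raw DV storage requirements.
DV_BYTES_PER_SEC = 3.6e6    # ~25 Mbit/s video plus audio and framing

gb_per_hour = DV_BYTES_PER_SEC * 3600 / 1e9
print(round(gb_per_hour, 1))    # ~13.0 GB per hour, per room

# ...which makes the ~2.2 TB weekly total about 170 hours of recordings:
print(round(2.2e12 / (DV_BYTES_PER_SEC * 3600)))
```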
Most people these days are wary of "yak shaving" - starting a series of dependent side-tasks that become increasingly irrelevant to solving the main problem. We're also wary of spending a lot of time doing something by hand that can or should be automated. In any large endeavour it is important to strike a balance between these two behaviours - one must work out when to stop work and improve the system as a whole, and when to keep using the system as is because improving it would take too long or risk breaking things irrevocably. I fear in running the AV system at LCA I have tended toward the latter too much - partly because of the desire within the team (and myself) to make sure we got video from the conference at all, and partly because I sometimes prefer a known irritation to the unknown.
The other major hurdle is that Veyepar is not inherently set up for distributed processing. In order to have a second Veyepar machine processing video, one must duplicate the entire Veyepar environment (which is written in Django) and point both at the same database on the main server. Due to a variety of complications, this was not possible without stopping Veyepar and possibly having to rebuild its database from scratch, and I and the team lacked the experience with Veyepar to know how to easily set it up in this configuration. I didn't want to start setting up Veyepar on other machines only to find myself shaving a yak, looking for a piece of glass to mount a piece of 1000-grit wet and dry sandpaper on to sharpen the razor correctly.
Instead, I wrote a separate system that produced batch files in a 'todo' directory. A script running on each 'slave' encoding machine periodically checked this directory for new scripts; when it found one it would move it to a 'wip' directory, run it, and move it and its dependent file into a 'done' directory when finished. If the processes in the script failed it would be moved into a 'failed' directory and could be resumed manually without having to be regenerated. A separate script (already supplied in Veyepar and modified by me) periodically checked Veyepar for talks that were set to "encode", wrote their encode script and set them to "review". Thus, as each talk was marked up and saved as ready to encode, it would automatically be fed into the pipeline. If a slave saw multiple scripts it would try to execute them all, but would check that each script file existed before trying to execute it in case another encoding machine had got to it first.
That system took me about a week of gradual improvements to refine. It also took giving a talk at the CLUG programming SIG on parallelising work (and the tricks thereof) to realise that instead of each machine trying to allocate work to itself in parallel, it was much more efficient to make each slave script do one thing at a time and then run multiple slave scripts on each encoder to get more parallel processing, thus avoiding the explicit communication of a single work queue per machine. It relies on NFS correctly handling the timing of a file move, so that one slave script cannot execute a script another has already moved into work in progress, but at this granularity of work the window of overlap is very small.
I admit that, really, I was unprepared for just how much could go wrong with the gear during the conference. I had actually prepared; I had used the same system to record a number of CLUG talks in the months leading up to the conference; I'd used the system by myself at home; I'd set it up with others in the team and tested it out for a weekend; I've used similar recording equipment for many years. What I wasn't prepared for was that things that I'd previously tested and found to work perfectly would break in unexpected ways:
But the main lesson to me is that you can only practise setting it up, using it, packing it up and trying again with something different in order to find out all the problems and know how to avoid them. The 2014 team were there in the AV room and they'll know all of what we faced, but they may still find their own unique problems that arise as a result of their location and technology.
There's a lot of interest and effort being put in to improve what we have. Tim Ansell has started producing gstswitch, a GStreamer-based program similar to DVswitch which can cope with modern, high-definition, compressed media. There's a lot of interest in the LCA 2014 team and in other people to produce a video system that is better suited to distributed processing, distributed storage and cloud computing. I'm hoping to be involved in this process, but my time is already split between many different priorities and I don't have the raw knowledge of the technologies to easily lead such a process or contribute greatly to it. All I can do is contribute my knowledge of how this particular LCA worked, and what I would improve.
I had a hiatus in 2012 for various reasons, but this year I've decided to run another similar event. But, as lovely as Yarrangobilly is and as comfortable as the Caves House was to stay in, it's a fair old five hour drive for people in Sydney, and even Canberrans have to spend the best part of two hours driving to get there. And Peter Miller, who runs the fabulous CodeCon (on which CodeCave was styled) every year, is going to be a lot better off near his health care and preferred hospital. Where to have such an event, then?
One idea that I'd toyed with was the Pittwater YHA: close to Sydney (where many of the attendees of CodeCave and CodeCon come from), still within a reasonable driving distance from Canberra (from where much of the remainder of the attendees hail), and close to Peter's base in Gosford. But there's no road up to it, you literally have to catch the ferry and walk 15 minutes to get there - while this suits the internet-free aesthetic of previous events, for Peter it's probably less practical. I discussed it on Google+ a couple of weeks ago without a firm, obvious answer (Peter is, obviously, reserving his say until he knows what his health will be like, which will probably be somewhere about two to three weeks out I imagine :-) ).
And then Tridge calls me up and says "as it happens, my family has a house up on the Pittwater". To me it sounds brilliant - a house all to ourselves, with several bedrooms, a good kitchen, and best of all on the roads and transport side of the bay; close to local shops, close to public transport, and still within a reasonable drive via ambulance to Gosford Hospital (or, who knows, a helicopter). Tridge was enthusiastic, I was overjoyed, and after a week or so to reify some of my calendar that far out, I picked from Friday 26th July to Sunday 28th July 2013.
Tom Morris recently observed that it comes down to privilege: the people who don't have to worry about being taken seriously and don't get sexually harassed at conferences don't know what all the fuss is about. They don't see the lovely invisible glow that surrounds them, coming mainly from their background - they're white males from the middle and upper classes. Tom points out that they - we - like to tell ourselves that really we had it tough, and really we're here because of our hard hacker cred, but actually we only got that because we got the computers, and that's more to do with being white and male and having parents who could afford computers and going to schools that had computers. Let's face it, if your elder brother kicks you off the computer every chance he gets, you're not going to get much of a chance to use one no matter who you are.
I think you can see this, also, in the variants of the Four Yorkshiremen Sketch that one almost inevitably hears when a group of geeks get together. A sample dialogue goes something like:
One thing that I recently learned - in perhaps a bit more blunt way than I really wanted - is that sometimes even when you can see a solution to a problem, it still won't actually get solved. In the FOSS community we have a tendency to try to solve every problem: it's almost inevitable that given a group of hackers and a suboptimal situation - trying to work out the cost per person at a restaurant, or waiting a long time for a change of lights at an intersection, or seating people at a theatre - a "friendly" discussion will ensue on how to "solve" this "problem". Any slight problem - from not getting a T-shirt that fits correctly for one's body type to not being able to watch a video when one wants - becomes something that must be solved. And when that solution is not enacted by those with the power to do so, it is seen as some kind of malicious assault on not just oneself but the whole principle of efficiency and reason, Hanlon's Razor notwithstanding.
There is one fundamental problem with this view: it is utterly wrong.
It is another day's labour to talk about the problems that this behaviour causes. To relate it to the problems of fairness and equality, it is, I believe, a mistake to see these as problems one can "solve" in the same sense that one solves a problem with software by submitting a bug report, a patch, or working with the maintainers. And I'm not talking about solving social problems with technical solutions (although some have proposed them).
Put simply, the problems we have with a lack of fairness and equality, particularly in gender, are only solved by a long, hard, tedious process of gradually educating people, by trying to right individual wrongs over and over again, of continually trying to make people aware of the problem they are so determined to ignore. There's no magic fix. This, or any other blog post, will not make everything work. No cunning argument or cogent example or impeccable logic will convert everyone. It's a long, boring, degrading process - but the alternative is to see equality and fairness eroded away over time.
And, worse, there are people who will never concede that there is a problem, who are misogynist bastards, who will always assert that they're being perfectly reasonable even when being completely sexist. There are people who we cannot change, and who expect that we must change. And we have to accept and allow those people to be a part of our community. We can, as Matthew Garrett has, choose who we personally want to associate with, but in my view that makes us a little less tolerant and a little more like the people we hate in the process.
So we must continue to support women - to support all the groups that are ill-treated or neglected by the communities in which we play. We must keep on patiently reasoning with people who object to whatever encroaches on their sense of entitlement. We must keep writing the anti-harassment policies, and keep on enforcing them. We must persevere to make the world a better place.
I'd also add that we need to remember that the opinions that a person may have do not summarise them completely. As Rusty says, just because you're a great coder doesn't mean you're not a crackpot. Likewise, just because someone is a crackpot - or expresses views we disagree with - doesn't mean they don't write good code. (And sometimes someone we agree wholeheartedly with at a deep philosophical level also writes crap code, but that's another story). We don't even necessarily have to agree with all the other people who are similarly disposed to want more equality and fairness. We all play our own parts, in whatever ways we can and for whatever causes we believe in.
These are tough problems, and there aren't easy solutions; but we can't let that lack of easy solutions put us off trying to make it better.
Let the five numbers be a, b, c, d, e, in ascending order. For there to be a mode that is not the median, two numbers have to be the same and every other number is different - those two numbers have to be a and b or d and e. Let's consider the case where they're d and e - the other case is symmetric. For the mean to equal the mode:
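The equation that followed appears to have been lost; a reconstruction of the algebra (my derivation, not necessarily the original's) goes like this:

```latex
% Case d = e (so the mode is d), with a < b < c < d:
\begin{align*}
\text{mean} = \text{mode}
  &\iff \frac{a + b + c + d + e}{5} = d \\
  &\iff a + b + c + 2d = 5d \\
  &\iff a + b + c = 3d.
\end{align*}
```

But a, b and c are each strictly less than d, so a + b + c < 3d and the mean is strictly below the mode. By the symmetric argument, in the a = b case the mean is strictly above the mode. Either way, when the mode is not the median, the mean can never equal the mode.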
Hopefully the next person who gets given this rather bizarre question will find this and get the answer without straining their brain coming up with cases. It is, of course, quite possible that the question had been garbled in between the teacher and me - it is, of course, trivial to think of a five number series where the median is less than the mean which in turn is less than the mode. Ah well, that's that off my brain now... :-)
Now, we know that Apple works very hard to maintain that emotion-steeped, intellect-free connection to their fanboys - even their programming howto videos come across more as marketing hype than real useful information. The amusing thing is that even there, in my opinion, they still outshine Linux zealots for pure fact-free, judgemental thinking. Linux zealots are much worse than Apple fanboys for telling everyone to convert to free open source software whenever someone complains about any other product, though, so that's kind of evened up. To go a step back from the great T-shirt slogan "No I Will Not Fix Your Computer", we need to stop trying to fix everyone else's problems, or assuming that we have to (or even can).
The really funny thing to me, in this competition of eagerness, is how Microsoft has really given up. The "Mac Vs PC" ads did wonders for that emotional image-based buy-in for Apple, but I wasn't really expecting Microsoft to embrace the image too. They have, though - Microsoft seems to be making no effort to be anything but conventional, slightly stuffy, older and prone to clumsiness. Worse, they've inspired the GNOME 3 developers: Microsoft started "reinventing" the Windows interface and throwing in pointless, ugly, hard to use changes to its Office suite about eighteen months before the GNOME developers started telling everyone that making things more difficult was the way of the future, as far as I can see.
Microsoft is also engaging in exactly the same tactics it used twenty years ago that got it in trouble with the US government. It's paying Intel and AMD a lot of money to create "Windows-Only Processors", on the amazingly naive notion that somehow the rest of the world a) can't read machine code, b) can't reverse engineer, and c) gives a toss, given that those processors are slower, more power hungry and less innovative than ARM processors these days. It's been waging this war on other operating systems via UEFI, presumably thinking that at some point the Linux community will just give up, rather than doing what it's done for the last 20 years and working a way around the problem. It keeps utterly failing to get any real traction with its phones and tablets. It's only now started to try to market a costly product that vaguely duplicates what you get for free with Google Docs.
Personally, I think this is due to Bill Gates leaving. I think he knew that Microsoft was heading toward a brick wall and it was just too big, stupid and uncoordinated to think to take its foot off the accelerator pedal. They've bled money in court cases, in DRM systems that no-one's wanted, in aborted projects (e.g. Pink) and just in sheer lack of anything new. Even that famed vendor lock-in gradually erodes - look at how abysmally Vista did in the business world, even if you disregard the various organisations and government departments that are going with Linux on the desktop. And without someone with the fame, or even the charisma, of Gates, they're just hand-waving and hoping that someone cares about them.
Ultimately, I believe that free software won't "win" any more than Apple could "win" the phone market. It'll be part of the ecosystem. As more and more people learn of the advantages of using free, open source software, I think it will become more popular - really, its problem in reaching a wider audience has been obscurity rather than active oppression. And I think there's still the emotional attachment to free, open source software, but it's the same emotional attachment one has to science - it's cool and majestic but also based on principles we know and can see. The more Apple and Microsoft try to eliminate their competition, the more they lose the respect of their fans.
The absolute last thing they should do, in my opinion, is offer any form of rebate or cash back on buying an electric vehicle. We've seen this time and time again: offer a rebate on LPG fitting for cars and, mirabile dictu, suddenly the cost of fitting LPG to cars goes up by almost exactly the same amount. Offer a $4000 bonus for first home owners, and the entire housing market jumps up by $4000 (hurting just about everyone else even worse). In my opinion this is a classic tactic suggested by the industry in question when it wants to make it sound like it's working with the Government to do something to help, but make sure that it gets a lot more money in the process. It's not a bad policy for the Government, since it gets a cut of their business taxes anyway.
The second last thing is to make other 'cash back' or discount gestures to electric car buyers that aren't going to be permanent. The wailing and gnashing of teeth when the Government cut the Solar Panel Rebate was heard throughout the land - it bootstrapped the industry, yes, and that was a good thing, but when the rebate is dropped it then makes the Government look uncaring for the people it was only recently helping. If it's something I pay yearly, like registration, I don't want to find out that it's suddenly gone up because I was one of the first to do something that other people finally joined in on.
It's also trivial in comparison to the cost of the whole vehicle, especially when looked at in total. The Government putting $10,000 into paying all road-worthy electric vehicles' registrations doesn't, one has to admit, have much sound-bite potential. And when the vehicle is $50,000, a saving of $500 is but 1% - you save more than that in choosing to not get the luxury leather seats. And for people like me building a vehicle it's at the wrong end of the process - I've already committed over $12,000 to the bike now, I'm not going to hold off registering it because I can't afford the rego.
What's left? Really, as far as I can see, there are two major remaining options left to get more people to buy electric vehicles. One is to actually mandate their cost, so that they actually are cheaper. The other is to massively subsidise a new electric car industry in Australia to compete with the existing manufacturers - their price can be lower because their costs are subsidised by the Government.
Both of those, as far as I can see, aren't going to happen. The first would have every petrol car company screaming blue murder about price fixing and uncompetitive practices. And the second would ... yeah, have about the same effect. And take much longer. In the plus column, building a new industry producing cars that we know there will be a big demand for in the future is what Tesla did five years ago; with car manufacturing plants closing across the country, getting them going again with electric cars would be a big boost to employment and the manufacturing sector. But not even a Labor government is going to suggest that we do this; it's just too much like British Leyland.
Electric vehicles still suffer from an image problem, despite the in-roads that the Tesla Roadster has made. New cars like the Renault Fluence, the Holden Volt and the Holden Commodore conversions are looking more like standard cars, and have standard abilities such as towing a trailer. But these are still relatively expensive; fortunately, there's a way the price can come down. Meanwhile, with the Leaf and the iMiev looking like bubbles of plastic and the Twizy looking like the designer was from a magical land where it never rained and never got below 20°C or above 30°C, we've got a way to go yet before people can accept that electric cars are ordinary, working cars.
At the EV group meeting we had a speaker from Better Place. Unfortunately I missed his main presentation but the question and answer session was fairly lively. One of the things Better Place is putting forward is switching batteries rather than recharging in the car. The Fluence and the Commodore conversion will support this; Better Place is obviously working with other manufacturers to get them to use the technology.
The two big questions with that are: is there going to be competition to Better Place, and is there going to be a standard for removable car batteries? Some kind of competition is good, so that Better Place don't get a monopoly on the technology and then limit access. And that competition needs a set of standards on how batteries are designed, manufactured and instrumented, so that we can rely on being able to plug in a battery and have it work and not lie about its charge state.
My question to the Better Place representative, that followed on from those two principles, was: hobbyists want to get in on this technology too. We know it's easier for you to deal with major manufacturers, but if you lock out the very people that have been leading the way, you'll alienate a group of enthusiastic potential customers. This happens all the time, so it's not going to stop us building electric vehicles, but it's disheartening when you can see the prize in front of you but you're barred from taking it.
The strategy that Better Place is taking is that the car is cheap but you pay to change the batteries over. This has the feel of the "razor and blade" problem, but it is a reasonable way to lower the price of the vehicle. But even when we lower the price down to comparable to a current petrol car, EVs are still going to have lower range for the next five or so years while lithium battery technology ramps up. In that time, there's really not much the Government can do to get more people to buy electric vehicles.
Actually, there is one: use them themselves. If the Government were to start converting their fleets to electric, there'd be numerous benefits. The cost per car would come down, as manufacturers could commit to larger production numbers and shipments. More people would find out about electric cars, find that they're pretty decent vehicles, have some of their myths dispelled, and get used to their foibles (e.g. the quiet). The Government could show that it's reducing its carbon footprint and pay less in carbon tax and fuel. And in three to four years' time we'd see a further flow-on effect as the leased fleet got sold into the general used vehicle pool.
Overall, it sounds like a win to me. Let's hope that writing to my local Federal member has some effect.
: of course, there's even worse that they can do. They can do nothing. They can charge more for registering electric vehicles since they don't pay fuel tax. They can offer massive subsidies to the fuel industry to keep it going. I'm positing that the Government actually wants to promote electric vehicles, for example as part of its carbon reduction strategy.
: it's a bizarre world when it makes sense for the Government to do something because the commercial operators are too inherently conservative and resistant to change to actually try to keep their industry alive and move with the times.
: as you'd expect from a bunch of people who have been saying "come on, everybody, electric cars are the future, let's move now, let's not get trapped into depending on oil!" for the last twenty years.
All posts licensed under the CC-BY-NC license. Author Paul Wayper.