Too Busy For Words - The PaulWay Blog

Fri, 11 Apr 2014

Sitting at the feet of the Miller

Today I woke nearly an hour earlier than I'm used to, and got on a plane at a barely dignified hour, to travel for over three hours to visit a good friend of mine, Peter Miller, in Gosford.

Peter will already be known to many of my readers, so it would be otiose to describe him as anything more than a programmer of great experience who's worked in the Open Source community for decades. For the last couple of years he's been battling Leukaemia, a fight which has taken its toll - not only on him physically but also on his work and his coding output. It's a telling point for all good coders to consider that he wrote tests on his good days - so that when he was feeling barely up to it but still wanted to do some coding, he could write something that could be verified as correct.

I arrived while he was getting a blood transfusion at a local hospital, and we spent a pleasurable hour talking about good coding practices, why people don't care about how things work any more, how fascinating things that work are (ever seen inside a triple lay-shaft synchronous mesh gearbox?), how to deal with frustration and bad times, how inventions often build on one another and analogies to the open source movement, and many other topics. Once done, we went back to his place where I cooked him some toasted sandwiches and we talked about fiction, the elements of a good mystery, what we do to plan for the future, how to fix the health care system (even though it's nowhere near as broken as, say, the USA's), dealing with road accidents and fear, why you can never have too much bacon, what makes a good Linux Conference, and many other things.

Finally, we got around to talking about code. I wanted to ask him about a project I've talked about before - a new library for working with files that allows the application to insert, overwrite, and delete any amount of data anywhere in the file without having to read the entire file into memory, massage it, and write it back out again. Happily for me this turned out to be something that Peter had also given thought to, apropos of talking with Andrew Cowie about text editors (which was one of my many applications for such a system). He'd independently worked out that such a system would also allow a fairly neat and comprehensive undo and versioning system, which was something I thought would be possible - although we differed on the implementation details, I felt like I was on the right track.
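For the technically curious, here's the rough shape of the interface we were talking about - a sketch only, in C, with invented names; neither Peter nor I have actually written it:

    /* Hypothetical API - every name here is made up for illustration. */
    #include <stddef.h>
    #include <sys/types.h>

    typedef struct edfile edfile;            /* opaque file handle */

    edfile *edf_open(const char *path);
    int     edf_close(edfile *f);

    /* Read and overwrite in place, much like pread()/pwrite(). */
    ssize_t edf_read(edfile *f, void *buf, size_t len, off_t offset);
    ssize_t edf_overwrite(edfile *f, const void *buf, size_t len, off_t offset);

    /* The interesting part: splice bytes in or out at any offset,
     * without rewriting the rest of the file. */
    ssize_t edf_insert(edfile *f, const void *buf, size_t len, off_t offset);
    ssize_t edf_delete(edfile *f, size_t len, off_t offset);

    /* Because every change is a small, self-contained edit, undo and
     * versioning can be layered on top almost for free. */
    int     edf_undo(edfile *f);                       /* roll back last edit */
    int     edf_snapshot(edfile *f, const char *tag);  /* name a version */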

We discussed how such a system would minimise on-disk reads and writes, how it could offer transparent, randomly seekable, per-block compression, how to recover from partial file corruption, and what kind of API it should offer. Then Peter's son arrived and we talked a bit about his recently completed psychology degree, why psychologists are treated the same way that scientists and programmers are at parties (i.e. like a form of social death), and how useful it is to consider human beings as individuals when trying to help them. Then it was time for my train back to Sydney and on to Canberra and home.

Computing is famous, or denigrated, as an industry full of introverts, who would rather hack on code than interact with humans. Yet many of us are extroverts who don't really enjoy this mould we are forced into. We want to talk with other people - especially about code! For an extrovert like myself, having a chance to spend time with someone knowledgeable, funny, human, and sympathetic is to see the sun again after long days of rain. I'm fired up to continue work on something that I thought was only an idle, personal fantasy unwanted by others.

I can only hope it means as much to Peter as it does to me.

posted at: 17:50 | path: /tech | permanent link to this entry

Tue, 11 Feb 2014

The Day We Fight Back?

I've blacked out mabula.net and my partially complete Django replacement for it as part of:

The Day We Fight Back.

It's a token gesture, and I'd prefer something that actually causes a real change in the state of affairs. But hopefully the few people that visit my site will ask why it's blacked out, and I'll tell them. Or they'll find out why for themselves. Or they'll know already.

Ultimately, the thing that worries me in all of this is that all the data collection, all the wire tapping and interception, all the bad cryptography and bastardised standards, all the spying and all the secrecy doesn't really improve our actual security. It hasn't found anything that normal detective work and normal policing and existing laws couldn't already deal with. It hasn't prevented any crimes, either against real people or against 'the state' or anything.

The 'baddies' are already adapting their methods and covering their tracks. There are far too many false positives, and much too much confirmation bias, to make the resulting 'intelligence' anything but a joke. The FBI already spends more money on covering up its mistakes - like its total waste of resources watching Brandon Mayfield - than it would if it had just asked him for an interview. Meantime they're missing the Boston Marathon bombers despite lots of evidence pointing to them. Then follows a lot of chest puffing and excuses and "we can't tell you the details, they're classified".

(Meanwhile, we have banks laundering money for exactly the same terrorist organisations, and they get a slap-on-the-wrist fine and no jail time for anyone because they're "too big to fail". So not only did the NSA and all the security TLAs not find a massive source of funding for these organisations - something that's causing far more damage to USAdian society than 'terrorism' - but the entire rest of the government quietly brushed it under the carpet and pretended it didn't happen. Yeah, good one.)

Ultimately, all it's really about is perpetuating the existence of the security complex - mainly in the USA, but everywhere really. Its first imperative is to preserve itself, and it has all the means to do so. It has the secret courts and the secret laws to prevent legal challenge, and the arms and the blackmail material to prevent other attacks. And its paranoid level of secrecy and security makes it automatically treat any rein, any check on it, as a threat to its own existence - because, well, it would be.

So what REALLY scares me is that nothing we do will actually stop them at all. At this stage, it's basically impossible to even rein in the NSA's powers - and that'd be like taking a rabid tiger and smacking it on the nose to tell it to go away. To put in the high-level open oversight that lets the public see whether these agencies are actually doing anything useful with the vast quantities of money they control is a task that's beyond the realistic abilities of any government (to say nothing of the blackmail and subversive influence that any security agency can bring against anyone that wants to downsize them). Tackling the companies who run the prisons and supply the equipment and make a profit from all the unrest - that's just bordering on insane.

We've made the tiger, and we've fed the tiger because it said it would protect us, and we're on its back because it's better than being in its jaws, and we've fed it more because we're afraid it might eat us, and it's only grown larger and hungrier. To be honest, I think the security complex will kill the world before climate change does.

posted at: 23:41 | path: /society | permanent link to this entry

Wed, 15 Jan 2014

Ignorable compression

On the way home from LCA, and on a whim, I started adding support for LZO compression to Cfile while I was in Perth.

This turned out to have unexpected complications: while liblzo supports the wide variety of compression methods all grouped together as "LZO", it does not actually create '.lzo' files. This is because '.lzo' files also have a special header, added checksums, and a file contents list, a bit like a tar file. All of this is added within the 'lzop' program - there is no external library for reading or writing lzo files in the same way that zlib handles gz files.
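To be clear about what the library does give you, here's roughly what raw LZO1X compression looks like with liblzo2 (a minimal sketch written from memory - check the liblzo documentation before trusting the details). Everything lzop adds - the magic header, checksums, file names and timestamps - is left to the caller:

    /* Compress one in-memory block with LZO1X-1; link with -llzo2. */
    #include <stdlib.h>
    #include <lzo/lzo1x.h>

    int compress_block(const unsigned char *in, lzo_uint in_len,
                       unsigned char **out, lzo_uint *out_len)
    {
        /* Worst case: LZO output can be slightly larger than the input. */
        unsigned char *buf = malloc(in_len + in_len / 16 + 64 + 3);
        /* The work area is the "tiny memory requirement" part:
         * 64KB of scratch space for LZO1X-1. */
        unsigned char *wrkmem = malloc(LZO1X_1_MEM_COMPRESS);

        if (!buf || !wrkmem || lzo_init() != LZO_E_OK ||
            lzo1x_1_compress(in, in_len, buf, out_len, wrkmem) != LZO_E_OK) {
            free(buf);
            free(wrkmem);
            return -1;
        }
        free(wrkmem);
        *out = buf;      /* caller frees; *out_len holds the compressed size */
        return 0;
    }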

Now, I see three options here:

Yeah, I'm going for option one there.

LZO is a special case: it does a reasonable job of compression - not quite as much as standard gzip - but its memory requirements for compression can be minuscule and its decompression speed is very fast. It might work well for compression inside the file system, and is commonly used in consoles and embedded computers when reading compressed data. But for most common situations, even on mobile phones, I imagine gzip is still reasonably quick and produces smaller compressed output.

Now to put all the LZO work in a separate git branch and leave it as a warning to others.

posted at: 22:04 | path: /tech/c | permanent link to this entry

Sun, 24 Nov 2013

Pity the people on Lord Howe Island

I've just come back from a week holiday on Lord Howe Island. It's a beautiful and fascinating place, with heaps of great snorkelling and diving spots, amazing palm trees (the Kentia Palm being the most well known) and other wildlife, and a wonderfully relaxed attitude to life. But Islanders have a few problems that we rarely hear about.

Pretty much all food is expensive. They're just in the process of setting up their own small abattoir, which will allow them to serve local meat at Australian food safety standards. Fish is caught locally (outside the protected areas, of course), but is variable - sometimes they have to serve frozen fish caught days or weeks ago. There are a few people gathering chicken eggs, available at the local co-op store.

Everything else is brought in via ship. It costs about $540 per cubic metre - much, much more if you need it to be refrigerated or frozen in transit. Ice creams typically cost $6 to $8, a half round of White Castello cheese costs $14.60, and we had the smallest roast chicken you've ever seen for $20. Even self-catering is reasonably expensive here. Likewise, all fuel, all cars, pretty much all building materials - all are shipped here from the mainland. Mail day is pretty spectacular.

I don't know what electricity costs on the island but its main supply is a series of diesel generators. Wind, wave and solar power are being investigated but the impact on the views and possibly wildlife is considered a downside (although I'd argue that they need to think differently; bird deaths due to wind turbines are much lower than you'd think and I for one would love to see a couple of wind turbines on Transit Hill or in some of the valleys, providing good clean energy). All power lines are underground, which I think is great, and street lighting is kept to a minimum (partly to save power, partly to not interfere with the many bird species here). The other complication with renewable energy is that there's simply not enough base load and not enough distribution to mean that the variable power supply can easily be used. Wind is OK when you've got hundreds of turbines spread across a state, but not so good if they're all concentrated in a square kilometre area.

But the real reason you should pity Lord Howe Islanders is their internet connection.

There is no undersea fibre-optic cable running here. One was connected to Norfolk Island, 700km east, but they didn't connect Lord Howe Island (for some unknown reason). So all internet connections are via satellite. One of the two satellite companies servicing the island decided to stop service, and only took some of their existing customers back at higher cost and reduced rate of data. The other is not taking any new customers. The NBN satellites are already oversubscribed - so "satellite internet" for regions may already be bad - which means that Lord Howe Island has no option for new internet connections. There are only a few satellite uplinks to serve the entire population, so link congestion is high.

What does this mean? It means studying, getting email, and even getting basic information takes a lot longer. It's costly and unreliable. You could do great business on LHI - selling Kentia Palm seedlings (which used to be the main business on the island), for instance - except you can't do it using the internet and compete with other sites on the mainland. Keeping in touch with children - most go to boarding school on the mainland - is slow and some things like video calls are impossible. So many things we take for granted on the mainland, things that are possible with 3G connections and "just work" on ADSL, just do not work at all on the island. Bufferbloat is crippling here.

The islanders are already conversing with Malcolm Turnbull about the capacity of the NBN satellites and getting better speed. But I can see how easily it's overlooked - the problems experienced by 300 people and their 330 or so guests can look small beside an electorate of 100,000 or so. The pity to me is that the internet is a great opportunity giver. People can run businesses, find help, and get opportunities to better themselves (almost) regardless of where they are. My trip to Lord Howe Island has really shown how much we take for granted the availability of information that the internet brings.

posted at: 10:47 | path: /society | permanent link to this entry

Fri, 01 Nov 2013

Converting cordless drill batteries

We have an old and faithful Ryobi 12V cordless drill which is still going strong. Unfortunately, the two batteries it came with have been basically killed over time by the fairly basic charger that came with it. I bought a new battery some time ago at Battery World, but they now don't stock them and they cost $70 or so anyway. And even with a small box from Jaycar connected to the charger to make sure it doesn't cook the battery too much, I still don't want to buy another Nickel Metal Hydride battery when all the modern drills are using Lithium Ion batteries.

Well, as luck would have it I recently bought several LiIon batteries at a good price, and thought I might as well have the working drill with a nice, working battery pack too. And I'd bought a nice Lithium Ion battery balancer/charger, so I can make sure the battery lasts a lot longer than the old one. So I made the new battery fit in the old pack:

First, I opened up the battery pack by undoing the screws in the base of the pack:

There were ten cells inside - NiMH and NiCd are 1.2V per cell, so that makes 12V. The pack contacts were attached to the top cell, which was sitting on its own plinth above the others. The cells were all connected by spot-welded tabs. I really don't care about the cells so I cut the tabs, but I kept the pack contacts as undamaged as possible. The white wires connect to a small temperature sensor, which is presumably used by the battery charger to work out when the battery is charged; the drill doesn't have a central contact there. You could remove it, since we're not going to use it, but there's no need to.

The new battery is going to sit 'forward' out of the case, so I cut a hole for it by marking the outline of the new pack against the side of the old case. I then used a small fretsaw to cut out the sides of the square, cutting through one of the old screw channels in the process.

I use "Tamiya" connectors, which are designed for relatively high DC current and provide good separation between both pins on both connectors. Jaycar sells them as 2-pin miniature Molex connectors; I support buying local. I started with the Tamiya charge cable for my battery charger and plugged the other connector shell into it. Then I could align the positive (red) and negative (black) cables and check the polarity against the charger. I then crimped and soldered the wires for the battery into the connector, so I had the battery connected to the charger. (My battery came with a Deanes connector, and the charger didn't have a Deanes connector cable, which is why I was putting a new connector on.)

Aside: if you have to change a battery's connector over, cut only one side first. Once that is safely sealed in its connector you can then do the other. Having two bare wires on a 14V 3AH battery capable of 25C (i.e. 75A) is a recipe for either welding something, killing the battery, or both. Be absolutely careful around these things - there is no off switch on them and accidents are expensive.

Then I repeated the same process for the pack contacts, starting by attaching a red wire to the positive contact, since the negative contact already had a black wire attached. The aim here is to make sure that the drill gets the right polarity from the battery, which itself has the right polarity and gender for the charger cable. I then cut two small slots in the top of the pack case to let the connector sit outside the case, with the retaining catch at the top. My first attempt put this underneath, and it was very difficult to undo the battery for recharging once it was plugged in.

The battery then plugs into the pack case, and the wires are just the right length to hold the battery in place.

Then the pack plugs into the drill as normal.

The one thing that had me worried with this conversion was the difference in voltages. Lithium ion cells can range from 3.2V to 4.2V and normally sit around 3.7V. The drill is designed for 12V; with four Lithium Ion cells in the battery, the pack sits around 14.8V nominal and reaches 16.8V when fully charged. Would it damage the drill?

I tested it by connecting the battery to a separate set of thin wires, which I could then touch to the connector on the pack. I touched the battery to the pack, and no smoke escaped. I gingerly started the drill - it has a variable trigger for speed control - and it ran slowly with no smoke or other signs of obvious electric distress. I plugged the battery in and ran the drill - again, no problem. Finally, I put my largest bit in the drill, put a piece of hardwood in the vice, and went for it - the new battery handled it with ease. A cautious approach, perhaps, but it's always better to be safe than sorry.

So the result is that I now have a slightly ugly but much more powerful battery pack for the drill. It's also 3AH versus the 2AH of the original pack, so I get more life out of the pack. And I can swap the batteries over quite easily, and my charger can charge up to four batteries simultaneously, so I have something that will last a long time now.

I'm also writing this article for the ACT Woodcraft Guild, and I know that many of them will not want to buy a sophisticated remote control battery charger. Fortunately, there are many cheap four-cell all-in-one chargers at HobbyKing, such as their own 4S balance charger, or an iMAX 35W balance charger for under $10 that do the job well without lots of complicated options. These also run off the same 12V wall wart that runs the old pack charger.

Bringing new life to old devices is quite satisfying.

posted at: 08:41 | path: /tech | permanent link to this entry

Mon, 16 Sep 2013

A Glut of Music?

I finally read David Gerard's original article, and I have to say there were some pretty sweeping assertions in there. Mainly that quantity always trumps quality, every time, no exceptions. That, personally, I think is ... a bit overstated. There are plenty of examples of people still liking the old quality stuff over the new quantity stuff - my nieces love my Eighties music just as much as they like the new popular hits. And plenty of people still listen to classical music and are interested in music that is strictly of fixed quantity - Beethoven sonatas, for example.

The other big generalisation is that this works purely on a cost amortisation calculation - i.e. that musicians are trying to cover costs, and therefore the cost of producing physical units and distributing them is the governing factor. Some musicians also look to make a living from their work, and that means setting a time period over which they hope to gain money from selling the product, an amount which they expect to live on, a number of units to sell, and so forth - all of which can vary widely and are complicated to fix. (Aside: this is why established artists push for extension to copyright - because theoretically they're extending the amount of time they gain from selling that product. This is a myth and a fairy story record labels tell them when they want them to support copyright extension.) It used to be that producing the units and distributing them were the major costs - see Courtney Love's calculation, for example - and therefore the label proposes to take that risk for the band (another fairy story); nowadays distribution is free and producing a new unit is cheap (in the case of digital distribution, it's totally free), so the ongoing cost of keeping the musicians alive and producing new music is the major cost for professionally produced music.

But I still think the big points made in the article are true: that the real cost of producing music - even music of reasonable quality - is coming down, that more music than ever is being produced and hence there's much more competition for listeners' money, and that "hobby" artists who do it in their spare time and don't expect to make money out of their music (I'm one) drive the cost of actually getting music down too. So for professional musicians, who have sort of expected to make money out of music because their heroes of the previous generations did (due, as David points out, to a quirk in history that made the twentieth century great for this kind of oligopoly), it's a rude awakening to find out that people don't care about your twenty years in the industry or your great study of the art form, they care about listening to a catchy tune that's easy to get.

I also like the point that musicians are also inveterate software copiers. It's one reason I use LMMS and free plugins - because free, quality software does exist. I find it intensely hypocritical that professional musicians can criticise people for copying their music, when they may well have not paid a cent for all the proprietary software they use to produce it.

But to me this is really just about getting in touch with your audience. Companies like Magnatune exist to help quality artists find an audience by putting them in touch with an existing large subscriber base who wants new music. Deathmøle's insane success on Kickstarter shows that someone with an established audience can make it really big without having to sell their soul to big record labels. And Jeph himself is a great example of the way things work in the modern world, since Deathmøle is his side project - his main one is Questionable Content, which he also went into without having existing funding or requiring a big backer to grant him some money and take his rights in exchange. As Tim O'Reilly says, obscurity is a far greater threat to authors and creative artists than piracy; it doesn't matter if you're signed to the best record label there is, if they haven't actually publicised your work you might as well have not signed up at all. And, fortunately, these days we have this wonderful thing called the internet which allows artists to be directly in touch with their fans rather than having to hope that the record label will do the right thing by you and not, say, ignore you while promoting another band.

I wish David had made his point without the broad generalisations - I think it stands well without them.

posted at: 12:15 | path: /society | permanent link to this entry

Sat, 31 Aug 2013

Workplace Loyalty

OK, this is social commentary rather than technical stuff, so if you're not in the mood you can skip over this.

The ever-thoughtful Charlie Stross has written an article about the problems facing the NSA. There's not going to be just one Edward Snowden or Bradley Manning, there's going to be heaps of them - because the Three Letter Acronym security departments are busy getting rid of all of the permanent employees who felt loyal to them and replacing them with contractors who have no more loyalty to the department than the department has to them.

Now, I personally believe in being loyal to my employer. I (of course) honour the various clauses in my contract that say they get to own all my work for them, and that I won't sell or leak their secrets, and that I won't work for someone else without telling them. I believe in being loyal to the customers I work for and the people I work with. I believe that I am more valuable to an employer the longer I work there because I know the intricacies of the job better and am better at solving problems by recognising them and their underlying causes. These are things that a new employee will always struggle with.

But I believe that the big problem with employers these days is this pernicious idea that their workforce is interchangeable, not to be trusted, and best used by screwing them for as much work as you can get out of them and then throwing them away. It's an "Atlas Shrugged" mindset that believes that somehow the people at the top are being held back by the people at the bottom, and that therefore workers don't deserve any of the benefits of being at the top. It's also contributed to by the idea that companies poach people - especially "rock star" workers and people high up the ladder - on the assumption that those people (and their loyalty) can be bought for their experience alone, and that they'll (somehow) change their environment just by being in it.

The "glory days" of jobs for life that Charlie talks about in his essay are really the times before the MBA school of management came into being; when people managed companies because they'd worked their way to the top. Those people knew the business intimately, they'd sweated over it for decades, they knew the people - and the employees knew them. There was much more of a feeling of trust in those organisations, because it was about personal relationships more than work relationships or "rightsizing" or "mission statements". Walt Disney was famous for remembering every person in the 700-strong Disney workforce. These days, one gets the impression that the management of some companies consider it a burden to even associate with the people more than a step down the org chart.

At the moment all we're really seeing, IMO, is the 'tit for tat' nature of the Prisoner's Dilemma being played out in corporate workforces. If you want to find the point at which employers started cheating on their workforces, then you have to keep on going back - past the 1980s anti-union laws and workplace deregulation, past the 1880s and the weavers' and miners' unions, past the 1780s and the clearances... in fact, just keep going: it's feudal lords demanding tithes, and high priests demanding donations, and kings demanding tributes. The Greeks famously invented democracy, but even then slaves, women, and other "not our sort" people couldn't actually vote. The process of cheating on the people beneath you for your own gain has a long history - far longer, I would argue, than the history of the workers rebelling and demanding their own rights.

So now the workforce is no longer loyal to their employer, and we see the mistrust and second-guessing that usually accompanies standard Prisoner's Dilemma situations. I think the two are evenly matched - the employer might seem to hold the power (because they write the contract the employee must sign without change) but the employees are many, and their methods of working around the employer's restrictions and exploiting the employer's weaknesses are many and subtle. The employee has much more mobility than the employer, and while there are usually non-competition restrictions in the contract the number of times I've heard people subtly, and not so subtly, ignoring these (for example, sales people poaching client lists) makes it difficult for the employer to fight all those battles.

Overall, it's a pity, because I think a situation where employer and employee trust each other and work together is much better than one where each is subtly trying to screw the other. Once you see it as a contest, though, it's all downhill from there. Many organisations try to rebuild trust, but the "team building exercise" is such a cliche for uncaring management that it's boring to repeat it. If you're trying to rebuild trust, but not fundamentally changing the management style and not addressing the needs and issues of the workers, then it's really just an exercise in paying some management consultant to take your money and laugh at you.

I currently work at a company which does have, at least in our Canberra offices, a lot of respect for its workers. It's easy to imagine being paid well for being a "subject matter expert" rather than having to go into management to keep climbing the pay ladder. We have regular functions every fortnight or so where you can speak to just about anyone - not 'town hall' meetings which in my experience are still basically management telling the workers the way it's going to be. And I think that there are many examples of companies that are doing the right things by their workers and seeing a lot of benefits - it's easy to cynically see what Google get out of paying for their sysadmins to have good internet connections, but the sysadmins get a decent deal out of it too, and the trust and understanding that goes with it is not easily bought.

So I do think there's hope. But I think we have to see a profound shift in the employers and their attitudes to staff before that changes. For a start, weed out the psychopaths and bullies in management before you complain about theft of office supplies. Promote people from within rather than always hiring top management from outside. Stop trying to win my trust with company slogans and mission statements, and start actually listening to me when I tell you about the opportunity that I can see right in front of you. Stop treating companies like feudal families, with their fiefdoms and strict hierarchy, and start treating us all like citizens.

posted at: 22:37 | path: /society | permanent link to this entry

Tue, 13 Aug 2013

New file system operations

Many many years ago I thought of the idea of having file operations that effectively allowed you to insert and delete, as well as overwrite, sections of a file. So if you needed to insert a paragraph in a document, you would simply seek to the byte in the file just before where you wanted to insert, and tell the file to insert the required number of bytes. The operating system would then be responsible for handling that, and it could then seamlessly reorganise the file to suit. Deleting a paragraph would be handled by similar means.

Now, I know this is tricky. Once you go smaller than the minimum allocation unit size, you have to do some fairly fancy handling in the file system, and that's not going to be easy unless your file system discards block allocation and goes with byte offsets. The pathological case of inserting one byte at the start of a file is almost certainly going to mean rewriting the entire file on any block-based file system. And I'm sure it offends some people, who would say that the operations we have on files at the moment are just fine and do everything one might efficiently need to do, and that this kind of chopping and changing is up to the application programmer to implement.

That, to me, has always seemed something of a cop-out. But I can see that having file operations that only work on some file systems is a limiting factor - adding specific file system support is usually done after the application works as is, rather than before. So there it sat.

Then a while ago, when I started writing this article, I found myself thinking of another set of operations that could work with the current crop of file systems. I was thinking specifically of the process that rsync has to do when it's updating a target file - it has to copy the existing file into a new, temporary file, add the bits from the source that are different, then remove the old file and substitute the new. In many cases we're simply appending new stuff to the end of the old file. It would be much quicker if rsync could simply copy the appended stuff into a new file, then tell the file system to truncate the old file at a specific byte offset (which would have to be rounded to an allocation unit size) and concatenate the two files in place.

This would be relatively easy for existing file systems to do - once the truncate is done the inodes or extents of the new file are simply copied into the table of the old file, and then the appended file is removed from the directory. It would be relatively quick. It would not take up much more space than the final file would. And there are several obvious uses - rsync, updating some types of archives - where you want to keep the existing file until you really know that it's going to be replaced.
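To make that concrete, here's the rsync case written against a pair of invented calls - this is a sketch of what I'd like to exist, not anything any current file system or libc actually provides:

    /* Hypothetical operations - these system calls do not exist; the
     * names and signatures are made up purely to illustrate the idea. */
    #include <fcntl.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Truncate fd at offset (rounded to an allocation unit boundary). */
    int ftruncate_at(int fd, off_t offset);

    /* Move all of src_fd's extents onto the end of dst_fd, leaving
     * src_fd empty - no data is actually copied. */
    int fconcat(int dst_fd, int src_fd);

    /* What rsync's update step could look like with those two calls. */
    int update_in_place(const char *target, const char *new_tail, off_t keep)
    {
        int dst = open(target, O_WRONLY);
        int src = open(new_tail, O_RDWR);
        if (dst < 0 || src < 0)
            return -1;

        if (ftruncate_at(dst, keep) != 0 ||   /* keep the unchanged prefix */
            fconcat(dst, src) != 0)           /* splice the new data on    */
            return -1;

        close(src);
        close(dst);
        return unlink(new_tail);              /* the tail file is now empty */
    }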

And then I thought: what other types of operations are there that could use this kind of technique? Splitting a file into component parts? Removing a block or inserting a block - i.e. the block-wise alternative to my byte offset operations above? All those would be relatively easy - rewriting the inode or offset map isn't, as I understand it, too difficult. Even limited to operations that are easy to implement in the file system, there are considerably more operations possible than those we currently have to work with.

I have no idea how to start this. I suspect it's a kind of 'chicken and egg' problem - no-one implements new operations for file systems because there are no clients needing them, and no clients use these operations because the file systems don't provide them. Worse, I suspect that there are probably several systems that do weird and wonderful tricks of their own - like allocating a large chunk of file as a contiguous extent of disk and then running their own block allocator on top of it.

Yes, it's not POSIX compliant. But it could easily be a new standard - something better.

posted at: 19:53 | path: /tech/ideas | permanent link to this entry

Mon, 12 Aug 2013

Fair use is theft, except when it's me using other people's work

Dear Linda Jaivin,

In your article for the Sydney Morning Herald on the 31st of July 2013, you say fair use is "theft" in all but name.

And on your blog you have mentioned Bruce Sterling's piece "The Ecuadorian Library". In fact, you've quoted directly from it.

So, is that fair use? Or are you going to hand yourself in for copyright theft now?

Now, you theoretically don't make any money from quoting Bruce, so maybe you think that because it's not a commercial use that therefore you're not "stealing". But I don't think you can have it both ways.

If we follow your argument - that any use that has some kind of commercial gain is, in fact, theft - then it simply becomes a question of what "commercial gain" is. And that's where lawyers come in.

Because you've obviously gained from referencing a quotation from Shakespeare in your article title. You've probably gained by mentioning songs or stories in your books - also copyrighted. And where does that end? Should you be paying the people who wrote the thesaurus every time you look up a synonym? Should you be paying the authors whose work you cribbed on the Russo-Japanese war? Should you be paying Bruce Sterling a proportion of your royalties, as he's clearly influenced your thinking?

You're also presenting a slippery slope that cannot help anyone. An academic quotes your book? Clearly they must pay! Someone satirises it? Clearly they must pay! A student quotes from it? Well, clearly they must pay in proportion to how much they quoted - after all, some people might read your book and not use a thing from it, and others quote entire sections! Someone mentions it on a radio show? They should pay for the privilege! Someone sells your book second-hand? Well, obviously you should get a cut too!

You're also a successful author, having published eight books and translated more. So it's kind of convenient for you to say, now, that you should be paid more for all that work. It doesn't help the new author, struggling to make a living and trying to read and learn from everything they can.

And, let's face it, the spectre of some dread international conglomerate ripping off your work and not giving you any money for it is kind of the wrong way around, isn't it? After all, you've basically been published by them - big printing companies who control distribution, decide who is going to be released where and when, and decide the royalties they will offer you and how they'll pay. They don't need to steal other people's work, they've got authors begging to be published sending them manuscripts all the time. Pretending that you're threatened by hungry companies desperate to rip your work off, and ignoring the one that's already only paying you trivial amounts compared to their own salaries and bonuses, is not a very good distraction.

I have nothing against you personally. I only think your logic seems mixed up: you defend a system that offers a pittance to the people who actually write the words we read, and demand that no-one use your work without paying for it, while at the same time using other people's work without paying for it.

Regards,

Paul

posted at: 22:26 | path: /society | permanent link to this entry

Thu, 01 Aug 2013

Preventing patent obscurity

One of the problems I see with the patent system is that patents are often written in obscure language, using unusual and non-standard jargon, so as to both apply as broadly as possible and not show up as "obvious" inventions.

So imagine I'm going to try to use a particular technology, or I'm going to patent a new invention. As part of my due diligence, I have to provide a certified document that shows what search terms I used to search for patents, and why any patents I found were inapplicable to my use. Then, when a patent troll comes along and says "you're using our patent", my defence is, "Sorry, but your patent did not appear relevant in our searches (documentation attached)."

If my searches are considered reasonable by the court, then I've proved I've done due diligence and the patent troll's patent is unreasonably hard to find. OTOH, if my searches were unreasonable I've shown that I have deliberately looked for the wrong thing in the hopes that I can get away with patent infringement, so damages would increase. If I have no filing of what searches I did, then I've walked into the field ignorant and the question then turns on whether I can be shown to have infringed the patent or whether it's not applicable, but I can be judged as not taking the patent system seriously.

The patent applicant should be the one responsible for writing the patent in the clearest, most useful language possible. If not, why not use Chinese? Arpy-Darpy? Gangster Jive? Why not make up terms: "we define a 'fnibjaw' to be a sequence of bits at least eight bits long and in multiples of eight bits"? Why not define operations in big-endian notation where the actual use is in little-endian notation, so that your constants are expressed differently and your mathematical operations look nothing like the actual ones performed but your patent is still relevant? The language of patents is already obscure enough, and even if you did want to actually use a patent it is already hard enough with some patents to translate their language into the standard terms of art. Patent trolls rely on their patents being deliberately obscure so that lawyers and judges have to interpret them, rather than technical experts.

The other thing this does is to promote actual patent searches and potential usage. If, as patent proponents say, the patent system is there to promote actual use and license of patents before a product is implemented, then they should welcome something that encourages users to search and potentially license existing patents. The current system encourages people to actively ignore the patent system, because unknowing infringement is seen as much less of an offence than knowing infringement - and therefore any evidence of actually searching the patent system is seen as proof of knowing infringement. Designing a system so that people don't use it doesn't say a lot about the system...

This could be phased in - make it apply to all new patents, and give a grace period where searches are encouraged but not required to be filed. Make it also apply so that any existing patent that is used in a patent suit can be queried by the defendant as "too obscure" or "not using the terms of art", and require the patent owner to rewrite it to the satisfaction of the court. That way a gradual clean-up of the current mess of incomprehensible patents that have deliberately been obfuscated can occur.

If the people who say patents are a necessary and useful thing are really serious in their intent, then they should welcome any effort to make more people actually use the patent system rather than try to avoid it.

Personally I'm against patents. Every justification of patents appeals to the myth of the "home inventor", but they're clearly not the beneficiaries of the current system as is. The truth is that far from it being necessary to encourage people to invent, you can't stop people inventing! They'll do it regardless of whether they're sitting on billion-dollar ideas or just a better left-handed cheese grater. They're inventing and improving and thinking of new ideas all the time. And there are plenty of examples of patents not stopping infringement, and plenty of examples of companies with lots of money just steamrollering the "home inventor" regardless of the validity of their patents. Most of the "poster children" for the "home inventor" myth are now running patent troll companies. Nothing in the patent system is necessary for people to invent, and its actual objectives do not meet with the current reality.

I love watching companies like Microsoft and Apple get hit with patent lawsuits, especially by patent trolls, because they have to sit there with a stupid grin on their face and still admit that the system that is screwing billions of dollars in damages out of them is the one they also support because of their belief that patents actually have value.

So introducing some actual utility into the patent system should be a good thing, yeah?

posted at: 12:19 | path: /tech/ideas | permanent link to this entry

Tue, 14 May 2013

Modern kernels and uncooperative monitors

Our main TV screen is a Kogan 32" TV hooked up to a Mini-ITX machine running a MythTV frontend on Fedora 18. Due to Kogan buying the cheapest monitors, which are the ones with the worst firmware, it has several annoyingly braindead features that make it hard to use with a computer:

Now, not having an EDID used not to be a problem when X did most of the heavy work of setting up the display, because you could, at a pinch, tell it to trust you on what modes the monitor could support. With a program like cvt you could generate a modeline that you'd stick in your /etc/X11/xorg.conf and it'd output the right frequencies. This is what I had to do for Fedora 16.

The new paradigm now is that the kernel sets the monitor resolution and X is basically a client application to use it. This solves a lot of problems for most people, but unfortunately the kernel doesn't really handle the situation when the monitor doesn't actually respond with a valid EDID. More unfortunately, this actually happens in numerous situations - dodgy monitors and dodgy KVM switches being two obvious ones.

It turns out, however, that there is a workaround. You can tell the kernel that you have a (made-up) EDID block to load that it's going to pretend came from the monitor. To do this, you have to generate an EDID block - handily explained in the kernel documentation - which requires grabbing the kernel source code and running make in the Documentation/EDID directory. Then put the required file, say 1920x1080.bin, in a new directory /lib/firmware/edid, and add the parameter "drm_kms_helper.edid_firmware=edid/1920x1080.bin" to your kernel boot line in GRUB, and away you go.

Well, nearly. Because the monitor literally does not respond, rather than responding with something useless, the kernel doesn't turn that display on (because, after all, not responding is what the HDMI and DVI ports are also doing, since nothing is plugged into them). So you also have to tell the kernel that you really do have a monitor there, by including the parameter "video=VGA-1:e" on the kernel boot line as well.

Once you've done that, you're good to go. Thank you to the people at OSADL for documenting this. Domestic harmony at PaulWay Central is now restored.

posted at: 21:11 | path: /tech | permanent link to this entry

Tue, 02 Apr 2013

Dear Mail Online editor,

I followed a link to one of the stories on the Daily Mail Online website, but my attention was arrested by the "FeMail Today" side bar. From the content, one would apparently think that all women are only interested in what famous women are wearing, what babies they have, who they're sleeping with or what they're saying about each other. Male celebrities might be involved but only when they're controversial or when the story is about their wife.

Don't give us that claptrap about "this is what women want". Don't give us some excuse about what sells or what your surveys have said. This is so obviously a sexist, demeaning bunch of claptrap that it's insulting to look at. It's shallow, it's boring, and it's painfully one-sided in its portrayal of women. No women scientists, leaders, or workers; no current politics, economics or public interest; nothing, in short, in common with the other readers of your paper.

Please grow a backbone, get rid of your demeaning sexist view of women, and start writing real content. Your women readers will thank you for it.

This post has also been sent to the Daily Mail Online editor.

Yours sincerely,

Paul

posted at: 08:51 | path: /society | permanent link to this entry

Sat, 23 Mar 2013

Recording video at LCA

A couple of people have asked me about the process of recording the talks at Linux Conference Australia, and it's worth publishing something about it so more people get a better idea of what goes on.

The basic process of recording each talk involves recording a video camera, a number of microphones, the video (and possibly audio) of the speaker's laptop, and possibly other video and audio sources. For keynotes we recorded three different cameras plus the speaker's laptop video. In 2013 in the Manning Clark theatres we were able to tie into ANU's own video projection system, which mixed together the audio from the speaker's lapel microphone, the wireless microphone and the lectern microphone, and the video from the speaker's laptop and the document scanner. Llewellyn Hall provided a mixed feed of the audio in the room.

Immediately the problems are: how do you digitise all these things, how do you get them together into one recording system, and how do you produce a final recording of all of these things together? The answer to this at present is DVswitch, a program which takes one or more audio and video feeds and acts as a live mixing console. The sources can be local to the machine or available on other machines on the network, and the DVswitch program itself acts as a source that can then be saved to disk or mixed elsewhere. DVswitch also allows some effects such as picture-in-picture and fades between sources. The aim is for the room editor to start the recording before the start of the talk and cut each recording after the talk finishes so that each file ends up containing an entire talk. It's always better to record too much and cut it out later rather than stop recording just before the applause or questions. The file path gives the room and time and date of recording.

The current system then feeds these final per-room recordings into a system called Veyepar. It uses the programme of the conference to match the time, date and room of each recording with the talk being given in the room at that time. A fairly simple editing system then allows multiple people to 'mark up' the video - choosing which recorded files form part of the talk, and optionally setting the start and/or end times of each segment (so that the video starts at the speaker's introduction, not at the minute of setup beforehand).

When ready, the talk is marked for encoding in Veyepar and a script then runs the necessary programs to assemble the talk title and credits and the files that form the entire video into one single entity and produce the desired output files. These are stored on the main server and uploaded via rsync to mirror.linux.org.au and are then mirrored or downloaded from there. Veyepar can also email the speakers, tweet the completion of video files, and do other things to announce their existence to the world.

There are a couple of hurdles in this process. Firstly, DVswitch only deals with raw DV files recorded via Firewire. These consume about 13 gigabytes per hour of video, per room - the whole of LCA's raw recorded video for a week comes to about 2.2 terabytes. These are recorded to the hard drive of the master machine in each room; from there they have to be rsync'ed to the main video server before any actual mark-up and processing in Veyepar can begin. It also means that previews must be generated of each raw file before it can be watched normally in Veyepar, a further slow-down to the process of speedily delivering raw video. We tried using a file sink on the main video server that talked to the master laptop's DVswitch program and saved its recordings directly onto the disk in real time, but despite having tested this process in November 2012 and it working perfectly, during the conference it tended to produce a new file each second or three even when the master laptop was recording single, hour-long files.

Most people these days are wary of "yak shaving" - starting a series of dependent side-tasks that become increasingly irrelevant to solving the main problem. We're also wary of spending a lot of time doing something by hand that can or should be automated. In any large endeavour it is important to strike a balance between these two behaviours - one must work out when to stop work and improve the system as a whole, and when to keep using the system as is because improving it would take too long or risk breaking things irrevocably. I fear in running the AV system at LCA I have tended toward the latter too much - partly because of the desire within the team (and myself) to make sure we got video from the conference at all, and partly because I sometimes prefer a known irritation to the unknown.

The other major hurdle is that Veyepar is not inherently set up for distributed processing. In order to have a second Veyepar machine processing video, one must duplicate the entire Veyepar environment (which is written in Django) and point both at the same database on the main server. Due to a variety of complications, this was not possible without stopping Veyepar and possibly having to rebuild its database from scratch, and I and the team lacked the experience with Veyepar to know how to easily set it up in this configuration. I didn't want to start setting up Veyepar on other machines and find myself shaving a yak, looking for a piece of glass to mount a piece of 1000-grit wet and dry sandpaper on to sharpen the razor correctly.

Instead, I wrote a separate system that produced batch files in a 'todo' directory. A script running on each 'slave' encoding machine periodically checked this directory for new scripts; when it found one it would move it to a 'wip' directory, run it, and move it and its dependent file into a 'done' directory when finished. If the processes in the script failed it would be moved into a 'failed' directory and could be resumed manually without having to be regenerated. A separate script (already supplied in Veyepar and modified by me) periodically checked Veyepar for talks that were set to "encode", wrote their encode script and set them to "review". Thus, as each talk was marked up and saved as ready to encode, it would automatically be fed into the pipeline. If a slave saw multiple scripts it would try to execute them all, but would check that each script file existed before trying to execute it in case another encoding machine had got to it first.
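The heart of it is claiming a job by renaming it; here's a bare-bones sketch of that one idea in C (the real thing was a set of scripts driven by Veyepar, with different paths and rather more error handling):

    /* Claim-by-rename work queue sketch; directory names are illustrative. */
    #include <dirent.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Try to claim and run one job: 1 = ran a job, 0 = nothing to do. */
    static int run_one_job(void)
    {
        DIR *d = opendir("todo");
        struct dirent *ent;
        char name[256], from[512], claimed[512], finished[512], cmd[600];

        if (!d)
            return 0;
        while ((ent = readdir(d)) != NULL) {
            if (ent->d_name[0] == '.')
                continue;
            snprintf(name, sizeof(name), "%s", ent->d_name);
            snprintf(from, sizeof(from), "todo/%s", name);
            snprintf(claimed, sizeof(claimed), "wip/%s", name);

            /* rename() succeeds for exactly one claimant - the scheme
             * relies on NFS getting this right. */
            if (rename(from, claimed) != 0)
                continue;            /* another encoder got there first */
            closedir(d);

            snprintf(cmd, sizeof(cmd), "sh '%s'", claimed);
            int status = system(cmd);    /* run the encode script */

            snprintf(finished, sizeof(finished), "%s/%s",
                     status == 0 ? "done" : "failed", name);
            rename(claimed, finished);
            return 1;
        }
        closedir(d);
        return 0;
    }

    int main(void)
    {
        /* Poll forever, one job at a time; run several copies of this
         * per encoder for more parallelism. */
        for (;;) {
            if (run_one_job() == 0)
                sleep(30);
        }
    }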

That system took me about a week of gradual improvements to refine. It also took me giving a talk at the CLUG programming SIG on parallelising work (and the tricks thereof) to realise that instead of each machine trying to allocate work to itself in parallel, it was much more efficient to make each slave script do one thing at a time and then run multiple slave scripts on each encoder to get more parallel processing, thus avoiding the explicit communication of a single work queue per machine. It relies on NFS correctly handling the timing of a file move, so that one slave script cannot execute a script another has already moved into work in progress - but at this granularity of work the window of overlap is very small.

I admit that, really, I was unprepared for just how much could go wrong with the gear during the conference. I had actually prepared; I had used the same system to record a number of CLUG talks in the months leading up to the conference; I'd used the system by myself at home; I'd set it up with others in the team and tested it out for a weekend; I've used similar recording equipment for many years. What I wasn't prepared for was that things that I'd previously tested and had found to work perfectly would break in unexpected ways:

The other main problem that galls me is that there are inconsistencies in the recordings that I could have fixed if I'd been aware of them at the time. Some rooms are very loud, others quite soft. Some rooms cut the recording at the start of the applause, so I had to join the next segment of recording on and cut it early to include the applause that the speaker deserved. There were a few recordings that we missed entirely for reasons I don't know; I was busy trying to sort out all the problems with the main server. I was immensely proud of and thankful for the team of Matt Franklin, Tomas Miljenovic, Leon Wright, Euan De Koch, Luke John and Jason Nicholls, who got there early, left late, worked tirelessly, and leapt - literally - up to fix a problem when it was reported. Even with a time machine some of those problems would never be fixed - I consider it both rude and amateurish to interrupt a speaker to tell them that we need them to start again due to some glitch in the recording process.

But the main lesson to me is that you can only practice setting it up, using it, packing it up and trying again with something different in order to find out all the problems and know how to avoid them. The 2014 team were there in the AV room and they'll know all of what we faced, but they may still find their own unique problems that arise as a result of their location and technology.

There's a lot of interest and effort being put in to improve what we have. Tim Ansell has started producing gstswitch, a Gstreamer-based program similar to DVswitch which can cope with modern, high-definition, compressed media. There's a lot of interest in the LCA 2014 team and in other people to produce a better video system that is better suited to distributed processing, distributed storage and cloud computing. I'm hoping to be involved in this process but my time is already split between many different priorities and I don't have the raw knowledge of the technologies to be able to easily lead or contribute greatly to such a process. All I can do is contribute my knowledge of how this particular LCA worked, and what I would improve.

posted at: 09:23 | path: /tech/lca | permanent link to this entry

Tue, 05 Mar 2013

Code on the beach!

In 2011 I ran an event called CodeCave, which saw nine intrepid coders and three intrepid family members go to Yarrangobilly Caves to spend a cool, wet winter weekend coding, eating, exploring in caves, coding, playing Werewolf, taking photos, coding, swimming (!), talking, flying planes and helicopters, and coding. Being an extrovert, I love those opportunities to see friends doing cool things with code, and my impression is everyone enjoyed the weekend.

I had a hiatus in 2012 for various reasons, but this year I've decided to run another similar event. But, as lovely as Yarrangobilly is and as comfortable as the Caves House was to stay in, it's a fair old five hour drive for people in Sydney, and even Canberrans have to spend the best part of two hours driving to get there. And Peter Miller, who runs the fabulous CodeCon (on which CodeCave was styled) every year, is going to be a lot better off near his health care and preferred hospital. Where to have such an event, then?

One idea that I'd toyed with was the Pittwater YHA: close to Sydney (where many of the attendees of CodeCave and CodeCon come from), still within a reasonable driving distance from Canberra (from where much of the remainder of the attendees hail), and close to Peter's base in Gosford. But there's no road up to it, you literally have to catch the ferry and walk 15 minutes to get there - while this suits the internet-free aesthetic of previous events, for Peter it's probably less practical. I discussed it on Google+ a couple of weeks ago without a firm, obvious answer (Peter is, obviously, reserving his say until he knows what his health will be like, which will probably be somewhere about two to three weeks out I imagine :-) ).

And then Tridge calls me up and says "as it happens, my family has a house up on the Pittwater". To me it sounds brilliant - a house all to ourselves, with several bedrooms, a good kitchen, and best of all on the roads and transport side of the bay; close to local shops, close to public transport, and still within a reasonable drive via ambulance to Gosford Hospital (or, who knows, a helicopter). Tridge was enthusiastic, I was overjoyed, and after a week or so to reify some of my calendar that far out, I picked from Friday 26th July to Sunday 28th July 2013.

So it's now called CodeBeach 2013, and it also has a snazzy Google Form to take bookings on. Please drop me an email if you've got any questions. We'd love to have you there!

posted at: 21:13 | path: /tech | permanent link to this entry


All posts licensed under the CC-BY-NC license. Author Paul Wayper.

You can also read this blog as a syndicated RSS feed.