Thursday 6 December 2018

Why the anti-encryption law would be unworkable

The following began as a Twitter thread from Peter Logue, a former journalist. I contacted Peter by DM, and with his son's help we produced this for publication. Peter says his son has worked in game development in Australia and China, then spent a few years with Appster before setting up an IT consultancy called SixSix with Carl Rigoni, who ran an innovation division at Australia Post. His son deals with software development, encryption, and security issues on a daily basis. Some of his son's peers are senior advisers on IT security to the Department of Foreign Affairs and Trade, the Department of Defence, and the Department of the Prime Minister and Cabinet.

My eldest son has worked in IT and game development for over 15 years. He’s now a consultant to many major companies and works with software developers. Here’s what he says about the proposed anti-encryption legislation the government is trying to get through federal Parliament: It’s not actually possible to implement it in a modern software company.

To give you an idea of why it's not possible, picture the following scenario: I manage a software development company that is building some technology that uses encryption. This technology is being built for a client of mine that chose to work with my company because of our good reputation. With the new legislation, the government can contact one of my employees behind my back and tell them to build a backdoor into the software without my knowledge. If the employee tells me about it, they could be facing jail time under the new rules.

With my management tools, though, I track every minute of their work, and every line of code they write is reviewed and automatically tested. Anything unexpected in there will cause the automated tests to fail, which will highlight the presence of the rogue code. Even if my tests don't pick it up, I can see that this developer is taking far longer to complete the work than they should (because they are spending time building features for the government). I ask the developer about this, and they can't tell me, so I check the work that's been done and see that the software is compromised. At this point I start jumping to conclusions: I might think it's a Chinese hacker stealing our work or something like that, and report the employee to the police. What happens then?
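To make the point concrete, here is a minimal sketch of the kind of automated test he is describing. Everything in it is hypothetical (the endpoint names, the `configured_endpoints` helper, and the allow-list are all invented for illustration): the idea is simply that a test suite pins down the exact set of network endpoints the product is allowed to contact, so a covertly added backdoor that phones home to an extra host changes that set and fails the build.

```python
# Hypothetical CI check: compare the endpoints the software is actually
# configured to contact against the set the team has reviewed and approved.
# A secretly added exfiltration endpoint would show up as a test failure.

ALLOWED_ENDPOINTS = {          # the endpoints the team signed off on
    "api.example-client.com",
    "telemetry.example-client.com",
}

def configured_endpoints():
    """Stand-in for reading the real application's network configuration."""
    return {
        "api.example-client.com",
        "telemetry.example-client.com",
        # A covert change would quietly add an extra host here,
        # and the test below would immediately flag it.
    }

def test_no_unexpected_endpoints():
    unexpected = configured_endpoints() - ALLOWED_ENDPOINTS
    assert not unexpected, f"unreviewed endpoints found: {unexpected}"

if __name__ == "__main__":
    test_no_unexpected_endpoints()
    print("all endpoints accounted for")
```

None of this is exotic tooling; it is the ordinary review-and-test discipline of a modern software company, which is exactly why a backdoor inserted behind management's back is so hard to hide.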

I don't think the people who wrote this bill understand how technology development works. That's probably why so much of its wording has been left vague or undefined. If political parties want to win votes by showing they have robust cybersecurity policies, enacting laws that make local cybersecurity weaker is not the way to do it. It's arse-backwards anti-security and will undermine international confidence in the Australian technology sector.

This comment from u/Groovyaardvark on Reddit sums it up well:
One of the ways #AABill gets access to systems is by commandeering employees of companies to write backdoors. But they’re not even allowed to tell their employer, or face jail time. I went through the mechanics of this, and realised how out of touch Canberra is...
Peter’s son adds the following exchange to illustrate how the laws might play out in practice once enacted:
“Johnson, why are all of your tickets building up!? What are we paying you for?! We will need to discuss a performance improvement plan.”  
“But, sir, I've been working really hard on this, err... other project...”  
“What project?”  
“Errrm... it's, umm... I can't tell you.”  
So the choice is: A, get fired, or B, go to prison.  
“I'll pack up my desk, I guess.”
I wonder who will be the next lucky developer chosen to secretly undermine and destroy their employer’s products?

UPDATE 7 December 2018 6.25am: The bill passed through both houses of federal parliament yesterday on the last sitting day before the end-of-year break.
