The politics of protection and privacy

By Adeline Teoh

The privacy versus security debate is not just about rights and metadata retention laws but how safe that information really is.

It was 33 minutes of news comedy that put government surveillance back on the agenda. In early April, US television channel HBO screened an episode of Last Week Tonight focusing on the upcoming renewal of Section 215 of the Patriot Act, which allows the US National Security Agency (NSA) to retain metadata of its citizens’ correspondence. After establishing—through a painful vox pop—that the general public has only a slippery grasp of the issue, host John Oliver flew to Russia to interview former NSA contractor-turned-leaker Edward Snowden on what the NSA’s powers really are. Tellingly, Oliver chose to frame the issue in terms of whether the government could access private photographs. He told Snowden: “This is the most visible line in the sand for people: Can they see my dick?”

The issue is no laughing matter for Australians. On 19 March 2015, parliament passed the Telecommunications (Interception and Access) Amendment (Data Retention) Bill 2015, which, among other things, requires telcos and internet service providers to retain metadata for two years. Apart from the economic arguments against this scheme—telcos would need to increase their storage capacity as well as secure this data for four times longer than existing requirements—there are questions about its effectiveness.

According to a study conducted by the German parliament, data retention in Germany led to a negligible increase of 0.006% in the crime clearance rate. Not only did Germany conclude that this was a financial burden given the poor return on crime reduction, its Constitutional Court also described metadata retention as a serious restriction of the right to privacy and suggested that a retention period of six months would be proportionate.

For most Australians, the attitude of ‘if I’m not doing anything wrong then why should I worry?’ is pervasive. But even if they are happy to let the government have their metadata, and to give away still more data through social media, app usage and other connections in the broader Internet of Things, what they often forget is how flimsy the security of that data is.

Once telcos start collecting and retaining this data it becomes a target for hackers. The ultimate question is whether you think your telco can defend itself against the hacker threat. Before you answer that, consider this: earlier this year, Telstra, Australia’s biggest telco, came under fire for breach of privacy laws when it accidentally leaked the personal information of almost 16,000 customers. Full names, addresses and phone numbers were accessible via a Google search between June 2012 and May 2013. How safe is your data?

Lags in the law

Privacy Professor Rebecca Herold says it’s an issue she has been working through for several years, and that it’s not just about hackers or government surveillance. In the medical device sector, for example, privacy and security often amount to the same thing: appropriate access control.

“You need security controls to protect against mistakes. If somebody makes a change to a device that they shouldn’t have, a good security control will stop that change from occurring,” she explains. “If you have bad security on a device that controls your body, then if somebody gets into that device and changes by a decimal the amount of insulin being pumped or changes how often the heart should beat, you could actually kill someone.”

The problem is that many manufacturers do not consider privacy and security to be within the scope of their product, she says. “I’m seeing so many organisations creating devices, collecting all this data, and they aren’t thinking about how it has a privacy impact. They’re like, ‘well, we’ve addressed all the laws, so there are no privacy issues to address here.’ That’s something I’m trying to get them to understand: the laws lag behind the actual privacy risk.”

One issue centres on whether the information collected is appropriate. “With privacy, everything depends on context,” says Herold. “You might not have a privacy issue with a doctor collecting data because the doctor’s giving you care, but you didn’t give the doctor that data so he could go off and sell it to marketers. In that context, it’s not appropriate.”

Data privacy and research and development often appear to conflict in this area. A common argument, Herold recounts, is that data restrictions limit innovation. “I hear ‘if we can’t have this data, you’re never going to find a cure for this disease’ and I think it’s a cop-out. There are ways you can use that data if you set parameters around how the data is used, how it’s shared. That’s the thing that’s missing. They always say ‘just give us the data because we want to use it to better the world’. I’m all for that, but let’s make accountability a part of what happens.”

The problem with the Internet of Things is that individuals often don’t realise a lot of data collected today for one reason can easily be repurposed for another, with no requirement to disclose this change of use. This is one thing Herold is trying to change. She urges organisations to consider where this line falls, not just current legislation, when developing privacy policies. “When you’re the one collecting the information, you need something in your brain that tells you, ‘this doesn’t feel right’ when you’ve told individuals that you’re collecting information and using it for one thing and now you’re using it for something else.”

Finding the line

Much has been made of ‘the creepy line’, the boundary most people draw between what is an appropriate use of data and what is not. The creepy line varies from person to person, which is part of why it is hard to define. It also moves as people get used to incremental encroachments on privacy: an activity tracker like Fitbit linking to social media, for example, prompts permission requests such as ‘Facebook would like to access your heart rate: Cancel/OK’. It is as much the creeping line as the creepy line.

Oliver was clever to frame the NSA issue in terms of whether the agency could see ‘dick pics’, because it was a clear way to show the general public how intimate the surveillance could become, even if the data did not actually comprise private photographs. Herold explains the line slightly differently but no less effectively. “The analogy that helps business leaders understand things is: ‘What if your child was using this service and you knew that your child was having all this information about them collected and used?’”

Unfortunately, there is no definitive way to draw the line except through legislation, and politics, along with public ignorance, can muddy the issue. In the meantime, the only thing security professionals can do is ensure the protection of the information collected and hope it is enough to defend against misuse.

What is metadata?

Metadata is data that describes other data. In the context of correspondence, it is the details about a communication, but not its content. If you made a phone call, for example, the metadata would include the number you called, the number you called from, where you and the person you called were at the time of the call, and the length of the call, but not the actual conversation you had.
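
To make the distinction concrete, the short sketch below (in Python) models what a retained call record might look like; the field names and values are illustrative assumptions, not drawn from the legislation or any carrier’s systems.

    # Illustrative sketch only: field names and values are assumptions,
    # not taken from the Data Retention Act or any telco's actual schema.
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class CallMetadata:
        caller_number: str     # the number the call was made from
        called_number: str     # the number that was called
        caller_location: str   # approximate location of the caller at the time
        called_location: str   # approximate location of the recipient
        start_time: datetime   # when the call began
        duration: timedelta    # how long the call lasted

    # A retention scheme keeps the envelope of the call...
    record = CallMetadata(
        caller_number="+61 2 5550 0001",
        called_number="+61 3 5550 0002",
        caller_location="Sydney",
        called_location="Melbourne",
        start_time=datetime(2015, 3, 19, 14, 30),
        duration=timedelta(minutes=12),
    )

    # ...but not the content: the conversation itself is never part of the record.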

The Attorney-General’s Department states:

Metadata is used in almost every serious criminal or national security investigation, including murder, counter-terrorism, counter-espionage, sexual assault and kidnapping cases. Agencies use metadata to help:

  • quickly rule innocent people out from suspicion and further investigation, for example by showing they had not been in contact with other suspects
  • identify suspects and networks of criminal associates
  • support applications to use more complex and intrusive tools, such as a warrant to intercept the content of communications
  • provide evidence in prosecutions.