Device security has been in the news this week, including a proposal for IoT security for smart devices, a more security-focused Android, and an Uber lawsuit.
The first indication that Internet of Things (IoT) devices posed a real security threat came in October 2016, when a botnet made up of devices such as security cameras infected with Mirai malware attacked Dyn, a provider of DNS services. The attack was a DDoS (distributed denial of service) assault on Dyn's servers: with millions of infected devices firing off requests every second, Dyn's DNS servers were overrun and websites all over the USA and Europe went down.
Over the past couple of years, the sad security state of so many IoT devices has become almost a cliché. So it's good news that this week the UK government published a report on the security of IoT devices, which includes a proposed code of practice for all connected devices.
While governments may sometimes seem inept when it comes to proclamations about technology (who can forget Australian Prime Minister Malcolm Turnbull's announcement that the laws of Australia trump the laws of mathematics?), the UK's Department for Digital, Culture, Media and Sport seems to have done a decent job of identifying the problems with IoT devices and proposing a sensible set of standards.
First, the code calls for an end to default passwords, a critical weakness of many such devices. Consumers often don't even know that their device lacks a unique password (or that they should always change the password on a new device!), so this requirement should go a long way toward securing the internet of things.
The proposed code of practice also calls for a vulnerability disclosure policy, giving security researchers a clear way to report flaws and vendors a defined process for acting on those reports.
The code also says that all devices and services should follow the principle of least privilege, which means giving users and software no more access than they need, and that software on devices should be verified with secure boot mechanisms, so that a device running compromised software won't connect without admin approval.
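To make the secure boot idea concrete, here is a minimal sketch (not any vendor's actual implementation) of the core check: a device refuses to run a firmware image unless it matches a signature computed with a key the attacker cannot reach. The key name and firmware strings below are illustrative assumptions; real secure boot chains use public-key signatures anchored in read-only hardware rather than a shared secret.

```python
import hashlib
import hmac

# Hypothetical device key for illustration; in real secure boot this would be
# a public key (or its hash) burned into read-only hardware, not a shared secret.
TRUSTED_KEY = b"example-device-root-key"

def verify_firmware(image: bytes, signature: bytes) -> bool:
    """Accept the firmware only if its MAC matches the trusted signature."""
    expected = hmac.new(TRUSTED_KEY, image, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking how many bytes matched
    return hmac.compare_digest(expected, signature)

firmware = b"official firmware build 1.2.3"
good_sig = hmac.new(TRUSTED_KEY, firmware, hashlib.sha256).digest()

print(verify_firmware(firmware, good_sig))                # True: boot proceeds
print(verify_firmware(b"tampered" + firmware, good_sig))  # False: refuse to boot
```

The point of the check is that tampered software fails verification before it ever runs, rather than being detected after it has already joined the network.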
The code also calls to ensure that personal data is protected, that systems are resilient in the face of outages, and that users can delete personal data (which is also part of the forthcoming EU-wide General Data Protection Regulation). Installation and maintenance of devices should be straightforward, says the code, and IoT providers should monitor any telemetry data they collect for issues with hardware or software security. Finally, the code mandates any data input should be validated so that systems cannot be compromised by deliberately formatted data or code.
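The input-validation requirement at the end of that list can be as simple as allow-listing what a field may contain before the data touches storage or a shell. A minimal sketch follows; the field name and permitted character set are assumptions chosen for illustration, not anything specified in the code of practice.

```python
import re

# Allow-list pattern for a hypothetical device-name field:
# 1-32 characters drawn from letters, digits, underscore and hyphen.
DEVICE_NAME = re.compile(r"[A-Za-z0-9_-]{1,32}")

def validate_device_name(name: str) -> str:
    """Reject anything outside the expected alphabet before it is used."""
    if not DEVICE_NAME.fullmatch(name):
        raise ValueError(f"invalid device name: {name!r}")
    return name

print(validate_device_name("kitchen-cam_01"))  # accepted unchanged
```

An allow-list like this rejects deliberately formatted payloads (quotes, semicolons, shell metacharacters) by default, instead of trying to enumerate every dangerous input.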
From a security point of view, IoT devices have a long way to go before they are robust and trustworthy. Stories such as the tale of the compromised CloudPets toys will linger in the minds of security professionals for a long time to come — but this code of practice is a great start.
This week, Android-watchers were focused on Google's plans for the next version of its mobile operating system, as the company on Wednesday revealed the first developer preview of what is known for now as Android P.
Android has long been considered the less secure of the major mobile platforms: its open-source roots and tweakability give it a broad attack surface, and its privacy controls have been less than stellar. With this latest version, though, it's clear that Google is focusing on features that better secure the OS.
One key update is that Android P will default to encrypting all app traffic, although developers will be able to opt out on an app-by-app basis. Google says in its blog post announcing the preview: “You'll now need to make connections over TLS, unless you explicitly opt-in to cleartext for specific domains.”
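The opt-in Google mentions is handled through Android's network security configuration file, which an app ships in its resources. A minimal sketch, with a placeholder domain, might look like this:

```xml
<!-- res/xml/network_security_config.xml -->
<network-security-config>
    <!-- Cleartext HTTP permitted only for this explicitly listed domain;
         everything else must use TLS. -->
    <domain-config cleartextTrafficPermitted="true">
        <domain includeSubdomains="true">legacy.example.com</domain>
    </domain-config>
</network-security-config>
```

The file is referenced from the app's manifest via the `android:networkSecurityConfig` attribute on the `<application>` element, so the cleartext exception is declared up front rather than scattered through networking code.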
Also good is that with this new version, apps won’t be able to access your phone’s microphone, camera, or sensors while idle unless they make it clear that they’re doing so. Google explains to developers: “While your app's UID is idle, the mic reports empty audio and sensors stop reporting events. Cameras used by your app are disconnected and will generate an error if the app tries to use them.”
Cloud backups of your device are also going to be better protected in Android P. If you want to restore your phone, you’ll be offered a choice of saved backups, just as you are now, but to access one of these, you’ll need to enter the lock code on your device. As those backups are encrypted, they’re not accessible to anyone without your lock code, so not even Google will be able to get to them.
Your browsing will be better protected, too. In addition to defaulting to TLS for app traffic, Android P will randomize your MAC address, the unique number that identifies your device to the network. You'll be able to generate a random MAC address for each network, making it much harder for marketers to track you and your phone.
The preview is available for developers, but be warned: it’s an early version and may not be stable. You’ll have to flash your device manually if you want to give it a spin. Currently, it’s only available for Google’s own Pixel, Pixel XL, Pixel 2, and Pixel 2 XL devices. A more inclusive beta program is expected to be available after Google I/O in May.
Uber has come under a lot of fire for its approach to its drivers’ and customers’ privacy, and it faced another blow to its reputation this week when Pennsylvania Attorney General Josh Shapiro said he was suing the ride-hailing app company for failing to disclose a breach in a timely manner.
The lawsuit alleges that Uber broke the state’s breach disclosure law when it failed to notify 13,500 Pennsylvania drivers that their personal details were among the data stolen by cybercriminals in October 2016.
The breach is thought to have affected 57 million Uber riders and drivers, and if that weren't bad enough, Bloomberg reported in November last year that Uber paid the alleged crooks a $100,000 ransom to cover up the theft of the data.
Uber kept the breach quiet for more than a year, which means, says Shapiro, that “Uber violated Pennsylvania law by failing to put our residents on timely notice of this massive data breach.”
Shapiro added that “instead of notifying impacted consumers of the breach within a reasonable amount of time, Uber hid the incident for over a year — and actually paid the hackers to delete the data and stay quiet. That’s just outrageous corporate misconduct, and I’m suing to hold them accountable and recover for Pennsylvanians.”
There’s no defined time limit for breach disclosure, but Shapiro and others feel that failing to disclose the hack for over a year, and (allegedly) actively working to cover it up, exceeds what might be described as “reasonable.”