SF Technotes

My Robot’s Staring at Me

By Michael Castelluccio
March 9, 2017

The writing was on the wall as of last October. A denial-of-service attack had slowed the internet worldwide to a stumble, and security experts pointed to IoT (Internet of Things) smarthome devices like home-security cameras as the 100,000 malicious endpoints that provided access for the hackers. With the alarm sounded, IoT makers had a new set of features to develop. The message was pretty simple—harden your devices against hackers.

Now, four months later, the same call has been issued in a research paper posted by international security consultant IOActive of Seattle, Wash. The title of the paper is “Hacking Robots Before Skynet,” and the subtitle says it all: “IOActive finds rampant security vulnerabilities in home, business, and industrial robots.” The “Skynet” in the title is an ominous reference to the fictional AI system in the Terminator movies. Wikipedia defines Skynet as “a neural net-based conscious group mind…[that has] an overarching, global, artificial intelligence hierarchy which seeks to exterminate the human race in order to fulfill the mandates of its original coding.” That’s some uncomfortable malware.


The research paper was written by IOActive’s CTO, Cesar Cerrudo, and senior security consultant, Lucas Apa. Cerrudo explained the motivation for examining the hardware, software, and networks for AI-enabled robots: “Robots will soon be everywhere—from toys to personal assistants to manufacturing workers—the list is endless. Given this proliferation, focusing on cybersecurity is vital in ensuring these robots are safe and don’t present serious cyber or physical threats to the people and organizations they’re intended to serve.”


IOActive tested mobile applications, robot operating systems, firmware images, and other software over a six-month period. The robots they worked with were from a number of vendors, including SoftBank Robotics, UBTECH Robotics, ROBOTIS, Universal Robots, Rethink Robotics, and Asratec Corp. They focused on home, business, and industrial robots, along with robot-control software that’s common to several manufacturers.


“Given the huge attack surface,” Apa explained in a press release, “we found nearly 50 cybersecurity vulnerabilities in our initial research alone, ranging from insecure communications and authentication issues, to weak cryptography, memory corruption, and privacy problems, just to name a few.” A hacker exploiting these vulnerabilities could add a variety of dangerous intents to the malfunctions, from stealing personal information or enabling surveillance using the robot’s microphones and cameras to the remote capture and complete control of the robot.




Factories and businesses added 10% more robots in 2016 than in the year before. Reports indicate that overall spending on robots will reach $188 billion by 2020, so it’s time to get serious about security.


The IOActive research team decided to create a foundation of “practical cyberattacks against robot ecosystems” that would include hardware, software, and networks. Many of the 50 cybersecurity vulnerabilities they discovered were shared by a number of the ecosystems examined. They explained that their testing was not even a deep, extensive security audit. The goal was to gain a high-level sense of how insecure today’s robots are. Seven serious issues emerged.


Insecure Communications

You can communicate with a robot in a number of ways. You can program it with your computer or send commands in real time with a mobile app, and the robot can connect with internet services/cloud for updates and applications. If those channels are not secure, there’s a problem. Most of the robots tested by the team used insecure communications, sending information in unencrypted cleartext or with weak encryption over Bluetooth, Wi-Fi, and other connections.
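The fix for cleartext channels is standard transport encryption. As a minimal sketch (not drawn from the report, and with a hypothetical endpoint name), here is how a robot’s companion app could prepare a TLS-protected channel instead of opening a raw socket:

```python
import ssl

def secure_client_context():
    """Build a TLS context for talking to a robot's command service."""
    ctx = ssl.create_default_context()            # sane, modern TLS defaults
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols
    # The default context verifies the server's certificate and hostname --
    # exactly the checks a cleartext or weakly encrypted link skips.
    return ctx

ctx = secure_client_context()
# ctx.wrap_socket(sock, server_hostname="robot.example.com") would then
# carry commands encrypted end to end instead of readable on the wire.
```

Wrapping the socket this way means a hacker sniffing the home Wi-Fi sees ciphertext, not the commands and credentials themselves.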


Authentication Issues

As with any computer system, you need to control which users are authorized to access the robot’s systems. Usually passwords and usernames prevent unauthorized access. The team found key services that didn’t require a username and password. They described this as “one of the most critical problems we found, allowing anyone to remotely and easily hack the robots.”
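The missing check is easy to picture. In this minimal sketch, a hypothetical command service (the token and command names are invented for illustration) refuses to run any robot function unless the caller presents a valid credential:

```python
# Tokens would be issued by a real login flow; one is assumed here.
VALID_TOKENS = {"tok-123"}

def handle(command, token=None):
    """Run a robot command only for an authenticated caller."""
    if token not in VALID_TOKENS:   # the gate IOActive found absent
        return "401 unauthorized"
    return "executing " + command

print(handle("arm.wave"))                   # no credentials: refused
print(handle("arm.wave", token="tok-123"))  # authenticated: runs
```

The services IOActive flagged skipped the first branch entirely, so any remote caller landed straight in the second.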


Missing Authorization

They also found that most of the robots didn’t require sufficient authorization to protect basic functions like installing applications or updating the operating system software.


Weak Cryptography

Robots can store sensitive information like passwords, encryption keys, users’ email and social media credentials, and vendor service credentials. The researchers found most robots either were not using encryption or were not using it properly.
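Doing it properly doesn’t take much. As a minimal sketch (assuming nothing about any vendor’s firmware), a stored credential can be kept as a salted hash rather than in cleartext, and checked in constant time:

```python
import hashlib
import hmac
import secrets

def make_record(password):
    """Store a credential as a salted PBKDF2 hash, never as cleartext."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check(record, attempt):
    """Verify an attempt with a constant-time comparison."""
    salt, digest = record
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

record = make_record("correct horse battery")
# If the robot's storage is dumped, the record reveals no usable password.
```

A robot that keeps only records like this has far less to lose when its storage is read out by an attacker.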


Privacy Issues

They found some robots’ mobile applications were sending users’ private information to remote servers without the users’ consent or knowledge. This included mobile network information, device information, and GPS location.


Weak Default Configuration

Most robots come with a set of features that are both accessible and programmable. The IOActive team found that some had insecure features that couldn’t be disabled or protected, and some shipped with default passwords that were difficult or impossible to change. This is a problem because default passwords are often publicly known, and because some models share the same defaults, discovering them is easy.
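One common safeguard is to refuse to operate until the default is gone. In this minimal sketch (the default values are hypothetical), a first-boot check blocks network services while a publicly known password is still in place:

```python
# Publicly known defaults, hypothetically shared across several models.
KNOWN_DEFAULTS = {"admin", "1234", "robot"}

def safe_to_start(current_password):
    """Allow network services only once the default password is replaced."""
    return current_password not in KNOWN_DEFAULTS

print(safe_to_start("admin"))      # still the default: block startup
print(safe_to_start("k9!vR2#xQ"))  # user-chosen password: proceed
```

A check like this, paired with a password that can actually be changed, closes the door the shared defaults leave open.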


Vulnerable Environments

Because much of robotics grew out of open source research, many robots use common open source frameworks and libraries, like the Robot Operating System (ROS) found in several of the test robots from different vendors. The team explains, “ROS suffers from many known cybersecurity problems, such as cleartext communication, authentication issues, and weak authorization schemes. Each robot that we tested had many of the issues.”



What are intended to be major selling points for robots can also be problematic. A home robot with microphones and a camera system creates an always-on surveillance opportunity for the hacker. If your robot can record or stream conversations or video over a network connection, that needs to be locked down. If your robot has a voice, it could even leverage voice-controlled systems like Amazon Echo to do things. Things like: “Alexa, please unlock the smarthome lock on the front door and turn off the home alarm system.”


Robots are computers with legs or wheels, and when the primary user is away from home, a compromised robot can put its mobility to questionable use. “A hacked robot becomes an inside threat, providing all of its functionality to external hackers.” In an industrial setting that might be disastrous, especially if “human safety protections and collision avoidance/detection mechanisms can be disabled by hacking the robot’s control services.” There have been fatal accidents involving malfunctioning robots on assembly lines along with deaths resulting from robotic surgeries.


The report has a three-page sampling of possible disasters arriving at the hands (clamps) of hacked robots. These are the Skynet nightmares alluded to in the paper’s title. The least disturbing of these is the ransomware example: “Once a home robot is hacked, it’s no longer the family’s robot, it’s essentially the attacker’s. Ransomware has been on the rise generally, and it’s on the horizon for robots as well, where people will have to pay attackers to regain the use of their robots.” The hijacked robot is likely to cost much more to ransom than a bricked laptop taken over by a ransomware hacker.


Other possible disasters discussed include much more disturbing scenarios possible from industrial and military robots.




Although it will be expensive to harden robots against every threat, the IOActive team provides a nine-point checklist of measures they consider essential. Without them, we can expect the home healthcare robot that suddenly starts staring and using really inappropriate language, or the assembly-line automaton that hacks the other robots on the network and shuts the entire line down.


Here’s what’s needed at a minimum for safe robotics:


Security from Day One: Vendors must implement Secure Software Development Life Cycle (SSDLC) processes to ensure that security is wired in from the start of development rather than bolted on at delivery.


Encryption: Strong encryption must be applied wherever the robot communicates or stores sensitive data.


Authentication and Authorization: This includes requiring passwords and locking down user and network permissions so that only authorized parties can reach the robot’s functions.


Factory Restore: Provide a setting that will give you a clean start back to the factory defaults.


Secure by Default: Security shouldn’t be something that you add as options after you begin using the robot.


Secure the Supply Chain: If a robot includes modules or software from outside the manufacturing facility, these units need the same hardening as the rest of its build.


Education: Vendors must educate everyone in their organization, providing training for engineers and developers, executives, and all others involved in product decisions.


Vulnerability Disclosure: Vendors need a clear communication channel for reporting cybersecurity issues with an individual or team in place to handle all reports.


Security Audits: A complete security assessment must be performed on all of the robot’s ecosystem components prior to going into production.


The IOActive research report has been sent to the vendors. In a few months, following last week’s disclosure, the team intends to publish the full technical details of the 50+ vulnerabilities they discovered.


This disruptive new technology can’t be rolled out as carelessly as the IoT devices that recently proved to be ports of entry for all kinds of hacker exploits.


Michael Castelluccio has been the Technology Editor for Strategic Finance for 23 years. His SF TECHNOTES blog is in its 20th year. You can contact Mike at mcastelluccio@imanet.org.
