SF Technotes

Designing Safe Robotics

By Michael Castelluccio
July 28, 2021

Robots can do precision spot welding and pack small boxes on the assembly line, so you would think that building a robotic assistant to help humans get dressed, and writing its software, wouldn’t be a major challenge.

 

Just how wrong that assumption is was demonstrated on July 12, 2021, in a paper published in Robotics: Science and Systems XVII—Online Proceedings from the MIT Computer Science & Artificial Intelligence Laboratory (CSAIL). The study, “Provably Safe and Efficient Motion Planning with Uncertain Human Dynamics,” describes a robotic arm that the researchers programmed to grab a vest and help a human slip it over their arm. The inherent difficulties are signaled in the phrases “provably safe” and “uncertain human dynamics”: the researchers needed to guarantee the user’s safety while building enough intelligence into the system to adjust to unpredictable movements by the person being assisted.

 

UNCERTAIN HUMAN DYNAMICS

 

If you’ve ever seen a demonstration of one of those educational robotic arms that you “program” by grabbing the hand and slowly guiding it through the motion you want it to make, trusting it to remember and replay the routine at a command from your laptop, the process looks like a snap. The MIT researchers, however, were working with a more powerful machine, so they began with a specific definition of safety.

 

They formally defined “human physical safety as collision avoidance or safe impact in the event of a collision.” If you think of putting a vest on a person with limited motion due to a frozen shoulder or chronic arthritis, you can see why collision avoidance, the default protection for assembly-line robots, would not be sufficient on its own. Some contact is likely necessary.
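That two-part definition can be captured in a few lines of Python. This is only an illustrative sketch, not the paper’s formulation: the function name, the clearance/force inputs, and the 25 N threshold are all assumptions made up for the example.

```python
def is_safe(clearance_m: float, impact_force_n: float,
            force_limit_n: float = 25.0) -> bool:
    """Human physical safety: either no collision, or a safe (low-force) impact.
    The 25 N harm threshold is illustrative, not a figure from the paper."""
    no_collision = clearance_m > 0.0
    safe_impact = impact_force_n <= force_limit_n
    return no_collision or safe_impact

# A gentle brush against the arm (contact, 5 N) still counts as safe:
print(is_safe(clearance_m=0.0, impact_force_n=5.0))   # True
# A hard collision (contact, 80 N) does not:
print(is_safe(clearance_m=0.0, impact_force_n=80.0))  # False
```

The point of the disjunction is exactly the vest scenario above: a pure collision-avoidance rule would fail the moment the sleeve touched the arm, while this definition tolerates contact as long as it stays gentle.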

 

The second problem was the need to address “uncertain human dynamics,” everything from a person’s limited ability to assist to the flinching you might expect from a confused elderly client. These problems require a unique intelligence on the part of the robot assistant.

 

MIMICKING HUMAN ABILITIES

 

When engineers were trying to teach early robots to walk across a room with obstacles or uneven surfaces, the machines’ slow, awkward progress was kind of humorous. Some of the amusement was likely due to our assumption that learning to balance and walk should be easier, a misjudgment rooted in the idea that parents teach their children to walk with little more than patience.

 

But in fact, only a child can teach itself to walk. Parents assist, but it’s the child’s neural connections among numerous muscular systems that keep the learner erect and moving. We don’t remember the specifics of those lessons because they happened within the black box of our own neural mappings. But the engineers suffering on the sidelines as their robots loudly crashed were doing their best to mimic that human ability, one they only vaguely understood.

 

So, how did the CSAIL researchers begin the education of their robot personal assistant? Like a parent, they didn’t try to teach it what to do in every possible instance. They built in “proper human modeling” based on how humans move, react, and respond, and they removed an important limitation known as the “freezing robot problem.” If the default is to prohibit contact with the human to avoid clumsy bumps or injuries, the arm will stop at any point where it anticipates contact. Shen Li, a lead author of the CSAIL paper, explains, “By allowing robots to make non-harmful impact with humans, our method can find efficient robot trajectories to dress the human with a safety guarantee.”
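The difference between a “freezing” planner and one that tolerates gentle contact can be sketched as a trajectory-selection rule. Everything here is a toy assumption, not the paper’s planner: each candidate trajectory is reduced to a (duration, predicted peak contact force) pair, and the force limit is invented for the example.

```python
FORCE_LIMIT_N = 25.0  # illustrative harm threshold, not from the paper

def pick_trajectory(candidates):
    """candidates: list of (duration_s, predicted_peak_force_n) pairs.
    A 'freezing' planner would reject any trajectory with contact (force > 0);
    allowing non-harmful impact keeps faster options on the table."""
    safe = [c for c in candidates if c[1] <= FORCE_LIMIT_N]
    if not safe:
        return None  # freeze and wait: no provably safe motion exists
    return min(safe, key=lambda c: c[0])  # fastest safe trajectory

options = [(4.0, 0.0),   # slow, contact-free detour
           (1.5, 8.0),   # fast, brushes the arm gently
           (1.0, 60.0)]  # fastest, but a harmful impact
print(pick_trajectory(options))  # (1.5, 8.0)
```

Under a strict no-contact rule, only the 4-second detour would survive; admitting low-force contact lets the planner choose the much faster middle option while still refusing the harmful one.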

 

But what is the response when contact is made? How does the system adjust? An MIT blog post explains, “Instead of having a single, default model where the robot only understands one potential reaction, the team gave the machine an understanding of many possible models, to more closely mimic how a human can understand other humans. As the robot gathers more data, it will reduce uncertainty and refine those models.” In other words, it will learn.
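One standard way to “reduce uncertainty” over many candidate models is a Bayesian belief update, shown below as a minimal sketch. The three model names and all the numbers are invented for illustration; the source doesn’t specify the update rule the team used.

```python
def update_belief(belief, likelihoods):
    """One Bayes step: scale each model's weight by how well it explains
    the latest observation, then renormalize so the weights sum to 1."""
    posterior = [b * l for b, l in zip(belief, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Start with no preference among three hypothetical human models:
belief = [1/3, 1/3, 1/3]  # "cooperative", "frozen shoulder", "flinching"
# The arm moves; the observed reaction is most likely under the second model:
belief = update_belief(belief, likelihoods=[0.1, 0.7, 0.2])
print([round(b, 2) for b in belief])  # [0.1, 0.7, 0.2]
```

Each interaction shifts weight toward the models that best explain what the person actually did, which is the “gathering more data” the blog post describes.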

 

The smarter it gets, the more able it is to “reduce uncertainty and refine” the initial models. This is the hand-holding as the first steps are attempted. The hope is that the learning done by the robot will simply make it more humanlike in its responses.

 

The paper sums up the importance of this permission with, “To the best of our knowledge, [this] is the first work to provide a probabilistic safety guarantee under the uncertainty in human dynamic models for human-robot systems.”

 

NEXT UP IN REINFORCEMENT LEARNING

 

This progress in allowing nonharmful impact may seem a painfully short step toward a robot assistant that can safely touch the humans it works with, but there’s another dimension here: the next step in humanizing these assistants, machine learning. Tiernan Ray, in a recent ZDNet posting, “Way Beyond AlphaZero,” claims, “The hardest and perhaps the most promising work of deep learning may lie in the realm of robotics, where the real world introduces constraints that cannot be fully anticipated.” He quotes Sergey Levine of the UC Berkeley department of electrical engineering and computer sciences, who also believes “real-world tasks in general present the greatest challenge—but also the greatest opportunities—for reinforcement learning.”

 

Machine learning, a branch of AI, comes in three forms: supervised learning, unsupervised learning, and reinforcement learning. Reinforcement learning involves searching for possible solutions while carefully noting the consequences, both of which are stored in memory to inform future tasks. It essentially relies on two learned functions that help decide future paths.

 

One is called a value function, and the other a policy function. Together they encode goals and values and direct the creation of a search history. It’s basically a learn-by-trying system, and Ray points out that “all the calculations are based on the notion of an ultimate reward, such as winning the game of chess.” Reinforcement learning can produce astonishing results: the strongest Go player in the world today is a program that taught itself to play at an unbeatable masters level using reinforcement learning algorithms.
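The learn-by-trying loop, the value function, and the policy can all be seen in a tiny tabular Q-learning example. This is a generic textbook sketch on an invented five-state corridor, not anything from the paper or from AlphaZero; the learning rate, discount, and exploration rate are arbitrary choices.

```python
import random
random.seed(0)

# Learn-by-trying on a tiny corridor: states 0..4, reward only at state 4.
N_STATES, GOAL, ACTIONS = 5, 4, (-1, +1)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}  # value function

def policy(s, eps=0.1):
    """Policy function: usually the best-known action, occasionally explore."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(s, a)])

for _ in range(500):                        # episodes of trial and error
    s = 0
    while s != GOAL:
        a = policy(s)
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0      # the "ultimate reward"
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += 0.5 * (r + 0.9 * best_next - Q[(s, a)])  # value update
        s = s2

# After training, the greedy policy heads straight for the goal:
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)])  # [1, 1, 1, 1]
```

The stored table Q is the “search history” of consequences: the value of each action in each state, backed up from the ultimate reward, and the policy simply reads off the best-valued choice.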

 

DeepMind, the producer of that champion program, has just announced that it “has used its AI to predict the shapes of nearly every protein in the human body, as well as the shapes of hundreds of thousands of other proteins found in 20 of the most widely studied organisms, including yeast, fruit flies, and mice. The breakthrough could allow biologists from around the world to understand diseases better and develop new drugs.” The story, in the July 22, 2021, MIT Technology Review, also quotes DeepMind developers who say that in the next few months the list will grow by 100 million more structures, with every known protein as the final goal.

 

Applied to mapping the folded shapes of all known proteins, DeepMind’s AI seems to have almost no limits. But mapping the neural networks that control how we adjust our responses while dressing a fidgeting child, in order to build a machine that can mimic the same, is still well out of reach. And it appears that, for a time, those attempts will look as sad as the YouTube videos of robots crashing as they try to jump over a small box.

 



Michael Castelluccio has been the technology editor for Strategic Finance for 26 years. His SF TechNotes blog is in its 23rd year. You can contact Mike at mcastelluccio@imanet.org.

