At the end of July 2018, the San Francisco-based OpenAI research group published the results of a curious project called Learning Dexterity. At the center of the study was a robotic hand that learned to find a letter on a cube and then manipulate the cube’s position to hold that letter up to a camera for verification. The hand and the neural network to which it was connected were left to learn the activity on their own and then practice it.
There are three remarkable aspects to the story: the company that engineered the study, OpenAI; the way the experiment was set up; and the time spent practicing, which seemed to compress evolutionary timeframes.
OPEN SOURCE AI
In the blog post that launched OpenAI’s operations in December 2015, the founders announced their unique vision: “OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.”
OpenAI promised to build value for everyone rather than for shareholders. And they have practiced what they preach. On their blog (blog.openai.com), they not only publish the papers for their projects, but they also often share the code used in the experiments. They can afford to do this because backers, both individuals and companies, have committed $1 billion in support of the mission. The co-chairs of the organization are Sam Altman, president of Y Combinator, and Elon Musk, CEO of SpaceX.
PRACTICE, PRACTICE, PRACTICE
The Shadow Dexterous Hand, an off-the-shelf robotic hand from the London-based Shadow Robot Company, sits at the center of the Learning Dexterity experiment. OpenAI nicknamed their experimental system Dactyl; it combined the Shadow hand, three ordinary cameras, and a neural network trained across a thousand computers running a reinforcement-learning algorithm. The computers recreate virtual representations of Shadow’s situation in the physical world and use those representations to test solutions and practice movements.
Image from Shadowrobot.com
A cube with six sides in different colors, each face bearing a different letter of the alphabet, is placed on the open palm of the Shadow hand. Asked to find a particular letter, the robotic hand must shift the cube’s position until that letter faces the camera for verification. The process looks like this:
Because data flowed from the actual world into the networked computers, the system could deal with the problems of friction and gravity in a virtual realm and practice its manipulations at speeds that compressed the equivalent of 100 years of practice into several days. The reinforcement-learning algorithm used is similar to the one Google’s DeepMind computer used to teach itself the game-playing skills behind AlphaGo, which can now defeat human Go masters. (See SF TECHNOTES: “The Computer that Taught Itself To Win at Go”)
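The idea can be sketched in miniature. The toy program below is an illustrative sketch, not OpenAI’s code; every name and number in it is an assumption. It trains a one-parameter controller to push a block toward a target in a simulated world, randomizing the simulated friction on every trial so the learned behavior cannot overfit to any single set of physics settings:

```python
import random

# Toy stand-in for training in randomized simulation (assumed values throughout).
def simulate(gain, friction, steps=50):
    """Push a block from position 0.0 toward target 1.0; return -|error|."""
    pos, vel = 0.0, 0.0
    for _ in range(steps):
        force = gain * (1.0 - pos)            # simple proportional controller
        vel = (vel + 0.1 * force) * (1.0 - friction)
        pos += 0.1 * vel
    return -abs(1.0 - pos)                    # reward: closeness to the target

def average_reward(gain, trials=20):
    # Each trial draws a fresh friction value -- the randomization step.
    return sum(simulate(gain, random.uniform(0.05, 0.4))
               for _ in range(trials)) / trials

random.seed(0)
best_gain, best_score = 0.0, average_reward(0.0)
for _ in range(200):                          # crude random-search "training"
    candidate = best_gain + random.gauss(0.0, 0.5)
    score = average_reward(candidate)
    if score > best_score:
        best_gain, best_score = candidate, score

print(f"learned gain {best_gain:.2f}, average reward {best_score:.3f}")
```

The same principle, scaled up to a thousand machines and a deep neural network, is what allows a policy trained entirely in simulation to transfer to a physical hand.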
The Dactyl system figured out for itself several basic human grips and manipulations, including finger pivoting, sliding, and finger gaiting, in which the hand continuously replans its grasp as the fingers reposition. And then it practiced rapidly and endlessly.
The video is impressive, but Will Knight of MIT Technology Review notes, “The robotic hand is still nowhere near as agile as a human one, and far too clumsy to be deployed in a factory or a warehouse.” Rodney Brooks, professor emeritus at MIT and founder of Rethink Robotics, agrees that “it is not going to fit into an industrial workflow anytime soon. But that is fine.” He adds, “Research is a good thing to do.”
Another impressive part of the experiment, along with the idea of robots teaching themselves skill sets through trial and error, is the timescale. With a thousand machines coordinated within a neural network doing virtual exercises, 100 years of repetitions can be run in just a few days at superhuman speeds. But even a century wasn’t enough to create assembly-line-ready robotics. More learning and practice are needed.
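A back-of-the-envelope calculation shows what those numbers imply. Assuming, purely for illustration, 1,000 workers, 100 years of total simulated practice, and a four-day wall-clock run (the specific figures here are assumptions, not reported values from the experiment):

```python
# Back-of-the-envelope: how fast must each simulator run?
# All three inputs below are illustrative assumptions.
YEARS_OF_PRACTICE = 100
WORKERS = 1000
WALL_CLOCK_DAYS = 4

sim_days_total = YEARS_OF_PRACTICE * 365          # 36,500 simulated days
sim_days_per_worker = sim_days_total / WORKERS    # 36.5 days per machine
speedup_per_worker = sim_days_per_worker / WALL_CLOCK_DAYS

print(f"Each worker must simulate ~{speedup_per_worker:.1f}x faster than real time")
```

Even spread across a thousand machines, each simulator still has to run several times faster than real time, which is only possible because the practice happens in software rather than on physical hardware.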
The two sets of pentadactyl limbs that gave humanity such a considerable evolutionary advantage have proven to be especially problematic for android machines. The design can be straightforward—just copy the mechanical systems humans have at the ends of their arms and legs. Skillful operations, though, are something else entirely. Progress has been slow.
The human foot contains 26 bones, 33 joints, 19 muscles, and 57 ligaments and has been in development for several hundred million years along with the other systems working together to engineer our balance and mobility. Human hands are even more complex. Computers can shrink some of the time in development for android limbs, but you might expect dexterity and strength in android hands and feet to be among the last elements to become truly human-like.
ARTIFICIAL GENERAL INTELLIGENCE
In the updated “About” page on the OpenAI blog, two significant new elements have been added to the overall mission: safety and AGI. At the top of the page you will now read: “OpenAI is a non-profit AI research company, discovering and enacting the path to safe artificial general intelligence.” (AGI is defined as “artificial intelligence which matches or exceeds the intelligence and capabilities of human beings.”) The added requirement for safety is explained further with these statements:
“OpenAI’s mission is to build safe AGI, and ensure AGI’s benefits are as widely and evenly distributed as possible. We will not keep information private for private benefit, but in the long term, we expect to create formal processes for keeping technologies private when there are safety concerns.”
It appears the learning process continues at OpenAI for both the androids and researchers.