Industrial robots perform repetitive tasks with an accuracy and speed far superior to those of humans. Yet, to date, they have mostly been confined to highly controlled factories designed around them. This confinement is explained by the way industrial robots function: they execute programs that are specific to one task and that assume a particular environment. If the task or the environment changes, the robot has to be reprogrammed. Consider, for instance, a car-assembly robot that picks up wheels from a feeder and bolts them to an axle. In this scenario, the position of the feeder and the pick-and-bolt behavior are hard-coded into the robot's program. If we move the robot to another factory where the feeder is placed differently, or to a factory where the robot is expected to remove wheels instead of attaching them, a technician will need to reprogram it before it can work again. In short, today's robots remain far less versatile than humans, which is why we mostly use them in workplaces designed for them.
Robots working in controlled vs. uncontrolled environments. Left: an industrial robot fixed at its workstation; right: a household robot in a kitchen.
The robotics research community is currently striving to develop robots that can operate in regular factories, houses, offices, or hospitals. Developing such robots is difficult because of the diversity inherent to human environments. Room layouts differ from one building to another. Most common objects exist in different sizes or colors, and vary in weight or stiffness. Preprogramming a robot to readily work in an arbitrary house or factory is impractical, as it would require the robot to have access to the complete layout of any building it enters, to have models of all the objects and tools it may need to manipulate or use, and to have preprogrammed behaviors adapted to every combination of tools and tasks. In response, the community has moved beyond preprogrammed designs and is now developing robots that learn to adapt to new tasks and environments. By observing the environmental effects of their actions and the actions of others, these robots can progressively acquire the knowledge necessary to execute their work. Consequently, the “program” that governs the robot's actions evolves over time.
In my own research, I have developed robotic agents that learn to move and manipulate objects, for instance to pick up a plate from a table and place it in a dishwasher, or to slide a dish into an oven. By reproducing tasks demonstrated by a human, and by experimenting on its own, the robot learned how to place its hand on various objects in order to grasp them. It also learned to exploit tactile cues to maintain the stability of a grasp: through experimentation, it came to recognize that certain tactile signals indicate that the grasp is wrong, or that the object is slipping away, and that a reactive action is therefore necessary. As the robot became familiar with a small set of objects, it progressively abstracted generic skills from its experience. In turn, these skills could be transferred to novel objects as they appeared, allowing the robot to adapt quickly to a changing environment.
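The reactive behavior described above, monitoring tactile signals and reacting when a slip cue appears, can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the actual controller from my research: the slip cue (a sudden drop in measured contact force), the thresholds, and all function names are illustrative.

```python
# Hypothetical sketch of a reactive grasp controller. Tactile readings are
# monitored, and a sudden drop in contact force (a common slip cue) triggers
# a corrective increase in grip force. Names and thresholds are illustrative.

def detect_slip(prev_force, curr_force, drop_threshold=0.3):
    """Flag a slip when contact force drops sharply between two readings."""
    return (prev_force - curr_force) > drop_threshold

def reactive_grip(force_readings, base_grip=1.0, boost=0.5):
    """Return the grip command issued after each new reading.

    The grip command is increased whenever a slip cue is detected,
    tightening the grasp before the object is lost.
    """
    grip = base_grip
    commands = []
    for prev, curr in zip(force_readings, force_readings[1:]):
        if detect_slip(prev, curr):
            grip += boost  # reactive action: tighten the grasp
        commands.append(grip)
    return commands

# Simulated contact forces (N): two sharp drops, hence two corrections.
readings = [2.0, 1.9, 1.2, 1.1, 0.5]
print(reactive_grip(readings))  # → [1.0, 1.5, 1.5, 2.0]
```

In a real system the slip cue would come from learned patterns in rich tactile data rather than a fixed threshold, but the control loop has the same shape: sense, detect, correct.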