The Shadow Robot Dexterous Hand is a robot hand that matches a human hand in size, shape, and movement capabilities.
Researchers from WMG, University of Warwick, have developed new AI algorithms that give the robotic hand the ability to learn how to manipulate objects. Robot hands can be used in fields such as manufacturing and surgery, and in dangerous work like nuclear decommissioning. In computer assembly, for example, handling microchips demands a level of precision that previously only human hands could achieve, so dexterous robot hands on assembly lines promise higher productivity.
Using Shadow's robotic hands, the researchers succeeded in making two hands pass and throw objects to each other, as well as spin a pen between their fingers. The algorithm is not limited to these particular tasks: it can learn any task, as long as that task can be simulated. The three-dimensional simulations were developed using Multi-Joint dynamics with Contact (MuJoCo), a physics engine created at the University of Washington.
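For readers unfamiliar with MuJoCo, the sketch below shows the basic load-and-step loop that simulations like these are built on. It uses the open-source MuJoCo Python bindings with a deliberately tiny single-joint model; this is illustrative only and is not the hand model used by the WMG team.

```python
import mujoco

# A minimal illustrative model: one capsule swinging on a hinge joint.
# Real dexterous-hand models contain dozens of joints and contact geoms.
XML = """
<mujoco>
  <worldbody>
    <body>
      <joint name="hinge" type="hinge" axis="0 1 0"/>
      <geom type="capsule" size="0.02" fromto="0 0 0 0 0 0.2"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)

# Advance the physics. In a learning setup, each step would follow an
# action written into data.ctrl, and observations would be read back
# from data.qpos / data.qvel.
for _ in range(1000):
    mujoco.mj_step(model, data)

print(f"joint angle after 1000 steps: {data.qpos[0]:.4f} rad")
```

Because the whole environment is simulated, a learning algorithm can attempt a task millions of times at no physical cost, which is what makes the "any task that can be simulated" claim practical.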
The researchers use two algorithms. The first is a planning algorithm that produces a few rough examples of how the hand should perform a particular task; these examples are then used by a reinforcement learning algorithm that masters the manipulation skills on its own. In the paper "PlanGAN: Model-based Planning With Sparse Rewards and Multiple Goals", the WMG researchers present a novel and general artificial intelligence approach that enables robots to learn tasks involving reaching and moving objects, which will further improve robotic hand manipulation applications.
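The two-stage idea can be sketched with a toy example. Everything below is a hypothetical stand-in, far simpler than PlanGAN itself: a one-dimensional environment with a sparse reward, noisy straight-line trajectories playing the role of the planner's rough examples, and a hill-climbing loop playing the role of the reinforcement learning stage. It only illustrates how rough plans can seed a learner that must otherwise cope with a reward it almost never sees.

```python
import numpy as np

rng = np.random.default_rng(0)
GOAL, EPS, HORIZON = 1.0, 0.05, 20

def rollout(actions):
    """Toy 1-D environment with a sparse reward: the agent scores
    only if it finishes within EPS of the goal position."""
    x = 0.0
    for a in actions:
        x += 0.1 * np.clip(a, -1.0, 1.0)
    return 1.0 if abs(x - GOAL) < EPS else 0.0

# Stage 1 (stand-in for the planner): a few rough, noisy action
# sequences that merely point toward the goal.
demos = [np.full(HORIZON, 0.5) + rng.normal(0, 0.3, HORIZON)
         for _ in range(5)]

# Stage 2 (stand-in for reinforcement learning): start from the
# average rough plan and keep random perturbations that do not
# lower the sparse episode reward.
actions = np.mean(demos, axis=0)
best_reward = rollout(actions)
for _ in range(500):
    candidate = actions + rng.normal(0, 0.05, HORIZON)
    reward = rollout(candidate)
    if reward >= best_reward:
        actions, best_reward = candidate, reward

print("reached goal:", bool(best_reward))
```

Without the rough demonstrations, a learner starting from random actions would almost never stumble on the sparse reward; seeding it with approximate plans is what makes the refinement stage tractable.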
Professor Giovanni Montana commented that the future of digitalisation relies on artificial intelligence algorithms that can learn by themselves, and that the team has been able to develop algorithms that give Shadow Robot's hand the ability to operate like a real human hand without any human input. Such autonomous hands could one day take on dangerous activities such as bomb disposal.
In future work, the researchers want robots to observe the environment as accurately as humans do: not only through computer vision algorithms that see the world, but also through sensors that detect vibration, force, and temperature, so that a robot can learn what to do when it feels these kinds of sensations.