Deep Learning Helps Robots Grasp and Move Objects with Ease


Due to lockdowns and other COVID-19 safety measures, online shopping is more popular than ever, but the surge in demand is leaving many retailers struggling to fulfill orders while ensuring the safety of their warehouse employees.

Researchers at the University of California, Berkeley, have created Artificial Intelligence (AI) software that gives robots the speed and skill to grasp and smoothly move objects, making it feasible for them to assist humans in warehouse environments. The work is described in a paper published online on November 18, 2020, in the journal Science Robotics. Automating warehouse tasks can be difficult because many actions that come naturally to humans, such as deciding where and how to pick up different types of objects and then coordinating the shoulder, arm, and wrist movements needed to move each object from one place to another, are hard for robots. Robotic motion also tends to be jerky, which can increase the risk of damaging both the products and the robots.

According to Ken Goldberg, William S. Floyd Jr. Distinguished Chair in Engineering at UC Berkeley and senior author of the study, warehouses are still mostly operated by humans because it is very difficult for robots to accurately grasp many different objects. In an automobile assembly line, every motion is identical and repeated, so it can be automated, but in a warehouse every order is different.

The motions generated by the Grasp-Optimized Motion Planner, previously developed by Goldberg and UC Berkeley postdoctoral researcher Jeffrey Ichnowski, were jerky. The software's parameters could be tuned to generate smoother motions, but those calculations took an average of about half a minute to compute. In the new study, Goldberg and Ichnowski, collaborating with UC Berkeley graduate student Yahav Avigal and undergraduate student Vishal Satish, considerably sped up the computing time of the motion planner by integrating a deep learning neural network.

Neural networks allow a robot to learn from examples; afterward, the robot can generalize to similar objects and motions. Goldberg and Ichnowski found that the approximation generated by the neural network could then be refined by the motion planner.
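
To make the idea concrete, here is a minimal sketch of how a network could be trained to imitate a slow motion planner: it maps a pick-and-place specification to a full trajectory of joint waypoints, using example trajectories that the planner computed offline. The architecture, dimensions, and names below are illustrative assumptions, not the actual network used in the study.

```python
# Minimal sketch (assumed architecture, NOT the authors' actual network):
# a small MLP maps start/goal joint configurations to a fixed-length
# trajectory, trained on examples produced by a slow optimizing planner.
import torch
import torch.nn as nn

N_JOINTS, N_WAYPOINTS = 7, 20          # assumed arm and trajectory discretization

class MotionNet(nn.Module):
    def __init__(self):
        super().__init__()
        # input: concatenated start + goal configurations; output: full trajectory
        self.net = nn.Sequential(
            nn.Linear(2 * N_JOINTS, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, N_WAYPOINTS * N_JOINTS),
        )

    def forward(self, start_goal):
        return self.net(start_goal).view(-1, N_WAYPOINTS, N_JOINTS)

model = MotionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(specs, planner_trajs):
    """specs: (B, 2*N_JOINTS); planner_trajs: (B, N_WAYPOINTS, N_JOINTS)."""
    optimizer.zero_grad()
    pred = model(specs)
    loss = loss_fn(pred, planner_trajs)   # imitate the slow planner's output
    loss.backward()
    optimizer.step()
    return loss.item()
```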

Ichnowski noted that the neural network takes only milliseconds to compute a motion, so it is very fast, but its result is inaccurate. However, if that approximation is fed into the motion planner, the planner needs only a few iterations to compute the final motion. By combining the neural network with the motion planner, the team cut average computation time from 29 seconds to 80 milliseconds, less than one-tenth of a second.
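
The speed-up comes from warm-starting an iterative optimizer with the network's rough guess. The sketch below illustrates that pattern with a simple acceleration-minimizing smoother; the cost function, step size, and initialization are illustrative assumptions, not the paper's actual optimizer.

```python
# Minimal sketch (assumed cost and optimizer, NOT the paper's exact method):
# refine a coarse trajectory by gradient descent on squared accelerations,
# keeping the pick and place endpoints fixed. A good warm start from the
# network needs only a few iterations; a naive initialization needs many more.
import numpy as np

def smooth(traj, n_iters, step=0.1):
    """traj: (T, d) array of joint waypoints; endpoints are held fixed."""
    traj = traj.copy()
    for _ in range(n_iters):
        # finite-difference acceleration at interior waypoints
        acc = traj[:-2] - 2 * traj[1:-1] + traj[2:]
        grad = np.zeros_like(traj)
        grad[:-2] += acc
        grad[1:-1] -= 2 * acc
        grad[2:] += acc
        grad[0] = grad[-1] = 0.0          # do not move the pick/place poses
        traj -= step * grad
    return traj

# warm start: the network's fast-but-rough guess (stand-in random array here)
rough_guess = np.random.default_rng(0).normal(size=(20, 7))
refined = smooth(rough_guess, n_iters=10)   # a few iterations suffice
# a cold start (e.g., straight-line interpolation) would typically need far more
```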

Goldberg predicts that robots will be assisting in warehouse environments within the next few years. Having grown used to shopping online for groceries, clothing, and much more, people are likely to continue shopping this way even after the pandemic is over, and that is an opportunity for robots to support human workers.