MIT Research Makes Robots Less Clumsy With Fingertip Cameras

New research from the Massachusetts Institute of Technology could help robots get a grip, literally.

Two research groups from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have published work on improving grippers in soft robotics, a field that, in contrast to traditional robotics, uses flexible materials.

The technology could lead to robotic hands that can not only grasp a variety of objects but also recognize what each object is, and even rotate and bend it.

In one project, called GelFlex, a research group embedded cameras in the "fingertips" of a two-finger gripper so that the robot arm can better grasp different shapes, from large objects such as cans to thin objects such as a CD case. The researchers found that GelFlex recognized differently shaped objects with over 96% accuracy.

Making the fingers out of transparent silicone, the group embedded cameras with fisheye lenses inside them. By painting the fingers with reflective ink and adding LED lights, each embedded camera could see how the silicone on the opposite finger deformed as it touched an object. A specially trained neural network then used data about how the fingers bent to estimate the shape and size of the object.
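The idea of inferring object geometry from finger deformation can be sketched with a toy model. Everything below is an illustrative assumption, not GelFlex's actual pipeline (which trains neural networks on raw camera images): here, two hypothetical bend-angle features extracted from the camera view are mapped to an object-width estimate with a simple least-squares fit.

```python
import numpy as np

# Illustrative sketch only: GelFlex's real system learns from camera
# images; this toy version assumes two bend-angle features (radians)
# and a roughly linear relationship to object width.

rng = np.random.default_rng(0)

# Simulated training data: wider objects force the finger to bend less.
n = 200
widths = rng.uniform(1.0, 8.0, n)  # object width in cm
bends = np.column_stack([
    0.30 - 0.02 * widths + rng.normal(0, 0.005, n),  # proximal bend
    0.50 - 0.04 * widths + rng.normal(0, 0.005, n),  # distal bend
])

# Fit width ~ w0 + w1*bend_proximal + w2*bend_distal via least squares.
X = np.column_stack([np.ones(n), bends])
coef, *_ = np.linalg.lstsq(X, widths, rcond=None)

def estimate_width(bend_pair):
    """Predict object width (cm) from a pair of bend angles."""
    return coef[0] + coef[1] * bend_pair[0] + coef[2] * bend_pair[1]

# Bend angles consistent with a ~5 cm object in the simulated model.
print(round(estimate_width([0.20, 0.30]), 1))
```

The real system replaces this hand-built linear model with a trained network, but the input-output structure is the same: deformation measurements in, geometry estimate out.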

"Our soft finger can provide high proprioception accuracy, accurately predict captured objects and withstand significant impacts without harming the interacting environment and itself," said Yu She, the lead author of the paper. "By restricting soft fingers with a flexible exoskeleton and performing high-resolution acquisition with embedded cameras, we open up a variety of functions for soft manipulators."

Another research team developed a sensor system that allowed soft robotic hands to correctly identify what they were picking up more than 90 percent of the time. The project builds a cone-shaped gripper on top of previous MIT and Harvard research on tactile sensors. The team is led by postdoc Josie Hughes and professor and CSAIL director Daniela Rus.

The sensors are miniature latex balloons connected to pressure transducers. Changes in pressure on these balloons are fed to an algorithm that classifies the object and determines how best to grasp it.

By adding the sensors to a robotic hand that is both soft and strong, the gripper can pick up small, delicate items like a potato chip and also classify what that item is. If the gripper understands what the object is, it can grasp the object correctly instead of turning the potato chip into crumbs.

While a robot feeding you potato chips may sound cool, the research is not just for show. Further work could potentially lead to a "robotic skin" that perceives the feel of different objects.
