MIT develops AI algorithm to connect sight and touch for Robotics

By Srikanth

Robotics continues to advance toward machines that can work safely and accurately alongside people, making everyday tasks easier for humans.

Researchers at MIT have developed an AI algorithm that lets robots learn to see by touching and to feel by seeing, taking them a step closer to safer operation around people. The new AI-based system can create realistic tactile signals from visual inputs, and can predict which object, and what part of it, is being touched directly from those tactile inputs. Robots could thus be developed into friendlier, more accurate machines suited to advanced human-oriented tasks in the future.
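To give a rough idea of what "predicting touch from sight" means in practice, here is a minimal, illustrative PyTorch-style sketch of a vision-to-touch predictor. The architecture, tensor shapes, and class name below are assumptions for exposition only; the CSAIL system described in the article is a more sophisticated generative model trained on its paired dataset.

```python
# Illustrative sketch only: a toy vision-to-touch predictor.
# The real MIT/CSAIL model is a learned generative system trained on
# paired visual/tactile data; shapes and layers here are assumptions.
import torch
import torch.nn as nn

class VisionToTouch(nn.Module):
    """Maps an RGB camera frame to a predicted tactile (GelSight-style) image."""
    def __init__(self):
        super().__init__()
        # Encoder: compress the visual frame into a feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: expand the features back into a tactile image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, visual_frame):
        return self.decoder(self.encoder(visual_frame))

# Example: one 256x256 RGB frame in, one predicted tactile image out.
model = VisionToTouch()
frame = torch.rand(1, 3, 256, 256)
predicted_touch = model(frame)  # shape: (1, 3, 256, 256)
```

The touch-to-vision direction mentioned later in the article would simply reverse the roles of the two modalities.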

“By looking at the scene, our model can imagine the feeling of touching a flat surface or a sharp edge,” said Yunzhu Li, PhD student and lead author from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).


The research team recorded nearly 200 objects, such as tools, household products, and fabrics, being touched more than 12,000 times, using just a web camera. Breaking those 12,000 video clips down into static frames, the team compiled “VisGel,” a dataset of more than three million visual/tactile paired images. “Bringing these two senses (vision and touch) together could empower the robot and reduce the data we might need for tasks involving manipulating and grasping objects,” said Li.
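As a rough illustration of how such paired data can be organised, the sketch below matches visual and tactile frames extracted from synchronised recordings. The directory layout, filenames, and helper function are hypothetical and not the actual VisGel tooling.

```python
# Hypothetical sketch: pairing synchronised visual and tactile frames
# by shared filename. The file layout here is an assumption, not VisGel's.
from pathlib import Path

def build_pairs(visual_dir: str, tactile_dir: str):
    """Match visual and tactile frames by filename (e.g. clip_0001_frame_042.png)."""
    visual = {p.name: p for p in Path(visual_dir).glob("*.png")}
    tactile = {p.name: p for p in Path(tactile_dir).glob("*.png")}
    # Keep only frames that exist in both modalities at the same timestamp.
    common = sorted(visual.keys() & tactile.keys())
    return [(visual[name], tactile[name]) for name in common]

pairs = build_pairs("frames/visual", "frames/tactile")
print(f"{len(pairs)} visual/tactile pairs")
```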

“By blindly touching around, our model can predict the interaction with the environment purely from tactile feelings,” Li added. However, all of these interactions took place in a controlled environment. The team now hopes to apply the algorithm in more diverse, less structured settings, making robots better prepared for real-world use in the future.
