(ORDO NEWS) — French researchers argue that autonomous robotic systems must learn to perceive space on their own terms, rather than through human-derived concepts. To that end, they have developed a new approach.
Researchers from the Sorbonne and the French National Center for Scientific Research (CNRS) studied how basic spatial concepts can emerge in robotic systems from the robot's own sensorimotor flow. Their work, published in the arXiv.org preprint database, is part of a larger project in which scientists examine how fundamental concepts of perception (body, space, object, color, and so on) can arise in biological or artificial systems.
Until now, the development of robotic systems has largely mirrored how a person perceives the world. As a result, robots guided exclusively by human intuition may be limited to perceiving only what people experience.
To create fully autonomous robots, researchers may have to step back from conventional methods and allow robotic agents to develop their own perception of the world. According to the team from the Sorbonne and CNRS, a robot should gradually develop perception by analyzing its sensorimotor experience and identifying regularities that make sense of it.
Alexander Terekhov, who worked on the project, and his colleagues showed that the concept of space as a phenomenon independent of the environment cannot be derived from exteroceptive information alone, since that information varies greatly with what is happening in the environment. The concept can be defined more clearly by studying the functions that connect the agent's motor commands with the resulting changes in external stimuli.
“An important clue comes from old research by the famous French mathematician Henri Poincaré, who was interested in how mathematics in general, and geometry in particular, can arise in human perception,” says Terekhov. “He suggested that the timing of touch could be crucial.”
Poincaré’s idea is easier to explain with a simple example. When we look at an object, the eye captures a specific image, which will change if the object moves 10 centimeters to the left. However, if we move 10 centimeters to the left, the image that we see will remain practically the same.
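One way to make Poincaré's intuition concrete is his notion of compensable changes: a sensory change caused by the environment shifting can be undone by a movement of our own body, and the same compensating movement works no matter what the scene contains. A minimal sketch, assuming a toy 1-D pinhole camera (the model and all numbers below are illustrative, not taken from the paper):

```python
import random

def retina(scene, cam_x):
    """Toy 1-D pinhole image (focal length 1): each scene point
    (x, z) at depth z projects to the pixel (x - cam_x) / z."""
    return [(x - cam_x) / z for x, z in scene]

random.seed(1)
d = 0.1  # arbitrary displacement
for _ in range(100):
    # random scene of four points in front of the camera
    scene = [(random.uniform(-5, 5), random.uniform(1, 10)) for _ in range(4)]
    base = retina(scene, cam_x=0.0)
    shifted = [(x - d, z) for x, z in scene]   # the environment moves left
    changed = retina(shifted, cam_x=0.0)       # the image changes...
    restored = retina(shifted, cam_x=-d)       # ...but stepping left restores it
    assert any(abs(a - b) > 1e-6 for a, b in zip(base, changed))
    assert all(abs(a - b) < 1e-9 for a, b in zip(base, restored))
```

The compensating command (step left by `d`) is the same for every randomly drawn scene; in Poincaré's view it is such scene-independent motor compensations, not the images themselves, that give rise to geometry.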
To apply these ideas to robotic systems, the scientists programmed a virtual robotic arm with a camera at its end. Every time the robot captured an image, it also recorded the measurements from the arm's joints.
“By combining all these measurements, the robot builds an abstraction that is mathematically equivalent to the position and orientation of its camera, even though it has no direct access to this information,” explains Terekhov. “Most importantly, even though this abstract concept is derived from images, it ultimately becomes independent of them, which means it works in any environment. Likewise, our concept of space does not depend on the specific scene we see.”
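The setup Terekhov describes can be sketched in a few lines. The model below (a planar three-joint arm with hypothetical link lengths, a toy 1-D pinhole camera, and made-up landmark points — none of these details come from the paper) shows the key property: two different joint configurations that put the camera in the same pose produce identical images in any scene, so the pose is exactly the kind of environment-independent abstraction a robot can extract from (joint reading, image) pairs:

```python
import math

L = (1.0, 1.0, 0.5)  # link lengths (hypothetical)

def camera_pose(q):
    """Forward kinematics of a planar 3-joint arm: position (x, y)
    and heading th of a camera mounted on the last link."""
    x = y = th = 0.0
    for qi, li in zip(q, L):
        th += qi
        x += li * math.cos(th)
        y += li * math.sin(th)
    return x, y, th

def image(scene, q):
    """Toy 1-D pinhole camera: each scene point in front of the
    camera maps to one pixel, determined entirely by the pose."""
    x, y, th = camera_pose(q)
    pix = []
    for px, py in scene:
        dx, dy = px - x, py - y
        depth = math.cos(th) * dx + math.sin(th) * dy
        lateral = -math.sin(th) * dx + math.cos(th) * dy
        if depth > 1e-6:                 # point is in front of the camera
            pix.append(lateral / depth)
    return pix

def alternate_config(q):
    """A second ('elbow-flipped') joint configuration reaching the
    same camera pose, via two-link inverse kinematics for the wrist."""
    x, y, th = camera_pose(q)
    wx = x - L[2] * math.cos(th)         # wrist = camera minus last link
    wy = y - L[2] * math.sin(th)
    c2 = (wx * wx + wy * wy - L[0]**2 - L[1]**2) / (2 * L[0] * L[1])
    q2 = -math.acos(max(-1.0, min(1.0, c2)))   # flip the elbow sign
    q1 = math.atan2(wy, wx) - math.atan2(L[1] * math.sin(q2),
                                         L[0] + L[1] * math.cos(q2))
    return (q1, q2, th - q1 - q2)

scene = [(3.0, 2.5), (2.2, 3.0), (3.5, 1.8)]   # made-up landmarks
q = (0.3, 0.8, -0.4)
q_alt = alternate_config(q)
# Clearly different joint readings...
assert max(abs(a - b) for a, b in zip(q, q_alt)) > 0.1
# ...but the same camera pose, hence the same image in this (or any) scene:
assert all(abs(a - b) < 1e-9
           for a, b in zip(image(scene, q), image(scene, q_alt)))
```

Here the pose is computed analytically for illustration; in the study, the robot has no access to it and must recover an equivalent abstraction statistically from many joint-image pairs.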