Robot overcomes uncertainty to retrieve buried objects

For humans, finding a lost wallet buried under a pile of items is pretty easy – we just remove things from the pile until we find the wallet. But for a robot, this task requires complex reasoning about the pile and the objects in it, which poses a steep challenge.

MIT researchers previously demonstrated a robotic arm that combines visual information and radio frequency (RF) signals to find hidden objects tagged with RFID tags (which reflect signals sent by an antenna). Building on that work, they have now developed a new system that can efficiently retrieve any object buried in a pile. As long as some items in the pile have RFID tags, the target item itself does not need to be tagged for the system to recover it.

The algorithms behind the system, known as FuseBot, reason about the probable location and orientation of objects buried beneath the surface of the pile. FuseBot then finds the most efficient way to remove obstructing objects and extract the target item. This reasoning enabled FuseBot to find hidden items in half the time of a state-of-the-art robotic system.

This speed could be especially useful in an e-commerce warehouse: a robot tasked with processing returns could find items in an unsorted pile more efficiently with the FuseBot system, says senior author Fadel Adib, an associate professor in the Department of Electrical Engineering and Computer Science and director of the Signal Kinetics group in the MIT Media Lab.

“What this paper shows, for the first time, is that the mere presence of an RFID-tagged item in the environment makes it much easier for you to perform other tasks in a more efficient way. We were able to do this because we added multimodal reasoning to the system – FuseBot can reason about both vision and RF to understand a pile of items,” says Adib.

Joining Adib on the paper are research assistants Tara Boroushaki, the lead author; Laura Dodds; and Nazish Naeem. The research will be presented at the Robotics: Science and Systems conference.

Targeting Tags

A recent market report indicates that more than 90 percent of U.S. retailers now use RFID tags, but the technology is not universal, leading to situations where only some of the objects in a pile are tagged.

This problem inspired the group’s research.

With FuseBot, a robotic arm uses an attached video camera and RF antenna to retrieve an untagged target item from a mixed pile. The system scans the pile with its camera to create a 3D model of the environment. At the same time, it sends signals from its antenna to locate RFID tags. These radio waves can pass through most solid surfaces, allowing the robot to “see” deep into the pile. Because the target item is not tagged, FuseBot knows it cannot be located in exactly the same spot as an RFID tag.

Algorithms fuse this information to update the 3D model of the environment and highlight potential locations of the target item, whose size and shape the robot knows. The system then reasons about the objects in the pile and the locations of the RFID tags to determine which item to remove, with the goal of finding the target item in the fewest moves.
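In spirit, this fusion step can be pictured with a short sketch. The code below is illustrative only, assuming a voxel occupancy grid built from the camera and tag positions localized by the antenna; every name and scoring choice here is our assumption, not FuseBot's actual implementation.

```python
# Illustrative sketch only: the paper gives the exact formulation. We assume a
# camera-derived voxel occupancy grid and RF-localized tag positions; all
# function and variable names are hypothetical.
import numpy as np
from scipy.ndimage import minimum_filter

def target_location_prior(occupancy, tag_voxels, target_shape):
    """Score each voxel as a candidate location for the untagged target item.

    occupancy    -- 3D float array in [0, 1]: camera-estimated chance a voxel is filled
    tag_voxels   -- list of (x, y, z) voxel indices where RFID tags were localized
    target_shape -- 3D boolean array: voxelized size and shape of the target item
    """
    prior = occupancy.copy()

    # Negative information from RF: the target is untagged, so it cannot sit
    # exactly where a tag (and hence a tagged item) was localized.
    for v in tag_voxels:
        prior[v] = 0.0

    # Slide the target's voxelized shape over the grid: a location keeps a
    # high score only if every voxel the item would cover could be filled.
    prior = minimum_filter(prior, footprint=target_shape, mode="constant", cval=0.0)

    return prior / max(prior.sum(), 1e-9)  # normalize into a distribution
```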

It was a challenge to get this reasoning into the system, says Boroushaki.

The robot cannot be sure how objects beneath the surface of the pile are oriented, or how a squishy item might be deformed by heavier items pressing on it. It overcomes this challenge with probabilistic reasoning, using what it knows about each object’s size and shape and the location of its RFID tag to model the 3D space that object is likely to occupy.
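A rough illustration of this probabilistic reasoning, under our own simplifying assumptions rather than the paper's exact model: given a tag's localized voxel and an item's approximate bounding-box dimensions, mark every voxel the item could cover under a few candidate orientations, treated as equally likely.

```python
# Hypothetical sketch: the uniform-orientation assumption and all names are
# ours, not the paper's. The tag may sit anywhere on the item, so any voxel
# within one box length of the tag along every axis could be covered.
import numpy as np

def occupancy_from_tag(grid_shape, tag_voxel, item_dims, orientations):
    """Return P(voxel is occupied by this tagged item) under orientation uncertainty."""
    prob = np.zeros(grid_shape)
    for perm in orientations:  # axis permutations, e.g. [(0, 1, 2), (1, 0, 2), ...]
        dims = tuple(item_dims[a] for a in perm)
        lo = [max(0, tag_voxel[i] - dims[i] + 1) for i in range(3)]
        hi = [min(grid_shape[i], tag_voxel[i] + dims[i]) for i in range(3)]
        box = np.zeros(grid_shape)
        box[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = 1.0
        prob += box / len(orientations)  # average over candidate orientations
    return np.clip(prob, 0.0, 1.0)
```

Accumulating such maps for every detected tag, alongside the camera's view of the surface, gives the kind of 3D model of likely occupancy the article describes.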

As it removes items, the system also uses reasoning to decide which item is “best” to remove next.

“If I give a human a pile of items to search, they will most likely remove the biggest item first to see what’s underneath it. What the robot does is similar, but it also incorporates RFID information to make a more informed decision. It asks, ‘How much more will it understand about this pile if it removes this item from the surface?’” says Boroushaki.

After removing an object, the robot scans the pile again and uses the new information to optimize its strategy.
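The outer search loop might look something like the sketch below. The entropy-based scoring is a generic information-gain heuristic standing in for the paper's exact objective, and scan, remove, and the item interface are placeholders for the robot's perception and manipulation stack.

```python
# Hypothetical sketch of the remove-rescan loop; not FuseBot's actual code.
import numpy as np

def entropy(p):
    """Shannon entropy of a probability distribution (array of any shape)."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def search_pile(scan, surface_items, remove, target_visible):
    while True:
        prior = scan()                 # fuse fresh camera + RF data
        if target_visible(prior):
            return True                # target exposed: grasp it
        items = surface_items()
        if not items:
            return False               # pile exhausted without success
        # Greedy choice: remove the item expected to shrink the uncertainty
        # about the target's location the most per move.
        best = max(items, key=lambda it: entropy(prior) - it.expected_posterior_entropy(prior))
        remove(best)                   # set it aside, then loop to rescan
```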

Getting results

This reasoning, along with its use of RF signals, gave FuseBot an edge over a state-of-the-art system that uses only vision. The team conducted more than 180 experimental trials using real robotic arms and piles of household items such as office supplies, stuffed animals, and clothing. They varied the size of the piles and the number of RFID-tagged items in each pile.

FuseBot successfully extracted the target item 95 percent of the time, compared with 84 percent for the other robotic system. It achieved this with 40 percent fewer moves, and it was able to locate and retrieve targeted items more than twice as fast.

“We see a big improvement in the success rate by including this RF information. It was also exciting to see that we were able to match and exceed the performance of our previous system in scenarios where the target item did not have an RFID tag,” says Dodds.

FuseBot can be applied in a variety of environments because the software that performs its complex reasoning can run on any computer – it just needs to interact with a robotic arm equipped with a camera and an antenna, Boroushaki adds.

In the near future, the researchers plan to incorporate more complex models into FuseBot so that it performs better on deformable objects. In addition, they are interested in exploring various manipulations, such as a robotic arm pushing items out of the way. Future iterations of the system could also be used with a mobile robot that searches multiple piles for lost items.

“I think the work is very exciting in many ways and demonstrates the potential of closely integrating advances in wireless signal technologies with robotics. For example, an important observation the paper builds on is that RF signals, as opposed to visible light and infrared, can pass through common materials such as cardboard, wood, and plastic. The paper capitalizes on this observation to address very difficult problems in robotics for which more conventional sensors are of limited use, such as searching for objects in clutter,” says Stephanie Gil, assistant professor of computer science at the Harvard John A. Paulson School of Engineering and Applied Sciences, who was not involved in this study. “The paper further pushes the state of the art in using RF signals in robotics by also considering the very difficult case of searching for untagged items in clutter. Overall, the work shows great promise for integrating wireless technologies for perception and sensing tasks in robotics, and it offers a very exciting look at robotics through this lens.”

This work was funded in part by the National Science Foundation, a Sloan Research Fellowship, NTT DATA, Toppan, Toppan Forms, and the MIT Media Lab.
