The team demonstrated its system by tasking a two-fingered robot, RiceGrip, with reshaping deformable foam into a desired shape, much like you might shape sushi. The robot used a depth camera and object recognition to identify the foam, then applied the learned model to represent it as a dynamic particle graph suited to deformable materials. While the system already had an idea of how the particles would react, it adjusted its model whenever the "sushi" behaved in a way it didn't expect.
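The predict-then-correct loop described above can be sketched in a toy form: particles connected in a graph are nudged toward their neighbors, and a model parameter is adjusted when the observed motion diverges from the prediction. This is a minimal illustrative sketch, not the team's actual method; the function names (`predict_step`, `correct_model`) and the single `stiffness` parameter standing in for the learned interaction model are all hypothetical.

```python
import numpy as np

def predict_step(positions, velocities, neighbors, stiffness=0.5, dt=0.1):
    """One step of a toy particle-graph simulator: each particle is pulled
    toward the mean of its graph neighbors. A crude stand-in for the
    learned particle-interaction model (assumption, not the real system)."""
    new_vel = velocities.copy()
    for i, nbrs in enumerate(neighbors):
        if nbrs:
            target = positions[nbrs].mean(axis=0)
            new_vel[i] += stiffness * (target - positions[i]) * dt
    return positions + new_vel * dt, new_vel

def correct_model(stiffness, predicted, observed, lr=0.1):
    """Nudge the model parameter when observation diverges from prediction,
    mirroring how the system adjusts when the 'sushi' behaves unexpectedly.
    The update rule here is purely illustrative."""
    error = np.linalg.norm(observed - predicted)
    return stiffness + lr * error

# Two particles connected to each other drift together under the model.
pos = np.array([[0.0, 0.0], [1.0, 0.0]])
vel = np.zeros_like(pos)
neighbors = [[1], [0]]
new_pos, new_vel = predict_step(pos, vel, neighbors)
```

In the real system the interaction model is a learned neural network over the particle graph rather than a single scalar, but the loop structure (predict particle dynamics, compare to observation, update the model) is the same idea the article describes.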
It’s still early days, and the scientists want to improve their approach to handle partially observable situations (such as predicting how a pile of boxes will fall when some of the boxes are hidden). They’d also like it to work directly from raw imagery. If and when that happens, it could represent a breakthrough for robots: they’d have an easier time manipulating virtually any kind of object, even when liquids or soft solids make the outcome difficult to predict in advance. While robots might not replace sushi chefs any time soon, MIT’s learning method makes the prospect that much more realistic.