A robot that learns how to tidy up after you

May 23, 2012 by Amara D. Angelica

A robot places an item in a refrigerator (credit: Saxena Lab)

Finally: a robot I really need. Like right now.

Researchers at Cornell’s Personal Robotics Lab have trained a robot to survey a room, identify all the objects, figure out where they belong, and put them away. Bingo!

“This is the first work that places objects in non-trivial places,” said Ashutosh Saxena, assistant professor of computer science.

“It learns not to put a shoe in the refrigerator,” explained graduate student Yun Jiang, “and [to put it] on the floor, not on a table.”

A robot ‘mind’ at work

1. Survey the room with a Microsoft Kinect 3-D camera.

2. Stitch together images to create an overall view of the room and divide it into blocks based on discontinuities of color and shape (it has previously been shown several examples of each kind of object and has learned what characteristics they have in common).

Dividing the target space into small chunks and computing a series of features of each chunk (credit: Saxena Lab)

3. For each block, compute the probability of a match with each object in my database and choose the most likely match.

4. For each object, examine the target area to decide on an appropriate and stable placement location. To do that, divide a 3-D image of the target space into small chunks and compute a series of features of each chunk, taking into account the shape of the object I am placing.

(It trains for this task on graphic simulations in which placement sites are labeled as good or bad, and it builds a model of what good placement sites have in common; a rough code sketch of this scoring step follows the list.)

5. Choose the chunk of space with the closest fit to that model.

6. Create a graphic simulation of how to move the object to its final location and carry out those movements.

7. Beg forgiveness for the broken wine glasses. (OK, I just made that one up.)
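To make steps 3 through 5 concrete, here is a minimal, self-contained Python sketch of the idea. Everything in it is a placeholder of mine, not the Cornell code: the toy per-object “models,” the two placement features (flatness and headroom), the arbitrary 2-meter ceiling, and the hand-tuned weights. The actual system learns its object models from labeled Kinect examples and its placement scorer from the labeled graphic simulations described in step 4.

import numpy as np

# --- Step 3 (toy version): pick the most probable object label for a block ---
def classify_block(block_features, object_models):
    """Return the label whose model assigns this block the highest probability."""
    return max(object_models, key=lambda label: object_models[label](block_features))

# --- Steps 4-5 (toy version): chunk the target space, score chunks, keep the best ---
def split_into_chunks(target_cloud, chunk_size=0.05):
    """Divide a point cloud (N x 3 array, meters) into cubic chunks."""
    cells = {}
    for key, point in zip(map(tuple, np.floor(target_cloud / chunk_size).astype(int)),
                          target_cloud):
        cells.setdefault(key, []).append(point)
    return [np.array(points) for points in cells.values()]

def placement_features(chunk, object_height):
    """Two made-up features: how flat the chunk is, and whether the object fits above it."""
    flatness = 1.0 / (1.0 + np.var(chunk[:, 2]))                        # low height variance = flat
    headroom = 1.0 if chunk[:, 2].max() + object_height < 2.0 else 0.0  # arbitrary 2 m ceiling
    return np.array([flatness, headroom])

def choose_placement(target_cloud, object_height, weights=np.array([1.0, 0.5])):
    """Score every chunk with a simple linear model and return the best-scoring one."""
    best_chunk, best_score = None, -np.inf
    for chunk in split_into_chunks(target_cloud):
        score = weights @ placement_features(chunk, object_height)
        if score > best_score:
            best_chunk, best_score = chunk, score
    return best_chunk

# Usage: label a block, then place a 10 cm object in a fake 3-D scan of a shelf.
if __name__ == "__main__":
    toy_models = {"shoe": lambda f: f["on_floor"], "plate": lambda f: f["is_round"]}
    print(classify_block({"on_floor": 0.9, "is_round": 0.2}, toy_models))   # -> shoe
    cloud = np.random.rand(2000, 3) * np.array([1.0, 0.5, 1.5])
    print(choose_placement(cloud, object_height=0.10).mean(axis=0))         # chunk centroid

The hand-picked weights are exactly what the training in step 4 replaces: the labeled simulations let the robot learn its own scoring model instead.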

Robotic arm placing different objects in specific locations (credit: Saxena Lab)

Bad robot

Robot's defective model of a closet hanger: it hangs a jacket in a slightly wrong location (credit: Saxena Lab)

The researchers tested the robot by placing dishes, books, clothing and toys on tables and in bookshelves, dish racks, refrigerators and closets. It was up to 98 percent successful in identifying and placing objects it had seen before; with objects it had not seen before, success averaged about 80 percent.

Ambiguously shaped objects, such as clothing and shoes, were most often misidentified. (Well, OK, but I have the same problem when I’m stumbling around at night.)

It still broke an occasional dish. Performance could be improved, the researchers say, with cameras that provide higher-resolution images and by preprogramming the robot with 3-D models of the objects it will handle, rather than leaving it to build its own models from what it sees.

The robot sees only part of a real object, Saxena explained, so a bowl could look the same as a globe. Tactile feedback from the robot’s hand would also help it to know when the object is in a stable position and can be released.
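The tactile point lends itself to a tiny illustration. Assuming a gripper that reports fingertip load in newtons (the sensor interface and the threshold below are my assumptions; the article only suggests that tactile feedback would help), the release decision could reduce to checking that the shelf, not the hand, is carrying the object:

def safe_to_release(fingertip_load_newtons, threshold=0.2):
    """Release only once the support surface, rather than the hand, bears the object's weight."""
    return fingertip_load_newtons < threshold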

In the future, Saxena says, he’d like to add further “context,” so the robot can respond to more subtle features of objects. For example, a computer mouse can be placed anywhere on a table, but ideally it should go beside the keyboard.
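Purely as an illustration of that idea, a “context” term could be bolted onto a placement score like the one sketched earlier: reward chunks that sit near an object the item usually accompanies. The pairing table, the distance bonus and the weight below are my inventions, not anything described by the Cornell group.

import numpy as np

# Hypothetical pairings: which object an item usually sits next to.
GOES_NEAR = {"mouse": "keyboard", "remote": "tv", "pen": "notebook"}

def context_bonus(chunk, object_label, known_positions, weight=0.5):
    """Bonus that grows as the candidate chunk gets closer to the item's usual companion."""
    companion = GOES_NEAR.get(object_label)
    if companion is None or companion not in known_positions:
        return 0.0                                     # no context available
    distance = np.linalg.norm(chunk.mean(axis=0) - known_positions[companion])
    return weight / (1.0 + distance)                   # beside the keyboard -> big bonus

In the earlier choose_placement loop, this bonus would simply be added to each chunk's score before picking the winner.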

Note to robot: while you’re up, could you bring me another beer? Oh, and tell Scooba to mop up the one you just spilled!

Ref.: Yun Jiang et al., Learning to place new objects in a scene, International Journal of Robotics Research, 2012, DOI: 10.1177/0278364912438781 (open access)