These Animated AI Bots Learned to Dress Themselves, Awkwardly


The difference with self-dressing, however, is the need for haptic perception. Animated characters need to touch their clothing to infer progress. When dressing themselves, the bots must apply force to move their virtual arms through the clothing, while avoiding forces that could damage the garment, or cause a hand or elbow to get stuck. Consequently, the researchers had to add a second important element to the project: a physics engine capable of simulating the pulling, stretching, and manipulation of malleable materials, namely cloth.
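To give a sense of what simulating malleable cloth involves, here is a minimal sketch of the mass-spring idea many cloth simulators are built on: the fabric is a grid of particles joined by springs, and stretching a spring past its rest length produces a restoring force. The function names, the tearing threshold, and the overall structure are illustrative assumptions, not the paper's actual simulator.

```python
import math

def spring_force(p_a, p_b, rest_length, stiffness):
    """Hooke's-law force on particle A from the spring connecting it to B.

    In a mass-spring cloth model, this is how the garment resists the
    bot's pulling -- and how excessive force can be detected.
    """
    dx = [b - a for a, b in zip(p_a, p_b)]
    dist = math.sqrt(sum(d * d for d in dx))
    # Force magnitude is proportional to how far the spring is stretched.
    magnitude = stiffness * (dist - rest_length)
    return [magnitude * d / dist for d in dx]

def is_torn(p_a, p_b, rest_length, tear_factor=2.0):
    """Treat a spring stretched beyond tear_factor x rest length as torn.

    The tear_factor value here is an arbitrary illustrative choice.
    """
    return math.dist(p_a, p_b) > tear_factor * rest_length
```

A full simulator integrates thousands of these forces every timestep, alongside collision handling between the cloth and the bot's body, which is what makes the problem computationally demanding.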

During the training process, a bot gained points by successfully grasping the edge of a sleeve or poking its head through the collar. But when an action resulted in tearing or getting its arms hopelessly tangled, it would lose points.
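The scoring scheme described above is a form of reward shaping. A minimal sketch of what such a per-timestep reward might look like follows; the weights and inputs here are illustrative assumptions, not values from the paper.

```python
def dressing_reward(progress_delta, cloth_stress, max_stress, tangled):
    """Hypothetical shaped reward for one timestep of the dressing task.

    Mirrors the signals described in the article: points for making
    progress (e.g. moving a hand further through the sleeve), penalties
    for over-stressing the cloth or getting tangled. All weights are
    illustrative assumptions.
    """
    reward = 10.0 * progress_delta      # reward forward progress
    if cloth_stress > max_stress:       # stretched past the tearing point
        reward -= 50.0
    if tangled:                         # an arm hopelessly stuck
        reward -= 25.0
    return reward
```

During training, the reinforcement learner adjusts its policy to maximize the sum of these rewards over an episode, which nudges it away from tearing and tangling without hand-coding the motions.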

In this example of a failed dressing attempt, the bot tears a hole through the shirt. GIF: Alexander Clegg/GIT/Gizmodo

Early in the project, however, the researchers realized that a single, monolithic dressing policy wasn’t going to work. The complicated task of dressing had to be broken down into a series of sub-policies. This makes sense: when we teach children to dress themselves, we teach it one step at a time. Dressing isn’t a single sweeping action—it’s a step-by-step process that leads toward a desired goal. Clegg’s team developed a policy-sequencing algorithm for this very purpose; at any given stage, an animated bot knew where it was in the dressing process and which step came next, so it could advance toward the goal.
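The sequencing idea can be sketched as a loop that runs learned sub-policies in order, handing control to the next one when the current subtask's success condition is met. The structure, the pair-of-functions representation, and the example subtasks are illustrative assumptions, not the paper's actual algorithm or API.

```python
def run_dressing_episode(subpolicies, env, max_steps=500):
    """Sketch of policy sequencing for a multi-stage dressing task.

    `subpolicies` is an ordered list of (policy, is_done) pairs -- for
    example: grasp the sleeve edge, pull the arm through, then pull the
    collar over the head. Each `policy` maps a state to an action; each
    `is_done` reports whether that subtask has succeeded.
    """
    state = env.reset()
    stage = 0
    for _ in range(max_steps):
        policy, is_done = subpolicies[stage]
        state = env.step(policy(state))
        if is_done(state):              # subtask complete: advance
            stage += 1
            if stage == len(subpolicies):
                return True             # fully dressed
    return False                        # ran out of time mid-task
```

Each sub-policy is trained on its own narrower problem, which is what makes the overall task tractable; the sequencer only decides which policy is in control at any moment.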

Clegg and his colleagues say their paper is the first to show that reinforcement learning, combined with cloth simulation, can teach bots a “robust dressing control policy.” To make it work, however, it was “necessary to separate the dressing task into several subtasks” and have the system “learn a control policy for each subtask,” the authors write in the study.

Importantly, the study was limited to upper-body tasks; lower-body dressing would have introduced an entirely new set of complications, such as maintaining balance while putting on pants. The system was also computationally demanding. Eventually, the researchers would like to incorporate memory into the system, which could “reduce the number of necessary subtasks and allow greater generalization of learned skills,” the authors write. In other words, like a toddler who quickly gains competence and flexibility through experience, the system should improve with practice.

As a final note, this study shows how difficult it will be to create general artificial intelligence. It was a triumph of AI research to create machines capable of defeating grandmasters at chess and Go, but creating systems that can perform more mundane tasks—such as dressing themselves—is proving to be an enormous challenge as well.

[Georgia Institute of Technology]
