Novel machine learning technique for simulating the everyday task of dressing

  
Get dressed! Computer scientists from the Georgia Institute of Technology and Google Brain have devised a machine-learning-driven method to realistically simulate the multi-step process of putting on clothes. Credit: SIGGRAPH Asia

Putting on clothes is a daily, mundane task that most of us perform with little or no thought. We rarely consider the multiple steps and physical motions involved in getting dressed in the morning. But those steps are precisely what must be examined when attempting to capture the motion of dressing and simulate cloth for computer animation.

Computer scientists from the Georgia Institute of Technology and Google Brain, Google's artificial intelligence research arm, have devised a novel computational method, driven by machine learning techniques, to successfully and realistically simulate the multi-step process of putting on clothes. When dissected, the task of dressing is quite complex, involving several distinct physical interactions between the character and his or her clothing, guided primarily by the person's sense of touch.

Creating an animation of a character putting on clothing is challenging because of the complex interactions between the character and the simulated garment. Most prior work on highly constrained character animation deals with static environments that do not react strongly to the character's motion, the researchers note. Clothing, in contrast, can respond immediately and drastically to small changes in the position of the body; it tends to fold, stick and cling, making haptics, or the sense of touch, essential to the task.

Another challenge unique to dressing is that it requires the character to perform a prolonged sequence of motion involving a diverse set of subtasks, such as grasping the front layer of a shirt, tucking a hand into the shirt opening and pushing a hand through a sleeve.
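To make the idea of subtask decomposition concrete, the sketch below illustrates one way a dressing motion could be represented as an ordered list of subtasks, each with its own completion check. The names and structure are hypothetical, not taken from the researchers' code.

```python
# Hypothetical decomposition of a dressing motion into subtasks.
# Names and structure are illustrative only, not the authors' implementation.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Subtask:
    name: str
    is_complete: Callable[[dict], bool]  # checks the simulation state

# Ordered subtasks for putting on a t-shirt, each with a well-defined goal.
TSHIRT_SUBTASKS = [
    Subtask("grasp_front_layer", lambda s: s["hand_grips_cloth"]),
    Subtask("tuck_hand_into_opening", lambda s: s["hand_inside_opening"]),
    Subtask("push_hand_through_sleeve", lambda s: s["hand_past_sleeve_end"]),
]

def current_subtask(state: dict) -> Optional[Subtask]:
    """Return the first subtask whose goal has not yet been met."""
    for task in TSHIRT_SUBTASKS:
        if not task.is_complete(state):
            return task
    return None  # all subtasks finished: the shirt is on
```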

"Dressing seems easy to many of us because we practice it every single day. In reality, the dynamics of cloth make it very challenging to learn how to dress from scratch," says Alexander Clegg, lead author of the research and a computer science Ph.D. student at the Georgia Institute of Technology. "We leverage simulation to teach a neural network to accomplish these complex tasks by breaking the task down into smaller pieces with well-defined goals, allowing the character to try the task thousands of times and providing reward or penalty signals when the character tries beneficial or detrimental changes to its policy."

The researchers' method then updates the neural network one step at a time to make the discovered positive changes more likely to occur in the future. "In this way, we teach the character how to succeed at the task," notes Clegg.
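The loop Clegg describes is essentially policy-gradient reinforcement learning: the character attempts the task repeatedly, collects reward or penalty signals, and the network is nudged so that beneficial actions become more likely. Below is a minimal REINFORCE-style sketch in PyTorch; the environment interface, network sizes and reward handling are placeholders, not the paper's actual setup.

```python
# Minimal REINFORCE-style loop: actions that led to reward become more
# likely on the next update. The environment here is a placeholder.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(32, 64), nn.Tanh(), nn.Linear(64, 8))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def run_episode(env):
    """Roll out one dressing attempt, recording log-probs and rewards."""
    log_probs, rewards = [], []
    obs, done = env.reset(), False
    while not done:
        mean = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Normal(mean, 0.1)  # exploration noise
        action = dist.sample()
        log_probs.append(dist.log_prob(action).sum())
        obs, reward, done = env.step(action.numpy())  # reward/penalty signal
        rewards.append(reward)
    return log_probs, rewards

def update(log_probs, rewards):
    """One gradient step: reinforce actions in proportion to the return."""
    episode_return = sum(rewards)
    loss = -episode_return * torch.stack(log_probs).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```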

Clegg's collaborators at Georgia Tech include computer scientists Wenhao Yu, Greg Turk and Karen Liu. Together with Google Brain researcher Jie Tan, the group will present the work at SIGGRAPH Asia 2018 in Tokyo, 4-7 December. The annual conference brings together the most respected technical and creative members of the computer graphics and interactive techniques community, and showcases leading-edge research in science, art, gaming and animation, among other fields.

In this study, the researchers demonstrated their approach on several dressing tasks: putting on a t-shirt, throwing on a jacket and robot-assisted dressing of a sleeve. With the trained neural network, they achieved complex re-enactments of a variety of ways an animated character can put on clothes. A key element is incorporating the sense of touch into the framework to overcome the challenges of cloth simulation. The researchers found that careful selection of the cloth observations and the reward functions is crucial to the framework's success. As a result, the approach enables not only single dressing sequences but also a character controller that can dress successfully under various conditions.
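As a rough illustration of how touch could enter such a framework, the sketch below builds an observation vector that mixes the character's pose with cloth-contact (haptic) features, and a reward that trades progress against cloth strain. All feature names and weights are assumptions for illustration, not the observations or rewards used in the paper.

```python
# Hypothetical observation and reward design for a dressing controller.
# Feature names and weights are illustrative, not taken from the paper.
import numpy as np

def observe(sim):
    """Stack proprioception with haptic (cloth-contact) features."""
    return np.concatenate([
        sim["joint_angles"],           # character pose
        sim["joint_velocities"],
        sim["cloth_contact_forces"],   # touch sensed along the arm and hand
        sim["sleeve_opening_offset"],  # vector from hand to sleeve opening
    ])

def reward(sim):
    """Reward progress through the sleeve, penalize pulling the cloth hard."""
    progress = sim["hand_depth_in_sleeve"]  # how far the hand has advanced
    strain_penalty = 0.1 * np.sum(sim["cloth_contact_forces"] ** 2)
    return progress - strain_penalty
```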

"We've opened the door to a new way of animating multi-step interaction tasks in complex environments using reinforcement learning," says Clegg. "There is still plenty of work to be done continuing down this path, allowing simulation to provide experience and practice for task training in a virtual world." In expanding this work, the team is currently collaborating with other researchers in Georgia Tech's Healthcare Robotics lab to investigate the application of robotics for dressing assistance.


More information: Paper: www.cc.gatech.edu/~aclegg3/pro … ess-synthesizing.pdf