Advancing Robotic Grasping, Dexterous Manipulation & Soft Robotics


Jeff Trinkle and his colleagues work to advance intrinsically safe soft robots, the future of human-machine collaboration.

Jeff Trinkle says he’s not one to get too worked up about things. Still, he has always had a keen interest in robot hands. His research in robotic grasping, dexterous manipulation and related simulation problems has been funded continuously by the National Science Foundation since 1989, and, though it may be a long way off, he says he’s most compelled by the prospect of robots performing “dexterous manipulation” at the level of a human “or beyond.”

“I’ve always felt that for robots to be really useful they have to pick stuff up, they have to be able to manipulate it and put things together and fix things, to help you off the floor and all that,” he says, adding: “It takes so many technical areas together to look at a problem like that that a lot of people just don’t bother with it.”

But Trinkle, who was the NSF Program Officer in charge of the National Robotics Initiative before arriving at Lehigh's department of computer science and engineering, does more than bother with it. Some of the technical challenges involved in grasping are at the heart of one of his current projects: a collaboration to develop a new approach to the design and construction of soft robots inspired by the movement of natural muscles in soft animal structures—think: giraffe tongues, octopus tentacles and elephant trunks. 

Soft robots, which are pliable and can be deformed and reformed, are the future of human-machine collaboration. Trinkle calls them Robots 2.0.

“Robotics 1.0 was putting robots that are big and heavy—that you would never want to get in the way of—in a warehouse and having them build cars and paint them and all that,” he says. “Robots 2.0 is ‘OK, well, instead of forcing robots and humans to be separated for safety, why don’t we build robots that are intrinsically safe?’”

Robots that move the way tongues, tentacles and trunks move could be useful for sending into areas that are too dangerous for humans, such as disaster zones, the deep ocean or outer space. They may also prove “intrinsically safe,” able someday to work side by side with humans in a warehouse or operating room.

Trinkle’s excitement for the potential of such robots led him to work with colleagues from Yale University, the University of Washington and Brown University on the soft robots project, funded by a National Science Foundation Emerging Frontiers in Research and Innovation grant. His role is to use mathematical models, along with computer science techniques like search algorithms, to develop the computer system that “tells” the robot how to move. He offers the Roomba, the robotic vacuum that moves across the floor autonomously, as an accessible example.

“Roombas use the same kind of technologies we are using, just in a very simplified setting,” he says. “Roombas know that there’s a wall. They have an internal map that ensures they can move around without bumping into things too much.”

Trinkle says this map functions similarly to the human brain.

“Your brain sends signals to your muscles and they change their length, contracting or expanding, depending on what you’re trying to execute,” he says. “We are doing something similar here. So imagine taking hundreds or thousands of modular units and using them to build your own elephant trunk.”

To do this, the researchers are closely examining the biology of a number of soft animal structures to better understand how muscle cells work in concert with tendons and other tissue to articulate movements. Trinkle and his students apply this biological data to construct a computer simulation of, in one example, an elephant trunk. The structure’s components and their connections to each other are represented with crisscrossing lines—red lines stand in for active muscle cells and gray for fat and connective tissue, for example. 

Using a simulation tool, Trinkle applies mathematical models to instruct the simulated appendage to curl around a simulated object such as a circle, representing what would be a disk in three dimensions.
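The idea of curling a simulated appendage by differentially activating muscle fibers can be sketched in a few lines of code. The toy model below (an illustration only; the function names, the segment-chain representation and the bending rule are assumptions, not the research team's actual simulator) treats the trunk as a chain of rigid segments in two dimensions, where activating the fibers on one side bends each joint:

```python
import math

# Toy 2D "trunk": a chain of rigid segments. Differential activation of
# antagonistic muscle fibers on either side bends each joint, loosely the
# way shortening one side of a muscular structure curls the whole thing.
def trunk_shape(activations, seg_len=1.0, max_bend=math.pi / 8):
    """Return (x, y) joint positions for per-segment muscle activations.

    activations: values in [-1, 1]; positive bends one way, negative the other.
    """
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y)]
    for a in activations:
        heading += max_bend * max(-1.0, min(1.0, a))  # clamp activation
        x += seg_len * math.cos(heading)
        y += seg_len * math.sin(heading)
        points.append((x, y))
    return points

# Zero activation keeps the trunk straight; uniform activation curls it.
straight = trunk_shape([0.0] * 8)
curled = trunk_shape([1.0] * 8)
```

With all activations at full strength, the eight joints accumulate a total bend of π radians, so the simulated trunk sweeps into an arc, much as the abstracted trunk curls around the simulated disk.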

To design the “brain” or “map” that will ultimately instruct a three-dimensional robot how to move, Trinkle uses techniques employed in building artificial neural networks, a type of machine learning modeled loosely on the human brain. These networks learn from data through a process akin to trial and error. In this case, the network is trained on data generated by the computer simulation of an abstracted elephant trunk.

Getting the abstracted trunk to curl around the circle and, ultimately, move the circle to another part of the screen involves multiple steps and a lot of trial and error as the system is trained.
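The trial-and-error idea can be illustrated with a deliberately simplified sketch. The research team trains neural networks on simulation data; the stand-in below instead runs a plain random search directly over per-segment muscle activations (all names and the segment-chain trunk model are assumptions for illustration), keeping only the trials that bring the simulated trunk's tip closer to a target:

```python
import math
import random

# Forward model: same toy 2D segment-chain trunk as above, returning
# only the tip position for a given pattern of muscle activations.
def tip(activations, seg_len=1.0, max_bend=math.pi / 8):
    x = y = heading = 0.0
    for a in activations:
        heading += max_bend * max(-1.0, min(1.0, a))
        x += seg_len * math.cos(heading)
        y += seg_len * math.sin(heading)
    return x, y

def distance_to(target, activations):
    tx, ty = tip(activations)
    return math.hypot(tx - target[0], ty - target[1])

random.seed(0)
target = (4.0, 4.0)                      # where we want the tip to end up
best = [0.0] * 8                         # start with a straight trunk
best_err = distance_to(target, best)
for _ in range(2000):                    # each iteration is one "trial"
    trial = [max(-1.0, min(1.0, a + random.gauss(0, 0.2))) for a in best]
    err = distance_to(target, trial)
    if err < best_err:                   # keep changes that reduce error
        best, best_err = trial, err
```

Like the system Trinkle describes, the search doesn't know in advance how a change to the muscle fibers will move the trunk; it simply tries perturbations in simulation and keeps the ones that work, gradually homing in on an activation pattern that reaches the goal.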

“[The system] doesn’t know anything, so when it tightens up the fibers on one side or the other, it doesn’t know in advance how it’s going to move, but the neural network is going to figure out that if it does certain things to its muscle fibers, it’s going to move a certain way,” says Trinkle. “Over a long time horizon, the simulation will figure some things out. It’s going to try to end up pushing the disk in the right location.”

He likens the process to an infant learning to crawl. 

“If a baby is trying to learn how to crawl, it’s going to do some things that won’t work, and eventually the infant figures it out,” says Trinkle. “At some point, all of a sudden, the baby solves the problems and now it’s crawling because its neural network has been trained from its experience.” 

In this research, computer simulation is the training ground for robot systems the teams will build.


STORY BY Lori Friedman

ILLUSTRATIONS BY Chad Hagen