Google research makes for an effortless robotic dog trot

As capable as robots are, the original animals after which they tend to be designed are always much, much better. That’s partly because it’s difficult to learn how to walk like a dog directly from a dog — but this research from Google’s AI labs makes it considerably easier.

The goal of this research, a collaboration with UC Berkeley, was to find a way to efficiently and automatically transfer “agile behaviors” like a light-footed trot or spin from their source (a good dog) to a quadrupedal robot. This sort of thing has been done before, but as the researchers’ blog post points out, the established training process can often “require a great deal of expert insight, and often involves a lengthy reward tuning process for each desired skill.”

That doesn’t scale well, naturally, but that manual tuning is necessary to make sure the animal’s movements are approximated well by the robot. Even a very doglike robot isn’t actually a dog, and the way a dog moves may not be exactly the way the robot should move, leading the latter to fall down, lock up or otherwise fail.

The Google AI project addresses this by adding a bit of controlled chaos to the normal order of things. Ordinarily, the dog’s motions would be captured and key points like feet and joints would be carefully tracked. Those points would then be mapped to the robot’s corresponding parts in a digital simulation, where a virtual version of the robot attempts to imitate the dog’s motions with its own, learning as it goes.
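The post doesn’t spell out the reward function, but motion-imitation work of this kind typically scores the virtual robot on how closely its key points track the reference animation. Here is a minimal sketch of that idea in Python; the array shapes and the `scale` constant are illustrative assumptions, not values from the paper:

```python
import numpy as np

def imitation_reward(robot_keypoints: np.ndarray,
                     reference_keypoints: np.ndarray,
                     scale: float = 5.0) -> float:
    """Score how closely the simulated robot matches the dog's tracked pose.

    Both arrays hold one 3D position per tracked key point (feet, joints)
    at the current timestep. The reward decays exponentially with the total
    squared tracking error: a perfect match scores 1.0, large errors near 0.
    """
    squared_error = np.sum((robot_keypoints - reference_keypoints) ** 2)
    return float(np.exp(-scale * squared_error))

# Example: a pose 1 cm off at each of four tracked feet still scores ~0.994.
reference = np.zeros((4, 3))
robot = reference + 0.01
print(imitation_reward(robot, reference))
```

The exponential shape is a common design choice in this genre of work: it keeps the reward bounded and smooth, so the policy gets useful gradient signal even when its imitation is still rough.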

So far, so good, but the real problem comes when you try to use the results of that simulation to control an actual robot. The real world isn’t a 2D plane with idealized friction rules and all that. Unfortunately, that means that uncorrected simulation-based gaits tend to walk a robot right into the ground.

To prevent this, the researchers introduced an element of randomness to the physical parameters used in the simulation, making the virtual robot weigh more, or have weaker motors, or experience greater friction with the ground. This forced the machine learning model describing how to walk to account for all kinds of small variances, the complications they create down the line, and how to counteract them.
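This technique is generally known as domain randomization: at every training episode the simulator’s physical constants are resampled, so the policy never gets to overfit one idealized world. A hedged sketch of what that resampling might look like; the parameter names and ranges below are invented for illustration, not taken from the paper:

```python
import random

# Illustrative ranges only, not the paper's actual values.
RANDOMIZATION_RANGES = {
    "mass_multiplier": (0.8, 1.2),   # heavier or lighter body
    "motor_strength":  (0.7, 1.0),   # weaker actuators
    "ground_friction": (0.5, 1.25),  # more or less grip
    "latency_seconds": (0.0, 0.04),  # sensing/actuation delay
}

def sample_physics_params() -> dict:
    """Draw one random set of physical parameters for a training episode."""
    return {name: random.uniform(lo, hi)
            for name, (lo, hi) in RANDOMIZATION_RANGES.items()}

# At each episode reset, the simulator would be reconfigured with a fresh
# sample, e.g. env.reset(physics=sample_physics_params()), forcing the
# policy to cope with the whole range of variation rather than one world.
print(sample_physics_params())
```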

Learning to accommodate that randomness made the learned walking method far more robust in the real world, leading to a passable imitation of the target dog walk, and even more complicated moves like turns and spins, without any manual intervention and only a little extra virtual training.

Naturally, manual tweaking could still be added to the mix if desired, but as it stands this is a large improvement over what could previously be done totally automatically.

In a second project described in the same post, another set of researchers describe a robot that taught itself to walk on its own, imbued with enough intelligence to avoid wandering outside its designated area and to pick itself up when it falls. With those basic skills baked in, the robot was able to amble around its training area continuously with no human intervention, learning quite respectable locomotion skills.
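As a toy sketch of the control priority those safeguards imply (the boundary size and names below are hypothetical, not from the paper): recovery comes first, then steering back into bounds, and only then a normal learning step.

```python
from dataclasses import dataclass

@dataclass
class RobotState:
    position: tuple  # (x, y) on the training floor, in meters
    fallen: bool

def in_bounds(position, half_width=1.5):
    """Assumed designated area: a square centered on the origin."""
    x, y = position
    return abs(x) <= half_width and abs(y) <= half_width

def next_behavior(state: RobotState) -> str:
    """One decision of the unattended loop: recover, re-center, or learn."""
    if state.fallen:
        return "run scripted stand-up recovery"
    if not in_bounds(state.position):
        return "walk back toward the center of the area"
    return "take a policy action and update the policy from the result"

# A fallen robot triggers recovery before any learning resumes.
print(next_behavior(RobotState(position=(0.2, 0.3), fallen=True)))
```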

The paper on learning agile behaviors from animals can be read here, while the one on robots learning to walk on their own (a collaboration with Berkeley and the Georgia Institute of Technology) is here.
