Artificial intelligence made this robot dog a very good boy

The robot’s “brain” actually learned in a simulator.

Meet ANYmal—a four-legged robot whose name is pronounced “animal.” The 73-pound, dog-like machine is a Swiss-made contraption that, thanks to artificial intelligence, can run faster, operate more efficiently, and right itself after a spill more reliably than it could before its AI training.

The robot, featured in a new study in the journal Science Robotics, learned not just with AI, but also through computer simulation on a desktop, a much faster approach than teaching a robot new tricks in the real, physical world. In fact, simulation is roughly 1,000 times faster than the real world, according to the study.

This isn’t the only arena in which simulation is important: in the world of self-driving cars, time in simulation is a crucial way for companies to test and refine the software that operates their vehicles. In this case, the researchers used a similar strategy, just with a robot dog.

To ensure that the simulator in which the virtual dog learned its skills was accurate, the researchers first made sure to incorporate data about how the robot behaves in the real world. Then, in simulation, a neural network—a type of machine learning tool—learned how to control the robot.
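To make that workflow concrete, here is a minimal, hypothetical sketch of the idea: a small neural-network controller is trained entirely inside a (toy) simulator, here with simple random-search updates rather than the reinforcement-learning algorithm the researchers actually used. The state and action sizes, the stand-in dynamics, and the reward are all illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: train a tiny neural-network controller in a toy simulator.
import numpy as np

STATE_DIM = 24    # e.g. joint angles/velocities plus base orientation (assumed)
ACTION_DIM = 12   # one target per joint; ANYmal has 12 joints

def policy(params, state):
    """Tiny two-layer network: robot state -> joint position targets."""
    w1, w2 = params
    hidden = np.tanh(state @ w1)
    return np.tanh(hidden @ w2)

def simulate_episode(params, steps=200, seed=0):
    """Stand-in for the physics simulator: rewards staying near a nominal pose.
    A real setup would step rigid-body dynamics calibrated against the robot."""
    rng = np.random.default_rng(seed)
    state = rng.normal(0.0, 0.5, STATE_DIM)   # start from a random, messy pose
    total_reward = 0.0
    for _ in range(steps):
        action = policy(params, state)
        # toy dynamics: joints drift toward the commanded targets, with noise
        state[:ACTION_DIM] += 0.1 * (action - state[:ACTION_DIM]) + rng.normal(0, 0.01, ACTION_DIM)
        total_reward += -np.sum(state[:ACTION_DIM] ** 2)  # closer to nominal pose = better
    return total_reward

def train(iterations=50, hidden=32, noise=0.05, seed=0):
    """Hill-climbing on the network weights: keep perturbations that score better."""
    rng = np.random.default_rng(seed)
    params = [rng.normal(0, 0.1, (STATE_DIM, hidden)),
              rng.normal(0, 0.1, (hidden, ACTION_DIM))]
    best = simulate_episode(params)
    for _ in range(iterations):
        candidate = [p + noise * rng.normal(size=p.shape) for p in params]
        score = simulate_episode(candidate)
        if score > best:
            params, best = candidate, score
    return params, best

if __name__ == "__main__":
    trained_params, final_score = train()
    print(f"best simulated return: {final_score:.1f}")
```

Because every rollout happens in software, thousands of these trial episodes can run in the time one real-world attempt would take—which is the speedup the study describes.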

Besides the speed benefits of simulation, the technique allowed the researchers to do things to the robot that they wouldn’t want to do in real life. For example, they could virtually throw the breakable robo-dog in the air in the simulation, says Jemin Hwangbo, the lead researcher on the project and a postdoctoral fellow at the Robotic Systems Lab in Zurich, Switzerland. Then the pup could figure out how to stand back up after it landed.
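One way to picture how such a recovery skill could be scored in simulation is sketched below: drop the virtual robot in a random orientation and reward it for ending up level and at standing height. The specific reward terms, the standing height, and the function names are assumptions for illustration; the study's actual formulation is more involved.

```python
# Hypothetical scoring of a "get back up" behavior in simulation.
import numpy as np

STAND_HEIGHT = 0.5  # assumed nominal trunk height in meters

def random_dropped_pose(rng):
    """Random roll/pitch, as if the robot were tossed and landed awkwardly."""
    roll, pitch = rng.uniform(-np.pi, np.pi, size=2)
    return roll, pitch

def recovery_reward(base_height, roll, pitch):
    """Higher when the trunk is upright (small roll/pitch) and near standing height."""
    uprightness = np.cos(roll) * np.cos(pitch)        # 1.0 when perfectly level
    height_term = -abs(base_height - STAND_HEIGHT)    # penalize lying on the ground
    return uprightness + height_term

rng = np.random.default_rng(0)
roll, pitch = random_dropped_pose(rng)
print(recovery_reward(base_height=0.12, roll=roll, pitch=pitch))
```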

After the neural network had finished its training in simulation, the team was able to deploy that learning onto the physical robot itself—which stands over 2 feet tall, has 12 joints, is electrically powered, and looks similar to a robot called SpotMini made by Boston Dynamics.
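Deployment then amounts to running the same network on the real machine in a fixed-rate control loop: read the robot's state, evaluate the network, send joint targets. The sketch below assumes a placeholder `RobotInterface` and a 100 Hz rate; none of this is ANYmal's actual driver API.

```python
# Hypothetical hardware control loop using the network trained in simulation.
import time
import numpy as np

CONTROL_RATE_HZ = 100  # assumed control frequency

class RobotInterface:
    """Placeholder for the real robot's driver, not an actual ANYmal API."""
    def read_state(self) -> np.ndarray:
        return np.zeros(24)             # joint angles/velocities, base orientation, etc.
    def send_joint_targets(self, targets: np.ndarray) -> None:
        pass                            # would command the 12 joint actuators

def run_controller(robot, policy, params, duration_s=5.0):
    period = 1.0 / CONTROL_RATE_HZ
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        state = robot.read_state()
        targets = policy(params, state)  # same network, now driving real hardware
        robot.send_joint_targets(targets)
        time.sleep(period)

if __name__ == "__main__":
    dummy_policy = lambda params, state: np.zeros(12)   # stand-in for the trained network
    run_controller(RobotInterface(), dummy_policy, params=None, duration_s=0.1)
```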

The final result, after the sim time and AI, was that the robo pooch could follow instructions more precisely—for example, when commanded to walk at 1.1 mph, it matched that speed more closely than before, according to Hwangbo. It was also able to get up successfully after a fall, and it could run faster. Hand-programming a complex robot like ANYmal with specific instructions for getting up after a fall is complicated, while letting it learn that skill in simulation is a much more robust approach.
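That kind of command-following can be encouraged with a tracking term like the hedged sketch below, which simply penalizes the gap between the commanded forward speed and what the robot actually achieves (1.1 mph is roughly 0.49 m/s). The real study's reward is more elaborate; this is only the core idea.

```python
# Hypothetical command-tracking score: smaller speed error, higher reward.
def tracking_reward(commanded_speed_mps: float, measured_speed_mps: float) -> float:
    return -abs(commanded_speed_mps - measured_speed_mps)

print(tracking_reward(0.49, 0.45))   # small penalty for walking slightly slow
```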
