Using heaps of data, Google trained a table-tennis-playing robot to take on human competitors and get better as it did so. The results were impressive and represent a leap forward in robotic speed and dexterity. It also looks really fun.
“Achieving human-level speed and performance on real world tasks is a north star for the robotics research community.” Thus begins a paper written by a team of Google scientists who helped create, train, and test the table-tennis bot.
We’ve certainly seen quite a bit of advancement in robotics, with humanoid machines now boasting the performance chops to handle real-world tasks ranging from chopping ingredients for dinner to working in a BMW factory. But as the Google team’s quote suggests, adding speed to that precision is coming along a bit more, well, slowly.
That’s why the new table-tennis-playing robot is so impressive. As you can see in the following video, the bot held its own in games against human competitors, although it’s not quite Olympic-level yet. Across 29 matches, it defeated 13 players, a 45% win rate. While that’s certainly better than a lot of New Atlas writers would manage against any competitor, the bot only excelled against beginner and intermediate players; it lost every match it played against advanced players. It also wasn’t able to serve the ball.
Some highlights – Achieving human level competitive robot table tennis
“Even a few months back, we projected that realistically the robot may not be able to win against people it had not played before,” Pannag Sanketi told MIT Technology Review. “The system certainly exceeded our expectations. The way the robot outmaneuvered even strong opponents was mind blowing.” Sanketi, who led the project, is a senior staff software engineer at Google DeepMind, the company’s AI research arm, so this research was ultimately as much about data sets and decision-making as it was about the actual performance of the paddle-wielding robot.
To train the system, the researchers amassed a large amount of data about ball states in table tennis, including spin, speed, and position. Next, the bot’s “brain” was trained in the basics of the game during simulated matches. That was enough to get it playing human competitors. Then, during the matches, the system used a set of cameras to respond to human challengers using what it knew. It also continued learning and trying out new tactics to beat challengers, which meant it was able to improve on the fly.
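That two-stage recipe — pre-train on simulated rallies, then keep refining tactic choices during live play — can be sketched in miniature. The tactics, the reward model, and all the numbers below are illustrative assumptions for the sake of a runnable toy, not Google's actual method; the real system learns far richer low-level skills.

```python
import random

random.seed(0)

# Toy sketch of the pipeline described above: gather ball-state data,
# pre-train a policy in simulation, then keep adapting during play.
# Tactics, rewards, and constants here are illustrative assumptions.

TACTICS = ["forehand", "backhand", "lob"]

def simulate_ball():
    """A ball state as (spin, speed, position), each in [0, 1)."""
    return tuple(random.random() for _ in range(3))

def context(ball):
    """Discretize the observed state -- stands in for the camera pipeline."""
    spin, speed, pos = ball
    return ("right" if pos > 0.5 else "left",
            "slow" if speed < 0.3 else "fast")

def hit_reward(tactic, ball):
    """Hypothetical simulator: each tactic suits part of the state space."""
    spin, speed, pos = ball
    if tactic == "forehand":
        return 1.0 if pos > 0.5 else 0.2
    if tactic == "backhand":
        return 1.0 if pos <= 0.5 else 0.2
    return 1.0 if speed < 0.3 else 0.1  # lobs only pay off on slow balls

values = {}  # (context, tactic) -> running mean reward
counts = {}

def update(ctx, tactic, r):
    """Incrementally update the running mean reward for (ctx, tactic)."""
    counts[(ctx, tactic)] = counts.get((ctx, tactic), 0) + 1
    v = values.get((ctx, tactic), 0.0)
    values[(ctx, tactic)] = v + (r - v) / counts[(ctx, tactic)]

# Stage 1: offline pre-training on simulated rallies.
for _ in range(5000):
    ball = simulate_ball()
    tactic = random.choice(TACTICS)
    update(context(ball), tactic, hit_reward(tactic, ball))

# Stage 2: live play -- epsilon-greedy selection keeps trying new
# tactics against the challenger, so the estimates improve on the fly.
def choose(ctx, eps=0.1):
    if random.random() < eps:
        return random.choice(TACTICS)
    return max(TACTICS, key=lambda t: values.get((ctx, t), 0.0))

for _ in range(1000):
    ball = simulate_ball()
    ctx = context(ball)
    tactic = choose(ctx)
    update(ctx, tactic, hit_reward(tactic, ball))

best_right = max(TACTICS, key=lambda t: values[(("right", "fast"), t)])
print("best tactic for fast balls on the right:", best_right)  # forehand
```

The epsilon-greedy step is the "improve on the fly" part in spirit: even once a tactic looks best, the system occasionally tries alternatives, so its estimates keep sharpening against new opponents.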
“I’m a big fan of seeing robot systems actually working with and around real humans, and this is a fantastic example of this,” Sanketi told MIT. “It may not be a strong player, but the raw ingredients are there to keep improving and eventually get there.”
The following video shows even more details of the bot in training and the various skills it was able to employ.
Demonstrations – Achieving human level competitive robot table tennis
The research has been published in an arXiv paper.
Sources: MIT Technology Review, Google