OpenAI, the artificial intelligence research lab co-founded by Sam Altman and Elon Musk, recently announced that it would send a team to Vancouver in August to participate in a professional tournament of the popular online battle game Dota 2. But unlike the other teams competing for the multi-million-dollar prize, OpenAI's team won't be human.
Called OpenAI Five, the team consists of five artificial neural networks that have been burning through the huge computing power of Google's cloud, practicing the game over and over, millions of times. OpenAI Five has already bested semi-pros at Dota 2 and will test its mettle against players in the top one percent come August.
Absurd as it may seem to some, games have proven to be an important part of AI research. From chess to Dota 2, every game AI has conquered has helped us break new ground in computer science and other fields.
Games Help Trace the Progress of AI
Since the idea of artificial intelligence first took shape in the 1950s, games have been an efficient way to measure its capabilities. They're especially convenient for testing new AI techniques because they provide well-defined rules and a clear human benchmark.
The first game that researchers tried to master with AI was chess, which in the field's early days was considered the ultimate test of its progress. In 1997, IBM's Deep Blue became the first computer to defeat a reigning world champion, Garry Kasparov, in a chess match. The AI behind Deep Blue used a brute-force method that analyzed millions of move sequences before making each move.
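The brute-force idea, exhaustively scoring every continuation before choosing a move, can be sketched in a few lines. (Deep Blue's actual engine was vastly more sophisticated, with specialized hardware and hand-tuned evaluation functions; this toy minimax search on tic-tac-toe, a game small enough to search completely, is purely illustrative.)

```python
def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Exhaustively score a position: +1 if X can force a win, -1 if O can, 0 if drawn."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    moves = [i for i in range(9) if board[i] is None]
    if not moves:
        return 0  # board full: draw
    scores = []
    for m in moves:
        board[m] = player                 # try the move...
        scores.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[m] = None                   # ...then undo it
    return max(scores) if player == 'X' else min(scores)

def best_move(board, player):
    """Pick the move with the best exhaustive score for `player`."""
    moves = [i for i in range(9) if board[i] is None]
    def score(m):
        board[m] = player
        s = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = None
        return s
    return max(moves, key=score) if player == 'X' else min(moves, key=score)
```

Even on this tiny board the search visits hundreds of thousands of positions; chess multiplies that astronomically, which is why Deep Blue needed purpose-built hardware.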
While the method enabled Deep Blue to master chess, it was nowhere near efficient enough to tackle more complicated board games. Go, for instance, has vastly more possible positions than chess, far too many to search exhaustively. By today's standards, Deep Blue's approach is considered rudimentary.
But in 2016, researchers at DeepMind, the Google-owned AI company, created AlphaGo, a Go-playing AI that beat Lee Sedol, one of the world's top players, 4 to 1 in a five-game match. AlphaGo replaced Deep Blue's brute-force method with deep learning, an AI technique loosely inspired by the workings of the human brain. Instead of examining every possible combination, AlphaGo studied how humans played Go, then tried to discern and replicate the patterns of successful play.
DeepMind's researchers later created AlphaGo Zero, an improved version that relied on reinforcement learning and required no human gameplay data at all. AlphaGo Zero was given only the basic rules of Go and learned the game by playing against itself countless times. It then beat its predecessor 100 games to zero.
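The self-play idea scales down to something you can run in seconds. AlphaGo Zero combined deep neural networks with tree search; the sketch below strips that to a simple tabular learner on the toy game Nim (players alternately take 1 to 3 matches from a pile; whoever takes the last match wins). Both "players" share one value table, the rules are the only input, and all parameters are illustrative.

```python
import random
from collections import defaultdict

def train(pile=10, episodes=50000, alpha=0.3, epsilon=0.1):
    """Learn Nim purely by self-play: no example games, just the rules."""
    Q = defaultdict(float)  # Q[(matches_left, move)] -> estimated value for the mover
    for _ in range(episodes):
        n, history = pile, []
        while n > 0:
            moves = [m for m in (1, 2, 3) if m <= n]
            if random.random() < epsilon:            # explore occasionally
                m = random.choice(moves)
            else:                                    # otherwise play greedily
                m = max(moves, key=lambda m: Q[(n, m)])
            history.append((n, m))
            n -= m
        # The player who took the last match won. Walk back through the game,
        # nudging each mover's value toward +1 (winner's moves) or -1 (loser's).
        reward = 1.0
        for state, move in reversed(history):
            Q[(state, move)] += alpha * (reward - Q[(state, move)])
            reward = -reward
    return Q

def best(Q, n):
    """The learned greedy move with n matches left."""
    return max((m for m in (1, 2, 3) if m <= n), key=lambda m: Q[(n, m)])
```

After training, the table rediscovers the known optimal strategy, always leaving the opponent a multiple of four matches, without ever seeing a human game.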
Board games have limitations, though. First, they are turn-based, so the AI isn't under pressure to make decisions in a constantly changing environment. Second, the AI has access to all the information in the environment (in this case, the board) and doesn't have to guess or take risks based on unknown factors, a property game theorists call perfect information.
Considering this, an AI called Libratus made the next breakthrough in artificial intelligence research by beating top professional players at Texas Hold 'Em poker. Developed by researchers at Carnegie Mellon University, Libratus showed that AI can compete with humans even when it has access to only partial information.
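Why is hidden information so hard? Because the best play is often a *mixed* strategy: act unpredictably so opponents can't exploit you. Libratus's actual method builds on counterfactual regret minimization, far beyond a blog snippet, but its simplest building block, regret matching, fits in a few lines. The sketch below (parameters illustrative) has two self-playing agents learn rock-paper-scissors; their average strategies converge toward the unexploitable one-third/one-third/one-third mix.

```python
import random

def payoff(a, b):
    """Rock(0)-paper(1)-scissors(2): +1 win, -1 loss, 0 tie for the player choosing a."""
    return (a - b + 1) % 3 - 1

def strategy_from(regrets):
    """Play each action in proportion to its positive accumulated regret; uniform if none."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1 / 3, 1 / 3, 1 / 3]

def sample(probs):
    r, c = random.random(), 0.0
    for a, p in enumerate(probs):
        c += p
        if r < c:
            return a
    return len(probs) - 1

def train(iters=200000):
    regrets = [[0.0] * 3 for _ in range(2)]
    strat_sum = [[0.0] * 3 for _ in range(2)]
    for _ in range(iters):
        strats = [strategy_from(regrets[p]) for p in (0, 1)]
        acts = [sample(strats[p]) for p in (0, 1)]
        for p in (0, 1):
            got = payoff(acts[p], acts[1 - p])
            for a in range(3):
                # regret: how much better action a would have done than what we played
                regrets[p][a] += payoff(a, acts[1 - p]) - got
                strat_sum[p][a] += strats[p][a]
    # The *average* strategy over training is what converges to the equilibrium.
    return [[s / iters for s in strat_sum[p]] for p in (0, 1)]
```

Run long enough, neither agent can be exploited by the other, the same property, scaled up enormously, that let Libratus hold its own against professional bluffers.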
Real-time video games are the next frontier for AI, and OpenAI isn't the only organization involved. Facebook has experimented with teaching AI to play the real-time strategy game StarCraft, and DeepMind has developed an AI that can play the first-person shooter Quake III. Each game presents its own set of challenges, but the common denominator is that they all force the AI to make decisions in real time and with incomplete information. Moreover, they give AI an arena where it can test itself against teams of opponents and learn teamwork.
For now, no one has developed an AI that can beat professional players at these games. But the very fact that AI is competing with humans at such complex games shows how far we've come in the field.
Games Help Develop AI in Other Fields
While scientists have used games as testbeds for developing new AI techniques, their achievements have not remained limited to games. In fact, game-playing AIs have paved the way for innovations in other fields.
In 2011, IBM introduced a supercomputer capable of natural language processing and generation (NLP/NLG), named Watson after the company's first CEO, Thomas J. Watson. The computer played the famous TV quiz show Jeopardy! against two of the game's best players and won. Watson later became the basis for a broad line of IBM AI services in domains including healthcare, cybersecurity and weather forecasting.
DeepMind is employing its experience from developing AlphaGo to apply AI in other fields where reinforcement learning can help. The company launched a project with National Grid UK to explore using AI to help balance the country's electricity supply and demand.
I for one am looking forward to seeing how OpenAI Five performs in August's Dota 2 competition. While I'm not particularly interested in whether the neural networks and their developers take home the $15 million prize, I'm keen to see what new windows their accomplishments will open.