Google's DeepMind made an AI that's better than 99.8% of humans at 'Starcraft II,' but don't expect it to change the world any time soon

Battle units belonging to Google DeepMind AI agent AlphaStar (in green) engage in an encounter with opposing players during a game of Starcraft II. (Image: DeepMind Press Office)

  • Google's DeepMind research team has developed an artificial intelligence program named AlphaStar that has proven superior to 99.8% of Starcraft II players.
  • In January, AlphaStar beat professional Starcraft II players in head-to-head matches - but the AI enjoyed some advantages over its opponents.
  • This summer, however, AlphaStar had to play under similar conditions to human players, and successfully reached the level of "Grandmaster," ranking it among the top 0.2% of players, a new study announced.
  • This accomplishment could have applications for real-world technology like virtual assistants, robotics, and self-driving cars, the researchers said.
  • However, George Cybenko, a professor of engineering at Dartmouth College, told Business Insider that as impressive as the AI might be, there's a big difference between being good at a video game and actually solving real-world problems.

Over the past two decades, scientists have been striving to develop artificial intelligence programs that can best their creators at competitive games. First it was backgammon, then chess, then the classic board game Go.

Now, the research team at Google's DeepMind AI subsidiary has created an AI named AlphaStar that is superior to 99.8% of human players at Blizzard's popular video game "Starcraft II."


"Starcraft II," a real-time strategy game played by millions of people worldwide, is also an incredibly popular esport. Professional players from all over the world compete against each other for hundreds of thousands of dollars in prize money while striving to attain the highest rank of "Grandmaster."

Last month, the researchers announced in a new study that AlphaStar achieved that Grandmaster level this summer - meaning the program is ranked among the top 0.2% of all players - and became the first AI to reach the uppermost echelons of the popular esport.


The feat is all the more impressive considering that AlphaStar played with the same constraints as any human - it couldn't see the whole map at once, for example.

What makes the AI such a formidable foe, the scientists said, is its ability to adapt to new situations and multitask accordingly. Indeed, putting aside the fact that AlphaStar is now technically one of the greatest "Starcraft II" players in the world, this achievement represents a major leap forward for machine learning.

AlphaStar's machine learning algorithms for multitasking could have broader applications in helping self-driving cars adapt to new situations, or for making virtual assistants like Google's own Assistant smarter and more helpful.

AlphaStar learned how to master 'Starcraft II' from human players

Developer Blizzard first released "Starcraft II" in 2010, and it quickly exploded in popularity as a strategy game in which competitors go head to head in matches lasting anywhere from five minutes to more than three hours.

There's a lot about the game that makes it ideal for testing how an AI handles difficult situations under pressure: players can see only small sections of the game space at a time, and they aren't aware of their opponents' resources or locations before making offensive or defensive moves unless they send strategically placed scouts ahead.


Units belonging to Google DeepMind's AI AlphaStar (in red) defend against an aggressive incursion by opposing units during a game of Starcraft II. (Image: DeepMind Press Office)

David Silver, the principal research scientist at DeepMind, explained in a recent press conference that there are roughly 10^26 possible choices for every move, and potentially thousands of moves in a single game.

These conditions make it extremely challenging for an AI program to master "Starcraft II," study co-author and DeepMind scientist Oriol Vinyals said during the press conference. There is no single best strategy that an AI can be programmed with. Instead, it has to learn how to adapt and react to constantly changing conditions.

The real breakthrough, however, may have come after the AI crushed some top-level players.

Last December, a preliminary version of AlphaStar beat professional Starcraft II player Grzegorz "MaNa" Komincz multiple times, after also defeating Komincz's teammate Dario "TLO" Wünsch. The two noted that the AI had several unfair advantages: it could see the whole map at once, and it could react faster than any human ever could.


DeepMind took this feedback and forced AlphaStar to abide by more human-like limits, Vinyals and Silver said. It could no longer see the whole map at once, and it was restricted to 22 commands every 5 seconds, comparable to the speed of human players. It also had to contend with a tenth-of-a-second lag between observing the screen and issuing a command.
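Neither the study nor the press conference detailed how those limits were enforced, but the idea is straightforward to sketch. Below is a minimal, hypothetical Python illustration of an action throttle with a rate cap and a fixed reaction delay - the names and structure are assumptions for illustration, not DeepMind's code.

```python
import collections

# Hypothetical throttle mimicking AlphaStar's human-like limits:
# at most `max_actions` commands in any sliding `window` seconds of
# game time, plus a fixed delay between observation and action.

class HumanLimits:
    def __init__(self, max_actions=22, window=5.0, reaction_delay=0.1):
        self.max_actions = max_actions
        self.window = window
        self.reaction_delay = reaction_delay
        self.executed = collections.deque()  # timestamps of recent commands
        self.pending = collections.deque()   # (ready_time, command) queue

    def request(self, command, now):
        """The agent decides on `command` at game time `now` (seconds)."""
        # The command only becomes eligible after the reaction delay.
        self.pending.append((now + self.reaction_delay, command))

    def step(self, now):
        """Return the commands allowed to execute at game time `now`."""
        # Forget executions that have slid out of the rate window.
        while self.executed and self.executed[0] <= now - self.window:
            self.executed.popleft()
        allowed = []
        while self.pending and self.pending[0][0] <= now:
            if len(self.executed) >= self.max_actions:
                break  # rate cap reached: remaining commands must wait
            _, command = self.pending.popleft()
            self.executed.append(now)
            allowed.append(command)
        return allowed
```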

In January 2019, Oriol Vinyals, a research scientist at Google DeepMind, participated in a demonstration of AlphaStar's Starcraft II gameplay. (Image: DeepMind Press Office)

Surprisingly, Silver said, these additional constraints actually strengthened AlphaStar's gameplay rather than weakening it as he had expected.

"AlphaStar developed a richer range of strategies; it chose to play more like what a human would do," he said.

Seven months after beating Wünsch and Komincz, AlphaStar went up against random competitors online (from a pool of players who opted in). It bested 99.8% of them.


AlphaStar succeeded where its predecessors had failed because, rather than being programmed with enough knowledge to handle every permutation of every possible turn, it taught itself how to play. The AI analyzed data recorded from tens of thousands of past "Starcraft II" games played by humans to formulate strategies.
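Learning from recorded human games in this way is commonly called imitation learning. The study's full training pipeline is far more involved, but the core supervised step can be sketched as follows - a hypothetical PyTorch illustration in which a policy network learns to predict the action a human took given what that player saw (all sizes and names are placeholders, not DeepMind's architecture).

```python
import torch
import torch.nn as nn

# Hypothetical imitation-learning step: train a policy network to
# predict the action a human player took, given the observation the
# human saw at that moment. Dimensions are illustrative placeholders.

OBS_DIM, NUM_ACTIONS = 512, 1000

policy = nn.Sequential(
    nn.Linear(OBS_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, NUM_ACTIONS),  # logits over the discrete action set
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(observations, human_actions):
    """observations: (batch, OBS_DIM) floats; human_actions: (batch,) action ids."""
    logits = policy(observations)
    loss = loss_fn(logits, human_actions)  # penalize disagreeing with the human
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```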

Another secret to its success: AlphaStar clocked hundreds of years of playtime in just a few months by playing against versions of itself, which Vinyals called "exploiter agents."
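Playing against copies of itself is a form of self-play. DeepMind's actual league of agents is considerably more sophisticated, but a toy version of the loop - a learning agent repeatedly matched against frozen snapshots of its past selves - might look like the sketch below (every interface here is a stand-in, not DeepMind's code).

```python
import copy
import random

def play_match(agent_a, agent_b):
    """Stand-in for a full Starcraft II game; returns a fake outcome."""
    return {"winner": random.choice([agent_a, agent_b])}

def train_on_game(agent, game):
    """Stand-in for a learning update based on one game's outcome."""
    pass

def self_play(agent, num_games=1000, snapshot_every=50):
    league = [copy.deepcopy(agent)]  # frozen past versions of the agent
    for i in range(num_games):
        opponent = random.choice(league)   # pick a past self to play
        game = play_match(agent, opponent)
        train_on_game(agent, game)         # learn from the result
        if (i + 1) % snapshot_every == 0:
            league.append(copy.deepcopy(agent))  # freeze a new snapshot
    return agent
```

Because these games run in simulation rather than real time, the agent can accumulate "hundreds of years" of experience in months, as the researchers described.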

The world's top Go player, Lee Sedol, reviews a match after the fourth game of the Google DeepMind Challenge Match against Google's artificial intelligence program AlphaGo in Seoul, South Korea, on March 13, 2016. (Image: Google/News1/Reuters)

This training method could be responsible for the AI's strategic supremacy.

How AlphaStar might have real-world applications

Gaming achievements aside, AlphaStar's newfound supremacy could prove useful in technological niches where an AI needs to multitask, cooperate with multiple actors working on the same problem, and plan ahead.


Silver said some examples include building better personal assistants that help human operators achieve their goals, improving Google's recommendation systems, and informing safer self-driving car technology - though those applications may still be far off.

Silver and Vinyals both noted that AlphaStar's program could be used to train AI operators of robotic hands, and perhaps one day robotic arms.

Despite AlphaStar's achievements, Vinyals said there's still plenty to work on. "We absolutely did not 'solve StarCraft,'" he said. In the future, he'd like to pit the AI against the same opponent over and over in a tournament-style setting, rather than cycling through players online. The AI is good, he said, but not infallible.

"A few players still beat it," Vinyals added.

A Google self-driving car. (Image: Reuters/Stephen Lam)


George Cybenko, an engineering professor at Dartmouth College who researches machine learning and neural computing, agrees that recommendation systems could see a big boost from AlphaStar's algorithms.

Recommendation systems strive to track what users do, and predict what they might like to do next. The best-known examples of such systems include Amazon's "recommended for you" feature and the vaunted YouTube algorithm. An AI that can learn and adapt more quickly to user preferences would prove superior to existing systems.
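To make that idea concrete, here is a deliberately simple, hypothetical Python sketch of a next-item recommender: it counts which items tend to follow which in users' histories, then suggests the most frequent successors of a user's latest action. Real systems like YouTube's are vastly more complex; this only illustrates the "track and predict" principle.

```python
from collections import Counter, defaultdict

# Toy next-item recommender (illustrative only): count how often item B
# follows item A across user histories, then recommend the most common
# successors of the user's most recent item.

transitions = defaultdict(Counter)

def observe(history):
    """history: ordered list of item ids a user interacted with."""
    for prev, nxt in zip(history, history[1:]):
        transitions[prev][nxt] += 1

def recommend(last_item, k=3):
    """Return up to k items most likely to follow `last_item`."""
    return [item for item, _ in transitions[last_item].most_common(k)]

observe(["chess", "go", "starcraft", "go", "starcraft"])
print(recommend("go"))  # -> ['starcraft']
```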

Cybenko, however, was quick to caution against too much optimism about AlphaStar's applicability.

"Games like 'StarCraft II' are 'closed worlds' in the sense that the rules are fixed, the goals of the players are well-defined and so on," he told Business Insider in an email. "The same is not true of 'open world' applications such as autonomous driving, cyber security, military operations, finance and trading, etc."

Better to focus on immediate applications for DeepMind's new AI, Cybenko said, in domains where the environment is highly constrained and the AI operator can control how players interact with it - say, robotics in a controlled and constrained manufacturing environment.


After all, in the real world, anything goes.

