
AI links video gaming and research

Today I want to introduce an Artificial Intelligence (AI) application developed by DeepMind, the company behind the famous AlphaGo, the Go-playing AI that has beaten many top-level human players over the last few years. The new application is a learning environment, the StarCraft II Learning Environment (SC2LE), built on the video game StarCraft II. StarCraft II is a real-time strategy game set in a fictional future in which several races struggle for survival and dominance in the universe. In the game you start with a base and a few workers; you need to collect resources, develop your economy and technology, scout and harass your enemy, build a strong army and finally destroy the enemy. Both StarCraft II and the original StarCraft are among the most popular e-sports titles, watched by millions of fans.

Figure 1: Rogue, the 2017 StarCraft II World Champion [1]

Researchers at DeepMind believe StarCraft is the next challenge for AI research after Go, since games like this offer a compelling way to evaluate and compare different learning and planning approaches on standardised tasks. StarCraft is an imperfect-information problem, because the map is only partially revealed, in contrast with Go, where players can see the full state of the game. The game has a large action space, with hundreds of units to control, and a professional player may need to keep track of three or four battle scenes at the same time. It also features delayed credit assignment: early decisions or strategies may only show their effect several minutes later. Simply put, StarCraft II is much harder than Go. SC2LE therefore provides a new challenge for research into reinforcement learning algorithms and architectures.

The goal of developing an AI for this video game is not to build a bot that can beat human players, but to demonstrate that AI can handle such a large amount of information and respond immediately, in contrast with Go, where there is far less information and players have much more time to think and decide. DeepMind also wants to use these famous games to promote its AI research to players and the general public.

Figure 2: A scene from StarCraft II [2]

They used technology similar to AlphaGo's: reinforcement learning based on neural networks. First, the AI needs to know the rules for controlling units and developing its base, including the construction order of buildings, as well as the conditions for winning, i.e. eliminating all of the enemy's buildings or forcing them to surrender by inflicting enough damage.

Figure 3: Left, a human's view of the game; right, its decomposition into feature layers [3]

Unlike humans, who read information from the screen, the AI observes "feature layers" generated by the StarCraft II application programming interface (API). Each layer represents one specific property of the game, such as unit type or hit points. The action interface was designed to mimic human players as closely as possible, including compound actions: the AI issues a sequence of actions just as a human would, for example holding shift to select multiple units and then clicking on the screen to move them.
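To make this concrete, here is a minimal sketch, in plain Python, of what a feature-layer observation and a compound action might look like. It is only an illustration: the layer names, screen size and helper functions (select_rect, move_screen) are assumptions made for this post, not the actual SC2LE/PySC2 interface.

```python
import numpy as np

# Illustrative sketch only: the layer names, resolution and helpers below are
# assumptions, not the real SC2LE/PySC2 API. The idea is that the agent sees
# the game as a stack of 2D "feature layers" (one per property) and acts
# through compound actions, mimicking a human's shift-select-then-click flow.

SCREEN = 84  # assumed feature-layer resolution

# Each named layer encodes one property of the visible map.
observation = {
    "unit_type":  np.zeros((SCREEN, SCREEN), dtype=np.int32),  # which unit occupies each cell
    "hit_points": np.zeros((SCREEN, SCREEN), dtype=np.int32),  # remaining HP of that unit
    "visibility": np.zeros((SCREEN, SCREEN), dtype=np.int8),   # fog of war: 0 hidden, 1 seen, 2 visible
}

def select_rect(top_left, bottom_right):
    """Compound action, step 1: drag-select the units inside a box."""
    print(f"select units in rectangle {top_left} -> {bottom_right}")

def move_screen(target_xy):
    """Compound action, step 2: order the selected units to move to a point."""
    print(f"move selected units to {target_xy}")

# A human-like compound action is simply an ordered sequence of such calls.
compound_action = [
    (select_rect, {"top_left": (10, 10), "bottom_right": (30, 30)}),
    (move_screen, {"target_xy": (60, 45)}),
]

for fn, args in compound_action:
    fn(**args)
```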

Figure 4: How the AI's actions differ from a human's [3]

The learning algorithms can discover useful strategies from games and replays rather than only choosing strategies from a given pool. The game provides a built-in score based on the player's performance, and performance can also be measured simply by whether the AI wins or loses a game and, if it loses, how long it survives.
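As a rough illustration (not DeepMind's code; the environment class, its methods and the random policy below are hypothetical), the following loop shows how those evaluation signals, win or loss, survival time and the in-game score, might be collected from a single episode:

```python
import random

# Illustrative sketch only: 'ToyStarCraftEnv', 'run_episode' and
# 'random_policy' are invented for this post. The point is the three
# evaluation signals mentioned above: the terminal win/loss outcome,
# how long the agent survives, and the running in-game score.

class ToyStarCraftEnv:
    def __init__(self, max_steps=1000):
        self.max_steps = max_steps

    def run_episode(self, policy):
        score, steps = 0, 0
        while steps < self.max_steps:
            steps += 1
            score += policy()            # in-game score points earned this step
            if random.random() < 0.001:  # stand-in for the game actually ending
                won = random.random() < 0.5
                return {"won": won, "steps_survived": steps, "score": score}
        # Episode hit the step limit without a decisive result.
        return {"won": False, "steps_survived": steps, "score": score}

def random_policy():
    # Pretend each step earns 0 or 1 score points (mining, kills, ...).
    return random.randint(0, 1)

result = ToyStarCraftEnv().run_episode(random_policy)
print(f"won={result['won']}  survived={result['steps_survived']} steps  "
      f"score={result['score']}")
```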

Unlike Go, StarCraft is much more complicated, with thousands of possible actions and combinations of actions. Although the AI can already make good decisions and carry out certain tasks with precise action sequences, it still plays very naively over a full game and performs far worse than beginner-level human players. There is still a long way for AI researchers to go. The day an AI beats top-level human players will be a milestone in AI research, because it will mean the technology has made substantial progress and become far more useful than it is today.

References:

  1. https://wcs.starcraft2.com/en-us/news/21196407/The-World-Champion-Is-Crowned/
  2. https://www.instant-gaming.com/en/169-buy-key-battlenet-starcraft-2-heart-of-the-swarm/
  3. Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Kuttler, John Agapiou, Julian Schrittwieser, et al. StarCraft II: A New Challenge for Reinforcement Learning. arXiv:1708.04782, 2017.

By Jun Ling