StarCraft II Has a New Grandmaster, And It’s Not Human – ScienceAlert

Posted: October 31, 2019 at 8:52 am


Video games were invented for humans, by humans. But that doesn't necessarily mean we're the best when it comes to playing them.

In a new achievement that signifies just how far artificial intelligence (AI) has progressed, scientists have developed a learning algorithm that rose to the very top echelon of the esports powerhouse StarCraft II, reaching Grandmaster level.

According to the researchers who created the AI, called AlphaStar, reaching the Grandmaster League means you're in the top 0.2 percent of StarCraft II players.

In other words, AlphaStar competes in this multiplayer real-time strategy game at a level that could trounce millions of humans foolhardy enough to take it on.

In recent years, we've seen AI come to dominate games that represent more traditional tests of human skill, mastering the strategies of chess, poker, and Go.

For David Silver, principal research scientist at AI firm DeepMind in the UK, those kinds of milestones, many of which DeepMind pioneered, are what led us to this moment: a game presenting even greater problems than the ancient games that have challenged human minds for centuries.

"Ever since computers cracked Go, chess, and poker, StarCraft has emerged by consensus as the next grand challenge," Silver says.

"The game's complexity is much greater than chess, because players control hundreds of units; more complex than Go, because there are 1,026 possible choices for every move; and players have less information about their opponents than in poker."

Add it all together and mastering the complex real-time battles of StarCraft seems almost impossible for a machine, so how did they do it?

In a new paper published this week, the DeepMind team describes how they developed a multi-agent reinforcement learning algorithm, which trained itself through self-play (playing games against itself) and by playing against humans, learning to mimic successful strategies as well as effective counter-strategies.
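To make the idea of league-style self-play a little more concrete, here is a minimal, hypothetical Python sketch of agents improving by repeatedly playing one another. The toy "strategies", payoff matrix, and update rule are stand-ins invented purely for illustration; they are not DeepMind's actual method or code.

```python
# Hypothetical sketch: a tiny league of agents trains by playing each other.
# The game is a rock-paper-scissors-style toy, not StarCraft II.
import random
from dataclasses import dataclass, field

STRATEGIES = ["rush", "economy", "tech"]
# PAYOFF[a][b] = +1 if strategy a beats strategy b, -1 if it loses, 0 for a mirror.
PAYOFF = {
    "rush":    {"rush": 0, "economy": 1, "tech": -1},
    "economy": {"rush": -1, "economy": 0, "tech": 1},
    "tech":    {"rush": 1, "economy": -1, "tech": 0},
}

@dataclass
class Agent:
    """An agent is just a weighting over the toy strategies."""
    weights: dict = field(default_factory=lambda: {s: 1.0 for s in STRATEGIES})

    def sample(self) -> str:
        total = sum(self.weights.values())
        probs = [self.weights[s] / total for s in STRATEGIES]
        return random.choices(STRATEGIES, probs)[0]

    def update(self, strategy: str, reward: float, lr: float = 0.1) -> None:
        # Reinforce strategies that win, dampen those that lose.
        self.weights[strategy] = max(0.01, self.weights[strategy] * (1 + lr * reward))

def train_league(n_agents: int = 4, games: int = 5000) -> list:
    """Agents repeatedly play opponents drawn from the league and adapt."""
    league = [Agent() for _ in range(n_agents)]
    for _ in range(games):
        a, b = random.sample(league, 2)
        sa, sb = a.sample(), b.sample()
        result = PAYOFF[sa][sb]      # +1 win, -1 loss, 0 draw for agent a
        a.update(sa, result)
        b.update(sb, -result)
    return league

if __name__ == "__main__":
    for i, agent in enumerate(train_league()):
        total = sum(agent.weights.values())
        mix = {s: round(w / total, 2) for s, w in agent.weights.items()}
        print(f"agent {i} strategy mix: {mix}")
```

In this toy setting, each agent's mix of strategies shifts toward whatever counters the rest of the league, which is the rough intuition behind training agents to mimic successful strategies and discover counter-strategies against one another.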

The research team has been working towards this goal for years. An earlier version of the system made headlines back in January when it started to beat human professionals.

"I will never forget the excitement and emotion we all felt when AlphaStar first started playing real competitive matches," says Dario "TLO" Wunsch, one of the top human StarCraft II players beaten by the algorithm.

"The system is very skilled at assessing its strategic position, and knows exactly when to engage or disengage with its opponent."

The latest algorithm takes things even further than that preliminary incarnation, and now plays under artificial constraints designed to simulate gameplay as a human experiences it (such as observing the game through a movable camera view and contending with the delay of network latency).
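As a rough illustration of what constraints like these might look like in code, here is a hypothetical Python wrapper that restricts an agent's view to a camera window and delays its actions by a few game steps. The Unit class, the wrapper, and its methods are invented for this sketch; they are not part of any real StarCraft II interface or of DeepMind's system.

```python
# Hypothetical sketch of human-like constraints: camera-limited observation
# plus a fixed delay before submitted actions take effect.
from collections import deque
from dataclasses import dataclass

@dataclass
class Unit:
    x: int
    y: int
    owner: int

class HumanLikeWrapper:
    def __init__(self, camera_size: int = 24, action_delay_steps: int = 2):
        self.camera_size = camera_size
        self.camera_pos = (0, 0)
        self.action_delay_steps = action_delay_steps
        # Actions wait in a queue, mimicking reaction time / network latency.
        self.pending_actions = deque()

    def observe(self, all_units: list) -> list:
        """Return only the units inside the current camera window."""
        cx, cy = self.camera_pos
        return [
            u for u in all_units
            if cx <= u.x < cx + self.camera_size and cy <= u.y < cy + self.camera_size
        ]

    def move_camera(self, x: int, y: int) -> None:
        self.camera_pos = (x, y)

    def submit(self, action: str) -> None:
        """Queue an action; it only reaches the game after the delay."""
        self.pending_actions.append((self.action_delay_steps, action))

    def step(self) -> list:
        """Advance one game step and release actions whose delay has elapsed."""
        released = []
        for _ in range(len(self.pending_actions)):
            delay, action = self.pending_actions.popleft()
            if delay <= 0:
                released.append(action)
            else:
                self.pending_actions.append((delay - 1, action))
        return released
```

The point of a wrapper like this is that the agent can only act on what falls inside its camera and cannot react instantly, which is closer to the experience of a human player than an all-seeing, zero-latency interface.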

With all the imposed limitations of a human, AlphaStar still reached Grandmaster level in real, online competitive play, representing not just a world first, but perhaps the sunset of these kinds of gaming challenges, given what the achievement may now make possible.

"Like StarCraft, real-world domains such as personal assistants, self-driving cars, or robotics require real-time decisions, over combinatorial or structured action spaces, given imperfectly observed information," the authors write.

"The success of AlphaStar in StarCraft II suggests that general-purpose machine learning algorithms may have a substantial effect on complex real-world problems."

The findings are reported in Nature.
