
Archive for the ‘Alphazero’ Category

When 3 is greater than 5 – Chessbase News

Posted: October 22, 2020 at 9:59 pm



10/18/2020 – Star columnist Jon Speelman explores the exchange sacrifice. Speelman shares five illustrative examples to explain under which conditions giving up a rook for a minor piece is a good trade. As a general rule, and in fact (almost all?) of the time, you need other pieces on the board for an exchange sacrifice to work. | Pictured: Mikhail Tal and Tigran Petrosian following a post-mortem analysis at the 1961 European Team Championship in Oberhausen | Photo: Gerhard Hund

ChessBase 15 - Mega package

Find the right combination! ChessBase 15 program + new Mega Database 2020 with 8 million games and more than 80,000 master analyses. Plus ChessBase Magazine (DVD + magazine) and CB Premium membership for 1 year!


[Note that Jon Speelman also looks at the content of the article in video format, here embedded at the end of the article.]

During the Norway tournament, I streamed commentary a couple of times myself at twitch.tv/jonspeelman, but mainly listened to the official commentary by Vladimir Kramnik and Judit Polgar.

Both were very interesting, and Kramnik in particular has a chess aesthetic which I very much like. In his prime a powerhouse positional player with superb endgame technique, he started life much more tactically and his instinct is to sacrifice for the initiative whenever possible, especially the exchange: an approach which, after defence seemed to triumph under traditional chess engines, has been given a new lease of life by Alpha Zero.

So I thought today that I'd look at some nice exchange sacrifices, but first a moment from Norway where I was actually a tad disappointed by a winning sacrifice.

At the end of a beautiful positional game, which has been annotated here in Game of the Week, Carlsen finished off with the powerful

42.Re8!

and after

42...Qxe8 43.Qh6+ Kg8 44.Qxg6+ Kh8 45.Nf6

Tari resigned

Of course, I would have played Re8 myself in a game if I'd seen it, but I was hoping from an aesthetic perspective that Carlsen would complete this real masterclass and masterpiece with a nice zugzwang.

You start with 42.c4, preventing the very slight confusion that the immediate 42.f3 would allow after 42...c4, and then it goes:

42.c4 Kg8 43.f3

And for example: 43...Qd7 44.Qh6 Qe6 45.Kg3 fxe4 (45...Rg7 46.Nf6+ Kf7 47.Qh8 Qe7 48.Kg2) 46.dxe4 Rf4 47.Nxf4 exf4+ 48.Kxf4 Qf7+ 49.Kg3 Qg7 50.Qxg7+ Kxg7 51.Rxf8

Black can also try 43...Rh7

and here after 44.Rxf8+ Kg7

as the engine pointed out to me, it's best to use the Re8 trick:

45.Qxh7+! (45.Rf6 is much messier) 45...Kxh7 46.Re8!

Mega Database 2020

The ChessBase Mega Database 2020 is the premier chess database with over eight million games from 1560 to 2019 in high quality. Packing more than 85,000 annotated games, Mega 2020 contains the world's largest collection of high-class analysed games. Train like a pro! Prepare for your opponents with ChessBase and the Mega Database 2020. Let grandmasters explain how to best handle your favorite variations, improve your repertoire and much more.

The black queen is trapped.

For today's examples I used my memory and the ChessBase search mask when I couldn't track down a game exactly. For instance, for the first one by Botvinnik [pictured], I set him as Black with 0-1, disabled ignoring colours, and put Rd4, e5 and c5 on the board, which turned out to identify the single game I wanted: a hole in one! I also asked my stream on Thursday for any examples, and one of my stalwarts, a Scottish Frenchman, found me Reshevsky v Petrosian (I couldn't remember offhand who Petrosian's opponent was) and drew my attention to the beautiful double exchange sacrifice by Erwin L'Ami from Wijk aan Zee B.

Before the games themselves, which are in chronological order, it might be worthwhile to consider what makes an exchange sacrifice successful. Whole books have been written on this and I'm certainly not going to be able to go into serious detail. But a couple of points:

The need for extra pieces applies particularly to endgames. For instance, this diagram should definitely be lost for Black:

It's far from trivial, but as a general schema the white king should be able to advance right into Black's guts and then White can do things with his pawns. Something like: get Ke7 and Rf6, then g4, exchanging pawns if Black has played ...h5. Play f5, move the rook, play f6+, and arrange to play Rxf7.

But if you add a pair of rooks then it becomes enormously difficult. And indeed I really dont know whether God would beat God.


Master Class Vol.11: Vladimir Kramnik

This DVD allows you to learn from the example of one of the best players in the history of chess and from the explanations of the authors (Pelletier, Marin, Müller and Reeh) how to successfully organise your games strategically, and consequently how to keep your opponent permanently under pressure.

Read more:

When 3 is greater than 5 - Chessbase News

Written by admin

October 22nd, 2020 at 9:59 pm

Posted in Alphazero

AlphaZero – Wikipedia

Posted: October 17, 2020 at 10:54 am



Game-playing artificial intelligence

AlphaZero is a computer program developed by artificial intelligence research company DeepMind to master the games of chess, shogi and go. This algorithm uses an approach similar to AlphaGo Zero.

On December 5, 2017, the DeepMind team released a preprint introducing AlphaZero, which within 24 hours of training achieved a superhuman level of play in these three games by defeating world-champion programs Stockfish, elmo, and the 3-day version of AlphaGo Zero. In each case it made use of custom tensor processing units (TPUs) that the Google programs were optimized to use.[1] AlphaZero was trained solely via "self-play" using 5,000 first-generation TPUs to generate the games and 64 second-generation TPUs to train the neural networks, all in parallel, with no access to opening books or endgame tables. After four hours of training, DeepMind estimated AlphaZero was playing at a higher Elo rating than Stockfish 8; after 9 hours of training, the algorithm defeated Stockfish 8 in a time-controlled 100-game tournament (28 wins, 0 losses, and 72 draws).[1][2][3] The trained algorithm played on a single machine with four TPUs.

DeepMind's paper on AlphaZero was published in the journal Science on 7 December 2018.[4] In 2019 DeepMind published a new paper detailing MuZero, a new algorithm able to generalise AlphaZero's approach, playing both Atari and board games without knowledge of the rules or representations of the games.[5]

AlphaZero (AZ) is a more generalized variant of the AlphaGo Zero (AGZ) algorithm, and is able to play shogi and chess as well as Go. Differences between AZ and AGZ include:[1]

AZ has hard-coded rules for setting search hyperparameters, where AGZ tuned them by Bayesian optimization.

The neural network is now updated continually, rather than waiting for an iteration to complete.

Go (unlike chess) is symmetric under certain reflections and rotations; AlphaGo Zero exploited these symmetries to augment its training data, while AlphaZero does not.

Chess (unlike Go) can end in a draw, so AlphaZero estimates the expected outcome of a game rather than a simple win probability.

In terms of raw search speed, AlphaZero's Monte Carlo tree search examines just 80,000 positions per second in chess and 40,000 in shogi, compared to 70 million for Stockfish and 35 million for elmo. AlphaZero compensates for the lower number of evaluations by using its deep neural network to focus much more selectively on the most promising variations.[1]
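
To make that selectivity concrete, here is a minimal Python sketch of the PUCT-style selection rule described in the AlphaZero preprint: each candidate move is scored by its average search value plus an exploration bonus that is large when the network's prior for the move is high but the search has rarely tried it. The Node fields, the c_puct constant and the +1 smoothing are illustrative simplifications, not DeepMind's actual code.

    from dataclasses import dataclass, field
    from math import sqrt

    @dataclass
    class Node:
        prior: float              # P(s,a): policy-network prior for this move
        visit_count: int = 0      # N(s,a): times the search has tried it
        value_sum: float = 0.0    # sum of evaluations backed up through it
        children: dict = field(default_factory=dict)

        def q(self):
            # Mean action value; unvisited moves default to 0.
            return self.value_sum / self.visit_count if self.visit_count else 0.0

    def select_child(node, c_puct=1.5):
        # Score each child by Q + U; U shrinks as a move accumulates visits,
        # which is what steers the search toward the most promising variations.
        total_visits = sum(c.visit_count for c in node.children.values())
        def puct(child):
            u = c_puct * child.prior * sqrt(total_visits + 1) / (1 + child.visit_count)
            return child.q() + u
        return max(node.children.items(), key=lambda item: puct(item[1]))

    # Toy usage: with no visits yet, the higher-prior move is explored first.
    root = Node(prior=1.0)
    root.children = {"e2e4": Node(prior=0.6), "d2d4": Node(prior=0.4)}
    print(select_child(root)[0])   # e2e4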

AlphaZero was trained solely via self-play, using 5,000 first-generation TPUs to generate the games and 64 second-generation TPUs to train the neural networks. In parallel, the in-training AlphaZero was periodically matched against its benchmark (Stockfish, elmo, or AlphaGo Zero) in brief one-second-per-move games to determine how well the training was progressing. DeepMind judged that AlphaZero's performance exceeded the benchmark after around four hours of training for Stockfish, two hours for elmo, and eight hours for AlphaGo Zero.[1]
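
For reference, the preprint's training objective pulls the network's value output toward the actual self-play game result and its policy output toward the search's visit distribution. Below is a minimal PyTorch sketch of that loss; the batch shapes and the 64-move action space are invented for illustration, and the paper additionally applies L2 weight regularisation:

    import torch
    import torch.nn.functional as F

    def alphazero_loss(policy_logits, value, search_policy, outcome):
        # (z - v)^2: the value head learns to predict the game result z.
        value_loss = F.mse_loss(value, outcome)
        # Cross-entropy: the policy head learns to match the MCTS visit distribution.
        policy_loss = -(search_policy * F.log_softmax(policy_logits, dim=1)).sum(dim=1).mean()
        return value_loss + policy_loss

    batch, moves = 8, 64
    loss = alphazero_loss(
        torch.randn(batch, moves),                        # network policy output
        torch.tanh(torch.randn(batch)),                   # network value output
        torch.softmax(torch.randn(batch, moves), dim=1),  # search visit distribution
        torch.randint(-1, 2, (batch,)).float(),           # game outcomes in {-1, 0, 1}
    )
    print(loss.item())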

In AlphaZero's chess match against Stockfish 8 (2016 TCEC world champion), each program was given one minute per move. Stockfish was allocated 64 threads and a hash size of 1 GB,[1] a setting that Stockfish's Tord Romstad later criticized as suboptimal.[6][note 1] AlphaZero was trained on chess for a total of nine hours before the match. During the match, AlphaZero ran on a single machine with four application-specific TPUs. In 100 games from the normal starting position, AlphaZero won 25 games as White, won 3 as Black, and drew the remaining 72.[8] In a series of twelve 100-game matches (of unspecified time or resource constraints) against Stockfish starting from the 12 most popular human openings, AlphaZero won 290, drew 886 and lost 24.[1]

AlphaZero was trained on shogi for a total of two hours before the tournament. In 100 shogi games against elmo (World Computer Shogi Championship 27 summer 2017 tournament version with YaneuraOu 4.73 search), AlphaZero won 90 times, lost 8 times and drew twice.[8] As in the chess games, each program got one minute per move, and elmo was given 64 threads and a hash size of 1 GB.[1]

After 34 hours of self-learning of Go, AlphaZero played against AlphaGo Zero and won 60 games while losing 40.[1][8]

DeepMind stated in its preprint, "The game of chess represented the pinnacle of AI research over several decades. State-of-the-art programs are based on powerful engines that search many millions of positions, leveraging handcrafted domain expertise and sophisticated domain adaptations. AlphaZero is a generic reinforcement learning algorithm originally devised for the game of go that achieved superior results within a few hours, searching a thousand times fewer positions, given no domain knowledge except the rules."[1] DeepMind's Demis Hassabis, a chess player himself, called AlphaZero's play style "alien": It sometimes wins by offering counterintuitive sacrifices, like offering up a queen and bishop to exploit a positional advantage. "It's like chess from another dimension."[9]

Given the difficulty in chess of forcing a win against a strong opponent, the +28 −0 =72 result is a significant margin of victory. However, some grandmasters, such as Hikaru Nakamura and Komodo developer Larry Kaufman, downplayed AlphaZero's victory, arguing that the match would have been closer if the programs had had access to an opening database (since Stockfish was optimized for that scenario).[10] Romstad additionally pointed out that Stockfish is not optimized for rigidly fixed-time moves and that the version used was a year old.[6][11]

Similarly, some shogi observers argued that the elmo hash size was too low, that the resignation settings and the "EnteringKingRule" settings (cf. shogi Entering King) may have been inappropriate, and that elmo is already obsolete compared with newer programs.[12][13]

Papers headlined that the chess training took only four hours: "It was managed in little more than the time between breakfast and lunch."[2][14] Wired hyped AlphaZero as "the first multi-skilled AI board-game champ".[15] AI expert Joanna Bryson noted that Google's "knack for good publicity" was putting it in a strong position against challengers. "It's not only about hiring the best programmers. It's also very political, as it helps make Google as strong as possible when negotiating with governments and regulators looking at the AI sector."[8]

Human chess grandmasters generally expressed excitement about AlphaZero. Danish grandmaster Peter Heine Nielsen likened AlphaZero's play to that of a superior alien species.[8] Norwegian grandmaster Jon Ludvig Hammer characterized AlphaZero's play as "insane attacking chess" with profound positional understanding.[2] Former champion Garry Kasparov said "It's a remarkable achievement, even if we should have expected it after AlphaGo."[10][16]

Grandmaster Hikaru Nakamura was less impressed, and stated "I don't necessarily put a lot of credibility in the results simply because my understanding is that AlphaZero is basically using the Google supercomputer and Stockfish doesn't run on that hardware; Stockfish was basically running on what would be my laptop. If you wanna have a match that's comparable you have to have Stockfish running on a supercomputer as well."[7]

Top US correspondence chess player Wolff Morrow was also unimpressed, claiming that AlphaZero would probably not make the semifinals of a fair competition such as TCEC where all engines play on equal hardware. Morrow further stated that although he might not be able to beat AlphaZero if AlphaZero played drawish openings such as the Petroff Defence, AlphaZero would not be able to beat him in a correspondence chess game either.[17]

Motohiro Isozaki, the author of YaneuraOu, noted that although AlphaZero did comprehensively beat elmo, the rating of AlphaZero in shogi stopped growing at a point at most 100–200 higher than elmo. This gap is not that large, and elmo and other shogi software should be able to catch up in 1–2 years.[18]

DeepMind addressed many of the criticisms in their final version of the paper, published in December 2018 in Science.[4] They further clarified that AlphaZero was not running on a supercomputer; it was trained using 5,000 tensor processing units (TPUs), but only ran on four TPUs and a 44-core CPU in its matches.[19]

In the final results, Stockfish version 8 ran under the same conditions as in the TCEC superfinal: 44 CPU cores, Syzygy endgame tablebases, and a 32 GB hash size. Instead of a fixed time control of one minute per move, both engines were given 3 hours plus 15 seconds per move to finish the game. In a 1000-game match, AlphaZero won with a score of 155 wins, 6 losses, and 839 draws. DeepMind also played a series of games using the TCEC opening positions; AlphaZero also won convincingly.

Similar to Stockfish, Elmo ran under the same conditions as in the 2017 CSA championship. The version of Elmo used was WCSC27 in combination with YaneuraOu 2017 Early KPPT 4.79 64AVX2 TOURNAMENT. Elmo operated on the same hardware as Stockfish: 44 CPU cores and a 32 GB hash size. AlphaZero won 98.2% of games when playing black (which moves first in shogi) and 91.2% overall.

Human grandmasters were generally impressed with AlphaZero's games against Stockfish.[20] Former world champion Garry Kasparov said it was a pleasure to watch AlphaZero play, especially since its style was open and dynamic like his own.[21][22]

In the chess community, Komodo developer Mark Lefler called it a "pretty amazing achievement", but also pointed out that the data was old, since Stockfish had gained a lot of strength since January 2018 (when Stockfish 8 was released). Fellow developer Larry Kaufman said AlphaZero would probably lose a match against the latest version of Stockfish, Stockfish 10, under Top Chess Engine Championship (TCEC) conditions. Kaufman argued that the only advantage of neural network-based engines was that they used a GPU, so if there was no regard for power consumption (e.g. in an equal-hardware contest where both engines had access to the same CPU and GPU) then anything the GPU achieved was "free". Based on this, he stated that the strongest engine was likely to be a hybrid with neural networks and standard alpha-beta search.[23]

AlphaZero inspired the computer chess community to develop Leela Chess Zero, using the same techniques as AlphaZero. Leela contested several championships against Stockfish, where it showed similar strength.[24]

In 2019 DeepMind published MuZero, a unified system that played excellent chess, shogi, and go, as well as games in the Atari Learning Environment, without being pre-programmed with their rules.[25][26]

Romstad's full criticism, referenced above, reads: "The match results by themselves are not particularly meaningful because of the rather strange choice of time controls and Stockfish parameter settings: The games were played at a fixed time of 1 minute/move, which means that Stockfish has no use of its time management heuristics (a lot of effort has been put into making Stockfish identify critical points in the game and decide when to spend some extra time on a move; at a fixed time per move, the strength will suffer significantly). The version of Stockfish used is one year old, was playing with far more search threads than has ever received any significant amount of testing, and had way too small hash tables for the number of threads. I believe the percentage of draws would have been much higher in a match with more normal conditions."[7]

Link:

AlphaZero - Wikipedia

Written by admin

October 17th, 2020 at 10:54 am

Posted in Alphazero

AlphaZero: Shedding new light on chess, shogi, and Go …

Posted: at 10:54 am



As with Go, we are excited about AlphaZero's creative response to chess, which has been a grand challenge for artificial intelligence since the dawn of the computing age, with early pioneers including Babbage, Turing, Shannon, and von Neumann all trying their hand at designing chess programs. But AlphaZero is about more than chess, shogi or Go. To create intelligent systems capable of solving a wide range of real-world problems we need them to be flexible and generalise to new situations. While there has been some progress towards this goal, it remains a major challenge in AI research, with systems capable of mastering specific skills to a very high standard, but often failing when presented with even slightly modified tasks.

AlphaZero's ability to master three different complex games, and potentially any perfect information game, is an important step towards overcoming this problem. It demonstrates that a single algorithm can learn how to discover new knowledge in a range of settings. And, while it is still early days, AlphaZero's creative insights, coupled with the encouraging results we see in other projects such as AlphaFold, give us confidence in our mission to create general purpose learning systems that will one day help us find novel solutions to some of the most important and complex scientific problems.

This work was done by David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, and Demis Hassabis.

Follow this link:

AlphaZero: Shedding new light on chess, shogi, and Go ...

Written by admin

October 17th, 2020 at 10:54 am

Posted in Alphazero

AlphaGo Zero – Wikipedia

Posted: at 10:54 am



Artificial intelligence that plays Go

AlphaGo Zero is a version of DeepMind's Go software AlphaGo. AlphaGo's team published an article in the journal Nature on 19 October 2017, introducing AlphaGo Zero, a version created without using data from human games, and stronger than any previous version.[1] By playing games against itself, AlphaGo Zero surpassed the strength of AlphaGo Lee in three days by winning 100 games to 0, reached the level of AlphaGo Master in 21 days, and exceeded all the old versions in 40 days.[2]

Training artificial intelligence (AI) without datasets derived from human experts has significant implications for the development of AI with superhuman skills because expert data is "often expensive, unreliable or simply unavailable."[3] Demis Hassabis, the co-founder and CEO of DeepMind, said that AlphaGo Zero was so powerful because it was "no longer constrained by the limits of human knowledge".[4] David Silver, one of the first authors of DeepMind's papers published in Nature on AlphaGo, said that it is possible to have generalised AI algorithms by removing the need to learn from humans.[5]

Google later developed AlphaZero, a generalized version of AlphaGo Zero that could play chess and shogi in addition to Go. In December 2017, AlphaZero beat the 3-day version of AlphaGo Zero by winning 60 games to 40, and with 8 hours of training it outperformed AlphaGo Lee on an Elo scale. AlphaZero also defeated a top chess program (Stockfish) and a top shogi program (Elmo).[6][7]

AlphaGo Zero's neural network was trained using TensorFlow, with 64 GPU workers and 19 CPU parameter servers. Only four TPUs were used for inference. The neural network initially knew nothing about Go beyond the rules. Unlike earlier versions of AlphaGo, Zero only perceived the board's stones, rather than having some rare human-programmed edge cases to help recognize unusual Go board positions. The AI engaged in reinforcement learning, playing against itself until it could anticipate its own moves and how those moves would affect the game's outcome.[8] In the first three days AlphaGo Zero played 4.9 million games against itself in quick succession.[9] It appeared to develop the skills required to beat top humans within just a few days, whereas the earlier AlphaGo took months of training to achieve the same level.[10]

For comparison, the researchers also trained a version of AlphaGo Zero using human games, AlphaGo Master, and found that it learned more quickly, but actually performed more poorly in the long run.[11] DeepMind submitted its initial findings in a paper to Nature in April 2017, which was then published in October 2017.[1]

The hardware cost for a single AlphaGo Zero system in 2017, including the four TPUs, has been quoted as around $25 million.[12]

According to Hassabis, AlphaGo's algorithms are likely to be of the most benefit to domains that require an intelligent search through an enormous space of possibilities, such as protein folding or accurately simulating chemical reactions.[13] AlphaGo's techniques are probably less useful in domains that are difficult to simulate, such as learning how to drive a car.[14] DeepMind stated in October 2017 that it had already started active work on attempting to use AlphaGo Zero technology for protein folding, and stated it would soon publish new findings.[15][16]

AlphaGo Zero was widely regarded as a significant advance, even when compared with its groundbreaking predecessor, AlphaGo. Oren Etzioni of the Allen Institute for Artificial Intelligence called AlphaGo Zero "a very impressive technical result" in "both their ability to do it and their ability to train the system in 40 days, on four TPUs".[8] The Guardian called it a "major breakthrough for artificial intelligence", citing Eleni Vasilaki of Sheffield University and Tom Mitchell of Carnegie Mellon University, who called it "an impressive feat" and "an outstanding engineering accomplishment" respectively.[14] Mark Pesce of the University of Sydney called AlphaGo Zero "a big technological advance" taking us into "undiscovered territory".[17]

Gary Marcus, a psychologist at New York University, has cautioned that for all we know, AlphaGo may contain "implicit knowledge that the programmers have about how to construct machines to play problems like Go" and will need to be tested in other domains before being sure that its base architecture is effective at much more than playing Go. In contrast, DeepMind is "confident that this approach is generalisable to a large number of domains".[9]

In response to the reports, South Korean Go professional Lee Sedol said, "The previous version of AlphaGo wasn't perfect, and I believe that's why AlphaGo Zero was made." On the potential for AlphaGo's development, Lee said he will have to wait and see but also said it will affect young Go players. Mok Jin-seok, who directs the South Korean national Go team, said the Go world has already been imitating the playing styles of previous versions of AlphaGo and creating new ideas from them, and he is hopeful that new ideas will come out from AlphaGo Zero. Mok also added that general trends in the Go world are now being influenced by AlphaGo's playing style. "At first, it was hard to understand and I almost felt like I was playing against an alien. However, having had a great amount of experience, I've become used to it," Mok said. "We are now past the point where we debate the gap between the capability of AlphaGo and humans. It's now between computers." Mok has reportedly already begun analyzing the playing style of AlphaGo Zero along with players from the national team. "Though having watched only a few matches, we received the impression that AlphaGo Zero plays more like a human than its predecessors," Mok said.[18] Chinese Go professional Ke Jie commented on the remarkable accomplishments of the new program: "A pure self-learning AlphaGo is the strongest. Humans seem redundant in front of its self-improvement."[19]


On 5 December 2017, the DeepMind team released a preprint on arXiv introducing AlphaZero, a program using a generalized version of AlphaGo Zero's approach, which achieved within 24 hours a superhuman level of play in chess, shogi, and Go, defeating world-champion programs Stockfish, Elmo, and the 3-day version of AlphaGo Zero in each case.[6]

AlphaZero (AZ) is a more generalized variant of the AlphaGo Zero (AGZ) algorithm, and is able to play shogi and chess as well as Go. The differences between AZ and AGZ (hard-coded search hyperparameters, continual network updates, no reliance on Go's board symmetries, and accounting for draws) are noted in the AlphaZero article above.[6]

An open source program, Leela Zero, based on the ideas from the AlphaGo papers is available. It uses a GPU instead of the TPUs recent versions of AlphaGo rely on.

Link:

AlphaGo Zero - Wikipedia

Written by admin

October 17th, 2020 at 10:54 am

Posted in Alphazero

AlphaZero Crushes Stockfish In New 1,000-Game Match …

Posted: at 10:54 am



In news reminiscent of the initial AlphaZero shockwave last December, the artificial intelligence company DeepMind released astounding results from an updated version of the machine-learning chess project today.

The results leave no question, once again, that AlphaZero plays some of the strongest chess in the world.

The updated AlphaZero crushed Stockfish 8 in a new 1,000-game match, scoring +155 -6 =839. (See below for three sample games from this match with analysis by Stockfish 10 and video analysis by GM Robert Hess.)

AlphaZero also bested Stockfish in a series of time-odds matches, soundly beating the traditional engine even at time odds of 10 to one.

In additional matches, the new AlphaZero beat the "latest development version" of Stockfish, with virtually identical results as the match vs Stockfish 8, according to DeepMind. The pre-release copy of the journal article, which is dated Dec. 7, 2018, does not specify the exact development version used.

[Update: Today's release of the full journal article specifies that the match was against the latest development version of Stockfish as of Jan. 13, 2018, which was Stockfish 9.]

The machine-learning engine also won all matches against "a variant of Stockfish that uses a strong opening book," according to DeepMind. Adding the opening book did seem to help Stockfish, which finally won a substantial number of games when AlphaZero was Black, but not enough to win the match.

AlphaZero's results (wins green, losses red) vs the latest Stockfish and vs Stockfish with a strong opening book. Image by DeepMind via Science.

The results will be published in an upcoming article by DeepMind researchers in the journal Science and were provided to selected chess media by DeepMind, which is based in London and owned by Alphabet, the parent company of Google.

The 1,000-game match was played in early 2018. In the match, both AlphaZero and Stockfish were given three hours each game plus a 15-second increment per move. This time control would seem to make obsolete one of the biggest arguments against the impact of last year's match, namely that the 2017 time control of one minute per move played to Stockfish's disadvantage.

With three hours plus the 15-second increment, no such argument can be made, as that is an enormous amount of playing time for any computer engine. In the time odds games, AlphaZero was dominant up to 10-to-1 odds. Stockfish only began to outscore AlphaZero when the odds reached 30-to-1.

AlphaZero's results (wins green, losses red) vs Stockfish 8 in time odds matches. Image by DeepMind via Science.

AlphaZero's results in the time odds matches suggest it is not only much stronger than any traditional chess engine, but that it also uses a much more efficient search for moves. According to DeepMind, AlphaZero uses a Monte Carlo tree search, and examines about 60,000 positions per second, compared to 60 million for Stockfish.

An illustration of how AlphaZero searches for chess moves. Image by DeepMind via Science.

What can computer chess fans conclude after reading these results? AlphaZero has solidified its status as one of the elite chess players in the world. But the results are even more intriguing if you're following the ability of artificial intelligence to master general gameplay.

According to the journal article, the updated AlphaZero algorithm is identical in three challenging games: chess, shogi, and go. This version of AlphaZero was able to beat the top computer players of all three games after just a few hours of self-training, starting from just the basic rules of the games.

The updated AlphaZero results come exactly one year to the day since DeepMind unveiled the first, historic AlphaZero results in a surprise match vs Stockfish that changed chess forever.

Since then, an open-source project called Lc0 has attempted to replicate the success of AlphaZero, and the project has fascinated chess fans. Lc0 now competes along with the champion Stockfish and the rest of the world's top engines in the ongoing Chess.com Computer Chess Championship.

CCC fans will be pleased to see that some of the new AlphaZero games include "fawn pawns," the CCC-chat nickname for lone advanced pawns that cramp an opponent's position. Perhaps the establishment of these pawns is a critical winning strategy, as it seems AlphaZero and Lc0 have independently learned it.

DeepMind released 20 sample games chosen by GM Matthew Sadler from the 1,000-game match. Chess.com has selected three of these games with deep analysis by Stockfish 10 and video analysis by GM Robert Hess. You can download the 20 sample games at the bottom of this article, analyzed by Stockfish 10, and four sample games analyzed by Lc0.

Update: After this article was published, DeepMind released 210 sample games that you can download here.

Selected game 1 with analysis by Stockfish 10:

Game 1 video analysis by GM Robert Hess:

Selected game 2 with analysis by Stockfish 10:

Game 2 video analysis by GM Robert Hess:

Selected game 3 with analysis by Stockfish 10:

Game 3 video analysis by GM Robert Hess:

IM Anna Rudolf also made a video analysis of one of the sample games, calling it "AlphaZero's brilliancy."

The new version of AlphaZero trained itself to play chess starting just from the rules of the game, using machine-learning techniques to continually update its neural networks. According to DeepMind, 5,000 TPUs (Google's tensor processing unit, an application-specific integrated circuit for artificial intelligence) were used to generate the first set of self-play games, and then 16 TPUs were used to train the neural networks.

The total training time in chess was nine hours from scratch. According to DeepMind, it took the new AlphaZero just four hours of training to surpass Stockfish; by nine hours it was far ahead of the world-champion engine.

For the games themselves, Stockfish used 44 CPU (central processing unit) cores and AlphaZero used a single machine with four TPUs and 44 CPU cores. Stockfish had a hash size of 32 GB and used Syzygy endgame tablebases.

AlphaZero's results vs. Stockfish in the most popular human openings. In the left bar, AlphaZero plays White; in the right bar, AlphaZero is Black. Image by DeepMind via Science.

The sample games released were deemed impressive by chess professionals who were given preview access to them. GM Robert Hess categorized the games as "immensely complicated."

DeepMind itself noted the unique style of its creation in the journal article:

"In several games, AlphaZero sacrificed pieces for long-term strategic advantage, suggesting that it has a more fluid, context-dependent positional evaluation than the rule-based evaluations used by previous chess programs," the DeepMind researchers said.

The AI company also emphasized the importance of using the same AlphaZero version in three different games, touting it as a breakthrough in overall game-playing intelligence:

"These results bring us a step closer to fulfilling a longstanding ambition of artificial intelligence: a general game-playing system that can learn to master any game," the DeepMind researchers said.

You can download the 20 sample games provided by DeepMind and analyzed by Chess.com using Stockfish 10 on a powerful computer. The first set of games contains 10 games with no opening book, and the second set contains games with openings from the 2016 TCEC (Top Chess Engine Championship).

PGN downloads:

20 games with analysis by Stockfish 10:

4 selected games with analysis by Lc0:

Love AlphaZero? You can watch the machine-learning chess project it inspired, Lc0, in the ongoing Computer Chess Championship now.

Read the rest here:

AlphaZero Crushes Stockfish In New 1,000-Game Match ...

Written by admin

October 17th, 2020 at 10:54 am

Posted in Alphazero

ACM Prize in Computing Awarded to AlphaGo Developer – HPCwire

Posted: April 6, 2020 at 5:57 pm



NEW YORK, April 1, 2020 – ACM, the Association for Computing Machinery, announced that David Silver is the recipient of the 2019 ACM Prize in Computing for breakthrough advances in computer game-playing. Silver is a Professor at University College London and a Principal Research Scientist at DeepMind, a Google-owned artificial intelligence company based in the United Kingdom. Silver is recognized as a central figure in the growing and impactful area of deep reinforcement learning.

Silver's most highly publicized achievement was leading the team that developed AlphaGo, a computer program that defeated the world champion of the game Go, a popular abstract board game. Silver developed the AlphaGo algorithm by deftly combining ideas from deep-learning, reinforcement-learning, traditional tree-search and large-scale computing. AlphaGo is recognized as a milestone in artificial intelligence (AI) research and was ranked by New Scientist magazine as one of the top 10 discoveries of the last decade.

AlphaGo was initialized by training on expert human games followed by reinforcement learning to improve its performance. Subsequently, Silver sought even more principled methods for achieving greater performance and generality. He developed the AlphaZero algorithm that learned entirely by playing games against itself, starting without any human data or prior knowledge except the game rules. AlphaZero achieved superhuman performance in the games of chess, Shogi, and Go, demonstrating unprecedented generality of the game-playing methods.

The ACM Prize in Computing recognizes early-to-mid-career computer scientists whose research contributions have fundamental impact and broad implications. The award carries a prize of $250,000, from an endowment provided by Infosys Ltd. Silver will formally receive the ACM Prize at ACM's annual awards banquet on June 20, 2020 in San Francisco.

Computer Game-Playing and AI

Teaching computer programs to play games, against humans or other computers, has been a central practice in AI research since the 1950s. Game playing, which requires an agent to make a series of decisions toward an objective (winning), is seen as a useful facsimile of human thought processes. Game-playing also affords researchers results that are easily quantifiable: did the computer follow the rules, score points, and/or win the game?

At the dawn of the field, researchers developed programs to compete with humans at checkers, and over the decades, increasingly sophisticated chess programs were introduced. A watershed moment occurred in 1997, when ACM sponsored a tournament in which IBM's Deep Blue became the first computer to defeat a world chess champion, Garry Kasparov. At the same time, the objective of the researchers was not simply to develop programs to win games, but to use game-playing as a touchstone to develop machines with capacities that simulated human intelligence.

"Few other researchers have generated as much excitement in the AI field as David Silver," said ACM President Cherri M. Pancake. "Human vs. machine contests have long been a yardstick for AI. Millions of people around the world watched as AlphaGo defeated the Go world champion, Lee Sedol, on television in March 2016. But that was just the beginning of Silver's impact. His insights into deep reinforcement learning are already being applied in areas such as improving the efficiency of the UK's power grid, reducing power consumption at Google's data centers, and planning the trajectories of space probes for the European Space Agency."

"Infosys congratulates David Silver for his accomplishments in making foundational contributions to deep reinforcement learning and thus rapidly accelerating the state of the art in artificial intelligence," said Pravin Rao, COO of Infosys. "When computers can defeat world champions at complex board games, it captures the public imagination and attracts young researchers to areas like machine learning. Importantly, the frameworks that Silver and his colleagues have developed will inform all areas of AI, as well as practical applications in business and industry for many years to come. Infosys is proud to provide financial support for the ACM Prize in Computing and to join with ACM in recognizing outstanding young computing professionals."

Silver is credited with being one of the foremost proponents of a new machine learning tool called deep reinforcement learning, in which the algorithm learns by trial-and-error in an interactive environment. The algorithm continually adjusts its actions based on the information it accumulates while it is running. In deep reinforcement learning, artificial neural networks (computation models which use different layers of mathematical processing) are effectively combined with the reinforcement learning strategies to evaluate the trial-and-error results. Instead of having to perform calculations of every possible outcome, the algorithm makes predictions, leading to a more efficient execution of a given task.
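
As a concrete toy version of that trial-and-error loop, the sketch below uses a lookup table where a deep network would stand in real systems, together with an invented five-state environment; it illustrates the learning rule only and is not DeepMind's code:

    import random

    # Invented environment: states 0..4 on a chain; reaching state 4 pays reward 1.
    def step(state, action):
        nxt = max(0, min(4, state + action))
        return nxt, (1.0 if nxt == 4 else 0.0), nxt == 4

    Q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}   # value estimates
    alpha, gamma, epsilon = 0.5, 0.9, 0.1

    for _ in range(200):                       # 200 episodes of trial and error
        state, done = 0, False
        while not done:
            # Mostly act on current estimates, occasionally explore at random.
            if random.random() < epsilon:
                action = random.choice((-1, 1))
            else:
                action = max((-1, 1), key=lambda a: Q[(state, a)])
            nxt, reward, done = step(state, action)
            future = 0.0 if done else gamma * max(Q[(nxt, -1)], Q[(nxt, 1)])
            # Adjust the estimate toward what actually happened.
            Q[(state, action)] += alpha * (reward + future - Q[(state, action)])
            state = nxt

    print(round(max(Q[(0, -1)], Q[(0, 1)]), 3))   # learned value of the start state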

Learning Atari from Scratch

At the Neural Information Processing Systems Conference (NeurIPS) in 2013, Silver and his colleagues at DeepMind presented a program that could play 50 Atari games to human-level ability. The program learned to play the games based solely on observing the pixels and scores while playing. Earlier reinforcement learning approaches had not achieved anything close to this level of ability.

Silver and his colleagues published their method of combining reinforcement learning with artificial neural networks in a seminal 2015 paper, "Human Level Control Through Deep Reinforcement Learning," which was published in Nature. The paper has been cited nearly 10,000 times and has had an immense impact on the field. Subsequently, Silver and his colleagues continued to refine these deep reinforcement learning algorithms with novel techniques, and these algorithms remain among the most widely used tools in machine learning.

AlphaGo

The game of Go was invented in China 2,500 years ago and has remained popular, especially in Asia. Go is regarded as far more complex than chess, as there are vastly more potential moves a player can make, as well as many more ways a game can play out. Silver first began exploring the possibility of developing a computer program that could master Go when he was a PhD student at the University of Alberta, and it remained a continuing research interest.

Silver's key insight in developing AlphaGo was to combine deep neural networks with an algorithm used in computer game-playing called Monte Carlo Tree Search. One strength of Monte Carlo Tree Search is that, while pursuing the perceived best strategy in a game, the algorithm is also continually investigating other alternatives. AlphaGo's defeat of world Go champion Lee Sedol in March 2016 was hailed as a milestone moment in AI. Silver and his colleagues published the foundational technology underpinning AlphaGo in the paper "Mastering the Game of Go with Deep Neural Networks and Tree Search," published in Nature in 2016.

AlphaGo Zero, AlphaZero and AlphaStar

Silver and his team at DeepMind have continued to develop new algorithms that have significantly advanced the state of the art in computer game-playing and achieved results many in the field thought were not yet possible for AI systems. In developing the AlphaGo Zero algorithm, Silver and his collaborators demonstrated that it is possible for a program to master Go without any access to human expert games. The algorithm learns entirely by playing itself without any human data or prior knowledge, except the rules of the game and, in a further iteration, without even knowing the rules.

Later, the DeepMind team's AlphaZero also achieved superhuman performance in chess, Shogi, and Go. In chess, AlphaZero easily defeated world computer chess champion Stockfish, a high-performance program designed by grandmasters and chess programming experts. Just last year, the DeepMind team, led by Silver, developed AlphaStar, which mastered the multiplayer video game StarCraft II, which had been regarded as a stunningly hard challenge for AI learning systems.

The DeepMind team continues to advance these technologies and find applications for them. Among other initiatives, Google is exploring how to use deep reinforcement learning approaches to manage robotic machinery at factories.

Biographical Background

David Silver is Lead of the Reinforcement Learning Research Group at DeepMind, and a Professor of Computer Science at University College London. DeepMind, a subsidiary of Google, seeks to combine the best techniques from machine learning and systems neuroscience to build powerful general-purpose learning algorithms.

Silver earned Bachelor's and Master's degrees from Cambridge University in 1997 and 2000, respectively. In 1998 he co-founded the video games company Elixir Studios, where he served as Chief Technology Officer and Lead Programmer. Silver returned to academia and earned a PhD in Computer Science from the University of Alberta in 2009. Silver's numerous honors include the Marvin Minsky Medal (2018) for outstanding achievements in artificial intelligence, the Royal Academy of Engineering Silver Medal (2017) for outstanding contribution to UK engineering, and the Mensa Foundation Prize (2017) for best scientific discovery in the field of artificial intelligence.

About the ACM Prize in Computing

The ACM Prize in Computing recognizes an early- to mid-career fundamental innovative contribution in computing that, through its depth, impact and broad implications, exemplifies the greatest achievements in the discipline. The award carries a prize of $250,000. Financial support is provided by an endowment from Infosys Ltd. The ACM Prize in Computing was previously known as the ACM-Infosys Foundation Award in the Computing Sciences from 2007 through 2015. ACM Prize recipients are invited to participate in the Heidelberg Laureate Forum, an annual networking event that brings together young researchers from around the world with recipients of the ACM A.M. Turing Award (computer science), the Abel Prize (mathematics), the Fields Medal (mathematics), and the Nevanlinna Prize (mathematics).

About ACM

ACM, the Association for Computing Machinery, is the world's largest educational and scientific computing society, uniting educators, researchers and professionals to inspire dialogue, share resources and address the field's challenges. ACM strengthens the computing profession's collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.

About Infosys

Infosys is a global leader in next-generation digital services and consulting. We enable clients in 46 countries to navigate their digital transformation. With over three decades of experience in managing the systems and workings of global enterprises, we expertly steer our clients through their digital journey. We do it by enabling the enterprise with an AI-powered core that helps prioritize the execution of change. We also empower the business with agile digital at scale to deliver unprecedented levels of performance and customer delight. Our always-on learning agenda drives their continuous improvement through building and transferring digital skills, expertise, and ideas from our innovation ecosystem.

Source: ACM

Read the rest here:

ACM Prize in Computing Awarded to AlphaGo Developer - HPCwire

Written by admin

April 6th, 2020 at 5:57 pm

Posted in Alphazero

Magnus Carlsen: "In my country the authorities reacted quickly and the situation is under control" – Sportsfinding

Posted: at 5:57 pm



Saturday, 4 April 2020 07:45

In 2011, he climbed to number 1, and no one has moved him from that vantage point since. Classical chess world champion since 2013, he closed 2019 with the triple crown. The coronavirus crisis has left him, for now, without a challenge; but he is challenging the lockdown with an online elite tournament.

LONDON, ENGLAND – NOVEMBER 28: Current world champion Norwegian Magnus Carlsen speaks to the media after beating his opponent, American Fabiano Caruana, to regain his World Chess Championship title, on November 28, 2018 in London, England. (Photo by Dan Kitwood / Getty Images)

"I have had a small cold for quite some time, but I have no reason to think that it is connected to the coronavirus." Magnus Carlsen (Tønsberg, Norway; November 30, 1990) speaks to EL MUNDO from Oslo. He lives near the center of the city, where he is confined.

This forcefulness has a reason. Carlsen has challenged the lockdown with an online elite tournament in which he will face seven of the best chess players in the world. Broadcast by chess24.com from April 17, it has the largest prize pool ever for an online event: 250,000 dollars.


See the article here:

Magnus Carlsen: "In my country the authorities reacted quickly and the situation is under control" - Sportsfinding

Written by admin

April 6th, 2020 at 5:57 pm

Posted in Alphazero

Fat Fritz 1.1 update and a small gift – Chessbase News

Posted: March 8, 2020 at 10:47 am



3/5/2020 – As promised in the announcement of the release of Fat Fritz, the first update to the neural network has been released, stronger and more mature, and with it comes the brand new smaller and faster Fat Fritz for CPU neural network which will produce quality play even on a pure CPU setup. If you leave it analyzing the start position, it will say it likes the Sicilian Najdorf, which says a lot about its natural style. Read on to find out more!

If you haven't yet updated your copy of Fat Fritz, now is the time to do it, as it brings more than minor enhancements or a few bug fixes. This update will bring the first major update to the Fat Fritz neural network, stronger than ever, as well as a new smaller one that is quite strong on a GPU, but also shines on even a plain CPU setup.

When you open Fritz 17, presuming you have Fat Fritz installed, you will be greeted with a message in the bottom right corner of your screen advising you there is an update available for Fat Fritz.

When you see this, click on 'Update Fat Fritz'.

Then you will be greeted with the update pane, and just need to click Next to get to it.

When Fat Fritz was released with Fritz 17, updates were promised with the assurance it was still improving. Internally the version number of the release was v226, while this newest one is v471.

While thorough testing is always a challenge since resources are limited, a match against Leela 42850 at 1600 nodes per move over 1000 games yielded a positive result:

Score of Fat Fritz 471k vs Leela 42850: +260 -153 =587 [0.553] Elo difference: 37.32 +/- 13.79

1000 of 1000 games finished.
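
As an aside, that Elo figure follows from the standard logistic rating model, under which a score fraction p corresponds to a rating gap of -400 * log10(1/p - 1); checking the published numbers:

    from math import log10

    wins, losses, draws = 260, 153, 587
    p = (wins + 0.5 * draws) / (wins + losses + draws)   # score fraction: 0.5535
    elo = -400 * log10(1 / p - 1)
    print(round(elo, 2))   # 37.32, matching the reported difference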

Also, in a match of 254 games at 3m+1s against Stockfish 11, in AlphaZero ratio conditions, this new version also came out ahead by roughly 10 Elo.

Still, it isn't about Elo and never was, and the result is merely to say that you should enjoy strong competitive analysis. For one thing, it is eminently clear that while both Leela and Fat Fritz enjoy much of the same AlphaZero heritage, there are also distinct differences in style.

Perhaps one of the most obvious ways to highlight this is just the start position. If you let the engine run for a couple of minutes on decent hardware, it will tell you what it thinks is the best line of play for both White and Black based on its understanding of chess.

As such, I ran Leela 42850 with its core settings to see what it thought. After 2 million nodes it was adamant that perfect chess should take both players down the highly respected Berlin Defence of the Ruy Lopez.

Leela 42850 analysis:

info depth 19 seldepth 56 time 32675 nodes 2181544 score cp 23 hashfull 210 nps 75740 tbhits 0 pv e2e4 e7e5 g1f3 b8c6 f1b5 g8f6 e1g1 f6e4 d2d4 e4d6 b5c6 d7c6 d4e5 d6f5 d1d8 e8d8 h2h3

This is fine, but it is also very much a matter of taste.

Fat Fritz has a different outlook on chess, as has already been pointed out in the past. At first it too will show a preference for the Ruy Lopez, though not the Berlin, but given a bit more time, by 2.6 million nodes it will declare that the best opening, per its understanding of chess and calculations, is the Sicilian Najdorf.

Within a couple of minutes this is its mainline:

info depth 16 seldepth 59 time 143945 nodes 7673855 score cp 28 wdl 380 336 284 hashfull 508 nps 54227 tbhits 0 pv e2e4 c7c5 g1f3 d7d6 b1c3 g8f6 d2d4 c5d4 f3d4 a7a6 f1e2 e7e5 d4b3 f8e7 e1g1 c8e6 c1e3 e8g8 f1e1 b8c6 h2h3 h7h6 e2f3 a8c8 d1d2 c6b8 a2a4 f6h7 a1d1 b8d7 f3e2 h7f6
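
Incidentally, the "wdl 380 336 284" field in that output reports win/draw/loss chances in per-mille from the side to move, so the expected score behind the evaluation can be read off directly (assuming the usual WDL convention):

    w, d, l = 380, 336, 284                      # per-mille from the info string
    expected_score = (w + 0.5 * d) / (w + d + l)
    print(expected_score)                        # 0.548 for White, in line with the modest "score cp 28"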

From a purely analytical point of view it is quite interesting that it found 10.Re1! in the mainline. In a position where White scores 52.5% on average, it picks a move that scores 58.3% / 58.9%.

Remember there is no right or wrong here, but it does help show the natural inclinations of each of these neural networks.

Even if chess is ultimately a draw, that doesn't mean there is only one path, so while all roads may lead to Rome, they don't all need to pass through New Jersey.

Trying to find the ideal recipe of parameters for an engine can be daunting, and previously multiple attempts had been made with the well-known tuner CLOP by Rémi Coulom. Very recently a completely new tuner, 'Bayes-Skopt', was designed by Karlson Pfannschmidt, a PhD student in Machine Learning at Paderborn University in Germany, who goes by the online nickname "Kiudee" (pronounced like the letters Q-D). It was used to find new improved values for Leela, which are now the new defaults.

His tuner is described as "a fully Bayesian implementation of sequential model-based optimization", a mouthful I know, and was set up with his kind help as it ran for over a week, producing quite fascinating graphical imagery as it refined its values.

These values, slightly rounded, have been added as the new de facto defaults for Fat Fritz.
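
For readers who want to experiment, driving a sequential model-based tuner with the scikit-optimize library, from which Bayes-Skopt takes its name, looks roughly like the sketch below. The two parameter names and the synthetic play_match function are placeholders, not the actual Fat Fritz tuning setup:

    from skopt import gp_minimize
    from skopt.space import Real

    # Hypothetical engine parameters to tune.
    space = [Real(0.5, 4.0, name="cpuct"),
             Real(0.1, 1.5, name="policy_temperature")]

    def play_match(cpuct, temperature):
        # Placeholder for a real fixed-node match against a baseline engine;
        # this synthetic score surface just lets the example run end to end.
        return 0.55 - 0.01 * (cpuct - 2.0) ** 2 - 0.05 * (temperature - 0.6) ** 2

    def objective(params):
        cpuct, temperature = params
        return -play_match(cpuct, temperature)   # gp_minimize minimises, so negate

    # Each call fits a Gaussian-process surrogate to all results so far and
    # picks the most promising parameter vector to try next.
    result = gp_minimize(objective, space, n_calls=30, random_state=0)
    print(result.x, -result.fun)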

This is a completely new neural network trained from Fat Fritz games, but in a much smaller frame. Objectively it is not as strong as Fat Fritz, but it will run much faster, and above all it has the virtue of being quite decent on even a pure CPU machine. It won't challenge the likes of Stockfish, so let's get that out of the way, but in testing on quad-core machines (i.e. my i7 laptop) it defeats Fritz 16 by a healthy margin.

Note that this is not in the product description, so needless to say, it is neither more nor less than a gift to Fritz 17 owners.

Enjoy it!

More stories on Fat Fritz and Fritz 17...

See the original post:

Fat Fritz 1.1 update and a small gift - Chessbase News

Written by admin

March 8th, 2020 at 10:47 am

Posted in Alphazero

Google’s DeepMind effort for COVID-19 coronavirus is based on the shoulders of giants – Mashviral News – Mash Viral

Posted: at 10:47 am




Sixty years ago, research got underway to understand the structure of proteins, after Nobel Laureates Max Perutz and John Kendrew in the 1950s gave the world the first glimpse of what a protein looks like.

It was that pioneering work, and the decades of research that followed, which made possible Google DeepMind's announcement Thursday that it has an idea of the structure of a handful of proteins associated with the respiratory disease known as COVID-19, which is spreading all over the world.

Proteins do a great deal of work for organisms, and understanding the three-dimensional shape of the proteins in COVID-19 could conceivably help scientists attack the virus behind the disease, perhaps with a vaccine. Efforts are being made around the world to determine the structure of these viral proteins, of which DeepMind's is merely one.

There is always a little self-promotion about DeepMind's AI accomplishments, so it helps to remember the context in which the science was created. DeepMind's protein-folding program reflects decades of work by chemists, physicists, biologists, computer scientists and data scientists, and would not be possible without this intense global effort.

Since the 1960s, scientists have been fascinated by the difficult problem of protein structure. Proteins are chains of amino acids, and the forces that pull them into a certain shape are fairly straightforward: some amino acids are attracted or repelled by positive or negative charges, and some amino acids are hydrophobic, that is, they keep their distance from water molecules.

However, these forces, so basic and so easy to understand, lead to amazing protein forms that are difficult to predict from the acids alone. And so decades have been spent trying to guess what shape a given amino acid sequence will take, usually by developing increasingly sophisticated computer models to simulate the process of folding a protein: the interaction of forces that make a protein take whatever shape it ends up taking.

An illustration of the possible structure of a coronavirus-associated membrane protein, according to a model created by DeepMind's AlphaFold program.

DeepMind

Twenty-six years ago, a biennial competition was established, the Critical Assessment of protein Structure Prediction, or CASP. Scientists are challenged to submit their best computer-simulated predictions of a given protein after being told only the amino acid sequence. The judges know the structure, which has been determined by a lab experiment, so it is a test of how well you can guess what the lab found.

DeepMind won the latest CASP, CASP13, which took place throughout 2018. To grab gold, DeepMind developed a computer model, AlphaFold, which shares a naming convention with the DeepMind model that conquered chess and the game of Go, AlphaZero. In one of those trophy moments familiar from other DeepMind headlines, the company far outpaced its closest competitor in the CASP13 competition in 2018, producing high-precision structures for 24 of the 43 protein domains, with the next-best single effort producing 14 models of this type.

Writing in Nature this January, Mohammed AlQuraishi of the Systems Pharmacology Lab at Harvard Medical School called the development of AlphaFold a watershed moment for the science of protein folding. His essay accompanies DeepMind's formal AlphaFold paper in that issue, entitled "Improved protein structure prediction using potentials from deep learning."

AlphaFold is a union of DeepMind's AI work with decades of machine learning progress, but also decades of publicly accumulated protein knowledge. The deep neural network developed by DeepMind consists of a mechanism for measuring the local arrangement of atoms in a protein, akin to the convolutional filters perfected by Turing Award winner Yann LeCun and used in ubiquitous convolutional neural networks to determine the local structure of an image. To that, DeepMind added so-called residual blocks of the type developed a few years ago by Kaiming He and his colleagues at Microsoft.

DeepMind calls the resulting structure a deep two-dimensional dilated convolutional residual network. The purpose of this mouthful is to predict the distance between pairs of amino acids given their sequence. AlphaFold does this by optimizing its convolutions and residual connections using the stochastic gradient descent learning rule developed in the 1980s, which powers all deep learning today.
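
A skeletal PyTorch rendition of that description, two-dimensional convolutions with cycling dilations wrapped in residual blocks and ending in a per-pair distance prediction, might look like the following. Layer counts and sizes here are invented; the real AlphaFold network is far deeper and consumes much richer input features:

    import torch
    import torch.nn as nn

    class DilatedResBlock(nn.Module):
        def __init__(self, channels, dilation):
            super().__init__()
            # padding=dilation keeps the L x L pair map the same size.
            self.conv = nn.Conv2d(channels, channels, kernel_size=3,
                                  dilation=dilation, padding=dilation)
            self.norm = nn.BatchNorm2d(channels)

        def forward(self, x):
            return x + torch.relu(self.norm(self.conv(x)))   # residual connection

    class TinyDistanceNet(nn.Module):
        def __init__(self, in_features=64, channels=32, bins=64):
            super().__init__()
            self.stem = nn.Conv2d(in_features, channels, kernel_size=1)
            # Cycling dilations steadily widen the receptive field.
            self.blocks = nn.Sequential(*[DilatedResBlock(channels, d)
                                          for d in (1, 2, 4, 8)])
            self.head = nn.Conv2d(channels, bins, kernel_size=1)  # distance histogram per pair

        def forward(self, pair_features):
            return self.head(self.blocks(self.stem(pair_features)))

    # Toy input: 64 features for every residue pair of a 48-residue protein.
    logits = TinyDistanceNet()(torch.randn(1, 64, 48, 48))
    print(logits.shape)   # torch.Size([1, 64, 48, 48]): one distance distribution per pair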

This AlphaFold network would not be possible without decades of knowledge of proteins built into publicly accessible databases. The deep network takes in the known amino acid sequence in a form called a multiple sequence alignment, or MSA. These are the equivalent of the pixels of an image operated on by a CNN in image recognition. These MSAs are only available because scientists have spent decades assembling them in databases, in particular the UniProt, or Universal Protein Resource, database, which is maintained by a consortium of research centers around the world, funded by a group of government offices including the National Institutes of Health and the National Science Foundation. The six DeepMind protein structures published this week for COVID-19 began by taking the freely available amino acid sequences at UniProt, making UniProt the raw material for DeepMind's science.

In addition, on the road to his impressive results, AlphaFold had to be trained. The deep web of convolutions and residual blocks had to take their form, giving examples of structures known as labeled examples. This was made possible by another 49-year-old organization called NSF-funded Protein Data Bank, the U.S. Department of Energy and others. The basic PDB database is managed by a consortium of Rutgers University, the San Diego Supercomputer Center / University of California San Diego, and the National Institute of Standards and Technology. These institutions have the impressive task of retaining what you might consider as the huge data available to AlphaFold and other efforts. More than 144,000 protein structures have been gathered and can be downloaded and downloaded almost half a million times a year, according to the PDB. PDB also runs the CASP challenge.

The DeepMind structure predictions are published in a format called the PDB of the consortium. This means that even the language in which DeepMind can express its scientific findings is possible by the consortium.

The fact that dedicated teams have spent decades painstakingly assembling knowledge stores from which researchers can freely extract is a striking achievement in the history of science and, in fact, humanity.

DeepMinds publication of the protein files was praised by other scientists, such as the Francis Crick Institute. In their blog post about their work COVID-19, DeepMind scientists recognize a lot of work on the virus by other institutions. We are indebted to the work of many other laboratories, they write, this work would not be possible without the efforts of researchers around the world who have responded to the COVID-19 outbreak with incredible agility.

It is a responsible and worthy recognition. It can be added that it is not only the current laboratories that have made the AlphaFold files possible, but also that generations of work carried out by public and private suits have made it possible for the collective understanding of which AlphaFold is only the latest interesting wrinkle.

More:

Google's DeepMind effort for COVID-19 coronavirus is based on the shoulders of giants - Mashviral News - Mash Viral

Written by admin

March 8th, 2020 at 10:47 am

Posted in Alphazero

Explained: The Artificial Intelligence Race is an Arms Race – The National Interest Online

Posted: February 9, 2020 at 2:48 am


without comments

Whoever wins it will have an advantage in every conflict around the world.

Graham Allison alerts us to artificial intelligence being the epicenter of todays superpower arms race.

Drawing heavily on Kai-Fu Lees basic thesis, Allison draws the battlelines: the United States vs. China, across the domains of human talent, big data, and government commitment.

Allison further points to the absence of controls, or even dialogue, on what AI means for strategic stability. With implied resignation, his article acknowledges the smashing of Pandoras Box, noting many AI advancements occur in the private sector beyond government scrutiny or control.

However, unlike the chilling and destructive promise of nuclear weapons, the threat posed by AI in popular imagination is amorphous, restricted to economic dislocation or sci-fi depictions of robotic apocalypse.

Absent from Allisons call to action is explaining the so what?why does the future hinge on AI dominance? After all, the few examples (mass surveillance, pilot HUDs, autonomous weapons) Allison does provide reference continued enhancements to the status quoincremental change, not paradigm shift.

As Allison notes, President Xi Jinping awoke to the power of AI after AlphaGo defeated the worlds number one Go human player, Lee Sedol. But why? What did Xi see in this computation that persuaded him to make AI the centerpiece of Chinese national endeavor?

The answer: AIs superhuman capacity to think.

To explain, lets begin with what I am not talking about. I do not mean so-called general AIthe broad-spectrum intelligence with self-directed goals acting independent of, or in spite of, preferences of human creators.

Eminent figures such as Elon Musk and Sam Harris warn of the coming of general AI. In particular, the so-called singularity, wherein AI evolves the ability to rewrite its own code. According to Musk and Harris, this will precipitate an exponential explosion in that AIs capability, realizing 10,000 IQ and beyond in a matter of mere hours. At such time, they argue, AI will become to us what we are to ants, with similar levels of regard.

I concur with Sam and Elon that the advent of artificial general superintelligence is highly probable, but this still requires transformative technological breakthroughs the circumstances for which are hard to predict. Accordingly, whether general AI is realized 30 or 200 years from now remains unknown, as is the nature of the intelligence created; such as if it is conscious or instinctual, innocent or a weapon.

When I discuss the AI arms race I mean the continued refinement of existing technology. Artificial intelligence that, while being a true intelligence in the sense of having the ability to self-learn, it has a single programmed goal constrained within a narrow set of rules and parameters (such as a game).

To demonstrate what President Xi saw in AI winning a strategy game, and why the global balance of power hinges on it, we need to talk briefly about games.

Artificial Intelligence and Games

There are two types of strategy games: games of complete information and games of incomplete information. A game of complete information is one in which every player can see all of the parameters and options of every other player.

Tic-Tac-Toe is a game of complete information. An average adult can solve this game with less than thirty minutes of practice. That is, adopt a strategy that no matter what your opponent does, you can correctly counter it to obtain a draw. If your opponent deviates from that same strategy, you can exploit them and win.

Conversely, a basic game of uncertainty is Rock, Scissors, Paper. Upon learning the rules, all players immediately know the optimal strategy. If your opponent throws Rock, you want to throw Paper. If they throw Paper, you want to throw Scissors, and so on.

Unfortunately, you do not know ahead of time what your opponent is going to do. Being aware of this, what is the correct strategy?

The unexploitable strategy is to throw Rock 33 percent of the time, Scissors 33 percent of the time, and Paper 33 percent of the time, each option being chosen randomly to avoid observable patterns or bias.

This unexploitable strategy means that, no matter what approach your opponent adopts, they won't be able to gain an edge against you.

But lets imagine your opponent throws Rock 100 percent of the time. How does your randomized strategy stack up? 33 percent of the time you'll tie (Rock), 33 percent of the time you'll win (Paper), and 33 percent of the time you'll lose (Scissors)the total expected value of your strategy against theirs is 0.

Is this your optimal strategy? No. If your opponent is throwing Rock 100 percent of the time, you should be exploiting your opponent by throwing Paper.

Naturally, if your opponent is paying attention they, in turn, will adjust to start throwing Scissors. You and your opponent then go through a series of exploits and counter-exploits until you both gradually drift toward an unexploitable equilibrium.

With me so far? Good. Let's talk about computing and games.

As stated, nearly any human can solve Tic-Tac-Toe, and computers solved checkers many years ago. However more complex games such as Chess, Go, and No-limit Texas Holdem poker have not been solved.

Despite all being mind-bogglingly complex, of the three chess is simplest. In 1997, reigning world champion Garry Kasparov was soundly beaten by the supercomputer Deep Blue. Today, anyone reading this has access to a chess computer on their phone that could trounce any human player.

Meanwhile, the eastern game of Go eluded programmers. Go has many orders of magnitude more combinations than chess. Until recently, humans beat computers by being far more efficient in selecting moveswe don't spend our time trying to calculate every possible option twenty-five moves deep. Instead, we intuitively narrow our decisionmaking to a few good choices and assess those.

Moreover, unlike traditional computers, people are able to think in non-linear abstraction. Humans can, for example, imagine a future state during the late stages of the game beyond which a computer could possibly calculate. We are not constrained by a forward-looking linear progression. Humans can wonderfully imagine a future endpoint, and work backwards from there to formulate a plan.

Many previously believed that this combination of factorsnear-infinite combinations and the human ability to think abstractlymeant that go would forever remain beyond the reach of the computer.

Then in 2016 something unprecedented happened. The AI system, AlphaGo, defeated the reigning world champion go player Lee Sedol 4-1.

But that was nothing: two years later, a new AI system, AlphaZero, was pitched against AlphaGo.

Unlike its predecessor which contained significant databases of go theory, all AlphaZero knew was the rules, from which it played itself continuously over forty days.

After this period of self-learning, AlphaZero annihilated AlphaGo, not 4-1, but 100-0.

In forty days AlphaZero had superseded 2,500 years of total human accumulated knowledge and even invented a range of strategies that had never been discovered before in history.

Meanwhile, chess computers are now a whole new frontier of competition, with programmers pitting their systems against one another to win digital titles. At the time of writing the world's best chess engine is a program known as Stockfish, able to smash any human Grandmaster easily. In December 2017 Stockfish was pitted against AlphaZero.

Again, AlphaZero only knew the rules. AlphaZero taught itself to play chess over a period of nine hours. The result over 100 games? AlphaZero twenty-eight wins, zero losses, seventy-two draws.

Not only can artificial intelligence crush human players, it also obliterates the best computer programs that humans can design.

Artificial Intelligence and Abstraction

Most chess computers play a purely mathematical strategy in a game yet to be solved. They are raw calculators and look like it too. AlphaZero, at least in style, appears to play every bit like a human. It makes long-term positional plays as if it can visualize the board; spectacular piece sacrifices that no computer could ever possibly pull off, and exploitative exchanges that would make a computer, if it were able, cringe with complexity. In short, AlphaZero is a genuine intelligence. Not self-aware, and constrained by a sandboxed reality, but real.

Despite differences in complexity there is one limitation that chess and go both share they're games of complete information.

Enter No-limit Texas Holdem (hereon, Poker). This is the ultimate game of uncertainty and incomplete information. In poker, you know what your hole cards are, the stack sizes for each player, and the community cards that have so far come out on the board. However, you don't know your opponent's cards, whether they will bet or raise or how much, or what cards are coming out on later streets of betting.

Poker is arguably the most complex game in the world, combining mathematics, strategy, timing, psychology, and luck. Unlike Chess or Go, Pokers possibilities are truly infinite and across multiple players simultaneously. The idea that a computer could beat top Poker professionals seems risible.

Except that it has already happened. In 2017, the AI system Libratus comprehensively beat the best Head's-up (two-player) poker players in the world.

And now, just months ago, another AI system Pluribus achieved the unthinkableit crushed super high stakes poker games against multiple top professionals simultaneously, doing so at a win-rate of five big blinds per hour. For perspective, the difference in skill level between the best English Premier League soccer team and the worst would not be that much.

Read the rest here:

Explained: The Artificial Intelligence Race is an Arms Race - The National Interest Online

Written by admin

February 9th, 2020 at 2:48 am

Posted in Alphazero


Page 112