
Archive for the ‘Alphazero’ Category

Magnus Carlsen: "In my country the authorities reacted quickly and the situation is under control" – Sportsfinding

Posted: April 6, 2020 at 5:57 pm


without comments

Saturday, 4 April 2020 07:45

In 2011, he climbed to number 1, and no one has moved him from that vantage point since. World champion in classical chess since 2013, he closed 2019 with the triple crown. The coronavirus crisis has left him, for now, without a challenge; but he is challenging the lockdown with an elite online tournament.

LONDON, ENGLAND, NOVEMBER 28: Current world champion, Norwegian Magnus Carlsen, speaks to the media after beating his opponent, American Fabiano Caruana, to retain his World Chess Championship title, on November 28, 2018 in London, England. (Photo by Dan Kitwood / Getty Images)

"I have had a small cold for quite some time, but I have no reason to think that it is connected to the coronavirus." Magnus Carlsen (Tønsberg, Norway; November 30, 1990) speaks to EL MUNDO from Oslo. He lives near the center of the city, where he is confined.

This forcefulness has a reason. Carlsen is challenging the lockdown with an elite internet tournament in which he will face seven of the best chess players in the world. Broadcast by chess24.com from April 17, it has the largest online prize pool ever: 250,000 dollars.


See the article here:

Magnus Carlsen: "In my country the authorities reacted quickly and the situation is under control" - Sportsfinding

Written by admin

April 6th, 2020 at 5:57 pm

Posted in Alphazero

Fat Fritz 1.1 update and a small gift – Chessbase News

Posted: March 8, 2020 at 10:47 am


without comments

3/5/2020 As promised in the announcement of the release of Fat Fritz, the first update to the neural network has been released, stronger and more mature, and with it comes the brand new, smaller and faster "Fat Fritz for CPU" network, which will produce quality play even on a pure-CPU setup. If you leave it analyzing the start position, it will say it likes the Sicilian Najdorf, which says a lot about its natural style. Read on to find out more!

If you haven't yet updated your copy of Fat Fritz, now is the time to do it, as this update brings more than minor enhancements or a few bug fixes. It delivers the first major update to the Fat Fritz neural network, stronger than ever, as well as a new smaller network that is quite strong on a GPU but also shines on even a plain CPU setup.

When you open Fritz 17, presuming you have Fat Fritz installed, you will be greeted with a message in the bottom right corner of your screen advising you there is an update available for Fat Fritz.

When you see this, click on 'Update Fat Fritz'.

Then you will be greeted with the update pane; just click Next to proceed.

When Fat Fritz was released with Fritz 17, updates were promised with the assurance it was still improving. Internally the version number of the release was v226, while this newest one is v471.

While thorough testing is always a challenge since resources are limited, a match against Leela 42850 at 1600 nodes per move over 1000 games yielded a positive result:

Score of Fat Fritz 471k vs Leela 42850: +260 -153 =587 [0.553] Elo difference: 37.32 +/- 13.79

1000 of 1000 games finished.
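The reported Elo difference follows directly from that score under the standard logistic rating model. As a quick sanity check (a minimal sketch, not ChessBase's actual tooling), the figure can be reproduced from the win/loss/draw counts:

```python
import math

def elo_diff(wins, losses, draws):
    """Elo difference implied by a match result, using the standard
    logistic model: expected score = 1 / (1 + 10 ** (-diff / 400))."""
    score = (wins + 0.5 * draws) / (wins + losses + draws)
    return 400 * math.log10(score / (1 - score))

# Fat Fritz 471 vs Leela 42850: +260 -153 =587 over 1000 games
print(round(elo_diff(260, 153, 587), 2))  # 37.32, matching the report
```

The +/- 13.79 in the report is a confidence interval around that point estimate, which depends on the game count and draw rate rather than on the score alone.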

Also, in a match of 254 games at 3m+1s against Stockfish 11 under AlphaZero ratio conditions, this new version came out ahead by roughly 10 Elo.

Still, it isn't about Elo and never was; the result is merely to say that you should enjoy strong, competitive analysis. For one thing, it is eminently clear that while Leela and Fat Fritz share much of the same AlphaZero heritage, there are also distinct differences in style.

Perhaps one of the most obvious ways to highlight this is just the start position. If you let the engine run for a couple of minutes on decent hardware, it will tell you what it thinks is the best line of play for both White and Black based on its understanding of chess.

As such, I ran Leela 42850 with its core settings to see what it thought. After 2 million nodes it was adamant that perfect chess should take both players down the highly respected Berlin Defence of the Ruy Lopez.

Leela 42850 analysis:

info depth 19 seldepth 56 time 32675 nodes 2181544 score cp 23 hashfull 210 nps 75740 tbhits 0 pv e2e4 e7e5 g1f3 b8c6 f1b5 g8f6 e1g1 f6e4 d2d4 e4d6 b5c6 d7c6 d4e5 d6f5 d1d8 e8d8 h2h3

This is fine, but it is also very much a matter of taste.

Fat Fritz has a different outlook on chess, as has already been pointed out in the past. At first it too will show a preference for the Ruy Lopez, though not the Berlin, but given a bit more time, by 2.6 million nodes it will declare that the best opening, per its understanding of chess and its calculations, is the Sicilian Najdorf.

Within a couple of minutes this is its mainline:

info depth 16 seldepth 59 time 143945 nodes 7673855 score cp 28 wdl 380 336 284 hashfull 508 nps 54227 tbhits 0 pv e2e4 c7c5 g1f3 d7d6 b1c3 g8f6 d2d4 c5d4 f3d4 a7a6 f1e2 e7e5 d4b3 f8e7 e1g1 c8e6 c1e3 e8g8 f1e1 b8c6 h2h3 h7h6 e2f3 a8c8 d1d2 c6b8 a2a4 f6h7 a1d1 b8d7 f3e2 h7f6
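Those raw engine lines follow the UCI protocol: space-separated key/value fields, with "score cp" in centipawns, "wdl" as win/draw/loss chances in permille, and "pv" (the principal variation) running to the end of the line. A small Python sketch (illustrative only, not ChessBase code) pulls them apart:

```python
def parse_uci_info(line):
    """Split a UCI 'info' line into a dict of its fields."""
    tokens = line.split()
    info, i = {}, 1  # skip the leading 'info' token
    while i < len(tokens):
        key = tokens[i]
        if key == "pv":  # principal variation: the rest of the line
            info["pv"] = tokens[i + 1:]
            break
        if key == "score":  # e.g. 'score cp 28' -> ('cp', 28)
            info["score"] = (tokens[i + 1], int(tokens[i + 2]))
            i += 3
        elif key == "wdl":  # win/draw/loss chances in permille
            info["wdl"] = tuple(int(t) for t in tokens[i + 1:i + 4])
            i += 4
        else:  # every other field here carries a single integer
            info[key] = int(tokens[i + 1])
            i += 2
    return info

fritz = ("info depth 16 seldepth 59 time 143945 nodes 7673855 score cp 28 "
         "wdl 380 336 284 hashfull 508 nps 54227 tbhits 0 "
         "pv e2e4 c7c5 g1f3 d7d6")
parsed = parse_uci_info(fritz)
# wdl 380/336/284 permille = 38.0% win, 33.6% draw, 28.4% loss
print(parsed["score"], parsed["pv"][:2])  # ('cp', 28) ['e2e4', 'c7c5']
```

The Leela line above parses the same way; it simply lacks the optional "wdl" field.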

From a purely analytical point of view it is quite interesting that it found 10.Re1! in the main line. In a position where White scores 52.5% on average, it picks a move that scores 58.3% / 58.9%.

Remember there is no right or wrong here, but it does help show the natural inclinations of each of these neural networks.

Even if chess is ultimately a draw, that doesn't mean there is only one path, so while all roads may lead to Rome, they don't all need to pass through New Jersey.

Trying to find the ideal recipe of parameters for an engine can be daunting, and previously multiple attempts had been made with the well-known tuner CLOP by Remi Coulom. Very recently a completely new tuner, 'Bayes-Skopt', was designed by Karlson Pfannschmidt, a PhD student in machine learning at Paderborn University in Germany, who goes by the online nickname "Kiudee" (pronounced like the letters Q-D). It was used to find new, improved values for Leela, which are now that engine's defaults.

His tuner is described as "a fully Bayesian implementation of sequential model-based optimization" (a mouthful, I know) and was set up with his kind help; it ran for over a week. It produces quite fascinating graphical imagery as it updates its values. Here is what the final version looked like:

These values, slightly rounded, have been added as the new de facto defaults for Fat Fritz.

This is a completely new neural network trained from Fat Fritz games, but in a much smaller frame. Objectively it is not as strong as Fat Fritz, but it runs much faster, and above all it has the virtue of being quite decent on even a pure CPU machine. It won't challenge the likes of Stockfish, so let's get that out of the way, but in testing on quad-core machines (i.e. my i7 laptop) it defeats Fritz 16 by a healthy margin.

Note that this is not in the product description, so, needless to say, it is nothing more nor less than a gift to Fritz 17 owners.

Enjoy it!

More stories on Fat Fritz and Fritz 17...

See the original post:

Fat Fritz 1.1 update and a small gift - Chessbase News

Written by admin

March 8th, 2020 at 10:47 am

Posted in Alphazero

Google’s DeepMind effort for COVID-19 coronavirus is based on the shoulders of giants – Mashviral News – Mash Viral

Posted: at 10:47 am


without comments


Research to understand the structure of proteins has been under way for sixty years, since Nobel laureates Max Perutz and John Kendrew gave the world its first glimpse of what a protein looks like in the 1950s.

It was that pioneering work, and the decades of research that followed, that made possible Google DeepMind's announcement Thursday of predicted structures for a handful of proteins associated with the respiratory disease known as COVID-19, which is spreading all over the world.

Proteins do a great deal of the work in organisms, and understanding the three-dimensional shapes of the proteins in COVID-19 could help scientists understand the virus behind the disease, and perhaps lead to a vaccine. Efforts are under way around the world to determine the structures of these viral proteins, of which DeepMind's is merely one.

There is always a little self-promotion around DeepMind's AI accomplishments, so it helps to remember the context in which this science was created. The DeepMind protein prediction program reflects decades of work by chemists, physicists, biologists, computer scientists and data scientists, and would not be possible without that intense global effort.

Since the 1960s, scientists have been fascinated by the difficult problem of protein structure. Proteins are chains of amino acids, and the forces that pull them into a certain shape are fairly straightforward: some amino acids are attracted or repelled by positive or negative charges, and some amino acids are hydrophobic, that is, they stay away from water molecules.

However, these forces, so basic and so easy to understand, lead to amazing protein shapes that are difficult to predict from the acids themselves. And so decades have passed trying to guess what a given amino acid sequence will look like, usually by developing increasingly sophisticated computer models to simulate the folding process: the interaction of forces that makes a protein take whatever shape it ends up taking.

An illustration of the possible structure of a coronavirus-associated membrane protein, according to a model created by DeepMind's AlphaFold program.

DeepMind

Twenty-six years ago a biennial competition was established, called the Critical Assessment of protein Structure Prediction, or CASP. Scientists are challenged to submit their best computer-simulated predictions of a given protein's structure after being told only its amino acid sequence. The judges know the structure, which has been determined by a lab experiment, so it is a test of how well you can predict what the lab found.

DeepMind won the latest CASP, CASP13, which took place throughout 2018. To grab gold, DeepMind developed a computer model, AlphaFold, which shares a naming convention with AlphaZero, the DeepMind model that won at chess and Go. In one of those trophy moments familiar from other DeepMind headlines, the company far outpaced its closest competitor in the 2018 CASP13 competition, producing high-accuracy structures for 24 of 43 protein domains, while the next-best effort produced 14 such models.

Writing in Nature this January, Mohammed AlQuraishi of the Systems Pharmacology Lab at Harvard Medical School called the development of AlphaFold a watershed moment for the science of protein folding. His essay accompanies DeepMind's formal AlphaFold paper in that issue, entitled "Improved protein structure prediction using potentials from deep learning".

AlphaFold is a union of DeepMind's AI work, a product of decades of machine-learning progress, but also of decades of publicly accumulated protein knowledge. The deep neural network developed by DeepMind contains a mechanism for measuring the local arrangement of atoms in a protein, akin to the convolutional filters pioneered by Turing Award winner Yann LeCun and used in now-ubiquitous convolutional neural networks to determine the local structure of an image. To that, DeepMind added residual blocks of the kind developed a few years ago by Kaiming He and his colleagues at Microsoft.

DeepMind calls the resulting structure a deep two-dimensional dilated convolutional residual network. The purpose of this mouthful is to predict the distances between pairs of amino acids given their sequence. AlphaFold does this by optimizing its convolutions and residual connections using stochastic gradient descent, the learning rule developed in the 1980s that powers all deep learning today.
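To make those two terms concrete, here is a toy, pure-Python sketch (an illustration only, in 1-D rather than 2-D, and not AlphaFold's actual architecture): a dilated convolution samples inputs spaced `dilation` steps apart, widening its receptive field, and a residual block adds the layer's input back onto its output so that very deep stacks remain trainable.

```python
def dilated_conv1d(x, kernel, dilation):
    """'Same'-padded 1-D convolution whose taps are spaced
    `dilation` positions apart, widening the receptive field."""
    k = len(kernel)
    out = []
    for i in range(len(x)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = i + (j - k // 2) * dilation
            if 0 <= idx < len(x):  # zero padding outside the sequence
                acc += w * x[idx]
        out.append(acc)
    return out

def residual_block(x, kernel, dilation):
    """y = x + conv(x): the identity 'skip' connection is what lets
    very deep stacks of such blocks remain trainable."""
    return [a + b for a, b in zip(x, dilated_conv1d(x, kernel, dilation))]

signal = [1.0, 2.0, 3.0, 4.0, 5.0]
# An identity kernel makes conv(x) == x, so the block doubles the input
print(residual_block(signal, [0.0, 1.0, 0.0], dilation=2))
# [2.0, 4.0, 6.0, 8.0, 10.0]
```

In the real network, learned kernel weights replace the fixed ones here, and the 2-D version operates over a matrix indexed by pairs of amino-acid positions.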

This AlphaFold network would not be possible without decades of protein knowledge built into publicly accessible databases. The deep network takes in known amino acid sequences in a form called a multiple sequence alignment, or MSA. These are the equivalent of the pixels of an image operated on by a CNN in image recognition. Such MSAs are available only because scientists have spent decades assembling them in databases, in particular UniProt, the Universal Protein Resource, which is maintained by a consortium of research centers around the world and funded by a group of government offices including the National Institutes of Health and the National Science Foundation. The six DeepMind protein structures published this week for COVID-19 began with the freely available amino acid sequences at UniProt, making UniProt the raw material for DeepMind's science.

In addition, on the road to its impressive results, AlphaFold had to be trained. The deep network of convolutions and residual blocks had to take its form by being shown known structures as labeled examples. This was made possible by another organization, the 49-year-old Protein Data Bank, funded by the NSF, the U.S. Department of Energy and others. The core PDB database is managed by a consortium of Rutgers University, the San Diego Supercomputer Center at the University of California San Diego, and the National Institute of Standards and Technology. These institutions have the impressive task of maintaining what you might consider the ground truth available to AlphaFold and other efforts. More than 144,000 protein structures have been gathered, and they are downloaded almost half a million times a year, according to the PDB. The PDB consortium also runs the CASP challenge.

The DeepMind structure predictions are published in the consortium's PDB file format. This means that even the language in which DeepMind expresses its scientific findings is made possible by the consortium.

The fact that dedicated teams have spent decades painstakingly assembling knowledge stores from which researchers can freely extract is a striking achievement in the history of science and, in fact, humanity.

DeepMind's publication of the protein files was praised by other scientists, including researchers at the Francis Crick Institute. In their blog post about the COVID-19 work, DeepMind's scientists acknowledge the large amount of work on the virus done by other institutions. "We are indebted to the work of many other laboratories," they write; "this work would not be possible without the efforts of researchers around the world who have responded to the COVID-19 outbreak with incredible agility."

It is a responsible and worthy acknowledgment. It can be added that it is not only today's laboratories that have made the AlphaFold files possible, but generations of work carried out by public and private institutions, which produced the collective understanding of which AlphaFold is only the latest interesting wrinkle.

More:

Google's DeepMind effort for COVID-19 coronavirus is based on the shoulders of giants - Mashviral News - Mash Viral

Written by admin

March 8th, 2020 at 10:47 am

Posted in Alphazero

Explained: The Artificial Intelligence Race is an Arms Race – The National Interest Online

Posted: February 9, 2020 at 2:48 am


without comments

Whoever wins it will have an advantage in every conflict around the world.

Graham Allison alerts us that artificial intelligence is the epicenter of today's superpower arms race.

Drawing heavily on Kai-Fu Lee's basic thesis, Allison draws the battle lines: the United States vs. China, across the domains of human talent, big data, and government commitment.

Allison further points to the absence of controls, or even dialogue, on what AI means for strategic stability. With implied resignation, his article acknowledges the smashing of Pandora's Box, noting that many AI advancements occur in the private sector, beyond government scrutiny or control.

However, unlike the chilling and destructive promise of nuclear weapons, the threat posed by AI in popular imagination is amorphous, restricted to economic dislocation or sci-fi depictions of robotic apocalypse.

Absent from Allison's call to action is the "so what?": why does the future hinge on AI dominance? After all, the few examples Allison does provide (mass surveillance, pilot HUDs, autonomous weapons) reference continued enhancements to the status quo: incremental change, not paradigm shift.

As Allison notes, President Xi Jinping awoke to the power of AI after AlphaGo defeated the world's top Go player, Lee Sedol. But why? What did Xi see in this computation that persuaded him to make AI the centerpiece of Chinese national endeavor?

The answer: AI's superhuman capacity to think.

To explain, let's begin with what I am not talking about. I do not mean so-called general AI: the broad-spectrum intelligence with self-directed goals, acting independent of, or in spite of, the preferences of its human creators.

Eminent figures such as Elon Musk and Sam Harris warn of the coming of general AI, and in particular of the so-called singularity, wherein AI evolves the ability to rewrite its own code. According to Musk and Harris, this will precipitate an exponential explosion in that AI's capability, realizing an IQ of 10,000 and beyond in a matter of mere hours. At such a time, they argue, AI will become to us what we are to ants, with similar levels of regard.

I concur with Sam and Elon that the advent of artificial general superintelligence is highly probable, but it still requires transformative technological breakthroughs, the circumstances for which are hard to predict. Accordingly, whether general AI is realized 30 or 200 years from now remains unknown, as does the nature of the intelligence created: whether it is conscious or instinctual, innocent or a weapon.

When I discuss the AI arms race, I mean the continued refinement of existing technology: artificial intelligence that, while a true intelligence in the sense that it can self-learn, has a single programmed goal constrained within a narrow set of rules and parameters (such as a game).

To demonstrate what President Xi saw in AI winning a strategy game, and why the global balance of power hinges on it, we need to talk briefly about games.

Artificial Intelligence and Games

There are two types of strategy games: games of complete information and games of incomplete information. A game of complete information is one in which every player can see all of the parameters and options of every other player.

Tic-Tac-Toe is a game of complete information. An average adult can solve this game with less than thirty minutes of practice: that is, adopt a strategy such that, no matter what your opponent does, you can correctly counter it to obtain at least a draw. If your opponent deviates from that same strategy, you can exploit them and win.
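"Solved" here has a precise meaning: the game tree is small enough to search exhaustively. A minimal Python sketch (an illustration, not from the article) confirms by brute-force minimax that Tic-Tac-Toe is a draw under perfect play:

```python
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Minimax value for X under best play: +1 X wins, 0 draw, -1 O wins."""
    w = winner(board)
    if w is not None:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0  # board full: draw
    other = "O" if player == "X" else "X"
    results = [value(board[:i] + player + board[i + 1:], other)
               for i, sq in enumerate(board) if sq == "."]
    return max(results) if player == "X" else min(results)

print(value("." * 9, "X"))  # 0: with best play from both sides, a draw
```

Chess and Go resist exactly this treatment: the same recursion is correct in principle, but their state spaces are astronomically too large to enumerate.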

Conversely, a basic game of incomplete information is Rock, Paper, Scissors. Upon learning the rules, all players immediately know the optimal responses: if your opponent throws Rock, you want to throw Paper; if they throw Paper, you want to throw Scissors, and so on.

Unfortunately, you do not know ahead of time what your opponent is going to do. Being aware of this, what is the correct strategy?

The unexploitable strategy is to throw Rock 33 percent of the time, Scissors 33 percent of the time, and Paper 33 percent of the time, each option being chosen randomly to avoid observable patterns or bias.

This unexploitable strategy means that, no matter what approach your opponent adopts, they won't be able to gain an edge against you.

But lets imagine your opponent throws Rock 100 percent of the time. How does your randomized strategy stack up? 33 percent of the time you'll tie (Rock), 33 percent of the time you'll win (Paper), and 33 percent of the time you'll lose (Scissors)the total expected value of your strategy against theirs is 0.

Is this your optimal strategy? No. If your opponent is throwing Rock 100 percent of the time, you should be exploiting your opponent by throwing Paper.

Naturally, if your opponent is paying attention they, in turn, will adjust to start throwing Scissors. You and your opponent then go through a series of exploits and counter-exploits until you both gradually drift toward an unexploitable equilibrium.
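The expected-value arithmetic above is mechanical enough to check in a few lines of Python (an illustrative sketch, not from the article):

```python
from itertools import product

BEATS = {"Rock": "Scissors", "Paper": "Rock", "Scissors": "Paper"}

def payoff(mine, theirs):
    """+1 win, 0 tie, -1 loss, from my point of view."""
    if mine == theirs:
        return 0
    return 1 if BEATS[mine] == theirs else -1

def expected_value(mine, theirs):
    """Per-game EV of one mixed strategy (a dict of probabilities)
    against another."""
    return sum(p * q * payoff(m, t)
               for (m, p), (t, q) in product(mine.items(), theirs.items()))

uniform = {m: 1 / 3 for m in BEATS}       # the unexploitable equilibrium
always_rock = {"Rock": 1.0}               # a maximally exploitable opponent
print(expected_value(uniform, always_rock))         # 0.0: safe, but no profit
print(expected_value({"Paper": 1.0}, always_rock))  # 1.0: the exploit
```

The two printed numbers are exactly the article's point: the uniform strategy cannot lose on average against anything, but only the deliberately exploitative response actually profits from a biased opponent.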

With me so far? Good. Let's talk about computing and games.

As stated, nearly any human can solve Tic-Tac-Toe, and computers solved checkers many years ago. However, more complex games such as Chess, Go, and No-limit Texas Hold'em poker have not been solved.

Despite all being mind-bogglingly complex, of the three chess is simplest. In 1997, reigning world champion Garry Kasparov was soundly beaten by the supercomputer Deep Blue. Today, anyone reading this has access to a chess computer on their phone that could trounce any human player.

Meanwhile, the eastern game of Go eluded programmers. Go has many orders of magnitude more combinations than chess. Until recently, humans beat computers by being far more efficient in selecting moves: we don't spend our time trying to calculate every possible option twenty-five moves deep. Instead, we intuitively narrow our decision-making to a few good choices and assess those.

Moreover, unlike traditional computers, people are able to think in non-linear abstraction. Humans can, for example, imagine a future state in the late stages of the game, beyond what a computer could possibly calculate. We are not constrained by a forward-looking linear progression: humans can imagine a future endpoint and work backwards from there to formulate a plan.

Many previously believed that this combination of factors (near-infinite combinations and the human ability to think abstractly) meant that Go would forever remain beyond the reach of the computer.

Then in 2016 something unprecedented happened. The AI system AlphaGo defeated the reigning world champion Go player Lee Sedol 4-1.

But that was nothing: two years later, a new AI system, AlphaZero, was pitted against AlphaGo.

Unlike its predecessor, which contained significant databases of Go theory, all AlphaZero knew was the rules, from which it played against itself continuously over forty days.

After this period of self-learning, AlphaZero annihilated AlphaGo, not 4-1, but 100-0.

In forty days AlphaZero had superseded 2,500 years of accumulated human knowledge, and even invented a range of strategies that had never been discovered in history.

Meanwhile, computer chess is now a whole new frontier of competition, with programmers pitting their engines against one another to win digital titles. At the time of writing, the world's best chess engine is a program known as Stockfish, able to smash any human grandmaster easily. In December 2017 Stockfish was pitted against AlphaZero.

Again, AlphaZero knew only the rules. It taught itself to play chess over a period of nine hours. The result over 100 games? AlphaZero: twenty-eight wins, zero losses, seventy-two draws.

Not only can artificial intelligence crush human players, it also obliterates the best computer programs that humans can design.

Artificial Intelligence and Abstraction

Most chess computers play a purely mathematical strategy in a game yet to be solved. They are raw calculators and look like it too. AlphaZero, at least in style, appears to play every bit like a human. It makes long-term positional plays as if it can visualize the board, spectacular piece sacrifices that no conventional engine could ever pull off, and exploitative exchanges whose complexity would make such an engine, if it were able, cringe. In short, AlphaZero is a genuine intelligence: not self-aware, and constrained by a sandboxed reality, but real.

Despite their differences in complexity, there is one limitation that chess and Go share: they are games of complete information.

Enter No-limit Texas Hold'em (hereon, Poker). This is the ultimate game of uncertainty and incomplete information. In poker, you know your hole cards, the stack sizes of each player, and the community cards that have so far come out on the board. However, you don't know your opponents' cards, whether they will bet or raise or by how much, or what cards are coming on later streets of betting.

Poker is arguably the most complex game in the world, combining mathematics, strategy, timing, psychology, and luck. Unlike Chess or Go, Poker's possibilities are effectively infinite, and spread across multiple players simultaneously. The idea that a computer could beat top Poker professionals seems risible.

Except that it has already happened. In 2017, the AI system Libratus comprehensively beat the best heads-up (two-player) poker players in the world.

And now, just months ago, another AI system, Pluribus, achieved the unthinkable: it crushed super-high-stakes poker games against multiple top professionals simultaneously, doing so at a win rate of five big blinds per hour. For perspective, the difference in skill level between the best English Premier League soccer team and the worst would not be that large.

Read the rest here:

Explained: The Artificial Intelligence Race is an Arms Race - The National Interest Online

Written by admin

February 9th, 2020 at 2:48 am

Posted in Alphazero

Artificial intelligence in the arms race: Commentary by Avi Ben Ezra – Augusta Free Press

Posted: at 2:48 am


without comments

Published Tuesday, Feb. 4, 2020, 8:45 am


Photo Credit: lilcrazyfuzzy/iStock Photo

Artificial intelligence is at the epicenter of the arms race, and whoever has superior AI will win.

For most people, the threat of AI has been limited to economic dislocation and the sci-fi robotic apocalypse. Yet AI advancements are taking place in the private sector, outside governments' control or scrutiny, and there is speculation that it is quietly being used for defense. Experts believe that a new arms race is developing, particularly between the United States and China.

The Chinese president realized the power of AI, and its superhuman capacity to think, after AlphaGo defeated the world's number one Go player. He evidently foresees, like some experts, AI evolving the ability to rewrite its own code in a few years' time and exploding its IQ to as high as 10,000. Humans will be like ants compared to such intelligent giants.

Achieving this artificial superintelligence will require breakthroughs in transformative technology whose circumstances and timing cannot be predicted at present. However, President Xi and other leaders saw the possibilities of AI for the global balance of power when AlphaGo won at Go, a game of strategy.

Strategy games come in two types. First, there are games of complete information, such as Tic-Tac-Toe, chess and Go, in which players see all the parameters and options of the other players; the simplest of these can be solved with practice. Then there are games of incomplete information, such as Rock, Paper, Scissors, in which players can learn the rules and the optimal responses. However, no one is certain how the opponent will play, so there is no guaranteed winning strategy and winning is partly left to chance.

Humans used to win games against computers, and there was a belief that humans' ability to think abstractly and to narrow decision-making down to a few good choices would always beat the computer. Then in 2016, AlphaGo, an AI system, defeated the Go world champion 4-1. In 2018, AlphaZero, a new AI system with the ability to self-learn, defeated AlphaGo 100-0 after accumulating knowledge and inventing unheard-of strategies within 40 days. In 2017, AlphaZero was pitted against Stockfish in 100 games and, within 9 hours of learning chess, it won 28 games, drew 72 and lost none. No chess grandmaster has ever beaten Stockfish, yet this AI beat it.

In 2017, Libratus, another AI system, beat the best players of No-limit Texas Hold'em poker. In 2019, Pluribus beat multiple top professional poker players at a rate of 5 big blinds per hour. Poker is a game of incomplete information, uncertainty, and complexity that combines strategy, mathematics, psychology, timing and luck.

By the beginning of 2020, AI had beaten all human players and the best computer programs ever designed.

Avi Ben Ezra, the CTO of the SnatchBot chatbot platform, says: "It is normal that most analysts talk about the US and China, but actually, with the military chatbots that we created, you have in excess of 40 countries who tackle a range of issues from information warfare to cybersecurity and fraud detection with clever AI chatbots that are integrated with robotic process automation (RPA)."

Poker mimics life because of uncertainty. In the US-China rivalry, China's objective is to replace America as the dominant superpower. It knows America's defense budget, force development plans and, to a certain degree, its military resources, capabilities and specifications. However, America's alliances keep shifting, its capabilities and projects are classified, and international crises are unpredictable. Therefore, the best that China can do is invest optimally in order to exploit America's weaknesses while managing its own risks and weaknesses. The US will do the same. Both countries' defense planners are compromised in their outcomes by bureaucracy, internal rivalry, politics and vested interests. There is obviously a lot of uncertainty.

Since AI beats the best humans at poker, its capability is obviously being tested in defense. In a few years, AI systems will be making military decisions as generals that never tire, have no fear, are never distracted, and always perform at their peak. No human decision-maker can compete.

Under those circumstances, the country with even slightly worse AI will lose every battle; whoever has the superior AI wins. No one knows how it's going to play out, but it is certain that AI will lead the arms race as each nation places it at the very core of national achievement.

Bringing AI and RPA together from a military perspective is just like in any other organization: it improves efficiency and drives down cost. Yet the key issue, obviously, is that maintaining a technological edge is at the heart of the strategy for several opposing players in the game.


Read more:

Artificial intelligence in the arms race: Commentary by Avi Ben Ezra - Augusta Free Press

Written by admin

February 9th, 2020 at 2:48 am

Posted in Alphazero

John Robson: Why is man so keen to make man obsolete? – National Post

Posted: December 18, 2019 at 9:46 pm


without comments

We wish you a headless robot/ We wish you a headless robot/ We wish you a headless robot/ and an alpha zero. If that ditty lacked a certain something, you should be going "Da da da doom!" about the festive piece in Saturday's Post about a computer saying "Roll Over Beethoven" and finishing his fragmentary 10th Symphony for him, possibly as a weirdly soulless funeral march.

Evidently this most ambitious project of its type ever attempted will see AI replicate creative genius, ending in a public performance by a symphony orchestra in Bonn, Beethoven's birthplace, part of celebrations to mark the 250th anniversary of the composer's birth. Why it's not being performed by flawless machines synthesizing perfect tones is unclear.

What is clear is that it's one of those plans with only two obvious pitfalls. It might fail. Or it might work.

A bad computer symphony would be awful, like early chess programs beneath contempt in their non-human weakness. But now their non-human strength is above contempt, as they dispatch the strongest grandmasters without emotion.

So my main concern here isn't with the headless Beethoven thing failing. It's with it succeeding. I know there's no stopping progress, that from mustard gas we had to go on to nuclear weapons and then autonomous killer bots. But must we whistle so cheerfully as we design heartless successors who will even whistle better than us?

It's strange how many people yearn for the abolition of man. From New Soviet Man to Walden Two, radicals can't wait to reinvent everything, including getting rid of dumb old languages where bridges have gender, and dumb old Adam and Eve into the bargain. Our ancestors stank. And we stink. The founder of behaviourist B.F. Skinner's utopian Walden Two chortles that when his perfect successors arrive the rest of us will pass on to a well-deserved oblivion.

So who are these successors? In That Hideous Strength, C.S. Lewis's demented scientist Filostrato proclaims: "In us organic life has produced Mind. It has done its work. After that we want no more of it. We do not want the world any longer furred over with organic life, like what you call the blue mould." What if we're nearly there?

Freed of the boring necessities of life, we might be paddocked in a digital, this-worldly Garden of Eden. But unless we are remade, we shall be more than just restless there. Without purpose we would go insane, as in Logan's Run or on the planet Miranda.

Ah, but we shall be remade. Monday's Post profiled Jennifer Doudna, inventor of the CRISPR-Cas9 gene-editing technique, so simple and powerful there's an app for it. Scientists can now dial up better genes on their smartphones and leave all the messy calculating to the machines. But if the machines can outcompose Beethoven, why would they leave the creative redesign of humans to us?

To her credit, Prof. Doudna has nightmares about Hitler welcoming her invention. But forget Hitler. Here comes Leela to edit us away. And if Walden Two's eagerly anticipated design of personalities and control of temperament are within reach, and desirable, why should the new ones look anything like our current wretched ones? Is there anything to cherish in fallible man? If not, what sleep shall come?

So as we ponder Christmas, if we do, let us remember that 2,000 years ago the world was turned upside down by a God made Man because he loved weakness, not strength. As a baby, then in the hideous humiliation of crucifixion, Christ gave a dignity to the helpless and downtrodden you find nowhere else, including operating systems. Is it all rubbish, from the theology to the morality?

Years ago I argued for genetic modifications to restore the normal human template. But not to improve it, from eagle eyes to three legs to eight feet tall. But what will the computers think, and why should they? If nature is an obstacle to transcendence, where will they get their standards? Not from us. Nor will they want a bunch of meat around, sweating, bruising, rotting. Say goodnight, HAL.

Already algorithmic pop music is not just worse but in some important way less human. Where is Greensleeves or Good King Wenceslas in this Brave New World? And where should it be?

Shall the digital future burst forth from our abdomens and laser away the mess? Or is there something precious about us frail, vain, petty and, yes, smelly mortals? If so, what?

Many people love Christmas without being Christian. But many do not. And I think it comes down to your ability, or inability, to love humans as we are, which the Bible says God did but which supercomputers have no obvious reason to do.

So sing a carol for fallen man while the machines work on a funeral march.

Go here to see the original:

John Robson: Why is man so keen to make man obsolete? - National Post

Written by admin

December 18th, 2019 at 9:46 pm

Posted in Alphazero

MuZero figures out chess, rules and all – Chessbase News

Posted: December 13, 2019 at 6:48 pm


without comments

12/12/2019 Just imagine you had a chess computer, the auto-sensor kind. Would someone who had no knowledge of the game be able to work it out, just by moving pieces? Or imagine you are a very powerful computer. By looking at millions of images of chess games, would you be able to figure out the rules and learn to play the game proficiently? The answer is yes, because that has just been done by Google's DeepMind team. For chess and 76 other games. It is interesting, and slightly disturbing. | Graphic: DeepMind

ChessBase 15 - Mega package

Find the right combination! ChessBase 15 program + new Mega Database 2020 with 8 million games and more than 80,000 master analyses. Plus ChessBase Magazine (DVD + magazine) and CB Premium membership for 1 year!

More...

In 1980 the first chess computer with an auto-response board, the Chafitz ARB Sargon 2.5, was released. It was programmed by Dan and Kathe Spracklen and had a sensory board and magnetic pieces. The magnets embedded in the pieces were all the same kind, so the board could only detect whether there was a piece on a square or not. It would signal its moves with LEDs located on the corner of each square.

Chafitz ARB Sargon 2.5 | Photo: My Chess Computers

Some years after the release of this computer I visited the Spracklens in their home in San Diego, and one evening had an interesting discussion, especially with Kathy. What would happen, we wondered, if we set up a Sargon 2.5 in a jungle village where nobody knew chess? If we left the people alone with the permanently switched-on board and pieces, would they be able to figure out the game? If they lifted a piece, the LED on that square would light up; if they put it on another square, that LED would light up briefly. If the move was legal, there would be a reassuring beep; the square of a piece of the opposite colour would light up, and if they picked up that piece another LED would light up. If the original move wasn't legal, the board would make an unpleasant sound.

Our question was: could they figure out, by trial and error, how chess was played? Kathy and I discussed it at length, over the Sargon board, and in the end came to the conclusion that it was impossible: they could never figure out the game without human instructions. Chess is far too complex.

Now, three decades later, I have to modify our conclusion somewhat: maybe humans indeed cannot learn chess by pure trial and error, but computers can...

You remember how AlphaGo and AlphaZero were created by Google's DeepMind division. The programs Leela and Fat Fritz were generated using the same principle: tell an AI program the rules of the game, how the pieces move, and then let it play millions of games against itself. The program draws its own conclusions about the game and starts to play master-level chess. In fact, it can be argued that these programs are the strongest entities to have ever played chess, human or computer.

Now DeepMind has come up with a fairly atrocious (but scientifically fascinating) idea: instead of telling the AI software the rules of the game, just let it play, using trial and error. Let it teach itself the rules of the game, and in the process learn to play it professionally. DeepMind combined a tree-based search (where a tree is a data structure used for locating information from within a set) with a learning model. They called the project MuZero. The program must predict the quantities most relevant to game planning, not just for chess, but for 57 different Atari games. The result: MuZero, we are told, matches the performance of AlphaZero in Go, chess, and shogi.

And this is how MuZero works (description from VentureBeat):

"Fundamentally, MuZero receives observations (images of a Go board or an Atari screen) and transforms them into a hidden state. This hidden state is updated iteratively by a process that receives the previous state and a hypothetical next action, and at every step the model predicts the policy (e.g., the move to play), value function (e.g., the predicted winner), and immediate reward (e.g., the points scored by playing a move)."
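In code terms, the loop described above can be sketched roughly as follows. This is only an illustration: in MuZero each of the three functions is a deep neural network, and the names (`representation`, `dynamics`, `prediction`) and stub bodies here are placeholders, not DeepMind's actual code.

```python
import random

def representation(observation):
    """Encode a raw observation (e.g. a board image) into a hidden state."""
    return hash(str(observation)) % 1000  # placeholder for a learned encoder

def dynamics(hidden_state, action):
    """Predict the next hidden state and the immediate reward of an action."""
    next_state = (hidden_state * 31 + action) % 1000  # placeholder update
    reward = 0.0  # in chess, reward only arrives at the end of the game
    return next_state, reward

def prediction(hidden_state):
    """Predict a policy (move probabilities) and a value for the state."""
    policy = [0.25] * 4                       # uniform over 4 toy actions
    value = (hidden_state % 200 - 100) / 100  # placeholder evaluation in [-1, 1)
    return policy, value

def rollout(observation, depth=3):
    """Unroll the learned model a few hypothetical moves into the future."""
    state = representation(observation)
    total_reward = 0.0
    for _ in range(depth):
        action = random.randrange(4)           # a real system searches here
        state, reward = dynamics(state, action)
        total_reward += reward
    _, value = prediction(state)               # bootstrap with predicted value
    return total_reward + value
```

Note that `rollout` never consults the real game rules: planning happens entirely inside the learned model, which is the key difference from AlphaZero.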

Evaluation of MuZero throughout training in chess, shogi, Go, and Atari; the y-axis shows Elo rating. | Image: DeepMind

As the DeepMind researchers explain, one form of reinforcement learning (the technique in which rewards drive an AI agent toward goals) involves models. This form models a given environment as an intermediate step, using a state-transition model that predicts the next step and a reward model that anticipates the reward. If you are interested in this subject you can read the article on VentureBeat, or visit the DeepMind site. There you can read this paper on the general reinforcement learning algorithm that masters chess, shogi and Go through self-play. Here's an abstract:

The game of chess is the longest-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. By contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go by reinforcement learning from self-play. In this paper, we generalize this approach into a single AlphaZero algorithm that can achieve superhuman performance in many challenging games. Starting from random play and given no domain knowledge except the game rules, AlphaZero convincingly defeated a world champion program in the games of chess and shogi (Japanese chess), as well as Go.

That refers to the original AlphaGo development, which has now been extended to MuZero. It turns out that it is possible not just to become highly proficient at a game by playing it a million times against yourself, but also to work out the rules of the game itself by trial and error.

I have just now learned about this development and need to think about the consequences and discuss it with experts. My first, somewhat flippant, reaction to a member of the DeepMind team: "What next? Show it a single chess piece and it figures out the whole game?"

See more here:

MuZero figures out chess, rules and all - Chessbase News

Written by admin

December 13th, 2019 at 6:48 pm

Posted in Alphazero

From AR to AI: The emerging technologies marketers can explore to enable and disrupt – Marketing Tech

Posted: at 6:48 pm


without comments

The entire written works of mankind, in all languages, from the beginning of recorded history amount to around 50 petabytes. One petabyte is about 20 million four-drawer filing cabinets filled with text. Google processes about 20 petabytes per day, so in three days it would have processed everything we have ever written. Meanwhile, data centres now annually consume as much energy as Sweden. By 2025 they'll consume a fifth of all of Earth's power.

For some, this is a revolution: being able to store and recall information at the touch of a button. For others, it is 1984, with Big Brother able to record and recall your every move. But just what can we expect from technology in the future, be it within our working life or leisure time?

We are now in the fourth industrial revolution. Technologies will revolutionise, empower and turbo-charge life as we know it. From changing economies to helping cure illnesses, technology already allows us to translate in real time while on business calls and to turn on our heating remotely on our way home from work.

A new race of superhumans is coming, with Alphabet-owned DeepMind having already shown us how these superhumans can outwit not only humans but other, lesser tech with AlphaZero, an artificial intelligence project set against Stockfish, one of the strongest conventional chess engines. Not only did it beat the program, it showed an unnerving amount of human intuition in how it played. As the New York Times commented: "intuitively and beautifully, with a romantic, attacking style. It played gambits."

Closer to home, organisations across the globe are using VR (virtual reality), AR (augmented reality), MR (mixed reality), XR (extended reality) and VR/360 to create experiential customer/user experiences.

The value of the AR industry for video games is $11.6bn. However, it is also valued at $5.1bn in healthcare, $4.7bn in engineering and $7m in education: far from the entertainment tech it once was, it is now a power being utilised for the greater good. 5G has the potential to revolutionise delivery, allowing super-high-definition content to reach mobile devices, while super-realistic AR and VR immersive experiences will transform our experience of education, news and entertainment.

So, if robots are now able to think quicker and sharper than us and predict our nuances, what's next, and how can it be used from an organisational point of view? Artificial intelligence can already predict your personality simply by tracking your eyes. Findings show that people's eye movements reveal whether they are sociable, conscientious or curious, with the algorithm software reliably recognising four of the big five personality traits: neuroticism, extroversion, agreeableness and conscientiousness.

As Yuval Noah Harari comments in Homo Deus: "Soon, books will be able to read you while you read them. If Kindle is upgraded with face recognition and biometric sensors, it can know what made you laugh, what made you sad and what made you angry."

This means that job interviews could be undertaken in the blink of an eye (literally), as one scan by a computer could tell potential employers whether the interviewee has the relevant traits for the job. Criminal psychologists could read those under scrutiny faster and help solve crimes quicker, with biometric sensors pointing towards dishonesty and those lacking in empathy.

Knowledge is power. And technology can create this knowledge. Using biometrics and health statistics from your Fitbit and phone, it can show your health predispositions, levels of fitness and wellbeing, and personality traits and tendencies drawn from sleep patterns, exercise and nutritional information.

However, it can also go one step further: your DNA and biometrics such as the speed of your heartbeat can indicate whether you have just had an increase in activity (which could mean physical, sexual or other types of excitement), while your sugar levels can indicate lifestyle choices and harmful habits.

This could mean office politics are a thing of the past, as HR managers could build teams based on DNA-proven personalities as well as skill sets. And promotions could be scientific, allowing those with more leadership-oriented personalities to be placed in leadership positions quicker and those with more subservient traits to be part of a team.

With the development of neural lace, an ultra-thin mesh that can be implanted in the skull to monitor brain function, and eventually nano-technology, we will be able to plug our own brains directly into the cloud, allowing software to manage mundane high-volume data processing and freeing our brains to think more creatively, with significantly more power, perhaps 1000x more. Which, as Singularity Hub's Raya Bidshahri points out, raises the question: with all this enhancement, what does "I" feel like anymore?

From an organisational point of view, it could mean information and data we store, such as recall and memory from meetings and research, could automatically be downloaded, freeing up more of our brain power to problem-solve and allowing us to think more creatively and smarter than our human form has ever allowed before.

So, what does this advancement of tech mean for the business of the future? Who really knows? What is sure, however, is that whatever your business sector, size or region, you should at the very least be aware of the latest advancements and always be ready to embrace them in your business, and work with agencies that have an eye on insights into the future, because sooner, exponentially sooner, the future will be now.

Whether you believe technology is the creator or all things good or all things evil, there is no doubt it will change our landscape forever. From our formative steps into the digital world to the leaps and bounds of the future, the force will be with you.

Interested in hearing leading global brands discuss subjects like this in person?

Find out more about Digital Marketing World Forum (#DMWF) Europe, London, North America, and Singapore.

View post:

From AR to AI: The emerging technologies marketers can explore to enable and disrupt - Marketing Tech

Written by admin

December 13th, 2019 at 6:48 pm

Posted in Alphazero

Doubting The AI Mystics: Dramatic Predictions About AI Obscure Its Concrete Benefits – Forbes

Posted: December 9, 2019 at 7:52 pm


without comments

Digital Human Brain Covered with Networks

Artificial intelligence is advancing rapidly. In a few decades machines will achieve superintelligence and become self-improving. Soon after that happens we will launch a thousand ships into space. These probes will land on distant planets, moons, asteroids, and comets. Using AI and terabytes of code, they will then nanoassemble local particles into living organisms. Each probe will, in fact, contain the information needed to create an entire ecosystem. Thanks to AI and advanced biotechnology, the species in each place will be tailored to their particular plot of rock. People will thrive in low temperatures, dim light, high radiation, and weak gravity. Humanity will become an incredibly elastic concept. In time our distant progeny will build megastructures that surround stars and capture most of their energy. Then the power of entire galaxies will be harnessed. Then life and AI (long a common entity by that point) will construct a galaxy-sized computer. It will take a mind that large about a hundred thousand years to have a thought. But those thoughts will pierce the veil of reality. They will grasp things as they really are. All will be one. This is our destiny.

Then again, maybe not.

There are, of course, innumerable reasons to reject this fantastic tale out of hand. Here's a quick and dirty one built around Copernicus's discovery that we are not the center of the universe. Most times, places, people, and things are average. But if sentient beings from Earth are destined to spend eons multiplying and spreading across the heavens, then those of us alive today are special. We are among the very few of our kind to live in our cosmic infancy, confined in our planetary cradle. Because we probably are not special, we probably are not at an extreme tip of the human timeline; we're likely somewhere in the broad middle. Perhaps a hundred billion modern humans have existed, across a span of around 50,000 years. To claim in the teeth of these figures that our species is on the cusp of spending millions of years spreading trillions of individuals across this galaxy and others, you must engage in some wishful thinking. You must embrace the notion that we today are, in a sense, back at the center of the universe.

It is in any case more fashionable to speculate about imminent catastrophes. Technology again looms large. In the gray goo scenario, runaway self-replicating nanobots consume all of the Earth's biomass. Thinking along similar lines, philosopher Nick Bostrom imagines an AI-enhanced paperclip machine that, ruthlessly following its prime directive to make paperclips, liquidates mankind and converts the planet into a giant paperclip mill. Elon Musk, when he discusses this hypothetical, replaces paperclips with strawberries, so that he can worry about strawberry fields forever. What Bostrom and Musk are driving at is the fear that an advanced AI being will not share our values. We might accidentally give it a bad aim (e.g., paperclips at all costs). Or it might start setting its own aims. As Stephen Hawking noted shortly before his death, a machine that sees your intelligence the way you see a snail's might decide it has no need for you. Instead of using AI to colonize distant planets, we will use it to destroy ourselves.

When someone mentions AI these days, she is usually referring to deep neural networks. Such networks are far from the only form of AI, but they have been the source of most of the recent successes in the field. A deep neural network can recognize a complex pattern without relying on a large body of pre-set rules. It does this with algorithms that loosely mimic how a human brain tunes neural pathways.

The neurons, or units, in a deep neural network are layered. The first layer is an input layer that breaks incoming data into pieces. In a network that looks at black-and-white images, for instance, each of the first layer's units might link to a single pixel. Each input unit in this network will translate its pixel's grayscale brightness into a number. It might turn a white pixel into zero, a black pixel into one, and a gray pixel into some fraction in between. These numbers will then pass to the next layer of units. Each of the units there will generate a weighted sum of the values coming in from several of the previous layer's units. The next layer will do the same thing to that second layer, and so on through many layers more. The deeper the layer, the more pixels accounted for in each weighted sum.

An early-layer unit will produce a high weighted sum (it will "fire," like a neuron does) for a pattern as simple as a black pixel above a white pixel. A middle-layer unit will fire only when given a more complex pattern, like a line or a curve. An end-layer unit will fire only when the pattern (or, rather, the weighted sums of many other weighted sums) presented to it resembles a chair or a bonfire or a giraffe. At the end of the network is an output layer. If one of the units in this layer reliably fires only when the network has been fed an image with a giraffe in it, the network can be said to recognize giraffes.
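To make the arithmetic concrete, here is a toy network in Python that does nothing but compute layered weighted sums over a tiny image. The weights are hand-picked for illustration; a real network has millions of them, set by training:

```python
def layer(inputs, weights):
    """Each unit computes a weighted sum of the previous layer's outputs."""
    return [sum(w * x for w, x in zip(unit_weights, inputs))
            for unit_weights in weights]

# A 2x2 black-and-white image: white = 0.0, black = 1.0, flattened row by row.
pixels = [1.0, 0.0, 0.0, 1.0]

# Two tiny layers with made-up (untrained) weights: two hidden units, then
# one output unit that sums the hidden units.
hidden = layer(pixels, [[0.5, -0.5, 0.0, 0.0],
                        [0.0, 0.0, -0.5, 0.5]])
output = layer(hidden, [[1.0, 1.0]])
print(output)  # one number summarizing the whole image
```

Each hidden unit "fires" on a simple contrast between two pixels, and the output unit combines them, exactly the layered pattern described above, minus the nonlinearities and scale of a real network.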

A deep neural network is not born recognizing objects. The network just described would have to learn from pre-labeled examples. At first the network would produce random outputs. Each time the network did this, however, the correct answers for the labeled image would be run backward through the network. An algorithm would be used, in other words, to move the networks unit weighting functions closer to what they would need to be to recognize a given object. The more samples a network is fed, the more finely tuned and accurate it becomes.
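The feedback step just described can be sketched in miniature with a single unit and a bare-bones error-driven update. This is far simpler than real backpropagation, which pushes such corrections through every layer, but the principle of nudging weights toward the correct answers is the same:

```python
def train_unit(samples, lr=0.1, epochs=200):
    """Nudge one unit's weights toward correct labels, sample by sample."""
    weights = [0.0, 0.0]
    for _ in range(epochs):
        for inputs, label in samples:
            prediction = sum(w * x for w, x in zip(weights, inputs))
            error = label - prediction
            # Move each weight a little in the direction that shrinks error.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    return weights

# Labeled examples: the label is simply the first input, so the unit must
# learn to weight input 0 at 1.0 and ignore input 1.
data = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0), ([1.0, 1.0], 1.0)]
w = train_unit(data)
print(w)  # close to [1.0, 0.0] after training
```

The more samples (and passes over them) the unit sees, the closer its weights settle on values that reproduce the labels, which is the "finely tuned and accurate" behavior the text describes.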

Some deep neural networks do not need spoon-fed examples. Say you want a program equipped with such networks to play chess. Give it the rules of the game, instruct it to seek points, and tell it that a checkmate is worth a hundred points. Then have it use a Monte Carlo method to randomly simulate games. Through trial and error, the program will stumble on moves that lead to a checkmate, and then on moves that lead to moves that lead to a checkmate, and so on. Over time the program will assign value to moves that simply tend to lead toward a checkmate. It will do this by constantly adjusting its networks unit weighting functions; it will just use points instead of correctly labeled images. Once the networks are trained, the program can win discrete contests in much the way it learned to play in the first place. At each of its turns, the program will simulate games for each potential move it is considering. It will then choose the move that does best in the simulations. Thanks to constant fine-tuning, even these in-game simulations will get better and better.
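The "simulate each candidate move, then pick the best" step can be shown on a game far smaller than chess. This sketch uses a made-up counting game (not chess, and with plain averages instead of a learned network), but the selection principle is the one described above:

```python
import random

# The toy game: a counter starts at 0; players alternate adding 1, 2, or 3;
# whoever pushes the counter to 10 or beyond wins.
TARGET = 10

def playout(counter, my_turn):
    """Play random moves to the end; return 1 if 'we' win, else 0."""
    while True:
        counter += random.choice([1, 2, 3])
        if counter >= TARGET:
            return 1 if my_turn else 0
        my_turn = not my_turn

def best_move(counter, simulations=2000):
    """Simulate each candidate move many times; pick the best average."""
    scores = {}
    for move in (1, 2, 3):
        if counter + move >= TARGET:
            return move  # immediate win, no simulation needed
        wins = sum(playout(counter + move, my_turn=False)
                   for _ in range(simulations))
        scores[move] = wins / simulations
    return max(scores, key=scores.get)
```

From a counter of 5, the random playouts favor the move to 6, which game theory confirms is the only move that leaves the opponent in a losing position. A real self-play program additionally feeds these playout results back into its network weights, so the simulations themselves improve over time.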

There is a chess program that operates more or less this way. It is called AlphaZero, and at present it is the best chess player on the planet. Unlike other chess supercomputers, it has never seen a game between humans. It learned to play by spending just a few hours simulating moves against itself. In 2017 it played a hundred games against Stockfish 8, one of the best chess programs to that point. Stockfish 8 examined 70 million moves per second. AlphaZero examined only 80,000. AlphaZero won 28 games, drew 72, and lost zero. It sometimes made baffling moves (to humans) that turned out to be masterstrokes. AlphaZero is not just a chess genius; it is an alien chess genius.

AlphaZero is at the cutting edge of AI, and it is very impressive. But its success is not a sign that AI will take us to the stars, or enslave us, any time soon. In Artificial Intelligence: A Guide for Thinking Humans, computer scientist Melanie Mitchell makes the case for AI sobriety. AI currently excels, she notes, only when there are clear rules, straightforward reward functions (for example, rewards for points gained or for winning), and relatively few possible actions (moves). Take IBM's Watson program. In 2011 it crushed the best human competitors on the quiz show Jeopardy!, leading IBM executives to declare that its successors would soon be making legal arguments and medical diagnoses. It has not worked out that way. Real-world questions and answers in real-world domains, Mitchell explains, have neither the simple short structure of Jeopardy! clues nor their well-defined responses.

Even in the narrow domains that most suit it, AI is brittle. A program that is a chess grandmaster cannot compete on a board with a slightly different configuration of squares or pieces. Unlike humans, Mitchell observes, none of these programs can transfer anything it has learned about one game to help it learn a different game. Because the programs cannot generalize or abstract from what they know, they can function only within the exact parameters in which they have been trained.

A related point is that current AI does not understand even basic aspects of how the world works. Consider this sentence: "The city council refused the demonstrators a permit because they feared violence." Who feared violence, the city council or the demonstrators? Using what she knows about bureaucrats, protestors, and riots, a human can spot at once that the fear resides in the city council. When AI-driven language-processing programs are asked this kind of question, however, their responses are little better than random guesses. When AI can't determine what "it" refers to in a sentence, Mitchell writes, quoting computer scientist Oren Etzioni, it's hard to believe that it will take over the world.

And it is not accurate to say, as many journalists do, that a program like AlphaZero learns by itself. Humans must painstakingly decide how many layers a network should have, how much incoming data should link to each input unit, how fast data should aggregate as it passes through the layers, how much each unit weighting function should change in response to feedback, and much else. These settings and designs, adds Mitchell, must typically be decided anew for each task a network is trained on. It is hard to see nefarious unsupervised AI on the horizon.

The doom camp (AI will murder us) and the rapture camp (it will take us into the mind of God) share a common premise. Both groups extrapolate from past trends of exponential progress. Moore's law, which is not really a law but an observation, says that the number of transistors we can fit on a computer chip doubles every two years or so. This enables computer processing speeds to increase at an exponential rate. The futurist Ray Kurzweil asserts that this trend of accelerating improvement stretches back to the emergence of life, the appearance of eukaryotic cells, and the Cambrian Explosion. Looking forward, Kurzweil sees an AI singularity, the rise of self-improving machine superintelligence, on the trendline around 2045.

The political scientist Philip Tetlock has looked closely at whether experts are any good at predicting the future. The short answer is that they're terrible at it. But they're not hopeless. Borrowing an analogy from Isaiah Berlin, Tetlock divides thinkers into hedgehogs and foxes. A hedgehog knows one big thing, whereas a fox knows many small things. A hedgehog tries to fit what he sees into a sweeping theory. A fox is skeptical of such theories. He looks for facts that will show he is wrong. A hedgehog gives answers and says "moreover" a lot. A fox asks questions and says "however" a lot. Tetlock has found that foxes are better forecasters than hedgehogs. The more distant the subject of the prediction, the more the hedgehog's performance lags.

Using a theory of exponential growth to predict an impending AI singularity is classic hedgehog thinking. It is a bit like basing a prediction about human extinction on nothing more than the Copernican principle. Kurzweil's vision of the future is clever and provocative, but it is also hollow. It is almost as if huge obstacles to general AI will soon be overcome because the theory says so, rather than because the scientists on the ground will perform the necessary miracles. Gordon Moore himself acknowledges that his law will not hold much longer. (Quantum computers might pick up the baton. We'll see.) Regardless, increased processing capacity might be just a small piece of what's needed for the next big leaps in machine thinking.

When at Thanksgiving dinner you see Aunt Jane sigh after Uncle Bob tells a blue joke, you can form an understanding of what Jane thinks about what Bob thinks. For that matter, you get the joke, and you can imagine analogous jokes that would also annoy Jane. You can infer that your cousin Mary, who normally likes such jokes but is not laughing now, is probably still angry at Bob for spilling the gravy earlier. You know that although you can't see Bob's feet, they exist, under the table. No deep neural network can do any of this, and it's not at all clear that more layers or faster chips or larger training sets will close the gap. We probably need further advances that we have only just begun to contemplate. Enabling machines to form humanlike conceptual abstractions, Mitchell declares, is still an almost completely unsolved problem.

There has been some concern lately about the demise of the corporate laboratory. Mitchell gives the impression that, at least in the technology sector, the corporate basic-research division is alive and well. Over the course of her narrative, labs at Google, Microsoft, Facebook, and Uber make major breakthroughs in computer image recognition, decision making, and translation. In 2013, for example, researchers at Google trained a network to create vectors among a vast array of words. A vector set of this sort enables a language-processing program to define and use a word based on the other words with which it tends to appear. The researchers put their vector set online for public use. Google is in some ways the protagonist of Mitchell's story. It is now an applied AI company, in Mitchell's words, that has placed machine thinking at the center of diverse products, services, and blue-sky research.
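The word-vector idea can be illustrated with toy numbers. The three-dimensional vectors below are hand-made for the example (real learned embeddings, such as Google's, have hundreds of dimensions); words used in similar contexts end up pointing in similar directions, which cosine similarity measures:

```python
import math

# Hand-made toy vectors, NOT real word2vec values.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: near 1.0 for similar directions, near 0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(cosine(vectors["king"], vectors["queen"]))  # high: similar contexts
print(cosine(vectors["king"], vectors["apple"]))  # low: different contexts
```

A language-processing program using such vectors can then treat "queen" as near-interchangeable with "king" in ways no dictionary lookup would reveal.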

Google has hired Ray Kurzweil, a move that might be taken as an implicit endorsement of his views. It is pleasing to think that many Google engineers earnestly want to bring on the singularity. The grand theory may be illusory, but the treasures produced in pursuit of it will be real.

Go here to see the original:

Doubting The AI Mystics: Dramatic Predictions About AI Obscure Its Concrete Benefits - Forbes

Written by admin

December 9th, 2019 at 7:52 pm

Posted in Alphazero

Artificial intelligence: How to measure the I in AI – TechTalks

Posted: at 7:52 pm


without comments


This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.

Last week, Lee Se-dol, the South Korean Go champion who lost in a historic matchup against DeepMind's artificial intelligence algorithm AlphaGo in 2016, declared his retirement from professional play.

"With the debut of AI in Go games, I've realized that I'm not at the top even if I become the number one through frantic efforts," Lee told the Yonhap news agency. "Even if I become the number one, there is an entity that cannot be defeated."

Predictably, Se-dol's comments quickly made the rounds across prominent tech publications, some of them using sensational headlines with AI-dominance themes.

Since the dawn of AI, games have been one of the main benchmarks to evaluate the efficiency of algorithms. And thanks to advances in deep learning and reinforcement learning, AI researchers are creating programs that can master very complicated games and beat the most seasoned players across the world. Uninformed analysts have been picking up on these successes to suggest that AI is becoming smarter than humans.

But at the same time, contemporary AI fails miserably at some of the most basic tasks that every human can perform.

This raises the question: does mastering a game prove anything? And if not, how can you measure the intelligence of an AI system?

Take the following example. In the picture below, you're presented with three problems and their solutions. There's also a fourth task that hasn't been solved. Can you guess the solution?

You're probably going to think that it's very easy. You'll also be able to solve different variations of the same problem with multiple walls, and multiple lines, and lines of different colors, just by seeing these three examples. But currently, there's no AI system, including the ones being developed at the most prestigious research labs, that can learn to solve such a problem with so few examples.

The above example is from "The Measure of Intelligence," a paper by François Chollet, the creator of the Keras deep learning library. Chollet published this paper a few weeks before Lee Se-dol declared his retirement. In it, he provided many important guidelines on understanding and measuring intelligence.

Ironically, Chollet's paper did not receive a fraction of the attention it deserves. Unfortunately, the media is more interested in covering exciting AI news that gets more clicks. The 62-page paper contains a lot of invaluable information and is a must-read for anyone who wants to understand the state of AI beyond the hype and sensation.

But I will do my best to summarize the key recommendations Chollet makes on measuring AI systems and comparing their performance to that of human intelligence.

"The contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks, such as board games and video games," Chollet writes, adding that solely measuring skill at any given task "falls short of measuring intelligence."

In fact, the obsession with optimizing AI algorithms for specific tasks has entrenched the community in narrow AI. As a result, work in AI has drifted away from the original vision of developing thinking machines that possess intelligence comparable to that of humans.

"Although we are able to engineer systems that perform extremely well on specific tasks, they still have stark limitations, being brittle, data-hungry, unable to make sense of situations that deviate slightly from their training data or the assumptions of their creators, and unable to repurpose themselves to deal with novel tasks without significant involvement from human researchers," Chollet notes in the paper.

Chollets observations are in line with those made by other scientists on the limitations and challenges of deep learning systems. These limitations manifest themselves in many ways:

Here's an example: OpenAI's Dota-playing neural networks needed 45,000 years' worth of gameplay to reach a professional level. The AI is also limited in the number of characters it can play, and the slightest change to the game rules will result in a sudden drop in its performance.

The same can be seen in other fields, such as self-driving cars. Despite millions of hours of road experience, the AI algorithms that power autonomous vehicles can make stupid mistakes, such as crashing into lane dividers or parked firetrucks.

One of the key challenges that the AI community has struggled with is defining intelligence. Scientists have debated for decades on providing a clear definition that allows us to evaluate AI systems and determine what is intelligent or not.

Chollet borrows the definition by DeepMind cofounder Shane Legg and AI scientist Marcus Hutter: "Intelligence measures an agent's ability to achieve goals in a wide range of environments."

Key here are "achieve goals" and "a wide range of environments." Most current AI systems are pretty good at the first part, which is to achieve very specific goals, but bad at doing so in a wide range of environments. For instance, an AI system that can detect and classify objects in images will not be able to perform some other related task, such as drawing images of objects.
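Legg and Hutter went on to formalize this definition as a single "universal intelligence" measure in their 2007 paper. Sketched here from memory of that paper, so treat the notation as indicative rather than canonical:

```latex
% Universal intelligence of an agent \pi (after Legg & Hutter, 2007):
% a weighted sum of the agent's expected value V over all computable
% environments \mu, where simpler environments (low Kolmogorov
% complexity K) receive more weight.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

An agent scores highly only by achieving goals across many environments, not by excelling in one, which is exactly the gap the article describes.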

Chollet then examines the two dominant approaches in creating intelligence systems: symbolic AI and machine learning.

Early generations of AI research focused on symbolic AI, which involves creating an explicit representation of knowledge and behavior in computer programs. This approach requires human engineers to meticulously write the rules that define the behavior of an AI agent.

"It was then widely accepted within the AI community that the problem of intelligence would be solved if only we could encode human skills into formal rules and encode human knowledge into explicit databases," Chollet observes.

But rather than being intelligent by themselves, these symbolic AI systems manifest the intelligence of their creators in creating complicated programs that can solve specific tasks.

The second approach, machine learning systems, is based on providing the AI model with data from the problem space and letting it develop its own behavior. The most successful machine learning structure so far is artificial neural networks, which are complex mathematical functions that can create complex mappings between inputs and outputs.

For instance, instead of manually coding the rules for detecting cancer in x-ray slides, you feed a neural network many slides annotated with their outcomes, a process called training. The AI examines the data and develops a mathematical model that represents the common traits of cancer patterns. It can then process new slides and output how likely it is that the patient has cancer.
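The training loop just described can be sketched in a few lines. This is a toy logistic-regression stand-in with synthetic data, not real medical images or a production model, but the shape of the process is the same: annotated examples go in, a fitted mathematical function comes out, and no human ever writes the decision rule explicitly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for "annotated slides": 2 features per sample, label 1 = positive.
# (Synthetic data; the hidden pattern is simply x1 + x2 > 0.)
X = rng.normal(0.0, 1.0, (200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient-descent "training": the model discovers the pattern from examples.
w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

# A new "slide": the model outputs a probability, not a hand-coded verdict.
prob = sigmoid(np.array([1.5, 1.0]) @ w + b)
print(prob)  # high, since 1.5 + 1.0 > 0 matches the learned pattern
```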

Advances in neural networks and deep learning have enabled AI scientists to tackle many tasks that were previously very difficult or impossible with classic AI, such as natural language processing, computer vision and speech recognition.

Neural network-based models, also known as connectionist AI, are named after their biological counterparts. They are based on the idea that the mind is a blank slate (tabula rasa) that turns experience (data) into behavior. Therefore, the general trend in deep learning has become to solve problems by creating bigger neural networks and providing them with more training data to improve their accuracy.

Chollet rejects both approaches because none of them has been able to create generalized AI that is flexible and fluid like the human mind.

"We see the world through the lens of the tools we are most familiar with. Today, it is increasingly apparent that both of these views of the nature of human intelligence, either a collection of special-purpose programs or a general-purpose tabula rasa, are likely incorrect," he writes.

Truly intelligent systems should be able to develop higher-level skills that can span across many tasks. For instance, an AI program that masters Quake 3 should be able to play other first-person shooter games at a decent level. Unfortunately, the best that current AI systems achieve is local generalization, a limited maneuver room within their own narrow domain.
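A toy illustration of what "local generalization" means in practice, using nothing more than a nearest-neighbor model (chosen here for brevity; the same failure mode appears in far larger systems): the model looks competent inside the region its training data covers, yet answers with the same total confidence arbitrarily far outside it.

```python
import numpy as np

# Training data covers only a narrow region: x in [0, 1], labeled by a simple rule.
X_train = np.linspace(0.0, 1.0, 50)
y_train = (X_train > 0.5).astype(int)

def nearest_neighbor_predict(x):
    # 1-nearest-neighbor: copy the label of the closest training point.
    return int(y_train[np.argmin(np.abs(X_train - x))])

# Inside the training range the model looks competent...
print(nearest_neighbor_predict(0.3), nearest_neighbor_predict(0.7))

# ...but far outside it, it still answers without any signal that it is
# extrapolating blindly from the nearest edge of its experience.
print(nearest_neighbor_predict(100.0))
```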

In his paper, Chollet argues that the "generalization" or "generalization power" of any AI system is its ability to handle situations (or tasks) that differ from previously encountered situations.

Interestingly, this is a missing component of both symbolic and connectionist AI. The former requires engineers to explicitly define its behavioral boundary and the latter requires examples that outline its problem-solving domain.

Chollet also goes further and speaks of developer-aware generalization, which is the ability of an AI system to handle situations that neither the system nor the developer of the system have encountered before.

This is the kind of flexibility you would expect from a robo-butler that could perform various chores inside a home without having explicit instructions or training data on them. An example is Steve Wozniak's famous coffee test, in which a robot would enter a random house and make coffee without knowing in advance the layout of the home or the appliances it contains.

Elsewhere in the paper, Chollet makes it clear that AI systems that cheat their way toward their goal by leveraging priors (rules) and experience (data) are not intelligent. For instance, consider Stockfish, the best rule-based chess-playing program. Stockfish, an open-source project, is the result of contributions from thousands of developers who have created and fine-tuned tens of thousands of rules. A neural network-based example is AlphaZero, the multi-purpose AI that has conquered several board games by playing them millions of times against itself.

Both systems have been optimized to perform a specific task by making use of resources that are beyond the capacity of the human mind. The brightest human can't memorize tens of thousands of chess rules. Likewise, no human can play millions of chess games in a lifetime.

"Solving any given task with beyond-human level performance by leveraging either unlimited priors or unlimited data does not bring us any closer to broad AI or general AI, whether the task is chess, football, or any e-sport," Chollet notes.

This is why it's totally wrong to compare Deep Blue, AlphaZero, AlphaStar or any other game-playing AI with human intelligence.

Likewise, an AI model such as Aristo, the program that can pass an eighth-grade science test, does not possess the same knowledge as a middle-school student. It owes its supposed scientific abilities to the huge corpora of knowledge it was trained on, not to any understanding of the world of science.

(Note: Some AI researchers, such as computer scientist Rich Sutton, believe that the true direction for artificial intelligence research should be methods that can scale with the availability of data and compute resources.)

In the paper, Chollet presents the Abstraction and Reasoning Corpus (ARC), a dataset intended to evaluate the efficiency of AI systems and compare their performance with that of human intelligence. ARC is a set of problem-solving tasks tailored for both AI and humans.

One of the key ideas behind ARC is to level the playing field between humans and AI. It is designed so that humans can't take advantage of their vast background knowledge of the world to outmaneuver the AI. For instance, it doesn't involve language-related problems, which AI systems have historically struggled with.

On the other hand, its also designed in a way that prevents the AI (and its developers) from cheating their way to success. The system does not provide access to vast amounts of training data. As in the example shown at the beginning of this article, each concept is presented with a handful of examples.

The AI developers must build a system that can handle various concepts such as object cohesion, object persistence, and object influence. The AI system must also learn to perform tasks such as scaling, drawing, connecting points, rotating and translating.

Also, the test dataset, the problems that are meant to evaluate the intelligence of the developed system, is designed in a way that prevents developers from solving the tasks in advance and hard-coding their solutions in the program. Optimizing for evaluation sets is a popular cheating method in data science and machine learning competitions.

According to Chollet, ARC only assesses a general form of fluid intelligence, with a focus on reasoning and abstraction. This means that the test favors program synthesis, the subfield of AI that involves generating programs that satisfy high-level specifications. This approach is in contrast with current trends in AI, which are inclined toward creating programs that are optimized for a limited set of tasks (e.g., playing a single game).
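ARC tasks are published as JSON: a handful of demonstration input/output grid pairs plus test inputs. Program synthesis, the approach Chollet says the test favors, amounts to searching for a program consistent with those few demonstrations. A deliberately tiny sketch follows; the toy task and the three-program candidate library are invented for illustration, and real ARC tasks are vastly harder than anything this brute-force search could solve:

```python
# A toy ARC-style task: each demonstration pair shows a grid transformation.
task = {
    "train": [
        {"input": [[1, 0], [0, 0]], "output": [[0, 1], [0, 0]]},
        {"input": [[0, 0], [2, 0]], "output": [[0, 0], [0, 2]]},
    ],
    "test": [{"input": [[3, 0], [0, 4]]}],
}

# A miniature "program synthesis" solver: search a tiny library of candidate
# programs for one that is consistent with every demonstration.
candidates = {
    "identity": lambda g: g,
    "flip_horizontal": lambda g: [row[::-1] for row in g],
    "flip_vertical": lambda g: g[::-1],
}

def solve(task):
    for name, program in candidates.items():
        if all(program(pair["input"]) == pair["output"] for pair in task["train"]):
            return name, [program(t["input"]) for t in task["test"]]
    return None, None

name, predictions = solve(task)
print(name, predictions)
```

Two demonstrations are enough to rule out the other candidates here; the whole difficulty of ARC is that the space of plausible "programs" a human considers is vast and structured, not a three-entry dictionary.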

In his experiments with ARC, Chollet has found that humans can fully solve ARC tests. But current AI systems struggle with the same tasks. "To the best of our knowledge, ARC does not appear to be approachable by any existing machine learning technique (including Deep Learning), due to its focus on broad generalization and few-shot learning," Chollet notes.

While ARC is a work in progress, it can become a promising benchmark to test the level of progress toward human-level AI. "We posit that the existence of a human-level ARC solver would represent the ability to program an AI from demonstrations alone (only requiring a handful of demonstrations to specify a complex task) to do a wide range of human-relatable tasks of a kind that would normally require human-level, human-like fluid intelligence," Chollet observes.

Original post:

Artificial intelligence: How to measure the I in AI - TechTalks

Written by admin

December 9th, 2019 at 7:52 pm

Posted in Alphazero




