
Archive for the ‘Alphazero’ Category

Artificial intelligence in the arms race: Commentary by Avi Ben Ezra – Augusta Free Press

Posted: February 9, 2020 at 2:48 am

without comments

Published Tuesday, Feb. 4, 2020, 8:45 am



Photo Credit: lilcrazyfuzzy/iStock Photo

Artificial intelligence is at the epicenter of the arms race, and whoever has superior AI will win.

For most people, the threat of AI has been limited to economic dislocation and the sci-fi robotic apocalypse. Yet AI advances are taking place in the private sector, outside governments' control or scrutiny, and there is speculation that the technology is quietly being put to defense uses. Experts believe that a new arms race is developing, particularly between the United States and China.

The Chinese president realized the power of AI, and its superhuman capacity to think, after AlphaGo defeated the world's number one Go player. Like some experts, he evidently foresees AI gaining the ability to rewrite its own code within a few years' time and exploding its IQ to as high as 10,000. Humans would be like ants next to such intelligent giants.

Achieving this artificial superintelligence will require breakthroughs in transformative technology whose circumstances and timing cannot be predicted at present. However, President Xi and other leaders saw the possibilities of AI for the global balance of power when AlphaGo won at Go, a game of strategy.

Strategy games come in two types. First, there are games of complete information, such as Tic-Tac-Toe, chess and Go, in which players can see all the parameters and options of the other players. Such games can generally be won with practice. Then there are games of incomplete information, such as Rock, Paper, Scissors, in which players can learn the rules and know the optimal strategy, but no one is certain how the opponent will play; there is no guaranteed winning strategy, and winning is partly left to chance.
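The point about incomplete information can be made concrete with Rock, Paper, Scissors. This short Python sketch (illustrative only, not from the original commentary) shows that the game-theoretically optimal strategy, mixing the three moves uniformly at random, has an expected payoff of exactly zero against any opponent: it cannot be exploited, but it also cannot guarantee a win.

```python
import itertools

# Payoff for player 1: +1 for a win, -1 for a loss, 0 for a draw.
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(a, b):
    if a == b:
        return 0
    return 1 if BEATS[a] == b else -1

def expected_value(my_strategy, opp_strategy):
    """Expected payoff when both players mix their moves with given probabilities."""
    return sum(p * q * payoff(a, b)
               for (a, p), (b, q) in itertools.product(my_strategy.items(),
                                                       opp_strategy.items()))

uniform = {"rock": 1/3, "paper": 1/3, "scissors": 1/3}
all_rock = {"rock": 1.0, "paper": 0.0, "scissors": 0.0}

# Uniform play is unexploitable: expected value 0 against any opponent mix.
print(expected_value(uniform, all_rock))
print(expected_value(uniform, uniform))
```

Against a pure "always rock" opponent, the uniform mix wins, loses and draws equally often, which is exactly what "no definite winning strategy" means in practice.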

Humans once beat computers at games, and there was a belief that humans' ability to think abstractly and to narrow decision-making down to a few good choices would always defeat the machine. Then in 2016, AlphaGo, an AI system, defeated the Go world champion 4-1. In 2017, AlphaGo Zero, a successor with the ability to learn entirely from self-play, defeated AlphaGo 100-0, accumulating knowledge and inventing unheard-of strategies within 40 days. That same year, AlphaZero was pitted against Stockfish in 100 chess games and, within hours of learning the game from scratch, won 28, drew 72 and lost none. No human grandmaster has ever beaten Stockfish in a match, yet this self-taught AI did.

In 2017, Libratus, another AI system, beat some of the best professional players at heads-up No-Limit Texas Hold'em. In 2019, Pluribus beat multiple top professionals at six-player No-Limit Hold'em, winning at a rate of roughly five big blinds per 100 hands. Poker is a game of incomplete information, uncertainty and complexity that combines strategy, mathematics, psychology, timing and luck.

By the beginning of 2020, AI had beaten all human players and the best computer programs ever designed.

Avi Ben Ezra, the CTO of the SnatchBot chatbot platform, says: "It is normal that most analysts talk about the US and China, but actually, with the military chatbots that we created, you have in excess of 40 countries who tackle a range of issues from information warfare to cybersecurity and fraud detection with clever AI chatbots that are integrated with robotic process automation (RPA)."

Poker mimics life because of uncertainty. In the US-China rivalry, China's objective is to replace America as the dominant superpower. It knows America's defense budget, force development plans and, to a certain degree, its military resources, capabilities and specifications. However, America's alliances keep shifting, its capabilities and projects are classified, and international crises are unpredictable. Therefore, the best China can do is invest optimally to exploit America's weaknesses while managing its own risks and weaknesses. The US will do the same. Both countries' defense planners are constrained by bureaucracy, internal rivalry, politics and vested interests. There is obviously a lot of uncertainty.

Since AI beats the best humans at poker, its capability is obviously being tested in defense. In a few years, AI systems could be making military decisions as generals that never tire, have no fear, are never distracted, and always perform at their peak. No human decision-maker could compete.

Under those circumstances, the country with even slightly inferior AI would lose every battle, and the winner would be whoever controls the best AI. No one knows how it's going to play out, but it is certain that AI will lead the arms race as each nation places it at the very core of national achievement.

Bringing AI and RPA together works the same way in the military as in any other organization: it improves efficiency and drives down cost. Yet the key issue is that maintaining a technological edge is at the heart of the strategy for several opposing players in the game.



Written by admin

February 9th, 2020 at 2:48 am

Posted in Alphazero

John Robson: Why is man so keen to make man obsolete? – National Post

Posted: December 18, 2019 at 9:46 pm


"We wish you a headless robot/ We wish you a headless robot/ We wish you a headless robot/ and an alpha zero." If that ditty lacked a certain something, you should be going "Da da da doom!" about the festive piece in Saturday's Post about a computer saying "Roll Over Beethoven" and finishing his fragmentary 10th Symphony for him, possibly as a weirdly soulless funeral march.

Evidently this most ambitious project of its type ever attempted will see AI replicate creative genius, ending in a public performance by a symphony orchestra in Bonn, Beethoven's birthplace, as part of celebrations to mark the 250th anniversary of the composer's birth. Why it's not being performed by flawless machines synthesizing perfect tones is unclear.

What is clear is that it's one of those plans with only two obvious pitfalls. It might fail. Or it might work.


A bad computer symphony would be awful, like early chess programs: beneath contempt in their non-human weakness. But now their non-human strength is above contempt, as they dispatch the strongest grandmasters without emotion.

So my main concern here isn't with the headless Beethoven thing failing. It's with it succeeding. I know there's no stopping progress, that from mustard gas we had to go on to nuclear weapons and then autonomous killer bots. But must we whistle so cheerfully as we design heartless successors who will even whistle better than us?

It's strange how many people yearn for the abolition of man. From New Soviet Man to Walden Two, radicals can't wait to reinvent everything, including getting rid of dumb old languages where bridges have gender, and dumb old Adam and Eve into the bargain. Our ancestors stank. And we stink. The founder of behaviourist B.F. Skinner's utopian Walden Two chortles that when his perfect successors arrive the rest of us will pass on to a well-deserved oblivion.

So who are these successors? In That Hideous Strength, C.S. Lewis's demented scientist Filostrato proclaims that "In us organic life has produced Mind. It has done its work. After that we want no more of it. We do not want the world any longer furred over with organic life, like what you call the blue mould." What if we're nearly there?

Freed of the boring necessities of life, we might be paddocked in a digital, this-worldly Garden of Eden. But unless we are remade, we shall be more than just restless there. Without purpose we would go insane, as in Logan's Run or on the planet Miranda.

Ah, but we shall be remade. Monday's Post profiled Jennifer Doudna, co-inventor of the CRISPR-Cas9 gene-editing technique, so simple and powerful that there's an app for it. Scientists can now dial up better genes on their smartphones and leave all the messy calculating to the machines. But if the machines can outcompose Beethoven, why would they leave the creative redesign of humans to us?


To her credit, Prof. Doudna has nightmares about Hitler welcoming her invention. But forget Hitler. Here comes Leela to edit us away. And if Walden Two's eagerly anticipated design of personalities and control of temperament are within reach, and desirable, why should the new ones look anything like our current wretched ones? Is there anything to cherish in fallible man? If not, what sleep shall come?

So as we ponder Christmas, if we do, let us remember that 2,000 years ago the world was turned upside down by a God made Man because he loved weakness, not strength. As a baby, and then in the hideous humiliation of crucifixion, Christ gave a dignity to the helpless and downtrodden that you find nowhere else, including operating systems. Is it all rubbish, from the theology to the morality?

Years ago I argued for genetic modifications to restore the normal human template, but not to improve it, whether with eagle eyes, three legs or eight-foot frames. But what will the computers think, and why should they? If nature is an obstacle to transcendence, where will they get their standards? Not from us. Nor will they want a bunch of meat around, sweating, bruising, rotting. Say goodnight, HAL.

Already algorithmic pop music is not just worse but in some important way less human. Where is Greensleeves or Good King Wenceslas in this Brave New World? And where should it be?

Shall the digital future burst forth from our abdomens and laser away the mess? Or is there something precious about us frail, vain, petty and, yes, smelly mortals? If so, what?

Many people love Christmas without being Christian. But many do not. And I think it comes down to your ability, or inability, to love humans as we are, which the Bible says God did but which supercomputers have no obvious reason to do.

So sing a carol for fallen man while the machines work on a funeral march.



MuZero figures out chess, rules and all – Chessbase News

Posted: December 13, 2019 at 6:48 pm


12/12/2019 – Just imagine you had a chess computer of the auto-sensor kind. Would someone who had no knowledge of the game be able to work it out, just by moving the pieces? Or imagine you are a very powerful computer: by looking at millions of images of chess games, would you be able to figure out the rules and learn to play the game proficiently? The answer is yes, because that has just been done by Google's DeepMind team, for chess and dozens of other games. It is interesting, and slightly disturbing. | Graphic: DeepMind



In 1980 the first chess computer with an auto-response board, the Chafitz ARB Sargon 2.5, was released. It was programmed by Dan and Kathe Spracklen and had a sensory board and magnetic pieces. The magnets embedded in the pieces were all of the same kind, so the board could only detect whether or not there was a piece on a given square. It would signal its moves with LEDs located on the corner of each square.

Chafitz ARB Sargon 2.5 | Photo: My Chess Computers

Some years after the release of this computer I visited the Spracklens in their home in San Diego, and one evening had an interesting discussion, especially with Kathe. What would happen, we wondered, if we set up a Sargon 2.5 in a jungle village where nobody knew chess? If we left the people alone with the permanently switched-on board and pieces, would they be able to figure out the game? If they lifted a piece, the LED on that square would light up; if they put it on another square, that LED would light up briefly. If the move was legal, there would be a reassuring beep; the square of a piece of the opposite colour would light up, and if they picked up that piece, another LED would light up. If the original move wasn't legal, the board would make an unpleasant sound.

Our question was: could they figure out, by trial and error, how chess was played? Kathe and I discussed it at length, over the Sargon board, and in the end came to the conclusion that it was impossible: they could never figure out the game without human instruction. Chess is far too complex.

Now, three decades later, I have to modify our conclusion somewhat: maybe humans indeed cannot learn chess by pure trial and error, but computers can...

You remember how AlphaGo and AlphaZero were created by Google's DeepMind division. The programs Leela and Fat Fritz were generated using the same principle: tell an AI program the rules of the game and how the pieces move, then let it play millions of games against itself. The program draws its own conclusions about the game and starts to play master-level chess. In fact, it can be argued that these programs are the strongest entities ever to have played chess, human or computer.

Now DeepMind has come up with a fairly atrocious (but scientifically fascinating) idea: instead of telling the AI software the rules of the game, just let it play, using trial and error. Let it teach itself the rules of the game, and in the process learn to play it professionally. DeepMind combined a tree-based search (where a tree is a data structure used for locating information from within a set) with a learning model. They called the project MuZero. The program must predict the quantities most relevant to game planning, not just for chess but also for Go, shogi and 57 different Atari games. The result: MuZero, we are told, matches the performance of AlphaZero in Go, chess, and shogi.

And this is how MuZero works (description from VentureBeat):

"Fundamentally, MuZero receives observations (images of a Go board or an Atari screen) and transforms them into a hidden state. This hidden state is updated iteratively by a process that receives the previous state and a hypothetical next action, and at every step the model predicts the policy (e.g., the move to play), value function (e.g., the predicted winner), and immediate reward (e.g., the points scored by playing a move)."

Evaluation of MuZero throughout training in chess, shogi, Go, and Atari; the y-axis shows Elo rating | Image: DeepMind
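For readers who want the mechanics, the loop in the VentureBeat description can be sketched in a few lines of Python. The three functions below are toy stand-ins (random linear maps, invented purely for illustration) for MuZero's learned representation, dynamics and prediction networks; only the data flow matches the description, not the real architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for MuZero's three learned networks. Real models are deep
# networks; these random matrices only demonstrate the shapes and data flow.
W_repr = rng.normal(size=(8, 16))   # representation: observation -> hidden state
W_dyn  = rng.normal(size=(8, 9))    # dynamics: (hidden state, action) -> next state
W_pred = rng.normal(size=(3, 8))    # prediction: hidden state -> (policy, value)

def representation(observation):
    """Encode a raw observation (e.g. board pixels) into a hidden state."""
    return np.tanh(W_repr @ observation)

def dynamics(state, action):
    """Advance the hidden state given a hypothetical action; reward is 0 in this toy."""
    return np.tanh(W_dyn @ np.concatenate([state, [action]])), 0.0

def prediction(state):
    """Predict (policy logits, value estimate) from a hidden state."""
    out = W_pred @ state
    return out[:2], out[2]

# Unroll a hypothetical action sequence entirely in hidden-state space:
state = representation(rng.normal(size=16))       # stand-in for a board image
for action in [0.0, 1.0, 0.0]:
    policy, value = prediction(state)             # what to play, who is winning
    state, reward = dynamics(state, action)       # imagine the next position
```

The key design point the quote makes is visible here: after the first call, the model never touches the real observation again; planning happens entirely in its own learned state space.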

As the DeepMind researchers explain, one form of reinforcement learning (the technique in which rewards drive an AI agent toward goals) involves models. This form models a given environment as an intermediate step, using a state-transition model that predicts the next step and a reward model that anticipates the reward. If you are interested in this subject, you can read the article on VentureBeat, or visit the DeepMind site. There you can read the paper on the general reinforcement learning algorithm that masters chess, shogi and Go through self-play. Here's the abstract:

The game of chess is the longest-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. By contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go by reinforcement learning from self-play. In this paper, we generalize this approach into a single AlphaZero algorithm that can achieve superhuman performance in many challenging games. Starting from random play and given no domain knowledge except the game rules, AlphaZero convincingly defeated a world champion program in the games of chess and shogi (Japanese chess), as well as Go.

That refers to the original AlphaZero development, which has now been extended to MuZero. It turns out that it is possible not just to become highly proficient at a game by playing it a million times against yourself, but in fact to work out the rules of the game by trial and error.

I have just now learned about this development and need to think about the consequences and discuss it with experts. My first, somewhat flippant reaction to a member of the DeepMind team: "What next? Show it a single chess piece and it figures out the whole game?"



From AR to AI: The emerging technologies marketers can explore to enable and disrupt – Marketing Tech

Posted: December 13, 2019 at 6:48 pm


The entire written works of mankind, in all languages, from the beginning of recorded history amount to around 50 petabytes. One petabyte is about 20 million four-drawer filing cabinets filled with text. Google processes about 20 petabytes per day, so in three days it would have processed everything we have ever written. Meanwhile, data centres now annually consume as much energy as Sweden. By 2025 they'll consume a fifth of all of Earth's power.
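The arithmetic behind these scale claims is easy to check; the snippet below simply restates the article's own figures:

```python
PB = 10**15                       # bytes in one petabyte (decimal convention)

# One petabyte ~ 20 million four-drawer filing cabinets of text implies
# about 50 MB of plain text per cabinet, i.e. thousands of pages per drawer.
bytes_per_cabinet = PB / 20_000_000
print(bytes_per_cabinet / 1e6)    # megabytes of text per cabinet

# At ~20 PB processed per day, the ~50 PB of all recorded writing
# takes between two and three days to process.
days = 50 / 20
print(days)
```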

For some, this is a revolution: being able to store and recall information at the touch of a button. For others, it is 1984, with Big Brother able to record and recall your every move. But just what can we expect from technology in the future, be it within our working life or our leisure time?

We are now in the fourth industrial revolution. Technologies will revolutionise, empower and turbo-charge life as we know it. From changing economies to helping cure illnesses, technology already allows us to translate in real time while on business calls and to turn on our heating remotely on our way home from work.

A new race of superhumans is coming, with Alphabet-owned DeepMind having already shown how these superhumans can outwit not only humans but other, lesser tech: its AlphaZero artificial intelligence project was set against Stockfish, one of the strongest conventional chess engines. Not only did it beat the program, it showed an unnerving amount of human intuition in how it played. As the New York Times commented, it played "intuitively and beautifully, with a romantic, attacking style. It played gambits."

Closer to home, organisations across the globe are using VR (virtual reality), AR (augmented reality), MR (mixed reality), XR (extended reality) and VR/360 video to create experiential customer and user experiences.

The AR industry is valued at $11.6bn for video games. However, it is also valued at $5.1bn in healthcare, $4.7bn in engineering and $7m in education. Far from the entertainment tech it once was, it is now a power being utilised for the greater good. 5G has the potential to revolutionise delivery, allowing super-high-definition content to be served to mobile devices, while super-realistic AR and VR immersive experiences will transform our experience of education, news and entertainment.

So, if robots are now able to think quicker and sharper than us and predict our nuances, what's next, and how can it be used from an organisational point of view? Artificial intelligence can already predict your personality simply by tracking your eyes. Findings show that people's eye movements reveal whether they are sociable, conscientious or curious, with software reliably recognising four of the big five personality traits: neuroticism, extroversion, agreeableness and conscientiousness.

As Yuval Noah Harari comments in Homo Deus: "Soon, books will be able to read you while you read them. If Kindle is upgraded with face recognition and biometric sensors, it can know what made you laugh, what made you sad and what made you angry."

This means that job interviews could be conducted in the blink of an eye (literally), as one scan by a computer could tell potential employers whether the interviewee has the relevant traits for the job. Criminal psychologists could read those under scrutiny faster and help solve crimes more quickly, with biometric sensors pointing towards dishonesty and a lack of empathy.

Knowledge is power, and technology can create this knowledge. Using biometrics and health statistics from your Fitbit and phone, it can show your health predispositions, levels of fitness and wellbeing, and personality traits and tendencies drawn from sleep patterns, exercise and nutritional information.

However, it can also go one step further. Your DNA and biometrics, such as the speed of your heartbeat, can indicate whether you have just had an increase in activity, which could mean physical, sexual or other types of excitement, while your sugar levels can indicate lifestyle choices and harmful habits.

This could mean office politics become a thing of the past, as HR managers could build teams based on DNA-proven personalities as well as skill sets. Promotions could become scientific, allowing those with leadership personalities to be placed in leadership positions more quickly and those with more subservient traits to become part of a team.

With the development of neural lace, an ultra-thin mesh that can be implanted in the skull to monitor brain function, and eventually nanotechnology, we will be able to plug our own brains directly into the cloud, allowing software to manage mundane, high-volume data processing and freeing our brains to think more creatively, with significantly more power, perhaps 1,000 times more. Which, as Singularity Hub's Raya Bidshahri points out, raises the question: with all this enhancement, what does "I" feel like anymore?

From an organisational point of view, it could mean that information and data we now store, such as recollections from meetings and research, could automatically be downloaded, freeing up more of our brain power to solve problems and allowing us to think more creatively and smartly than our human form has ever allowed before.

So, what does this advancement of tech mean for the business of the future? Who really knows? What is sure, however, is that whatever your business sector, size or region, you should at the very least stay aware of the latest advancements and be ready to embrace them in your business, working with agencies that have an eye on insights into the future, because sooner, exponentially sooner, the future will be now.

Whether you believe technology is the creator of all things good or all things evil, there is no doubt it will change our landscape forever. From our formative steps into the digital world to the leaps and bounds of the future, the force will be with you.




Doubting The AI Mystics: Dramatic Predictions About AI Obscure Its Concrete Benefits – Forbes

Posted: December 9, 2019 at 7:52 pm


Digital Human Brain Covered with Networks

Artificial intelligence is advancing rapidly. In a few decades machines will achieve superintelligence and become self-improving. Soon after that happens we will launch a thousand ships into space. These probes will land on distant planets, moons, asteroids, and comets. Using AI and terabytes of code, they will then nanoassemble local particles into living organisms. Each probe will, in fact, contain the information needed to create an entire ecosystem. Thanks to AI and advanced biotechnology, the species in each place will be tailored to their particular plot of rock. People will thrive in low temperatures, dim light, high radiation, and weak gravity. Humanity will become an incredibly elastic concept. In time our distant progeny will build megastructures that surround stars and capture most of their energy. Then the power of entire galaxies will be harnessed. Then life and AI (long a common entity by this point) will construct a galaxy-sized computer. It will take a mind that large about a hundred-thousand years to have a thought. But those thoughts will pierce the veil of reality. They will grasp things as they really are. All will be one. This is our destiny.

Then again, maybe not.

There are, of course, innumerable reasons to reject this fantastic tale out of hand. Here's a quick and dirty one, built around Copernicus's discovery that we are not the center of the universe. Most times, places, people, and things are average. But if sentient beings from Earth are destined to spend eons multiplying and spreading across the heavens, then those of us alive today are special: we are among the very few of our kind to live in our cosmic infancy, confined in our planetary cradle. Because we probably are not special, we probably are not at an extreme tip of the human timeline; we're likely somewhere in the broad middle. Perhaps a hundred billion modern humans have existed, across a span of around 50,000 years. To claim in the teeth of these figures that our species is on the cusp of spending millions of years spreading trillions of individuals across this galaxy and others, you must engage in some wishful thinking. You must embrace the notion that we today are, in a sense, back at the center of the universe.
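The force of this argument is easy to see in numbers (the spacefaring total below is a hypothetical chosen for illustration, not a figure from the article):

```python
# Copernican sanity check: if humanity really goes on to spread trillions of
# individuals across the galaxy, what fraction of all humans ever born would
# we early ones be?
humans_so_far = 100e9           # ~a hundred billion modern humans to date
humans_if_spacefaring = 10e12   # hypothetical: ten trillion in a long future

fraction_early = humans_so_far / humans_if_spacefaring
print(fraction_early)           # we would sit in the first 1% of all humans
```

A randomly chosen human from that future would almost never find itself in our era, which is why the author calls the spacefaring scenario a claim that we are "back at the center of the universe."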

It is in any case more fashionable to speculate about imminent catastrophes. Technology again looms large. In the "gray goo" scenario, runaway self-replicating nanobots consume all of the Earth's biomass. Thinking along similar lines, philosopher Nick Bostrom imagines an AI-enhanced paperclip machine that, ruthlessly following its prime directive to make paperclips, liquidates mankind and converts the planet into a giant paperclip mill. Elon Musk, when he discusses this hypothetical, replaces paperclips with strawberries, so that he can worry about strawberry fields forever. What Bostrom and Musk are driving at is the fear that an advanced AI will not share our values. We might accidentally give it a bad aim (e.g., paperclips at all costs). Or it might start setting its own aims. As Stephen Hawking noted shortly before his death, a machine that sees your intelligence the way you see a snail's might decide it has no need for you. Instead of using AI to colonize distant planets, we will use it to destroy ourselves.

When someone mentions AI these days, she is usually referring to deep neural networks. Such networks are far from the only form of AI, but they have been the source of most of the recent successes in the field. A deep neural network can recognize a complex pattern without relying on a large body of pre-set rules. It does this with algorithms that loosely mimic how a human brain tunes neural pathways.

The neurons, or units, in a deep neural network are layered. The first layer is an input layer that breaks incoming data into pieces. In a network that looks at black-and-white images, for instance, each of the first layer's units might link to a single pixel. Each input unit in this network will translate its pixel's grayscale brightness into a number. It might turn a white pixel into zero, a black pixel into one, and a gray pixel into some fraction in between. These numbers then pass to the next layer of units. Each of the units there generates a weighted sum of the values coming in from several of the previous layer's units. The next layer does the same thing to that second layer, and so on through many more layers. The deeper the layer, the more pixels accounted for in each weighted sum.

An early-layer unit will produce a high weighted sum (it will "fire," like a neuron does) for a pattern as simple as a black pixel above a white pixel. A middle-layer unit will fire only when given a more complex pattern, like a line or a curve. An end-layer unit will fire only when the pattern (or, rather, the weighted sums of many other weighted sums) presented to it resembles a chair or a bonfire or a giraffe. At the end of the network is an output layer. If one of the units in this layer reliably fires only when the network has been fed an image with a giraffe in it, the network can be said to recognize giraffes.
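The forward pass just described can be sketched in a few lines of Python with NumPy. The weights here are random, purely for illustration; in a real network they would be learned, and the layer sizes and category names are invented:

```python
import numpy as np

def forward(pixels, layers):
    """Run a grayscale image (values in [0, 1]) through stacked weighted-sum layers."""
    activation = pixels                        # input layer: one number per pixel
    for weights, bias in layers:
        # Each unit takes a weighted sum of the previous layer's outputs;
        # the nonlinearity decides whether the unit "fires".
        activation = np.maximum(0.0, weights @ activation + bias)
    return activation                          # output layer: one unit per category

rng = np.random.default_rng(1)
image = rng.random(64)                                 # a flattened 8x8 grayscale image
layers = [(rng.normal(size=(32, 64)), np.zeros(32)),   # early layer: simple patterns
          (rng.normal(size=(16, 32)), np.zeros(16)),   # middle layer: lines, curves
          (rng.normal(size=(3, 16)),  np.zeros(3))]    # output: e.g. chair/bonfire/giraffe
scores = forward(image, layers)
print(scores.shape)                            # one score per candidate object
```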

A deep neural network is not born recognizing objects. The network just described would have to learn from pre-labeled examples. At first the network would produce random outputs. Each time it did, however, the correct answer for the labeled image would be run backward through the network. An algorithm would be used, in other words, to move the network's unit-weighting functions closer to what they would need to be to recognize a given object. The more samples a network is fed, the more finely tuned and accurate it becomes.
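The training loop can likewise be sketched. The toy below adjusts a single layer of unit weights by gradient descent on a deliberately trivial labeled task (invented for illustration), standing in for full backpropagation through many layers:

```python
import numpy as np

rng = np.random.default_rng(2)

# Labeled examples: 16-pixel "images", labeled 1 when mean brightness > 0.5.
# A trivial stand-in for a real labeled dataset such as giraffe photos.
X = rng.random((200, 16))
y = (X.mean(axis=1) > 0.5).astype(float)

w, b = np.zeros(16), 0.0                  # untrained weights: random-looking outputs
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))    # current (initially useless) predictions
    grad_w = X.T @ (p - y) / len(y)       # error signal run backward to the weights
    grad_b = (p - y).mean()
    w -= 1.0 * grad_w                     # nudge weights toward the correct answers
    b -= 1.0 * grad_b

predictions = (1 / (1 + np.exp(-(X @ w + b)))) > 0.5
accuracy = (predictions == y).mean()
```

Each pass over the samples moves the weighting function a little closer to what it needs to be, which is exactly the fine-tuning the paragraph above describes.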

Some deep neural networks do not need spoon-fed examples. Say you want a program equipped with such networks to play chess. Give it the rules of the game, instruct it to seek points, and tell it that a checkmate is worth a hundred points. Then have it use a Monte Carlo method to randomly simulate games. Through trial and error, the program will stumble on moves that lead to a checkmate, and then on moves that lead to moves that lead to a checkmate, and so on. Over time the program will assign value to moves that simply tend to lead toward a checkmate. It will do this by constantly adjusting its networks' unit-weighting functions; it will just use points instead of correctly labeled images. Once the networks are trained, the program can win discrete contests in much the way it learned to play in the first place. At each of its turns, the program will simulate games for each potential move it is considering. It will then choose the move that does best in the simulations. Thanks to constant fine-tuning, even these in-game simulations will get better and better.
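The simulate-then-pick step can be illustrated without any neural network at all. The sketch below applies pure Monte Carlo move selection to a toy take-away game (a stand-in for chess, invented for illustration): simulate random games after each candidate move and choose the move that wins most often.

```python
import random

def playout(heap, our_turn):
    """Finish the game with uniformly random moves.

    Rules of the toy game: players alternately take 1-3 stones from a heap;
    whoever takes the last stone wins. Returns True if we take the last stone.
    """
    while heap > 0:
        heap -= random.randint(1, min(3, heap))
        if heap == 0:
            return our_turn          # the side that just moved took the last stone
        our_turn = not our_turn

def choose_move(heap, simulations=2000):
    """Simulate random games after each candidate move; pick the best scorer."""
    scores = {}
    for move in range(1, min(3, heap) + 1):
        if heap - move == 0:
            scores[move] = simulations          # taking the last stone wins outright
        else:
            scores[move] = sum(playout(heap - move, our_turn=False)
                               for _ in range(simulations))
    return max(scores, key=scores.get)

random.seed(0)
# From 5 stones, taking 1 leaves the opponent a losing heap of 4 (any reply
# lets us take the rest), so the simulations should favor move 1.
print(choose_move(5))
```

Even with completely random playouts, the statistics single out the strong move; AlphaZero-style programs improve on this by letting trained networks steer the simulations instead of playing them out at random.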

There is a chess program that operates more or less this way. It is called AlphaZero, and at present it is the best chess player on the planet. Unlike other chess supercomputers, it has never seen a game between humans. It learned to play by spending just a few hours simulating games against itself. In 2017 it played a hundred games against Stockfish 8, one of the best chess programs to that point. Stockfish 8 examined 70 million moves per second; AlphaZero examined only 80,000. AlphaZero won 28 games, drew 72, and lost zero. It sometimes made baffling moves (to humans) that turned out to be masterstrokes. AlphaZero is not just a chess genius; it is an alien chess genius.

AlphaZero is at the cutting edge of AI, and it is very impressive. But its success is not a sign that AI will take us to the stars, or enslave us, any time soon. In Artificial Intelligence: A Guide for Thinking Humans, computer scientist Melanie Mitchell makes the case for AI sobriety. AI currently excels, she notes, only when there are clear rules, straightforward reward functions (for example, rewards for points gained or for winning), and relatively few possible actions (moves). Take IBM's Watson program. In 2011 it crushed the best human competitors on the quiz show Jeopardy!, leading IBM executives to declare that its successors would soon be making legal arguments and medical diagnoses. It has not worked out that way. Real-world questions and answers in real-world domains, Mitchell explains, have neither the simple short structure of Jeopardy! clues nor their well-defined responses.

Even in the narrow domains that most suit it, AI is brittle. A program that is a chess grandmaster cannot compete on a board with a slightly different configuration of squares or pieces. Unlike humans, Mitchell observes, "none of these programs can transfer anything it has learned about one game to help it learn a different game." Because the programs cannot generalize or abstract from what they know, they can function only within the exact parameters in which they have been trained.

A related point is that current AI does not understand even basic aspects of how the world works. Consider this sentence: "The city council refused the demonstrators a permit because they feared violence." Who feared violence, the city council or the demonstrators? Using what she knows about bureaucrats, protestors, and riots, a human can spot at once that the fear resides in the city council. When AI-driven language-processing programs are asked this kind of question, however, their responses are little better than random guesses. "When AI can't determine what 'it' refers to in a sentence," Mitchell writes, quoting computer scientist Oren Etzioni, "it's hard to believe that it will take over the world."

And it is not accurate to say, as many journalists do, that a program like AlphaZero learns by itself. Humans must painstakingly decide how many layers a network should have, how much incoming data should link to each input unit, how fast data should aggregate as it passes through the layers, how much each unit's weighting function should change in response to feedback, and much else. These settings and designs, adds Mitchell, must typically be decided anew for each task a network is trained on. It is hard to see nefarious unsupervised AI on the horizon.

The doom camp (AI will murder us) and the rapture camp (it will take us into the mind of God) share a common premise. Both groups extrapolate from past trends of exponential progress. Moore's law, which is not really a law but an observation, says that the number of transistors we can fit on a computer chip doubles every two years or so. This enables computer processing speeds to increase at an exponential rate. The futurist Ray Kurzweil asserts that this trend of accelerating improvement stretches back to the emergence of life, the appearance of eukaryotic cells, and the Cambrian explosion. Looking forward, Kurzweil sees an AI singularity, the rise of self-improving machine superintelligence, on the trendline around 2045.

The political scientist Philip Tetlock has looked closely at whether experts are any good at predicting the future. The short answer is that they're terrible at it. But they're not hopeless. Borrowing an analogy from Isaiah Berlin, Tetlock divides thinkers into hedgehogs and foxes. A hedgehog knows one big thing, whereas a fox knows many small things. A hedgehog tries to fit what he sees into a sweeping theory. A fox is skeptical of such theories. He looks for facts that will show he is wrong. A hedgehog gives answers and says "moreover" a lot. A fox asks questions and says "however" a lot. Tetlock has found that foxes are better forecasters than hedgehogs. The more distant the subject of the prediction, the more the hedgehog's performance lags.

Using a theory of exponential growth to predict an impending AI singularity is classic hedgehog thinking. It is a bit like basing a prediction about human extinction on nothing more than the Copernican principle. Kurzweil's vision of the future is clever and provocative, but it is also hollow. It is almost as if huge obstacles to general AI will soon be overcome because the theory says so, rather than because the scientists on the ground will perform the necessary miracles. Gordon Moore himself acknowledges that his law will not hold much longer. (Quantum computers might pick up the baton. We'll see.) Regardless, increased processing capacity might be just a small piece of what's needed for the next big leaps in machine thinking.

When at Thanksgiving dinner you see Aunt Jane sigh after Uncle Bob tells a blue joke, you can form an understanding of what Jane thinks about what Bob thinks. For that matter, you get the joke, and you can imagine analogous jokes that would also annoy Jane. You can infer that your cousin Mary, who normally likes such jokes but is not laughing now, is probably still angry at Bob for spilling the gravy earlier. You know that although you can't see Bob's feet, they exist, under the table. No deep neural network can do any of this, and it's not at all clear that more layers or faster chips or larger training sets will close the gap. We probably need further advances that we have only just begun to contemplate. Enabling machines to form humanlike conceptual abstractions, Mitchell declares, is still an almost completely unsolved problem.

There has been some concern lately about the demise of the corporate laboratory. Mitchell gives the impression that, at least in the technology sector, the corporate basic-research division is alive and well. Over the course of her narrative, labs at Google, Microsoft, Facebook, and Uber make major breakthroughs in computer image recognition, decision making, and translation. In 2013, for example, researchers at Google trained a network to create vectors among a vast array of words. A vector set of this sort enables a language-processing program to define and use a word based on the other words with which it tends to appear. The researchers put their vector set online for public use. Google is in some ways the protagonist of Mitchell's story. It is now an applied AI company, in Mitchell's words, that has placed machine thinking at the center of diverse products, services, and blue-sky research.
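
As a rough, hypothetical sketch of how such a vector set gets used: words become points in a space, and words that tend to appear in similar contexts end up close together. The three-dimensional vectors below are made up for the example; real embeddings such as the ones Google released are learned from text and have hundreds of dimensions.

```python
import math

# Made-up 3-dimensional "word vectors" standing in for learned,
# high-dimensional embeddings. Words used in similar contexts get
# similar vectors, so related words score high similarity.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    """Cosine similarity: near 1.0 for same direction, near 0.0 for unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# "king" should sit closer to "queen" than to "apple".
print(cosine(vectors["king"], vectors["queen"]) >
      cosine(vectors["king"], vectors["apple"]))   # prints True
```

A language-processing program can then treat "nearby" words as related in meaning without any hand-written dictionary.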

Google has hired Ray Kurzweil, a move that might be taken as an implicit endorsement of his views. It is pleasing to think that many Google engineers earnestly want to bring on the singularity. The grand theory may be illusory, but the treasures produced in pursuit of it will be real.

Go here to see the original:

Doubting The AI Mystics: Dramatic Predictions About AI Obscure Its Concrete Benefits - Forbes

Written by admin

December 9th, 2019 at 7:52 pm

Posted in Alphazero

Artificial intelligence: How to measure the I in AI – TechTalks

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.

Last week, Lee Se-dol, the South Korean Go champion who lost in a historic matchup against DeepMind's artificial intelligence algorithm AlphaGo in 2016, declared his retirement from professional play.

"With the debut of AI in Go games, I've realized that I'm not at the top even if I become the number one through frantic efforts," Lee told the Yonhap news agency. "Even if I become the number one, there is an entity that cannot be defeated."

Predictably, Se-dol's comments quickly made the rounds across prominent tech publications, some of them running sensational headlines with AI-dominance themes.

Since the dawn of AI, games have been one of the main benchmarks to evaluate the efficiency of algorithms. And thanks to advances in deep learning and reinforcement learning, AI researchers are creating programs that can master very complicated games and beat the most seasoned players across the world. Uninformed analysts have been picking up on these successes to suggest that AI is becoming smarter than humans.

But at the same time, contemporary AI fails miserably at some of the most basic tasks that every human can perform.

This raises the question: does mastering a game prove anything? And if not, how can you measure the level of intelligence of an AI system?

Take the following example. In the picture below, you're presented with three problems and their solutions. There's also a fourth task that hasn't been solved. Can you guess the solution?

You're probably going to find it very easy. You'll also be able to solve different variations of the same problem with multiple walls, multiple lines, and lines of different colors, just by seeing these three examples. But currently there's no AI system, including those being developed at the most prestigious research labs, that can learn to solve such a problem from so few examples.

The above example is from "The Measure of Intelligence," a paper by François Chollet, the creator of the Keras deep-learning library. Chollet published the paper a few weeks before Lee Se-dol declared his retirement. In it, he provides many important guidelines on understanding and measuring intelligence.

Ironically, Chollet's paper did not receive a fraction of the attention it deserves. Unfortunately, the media is more interested in covering exciting AI news that gets more clicks. The 62-page paper contains a lot of invaluable information and is a must-read for anyone who wants to understand the state of AI beyond the hype and sensationalism.

But I will do my best to summarize the key recommendations Chollet makes on measuring AI systems and comparing their performance to that of human intelligence.

"The contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks, such as board games and video games," Chollet writes, adding that solely measuring skill at any given task falls short of measuring intelligence.

In fact, the obsession with optimizing AI algorithms for specific tasks has entrenched the community in narrow AI. As a result, work in AI has drifted away from the original vision of developing thinking machines that possess intelligence comparable to that of humans.

"Although we are able to engineer systems that perform extremely well on specific tasks, they still have stark limitations, being brittle, data-hungry, unable to make sense of situations that deviate slightly from their training data or the assumptions of their creators, and unable to repurpose themselves to deal with novel tasks without significant involvement from human researchers," Chollet notes in the paper.

Chollet's observations are in line with those made by other scientists on the limitations and challenges of deep learning systems. These limitations manifest themselves in many ways.

Here's an example: OpenAI's Dota-playing neural networks needed the equivalent of 45,000 years of gameplay to reach a professional level. The AI is also limited in the number of characters it can play, and the slightest change to the game's rules results in a sudden drop in its performance.

The same can be seen in other fields, such as self-driving cars. Despite millions of hours of road experience, the AI algorithms that power autonomous vehicles can make stupid mistakes, such as crashing into lane dividers or parked firetrucks.

One of the key challenges the AI community has struggled with is defining intelligence. Scientists have debated for decades over a clear definition that would allow us to evaluate AI systems and determine what is intelligent and what is not.

Chollet borrows the definition given by DeepMind cofounder Shane Legg and AI scientist Marcus Hutter: "Intelligence measures an agent's ability to achieve goals in a wide range of environments."

Key here are "achieve goals" and "wide range of environments." Most current AI systems are pretty good at the first part, which is to achieve very specific goals, but bad at doing so in a wide range of environments. For instance, an AI system that can detect and classify objects in images will not be able to perform some other related task, such as drawing images of objects.

Chollet then examines the two dominant approaches to creating intelligent systems: symbolic AI and machine learning.

Early generations of AI research focused on symbolic AI, which involves creating an explicit representation of knowledge and behavior in computer programs. This approach requires human engineers to meticulously write the rules that define the behavior of an AI agent.

"It was then widely accepted within the AI community that the problem of intelligence would be solved if only we could encode human skills into formal rules and encode human knowledge into explicit databases," Chollet observes.

But rather than being intelligent by themselves, these symbolic AI systems manifest the intelligence of their creators in creating complicated programs that can solve specific tasks.

The second approach, machine learning, is based on providing the AI model with data from the problem space and letting it develop its own behavior. The most successful machine learning structure so far is the artificial neural network, a complex mathematical function that can create complex mappings between inputs and outputs.

For instance, instead of manually coding the rules for detecting cancer in x-ray slides, you feed a neural network many slides annotated with their outcomes, a process called training. The AI examines the data and develops a mathematical model that represents the common traits of cancer patterns. It can then process new slides and output how likely it is that a patient has cancer.
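
As a hedged miniature of that training loop (not a real diagnostic system), the sketch below fits a single logistic "neuron" to made-up two-feature examples. The data, learning rate, and epoch count are all invented for the illustration, but the principle is the same as in a deep network: repeatedly adjust weights to reduce error on labeled examples.

```python
import math

# A miniature of supervised training: instead of x-ray slides we use
# made-up two-feature examples, and instead of a deep network a single
# logistic "neuron". All numbers here are invented for illustration.
data = [([0.1, 0.2], 0), ([0.2, 0.1], 0),   # label 0 examples
        ([0.9, 0.8], 1), ([0.8, 0.9], 1)]   # label 1 examples

w, b = [0.0, 0.0], 0.0                      # weights start untrained
lr = 1.0                                    # learning rate

def predict(x):
    """Return the model's probability that x belongs to class 1."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Training: repeatedly nudge the weights to shrink the prediction error.
for _ in range(1000):
    for x, y in data:
        err = predict(x) - y                # gradient of log loss w.r.t. z
        for i in range(len(w)):
            w[i] -= lr * err * x[i]
        b -= lr * err

print([round(predict(x)) for x, _ in data])  # prints [0, 0, 1, 1]
```

After training, the model classifies the examples correctly without anyone having written an explicit rule, which is the core contrast with symbolic AI.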

Advances in neural networks and deep learning have enabled AI scientists to tackle many tasks that were previously very difficult or impossible with classic AI, such as natural language processing, computer vision and speech recognition.

Neural network-based models, also known as connectionist AI, are named after their biological counterparts. They are based on the idea that the mind is a blank slate (tabula rasa) that turns experience (data) into behavior. Therefore, the general trend in deep learning has become to solve problems by creating bigger neural networks and providing them with more training data to improve their accuracy.

Chollet rejects both approaches because neither has been able to create generalized AI that is flexible and fluid like the human mind.

"We see the world through the lens of the tools we are most familiar with. Today, it is increasingly apparent that both of these views of the nature of human intelligence, either a collection of special-purpose programs or a general-purpose tabula rasa, are likely incorrect," he writes.

Truly intelligent systems should be able to develop higher-level skills that can span many tasks. For instance, an AI program that masters Quake 3 should be able to play other first-person shooter games at a decent level. Unfortunately, the best that current AI systems achieve is "local generalization," a limited maneuvering room within their own narrow domain.

In his paper, Chollet argues that the "generalization" or "generalization power" of any AI system is its ability to handle situations (or tasks) that differ from previously encountered situations.

Interestingly, this is a missing component of both symbolic and connectionist AI. The former requires engineers to explicitly define its behavioral boundary and the latter requires examples that outline its problem-solving domain.

Chollet goes further still and speaks of developer-aware generalization: the ability of an AI system to handle situations that neither the system nor its developer has encountered before.

This is the kind of flexibility you would expect from a robo-butler that could perform various chores inside a home without explicit instructions or training data on them. An example is Steve Wozniak's famous coffee test, in which a robot would enter a random house and make coffee without knowing in advance the layout of the home or the appliances it contains.

Elsewhere in the paper, Chollet makes it clear that AI systems that cheat their way toward their goal by leveraging priors (rules) and experience (data) are not intelligent. For instance, consider Stockfish, the best rule-based chess-playing program. Stockfish, an open-source project, is the result of contributions from thousands of developers who have created and fine-tuned tens of thousands of rules. A neural network-based example is AlphaZero, the multi-purpose AI that has conquered several board games by playing them millions of times against itself.

Both systems have been optimized to perform a specific task by making use of resources that are beyond the capacity of the human mind. The brightest human can't memorize tens of thousands of chess rules. Likewise, no human can play millions of chess games in a lifetime.

"Solving any given task with beyond-human level performance by leveraging either unlimited priors or unlimited data does not bring us any closer to broad AI or general AI, whether the task is chess, football, or any e-sport," Chollet notes.

This is why it's wrong to compare Deep Blue, AlphaZero, AlphaStar, or any other game-playing AI with human intelligence.

Likewise, Aristo, the program that can pass an eighth-grade science test, does not possess the same knowledge as a middle-school student. It owes its supposed scientific abilities to the huge corpora of knowledge it was trained on, not to an understanding of the world of science.

(Note: Some AI researchers, such as computer scientist Rich Sutton, believe that the true direction for artificial intelligence research should be methods that can scale with the availability of data and compute resources.)

In the paper, Chollet presents the Abstraction and Reasoning Corpus (ARC), a dataset intended to evaluate the efficiency of AI systems and compare their performance with that of human intelligence. ARC is a set of problem-solving tasks tailored for both AI and humans.

One of the key ideas behind ARC is to level the playing field between humans and AI. It is designed so that humans can't take advantage of their vast background knowledge of the world to outmaneuver the AI. For instance, it doesn't involve language-related problems, with which AI systems have historically struggled.

On the other hand, it's also designed to prevent the AI (and its developers) from cheating their way to success. The system does not provide access to vast amounts of training data; as in the example shown at the beginning of this article, each concept is presented with a handful of examples.

The AI developers must build a system that can handle various concepts such as object cohesion, object persistence, and object influence. The AI system must also learn to perform tasks such as scaling, drawing, connecting points, rotating and translating.

Also, the test dataset, the set of problems meant to evaluate the intelligence of the developed system, is designed to prevent developers from solving the tasks in advance and hard-coding the solutions into their programs. Optimizing for evaluation sets is a popular cheating method in data science and machine learning competitions.
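
The contrast between hard-coding answers and learning the underlying rule can be sketched in a few lines of Python. Everything here is invented for illustration: a toy task (learn y = 2x from examples), a "memorizer" that behaves like a system optimized for a known evaluation set, and a "rule learner" that infers the rule and so handles held-out cases.

```python
# Two toy "solvers" for an invented task. The memorizer hard-codes the
# training answers, like a program optimized for a known evaluation
# set; the rule learner infers the underlying rule (here, doubling the
# input) and so also handles tasks it has never seen.
train = [(1, 2), (2, 4), (3, 6)]
test = [(10, 20), (11, 22)]     # held out, never shown before evaluation

memorized = dict(train)

def memorizer(x):
    return memorized.get(x)     # None for anything outside the training set

def rule_learner(x):
    # Fit the simplest proportional rule y = k * x to the training pairs.
    k = train[0][1] / train[0][0]
    return k * x

print([memorizer(x) == y for x, y in test])      # prints [False, False]
print([rule_learner(x) == y for x, y in test])   # prints [True, True]
```

Keeping the test problems hidden, as ARC does, makes the memorizer's strategy useless and rewards only the second kind of solver.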

According to Chollet, ARC "only assesses a general form of fluid intelligence, with a focus on reasoning and abstraction." This means that the test favors program synthesis, the subfield of AI that involves generating programs that satisfy high-level specifications. This approach is in contrast with current trends in AI, which are inclined toward creating programs that are optimized for a limited set of tasks (e.g., playing a single game).

In his experiments with ARC, Chollet has found that humans can fully solve ARC tests, but current AI systems struggle with the same tasks. "To the best of our knowledge, ARC does not appear to be approachable by any existing machine learning technique (including Deep Learning), due to its focus on broad generalization and few-shot learning," Chollet notes.

While ARC is a work in progress, it can become a promising benchmark for testing the level of progress toward human-level AI. "We posit that the existence of a human-level ARC solver would represent the ability to program an AI from demonstrations alone (only requiring a handful of demonstrations to specify a complex task) to do a wide range of human-relatable tasks of a kind that would normally require human-level, human-like fluid intelligence," Chollet observes.

Original post:

Artificial intelligence: How to measure the I in AI - TechTalks

This 90’s Japanese commercial for Street Fighter Alpha 2 doesn’t make a ton of sense, but it somehow still makes us want to play some Alpha -…

The world of gaming is celebrating the 25th anniversary of Sony's PlayStation and so we've been seeing a ton of older content from decades past surface on social media.

When we came across this old Japanese Street Fighter Alpha (Zero in Japan) 2 ad (thank you to Goegoezzz for posting), it brought a smile to our faces, and we figured it'd likely do the same for you.

The television spot (which we assume is probably from around 1996, the year Alpha 2 came out) sees a hurried Sakura charging through the hustle and bustle of real life city streets.

She encounters a handful of her fellow Street Fighters along the way, bumping into a levitating Dhalsim, passing an angry Chun-Li in the subway, and cutting off M. Bison in traffic, who is apparently an evil dictator by night but the world's creepiest Lyft driver by day.

Sakura eventually stops, turns to the camera and states, "Ryu, I want to meet you once more." We then get about four and a half seconds of gameplay footage before cutting to the title screen. Perhaps the message is, "rush through your busy day so you can get home and play fighting games," or maybe it's, "we face off against metaphorical rivals at every turn in our daily lives."

Whatever the intended meaning may have been, the good news is that we know not to worry too much about Street Fighter storylines and just enjoy the battle. Check out the nostalgic TV spot right here and share any fond SFA2 memories you have in the comments.

Read more:

This 90's Japanese commercial for Street Fighter Alpha 2 doesn't make a ton of sense, but it somehow still makes us want to play some Alpha -...
