
Archive for the ‘Alphago’ Category

AI on steroids: Much bigger neural nets to come with new hardware, say Bengio, Hinton, and LeCun – ZDNet

Posted: February 10, 2020 at 9:47 pm


without comments

Geoffrey Hinton, center, talks about what future deep learning neural nets may look like, flanked by Yann LeCun of Facebook, right, and Yoshua Bengio of Montreal's MILA institute for AI, during a press conference at the 34th annual AAAI conference on artificial intelligence.

The rise of dedicated chips and systems for artificial intelligence will "make possible a lot of stuff that's not possible now," said Geoffrey Hinton, the University of Toronto professor who is one of the godfathers of the "deep learning" school of artificial intelligence, during a press conference on Monday.

Hinton joined his compatriots, Yann LeCun of Facebook and Yoshua Bengio of Canada's MILA institute, fellow deep learning pioneers, in an upstairs meeting room of the Hilton Hotel on the sidelines of the 34th annual conference on AI by the Association for the Advancement of Artificial Intelligence. They spoke for 45 minutes to a small group of reporters on a variety of topics, including AI ethics and what "common sense" might mean in AI. The night before, all three had presented their latest research directions.

Regarding hardware, Hinton went into an extended explanation of the technical constraints on today's neural networks. The weights of a neural network have to be used hundreds of times over, he pointed out, with frequent, temporary updates along the way. Because graphics processing units (GPUs) have limited on-chip memory for weights, they must constantly store and retrieve them from external DRAM, and that, he said, is a limiting factor.

Much larger on-chip memory capacity "will help with things like Transformer, for soft attention," said Hinton, referring to the wildly popular autoregressive neural network developed at Google in 2017. Transformers, which use "key/value" pairs to store and retrieve from memory, could be much larger with a chip that has substantial embedded memory, he said.
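For readers who want to see what that "key/value" soft attention looks like in code, here is a minimal sketch of Transformer-style scaled dot-product attention in NumPy; the shapes and values are illustrative, not from any of the speakers' systems. Every key and value row has to sit in memory at once, which is why large on-chip memory helps.

```python
import numpy as np

def soft_attention(Q, K, V):
    """Q: (n_q, d), K: (n_kv, d), V: (n_kv, d_v). Returns one mixture of values per query."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])            # how well each query matches each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the keys
    return weights @ V                                 # weighted retrieval of the values

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))                        # 4 queries
K = rng.standard_normal((16, 8))                       # 16 key/value pairs held in memory
V = rng.standard_normal((16, 8))
print(soft_attention(Q, K, V).shape)                   # (4, 8)
```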

Also: Deep learning godfathers Bengio, Hinton, and LeCun say the field can fix its flaws

LeCun and Bengio agreed, with LeCun noting that GPUs "force us to do batching," where data samples are combined in groups as they pass through a neural network, "which isn't efficient." Another problem is that GPUs assume neural networks are built out of matrix products, which forces constraints on the kind of transformations scientists can build into such networks.

"Also sparse computation, which isn't convenient to run on GPUs ...," said Bengio, referring to instances where most of the data, such as pixel values, may be empty, with only a few significant bits to work on.
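A rough sketch of both points, with made-up shapes: batching turns many samples into one dense matrix product, which suits GPUs, while a genuinely sparse batch mostly multiplies zeros unless it is stored in a sparse format.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512))                    # a layer's weight matrix

# Batching: 64 samples stacked so a single dense matmul does all the work at once.
batch = rng.standard_normal((64, 512))
dense_out = batch @ W.T                                # shape (64, 256)

# Sparse computation: the same batch with ~99% zeros; a sparse format skips the zeros.
mask = rng.random((64, 512)) < 0.01
sparse_batch = sparse.csr_matrix(batch * mask)
sparse_out = sparse_batch @ W.T
print(sparse_batch.nnz, "nonzero entries instead of", batch.size)
```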

LeCun predicted the new hardware would lead to "much bigger neural nets with sparse activations," and he and Bengio both emphasized the interest in doing the same amount of work with less energy. LeCun defended AI against claims it is an energy hog, however. "This idea that AI is eating the atmosphere, it's just wrong," he said. "I mean, just compare it to something like raising cows," he continued. "The energy consumed by Facebook annually for each Facebook user is 1,500 watt-hours," he said. Not a lot, in his view, compared to other energy-hogging technologies.

The biggest problem with hardware, mused LeCun, is that on the training side of things, it is a duopoly between Nvidia, for GPUs, and Google's Tensor Processing Unit (TPU), repeating a point he had made last year at the International Solid-State Circuits Conference.

Even more interesting than hardware for training, LeCun said, is hardware design for inference. "You now want to run on an augmented reality device, say, and you need a chip that consumes milliwatts of power and runs for an entire day on a battery." LeCun reiterated a statement made a year ago that Facebook is working on various internal hardware projects for AI, including for inference, but he declined to go into details.

Also: Facebook's Yann LeCun says 'internal activity' proceeds on AI chips

Today's neural networks are tiny, Hinton noted, with really big ones having perhaps just ten billion parameters. Progress on hardware might advance AI just by making much bigger nets with an order of magnitude more weights. "There are one trillion synapses in a cubic centimeter of the brain," he noted. "If there is such a thing as General AI, it would probably require one trillion synapses."

As for what "common sense" might look like in a machine, nobody really knows, Bengio maintained. Hinton complained that people keep moving the goalposts, such as with natural language models. "We finally did it, and then they said it's not really understanding, and can you figure out the pronoun references in the Winograd Schema Challenge," a question-answering task used as a benchmark for language understanding. "Now we are doing pretty well at that, and they want to find something else" to judge machine learning, he said. "It's like trying to argue with a religious person, there's no way you can win."

But, one reporter asked, what's concerning to the public is not so much the lack of evidence of human understanding, but evidence that machines are operating in alien ways, such as the "adversarial examples." Hinton replied that adversarial examples show the behavior of classifiers is not quite right yet. "Although we are able to classify things correctly, the networks are doing it absolutely for the wrong reasons," he said. "Adversarial examples show us that machines are doing things in ways that are different from us."
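One standard way to produce the adversarial examples Hinton mentions is the fast gradient sign method (FGSM). The sketch below uses a toy, untrained PyTorch classifier and random data purely for illustration; the point is that a perturbation too small for a person to notice can change the network's prediction.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # stand-in image classifier
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28, requires_grad=True)          # stand-in input image
label = torch.tensor([3])                                      # its supposed class

loss = loss_fn(model(image), label)
loss.backward()                                                # gradient of the loss w.r.t. the pixels

epsilon = 0.05                                                 # perturbation budget
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```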

LeCun pointed out animals can also be fooled just like machines. "You can design a test so it would be right for a human, but it wouldn't work for this other creature," he mused. Hinton concurred, observing "house cats have this same limitation."

Also: LeCun, Hinton, Bengio: AI conspirators awarded prestigious Turing prize

"You have a cat lying on a staircase, and if you bounce a soccer ball down the stairs toward the cat, the cat will just sort of watch the ball bounce until it hits the cat in the face."

Another thing that could prove a giant advance for AI, all three agreed, is robotics. "We are at the beginning of a revolution," said Hinton. "It's going to be a big deal" to many applications such as vision. Rather than analyzing the entire contents of a static image or video frame, a robot creates a new "model of perception," he said.

"You're going to look somewhere, and then look somewhere else, so it now becomes a sequential process that involves acts of attention," he explained.

Hinton suggested that last year's work by OpenAI on manipulating a Rubik's cube was a watershed moment for robotics, or, rather, an "AlphaGo moment," as he put it, referring to DeepMind's Go computer.

LeCun concurred, saying that Facebook is running robotics projects not because Facebook has an extreme interest in robotics, per se, but because robotics is seen as an "important substrate for advances in AI research."

It wasn't all gee-whiz; the three scientists offered skepticism on some points. While most research in deep learning that matters is done out in the open, some companies boast of AI while keeping the details a secret.

"It's hidden because it's making it seem important," said Bengio, when in fact, a lot of work in the depths of companies may not be groundbreaking. "Sometimes companies make it look a lot more sophisticated than it is."

Bengio continued in his role as the most outspoken of the three on societal issues of AI, such as building ethical systems.

When LeCun was asked about the use of facial recognition algorithms, he noted technology can be used for good and bad purposes, and that a lot depends on the democratic institutions of society. But Bengio pushed back slightly, saying, "What Yann is saying is clearly true, but prominent scientists have a responsibility to speak out." LeCun mused that it's not the job of science to "decide for society," prompting Bengio to respond, "I'm not saying decide, I'm saying we should weigh in because governments in some countries are open to that involvement."

Hinton, who frequently punctuates things with a humorous aside, noted toward the end of the gathering his biggest mistake with respect to Nvidia. "I made a big mistake with Nvidia," he said. "In 2009, I told an audience of 1,000 grad students they should go and buy Nvidia GPUs to speed up their neural nets. I called Nvidia and said I just recommended your GPUs to 1,000 researchers, can you give me a free one, and they said no.

"What I should have done, if I was really smart, was take all my savings and put it into Nvidia stock. The stock was at $20 then, now it's, like, $250."

Visit link:

AI on steroids: Much bigger neural nets to come with new hardware, say Bengio, Hinton, and LeCun - ZDNet

Written by admin

February 10th, 2020 at 9:47 pm

Posted in Alphago

Why The Race For AI Dominance Is More Global Than You Think – Forbes

Posted: at 9:47 pm


without comments

Getty

When people hear about the race for Artificial Intelligence (AI) dominance, they often think that the main competition is between the US and China. After all, the US and China have most of the largest and most well-funded AI companies on the planet, and the pace of funding, company growth, and adoption doesn't seem to be slowing anytime soon. However, if you look closely, you'll see that many other countries have a stake in the AI race, and indeed, some countries have AI efforts, funding, technologies, and intellectual property that make them serious contenders in the jostling for AI dominance. In fact, according to a recent report from analyst firm Cognilytica, France, Israel, the United Kingdom, and the United States are all equally strong when it comes to AI, with China, Canada, Germany, Japan, and South Korea equally close in their AI strategic strength. (Disclosure: I'm a principal analyst with Cognilytica.)

The Current Leaders in AI Funding and Dominance: US and China

AI startups are raising more money than ever. AI-focused companies raised $12 billion in 2017 alone, more than doubling venture funding over the previous year. Most of this funding is concentrated in US and Chinese companies, but the source of those funds is much more international. Softbank, based in Japan, has amassed a $100 billion investment fund, with many international investors including Saudi Arabia's sovereign investment fund and other global sources of capital. While US companies have put up significant investment rounds with the power of Silicon Valley's VC funds, China now has the most valuable AI startup, Sensetime, which raised over $1.2 billion, with a rumored additional $1 billion raise on the way.

However, what makes AI as a technology sector different from previous major waves of investment is that AI is seen as a strategic technology by many governments. In 2017 China released a three-step program outlining its goal to become a world leader in AI by 2030. The government aims to make the AI industry worth about $150 billion and is pushing for greater use of AI in a number of areas such as the military and smart cities. Furthermore, the Chinese government has made big bets, including a planned $2.1 billion AI-focused technology research park. And in 2019 the Beijing AI Principles were released by a multistakeholder coalition including the Beijing Academy of Artificial Intelligence (BAAI), Peking University, Tsinghua University, the Institute of Automation and Institute of Computing Technology of the Chinese Academy of Sciences, and an AI industrial league involving firms like Baidu, Alibaba, and Tencent.

In addition, the Chinese technology ecosystem has developed into a powerhouse in its own right. China has many multi-billion-dollar tech giants, including Alibaba, Baidu, Tencent, and Huawei Technologies, each of which is investing heavily in AI. Chinese companies also work more closely with the Chinese government, and laws in China are the most relaxed with regard to customer privacy and the use of AI technologies such as facial recognition on their citizens. China's government has already embraced facial recognition technology and has quickly adopted it in everyday use. In most other countries, such as the US, privacy concerns prevent pervasive use of facial recognition technology, but such concerns or impediments to adoption don't exist in China.

The story of technology company creation and funding in the United States is already well known. Silicon Valley is both a region and shorthand for the entire tech industry, showing how dominant the US has been for the past several decades in technology creation and adoption. Venture capital as an industry was invented and perfected in the US, and the result has been the creation of such enduring tech giants as Amazon, Apple, Facebook, Microsoft, Google, IBM, and thousands of other technology firms big and small. Collectively, trillions of dollars have been invested in these firms by private and public sector investors to create the technology industry as we know it today. Certainly, none of that is going away anytime soon.

In addition, the US has an extremely well-developed and highly skilled labor pool, with academic powerhouses and research institutions that continue to push the boundaries of what is possible with AI. What is notable is that even in the US, the dominance of Silicon Valley as a specific geographic region around the San Francisco Bay is starting to slip. The New York City region has produced many large AI-focused technology firms, and research in the Boston area centered on MIT and Harvard, Pittsburgh with Carnegie Mellon, the Washington, DC metro area with its legions of government-focused contractors and development shops, Southern California's emerging tech ecosystem, Seattle-based Amazon and Microsoft, and many more locations in the US are loosening the hold that Northern California has on the technology industry with respect to AI. And just outside the US, Canadian firms from Toronto, Montreal, and Vancouver are further eroding the dominance of Silicon Valley with respect to AI.

In 2018 the United States issued an Executive Order from the President naming AI the second-highest R&D priority, after the security of the American people, for fiscal year 2020. Additionally, the U.S. Department of Defense announced it will invest up to $2 billion over the next five years toward the advancement of AI. As recently as 2020 the United States launched the American AI Initiative, a strategy aimed at focusing federal government resources. The US federal government also launched AI.gov to make it easier to access all of the governmental AI initiatives currently underway. Once potentially seen as lackluster in comparison to the efforts of China and other countries, the US government has started making AI a real priority in recent years to keep up.

Countries With Significant Stakes in AI

As mentioned above, what makes the AI industry unique is that it is not actually new; rather, it has evolved over decades, even prior to the development of the modern digital computer. As a result, much technology development, investment, and intellectual property exists outside the US and China. Countries that have been involved with AI since the early days are realizing its strategic nature and doubling down on their efforts to retain a stake in the global AI market and maintain their relevance and importance.

Japan

Japan has long been a leader in the AI industry, in particular in its development and adoption of robotics. Japanese firms introduced concepts such as the 3 Ds (Ks) of robotics that we discussed in our research on cobots. Not only is their technology research on par with anywhere in the world, they have the funding to back it up. As mentioned earlier, Japan-based Softbank is an investor powerhouse unrivaled in the venture capital industry.

Japan's government released its Artificial Intelligence Technology Strategy in March 2017. The strategy includes an Industrialization Roadmap and organizes the development of AI into three phases: the utilization and application of AI through 2020, the public's use of AI from 2025-2030, and lastly an ecosystem built by connecting multiplying domains. The country's strategy focuses on R&D for AI, collaboration between industry, government, and academia to advance AI research, and addressing areas related to productivity, welfare, and mobility.

However, it is important to note that while Japan continues to exhibit dominance in robotics and other AI fields, and has the Softbank powerhouse, many of the firms that Softbank invests in are not Japan-based, so much of that investment is not staying focused on Japan's own AI industry. In addition, while technology development is advanced and progressing rapidly, and while Japan is known as a country that embraces technology, many Japanese companies have not been quick to adopt AI, and its use is largely limited to the financial sector and the manufacturing industry. The country is also facing significant demographic pressure: an aging population is causing a shortage in the available workforce. On the one hand, the adoption of AI and robotic technologies is seen as a solution to labor shortages and aging demographics; on the other hand, the lack of workforce will cause strategic problems for the creation of AI-dominant companies.

South Korea

South Korea's government is a significant investor in and strong supporter of local technology development, and AI is certainly no exception. The government recently announced it plans to spend $2 billion by 2022 to strengthen its AI R&D capability, including creating at least six new AI schools by 2020, with plans to educate more than 5,000 new high-quality engineers in Korea in response to a shortage of AI engineers. The government also plans to fund large-scale AI projects related to medicine, national defense, and public safety, as well as starting an AI R&D challenge similar to those developed by the US Defense Advanced Research Projects Agency (DARPA). The government will also invest to support the creation and development of AI startups and businesses. This support includes the creation of an AI-oriented startup incubator for emerging AI businesses and funding for the creation of an AI semiconductor by 2029.

South Korea is home to many large tech companies such as Samsung, LG, and Hyundai, among others, and is known for its automotive, electronics, and semiconductor industries as well as its use of industrial robotics. It also famously hosted the match in which DeepMind's AlphaGo defeated Go world champion Lee Sedol (a Korean native). Clearly, you can't count South Korea out of any race for AI dominance. The only thing significantly lacking is a well-developed venture capital ecosystem and a large number of startups; South Korea's AI efforts are almost entirely concentrated in the activities of the major technology incumbents and the government.

United Kingdom

The United Kingdom is a clear leader in AI, and the government is financially supporting AI initiatives. In November 2017, the UK government announced £68 million of funding for research into AI and robotics projects aimed at improving safety in extreme environments, as well as four new research hubs to help develop robotic technology that improves safety in offshore wind and nuclear energy. It has a goal of investing about $1.3 billion in AI from both public and private funds over the coming years. As part of this plan, Global Brain, a Japan-based venture capital firm, plans to invest about $48 million in AI-focused UK-based tech startups and to open a European headquarters in the United Kingdom. Canadian venture capital firm Chrysalix also plans to open a European headquarters in the UK and to invest over $100 million in UK-based startups that specialize in AI and robotics. The University of Cambridge is installing a $13 million supercomputer and will give UK businesses access to it to help with AI-related projects.

The UK is, of course, also the home of Alan Turing, renowned forefather of computing, an early proponent of AI, and namesake of the Turing Test. The UK can also claim (in not such a great light) to be one of the precipitating factors of the first AI Winter, when the Lighthill Report was released in 1973, leading to significant declines in AI investment. As such, the UK has in the past exerted significant influence, both positive and negative, on worldwide AI spending and adoption. To avoid future problems, the UK is looking to position itself as a world leader in ethical AI standards. The UK sees this as an opportunity to position itself as an AI leader through ethical AI, helping to create standards used by all. It knows it can't compete on AI funding and development with countries like the US and China, but thinks it has a shot by taking an ethical-standards approach and leveraging its early status as a leader in AI development.

France

France's President Emmanuel Macron released a national strategy for artificial intelligence in early 2018. The country announced that over the next five years it will invest more than €1.5 billion in AI-related research and support for emerging startups in a bid to compete with the US, China, and others for AI dominance. The French strategy puts an emphasis on four specific areas of AI related to health, transportation (such as driverless cars), the environment, and defense/security. Some notable AI researchers and data scientists were educated in France, such as Facebook's head of AI, Yann LeCun, and France wants to keep that talent at home rather than losing it to overseas companies.

Many companies such as Samsung, Fujitsu, DeepMind, IBM, and Microsoft have announced plans to open offices in France for AI research. The French administration also wants to share new data sets with the public, making it easy to access those data sets and build AI services on top of them. The caveat to receiving public funds is that research projects or companies financed with public money will have to share their data. Many European Union (EU) officials have expressed dismay at the way Facebook, Google, Microsoft, Amazon, and others have hoarded user data, and Macron and his administration are concerned about the black box of AI data and decision-making. France is also focused on addressing the ethical concerns around AI and on trying to create unbiased data sets, which is part of the reason for its push for open algorithms and data sets. While France's efforts are significant, they pale in comparison with the total money and resources other nations are putting into the industry.

Germany

Germany is an industrial powerhouse, has long been known for great engineering capabilities, and Berlin is currently Europe's top AI talent hub. According to Atomico's 2017 State of European Tech report, Germany is most likely to become a leader in areas such as autonomous vehicles, robotics, and quantum computing. In fact, almost half of all worldwide patents on autonomous driving come from German car companies or their suppliers, such as Bosch, Volkswagen, Audi, and Porsche. These German companies began their autonomous vehicle development activities as early as 1986.

A new tech hub region in southern Germany, called Cyber Valley, is hoping to create new opportunities for collaboration between academics and businesses, with a specific focus on AI. The hub plans to focus on AI and robotics, make better use of research talent, and work collaboratively with companies such as Porsche, Daimler, and Bosch. In addition to autonomous vehicles, Germany has an early lead in robotics, with one of the first cobots developed in Germany for use in manufacturing. Germany's national AI strategy was published in Nuremberg in December 2018, and in 2019 the German government tasked a new Data Ethics Commission with producing guidelines for the development and use of AI.

Despite this intellectual property and these early market leads, Germany has not invested at the same levels as other countries, and its technology firms are highly concentrated in the manufacturing, automotive, and industrial sectors, leaving other markets mostly untapped by AI capabilities. Furthermore, American automakers such as Ford and GM, as well as Google's Waymo, Uber, and other firms, are quickly catching up in the number of patents issued, threatening Germany's dominance in intellectual property in that area.

Russia

Russian president Vladimir Putin has stated that "artificial intelligence is the future, not only for Russia, but for all of humankind" and that "whichever country becomes the leader in this sphere will become the ruler of the world." This is one powerful statement. Russia has said that intelligent machines are vital to the future of its national security plans and that, by 2025, it plans to make 30% of its military equipment robotic. The government also wants to standardize the development of artificial intelligence, focusing on image recognition, speech recognition, autonomous military systems, and information support for the weapons life cycle. There is also a new Russian AI Association bringing academia and the private sector together. Additionally, President Putin approved the National Strategy for the Development of Artificial Intelligence (NSDAI) for the period until 2030 in October 2019.

Russia is still a world superpower in terms of military might and exerts significant influence in world markets, especially in the energy sector. Despite that, Russian investment in AI still significantly lags that of other countries, with only a reported $12 million invested by the government in research efforts. While Russia has contributed significant AI research in the university setting, the country's industry lacks overall AI talent and has relatively few companies working toward AI-related initiatives. Many skilled Russian engineers leave the country to work at firms worldwide that are throwing lots of money at skilled talent. As such, the biggest application of AI in Russia is in physical and cyberwarfare situations, leveraging AI to enhance the capabilities of autonomous vehicles and information warfare. In this arena, Russia is certainly a country to be reckoned with in the contest for AI dominance.

Other AI Hotspots

In addition to the above, many countries see AI as a country-level strategic initiative, including Israel, India, Denmark, Sweden, Estonia, Finland, the Netherlands, Poland, Singapore, Malaysia, Australia, Italy, Canada, Taiwan, the United Arab Emirates (UAE), and other locations. Some of these countries have more financial than technical resources, or vice versa. The key is that each of these countries sees AI in a strategic light and, as such, has crafted a strategic approach to AI.

AI technologies have the ability to transform and influence the lives of many people. Not only will AI transform the way we work, interact with each other, and travel between locations, but it also has an impact on weapons technology, modern warfare, and a country's cybersecurity. AI can also have a dramatic impact on the labor market, disrupting entire industries and creating whole new ones. As such, a focus on AI dominance can help strengthen a country's economy, shift global leadership and power, and confer military advantages. While the race for AI domination might seem similar to the Space Race or aspects of the Cold War, in reality the AI market doesn't support a winner-take-all approach. Indeed, continued advancement in AI requires research and industry collaboration, continued research and development, and industry-wide thinking and solutions to problems. While there will no doubt be winners and losers in terms of overall investment and return, countries worldwide will reap the benefits of increased adoption and development of cognitive technologies.

Read more here:

Why The Race For AI Dominance Is More Global Than You Think - Forbes

Written by admin

February 10th, 2020 at 9:47 pm

Posted in Alphago

Why asking an AI to explain itself can make things worse – MIT Technology Review

Posted: January 29, 2020 at 5:48 pm


without comments

Upol Ehsan once took a test ride in an Uber self-driving car. Instead of fretting about the empty driver's seat, anxious passengers were encouraged to watch a pacifier screen that showed a car's-eye view of the road: hazards picked out in orange and red, safe zones in cool blue.

For Ehsan, who studies the way humans interact with AI at the Georgia Institute of Technology in Atlanta, the intended message was clear: "Don't get freaked out; this is why the car is doing what it's doing." But something about the alien-looking street scene highlighted the strangeness of the experience rather than reassuring passengers. It got Ehsan thinking: what if the self-driving car could really explain itself?

The success of deep learning is due to tinkering: the best neural networks are tweaked and adapted to make better ones, and practical results have outpaced theoretical understanding. As a result, the details of how a trained model works are typically unknown. We have come to think of them as black boxes.

A lot of the time we're okay with that when it comes to things like playing Go or translating text or picking the next Netflix show to binge on. But if AI is to be used to help make decisions in law enforcement, medical diagnosis, and driverless cars, then we need to understand how it reaches those decisions, and to know when they are wrong.

"People need the power to disagree with or reject an automated decision," says Iris Howley, a computer scientist at Williams College in Williamstown, Massachusetts. Without this, people will push back against the technology. "You can see this playing out right now with the public response to facial recognition systems," she says.

Ehsan is part of a small but growing group of researchers trying to make AIs better at explaining themselves, to help us look inside the black box. The aim of so-called interpretable or explainable AI (XAI) is to help people understand what features in the data a neural network is actually learning, and thus whether the resulting model is accurate and unbiased.

One solution is to build machine-learning systems that show their workings: so-called glassbox, as opposed to black-box, AI. Glassbox models are typically much-simplified versions of a neural network in which it is easier to track how different pieces of data affect the model.

"There are people in the community who advocate for the use of glassbox models in any high-stakes setting," says Jennifer Wortman Vaughan, a computer scientist at Microsoft Research. "I largely agree." Simple glassbox models can perform as well as more complicated neural networks on certain types of structured data, such as tables of statistics. For some applications that's all you need.

But it depends on the domain. If we want to learn from messy data like images or text, we're stuck with deep, and thus opaque, neural networks. The ability of these networks to draw meaningful connections between very large numbers of disparate features is bound up with their complexity.

Even here, glassbox machine learning could help. One solution is to take two passes at the data, training an imperfect glassbox model as a debugging step to uncover potential errors that you might want to correct. Once the data has been cleaned up, a more accurate black-box model can be trained.
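A minimal sketch of that two-pass workflow, with assumed data and scikit-learn models standing in for whatever a real team would use: fit a simple, inspectable model first to surface suspicious features or data problems, then train the more accurate black-box model on the cleaned data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
X[:, 7] = y + rng.normal(scale=0.01, size=1000)        # a leaky feature, planted on purpose

# Pass 1: glassbox model as a debugging step; a huge coefficient flags the leak.
glassbox = LogisticRegression(max_iter=1000).fit(X, y)
print("coefficients:", np.round(glassbox.coef_[0], 2))

# Clean up the data (drop the leaky column), then...
X_clean = np.delete(X, 7, axis=1)

# Pass 2: train the more accurate black-box model on the cleaned data.
blackbox = GradientBoostingClassifier().fit(X_clean, y)
print("training accuracy:", blackbox.score(X_clean, y))
```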

It's a tricky balance, however. Too much transparency can lead to information overload. In a 2018 study looking at how professional users interact with machine-learning tools, Vaughan found that transparent models can actually make it harder to detect and correct the model's mistakes.

Another approach is to include visualizations that show a few key properties of the model and its underlying data. The idea is that you can see serious problems at a glance. For example, the model could be relying too much on certain features, which could signal bias.
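As an illustration of such a visualization (the model and data below are stand-ins, not the tools used in the study), permutation importance gives an at-a-glance view of which features a model is leaning on; one bar towering over the rest is the kind of over-reliance that could signal bias.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))
y = (X[:, 2] > 0).astype(int)                          # the model only "needs" feature 2

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: {'#' * int(50 * score)} {score:.2f}")   # crude text bar chart
```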

These visualization tools have proved incredibly popular in the short time they've been around. But do they really help? In the first study of its kind, Vaughan and her team tried to find out, and exposed some serious issues.

The team took two popular interpretability tools that give an overview of a model via charts and data plots, highlighting things that the neural network picked up on most in training. Six machine-learning professionals were recruited from within Microsoft, all different in education, job roles, and experience. They took part in a mock interaction with a deep neural network trained on a national income data set taken from the 1994 US census. The experiment was designed specifically to mimic the way data scientists use interpretability tools in the kinds of tasks they face routinely.

What the team found was striking. Sure, the tools sometimes helped people spot missing values in the data. But this usefulness was overshadowed by a tendency to over-trust and misread the visualizations. In some cases, users couldn't even describe what the visualizations were showing. This led to incorrect assumptions about the data set, the models, and the interpretability tools themselves. And it instilled a false confidence about the tools that made participants more gung-ho about deploying the models, even when they felt something wasn't quite right. Worryingly, this was true even when the output had been manipulated to show explanations that made no sense.

To back up the findings from their small user study, the researchers then conducted an online survey of around 200 machine-learning professionals recruited via mailing lists and social media. They found similar confusion and misplaced confidence.

Worse, many participants were happy to use the visualizations to make decisions about deploying the model despite admitting that they did not understand the math behind them. "It was particularly surprising to see people justify oddities in the data by creating narratives that explained them," says Harmanpreet Kaur at the University of Michigan, a coauthor on the study. "The automation bias was a very important factor that we had not considered."

Ah, the automation bias. In other words, people are primed to trust computers. It's not a new phenomenon. When it comes to automated systems from aircraft autopilots to spell checkers, studies have shown that humans often accept the choices they make even when they are obviously wrong. But when this happens with tools designed to help us avoid this very phenomenon, we have an even bigger problem.

What can we do about it? For some, part of the trouble with the first wave of XAI is that it is dominated by machine-learning researchers, most of whom are expert users of AI systems. Says Tim Miller of the University of Melbourne, who studies how humans use AI systems: "The inmates are running the asylum."

This is what Ehsan realized sitting in the back of the driverless Uber. It is easier to understand what an automated system is doing, and to see when it is making a mistake, if it gives reasons for its actions the way a human would. Ehsan and his colleague Mark Riedl are developing a machine-learning system that automatically generates such rationales in natural language. In an early prototype, the pair took a neural network that had learned how to play the classic 1980s video game Frogger and trained it to provide a reason every time it made a move.


To do this, they showed the system many examples of humans playing the game while talking out loud about what they were doing. They then took a neural network for translating between two natural languages and adapted it to translate instead between actions in the game and natural-language rationales for those actions. Now, when the neural network sees an action in the game, it translates it into an explanation. The result is a Frogger-playing AI that says things like "I'm moving left to stay behind the blue truck" every time it moves.
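A hypothetical, heavily simplified sketch of that idea: a tiny encoder-decoder "translation" model that maps a short window of game actions to a rationale sentence, trained here on a single made-up pair. The real system, its vocabulary, and its training data are far larger; this only shows the shape of the approach.

```python
import torch
import torch.nn as nn

ACTIONS = ["<pad>", "up", "down", "left", "right", "wait"]
WORDS = ["<pad>", "<bos>", "<eos>", "i", "am", "moving", "left",
         "to", "stay", "behind", "the", "blue", "truck"]
a2i = {a: i for i, a in enumerate(ACTIONS)}
w2i = {w: i for i, w in enumerate(WORDS)}

class ActionToRationale(nn.Module):
    """Encoder summarizes the action window; decoder emits the rationale word by word."""
    def __init__(self, hidden=64):
        super().__init__()
        self.enc_emb = nn.Embedding(len(ACTIONS), hidden)
        self.dec_emb = nn.Embedding(len(WORDS), hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, len(WORDS))

    def forward(self, actions, rationale_in):
        _, h = self.encoder(self.enc_emb(actions))
        dec_out, _ = self.decoder(self.dec_emb(rationale_in), h)
        return self.out(dec_out)

# One (actions -> rationale) pair, as if transcribed from a human thinking aloud.
actions = torch.tensor([[a2i["left"], a2i["left"], a2i["wait"]]])
words = ["<bos>", "i", "am", "moving", "left", "to", "stay", "behind", "the", "blue", "truck", "<eos>"]
target = torch.tensor([[w2i[w] for w in words]])

model = ActionToRationale()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):                                   # overfit the toy pair to show the idea
    logits = model(actions, target[:, :-1])            # teacher forcing
    loss = loss_fn(logits.reshape(-1, len(WORDS)), target[:, 1:].reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()
```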

Ehsan and Riedl's work is just a start. For one thing, it is not clear whether a machine-learning system will always be able to provide a natural-language rationale for its actions. Take DeepMind's board-game-playing AI AlphaZero. One of the most striking features of the software is its ability to make winning moves that most human players would not think to try at that point in a game. If AlphaZero were able to explain its moves, would they always make sense?

Reasons help whether we understand them or not, says Ehsan: "The goal of human-centered XAI is not just to make the user agree to what the AI is saying; it is also to provoke reflection." Riedl recalls watching the livestream of the tournament match between DeepMind's AI and Korean Go champion Lee Sedol. The commentators were talking about what AlphaGo was seeing and thinking. "That wasn't how AlphaGo worked," says Riedl. "But I felt that the commentary was essential to understanding what was happening."

What this new wave of XAI researchers agree on is that if AI systems are to be used by more people, those people must be part of the design from the start, and different people need different kinds of explanations. (This is backed up by a new study from Howley and her colleagues, in which they show that people's ability to understand an interactive or static visualization depends on their education levels.) Think of a cancer-diagnosing AI, says Ehsan: "You'd want the explanation it gives to an oncologist to be very different from the explanation it gives to the patient."

Ultimately, we want AIs to explain themselves not only to data scientists and doctors but to police officers using face recognition technology, teachers using analytics software in their classrooms, students trying to make sense of their social-media feeds, and anyone sitting in the backseat of a self-driving car. "We've always known that people over-trust technology, and that's especially true with AI systems," says Riedl. "The more you say it's smart, the more people are convinced that it's smarter than they are."

Explanations that anyone can understand should help pop that bubble.

Original post:

Why asking an AI to explain itself can make things worse - MIT Technology Review

Written by admin

January 29th, 2020 at 5:48 pm

Posted in Alphago

AlphaZero beat humans at Chess and StarCraft, now it’s working with quantum computers – The Next Web

Posted: January 18, 2020 at 4:42 pm


without comments

A team of researchers from Aarhus University in Denmark let DeepMind's AlphaZero algorithm loose on a few quantum computing optimization problems and, much to everyone's surprise, the AI was able to solve the problems without any outside expert knowledge. Not bad for a machine learning paradigm designed to win at games like Chess and StarCraft.

You've probably heard of DeepMind and its AI systems. The UK-based Google sister company is responsible for both AlphaZero and AlphaGo, the systems that beat the world's most skilled humans at the games of Chess and Go. In essence, what both systems do is try to figure out the optimal next set of moves. Where humans can only think so many moves ahead, the AI can look a bit further using optimized search and planning methods.

Related: DeepMind's AlphaZero AI is the new champion in chess, shogi, and Go

When the Aarhus team applied AlphaZero's optimization abilities to a trio of problems associated with optimizing quantum functions, an open problem for the quantum computing world, they learned that its ability to learn new parameters unsupervised transferred over from games to applications quite well.

Per the study:

AlphaZero employs a deep neural network in conjunction with deep lookahead in a guided tree search, which allows for predictive hidden-variable approximation of the quantum parameter landscape. To emphasize transferability, we apply and benchmark the algorithm on three classes of control problems using only a single common set of algorithmic hyperparameters.
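For context on what "deep lookahead in a guided tree search" means in practice, here is a hedged sketch of the kind of selection rule AlphaZero-style searches use (often called PUCT): a network's prior over moves is combined with the value estimates accumulated in the tree so far. This is illustrative only, not the Aarhus group's code.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    prior: float                       # P(s, a) suggested by the policy network
    visit_count: int = 0               # N(s, a)
    value_sum: float = 0.0             # sum of values backed up through this node
    children: dict = field(default_factory=dict)

    def q(self) -> float:
        return self.value_sum / self.visit_count if self.visit_count else 0.0

def select_child(node: Node, c_puct: float = 1.5):
    """Pick the child maximizing Q + U, trading off known value against the network's prior."""
    total = sum(c.visit_count for c in node.children.values())
    def score(child: Node) -> float:
        u = c_puct * child.prior * math.sqrt(total + 1) / (1 + child.visit_count)
        return child.q() + u
    return max(node.children.items(), key=lambda kv: score(kv[1]))

root = Node(prior=1.0, children={
    "action_a": Node(prior=0.6, visit_count=10, value_sum=4.0),
    "action_b": Node(prior=0.4, visit_count=2, value_sum=1.5),
})
print(select_child(root)[0])            # the action the search would explore next
```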

The implications of AlphaZero's mastery over the quantum universe could be huge. Controlling a quantum computer requires an AI solution because operations at the quantum level quickly become incalculable for humans. The AI can find optimal paths between data clusters in order to arrive at better solutions in tandem with computer processors. It works a lot like human heuristics, just scaled to the nth degree.

An example of this would be an algorithm that helps a quantum computer sort through near-infinite combinations of molecules to come up with chemical compounds that would be useful in the treatment of certain illnesses. The current paradigm would involve developing an algorithm that relies on human expertise and databases with previous findings to point it in the right direction.

But the kinds of problems we're looking to quantum computers to solve don't always have a good starting point. Some of these, optimization problems like the Traveling Salesman Problem, need an algorithm that's capable of figuring things out without the need for constant adjustment by developers.

DeepMind's algorithm and AI system may be the solution quantum computing's been waiting for. The researchers effectively employ AlphaZero as a tabula rasa for quantum optimization: it doesn't necessarily need human expertise to find the optimal solution to a problem at the quantum computing level.

Before we start getting too concerned about unsupervised AI accessing quantum computers, it's worth mentioning that so far AlphaZero's just solved a few problems in order to prove a concept. We know the algorithms can handle quantum optimization; now it's time to figure out what we can do with it.

The researchers have already received interest from big tech and other academic institutions with queries related to collaborating on future research. Not for nothing, but DeepMind's sister company Google has a little quantum computing program of its own. We're betting this isn't the last we've heard of AlphaZero's adventures in the quantum computing world.


Read the original:

AlphaZero beat humans at Chess and StarCraft, now it's working with quantum computers - The Next Web

Written by admin

January 18th, 2020 at 4:42 pm

Posted in Alphago

What are neural-symbolic AI methods and why will they dominate 2020? – The Next Web

Posted: at 4:42 pm


without comments

The recent commercial AI revolution has been largely driven by deep neural networks. First invented in the 1960s, deep NNs came into their own once fueled by the combination of internet-scale datasets and distributed GPU farms.

But the field of AI is much richer than just this one type of algorithm. Symbolic reasoning algorithms such as artificial logic systems, also pioneered in the 60s, may be poised to emerge into the spotlight to some extent perhaps on their own, but also hybridized with neural networks in the form of so-called neural-symbolic systems.

Deep neural nets have done amazing things for certain tasks, such as image recognition and machine translation. However, for many more complex applications, traditional deep learning approaches cannot match the ability of hybrid architecture systems that additionally leverage other AI techniques such as probabilistic reasoning, seed ontologies, and self-reprogramming ability.

Deep neural networks, by themselves, lack strong generalization, i.e. discovering new regularities and extrapolating beyond training sets. Deep neural networks interpolate and approximate on what is already known, which is why they cannot truly be creative in the sense that humans can, though they can produce creative-looking works that are variations on the data they have ingested.

This is why large training sets are required to teach deep neural networks and also why data augmentation is such an important technique for deep learning, which needs humans to specify known data transformations. Even interpolation cannot be done perfectly without learning underlying regularities, which is vividly demonstrated by well-known adversarial attacks on deep neural networks.
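To make the data-augmentation point concrete, here is a minimal sketch (with fake images) of the kind of human-specified, label-preserving transformations used to stretch a training set: flips, small shifts, and brightness jitter that a person knows do not change what the image depicts.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image: np.ndarray) -> np.ndarray:
    """image: H x W x C array with values in [0, 1]; returns a randomly transformed copy."""
    out = image.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1, :]                               # horizontal flip
    out = np.roll(out, rng.integers(-2, 3), axis=1)         # small horizontal shift
    out = np.clip(out * rng.uniform(0.8, 1.2), 0.0, 1.0)    # brightness jitter
    return out

batch = rng.random((8, 32, 32, 3))                          # fake 32x32 RGB images
augmented = np.stack([augment(img) for img in batch])
print(augmented.shape)                                      # (8, 32, 32, 3)
```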

The slavish adherence of deep neural nets to the particulars of their training data also makes them poorly interpretable. Humans cannot completely rely on or interpret their results, especially in novel situations.

What is interesting is that, for the most part, the disadvantages of deep neural nets are strengths of symbolic systems (and vice versa), which inherently possess compositionality, interpretability, and can exhibit true generalization. Prior knowledge can also be easily incorporated into symbolic systems in contrast to neural nets.

Neural net architectures are very powerful at certain types of learning, modeling, and action but have limited capability for abstraction. That is why they are compared with the Ptolemaic epicycle model of our solar system: they can become more and more precise, but they need more and more parameters and data for this, and they, by themselves, cannot discover Kepler's laws and incorporate them into the knowledge base, much less infer Newton's laws from them.

Symbolic AI is powerful at manipulating and modeling abstractions, but deals poorly with massive empirical data streams.

This is why we believe that deep integration of neural and symbolic AI systems is the most viable path to human-level AGI on modern computer hardware.

It's worth noting in this light that many recent deep neural net successes are actually hybrid architectures; e.g., the AlphaGo architecture from Google DeepMind integrates two neural nets with one game tree. Their recent MuZero architecture, which can master both board and Atari games, goes further along this path, using deep neural nets together with planning with a learned model.

The highly successful ERNIE architecture for Natural Language Processing question-answering from Tsinghua University integrates knowledge graphs into neural networks. The symbolic sides of these particular architectures are relatively simplistic, but they can be seen as pointing in the direction of more sophisticated neural-symbolic hybrid systems.

The integration of neural and symbolic methods relies heavily on what has been the most profound revolution in AI in the last 20 years the rise of probabilistic methods: e.g. neural generative models, Bayesian inference techniques, estimation of distribution algorithms, probabilistic programming.

As an example of the emerging practical applications of probabilistic neural-symbolic methods, at the Artificial General Intelligence (AGI) 2019 conference in Shenzhen last August, Hugo Latapie from Cisco Systems described work his team has done in collaboration with our AI team at SingularityNET Foundation, using the OpenCog AGI engine together with deep neural networks to analyze street scenes.

The OpenCog framework provides a neural-symbolic framework that is especially rich on the symbolic side, and interoperates with popular deep neural net frameworks. It features a combination of probabilistic logic networks (PLNs), probabilistic evolutionary program learning (MOSES), and probabilistic generative neural networks.

The traffic analytics system demonstrated by Latapie deploys OpenCog-based symbolic reasoning on top of deep neural models for street scene cameras, enabling feats such as semantic anomaly detection (flagging collisions, jaywalking, and other deviations from expectation), unsupervised scene labeling for new cameras, and single-shot transfer learning (e.g. learning about new signals for bus stops with a single example).
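The following is a hedged, toy illustration of that layering, not OpenCog itself: per-camera neural detectors are assumed to emit labeled detections drawn from a shared ontology, and a small symbolic rule over those detections flags one kind of semantic anomaly (jaywalking).

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str      # concept from the shared ontology, e.g. "person", "vehicle", "crosswalk"
    zone: str       # "roadway", "crosswalk", "sidewalk"
    camera: str

def jaywalking(detections):
    """Rule: a person in the roadway, on a camera with no crosswalk in view, is an anomaly."""
    crosswalk_cams = {d.camera for d in detections if d.label == "crosswalk"}
    return [d for d in detections
            if d.label == "person" and d.zone == "roadway" and d.camera not in crosswalk_cams]

scene = [                                           # pretend outputs of the neural detectors
    Detection("person", "roadway", "cam_12"),
    Detection("vehicle", "roadway", "cam_12"),
    Detection("crosswalk", "roadway", "cam_07"),
    Detection("person", "roadway", "cam_07"),
]
for anomaly in jaywalking(scene):
    print("semantic anomaly:", anomaly)
```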

The difference between a pure deep neural net approach and a neural-symbolic approach in this case is stark. With deep neural nets deployed in a straightforward way, each neural network models what is seen by a single camera. Forming a holistic view of what's happening at a given intersection, let alone across a whole city, is much more of a challenge.

In the neural-symbolic architecture, the symbolic layer provides a shared ontology, so all cameras can be connected to an integrated traffic management system. If an ambulance needs to be routed in a way that will neither encounter nor cause significant traffic, this sort of whole-scenario symbolic understanding is exactly what one needs.

The same architecture can be applied to many other related use cases where one can use neural-symbolic AI to both enrich local intelligence and connect multiple sources/locations into a holistic view for reasoning and action.

It may not be impossible to crack this particular problem using a more complex deep neural net architecture, with multiple neural nets working together in subtle ways. However, this is an example of something that is easier and more straightforward to address using a neural-symbolic approach. And it is quite close to machine vision, one of deep neural nets' great strengths.

In other, more abstract application domains, such as mathematical theorem-proving or biomedical discovery, the critical value of the symbolic side of the neural-symbolic hybrid is even more dramatic.

Deep neural nets have done amazing things over the last few years, bringing applied AI to a whole new level. We're betting that the next phase of incredible AI achievements will be delivered via hybrid AI architectures such as neural-symbolic systems. This trend started relatively quietly in 2019, and in 2020 we expect it to pick up speed dramatically.

Published January 15, 2020 09:00 UTC

The rest is here:

What are neural-symbolic AI methods and why will they dominate 2020? - The Next Web

Written by admin

January 18th, 2020 at 4:42 pm

Posted in Alphago

What is AlphaGo? – Definition from WhatIs.com

Posted: December 22, 2019 at 6:46 am


without comments

AlphaGo is an artificial intelligence (AI) agent that is specialized to play Go, a Chinese strategy board game, against human competitors. AlphaGo is a Google DeepMind project.

The ability to create a learning algorithm that can beat a human player at strategic games is a measure of AI development. AlphaGo is designed as a self-teaching AI and plays against itself to master the complex strategic game of Go. There have been versions of AlphaGo that beat human players but new versions are still being created.

Go is a Chinese board game similar to chess, with two players, one using black pieces and one white, placing a piece each turn. Pieces are placed on a grid that varies in size according to the level of play, up to 19x19 placement points. The goal is to capture more territory (empty spaces) or enemy pieces by surrounding them with your pieces. Only the points horizontally and vertically adjacent to a piece need to be covered to capture it; diagonal points do not need to be covered. Either pieces or territory can be captured individually or in groups.

Chess may be a more famous board game with white and black pieces, but Go has a googol times more possible moves. The number of possible positions makes a traditional brute-force approach, as was used with IBM's Deep Blue in chess, impossible with current computers. That difference in the complexity of the problem required a new approach.

AlphaGo is based on a Monte Carlo tree search algorithm, looking at a list of possible moves drawn from its machine-learned repertoire. Algorithms and learning differ among the various versions of AlphaGo. AlphaGo Master, the version that beat world champion Go player Ke Jie, uses supervised learning. AlphaGo Zero, the self-taught version of AlphaGo, learns by playing against itself. First, the AI plays randomly, then with increasing sophistication. Its increased sophistication is such that it consistently beats the Master version that dominated human players.


See the article here:

What is AlphaGo? - Definition from WhatIs.com

Written by admin

December 22nd, 2019 at 6:46 am

Posted in Alphago

AI has bested chess and Go, but it struggles to find a diamond in Minecraft – The Verge

Posted: December 18, 2019 at 9:45 pm


without comments

Whether we're learning to cook an omelet or drive a car, the path to mastering new skills often begins by watching others. But can artificial intelligence learn the same way? A new challenge teaching AI agents to play Minecraft suggests it's much trickier for computers.

Announced earlier this year, the MineRL competition asked teams of researchers to create AI bots that could successfully mine a diamond in Minecraft. This isn't an impossible task, but it does require a mastery of the game's basics. Players need to know how to cut down trees, craft pickaxes, and explore underground caves while dodging monsters and lava. These are the sorts of skills that most adults could pick up after a few hours of experimentation or learn much faster by watching tutorials on YouTube.

But of the 660 entries in the MineRL competition, none were able to complete the challenge, according to results that will be announced at the AI conference NeurIPS and that were first reported by BBC News. Although bots were able to learn intermediary steps, like constructing a furnace to make durable pickaxes, none successfully found a diamond.

"The task we posed is very hard," Katja Hofmann, a principal researcher at Microsoft Research, which helped organize the challenge, told BBC News. "While no submitted agent has fully solved the task, they have made a lot of progress and learned to make many of the tools needed along the way."

This may be a surprise, especially when you think that AI has managed to best humans at games like chess, Go, and Dota 2. But it reflects important limitations of the technology as well as restrictions put in place by MineRLs judges to really challenge the teams.

The bots in MineRL had to learn using a combination of methods known as imitation learning and reinforcement learning. In imitation learning, agents are shown data of the task ahead of them, and they try to imitate it. In reinforcement learning, they're simply dumped into a virtual world and left to work things out for themselves using trial and error.

Often, AI is only able to take on big challenges by combining these two methods. The famous AlphaGo system, for example, first learned to play Go by being fed data of old games. It then honed its skills and surpassed all humans by playing itself over and over.
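A compressed sketch of that two-stage recipe, with stand-in data and a trivial policy network (the real systems, environments, and rewards are far richer): first clone recorded behavior, then fine-tune with a policy-gradient update.

```python
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))   # 4 discrete actions
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

# Stage 1: imitation learning (behavior cloning) on recorded (state, action) pairs.
demo_states = torch.randn(512, 16)                    # stand-in for recorded gameplay
demo_actions = torch.randint(0, 4, (512,))
for _ in range(100):
    loss = ce(policy(demo_states), demo_actions)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: reinforcement learning (REINFORCE) by trial and error in an environment.
def run_episode(policy):
    """Placeholder rollout; a real setup would step an environment such as MineRL here."""
    states = torch.randn(32, 16)
    dist = torch.distributions.Categorical(logits=policy(states))
    actions = dist.sample()
    episode_return = torch.randn(())                  # stand-in for the episode's reward
    return dist.log_prob(actions).sum(), episode_return

for _ in range(50):
    log_prob, ret = run_episode(policy)
    loss = -(ret.detach() * log_prob)                 # push up log-probs of rewarded actions
    opt.zero_grad(); loss.backward(); opt.step()
```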

The MineRL bots took a similar approach, but the resources available to them were comparatively limited. While AI agents like AlphaGo are created with huge datasets, powerful computer hardware, and the equivalent of decades of training time, the MineRL bots had to make do with just 1,000 hours of recorded gameplay to learn from, a single Nvidia graphics processor to train with, and just four days to get up to speed.

It's the difference between the resources available to an MLB team (coaches, nutritionists, the finest equipment money can buy) and what a Little League squad has to make do with.

It may seem unfair to hamstring the MineRL bots in this way, but these constraints reflect the challenges of integrating AI into the real world. While bots like AlphaGo certainly push the boundary of what AI can achieve, very few companies and research labs can match the resources of Google-owned DeepMind.

The competition's lead organizer, Carnegie Mellon University PhD student William Guss, told BBC News that the challenge was meant to show that not every AI problem should be solved by throwing computing power at it. That mindset, said Guss, "works directly against democratizing access to these reinforcement learning systems, and leaves the ability to train agents in complex environments to corporations with swathes of compute."

So while AI may be struggling in Minecraft now, when it cracks this challenge, it'll hopefully deliver benefits to a wider audience. Just don't think about those poor Minecraft YouTubers who might be out of a job.

Continued here:

AI has bested chess and Go, but it struggles to find a diamond in Minecraft - The Verge

Written by admin

December 18th, 2019 at 9:45 pm

Posted in Alphago

AI is dangerous, but not for the reasons you think. – OUPblog

Posted: at 9:45 pm


without comments

In 1997, Deep Blue defeated Garry Kasparov, the reigning world chess champion. In 2011, Watson defeated Ken Jennings and Brad Rutter, the world's best Jeopardy players. In 2017, AlphaGo defeated Ke Jie, the world's best Go player. Later that year, DeepMind unleashed AlphaZero, which trounced the world-champion computer programs at chess, Go, and shogi.

If humans are no longer worthy opponents, then perhaps computers have moved so far beyond our intelligence that we should rely on their superior intelligence to make our important decisions. Nope.

Despite their freakish skill at board games, computer algorithms do not possess anything resembling human wisdom, common sense, or critical thinking. Deciding whether to accept a job offer, sell a stock, or buy a house is very different from recognizing that moving a bishop three spaces will checkmate an opponent. That is why it is perilous to trust computer programs we don't understand to make decisions for us.

Consider the challenges identified by Stanford computer science professor Terry Winograd, which have come to be known as Winograd schemas. For example, what does the word "it" refer to in this sentence?

I can't cut that tree down with that axe; it is too [thick/small].

If the bracketed word is "thick," then "it" refers to the tree; if the bracketed word is "small," then "it" refers to the axe. Sentences like these are understood immediately by humans but are very difficult for computers because they do not have the real-world experience to place words in context.

Paraphrasing Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence: how can machines take over the world when they can't even figure out what "it" refers to in a simple sentence?

When we see a tree, we know it is a tree. We might compare it to other trees and think about the similarities and differences between fruit trees and maple trees. We might recollect the smells wafting from some trees. We would not be surprised to see a squirrel run up a pine or a bird fly out of a dogwood. We might remember planting a tree and watching it grow year by year. We might remember cutting down a tree or watching a tree being cut down.

A computer does none of this. It can spellcheck the word "tree," count the number of times the word is used in a story, and retrieve sentences that contain the word. But computers do not understand what trees are in any relevant sense. They are like Nigel Richards, who memorized the French Scrabble dictionary and has won the French-language Scrabble World Championship twice, even though he doesn't know the meaning of the French words he spells.

To demonstrate the dangers of relying on computer algorithms to make real-world decisions, consider an investigation of risk factors for fatal heart attacks.

I made up some household spending data for 1,000 imaginary people, of whom half had suffered heart attacks and half had not. For each such person, I used a random number generator to create fictitious data in 100 spending categories. These data were entirely random. There were no real people, no real spending, and no real heart attacks. It was just a bunch of random numbers. But the thing about random numbers is that coincidental patterns inevitably appear.

In 10 flips of a fair coin, there is a 46% chance of a streak of four or more heads in a row or four or more tails in a row. If that does not happen, heads and tails might alternate several times in a row. Or there might be two heads and a tail, followed by two more heads and a tail. In any event, some pattern will appear and it will be absolutely meaningless.
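The 46% figure is easy to check by simulation. This is my own quick sketch, not anything from the original article; it estimates the probability of a run of four or more identical outcomes in ten fair coin flips.

```python
import random

def has_run_of_four(flips):
    """True if the sequence contains 4+ identical outcomes in a row."""
    streak = 1
    for prev, cur in zip(flips, flips[1:]):
        streak = streak + 1 if cur == prev else 1
        if streak >= 4:
            return True
    return False

trials = 200_000
hits = sum(
    has_run_of_four([random.choice("HT") for _ in range(10)])
    for _ in range(trials)
)
print(f"P(run of 4+ in 10 flips) ~ {hits / trials:.2f}")  # roughly 0.46
```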

In the same way, some coincidental patterns were bound to turn up in my random spending numbers. As it turned out, by luck alone, the imaginary people who had not suffered heart attacks spent more money on small appliances and also on household paper products.
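A hedged reconstruction of that kind of experiment makes the point concrete. The spending distributions, group split, and significance cutoff below are my own choices, not the author's actual data; the code simply shows that pure noise reliably yields a few "significant" categories.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_categories = 1_000, 100

# Fictitious spending data: every value is random noise, nothing is real.
spending = rng.normal(loc=100, scale=20, size=(n_people, n_categories))
had_heart_attack = np.array([True] * 500 + [False] * 500)

group_a = spending[had_heart_attack]       # "heart attack" group
group_b = spending[~had_heart_attack]      # "no heart attack" group

# Two-sample z-like statistic per spending category.
diff = group_a.mean(axis=0) - group_b.mean(axis=0)
se = np.sqrt(group_a.var(axis=0, ddof=1) / 500 + group_b.var(axis=0, ddof=1) / 500)
z = diff / se

# With a conventional |z| > 1.96 cutoff, roughly 5 of the 100 categories
# will look "significant" even though the data are pure noise.
print("spuriously significant categories:", int(np.sum(np.abs(z) > 1.96)))
```

Test 100 categories at a 5% significance level and about five of them will "matter" by chance alone, which is exactly what happened with the small appliances and household paper products.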

When we see these results, we should scoff and recognize that the patterns are meaningless coincidences. How could small appliances and household paper products prevent heart attacks?

A computer, by contrast, would take the results seriously because a computer has no idea what heart attacks, small appliances, and household paper products are. If the computer algorithm is hidden inside a black box, where we do not know how the result was attained, we would not have an opportunity to scoff.

Nonetheless, businesses and governments all over the world nowadays trust computers to make decisions based on coincidental statistical patterns just like these. One company, for example, decided that it would make more online sales if it changed the background color of the web page shown to British customers from blue to teal. Why? Because they tried several different colors in nearly 100 countries. Any given color was certain to fare better in some country than in others even if random numbers were analyzed instead of sales numbers. The change was made and sales went down.

Many marketing decisions, medical diagnoses, and stock trades are now made by computers. Loan applications and job applications are evaluated by computers. Election campaigns are run by computers, including Hillary Clinton's disastrous 2016 presidential campaign. If the algorithms are hidden inside black boxes, with no human supervision, then it is up to the computers to decide whether the discovered patterns make sense, and they are utterly incapable of doing so because they do not understand anything about the real world.

Computers are not intelligent in any meaningful sense of the word, and it is hazardous to rely on them to make important decisions for us. The real danger today is not that computers are smarter than us, but that we think computers are smarter than us.

Featured image credit: Lumberjack Adventures by Abby Savage. CC0 via Unsplash.

See original here:

AI is dangerous, but not for the reasons you think. - OUPblog

Written by admin

December 18th, 2019 at 9:45 pm

Posted in Alphago

The Perils and Promise of Artificial Conscientiousness – WIRED

Posted: at 9:45 pm


without comments

We humans are notoriously bad at predicting the consequences of achieving our technological goals. Add seat belts to cars for safety, speeding and accidents can go up. Burn hydrocarbons for cheap energy, warm the planet. Give experts new technologies like surgical robots or predictive policing algorithms to enhance productivity, block apprentices from learning. Still, we're amazing at predicting unintended consequences compared to the intelligent technologies we're building.

WIRED OPINION

ABOUT

Matt Beane (@mattbeane) is an assistant professor of technology management at UC Santa Barbara and a research affiliate at MIT's Initiative on the Digital Economy.

Take reinforcement learning, one particularly potent flavor of AI that's behind some of the more stupendous demonstrations of late. RL systems take in reward states (aka goals, outcomes that they get points for) and go after them without regard to the unintended consequences of their actions. DeepMind's AlphaGo was designed to win the board game Go, whatever it took. OpenAI's system did the same for Dota 2, a fiendishly complex multiplayer online war game. Both came up with unconventional, in some cases radical, new tactics required to beat the best that humanity had to offer, yet consumed disproportionately large amounts of energy and natural resources to do so. This kind of single-mindedness has inspired all kinds of fun sci-fi, including an AI designed to produce as many paperclips as possible proceeding to destroy the Earth, and then the entire cosmos, in an effort to get the job done.

While seemingly innocuous, this win-at-any-cost approach is untenable for the more practical uses of AI. Otherwise we may end up swamped by power outages, flash-trading market failures, or (even more) hyper-polarized, isolated online communities. To be clear, these threats are possible only because AI is delivering amazing improvements on previous best practices: electrical grids are becoming much more efficient and reliable, microsecond-frequency trading allows for major improvements in global market efficiency, and social media platforms suggest beneficial connections to goods, services, information, and people that would otherwise remain hidden. But the more we hand these and similar processes over to AI that is singularly focused on its goals, the more they can produce consequences we don't like, sometimes at the speed of light.

Some within the AI community are already addressing these concerns. One of the founders of DeepMind cofounded the Partnership on AI, which aims to direct attention and effort toward harnessing AI to contribute to solutions for some of humanity's most challenging problems. On December 4, PAI announced the release of SafeLife, a proof-of-concept reinforcement-learning environment in which an agent can learn to avoid unintended side effects of its optimization activity in a simple game. SafeLife has a clear way of characterizing those consequences: increases in entropy (the degree of disorder or randomness) in the game system. By definition this is not a practical system, but it does show how a reinforcement-learning-driven system can optimize toward a goal while minimizing collateral damage.
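The underlying idea can be sketched as reward shaping: subtract a penalty proportional to how much the agent's action disturbs the rest of the environment, measured here (echoing SafeLife's entropy framing) by a simple disorder statistic over a grid. The functions, grid, and coefficient below are my own illustration under that assumption, not the actual SafeLife code.

```python
import numpy as np

def disorder(grid: np.ndarray) -> float:
    """Shannon entropy of cell values: a crude proxy for 'mess' in the world."""
    _, counts = np.unique(grid, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def shaped_reward(task_reward: float,
                  grid_before: np.ndarray,
                  grid_after: np.ndarray,
                  penalty_coeff: float = 0.5) -> float:
    """Task reward minus a penalty for extra disorder the agent created."""
    side_effect = max(0.0, disorder(grid_after) - disorder(grid_before))
    return task_reward - penalty_coeff * side_effect

# Toy example: the agent scores a point but scrambles part of the grid.
before = np.zeros((8, 8), dtype=int)
after = before.copy()
after[:2, :2] = np.array([[1, 2], [3, 1]])   # the "collateral damage"
print(shaped_reward(1.0, before, after))     # less than 1.0: the mess is penalised
```

An agent trained against the shaped reward still pursues its goal, but every unnecessary change it makes to the world costs it points, which is the behaviour the SafeLife work is after.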

This is very exciting work, and in principle it could help with all kinds of unintended effects of intelligent technologies like AI and robots. For example, it could help factory robots know they should slow down if a red-tailed hawk flies in their way. (I've seen this happen. Those buildings house pigeons, and, if big enough, birds of prey). A SafeLife-like model could override its programmed setting to maximize throughput, because destroying living things adds a lot of entropy to the world. But some things that we expect to help in theory end up contributing to the very problems they're trying to solve. Yes, that means the unintended consequences module in next-gen AI systems could be the very thing that creates potent unintended consequences. What happens if that robot slows down for that hawk while a nearby human expects it to keep moving? Safety and productivity could be threatened.

This is particularly problematic when these consequences span significant amounts of space and time. Take the Dota 2 algorithm. During a match, when it calculates its win probability is above 90 percent, it's programmed to taunt other players via chat. "Win probability 92 percent," you might read as you watch your hard-won forces and devious strategy decimated by a computer program. What effects does that have on players' approaches to the game? And, even further removed, what about their commitment to the game? To gaming generally? Their career aspirations? Their contributions to society? If this seems like armchair speculation, note that Lee Sedol, the world's best professional Go player, a wunderkind who has devoted his entire life to mastering the game, has just quit the game publicly and permanently, saying that no human can beat the system. It's not obvious that Sedol's retirement is good or bad for the game, for him, or for society, but it is a symbolic and significant unintended consequence of the actions of an AI-based system optimizing on its reward function.

Continued here:

The Perils and Promise of Artificial Conscientiousness - WIRED

Written by admin

December 18th, 2019 at 9:45 pm

Posted in Alphago

DeepMind Vs Google: The Inner Feud Between Two Tech Behemoths – Analytics India Magazine

Posted: at 9:45 pm


without comments

With the recent switch of DeepMind's co-founder Mustafa Suleyman to its sister company Google, researchers are raising questions as to whether this unexpected move will cause a rift between the two companies. Suleyman had been on leave from the London-based AI company for the past six months, but earlier this month he confirmed through a Twitter post that he was joining Google. In the post, Suleyman expressed his excitement about joining the team at Google to work on the opportunities and impacts of applied AI technologies.

Acquired by Google in 2014, DeepMind was aimed at using machine intelligence to solve real-world problems, including healthcare and energy. While co-founder Demis Hassabis was running the core artificial intelligence research at DeepMind, Suleyman was in charge of developing Streams, a controversial health app which gathered data from millions of NHS patients without their direct consent.

However, the relationship between Google and DeepMind has been fairly complicated since last year. After a bidding war with Facebook in 2014, Google acquired DeepMind for $600 million. DeepMind was then separated from the tech giant in 2015 as part of the Alphabet restructuring, raising tension among Google's AI researchers.

Suleyman's key project, Streams, has created considerable suspicion among the three companies: Alphabet, DeepMind, and Google. Although DeepMind promised to keep a privacy check on the records of all 1.6 million Royal Free patients and to keep them independent, its deal to hand Streams over to Google gave that claim no legal foundation. Experts believed such a deal broke DeepMind's promises about protecting the data. Nevertheless, in an interview, a DeepMind spokesperson said the company is still committed to its privacy statements and that any dealings with Google would not affect the acquired data.

Google has previously run into several controversies, such as alienating its employees, clashing with governments, and ignoring its customers and clients. It has also recently admitted its interest in serving China by developing a censored search engine. These steps have placed the company in a precarious position, making it seem unpredictable and untrustworthy to mainstream media, privacy experts, industry giants, and even the general public.

On the other side, Google has wanted to capitalise on owning the highest concentration of AI talent in the field of deep learning. But DeepMind's contribution to Google's bottom line has been disappointing. The company has made significant breakthroughs with AI, whether in diagnosing fatal diseases, engineering bacteria to eat plastic, or creating AlphaGo, the computer program that plays the board game Go. However, it has been a heavy burden on its investors, losing $571 million last year while carrying a debt to its parent company of approximately $1.4 billion. Such concerns led Google to take over control of the company, contradicting the initial agreement that allowed DeepMind to operate independently.

Why did it come to this? The answer is a big gap in DeepMind's commercialisation of its research. According to industry experts, the company has been fixated on developing general intelligence, when the more important task should have been working on short-term projects that could potentially turn into products solving real-world problems. Haitham Bou-Ammar, an executive at Cambridge-based AI startup Prowler.io, believed the company needs a shift in focus, with strategies to make money from its deep learning assets rather than running an academic-style research lab.

With a single-minded focus on deep neural networks, DeepMind's approach to AI has not been broad. A multi-pronged approach would have helped, for example by exploring evolutionary algorithms and decision making in realistic environments. Instead, DeepMind has put all its eggs in one basket: deep reinforcement learning. Many also believe the company should have been focusing on bridging these gaps; instead, it has been dealing with issues related to its apparent independence.

DeepMind CEO Demis Hassabis once declined Google's offer to lead its robotics unit. And while DeepMind provided its WaveNet voice-synthesis software to Google, its leadership declined any association with Google's cloud platform. Such developments point to a bumpy relationship between the two. Critics fear that the change in management will shift the focus from research to products, while privacy experts worry about Google's unsolicited access to NHS data.

From a distance, DeepMind appears to have made great progress, building software that can learn to perform tasks at a superhuman level and making strides in games that demonstrate the power of reinforcement learning and the extraordinary ability of its computer programs. However, a huge caveat is often missed: DeepMind's programs have always been narrowly scoped, with little ability to react to changes in their environment and little flexibility.

Another aspect the company has hardly touched is the reward function, the signal that allows the software to measure its progress and that is central to its success in virtual environments. A clean reward function was straightforward to define for AlphaGo, but in the real world progress is rarely measured by a single score and usually varies from sector to sector.
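The contrast is easy to see in code: a board game can expose one unambiguous terminal score, while a real-world deployment has to fold several noisy, sector-specific signals into a single number, and the weights are themselves a judgment call. The signals and weights below are invented purely for illustration, not drawn from any DeepMind system.

```python
def go_reward(game_won: bool) -> float:
    """Board-game setting: one unambiguous score at the end of the episode."""
    return 1.0 if game_won else -1.0

def real_world_reward(energy_saved_kwh: float,
                      downtime_minutes: float,
                      customer_complaints: int,
                      weights=(0.01, -0.5, -2.0)) -> float:
    """Hypothetical data-centre-cooling reward: hand-picked weights combine
    incommensurable quantities, and other sectors would weigh them differently."""
    w_energy, w_downtime, w_complaints = weights
    return (w_energy * energy_saved_kwh
            + w_downtime * downtime_minutes
            + w_complaints * customer_complaints)

print(go_reward(True))                    # 1.0
print(real_world_reward(5_000, 12.0, 1))  # 50.0 - 6.0 - 2.0 = 42.0
```

Change the weights and the "optimal" behaviour changes with them, which is exactly why a single-score formulation that works for Go transfers poorly to messier domains.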

Therefore, for now, deep reinforcement learning can only be used in trusted and controlled environments with few or no changes in the system, which works fine for games like Go but cannot be relied upon for real-world problems. The company therefore has to focus on finding a large-scale commercial application of this technology. So far, the parent company has invested roughly $2 billion in DeepMind, with some financial return, part of which came from applying deep reinforcement learning within Alphabet to reduce power costs for cooling Google's servers.

According to experts and researchers, although the technology works fine for Go, it might not be suitable for the challenging real-world problems that the company aspires to solve with AI. Cutting DeepMind some slack, we all have to agree that no scientific innovation turns profitable overnight. However, the company definitely needs to dig deeper and combine the technology with other techniques to create more stable results.

Even if DeepMind's current strategy is turning out to be less fruitful, nobody can dismiss the company's vision. Although it is taking time to bridge the gap between deep reinforcement learning and artificial intelligence, it's impossible to ignore that the company employs hundreds of PhDs and is well funded. In fact, its successes at Go, Atari, and StarCraft have earned it a promising reputation.

Meanwhile, the substantial cash burn, along with the departure of a high-level executive, has caused turmoil, leaving the subsidiary in deep confusion. DeepMind is supposed to provide AI-related assets to various companies and products under Alphabet; however, Google's in-house AI group, Google Brain, has already started occupying a similar role within the Alphabet ecosystem. This overlap is deepening the company's problems, pushing it to work in silos. In its present condition, DeepMind seems to be at a critical point: it is constantly investing in deep learning research and developing AI assets, but not living up to its potential.


Read the original:

DeepMind Vs Google: The Inner Feud Between Two Tech Behemoths - Analytics India Magazine

Written by admin

December 18th, 2019 at 9:45 pm

Posted in Alphago

