
Archive for the ‘Alphago’ Category

What the hell is reinforcement learning and how does it work? – The Next Web

Posted: November 2, 2020 at 1:56 am



Reinforcement learning is a subset of machine learning. It enables an agent to learn through the consequences of actions in a specific environment. It can be used to teach a robot new tricks, for example.

Reinforcement learning is a behavioral learning model in which the algorithm learns from feedback on its own actions, steering itself toward the best result.

It differs from supervised learning in that no sample data set trains the machine. Instead, it learns by trial and error: a series of right decisions strengthens the approach because it better solves the problem.

Reinforced learning is similar to how we humans learn as children. We all went through this kind of reinforcement: when you started crawling and tried to stand up, you fell over and over, but your parents were there to lift you up and encourage you to try again.

It is teaching based on experience, in which the machine must deal with what went wrong before and look for the right approach.

Although we define the reward policy, that is, the rules of the game, we don't give the model any tips or advice on how to solve it. It is up to the model to figure out how to execute the task to maximize the reward, beginning with random trials and working up to sophisticated tactics.
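The trial-and-error loop described here can be sketched in a few lines of Python. A hypothetical agent repeatedly chooses one of three actions, receives only a reward in return, and gradually learns which action pays off best; the payout probabilities and the epsilon-greedy exploration rate below are invented for illustration.

```python
import random

# Minimal sketch of trial-and-error learning: an epsilon-greedy agent facing
# three actions with hidden payout probabilities. The agent is never told
# which action is best; it discovers this purely from reward feedback.
# The payout probabilities are made up for illustration.

PAYOUTS = [0.2, 0.5, 0.8]   # hidden reward probability of each action
EPSILON = 0.1               # how often the agent explores at random

def pull(action):
    """Environment: returns reward 1 with the action's hidden probability."""
    return 1 if random.random() < PAYOUTS[action] else 0

def train(steps=5000, seed=0):
    random.seed(seed)
    value = [0.0, 0.0, 0.0]  # running estimate of each action's reward
    count = [0, 0, 0]
    for _ in range(steps):
        if random.random() < EPSILON:            # explore: random testing
            action = random.randrange(3)
        else:                                    # exploit: best estimate so far
            action = max(range(3), key=lambda a: value[a])
        reward = pull(action)
        count[action] += 1
        value[action] += (reward - value[action]) / count[action]
    return value

estimates = train()
print(estimates)  # the estimates approach the hidden payout rates
```

Starting from random trials, the agent's value estimates converge toward the true payout rates, and its greedy choice settles on the best action.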

By harnessing the power of search and many trials, reinforcement learning is currently the most effective way to hint at machine creativity. Unlike humans, an artificial intelligence can gather experience from thousands of parallel gameplays when the reinforcement learning algorithm runs on sufficiently robust computer infrastructure.

An example of reinforced learning is the recommendation system on YouTube. After watching a video, the platform shows you similar titles that it believes you will like. However, suppose you start watching the recommendation and do not finish it. In that case, the machine understands that the recommendation was not a good one and will try another approach next time.


Reinforcement learning's key challenge is preparing the simulation environment, which depends heavily on the task to be performed. When the task is playing Chess, Go, or Atari games, preparing the simulation environment is relatively straightforward. Building a model capable of driving an autonomous car is harder: the key is to create a realistic prototype environment before letting the car onto the street, since the model must learn in a safe setting how to brake or avoid a collision. The hard part is then transferring the model out of the training setting and into the real world.

Scaling and modifying the agent's neural network is another problem. There is no way to communicate with the network except through rewards and penalties. This may lead to catastrophic forgetting, where gaining new knowledge causes some of the old knowledge to be erased from the network. In other words, we must keep prior learning in the agent's memory.

Another difficulty is the agent reaching a local optimum: it executes the task, but not in the ideal or required manner. A hopper that jumps like a kangaroo instead of moving the way it was designed to is a good example. Finally, some agents will maximize the reward without actually completing their mission.

Games

RL is so well known today because it is the standard approach used to solve a variety of games, sometimes achieving superhuman performance.

The most famous examples must be AlphaGo and AlphaGo Zero. AlphaGo, trained on countless human games, achieved superhuman performance using Monte Carlo tree search (MCTS) combined with its policy and value networks. The researchers then tried a purer RL approach: training from scratch. They left the new agent, AlphaGo Zero, to play against itself, and it ultimately defeated AlphaGo 100 games to 0.

Personalized recommendations

News recommendation has always faced several challenges, including the dynamics of rapidly changing news, users who tire easily, and a click-through rate that cannot reflect user retention. Guanjie et al. applied RL to the news recommendation system in a paper entitled DRN: A Deep Reinforcement Learning Framework for News Recommendation to tackle these problems.

In practice, they built four categories of features, namely: A) user features and B) context features as state features, and C) user-news interaction features and D) news features as action features. The four feature groups were fed into the Deep Q-Network (DQN) to calculate the Q value. A news list was chosen for recommendation based on the Q value, and the user's clicks on the news were part of the reward the RL agent received.
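That scoring step can be sketched in miniature. In the toy code below, a single random linear layer stands in for the full Deep Q-Network, and all feature values and item names are invented for illustration; only the idea of concatenating the feature groups and ranking news by Q value comes from the paper.

```python
import random

# Toy sketch of DQN-style scoring for news recommendation: state features
# (user + context + user-news interaction) are concatenated with each
# candidate's news features, and a scorer outputs one Q value per candidate.
# A random linear layer stands in for the real Deep Q-Network; every number
# here is invented for illustration.

random.seed(0)

user_features    = [0.3, 0.7]        # A) user features
context_features = [0.1]             # B) context (environment state) features
user_news        = [0.5, 0.2]        # C) user-news interaction features

def q_value(news_features, weights):
    # D) news features are the action side of the input
    x = user_features + context_features + user_news + news_features
    return sum(w * xi for w, xi in zip(weights, x))  # linear stand-in "network"

candidates = {"story_a": [0.9, 0.1], "story_b": [0.2, 0.8], "story_c": [0.4, 0.4]}
weights = [random.uniform(-1, 1) for _ in range(7)]

# Rank candidates by Q value; clicks on the shown list would become reward.
ranking = sorted(candidates, key=lambda k: q_value(candidates[k], weights),
                 reverse=True)
print(ranking)
```

In the real system, the user's clicks on the recommended list feed back into training the network, closing the RL loop.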

The authors also employed other techniques to address the remaining challenges, including memory replay, survival models, Dueling Bandit Gradient Descent, and so on.

Resource management in computer clusters

Designing algorithms to allocate limited resources to different tasks is challenging and requires human-generated heuristics.

The article Resource management with deep reinforcement learning explains how to use RL to automatically learn to allocate and schedule computer resources for waiting jobs, with the objective of minimizing the average job slowdown.

The state space was formulated as the current resource allocation and the resource profile of jobs. For the action space, they used a trick allowing the agent to choose more than one action at each time step. The reward was the sum of (-1 / job duration) over all jobs in the system. They then combined the REINFORCE algorithm with a baseline value to compute the policy gradients and find the policy parameters that give the action probability distribution minimizing the objective.
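A minimal sketch of REINFORCE with a baseline on a toy version of this problem. The policy is a two-action softmax, and the reward is the negative job slowdown, a one-step simplification of the paper's per-timestep sum of (-1 / job duration); the slowdown values and learning rates are invented for illustration.

```python
import math
import random

# REINFORCE with a running-average baseline on a toy scheduling choice:
# action 0 is a poor schedule (high slowdown), action 1 a good one.
# All constants are invented for illustration.

SLOWDOWNS = [5.0, 1.2]            # per-action job slowdown (lower is better)

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def train(steps=2000, lr=0.2, seed=0):
    random.seed(seed)
    theta = [0.0, 0.0]            # policy parameters (one logit per action)
    baseline = 0.0                # running-average baseline reduces variance
    for _ in range(steps):
        probs = softmax(theta)
        action = 0 if random.random() < probs[0] else 1
        reward = -SLOWDOWNS[action]
        baseline += 0.05 * (reward - baseline)
        advantage = reward - baseline
        for i in range(2):        # gradient of log softmax probability
            grad = (1.0 - probs[i]) if i == action else -probs[i]
            theta[i] += lr * advantage * grad
    return softmax(theta)

probs = train()
print(probs)  # probability mass shifts toward the low-slowdown action
```

The baseline subtracts the average reward so updates are driven by whether an action was better or worse than typical, which is exactly the variance-reduction trick the paper relies on.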

Traffic light control

In the article Reinforcement learning-based multi-agent system for network traffic signal control, the researchers tried to design a traffic light controller to solve the congestion problem. Tested only in a simulated environment, their methods showed results superior to traditional approaches and shed light on multi-agent RL's potential uses in the design of traffic systems.

Five agents were placed in a five-intersection traffic network, with an RL agent at the central intersection to control traffic signaling. The state was defined as an eight-dimensional vector, each element representing the relative traffic flow of one lane. Eight choices were available to the agent, each representing a combination of phases, and the reward was defined as the reduction in delay compared with the previous step. The authors used DQN to learn the Q value of {state, action} pairs.

Robotics

There is incredible work on applying RL in robotics. We recommend reading this paper, a survey of RL research in robotics. In this other work, the researchers trained a robot to learn policies mapping raw video images to the robot's actions. The RGB images were fed into a CNN, and the outputs were the motor torques. The RL component was guided policy search, which generates training data from its own state distribution.

Web systems configuration

There are more than 100 configurable parameters in a web system, and adjusting them requires a skilled operator and numerous trial-and-error tests.

The article A Reinforcement Learning Approach to Online Web System Auto-configuration showed the first attempt in the domain to autonomously reconfigure parameters in multi-tier web systems in VM-based dynamic environments.

The reconfiguration process can be formulated as a finite MDP. The state-space was the system configuration; the action space was {increase, decrease, maintain} for each parameter. The reward was defined as the difference between the intended response time and the measured response time. The authors used the Q-learning algorithm to perform the task.
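That Q-learning formulation can be sketched on a toy version of the problem. The one-parameter system, its pretend response-time model, and all constants below are invented for illustration; only the {increase, decrease, maintain} action space, the response-time-based reward, and the Q-learning update follow the setup described above.

```python
import random

# Toy tabular Q-learning for configuration tuning: the state is a single
# discretized parameter setting, the actions are decrease/maintain/increase,
# and the reward penalizes the gap between measured and target response time.
# The response-time model and all constants are invented for illustration.

TARGET = 100.0
LEVELS = 11                       # discretized parameter settings 0..10

def response_time(level):
    """Pretend system: level 7 gives the response time closest to target."""
    return 100.0 + 15.0 * abs(level - 7)

def step(level, action):          # action: 0=decrease, 1=maintain, 2=increase
    level = min(LEVELS - 1, max(0, level + action - 1))
    reward = -abs(TARGET - response_time(level))
    return level, reward

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    random.seed(seed)
    q = [[0.0] * 3 for _ in range(LEVELS)]
    for _ in range(episodes):
        s = random.randrange(LEVELS)
        for _ in range(30):
            if random.random() < eps:
                a = random.randrange(3)
            else:
                a = max(range(3), key=lambda x: q[s][x])
            s2, r = step(s, a)
            # Q-learning update: bootstrap from the best next-state value
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# Following the greedy policy from any state should settle at level 7.
```

After training, reading off the greedy action per state yields a policy that walks the parameter toward the setting whose response time matches the target, then maintains it.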

Although the authors used other techniques, such as policy initialization, to tame the large state space and the computational complexity of the problem, rather than potential combinations of RL and neural networks, it is believed that this pioneering work prepared the way for future research in the area.

Chemistry

RL can also be applied to optimize chemical reactions. In the article Optimizing chemical reactions with deep reinforcement learning, researchers showed that their model outperformed a state-of-the-art algorithm and generalized to different underlying mechanisms.

Combined with an LSTM modeling the policy function, the RL agent optimized the chemical reaction as a Markov decision process (MDP) characterized by {S, A, P, R}, where S was the set of experimental conditions (such as temperature, pH, etc.), A the set of all possible actions that can change the experimental conditions, P the transition probability from the current experimental condition to the next, and R the reward, a function of the state.

The application is excellent for demonstrating how RL can reduce time and trial and error work in a relatively stable environment.

Auctions and advertising

Researchers at Alibaba Group published the article Real-time bidding with multi-agent reinforcement learning in display advertising. They stated that their distributed cluster-based multi-agent bidding solution (DCMAB) achieved promising results, and they therefore plan to test it live on the Taobao platform.

Generally speaking, the Taobao ad platform is a place for merchants to bid to show ads to customers. This is a multi-agent problem because merchants bid against each other and their actions are interrelated. In the article, merchants and customers were clustered into different groups to reduce computational complexity. The agents' state space indicated the agents' cost-revenue status, the action space was the (continuous) bid, and the reward was the revenue of the customer cluster.

Deep learning

More and more attempts to combine RL with other deep learning architectures have appeared recently, with impressive results.

One of RL's most influential works is DeepMind's pioneering effort to combine CNNs with RL. In doing so, the agent can perceive its environment through high-dimensional sensory input and then learn to interact with it.

RNN with RL is another combination people use to try new ideas. An RNN is a type of neural network that has memory; when combined with RL, it gives agents the ability to memorize things. For example, researchers combined LSTM with RL to create a deep recurrent Q-network (DRQN) for playing Atari 2600 games. LSTM with RL was also used to solve the chemical reaction optimization problem mentioned above.

DeepMind showed how to use generative models and RL to generate programs. In the model, the adversarially trained agent used the signal as a reward for improving its actions, rather than propagating gradients to the input space as in GAN training. Incredible, isn't it?

Reinforcement learning works through rewards for the decisions made, so it is possible to learn continuously from interactions with the environment. Each correct action earns a positive reward, and incorrect decisions bring penalties. In industry, this type of learning can help optimize processes, simulations, monitoring, maintenance, and the control of autonomous systems.

Some criteria can be used in deciding where to apply reinforcement learning.

In addition to industry, reinforcement learning is used in various fields such as education, health, finance, image, and text recognition.

This article was written by Jair Ribeiro and was originally published on Towards Data Science. You can read it here.

Published October 27, 2020 10:49 UTC


Written by admin

November 2nd, 2020 at 1:56 am

Posted in Alphago

Investing in Artificial Intelligence (AI) – Everything You Need to Know – Securities.io

Posted: at 1:56 am



Artificial Intelligence (AI) is a field that requires no introduction. AI has ridden the coattails of Moore's Law, which states that the speed and capability of computers can be expected to double every two years. Since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially, doubling every 3 to 4 months, with the end result that the computing resources allocated to AI have grown 300,000-fold since 2012. No other industry can compare with these growth statistics.

We will explore what fields of AI are leading this acceleration, what companies are best positioned to take advantage of this growth, and why it matters.

Machine learning is a subfield of AI that essentially amounts to programming machines to learn. There are multiple types of machine learning algorithms; the most popular by far is deep learning, which involves feeding data into an Artificial Neural Network (ANN). An ANN is a very compute-intensive network of mathematical functions joined together in a format inspired by the neural networks found in the human brain.

The more data that is fed into an ANN, the more precise the ANN becomes. For example, suppose you are training an ANN to identify cat pictures. Feed the network 1,000 cat pictures and it might reach a modest accuracy of perhaps 70%; increase that to 10,000 pictures and accuracy may rise to 80%; at 100,000 pictures it may reach 90%; and so on.
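The point about accuracy growing with data can be demonstrated with a deliberately tiny learner. Instead of cat pictures, the "images" below are numbers in [0, 1] with the true rule "label 1 if x > 0.5" and 10% label noise; the learner just picks the best cut-off on a grid. All settings are invented for illustration.

```python
import random

# A tiny learner whose accuracy improves with more training data.
# True rule: label 1 if x > 0.5; 10% of training labels are flipped
# to simulate noisy data. All settings are invented for illustration.

def make_data(n, rng):
    xs = [rng.random() for _ in range(n)]
    ys = [(1 if x > 0.5 else 0) for x in xs]
    ys = [(1 - y) if rng.random() < 0.1 else y for y in ys]  # 10% label noise
    return xs, ys

def fit_threshold(xs, ys):
    """Learner: pick the cut-off that misclassifies fewest training points."""
    best_t, best_err = 0.0, len(xs) + 1
    for t in [i / 100 for i in range(101)]:
        err = sum((1 if x > t else 0) != y for x, y in zip(xs, ys))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def avg_accuracy(n_train, trials=50):
    rng = random.Random(0)
    total = 0.0
    for _ in range(trials):
        xs, ys = make_data(n_train, rng)
        t = fit_threshold(xs, ys)
        test_x = [i / 1000 for i in range(1000)]     # noise-free test set
        correct = sum((1 if x > t else 0) == (1 if x > 0.5 else 0)
                      for x in test_x)
        total += correct / 1000
    return total / trials

small, large = avg_accuracy(10), avg_accuracy(500)
print(small, large)  # accuracy rises with more training data
```

Averaged over 50 runs, the learner trained on 500 examples recovers a cut-off much closer to 0.5 than the one trained on 10, mirroring the cat-picture progression above.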

Herein lies one of the opportunities, companies that dominate the field of AI chip development are naturally ripe for growth.

There are many other types of machine learning that show promise, such as reinforcement learning, which trains an agent through repeated actions and associated rewards. Using reinforcement learning, an AI system can compete against itself with the aim of improving its performance. For example, a chess program will play against itself repeatedly, each game improving how it plays the next.

Currently the best types of AI use a combination of both deep learning and reinforcement learning in what is commonly referred to as deep reinforcement learning. All of the leading AI companies in the world such as Tesla use some type of deep reinforcement learning.

While there are other types of important machine learning systems that are currently being advanced such as meta-learning, for the sake of simplicity deep learning and its more advanced cousin deep reinforcement learning are what investors should be most familiar with. The companies that are at the forefront of this technological advancement will be best positioned to take advantage of the huge exponential growth we are witnessing in AI.

If there is one differentiator between companies that will succeed and become market leaders and companies that will fail, it is big data. All types of machine learning rely heavily on data science, best described as the process of understanding the world from patterns in data. Here, the AI is learning from data, and the more data, the more accurate the results. There are some exceptions to this rule due to what is called overfitting, but that is a concern AI developers are aware of and take precautions to compensate for.

The importance of big data is why companies such as Tesla have a clear market advantage when it comes to autonomous vehicle technology. Every single Tesla that is in motion and using auto-pilot is feeding data into the cloud. This enables Tesla to use deep reinforcement learning, and other algorithm tweaks in order to improve the overall autonomous vehicle system.

This is also why companies such as Google will be so difficult for challengers to dethrone. Every day that goes by is a day that Google collects data from its myriad products and services, including search results, Google AdSense, Android mobile devices, the Chrome web browser, and even the Nest thermostat. Google is drowning in more data than any other company in the world. And that is not even counting all of the moonshots it is involved in.

By understanding why deep learning and data science matter, we can then infer why the companies below are so powerful.

There are three current market leaders that are going to be very difficult to challenge.

Alphabet Inc is the umbrella company for all Google products which includes the Google search engine. A short history lesson is necessary to explain why they are such a market leader in AI. In 2010, a British company DeepMind was launched with the goal of applying various machine learning techniques towards building general-purpose learning algorithms.

In 2013, DeepMind took the world by storm with various accomplishments, including using deep reinforcement learning to play seven Atari games and surpassing human experts on several of them.

In 2014, Google acquired DeepMind for $500 million. Shortly thereafter, in 2015, DeepMind's AlphaGo became the first AI program to defeat a professional human Go player, and later the first to defeat a Go world champion. For those who are unfamiliar, Go is considered by many to be the most challenging game in existence.

DeepMind is currently considered a market leader in deep reinforcement learning, and Artificial General Intelligence (AGI), a futuristic type of AI with the goal of eventually achieving or surpassing human level intelligence.

We still need to factor in the other types of AI that Google is currently involved in, such as Waymo, a market leader in autonomous vehicle technology second only to Tesla, and the secretive AI systems currently used in the Google search engine.

Google is currently involved in so many levels of AI, that it would take an exhaustive paper to cover them all.

As previously stated, Tesla takes advantage of big data from its fleet of on-road vehicles, collecting data through Autopilot. The more data collected, the more the system can improve using deep reinforcement learning. This is especially important for what are deemed edge cases: scenarios that don't happen frequently in real life.

For example, it is impossible to predict and program in every type of scenario that may happen on the road, such as a suitcase rolling into traffic, or a plane falling from the sky. In this case there is very little specific data, and the system needs to associate data from many different scenarios. This is another advantage of having a huge amount of data, while it may be the first time a Tesla in Houston encounters a scenario, it is possible that a Tesla in Dubai may have encountered something similar.

Tesla is also a market leader in battery technology and in electric vehicle technology. Both rely on AI systems to optimize a vehicle's range before a recharge is required. Tesla is known for its frequent over-the-air updates with AI optimizations that improve the performance and range of its vehicle fleet by a few percentage points.

As if this were not sufficient, Tesla is also designing its own AI chips. This means it is no longer reliant on third-party chips and can optimize its silicon from the ground up to work with its full self-driving software.

NVIDIA is the company best positioned to take advantage of the current rise in demand for GPU (graphics processing unit) chips, as it is currently responsible for about 80% of all GPU sales.

While GPUs were initially used for video games, they were quickly adopted by the AI industry, specifically for deep learning. The reason GPUs are so important is that AI computations are greatly accelerated when they are carried out in parallel. Training a deep learning ANN depends heavily on matrix multiplications, which is exactly where parallelism pays off.
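The reason matrix multiplication parallelizes so well is that every output element is an independent dot product, so all of them can in principle be computed at once. A plain sequential sketch makes that independence visible (a GPU would run each (i, j) cell concurrently):

```python
# Plain matrix multiplication. Each out[i][j] depends only on row i of a and
# column j of b, so the two outer loops could run fully in parallel; this
# independence is what GPUs exploit.

def matmul(a, b):
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
print(matmul(a, b))  # [[19, 22], [43, 50]]
```

For two n-by-n matrices there are n-squared such independent dot products, which is why throwing thousands of GPU cores at the problem scales so well.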

NVIDIA is constantly releasing new AI chips that are optimized for different use cases and requirements of AI researchers. It is this constant pressure to innovate that is maintaining NVIDIA as a market leader.

It is impossible to list all of the companies involved in some form of AI; what is important is understanding the machine learning technologies responsible for most of the innovation and growth the industry has witnessed. We have highlighted three market leaders, and many more will come along. To keep abreast of AI, stay current with AI news, avoid AI hype, and understand that this field is constantly evolving.



How to Understand if AI is Swapping Civilization – Analytics Insight

Posted: October 3, 2020 at 5:57 am



What if we wake up one morning to the news that a super-powerful AI has emerged with disastrous consequences? Nick Bostrom's Superintelligence and Max Tegmark's Life 3.0 both argue that malevolent superintelligence is an existential risk for humanity.

Rather than endless speculation, it's better to ask a more concrete, empirical question: what would warn us that superintelligence is indeed at the doorstep?

If an AI program develops fundamental new capabilities, thats the equivalent of a canary collapsing.

AI's performance in games like Go, poker, or Quake 3 is not such a canary. The bulk of the work in those games is done by humans who frame the problem and design the solution. The credit for AlphaGo's victory over human Go champions belongs to the talented human team at DeepMind; the machine merely ran the algorithm the people had created. This explains why it takes years of hard work to translate AI success from one narrow challenge to the next. Techniques such as deep learning are general, but their successful application to a particular task requires extensive human intervention.

Over the past decades, AI's core success has been machine learning, yet the term machine learning is a misnomer. Machines possess only a narrow sliver of humans' versatile learning abilities. Saying machines learn is like saying baby penguins know how to fish. The reality is that adult penguins swim, catch fish, digest it, regurgitate it into their beaks, and place morsels into their children's mouths. Similarly, human scientists and engineers are spoon-feeding AI.

In contrast to machine learning, human learning maps personal motivation to a strategic learning plan. For example: I want to drive to be independent of my parents (personal motivation), so I take driver's ed and practice on weekends (strategic learning). An individual formulates specific learning targets and collects and labels data. Machines cannot even remotely replicate these human abilities. Machines can perform superhuman statistical calculations, but that is merely the last mile of learning.

The automated formulation of learning problems is our first canary, and it does not seem anywhere close to dying.

The second canary is self-driving cars. As Elon Musk has speculated, these are the future. Artificial intelligence can fail catastrophically in atypical circumstances, such as when a person in a wheelchair crosses the street. Driving is more challenging than earlier AI tasks because it requires making life-critical, real-time decisions based on the unpredictable physical world and interaction with pedestrians, human drivers, and others. We should deploy a limited number of self-driving cars as soon as they reduce accident rates, but this canary will only have keeled over once human-level driving is achieved.

Artificial intelligence doctors are the third canary. AI can already analyse medical images with superhuman accuracy, but that is only a small slice of a human doctor's job. An AI doctor's responsibilities would include interviewing patients, considering complications, consulting other doctors, and so on. These are challenging tasks that require understanding people, language, and medicine. Such a doctor would not have to fool a patient into thinking it is human, which is why this differs from the Turing test. What it would share with a human doctor is the ability to handle a wide range of tasks in unanticipated situations.

One of the worlds most prominent AI experts, Andrew Ng, has stated, Worrying about AI turning evil is a little bit like worrying about overpopulation on Mars.



In the Know – UCI News

Posted: at 5:57 am



Monish Ramadoss, a computer science & engineering major with a keen interest in artificial intelligence, recalls hearing about the then-new AI@UCI Club in spring 2017. That first meeting, held in a windowless basement room of the Donald Bren School of Information & Computer Sciences, attracted a dozen or so like-minded undergraduates who wanted to discuss machine learning, computer vision and other aspects of AI.

Ramadoss kept attending the meetings for the new friendships he made as well as the stimulating subjects covered.

"The idea of the club was to create an open forum for everybody to communicate their ideas about machine learning and other different topics, an area where people could come together and develop new projects or even just connect," says Ramadoss, now a senior and one of 15 leaders of the AI@UCI Club.

The student-run organization currently has a mailing list of around 2,000 members.

During the academic year, 100 or so students gather twice a month for free workshops in which they get hands-on experience designing such things as chatbots (think Alexa and other services that simulate conversations with humans by leveraging AI and natural language processing) and fake online dating profiles (to illustrate the concept of generative adversarial networks; see definitions below). Once a quarter, guests speak at AI@UCI Club meetings about real-world applications of artificial intelligence.

Anthony Luu, a computer science major who, along with Ramadoss, is a group leader (they call themselves mentors), is interested in an AI-related career in the medical or cybersecurity field. At one club seminar, he talked about an app he's developing for UCI Dining Services: "UCI is focused on zero waste. I'm working on an app that allows users to take a picture of their trash, and then the app will tell them what bin to throw it in, along with instructions on how to properly dispose of it."

Club President Iman Amy Elsayed, a fifth-year computer science & engineering major, works closely with the group's academic adviser, Alex Ihler, a professor of computer science who teaches many of UCI's undergraduate machine learning classes.

"AI encompasses so many different fields," Elsayed says, "but at the end of the day, it's all math. And I really enjoy math."

Says Ihler: "The students approached me. The undergraduates wanted to have a club so they could get together and learn about different projects. A few years ago, AI clubs at universities were somewhat unusual. One reason is that machine learning in the past has not been very accessible to undergraduates. Now it's starting to become more accessible and more relevant."

Shivan Vipani, a junior majoring in computer science, joined the AI@UCI Club when he was a freshman. "I was interested in the workshops, and I wanted a little head start on being introduced to the world of AI," Vipani says. "I thought it'd be a cool way to learn and get my hands dirty."

To Andrew Laird, another club member and computer science major, AI "feels like magic." He adds: "It's super impressive what people have done with AI on the internet. When you get down to it, it's all math and such, but it's nice to be the magician."

With AI terminology a veritable alphabet soup of head-scratchers for the uninitiated, UCI Magazine asked mentors of the AI@UCI Club to explain in their own words what some of the key phrases mean.

Iman Amy Elsayed
Fifth-year senior, computer science & engineering
Career plans: computer vision/robotics engineer
Mantra: "Oreos are life."

Artificial Intelligence

Artificial intelligence is a broad term meaning that a machine demonstrates intelligence similar to that of humans or animals. The field encompasses several subfields, such as computer vision, machine learning and natural language processing. While AI in the media often depicts a doomsday scenario, such as in The Terminator, machines don't have their own conscience and really only excel at what the programmer tells them to do. I like to describe programs as really smart 5-year-olds: they do exactly what you tell them to do.

Computer Vision

(object recognition and visual understanding) Face ID on iPhones is a popular application of computer vision, which allows technology to make sense of images, such as recognizing objects. When you open your iPhone, computer vision allows the phone to see you and unlock the phone once it recognizes you.

Jason Kahn
Fifth-year senior, computer science and business information management
Career plans: software and development and operations engineer
Mantra: "There is a time and place for decaf: never and in the trash."

Evolutionary Computation

(genetic algorithms, genetic programming) Think of this as a computerized version of Darwinism. Evolutionary computation refers to a machine learning method of optimization and learning inspired by genetics and evolution. Starting with a large group of possible solutions, we take the characteristics of the best-performing ones and produce a new set to eventually end up with the fittest. It's much like finding the best recipe for homemade cake: you try multiple different methods and find which aspects of each attempt produce the best-tasting cake. On each subsequent attempt, you only use methods that you've found create the best flavor. Over time, you end up with the perfect cake, making all your friends crown you the Cake Master.
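The cake analogy maps directly onto a minimal genetic algorithm. The sketch below evolves bit strings whose fitness is simply the number of 1s in the string; the population size, mutation rate, and fitness function are all invented for illustration.

```python
import random

# Minimal genetic algorithm: evolve bit strings toward all-1s.
# Selection keeps the best half, crossover mixes two parents, and a small
# mutation rate keeps exploring. All sizes and rates are invented.

GENES, POP, GENERATIONS = 20, 30, 60

def fitness(bits):
    return sum(bits)                      # "taste test": more 1s is better

def evolve(seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: POP // 2]         # selection: keep the best half
        children = []
        while len(children) < POP - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, GENES)         # crossover point
            child = a[:cut] + b[cut:]
            for i in range(GENES):                # mutation
                if rng.random() < 0.02:
                    child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # approaches the maximum of 20
```

Each generation keeps the best-tasting "recipes," recombines them, and tweaks them slightly, exactly the keep-what-works loop described above.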

Omkar Pathak
Senior, computer science & engineering
Career plans: machine learning software engineer
Mantra: "Enjoy what you do and do what you enjoy."

Speech Processing

(speech recognition and production) Virtual assistant technologies like Alexa, Siri or Google Assistant are great examples of speech processing AIs. When a user says Hey Siri, the iPhone employs speech recognition to understand what the user said. Then the AIs response is converted back to sound using speech synthesis, giving a more natural way to interact with your phone.

Shivan Vipani
Junior, computer science
Career plans: machine learning engineer
Mantra: "If you aren't happy doing it, is it worth it?"

Natural Language Processing

(machine translation) Natural language processing is the intersection of computer science and linguistics, dealing with how computers process and analyze human language. When your phone guesses what you'll type next or Google answers a question for you, that's NLP hard at work to understand your sentence structure and the inherent meaning behind it. NLP is how systems like chatbots and Google Translate are able to run. As the volume of text information increases over time, NLP will play a key role in making sense of it all.

Explainable AI

Currently, AI is a black box: we don't always understand how an AI program makes a decision or comes to a conclusion. Explainable AI tries to extract and communicate why the AI program makes its decision in a way that humans can understand. Amy Elsayed

Andrew Laird
Senior, computer science
Career plans: machine learning software engineer
Mantra: "Work hard, play hard."

Reinforcement Learning

(scheduling, game playing) Reinforcement learning is the process of learning by interacting with an environment. Games are excellent examples of reinforcement learning. In 2016, Google DeepMind's AI, AlphaGo, beat the human world champion of Go, an ancient Chinese board game significantly more complex than chess. However, RL techniques are being applied to more than just games. Researchers are using RL to control robotic systems, optimize business strategies and even predict protein folding, which helps biologists fighting diseases, including COVID-19.

Machine Learning

Machine learning is a subfield of AI that uses statistical methods to make AI perform better with experience, or examples. We can teach a machine to recognize puppies by showing it many images of puppies and non-puppies (an example of supervised learning). The more examples of puppies we feed the machine to learn on, the better it will be at recognizing a puppy in a new image. Amy Elsayed

Anthony Luu, senior, computer science. Career plans: use machine learning in the medical or cybersecurity field. Mantra: "Learn something new every day."

Supervised Learning

Most of the machine learning that you hear people talk about is supervised machine learning. Supervised learning uses labeled data to learn a mapping between observable information, or features (x), and an output (y). This mapping allows the algorithm to take in something it's never seen before and apply what it learned from the data to produce a prediction of the correct output.
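The x-to-y mapping can be made concrete with a tiny example: fitting a line to made-up labeled data, then predicting on an input the model has never seen. The numbers below are invented for illustration:

```python
# Minimal supervised learning: fit y = w*x + b to labeled examples.
# Features (x) and labels (y) are made-up training data, roughly y = 2x.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]

# Closed-form least-squares solution for a single feature.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - w * mean_x

# The learned mapping generalizes to an input it has never seen.
def predict(x):
    return w * x + b

print(round(predict(6.0), 2))  # close to 12, since the data follows y = 2x
```

The more (and cleaner) examples we feed in, the better the learned mapping, which is the same principle behind the puppy-recognition example above, just with pixels as features.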

Satyam Sam Tandon, fifth-year senior, informatics, with a minor in statistics. Career plans: data scientist/machine learning engineer. Mantra: "There are those who think pineapple goes on pizza and then there are those who are wrong."

Data Mining

Data mining is pretty great, despite its bad rap in popular media. (Remember how Target got slammed for being able to deduce, based on her purchases, that a teenage shopper was pregnant before her father knew it?) Instead of mining the Earth, data mining operates on large raw datasets, and instead of pickaxes, it uses tools like machine learning and statistics. In both, the goal is to chip away the surface and uncover hidden value in this case, patterns, trends and information. Such information can then be used for tasks ranging from providing better music recommendations to more accurately detecting cancer in X-rays.

Monish Ramadoss, senior, computer science & engineering. Career plans: kernel engineer for AI accelerators. Mantra: "Live life and drink coffee."

Generative Adversarial Networks

Think of the classic cops-and-robbers scenario. In games, AI agents can use self-play to improve their skills. Generative adversarial networks apply a similar idea to generative modeling. We build two neural networks: a generator to make fake outputs and a discriminator to evaluate how real they look. Together they form a cops-and-robbers relationship in which the generator tries to fool the discriminator by creating ever more realistic outputs and the discriminator searches for ways to tell the difference. An example application is image upscaling, where a generator is trained to take a low-resolution image and upscale it to a higher resolution; the discriminator makes sure that the results look realistic.

Anurag Sengupta, first-year graduate student, computer science. Career plans: building software for machine learning applications. Mantra: "Love for sweets for any mood I happen to be in."

Recurrent Neural Networks

Recurrent neural networks are robust learning models for sequential data that have found applications in various complex systems, from autonomous vehicles and robotics to everyday business analytics. An RNN predicts the future states of a sequence based on what came earlier, which is why Google Translate, Google Finance and Amazon's Alexa have all used such models at the heart of their software.
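The defining trait of an RNN, a hidden state carried from one step to the next, can be sketched in a few lines. The scalar weights here are fixed toy values, not trained parameters:

```python
import math

# Toy RNN cell: the hidden state h mixes the current input with the previous
# state, so the output at each step depends on everything seen so far.
w_xh, w_hh, w_hy = 0.5, 0.8, 1.0  # input->hidden, hidden->hidden, hidden->output

def rnn_run(sequence):
    h = 0.0
    outputs = []
    for x in sequence:
        h = math.tanh(w_xh * x + w_hh * h)  # update hidden state
        outputs.append(w_hy * h)            # readout at this step
    return outputs

# The same input value produces different outputs depending on history,
# which is exactly what a feed-forward network cannot do.
out = rnn_run([1.0, 1.0, 1.0])
print(out)
```

Real systems stack many such cells (or gated variants like LSTMs) with learned weight matrices, but the recurrence is the same.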

View post:

In the Know - UCI News

Written by admin

October 3rd, 2020 at 5:57 am

Posted in Alphago

Test your Python skills with these 10 projects – Best gaming pro

Posted: at 5:57 am


without comments

Did you know Python is called an all-rounder programming language? It is, though it shouldn't be used on every single project. You can use it to create desktop applications, games, mobile apps, websites, and system software. It's even one of the most suitable languages for implementing artificial intelligence and machine learning algorithms.

So, I spent the past few weeks gathering unique project ideas for any Python developer. These project ideas will hopefully bring back your interest in this wonderful language. The best part is that you can improve your Python programming skills with these fun but challenging projects.

Let's take a look at them one by one:

Today, great progress has been made in the field of desktop software development. You will see many drag-and-drop GUI builders and speech recognition libraries. So, why not join them together and create a user interface by talking to the computer?

This is a genuinely new idea, and after some research I found that no one has ever tried it. So, it may be a bit more challenging than the projects mentioned below.

Here are some instructions to get started on this project using Python. First of all, you need these packages:

Now, the idea is to hardcode some speech commands like:

Since this is going to be a Minimum Viable Product (MVP), it's completely fine if you have to hardcode many conditional statements. You get the point, right? It's very simple to add more commands like these.
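The hardcoded command handling can be sketched as a small dispatch table. The command phrases and widget actions below are invented for illustration; a real version would feed this function text produced by a speech recognition package:

```python
# Hardcoded command dispatch for the voice-driven GUI builder idea.
# We assume a speech recognizer has already turned audio into text.

def add_button(arg):
    return f"<button>{arg}</button>"

def add_textbox(arg):
    return f"<input name='{arg}'>"

COMMANDS = {
    "add a button": add_button,
    "add a text box": add_textbox,
}

def handle(transcript):
    # Match a known command prefix; pass the remainder as an argument.
    for phrase, action in COMMANDS.items():
        if transcript.startswith(phrase):
            arg = transcript[len(phrase):].strip() or "unnamed"
            return action(arg)
    return "unknown command"

print(handle("add a button Login"))  # -> <button>Login</button>
```

Adding a new command is just one more entry in the dictionary, which is what makes the MVP approach workable.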

After setting up some basic commands, it's time to test the code. For now, you can try to build a very basic login form in a window.

The main flexibility of this idea is that it can be implemented for game development, websites, and mobile apps, even in different programming languages.

Betting is an activity where people predict an outcome and, if they're right, receive a reward in return. Simple, right? Now, many technological advances have happened in artificial intelligence and machine learning over the past few years.

For example, you might have heard about programs like AlphaGo Master, AlphaGo Zero, and AlphaZero that can play the board game Go better than any professional human player. You can even get the source code of a similar program called Leela Zero.

The point I want to make is that AI is getting smarter than us. That means it can predict something better by taking into account all the possibilities and learning from past experiences.

Let's apply some supervised learning concepts in Python to create an AI betting bot. Here are some libraries you need to get started:

To begin, you need to select a game (e.g. tennis, soccer, etc.) for predicting the outcomes. Then, search for historical match results data that can be used to train the model.

For example, the data of tennis matches can be downloaded in .csv format from the tennis-data.co.uk website.

In case you're not familiar with betting, here's how it works.

After training the model, we have to compute the confidence level for each prediction, find out the performance of our bot by checking how many times the prediction was right, and finally keep an eye on the return on investment (ROI).

Download a similar open-source AI Betting Bot Project by Edouard Thomas.
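The workflow above (predict, attach a confidence level, backtest, compute ROI) can be sketched on made-up match data. The players, odds, and the naive win-rate "model" are all placeholders for a real trained model and a real results .csv:

```python
# Sketch of the betting-bot workflow on invented match results.
# A real version would load a .csv of historical matches instead.
matches = [
    {"winner": "A", "loser": "B", "odds_A": 1.5},
    {"winner": "A", "loser": "B", "odds_A": 1.6},
    {"winner": "B", "loser": "A", "odds_A": 1.4},
    {"winner": "A", "loser": "B", "odds_A": 1.5},
]

# Placeholder "model": head-to-head win rate, reused as the confidence level.
wins_a = sum(m["winner"] == "A" for m in matches)
confidence = wins_a / len(matches)
prediction = "A" if confidence >= 0.5 else "B"

# Backtest: stake 1 unit on the prediction in every match, then compute ROI.
staked, returned = 0.0, 0.0
for m in matches:
    staked += 1.0
    if m["winner"] == prediction:
        returned += m["odds_A"]  # payout at the bookmaker's odds
ROI = (returned - staked) / staked
print(prediction, confidence, round(ROI, 2))
```

Swapping the win-rate placeholder for a proper classifier trained on match features is where the supervised learning from the previous section comes in.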

A trading bot is very similar to the previous project because it also requires AI for prediction. Now the question is whether an AI can correctly predict the fluctuation of stock prices. And the answer is yes.

Before getting started, we need some data to develop a trading bot:

These resources from Investopedia might help in training the bot:

After reading both of these articles, you'll have a better understanding of when to buy stocks and when not to. This knowledge can easily be transformed into a Python program that automatically makes the decision for us.

You can also take a reference from this open-source trading bot called freqtrade. It's built using Python and implements several machine learning algorithms.

This idea is taken from the Hollywood movie series Iron Man. The movies revolve around technology, robots, and AI.

Here, Iron Man has built a virtual assistant for himself using artificial intelligence. The program is called Jarvis, and it helps Iron Man with everyday tasks.

Iron Man gives instructions to Jarvis in plain English, and Jarvis responds in English too. That means our program will need speech recognition as well as text-to-speech functionality.

I'd recommend using these libraries:

For now, you can hardcode the speech commands like:

Just as you set an alarm on your phone, you can also use Jarvis for tons of other tasks like:

Even Mark Zuckerberg has built a Jarvis as a side project.

Songkick is a very popular service that provides information about upcoming concerts. Its API can be used to search for upcoming concerts by:

You can create a Python script that checks for a specific concert every day using Songkick's API. With all this set up, you can send an email to yourself whenever the concert becomes available.

Sometimes Songkick even displays a buy tickets link on its website. However, this link may point to a different website for different concerts, which makes it very difficult to purchase tickets automatically, even with web scraping.

Instead, we can simply display the buy tickets link in our application for manual action.

Let's Encrypt is a certificate authority that offers free SSL certificates. The catch is that each certificate is only valid for 90 days, after which you have to renew it.

In my opinion, this is a great scenario for automation using Python. We can write some code that automatically renews a website's SSL certificate before it expires.

Take a look at this code on GitHub for inspiration.
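A minimal sketch of the expiry check that would drive such automation, using only the standard library. The date format matches what `ssl.getpeercert()` returns; the actual renewal step would still be handed off to a tool such as certbot, which is an assumption, not something the article specifies:

```python
import datetime
import socket
import ssl

# Parse the notAfter field as returned by ssl.getpeercert(),
# e.g. 'Jun  1 12:00:00 2030 GMT', and report days until expiry.
def days_until_expiry(not_after):
    expires = datetime.datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return (expires - datetime.datetime.utcnow()).days

def cert_not_after(host, port=443):
    # Fetch the certificate from a live server (requires network access).
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["notAfter"]

# A daily cron job could then be as simple as:
# if days_until_expiry(cert_not_after("example.com")) < 30: run_renewal()
print(days_until_expiry("Jun  1 12:00:00 2030 GMT"))
```

Running the check well before the 90-day deadline (e.g. at 30 days remaining) leaves room to retry if a renewal fails.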

Today, governments have installed surveillance cameras in public places to increase the safety of their citizens. Most of these cameras simply record video, and forensic experts then have to manually recognize or trace a person.

What if we create a Python program that recognizes each person on camera in real time? First of all, we would need access to a national ID card database, which we obviously don't have.

So, an easy option is to create a database with your family members' information.

You can then use a face recognition library and connect it to the output of the camera.

Contact tracing is a way to identify all the people who have come into contact with each other during a specific time period. It's especially useful in a pandemic like COVID-19 because, without any knowledge of who's infected, we can't stop its spread.

Python can be used with a machine learning algorithm called DBSCAN (Density-Based Spatial Clustering of Applications with Noise) for contact tracing.

As this is only a side project, we don't have access to any official data. For now, it's better to generate some realistic test data using Mockaroo.

You can have a look at this article for the specific code implementation.
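Since the linked article supplies the full implementation, here is just a dependency-free, simplified DBSCAN-style sketch on invented coordinates (the names and positions are made up; real input would be timestamped GPS fixes):

```python
import math

# Minimal DBSCAN-style clustering for contact tracing on made-up positions.
# People whose recorded positions fall within `eps` of a dense group share a
# cluster id; isolated points are labeled -1 (no contact).
points = {
    "alice": (0.0, 0.0),
    "bob":   (0.1, 0.0),
    "carol": (0.0, 0.1),
    "dave":  (5.0, 5.0),   # far away from everyone
}

def dbscan(pts, eps=0.5, min_pts=2):
    names = list(pts)
    labels = {n: None for n in names}
    cluster = 0

    def neighbors(n):
        return [m for m in names if math.dist(pts[n], pts[m]) <= eps]

    for n in names:
        if labels[n] is not None:
            continue
        nbrs = neighbors(n)
        if len(nbrs) < min_pts:
            labels[n] = -1          # noise: no close contacts
            continue
        labels[n] = cluster
        queue = [m for m in nbrs if m != n]
        while queue:                # expand the cluster through dense points
            m = queue.pop()
            if labels[m] in (None, -1):
                labels[m] = cluster
                m_nbrs = neighbors(m)
                if len(m_nbrs) >= min_pts:
                    queue.extend(q for q in m_nbrs if labels[q] is None)
        cluster += 1
    return labels

labels = dbscan(points)
print(labels)
```

Everyone who shares a cluster id with an infected person becomes a tracing candidate; the -1 labels are people with no recorded close contacts.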

Nautilus File Manager in Ubuntu

This is a very basic Python program that keeps monitoring a folder. Whenever a file is added to that folder, the program checks its type and moves it to a matching folder.

For example, we can track our downloads folder. When a new file is downloaded, it will automatically be moved into another folder according to its type.

.exe files are most likely software setups, so move them into a software folder, while images (png, jpg, gif) go into an images folder.

This way, we can organize different types of files for quick access.
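A minimal sketch of the sorting logic using only the standard library. The extension-to-folder mapping is an assumption; a real version would run this from a loop or a filesystem watcher (e.g. the watchdog package) pointed at the downloads folder:

```python
import shutil
from pathlib import Path

# Map file extensions to destination subfolder names (illustrative choices).
DESTINATIONS = {
    ".exe": "software",
    ".png": "images", ".jpg": "images", ".gif": "images",
    ".pdf": "documents",
}

def organize(folder):
    folder = Path(folder)
    moved = []
    for item in folder.iterdir():
        if not item.is_file():
            continue
        dest_name = DESTINATIONS.get(item.suffix.lower())
        if dest_name is None:
            continue  # leave unknown file types alone
        dest_dir = folder / dest_name
        dest_dir.mkdir(exist_ok=True)
        shutil.move(str(item), str(dest_dir / item.name))
        moved.append(item.name)
    return moved
```

Calling `organize("~/Downloads")` on a schedule (cron, or a `while True` loop with a sleep) gives the behavior described above.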

Create an application that accepts the names of the skills we need to learn for a career.

For example, to become a web developer, we need to learn:

After entering the skills, there will be a Generate Career Path button. It instructs our program to search YouTube and select relevant videos or playlists for each skill. If there are many relevant videos for a skill, it will pick the one with the most views, comments, and likes.

The program then groups these videos according to skills and displays their thumbnails, titles, and links in the GUI. It will also analyze the duration of each video, aggregate them, and tell us how much time it will take to learn this career path. As a user, we can then watch these videos, ordered in a step-by-step manner, to master this career.

Challenging yourself with unique programming projects keeps you active, improves your skills, and helps you discover new possibilities. Some of the project ideas mentioned above could even be used as your final-year project. It's time to show your creativity with the Python programming language and turn these ideas into something you're proud of.

This article was originally published on Live Code Stream by Juan Cruz Martinez (twitter: @bajcmartinez), founder and author of Live Code Stream, entrepreneur, developer, author, speaker, and doer of things.

Live Code Stream is also available as a free weekly newsletter. Sign up for updates on everything related to programming, AI, and computer science in general.

Here is the original post:

Test your Python skills with these 10 projects - Best gaming pro

Written by admin

October 3rd, 2020 at 5:57 am

Posted in Alphago

Is Dystopian Future Inevitable with Unprecedented Advancements in AI? – Analytics Insight

Posted: June 26, 2020 at 9:42 am


without comments

Artificial super-intelligence (ASI) is a software-based system with intellectual powers beyond those of humans across an almost comprehensive range of categories and fields of endeavor.

The reality is that AI has been with us for a long time now, ever since computers were able to make decisions based on inputs and conditions. When we see a threatening artificial intelligence system in the movies, it's the malevolence of the system, coupled with the power of some machine, that scares people.

However, it still behaves in fundamentally human ways.

The kind of AI that prevails today can be described as artificial functional intelligence (AFI). These systems are programmed to perform a specific role and to do so as well as or better than a human. They have also become successful at this in a shorter period than anyone predicted, for example beating human opponents in complex games like Go and StarCraft II, which knowledgeable people thought wouldn't happen for years, if not decades.

However, AlphaGo might beat every single human Go player handily from now until the heat death of the universe, yet if you ask it for the current weather conditions, the machine lacks the intelligence of even single-celled organisms that respond to changes in temperature.

Moreover, the prospect of limitless expansion of technology granted by the development of artificial intelligence is certainly an inviting one. While investment and interest in the field grow with every passing year, one can only imagine what might be to come.

Dreams of technological utopias granted by super-intelligent computers are contrasted with those of an AI-led dystopia, and with many top researchers believing the world will see the arrival of AGI within the century, it is down to the actions people take now to influence which future they might see. While some believe that only Luddites worry about the power AI could one day hold over humanity, the reality is that most top AI academics share a similar concern about its grimmer potential.

It's high time people understood that no one is going to get a second attempt at powerful AI. Unlike other groundbreaking developments for humanity, if it goes wrong there is no opportunity to try again and learn from the mistakes. So what can we do to ensure we get it right the first time?

The trick to securing an ideal artificial intelligence utopia is ensuring that its goals do not become misaligned with those of humans. AI would not become evil in the sense that many fear; the real issue is making sure it understands our intentions and goals. AI is remarkably good at doing what humans tell it, but when given free rein, it will often achieve the goal humans set in a way they never expected. Without proper preparation, a well-intended instruction could lead to catastrophic events, perhaps due to an unforeseen side effect or, in a more extreme example, because the AI comes to see humans as a threat to fully completing the task it was set.

The potential benefits of super-intelligent AI are so great that there is no question the development toward it will continue. However, to prevent AGI from becoming a threat to humanity, people need to invest in AI safety research. In this race, one must learn how to effectively control a powerful AI before its creation.

The issue of ethics in AI, super-intelligent or otherwise, is being addressed to a certain extent, evidenced by the development of ethical advisory boards and executive positions to manage the matter directly. DeepMind has such a department in place, and international oversight organizations such as the IEEE have also created specific standards intended for managing the coexistence of highly advanced AI systems and the human beings who program them. But as AI draws ever closer to the point where super-intelligence is commonplace and ever more organizations adopt existing AI platforms, ethics must be top of mind for all major stakeholders in companies hoping to get the most out of the technology.

Smriti is a Content Analyst at Analytics Insight. She writes Tech/Business articles for Analytics Insight. Her creative work can be confirmed @analyticsinsight.net. She adores crushing over books, crafts, creative works and people, movies and music from eternity!!

The rest is here:

Is Dystopian Future Inevitable with Unprecedented Advancements in AI? - Analytics Insight

Written by admin

June 26th, 2020 at 9:42 am

Posted in Alphago

Enterprise hits and misses – contactless payments on the rise, equality on the corporate agenda, and Zoom and Slack in review – Diginomica

Posted: June 8, 2020 at 4:47 pm


without comments

Lead story - The future of hands-free commerce - is COVID-19 the catalyst?

MyPOV: Overseas travelers to the U.S. have noted that the U.S. is taking its sweet @ss time on contactless commerce (not exactly out in front, in other words). But is that finally changing? As Chris notes in Is COVID-19 the catalyst for tapping into a contactless payment revolution in the US?:

In contrast, figures out this week in the U.K. from U.K. Finance... revealed that 80% of people made a contactless purchase in 2019, up from 69% the year before. That is, of course, pre COVID-19, which is likely to prompt a further uptick.

Industry giants see an opening. Stuart picks the story up in Tracking contactless - how Visa and Mastercard are planning for a COVID-19 bump for hands-free digital commerce. Health needs and CX converge:

Leaving the public health implications to one side, a shift to contactless tech also provides financial services providers and retail merchants with a better customer experience.

But a so-called "contactless revolution" can widen the digital divide - not exactly the type of tension we need in the U.S. right now. Chris puts it well:

The challenge remains the extent to which digitally-excluded customers and the unbanked may find themselves living in a cashless society by default, perhaps locked out from being able to pay for some goods and services.

There are potential solutions to these problems, e.g. contactless payment cards bought with cash. As with most things tech, a good rollout calls for a thoughtful design.

Diginomica picks - my top stories on diginomica this week

Vendor analysis, diginomica style. It was a news blowout from collaboration economy darlings, each with their dilemmas and upsides:

Meanwhile, Derek's ServiceNow Knowledge 2020 coverage caravan rolls out, with the fun of a sit-down with quote machine and CEO Bill McDermott:

A couple more vendor picks, without the quotables:

Jon's grab bag - Neil examines the regulatory resistance to telemedicine in Telemedicine adoption amidst a pandemic - can we overcome the barriers?. Sooraj documents how a company avoided ransomware by taking a wake-up call to heart in Aston Martin CIO - WannaCry pushed us into a cyber security refresh.

Guest contributor Simon Griffiths shares How to re-engineer business processes in uncertain times. Uncle Den opens up the digi-kimono, and details how (not) to make your core team nuts to deliver platform upgrades in record time in How not to drive users and developers crazy.

Genuine change is about action, not platitudes or P.R. festivals. Ergo, I enjoyed Jason Corsello's Diversity & The Future of Work: We Can No Longer Sit on the Sidelines! Corsello has a similar ax to grind and wants to see companies push for corporate change as well:

As leaders of people and organizations, those same executives can stand up to racism by the examples they create in their own companies.

Where to get started? That can be an excuse or a legit area of question. To counter this, Corsello runs through ten action steps, from addressing pay inequity to rolling out mentoring programs, a la Slack's "Rising Tides" for diverse, emerging leaders. I doubt any organization could give themselves a solid grade on all ten - including diginomica. We all have work to do, but it's the right work.

Honorable mention

Speaking of incredibly exciting developments in A.I. (or rather, respect for the bruising lessons of tech history), I got a kick out of this weekend's discovery:

But hey, there's good news: A.I. has come a long way from 1972, making our workplaces so much better:

Speaking of the future, McKinsey got way ahead of themselves with this extravagant headline:

Without doubt, the most concerning whiff of the week: The May jobs report had 'misclassification error' that made the unemployment rate look lower than it is. Here's what happened. Thankfully, even after the three percentage point error, the news was still better than expected, but that's a market whopper nonetheless.

I need to leave you with a lighter headline than that. How about Bill Would Prevent the President from Nuking Hurricanes.

Not quite light enough? Okay, I'll revert to animals. How about this video of a pet cockatoo strenuously objecting, in many languages known and unknown, about a pending trip to the vet? See if that doesn't put a smile on your Monday. Catch you next time...

If you find an #ensw piece that qualifies for hits and misses - in a good or bad way - let me know in the comments as Clive (almost) always does. Most Enterprise hits and misses articles are selected from my curated @jonerpnewsfeed. 'myPOV' is borrowed with reluctant permission from the ubiquitous Ray Wang.

Continued here:

Enterprise hits and misses - contactless payments on the rise, equality on the corporate agenda, and Zoom and Slack in review - Diginomica

Written by admin

June 8th, 2020 at 4:47 pm

Posted in Alphago

AlphaGo – Top Documentary Films

Posted: June 5, 2020 at 4:48 pm


without comments

Go is considered the most challenging board game ever devised by man. Invented in China nearly 3,000 years ago, it also remains one of the most popular. Although it may at first appear to be a deceptively simple cousin of chess, the game demands an incredible intensity of concentration, smarts, strategy and intuition. The variations of game play are infinite. In many circles it represents much more than a game; some consider it a great art and a defining human endeavor. As one of the world's most intellectually demanding games, Go constitutes a major goalpost in the world of artificial intelligence as well. AlphaGo is a thrilling feature-length documentary which chronicles the first match-ups between a human champion of the game and an AI opponent.

The computer program known as AlphaGo was devised by DeepMind Technologies. Its creators' efforts to master the game through artificial intelligence are about much more than mere fun and games; they hope to apply these same self-learning technologies to resolve more meaningful issues and challenges that mystify and trouble the human species. But first, to prove the program's competence, they must put it through the ultimate test.

Enter Lee Sedol, a South Korean Go champion of unparalleled skill. Seen as "the ultimate human versus machine smackdown", the match generated a global media frenzy. Sedol is a creative player of great ingenuity and instinct. He entered the match with extreme confidence, while the designers of the program expressed uncertainty about the outcome. The film depicts the journey to that outcome with all the nail-biting tension of a Rocky film.

In the lead-up to the championship bout, the film traces the origins of Go and its consistent prominence around the world today, and how it defines the lives and philosophies of its players. We also follow the efforts of programmers and designers in crafting a more efficient and competitive form of AI.

By the conclusion of this captivating documentary, AlphaGo raises even deeper issues about our relationships to these technologies, how they challenge us to make more of ourselves or question the limitations of our own species. What can we learn from them and vice versa?

Read the original post:

AlphaGo - Top Documentary Films

Written by admin

June 5th, 2020 at 4:48 pm

Posted in Alphago

AlphaGo (2017) – Rotten Tomatoes

Posted: at 4:48 pm


without comments

Critics Consensus: No consensus yet.


With more board configurations than there are atoms in the universe, the ancient Chinese game of 'Go' has long been considered a grand challenge for artificial intelligence. On March 9, 2016, the worlds of Go and artificial intelligence collided in South Korea for an extraordinary best-of-five-game competition, coined The Google DeepMind Challenge Match. Hundreds of millions of people around the world watched as a legendary Go master took on an unproven AI challenger for the first time in history. AlphaGo chronicles a journey from the halls of Cambridge, through the backstreets of Bordeaux, past the coding terminals of DeepMind in London, and, ultimately, to the seven-day tournament in Seoul. As the drama unfolds, more questions emerge: What can artificial intelligence reveal about a 3,000-year-old game? What can it teach us about humanity?

Rating: NR

Genre: Documentary

In Theaters: Sep 29, 2017 (limited)

Runtime: 90 minutes

Studio: Reel As Dirt



View original post here:

AlphaGo (2017) - Rotten Tomatoes

Written by admin

June 5th, 2020 at 4:48 pm

Posted in Alphago

Why the buzz around DeepMind is dissipating as it transitions from games to science – CNBC

Posted: at 4:48 pm


without comments

Google Deepmind head Demis Hassabis speaks during a press conference ahead of the Google DeepMind Challenge Match in Seoul on March 8, 2016.

Jung Yeon-Je | AFP | Getty Images

In 2016, DeepMind, an Alphabet-owned AI unit headquartered in London, was riding a wave of publicity thanks to AlphaGo, its computer program that took on the best player in the world at the ancient Asian board game Go and won.

Photos of DeepMind's leader, Demis Hassabis, were splashed across the front pages of newspapers and websites, and Netflix even went on to make a documentary about the five-game Go match between AlphaGo and world champion Lee Sedol. Fast-forward four years, and things have gone surprisingly quiet around DeepMind.

"DeepMind has done some of the most exciting things in AI in recent years. It would be virtually impossible for any company to sustain that level of excitement indefinitely," said William Tunstall-Pedoe, a British entrepreneur who sold his AI start-up Evi to Amazon for a reported $26 million. "I expect them to do further very exciting things."

AI pioneer Stuart Russell, a professor at the University of California, Berkeley, agreed it was inevitable that excitement around DeepMind would tail off after AlphaGo.

"Go was a recognized milestone in AI, something that some commentators said would take another 100 years," he said. "In Asia in particular, top-level Go is considered the pinnacle of human intellectual powers. It's hard to see what else DeepMind could do in the near term to match that."

DeepMind's army of 1,000-plus people, which includes hundreds of highly paid PhD graduates, continues to pump out academic paper after academic paper, but only a smattering of the work gets picked up by the mainstream media. The research lab has churned out over 1,000 papers, and 13 of them have been published in Nature or Science, which are widely seen as the world's most prestigious academic journals. Nick Bostrom, the author of Superintelligence and director of the University of Oxford's Future of Humanity Institute, described DeepMind's team as world-class, large, and diverse.

"Their protein folding work was super impressive," said Neil Lawrence, a professor of machine learning at the University of Cambridge, whose role is funded by DeepMind. He's referring to a competition-winning DeepMind algorithm that can predict the structure of a protein based on its genetic makeup. Understanding the structure of proteins is important as it could make it easier to understand diseases and create new drugs in the future.

The world's top human Go player, 19-year-old Ke Jie (left), competes against AlphaGo, the AI program developed by DeepMind, the artificial intelligence arm of Google's parent Alphabet. The machine won the three-game match in 2017 without losing a single game.

VCG | Visual China Group | Getty Images

DeepMind is keen to move away from developing relatively "narrow" so-called "AI agents," that can do one thing well, such as master a game. Instead, the company is trying to develop more general AI systems that can do multiple things well, and have real world impact.

It's particularly keen to use its AI to leverage breakthroughs in other areas of science including healthcare, physics and climate change.

But the company's scientific work seems to be of less interest to the media. In 2016, DeepMind was mentioned in 1,842 articles, according to media tracker LexisNexis. By 2019, that number had fallen to 1,363.

One ex-DeepMinder said the buzz around the company is now more in line with what it should be. "The whole AlphaGo period was nuts," they said. "I think they've probably got another few milestones ahead, but progress should be more low key. It's a marathon not a sprint, so to speak."

DeepMind denied that excitement surrounding the company has tailed off since AlphaGo, pointing to the fact that it has had more papers in Nature and Science in recent years.

"We have created a unique environment where ambitious AI research can flourish. Our unusually interdisciplinary approach has been core to our progress, with 13 major papers in Nature and Science including 3 so far this year," a DeepMind spokesperson said. "Our scientists and engineers have built agents that can learn to cooperate, devise new strategies to play world-class chess and Go, diagnose eye disease, generate realistic speech now used in Google products around the world, and much more."

"More recently, we've been excited to see early signs of how we could use our progress in fundamental AI research to understand the world around us in a much deeper way. Our protein folding work is our first significant milestone applying artificial intelligence to a core question in science, and this is just the start of the exciting advances we hope to see more of over the next decade, creating systems that could provide extraordinary benefits to society."

The company, which competes with Facebook AI Research and OpenAI, did a good job of building up hype around what it was doing in the early days.

Hassabis and Mustafa Suleyman, the intellectual co-founders who have been friends since school, gave inspiring speeches where they would explain how they were on a mission to "solve intelligence" and use that to solve everything else.

There was also plenty of talk of developing "artificial general intelligence" or AGI, which has been referred to as the holy grail in AI and is widely viewed as the point when machine intelligence passes human intelligence.

But the speeches have become less frequent (partly because Suleyman left DeepMind and now works for Google), and AGI doesn't get mentioned anywhere near as much as it used to.

Larry Page, left, and Sergey Brin, co-founders of Google Inc.

JB Reed | Bloomberg | Getty Images

Google co-founders Larry Page and Sergey Brin were huge proponents of DeepMind and its lofty ambitions, but they left the company last year, and it's less obvious how Google CEO Sundar Pichai feels about DeepMind and AGI.

It's also unclear how much free rein Pichai will give the company, which cost Alphabet $571 million in 2018. Just one year earlier, the company had losses of $368 million.

"As far as I know, DeepMind is still working on the AGI problem and believes it is making progress," Russell said. "I suspect the parent company (Google/Alphabet) got tired of the media turning every story about Google and AI into the Terminator scenario, complete with scary pictures."

One academic who is particularly skeptical about DeepMind's achievements is AI entrepreneur Gary Marcus, who sold a machine-learning start-up to Uber in 2016 for an undisclosed sum.

"I think they realize the gulf between what they're doing and what they aspire to do," he said. "In their early years they thought that the techniques they were using would carry us all the way to AGI. And some of us saw immediately that that wasn't going to work. It took them longer to realize but I think they've realized it now."

Marcus said he's heard that DeepMind employees refer to him as the "anti-Christ" because he has questioned how far the "deep learning" AI technique that DeepMind has focused on can go.

"There are major figures now that recognize that the current techniques are not enough," he said. "It's very different from two years ago. It's a radical shift."

He added that while DeepMind's work on games and biology had been impressive, it's had relatively little impact.

"They haven't used their stuff much in the real world," he said. "The work that they're doing requires an enormous amount of data and an enormous amount of compute, and a very stable world. The techniques that they're using are very, very data greedy and real-world problems often don't supply that level of data."

See the article here:

Why the buzz around DeepMind is dissipating as it transitions from games to science - CNBC

Written by admin

June 5th, 2020 at 4:48 pm

Posted in Alphago

