
Archive for the ‘Alphago’ Category

AlphaZero beat humans at Chess and StarCraft, now it’s working with quantum computers – The Next Web

Posted: January 18, 2020 at 4:42 pm



A team of researchers from Aarhus University in Denmark let DeepMind's AlphaZero algorithm loose on a few quantum computing optimization problems and, much to everyone's surprise, the AI was able to solve the problems without any outside expert knowledge. Not bad for a machine learning paradigm designed to win games like chess, shogi, and Go.

You've probably heard of DeepMind and its AI systems. The UK-based Google sister company is responsible for both AlphaZero and AlphaGo, the systems that beat the world's most skilled humans at the games of chess and Go. In essence, what both systems do is try to figure out the optimal next set of moves. Where humans can only think so many moves ahead, the AI can look a bit further using optimized search and planning methods.


When the Aarhus team applied AlphaZero's optimization abilities to a trio of problems associated with optimizing quantum functions, an open problem for the quantum computing world, they learned that its ability to learn new parameters unsupervised transferred from games to these applications quite well.

Per the study:

AlphaZero employs a deep neural network in conjunction with deep lookahead in a guided tree search, which allows for predictive hidden-variable approximation of the quantum parameter landscape. To emphasize transferability, we apply and benchmark the algorithm on three classes of control problems using only a single common set of algorithmic hyperparameters.
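To make the quoted setup concrete, here is a minimal, self-contained sketch of searching a discretized pulse-sequence tree against a toy objective. The amplitudes, horizon, and fidelity function are all invented for illustration; the actual work guides a Monte Carlo tree search with a trained policy/value network over real quantum-control landscapes, which this crude beam search only gestures at.

```python
import math

# Toy stand-in for a quantum control problem: pick a sequence of
# discretized control amplitudes that maximizes a made-up "fidelity".
# AlphaZero replaces the zero-padding heuristic below with learned
# neural-network value estimates guiding a tree search.

AMPLITUDES = [-1.0, 0.0, 1.0]   # discretized control choices per time step
HORIZON = 6                     # number of pulse segments

def fidelity(seq):
    """Hypothetical smooth objective rewarding one particular pulse shape."""
    target = [math.sin(0.5 * t) for t in range(HORIZON)]
    return -sum((a - b) ** 2 for a, b in zip(seq, target))

def beam_search(width=4):
    beam = [()]
    for _ in range(HORIZON):
        candidates = [seq + (a,) for seq in beam for a in AMPLITUDES]
        # Score partial sequences by zero-padding them to full length.
        candidates.sort(key=lambda s: fidelity(s + (0.0,) * (HORIZON - len(s))),
                        reverse=True)
        beam = candidates[:width]
    return beam[0], fidelity(beam[0])

best, score = beam_search()
print("best pulse sequence:", best, "score:", round(score, 3))
```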

The implications of AlphaZero's mastery over the quantum universe could be huge. Controlling a quantum computer requires an AI solution because operations at the quantum level quickly become incalculable by humans. The AI can find optimal paths between data clusters in order to surface better solutions in tandem with computer processors. It works a lot like human heuristics, just scaled to the nth degree.

An example of this would be an algorithm that helps a quantum computer sort through near-infinite combinations of molecules to come up with chemical compounds that would be useful in the treatment of certain illnesses. The current paradigm would involve developing an algorithm that relies on human expertise and databases with previous findings to point it in the right direction.

But the kinds of problems we're looking at quantum computers to solve don't always have a good starting point. Some of these, such as optimization problems like the Traveling Salesman Problem, need an algorithm that's capable of figuring things out without constant adjustment by developers.
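For a feel of why such problems resist brute force, here is a tiny Traveling Salesman sketch with invented random cities. Exhaustive enumeration is exact at this size and hopeless beyond it, since the number of tours grows factorially with the number of cities.

```python
import itertools, math, random

# Tiny illustration of the Traveling Salesman Problem mentioned above,
# with invented random city coordinates.

random.seed(1)
N = 8
cities = [(random.random(), random.random()) for _ in range(N)]

def tour_length(order):
    """Total length of the closed tour visiting cities in `order`."""
    return sum(math.dist(cities[a], cities[b])
               for a, b in zip(order, order[1:] + order[:1]))

# 8! = 40,320 tours is trivial; 30 cities would already be ~2.6e32 tours.
best = min(itertools.permutations(range(N)), key=tour_length)
print("best tour:", best, "length:", round(tour_length(best), 3))
```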

DeepMind's algorithm and AI system may be the solution quantum computing's been waiting for. The researchers effectively employ AlphaZero as a tabula rasa for quantum optimization: it doesn't necessarily need human expertise to find the optimal solution to a problem at the quantum computing level.

Before we start getting too concerned about unsupervised AI accessing quantum computers, it's worth mentioning that so far AlphaZero's just solved a few problems in order to prove a concept. We know the algorithms can handle quantum optimization; now it's time to figure out what we can do with it.

The researchers have already received interest from big tech and other academic institutions with queries related to collaborating on future research. Not for nothing, but DeepMind's sister company Google has a little quantum computing program of its own. We're betting this isn't the last we've heard of AlphaZero's adventures in the quantum computing world.


What are neural-symbolic AI methods and why will they dominate 2020? – The Next Web

Posted: at 4:42 pm



The recent commercial AI revolution has been largely driven by deep neural networks. First invented in the 1960s, deep NNs came into their own once fueled by the combination of internet-scale datasets and distributed GPU farms.

But the field of AI is much richer than just this one type of algorithm. Symbolic reasoning algorithms such as artificial logic systems, also pioneered in the 60s, may be poised to emerge into the spotlight to some extent: perhaps on their own, but also hybridized with neural networks in the form of so-called neural-symbolic systems.

Deep neural nets have done amazing things for certain tasks, such as image recognition and machine translation. However, for many more complex applications, traditional deep learning approaches cannot match the ability of hybrid architecture systems that additionally leverage other AI techniques such as probabilistic reasoning, seed ontologies, and self-reprogramming ability.

Deep neural networks, by themselves, lack strong generalization, i.e. discovering new regularities and extrapolating beyond training sets. Deep neural networks interpolate and approximate based on what is already known, which is why they cannot truly be creative in the sense that humans can, though they can produce creative-looking works that vary on the data they have ingested.

This is why large training sets are required to teach deep neural networks, and why data augmentation is such an important technique for deep learning: it relies on humans to specify known, label-preserving data transformations. Even interpolation cannot be done perfectly without learning underlying regularities, as is vividly demonstrated by well-known adversarial attacks on deep neural networks.
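As a concrete illustration of that point, here is a minimal augmentation sketch: each transform encodes a regularity (flips and small brightness shifts preserve the label) that a human, not the network, decided was safe. The toy image format is invented for the example.

```python
import random

# Minimal sketch of human-specified data augmentation. The "image" is a
# list of rows of pixel intensities; each transform below is a regularity
# a human asserted to be label-preserving.

def hflip(img):
    return [row[::-1] for row in img]

def shift_brightness(img, delta):
    return [[min(255, max(0, p + delta)) for p in row] for row in img]

def augment(img):
    if random.random() < 0.5:
        img = hflip(img)
    return shift_brightness(img, random.randint(-20, 20))

sample = [[10, 200, 30], [40, 50, 60]]
print(augment(sample))
```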

The slavish adherence of deep neural nets to the particulars of their training data also makes them poorly interpretable. Humans cannot fully rely on or interpret their results, especially in novel situations.

What is interesting is that, for the most part, the disadvantages of deep neural nets are strengths of symbolic systems (and vice versa): symbolic systems inherently possess compositionality and interpretability, and can exhibit true generalization. Prior knowledge can also be easily incorporated into symbolic systems, in contrast to neural nets.

Neural net architectures are very powerful at certain types of learning, modeling, and action but have limited capability for abstraction. That is why they are compared with the Ptolemaic epicycle model of our solar system: they can become more and more precise, but they need more and more parameters and data for this, and they cannot, by themselves, discover Kepler's laws, incorporate them into a knowledge base, and further infer Newton's laws from them.

Symbolic AI is powerful at manipulating and modeling abstractions, but deals poorly with massive empirical data streams.

This is why we believe that deep integration of neural and symbolic AI systems is the most viable path to human-level AGI on modern computer hardware.

It's worth noting in this light that many recent deep neural net successes are actually hybrid architectures; e.g. the AlphaGo architecture from Google DeepMind integrates two neural nets with one game tree. Their recent MuZero architecture, which can master both board and Atari games, goes further along this path, using deep neural nets together with planning with a learned model.

The highly successful ERNIE architecture for Natural Language Processing question-answering from Tsinghua University integrates knowledge graphs into neural networks. The symbolic sides of these particular architectures are relatively simplistic, but they can be seen as pointing in the direction of more sophisticated neural-symbolic hybrid systems.

The integration of neural and symbolic methods relies heavily on what has been the most profound revolution in AI in the last 20 years: the rise of probabilistic methods, e.g. neural generative models, Bayesian inference techniques, estimation-of-distribution algorithms, and probabilistic programming.

As an example of the emerging practical applications of probabilistic neural-symbolic methods, at the Artificial General Intelligence (AGI) 2019 conference in Shenzhen last August, Hugo Latapie from Cisco Systems described work his team has done in collaboration with our AI team at SingularityNET Foundation, using the OpenCog AGI engine together with deep neural networks to analyze street scenes.

The OpenCog framework provides a neural-symbolic framework that is especially rich on the symbolic side, and interoperates with popular deep neural net frameworks. It features a combination of probabilistic logic networks (PLNs), probabilistic evolutionary program learning (MOSES), and probabilistic generative neural networks.

The traffic analytics system demonstrated by Latapie deploys OpenCog-based symbolic reasoning on top of deep neural models for street scene cameras, enabling feats such as semantic anomaly detection (flagging collisions, jaywalking, and other deviations from expectation), unsupervised scene labeling for new cameras, and single-shot transfer learning (e.g. learning about new signals for bus stops with a single example).
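A heavily simplified sketch of the pattern being described: human-readable symbolic rules applied over the structured output of per-camera neural detectors. The detection fields and rules below are invented for illustration; OpenCog's actual probabilistic reasoning is far richer than this rule table.

```python
# Imagined output of a per-camera neural detector; field names are
# hypothetical, chosen only to make the symbolic layer legible.
detections = [
    {"id": 1, "kind": "person", "zone": "roadway", "crosswalk": False},
    {"id": 2, "kind": "car", "zone": "roadway", "speed": 0.0},
]

def anomalies(dets):
    found = []
    for d in dets:
        # Shared, city-wide rule: a person in the roadway outside a
        # crosswalk is flagged as jaywalking, for any camera.
        if d["kind"] == "person" and d["zone"] == "roadway" and not d["crosswalk"]:
            found.append(("jaywalking", d["id"]))
        # A motionless vehicle in the roadway is a possible incident.
        if d["kind"] == "car" and d.get("speed", 1.0) == 0.0:
            found.append(("stopped-vehicle", d["id"]))
    return found

print(anomalies(detections))
```

Because the rules live in a shared vocabulary rather than in any one network's weights, the same rule can be applied unchanged to a newly installed camera, which is the flavor of single-shot transfer the article describes.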

The difference between a pure deep neural net approach and a neural-symbolic approach in this case is stark. With deep neural nets deployed in a straightforward way, each neural network models what is seen by a single camera. Forming a holistic view of what's happening at a given intersection, let alone across a whole city, is much more of a challenge.

In the neural-symbolic architecture, the symbolic layer provides a shared ontology, so all cameras can be connected to an integrated traffic management system. If an ambulance needs to be routed in a way that will neither encounter nor cause significant traffic, this sort of whole-scenario symbolic understanding is exactly what one needs.

The same architecture can be applied to many other related use cases where one can use neural-symbolic AI to both enrich local intelligence and connect multiple sources/locations into a holistic view for reasoning and action.

It may not be impossible to crack this particular problem using a more complex deep neural net architecture, with multiple neural nets working together in subtle ways. However, this is an example of something that is easier and more straightforward to address using a neural-symbolic approach, and it is quite close to machine vision, one of deep neural nets' great strengths.

In other, more abstract application domains, such as mathematical theorem-proving or biomedical discovery, the critical value of the symbolic side of the neural-symbolic hybrid is even more dramatic.

Deep neural nets have done amazing things over the last few years, bringing applied AI to a whole new level. We're betting that the next phase of incredible AI achievements is going to be delivered via hybrid AI architectures such as neural-symbolic systems. This trend already started in 2019 in a relatively quiet way, and in 2020 we expect it to pick up speed dramatically.

Published January 15, 2020 09:00 UTC


What is AlphaGo? – Definition from WhatIs.com

Posted: December 22, 2019 at 6:46 am



AlphaGo is an artificial intelligence (AI) agent that is specialized to play Go, a Chinese strategy board game, against human competitors. AlphaGo is a Google DeepMind project.

The ability to create a learning algorithm that can beat a human player at strategic games is a measure of AI development. AlphaGo is designed as a self-teaching AI and plays against itself to master the complex strategic game of Go. There have been versions of AlphaGo that beat human players but new versions are still being created.

Go is a Chinese board game similar to chess, with two players, one using black pieces and one white, placing a piece each turn. Pieces are placed on a grid that varies in size according to the level of play, up to 19×19 placement points. The goal is to capture more territory (empty spaces) or enemy pieces by surrounding them with your pieces. Only the points horizontally and vertically adjacent to a piece need to be covered to capture it; diagonal points do not count. Either pieces or territory can be captured individually or in groups.
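The capture rule lends itself to a short, runnable illustration: flood-fill a group of same-coloured pieces and count its liberties, the empty points horizontally or vertically adjacent to the group. A group with no liberties is captured. The board encoding ('B', 'W', '.') is invented for this sketch.

```python
# Flood-fill a group of same-coloured pieces and collect its liberties.
def group_and_liberties(board, r, c):
    color = board[r][c]
    seen, libs, stack = {(r, c)}, set(), [(r, c)]
    while stack:
        y, x = stack.pop()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < len(board) and 0 <= nx < len(board[0]):
                if board[ny][nx] == ".":
                    libs.add((ny, nx))        # empty neighbour = liberty
                elif board[ny][nx] == color and (ny, nx) not in seen:
                    seen.add((ny, nx))
                    stack.append((ny, nx))
    return seen, libs

board = [list(row) for row in ("BW.",
                               "WW.",
                               "...")]
group, libs = group_and_liberties(board, 0, 0)  # the lone black piece
print("captured!" if not libs else f"liberties left: {len(libs)}")
```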

Chess may be a more famous board game with white and black pieces, but Go has a googol times more possible moves. The number of possible positions makes a traditional brute-force approach, as was used with IBM's Deep Blue in chess, impossible with current computers. That difference in complexity required a new approach.

AlphaGo is based on a Monte Carlo tree search algorithm that looks at a list of possible moves drawn from its machine-learned repertoire. Algorithms and learning differ among the various versions of AlphaGo. AlphaGo Master, the version that beat the world champion Go player Ke Jie, uses supervised learning. AlphaGo Zero, the self-taught version of AlphaGo, learns by playing against itself: at first randomly, then with increasing sophistication. Its increased sophistication is such that it consistently beats the Master version that dominates human players.
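To show the shape of the Monte Carlo tree search loop the definition refers to, here is a compact UCT sketch on the toy game Nim (take 1 to 3 stones; whoever takes the last stone wins). This is a generic textbook MCTS, not AlphaGo's code: AlphaGo replaces the random playout below with neural-network policy and value estimates, but the select/expand/simulate/backpropagate cycle has the same structure.

```python
import math, random

class Node:
    def __init__(self, stones, parent=None):
        self.stones, self.parent = stones, parent
        self.children = {}      # move (stones taken) -> child Node
        self.wins = 0           # wins for the player who moved INTO this node
        self.visits = 0

def ucb(parent, child, c=1.4):
    if child.visits == 0:
        return float("inf")
    return (child.wins / child.visits
            + c * math.sqrt(math.log(parent.visits) / child.visits))

def playout(stones):
    """Random game; True if the player to move from `stones` wins."""
    mover = True
    while stones > 0:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return mover
        mover = not mover
    return False                # no stones at entry: mover has already lost

def mcts(root_stones, iterations=3000):
    root = Node(root_stones)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while the node is fully expanded.
        while node.stones > 0 and len(node.children) == min(3, node.stones):
            node = max(node.children.values(), key=lambda ch: ucb(node, ch))
        # 2. Expansion: add one untried move.
        if node.stones > 0:
            move = random.choice([m for m in range(1, min(3, node.stones) + 1)
                                  if m not in node.children])
            node.children[move] = Node(node.stones - move, node)
            node = node.children[move]
        # 3. Simulation from the new leaf.
        mover_wins = playout(node.stones)
        # 4. Backpropagation, flipping perspective at each level.
        while node is not None:
            node.visits += 1
            if not mover_wins:  # the player who moved into `node` won
                node.wins += 1
            mover_wins = not mover_wins
            node = node.parent
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

print("stones to take from a pile of 10:", mcts(10))  # optimal play: take 2
```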


AI has bested chess and Go, but it struggles to find a diamond in Minecraft – The Verge

Posted: December 18, 2019 at 9:45 pm



Whether we're learning to cook an omelet or drive a car, the path to mastering new skills often begins by watching others. But can artificial intelligence learn the same way? A new challenge teaching AI agents to play Minecraft suggests it's much trickier for computers.

Announced earlier this year, the MineRL competition asked teams of researchers to create AI bots that could successfully mine a diamond in Minecraft. This isn't an impossible task, but it does require a mastery of the game's basics. Players need to know how to cut down trees, craft pickaxes, and explore underground caves while dodging monsters and lava. These are the sorts of skills that most adults could pick up after a few hours of experimentation or learn much faster by watching tutorials on YouTube.

But of the 660 entries in the MineRL competition, none were able to complete the challenge, according to results that will be announced at the AI conference NeurIPS and that were first reported by BBC News. Although bots were able to learn intermediary steps, like constructing a furnace to make durable pickaxes, none successfully found a diamond.

"The task we posed is very hard," Katja Hofmann, a principal researcher at Microsoft Research, which helped organize the challenge, told BBC News. "While no submitted agent has fully solved the task, they have made a lot of progress and learned to make many of the tools needed along the way."

This may be a surprise, especially when you think that AI has managed to best humans at games like chess, Go, and Dota 2. But it reflects important limitations of the technology as well as restrictions put in place by MineRL's judges to really challenge the teams.

The bots in MineRL had to learn using a combination of methods known as imitation learning and reinforcement learning. In imitation learning, agents are shown data of the task ahead of them, and they try to imitate it. In reinforcement learning, they're simply dumped into a virtual world and left to work things out for themselves using trial and error.
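A toy sketch of that two-phase recipe, on an invented three-state task: demonstrations seed the action-value table (imitation), and epsilon-greedy Q-learning then refines it by trial and error (reinforcement). Nothing here is MineRL's actual code; the environment, states, and rewards are all made up for illustration.

```python
import random

ACTIONS = ["left", "right", "dig"]
DEMOS = [(0, "right"), (1, "right"), (2, "dig")]   # expert (state, action)

def step(state, action):
    """Toy environment: walk along a line of states 0-2, dig at the end."""
    if action == "dig":
        return state, (1.0 if state == 2 else -0.1)
    return max(0, min(2, state + (1 if action == "right" else -1))), 0.0

# Phase 1, imitation: bias the value table toward demonstrated actions.
Q = {(s, a): 0.0 for s in range(3) for a in ACTIONS}
for s, a in DEMOS:
    Q[(s, a)] = 0.5

# Phase 2, reinforcement: epsilon-greedy Q-learning refines the estimates.
for _ in range(2000):
    s = random.randrange(3)
    a = random.choice(ACTIONS) if random.random() < 0.1 else \
        max(ACTIONS, key=lambda x: Q[(s, x)])
    s2, r = step(s, a)
    Q[(s, a)] += 0.1 * (r + 0.9 * max(Q[(s2, x)] for x in ACTIONS) - Q[(s, a)])

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(3)})
```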

Often, AI is only able to take on big challenges by combining these two methods. The famous AlphaGo system, for example, first learned to play Go by being fed data of old games. It then honed its skills and surpassed all humans by playing itself over and over.

The MineRL bots took a similar approach, but the resources available to them were comparatively limited. While AI agents like AlphaGo are created with huge datasets, powerful computer hardware, and the equivalent of decades of training time, the MineRL bots had to make do with just 1,000 hours of recorded gameplay to learn from, a single Nvidia graphics processor to train with, and just four days to get up to speed.

It's the difference between the resources available to an MLB team (coaches, nutritionists, the finest equipment money can buy) and what a Little League squad has to make do with.

It may seem unfair to hamstring the MineRL bots in this way, but these constraints reflect the challenges of integrating AI into the real world. While bots like AlphaGo certainly push the boundary of what AI can achieve, very few companies and research labs can match the resources of Google-owned DeepMind.

The competition's lead organizer, Carnegie Mellon University PhD student William Guss, told BBC News that the challenge was meant to show that not every AI problem should be solved by throwing computing power at it. That mindset, said Guss, "works directly against democratizing access to these reinforcement learning systems, and leaves the ability to train agents in complex environments to corporations with swathes of compute."

So while AI may be struggling in Minecraft now, when it cracks this challenge, it'll hopefully deliver benefits to a wider audience. Just don't think about those poor Minecraft YouTubers who might be out of a job.


AI is dangerous, but not for the reasons you think. – OUPblog

Posted: at 9:45 pm



In 1997, Deep Blue defeated Garry Kasparov, the reigning world chess champion. In 2011, Watson defeated Ken Jennings and Brad Rutter, the world's best Jeopardy players. In 2016, AlphaGo defeated Lee Sedol, one of the world's best Go players. In 2017, DeepMind unleashed AlphaZero, which trounced the world-champion computer programs at chess, Go, and shogi.

If humans are no longer worthy opponents, then perhaps computers have moved so far beyond our intelligence that we should rely on their superior intelligence to make our important decisions. Nope.

Despite their freakish skill at board games, computer algorithms do not possess anything resembling human wisdom, common sense, or critical thinking. Deciding whether to accept a job offer, sell a stock, or buy a house is very different from recognizing that moving a bishop three spaces will checkmate an opponent. That is why it is perilous to trust computer programs we don't understand to make decisions for us.

Consider the challenges identified by Stanford computer science professor Terry Winograd, which have come to be known as Winograd schemas. For example, what does the word "it" refer to in this sentence?

I cant cut that tree down with that axe; it is too [thick/small].

If the bracketed word is "thick," then "it" refers to the tree; if the bracketed word is "small," then "it" refers to the axe. Sentences like these are understood immediately by humans but are very difficult for computers because they do not have the real-world experience to place words in context.

Paraphrasing Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence: how can machines take over the world when they can't even figure out what "it" refers to in a simple sentence?

When we see a tree, we know it is a tree. We might compare it to other trees and think about the similarities and differences between fruit trees and maple trees. We might recollect the smells wafting from some trees. We would not be surprised to see a squirrel run up a pine or a bird fly out of a dogwood. We might remember planting a tree and watching it grow year by year. We might remember cutting down a tree or watching a tree being cut down.

A computer does none of this. It can spellcheck the word "tree," count the number of times the word is used in a story, and retrieve sentences that contain the word. But computers do not understand what trees are in any relevant sense. They are like Nigel Richards, who memorized the French Scrabble dictionary and has won the French-language Scrabble World Championship twice, even though he doesn't know the meaning of the French words he spells.

To demonstrate the dangers of relying on computer algorithms to make real-world decisions, consider an investigation of risk factors for fatal heart attacks.

I made up some household spending data for 1,000 imaginary people, of whom half had suffered heart attacks and half had not. For each such person, I used a random number generator to create fictitious data in 100 spending categories. These data were entirely random. There were no real people, no real spending, and no real heart attacks. It was just a bunch of random numbers. But the thing about random numbers is that coincidental patterns inevitably appear.

In 10 flips of a fair coin, there is a 46% chance of a streak of four or more heads in a row or four or more tails in a row. If that does not happen, heads and tails might alternate several times in a row. Or there might be two heads and a tail, followed by two more heads and a tail. In any event, some pattern will appear and it will be absolutely meaningless.
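The 46% figure is easy to check empirically; a quick simulation of the streak-of-four case lands close to it. (The exact value works out to 548/1024 strings with no such run, i.e. about a 46.5% chance of one appearing.)

```python
import random

# Estimate the probability of a run of 4+ identical outcomes in 10 flips.
def has_streak(flips, length=4):
    run = 1
    for a, b in zip(flips, flips[1:]):
        run = run + 1 if a == b else 1
        if run >= length:
            return True
    return False

trials = 100_000
hits = sum(has_streak([random.random() < 0.5 for _ in range(10)])
           for _ in range(trials))
print(f"streak of 4+ in 10 flips: {hits / trials:.1%}")   # about 46%
```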

In the same way, some coincidental patterns were bound to turn up in my random spending numbers. As it turned out, by luck alone, the imaginary people who had not suffered heart attacks spent more money on small appliances and also on household paper products.
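The experiment is simple to reproduce in spirit. The sketch below generates purely random "spending" data for 1,000 imaginary people and flags the categories that happen to separate the two groups; the Gaussian parameters and the cutoff are arbitrary choices for the demonstration, not the author's actual setup.

```python
import random, statistics

random.seed(0)
n, k = 1000, 100
labels = [i < n // 2 for i in range(n)]           # half "had heart attacks"
data = [[random.gauss(100, 20) for _ in range(k)] for _ in range(n)]

# Flag categories whose group means differ by more than an arbitrary cutoff.
suspicious = []
for cat in range(k):
    yes = [row[cat] for row, lab in zip(data, labels) if lab]
    no = [row[cat] for row, lab in zip(data, labels) if not lab]
    gap = statistics.mean(no) - statistics.mean(yes)
    if abs(gap) > 2.5:
        suspicious.append((cat, round(gap, 2)))

# A handful of categories will look "protective" or "risky" by luck alone.
print(len(suspicious), "coincidentally 'significant' categories:", suspicious)
```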

When we see these results, we should scoff and recognize that the patterns are meaningless coincidences. How could small appliances and household paper products prevent heart attacks?

A computer, by contrast, would take the results seriously because a computer has no idea what heart attacks, small appliances, and household paper products are. If the computer algorithm is hidden inside a black box, where we do not know how the result was attained, we would not have an opportunity to scoff.

Nonetheless, businesses and governments all over the world nowadays trust computers to make decisions based on coincidental statistical patterns just like these. One company, for example, decided that it would make more online sales if it changed the background color of the web page shown to British customers from blue to teal. Why? Because they tried several different colors in nearly 100 countries. Any given color was certain to fare better in some country than in others even if random numbers were analyzed instead of sales numbers. The change was made and sales went down.

Many marketing decisions, medical diagnoses, and stock trades are now done via computers. Loan applications and job applications are evaluated by computers. Election campaigns are run by computers, including Hillary Clinton's disastrous 2016 presidential campaign. If the algorithms are hidden inside black boxes, with no human supervision, then it is up to the computers to decide whether the discovered patterns make sense, and they are utterly incapable of doing so because they do not understand anything about the real world.

Computers are not intelligent in any meaningful sense of the word, and it is hazardous to rely on them to make important decisions for us. The real danger today is not that computers are smarter than us, but that we think computers are smarter than us.



The Perils and Promise of Artificial Conscientiousness – WIRED

Posted: at 9:45 pm



We humans are notoriously bad at predicting the consequences of achieving our technological goals. Add seat belts to cars for safety, speeding and accidents can go up. Burn hydrocarbons for cheap energy, warm the planet. Give experts new technologies like surgical robots or predictive policing algorithms to enhance productivity, block apprentices from learning. Still, we're amazing at predicting unintended consequences compared to the intelligent technologies we're building.


Matt Beane (@mattbeane) is an assistant professor of technology management at UC Santa Barbara and a research affiliate at MIT's Institute for the Digital Economy.

Take reinforcement learning, one particularly potent flavor of AI that's behind some of the more stupendous demonstrations as of late. RL systems take in reward states (aka goals, outcomes that they get points for) and go after them without regard to the unintended consequences of their actions. DeepMind's AlphaGo was designed to win the board game Go, whatever it took. OpenAI's system did the same for Defense of the Ancients (DOTA), a fiendishly complex multiplayer online war game. Both came up with unconventional, in some cases radical, new tactics required to beat the best that humanity had to offer, yet consumed disproportionately large amounts of energy and natural resources to do so. This kind of single-mindedness has inspired all kinds of fun sci-fi, including an AI designed to produce as many paperclips as possible that proceeds to destroy the earth, and then the entire cosmos, in an effort to get the job done.

While seemingly innocuous, this win-at-any-cost approach is untenable with the more practical uses of AI. Otherwise we may end up swamped by power outages, flash-trading market failures, or (even more) hyper-polarized, isolated online communities. To be clear, these threats are possible only because AI is delivering amazing improvements on previous best practices: electrical grids are becoming much more efficient and reliable, microsecond-frequency trading allows for major improvements in global market efficiency, and social media platforms suggest beneficial connections to goods, services, information, and people that would otherwise remain hidden. But the more we hand these and similar processes over to AI that is singularly focused on its goals, the more they can produce consequences we don't like, sometimes at the speed of light.

Some within the AI community are already addressing these concerns. One of the founders of DeepMind cofounded the Partnership on AI, which aims to direct attention and effort toward harnessing AI to contribute to solutions for some of humanity's most challenging problems. On December 4, PAI announced the release of SafeLife, a proof-of-concept reinforcement-learning model that can avoid unintended side effects of its optimization activity in a simple game. SafeLife has a clear way of characterizing those consequences: increases in entropy (the degree of disorder or randomness) in the game system. By definition this is not a practical system, but it does show how a reinforcement-learning-driven system can optimize toward a goal while minimizing collateral damage.
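A schematic of the idea, not SafeLife's actual implementation: subtract a penalty proportional to how much the agent's actions increased disorder in its surroundings. The entropy proxy over a toy cell string and the penalty weight below are invented for illustration; SafeLife's real metric is defined over its cellular-automaton game state.

```python
from collections import Counter
import math

def entropy(cells):
    """Shannon entropy of a string of cell states, as a disorder proxy."""
    counts = Counter(cells)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def shaped_reward(task_reward, cells_before, cells_after, penalty=0.5):
    # Only increases in disorder are penalized; cleaning up is free.
    side_effect = max(0.0, entropy(cells_after) - entropy(cells_before))
    return task_reward - penalty * side_effect

before = "000011110000"
after = "010010110100"   # the agent scrambled part of the board
print(round(shaped_reward(1.0, before, after), 3))
```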

This is very exciting work, and in principle it could help with all kinds of unintended effects of intelligent technologies like AI and robots. For example, it could help factory robots know they should slow down if a red-tailed hawk flies in their way. (I've seen this happen. Those buildings house pigeons, and, if big enough, birds of prey). A SafeLife-like model could override its programmed setting to maximize throughput, because destroying living things adds a lot of entropy to the world. But some things that we expect to help in theory end up contributing to the very problems they're trying to solve. Yes, that means the unintended consequences module in next-gen AI systems could be the very thing that creates potent unintended consequences. What happens if that robot slows down for that hawk while a nearby human expects it to keep moving? Safety and productivity could be threatened.

This is particularly problematic when these consequences span significant amounts of space and time. Take the DOTA algorithm. During a match, when it calculates its win probability is above 90 percent, it's programmed to taunt other players via chat. "Win probability 92 percent," you might read as you watch your hard-won forces and devious strategy decimated by a computer program. What effect does that have on players' approaches to the game? And, even further removed, what about their commitment to the game? To gaming generally? Their career aspirations? Their contributions to society? If this seems like armchair speculation, note that Lee Sedol (the world's best professional Go player, a wunderkind who has devoted his entire life to mastering the game) has just quit the game publicly and permanently, saying that no human can beat the system. It's not obvious that Sedol's retirement is good or bad for the game, for him, or for society, but it is a symbolic and significant unintended consequence of the actions of an AI-based system optimizing on its reward function.


DeepMind Vs Google: The Inner Feud Between Two Tech Behemoths – Analytics India Magazine

Posted: at 9:45 pm



With the recent switch of DeepMind's co-founder Mustafa Suleyman to its sister concern Google, researchers are raising questions as to whether this unexpected move will cause a crack between the two companies. Suleyman had been on leave from the London-based AI company for the previous six months, but earlier this month he confirmed his move to Google through a Twitter post, in which he portrayed his excitement about joining the team at Google to work on the opportunities and impacts of applied AI technologies.

Acquired by Google (now under parent company Alphabet) in 2014, DeepMind was aimed at using machine intelligence to solve real-world problems, including healthcare and energy. While co-founder Demis Hassabis was running the core artificial intelligence research at DeepMind, Suleyman was in charge of developing Streams, a controversial health app that gathered data from millions of NHS patients without their direct consent.

However, the relationship between Google and DeepMind has been fairly complicated since last year. After a bidding war with Facebook in 2014, Google acquired DeepMind for $600 million. The lab was then separated from the tech giant in 2015 as part of the Alphabet restructuring, raising tension among Google's AI researchers.

Suleyman's key project, Streams, created considerable suspicion among the three companies: Alphabet, DeepMind, and Google. Although DeepMind promised to keep a privacy check on the data of all 1.6 million Royal Free patients and to keep that data independent, its dealings with Google over taking control of Streams formed no legal foundation for this claim. Experts believed that such a deal broke DeepMind's privacy promises. Nevertheless, in an interview, a DeepMind spokesperson maintained that the company is still committed to its privacy statements and that any dealing with Google is not going to affect the acquired data.

Google has previously gone through several controversies: disenfranchising its employees, creating conflicts with governments, and ignoring its customers and clients. Google has also recently admitted its interest in serving China through the development of a censored search engine. These steps have placed the company in a precarious position, making it look unpredictable and not so trustworthy to the mainstream media, privacy experts, giants of the industry, and even the general population.

On the other side, Google has been wanting to capitalise on owning the highest concentration of AI talent in the field of deep learning. But DeepMind's contribution to Google's bottom line has been shocking. The company has made significant breakthroughs with AI, whether diagnosing fatal diseases, engineering bacteria to eat plastic, or creating AlphaGo, a computer program that plays the board game Go. However, the company turned out to be a big disappointment for its investors, considering its loss of $571 million last year and a standing debt to its parent company of approximately $1.4 billion. Such concerns added complexity for DeepMind and led Google to take over control of the company, contradicting the initial agreement that allowed DeepMind to operate independently.

Why did it come to this? The answer is a big gap in DeepMind's commercialisation of its research. According to industry experts, the company has been fixated on the development of general intelligence; the more important aspect, however, would have been working on short-term projects that could turn into products solving real-world problems. Haitham Bou-Ammar, an executive at Cambridge-based AI startup Prowler.io, believes the company requires a shift in focus, with strategies to make money from deep learning assets rather than running an education lab.

With a single-minded focus on deep-learning neural networks, DeepMind's AI approach hasn't been inclusive. The company could instead have pursued a multi-pronged approach, which would have helped in creating evolutionary algorithms and decision-making in realistic environments. DeepMind has been putting all its eggs in one basket: deep reinforcement learning. Many also believe that the company should have been focusing on bridging gaps; instead, it has been dealing with issues related to its apparent independence.

DeepMind CEO Demis Hassabis once declined Google's offer to lead its robotics unit. And while the company provided its WaveNet software to Google for replicating human voices, its leadership declined any association with Google's cloud platform. Such developments showcased a bumpy relationship between the two. Critics began to fear that the change in management would shift the focus from research to products, while privacy experts worried about Google's unsolicited access to NHS data.

From a distance, DeepMind looks to have made great progress, with software that can learn to perform tasks at a superhuman level and other strides in the gaming industry demonstrating the power of reinforcement learning and the extraordinary ability of its computer programs. However, there is a significant caveat: DeepMind's programs have always been quite restricted, with little ability to react to changes in the environment, and lacking in flexibility.

Another aspect hardly touched on by the company is the reward function, the signal that allows the software to measure its progress and that is directly tied to success in virtual environments. The company has focused on developing reward functions for systems like AlphaGo; in the real world, however, progress is never measured with a single score and usually varies by sector.
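The contrast is easy to state in code. The sketch below compares a game-style terminal reward with a hypothetical multi-objective real-world reward; all names and weights are invented, including the data-center example (which echoes the server-cooling application mentioned below).

```python
def go_reward(game_over: bool, agent_won: bool) -> float:
    # Board games: a single, unambiguous score settles everything.
    return (1.0 if agent_won else -1.0) if game_over else 0.0

def datacenter_reward(energy_kwh: float, temp_violations: int,
                      hardware_wear: float) -> float:
    # Real deployments: someone must decide how kilowatt-hours trade off
    # against overheating incidents and equipment wear. The weights here
    # are arbitrary, which is precisely the difficulty.
    return -(1.0 * energy_kwh + 10.0 * temp_violations + 5.0 * hardware_wear)

print(go_reward(True, True), datacenter_reward(120.0, 1, 0.3))
```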

Therefore, for now, deep reinforcement learning can only be used in trusted and controlled environments with few or no changes in the system, which works fine for Go but is not something real-world problems can rely upon. The company therefore has to focus on finding a large-scale commercial application for this technology. So far, the parent company has invested roughly $2 billion in DeepMind, with a modest financial return, some of which came from applying deep reinforcement learning within Alphabet to reduce the power costs of cooling Google's servers.

According to experts and researchers, although the technology works fine for Go, it might not be suitable for the challenging real-world problems that the company aspires to solve with AI. Cutting DeepMind some slack, we all have to agree that no scientific innovation turns profitable overnight. However, the company definitely needs to dig deeper and combine the technology with other techniques to create more stable results.

Even if DeepMind's current strategy is turning out to be less fruitful, nobody can dismiss the vision of the company. Although it is taking time to bridge the gap between deep reinforcement learning and artificial intelligence, it's impossible to ignore that the company is staffed by hundreds of PhDs and is running on good funding. In fact, the successes of Go, Atari, and StarCraft have given the company a promising name.

Meanwhile, the substantial cash burn, along with the departure of a high-level executive, has caused wreckage, placing the subsidiary in deep confusion. According to the policies, DeepMind is supposed to provide AI-related assets to various companies and products under Alphabet; on the other hand, Google's in-house AI group, Google Brain, has already started occupying a similar role within the Alphabet ecosystem. This perplexity is deepening the problems for the company, pushing it to work in silos. In its present condition, DeepMind seems to be at a critical point, where the company is constantly investing in deep learning research and developing AI assets, but not living up to its potential.


AlphaGo – Wikipedia

Posted: December 11, 2019 at 8:48 pm



AlphaGo is a computer program that plays the board game Go.[1] It was developed by DeepMind Technologies,[2] which was later acquired by Google. AlphaGo had three far more powerful successors, called AlphaGo Master, AlphaGo Zero[3] and AlphaZero.

In October 2015, the original AlphaGo became the first computer Go program to beat a human professional Go player without handicap on a full-sized 19×19 board.[4][5] In March 2016, it beat Lee Sedol in a five-game match, the first time a computer Go program had beaten a 9-dan professional without handicap.[6] Although it lost to Lee Sedol in the fourth game, Lee resigned in the final game, giving a final score of 4 games to 1 in favour of AlphaGo. In recognition of the victory, AlphaGo was awarded an honorary 9-dan by the Korea Baduk Association.[7] The lead-up and the challenge match with Lee Sedol were documented in a documentary film, also titled AlphaGo,[8] directed by Greg Kohs. It was chosen by Science as one of the Breakthrough of the Year runners-up on 22 December 2016.[9]

At the 2017 Future of Go Summit, its successor AlphaGo Master beat Ke Jie, the world No.1 ranked player at the time, in a three-game match (the even more powerful AlphaGo Zero already existed but was not yet announced). After this, AlphaGo was awarded professional 9-dan by the Chinese Weiqi Association.[10]

AlphaGo and its successors use a Monte Carlo tree search algorithm to find their moves based on knowledge previously "learned" by machine learning, specifically by an artificial neural network (a deep learning method) trained extensively on both human and computer play.[11] A neural network is trained to predict AlphaGo's own move selections and also the winner of its games. This neural net improves the strength of tree search, resulting in higher-quality move selection and stronger self-play in the next iteration.
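The interplay between the policy network and the search statistics can be seen in the PUCT-style selection rule published in the AlphaGo papers: the policy prior biases exploration toward moves the network likes, while the visit counts gradually shift trust toward moves that search has verified. The sketch below shows only the per-child score, with illustrative numbers.

```python
import math

def puct_score(q, prior, parent_visits, child_visits, c_puct=1.0):
    """PUCT-style score for one candidate move.

    q: mean value of the child from search so far (value net + playouts);
    prior: policy-network probability assigned to this move.
    The second term shrinks as the move is visited more, so the prior's
    influence fades as real search evidence accumulates.
    """
    return q + c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)

print(puct_score(q=0.5, prior=0.3, parent_visits=100, child_visits=10))
```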

After the match between AlphaGo and Ke Jie, DeepMind retired AlphaGo, while continuing AI research in other areas.[12] Starting from a 'blank page', with only a short training period, AlphaGo Zero achieved a 100-0 victory against the champion-defeating AlphaGo, while its successor, the self-taught AlphaZero, is currently perceived as the world's top player in Go as well as possibly in chess.

Go is considered much more difficult for computers to win than other games such as chess, because its much larger branching factor makes it prohibitively difficult to use traditional AI methods such as alpha-beta pruning, tree traversal and heuristic search.[4][13]

Almost two decades after IBM's computer Deep Blue beat world chess champion Garry Kasparov in the 1997 match, the strongest Go programs using artificial intelligence techniques only reached about amateur 5-dan level,[11] and still could not beat a professional Go player without a handicap.[4][5][14] In 2012, the software program Zen, running on a four-PC cluster, beat Masaki Takemiya (9p) twice at five- and four-stone handicaps.[15] In 2013, Crazy Stone beat Yoshio Ishida (9p) at a four-stone handicap.[16]

According to DeepMind's David Silver, the AlphaGo research project was formed around 2014 to test how well a neural network using deep learning can compete at Go.[17] AlphaGo represents a significant improvement over previous Go programs. In 500 games against other available Go programs, including Crazy Stone and Zen,[18] AlphaGo running on a single computer won all but one.[19] In a similar matchup, AlphaGo running on multiple computers won all 500 games played against other Go programs, and 77% of games played against AlphaGo running on a single computer. The distributed version in October 2015 was using 1,202 CPUs and 176 GPUs.[11]

In October 2015, the distributed version of AlphaGo defeated the European Go champion Fan Hui,[20] a 2-dan (out of 9 dan possible) professional, five to zero.[5][21] This was the first time a computer Go program had beaten a professional human player on a full-sized board without handicap.[22] The announcement of the news was delayed until 27 January 2016 to coincide with the publication of a paper in the journal Nature[11] describing the algorithms used.[5]

AlphaGo played South Korean professional Go player Lee Sedol, ranked 9-dan, one of the best players at Go,[14][needs update] in a five-game match at the Four Seasons Hotel in Seoul, South Korea on 9, 10, 12, 13, and 15 March 2016,[23][24] which was video-streamed live.[25] AlphaGo won four of the five games; Lee won the fourth, making him the only human player to beat AlphaGo in any of its 74 official games.[26] AlphaGo ran on Google's cloud computing, with its servers located in the United States.[27] The match used Chinese rules with a 7.5-point komi, and each side had two hours of thinking time plus three 60-second byoyomi periods.[28] The version of AlphaGo playing against Lee used a similar amount of computing power as was used in the Fan Hui match.[29] The Economist reported that it used 1,920 CPUs and 280 GPUs.[30] At the time of play, Lee Sedol had the second-highest number of Go international championship victories in the world, after South Korean player Lee Changho, who had held the world championship title for 16 years.[31] Since there is no single official method of ranking in international Go, the rankings may vary among sources. While he was sometimes ranked top, some sources ranked Lee Sedol as the fourth-best player in the world at the time.[32][33] AlphaGo was not specifically trained to face Lee, nor was it designed to compete with any specific human player.

The first three games were won by AlphaGo following resignations by Lee.[34][35] However, Lee beat AlphaGo in the fourth game, winning by resignation at move 180. AlphaGo then continued to achieve a fourth win, winning the fifth game by resignation.[36]

The prize was US$1 million. Since AlphaGo won four out of five and thus the series, the prize will be donated to charities, including UNICEF.[37] Lee Sedol received $150,000 for participating in all five games and an additional $20,000 for his win.[28]

In June 2016, at a presentation held at a university in the Netherlands, Aja Huang, one of the DeepMind team, revealed that the team had patched the logical weakness that occurred during the fourth game of the match between AlphaGo and Lee, and that after move 78 (which was dubbed the "divine move" by many professionals), it would play as intended and maintain Black's advantage. Before move 78, AlphaGo was leading throughout the game, but Lee's move caused the program's computing powers to be diverted and confused.[38] Huang explained that AlphaGo's policy network for finding the most accurate move order and continuation did not precisely guide AlphaGo to make the correct continuation after move 78, since its value network did not determine Lee's 78th move as being the most likely, and therefore, when the move was made, AlphaGo could not make the right adjustment to the logical continuation.[39]

On 29 December 2016, a new account on the Tygem server named "Magister" (shown as 'Magist' at the server's Chinese version) from South Korea began to play games with professional players. It changed its account name to "Master" on 30 December, then moved to the FoxGo server on 1 January 2017. On 4 January, DeepMind confirmed that the "Magister" and the "Master" were both played by an updated version of AlphaGo, called AlphaGo Master.[40][41] As of 5 January 2017, AlphaGo Master's online record was 60 wins and 0 losses,[42] including three victories over Go's top-ranked player, Ke Jie,[43] who had been quietly briefed in advance that Master was a version of AlphaGo.[42] After losing to Master, Gu Li offered a bounty of 100,000 yuan (US$14,400) to the first human player who could defeat Master.[41] Master played at the pace of 10 games per day. Many quickly suspected it to be an AI player due to little or no resting between games. Its adversaries included many world champions such as Ke Jie, Park Jeong-hwan, Yuta Iyama, Tuo Jiaxi, Mi Yuting, Shi Yue, Chen Yaoye, Li Qincheng, Gu Li, Chang Hao, Tang Weixing, Fan Tingyu, Zhou Ruiyang, Jiang Weijie, Chou Chun-hsun, Kim Ji-seok, Kang Dong-yun, Park Yeong-hun, and Won Seong-jin; national champions or world championship runners-up such as Lian Xiao, Tan Xiao, Meng Tailing, Dang Yifei, Huang Yunsong, Yang Dingxin, Gu Zihao, Shin Jinseo, Cho Han-seung, and An Sungjoon. All 60 games except one were fast-paced games with three 20 or 30 seconds byo-yomi. Master offered to extend the byo-yomi to one minute when playing with Nie Weiping in consideration of his age. After winning its 59th game Master revealed itself in the chatroom to be controlled by Dr. Aja Huang of the DeepMind team,[44] then changed its nationality to the United Kingdom. After these games were completed, the co-founder of Google DeepMind, Demis Hassabis, said in a tweet, "we're looking forward to playing some official, full-length games later [2017] in collaboration with Go organizations and experts".[40][41]

Go experts were impressed by the program's performance and its nonhuman play style; Ke Jie stated that "After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong... I would go as far as to say not a single human has touched the edge of the truth of Go."[42]

In the Future of Go Summit held in Wuzhen in May 2017, AlphaGo Master played three games with Ke Jie, the world No.1 ranked player, as well as two games with several top Chinese professionals, one pair Go game and one against a collaborating team of five human players.[45]

Google DeepMind offered a $1.5 million winner's prize for the three-game match between Ke Jie and Master, with the losing side taking $300,000.[46][47][48] Master won all three games against Ke Jie,[49][50] after which AlphaGo was awarded professional 9-dan by the Chinese Weiqi Association.[10]

After winning its three-game match against Ke Jie, the top-rated world Go player, AlphaGo retired. DeepMind also disbanded the team that worked on the game to focus on AI research in other areas.[12] After the Summit, Deepmind published 50 full length AlphaGo vs AlphaGo matches, as a gift to the Go community.[51]

AlphaGo's team published an article in the journal Nature on 19 October 2017, introducing AlphaGo Zero, a version without human data and stronger than any previous human-champion-defeating version.[52] By playing games against itself, AlphaGo Zero surpassed the strength of AlphaGo Lee in three days by winning 100 games to 0, reached the level of AlphaGo Master in 21 days, and exceeded all the old versions in 40 days.[53]

In a paper released on arXiv on 5 December 2017, DeepMind claimed that it generalized AlphaGo Zero's approach into a single AlphaZero algorithm, which achieved within 24 hours a superhuman level of play in the games of chess, shogi, and Go by defeating world-champion programs, Stockfish, Elmo, and 3-day version of AlphaGo Zero in each case.[54]

On 11 December 2017, DeepMind released AlphaGo teaching tool on its website[55] to analyze winning rates of different Go openings as calculated by AlphaGo Master.[56] The teaching tool collects 6,000 Go openings from 230,000 human games each analyzed with 10,000,000 simulations by AlphaGo Master. Many of the openings include human move suggestions.[56]

An early version of AlphaGo was tested on hardware with various numbers of CPUs and GPUs, running in asynchronous or distributed mode, with two seconds of thinking time given to each move; the resulting Elo ratings are reported in the original paper.[11] In matches with more time per move, higher ratings are achieved.

In May 2016, Google unveiled its own proprietary hardware "tensor processing units", which it stated had already been deployed in multiple internal projects at Google, including the AlphaGo match against Lee Sedol.[57][58]

In the Future of Go Summit in May 2017, DeepMind disclosed that the version of AlphaGo used in this Summit was AlphaGo Master,[59][60] and revealed that it had measured the strength of different versions of the software. AlphaGo Lee, the version used against Lee, could give AlphaGo Fan, the version used in AlphaGo vs. Fan Hui, three stones, and AlphaGo Master was even three stones stronger.[61]

As of 2016, AlphaGo's algorithm uses a combination of machine learning and tree search techniques, combined with extensive training, both from human and computer play. It uses Monte Carlo tree search, guided by a "value network" and a "policy network," both implemented using deep neural network technology.[4][11] A limited amount of game-specific feature detection pre-processing (for example, to highlight whether a move matches a nakade pattern) is applied to the input before it is sent to the neural networks.[11]

The system's neural networks were initially bootstrapped from human gameplay expertise. AlphaGo was initially trained to mimic human play by attempting to match the moves of expert players from recorded historical games, using a database of around 30 million moves.[20] Once it had reached a certain degree of proficiency, it was trained further by being set to play large numbers of games against other instances of itself, using reinforcement learning to improve its play.[4] To avoid "disrespectfully" wasting its opponent's time, the program is specifically programmed to resign if its assessment of win probability falls beneath a certain threshold; for the match against Lee, the resignation threshold was set to 20%.[63]

Toby Manning, the match referee for AlphaGo vs. Fan Hui, has described the program's style as "conservative".[64] AlphaGo's playing style strongly favours greater probability of winning by fewer points over lesser probability of winning by more points.[17] Its strategy of maximising its probability of winning is distinct from what human players tend to do which is to maximise territorial gains, and explains some of its odd-looking moves.[65] It makes a lot of opening moves that have never or seldom been made by humans, while avoiding many second-line opening moves that human players like to make. It likes to use shoulder hits, especially if the opponent is over concentrated.[citation needed]

AlphaGo's March 2016 victory was a major milestone in artificial intelligence research.[66] Go had previously been regarded as a hard problem in machine learning that was expected to be out of reach for the technology of the time.[66][67][68] Most experts thought a Go program as powerful as AlphaGo was at least five years away;[69] some experts thought that it would take at least another decade before computers would beat Go champions.[11][70][71] Most observers at the beginning of the 2016 matches expected Lee to beat AlphaGo.[66]

With games such as checkers (that has been "solved" by the Chinook draughts player team), chess, and now Go won by computers, victories at popular board games can no longer serve as major milestones for artificial intelligence in the way that they used to. Deep Blue's Murray Campbell called AlphaGo's victory "the end of an era... board games are more or less done and it's time to move on."[66]

When compared with Deep Blue or with Watson, AlphaGo's underlying algorithms are potentially more general-purpose, and may be evidence that the scientific community is making progress towards artificial general intelligence.[17][72] Some commentators believe AlphaGo's victory makes for a good opportunity for society to start discussing preparations for the possible future impact of machines with general purpose intelligence. (As noted by entrepreneur Guy Suter, AlphaGo itself only knows how to play Go, and doesn't possess general-purpose intelligence: "[It] couldn't just wake up one morning and decide it wants to learn how to use firearms"[66]) In March 2016, AI researcher Stuart Russell stated that "AI methods are progressing much faster than expected, (which) makes the question of the long-term outcome more urgent," adding that "in order to ensure that increasingly powerful AI systems remain completely under human control... there is a lot of work to do."[73] Some scholars, such as Stephen Hawking, warned (in May 2015 before the matches) that some future self-improving AI could gain actual general intelligence, leading to an unexpected AI takeover; other scholars disagree: AI expert Jean-Gabriel Ganascia believes that "Things like 'common sense'... may never be reproducible",[74] and says "I don't see why we would speak about fears. On the contrary, this raises hopes in many domains such as health and space exploration."[73] Computer scientist Richard Sutton said "I don't think people should be scared... but I do think people should be paying attention."[75]

In China, AlphaGo was a "Sputnik moment" which helped convince the Chinese government to prioritize and dramatically increase funding for artificial intelligence.[76]

In 2017, the DeepMind AlphaGo team received the inaugural IJCAI Marvin Minsky Medal for Outstanding Achievements in AI. "AlphaGo is a wonderful achievement, and a perfect example of what the Minsky Medal was initiated to recognise," said Professor Michael Wooldridge, Chair of the IJCAI Awards Committee. "What particularly impressed IJCAI was that AlphaGo achieves what it does through a brilliant combination of classic AI techniques as well as the state-of-the-art machine learning techniques that DeepMind is so closely associated with. It's a breathtaking demonstration of contemporary AI, and we are delighted to be able to recognise it with this award."[77]

Go is a popular game in China, Japan and Korea, and the 2016 matches were watched by perhaps a hundred million people worldwide.[66][78] Many top Go players characterized AlphaGo's unorthodox plays as seemingly-questionable moves that initially befuddled onlookers, but made sense in hindsight:[70] "All but the very best Go players craft their style by imitating top players. AlphaGo seems to have totally original moves it creates itself."[66] AlphaGo appeared to have unexpectedly become much stronger, even when compared with its October 2015 match[79] where a computer had beaten a Go professional for the first time ever without the advantage of a handicap.[80] The day after Lee's first defeat, Jeong Ahram, the lead Go correspondent for one of South Korea's biggest daily newspapers, said "Last night was very gloomy... Many people drank alcohol."[81] The Korea Baduk Association, the organization that oversees Go professionals in South Korea, awarded AlphaGo an honorary 9-dan title for exhibiting creative skills and pushing forward the game's progress.[82]

China's Ke Jie, an 18-year-old generally recognized as the world's best Go player at the time,[32][83] initially claimed that he would be able to beat AlphaGo, but declined to play against it for fear that it would "copy my style".[83] As the matches progressed, Ke Jie went back and forth, stating that "it is highly likely that I (could) lose" after analysing the first three matches,[84] but regaining confidence after AlphaGo displayed flaws in the fourth match.[85]

Toby Manning, the referee of AlphaGo's match against Fan Hui, and Hajin Lee, secretary general of the International Go Federation, both reason that in the future, Go players will get help from computers to learn what they have done wrong in games and improve their skills.[80]

After game two, Lee said he felt "speechless": "From the very beginning of the match, I could never manage an upper hand for one single move. It was AlphaGo's total victory."[86] Lee apologized for his losses, stating after game three that "I misjudged the capabilities of AlphaGo and felt powerless."[66] He emphasized that the defeat was "Lee Se-dol's defeat" and "not a defeat of mankind".[26][74] Lee said his eventual loss to a machine was "inevitable" but stated that "robots will never understand the beauty of the game the same way that we humans do."[74] Lee called his game four victory a "priceless win that I (would) not exchange for anything."[26]

Facebook has also been working on its own Go-playing system, darkforest, which likewise combines machine learning and Monte Carlo tree search (a minimal sketch of that combination follows below).[64][87] Although a strong player against other computer Go programs, as of early 2016 it had not yet defeated a professional human player.[88] Darkforest has lost to CrazyStone and Zen, and is estimated to be of similar strength to those two programs.[89]
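For readers curious about the machinery behind these programs, here is a minimal, illustrative sketch of Monte Carlo tree search guided by a learned policy prior, in the spirit of (but far simpler than) darkforest and AlphaGo. The `GoState` interface, `policy_prior` function, and `simulate` rollout below are hypothetical stand-ins for this sketch, not any real program's API.

```python
import math

# Illustrative sketch: Monte Carlo tree search with a learned policy prior.
# `root_state` (with .copy() and .play()), `policy_prior`, and `simulate`
# are hypothetical stand-ins, not darkforest's or AlphaGo's actual code.

class Node:
    def __init__(self, prior):
        self.prior = prior        # probability the policy model assigns to this move
        self.visits = 0
        self.value_sum = 0.0
        self.children = {}        # move -> Node

    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.5):
    # PUCT rule: trade off the average value observed so far (exploitation)
    # against the policy prior, discounted by visit count (exploration).
    def score(child):
        return child.value() + c_puct * child.prior * math.sqrt(node.visits) / (1 + child.visits)
    return max(node.children.items(), key=lambda kv: score(kv[1]))

def search(root_state, policy_prior, simulate, num_simulations=800):
    root = Node(prior=1.0)
    for _ in range(num_simulations):
        state, node, path = root_state.copy(), root, [root]
        # 1. Selection: descend the tree using the PUCT rule.
        while node.children:
            move, node = select_child(node)
            state.play(move)
            path.append(node)
        # 2. Expansion: ask the learned policy for move priors at the leaf.
        for move, p in policy_prior(state):
            node.children[move] = Node(prior=p)
        # 3. Evaluation: a rollout here; AlphaGo also uses a value network.
        outcome = simulate(state)
        # 4. Backup: propagate the result toward the root, flipping the
        #    sign each ply because the players alternate.
        for n in reversed(path):
            n.visits += 1
            n.value_sum += outcome
            outcome = -outcome
    # Play the most-visited move, as AlphaGo-style agents typically do.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```

The learned network narrows the search to promising moves, and the tree search corrects the network's misjudgments; that interplay is the core idea shared by darkforest and AlphaGo.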

DeepZenGo, a system developed with support from video-sharing website Dwango and the University of Tokyo, lost 2–1 in November 2016 to Go master Cho Chikun, who holds the record for the largest number of Go title wins in Japan.[90][91]

A 2018 paper in Nature cited AlphaGo's approach as the basis for a new means of computing potential pharmaceutical drug molecules.[92]

AlphaGo Master (white) v. Tang Weixing (31 December 2016); AlphaGo won by resignation. White 36 was widely praised.

The AlphaGo documentary film[93][94] raised hopes that Lee Sedol and Fan Hui would benefit from their experience of playing AlphaGo, but as of May 2018 their ratings were little changed; Lee Sedol was ranked 11th in the world, and Fan Hui 545th.[95] However, the overall Go community may have moved forward in how it plays the game.[citation needed]

Original post:

AlphaGo - Wikipedia

Written by admin

December 11th, 2019 at 8:48 pm

Posted in Alphago

DeepMind co-founder moves to Google as the AI lab positions itself for the future – The Verge

Posted: at 8:48 pm


without comments

The personnel changes at Alphabet continue, this time with Mustafa Suleyman, one of the three co-founders of the company's influential AI lab DeepMind, moving to Google.

Suleyman announced the news on Twitter, saying that after a wonderful decade at DeepMind, he would be joining Google to work with the company's head of AI, Jeff Dean, and its chief legal officer, Kent Walker. The exact details of Suleyman's new role are unclear, but a representative for the company told The Verge it would involve work on AI policy.

The move is notable, though, as it was reported earlier this year that Suleyman had been placed on leave from DeepMind. (DeepMind disputed these reports, saying it was a mutual decision intended to give Suleyman "time out ... after 10 hectic years.") Some speculated that Suleyman's move was the fallout of reported tensions between DeepMind and Google, as the former struggled to commercialize its technology.

DeepMind breaks ground in AI research, but spends a lot of money doing it

Although DeepMind has achieved a number of research milestones in the AI world, most notably the success of its AlphaGo program in 2016, the lab has also recorded significant financial losses. In 2018, it doubled its revenues to £102.8 million ($135 million), but its expenditures also rose, to £470.2 million ($618 million), and it recorded a total debt of more than £1 billion ($1.3 billion).

Suleyman, who founded DeepMind in 2010 along with Demis Hassabis (now CEO) and Shane Legg (now chief scientist), had spearheaded the company's health team, which offered the lab one avenue to monetize its research. DeepMind's engineers designed a number of health algorithms that broke new ground, and its team built an assistant app for nurses and doctors that promised to save time and money. But the venture was also criticized strongly for its mishandling of UK medical data, and in 2018 was absorbed into Google Health.

In addition, Suleyman led the "DeepMind for Google" team, which aimed to put the company's research to practical use in Google products, delivering tangible commercial benefits like improved battery life on Android devices and a more natural voice for Google Assistant.

It's difficult to parse the meaning behind Suleyman's move to Google without more details on his new role, but it's clear that DeepMind is still working out how to position itself for the future, as highlighted by the publication of a blog post by Hassabis timed with the announcement of Suleyman's departure.

In the post, Hassabis charts the journey of DeepMind from unlikely start-up to major scientific organization. And although he highlights collaborations the lab has made with other parts of Alphabet, he ultimately focuses on the fundamental breakthroughs and grand challenges that DeepMind hopes to tackle, most notably using artificial intelligence to augment scientific research. It seems clear that long-term research, not short-term profits, is still the priority for DeepMind's scientists.

Here is the original post:

DeepMind co-founder moves to Google as the AI lab positions itself for the future - The Verge

Written by admin

December 11th, 2019 at 8:48 pm

Posted in Alphago

Biggest scientific discoveries of the 2010s decade: photos – Business Insider

Posted: at 8:48 pm


without comments

In March 2010, anthropologists discovered a tiny, lone finger bone in the Denisova cave in Siberia. They determined it belonged to a previously undiscovered species of human ancestor. (Pictured: a replica of a Denisovan finger bone fragment, originally found in Denisova Cave in 2008, at the Museum of Natural Sciences in Brussels, Belgium. Thilo Parg/Wikimedia Commons)

Genetic analysis revealed that Denisovans (named after the cave in which they were found) were an enigmatic offshoot of Neanderthals.

Thus far, fossilized Denisovan remains have only been found in Siberia and Tibet. The species disappeared about 50,000 years ago but passed some of their genetic makeup to Homo sapiens. Denisovan DNA can be found in the genes of modern humans across Asia and some Pacific islands; up to 5% of modern Papua New Guinea residents' DNA shows remnants of interbreeding with Denisovans.

People in Tibet today also possess some Denisovan traits, which appear to help Sherpas weather high altitudes.

Scientists discovered that both Neanderthals and Denisovans interbred with modern humans extensively.

Curiosity is the largest and most capable rover ever sent to Mars. It joined fellow rover Opportunity in searching the red planet for signs of water and clues about whether Mars was capable of supporting microbial lifeforms.

The Kepler mission was charged with finding and identifying Earth-like planets in our galaxy that exist within a star's "Goldilocks," or habitable, zone. Kepler-22b is 600 light-years away.

Planets in habitable zones are capable of hosting liquid water, one of the requisites for being considered Earth-like.

NASA launched Voyager 1 in 1977. After flying by Jupiter and Saturn, Voyager 1 crossed into interstellar space. It continues to collect data to this day.

In 2019, Voyager 1's successor, Voyager 2, also entered interstellar space. Both probes have been flying longer than any other spacecraft in history.

Voyager 2 has beamed back unprecedented data about previously unknown boundary layers at the far edge of our solar system, an area known as the heliopause. The discovery of these boundary layers suggests there are stages in the transition from our solar bubble to interstellar space that scientists did not previously know about.

SpaceX's groundbreaking spaceship was called Dragon.

Previously, only four governments, the United States, Russia, Japan, and the European Space Agency, had achieved this challenging technical feat.

Seven years later, SpaceX launched Dragon's successor, Crew Dragon, into orbit for the first time. Crew Dragon is designed to ferry astronauts to the ISS; its 2019 trip marked the first time that a commercial spaceship designed for humans had ever left Earth.

The Higgs boson is nicknamed the "God particle" because it gives mass to all other fundamental particles in the universe that have mass, like electrons and protons.

Scientists knew a particle akin to the Higgs boson had to exist (otherwise nothing in the universe would have mass, and we wouldn't exist) but had failed to find evidence of such a particle until 2012.

CRISPR-Cas9 technology enables researchers to edit parts of the genome by removing, adding, or altering sections of DNA. Since 2012, scientists have edited mosquito, mushroom, and lizard DNA, among others. In 2018, a Chinese scientist announced he had edited the genetic information of two human embryos.

This discovery made Europa only the second known oceanic world in our solar system aside from Earth; NASA observed jets of water vapor spewing from Saturn's moon Enceladus in 2005.

The presence of liquid water and ice make these two moons ideal places to search for life in our corner of the galaxy.

Since 2013, water has also been discovered on the dwarf planet Pluto, on Neptune's moon Triton, and on multiple other moons of Jupiter and Saturn.

In September 2012, NASA announced its Curiosity rover had identified gravel made by an ancient river in Mars' Gale Crater.

Then in March 2013, scientists found chemical ingredients for life (sulfur, nitrogen, hydrogen, oxygen, phosphorus, and carbon) in powder that Curiosity had drilled from rock near the ancient streambed.

"A fundamental question for this mission is whether Mars could have supported a habitable environment," Michael Meyer, who worked as the lead scientist for NASA's Mars Exploration Program at the time, said in a press release about the finding. "From what we know now, the answer is yes."

In the following years, evidence has mounted that the planet was once home to a vast ocean.

After three years of studying Mars, Italian scientists determined in July 2018 that it's possible the red planet has a 20-kilometer-wide lake of liquid water beneath its polar ice cap today.

"If these researchers are right, this is the first time we've found evidence of a large water body on Mars," Cassie Stuurman, a geophysicist at the University of Texas,told the Associated Press.

Other parts of Mars are too cold for water to stay liquid unless it's deep underground.

In a March 2019 study, researchers suggested that seasonal flow patterns in Mars's crater walls could come from pressurized groundwater 750 meters below the surface, which travels upward through cracks in the ground.

Researchers found the particles using the IceCube Neutrino Observatory, an array of sensors embedded in Antarctic ice. Neutrinos are nearly massless and almost unstoppable; they move at close to the speed of light and get discharged in the aftermath of exploding stars.

Scientists can use neutrinos to understand events happening in distant galaxies. In 2018, they found more of the particles in Antarctica, then traced them back to the source: a rapidly spinning black hole, millions of times the mass of the sun, that's gobbling up gas and dust.

The burger, which took two years and $325,000 to make, consists of 20,000 thin strips of cow muscle tissue that were grown in a Netherlands laboratory.

Since 2013, the lab-grown meat industry has grown in popularity and dropped in price. In 2015, one of the researchers responsible for the first lab-grown burger said the per-pound cost had dropped to $37.

It took Rosetta 10 years to reach and orbit the comet, then launch a lander down to the surface.

Rosetta's lander, Philae, took the first-ever surface images of a comet.

Two spelunkers had accidentally stumbled across the Homo naledi fossils two years earlier, in a hidden cave 100 feet below the surface.

All told, the chamber contained 1,550 bones belonging to at least 15 individuals who all lived between 330,000 and 250,000 years ago.

The epigenome is made up of chemicals and proteins that can attach to DNA and modify its function, turning our genes on and off.

An individual's lifestyle and environment, factors like whether they smoke or what their diet looks like, can prompt sometimes deadly changes in their epigenome that can cause cancer.

Mapping the epigenome may help scientists understand how tumors develop and cancer spreads.

NASA's Cassini spacecraft found that Enceladus emits plumes of water into space following the probe's arrival in 2004. But in 2015, scientists confirmed that the source of these plumes was a giant saltwater ocean hidden beneath the moon's icy crust.

That wasn't the first time AI beat humans in a complex game.

In 2011, IBM's supercomputer, Watson, defeated two "Jeopardy!" champions, including Ken Jennings, in a three-day contest.

A year after AlphaGo's success, an AI named Libratus beat four of the world's top professional players in 120,000 hands of no-limit, two-player poker. Then, in 2019, another DeepMind AI program, AlphaStar, bested 99.8% of human players in the popular video game "StarCraft II."

The catastrophic collision created ripples in space-time, also known as gravitational waves. Einstein predicted the existence of these gravitational waves in 1915, but he thought they'd be too weak to ever pick up on Earth. New detection tools have proved otherwise.

This collision was the first event scientists observed using gravitational-wave detectors. Then in 2017, they observed two neutron stars merging. In August 2019, astrophysicists detected the billion-year-old aftermath of a collision between a black hole and a neutron star (the super-dense remnant of a dead star).

The lost land of Zealandia sits on the ocean floor between New Zealand and New Caledonia.

It wasn't always sunken; researchers have found fossils that suggested novel kinds of plants and organisms once lived there. Some argue that Zealandia should be counted alongside our (more visible) seven continents.

In 2019, scientists found that another ancient continent had slid under what is now southern Europe about 120 million years ago. The researchers named this continent Greater Adria. Its uppermost regions formed mountain ranges across Europe, like the Alps.

All living creatures' DNA is made up of two types of base pairs: A-T (adenine-thymine) and G-C (guanine-cytosine). This four-letter alphabet forms the basis for all genetic information in the natural world.

But scientists invented two new letters, an unnatural pair of X and Y bases that they seamlessly integrated into the genetic alphabet of E. coli bacteria. (A toy sketch of the expanded alphabet follows below.)
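To make the idea of an expanded genetic alphabet concrete, here is a toy sketch of complementary-strand pairing with the synthetic pair added. The "X" and "Y" names follow the article's shorthand for the unnatural bases, and the mapping is purely illustrative, not a model of the underlying chemistry.

```python
# Toy illustration of an expanded genetic alphabet. The natural pairing
# rules are A-T and G-C; the synthetic X-Y pair simply extends the mapping.
# "X" and "Y" follow the article's shorthand for the unnatural bases.

NATURAL_PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}
EXPANDED_PAIRS = {**NATURAL_PAIRS, "X": "Y", "Y": "X"}

def complement(strand: str, pairs=EXPANDED_PAIRS) -> str:
    """Return the complementary strand under the given pairing rules."""
    return "".join(pairs[base] for base in strand)

print(complement("GATTACA"))   # -> CTAATGT, natural four-letter alphabet
print(complement("GAXTYCA"))   # -> CTYAXGT, sequence containing the synthetic pair
```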

Floyd Romesberg, who led the research, previously told Business Insider that his invention could improve the way we treat diseases. For example, it could change the way proteins degrade inside the body, helping drugs stay in your system longer. Romesberg said his team will be investigating how the finding might help cancer treatments and drugs for autoimmune diseases.

In September 2017, Audi announced it had produced the world's first "Level 3" autonomous car, meaning its self-driving mode requires no human feet, hands, or eyes. The A8 sedan can wholly and safely control itself in self-driving mode, only needing a human to take over in the event of bad weather or disappearing lane lines.

Tesla Autopilot drivers, for comparison, have to be ready to take over at any moment, so they're counseled to keep their eyes on the road at all times.

Just two months later, Waymo, the autonomous vehicle division of Alphabet, Google's parent company, revealed that it was testing self-driving minivans in the streets of Arizona without any humans at all behind the wheel. In 2018, Waymo launched the first fully autonomous taxi service in the US.

The two massive, exploded stars hit each other at one-third the speed of light and created gravitational waves. Scientific instruments on Earth picked up the waves from that crash, an event astronomers say only happens once every 100,000 years.

The crash happened 130 million light years away from Earth, researchers discovered. It caused the formation of $100,000,000,000,000,000,000,000,000,000 worth of gold and produced huge stores of silver and platinum, too.

Scientists only had a few weeks to study the interstellar interloper before it got too far, and too dim, to see with Earth-based telescopes.

Guesses as to what the object is run the gamut from comet to asteroid to alien spaceship. One Harvard University astronomer, Avi Loeb, has speculated that 'Oumuamua was an extraterrestrial scout, but nearly all other experts who have studied 'Oumuamua say that hypothesis is extraordinarily unlikely.

Cassini had been exploring Saturn and its moons for 13 years before the probe plunged to its death on September 15, 2017. Scientists planned the crash to ensure that Cassini wouldn't one day run out of fuel and hit one of Saturn's potentially habitable moons (thereby contaminating it with Earthly bacteria).

During its final dive, Cassini beamed back amazing photos of Saturn as we'd never seen the planet before. That last portion of the mission began with a flyby of the planet's moon Titan. Then Cassini jetted through a 1,200-mile opening between Saturn and its rings of ice, an unprecedented feat.

The spacecraft then angled down into the planet's clouds and burned up.

The cure for a form of hereditary blindness called Leber congenital amaurosis is the first gene therapy approved by the FDA for an inherited disease.

The treatment, called Luxturna, is a one-time virus dose that gets injected into a patient's retina. The corrected gene carried by the virus takes over for the flawed, blindness-inducing gene in the eye and produces a key vision-producing protein that patients with the disease normally can't make.

People start noticing a difference in their sight within a month. In clinical trials of the treatment, 13 out of 20 patients saw positive results. However, the treatment costs $425,000 per eye, or $850,000 in total.

He Jiankui claimed to have edited genes in a pair of twins born in China in November 2018. By using the DNA-editing technique called CRISPR, he said, the babies were born immune to HIV.

This type of genetic manipulation is banned in most parts of the world, since any genetic mutations that the babies may have would get passed on to their offspring, with potentially disastrous consequences.

In 2019, the MIT Technology Review released excerpts from Jiankui's research. The unpublished manuscripts revealed that in the process of trying to manipulate the babies' HIV resistance, which some experts say was unsuccessful, Jiankui may have introduced unintended mutations.

NASA's InSight lander spent more than six months careening through space before it landed safely on Martian soil.

The robot is charged with exploring Mars' deep interior and helping scientists understand why Mars wound up a cold desert planet while Earth did not.

InSight has given scientists the unprecedented ability to detect and monitor marsquakes, seismic events deep inside the planet.

Fossil fuels like coal are rich in carbon; when we extract and burn these fuels for energy, that releases carbon dioxide, methane, and other heat-trapping gases into the atmosphere, where they accumulate and heat up the Earth over time.

That's what made 2016 the hottest year on record. So far, 2019 is the second-hottest year since records began 140 years ago, with July being the hottest month ever recorded.

A landmark report by the Intergovernmental Panel on Climate Change (IPCC) warned that slashing greenhouse-gas emissions in the next decade is crucial in order to avoid the worst consequences of severe climate change.

An April 2019 study revealed that the Greenland ice sheet is sloughing off an average of 286 billion tons of ice per year. Two decades ago, the annual average was just 50 billion.

In 2012, Greenland lost more than 400 billion tons of ice.

Antarctica, meanwhile, lost an average of 252 billion tons of ice per year in the last decade. In the 1980s, by comparison, Antarctica lost 40 billion tons of ice annually.

What's more, parts of Thwaites Glacier in western Antarctica are retreating by up to 2,625 feet per year, contributing to 4% of sea-level rise worldwide. A study published in July suggested that Thwaites' melting is likely approaching an irreversible point, after which the entire glacier could collapse into the ocean. If that happens, global sea levels could rise by more than 1.5 feet.

The object, called MU69, is nicknamed Arrokoth, which means "sky" in the Powhatan/Algonquian language (it was previously nicknamed Ultima Thule). It's the most distant object humanity has ever visited.

The New Horizons probe took hundreds of photographs as it flew by the space rock at 32,200 miles per hour.

Images revealed that Arrokoth is flat like a pancake, rather than spherical in shape. The unprecedented data will likely reveal new clues about the solar system's evolution and how planets like Earth formed, though scientists are still receiving and processing the information from the distant probe.

The Japan Aerospace Exploration Agency (JAXA) launched its Hayabusa-2 probe in December 2014. Hayabusa-2 arrived at Ryugu in June 2018, but didn't land on the asteroid's surface until this year.

In order to collect samples from deep within the space rock, Hayabusa-2 blasted a hole in the asteroid before landing. The mission plan calls for the probe to bring those samples back to Earth. By studying Ryugu's innermost rocks and debris, which have been sheltered from the wear and tear of space, scientists hope to learn how asteroids like this may have seeded Earth with key ingredients for life billions of years ago.

The unprecedented photo shows the supermassive black hole at the center of the Messier 87 galaxy, which is about 54 million light-years away from Earth. The black hole's mass is equivalent to 6.5 billion suns.

Though the image is somewhat fuzzy, it showed that, as predicted, black holes look like dark spheres surrounded by a glowing ring of light.

Scientists struggled for decades to capture a black hole on camera, since black holes distort space-time, ensuring that nothing can break free of their gravitational pull, not even light. That's why the image shows a unique shadow in the form of a perfect circle at the center.

In September, scientists announced they'd detected water vapor on a potentially habitable planet for the first time. The planet, K2-18b, is a super-Earth that orbits a red dwarf star 110 light-years away.

NASA's planet-hunting Kepler space telescope discovered K2-18b in 2015, three years before the telescope was shut down. During its nine-year mission, Kepler discovered more than 2,500 exoplanets.

But K2-18b is the only known planet outside our solar system with water, an atmosphere, and a temperature range that could support liquid water on its surface. That makes it our "best candidate for habitability," one researcher said.

In the pilot program, children up to 2 years old in Malawi, Ghana, and Kenya can receive the vaccine. The new vaccine prevented 4 in 10 malaria cases in clinical trials, including 3 in 10 life-threatening cases.

Malaria kills about 435,000 people each year, most of them children.

"We need new solutions to get the malaria response back on track, and this vaccine gives us a promising tool to get there," Tedros Adhanom Ghebreyesus, director-general of the World Health Organization, said in a release. "The malaria vaccine has the potential to save tens of thousands of children's lives."

The vaccine comes in addition to two experimental treatments proven to dramatically boost Ebola survival rates.

The two new treatments, called REGN-EB3 and mAb-114, are cocktails of antibodies that get injected into patients' bloodstreams. These therapies saved about 90% of newly infected patients in the Congo after the WHO declared the Ebola outbreak in Africa to be a global health emergency.

Morgan McFall-Johnsen contributed to this story.

Read the original post:

Biggest scientific discoveries of the 2010s decade: photos - Business Insider

Written by admin

December 11th, 2019 at 8:48 pm

Posted in Alphago
