
Archive for the ‘Machine Learning’ Category

Machine Learning Answers: If Nvidia Stock Drops 10% A Week, What's The Chance It'll Recoup Its Losses In A Month? – Forbes

Posted: December 9, 2019 at 7:51 pm


without comments

Jen-Hsun Huang, president and chief executive officer of Nvidia Corp., gestures as he speaks during the company's event at the 2019 Consumer Electronics Show (CES) in Las Vegas, Nevada, U.S., on Sunday, Jan. 6, 2019. CES showcases more than 4,500 exhibiting companies, including manufacturers, developers and suppliers of consumer technology hardware, content, technology delivery systems and more. Photographer: David Paul Morris/Bloomberg

We found that if Nvidia stock drops 10% or more in a week (5 trading days), there is a solid 36% chance it'll recover 10% or more over the next month (about 20 trading days).

Nvidia stock has seen significant volatility this year. While the company has been impacted by the broader correction in the semiconductor space and the trade war between the U.S. and China, the stock is being supported by a strong long-term outlook for GPU demand amid growing applications in Deep Learning and Artificial Intelligence.

Considering the recent price swings, we started with a simple question that investors could be asking about Nvidia stock: given a certain drop or rise, say a 10% drop in a week, what should we expect for the next week? Is it very likely that the stock will recover the next week? What about the next month or a quarter? You can test a variety of scenarios on the Trefis Machine Learning Engine to calculate, if Nvidia stock dropped, what's the chance it'll rise.

For example, after a 5% drop over a week (5 trading days), the Trefis machine learning engine says the chances of an additional 5% drop over the next month are about 40%. Quite significant, and helpful to know for someone trying to recover from a loss. Knowing what to expect for almost any scenario is powerful. It can help you avoid rash moves. Given the recent volatility in the market and the mix of macroeconomic events (including the trade war with China and interest rate easing by the U.S. Fed), we think investors can prepare better.
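The Trefis engine itself is proprietary, but as a rough sketch of how a conditional probability like this can be estimated from a history of daily closing prices, something along the following lines works; the load_nvda_closes helper is a hypothetical stand-in for whatever price source you use.

    import numpy as np

    def conditional_move_prob(closes, trigger=-0.05, outcome=-0.05,
                              trigger_days=5, outcome_days=20):
        """Estimate P(a move of `outcome` within `outcome_days` days, given a
        move of `trigger` over the previous `trigger_days` days), from daily closes."""
        closes = np.asarray(closes, dtype=float)
        events = hits = 0
        for t in range(trigger_days, len(closes) - outcome_days):
            past = closes[t] / closes[t - trigger_days] - 1.0
            triggered = past <= trigger if trigger < 0 else past >= trigger
            if triggered:
                events += 1
                future = closes[t + 1:t + 1 + outcome_days] / closes[t] - 1.0
                hit = future.min() <= outcome if outcome < 0 else future.max() >= outcome
                hits += hit
        return hits / events if events else float("nan")

    # Chance of a further 5% drop in the month after a 5% weekly drop:
    # closes = conditional_move_prob(load_nvda_closes(), trigger=-0.05, outcome=-0.05)

Counting overlapping windows this way is only one reasonable convention; the exact numbers depend on the sample period and on how events are counted.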

Below, we also discuss a few scenarios and answer common investor questions:

Question 1: Does a rise in Nvidia stock become more likely after a drop?

Answer:

Not really.

Specifically, chances of a 5% rise in Nvidia stock over the next month:

= 40% after Nvidia stock drops by 5% in a week.

versus,

= 44.5% after Nvidia stock rises by 5% in a week.

Question 2: What about the other way around, does a drop in Nvidia stock become more likely after a rise?

Answer:

No.

Specifically, chances of a 5% decline in Nvidia stock over the next month:

= 40% after NVIDIA stock drops by 5% in a week

versus,

= 27% after NVIDIA stock rises by 5% in a week

Question 3: Does patience pay?

Answer:

According to the data and the Trefis machine learning engine's calculations, largely yes!

Given a drop of 5% in Nvidia stock over a week (5 trading days), while there is only about a 28% chance that Nvidia stock will gain 5% over the subsequent week, there is a more than 58% chance this will happen within 6 months.

The table below shows the trend:

[Table: Trefis]

Question 4: What about the possibility of a drop after a rise if you wait for a while?

Answer:

After seeing a rise of 5% over 5 days, the chances of a 5% drop in Nvidia stock are about 30% over the subsequent quarter of waiting (60 trading days). However, this chance drops slightly to about 29% when the waiting period is a year (250 trading days).


Follow this link:

Machine Learning Answers: If Nvidia Stock Drops 10% A Week, What's The Chance It'll Recoup Its Losses In A Month? - Forbes

Written by admin

December 9th, 2019 at 7:51 pm

Posted in Machine Learning

NFL Looks to Cloud and Machine Learning to Improve Player Safety – Which-50

Posted: at 7:51 pm


without comments

America's National Football League is turning to emerging technology to try to solve its ongoing challenges around player safety. The sport's governing body says it has amassed huge amounts of data but wants to apply machine learning to gain better insights and predictive capabilities.

It is hoped the insights will inform new rules, safer equipment, and better injury rehabilitation methods. However, the data will not be available to independent researchers.

Last week the NFL announced a partnership with Amazon Web Services to provide digital services including machine learning and digital twin applications. Terms of the deal were not disclosed.

As the NFL has reached hyper-professionalisation, data suggests player injuries have worsened, particularly head injuries sustained through high-impact collisions. Several retired players have been diagnosed with or report symptoms of chronic traumatic encephalopathy, a neurodegenerative disease which can only be fully diagnosed post mortem.

As scrutiny has grown, the NFL has responded with several rule changes and redesigned player helmets, both initiatives it says have reduced concussions. However, the league has also been accused of failing to notify players of the links between concussions and brain injuries.

"All of our initiatives on the health and safety side started with the engineering roadmap around minimising head impact on field," NFL executive vice president Jeff Miller told Which-50 following the announcement.

Miller, who is responsible for player health and safety, said the new technology is a new opportunity to minimise risk to players.

"I think the speed, the pace of the insights that are available as a result of this [technology] are going to continue towards that same goal, hopefully in a much more efficient, and in fact mature, faster supersized scale."

Miller said the NFL has a responsibility to pass on the insights to lower levels of the game like high school and youth leagues. However, the data will not be available to external researchers initially.

"As we find those insights I think we're going to be able to share those, we're going to be able to share those within the sport and hopefully over time outside of the sport as well."

NFL commissioner Roger Goodell announced the AWS deal, which builds on an existing partnership for game statistics, alongside Andy Jassy, the public cloud provider's CEO, during the AWS re:Invent conference in Las Vegas last week.

Goodell said the NFL had amassed huge amounts of data from sensors and video feeds but needed the AWS tools to better leverage it.

"When you take the combination of that the possibilities are enormous," the NFL boss said. "We want to use the data to change the game. There are very few relationships we get involved with where the partner and the NFL can change the game."

"When we apply next-generation technology to advance player health and safety, everyone wins, from players to clubs to fans."

AWS machine learning tools will be applied to the data to help build a digital athlete, a type of digital twin which can be used to simulate certain scenarios including impacts.

"The outcomes of our collaboration with AWS and what we will learn about the human body and how injuries happen could reach far beyond football," he said.

The author traveled to AWS re:Invent as a guest of Amazon.


See more here:

NFL Looks to Cloud and Machine Learning to Improve Player Safety - Which-50

Written by admin

December 9th, 2019 at 7:51 pm

Posted in Machine Learning

Amazon Wants to Teach You Machine Learning Through Music? – Dice Insights

Posted: at 7:51 pm


without comments

Machine learning has rapidly become one of those buzzwords embraced by companies around the world. Even if they don't fully understand what it means, executives think that machine learning will magically transform their operations and generate massive profits. That's good news for technologists, provided they actually learn the technology's fundamentals, of course.

Amazon wants to help with the learning aspect of things. At this year's AWS re:Invent, the company is previewing the DeepComposer, a 32-key keyboard that's designed to train you in machine learning fundamentals via the power of music.

No, seriously. "AWS DeepComposer is the world's first musical keyboard powered by machine learning to enable developers of all skill levels to learn Generative AI while creating original music outputs," reads Amazon's ultra-helpful FAQ on the matter. "DeepComposer consists of a USB keyboard that connects to the developer's computer, and the DeepComposer service, accessed through the AWS Management Console." There are tutorials and training data included in the package.

Generative AI, the FAQ continues, "allows computers to learn the underlying pattern of a given problem and use this knowledge to generate new content from input (such as image, music, and text)." In other words, you're going to play a really simple song like "Chopsticks," and this machine-learning platform will use that seed to build a four-hour Wagner-style opera. Just kidding! Or are we?

Jokes aside, the idea that a machine-learning platform can generate lots of data based on relatively little input is a powerful one. Of course, Amazon isn't totally altruistic in this endeavor; by serving as a training channel for up-and-coming technologists, the company obviously hopes that more people will turn to it for all of their machine learning and A.I. needs in future years. Those interested can sign up for the preview on a dedicated site.
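To make the learn-a-pattern-then-generate idea concrete, here is a toy sketch, emphatically not how DeepComposer works (the service uses far richer generative models), that learns note-to-note transitions from a short seed melody and samples a new tune from them; the note names and seed are made up for illustration.

    import random
    from collections import defaultdict

    def train_transitions(melody):
        """Count how often each note follows each other note in the seed."""
        counts = defaultdict(lambda: defaultdict(int))
        for prev, nxt in zip(melody, melody[1:]):
            counts[prev][nxt] += 1
        return counts

    def generate(counts, start, length=16):
        """Sample a new melody by walking the learned transition counts."""
        note, out = start, [start]
        for _ in range(length - 1):
            nexts = counts.get(note) or counts[start]  # restart at a dead end
            notes, weights = zip(*nexts.items())
            note = random.choices(notes, weights=weights)[0]
            out.append(note)
        return out

    seed = ["C", "C", "G", "G", "A", "A", "G", "F", "F", "E", "E", "D", "D", "C"]
    print(generate(train_transitions(seed), "C"))

A first-order Markov chain like this captures only the crudest notion of "pattern"; the point is simply that a small seed plus a learned model yields arbitrarily long new output.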

This isn't the first time that Amazon has plunged into machine-learning training, either. Late last year, it introduced AWS DeepRacer, a model racecar designed to teach developers the principles of reinforcement learning. And in 2017, it rolled out the AWS DeepLens camera, meant to introduce the technology world to Amazon's take on computer vision and deep learning.


For those who master the fundamentals of machine learning, the jobs can prove quite lucrative. In September, the IEEE-USA Salary & Benefits Survey suggested that engineers with machine-learning knowledge make an annual average of $185,000. Earlier this year, meanwhile, Indeed pegged the average machine learning engineer salary at $146,085, and its job growth between 2015 and 2018 at 344 percent.

If you're not interested in Amazon's version of a machine-learning education, there are other channels. For example, OpenAI, the sorta-nonprofit foundation (yes, it's as odd as it sounds), hosts what it calls Gym, a toolkit for developing and comparing reinforcement learning algorithms; it also has a set of models and tools, along with a very extensive tutorial in deep reinforcement learning.

Google likewise has a crash course, complete with 25 lessons and 40+ exercises, that's a good introduction to machine learning concepts. Then there's Hacker Noon and its interesting breakdown of machine learning and artificial intelligence.

Once you have a firmer grasp on the core concepts, you can turn to Bloomberg's Foundations of Machine Learning, a free online course that teaches advanced concepts such as optimization and kernel methods. A lot of math is involved.

Whatever learning route you take, it's clear that machine learning skills have incredible value right now. Familiarizing yourself with this technology, whether via traditional lessons or a musical keyboard, can only help your career in tech.

See more here:

Amazon Wants to Teach You Machine Learning Through Music? - Dice Insights

Written by admin

December 9th, 2019 at 7:51 pm

Posted in Machine Learning

Measuring Employee Engagement with A.I. and Machine Learning – Dice Insights

Posted: at 7:51 pm


without comments

A small number of companies have begun developing new tools to measure employee engagement without requiring workers to fill out surveys or sit through focus groups. HR professionals and engagement experts are watching to see if these tools gain traction and lead to more effective cultural and retention strategies.

Two of these companies, Netherlands-based KeenCorp and San Francisco's Cultivate, glean data from day-to-day internal communications. KeenCorp analyzes patterns in an organization's (anonymized) email traffic to gauge changes in the level of tension experienced by a team, department or entire organization. Meanwhile, Cultivate analyzes manager email (and other digital communications) to provide leadership coaching.

These companies are likely to pitch to a ready audience of employers, especially in the technology space. With IT unemployment hovering around 2 percent, corporate and HR leaders can't help but be nervous about hiring and retention. When competition for talent is fierce, companies are likely to add more and more sweeteners to each offer until they reel in the candidates they want. Then there's the matter of retaining those employees in the face of equally sweet counteroffers.

That's why businesses expend a lot of effort and money on keeping their workers engaged. Companies spend more than $720 million annually on engagement, according to the Harvard Business Review. Yet their efforts have managed to engage just 13 percent of the workforce.

Given the competitive advantage tech organizations enjoy when their teams are happy and productive, not to mention the money they save by keeping employees in place, engagement and retention are critical. But HR can't create and maintain an engagement strategy if it doesn't know the workforce's mindset. So companies have to measure, and they measure primarily through surveys.

Today, many experts believe surveys don't provide the information employers need to understand their workforce's attitudes. Traditional surveys have their place, they say, but more effective methods are needed. They see the answer, of course, in artificial intelligence (A.I.) and machine learning (ML).

"One issue with surveys is they only capture a part of the information, and that's the part that the employee is willing to release," said KeenCorp co-founder Viktor Mirovic. When surveyed, respondents often hold back information, he explained, leaving unsaid data that has an effect similar to unheard data.

"I could try to raise an issue that you may not be open to because you have a prejudice," Mirovic added. If tools don't account for what's left unsaid and unheard, he argued, they provide an incomplete picture.

As an analogy, Mirovic described studies of combat aircraft damaged in World War II. By identifying where the most harm occurred, designers thought they could build safer planes. However, the study relied on the wrong data, Mirovic said. Why? Because they only looked at the planes that came back. The aircraft that presumably suffered the most grievous damage, those that were shot down, weren't included in the research.

None of this means traditional surveys don't provide value. "I think the traditional methods are still useful," said Alex Kracov, head of marketing for Lattice, a San Francisco-based workforce management platform that focuses on small and mid-market employers. "Sometimes just the idea of starting to track engagement in the first place, just to get a baseline, is really useful and can be powerful."

For example, Lattice itself recently surveyed its 60 employees for the first time. "It was really interesting to see all of the data available and how people were feeling about specific themes and questions," he said. Similarly, Kracov believes that newer methods such as pulse surveys, which are brief studies conducted at regular intervals, can prove useful in monitoring employee satisfaction, productivity and overall attitude.

Whereas surveys require an employee's active participation, the up-and-coming tools don't ask them to do anything more than their work. When KeenCorp's technology analyzes a company's email traffic, it's looking for changes in the patterns of word use and compositional style. Fluctuations in the product's index signify changes in collective levels of tension. When a change is flagged, HR can investigate to determine why attitudes are in flux and then proceed accordingly, either solving a problem or learning a lesson.
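KeenCorp has not published its algorithm, but a minimal sketch of the general idea, scoring how far the word-use pattern in a recent batch of (anonymized) messages drifts from a baseline period, could look like the following; the whitespace tokenizer, the Jensen-Shannon distance used as the "index," and the sample messages are all illustrative assumptions rather than the vendor's method.

    from collections import Counter
    import math

    def word_distribution(messages):
        """Relative frequency of each word across a batch of messages."""
        counts = Counter(w.lower() for m in messages for w in m.split())
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()}

    def js_distance(p, q):
        """Jensen-Shannon distance between two word distributions (0 = identical)."""
        words = set(p) | set(q)
        m = {w: 0.5 * (p.get(w, 0.0) + q.get(w, 0.0)) for w in words}
        def kl(a):
            return sum(a[w] * math.log2(a[w] / m[w]) for w in a if a[w] > 0)
        return math.sqrt(0.5 * kl(p) + 0.5 * kl(q))

    # Made-up snippets standing in for two batches of anonymized internal email.
    baseline_msgs = ["great progress on the launch", "thanks team, nice work this week"]
    recent_msgs = ["another deadline slipped again", "who signed off on this change"]

    tension_index = js_distance(word_distribution(baseline_msgs),
                                word_distribution(recent_msgs))
    print(tension_index)  # a jump relative to its usual range is what would flag HR to investigate

In practice such an index would be tracked over time per team or department, with flags raised only on sustained shifts; the real product presumably weighs phrasing and style far more subtly than raw word counts.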

"When I ask you a question, you have to think about the answer," Mirovic said. "Once you think about the answer, you start to include all kinds of other attributes. You know, you're my boss or you've just given me a raise or you're married to my sister. Those could all affect my response. What we try to do is go in as objectively as possible, without disturbing people as we observe them in their natural habitats."

See the original post:

Measuring Employee Engagement with A.I. and Machine Learning - Dice Insights

Written by admin

December 9th, 2019 at 7:51 pm

Posted in Machine Learning

Cloudy with a chance of neurons: The tools that make neural networks work – Ars Technica

Posted: at 7:51 pm


without comments

Machine learning is really good at turning pictures of normal things into pictures of eldritch horrors.

Jim Salter

Artificial Intelligence, or, if you prefer, Machine Learning, is today's hot buzzword. Unlike many buzzwords that have come before it, though, this stuff isn't vaporware dreams; it's real, it's here already, and it's changing your life whether you realize it or not.

Before we go too much further, let's talk quickly about that term "Artificial Intelligence." Yes, it's warranted; no, it doesn't mean KITT from Knight Rider, or Samantha, the all-too-human unseen digital assistant voiced by Scarlett Johansson in 2013's Her. Aside from being fictional, KITT and Samantha are examples of strong artificial intelligence, also known as Artificial General Intelligence (AGI). On the other hand, artificial intelligence, without the "strong" or "general" qualifiers, is an established academic term dating back to the 1955 proposal for the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), written by Professors John McCarthy and Marvin Minsky.

All "artificial intelligence" really means is a system that emulates problem-solving skills normally seen in humans or animals. Traditionally, there are two branches of AIsymbolic and connectionist. Symbolic means an approach involving traditional rules-based programminga programmer tells the computer what to expect and how to deal with it, very explicitly. The "expert systems" of the 1980s and 1990s were examples of symbolic (attempts at) AI; while occasionally useful, it's generally considered impossible to scale this approach up to anything like real-world complexity.


Artificial Intelligence in the commonly used modern sense almost always refers to connectionist AI. Connectionist AI, unlike symbolic AI, isn't directly programmed by a human. Artificial neural networks are the most common type of connectionist AI, also sometimes referred to as machine learning. My colleague Tim Lee just got done writing about neural networks last week; you can get caught up right here.

If you wanted to build a system that could drive a car, instead of programming it directly you might attach a sufficiently advanced neural network to its sensors and controls, and then let it "watch" a human driving for tens of thousands of hours. The neural network begins to attach weights to events and patterns in the data flow from its sensors that allow it to predict acceptable actions in response to various conditions. Eventually, you might give the network conditional control of the car's controls and allow it to accelerate, brake, and steer on its own, but still with a human available. The partially trained neural network can continue learning in response to when the human assistant takes the controls away from it. "Whoops, shouldn't have done that," and the neural network adjusts weighted values again.
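A heavily simplified sketch of that "watch a human drive" loop is ordinary supervised learning on logged (sensor reading, human action) pairs. The toy network, random stand-in data, and feature sizes below are illustrative assumptions written in PyTorch, not any real driving stack.

    import torch
    import torch.nn as nn

    # Logged demonstrations: each row is a snapshot of sensor readings (speed,
    # lane offset, distance to the car ahead, ...) paired with the action the
    # human took at that moment (steering, throttle). Random stand-ins here.
    sensors = torch.randn(10_000, 8)
    actions = torch.randn(10_000, 2)

    policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for epoch in range(20):
        predicted = policy(sensors)          # what the network would have done
        loss = loss_fn(predicted, actions)   # distance from the human's action
        optimizer.zero_grad()
        loss.backward()                      # nudge the weights toward the human
        optimizer.step()

    # Each later human correction ("whoops, shouldn't have done that") becomes
    # another (sensor, corrected action) pair to fine-tune on.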

Sounds very simple, doesn't it? In practice, not so much: there are many different types of neural networks (simple, convolutional, generative adversarial, and more), and none of them is very bright on its own; the brightest is roughly similar in scale to a worm's brain. Most complex, really interesting tasks will require networks of neural networks that preprocess data to find areas of interest, pass those areas of interest onto other neural networks trained to more accurately classify them, and so forth.

One last piece of the puzzle is that, when dealing with neural networks, there are two major modes of operation: inference and training. Training is just what it sounds like: you give the neural network a large batch of data that represents a problem space, and let it chew through it, identifying things of interest and possibly learning to match them to labels you've provided along with the data. Inference, on the other hand, is using an already-trained neural network to give you answers in a problem space that it understands.

Both inference and training workloads can operate several orders of magnitude more rapidly on GPUs than on general-purpose CPUs, but that doesn't necessarily mean you want to do absolutely everything on a GPU. It's generally easier and faster to run small jobs directly on CPUs rather than invoking the initial overhead of loading models and data into a GPU and its onboard VRAM, so you'll very frequently see inference workloads run on standard CPUs.
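A compact sketch of the two modes and that CPU-versus-GPU trade-off, with PyTorch used purely as an illustrative framework (the article does not name one) and placeholder sizes and data:

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))

    # Training: large batches and many passes, so it is usually worth paying
    # the cost of moving the model and data onto a GPU when one is available.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    batch = torch.randn(512, 128, device=device)
    labels = torch.randint(0, 10, (512,), device=device)
    loss = nn.functional.cross_entropy(model(batch), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Inference: a single small request, where the transfer overhead often
    # outweighs the GPU speedup, so serving on the CPU is common.
    model.to("cpu").eval()
    with torch.no_grad():
        answer = model(torch.randn(1, 128)).argmax(dim=1)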

Go here to read the rest:

Cloudy with a chance of neurons: The tools that make neural networks work - Ars Technica

Written by admin

December 9th, 2019 at 7:51 pm

Posted in Machine Learning

The Bot Decade: How AI Took Over Our Lives in the 2010s – Popular Mechanics

Posted: at 7:51 pm


without comments

Bots are a lot like humans: Some are cute. Some are ugly. Some are harmless. Some are menacing. Some are friendly. Some are annoying ... and a little racist. Bots serve their creators and society as helpers, spies, educators, servants, lab technicians, and artists. Sometimes, they save lives. Occasionally, they destroy them.

In the 2010s, automation got better, cheaper, and way less avoidable. It's still mysterious, but no longer foreign; the most Extremely Online among us interact with dozens of AIs throughout the day. That means driving directions are more reliable, instant translations are almost good enough, and everyone gets to be an adequate portrait photographer, all powered by artificial intelligence. On the other hand, each of us now sees a personalized version of the world that is curated by an AI to maximize engagement with the platform. And by now, everyone from fruit pickers to hedge fund managers has suffered through headlines about being replaced.

Humans and tech have always coexisted and coevolved, but this decade brought us closer together, and closer to the future, than ever. These days, you don't have to be an engineer to participate in AI projects; in fact, you have no choice but to help, as you're constantly offering your digital behavior to train AIs.

So here's how we changed our bots this decade, how they changed us, and where our strange relationship is going as we enter the 2020s.

All those little operational tweaks in our day come courtesy of a specific scientific approach to AI called machine learning, one of the most popular techniques for AI projects this decade. That's when AI is tasked not only with finding the answers to questions about data sets, but with finding the questions themselves; successful deep learning applications require vast amounts of data and the time and computational power to self-test over and over again.

Deep learning, a subset of machine learning, uses neural networks to extract its own rules and adjust them until it can return the right results; other machine learning techniques might use Bayesian networks, vector maps, or evolutionary algorithms to achieve the same goal.

In January, Technology Review's Karen Hao released an exhaustive analysis of recent papers in AI that concluded that machine learning was one of the defining features of AI research this decade. "Machine learning has enabled near-human and even superhuman abilities in transcribing speech from voice, recognizing emotions from audio or video recordings, as well as forging handwriting or video," Hao wrote. "Domestic spying is now a lucrative application for AI technologies, thanks to this powerful new development."

Hao's report suggests that the age of deep learning is finally drawing to a close, but the next big thing may have already arrived. Reinforcement learning, like generative adversarial networks (GANs), pits neural nets against one another by having one evaluate the work of the other and distribute rewards and punishments accordingly, not unlike the way dogs and babies learn about the world.

The future of AI could be in structured learning. Just as young humans are thought to learn their first languages by processing data input from fluent caretakers with their internal language grammar, computers can also be taught how to teach themselves a task, especially if the task is to imitate a human in some capacity.

This decade, artificial intelligence went from being employed chiefly as an academic subject or science fiction trope to an unobtrusive (though occasionally malicious) everyday companion. AIs have been around in some form since the 1500s or the 1980s, depending on your definition. The first search indexing algorithm was AltaVista in 1995, but it wasn't until 2010 that Google quietly introduced personalized search results for all customers and all searches. What was once background chatter from eager engineers has now become an inescapable part of daily life.


One function after another has been turned over to AI jurisdiction, with huge variations in efficacy and consumer response. The prevailing profit model for most of these consumer-facing applications, like social media platforms and map functions, is for users to trade their personal data for minor convenience upgrades, which are achieved through a combination of technical power, data access, and rapid worker disenfranchisement as increasingly complex service jobs are doubled up, automated away, or taken over by AI workers.

The Harvard social scientist Shoshana Zuboff explained the impact of these technologies on the economy with the term surveillance capitalism. This new economic system, she wrote, "unilaterally claims human experience as free raw material for translation into behavioural data," in a bid to make profit from informed gambling based on predicted human behavior.

We're already using machine learning to make subjective decisions, even ones that have life-altering consequences. Medical applications are only some of the least controversial uses of artificial intelligence; by the end of the decade, AIs were locating stranded victims of Hurricane Maria, controlling the German power grid, and killing civilians in Pakistan.

The sheer scope of these AI-controlled decision systems is why automation has the potential to transform society on a structural level. In 2012, techno-sociologist Zeynep Tufekci pointed out the presence on the Obama reelection campaign of an unprecedented number of data analysts and social scientists, bringing the traditional confluence of marketing and politics into a new age.

Intelligence that relies on data from an unjust world suffers from the principle of "garbage in, garbage out," futurist Cory Doctorow observed in a recent blog post. Diverse perspectives on the design team would help, Doctorow wrote, but when it comes to certain technology, there might be no safe way to deploy.

It doesn't help that data collection for image-based AI has so far taken advantage of the most vulnerable populations first. The Facial Recognition Verification Testing Program is the industry standard for testing the accuracy of facial recognition tech; passing the program is imperative for new FR startups seeking funding.

But the datasets of human faces that the program uses are sourced, according to a report from March, from images of U.S. visa applicants, arrested people who have since died, and children exploited by child pornography. The report found that the majority of data subjects were people who had been arrested on suspicion of criminal activity. None of the millions of faces in the program's data sets belonged to people who had consented to this use of their data.

State-level efforts to regulate AI finally emerged this decade, with some success. The European Union's General Data Protection Regulation (GDPR), enforceable from 2018, limits the legal uses of valuable AI training datasets by defining the rights of the data subject (read: us); the GDPR also prohibits the black box model for machine learning applications, requiring both transparency and accountability on how data are stored and used. At the end of the decade, Google showed the class how not to regulate when it built an external AI ethics panel and then scrapped it a week later, feigning shock at all the negative reception.


Even attempted regulation is a good sign. It means we're looking at AI for what it is: not a new life form that competes for resources, but a formidable weapon. Technological tools are most dangerous in the hands of malicious actors who already hold significant power; you can always hire more programmers. During the long campaign for the 2016 U.S. presidential election, the Putin-backed IRA Twitter botnet campaigns (essentially, teams of semi-supervised bot accounts that spread disinformation on purpose and learn from real propaganda) infiltrated the very mechanics of American democracy.

Keeping up with AI capacities as they grow will be a massive undertaking. Things could still get much, much worse before they get better; authoritarian governments around the world have a tendency to use technology to further consolidate power and resist regulation.

Tech capabilities have long since proved too fast for traditional human lawmakers, but one hint of what the next decade might hold comes from AIs themselves, who are beginning to be deployed as weapons against the exact type of disinformation other AIs help to create and spread. There now exists, for example, a neural net devoted explicitly to the task of identifying neural net disinformation campaigns on Twitter. The neural net's name is Grover, and it's really good at this.

More here:

The Bot Decade: How AI Took Over Our Lives in the 2010s - Popular Mechanics

Written by admin

December 9th, 2019 at 7:51 pm

Posted in Machine Learning

The Top Five AWS Re:Invent 2019 Announcements That Impact Your Enterprise Today – Forbes

Posted: at 7:51 pm


without comments

AWS CEO Andy Jassy discusses a new initiative with the NFL that will transform player health and safety using cloud computing during AWS re:Invent 2019 on Thursday, Dec. 5, 2019 in Las Vegas. (Isaac Brekken/AP Images for NFL)

Last week, I had the pleasure of attending Amazon.com AWS's re:Invent conference in Las Vegas. Re:Invent is AWS's once-a-year mega-event where it announces new services and holds 2,500 educational sessions for builders, CIOs, channel and ecosystem partners, customers, and of course, industry analysts like me. It's a large event at 65,000 attendees but could be much larger, as it sells out after a few days. The attraction is simple. It's the most important cloud show you can attend, and attendees want to get a head start and hands-on with the latest and greatest of what AWS has to offer. AWS made hundreds of announcements and disclosures, and while the Moor Insights & Strategy analyst team will be going deeper on the most impactful announcements, I wanted to make a top 5 list and explain why you should care.

1/ Graviton2 for EC2 M, R, and C 6th Gen instances

AWS Graviton2 instances

AWS says these new instances, based on an Arm N1 core, deliver up to 40% improved price/performance over comparable x86-based Skylake instances. In preview, AWS will make these available for Mainstream (M), memory-intensive (R) and compute-intensive (C) instances.

Why this matters

You may expect that I gave the #1 spot to new chips because I can be a chip nerd. I can be, but when you think about a 40% improvement across IaaS, PaaS and SaaS services that can't easily be copied, I'd say that's important. That's not saying that advantage will last forever, but it's very disruptive right now. First off, I'd say that now no one can say Arm isn't ready for general-purpose datacenter compute. It is, as AWS IaaS is larger than the #2-10 IaaS providers combined. I can see VMware and Oracle accelerating their offerings and maybe SAP doing something with Arm, which they aren't publicly. Finally, don't overthink this as it relates to AMD and Intel. The market is massive and growing, and I don't believe this is anti-Intel or AMD. But if a small AWS team can outperform AMD and Intel on some cloud workloads, you do have to pause. I wrote in-depth on all of this here.

2/ Many new hybrid offerings

Local Zones

While AWS doesn't want to use the term "hybrid" a lot, I think enterprises understand that it means they can extend their AWS experience to on-prem or close to on-prem compute and storage. AWS announced three capabilities here that are important, including going GA on Outposts and announcing Local Zones and Wavelength.

AWS describes it as: "AWS Outposts are fully-managed and configurable racks of AWS-designed hardware that bring native AWS capabilities to on-premises locations using the familiar AWS or VMware control plane and tools. AWS Local Zones place select AWS services close to large population, industry, and IT centers in order to deliver applications with single-digit millisecond latencies, without requiring customers to build and operate datacenters or co-location facilities. AWS Wavelength enables developers to deploy AWS compute and storage at the edge of the 5G network, in order to support emerging applications like machine learning at the edge, industrial IoT, and virtual and augmented reality on mobile and edge devices."

Why this matters

AWS took the hybrid idea and doubled down on it. If you're a customer who wants a low-latency experience on-prem with Outposts, lowest latency in the public cloud with Local Zones, or in the core carrier network with Wavelength, AWS has you covered. When you add this to what AWS is doing with Snowball and where I think it's going, it's hard for me not to say AWS will have the broadest and most diverse hybrid play. After our analyst fireside chat and Q&A with AWS's Matt Garman, I'm convinced we will see tremendous compute and storage variability with all of AWS's offerings. It doesn't have all the blanks filled in, but I believe it will. This isn't for show; it's for world domination.

AWS Wavelength

What I'm most interested to see is how the economics and agility stack up compared to on-prem giants Dell Technologies, Hewlett Packard Enterprise, Cisco Systems, Lenovo and IBM.

3/ SageMaker Studio

SageMaker Studio

AWS says the Amazon SageMaker Studio is the first comprehensive IDE (integrated development environment) for machine learning, allowing developers to build, train, explain, inspect, monitor, debug, and run their machine learning models from a single interface. Developers now have a simple way to manage the end-to-end machine learning development workflows so they can build, train, and deploy high-quality machine learning models faster and easier.

Why this matters

Machine learning is really hard without an army of data scientists and DL/ML-savvy developers. The problem is that these skills are very expensive, hard to attract and retain, not to mention the need for very specialized infrastructure like GPUs, FPGAs and ASICs. AWS did a lot with its base ML services to help solve the infrastructure problem, and with SageMaker to connect building, training, and deploying ML at scale. But how do you support the developer on an end-to-end workflow basis? Enter SageMaker Studio. Studio replaces many other components and toolsets that exist today for building, training, explaining, inspecting, monitoring, debugging, and running models, which may make those ISVs unhappy, but developers could be a lot happier.

I'm very interested in lining this up against what both Google Cloud and Azure are doing and getting customer feedback. With SageMaker Studio, AWS is delivering what enterprises want; the only question is whether it's better than, or a lot less expensive than, what devs can put together themselves or run on another cloud.

4/ Inf1 EC2 instances with Inferentia

Inf1 Instances

Last year, AWS pre-announced Inferentia, its custom silicon for machine learning inference. This year, it announced the availability of instances based on that chip, called EC2 Inf1. AWS explains that "With Amazon EC2 Inf1 instances, customers receive the highest performance and lowest cost for machine learning inference in the cloud. Amazon EC2 Inf1 instances deliver 2x higher inference throughput, and up to 66% lower cost-per-inference than the Amazon EC2 G4 instance family, which was already the fastest and lowest cost instance for machine learning inference available in the cloud."

Why this matters

Machine learning workloads in the cloud are split into training and inference. Enterprises train the model with big data and monster GPUs and then run the model, or infer, on smaller silicon close to the edge. Currently, the highest-performance training and inference occurs on NVIDIA GPUs, namely the V100 and G4. Most inference is done on a CPU for lower cost and latency purposes, as described by Amazon retail gurus during the last two Xeon launches. While I am sure NVIDIA is hard at work on its next-generation silicon, this is fascinating as nothing has served as a challenge even to NVIDIA's highest-performance instances. While I haven't done a deep dive yet like the Graviton2 one above, when I do, I will report back, as will ML lead Karl Freund. Whatever the outcome, it's good to see the level of competition rising in this space.

5/ No ML experience required services

AWS came out strong touting new services that don't require ML experience. Think of these as SaaS or high-order PaaS capabilities where you don't need a framework expert or even a data scientist. Amazon said

Why this matters

I will posit that there's more market opportunity for AWS in ML PaaS and SaaS, if for nothing else than the lack of data scientists and framework-savvy developers. If you're not a Fortune 100 company, you're at a distinct disadvantage in attracting and retaining those resources, and I doubt they can be at the scale you need them. Also, as AWS does most of its business in IaaS, there's just more opportunity in PaaS and SaaS.

AWS ML Stack

Kendra sounds incredible, and it will have an immense amount of competition from Azure and Google Cloud. Azure likely already has a lot of the enterprise data through Office 365, Teams and Skype, and Google is good at search. CodeGuru sounds too good to be true but isn't, based on a few developer conversations I had at the show. The only thing limiting this service will be the cost, which I think is a shortsighted objection given what it can save, but it's human nature to not see the big picture. Fraud Detector, like Kendra, will have a lot of competition, especially from IBM, which has been doing this for decades. I love that the service brings its knowledge from Amazon.com's dealings, and I'd be surprised if the website didn't see the most fraud attacks given it does 40% of online etail transactions. Transcribe Medical is a dream come true for surgeons like my brother-in-law, and I hope AWS runs a truck through the aged transcription industry. AWS will have a lot of competition from both Azure and Google Cloud. A2I has been needed in the industry for a while, as no state or federally regulated industry can deal with a black box.

Honorable mentions

There were so many good announcements to choose from that I had to do an honorable mention list with my quick take.

Wrapping up

While it's impossible to do justice to a huge event like AWS re:Invent in a single post, I also think it's important to point out the highlights with some honorable mentions. All in all, AWS answered the hybrid critics and raised the ante, introduced some homegrown silicon that de-commoditizes IaaS, and gave more reasons to use its databases and machine learning services, from newbie to Ph.D.

Moor Insights & Strategy analysts will be diving more into AWS Outposts and Graviton2 (Matt Kimball), Braket (Paul Smith-Goodson), Inf1 (Karl Freund), and overall impressions (Rhett Dillingham).

Read more from the original source:

The Top Five AWS Re:Invent 2019 Announcements That Impact Your Enterprise Today - Forbes

Written by admin

December 9th, 2019 at 7:51 pm

Posted in Machine Learning

