
Christmas gift ideas for wellness lovers, from chocolate crystal meditation to yoga retreats – Evening Standard

Posted: December 11, 2019 at 4:41 am



Is there a person in your life who always has a smoothie in hand and is ready for a 5am SoulCycle class at a moment's notice? This is probably the gift list for them.

If Gwyneth Paltrow's Goop gift guide wasn't quite goopy enough for you this holiday season, we've rounded up some of our favorite wellness gifts for the early-rising, chia-loving exercise-class hopper.

Chocolate Meditation Collection

Chocolate Meditation Collection with Lilly Pulitzer Terri Cashmere scarf (Sara Feigin)

$110 | Vosges | Buy it now

This is the ideal gift for a friend who can tell you what every single type of crystal does. The Chocolate Meditation Collection gives people truffles chosen for their properties along with a crystal pairing and affirmation cards. Simply grab the chocolate that sounds most appealing (there are plenty of inventive flavors, like curry or olive) and its corresponding crystal, then recite the chosen mantra that accompanies both chocolate and crystal.

Bawdy Beauty Butt Masks

Bawdy Beauty Butt Masks, $8 (Sara Feigin)

$34 | Bawdy Beauty | Buy it now

Want to revitalize your body after a particularly intense spin class? Try Bawdy Beauty's butt sheet masks. They tone, detoxify and rejuvenate your skin, with a sheet for each cheek and cheeky sayings printed on them. They're also vegan, plant-based and clean.

Bawdy Beauty encourages users to post 'belfies' in the mask on Instagram. Simply put one on and either wander around your house (although you could scare a flatmate) or relax while watching Netflix (on your stomach, of course).

Yoga Club

$79 | YogaClub | Buy it now

These days, there are subscriptions for everything, whether you're shopping for a discerning cook or a dog-lover. But YogaClub is for your pal who sweats at Y7 religiously. Like other fashion boxes, either you or the gift recipient can fill out a survey covering what they're looking for in athleisure, whether it's mesh cut-outs or colorful prints. The box then arrives monthly.

Toast Full Spectrum Hemp Extract

Toast Full Spectrum Hemp Extract (Sara Feigin)

$55 | Toast Wellness | Buy it now

If you have a pal who won't stop chugging CBD soda, give them the gift of Toast. The brand also offers CBD pre-rolls with no tobacco, but if that's not their vibe, give them the oil, which they can add to their morning wellness smoothie. It's vegan, gluten-free and sugar-free, so it should be ideal for any and all wellness fanatics.

DeoDoc kit

DeoDoc kit, $55 (Sara Feigin)

$70 | DeoDoc Start Kit | Buy it now

What's goopier than an intimate grooming kit? We imagine this would be GP-approved (although it's no crystal). DeoDoc's particular brand of 'intimate skincare' was developed by a doctor and is perfect for your friend who's a modern-day Samantha from Sex and the City - i.e. can't stop gossiping about men at brunch but loves hitting hot yoga. Get them the butt mask, too.

Souljourn Yoga

Souljourn's retreats start at $400 (Souljourn)

$400+ | Souljourn Yoga | Buy it now

If you want to give a gift that's a bit more meaningful than most to your most wellness-obsessed friend, Souljourn Yoga is the way to go. It's a non-profit that hosts immersive retreats all around the world to raise money for girls' education in developing countries. In 2020, it already has retreats planned for Cape Town, Sri Lanka, Peru, Rwanda, Tibet and Morocco - and you could always get yourself a ticket as well. After all, it is a vacation for a good cause.

Dosist Dissolvable Tablets

Dosist tablets (Sara Feigin)

Dosist Bliss Tablets | Buy it now

Dosist is all the rage in Los Angeles - but if you can't make it to LA to grab one of their pens, try the CBD-infused tablets for a post-workout buzz.

SPRI

Weight prices vary (SPRI)

$8 | Deluxe Vinyl Dumbbells | Buy it now

If you want to purchase something useful for the wellness aficionado in your life that lets them pretend they're at their favorite boutique fitness class, try SPRI. The products are used at fitness classes all over, including New York's intense workout at Switch Playground. The best part? The weights come in different colors, so your pal won't mind leaving them lying around her studio apartment if she doesn't have space for an in-home gym.

Mineral Sousa

$70 | Mineral Sousa | Buy it now

(Sara Feigin)

For your friend who loves CrossFit and is always chugging coconut water as part of recovery: gift them this luxurious CBD oil, meant to help with post-workout inflammation.

Simris Algae Pills

Simris Algae Pills (Sara Feigin)

$105 | Simris Algae Pills | Buy it now

Get your favorite wellness friend these on-trend algae pills, made for athletes, mothers or fitness aficionados. The blue bottles look chic on a work desk and the pills themselves contain Omega-3, which is particularly helpful for your vegan pals. They're vegan, gluten-free, non-toxic and '100 percent ocean friendly,' so say goodbye to actual fish oil forever.

LARQ Self-Cleaning Water Bottle

(Larq)

$125 | LARQ | Buy it now

If you have a friend who refuses to put down their S'well water bottle, it's time to upgrade them. LARQ's bottles are sleek and chic - but even better, they purify your water while you drink, neutralizing harmful bacteria that could be lurking.

The White Company Sleep Kit

(Sara Feigin)

$100 | The White Company Sleep Well Gift Set |Buy it now

Once you've worked out, loaded up on CBD and are ready for bed, it's time for natural remedies. The White Company offers a Sleep Well Gift Set that's ideal for your friend who's too wellness-obsessed to even consider taking melatonin. It comes with lotion, a candle and even a sleep spray.


Meditation Cushion Market Upcoming Business Opportunities with New Innovative Solutions – Electronics Reports

Posted: at 4:41 am


New York, NY, Dec 11, 2019 (WiredRelease): The new research report titled Global Meditation Cushion Market Growth and Opportunities 2020-2029 helps readers boost their profits and business deals by obtaining complete insights into the Meditation Cushion industry. The meditation cushion market report also provides an exclusive survey of rising players in the market, based on various aspects of an organization such as profiling, product blueprint, quantity and quality of production, appropriate raw materials, and the financial status of the organization.

Various key dynamics that hold a strong influence over the Meditation Cushion market are analyzed to determine the value, size, and trends regulating the growth of the market. The market's history is also assessed, and possible growth factors, constraints, and opportunities are interpreted to give an in-depth understanding of the market. The Global Meditation Cushion market report delivers specific analytical information that clarifies the future growth trend the global Meditation Cushion market is expected to follow, based on its past and current situation.

In Order To Request For Sample Copy of this report (Use Company eMail ID to Get Higher Priority), Click here at: https://market.us/report/meditation-cushion-market/request-sample/

The report provides knowledge of the leading market players within the Meditation Cushion market. The industry dynamic factors for the market segments are examined in this report. This research report covers the Meditation Cushion growth factors of the global market based on end-users. The report offers the Meditation Cushion market growth rate, size, and forecasts at the global level in addition to the geographic areas: North America, Europe, Asia Pacific, Latin America, and Middle East & Africa.

For Proper Guidance for your Business, Invest On Report Here: https://market.us/purchase-report/?report_id=26212

Know the Reasons to Acquire Meditation Cushion Market Research Report:

1. To prepare a competitive strategy based on the competitive landscape.

2. To build a business strategy by analyzing the high growth and attractive Meditation Cushion market categories.

3. To prepare management and strategic presentations using the Meditation Cushion market data.

4. To organize for a new product launch and inventory in advance.

5. To identify potential business partners, acquisition targets, and buyers.

6. To design capital investment strategies depending on forecasted high potential segments.

Market Segmentation:

Meditation Cushion Market Segment By Top Competitors

Satori Wholesale, Trevida, Peace Yoga, Seat Of Your Soul, Waterglider International, Bean Products

Meditation Cushion Market Segment By Types, Estimates and Forecast, 2020-2029

Kapok Fill, Buckwheat Fill, Memory Foam Fill, Others

Meditation Cushion Market Segment By Applications, Estimates and Forecast, 2020-2029

Commercial, Household

To Get Detailed Information About This Report, Enquire at: https://market.us/report/meditation-cushion-market/#inquiry

Highlights of the following key factors:

Business overview: An overview of the organization's operations, business divisions, and background.

Company history: A timeline of key events associated with the organization.

Business strategy: An analyst's summary of the organization's business strategy.

Major products and services: A list of major brands, products and services of the organization.

Key competitors: A checklist of main competitors to the company.

Company locations and subsidiaries: A list and contact details of key locations and subsidiaries of the organization.

Detailed financial ratios for the past ten years: The latest financial ratios derived from the annual financial statements published by the organization with 10 years of history.

SWOT Analysis: A complete analysis of the organization's strengths, weaknesses, opportunities, and threats.

Moreover, other factors that contribute to the growth of the Meditation Cushion market include supportive government initiatives related to the use of meditation cushions. In addition, high growth potential in emerging economies is expected to create lucrative opportunities for the market during the forecast period (2020-2029).

Major Points Covered in Table of Contents:

1. Global Meditation Cushion Market Synopsis

2. Global Meditation Cushion Market Status and Development

3. Global Meditation Cushion Market Analysis by Manufacturers

4. Global Meditation Cushion Supply (Production), Consumption, Export, Import by Region (2020-2029)

5. Meditation Cushion Production, Revenue (Value), Price Trend by Type

6. Global Meditation Cushion Market Analysis by Application

7. Global Meditation Cushion Manufacturers Profiles/Analysis

8. Meditation Cushion Manufacturing Cost Analysis, Industry Chain, Upstream, and Downstream Customers Analysis

9. Regional and Industry Investment Opportunities & Challenges, Hazards and Affecting Factors

10. Marketing Strategy Analysis, Distributors/Traders

11. Global Meditation Cushion Market Forecast (2020-2029)

Access the complete report details of Global Meditation Cushion Market at: https://market.us/report/meditation-cushion-market/

Get in Touch with Us:

Mr. Benni Johnson

Market.us (Powered By Prudour Pvt. Ltd.)

Send Email: inquiry@market.us

Address: 420 Lexington Avenue, Suite 300, New York City, NY 10170, United States

Tel: +1 718 618 4351

Website: https://market.us

Refer to our most helpful Reports:

Aircraft Parts Manufacturing, Repair And Maintenance Market 2029 Strategic Employment, Economy, Prominent Players Analysis with Global Trends and Traders

Greatest Progress of Commercial Decor Papers Market


Doubting The AI Mystics: Dramatic Predictions About AI Obscure Its Concrete Benefits – Forbes

Posted: December 9, 2019 at 7:52 pm


Digital Human Brain Covered with Networks

Artificial intelligence is advancing rapidly. In a few decades machines will achieve superintelligence and become self-improving. Soon after that happens we will launch a thousand ships into space. These probes will land on distant planets, moons, asteroids, and comets. Using AI and terabytes of code, they will then nanoassemble local particles into living organisms. Each probe will, in fact, contain the information needed to create an entire ecosystem. Thanks to AI and advanced biotechnology, the species in each place will be tailored to their particular plot of rock. People will thrive in low temperatures, dim light, high radiation, and weak gravity. Humanity will become an incredibly elastic concept. In time our distant progeny will build megastructures that surround stars and capture most of their energy. Then the power of entire galaxies will be harnessed. Then life and AI, long a common entity by this point, will construct a galaxy-sized computer. It will take a mind that large about a hundred-thousand years to have a thought. But those thoughts will pierce the veil of reality. They will grasp things as they really are. All will be one. This is our destiny.

Then again, maybe not.

There are, of course, innumerable reasons to reject this fantastic tale out of hand. Here's a quick and dirty one built around Copernicus's discovery that we are not the center of the universe. Most times, places, people, and things are average. But if sentient beings from Earth are destined to spend eons multiplying and spreading across the heavens, then those of us alive today are special. We are among the very few of our kind to live in our cosmic infancy, confined in our planetary cradle. Because we probably are not special, we probably are not at an extreme tip of the human timeline; we're likely somewhere in the broad middle. Perhaps a hundred billion modern humans have existed, across a span of around 50,000 years. To claim in the teeth of these figures that our species is on the cusp of spending millions of years spreading trillions of individuals across this galaxy and others, you must engage in some wishful thinking. You must embrace the notion that we today are, in a sense, back at the center of the universe.

It is in any case more fashionable to speculate about imminent catastrophes. Technology again looms large. In the gray goo scenario, runaway self-replicating nanobots consume all of the Earth's biomass. Thinking along similar lines, philosopher Nick Bostrom imagines an AI-enhanced paperclip machine that, ruthlessly following its prime directive to make paperclips, liquidates mankind and converts the planet into a giant paperclip mill. Elon Musk, when he discusses this hypothetical, replaces paperclips with strawberries, so that he can worry about strawberry fields forever. What Bostrom and Musk are driving at is the fear that an advanced AI being will not share our values. We might accidentally give it a bad aim (e.g., paperclips at all costs). Or it might start setting its own aims. As Stephen Hawking noted shortly before his death, a machine that sees your intelligence the way you see a snail's might decide it has no need for you. Instead of using AI to colonize distant planets, we will use it to destroy ourselves.

When someone mentions AI these days, she is usually referring to deep neural networks. Such networks are far from the only form of AI, but they have been the source of most of the recent successes in the field. A deep neural network can recognize a complex pattern without relying on a large body of pre-set rules. It does this with algorithms that loosely mimic how a human brain tunes neural pathways.

The neurons, or units, in a deep neural network are layered. The first layer is an input layer that breaks incoming data into pieces. In a network that looks at black-and-white images, for instance, each of the first layer's units might link to a single pixel. Each input unit in this network will translate its pixel's grayscale brightness into a number. It might turn a white pixel into zero, a black pixel into one, and a gray pixel into some fraction in between. These numbers will then pass to the next layer of units. Each of the units there will generate a weighted sum of the values coming in from several of the previous layer's units. The next layer will do the same thing to that second layer, and so on through many layers more. The deeper the layer, the more pixels accounted for in each weighted sum.
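
To make the weighted-sum idea concrete, here is a minimal Python sketch of a forward pass. The four-pixel image, layer sizes, random weights, and ReLU activation are all illustrative assumptions, not details from the text.

```python
import numpy as np

def layer(values, weights):
    """Each unit outputs a weighted sum of the previous layer's values
    (a ReLU keeps only the units that 'fire')."""
    return np.maximum(weights @ values, 0.0)

# A made-up 4-pixel grayscale image: 0 = white, 1 = black
pixels = np.array([0.0, 1.0, 0.2, 0.9])

rng = np.random.default_rng(0)   # random weights stand in for learned ones
w1 = rng.normal(size=(3, 4))     # input pixels -> 3 second-layer units
w2 = rng.normal(size=(2, 3))     # second layer -> 2 deeper units

hidden = layer(pixels, w1)
output = layer(hidden, w2)
print(output)                    # larger values = units that "fire"
```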

An early-layer unit will produce a high weighted sum (it will fire, like a neuron does) for a pattern as simple as a black pixel above a white pixel. A middle-layer unit will fire only when given a more complex pattern, like a line or a curve. An end-layer unit will fire only when the pattern (or, rather, the weighted sums of many other weighted sums) presented to it resembles a chair or a bonfire or a giraffe. At the end of the network is an output layer. If one of the units in this layer reliably fires only when the network has been fed an image with a giraffe in it, the network can be said to recognize giraffes.

A deep neural network is not born recognizing objects. The network just described would have to learn from pre-labeled examples. At first the network would produce random outputs. Each time the network did this, however, the correct answers for the labeled image would be run backward through the network. An algorithm would be used, in other words, to move the network's unit weighting functions closer to what they would need to be to recognize a given object. The more samples a network is fed, the more finely tuned and accurate it becomes.
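
A toy version of that feedback loop, assuming a single sigmoid unit and made-up four-pixel "images"; real networks repeat this kind of adjustment across millions of weights.

```python
import numpy as np

# Toy labeled examples: 4-pixel "images" with made-up 0/1 labels
X = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 1., 1.],
              [1., 0., 0., 0.]])
y = np.array([1., 0., 1., 0.])

w = np.zeros(4)                        # the unit's weighting function
for _ in range(1000):
    p = 1 / (1 + np.exp(-(X @ w)))     # current outputs (start uninformative)
    grad = X.T @ (p - y) / len(y)      # error run "backward" through the unit
    w -= 0.5 * grad                    # nudge weights toward correct answers

print(np.round(1 / (1 + np.exp(-(X @ w))), 2))  # now close to the labels
```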

Some deep neural networks do not need spoon-fed examples. Say you want a program equipped with such networks to play chess. Give it the rules of the game, instruct it to seek points, and tell it that a checkmate is worth a hundred points. Then have it use a Monte Carlo method to randomly simulate games. Through trial and error, the program will stumble on moves that lead to a checkmate, and then on moves that lead to moves that lead to a checkmate, and so on. Over time the program will assign value to moves that simply tend to lead toward a checkmate. It will do this by constantly adjusting its networks' unit weighting functions; it will just use points instead of correctly labeled images. Once the networks are trained, the program can win discrete contests in much the way it learned to play in the first place. At each of its turns, the program will simulate games for each potential move it is considering. It will then choose the move that does best in the simulations. Thanks to constant fine-tuning, even these in-game simulations will get better and better.
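
A bare-bones sketch of the Monte Carlo idea, with a hypothetical `game` state object whose `apply`, `legal_moves`, `is_over`, and `score` methods are invented for illustration. AlphaZero's actual search is far more sophisticated, guiding simulations with its networks rather than using pure random playouts.

```python
import random

def playout(state):
    """Finish one game with random moves and return the final score.
    (A real engine would track whose turn it is; omitted for brevity.)"""
    while not state.is_over():
        state = state.apply(random.choice(state.legal_moves()))
    return state.score()

def choose_move(state, n_simulations=200):
    """Pick the move whose random playouts score best on average."""
    def average_score(move):
        results = [playout(state.apply(move)) for _ in range(n_simulations)]
        return sum(results) / n_simulations
    return max(state.legal_moves(), key=average_score)
```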

There is a chess program that operates more or less this way. It is called AlphaZero, and at present it is the best chess player on the planet. Unlike other chess supercomputers, it has never seen a game between humans. It learned to play by spending just a few hours simulating moves with itself. In 2017 it played a hundred games against Stockfish 8, one of the best chess programs to that point. Stockfish 8 examined 70 million moves per second. AlphaZero examined only 80,000. AlphaZero won 28 games, drew 72, and lost zero. It sometimes made baffling moves (to humans) that turned out to be masterstrokes. AlphaZero is not just a chess genius; it is an alien chess genius.

AlphaZero is at the cutting edge of AI, and it is very impressive. But its success is not a sign that AI will take us to the stars (or enslave us) any time soon. In Artificial Intelligence: A Guide for Thinking Humans, computer scientist Melanie Mitchell makes the case for AI sobriety. AI currently excels, she notes, only when there are clear rules, straightforward reward functions (for example, rewards for points gained or for winning), and relatively few possible actions (moves). Take IBM's Watson program. In 2011 it crushed the best human competitors on the quiz show Jeopardy!, leading IBM executives to declare that its successors would soon be making legal arguments and medical diagnoses. It has not worked out that way. Real-world questions and answers in real-world domains, Mitchell explains, have neither the simple short structure of Jeopardy! clues nor their well-defined responses.

Even in the narrow domains that most suit it, AI is brittle. A program that is a chess grandmaster cannot compete on a board with a slightly different configuration of squares or pieces. Unlike humans, Mitchell observes, none of these programs can transfer anything it has learned about one game to help it learn a different game. Because the programs cannot generalize or abstract from what they know, they can function only within the exact parameters in which they have been trained.

A related point is that current AI does not understand even basic aspects of how the world works. Consider this sentence: "The city council refused the demonstrators a permit because they feared violence." Who feared violence, the city council or the demonstrators? Using what she knows about bureaucrats, protestors, and riots, a human can spot at once that the fear resides in the city council. When AI-driven language-processing programs are asked this kind of question, however, their responses are little better than random guesses. "When AI can't determine what 'it' refers to in a sentence," Mitchell writes, quoting computer scientist Oren Etzioni, "it's hard to believe that it will take over the world."

And it is not accurate to say, as many journalists do, that a program like AlphaZero learns by itself. Humans must painstakingly decide how many layers a network should have, how much incoming data should link to each input unit, how fast data should aggregate as it passes through the layers, how much each unit weighting function should change in response to feedback, and much else. These settings and designs, adds Mitchell, must typically be decided anew for each task a network is trained on. It is hard to see nefarious unsupervised AI on the horizon.

The doom camp (AI will murder us) and the rapture camp (it will take us into the mind of God) share a common premise. Both groups extrapolate from past trends of exponential progress. Moore's law (which is not really a law, but an observation) says that the number of transistors we can fit on a computer chip doubles every two years or so. This enables computer processing speeds to increase at an exponential rate. The futurist Ray Kurzweil asserts that this trend of accelerating improvement stretches back to the emergence of life, the appearance of eukaryotic cells, and the Cambrian Explosion. Looking forward, Kurzweil sees an AI singularity (the rise of self-improving machine superintelligence) on the trendline around 2045.
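
As back-of-the-envelope arithmetic, doubling every two years compounds dramatically; the 1971 start date (the Intel 4004) is an illustrative assumption.

```python
# Doubling every two years multiplies transistor counts by 2**(years / 2).
years = 2019 - 1971             # Intel 4004 (1971) to this article (2019)
factor = 2 ** (years / 2)
print(f"about {factor:,.0f}x")  # about 16,777,216x over 48 years
```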

The political scientist Philip Tetlock has looked closely at whether experts are any good at predicting the future. The short answer is that they're terrible at it. But they're not hopeless. Borrowing an analogy from Isaiah Berlin, Tetlock divides thinkers into hedgehogs and foxes. A hedgehog knows one big thing, whereas a fox knows many small things. A hedgehog tries to fit what he sees into a sweeping theory. A fox is skeptical of such theories. He looks for facts that will show he is wrong. A hedgehog gives answers and says "moreover" a lot. A fox asks questions and says "however" a lot. Tetlock has found that foxes are better forecasters than hedgehogs. The more distant the subject of the prediction, the more the hedgehog's performance lags.

Using a theory of exponential growth to predict an impending AI singularity is classic hedgehog thinking. It is a bit like basing a prediction about human extinction on nothing more than the Copernican principle. Kurzweil's vision of the future is clever and provocative, but it is also hollow. It is almost as if huge obstacles to general AI will soon be overcome because the theory says so, rather than because the scientists on the ground will perform the necessary miracles. Gordon Moore himself acknowledges that his law will not hold much longer. (Quantum computers might pick up the baton. We'll see.) Regardless, increased processing capacity might be just a small piece of what's needed for the next big leaps in machine thinking.

When at Thanksgiving dinner you see Aunt Jane sigh after Uncle Bob tells a blue joke, you can form an understanding of what Jane thinks about what Bob thinks. For that matter, you get the joke, and you can imagine analogous jokes that would also annoy Jane. You can infer that your cousin Mary, who normally likes such jokes but is not laughing now, is probably still angry at Bob for spilling the gravy earlier. You know that although you can't see Bob's feet, they exist, under the table. No deep neural network can do any of this, and it's not at all clear that more layers or faster chips or larger training sets will close the gap. We probably need further advances that we have only just begun to contemplate. Enabling machines to form humanlike conceptual abstractions, Mitchell declares, is still an almost completely unsolved problem.

There has been some concern lately about the demise of the corporate laboratory. Mitchell gives the impression that, at least in the technology sector, the corporate basic-research division is alive and well. Over the course of her narrative, labs at Google, Microsoft, Facebook, and Uber make major breakthroughs in computer image recognition, decision making, and translation. In 2013, for example, researchers at Google trained a network to create vectors among a vast array of words. A vector set of this sort enables a language-processing program to define and use a word based on the other words with which it tends to appear. The researchers put their vector set online for public use. Google is in some ways the protagonist of Mitchell's story. It is now an applied AI company, in Mitchell's words, that has placed machine thinking at the center of diverse products, services, and blue-sky research.

Google has hired Ray Kurzweil, a move that might be taken as an implicit endorsement of his views. It is pleasing to think that many Google engineers earnestly want to bring on the singularity. The grand theory may be illusory, but the treasures produced in pursuit of it will be real.


Artificial intelligence: How to measure the I in AI – TechTalks

Posted: at 7:52 pm


Image credit: Depositphotos

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.

Last week, Lee Se-dol, the South Korean Go champion who lost in a historic matchup against DeepMind's artificial intelligence algorithm AlphaGo in 2016, declared his retirement from professional play.

"With the debut of AI in Go games, I've realized that I'm not at the top even if I become the number one through frantic efforts," Lee told the Yonhap news agency. "Even if I become the number one, there is an entity that cannot be defeated."

Predictably, Se-dol's comments quickly made the rounds across prominent tech publications, some of them using sensational headlines with AI dominance themes.

Since the dawn of AI, games have been one of the main benchmarks to evaluate the efficiency of algorithms. And thanks to advances in deep learning and reinforcement learning, AI researchers are creating programs that can master very complicated games and beat the most seasoned players across the world. Uninformed analysts have been picking up on these successes to suggest that AI is becoming smarter than humans.

But at the same time, contemporary AI fails miserably at some of the most basic tasks that every human can perform.

This raises the question: does mastering a game prove anything? And if not, how can you measure the level of intelligence of an AI system?

Take the following example. In the picture below, you're presented with three problems and their solutions. There's also a fourth task that hasn't been solved. Can you guess the solution?

You're probably going to think that it's very easy. You'll also be able to solve different variations of the same problem with multiple walls, multiple lines, and lines of different colors, just by seeing these three examples. But currently, there's no AI system, including the ones being developed at the most prestigious research labs, that can learn to solve such a problem with so few examples.

The above example is from "The Measure of Intelligence," a paper by François Chollet, the creator of the Keras deep learning library. Chollet published this paper a few weeks before Lee Se-dol declared his retirement. In it, he provided many important guidelines on understanding and measuring intelligence.

Ironically, Chollet's paper did not receive a fraction of the attention it deserves. Unfortunately, the media is more interested in covering exciting AI news that gets more clicks. The 62-page paper contains a lot of invaluable information and is a must-read for anyone who wants to understand the state of AI beyond the hype and sensation.

But I will do my best to summarize the key recommendations Chollet makes on measuring AI systems and comparing their performance to that of human intelligence.

"The contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks, such as board games and video games," Chollet writes, adding that solely measuring skill at any given task falls short of measuring intelligence.

In fact, the obsession with optimizing AI algorithms for specific tasks has entrenched the community in narrow AI. As a result, work in AI has drifted away from the original vision of developing thinking machines that possess intelligence comparable to that of humans.

"Although we are able to engineer systems that perform extremely well on specific tasks, they still have stark limitations, being brittle, data-hungry, unable to make sense of situations that deviate slightly from their training data or the assumptions of their creators, and unable to repurpose themselves to deal with novel tasks without significant involvement from human researchers," Chollet notes in the paper.

Chollets observations are in line with those made by other scientists on the limitations and challenges of deep learning systems. These limitations manifest themselves in many ways:

Here's an example: OpenAI's Dota-playing neural networks needed 45,000 years' worth of gameplay to reach a professional level. The AI is also limited in the number of characters it can play, and the slightest change to the game rules will result in a sudden drop in its performance.

The same can be seen in other fields, such as self-driving cars. Despite millions of hours of road experience, the AI algorithms that power autonomous vehicles can make stupid mistakes, such as crashing into lane dividers or parked firetrucks.

One of the key challenges that the AI community has struggled with is defining intelligence. Scientists have debated for decades on providing a clear definition that allows us to evaluate AI systems and determine what is intelligent or not.

Chollet borrows the definition given by DeepMind cofounder Shane Legg and AI scientist Marcus Hutter: "Intelligence measures an agent's ability to achieve goals in a wide range of environments."

Key here is "achieve goals" and "wide range of environments." Most current AI systems are pretty good at the first part, which is to achieve very specific goals, but bad at doing so in a wide range of environments. For instance, an AI system that can detect and classify objects in images will not be able to perform some other related task, such as drawing images of objects.

Chollet then examines the two dominant approaches in creating intelligence systems: symbolic AI and machine learning.

Early generations of AI research focused on symbolic AI, which involves creating an explicit representation of knowledge and behavior in computer programs. This approach requires human engineers to meticulously write the rules that define the behavior of an AI agent.

"It was then widely accepted within the AI community that the problem of intelligence would be solved if only we could encode human skills into formal rules and encode human knowledge into explicit databases," Chollet observes.

But rather than being intelligent by themselves, these symbolic AI systems manifest the intelligence of their creators in creating complicated programs that can solve specific tasks.

The second approach, machine learning systems, is based on providing the AI model with data from the problem space and letting it develop its own behavior. The most successful machine learning structure so far is artificial neural networks, which are complex mathematical functions that can create complex mappings between inputs and outputs.

For instance, instead of manually coding the rules for detecting cancer in x-ray slides, you feed a neural network many slides annotated with their outcomes, a process called training. The AI examines the data and develops a mathematical model that represents the common traits of cancer patterns. It can then process new slides and output how likely it is that the patients have cancer.
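
A hedged sketch of what that workflow might look like in Keras (Chollet's own library); the tiny network, the 64x64 slide size, and the random stand-in data are assumptions for illustration, not a real diagnostic model.

```python
import numpy as np
from tensorflow import keras

# Stand-in data: in reality these would be annotated x-ray slides.
x_train = np.random.rand(100, 64, 64, 1)     # 100 fake 64x64 grayscale slides
y_train = np.random.randint(0, 2, size=100)  # fake 0/1 outcome labels

model = keras.Sequential([
    keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(1, activation="sigmoid"),  # outputs a probability
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, verbose=0)  # "training"

new_slides = np.random.rand(2, 64, 64, 1)         # unseen slides
print(model.predict(new_slides))                  # likelihood-style scores
```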

Advances in neural networks and deep learning have enabled AI scientists to tackle many tasks that were previously very difficult or impossible with classic AI, such as natural language processing, computer vision and speech recognition.

Neural network-based models, also known as connectionist AI, are named after their biological counterparts. They are based on the idea that the mind is a blank slate (tabula rasa) that turns experience (data) into behavior. Therefore, the general trend in deep learning has become to solve problems by creating bigger neural networks and providing them with more training data to improve their accuracy.

Chollet rejects both approaches because none of them has been able to create generalized AI that is flexible and fluid like the human mind.

"We see the world through the lens of the tools we are most familiar with. Today, it is increasingly apparent that both of these views of the nature of human intelligence (either a collection of special-purpose programs or a general-purpose tabula rasa) are likely incorrect," he writes.

Truly intelligent systems should be able to develop higher-level skills that can span across many tasks. For instance, an AI program that masters Quake 3 should be able to play other first-person shooter games at a decent level. Unfortunately, the best that current AI systems achieve is local generalization, a limited maneuver room within their own narrow domain.

In his paper, Chollet argues that the generalization or generalization power for any AI system is its ability to handle situations (or tasks) that differ from previously encountered situations.

Interestingly, this is a missing component of both symbolic and connectionist AI. The former requires engineers to explicitly define its behavioral boundary and the latter requires examples that outline its problem-solving domain.

Chollet also goes further and speaks of developer-aware generalization, which is the ability of an AI system to handle situations that neither the system nor the developer of the system have encountered before.

This is the kind of flexibility you would expect from a robo-butler that could perform various chores inside a home without having explicit instructions or training data on them. An example is Steve Wozniak's famous coffee test, in which a robot would enter a random house and make coffee without knowing in advance the layout of the home or the appliances it contains.

Elsewhere in the paper, Chollet makes it clear that AI systems that cheat their way toward their goal by leveraging priors (rules) and experience (data) are not intelligent. For instance, consider Stockfish, the best rule-based chess-playing program. Stockfish, an open-source project, is the result of contributions from thousands of developers who have created and fine-tuned tens of thousands of rules. A neural network-based example is AlphaZero, the multi-purpose AI that has conquered several board games by playing them millions of times against itself.

Both systems have been optimized to perform a specific task by making use of resources that are beyond the capacity of the human mind. The brightest human can't memorize tens of thousands of chess rules. Likewise, no human can play millions of chess games in a lifetime.

"Solving any given task with beyond-human level performance by leveraging either unlimited priors or unlimited data does not bring us any closer to broad AI or general AI, whether the task is chess, football, or any e-sport," Chollet notes.

This is why it's totally wrong to compare Deep Blue, AlphaZero, AlphaStar or any other game-playing AI with human intelligence.

Likewise, other AI models, such as Aristo, the program that can pass an eighth-grade science test, do not possess the same knowledge as a middle school student. Aristo owes its supposed scientific abilities to the huge corpora of knowledge it was trained on, not to its understanding of the world of science.

(Note: Some AI researchers, such as computer scientist Rich Sutton, believe that the true direction for artificial intelligence research should be methods that can scale with the availability of data and compute resources.)

In the paper, Chollet presents the Abstraction and Reasoning Corpus (ARC), a dataset intended to evaluate the efficiency of AI systems and compare their performance with that of human intelligence. ARC is a set of problem-solving tasks that are tailored for both AI and humans.

One of the key ideas behind ARC is to level the playing field between humans and AI. It is designed so that humans can't take advantage of their vast background knowledge of the world to outmaneuver the AI. For instance, it doesn't involve language-related problems, which AI systems have historically struggled with.

On the other hand, its also designed in a way that prevents the AI (and its developers) from cheating their way to success. The system does not provide access to vast amounts of training data. As in the example shown at the beginning of this article, each concept is presented with a handful of examples.
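
For a sense of how little data an ARC solver gets, here is a sketch of a task in the JSON-like shape the public ARC dataset uses; the grids themselves are invented, and cells are color indices 0-9.

```python
task = {
    "train": [  # a handful of demonstration pairs -- no big training set
        {"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]},
        {"input": [[2, 0], [0, 2]], "output": [[0, 2], [2, 0]]},
    ],
    "test": [   # the solver must infer the concept and produce the output
        {"input": [[3, 0], [0, 3]]}
    ],
}
```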

The AI developers must build a system that can handle various concepts such as object cohesion, object persistence, and object influence. The AI system must also learn to perform tasks such as scaling, drawing, connecting points, rotating and translating.

Also, the test dataset, the problems that are meant to evaluate the intelligence of the developed system, are designed in a way that prevents developers from solving the tasks in advance and hard-coding their solution in the program. Optimizing for evaluation sets is a popular cheating method in data science and machine learning competitions.

According to Chollet, ARC only assesses a general form of fluid intelligence, with a focus on reasoning and abstraction. This means that the test favors program synthesis, the subfield of AI that involves generating programs that satisfy high-level specifications. This approach is in contrast with current trends in AI, which are inclined toward creating programs that are optimized for a limited set of tasks (e.g., playing a single game).

In his experiments with ARC, Chollet has found that humans can fully solve ARC tests. But current AI systems struggle with the same tasks. "To the best of our knowledge, ARC does not appear to be approachable by any existing machine learning technique (including Deep Learning), due to its focus on broad generalization and few-shot learning," Chollet notes.

While ARC is a work in progress, it can become a promising benchmark to test the level of progress toward human-level AI. "We posit that the existence of a human-level ARC solver would represent the ability to program an AI from demonstrations alone (only requiring a handful of demonstrations to specify a complex task) to do a wide range of human-relatable tasks of a kind that would normally require human-level, human-like fluid intelligence," Chollet observes.


This 90’s Japanese commercial for Street Fighter Alpha 2 doesn’t make a ton of sense, but it somehow still makes us want to play some Alpha -…

Posted: at 7:52 pm


The world of gaming is celebrating the 25th anniversary of Sony's PlayStation and so we've been seeing a ton of older content from decades past surface on social media.

When we came across this old Japanese Street Fighter Alpha 2 (Zero in Japan) ad (thank you to Goegoezzz for posting), it brought a smile to our faces and we figured it'd likely do the same for you.

The television spot (which we assume is probably from around 1996, the year Alpha 2 came out) sees a hurried Sakura charging through the hustle and bustle of real-life city streets.

She encounters a handful of her fellow Street Fighters along the way bumping into a levitating Dhalsim, passing an angry Chun-Li in the subway, and cutting off M. Bison in traffic, who is apparently an evil dictator by night, but the world's creepiest Lyft driver by day.

Sakura eventually stops, turns to the camera and states, "Ryu, I want to meet you once more." We then get about four and a half seconds of gameplay footage before cutting to the title screen. Perhaps the message is, "rush through your busy day so you can get home and play fighting games," or maybe it's, "we face off against metaphorical rivals at every turn in our daily lives."

Whatever the intended meaning may have been, the good news is that we know not to worry too much about Street Fighter storylines and just enjoy the battle. Check out the nostalgic TV spot right here and share any fond SFA2 memories you have in the comments.



10 Machine Learning Techniques and their Definitions – AiThority

Posted: at 7:52 pm


When one technology replaces another, it's not easy to accurately ascertain how the new technology will impact our lives. With so much buzz around the modern applications of Artificial Intelligence, Machine Learning, and Data Science, it becomes difficult to track the developments of these technologies. Machine Learning, in particular, has undergone a remarkable evolution in recent years. Many Machine Learning (ML) techniques have come to the foreground recently, most of which go beyond the traditionally simple classifications of this highly scientific Data Science specialization.


Let's point out the top ML techniques that industry leaders and investors are keenly following, their definitions, and their commercial applications.

Perceptual Learning is the scientific technique of enabling AI ML algorithms with better perception abilities to categorize and differentiate spatial and temporal patterns in the physical world.

For humans, Perceptual Learning is mostly instinctive and condition-driven. It means humans learn perceptual skills without actual awareness. In the case of machines, these learning skills are mapped implicitly using sensors, mechanoreceptors, and connected intelligent machines.

Most AI ML engineering companies boast of developing and delivering AI ML models that run on an automated platform. They openly challenge the presence of, and need for, a Data Scientist in the engineering process.

Automated Machine Learning (AutoML) is defined as fully automating the entire process of Machine Learning model development, right up to the point of its application.

AutoML enables companies to leverage AI ML models in an automated environment without truly seeking the involvement and supervision of Data Scientists, AI Engineers or Analysts.

Google, Baidu, IBM, Amazon, H2O, and a bunch of other technology-innovation companies already offer a host of AutoML environments for many commercial applications. These applications have swept into every possible business in every industry, including Healthcare, Manufacturing, FinTech, Marketing and Sales, Retail, Sports and more.

Bayesian Machine Learning is a unique specialization within AI ML projects that leverages statistical models along with Data Science techniques. Any ML technique that uses Bayes' Theorem and the Bayesian statistical modeling approach in Machine Learning falls under the purview of Bayesian Machine Learning.

Contemporary applications of Bayesian ML involve the use of the open-source coding platform Python.
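
As a minimal illustration of the Bayesian approach in Python, here is a beta-binomial update using SciPy; the uniform prior, the counts, and the unknown-rate framing are assumptions made for the example.

```python
from scipy import stats

# Beta-binomial update: start from a Beta(1, 1) (uniform) prior over an
# unknown rate, observe 18 successes in 100 trials, read off the posterior.
successes, trials = 18, 100
posterior = stats.beta(1 + successes, 1 + trials - successes)

print(posterior.mean())          # posterior estimate of the rate (~0.186)
print(posterior.interval(0.95))  # 95% credible interval
```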

A good ML program would be expected to perpetually learn to perform a set of complex tasks. This learning mechanism is the subject of a specialized branch of AI ML techniques called Meta-Learning.

The industry-wide definition of Meta-Learning is the ability to learn, and to generalize AI into different real-world scenarios encountered during ML training time, using a specific volume and variety of data.

Meta-Learning techniques can be further differentiated into three categories

In each of these categories, there is a unique learner, a meta-learner, and labeled vectors that map data, time, and spatial features into a set of networked processes for weighing real-world scenarios labeled with context and inferences.

All the recent Image Processing and Voice Search techniques use Meta-Learning techniques for their outcomes.

Adversarial ML is one of the fastest-growing and most sophisticated of all ML techniques. It is defined as the ML technique adopted to test and validate the effectiveness of any Machine Learning program in an adverse situation.

As the name suggests, it's the antagonistic principle of genuine AI, but it is used nonetheless to test the veracity of any ML technique when it encounters a unique, adverse situation. It is mostly used to fool an ML model into doubting its own results, thereby leading to a malfunction.
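
One well-known concrete instance of this idea is the Fast Gradient Sign Method (FGSM) of Goodfellow et al., sketched below. The article does not name a specific attack, and the epsilon value and gradient input here are illustrative.

```python
import numpy as np

def fgsm_perturb(x, grad_loss_wrt_x, epsilon=0.05):
    """Fast Gradient Sign Method: shift every input feature a small step
    in the direction that increases the model's loss the most."""
    x_adv = x + epsilon * np.sign(grad_loss_wrt_x)
    return np.clip(x_adv, 0.0, 1.0)  # keep pixel values in a valid range

# Usage: given an image `x` and the model's loss gradient at `x`,
# fgsm_perturb(x, grad) often looks unchanged to a human but can flip
# the model's prediction.
```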

Most ML models are capable of generating an answer for one single parameter. But can they be used to answer for an x (unknown or variable) parameter? That's where Causal Inference ML techniques come into play.

Most online AI ML courses teach Causal Inference as a core ML modeling technique. The Causal Inference ML technique is defined as the causal reasoning process of drawing a unique conclusion based on the impact that variables and conditions have on the outcome. This technique is further categorized into Observational ML and Interventional ML, depending on what is driving the Causal Inference algorithm.

Also commercially popularized as Explainable AI (XAI), this technique involves the use of neural networking and interpretation models to make ML structures more easily understood by humans.

Deep Learning Interpretability is defined as the ML specialization that removes black boxes in AI models, enabling decision-makers and data officers to understand data modeling structures and legally permit the use of AI ML for general purposes.

The ML technique may use one or more of these techniques for Deep Learning Interpretation.

Any data can be plotted using graphs. In Machine Learning techniques, a graph is a data structure consisting of two components: vertices (or nodes) and edges.

Graph ML is a specialized ML technique used to model problems in terms of vertices and edges. Graph Neural Networks (GNNs) extend ideas from Convolutional Neural Networks (CNNs) and Artificial Neural Networks (ANNs) to graph-structured data.
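
A minimal sketch of that data structure in Python, with a toy undirected graph and a neighbor-aggregation step of the kind GNNs build on; the graph and the feature values are made up.

```python
# A graph as an adjacency list: each vertex maps to the vertices it
# shares an edge with. This toy graph is undirected.
graph = {
    "A": ["B", "C"],
    "B": ["A", "C"],
    "C": ["A", "B", "D"],
    "D": ["C"],
}

# A GNN-style "message pass" aggregates each vertex's neighbors:
features = {"A": 1.0, "B": 2.0, "C": 3.0, "D": 4.0}
aggregated = {v: sum(features[n] for n in graph[v]) for v in graph}
print(aggregated)  # {'A': 5.0, 'B': 4.0, 'C': 7.0, 'D': 3.0}
```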

There are at least 50 more ML techniques that could be learned and deployed using various NN models and systems. Click here to learn about the leading ML companies that are constantly transforming Data Science applications with AI ML techniques.

(To share your insights about ML techniques and commercial applications, please write to us at info@aithority.com)


Managing Big Data in Real-Time with AI and Machine Learning – Database Trends and Applications

Posted: at 7:52 pm


Dec 9, 2019

Processing big data in real-time for artificial intelligence, machine learning, and the Internet of Things poses significant infrastructure challenges.

Whether it is for autonomous vehicles, connected devices, or scientific research, legacy NoSQL solutions often struggle at hyperscale. They've been built on top of existing RDBMSs and tend to strain when looking to analyze and act upon data at hyperscale: petabytes and beyond.

DBTA recently held a webinar featuring Theresa Melvin, chief architect of AI-driven big data solutions, HPE, and Noel Yuhanna, principal analyst serving enterprise architecture professionals, Forrester, who discussed trends in what enterprises are doing to manage big data in real-time.

"Data is the new currency and it is driving today's business strategy to fuel innovation and growth," Yuhanna said.

According to a Forrester survey, the top data challenges are data governance, data silos, and data growth, he explained.

More than 35% of enterprises have failed to get value from big data projects, largely because of skills, budget, complexity and strategy. Most organizations are dealing with growing multi-format data volume that sits in multiple repositories: relational, NoSQL, Hadoop, and data lakes.

The need has grown for real-time and agile data, he explained. There are too many data silos: multiple repositories and cloud sources.

There is a lack of visibility into data across personas (developers, data scientists, data engineers, data architects, security, etc.). Traditional data platforms such as data warehouses, relational DBMSs, and ETL tools have failed to support new business requirements.

"It's all about the customer and it's critical for organizations to have a platform to succeed," Yuhanna said. Customers prefer personalization. Companies are still early in their AI journey, but they believe it will improve efficiency and effectiveness.

AI and machine learning can hyper-personalize customer experience with targeted offers, he explained. It can also prevent line shutdowns by predicting machine failures.

AI is not one technology. It comprises one or more building-block technologies. According to the Forrester survey, Yuhanna said, AI/ML for data will help end-users and customers with data intelligence, supporting next-generation use cases such as customer personalization, fraud detection, advanced IoT analytics and real-time data sharing and collaboration.

AI/ML as a platform feature will help support automation within the BI platform for data integration, data quality, security, governance, transformation, etc., minimizing the human effort required. This helps deliver insights more quickly, in hours instead of days or months.

Melvin suggested using HPE Persistent Memory. The platform offers real-time analysis, real-time persistence, a single source of truth, and a persistent record.

An archived on-demand replay of this webinar is available here.


The NFL And Amazon Want To Transform Player Health Through Machine Learning – Forbes

Posted: at 7:52 pm


The NFL and Amazon announced an expansion of their partnership at the annual AWS re:Invent conference in Las Vegas that will use artificial intelligence and machine learning to combat player injuries. (Photo by Michael Zagaris/San Francisco 49ers/Getty Images)

Injury prevention in sports is one of the most important issues facing a number of leagues. This is particularly true in the NFL, due to the brutal nature of that punishing sport, which leaves many players sidelined at some point during the season. A number of startups are utilizing technology to address football injury issues, specifically limiting the incidence of concussions. Now, one of the largest companies in the world is working with the league in these efforts.

A week after partnering with the Seattle Seahawks on its machine learning/artificial intelligence offerings, Amazon announced a partnership Thursday in which the technology giant will use those same tools to combat football injuries. Amazon has been involved with the league through its Next Gen Stats partnership, and now the two organizations will work to advance player health and safety as the sport moves forward after its 100th season this year. Amazon's AWS cloud services will use its software to gather and analyze large volumes of player health data and scan video images, with the objective of helping teams treat injuries and rehabilitate players more effectively. The larger goal will be to create a new Digital Athlete platform to anticipate injury before it even takes place.

This partnership expands the quickly growing relationship between the NFL and Amazon/AWS, as the two have already teamed up for two years with the league's Thursday Night Football games streamed on the company's Amazon Prime Video platform. Amazon paid $130 million for rights that run through next season. The league also uses AWS's ML Solutions Lab, as well as Amazon's SageMaker platform, which enables data scientists and developers to build and develop machine learning models that can also advance the league's ultimate goal of predicting and limiting player injury.

"The NFL is committed to re-imagining the future of football," said NFL Commissioner Roger Goodell. "When we apply next-generation technology to advance player health and safety, everyone wins, from players to clubs to fans. The outcomes of our collaboration with AWS and what we will learn about the human body and how injuries happen could reach far beyond football. As we look ahead to our next 100 seasons, we're proud to partner with AWS in that endeavor."

The new initiative was announced as part of Amazon's AWS re:Invent conference in Las Vegas on Thursday. Among the technologies that AWS and the league announced as part of the Digital Athlete platform is a computer-simulated model of an NFL player that will model countless scenarios within NFL gameplay in order to identify a game environment that limits the risk to a player. Digital Athlete uses Amazon's full arsenal of technologies, including the AI, ML and computer vision technology used in Amazon's Rekognition tool, and draws on enormous data sets encompassing historical and more recent video to identify a wide variety of solutions, including the prediction of player injury.

"By leveraging the breadth and depth of AWS services, the NFL is growing its leadership position in driving innovation and improvements in health and player safety, which is good news not only for NFL players but also for athletes everywhere," said Andy Jassy, CEO of AWS. "This partnership represents an opportunity for the NFL and AWS to develop new approaches and advanced tools to prevent injury, both in and potentially beyond football."

These announcements come at a time when more NFL players are utilizing their large platforms to bring awareness to injuries and the enormous impact those injuries have on their bodies. Former New England Patriots tight end Rob Gronkowski has been one of the most productive NFL players at his position in league history, but he had to retire from the league this year, at the age of 29, due to a rash of injuries.

The future Hall of Fame player estimated that he suffered "probably 20" concussions in his football career. These admissions have significant consequences for youth participation rates in the sport. Partnerships like the one announced yesterday will need to be successful in order for the sport to remain on solid footing heading into the new decade.


Machine Learning Answers: If Nvidia Stock Drops 10% A Week, Whats The Chance Itll Recoup Its Losses In A Month? – Forbes

Posted: at 7:51 pm


Photo caption: Jen-Hsun Huang, president and chief executive officer of Nvidia Corp., speaks during the company's event at the 2019 Consumer Electronics Show (CES) in Las Vegas, Nevada. Photographer: David Paul Morris/Bloomberg

We found that if Nvidia stock drops 10% or more in a week (5 trading days), there is a solid 36% chance it'll recover 10% or more over the next month (about 20 trading days).

Nvidia stock has seen significant volatility this year. While the company has been hit by the broader correction in the semiconductor space and the trade war between the U.S. and China, the stock is supported by a strong long-term outlook for GPU demand amid growing applications in deep learning and artificial intelligence.

Considering the recent price swings, we started with a simple question investors might ask about Nvidia stock: given a certain drop or rise, say a 10% drop in a week, what should we expect for the next week? Is it likely that the stock will recover the next week? What about the next month, or a quarter? You can test a variety of scenarios on the Trefis Machine Learning Engine to calculate the chance that Nvidia stock will rise after a given drop.

For example, after a 5% drop over a week (5 trading days), the Trefis machine learning engine puts the chance of an additional 5% drop over the next month at about 40%. That is significant, and helpful to know for someone trying to recover from a loss. Knowing what to expect in almost any scenario is powerful: it can help you avoid rash moves. Given the recent volatility in the market and the mix of macroeconomic events (including the trade war with China and interest rate easing by the U.S. Fed), we think investors can prepare better.
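Trefis has not published its methodology, but headline figures like these are conditional probabilities that can be estimated from historical prices. Below is a minimal sketch under stated assumptions: the third-party yfinance library for price data, point-to-point returns, and the article's windows of a 5-trading-day drop followed by a roughly 20-trading-day lookahead. Its output will not match the Trefis engine exactly.

```python
import yfinance as yf

prices = yf.download("NVDA", start="2010-01-01")["Close"].squeeze()

drop_5d = prices.pct_change(5)             # trailing 5-trading-day return
fwd_20d = prices.shift(-20) / prices - 1   # forward ~1-month return

event = drop_5d <= -0.05                   # weeks with a 5%+ drop
recovered = fwd_20d >= 0.05                # 5%+ gain a month later
# Note: trailing rows without a full lookahead window count as "not recovered".

print(f"P(5% gain next month | 5% weekly drop) = {recovered[event].mean():.0%}")
```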

Below, we also discuss a few scenarios and answer common investor questions:

Question 1: Does a rise in Nvidia stock become more likely after a drop?

Answer:

Not really.

Specifically, chances of a 5% rise in Nvidia stock over the next month:

= 40% after Nvidia stock drops by 5% in a week.

versus,

= 44.5% after Nvidia stock rises by 5% in a week.

Question 2: What about the other way around, does a drop in Nvidia stock become more likely after a rise?

Answer:

No.

Specifically, chances of a 5% decline in Nvidia stock over the next month:

= 40% after Nvidia stock drops by 5% in a week

versus,

= 27% after Nvidia stock rises by 5% in a week

Question 3: Does patience pay?

Answer:

According to the data and the Trefis machine learning engine's calculations, largely yes!

Given a drop of 5% in Nvidia stock over a week (5 trading days), there is only about a 28% chance the stock will gain 5% over the subsequent week, but a more than 58% chance it will do so within 6 months.

The table below shows the trend:

[Table: chance of a 5% gain in Nvidia stock across longer waiting periods. Source: Trefis]
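The shape of that trend, if not Trefis's exact numbers, can be reproduced by sweeping the same conditional-probability estimate across longer waiting periods. As before, this assumes yfinance and measures point-to-point returns, which is only one of several ways to define a recovery.

```python
import yfinance as yf

prices = yf.download("NVDA", start="2010-01-01")["Close"].squeeze()
event = prices.pct_change(5) <= -0.05      # condition: a 5% drop over a week

for days in (5, 20, 60, 125):              # week, month, quarter, half-year
    fwd = prices.shift(-days) / prices - 1
    print(f"{days:>3} trading days: {(fwd[event] >= 0.05).mean():.0%}")
```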

Question 4: What about the possibility of a drop after a rise if you wait for a while?

Answer:

After a rise of 5% over 5 trading days, the chance of a 5% drop in Nvidia stock over the subsequent quarter (60 trading days) is about 30%. That chance falls slightly, to about 29%, when the waiting period is a year (250 trading days).


Follow this link:

Machine Learning Answers: If Nvidia Stock Drops 10% A Week, What's The Chance It'll Recoup Its Losses In A Month? - Forbes

Written by admin |

December 9th, 2019 at 7:51 pm

Posted in Machine Learning

NFL Looks to Cloud and Machine Learning to Improve Player Safety – Which-50

Posted: at 7:51 pm


America's National Football League is turning to emerging technology to try to solve its ongoing challenges around player safety. The sport's governing body says it has amassed huge amounts of data but wants to apply machine learning to gain better insights and predictive capabilities.

It is hoped the insights will inform new rules, safer equipment, and better injury rehabilitation methods. However, the data will not be available to independent researchers.

Last week the NFL announced a partnership with Amazon Web Services to provide the digital services, including machine learning and digital twin applications. Terms of the deal were not disclosed.

As the NFL has become hyper-professionalised, data suggests player injuries have worsened, particularly head injuries sustained through high-impact collisions. Several retired players have been diagnosed with, or report symptoms of, chronic traumatic encephalopathy, a neurodegenerative disease which can only be definitively diagnosed post mortem.

As scrutiny has grown, the NFL has responded with several rule changes and redesigned player helmets, both initiatives which it says have reduced concussions. However, the league has also been accused of failing to notify players of the links between concussions and brain injuries.

"All of our initiatives on the health and safety side started with the engineering roadmap around minimising head impact on field," NFL executive vice president Jeff Miller told Which-50 following the announcement.

Miller, who is responsible for player health and safety, said the new technology presents a fresh opportunity to minimise risk to players.

"I think the speed, the pace of the insights that are available as a result of this [technology] are going to continue towards that same goal, hopefully in a much more efficient, and in fact mature, faster supersized scale."

Miller said the NFL has a responsibility to pass on the insights to lower levels of the game like high school and youth leagues. However, the data will not be available to external researchers initially.

"As we find those insights I think we're going to be able to share those, we're going to be able to share those within the sport and hopefully over time outside of the sport as well."

NFL commissioner Roger Goodell announced the AWS deal, which builds on an existing partnership for game statistics, alongside Andy Jassy, the public cloud provider's CEO, during the AWS re:Invent conference in Las Vegas last week.

Goodell said the NFL had amassed huge amounts of data from sensors and video feeds but needed the AWS tools to better leverage it.

"When you take the combination of that, the possibilities are enormous," the NFL boss said. "We want to use the data to change the game. There are very few relationships we get involved with where the partner and the NFL can change the game."

"When we apply next-generation technology to advance player health and safety, everyone wins, from players to clubs to fans."

AWS machine learning tools will be applied to the data to help build a "digital athlete", a type of digital twin which can be used to simulate certain scenarios, including impacts.

"The outcomes of our collaboration with AWS and what we will learn about the human body and how injuries happen could reach far beyond football," he said.
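As a toy illustration of the digital-twin idea, the Monte Carlo sketch below compares expected high-impact collisions per play under two hypothetical rule variants. Every number in it is invented; the NFL's actual simulation platform is not public.

```python
import random

def simulate_plays(return_rate: float, plays: int = 100_000) -> float:
    """Average simulated high-impact collisions per play under a rule variant."""
    impacts = 0
    for _ in range(plays):
        if random.random() < return_rate:      # the kick is returned
            impacts += random.random() < 0.08  # invented collision probability
        else:                                  # touchback, no return
            impacts += random.random() < 0.02  # invented collision probability
    return impacts / plays

random.seed(42)
for label, rate in (("current rule", 0.60), ("proposed rule", 0.40)):
    print(f"{label}: {simulate_plays(rate):.4f} high impacts per play")
```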

The author traveled to AWS re:Invent as a guest of Amazon.

See more here:

NFL Looks to Cloud and Machine Learning to Improve Player Safety - Which-50

Written by admin |

December 9th, 2019 at 7:51 pm

Posted in Machine Learning

