
Archive for the ‘Machine Learning’ Category

Automated Machine Learning is the Future of Data Science – Analytics Insight

Posted: April 16, 2020 at 8:48 pm


without comments

As the fuel that powers their ongoing digital transformation efforts, organizations everywhere are searching for ways to derive as much insight as possible from their data. The resulting increase in demand for advanced predictive and prescriptive analytics has, in turn, prompted a call for more data scientists proficient with the latest artificial intelligence (AI) and machine learning (ML) tools.

However, such highly skilled data scientists are costly and hard to find. In fact, they are such a valuable asset that the phenomenon of the citizen data scientist has recently emerged to help close the skills gap. A complementary role rather than a direct replacement, citizen data scientists lack explicit advanced data science expertise, yet they are capable of producing models using state-of-the-art diagnostic and predictive analytics. This capability is due in part to the advent of accessible new technologies, such as automated machine learning (AutoML), that now automate many of the tasks once performed by data scientists.

The objective of AutoML is to shorten the cycle of trial and error and experimentation. It runs through an enormous number of models, and the hyperparameters used to configure those models, to determine the best model for the data at hand. This is a dull and tedious activity for any human data scientist, no matter how skilled. AutoML platforms can perform this repetitive task more quickly and thoroughly, arriving at a solution faster and more effectively.
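To make that loop concrete, here is a minimal sketch of the kind of model and hyperparameter sweep an AutoML platform automates, written with scikit-learn. The dataset, candidate model families, and parameter grids are illustrative assumptions, not any vendor's actual implementation.

```python
# A hand-rolled version of what an AutoML loop automates: sweep several
# candidate model families and their hyperparameters, keep the best one.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Candidate model families and hyperparameter grids to sweep (illustrative).
candidates = [
    (LogisticRegression(max_iter=5000), {"C": [0.01, 0.1, 1.0, 10.0]}),
    (RandomForestClassifier(random_state=0), {"n_estimators": [100, 300],
                                              "max_depth": [None, 5, 10]}),
]

best_score, best_model = -1.0, None
for estimator, grid in candidates:
    search = GridSearchCV(estimator, grid, cv=5)   # cross-validated sweep
    search.fit(X_train, y_train)
    if search.best_score_ > best_score:
        best_score, best_model = search.best_score_, search.best_estimator_

print(f"selected {type(best_model).__name__} with CV accuracy {best_score:.3f}")
print(f"held-out accuracy: {best_model.score(X_test, y_test):.3f}")
```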

The ultimate value of AutoML tools is not to replace data scientists but to offload their routine work and streamline their workflow, freeing them and their teams to focus their energy and attention on the parts of the process that require a higher level of reasoning and creativity. As their priorities shift, it is important for data scientists to understand the full life cycle so they can move their energy to higher-value tasks and sharpen the abilities that further raise their value to their companies.

At Airbnb, the team continually looks for ways to improve its data science workflow. A good share of its data science projects involve machine learning, and many parts of this workflow are tedious. Airbnb uses machine learning to build customer lifetime value (LTV) models for guests and hosts; these models allow the company to improve its decision making and its interactions with the community.

The team has found AutoML tools most valuable for regression and classification problems involving tabular datasets, although the state of this area is progressing rapidly. In short, it believes that in certain cases AutoML can vastly increase a data scientist's productivity, often by an order of magnitude. Airbnb has used AutoML in several ways:

Unbiased presentation of challenger models: AutoML can rapidly produce a plethora of challenger models using the same training set as the incumbent model, helping the data scientist choose the best model family.

Identifying target leakage: because AutoML builds candidate models extremely fast and in an automated way, data leakage can be spotted earlier in the modeling lifecycle.

Diagnostics: as mentioned earlier, canonical diagnostics such as learning curves, partial dependence plots, and feature importances can be generated automatically.

More broadly, tasks like exploratory data analysis, data pre-processing, hyperparameter tuning, model selection, and putting models into production can all be automated to some degree with an automated machine learning system.
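For the diagnostics point above, here is a minimal sketch of how two of those canonical diagnostics (a learning curve and permutation feature importances) can be produced automatically with scikit-learn; the dataset and model are placeholders.

```python
# Reproducing by hand the diagnostics an AutoML platform can emit automatically.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import learning_curve
from sklearn.inspection import permutation_importance

X, y = load_breast_cancer(return_X_y=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Learning curve: validation score as a function of training-set size.
sizes, train_scores, val_scores = learning_curve(
    model, X, y, cv=5, train_sizes=np.linspace(0.2, 1.0, 5))
print("validation score vs. training size:",
      dict(zip(sizes, val_scores.mean(axis=1).round(3))))

# Permutation importance: drop in score when each feature is shuffled.
imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:5]
print("top-5 most important feature indices:", top)
```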

Companies have moved toward enhancing predictive power by coupling big data with complex automated machine learning. AutoML, which uses machine learning to create better AI, is publicized as a way to democratize machine learning by allowing firms with limited data science expertise to build analytical pipelines capable of tackling sophisticated business problems.

Comprising a set of algorithms that automate the writing of other ML algorithms, AutoML automates the end-to-end process of applying ML to real-world problems. By way of illustration, a standard ML pipeline consists of the following: data pre-processing, feature extraction, feature selection, feature engineering, algorithm selection, and hyperparameter tuning. The significant expertise and time it takes to execute these steps mean there is a high barrier to entry.

In an article published on Forbes, Ryohei Fujimaki, the founder and CEO of dotData, argues that the discussion is misplaced if the emphasis of AutoML systems is on replacing or diminishing the role of the data scientist. After all, the longest and most challenging part of a typical data science workflow revolves around feature engineering, which involves connecting data sources against a list of desired features that are then evaluated against various machine learning algorithms.

Success with feature engineering requires a high level of domain expertise to identify the ideal features through a tedious, iterative process. Automation on this front lets even citizen data scientists build streamlined use cases by applying their domain expertise. In short, this democratization of the data science process opens the door to new classes of developers, offering organizations a competitive advantage with minimal investment.

More here:

Automated Machine Learning is the Future of Data Science - Analytics Insight

Written by admin

April 16th, 2020 at 8:48 pm

Posted in Machine Learning

Model quantifies the impact of quarantine measures on Covid-19’s spread – MIT News

Posted: at 8:48 pm


without comments

The research described in this article has been published on a preprint server but has not yet been peer-reviewed by scientific or medical experts.

Every day for the past few weeks, charts and graphs plotting the projected apex of Covid-19 infections have been splashed across newspapers and cable news. Many of these models have been built using data from studies on previous outbreaks like SARS or MERS. Now, a team of engineers at MIT has developed a model that uses data from the Covid-19 pandemic in conjunction with a neural network to determine the efficacy of quarantine measures and better predict the spread of the virus.

"Our model is the first which uses data from the coronavirus itself and integrates two fields: machine learning and standard epidemiology," explains Raj Dandekar, a PhD candidate studying civil and environmental engineering. Together with George Barbastathis, professor of mechanical engineering, Dandekar has spent the past few months developing the model as part of the final project in class 2.168 (Learning Machines).

Most models used to predict the spread of a disease follow what is known as the SEIR model, which groups people into susceptible, exposed, infected, and recovered. Dandekar and Barbastathis enhanced the SEIR model by training a neural network to capture the number of infected individuals who are under quarantine, and therefore no longer spreading the infection to others.
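A minimal sketch, not the authors' code, of what such an augmentation can look like: a standard SEIR simulation in which a small neural network Q(t) moves infected individuals into a quarantined compartment so they no longer transmit. All parameters, the network shape, and its weights are illustrative; in the MIT work the network is trained against reported case counts.

```python
import numpy as np

def q_strength(t, w):
    """Tiny neural network Q(t): one hidden tanh layer, sigmoid output in [0, 1]."""
    h = np.tanh(w["W1"] * t + w["b1"])            # hidden layer
    z = h @ w["W2"] + w["b2"]
    return 1.0 / (1.0 + np.exp(-z))               # quarantine strength at time t

def simulate(days, w, N=1e6, beta=0.5, sigma=0.2, gamma=0.1, dt=0.1):
    S, E, I, R, Q = N - 500, 0.0, 500.0, 0.0, 0.0   # start at the 500th recorded case
    trajectory = []
    for step in range(int(days / dt)):
        t = step * dt
        q = q_strength(t, w)
        new_infections = beta * S * I / N
        dS = -new_infections
        dE = new_infections - sigma * E
        dI = sigma * E - gamma * I - q * I        # quarantined people stop transmitting
        dQ = q * I
        dR = gamma * I
        S, E, I, R, Q = S + dS*dt, E + dE*dt, I + dI*dt, R + dR*dt, Q + dQ*dt
        trajectory.append((t, I, Q))
    return trajectory

# Untrained placeholder weights; fitting them to case data is the learning step.
rng = np.random.default_rng(0)
weights = {"W1": rng.normal(size=8), "b1": rng.normal(size=8),
           "W2": rng.normal(size=8), "b2": 0.0}
print(simulate(30, weights)[-1])   # (time, currently infected, quarantined) on day 30
```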

The model finds that in places like South Korea, where there was immediate government intervention in implementing strong quarantine measures, the virus spread plateaued more quickly. In places that were slower to implement government interventions, like Italy and the United States, the effective reproduction number of Covid-19 remains greater than one, meaning the virus has continued to spread exponentially.

The machine learning algorithm shows that with the current quarantine measures in place, the plateau for both Italy and the United States will arrive somewhere between April 15-20. This prediction is similar to other projections like that of the Institute for Health Metrics and Evaluation.

"Our model shows that quarantine restrictions are successful in getting the effective reproduction number from larger than one to smaller than one," says Barbastathis. "That corresponds to the point where we can flatten the curve and start seeing fewer infections."

Quantifying the impact of quarantine

In early February, as news of the virus' troubling infection rate started dominating headlines, Barbastathis proposed a project to students in class 2.168. At the end of each semester, students in the class are tasked with developing a physical model for a problem in the real world and developing a machine learning algorithm to address it. He proposed that a team of students work on mapping the spread of what was then simply known as the coronavirus.

"Students jumped at the opportunity to work on the coronavirus, immediately wanting to tackle a topical problem in typical MIT fashion," adds Barbastathis.

One of those students was Dandekar. "The project really interested me because I got to apply this new field of scientific machine learning to a very pressing problem," he says.

As Covid-19 started to spread across the globe, the scope of the project expanded. What had originally started as a project looking just at spread within Wuhan, China grew to also include the spread in Italy, South Korea, and the United States.

The duo started modeling the spread of the virus in each of these four regions after the 500th case was recorded. That milestone marked a clear delineation in how different governments implemented quarantine orders.

Armed with precise data from each of these countries, the research team took the standard SEIR model and augmented it with a neural network that learns how infected individuals under quarantine impact the rate of infection. They trained the neural network through 500 iterations so it could then teach itself how to predict patterns in the infection spread.

Using this model, the research team was able to draw a direct correlation between quarantine measures and a reduction in the effective reproduction number of the virus.

"The neural network is learning what we are calling the quarantine control strength function," explains Dandekar. "In South Korea, where strong measures were implemented quickly, the quarantine control strength function has been effective in reducing the number of new infections. In the United States, where quarantine measures have been slowly rolled out since mid-March, it has been more difficult to stop the spread of the virus."

Predicting the plateau

As the number of cases in a particular country decreases, the forecasting model transitions from an exponential regime to a linear one. Italy began entering this linear regime in early April, with the U.S. not far behind it.

The machine learning algorithm Dandekar and Barbastathis have developed predicted that the United States will start to shift from an exponential regime to a linear regime in the first week of April, with a stagnation in the infected case count likely between April 15 and April 20. It also suggests that the infection count will reach 600,000 in the United States before the rate of infection starts to stagnate.

"This is a really crucial moment of time. If we relax quarantine measures, it could lead to disaster," says Barbastathis.

According to Barbastathis, one only has to look to Singapore to see the dangers that could stem from relaxing quarantine measures too quickly. While the team didn't study Singapore's Covid-19 cases in their research, the second wave of infection the country is currently experiencing reflects their model's finding about the correlation between quarantine measures and infection rate.

"If the U.S. were to follow the same policy of relaxing quarantine measures too soon, we have predicted that the consequences would be far more catastrophic," Barbastathis adds.

The team plans to share the model with other researchers in the hopes that it can help inform Covid-19 quarantine strategies that can successfully slow the rate of infection.

View post:

Model quantifies the impact of quarantine measures on Covid-19's spread - MIT News

Written by admin

April 16th, 2020 at 8:48 pm

Posted in Machine Learning

Qligent Foresight Released as Predictive Analysis Tool – TV Technology

Posted: at 8:48 pm


without comments

MELBOURNE, Fla.: Qligent is now sharing its second-generation, cloud-based predictive analysis platform Foresight, which uses AI, machine learning and big data to handle content distribution issues. Foresight is designed to provide real-time 24/7 data analytics based on system performance and user behavior.

The Foresight platform aggregates data points from end user equipment, including set-top boxes, smart TVs and iOS and Android devices, as well as CDN logs, stream monitoring data, CRMs, support ticketing systems, network monitoring systems and other hardware monitoring systems.

With scalable cloud processing, Foresight's integrated AI and machine learning provide automated data collection, while deep learning technology mines deeper layers of data. Big data technology then correlates and aggregates the data for quality assurance.

Foresight features networked and virtual probes that create a controlled data mining environment, which Qligent says is not compromised by operator error, viewer disinterest, user hardware malfunction or other variables.

Users can access customizable reports that summarize key performance indicators, key quality indicators and other criteria for multiplatform content distribution. All findings are presented on Qligent's dashboard, which is accessible on a computer or mobile device.

The Qligent Foresight system is available immediately. For more information, visit http://www.qligent.com.

More here:

Qligent Foresight Released as Predictive Analysis Tool - TV Technology

Written by admin

April 16th, 2020 at 8:48 pm

Posted in Machine Learning

10 of the best affordable online data science courses and programs – Business Insider – Business Insider

Posted: at 8:48 pm


without comments

When you buy through our links, we may earn money from our affiliate partners. Learn more.

As companies amass more data than ever, the employees best able to interpret it and apply key insights to important decision-making processes become increasingly valuable.

But while the skillset grows more desirable, the supply of workers with the correct skills isn't sufficient, making data science skills among the most in-demand hard skills in 2020, according to LinkedIn's research.

Thankfully, there are plenty of online learning opportunities to help you prepare for a career in data science, whether it's a course that helps you master a specific skill or an intensive year-long program that helps you jump up the ladder in your current role. Many classes are offered by top schools such as Harvard and MIT, and many programs were designed by major companies like IBM and Google specifically for educating a useful future workforce. Some of them offer students the opportunity to join their talent network after completing a specific course level.

Below are a few of the most popular data science options online, including MicroMasters, professional certificates, and individual courses.

Professional certificates are bundles of related courses that help you master a specific skill, and they tend to be most useful for breaking into a new industry or getting you to the next level of your career. They can take anywhere from a few months to more than a year to complete. At Coursera, professional certificate programs typically have a 7-day free trial and a monthly fee afterward. So, the faster you complete it, the more money you'll save. At edX, professional certificates typically have a flat one-time fee.

A MicroMasters is a bundle of graduate-level courses designed to help you advance your career. Students have the option of applying to the university that's offering credit for the MicroMasters program certificate and, if accepted, can pursue an accelerated and less expensive Master's Degree. You can learn more here.

If you end up taking a Coursera course, and you think you'll realistically spend more than $399 in monthly fees or on individual classes throughout the year, you may want to consider Coursera Plus if all the courses and programs you plan to take are included in the annual membership (90% of the site is). And, if your employer offers to cover educational costs that include online-learning programs, you may even be able to get reimbursed for the following courses.

Read this article:

10 of the best affordable online data science courses and programs - Business Insider - Business Insider

Written by admin

April 16th, 2020 at 8:48 pm

Posted in Machine Learning

Want to Be Better at Sports? Listen to the Machines – Moneycontrol

Posted: at 8:48 pm


without comments

A couple of decades ago, Jeff Alger, then a senior manager at Microsoft, was coaching state-level soccer teams and realised that there was very little science to player development.

"There were no objective ways of measuring how good players are," Alger said, "and without being able to measure, you have nothing."

He said it offended his sense of systems design to recognise a problem but do nothing about it, so he quit his job, got a master's degree in sports management and started a company that would use artificial intelligence (AI) to assess athletic talent and training.

His company, Seattle Sports Sciences, is one of a handful using the pattern-recognising power of machine learning to revolutionise coaching and make advanced analytics available to teams of all kinds.

The trend is touching professional sports and changing sports medicine. And, perhaps inevitably, it has altered the odds in sports betting.

John Milton, architect of Seattle Sports Sciences' artificial intelligence system, spent a week in October with Spanish soccer team Málaga, which plays in Spain's second division, capturing everything that happened on the pitch with about 20 synchronised cameras in 4K ultra high-definition video.

"It's like omniscience," Milton said. The system, ISOTechne, evaluates a player's skill and consistency and who is passing or receiving with what frequency, as well as the structure of the team's defence. It even tracks the axis of spin and rate of rotation of the ball.

That is not the only way that the company's technology is being used. Professional soccer teams derive a growing slice of revenue from selling players. Soccer academies have become profit centers for many teams as they develop talented players and then sell them to other teams. It is now a $7 billion business. But without objective measurements of a player's ability, putting a value on an athlete is difficult.

"It's a matter of whether that player's movements and what they do with the ball correspond to the demands that they will have on your particular team," said Alger, now the President and Chief Executive of Seattle Sports Sciences. He said, for example, that his company could identify a player who was less skilled at other phases of the game but was better at delivering the ball on a corner kick or a free kick, a skill that a coach could be looking for.

Some systems can also detect and predict injuries. Dr Phil Wagner, Chief Executive and founder of Sparta Science, works from a warehouse in Silicon Valley that has a running track and is scattered with equipment for assessing athletes' physical condition.

The company uses machine learning to gather data from electronic plates on the ground that measure force and balance. The system gathers 3,000 data points a second, and a test, jumping or balancing, takes about 20 seconds.

"Athletes don't recognise that there's an injury coming or there's an injury that exists," Wagner said, adding that the system has a proven record of diagnosing or predicting injury. "We're identifying risk and then providing the best recommendation to reduce that risk."

Tyson Ross, a pitcher competing for a roster spot with the San Francisco Giants, has been using Sparta Science's system since he was drafted in 2008. He visits the company's facilities roughly every other week during the offseason to do vertical jumps, sway tests, a single leg balance test and a one-arm plank on the plate, blindfolded.

"Based on the data that's collected, it tells me how I'm moving compared to previously and how I'm moving compared to my ideal movement signature, as they call it," Ross said. Sparta Science then tailors his workouts to move him closer to that ideal.

The Pittsburgh Steelers, the Detroit Lions and the Washington Redskins, among others, use the system regularly, Wagner said. Sparta Science is also used to evaluate college players in the National Football League's annual scouting combine.

Of course, it is inevitable that machine learning's predictive power would be applied to another lucrative end of the sports industry: betting. Sportlogiq, a Montreal-based firm, has a system that primarily relies on broadcast feeds to analyse players and teams in hockey, soccer, football and lacrosse.

Mehrsan Javan, the company's Chief Technology Officer and one of its co-founders, said the majority of National Hockey League teams, including the last four Stanley Cup champions, used Sportlogiq's system to evaluate players.

Josh Flynn, Assistant General Manager for the Columbus Blue Jackets, Ohio's professional hockey franchise, said the team used Sportlogiq to analyse players and strategy. "We can dive levels deeper into questions we have about the game than we did before," Flynn said.

But Sportlogiq also sells analytic data to bookmakers in the United States, helping them set odds on bets, and hopes to sell information to individual bettors soon. Javan is looking to hire a vice president of betting.

The key to all of this sports-focused technology is data.

"Algorithms come and go, but data is forever," Alger is fond of saying. Computer vision systems have to be told what to look for, whether it be tumours in an X-ray or bicycles on the road. In Seattle Sports Sciences' case, the computers must be trained to recognise the ball in various lighting conditions as well as understand which plane of the foot is striking the ball.

To do that, teams of workers first have to painstakingly annotate millions of images. The more annotated data, the more accurate the machine-learning analysis will be. "Basically, whoever has the most labelled data wins," said Milton, the AI architect.

Seattle Sports Sciences uses Labelbox, a training data platform that allows Milton's data science team in Seattle to work with shifts of workers in India who label data 24 hours a day. "That's how fast you have to move to compete in modern vision AI," Milton said. "It's basically a labelling arms race."

Wagner of Sparta Science agrees, noting that with algorithms readily available and cloud computing power now available everywhere, the differentiator is data. He said it took Sparta Science 10 years to build up enough data to train its machine-learning system adequately.

Sam Robertson, who runs the sports performance and business programme at Victoria University in Melbourne, Australia, said it would take time for the technology to transform sports. "The decision-making component of this right now is still almost exclusively done by humans," he said.

"We need to work on the quality of the inputs," he said, meaning the labelled data. "That's what's going to improve things."


Read the original here:

Want to Be Better at Sports? Listen to the Machines - Moneycontrol

Written by admin

April 16th, 2020 at 8:48 pm

Posted in Machine Learning

How Machine Learning Is Being Used To Eradicate Medication Errors – Analytics India Magazine

Posted: April 11, 2020 at 12:49 am


without comments

People working in the healthcare sector take extra precautions to avoid mistakes and medication errors that can put the lives of patients at risk. Yet, despite this, 2% of patients face preventable medical-related incidents that could be life-threatening. Inadequate systems, tools, processes or working conditions are some of the reasons contributing to these medical mistakes.

In a bid to solve this problem, Google collaborated with UCSF's Bakar Computational Health Sciences Institute to publish "Predicting Inpatient Medication Orders in Electronic Health Record Data" in Clinical Pharmacology and Therapeutics. The published paper discusses how machine learning (ML) can be used to anticipate standard prescribing patterns by doctors, based on data in electronic health records.

Google used clinical data of de-identified patients, which included vital signs, laboratory results, past medications, procedures, diagnoses, and more. Google's new model was designed to anticipate a physician's prescription decisions three-quarters of the time, after evaluating the patient's current state and medical history.

To train the model, Google chose a dataset containing approximately three million medication orders from more than 100,000 hospitalizations. The company acquired the retrospective electronic health data through de-identification, randomly shifting dates and removing all identifying elements of the records as per HIPAA rules and guidelines. The company did not gather any identifying information such as names, addresses, contact details, record numbers, names of physicians, free-text notes, images, etc.

The research by the tech giant was done using the open-sourced Fast Healthcare Interoperability Resources (FHIR) format that the company claims was previously applied to improve healthcare data and make it more useful for machine learning. Google did not restrict the dataset to a particular disease, which made the ML activity more demanding. It also allowed the model to identify a wider variety of medical conditions.


Google tried two different ML models: a long short-term memory (LSTM) recurrent neural network and a regularized, time-bucketed logistic model, both of which are often used in clinical research. Both models were compared against a simple baseline that ranked the most commonly ordered medications based on a patient's hospital service and the time elapsed since admission. Each time a medication was entered in the retrospective data, the models ranked a list of 990 possible medications, and the team assessed whether the models assigned high probabilities to the medications the doctors actually prescribed in each case.
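A minimal sketch of this kind of top-k ranking evaluation: for every order, the model scores all 990 candidate medications, and we check whether the medication the clinician actually ordered lands in the model's top-10 or top-25. The random scores below are stand-ins for real model outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
n_orders, n_meds = 1000, 990
scores = rng.random((n_orders, n_meds))        # model's score for each candidate medication
truth = rng.integers(0, n_meds, size=n_orders) # medication the clinician actually ordered

def top_k_accuracy(scores, truth, k):
    # Indices of the k highest-scoring medications per order.
    top_k = np.argsort(scores, axis=1)[:, -k:]
    hits = (top_k == truth[:, None]).any(axis=1)
    return hits.mean()

for k in (10, 25):
    print(f"top-{k} accuracy: {top_k_accuracy(scores, truth, k):.3f}")
```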

Google's best-performing model was the LSTM, which is well suited to handling sequential data, including text and language. The model is designed to attend to recent events in the data and their order, which makes it a good fit for this problem. In almost 93% of cases, the top-10 list included at least one medication that a clinician would prescribe to the patient within the next day.

The model correctly ranked the medication a doctor actually prescribed among its top-10 most likely medications 55% of the time, and among its top-25 75% of the time. Even in the false-negative cases, where a doctor's medication did not make it into the top-25 results, the model's highest-ranked suggestion belonged to the same medication class 42% of the time.

These models are trained to mimic a physician's behavior as it appears in historical data, and they do not learn optimal prescribing patterns. Because of this, the models do not understand how the medications work or whether they have side effects. As per Google, learning what normal prescribing behavior looks like is a first step toward spotting abnormal and potentially dangerous orders. In the next phase, the company will examine the models under different circumstances to understand which medication errors can cause harm to patients.


The result of this work by Google is a small step towards testing the hypothesis that machine learning can be applied to build different systems which can prevent mistakes on the part of doctors and clinicians to keep patients safe. Google is looking forward to collaborating with doctors, pharmacists, clinicians and patients to continue the research for a better result.


Excerpt from:

How Machine Learning Is Being Used To Eradicate Medication Errors - Analytics India Magazine

Written by admin

April 11th, 2020 at 12:49 am

Posted in Machine Learning

Self-supervised learning is the future of AI – The Next Web

Posted: at 12:49 am


without comments

Despite the huge contributions of deep learning to the field of artificial intelligence, there's something very wrong with it: It requires huge amounts of data. This is one thing that both the pioneers and critics of deep learning agree on. In fact, deep learning didn't emerge as the leading AI technique until a few years ago because of the limited availability of useful data and the shortage of computing power to process that data.

Reducing the data-dependency of deep learning is currently among the top priorities of AI researchers.

In his keynote speech at the AAAI conference, computer scientist Yann LeCun discussed the limits of current deep learning techniques and presented the blueprint for self-supervised learning, his roadmap to solve deep learning's data problem. LeCun is one of the godfathers of deep learning and the inventor of convolutional neural networks (CNN), one of the key elements that have spurred a revolution in artificial intelligence in the past decade.

Self-supervised learning is one of several plans to create data-efficient artificial intelligence systems. At this point, it's really hard to predict which technique will succeed in creating the next AI revolution (or if we'll end up adopting a totally different strategy). But here's what we know about LeCun's masterplan.

First, LeCun clarified that what is often referred to as the limitations of deep learning is, in fact, a limit of supervised learning. Supervised learning is the category of machine learning algorithms that require annotated training data. For instance, if you want to create an image classification model, you must train it on a vast number of images that have been labeled with their proper class.

"[Deep learning] is not supervised learning. It's not just neural networks. It's basically the idea of building a system by assembling parameterized modules into a computation graph," LeCun said in his AAAI speech. "You don't directly program the system. You define the architecture and you adjust those parameters. There can be billions."

Deep learning can be applied to different learning paradigms, LeCun added, including supervised learning, reinforcement learning, as well as unsupervised or self-supervised learning.

But the confusion surrounding deep learning and supervised learning is not without reason. For the moment, the majority of deep learning algorithms that have found their way into practical applications are based on supervised learning models, which says a lot about the current shortcomings of AI systems. Image classifiers, facial recognition systems, speech recognition systems, and many of the other AI applications we use every day have been trained on millions of labeled examples.

Reinforcement learning and unsupervised learning, the other categories of learning algorithms, have so far found very limited applications.

Supervised deep learning has given us plenty of very useful applications, especially in fields such as computer vision and some areas of natural language processing. Deep learning is playing an increasingly important role in sensitive applications, such as cancer detection. It is also proving to be extremely useful in areas where the scale of the problem is beyond being addressed with human efforts, such as, with some caveats, reviewing the huge amount of content being posted on social media every day.

"If you take deep learning from Facebook, Instagram, YouTube, etc., those companies crumble," LeCun says. "They are completely built around it."

But as mentioned, supervised learning is only applicable where there's enough quality data and the data can capture the entirety of possible scenarios. As soon as trained deep learning models face novel examples that differ from their training examples, they start to behave in unpredictable ways. In some cases, showing an object from a slightly different angle might be enough to confound a neural network into mistaking it with something else.

ImageNet vs reality: In ImageNet (left column) objects are neatly positioned, in ideal background and lighting conditions. In the real world, things are messier (source: objectnet.dev)

Deep reinforcement learning has shown remarkable results in games and simulation. In the past few years, reinforcement learning has conquered many games that were previously thought to be off-limits for artificial intelligence. AI programs have already decimated human world champions at StarCraft 2, Dota, and the ancient Chinese board game Go.

But the way these AI programs learn to solve problems is drastically different from that of humans. Basically, a reinforcement learning agent starts with a blank slate and is only provided with a basic set of actions it can perform in its environment. The AI is then left on its own to learn through trial-and-error how to generate the most rewards (e.g., win more games).

This model works when the problem space is simple and you have enough compute power to run as many trial-and-error sessions as possible. In most cases, reinforcement learning agents take an insane amount of sessions to master games. The huge costs have limited reinforcement learning research to research labs owned or funded by wealthy tech companies.

Reinforcement learning agents must be trained on hundreds of years' worth of sessions to master games, much more than humans can play in a lifetime (source: Yann LeCun).

Reinforcement learning systems are very bad at transfer learning. A bot that plays StarCraft 2 at grandmaster level needs to be trained from scratch if it wants to play Warcraft 3. In fact, even small changes to the StarCraft game environment can immensely degrade the performance of the AI. In contrast, humans are very good at extracting abstract concepts from one game and transferring them to another game.

Reinforcement learning really shows its limits when it wants to learn to solve real-world problems that can't be simulated accurately. "What if you want to train a car to drive itself? And it's very hard to simulate this accurately," LeCun said, adding that if we wanted to do it in real life, we would have to destroy many cars. And unlike simulated environments, real life doesn't allow you to run experiments in fast forward, and parallel experiments, when possible, would result in even greater costs.

LeCun breaks down the challenges of deep learning into three areas.

First, we need to develop AI systems that learn with fewer samples or fewer trials. "My suggestion is to use unsupervised learning, or I prefer to call it self-supervised learning because the algorithms we use are really akin to supervised learning, which is basically learning to fill in the blanks," LeCun says. "Basically, it's the idea of learning to represent the world before learning a task. This is what babies and animals do. We run about the world, we learn how it works before we learn any task. Once we have good representations of the world, learning a task requires few trials and few samples."

Babies develop concepts of gravity, dimensions, and object persistence in the first few months after their birth. While there's debate on how much of these capabilities are hardwired into the brain and how much of them are learned, what is for sure is that we develop many of our abilities simply by observing the world around us.

The second challenge is creating deep learning systems that can reason. Current deep learning systems are notoriously bad at reasoning and abstraction, which is why they need huge amounts of data to learn simple tasks.

"The question is, how do we go beyond feed-forward computation and system 1? How do we make reasoning compatible with gradient-based learning? How do we make reasoning differentiable? That's the bottom line," LeCun said.

System 1 refers to learning tasks that don't require active thinking, such as navigating a known area or making small calculations. System 2 is the more active kind of thinking, which requires reasoning. Symbolic artificial intelligence, the classic approach to AI, has proven to be much better at reasoning and abstraction.

But LeCun doesn't suggest returning to symbolic AI or to hybrid artificial intelligence systems, as other scientists have suggested. His vision for the future of AI is much more in line with that of Yoshua Bengio, another deep learning pioneer, who introduced the concept of system 2 deep learning at NeurIPS 2019 and further discussed it at AAAI 2020. LeCun, however, did admit that nobody has a completely good answer to which approach will enable deep learning systems to reason.

The third challenge is to create deep learning systems that can learn and plan complex action sequences, and decompose tasks into subtasks. Deep learning systems are good at providing end-to-end solutions to problems but very bad at breaking them down into specific interpretable and modifiable steps. There have been advances in creating learning-based AI systems that can decompose images, speech, and text. Capsule networks, invented by Geoffrey Hinton, address some of these challenges.

But learning to reason about complex tasks is beyond today's AI. "We have no idea how to do this," LeCun admits.

The idea behind self-supervised learning is to develop a deep learning system that can learn to fill in the blanks.

"You show a system a piece of input, a text, a video, even an image, you suppress a piece of it, mask it, and you train a neural net or your favorite class or model to predict the piece that's missing. It could be the future of a video or the words missing in a text," LeCun says.
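A minimal sketch of how that fill-in-the-blanks idea turns unlabeled text into training pairs: mask a token, and the masked token itself becomes the label, so no human annotation is needed. The toy corpus and masking rate below are illustrative.

```python
import random

random.seed(0)
corpus = ["machine learning needs large amounts of data",
          "babies learn how the world works by observation"]

def make_masked_examples(sentence, mask_prob=0.3, mask_token="[MASK]"):
    """Turn one unlabeled sentence into (masked input, target token) pairs."""
    tokens = sentence.split()
    examples = []
    for i, tok in enumerate(tokens):
        if random.random() < mask_prob:
            masked = tokens.copy()
            masked[i] = mask_token
            examples.append((" ".join(masked), tok))
    return examples

for sent in corpus:
    for inp, target in make_masked_examples(sent):
        print(f"input:  {inp}\ntarget: {target}\n")
```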

The closest we have to self-supervised learning systems are Transformers, an architecture that has proven very successful in natural language processing. Transformers don't require labeled data. They are trained on large corpora of unstructured text such as Wikipedia articles. And they've proven to be much better than their predecessors at generating text, engaging in conversation, and answering questions. (But they are still very far from really understanding human language.)

Transformers have become very popular and are the underlying technology for nearly all state-of-the-art language models, including Google's BERT, Facebook's RoBERTa, OpenAI's GPT-2, and Google's Meena chatbot.
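As a short usage sketch, a pretrained masked language model can be asked to fill such blanks through the Hugging Face transformers library (assuming the package is installed and the model weights can be downloaded):

```python
from transformers import pipeline

# Load a pretrained masked language model and ask it to fill in a blank.
fill = pipeline("fill-mask", model="bert-base-uncased")
for candidate in fill("Deep learning requires large amounts of [MASK]."):
    print(f"{candidate['token_str']:>12s}  score={candidate['score']:.3f}")
```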

More recently, AI researchers have proven that transformers can perform integration and solve differential equations, problems that require symbol manipulation. This might be a hint that the evolution of transformers might enable neural networks to move beyond pattern recognition and statistical approximation tasks.

So far, transformers have proven their worth in dealing with discrete data such as words and mathematical symbols. "It's easy to train a system like this because there is some uncertainty about which word could be missing, but we can represent this uncertainty with a giant vector of probabilities over the entire dictionary, and so it's not a problem," LeCun says.

But the success of Transformers has not transferred to the domain of visual data. "It turns out to be much more difficult to represent uncertainty and prediction in images and video than it is in text because it's not discrete. We can produce distributions over all the words in the dictionary. We don't know how to represent distributions over all possible video frames," LeCun says.

For each video segment, there are countless possible futures. This makes it very hard for an AI system to predict a single outcome, say the next few frames in a video. The neural network ends up calculating the average of possible outcomes, which results in blurry output.

"This is the main technical problem we have to solve if we want to apply self-supervised learning to a wide variety of modalities like video," LeCun says.

LeCun's favored method for approaching self-supervised learning is what he calls latent variable energy-based models. The key idea is to introduce a latent variable Z which computes the compatibility between a variable X (the current frame in a video) and a prediction Y (the future of the video) and selects the outcome with the best compatibility score. In his speech, LeCun further elaborated on energy-based models and other approaches to self-supervised learning.

Energy-based models use a latent variable Z to compute the compatibility between a variable X and a prediction Y and select the outcome with the best compatibility score (image credit: Yann LeCun).
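A minimal sketch of the inference rule this describes: for an input X, each candidate prediction Y is scored by the lowest energy achievable over the latent variable Z, and the Y with the best (lowest) score wins. The quadratic energy function here is a toy stand-in for a learned one.

```python
import numpy as np

def energy(x, y, z):
    # Toy energy: low when y is close to x shifted by the latent offset z.
    return (y - (x + z)) ** 2 + 0.1 * z ** 2

def predict(x, candidate_ys, candidate_zs):
    best_y, best_e = None, np.inf
    for y in candidate_ys:
        # Minimize over the latent variable for this candidate prediction.
        e = min(energy(x, y, z) for z in candidate_zs)
        if e < best_e:
            best_y, best_e = y, e
    return best_y, best_e

ys = np.linspace(-2.0, 2.0, 41)
zs = np.linspace(-1.0, 1.0, 21)
print(predict(0.5, ys, zs))   # the y with the best compatibility score for x = 0.5
```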

"I think self-supervised learning is the future. This is what's going to allow our AI systems, our deep learning systems, to go to the next level, perhaps learn enough background knowledge about the world by observation, so that some sort of common sense may emerge," LeCun said in his speech at the AAAI Conference.

One of the key benefits of self-supervised learning is the immense gain in the amount of information outputted by the AI. In reinforcement learning, training the AI system is performed at scalar level; the model receives a single numerical value as reward or punishment for its actions. In supervised learning, the AI system predicts a category or a numerical value for each input.

In self-supervised learning, the output improves to a whole image or set of images. "It's a lot more information. To learn the same amount of knowledge about the world, you will require fewer samples," LeCun says.

We must still figure out how the uncertainty problem works, but when the solution emerges, we will have unlocked a key component of the future of AI.

"If artificial intelligence is a cake, self-supervised learning is the bulk of the cake," LeCun says. "The next revolution in AI will not be supervised, nor purely reinforced."

This story is republished from TechTalks, the blog that explores how technology is solving problems and creating new ones.

Published April 5, 2020 05:00 UTC

Go here to see the original:

Self-supervised learning is the future of AI - The Next Web

Written by admin

April 11th, 2020 at 12:49 am

Posted in Machine Learning

Want to Be Better at Sports? Listen to the Machines – The New York Times

Posted: at 12:49 am


without comments

"Based on the data that's collected, it tells me how I'm moving compared to previously and how I'm moving compared to my ideal movement signature, as they call it," Mr. Ross said. Sparta Science then tailors his workouts to move him closer to that ideal.

The Pittsburgh Steelers, the Detroit Lions and the Washington Redskins, among others, use the system regularly, Dr. Wagner said. Sparta Science is also used to evaluate college players in the National Football League's annual scouting combine.

Of course, it is inevitable that machine learning's predictive power would be applied to another lucrative end of the sports industry: betting. Sportlogiq, a Montreal-based firm, has a system that primarily relies on broadcast feeds to analyze players and teams in hockey, soccer, football and lacrosse.

Mehrsan Javan, the company's chief technology officer and one of its co-founders, said the majority of National Hockey League teams, including the last four Stanley Cup champions, used Sportlogiq's system to evaluate players.

Josh Flynn, assistant general manager for the Columbus Blue Jackets, Ohio's professional hockey franchise, said the team used Sportlogiq to analyze players and strategy. "We can dive levels deeper into questions we have about the game than we did before," Mr. Flynn said.

But Sportlogiq also sells analytic data to bookmakers in the United States, helping them set odds on bets, and hopes to sell information to individual bettors soon. Mr. Javan is looking to hire a vice president of betting.

The key to all of this sports-focused technology is data.

"Algorithms come and go, but data is forever," Mr. Alger is fond of saying. Computer vision systems have to be told what to look for, whether it be tumors in an X-ray or bicycles on the road. In Seattle Sports Sciences' case, the computers must be trained to recognize the ball in various lighting conditions as well as understand which plane of the foot is striking the ball.

View original post here:

Want to Be Better at Sports? Listen to the Machines - The New York Times

Written by admin

April 11th, 2020 at 12:49 am

Posted in Machine Learning

Don’t Turn Your Marketing Function Over To AI Just Yet – Forbes

Posted: at 12:49 am


without comments


by Kristen Senz

Imagine a future in which a smart marketing machine can predict the needs and habits of individual consumers and the dynamics of competitors across industries and markets. This device would collect data to answer strategic questions, guide managerial decisions, and enable marketers to quickly test how new products or services would perform at various prices or with different characteristics.

The machine learning algorithms that might power such a device are, at least for now, incapable of producing such promising results. But what about tomorrow? According to a group of researchers, the envisioned virtual market machine could become a reality but would still require one missing ingredient: a soul.

The soul is our human intuition, scientific expertise, awareness of customer preferences, and industry knowledge: all capabilities that machines lack and intelligent marketing decisions require.

"Without a soul, without human insight, the capabilities of the machine will be limited," a group of 13 marketing scholars write in their working paper, "Soul and Machine (Learning)," which takes a high-level view of the present and future role of machine learning tools in marketing research and practice. "We propose to step back and ask how we can best integrate machine learning to solve previously untenable marketing problems facing real companies?"

A product of the 11th Triennial Invitational Choice Symposium held last year, the paper explains how machine learning leverages Big Data, giving managers new tools to help unravel complex marketing puzzles and understand consumer behavior like never before. Tomomichi Amano, assistant professor in the Marketing Unit at Harvard Business School, is one of the paper's authors.

"We tend to think that when we have all this rich data and this machine learning technology, that the machines are going to just come up with the best solution," says Amano. "But that's not something we're able to do now, and to have any hope of doing that, we need to be integrating the specialized domain knowledge that managers possess into these tools and systems."

Marketers have long envisioned the potential for technology to bring about a virtual market: an algorithm so sophisticated that multiple departments within the firm could query it for answers to questions ranging from optimal pricing to product design. What prevents this from materializing? After all, machine learning is delivering self-driving cars and beating human players on Jeopardy!

The answer: context specificity, says Amano.

The factors that influence consumer behavior are so varied and complex, and the data that companies collect is so rich, that just modeling how consumers search a single retail website is a monumental task. Each company's data are so firm- and occasion-specific that building and scaling such models is neither feasible nor economical. Machine learning technology today excels at self-contained tasks like image recognition and content-sorting.

"The kind of tasks that we want to do in marketing tend to be more challenging, because we're trying to model human behavior," Amano says. "So the number of things the model cannot systematically predict is much larger." In other words, there's lots of noise in human behavior.

Instead of working to create the virtual market, marketers and marketing researchers are trying to break it down into more manageable pieces. Amano approaches this from an economic perspective, using basic economic principles (assuming customers prefer lower-priced products, for example) to build models that can begin to explain how consumers approach online search. (See "Large-Scale Demand Estimation with Search Data.")

Other researchers are developing machine learning tools that can leverage content from customers' product reviews to identify their future needs. But here the human analysts are key players. They must review the selected content and formulate customer needs, because natural language processing technology still lacks the sophistication to infer them. Increasingly, this hybrid approach is allowing companies to replace traditional customer interviews and focus groups, according to Amano and his colleagues.

Understanding what prompts a customer to purchase a product, a concept known as attribution, is an area ripe for new hybrid tactics, says Amano. For example, a customer exposed to three different ads for a cell phone (on a bus, on TV, and online) talks to his or her friends about cell phones and then buys the phone a week later.

"Regardless of how much data is collected, we don't know how much that bus ad you saw contributed to your purchase of the cell phone," Amano says. "We don't know how to model that, and we don't know how to think about it, but it's a really important question, because that informs whether you run another ad on the bus."

Here's where managerial insight and behavioral theory can guide firms' use of data and machine learning to gain new knowledge about current and potential market segments. "It might be that people on the bus use their cell phones more," Amano posits, "so they just tend to buy cell phones more often."

Managers who implement marketing tactics and analytics that meld human capital and the machine learning toolbox stand to improve decision-making and product development. But doing so requires careful consideration of the balance between personalization and privacy. At what point do curated online product recommendations become so creepy or intrusive that they sour customers on the brand?

Amano points out that the benefits of personalized marketing are often overshadowed by the creepiness factor. "There definitely are a bunch of benefits that we reap from the fact that firms and governments have access to more of our data," he says, "even though some of those benefits are hard to see."

Receiving information about available products is one benefit to consumers. In the case of government, the marketing scholars who attended the Choice Symposium contend that machine learning will soon augment or replace expensive survey-based data gathering techniques to keep important indices, such as unemployment rates, up to date.

"Machines can scrape at high frequency to collect publicly available information about consumers, firms, jobs, social media, etc., which can be used to generate indices in real-time," the scholars write. "With careful development, these measures will be more precise and able to better predict the economic conditions of geographic areas at high granularity, from zip codes to cities, to states and nations."

But privacy concerns among consumers are real and growing, and marketing professionals and scholars are still trying to understand the implications.

"Facebook and Google: these services are free from a monetary perspective, but I think there's some recognition that we are paying some cost in using them, by giving out some of our data, and from that perspective, there is some more research we have to do on the academic front to make sure we understand how firms ought to be responding to these concerns," Amano says.

Managers, in the meantime, must rely on their own insight and experience to find the answer to that question and others. They also need to keep their expectations realistic when it comes to the capacity of machine learning tools, says Amano, and employ people who can communicate effectively about data-based approaches. Ultimately, managers who have the foresight to collaborate with data analysts to design data collection efforts and stagger promotions will be well positioned to harness the power of new machine learning tools in marketing.

"You can't do something in business, and then collect the data, and then expect the machine learning methods to spit out insight for you," Amano says. "It's important that throughout the process you consult and think about your goals and how what you're doing is going to influence the kind of data you can collect."

Visit link:

Don't Turn Your Marketing Function Over To AI Just Yet - Forbes

Written by admin

April 11th, 2020 at 12:49 am

Posted in Machine Learning

How Will the Emergence of 5G Affect Federated Learning? – IoT For All

Posted: at 12:49 am


without comments

As development teams race to build out AI tools, it is becoming increasingly common to train algorithms on edge devices. Federated learning, a subset of distributed machine learning, is a relatively new approach that allows companies to improve their AI tools without explicitly accessing raw user data.

Conceived by Google in 2017, federated learning is a decentralized learning model through which algorithms are trained on edge devices. In Google's on-device machine learning approach, the search giant pushed its predictive text algorithm to Android devices, aggregated the data and sent a summary of the new knowledge back to a central server. To protect the integrity of the user data, these summaries were either delivered via homomorphic encryption or protected with differential privacy, which is the practice of adding noise to the data in order to obfuscate the results.

Generally speaking, with federated learning, the AI algorithm is trained without ever recognizing any individual user's specific data; in fact, the raw data never leaves the device itself. Only aggregated model updates are sent back. These model updates are then decrypted upon delivery to the central server. Test versions of the updated model are then sent back to select devices, and after this process is repeated thousands of times, the AI algorithm is significantly improved, all while never jeopardizing user privacy.
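A minimal sketch of one round of this process, under simplifying assumptions: each simulated device computes a model update on data that never leaves it, adds a little noise in the spirit of differential privacy, and the server averages only the updates. A tiny linear model stands in for a real on-device network.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, noise_scale=0.01):
    """One device: a gradient step on its local data, plus noise before sharing."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    update = -lr * grad
    return update + rng.normal(scale=noise_scale, size=update.shape)

# Simulated devices, each holding its own private dataset.
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(10):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    devices.append((X, y))

weights = np.zeros(2)
for _ in range(200):
    # The server averages the updates; it never sees any device's raw (X, y).
    updates = [local_update(weights, X, y) for X, y in devices]
    weights = weights + np.mean(updates, axis=0)

print("learned weights:", weights.round(2), "target:", true_w)
```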

This technology is expected to make waves in the healthcare sector. For example, federated learning is currently being explored by medical start-up Owkin. Seeking to leverage patient data from several healthcare organizations, Owkin uses federated learning to build AI algorithms with data from various hospitals. This can have far-reaching effects, especially as it's invaluable that hospitals are able to share disease progression data with each other while preserving the integrity of patient data and adhering to HIPAA regulations. By no means is healthcare the only sector employing this technology; federated learning will be increasingly used by autonomous car companies, smart cities, drones, and fintech organizations. Several other federated learning start-ups are coming to market, including Snips, S20.ai, and Xnor.ai, which was recently acquired by Apple.

Seeing as these AI algorithms are worth a great deal of money, it's expected that these models will be a lucrative target for hackers. Nefarious actors will attempt to perform man-in-the-middle attacks. However, as mentioned earlier, by adding noise and aggregating data from various devices and then encrypting this aggregate data, companies can make things difficult for hackers.

Perhaps more concerning are attacks that poison the model itself. A hacker could conceivably compromise the model through his or her own device, or by taking over another users device on the network. Ironically, because federated learning aggregates the data from different devices and sends the encrypted summaries back to the central server, hackers who enter via a backdoor are given a degree of cover. Because of this, it is difficult, if not impossible, to identify where anomalies are located.

Although on-device machine learning effectively trains algorithms without exposing raw user data, it does require a ton of local power and memory. Companies attempt to circumvent this by only training their AI algorithms on the edge when devices are idle, charging, or connected to Wi-Fi; however, this is a perpetual challenge.

As 5G expands across the globe, edge devices will no longer be limited by bandwidth and processing speed constraints. According to a recent Nokia report, 4G base stations can support 100,000 devices per square kilometer, whereas the forthcoming 5G stations will support up to 1 million devices in the same area. With enhanced mobile broadband and low latency, 5G will provide energy efficiency while facilitating device-to-device communication (D2D). In fact, it is predicted that 5G will usher in a 10-100x increase in bandwidth and a 5-10x decrease in latency.

When 5G becomes more prevalent, we'll experience faster networks, more endpoints, and a larger attack surface, which may attract an influx of DDoS attacks. Also, 5G comes with a slicing feature, which allows slices (virtual networks) to be easily created, modified, and deleted based on the needs of users. According to a research manuscript on the disruptive force of 5G, it remains to be seen whether this network slicing component will allay security concerns or bring a host of new problems.

To summarize, there are new concerns from both a privacy and a security perspective; however, the fact remains: 5G is ultimately a boon for federated learning.

Here is the original post:

How Will the Emergence of 5G Affect Federated Learning? - IoT For All

Written by admin

April 11th, 2020 at 12:49 am

Posted in Machine Learning




