
Inspiration- Rewire your brain and reach your dreams with ease, on autopilot! – BlogTalkRadio

Posted: April 11, 2020 at 6:41 pm


This is a LIFE CHANGER!! Join Anna and Colette, discussing M.A.P., the Law of Attraction, spirituality and MORE!!

In 2014, after 35 years of research and training in the fields of personal growth, energy psychology, spirituality, oriental medicine, and quantum physics, Colette developed the MAP Method. It is a revolutionary coaching approach that quickly and effectively rewires your brain so you can reach your dreams with ease, on autopilot. The MAP process is based on the latest neuroscience, teaching the brain how to rewire itself in minutes.

The MAP Coaching Institute, which Colette founded, is a worldwide organization (Singapore, Australia, New Zealand, France, Germany, the US, Canada, etc.). The institute offers a certification program for coaches, therapists and healers, as well as an innovative coaching program for ultimate personal growth (online daily live sessions to rewire the brain for success, health and happiness).

Licensed psychotherapist Colette Streicher developed MAP (Make Anything Possible) and is the author of the #1 Amazon international bestseller Abundance on Demand. She is the secret weapon of peak performers, top athletes, CEOs, and business leaders.

Click Here to receive a MAP Coaching Workshop Experience for $7

Anna Banguilan is a Life Coach & Spiritual Humorist, blessing the messes and helping others mastermind their Superconscious Mind to release blocks and resistance to what they truly want, bringing more clarity, joy and peace and revealing their true identity. www.lifegetsbetterandbetter.com

FOLLOW US! facebook.com/universalenergyradio

View post:
Inspiration- Rewire your brain and reach your dreams with ease, on autopilot! - BlogTalkRadio

Written by admin |

April 11th, 2020 at 6:41 pm

Posted in Life Coaching

How Machine Learning Is Being Used To Eradicate Medication Errors – Analytics India Magazine

Posted: at 12:49 am


People working in the healthcare sector take extra precautions to avoid mistakes and medication errors that can put patients' lives at risk. Yet despite this, 2% of patients face preventable medication-related incidents that could be life-threatening. Inadequate systems, tools, processes or working conditions are some of the factors contributing to these medical mistakes.

In a bid to solve this problem, Google collaborated with UCSF's Bakar Computational Health Sciences Institute to publish "Predicting Inpatient Medication Orders in Electronic Health Record Data" in Clinical Pharmacology and Therapeutics. The paper discusses how machine learning (ML) can be used to anticipate doctors' standard prescribing patterns, using the data available in electronic health records.

Google used de-identified patient clinical data, which included vital signs, laboratory results, past medications, procedures, diagnoses, and more. After evaluating a patient's current state and medical history, Google's new model anticipated a physician's prescription decisions three-quarters of the time.

To train the model, Google chose a dataset containing approximately three million medication orders from more than 100,000 hospitalizations. The company de-identified the retrospective electronic health data by randomly shifting dates and removing identifying portions of each record, in accordance with HIPAA rules and guidelines. The company did not use any identifying information such as names, addresses, contact details, record numbers, names of physicians, free-text notes, images, etc.

The tech giant's research used the open-sourced Fast Healthcare Interoperability Resources (FHIR) format, which the company claims it previously applied to improve healthcare data and make it more useful for machine learning. Google did not restrict the dataset to a particular disease, which made the ML task more demanding but also allowed the model to identify a wider variety of medical conditions.


Google evaluated two different ML models: a long short-term memory (LSTM) recurrent neural network and a regularized time-bucketed logistic model, both of which are often used in clinical research. The models were compared against a simple baseline that ranked the most commonly ordered medications based on a patient's hospital service and the time elapsed since admission. Each time a medication was entered in the retrospective data, the models ranked a list of 990 possible medications, and the team assessed whether the models assigned high probabilities to the medications that the doctors actually prescribed in each case.

Google's best performing model was the LSTM, which is well suited to sequential data such as text and language. The model is designed to capture recent events in the data and their order, which makes it an excellent option for this problem. In almost 93% of cases, the model's top-10 list included at least one medication that a clinician would prescribe to the patient within the next day.

The model ranked the medication actually prescribed by the doctor among its top-10 most likely medications 55% of the time, and 75% of ordered medications were ranked in the top-25. Even in false-negative cases, where a doctor's medication did not make it into the top-25 results, the model ranked a medication from the same drug class 42% of the time.
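To make these figures concrete, the evaluation boils down to a top-k hit rate over the model's ranked list of 990 candidates. Here is a minimal sketch of that metric; it is an illustration only, not Google's code, and the array names, shapes and random data are assumptions:

```python
import numpy as np

def top_k_hit_rate(scores, ordered_ids, k=10):
    """Fraction of cases where the medication a doctor actually ordered
    appears among the model's k highest-scoring candidates."""
    # Indices of the k highest-scoring medications for each case
    top_k = np.argsort(-scores, axis=1)[:, :k]
    hits = [ordered_ids[i] in top_k[i] for i in range(len(ordered_ids))]
    return float(np.mean(hits))

# Hypothetical usage: 1,000 cases scored over 990 candidate medications
rng = np.random.default_rng(0)
scores = rng.random((1000, 990))           # stand-in model probabilities
ordered = rng.integers(0, 990, size=1000)  # stand-in doctor orders
print(top_k_hit_rate(scores, ordered, k=10))  # ~0.01 for random scores
print(top_k_hit_rate(scores, ordered, k=25))  # ~0.025 for random scores
```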

These models are trained to mimic physicians' behavior as it appears in historical data, not to learn optimal prescribing patterns. As a result, the models do not understand how the medications work or whether they have side effects. According to Google, a model that has learned normal prescribing behavior could eventually be used to spot abnormal and potentially dangerous orders. In the next phase, the company will examine the models under different circumstances to understand which medication errors can cause harm to patients.


The result of this work by Google is a small step towards testing the hypothesis that machine learning can be applied to build systems that prevent mistakes by doctors and clinicians and keep patients safe. Google looks forward to collaborating with doctors, pharmacists, clinicians and patients to continue the research.


Excerpt from:

How Machine Learning Is Being Used To Eradicate Medication Errors - Analytics India Magazine

Written by admin |

April 11th, 2020 at 12:49 am

Posted in Machine Learning

Self-supervised learning is the future of AI – The Next Web

Posted: at 12:49 am


Despite the huge contributions of deep learning to the field of artificial intelligence, there's something very wrong with it: It requires huge amounts of data. This is one thing that both the pioneers and critics of deep learning agree on. In fact, deep learning didn't emerge as the leading AI technique until a few years ago because of the limited availability of useful data and the shortage of computing power to process that data.

Reducing the data-dependency of deep learning is currently among the top priorities of AI researchers.

In his keynote speech at the AAAI conference, computer scientist Yann LeCun discussed the limits of current deep learning techniques and presented the blueprint for self-supervised learning, his roadmap to solve deep learning's data problem. LeCun is one of the godfathers of deep learning and the inventor of convolutional neural networks (CNNs), one of the key elements that have spurred a revolution in artificial intelligence in the past decade.

Self-supervised learning is one of several plans to create data-efficient artificial intelligence systems. At this point, it's really hard to predict which technique will succeed in creating the next AI revolution (or whether we'll end up adopting a totally different strategy). But here's what we know about LeCun's masterplan.

First, LeCun clarified that what is often referred to as the limitations of deep learning is, in fact, a limit of supervised learning. Supervised learning is the category of machine learning algorithms that require annotated training data. For instance, if you want to create an image classification model, you must train it on a vast number of images that have been labeled with their proper class.

"[Deep learning] is not supervised learning. It's not just neural networks. It's basically the idea of building a system by assembling parameterized modules into a computation graph," LeCun said in his AAAI speech. "You don't directly program the system. You define the architecture and you adjust those parameters. There can be billions."

Deep learning can be applied to different learning paradigms, LeCun added, including supervised learning, reinforcement learning, and unsupervised or self-supervised learning.

But the confusion surrounding deep learning and supervised learning is not without reason. For the moment, the majority of deep learning algorithms that have found their way into practical applications are based on supervised learning models, which says a lot about the current shortcomings of AI systems. Image classifiers, facial recognition systems, speech recognition systems, and many of the other AI applications we use every day have been trained on millions of labeled examples.

Reinforcement learning and unsupervised learning, the other categories of learning algorithms, have so far found very limited applications.

Supervised deep learning has given us plenty of very useful applications, especially in fields such as computer vision and some areas of natural language processing. Deep learning is playing an increasingly important role in sensitive applications, such as cancer detection. It is also proving to be extremely useful in areas where the scale of the problem is beyond being addressed with human efforts, such as, with some caveats, reviewing the huge amount of content being posted on social media every day.

"If you take deep learning from Facebook, Instagram, YouTube, etc., those companies crumble," LeCun says. "They are completely built around it."

But as mentioned, supervised learning is only applicable where there's enough quality data and the data can capture the entirety of possible scenarios. As soon as trained deep learning models face novel examples that differ from their training examples, they start to behave in unpredictable ways. In some cases, showing an object from a slightly different angle might be enough to confound a neural network into mistaking it for something else.

ImageNet vs reality: In ImageNet (left column) objects are neatly positioned, in ideal background and lighting conditions. In the real world, things are messier (source: objectnet.dev)

Deep reinforcement learning has shown remarkable results in games and simulation. In the past few years, reinforcement learning has conquered many games that were previously thought to be off-limits for artificial intelligence. AI programs have already decimated human world champions at StarCraft 2, Dota, and the ancient Chinese board game Go.

But the way these AI programs learn to solve problems is drastically different from that of humans. Basically, a reinforcement learning agent starts with a blank slate and is only provided with a basic set of actions it can perform in its environment. The AI is then left on its own to learn through trial-and-error how to generate the most rewards (e.g., win more games).

This model works when the problem space is simple and you have enough compute power to run as many trial-and-error sessions as possible. In most cases, reinforcement learning agents take an insane number of sessions to master games. The huge costs have limited reinforcement learning research to labs owned or funded by wealthy tech companies.

Reinforcement learning agents must be trained on hundreds of years' worth of sessions to master games, far more than a human could play in a lifetime (source: Yann LeCun).

Reinforcement learning systems are very bad at transfer learning. A bot that plays StarCraft 2 at grandmaster level needs to be trained from scratch if it wants to play Warcraft 3. In fact, even small changes to the StarCraft game environment can immensely degrade the performance of the AI. In contrast, humans are very good at extracting abstract concepts from one game and transferring them to another.

Reinforcement learning really shows its limits when it must solve real-world problems that can't be simulated accurately. "What if you want to train a car to drive itself? It's very hard to simulate this accurately," LeCun said, adding that if we wanted to do it in real life, we would have to destroy many cars. And unlike simulated environments, real life doesn't allow you to run experiments in fast forward, and parallel experiments, when possible, would result in even greater costs.

LeCun breaks down the challenges of deep learning into three areas.

First, we need to develop AI systems that learn with fewer samples or fewer trials. "My suggestion is to use unsupervised learning, or I prefer to call it self-supervised learning because the algorithms we use are really akin to supervised learning, which is basically learning to fill in the blanks," LeCun says. "Basically, it's the idea of learning to represent the world before learning a task. This is what babies and animals do. We run about the world, we learn how it works before we learn any task. Once we have good representations of the world, learning a task requires few trials and few samples."

Babies develop concepts of gravity, dimensions, and object persistence in the first few months after birth. While there's debate over how much of these capabilities are hardwired into the brain and how much are learned, what is certain is that we develop many of our abilities simply by observing the world around us.

The second challenge is creating deep learning systems that can reason. Current deep learning systems are notoriously bad at reasoning and abstraction, which is why they need huge amounts of data to learn simple tasks.

"The question is, how do we go beyond feed-forward computation and system 1? How do we make reasoning compatible with gradient-based learning? How do we make reasoning differentiable? That's the bottom line," LeCun said.

System 1 covers the kinds of tasks that don't require active thinking, such as navigating a known area or making small calculations. System 2 is the more active kind of thinking, which requires reasoning. Symbolic artificial intelligence, the classic approach to AI, has proven to be much better at reasoning and abstraction.

But LeCun doesn't suggest returning to symbolic AI or to hybrid artificial intelligence systems, as other scientists have suggested. His vision for the future of AI is much more in line with that of Yoshua Bengio, another deep learning pioneer, who introduced the concept of system 2 deep learning at NeurIPS 2019 and further discussed it at AAAI 2020. LeCun, however, did admit that nobody has a completely good answer to which approach will enable deep learning systems to reason.

The third challenge is to create deep learning systems that can learn and plan complex action sequences and decompose tasks into subtasks. Deep learning systems are good at providing end-to-end solutions to problems but very bad at breaking them down into specific, interpretable and modifiable steps. There have been advances in creating learning-based AI systems that can decompose images, speech, and text. Capsule networks, invented by Geoffrey Hinton, address some of these challenges.

But learning to reason about complex tasks is beyond today's AI. "We have no idea how to do this," LeCun admits.

The idea behind self-supervised learning is to develop a deep learning system that can learn to fill in the blanks.

"You show a system a piece of input, a text, a video, even an image, you suppress a piece of it, mask it, and you train a neural net or your favorite class of model to predict the piece that's missing. It could be the future of a video or the words missing in a text," LeCun says.
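A minimal sketch of this fill-in-the-blanks objective on text is shown below. It is an illustration only: the toy architecture, sizes, and random stand-in corpus are assumptions, not anything LeCun prescribes.

```python
import torch
import torch.nn as nn

# Toy masked-prediction setup: hide one token per sequence and train a
# model to predict it back from the surrounding context.
vocab_size, embed_dim, seq_len, batch = 100, 32, 8, 16
MASK_ID = 0  # reserved id standing in for the suppressed piece

model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),
    nn.Flatten(),                                # (batch, seq_len * embed_dim)
    nn.Linear(seq_len * embed_dim, vocab_size),  # score every possible token
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    tokens = torch.randint(1, vocab_size, (batch, seq_len))  # stand-in corpus
    pos = torch.randint(0, seq_len, (batch,))                # where to mask
    target = tokens[torch.arange(batch), pos].clone()        # the missing piece
    corrupted = tokens.clone()
    corrupted[torch.arange(batch), pos] = MASK_ID            # suppress it
    loss = loss_fn(model(corrupted), target)                 # predict it back
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

No labels are needed beyond the data itself; the supervision signal is manufactured by the masking step, which is exactly what makes the approach "self-supervised."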

The closest we have to self-supervised learning systems are Transformers, an architecture that has proven very successful in natural language processing. Transformers don't require labeled data. They are trained on large corpora of unstructured text such as Wikipedia articles. And they've proven to be much better than their predecessors at generating text, engaging in conversation, and answering questions. (But they are still very far from really understanding human language.)

Transformers have become very popular and are the underlying technology for nearly all state-of-the-art language models, including Google's BERT, Facebook's RoBERTa, OpenAI's GPT-2, and Google's Meena chatbot.

More recently, AI researchers have shown that transformers can perform integration and solve differential equations, problems that require symbol manipulation. This might be a hint that the evolution of transformers will enable neural networks to move beyond pattern recognition and statistical approximation tasks.

So far, transformers have proven their worth in dealing with discrete data such as words and mathematical symbols. "It's easy to train a system like this because there is some uncertainty about which word could be missing, but we can represent this uncertainty with a giant vector of probabilities over the entire dictionary, and so it's not a problem," LeCun says.

But the success of Transformers has not transferred to the domain of visual data. "It turns out to be much more difficult to represent uncertainty and prediction in images and video than it is in text because it's not discrete. We can produce distributions over all the words in the dictionary. We don't know how to represent distributions over all possible video frames," LeCun says.

For each video segment, there are countless possible futures. This makes it very hard for an AI system to predict a single outcome, say the next few frames in a video. The neural network ends up calculating the average of possible outcomes, which results in blurry output.

"This is the main technical problem we have to solve if we want to apply self-supervised learning to a wide variety of modalities like video," LeCun says.

LeCun's favored method for approaching self-supervised learning is what he calls latent variable energy-based models. The key idea is to introduce a latent variable Z, which is used to compute the compatibility between a variable X (the current frame in a video) and a prediction Y (the future of the video), and to select the outcome with the best compatibility score. In his speech, LeCun further elaborated on energy-based models and other approaches to self-supervised learning.

Energy-based models use a latent variable Z to compute the compatibility between a variable X and a prediction Y and select the outcome with the best compatibility score (image credit: Yann LeCun).
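Written as a formula, the standard inference rule for a latent-variable energy-based model (a sketch of the usual formulation from LeCun's talks, with E the learned energy function, where lower energy means better compatibility) is:

```latex
\hat{y} = \arg\min_{y} \min_{z} E(x, y, z)
```

The minimization over the latent variable z lets the model entertain many possible futures for the same input x, instead of averaging them into a single blurry prediction.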

"I think self-supervised learning is the future. This is what's going to allow our AI systems, deep learning systems, to go to the next level, perhaps learn enough background knowledge about the world by observation, so that some sort of common sense may emerge," LeCun said in his speech at the AAAI Conference.

One of the key benefits of self-supervised learning is the immense gain in the amount of information the AI outputs. In reinforcement learning, training the AI system happens at the scalar level; the model receives a single numerical value as reward or punishment for its actions. In supervised learning, the AI system predicts a category or a numerical value for each input.

In self-supervised learning, the output expands to a whole image or set of images. "It's a lot more information. To learn the same amount of knowledge about the world, you will require fewer samples," LeCun says.

We must still figure out how to handle the uncertainty problem, but when the solution emerges, we will have unlocked a key component of the future of AI.

"If artificial intelligence is a cake, self-supervised learning is the bulk of the cake," LeCun says. "The next revolution in AI will not be supervised, nor purely reinforced."

This story is republished from TechTalks, the blog that explores how technology is solving problems and creating new ones.

Published April 5, 2020 05:00 UTC

Go here to see the original:

Self-supervised learning is the future of AI - The Next Web

Written by admin |

April 11th, 2020 at 12:49 am

Posted in Machine Learning

Want to Be Better at Sports? Listen to the Machines – The New York Times

Posted: at 12:49 am


"Based on the data that's collected, it tells me how I'm moving compared to previously and how I'm moving compared to my ideal movement signature, as they call it," Mr. Ross said. Sparta Science then tailors his workouts to move him closer to that ideal.

The Pittsburgh Steelers, the Detroit Lions and the Washington Redskins, among others, use the system regularly, Dr. Wagner said. Sparta Science is also used to evaluate college players in the National Football League's annual scouting combine.

Of course, it was inevitable that machine learning's predictive power would be applied to another lucrative end of the sports industry: betting. Sportlogiq, a Montreal-based firm, has a system that primarily relies on broadcast feeds to analyze players and teams in hockey, soccer, football and lacrosse.

Mehrsan Javan, the company's chief technology officer and one of its co-founders, said the majority of National Hockey League teams, including the last four Stanley Cup champions, used Sportlogiq's system to evaluate players.

Josh Flynn, assistant general manager for the Columbus Blue Jackets, Ohio's professional hockey franchise, said the team used Sportlogiq to analyze players and strategy. "We can dive levels deeper into questions we have about the game than we did before," Mr. Flynn said.

But Sportlogiq also sells analytic data to bookmakers in the United States, helping them set odds on bets, and hopes to sell information to individual bettors soon. Mr. Javan is looking to hire a vice president of betting.

The key to all of this sports-focused technology is data.

"Algorithms come and go, but data is forever," Mr. Alger is fond of saying. Computer vision systems have to be told what to look for, whether it be tumors in an X-ray or bicycles on the road. In Seattle Sports Sciences' case, the computers must be trained to recognize the ball in various lighting conditions as well as understand which plane of the foot is striking the ball.

View original post here:

Want to Be Better at Sports? Listen to the Machines - The New York Times

Written by admin |

April 11th, 2020 at 12:49 am

Posted in Machine Learning

Don’t Turn Your Marketing Function Over To AI Just Yet – Forbes

Posted: at 12:49 am



by Kristen Senz

Imagine a future in which a smart marketing machine can predict the needs and habits of individual consumers and the dynamics of competitors across industries and markets. This device would collect data to answer strategic questions, guide managerial decisions, and enable marketers to quickly test how new products or services would perform at various prices or with different characteristics.

The machine learning algorithms that might power such a device are, at least for now, incapable of producing such promising results. But what about tomorrow? According to a group of researchers, the envisioned virtual market machine could become a reality but would still require one missing ingredient: a soul.

The soul is our human intuition, scientific expertise, awareness of customer preferences, and industry knowledge: all capabilities that machines lack and intelligent marketing decisions require.

"Without a soul, without human insight, the capabilities of the machine will be limited," a group of 13 marketing scholars write in their working paper, Soul and Machine (Learning), which takes a high-level view of the present and future role of machine learning tools in marketing research and practice. "We propose to step back and ask how we can best integrate machine learning to solve previously untenable marketing problems facing real companies."

A product of the 11th Triennial Invitational Choice Symposium held last year, the paper explains how machine learning leverages Big Data, giving managers new tools to help unravel complex marketing puzzles and understand consumer behavior like never before. Tomomichi Amano, assistant professor in the Marketing Unit at Harvard Business School, is one of the paper's authors.

"We tend to think that when we have all this rich data and this machine learning technology, the machines are going to just come up with the best solution," says Amano. "But that's not something we're able to do now, and to have any hope of doing that, we need to be integrating the specialized domain knowledge that managers possess into these tools and systems."

Marketers have long envisioned the potential for technology to bring about a virtual market: an algorithm so sophisticated that multiple departments within the firm could query it for answers to questions ranging from optimal pricing to product design. What prevents this from materializing? After all, machine learning is delivering self-driving cars and beating human players on Jeopardy!

The answer: context specificity, says Amano.

The factors that influence consumer behavior are so varied and complex, and the data that companies collect is so rich, that just modeling how consumers search a single retail website is a monumental task. Each company's data are so firm- and occasion-specific that building and scaling such models is neither feasible nor economical. Machine learning technology today excels at self-contained tasks like image recognition and content sorting.

"The kinds of tasks that we want to do in marketing tend to be more challenging, because we're trying to model human behavior," Amano says. "So the number of things the model cannot systematically predict is much larger. In other words, there's lots of noise in human behavior."

Instead of working to create the virtual market, marketers and marketing researchers are trying to break it down into more manageable pieces. Amano approaches this from an economic perspective, using basic economic principles (assuming customers prefer lower-priced products, for example) to build models that can begin to explain how consumers approach online search. (See Large-Scale Demand Estimation with Search Data.)

Other researchers are developing machine learning tools that can leverage content from customers' product reviews to identify their future needs. But here the human analysts are key players. They must review the selected content and formulate customer needs, because natural language processing technology still lacks the sophistication to infer them. Increasingly, this hybrid approach is allowing companies to replace traditional customer interviews and focus groups, according to Amano and his colleagues.

Understanding what prompts a customer to purchase a product, a concept known as attribution, is an area ripe for new hybrid tactics, says Amano. For example, a customer exposed to three different ads for a cell phone (on a bus, on TV, and online) talks to his or her friends about cell phones and then buys the phone a week later.

"Regardless of how much data is collected, we don't know how much that bus ad you saw contributed to your purchase of the cell phone," Amano says. "We don't know how to model that, and we don't know how to think about it, but it's a really important question, because that informs whether you run another ad on the bus."

Here's where managerial insight and behavioral theory can guide firms' use of data and machine learning to gain new knowledge about current and potential market segments. "It might be that people on the bus use their cell phones more," Amano posits, "so they just tend to buy cell phones more often."

Managers who implement marketing tactics and analytics that meld human capital and the machine learning toolbox stand to improve decision-making and product development. But doing so requires careful consideration of the balance between personalization and privacy. At what point do curated online product recommendations become so creepy or intrusive that they sour customers on the brand?

Amano points out that the benefits of personalized marketing are often overshadowed by the creepiness factor. "There definitely are a bunch of benefits that we reap from the fact that firms and governments have access to more of our data," he says, "even though some of those benefits are hard to see."

Receiving information about available products is one benefit to consumers. In the case of government, the marketing scholars who attended the Choice Symposium contend that machine learning will soon augment or replace expensive survey-based data gathering techniques to keep important indices, such as unemployment rates, up to date.

"Machines can scrape at high frequency to collect publicly available information about consumers, firms, jobs, social media, etc., which can be used to generate indices in real time," the scholars write. "With careful development, these measures will be more precise and able to better predict the economic conditions of geographic areas at high granularity, from zip codes to cities, to states and nations."

But privacy concerns among consumers are real and growing, and marketing professionals and scholars are still trying to understand the implications.

"Facebook and Google: these services are free from a monetary perspective, but I think there's some recognition that we are paying some cost in using them, by giving out some of our data. And from that perspective, there is some more research we have to do on the academic front to make sure we understand how firms ought to be responding to these concerns," Amano says.

Managers, in the meantime, must rely on their own insight and experience to find the answer to that question and others. They also need to keep their expectations realistic when it comes to the capacity of machine learning tools, says Amano, and employ people who can communicate effectively about data-based approaches. Ultimately, managers who have the foresight to collaborate with data analysts to design data collection efforts and stagger promotions will be well positioned to harness the power of new machine learning tools in marketing.

"You can't do something in business, and then collect the data, and then expect the machine learning methods to spit out insight for you," Amano says. "It's important that throughout the process you consult and think about your goals and how what you're doing is going to influence the kind of data you can collect."

Visit link:

Don't Turn Your Marketing Function Over To AI Just Yet - Forbes

Written by admin |

April 11th, 2020 at 12:49 am

Posted in Machine Learning

How Will the Emergence of 5G Affect Federated Learning? – IoT For All

Posted: at 12:49 am


As development teams race to build out AI tools, it is becoming increasingly common to train algorithms on edge devices. Federated learning, a subset of distributed machine learning, is a relatively new approach that allows companies to improve their AI tools without explicitly accessing raw user data.

Conceived by Google in 2017, federated learning is a decentralized learning model through which algorithms are trained on edge devices. In Google's on-device machine learning approach, the search giant pushed its predictive text algorithm to Android devices, aggregated the data and sent a summary of the new knowledge back to a central server. To protect the integrity of the user data, the updates were either delivered via homomorphic encryption or protected with differential privacy, the practice of adding noise to the data in order to obfuscate the results.

Generally speaking, with federated learning, the AI algorithm is trained without ever seeing any individual user's specific data; in fact, the raw data never leaves the device itself. Only aggregated model updates are sent back, and these are decrypted upon delivery to the central server. Test versions of the updated model are then sent back to select devices, and after this process is repeated thousands of times, the AI algorithm is significantly improved, all while never jeopardizing user privacy.
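The round-trip just described can be sketched compactly. Below is a minimal federated-averaging illustration; the linear model, client data, and noise scale are invented stand-ins, not Google's implementation:

```python
import numpy as np

def local_update(weights, client_data, lr=0.1):
    """One client's training step; the raw data never leaves this function."""
    X, y = client_data
    grad = X.T @ (X @ weights - y) / len(y)  # gradient of mean squared error
    return weights - lr * grad

def federated_round(weights, clients, noise_scale=0.01):
    """Collect model updates (not data) from each device and average them."""
    updates = []
    for data in clients:
        update = local_update(weights, data) - weights
        # Differential-privacy-style noise obfuscates any single contribution
        update += np.random.normal(0, noise_scale, size=update.shape)
        updates.append(update)
    return weights + np.mean(updates, axis=0)  # only the aggregate is applied

# Hypothetical usage: three devices, each holding a private local dataset
rng = np.random.default_rng(1)
clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(3)]
w = np.zeros(5)
for _ in range(10):
    w = federated_round(w, clients)
```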

This technology is expected to make waves in the healthcare sector. For example, federated learning is currently being explored by medical start-up Owkin. Seeking to leverage patient data from several healthcare organizations, Owkin uses federated learning to build AI algorithms with data from various hospitals. This can have far-reaching effects, as it lets hospitals share disease progression data with each other while preserving the integrity of patient data and adhering to HIPAA regulations. By no means is healthcare the only sector employing this technology; federated learning will be increasingly used by autonomous car companies, smart cities, drones, and fintech organizations. Several other federated learning start-ups are coming to market, including Snips, S20.ai, and Xnor.ai, which was recently acquired by Apple.

Seeing as these AI algorithms are worth a great deal of money, it's expected that these models will be a lucrative target for hackers. Nefarious actors will attempt to perform man-in-the-middle attacks. However, as mentioned earlier, by adding noise, aggregating data from various devices and then encrypting the aggregate, companies can make things difficult for hackers.

Perhaps more concerning are attacks that poison the model itself. A hacker could conceivably compromise the model through his or her own device, or by taking over another users device on the network. Ironically, because federated learning aggregates the data from different devices and sends the encrypted summaries back to the central server, hackers who enter via a backdoor are given a degree of cover. Because of this, it is difficult, if not impossible, to identify where anomalies are located.

Although on-device machine learning effectively trains algorithms without exposing raw user data, it does require a ton of local power and memory. Companies attempt to circumvent this by only training their AI algorithms on the edge when devices are idle, charging, or connected to Wi-Fi; however, this is a perpetual challenge.

As 5G expands across the globe, edge devices will no longer be limited by bandwidth and processing speed constraints. According to a recent Nokia report, 4G base stations can support 100,000 devices per square kilometer, whereas the forthcoming 5G stations will support up to 1 million devices in the same area. With enhanced mobile broadband and low latency, 5G will provide energy efficiency while facilitating device-to-device (D2D) communication. In fact, it is predicted that 5G will usher in a 10-100x increase in bandwidth and a 5-10x decrease in latency.

When 5G becomes more prevalent, we'll experience faster networks, more endpoints, and a larger attack surface, which may attract an influx of DDoS attacks. Also, 5G comes with a slicing feature, which allows slices (virtual networks) to be easily created, modified, and deleted based on the needs of users. According to a research manuscript on the disruptive force of 5G, it remains to be seen whether this network slicing component will allay security concerns or bring a host of new problems.

To summarize, there are new concerns from both a privacy and a security perspective; however, the fact remains: 5G is ultimately a boon for federated learning.

Here is the original post:

How Will the Emergence of 5G Affect Federated Learning? - IoT For All

Written by admin |

April 11th, 2020 at 12:49 am

Posted in Machine Learning

How Microsoft Teams will use AI to filter out typing, barking, and other noise from video calls – VentureBeat

Posted: at 12:49 am


Last month, Microsoft announced that Teams, its competitor to Slack, Facebook's Workplace, and Google's Hangouts Chat, had passed 44 million daily active users. The milestone overshadowed the unveiling of a few new features coming later this year. Most were straightforward: a hand-raising feature to indicate you have something to say, offline and low-bandwidth support to read chat messages and write responses even with poor or no internet connection, and an option to pop chats out into a separate window. But one feature, real-time noise suppression, stood out: Microsoft demoed how AI minimized distracting background noise during a call.

We've all been there. How many times have you asked someone to mute themselves or to relocate from a noisy area? Real-time noise suppression will filter out someone typing on their keyboard while in a meeting, the rustling of a bag of chips (as you can see in the video above), and a vacuum cleaner running in the background. AI will remove the background noise in real time so you can hear only speech on the call. But how exactly does it work? We talked to Robert Aichner, Microsoft Teams group program manager, to find out.

The use of collaboration and video conferencing tools is exploding as the coronavirus crisis forces millions to learn and work from home. Microsoft is pushing Teams as the solution for businesses and consumers as part of its Microsoft 365 subscription suite. The company is leaning on its machine learning expertise to ensure AI features are one of its big differentiators. When it finally arrives, real-time background noise suppression will be a boon for businesses and households full of distracting noises. Additionally, how Microsoft built the feature is also instructive to other companies tapping machine learning.

Of course, noise suppression has existed in the Microsoft Teams, Skype, and Skype for Business apps for years. Other communication tools and video conferencing apps have some form of noise suppression as well. But that noise suppression covers stationary noise, such as a computer fan or air conditioner running in the background. The traditional noise suppression method is to look for speech pauses, estimate the baseline of noise, assume that the continuous background noise doesn't change over time, and filter it out.

Going forward, Microsoft Teams will suppress non-stationary noises like a dog barking or somebody shutting a door. "That is not stationary," Aichner explained. "You cannot estimate that in speech pauses. What machine learning now allows you to do is to create this big training set, with a lot of representative noises."

In fact, Microsoft open-sourced its training set earlier this year on GitHub to advance the research community in that field. While the first version is publicly available, Microsoft is actively working on extending the data sets. A company spokesperson confirmed that as part of the real-time noise suppression feature, certain categories of noises in the data sets will not be filtered out on calls, including musical instruments, laughter, and singing. (More on that here: ProBeat: Microsoft Teams video calls and the ethics of invisible AI.)

Microsoft can't simply isolate the sound of human voices because other noises also happen at the same frequencies. On a spectrogram of a speech signal, unwanted noise appears in the gaps between speech and overlapping with the speech. It's thus next to impossible to filter out the noise: if your speech and noise overlap, you can't distinguish the two. Instead, you need to train a neural network beforehand on what noise looks like and what speech looks like.

To get his point across, Aichner compared machine learning models for noise suppression to machine learning models for speech recognition. For speech recognition, you need to record a large corpus of users talking into the microphone and then have humans label that speech data by writing down what was said. Instead of mapping microphone input to written words, in noise suppression you're trying to get from noisy speech to clean speech.

"We train a model to understand the difference between noise and speech, and then the model is trying to just keep the speech," Aichner said. "We have training data sets. We took thousands of diverse speakers and more than 100 noise types. And then what we do is we mix the clean speech without noise with the noise. So we simulate a microphone signal. And then you also give the model the clean speech as the ground truth. So you're asking the model, 'From this noisy data, please extract this clean signal, and this is how it should look like.' That's how you train neural networks [in] supervised learning, where you basically have some ground truth."

For speech recognition, the ground truth is what was said into the microphone. For real-time noise suppression, the ground truth is the speech without noise. By feeding a large enough data set, in this case hundreds of hours of data, Microsoft can effectively train its model. "It's able to generalize and reduce the noise with my voice even though my voice wasn't part of the training data," Aichner said. "In real time, when I speak, there is noise that the model would be able to extract the clean speech [from] and just send that to the remote person."

Comparing the functionality to speech recognition makes noise suppression sound much more achievable, even though it's happening in real time. So why has it not been done before? Can Microsoft's competitors quickly recreate it? Aichner listed challenges for building real-time noise suppression, including finding representative data sets, building and shrinking the model, and leveraging machine learning expertise.

We already touched on the first challenge: representative data sets. The team spent a lot of time figuring out how to produce sound files that exemplify what happens on a typical call.

They used audiobooks to represent male and female voices, since speech characteristics differ between the two. They used YouTube data sets with labeled data specifying that a recording includes, say, typing or music. Aichner's team then combined the speech data and noise data using a synthesizer script at different signal-to-noise ratios. By amplifying the noise, they could imitate different realistic situations that can happen on a call.
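The mixing step Aichner describes can be sketched in a few lines. The following is a simplified illustration of combining clean speech with noise at a target signal-to-noise ratio; the signals here are synthetic stand-ins, and Microsoft's actual synthesizer script may differ:

```python
import numpy as np

def mix_at_snr(clean, noise, snr_db):
    """Scale `noise` so that adding it to `clean` yields the requested
    signal-to-noise ratio, simulating a noisy microphone signal."""
    # Tile or trim the noise so it covers the whole speech clip
    reps = int(np.ceil(len(clean) / len(noise)))
    noise = np.tile(noise, reps)[: len(clean)]
    speech_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2)
    # SNR(dB) = 10 * log10(speech_power / noise_power)
    target_noise_power = speech_power / (10 ** (snr_db / 10))
    noise = noise * np.sqrt(target_noise_power / noise_power)
    return clean + noise  # noisy input; `clean` stays as the ground truth

# Hypothetical usage: one second of a tone standing in for speech,
# white noise standing in for typing, mixed at 0, 10, and 20 dB SNR
t = np.linspace(0, 1, 16000)
clean = np.sin(2 * np.pi * 220 * t)
noise = np.random.default_rng(2).normal(size=8000)
training_pairs = [(mix_at_snr(clean, noise, snr), clean) for snr in (0, 10, 20)]
```

Each (noisy, clean) pair then serves as (input, ground truth) for the supervised training Aichner outlines above.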

But audiobooks are drastically different than conference calls. Would that not affect the model, and thus the noise suppression?

"That is a good point," Aichner conceded. "Our team did make some recordings as well to make sure that we are not just training on synthetic data we generate ourselves, but that it also works on actual data. But it's definitely harder to get those real recordings."

Aichner's team is not allowed to look at any customer data. Additionally, Microsoft has strict privacy guidelines internally. "I can't just simply say, 'Now I record every meeting.'"

So the team couldn't use Microsoft Teams calls. Even if they could, say, if some Microsoft employees opted in to have their meetings recorded, someone would still have to mark down when exactly distracting noises occurred.

"And so that's why we right now have some smaller-scale effort of making sure that we collect some of these real recordings with a variety of devices and speakers and so on," said Aichner. "What we then do is we make that part of the test set. So we have a test set which we believe is even more representative of real meetings. And then, we see if we use a certain training set, how well does that do on the test set? So ideally yes, I would love to have a training set which is all Teams recordings and has all types of noises people are listening to. It's just that I can't easily get the same volume of data that I can by grabbing some other open source data set."

I pushed the point once more: How would an opt-in program to record Microsoft employees using Teams impact the feature?

"You could argue that it gets better," Aichner said. "If you have more representative data, it could get even better. So I think that's a good idea to potentially in the future see if we can improve even further. But I think what we are seeing so far is even with just taking public data, it works really well."

The next challenge is to figure out how to build the neural network, what the model architecture should be, and iterate. The machine learning model went through a lot of tuning, which required a lot of compute. Aichner's team was of course relying on Azure, using many GPUs. Even with all that compute, however, training a large model with a large data set could take multiple days.

"A lot of the machine learning happens in the cloud," Aichner said. "So, for speech recognition for example, you speak into the microphone, that's sent to the cloud. The cloud has huge compute, and then you run these large models to recognize your speech. For us, since it's real-time communication, I need to process every frame. Let's say it's 10 or 20 millisecond frames. I need to now process that within that time, so that I can send that immediately to you. I can't send it to the cloud, wait for some noise suppression, and send it back."

For speech recognition, leveraging the cloud may make sense. For real-time noise suppression, it's a nonstarter. Once you have the machine learning model, you then have to shrink it to fit on the client. You need to be able to run it on a typical phone or computer. A machine learning model only for people with high-end machines is useless.

There's another reason why the machine learning model should live on the edge rather than the cloud: Microsoft wants to limit server use. Sometimes, there isn't even a server in the equation to begin with. For one-to-one calls in Microsoft Teams, the call setup goes through a server, but the actual audio and video signal packets are sent directly between the two participants. For group calls or scheduled meetings, there is a server in the picture, but Microsoft minimizes the load on that server. Doing a lot of server processing for each call increases costs, and every additional network hop adds latency. It's more efficient from a cost and latency perspective to do the processing on the edge.

"You want to make sure that you push as much of the compute to the endpoint of the user because there isn't really any cost involved in that. You already have your laptop or your PC or your mobile phone, so now let's do some additional processing. As long as you're not overloading the CPU, that should be fine," Aichner said.

I pointed out there is a cost, especially on devices that aren't plugged in: battery life. "Yeah, battery life, we are obviously paying attention to that too," he said. "We don't want you now to have much lower battery life just because we added some noise suppression. That's definitely another requirement we have when we are shipping. We need to make sure that we are not regressing there."

It's not just regression that the team has to consider, but progression in the future as well. Because we're talking about a machine learning model, the work never ends.

"We are trying to build something which is flexible in the future because we are not going to stop investing in noise suppression after we release the first feature," Aichner said. "We want to make it better and better. Maybe for some noise types we are not doing as well as we should. We definitely want to have the ability to improve that. The Teams client will be able to download new models and improve the quality over time whenever we think we have something better."

The model itself will clock in at a few megabytes, but it won't affect the size of the client itself. He said, "That's also another requirement we have. When users download the app on the phone or on the desktop or laptop, you want to minimize the download size. You want to help the people get going as fast as possible."

"Adding megabytes to that download just for some model isn't going to fly," Aichner said. "After you install Microsoft Teams, later in the background it will download that model. That's what also allows us to be flexible in the future, that we could do even more, have different models."

All the above requires one final component: talent.

"You also need to have the machine learning expertise to know what you want to do with that data," Aichner said. "That's why we created this machine learning team in this intelligent communications group. You need experts to know what they should do with that data. What are the right models? Deep learning has a very broad meaning. There are many different types of models you can create. We have several centers around the world in Microsoft Research, and we have a lot of audio experts there too. We are working very closely with them because they have a lot of expertise in this deep learning space."

The data is open source and can be improved upon. A lot of compute is required, but any company can simply leverage a public cloud, including the leaders Amazon Web Services, Microsoft Azure, and Google Cloud. So if another company with a video chat tool had the right machine learners, could they pull this off?

"The answer is probably yes, similar to how several companies are getting speech recognition," Aichner said. "They have a speech recognizer where there's also lots of data involved. There's also lots of expertise needed to build a model. So the large companies are doing that."

Aichner believes Microsoft still has a heavy advantage because of its scale. "I think that the value is the data," he said. "What we want to do in the future is, like what you said, have a program where Microsoft employees can give us more than enough real Teams calls so that we have an even better analysis of what our customers are really doing, what problems they are facing, and customize it more towards that."

Originally posted here:

How Microsoft Teams will use AI to filter out typing, barking, and other noise from video calls - VentureBeat

Written by admin |

April 11th, 2020 at 12:49 am

Posted in Machine Learning

Machine Learning Market Insights on Trends, Application, Types and Users Analysis 2019-2025 – Science In Me

Posted: at 12:48 am


In 2018, the market size of Machine Learning Market is million US$ and it will reach million US$ in 2025, growing at a CAGR of from 2018; while in China, the market size is valued at xx million US$ and will increase to xx million US$ in 2025, with a CAGR of xx% during forecast period.

In this report, 2018 has been considered as the base year and 2018 to 2025 as the forecast period to estimate the market size for Machine Learning.

This report studies the global market size of Machine Learning, with a special focus on key regions such as the United States, the European Union, China, and other regions (Japan, Korea, India and Southeast Asia).

Request Sample Report @ https://www.researchmoz.com/enquiry.php?type=S&repid=2599556&source=atm

This study presents the Machine Learning Market production, revenue, market share and growth rate for each key company, and also covers the breakdown data (production, consumption, revenue and market share) by region, type and application, with historical breakdown data from 2014 to 2018 and forecasts to 2025.

For top companies in the United States, the European Union and China, this report investigates and analyzes the production, value, price, market share and growth rate of the top manufacturers, with key data from 2014 to 2018.

In the global Machine Learning market, the following companies are covered:

The major players profiled in this report include: Company A

The end users/applications and product categories analysis: On the basis of product, this report displays the sales volume, revenue (million USD), product price, market share and growth rate of each type, primarily split into: General Type

On the basis of the end users/applications, this report focuses on the status and outlook for major applications/end users, sales volume, market share and growth rate of Machine Learning for each application, including: Healthcare, BFSI

Make An Enquiry About This Report @ https://www.researchmoz.com/enquiry.php?type=E&repid=2599556&source=atm

The study comprises a total of 15 chapters:

Chapter 1, to describe Machine Learning product scope, market overview, market opportunities, market driving force and market risks.

Chapter 2, to profile the top manufacturers of Machine Learning , with price, sales, revenue and global market share of Machine Learning in 2017 and 2018.

Chapter 3, the Machine Learning competitive situation, sales, revenue and global market share of top manufacturers are analyzed emphatically by landscape contrast.

Chapter 4, the Machine Learning breakdown data are shown at the regional level, to show the sales, revenue and growth by regions, from 2014 to 2018.

Chapter 5, 6, 7, 8 and 9, to break the sales data at the country level, with sales, revenue and market share for key countries in the world, from 2014 to 2018.

You can Buy This Report from Here @ https://www.researchmoz.com/checkout?rep_id=2599556&licType=S&source=atm

Chapter 10 and 11, to segment the sales by type and application, with sales market share and growth rate by type, application, from 2014 to 2018.

Chapter 12, Machine Learning market forecast, by regions, type and application, with sales and revenue, from 2018 to 2024.

Chapter 13, 14 and 15, to describe Machine Learning sales channel, distributors, customers, research findings and conclusion, appendix and data source.

View post:

Machine Learning Market Insights on Trends, Application, Types and Users Analysis 2019-2025 - Science In Me

Written by admin |

April 11th, 2020 at 12:48 am

Posted in Machine Learning

It's Time to Improve the Scientific Paper Review Process, But How? – Synced

Posted: at 12:48 am


Head image courtesy Getty Images

The level-headed evaluation of submitted research by other experts in the field is what grants scientific journals and academic conferences their respected positions. Peer review determines which papers get published, and that in turn can determine which academic theories are promoted, which projects are funded, and which awards are won.

In recent years, however, peer review processes have come under fire, especially from the machine learning community, with complaints of long delays, inconsistent standards and unqualified reviewers.

A new paper proposes replacing peer review with a novel State-Of-the-Art Review (SOAR) system, a neoteric reviewing pipeline that serves as a plug-and-play replacement for peer review.

SOAR improves scaling, consistency and efficiency and can be easily implemented as a plugin to score papers and offer a direct read/don't read recommendation. The team explain that SOAR evaluates a paper's efficacy and novelty by calculating the total occurrences in the manuscript of the terms "state-of-the-art" and "novel."
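In the spirit of the paper, the entire SOAR "review" fits in a few lines. Here is a sketch of the scoring rule as described; the threshold and function name are invented for illustration:

```python
import re

SOAR_TERMS = re.compile(r"state[- ]of[- ]the[- ]art|novel", re.IGNORECASE)

def soar_review(manuscript: str, threshold: int = 5) -> str:
    """Score a paper by counting occurrences of 'state-of-the-art' and
    'novel', then issue a direct read/don't-read recommendation."""
    score = len(SOAR_TERMS.findall(manuscript))
    verdict = "read" if score >= threshold else "don't read"
    return f"SOAR score {score}: {verdict}"

print(soar_review("Our novel, state-of-the-art model is both novel and fast."))
# -> SOAR score 3: don't read
```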

If only a solution were that simple. But yes, SOAR was an April Fools' prank.

The paper was a product of SIGBOVIK 2020, a yearly satire event of the Association for Computational Heresy and Carnegie Mellon University that presents humorous fake research in computer science. Previous studies have included Denotational Semantics of Pidgin and Creole, Artificial Stupidity, Elbow Macaroni, Rasterized Love Triangles, and Operational Semantics of Chevy Tahoes.

Seriously though, since 1998 the volume of AI papers in peer-reviewed journals has grown by more than 300 percent, according to the AI Index 2019 Report. Meanwhile major AI conferences like NeurIPS, AAAI and CVPR are setting new paper submission records every year.

This has inevitably led to a shortage of qualified peer reviewers in the machine learning community. In a previous Synced story, CVPR 2019 and ICCV 2019 Area Chair Jia-Bin Huang introduced research that used deep learning to predict whether a paper should be accepted based solely on its visual appearance. He told Synced the idea of training a classifier to recognize good/bad papers has been around since 2010.

Huang knows that although his model achieves decent classification performance, it is unlikely to ever be used in an actual conference. Such analysis and classification might however be helpful for junior authors when considering how to prepare their paper submissions.

Turing awardee Yoshua Bengio meanwhile believes the fundamental problem with today's peer review process lies in a "publish or perish" paradigm that can sacrifice paper depth and quality in favour of speedy publication.

Bengio blogged on the topic earlier this year, proposing a rethink of the overall publication process in the field of machine learning, with reviewing being a crucial element to safeguard research culture amid the field's exponential growth in size.

Machine learning has almost completely switched to a conference publication model, Bengio wrote, and we go from one deadline to the next every two months. In the lead-up to conference submission deadlines, many papers are rushed and things are not checked properly. The race to get more papers out, especially as first or co-first author, can also be crushing and counterproductive. Bengio strongly urges the community to take a step back, think deeply, verify things carefully, etc.

Bengio says he has been thinking of a potentially different publication model for ML, where papers are first submitted to a fast-turnaround journal, such as the Journal of Machine Learning Research, and then conference program committees select the papers they like from the list of accepted and reviewed (scored) papers.

Conferences have played a central role in ML, as they can speed up the research cycle, enable interactions between researchers, and generate a fast turnaround of ideas. And peer-reviewed journals have for decades been the backbone of the broader scientific research community. But with the growing popularity of preprint servers like arXiv and upcoming ML conferences going digital due to the COVID-19 pandemic, this may be the time to rethink, redesign and reboot the ML paper review and publication process.

Journalist: Yuan Yuan & Editor: Michael Sarazen


Excerpt from:

Its Time to Improve the Scientific Paper Review Process But How? - Synced

Written by admin |

April 11th, 2020 at 12:48 am

Posted in Machine Learning

60% of Content Containing COVID-Related Keywords Is Brand Safe – MarTech Series

Posted: at 12:48 am


New data from GumGum's content analysis AI system reveals that keyword-based safety strategies are unduly denying brands access to vast viable ad inventories

GumGum, Inc., an artificial intelligence company specializing in solutions for advertising and media, released data indicating that a majority of online content containing keywords related to the ongoing novel coronavirus pandemic is actually safe for brand advertising. The findings come from analysis by Verity, the company's machine learning-based content analysis and brand safety engine. Between March 25th and April 6th, Verity identified 2.85 million unique pages containing COVID-related keywords across GumGum's publisher network. Of those pages, the system's threat detection models classified 62% as Safe.


"All the concerns raised lately about coronavirus keyword blocking hurting publishers are valid," said GumGum CEO Phil Schraeder. "But this data shows that keyword-based brand safety is also failing brands. It's effectively freezing advertisers out of a huge volume of safe trending content, limiting their reach at a time when it should actually be expanding, as more people than ever are consuming online content."

In that period alone, brands relying on keyword-based systems for brand safety protection missed out on over 1.5 billion impressions across GumGum's supply, Mr. Schraeder pointed out, adding that GumGum's publisher network offers a representative sample of impressions available across the wider web. Brands would have been blocked from accessing those impressions because the pages on which the impressions appeared contained one or more instances of the words covid, covid19, covid-19, covid 19, coronavirus, corona virus, pandemic, or quarantine.
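Mechanically, keyword blocking is as blunt as it sounds: any page containing a listed term is denied ads, regardless of context. A minimal sketch of the approach described above (the sample page text is invented):

```python
BLOCKLIST = {"covid", "covid19", "covid-19", "covid 19", "coronavirus",
             "corona virus", "pandemic", "quarantine"}

def is_blocked(page_text: str) -> bool:
    """Deny ads on any page containing one or more blocklisted terms,
    with no regard for the page's actual tone or safety."""
    text = page_text.lower()
    return any(term in text for term in BLOCKLIST)

# A harmless recipe article is still blocked:
print(is_blocked("Ten comfort-food recipes to cook during quarantine"))  # True
```

This is exactly the over-blocking that Verity's page-level analysis is meant to avoid.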


Verity deemed them brand safe based on multi-model natural language processing and computer vision analysis, which integrates assessments from eight machine learning models trained to evaluate threat levels across distinct threat categories. The system's threat sensitivity is adjustable, as is its confidence threshold for validating safety conclusions. The findings released today are based on Verity's nominal safety and confidence settings, configured to align with the threat sensitivity of an average Fortune 100 brand.

"Even when we apply the most conservative settings, more than half the content is safe," said GumGum CTO Ken Weiner. "Coronavirus is touching every facet of society, so it's hardly surprising that even the most innocuous content references it. Keyword blocking just goes way too far, which is why people are calling for whitelisting of specific websites. That mindset shows what's wrong with the way people think about brand safety these days. The idea that you have to choose between reach and safety is false. Our industry needs to wake up to what's technologically available."


Mr. Weiner noted that GumGum's analysis shows that pages containing COVID-related keywords in certain popular IAB content categories are particularly safe.

"Let me put it this way: If you're looking for a quick and easy brand safety solution right now, rather than keyword blocking or whitelisting everything, I'd recommend simply advertising on content categories like technology, pop culture, and video gaming. You'll get plenty of reach, and over 80% of their COVID-related content is safe."



The rest is here:

60% of Content Containing COVID-Related Keywords Is Brand Safe - MarTech Series

Written by admin |

April 11th, 2020 at 12:48 am

Posted in Machine Learning

