
Archive for the ‘Machine Learning’ Category

Google is using AI to design chips that will accelerate AI – MIT Technology Review

Posted: March 29, 2020 at 2:45 pm

without comments

A new reinforcement-learning algorithm has learned to optimize the placement of components on a computer chip to make it more efficient and less power-hungry.

3D Tetris: Chip placement, also known as chip floor planning, is a complex three-dimensional design problem. It requires the careful configuration of hundreds, sometimes thousands, of components across multiple layers in a constrained area. Traditionally, engineers manually design configurations that minimize the amount of wire used between components as a proxy for efficiency. They then use electronic design automation software to simulate and verify their performance, which can take up to 30 hours for a single floor plan.

Time lag: Because of the time investment put into each chip design, chips are traditionally supposed to last between two and five years. But as machine-learning algorithms have rapidly advanced, the need for new chip architectures has also accelerated. In recent years, several algorithms for optimizing chip floor planning have sought to speed up the design process, but they've been limited in their ability to optimize across multiple goals, including the chip's power draw, computational performance, and area.

Intelligent design: In response to these challenges, Google researchers Anna Goldie and Azalia Mirhoseini took a new approach: reinforcement learning. Reinforcement-learning algorithms use positive and negative feedback to learn complicated tasks. So the researchers designed what's known as a reward function to punish and reward the algorithm according to the performance of its designs. The algorithm then produced tens to hundreds of thousands of new designs, each within a fraction of a second, and evaluated them using the reward function. Over time, it converged on a final strategy for placing chip components in an optimal way.
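The generate-and-score loop described above can be caricatured in a few lines of code. This is a deliberately simplified random-search stand-in, not the graph-based policy-gradient method Google actually used; the grid size, the number of components, and the wirelength-only reward are all assumptions for illustration:

```python
import random

GRID = 16  # hypothetical placement grid, far smaller than a real die

def reward(placement):
    # Negative total Manhattan wirelength between consecutive components:
    # shorter wires mean a higher (less negative) reward, mirroring the
    # wirelength-as-efficiency-proxy idea from the article.
    return -sum(
        abs(x1 - x2) + abs(y1 - y2)
        for (x1, y1), (x2, y2) in zip(placement, placement[1:])
    )

def optimize(n_components=8, iters=2000, seed=0):
    rng = random.Random(seed)
    best = [(rng.randrange(GRID), rng.randrange(GRID)) for _ in range(n_components)]
    best_r = reward(best)
    for _ in range(iters):
        cand = list(best)
        i = rng.randrange(n_components)                    # perturb one component
        cand[i] = (rng.randrange(GRID), rng.randrange(GRID))
        r = reward(cand)
        if r > best_r:                                     # keep only improvements
            best, best_r = cand, r
    return best, best_r
```

The real system evaluates candidates with a learned policy and a much richer reward (congestion, density, wirelength), but the structure is the same: propose a placement, score it, and bias future proposals toward higher-reward layouts.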

Validation: After checking the designs with the electronic design automation software, the researchers found that many of the algorithm's floor plans performed better than those designed by human engineers. It also taught its human counterparts some new tricks, the researchers said.

Production line: Throughout the field's history, progress in AI has been tightly interlinked with progress in chip design. The hope is this algorithm will speed up the chip design process and lead to a new generation of improved architectures, in turn accelerating AI advancement.

To have more stories like this delivered directly to your inbox, sign up for our Webby-nominated AI newsletter The Algorithm. It's free.

See the article here:

Google is using AI to design chips that will accelerate AI - MIT Technology Review

Written by admin

March 29th, 2020 at 2:45 pm

Posted in Machine Learning

PSD2: How machine learning reduces friction and satisfies SCA – The Paypers

Posted: at 2:45 pm

without comments

Andy Renshaw, Feedzai: It crosses borders but doesn't have a passport. It's meant to protect people but can make them angry. It's competitive by nature but doesn't want you to fail. What is it?

If the PSD2 regulations and Strong Customer Authentication (SCA) feel like a riddle to you, you're not alone. SCA places strict two-factor authentication requirements upon financial institutions (FIs) at a time when FIs are facing stiff competition for customers. On top of that, the variety of payment types, along with the sheer number of transactions, continues to increase.

According to UK Finance, debit card transactions have outnumbered cash transactions since 2017, while mobile banking surged over the past year, particularly for contactless payments. The number of contactless payment transactions per customer is growing, and this increase in transactions also raises the potential for customer friction.

The number of transactions isn't the only thing that's shown an exponential increase; the speed at which FIs must process them has too. Customers expect to send, receive, and access money with the swipe of a screen. Driven by customer expectations, instant payments are gaining traction across the globe with no sign of slowing down.

Considering the sheer number of transactions combined with the need to authenticate payments in real-time, the demands placed on FIs can create a real dilemma. In this competitive environment, how can organisations reduce fraud and satisfy regulations without increasing customer friction?

For countries that fall under PSD2's regulation, the answer lies in the one known way to avoid customer friction while meeting the regulatory requirement: keep fraud rates at or below SCA exemption thresholds.

How machine learning keeps fraud rates below the exemption threshold to bypass SCA requirements

Demonstrating significantly low fraud rates allows financial institutions to bypass the SCA requirement. The logic behind this is simple: if the FI's systems can prevent fraud at such high rates, they've demonstrated their systems are secure without additional authentication.

SCA exemption thresholds are:

Exemption Threshold Value | Remote electronic card-based payments | Remote electronic credit transfers
EUR 500 | below 0.01% fraud rate | below 0.01% fraud rate
EUR 250 | below 0.06% fraud rate | below 0.01% fraud rate
EUR 100 | below 0.13% fraud rate | below 0.015% fraud rate
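The exemption logic the thresholds encode is straightforward to express in code. A minimal sketch, assuming a simple amount-plus-fraud-rate check; the function and field names are ours, and the percentages from the table are written as decimal fractions:

```python
# (exemption threshold value in EUR, maximum audited fraud rate) pairs,
# transcribed from the table above.
CARD_THRESHOLDS = [(500, 0.0001), (250, 0.0006), (100, 0.0013)]
TRANSFER_THRESHOLDS = [(500, 0.0001), (250, 0.0001), (100, 0.00015)]

def sca_exempt(amount_eur, issuer_fraud_rate, thresholds):
    """Return True if the transaction may skip SCA: the amount must fall
    under some exemption threshold value whose fraud-rate ceiling the
    institution's audited fraud rate stays below."""
    for etv, max_rate in thresholds:
        if amount_eur <= etv and issuer_fraud_rate < max_rate:
            return True
    return False

# A EUR 90 card payment from an issuer with a 0.05% fraud rate qualifies
# under the EUR 250 tier (0.05% < 0.06%):
sca_exempt(90, 0.0005, CARD_THRESHOLDS)
```

Note how the lower fraud-rate ceilings for credit transfers make the same 0.05% fraud rate fail every transfer tier, which is exactly why driving fraud rates down with machine learning matters.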

Looking at these numbers, you might think that achieving SCA exemption thresholds is impossible. After all, bank transfer scams rose 40% in the first six months of 2019. But state-of-the-art technology rises to the challenge of increased fraud. Artificial intelligence, and more specifically machine learning, makes achieving SCA exemption thresholds possible.

How machine learning achieves SCA exemption threshold values

Every transaction has hundreds of data points, called entities. Entities include time, date, location, device, card, cardless, sender, receiver, merchant, customer age; the possibilities are almost endless. When data is cleaned and connected, meaning it doesn't live in siloed systems, the power of machine learning to provide actionable insights on that data is historically unprecedented.

Robust machine learning technology uses both rules and models and learns from both historical and real-time profiles of virtually every data point or entity in a transaction. The more data we feed the machine, the better it gets at learning fraud patterns. Over time, the machine learns to accurately score transactions in less than a second without the need for customer authentication.
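As a toy illustration of the rules-plus-profiles idea described above, here is a tiny scoring function. The entities, weights, and 0-to-1 score scale are invented for the example; a production system would learn patterns from data rather than hard-code two rules:

```python
from statistics import mean

def score_transaction(txn, customer_profile):
    """Score a transaction from 0 (safe) to 1 (risky) using a rule plus a
    historical behaviour profile. All weights are illustrative."""
    score = 0.0
    # Rule: flag amounts far above the customer's historical average.
    amounts = customer_profile["amounts"]
    avg = mean(amounts) if amounts else 0
    if avg and txn["amount"] > 5 * avg:
        score += 0.5
    # Profile: a country never seen for this customer adds risk.
    if txn["country"] not in customer_profile["countries"]:
        score += 0.4
    return min(score, 1.0)

profile = {"amounts": [20, 35, 25], "countries": {"GB"}}
score_transaction({"amount": 30, "country": "GB"}, profile)   # typical: low risk
score_transaction({"amount": 400, "country": "IN"}, profile)  # unusual: high risk
```

Because each check is a constant-time lookup against a precomputed profile, scoring stays fast enough for the sub-second decisions the article describes.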

Machine learning creates streamlined and flexible workflows

Of course, sometimes authentication is inevitable. For example, if a customer who generally initiates transactions in Brighton suddenly initiates one from Mumbai without a travel note on the account, authentication should be required. But if machine learning platforms have flexible data science environments that embed authentication steps seamlessly into the transaction workflow, the experience can be as customer-centric as possible.

Streamlined workflows must extend to the fraud analyst's job

Flexible workflows aren't just important to instant payments; they're important to all payments. And they can't just be a back-end experience in the data science environment. Fraud analysts need flexibility in their workflows too. They're under pressure to make decisions quickly and accurately, which means they need a full view of the customer, not just the transaction.

Information provided at a transactional level doesn't allow analysts to connect all the dots. In this scenario, analysts are left opening up several case managers in an attempt to piece together a complete and accurate fraud picture. It's time-consuming and ultimately costly, not to mention the wear and tear on employee satisfaction. But some machine learning risk platforms can show both authentication and fraud decisions at the customer level, ensuring analysts have a 360-degree view of the customer.

Machine learning prevents instant payments from becoming instant losses

Instant payments can provide immediate customer satisfaction, but also instant fraud losses. Scoring transactions in real time means institutions can increase the security around the payments going through their system before it's too late.

Real-time transaction scoring requires a colossal amount of processing power because it can't use batch processing, an efficient method when dealing with high volumes of data. That's because the lag time between when a customer transacts and when a batch is processed makes batching incongruent with instant payments. Therefore, scoring transactions in real time requires supercomputers with super processing powers. The costs associated with this make hosting systems on the cloud more practical than hosting at the FI's premises, often referred to as "on prem". Of course, FIs need to consider other factors, including cybersecurity concerns, before determining where they should host their machine learning platform.

Providing exceptional customer experiences by keeping fraud at or below PSD2's SCA thresholds can seem like a magic trick, but it's not. It's the combined intelligence of humans and machines, which provides the most effective method we have today to curb and prevent fraud losses. It's how we solve the friction-security puzzle and deliver customer satisfaction while satisfying SCA.

About Andy Renshaw

Andy Renshaw, Vice President of Banking Solutions at Feedzai, has over 20 years of experience in banking and the financial services industry, leading large programs and teams in fraud management and AML. Prior to joining Feedzai, Andy held roles in global financial institutions such as Lloyds Banking Group, Citibank, and Capital One, where he helped fight against the ever-evolving financial crime landscape as a technical expert, fraud prevention expert, and a lead product owner for fraud transformation.

About Feedzai

Feedzai is the market leader in fighting fraud with AI. We're coding the future of commerce with today's most advanced risk management platform, powered by big data and machine learning. Founded and developed by data scientists and aerospace engineers, Feedzai has one mission: to make banking and commerce safe. The world's largest banks, processors, and retailers use Feedzai's fraud prevention and anti-money laundering products to manage risk while improving customer experience.

Read this article:

PSD2: How machine learning reduces friction and satisfies SCA - The Paypers

Written by admin

March 29th, 2020 at 2:45 pm

Posted in Machine Learning

Neural networks facilitate optimization in the search for new materials – MIT News

Posted: at 2:45 pm

without comments

When searching through theoretical lists of possible new materials for particular applications, such as batteries or other energy-related devices, there are often millions of potential materials that could be considered, and multiple criteria that need to be met and optimized at once. Now, researchers at MIT have found a way to dramatically streamline the discovery process, using a machine learning system.

As a demonstration, the team arrived at a set of the eight most promising materials, out of nearly 3 million candidates, for an energy storage system called a flow battery. This culling process would have taken 50 years by conventional analytical methods, they say, but they accomplished it in five weeks.

The findings are reported in the journal ACS Central Science, in a paper by MIT professor of chemical engineering Heather Kulik, Jon Paul Janet PhD 19, Sahasrajit Ramesh, and graduate student Chenru Duan.

The study looked at a set of materials called transition metal complexes. These can exist in a vast number of different forms, and Kulik says they "are really fascinating, functional materials that are unlike a lot of other material phases. The only way to understand why they work the way they do is to study them using quantum mechanics."

To predict the properties of any one of millions of these materials would require either time-consuming and resource-intensive spectroscopy and other lab work, or time-consuming, highly complex physics-based computer modeling for each possible candidate material or combination of materials. Each such study could consume hours to days of work.

Instead, Kulik and her team took a small number of different possible materials and used them to teach an advanced machine-learning neural network about the relationship between the materials' chemical compositions and their physical properties. That knowledge was then applied to generate suggestions for the next generation of possible materials to be used for the next round of training of the neural network. Through four successive iterations of this process, the neural network improved significantly each time, until reaching a point where it was clear that further iterations would not yield any further improvements.

This iterative optimization system greatly streamlined the process of arriving at potential solutions that satisfied the two conflicting criteria being sought. The set of best available solutions in such situations, where improving one factor tends to worsen the other, is known as a Pareto front: a graph of the points such that any further improvement of one factor would make the other worse. In other words, the graph represents the best possible compromise points, depending on the relative importance assigned to each factor.

Training typical neural networks requires very large data sets, ranging from thousands to millions of examples, but Kulik and her team were able to use this iterative process, based on the Pareto front model, to streamline the process and provide reliable results using only a few hundred samples.
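Extracting a Pareto front from a pool of scored candidates is simple to state in code. A minimal sketch with two higher-is-better objectives, standing in for solubility and energy density; the candidate values are hypothetical:

```python
def pareto_front(candidates):
    """Keep each (obj1, obj2) pair that no other candidate dominates,
    i.e. no other candidate is at least as good on both objectives and
    strictly better on at least one."""
    front = []
    for i, (s1, e1) in enumerate(candidates):
        dominated = any(
            s2 >= s1 and e2 >= e1 and (s2 > s1 or e2 > e1)
            for j, (s2, e2) in enumerate(candidates)
            if j != i
        )
        if not dominated:
            front.append((s1, e1))
    return front

# Hypothetical (solubility, energy density) pairs; the third candidate is
# dominated by the second and drops out of the front.
pareto_front([(1.0, 9.0), (2.0, 7.0), (1.5, 6.0), (3.0, 4.0)])
```

In the iterative scheme the article describes, the surviving front members (plus the model's uncertainty estimates) guide which candidates to evaluate next, which is how a few hundred samples can stand in for the thousands a conventional training run would need.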

In the case of screening for the flow battery materials, the desired characteristics were in conflict, as is often the case: The optimum material would have high solubility and a high energy density (the ability to store energy for a given weight). But increasing solubility tends to decrease the energy density, and vice versa.

Not only was the neural network able to rapidly come up with promising candidates, it also was able to assign levels of confidence to its different predictions through each iteration, which helped to allow the refinement of the sample selection at each step. "We developed a better than best-in-class uncertainty quantification technique for really knowing when these models were going to fail," Kulik says.

The challenge they chose for the proof-of-concept trial was materials for use in redox flow batteries, a type of battery that holds promise for large, grid-scale batteries that could play a significant role in enabling clean, renewable energy. Transition metal complexes are the preferred category of materials for such batteries, Kulik says, but there are too many possibilities to evaluate by conventional means. They started out with a list of 3 million such complexes before ultimately whittling that down to the eight good candidates, along with a set of design rules that should enable experimentalists to explore the potential of these candidates and their variations.

"Through that process, the neural net gets increasingly smarter about the [design] space, but also increasingly pessimistic that anything beyond what we've already characterized can further improve on what we already know," she says.

Apart from the specific transition metal complexes suggested for further investigation using this system, she says, the method itself could have much broader applications. "We do view it as the framework that can be applied to any materials design challenge where you're really trying to address multiple objectives at once. You know, all of the most interesting materials design challenges are ones where you have one thing you're trying to improve, but improving that worsens another. And for us, the redox flow battery redox couple was just a good demonstration of where we think we can go with this machine learning and accelerated materials discovery."

For example, optimizing catalysts for various chemical and industrial processes is another kind of such complex materials search, Kulik says. Presently used catalysts often involve rare and expensive elements, so finding similarly effective compounds based on abundant and inexpensive materials could be a significant advantage.

"This paper represents, I believe, the first application of multidimensional directed improvement in the chemical sciences," she says. "But the long-term significance of the work is in the methodology itself, because of things that might not be possible at all otherwise. You start to realize that even with parallel computations, these are cases where we wouldn't have come up with a design principle in any other way. And these leads that are coming out of our work, these are not necessarily at all ideas that were already known from the literature or that an expert would have been able to point you to."

"This is a beautiful combination of concepts in statistics, applied math, and physical science that is going to be extremely useful in engineering applications," says George Schatz, a professor of chemistry and of chemical and biological engineering at Northwestern University, who was not associated with this work. He says this research addresses how to do machine learning when there are multiple objectives: "Kulik's approach uses leading edge methods to train an artificial neural network that is used to predict which combination of transition metal ions and organic ligands will be best for redox flow battery electrolytes."

Schatz says this method can be used in many different contexts, so it has the potential to transform machine learning, which is a major activity around the world.

The work was supported by the Office of Naval Research, the Defense Advanced Research Projects Agency (DARPA), the U.S. Department of Energy, the Burroughs Wellcome Fund, and the AAAS Marion Milligan Mason Award.

See the original post here:

Neural networks facilitate optimization in the search for new materials - MIT News

Written by admin

March 29th, 2020 at 2:45 pm

Posted in Machine Learning

Deep Learning: What You Need To Know – Forbes

Posted: at 2:45 pm

without comments


During the past decade, deep learning has seen groundbreaking developments in the field of AI (Artificial Intelligence). But what is this technology? And why is it so important?

Well, let's first get a definition of deep learning. Here's how Kalyan Kumar, who is the Corporate Vice President & Chief Technology Officer of IT Services at HCL Technologies, describes it: "Have you ever wondered how our brain can recognize the face of a friend whom you met years ago, or can recognize the voice of your mother among so many other voices in a crowded marketplace, or how our brain can learn, plan and execute complex day-to-day activities? The human brain has around 100 billion cells called neurons. These build massively parallel and distributed networks, through which we learn and carry out complex activities. Inspired by these biological neural networks, scientists started building artificial neural networks so that computers could eventually learn and exhibit intelligence like humans."

Think of it this way: you first start with a huge amount of unstructured data, say videos. Then you use a sophisticated model that processes this information and tries to determine underlying patterns, which are often not detectable by people.

"During training, you define the number of neurons and layers your neural network will be comprised of and expose it to labeled training data," said Brian Cha, who is a Product Manager and Deep Learning evangelist at FLIR Systems. "With this data, the neural network learns on its own what is good or bad. For example, if you want the neural network to grade fruits, you would show it images of fruits labeled Grade A, Grade B, Grade C, and so on. The neural network uses this training data to extract and assign weights to features that are unique to fruits labeled good, such as ideal size, shape, color, consistency of color and so on. You don't need to manually define these characteristics or even program what is too big or too small; the neural network trains itself using the training data. The process of using a neural network to evaluate new images and make decisions is called inference. When you present the trained neural network with a new image, it will provide an inference, such as Grade A with 95% confidence."
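The "Grade A with 95% confidence" answer in the quote above typically comes from a softmax over the network's raw output scores (logits), one per class. A minimal sketch of that last inference step; the logit values here are made up rather than produced by a real trained network:

```python
import math

def softmax(logits):
    """Convert raw class scores into probabilities that sum to 1."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

grades = ["Grade A", "Grade B", "Grade C"]
logits = [4.2, 1.1, 0.3]  # hypothetical network output for one fruit image
probs = softmax(logits)
best = max(range(len(grades)), key=lambda i: probs[i])
print(f"{grades[best]} with {probs[best]:.0%} confidence")
```

The network's training adjusts the weights that produce the logits; inference is just this cheap forward pass plus an argmax, which is why trained models can grade new images in real time.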

What about the algorithms? According to Bob Friday, who is the CTO of Mist Systems, a Juniper Networks company, "There are two kinds of popular neural network models for different use cases: the Convolutional Neural Network (CNN) model is used in image-related applications, such as autonomous driving, robots and image search. Meanwhile, the Recurrent Neural Network (RNN) model is used in most of the Natural Language Processing-based (NLP) text or voice applications, such as chatbots, virtual home and office assistants and simultaneous interpreters, and in networking for anomaly detection."

Of course, deep learning requires lots of sophisticated tools. But the good news is that there are many available, and some are even free, like TensorFlow, PyTorch and Keras.

"There are also cloud-based server computer services," said Ali Osman Örs, who is the Director of AI Strategy and Strategic Partnerships for ADAS at NXP Semiconductors. "These are referred to as Machine Learning as a Service (MLaaS) solutions. The main providers include Amazon AWS, Microsoft Azure, and Google Cloud."

Because of the enormous data loads and complex algorithms, there is usually a need for sophisticated hardware infrastructure. Keep in mind that it can sometimes take days to train a model.

"The unpredictable process of training neural networks requires rapid on-demand scaling of virtual machine pools," said Brent Schroeder, who is the Chief Technology Officer at SUSE. "Container-based deep learning workloads managed by Kubernetes can easily be deployed to different infrastructure depending upon the specific needs. An initial model can be developed on a small local cluster, or even an individual workstation with a Jupyter Notebook. But then as training needs to scale, the workload can be deployed to large, scalable cloud resources for the duration of the training. This makes Kubernetes clusters a flexible, cost-effective option for training different types of deep learning workloads."

Deep learning has been shown to be quite efficient and accurate with models. "Probably the biggest advantage of deep learning over most other machine learning approaches is that the user does not need to worry about trimming down the number of features used," said Noah Giansiracusa, who is an Assistant Professor of Mathematical Sciences at Bentley University. "With deep learning, since the neurons are being trained to perform conceptual tasks, such as finding edges in a photo or facial features within a face, the neural network is in essence figuring out on its own which features in the data itself should be used."

Yet there are some notable drawbacks to deep learning. One is cost. "Deep learning networks may require hundreds of thousands or millions of hand-labeled examples," said Evan Tann, who is the CTO and co-founder of Thankful. "It is extremely expensive to train in fast timeframes, as serious players will need commercial-grade GPUs from Nvidia that easily exceed $10k each."

Deep learning is also essentially a black box. This means it can be nearly impossible to understand how the model really works!

"This can be particularly problematic in applications that require such documentation, like FDA approval of drugs and medical devices," said Dr. Ingo Mierswa, who is the Founder of RapidMiner.

And yes, there are some ongoing complexities with deep learning models, which can create bad outcomes. "Say a neural network is used to identify cats from images," said Yuheng Chen, who is the COO of rct studio. "It works perfectly, but when we want it to identify cats and dogs at the same time, its performance collapses."

But then again, there continues to be rapid progress, as companies continue to invest substantial amounts into deep learning. For the most part, things are still very much in the nascent stages.

"The power of deep learning is what allows seamless speech recognition, image recognition, and automation and personalization across every possible industry today, so it's safe to say that you are already experiencing the benefits of deep learning," said Sajid Sadi, who is the VP of Research at Samsung and the Head of Think Tank Team.

Tom (@ttaulli) is the author of Artificial Intelligence Basics: A Non-Technical Introduction and The Robotic Process Automation Handbook: A Guide to Implementing RPA Systems.

See the article here:

Deep Learning: What You Need To Know - Forbes

Written by admin

March 29th, 2020 at 2:45 pm

Posted in Machine Learning

Data to the Rescue! Predicting and Preventing Accidents at Sea – JAXenter

Posted: at 2:45 pm

without comments

Watch Dr. Yonit Hoffman's Machine Learning Conference session

Accidents at sea happen all the time. Their costs in terms of lives, money and environmental destruction are huge. Wouldn't it be great if they could be predicted and perhaps prevented? Dr. Yonit Hoffman's Machine Learning Conference session discusses new ways of preventing sea accidents with the power of data science.

Does machine learning hold the key to preventing accidents at sea?

With more than 350 years of history, the marine insurance industry was arguably the first profession to try to predict accidents and estimate future risk from data. Yet the old ways no longer work; new waves of data and algorithms can offer significant improvements and are going to revolutionise the industry.

In her Machine Learning Conference session, Dr. Yonit Hoffman will show that it is now possible to predict accidents, and how data on a ship's behaviour, such as location, speed, maps and weather, can help. She will show how fragments of information on ship movements can be gathered and taken all the way to machine learning models. In this session, she discusses the challenges, including introducing machine learning to an industry that still uses paper and quills (yes, really!) and explaining the models using SHAP.

Dr. Yonit Hoffman is a Senior Data Scientist at Windward, a world leader in maritime risk analytics. Before investigating supertanker accidents, she researched human cells and cancer at the Weizmann Institute, where she received her PhD and MSc. in Bioinformatics. Yonit also holds a BSc. in computer science and biology from Tel Aviv University.


Data to the Rescue! Predicting and Preventing Accidents at Sea - JAXenter

Written by admin

March 29th, 2020 at 2:45 pm

Posted in Machine Learning

What are the top AI platforms? – Gigabit Magazine – Technology News, Magazine and Website

Posted: at 2:45 pm

without comments

Business Overview

Microsoft AI is a platform used to develop AI solutions in conversational AI, machine learning, data sciences, robotics, IoT, and more.

Microsoft AI prides itself on driving innovation through protecting wildlife, better brewing, feeding the world and preserving history.

Its Cognitive Services is described as "a comprehensive family of AI services and cognitive APIs to help you build intelligent apps."


Tom Bernard Krake is the Azure Cloud Executive at Microsoft, responsible for leveraging and evaluating the Azure platform. Tom is joined by a team of experienced executives to optimise the Azure platform and oversee the many cognitive services that it provides.

Notable customers

Uber uses Cognitive Services to boost its security through facial recognition to ensure that the driver using the app matches the user that is on file.

KPMG helps financial institutions save millions in compliance costs through the use of Microsoft's Cognitive Services. They do this by transcribing and logging thousands of hours of calls, reducing compliance costs by as much as 80 per cent. uses Cognitive Services to provide answers to its customers by infusing its customer chatbot with the intelligence to communicate using natural language.

The services:

Decision - Make smarter decisions faster through anomaly detectors, content moderators and personalizers.

Language - Extract meaning from unstructured text through the immersive reader, language understanding, Q&A maker, text analytics and translator text.

Speech - Integrate speech processing into apps and services through speech-to-text, text-to-speech, speech translation and speaker recognition.

Vision - Identify and analyse content within images, videos and digital ink through computer vision, custom vision, face, form recogniser, ink recogniser and video indexer.

Web Search - Find what you are looking for across the world wide web through autosuggest, custom search, entity search, image search, news search, spell check, video search, visual search and web search.

Read more:

What are the top AI platforms? - Gigabit Magazine - Technology News, Magazine and Website

Written by admin

March 29th, 2020 at 2:45 pm

Posted in Machine Learning

With Launch of COVID-19 Data Hub, The White House Issues A ‘Call To Action’ For AI Researchers – Machine Learning Times – machine learning & data…

Posted: at 2:45 pm

without comments

Originally published in TechCrunch, March 16, 2020

In a briefing on Monday, research leaders across tech, academia and the government joined the White House to announce an open data set full of scientific literature on the novel coronavirus. The COVID-19 Open Research Dataset, known as CORD-19, will also add relevant new research moving forward, compiling it into one centralized hub. The new data set is machine readable, making it easily parsed for machine learning purposes, a key advantage according to researchers involved in the ambitious project.

In a press conference, U.S. CTO Michael Kratsios called the new data set "the most extensive collection of machine readable coronavirus literature to date." Kratsios characterized the project as a "call to action" for the AI community, which can employ machine learning techniques to surface unique insights in the body of data. To come up with guidance for researchers combing through the data, the National Academies of Sciences, Engineering, and Medicine collaborated with the World Health Organization to come up with high-priority questions about the coronavirus related to genetics, incubation, treatment, symptoms and prevention.

The partnership, announced today by the White House Office of Science and Technology Policy, brings together the Chan Zuckerberg Initiative, Microsoft Research, the Allen Institute for Artificial Intelligence, the National Institutes of Health's National Library of Medicine, Georgetown University's Center for Security and Emerging Technology, Cold Spring Harbor Laboratory and the Kaggle AI platform, owned by Google.

The database brings together nearly 30,000 scientific articles about the virus known as SARS-CoV-2, as well as related viruses in the broader coronavirus group. Around half of those articles make the full text available. Critically, the database will include pre-publication research from resources like medRxiv and bioRxiv, open access archives for pre-print health sciences and biology research.

To continue reading this article, click here.

Read more:

With Launch of COVID-19 Data Hub, The White House Issues A 'Call To Action' For AI Researchers - Machine Learning Times - machine learning & data...

Written by admin

March 29th, 2020 at 2:45 pm

Posted in Machine Learning

Deep Learning to Be Key Driver for Expansion and Adoption of AI in Asia-Pacific, Says GlobalData – MarTech Series

Posted: at 2:45 pm

without comments

Deep learning, a subset of machine learning and artificial intelligence (AI), is predicted to provide formidable momentum for the adoption and growth of artificial intelligence in the Asia-Pacific (APAC) region. The next few years will see deep learning become part of mainstream deployments, bringing commendable changes to businesses in the region, says GlobalData, a leading data and analytics company.

GlobalData estimates that the APAC region will account for approximately 30% of global AI platforms revenue (around US$97.5bn) by 2024. However, that share is expected to rise significantly, given the incumbent technology companies and the growing number of start-ups specializing in the field.
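Reading the parenthetical US$97.5bn as the global figure (an assumption, since the sentence is ambiguous), the quoted numbers imply a regional share of roughly US$29bn:

```python
# Back-of-the-envelope check of the GlobalData figures quoted above,
# assuming US$97.5bn is the *global* AI platforms revenue by 2024 and
# APAC takes roughly 30% of it.
GLOBAL_REVENUE_BN = 97.5
APAC_SHARE = 0.30

apac_revenue_bn = GLOBAL_REVENUE_BN * APAC_SHARE
print(f"APAC AI platforms revenue: ~US${apac_revenue_bn:.2f}bn")  # ~US$29.25bn
```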

Furthermore, technological enhancements supporting higher computation capabilities (CPU and GPU), and the huge amount of data, predicted to grow many times over as the connected-devices ecosystem expands, are expected to contribute to this growth.

Marketing Technology News: SalesHood Steps Up to Offer Free Usage of Its Sales Enablement Platform During COVID-19 Crisis

Digital assistants like Cortana, Siri, GoogleNow and Alexa leverage deep learning to some extent for natural language processing (NLP) as well as speech recognition. Some of the other key usage areas of deep learning include multi-lingual chatbots, voice and image recognition, data processing, surveillance, fraud detection and diagnostics.

Sunil Kumar Verma, lead ICT analyst at GlobalData, comments: "The APAC market is proactively deploying deep learning-based AI solutions to bring increased offline automation, safety and security to businesses and their assets. In addition, AI hardware optimization with increased computing speed on small devices will reduce costs and drive deep learning adoption across the region."

In APAC, deep learning is increasingly being adopted for various applications, driven by product launches and technical enhancements by regional technology vendors.

Marketing Technology News: Cognizant to Acquire Lev to Expand Digital Marketing Expertise

For instance, China-based SenseTime leverages its deep learning platform to power image recognition, intelligent video analytics and medical image recognition for its customers, through its facial recognition technology called DeepID. Similarly, DeepSight AI Labs, an India-based start-up (which also operates in the US), uses deep learning to develop SuperSecure Platform, a smart retrofit video surveillance solution that works on any CCTV to provide a contextualized AI solution to detect objects and behaviors.

Australia-based Daisee offers an algorithm called Lisa, which leverages a speech-to-text engine to identify key conversational elements, determine their meaning and derive their context. Similarly, Cognitive Software Group is using deep learning and machine learning to tag unstructured data to enhance natural language understanding.

Verma concludes: "Although still in its infancy, deep learning is proving to be a stepping stone in the evolution of the APAC technology landscape. However, with the shortage of skilled professionals, and with only a handful of technology companies investing in hiring and training their workforce specifically for deep learning, adoption will face some initial roadblocks."

Marketing Technology News: COVID-19 Phishes Explode as U.S. Reels From Pandemic

MarTech Series (MTS) is a business publication dedicated to helping marketers get more from marketing technology through in-depth journalism, expert author blogs and research reports.

We publish high quality, relevant Marketing Technology Insights to help the business community advance martech knowledge and develop new martech skills. Our focus is on bringing marketers the latest business trends, products and practices affecting their marketing strategy.

We help our readers make sense of the rapidly evolving martech landscape, and cover the incredible impact of marketing technologies adoption on the way we do business.

Read this article:

Deep Learning to Be Key Driver for Expansion and Adoption of AI in Asia-Pacific, Says GlobalData - MarTech Series

Written by admin

March 29th, 2020 at 2:45 pm

Posted in Machine Learning

AI Is Changing Work and Leaders Need to Adapt – Harvard Business Review

Posted: at 2:45 pm

without comments

Executive Summary

Recent empirical research by the MIT-IBM Watson AI Lab provides new insight into how work is changing in the face of AI. Based on this research, the author provides a roadmap for leaders intent on adapting their workforces and reallocating capital while also delivering profitability, arguing that the key to unlocking AI's productivity potential while delivering on business objectives lies in three strategies: rebalancing resources, investing in workforce reskilling and, on a larger scale, advancing new models of education and lifelong learning.

As AI is increasingly incorporated into our workplaces and daily lives, it is poised to fundamentally upend the way we live and work. Concern over this looming shift is widespread. A recent survey of 5,700 Harvard Business School alumni found that 52% of even this elite group believe the typical company will employ fewer workers three years from now.

The advent of AI poses new and unique challenges for business leaders. They must continue to deliver financial performance, while simultaneously making significant investments in hiring, workforce training, and new technologies that support productivity and growth. These seemingly competing business objectives can make for difficult, often agonizing, leadership decisions.

Against this backdrop, recent empirical research by our team at the MIT-IBM Watson AI Lab provides new insight into how work is changing in the face of AI. By examining these findings, we can create a roadmap for leaders intent on adapting their workforces and reallocating capital, while also delivering profitability.

The stakes are high. AI is an entirely new kind of technology, one that has the ability to anticipate future needs and provide recommendations to its users. For business leaders, that unique capability has the potential to increase employee productivity by taking on administrative tasks, providing better pricing recommendations to sellers, and streamlining recruitment, to name a few examples.

For business leaders navigating the AI workforce transition, the key to unlocking the productivity potential while delivering on business objectives lies in three key strategies: rebalancing resources, investing in workforce reskilling and, on a larger scale, advancing new models of education and lifelong learning.

Our research report offers a window into how AI will change workplaces through the rebalancing and restructuring of occupations. Using AI and machine learning techniques, our MIT-IBM Watson AI Lab team analyzed 170 million online job posts between 2010 and 2017. The study's first implication: while occupations change slowly, over years and even decades, tasks become reorganized at a much faster pace.

Jobs are a collection of tasks. As workers take on jobs in various professions and industries, it is the tasks they perform that create value. With the advancement of technology, some existing tasks will be replaced by AI and machine learning. But our research shows that only 2.5% of jobs include a high proportion of tasks suitable for machine learning. These include positions like usher, lobby attendant, and ticket taker, where the main tasks involve verifying credentials and allowing only authorized people to enter a restricted space.
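The jobs-as-task-bundles framing can be sketched as a toy scoring function. To be clear, this is not the MIT-IBM methodology: the task labels, the "ML-suitable" set, and the 0.5 threshold below are all invented for illustration.

```python
# Toy illustration of scoring jobs by their share of ML-suitable tasks.
# The task vocabulary and threshold are assumptions, not study parameters.
ML_SUITABLE = {"verify credentials", "check tickets", "scan documents"}

JOBS = {
    "usher": ["verify credentials", "check tickets", "guide guests"],
    "plumber": ["diagnose leak", "replace pipe", "advise customer"],
}

def ml_task_share(tasks):
    """Fraction of a job's tasks that fall in the ML-suitable set."""
    return sum(t in ML_SUITABLE for t in tasks) / len(tasks)

def highly_automatable(jobs, threshold=0.5):
    """Names of jobs whose ML-suitable task share exceeds the threshold."""
    return [name for name, tasks in jobs.items()
            if ml_task_share(tasks) > threshold]

print(highly_automatable(JOBS))  # ['usher']
```

Under this framing, a job like usher crosses the threshold because most of its tasks are routine verification, while a plumber's diagnostic and advisory tasks keep it well below it.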

Most tasks will still be best performed by humans whether craft workers like plumbers, electricians and carpenters, or those who do design or analysis requiring industry knowledge. And new tasks will emerge that require workers to exercise new skills.

As this shift occurs, business leaders will need to reallocate capital accordingly. Broad adoption of AI may require additional research and development spending. Training and reskilling employees will very likely require temporarily removing workers from revenue-generating activities.

More broadly, salaries and other forms of employee compensation will need to reflect the shifting value of tasks all along the organization chart. Our research shows that as technology reduces the cost of some tasks because they can be done in part by AI, the value workers bring to the remaining tasks increases. Those tasks tend to require grounding in intellectual skill and insight, something AI is not as good at as people.

In high-wage business and finance occupations, for example, compensation for tasks requiring industry knowledge increased by more than $6,000, on average, between 2010 and 2017. By contrast, average compensation for manufacturing and production tasks fell by more than $5,000 during that period. As AI continues to reshape the workplace, business leaders who are mindful of this shifting calculus will come out ahead.

Companies today are held accountable not only for delivering shareholder value, but for positively impacting stakeholders such as customers, suppliers, communities and employees. Moreover, investment in talent and other stakeholders is increasingly considered essential to delivering long-term financial results. These new expectations are reflected in the Business Roundtable's recently revised statement on corporate governance, which underscores corporations' obligation to support employees through training and education that help develop new skills for a rapidly changing world.

Millions of workers will need to be retrained or reskilled as a result of AI over the next three years, according to a recent IBM Institute for Business Value study. Technical training will certainly be a necessary component. As tasks requiring intellectual skill, insight and other uniquely human attributes rise in value, executives and managers will also need to focus on preparing workers for the future by fostering and growing people skills such as judgement, creativity and the ability to communicate effectively. Through such efforts, leaders can help their employees make the shift to partnering with intelligent machines as tasks transform and change in value.

As AI continues to scale within businesses and across industries, it is incumbent upon innovators and business leaders to understand not only the business process implications, but also the societal impact. Beyond the need for investment in reskilling within organizations today, executives should work alongside policymakers and other public and private stakeholders to provide support for education and job training, encouraging investment in training and reskilling programs for all workers.

Our research shows that technology can disproportionately impact the demand and earning potential for mid-wage workers, causing a squeeze on the middle class. For every five tasks that shifted out of mid-wage jobs, we found, four tasks moved to low-wage jobs and one moved to a high-wage job. As a result, wages are rising faster in the low- and high-wage tiers than in the mid-wage tier.

New models of education and pathways to continuous learning can help address the growing skills gap, providing members of the middle class, as well as students and a broad array of mid-career professionals, with opportunities to build in-demand skills. Investment in all forms of education is key: community college, online learning, apprenticeships, or programs like P-TECH, a public-private partnership designed to prepare high school students for new collar technical jobs like cloud computing and cybersecurity.

Whether it is workers who are asked to transform their skills and ways of working, or leaders who must rethink everything from resource allocation to workforce training, fundamental economic shifts are never easy. But if AI is to fulfill its promise of improving our work lives and raising living standards, senior leaders must be ready to embrace the challenges ahead.

Read the original post:

AI Is Changing Work and Leaders Need to Adapt - Harvard Business Review

Written by admin

March 29th, 2020 at 2:45 pm

Posted in Machine Learning

Why AI might be the most effective weapon we have to fight COVID-19 – The Next Web

Posted: March 22, 2020 at 4:41 am

without comments

If not the most deadly, the novel coronavirus (COVID-19) is one of the most contagious diseases to have hit our green planet in the past decades. In little over three months since the virus was first spotted in mainland China, it has spread to more than 90 countries, infected more than 185,000 people, and taken more than 3,500 lives.

As governments and health organizations scramble to contain the spread of coronavirus, they need all the help they can get, including from artificial intelligence. Though current AI technologies are far from replicating human intelligence, they are proving to be very helpful in tracking the outbreak, diagnosing patients, disinfecting areas, and speeding up the process of finding a cure for COVID-19.

Data science and machine learning might be two of the most effective weapons we have in the fight against the coronavirus outbreak.

Just before the turn of the year, BlueDot, an artificial intelligence platform that tracks infectious diseases around the world, flagged a cluster of unusual pneumonia cases happening around a market in Wuhan, China. Nine days later, the World Health Organization (WHO) released a statement declaring the discovery of a novel coronavirus in a hospitalized person with pneumonia in Wuhan.

BlueDot uses natural language processing and machine learning algorithms to peruse information from hundreds of sources for early signs of infectious epidemics. The AI looks at statements from health organizations, commercial flights, livestock health reports, climate data from satellites, and news reports. With so much data being generated on coronavirus every day, the AI algorithms can help home in on the bits that can provide pertinent information on the spread of the virus. It can also find important correlations between data points, such as the movement patterns of the people who are living in the areas most affected by the virus.

The company also employs dozens of experts who specialize in a range of disciplines including geographic information systems, spatial analytics, data visualization, computer sciences, as well as medical experts in clinical infectious diseases, travel and tropical medicine, and public health. The experts review the information that has been flagged by the AI and send out reports on their findings.

Combined with the assistance of human experts, BlueDot's AI can not only predict the start of an epidemic but also forecast how it will spread. In the case of COVID-19, the AI successfully identified the cities where the virus would be transferred to after it surfaced in Wuhan. Machine learning algorithms studying travel patterns were able to predict where the people who had contracted coronavirus were likely to travel.
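A heavily simplified sketch of the source-scanning step described above might look like the following. A real system like BlueDot's uses trained NLP models over hundreds of structured and unstructured feeds; the keyword list here is a toy assumption standing in for learned signals.

```python
# Toy stand-in for an epidemic early-warning scanner: surface report
# snippets containing terms suggestive of an unusual disease cluster.
ALERT_TERMS = {"unusual pneumonia", "unknown pathogen", "cluster of cases"}

def flag_reports(reports):
    """Return the reports containing at least one early-warning term."""
    return [r for r in reports
            if any(term in r.lower() for term in ALERT_TERMS)]

reports = [
    "Market closures announced ahead of holiday travel.",
    "Hospital notes a cluster of cases of unusual pneumonia near a market.",
]
print(flag_reports(reports))  # flags only the second report
```

The flagged items would then go to the human experts BlueDot employs, who review the machine-surfaced signals before any report is sent out.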

Coronavirus (COVID-19) (Image source: NIAID)

You have probably seen the COVID-19 screenings at border crossings and airports. Health officers use thermometer guns and visually check travelers for signs of fever, coughing, and breathing difficulties.

Now, computer vision algorithms can perform the same screening at large scale. An AI system developed by Chinese tech giant Baidu uses cameras equipped with computer vision and infrared sensors to estimate people's temperatures in public areas. The system can screen up to 200 people per minute and detect their temperature to within 0.5 degrees Celsius. The AI flags anyone who has a temperature above 37.3 degrees. The technology is now in use in Beijing's Qinghe Railway Station.
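Once the vision system has produced a temperature estimate per person, the screening rule described above reduces to a simple threshold check. The readings below are made-up illustrative values, and a deployed system would also account for the sensor's stated 0.5-degree error band.

```python
# Sketch of the fever-flagging rule: anyone whose estimated temperature
# exceeds 37.3 degrees Celsius is flagged for follow-up screening.
FEVER_THRESHOLD_C = 37.3

def flag_travelers(readings):
    """Return indices of travelers whose reading exceeds the threshold."""
    return [i for i, temp in enumerate(readings) if temp > FEVER_THRESHOLD_C]

readings = [36.6, 37.9, 37.1, 38.4]  # made-up per-person estimates
print(flag_travelers(readings))  # travelers 1 and 3 are flagged
```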

Alibaba, another Chinese tech giant, has developed an AI system that can detect coronavirus in chest CT scans. According to the researchers who developed it, the system has 96-percent accuracy. The AI was trained on data from 5,000 coronavirus cases and can perform the test in 20 seconds, as opposed to the 15 minutes it takes a human expert to diagnose patients. It can also tell the difference between coronavirus and ordinary viral pneumonia. The algorithm can give a boost to the medical centers that are already under a lot of pressure to screen patients for COVID-19 infection. The system is reportedly being adopted in 100 hospitals in China.
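The throughput claim is worth putting in plain numbers: 20 seconds per AI read versus roughly 15 minutes per expert read is a 45x speedup per scan.

```python
# The reported per-scan reading times, expressed as a speedup factor.
ai_seconds = 20
expert_seconds = 15 * 60  # 15 minutes

speedup = expert_seconds / ai_seconds
print(f"{speedup:.0f}x faster per scan")  # 45x faster per scan
```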

A separate AI developed by researchers from Renmin Hospital of Wuhan University, Wuhan EndoAngel Medical Technology Company, and the China University of Geosciences purportedly shows 95-percent accuracy in detecting COVID-19 in chest CT scans. The system is a deep learning algorithm trained on 45,000 anonymized CT scans. According to a preprint paper published on medRxiv, the AI's performance is comparable to that of expert radiologists.

One of the main ways to prevent the spread of the novel coronavirus is to reduce contact between infected patients and people who have not contracted the virus. To this end, several companies and organizations have engaged in efforts to automate some of the procedures that previously required health workers and medical staff to interact with patients.

Chinese firms are using drones and robots to perform contactless delivery and to spray disinfectants in public areas to minimize the risk of cross-infection. Other robots are checking people for fever and other COVID-19 symptoms and dispensing free hand sanitizer foam and gel.

Inside hospitals, robots are delivering food and medicine to patients and disinfecting their rooms to obviate the need for the presence of nurses. Other robots are busy cooking rice without human supervision, reducing the number of staff required to run the facility.

In Seattle, doctors used a robot to communicate with and treat patients remotely to minimize exposure of medical staff to infected people.

At the end of the day, the war on the novel coronavirus is not over until we develop a vaccine that can immunize everyone against the virus. But developing new drugs and medicine is a very lengthy and costly process. It can cost more than a billion dollars and take up to 12 years. That's the kind of timeframe we don't have as the virus continues to spread at an accelerating pace.

Fortunately, AI can help speed up the process. DeepMind, the AI research lab acquired by Google in 2014, recently declared that it has used deep learning to find new information about the structure of proteins associated with COVID-19. This is a process that could have taken many more months.

Understanding protein structures can provide important clues to the coronavirus vaccine formula. DeepMind is one of several organizations engaged in the race to unlock the coronavirus vaccine. It has leveraged the results of decades of machine learning progress as well as research on protein folding.

"It's important to note that our structure prediction system is still in development and we can't be certain of the accuracy of the structures we are providing, although we are confident that the system is more accurate than our earlier CASP13 system," DeepMind's researchers wrote on the AI lab's website. "We confirmed that our system provided an accurate prediction for the experimentally determined SARS-CoV-2 spike protein structure shared in the Protein Data Bank, and this gave us confidence that our model predictions on other proteins may be useful."

Although it's too early to tell whether we're headed in the right direction, the efforts are commendable. Every day saved in finding the coronavirus vaccine can save hundreds, or thousands, of lives.

This story is republished from TechTalks, the blog that explores how technology is solving problems and creating new ones.

Published March 21, 2020 17:00 UTC

Continue reading here:

Why AI might be the most effective weapon we have to fight COVID-19 - The Next Web

Written by admin

March 22nd, 2020 at 4:41 am

Posted in Machine Learning
