
Archive for the ‘Machine Learning’ Category

This tech firm used AI & machine learning to predict Coronavirus outbreak; warned people about danger zones – Economic Times

Posted: February 4, 2020 at 9:52 am



A couple of weeks into the Coronavirus outbreak, the disease has become a full-blown pandemic. According to official Chinese statistics, more than 130 people have died from the mysterious virus.

Contagious diseases may be diagnosed by men and women in face masks and lab coats, but warning signs of an epidemic can be detected by computer programmers sitting thousands of miles away. Around the tenth of January, news of a flu outbreak in China's Hubei province started making its way to mainstream media. It then spread to other parts of the country, and subsequently, overseas.

But the first to report an impending biohazard was BlueDot, a Canadian firm that specializes in infectious disease surveillance. On December 31, it predicted an impending coronavirus outbreak using an artificial intelligence-powered system that combs through animal and plant disease networks, news reports on vernacular websites, government documents, and other online sources, and warned its clients against traveling to danger zones like Wuhan well before foreign governments started issuing travel advisories.

The firm further used global airline ticketing data to correctly predict that the virus would spread to Seoul, Bangkok, Taipei, and Tokyo. Machine learning and natural language processing techniques were also employed to create models that process large amounts of data in real time, including airline ticketing data, news reports in 65 languages, and animal and plant disease networks.
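As a rough illustration of the kind of multilingual news scanning described above, the sketch below counts outbreak-related terms in (pre-translated) news snippets and flags regions whose aggregate signal crosses a threshold. The article list, term list, and threshold are invented for the example; BlueDot's actual system is far more sophisticated and proprietary.

```python
# Illustrative sketch only: BlueDot's real pipeline is proprietary. This shows
# the general shape of keyword-based outbreak surveillance over news text.
from collections import Counter

# Hypothetical pre-translated news snippets tagged with the region they mention.
articles = [
    ("Hubei", "hospital reports cluster of unexplained pneumonia cases"),
    ("Hubei", "market closed after pneumonia outbreak among vendors"),
    ("Bangkok", "traveler screened for fever at airport"),
]

OUTBREAK_TERMS = {"pneumonia", "outbreak", "fever", "cluster"}  # assumed term list

def signal_strength(text: str) -> int:
    """Count outbreak-related terms in one article."""
    return sum(1 for word in text.lower().split() if word in OUTBREAK_TERMS)

scores = Counter()
for region, text in articles:
    scores[region] += signal_strength(text)

# Flag regions whose aggregate signal exceeds an (assumed) alert threshold.
ALERT_THRESHOLD = 3
alerts = [region for region, score in scores.items() if score >= ALERT_THRESHOLD]
print(alerts)  # ['Hubei']
```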


"We know that governments may not be relied upon to provide information in a timely fashion. We can pick up news of possible outbreaks, little murmurs or forums or blogs of indications of some kind of unusual events going on," Kamran Khan, founder and CEO of BlueDot, told a news magazine.

The death toll from the Coronavirus rose to 81 in China, with thousands of new cases registered each day. The government has extended the Lunar New Year holiday by three days to restrict the movement of people across the country, and thereby lower the chances of more people contracting the respiratory disease.

However, a lockdown of the affected area could itself prove detrimental to public health, putting the domestic population at risk as medical supplies dwindle and stoking anger and resentment.



New Project at Jefferson Lab Aims to Use Machine Learning to Improve Up-Time of Particle Accelerators – HPCwire

Posted: at 9:52 am



NEWPORT NEWS, Va., Jan. 30, 2020 – More than 1,600 nuclear physicists worldwide depend on the Continuous Electron Beam Accelerator Facility for their research. Located at the Department of Energy's Thomas Jefferson National Accelerator Facility in Newport News, Va., CEBAF is a DOE User Facility that is scheduled to conduct research for limited periods each year, so it must perform at its best during each scheduled run.

But glitches in any one of CEBAF's tens of thousands of components can cause the particle accelerator to temporarily fault and interrupt beam delivery, sometimes by mere seconds but other times by many hours. Now, accelerator scientists are turning to machine learning in hopes that they can more quickly recover CEBAF from faults and one day even prevent them.

Anna Shabalina is a Jefferson Lab staff member and principal investigator on the project, which has been funded by the Laboratory Directed Research & Development program for fiscal year 2020. The program provides the resources for Jefferson Lab personnel to make rapid and significant contributions to critical science and technology problems of mission relevance to the lab and the DOE.

Shabalina says her team is specifically concerned with the types of faults that most often bring CEBAF grinding to a halt: those that concern the superconducting radiofrequency acceleration cavities.

"Machine learning is quickly gaining popularity, particularly for optimizing, automating and speeding up data analysis," Shabalina says. "This is exactly what is needed to reduce the workload for SRF cavity fault classification."

SRF cavities are the backbone of CEBAF. They configure electromagnetic fields to add energy to the electrons as they travel through the CEBAF accelerator. If an SRF cavity faults, the cavity is turned off, disrupting the electron beam and potentially requiring a reconfiguration that limits the energy of the electrons that are being accelerated for experiments.

Shabalina and her team plan to use a recently deployed data acquisition system that records data from individual cavities. The system records 17 parameters from a cavity that faults; it also records the 17 parameters from a cavity if one of its near neighbors faults.

At present, system experts visually inspect each data set by hand to identify the type of fault and which component caused it. That information is a valuable tool that helps CEBAF operators decide how to mitigate the fault.

"Each cavity fault leaves a unique signature in the data," Shabalina says. "Machine learning is particularly well suited for finding patterns, even in noisy data."

The team plans to build on this strength of machine learning to create a model that recognizes the various types of faults. When shown enough input signals and corresponding fault types, the model is expected to be able to identify the fault patterns in CEBAF's complex signals. The next step would then be to run the model during CEBAF operations so that it can classify in real time the different kinds of faults that cause the machine to automatically trip off.
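As a rough sketch of the supervised-classification setup described here, the example below trains a classifier on fault events summarized as 17-parameter feature vectors with expert-assigned fault-type labels. The synthetic data, the choice of a random forest, and the scikit-learn toolchain are all assumptions for illustration; the article does not specify the Jefferson Lab team's actual models or data formats.

```python
# Minimal sketch of supervised fault classification, assuming each fault event
# is summarized as a 17-parameter feature vector with an expert-assigned label.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 17))    # stand-in for 17 recorded cavity parameters
y = rng.integers(0, 4, size=500)  # stand-in for four fault-type labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# In operation, a new fault event's parameters would be classified in real time.
print(model.predict(X_test[:1]))
```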

"We plan to develop machine learning models to identify the type of fault and the cavity causing instability. This will give operators the ability to apply pointed measures to quickly bring the cavities back online for researchers," Shabalina explains.

If successful, the project would also open the possibility of extending the model to identify precursors to cavity trips, giving operators an early warning system for possible faults so they can take action to prevent them from ever occurring.

About Jefferson Science Associates, LLC

Jefferson Science Associates, LLC, a joint venture of the Southeastern Universities Research Association, Inc. and PAE, manages and operates the Thomas Jefferson National Accelerator Facility, or Jefferson Lab, for the U.S. Department of Energy's Office of Science. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.

Source: Thomas Jefferson National Accelerator Facility (Jefferson Lab)


Euro machine learning startup plans NYC rental platform, the punch list goes digital & other proptech news – The Real Deal

Posted: at 9:52 am



New York City rentals (Credit: iStock)

Digital marketplace gets a boost

CRE digital marketplace CREXi nabbed $30 million in a Series B round led by Mitsubishi Estate Company, Industry Ventures, and Prudence Holdings. The new funds will help the company build out a subscription service aimed at brokers and an analytics service that highlights industry trends. CREXi wants to become the go-to platform for every step in the CRE process, from marketing to sale.

Dude, where's my tech-fueled hotel chain?

Ashton Kutcher's Sound Ventures and travel-focused VC firm Thayer Ventures have gotten behind hospitality startup Life House, leading a $30 million Series B round. The company runs a boutique hotel chain as well as a management platform, which gives hotel owners access to AI-based pricing and automated financial accounting. Life House has over 800 rooms across cities such as Miami and Denver, with plans to expand to 25 hotels by next year.

Working from home

As the deadly Coronavirus outbreak becomes more serious with every hour, WeWork said it is temporarily closing 55 locations in China. The struggling co-working company encouraged employees at these sites to work from home or in private rooms to keep from catching the virus. Also this week, the startup closed a three-year deal to provide office space for 250 employees of gym membership company Gympass, per Reuters. WeWork's owner SoftBank is a minority investor in Gympass, so it looks like Masa Son is using some parts of his portfolio to prop up others.

300,000

That's how many listings rental platform/flatmate matcher Badi has across London, Berlin, Madrid, and Barcelona. Badi claims to use machine-learning technology to match tenants and rooms, and plans on hopping across the pond to New York City within the year. It's an interesting market for the Barcelona-based company to enter. Though most people use a platform like StreetEasy to find an apartment with a traditional landlord, few established companies have cracked the sublet game without running afoul of New York City's rental laws. In effect, Badi would likely be competing with Facebook groups such as Gypsy Housing plus wanna-be-my-roommate startups like Roomi and SpareRoom. Badi is backed by Goodwater Capital, Target Global, Spark Capital and Mangrove Capital. The firm has raised over $45 million in VC funding since its founding in 2015.

Pink slips at Compass

Uh oh, yet another SoftBank-funded startup is laying off employees. Up to 40 employees of tech brokerage Compass in the IT, marketing and M&A departments will be getting the pink slip this week. Sources told E.B. Solomont that the nationwide cuts are part of a reorganization to introduce a new Agent Experience Team that will take over onboarding and training new agents from the departing employees. It's a small number of cuts compared to the 18,000 employees Compass has across the U.S., but it isn't a great look in today's business climate.

Getting ready to move

As SoftBank-backed hospitality startup Oyo continues to cut back, its arch-nemesis RedDoorz just launched a new co-living program in Indonesia. The company is targeting young professionals and college students with the KoolKost service, dishing out shared units with flexible leases and free WiFi. Its main business, like Oyo's, is running a network of budget hotels across Southeast Asia. We'll see if co-living will help it avoid some of Oyo's profitability problems.

Homes on Olympus

It's no secret that it can be a pain to figure out a place to live when work needs you to move to a new city for a bit. You can take your pick between bland corporate housing and Airbnbs designed for quick vacations. That's where Zeus comes in (not with a thunderbolt but with a corporate housing platform).

Zeus signs two-year minimum leases with landlords, furnishes the apartments with couches meant to look chic, and rents them out to employees for 30 days or more. The company currently manages around 2,000 furnished homes, with the goal of filling a newly added apartment within 10 days.

Corporate housing is a competitive space, with startups like Domio and Sonder also trying to lure in business travelers. You'd think that Zeus would have to go one-on-one with Airbnb, but the two companies actually have a partnership. The short-term rental giant lists Zeus properties on its platform and invested in the company as part of a $55 million Series B round last month. They're trying to keep the competition close.

Punch lists go digital

Home renovations platform Punch List just scored $4 million in a seed round led by early-stage VC funds Bling Capital and Bedrock Capital, per Crunchbase. The platform lets homeowners track project progress and gives contractors a place to send digital invoices, all on a newly launched app. The company wants to make the frustrating process of remodeling as digital as possible.


UB receives $800,000 NSF/Amazon grant to improve AI fairness in foster care – UB Now: News and views for UB faculty and staff – University at Buffalo…

Posted: at 9:52 am



A multidisciplinary UB research team has received an $800,000 grant to develop a machine learning system that could eventually help caseworkers and human services agencies determine the best available services for the more than 20,000 youth who annually age out of foster care without rejoining their families.

The National Science Foundation and Amazon, the grant's joint funders, have partnered on a program called Fairness in Artificial Intelligence (FAI) that aims to address bias and build trustworthy computational systems that can contribute to solving the biggest challenges facing modern societies.

Over the course of three years, the UB researchers will collaborate with the Hillside Family of Agencies in Rochester, one of the oldest family and youth nonprofit human services organizations in the country, and a youth advisory council made up of individuals who have recently aged out of foster care to develop the tool. They will also consult with national experts across specializations to inform this complex work.

Researchers will use data from the Administration for Children and Families' (ACF) federally mandated National Youth in Transition Database (NYTD) and input from collaborators to inform their predictive model. Each state participates in NYTD to report the experiences and services used by youth in foster care.

The team's three-pronged goal is to use the experiences of youth, caseworkers and experts in the foster care system to identify the often hard-to-find biases in data used to train machine learning models, to obtain multiple perspectives on fairness with respect to decisions about services, and to then build a system that can more equitably and efficiently deliver services.

Social scientists have long considered questions of fairness and justice in societies, but beginning in the early part of the 21st century, there was growing awareness of how computers might be using unfair algorithms, according to Kenneth Joseph, assistant professor in the Department of Computer Science and Engineering and one of the co-investigators of the project.

Joseph is an expert in machine learning who focuses much of his research on better understanding how biases work their way into computational models, and how to understand and address the social and technical processes responsible for doing so.

Machine learning refers to computer programs that can extract patterns from data. Unsupervised learning identifies patterns, while supervised learning tries to predict something based on those patterns.

"Our supervised problem is to take the information available about a particular child and make a prediction about how to allocate services," says Joseph. "Our goal is to help social workers identify youth who might benefit from preventative services, while doing so in a manner that participants within the system feel is fair and equitable."

"We also want our approach to have applications beyond foster care, so that eventually it can be used in other public service settings."

A machine learning model's greatest asset, however, might also be its greatest liability. Machine learning algorithms learn from no source other than the data they're provided. If the original data is biased, Joseph says, the algorithm will learn and echo those biases.

For instance, models for loan distribution derived from data that gives income- and geography-based preferences to applicants could be using information with inherent race, ethnicity and gender disparities.
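One simple, generic way to surface the kind of disparity described above is to compare a model's decision rates across groups. The sketch below computes a demographic-parity gap on invented decisions; the data, group labels, and tolerance are assumptions, and this is not the UB team's method.

```python
# Illustrative fairness audit: compare a model's approval rates across groups.
# Decisions, group labels, and the disparity threshold are invented for the sketch.
import numpy as np

approvals = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # model decisions (1 = approve)
group     = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rate_a = approvals[group == "A"].mean()
rate_b = approvals[group == "B"].mean()

# "Demographic parity difference": gap in approval rates between the two groups.
gap = abs(rate_a - rate_b)
print(f"approval rate A={rate_a:.2f}, B={rate_b:.2f}, gap={gap:.2f}")
if gap > 0.1:  # assumed tolerance
    print("Warning: model decisions differ substantially across groups")
```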

"There are many ways algorithms can be unfair, and very few of them have anything to do with math," says Joseph.

Finding and correcting those biases raises questions about using computers to make decisions affecting what is already a vulnerable population.

By age 19, 47% of foster care youth who have not been reunited with their families have not finished high school, 20% have experienced homelessness and 27% of males have been incarcerated, according to ACF's Children's Bureau.

But Melanie Sage, assistant professor in the School of Social Work and another of the grant's co-principal investigators, says this project is about providing caseworkers with an additional tool to help inform, not replace, their decision-making.

"We never want algorithms to replace the decisions made by trained professionals, but we do need information about how to make decisions based on likely outcomes and what the data tell us about pathways for children in foster care," she says.

Sage says their work on this grant is critical given the generational impact caseworkers and agencies have on the lives of foster youth.

"When a determination is made that services should be provided for protection because kids are not better off with their families, those kids are deserving of the best services and interventions that the child welfare system can offer," she says. "This research ideally gives us another tool that helps make that happen."

The project's other co-investigators are Varun Chandola, assistant professor of computer science and engineering; Huei-Yen Chen, assistant professor of industrial and systems engineering; and Atri Rudra, associate professor of computer science and engineering.


The Human-Powered Companies That Make AI Work – Forbes

Posted: at 9:52 am



Machine learning models require human labor for data labeling

The hidden secret of artificial intelligence is that much of it is actually powered by humans. To be specific, the supervised learning algorithms that have gained much of the attention recently depend on humans to provide well-labeled training data. Since machines can't yet teach themselves, they have to first be taught, and it falls upon humans to do this training. This is the secret Achilles' heel of AI: the need for humans to teach machines the things that they are not yet able to do on their own.

Machine learning is what powers today's AI systems. Organizations are implementing one or more of the seven patterns of AI: computer vision, natural language processing, predictive analytics, autonomous systems, pattern and anomaly detection, goal-driven systems, and hyperpersonalization, across a wide range of applications. However, for these systems to create accurate generalizations, they must be trained on data. The more advanced forms of machine learning, especially deep learning neural networks, require significant volumes of data to create models with the desired levels of accuracy. It goes without saying, then, that machine learning data needs to be clean, accurate, complete, and well labeled so the resulting models are accurate. While garbage in, garbage out has always been true of computing, it is especially true of machine learning data.

According to analyst firm Cognilytica, over 80% of AI project time is spent preparing and labeling data for use in machine learning projects:

Percentage of time allocated to machine learning tasks (Source: Cognilytica)

(Disclosure: I'm a principal analyst at Cognilytica)

Fully one quarter of this time is spent providing the necessary labels on data so that supervised machine learning approaches will actually achieve their learning objectives. Customers have the data, but they don't have the resources to label large data sets, nor do they have a mechanism to ensure accuracy and quality. Raw labor is easy to come by, but it's much harder to guarantee any level of quality from a random, mostly transient labor force. Third-party managed labeling solution providers address this gap by providing the labor force to do the labeling, combined with expertise in large-scale data labeling efforts and an infrastructure for managing labeling workloads and achieving desired quality levels.

According to a recent report from research firm Cognilytica, over 35 companies are currently engaged in providing human labor to add labels and annotation to data to power supervised learning algorithms. Some of these firms use general, crowdsourced approaches to data labeling, while others bring their own, managed and trained labor pools that can address a wide range of general and domain-specific data labeling needs.

As detailed in the Cognilytica report, the tasks for data labeling and annotation depend highly on the sort of data to be labeled for machine learning purposes and the specific learning task that is needed, with the primary use cases for data labeling falling into a few major categories.

These labeling tasks are getting increasingly more complicated and domain-specific as machine learning models are developed that can handle more general use cases. For example, innovative medical technology companies are building machine learning models that can identify all manner of concerns within medical images, such as clots, fractures, tumors, obstructions, and other concerns. To build these models requires first training machine learning algorithms to identify those issues within images. To train the machine learning models requires lots of data that has been labeled with the specific areas of concern identified. To accomplish that labeling task requires some level of knowledge as to how to identify a particular issue and the knowledge of how to appropriately label it. This is not a task for the random, off-the-street individual. This requires some amount of domain expertise.

Consequently, labeling firms have evolved to provide more domain-specific capabilities and expanded the footprint of their offerings. As machine learning starts to be applied to ever more specific areas, the need for this sort of domain-specific data labeling will only increase. According to the Cognilytica report, demand for third-party data labeling services will grow from $1.7 billion (USD) in 2019 to over $4.1 billion by 2024. This is a significant market, much larger than most might be aware of.

Increasingly, machines are doing this work of data labeling as well. Data labeling providers are applying machine learning to their own labeling efforts to perform some of the work of labeling, perform quality control checks on human labor, and optimize the labeling process. These firms use machine learning inferencing to identify data types, spot things that don't match the structure of a data column, flag potential data quality or formatting issues, and provide recommendations to users on how they could clean the data. In this way, machine learning is helping the process of improving machine learning. AI applied to AI. Quite interesting.
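One common quality-control step in managed labeling, aggregating several annotators' labels by majority vote and routing low-agreement items to an expert, can be sketched as follows. The labels and threshold are invented, and this is generic industry practice rather than any specific vendor's pipeline.

```python
# Generic sketch of label aggregation with agreement-based quality control.
from collections import Counter

# Hypothetical: three annotators label each medical image region.
annotations = {
    "scan_001": ["tumor", "tumor", "tumor"],
    "scan_002": ["clot", "tumor", "clot"],
    "scan_003": ["fracture", "clot", "tumor"],
}

MIN_AGREEMENT = 2 / 3  # assumed threshold for accepting a majority label

for item, labels in annotations.items():
    label, votes = Counter(labels).most_common(1)[0]
    agreement = votes / len(labels)
    if agreement >= MIN_AGREEMENT:
        print(f"{item}: accept '{label}' (agreement {agreement:.0%})")
    else:
        print(f"{item}: route to domain expert (agreement {agreement:.0%})")
```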

For the foreseeable future, the need for human-based data labeling for machine learning will not diminish. If anything, the use of machine learning continues to grow into new domains that require new knowledge to be built and learned by systems. This in turn requires well-labeled data to learn in those new domains, and in turn, requires the services of the hidden army of human laborers making AI work as well as it does today.


Global Deep Learning Market 2020-2024 | Growing Application of Deep Learning to Boost Market Growth | Technavio – Business Wire

Posted: at 9:51 am



LONDON--(BUSINESS WIRE)--The deep learning market is expected to grow by USD 7.2 billion during 2020-2024, according to the latest market research report by Technavio. Request a free sample report

Deep learning is a widely used branch of machine learning that involves artificial neural networks with several layers. Moreover, the massive volume of digital data produced at an unprecedented rate across industries is widening the application area of deep learning. In the healthcare industry, deep learning applications are used in drug research and development. Deep learning also helps in training machines to understand the complexities associated with languages, such as syntax and semantics, and in generating appropriate responses. Other application areas of deep learning are fraud detection, visual recognition, logistics, insurance, and agriculture. Thus, the growing applications of deep learning are expected to drive market growth during the forecast period.

To learn more about the global trends impacting the future of market research, download a free sample: https://www.technavio.com/talk-to-us?report=IRTNTR41147

As per Technavio, the growing emphasis on cloud-based deep learning will have a positive impact on the market and contribute to its growth significantly over the forecast period. This research report also analyzes other significant trends and market drivers that will influence market growth over 2020-2024.

Deep Learning Market: Growing Emphasis On Cloud-Based Deep Learning

Cloud computing is considered an appropriate platform for deep learning as it provides support for scalability, visualization, and storage of vast amounts of structured and unstructured data. The use of cloud computing in deep learning allows the integration of large datasets for training algorithms. Moreover, cloud computing also allows deep learning models to scale efficiently and at a much lower cost. Thus, the popularity of cloud-based deep learning is increasing, which will have a positive impact on the growth of the market during the forecast period.

"Increasing collaboration among vendors and the rising investments in deep learning will have a significant impact on the deep learning market growth during the forecast period," says a senior analyst at Technavio.

Register for a free trial today and gain instant access to 17,000+ market research reports

Technavio's SUBSCRIPTION platform

Deep Learning Market: Segmentation Analysis

This market research report segments the deep learning market by type (software, services, and hardware), and geographic segmentation (APAC, Europe, MEA, North America and South America).

The North America region led the deep learning market in 2019, and it is expected to register the highest incremental growth during the forecast period. This can be attributed to factors such as the increasing use of deep learning in industrial applications such as voice recognition and image recognition.

Technavio's sample reports are free of charge and contain multiple sections of the report, such as the market size and forecast, drivers, challenges, trends, and more. Request a free sample report

Some of the key topics covered in the report include:

Type segmentation

Geographic segmentation

Market Drivers

Market Challenges

Market Trends

Vendor Landscape

About Technavio

Technavio is a leading global technology research and advisory company. Its research and analysis focus on emerging market trends and provide actionable insights to help businesses identify market opportunities and develop effective strategies to optimize their market positions.

With over 500 specialized analysts, Technavio's report library consists of more than 17,000 reports and counting, covering 800 technologies and spanning 50 countries. Its client base consists of enterprises of all sizes, including more than 100 Fortune 500 companies. This growing client base relies on Technavio's comprehensive coverage, extensive research, and actionable market insights to identify opportunities in existing and potential markets and assess their competitive positions within changing market scenarios.


What Is Machine Learning? | How It Works, Techniques …

Posted: January 27, 2020 at 8:47 pm



Supervised Learning

Supervised machine learning builds a model that makes predictions based on evidence in the presence of uncertainty. A supervised learning algorithm takes a known set of input data and known responses to the data (output) and trains a model to generate reasonable predictions for the response to new data. Use supervised learning if you have known data for the output you are trying to predict.

Supervised learning uses classification and regression techniques to develop predictive models.

Classification techniques predict discrete responses, for example, whether an email is genuine or spam, or whether a tumor is cancerous or benign. Classification models classify input data into categories. Typical applications include medical imaging, speech recognition, and credit scoring.

Use classification if your data can be tagged, categorized, or separated into specific groups or classes. For example, applications for handwriting recognition use classification to recognize letters and numbers. In image processing and computer vision, unsupervised pattern recognition techniques are used for object detection and image segmentation.

Common algorithms for performing classification include support vector machine (SVM), boosted and bagged decision trees, k-nearest neighbor, Naïve Bayes, discriminant analysis, logistic regression, and neural networks.
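As a brief illustration of one of these techniques, the sketch below trains an SVM classifier on a labeled set of digit images, in the spirit of the letters-and-numbers example above. scikit-learn is an assumed toolchain; the excerpt itself does not prescribe one.

```python
# One of the listed classification techniques (an SVM) on a small labeled set.
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = datasets.load_digits(return_X_y=True)   # labeled images of digits 0-9
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)  # train on known input/response pairs
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```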

Regression techniques predict continuous responses, for example, changes in temperature or fluctuations in power demand. Typical applications include electricity load forecasting and algorithmic trading.

Use regression techniques if you are working with a data range or if the nature of your response is a real number, such as temperature or the time until failure for a piece of equipment.

Common regression algorithms include linear model, nonlinear model, regularization, stepwise regression, boosted and bagged decision trees, neural networks, and adaptive neuro-fuzzy learning.
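A regression counterpart, using an invented temperature-versus-load series in the spirit of the electricity load forecasting example, might look like this.

```python
# Predict a continuous response with one of the listed techniques (a linear model).
# The synthetic "load vs. temperature" data is invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

temperature = np.array([[10.0], [15.0], [20.0], [25.0], [30.0]])
load = np.array([52.0, 50.0, 55.0, 63.0, 74.0])  # e.g. electricity demand

reg = LinearRegression().fit(temperature, load)
print(reg.predict([[22.0]]))  # continuous prediction for an unseen temperature
```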


Regulators Begin to Accept Machine Learning to Improve AML, But There Are Major Issues – PaymentsJournal

Posted: at 8:47 pm



This wide-ranging article identifies how regulators have slowly opened up to accepting the use of machine learning models as a method of detecting AML activity, yet they remain concerned about the models' lack of transparency. It reviews public comments made by key regulators regarding the technology and the need to balance detection against inhibiting commerce and protecting privacy.

Here is one small part of the article that is well worth reading if you are interested in AML processing:

At the Fintech and the New Financial Landscape conference in Philadelphia, Pennsylvania, in November 2018, Dr. Lael Brainard presented her view of the potential for AI and machine learning. In short, while Dr. Brainard is bullish on the transformative capabilities of AI and machine learning, she is cautious about the explainability and auditability of black-box AI models. She states the need for guardrails to contain AI risk, while observing safety and soundness and consumer financial protection.

In her address, entitled "What Are We Learning about Artificial Intelligence in Financial Services?", she told delegates she is optimistic about the potential for AI and machine learning in particular, but guarded on how new machine learning models can be audited.

Dr. Brainard's well-informed speech begins: "Modern machine learning applies and refines, or trains, a series of algorithms on a large data set by optimizing iteratively as it learns in order to identify patterns and make predictions for new data. Machine learning essentially imposes much less structure on how data is interpreted compared to conventional approaches in which programmers impose ex ante rule sets to make decisions."

She accurately states the value of machine learning when applied to banking AML and loan processing; here are quotes from her remarks:

1. Firms view AI approaches as potentially having superior ability for pattern recognition, such as identifying relationships among variables that are not intuitive or not revealed by more traditional modeling.

2. Firms see potential cost efficiencies where AI approaches may be able to arrive at outcomes more cheaply with no reduction in performance.

3. AI approaches might have greater accuracy in processing because of their greater automation compared to approaches that have more human input and higher operator error.

4. Firms may see better predictive power with AI compared to more traditional approaches, for instance, in improving investment performance or expanding credit access.

5. AI approaches are better than conventional approaches at accommodating very large and less-structured data sets and processing those data more efficiently and effectively.

Dr. Brainard continues, "The question is how should we approach regulation and supervision? It is incumbent on regulators to review the potential consequences of AI, including the possible risks, and take a balanced view about its use by supervised firms. Regulation and supervision need to be thoughtfully designed so that they ensure risks are appropriately mitigated but do not stand in the way of responsible innovations that might expand access and convenience for consumers and small businesses or bring greater efficiency, risk detection, and accuracy."

Overview by Tim Sloane, VP, Payments Innovation at Mercator Advisory Group


Iguazio Deployed by Payoneer to Prevent Fraud with Real-time Machine Learning – Yahoo Finance

Posted: at 8:47 pm



Payoneer uses Iguazio to move from detection to prevention of fraud, with predictive machine learning models served in real time.

Iguazio, the data science platform for real-time machine learning applications, today announced that Payoneer, the digital payment platform empowering businesses around the world to grow globally, has selected Iguazio's platform to provide its 4 million customers with a safer payment experience. By deploying Iguazio, Payoneer moved from a reactive fraud detection method to proactive prevention with real-time machine learning and predictive analytics.

Payoneer tackles the challenge of detecting fraud within complex networks with sophisticated algorithms tracking multiple parameters, including account creation times and name changes. However, prior to using Iguazio, fraud was detected retroactively, meaning Payoneer could only block users after damage had already been done. Payoneer is now able to take the same sophisticated machine learning models built offline and serve them in real time against fresh data. This ensures immediate prevention of fraud and money laundering, with predictive machine learning models identifying suspicious patterns continuously. The cooperation was facilitated by Belocal, a leading data and IT solution integrator for mid-size and enterprise companies.
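The offline-train, online-score split described here can be sketched generically: fit a fraud model on historical labeled transactions, then score each incoming event against a probability threshold. The features, labels, and threshold below are invented, and this is not Iguazio's actual API.

```python
# Generic sketch of the pattern described: train offline, score fresh events online.
import numpy as np
from sklearn.linear_model import LogisticRegression

# --- offline: fit on historical, labeled transactions ---
rng = np.random.default_rng(1)
X_hist = rng.normal(size=(1000, 3))        # e.g. account age, name changes, amount
y_hist = (X_hist[:, 1] > 1.5).astype(int)  # synthetic "fraud" labels
model = LogisticRegression().fit(X_hist, y_hist)

# --- online: score each incoming event as it arrives ---
def score_event(event: np.ndarray, threshold: float = 0.9) -> bool:
    """Return True if the event should be blocked before completing."""
    return model.predict_proba(event.reshape(1, -1))[0, 1] >= threshold

print(score_event(np.array([0.1, 2.4, -0.3])))  # suspicious pattern, likely blocked
```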

"Weve tackled one of our most elusive challenges with real-time predictive models, making fraud attacks almost impossible on Payoneer" noted Yaron Weiss, VP Corporate Security and Global IT Operations (CISO) at Payoneer. "With Iguazios Data Science Platform, we built a scalable and reliable system which adapts to new threats and enables us to prevent fraud with minimum false positives".

"Payoneer is leading innovation in the industry of digital payments and we are proud to be a part of it" said Asaf Somekh, CEO, Iguazio. "Were glad to see Payoneer accelerating its ability to develop new machine learning based services, increasing the impact of data science on the business."

"Payoneer and Iguazio are a great example of technology innovation applied in real-world use-cases and addressing real market gaps" said Hugo Georlette, CEO, Belocal. "We are eager to continue selling and implementing Iguazios Data Science Platform to make business impact across multiple industries."

Iguazio's Data Science Platform enables Payoneer to bring its most intelligent data science strategies to life. Designed to provide a simple cloud experience deployed anywhere, it includes a low-latency serverless framework, a real-time multi-model data engine and a modern Python ecosystem running over Kubernetes.

Earlier today, Iguazio also announced having raised $24M from existing and new investors, including Samsung SDS and Kensington Capital Partners. The new funding will be used to drive future product innovation and support global expansion into new and existing markets.

About Iguazio

The Iguazio Data Science Platform enables enterprises to develop, deploy and manage AI applications at scale. With Iguazio, companies can run AI models in real time and deploy them anywhere (multi-cloud, on-prem or edge), bringing to life their most ambitious data-driven strategies. Enterprises spanning a wide range of verticals, including financial services, manufacturing, telecoms and gaming, use Iguazio to create business impact through a multitude of real-time use cases. Iguazio is backed by top financial and strategic investors including Samsung, Verizon, Bosch, CME Group, and Dell. The company is led by serial entrepreneurs and a diverse team of innovators in the USA, UK, Singapore and Israel. Find out more at http://www.iguazio.com.

About Belocal

Since its inception in 2006, Belocal has experienced consistent and sustainable growth by developing strong long-term relationships with its technology partners and by providing tremendous value to its clients. We pride ourselves on delivering the most innovative technology solutions, enabling our customers to lead their market segments and stay ahead of the competition. At Belocal, we pride ourselves on our ability to listen, our attention to detail and our expertise in innovation. Such strengths have enabled us to develop new solutions and services to suit the changing needs of our clients and to acquire new business by tailoring all our solutions and services to the specific needs of each client.

View source version on businesswire.com: https://www.businesswire.com/news/home/20200127005311/en/

Contacts

Iguazio Media Contact: Sahar Dolev-Blitental, +972.73.321.0401 press@iguazio.com


Short- and long-term impacts of machine learning on contact centres – Which-50

Posted: at 8:47 pm



Which-50 and LogMeIn recently surveyed call centre managers and C-Suite executives with responsibility for the customer, asking them to nominate the technologies they believe will be most transformative.

AI & machine learning was nominated by more than three quarters of respondents, making it the top pick.

We asked Ryan Lester, Senior Director, Customer Experience Technologies at LogMeIn, to describe where the short- and longer-term impacts of AI are most likely to be felt, and also to describe the impact on contact centre agents.

Lester told Which-50 that AI is the broader umbrella, and machine learning comprises the algorithms you build to improve the quality of your predictions.

He said brands should be very thoughtful if they are going to do machine learning themselves and invest in machine learning teams. However, he recommended that companies don't do that.

Rather, he said there are plenty of off-the-shelf solutions that are purpose-built for contact centres or for conversion metrics.

He said, "You can buy a business application versus buying, let's say, a machine learning tool or platform."

Lester said that in the immediate term, what companies can do to avoid some of the challenges of a bad investment is to use AI as their first-round listening mechanism. Brands can leverage a solution built for the contact centre, and it will listen to customer conversations over phone calls.

"Then LogMeIn can see certain intents," Lester said. "So I'll say, here are intents I'm seeing. You can also take large databases. If you have chat records from the last year, you can stick those into AI tools that will start to help you identify intents."

"You can take historical data and use it as a place to say, well, we should go investigate further here, and then start building more purposeful applications around those workflows."
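A rough sketch of this kind of intent mining over historical chat records: vectorize the text and cluster it, then inspect each cluster as a candidate intent. The sample chats and cluster count are invented, and purpose-built contact-centre tools do this with far more sophistication.

```python
# Illustrative intent discovery: cluster historical chat records into candidate intents.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

chats = [
    "I want to reset my password",
    "how do I change my password",
    "where is my order",
    "my order has not arrived",
    "cancel my subscription please",
    "how can I cancel my plan",
]

X = TfidfVectorizer(stop_words="english").fit_transform(chats)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Each cluster is a candidate intent for a human to review and name.
for intent_id in range(3):
    print(intent_id, [c for c, l in zip(chats, labels) if l == intent_id])
```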

He said companies should build around their existing workflows, focusing on those workflows today before investing heavily in technology, research or headcount.

The longer-term impact of machine learning is moving away from inbound response.

Lester said when a customer is contacting a company about a specific problem, the company should operationalise it. That means making it more efficient, by making it self-service or reducing delivery costs. Companies want to align the right resource to the right problem.

"Where there's an opportunity longer term is to think about more of the entire customer lifecycle," he explained.

Lester said AI will help to discover what types of customers brands should be engaging with through leading indicators.

"We should start being more proactive about engagement for these types of customers with these types of attributes. If we're seeing retention challenges on particular types of customers, we should be offering up those types of offerings to those customers."

He believes many of the conversations are still really about inbound customer service, when in the longer term there's going to be a much bigger opportunity around the entire customer lifecycle.

"Saying, for these types of customers we acquired this way, here's how we're upselling them, here's how we're better retaining them, and looking much more at the lifecycle and how AI is helping across that entire lifecycle."

Athina Mallis is the editor of the Which-50 Digital Intelligence Unit, of which LogMeIn is a corporate member. Members provide their insights and expertise for the benefit of the Which-50 community. Membership fees apply.


