
Archive for the ‘Machine Learning’ Category

What are the top AI platforms? – Gigabit Magazine – Technology News, Magazine and Website

Posted: March 29, 2020 at 2:45 pm


without comments

Business Overview

Microsoft AI is a platform used to develop AI solutions in conversational AI, machine learning, data sciences, robotics, IoT, and more.

Microsoft AI prides itself on driving innovation through protecting wildlife, better brewing, feeding the world, and preserving history.

Its Cognitive Services is described as "a comprehensive family of AI services and cognitive APIs to help you build intelligent apps."

Executives

Tom Bernard Krake is the Azure Cloud Executive at Microsoft, responsible for leveraging and evaluating the Azure platform. Tom is joined by a team of experienced executives to optimise the Azure platform and oversee the many cognitive services that it provides.

Notable customers

Uber uses Cognitive Services to boost its security through facial recognition to ensure that the driver using the app matches the user that is on file.

KPMG helps financial institutions save millions in compliance costs through the use of Microsoft's Cognitive Services. It does this by transcribing and logging thousands of hours of calls, reducing compliance costs by as much as 80 per cent.

Jet.com uses Cognitive Services to provide answers to its customers by infusing its customer chatbot with the intelligence to communicate using natural language.

The services:

Decision - Make smarter decisions faster through anomaly detectors, content moderators and personalizers.

Language - Extract meaning from unstructured text through Immersive Reader, Language Understanding, QnA Maker, Text Analytics and Translator Text.

Speech - Integrate speech processing into apps and services through speech-to-text, text-to-speech, speech translation and speaker recognition.

Vision - Identify and analyse content within images, videos and digital ink through computer vision, custom vision, face, form recogniser, ink recogniser and video indexer.

Web Search - Find what you are looking for on the web through autosuggest, custom search, entity search, image search, news search, spell check, video search, visual search and web search.
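As a concrete illustration of how these services are consumed, here is a minimal sketch of calling a Language-family endpoint (sentiment analysis) over REST. The `documents` payload shape and the `Ocp-Apim-Subscription-Key` header follow Azure's documented pattern; the resource name and key are placeholders, not real credentials.

```python
# Hedged sketch: posting text to an Azure Cognitive Services Language endpoint.
# The resource URL and subscription key below are placeholders you would
# replace with your own Azure resource's values.
import json
import urllib.request

AZURE_ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
AZURE_KEY = "<your-subscription-key>"  # placeholder

def build_sentiment_request(texts):
    """Package raw strings into the 'documents' payload the Language APIs expect."""
    documents = [
        {"id": str(i), "language": "en", "text": t}
        for i, t in enumerate(texts, start=1)
    ]
    return {"documents": documents}

def send_request(payload):
    """POST the payload; requires a real endpoint and key to actually run."""
    req = urllib.request.Request(
        AZURE_ENDPOINT + "/text/analytics/v3.0/sentiment",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Ocp-Apim-Subscription-Key": AZURE_KEY,
            "Content-Type": "application/json",
        },
    )
    return urllib.request.urlopen(req)

payload = build_sentiment_request(["The driver matched the photo on file."])
```

The response is JSON containing a per-document sentiment label and confidence scores, which an app such as a chatbot can act on directly.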

Read more:

What are the top AI platforms? - Gigabit Magazine - Technology News, Magazine and Website

Written by admin

March 29th, 2020 at 2:45 pm

Posted in Machine Learning

With Launch of COVID-19 Data Hub, The White House Issues A ‘Call To Action’ For AI Researchers – Machine Learning Times – machine learning & data…

Posted: at 2:45 pm


without comments

Originally published in TechCrunch, March 16, 2020

In a briefing on Monday, research leaders across tech, academia and the government joined the White House to announce an open data set full of scientific literature on the novel coronavirus. The COVID-19 Open Research Dataset, known as CORD-19, will also add relevant new research moving forward, compiling it into one centralized hub. The new data set is machine readable, making it easily parsed for machine learning purposes, a key advantage according to researchers involved in the ambitious project.

In a press conference, U.S. CTO Michael Kratsios called the new data set "the most extensive collection of machine readable coronavirus literature to date." Kratsios characterized the project as a "call to action" for the AI community, which can employ machine learning techniques to surface unique insights in the body of data. To come up with guidance for researchers combing through the data, the National Academies of Sciences, Engineering, and Medicine collaborated with the World Health Organization to come up with high priority questions about the coronavirus related to genetics, incubation, treatment, symptoms and prevention.

The partnership, announced today by the White House Office of Science and Technology Policy, brings together the Chan Zuckerberg Initiative, Microsoft Research, the Allen Institute for Artificial Intelligence, the National Institutes of Health's National Library of Medicine, Georgetown University's Center for Security and Emerging Technology, Cold Spring Harbor Laboratory and the Kaggle AI platform, owned by Google.

The database brings together nearly 30,000 scientific articles about the virus known as SARS-CoV-2, as well as related viruses in the broader coronavirus group. Around half of those articles make the full text available. Critically, the database will include pre-publication research from resources like medRxiv and bioRxiv, open access archives for pre-print health sciences and biology research.
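Because the data set is machine readable, filtering it for one of the priority topics is a few lines of code. The sketch below assumes the CSV metadata file has `title`, `abstract` and `publish_time` columns, which matches how CORD-19's `metadata.csv` was distributed, but treat the column names as an assumption if the schema changes; the sample rows here are invented for illustration.

```python
# Hedged sketch: scanning CORD-19-style metadata for papers on a priority topic.
# Column names (title, abstract, publish_time) are assumed from the dataset's
# metadata.csv; the two sample rows are fabricated for this example.
import csv
import io

SAMPLE_METADATA = """title,abstract,publish_time
"Incubation period of SARS-CoV-2","We estimate the incubation period...",2020-03-10
"Bat coronaviruses in China","A survey of related viruses...",2019-06-01
"""

def find_papers(metadata_file, keyword):
    """Return titles of rows whose title or abstract mentions the keyword."""
    hits = []
    for row in csv.DictReader(metadata_file):
        text = (row["title"] + " " + row["abstract"]).lower()
        if keyword.lower() in text:
            hits.append(row["title"])
    return hits

papers = find_papers(io.StringIO(SAMPLE_METADATA), "incubation")
# papers == ["Incubation period of SARS-CoV-2"]
```

In practice a researcher would run the same scan over the full corpus, or feed the matched abstracts into a more sophisticated NLP pipeline.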

To continue reading this article, click here.

Read more:

With Launch of COVID-19 Data Hub, The White House Issues A 'Call To Action' For AI Researchers - Machine Learning Times - machine learning & data...

Written by admin

March 29th, 2020 at 2:45 pm

Posted in Machine Learning

Deep Learning to Be Key Driver for Expansion and Adoption of AI in Asia-Pacific, Says GlobalData – MarTech Series

Posted: at 2:45 pm


without comments

Deep learning, a subset of machine learning and artificial intelligence (AI), is predicted to provide formidable momentum for the adoption and growth of artificial intelligence in the Asia-Pacific (APAC) region. The next few years will see deep learning become part of mainstream deployments, bringing commendable changes to businesses in the region, says GlobalData, a leading data and analytics company.

GlobalData estimates that the APAC region will account for approximately 30% of global AI platform revenue (around US$97.5bn) by 2024. However, that share is expected to rise significantly, given the incumbent technology companies and the increasing number of start-ups that specialize in this field.

Furthermore, technological enhancements supporting higher computation capabilities (CPU and GPU), together with the huge amount of data, which is predicted to grow many times over as the connected-device ecosystem expands, are expected to contribute to this growth.

Marketing Technology News: SalesHood Steps Up to Offer Free Usage of Its Sales Enablement Platform During COVID-19 Crisis

Digital assistants like Cortana, Siri, GoogleNow and Alexa leverage deep learning to some extent for natural language processing (NLP) as well as speech recognition. Some of the other key usage areas of deep learning include multi-lingual chatbots, voice and image recognition, data processing, surveillance, fraud detection and diagnostics.

Sunil Kumar Verma, Lead ICT Analyst at GlobalData, comments: "The APAC market is proactively deploying deep learning-based AI solutions to bring increased offline automation, safety and security to businesses and their assets. In addition, AI hardware optimization with increased computing speed on small devices will result in cost reduction and drive deep learning adoption across the region."

In APAC, deep learning is increasingly being adopted for various applications, driven by product launches and technical enhancements by regional technology vendors.

Marketing Technology News: Cognizant to Acquire Lev to Expand Digital Marketing Expertise

For instance, China-based SenseTime leverages its deep learning platform to offer customers image recognition, intelligent video analytics and medical image recognition through its facial recognition technology, called DeepID. Similarly, DeepSight AI Labs, an India-based start-up that also operates in the US, uses deep learning to develop SuperSecure Platform, a smart retrofit video surveillance solution that works on any CCTV to provide a contextualized AI solution to detect objects and behaviors.

Australia-based Daisee, too, offers an algorithm, called Lisa, which leverages a speech-to-text engine to identify key conversational elements, determine their meaning and derive their context. Similarly, Cognitive Software Group is using deep learning and machine learning to tag unstructured data and enhance natural language understanding.

Verma concludes: "Although still in its infancy, deep learning is proving to be a stepping stone for technology landscape evolution in APAC. However, with the lack of skilled professionals and the fact that only a handful of technology companies are focussing on investing, hiring and training their workforce specifically for deep learning, there would be some initial roadblocks before witnessing success in adoption rates."

Marketing Technology News: COVID-19 Phishes Explode as U.S. Reels From Pandemic

MarTech Series (MTS) is a business publication dedicated to helping marketers get more from marketing technology through in-depth journalism, expert author blogs and research reports.

We publish high quality, relevant Marketing Technology Insights to help the business community advance martech knowledge and develop new martech skills. Our focus is on bringing marketers the latest business trends, products and practices affecting their marketing strategy.

We help our readers make sense of the rapidly evolving martech landscape, and cover the incredible impact of marketing technologies adoption on the way we do business.

Read this article:

Deep Learning to Be Key Driver for Expansion and Adoption of AI in Asia-Pacific, Says GlobalData - MarTech Series

Written by admin

March 29th, 2020 at 2:45 pm

Posted in Machine Learning

AI Is Changing Work and Leaders Need to Adapt – Harvard Business Review

Posted: at 2:45 pm


without comments

Executive Summary

Recent empirical research by the MIT-IBM Watson AI Lab provides new insight into how work is changing in the face of AI. Based on this research, the author provides a roadmap for leaders intent on adapting their workforces and reallocating capital, while also delivering profitability. The author argues that the key to unlocking the productivity potential while delivering on business objectives lies in three key strategies: rebalancing resources, investing in workforce reskilling and, on a larger scale, advancing new models of education and lifelong learning.

As AI is increasingly incorporated into our workplaces and daily lives, it is poised to fundamentally upend the way we live and work. Concern over this looming shift is widespread. A recent survey of 5,700 Harvard Business School alumni found that 52% of even this elite group believe the typical company will employ fewer workers three years from now.

The advent of AI poses new and unique challenges for business leaders. They must continue to deliver financial performance, while simultaneously making significant investments in hiring, workforce training, and new technologies that support productivity and growth. These seemingly competing business objectives can make for difficult, often agonizing, leadership decisions.

Against this backdrop, recent empirical research by our team at the MIT-IBM Watson AI Lab provides new insight into how work is changing in the face of AI. By examining these findings, we can create a roadmap for leaders intent on adapting their workforces and reallocating capital, while also delivering profitability.

The stakes are high. AI is an entirely new kind of technology, one that has the ability to anticipate future needs and provide recommendations to its users. For business leaders, that unique capability has the potential to increase employee productivity by taking on administrative tasks, providing better pricing recommendations to sellers, and streamlining recruitment, to name a few examples.

For business leaders navigating the AI workforce transition, the key to unlocking the productivity potential while delivering on business objectives lies in three key strategies: rebalancing resources, investing in workforce reskilling and, on a larger scale, advancing new models of education and lifelong learning.

Our research report offers a window into how AI will change workplaces through the rebalancing and restructuring of occupations. Using AI and machine learning techniques, our MIT-IBM Watson AI Lab team analyzed 170 million online job posts between 2010 and 2017. The study's first implication: while occupations change slowly, over years and even decades, tasks become reorganized at a much faster pace.

Jobs are a collection of tasks. As workers take on jobs in various professions and industries, it is the tasks they perform that create value. With the advancement of technology, some existing tasks will be replaced by AI and machine learning. But our research shows that only 2.5% of jobs include a high proportion of tasks suitable for machine learning. These include positions like usher, lobby attendant, and ticket taker, where the main tasks involve verifying credentials and allowing only authorized people to enter a restricted space.

Most tasks will still be best performed by humans whether craft workers like plumbers, electricians and carpenters, or those who do design or analysis requiring industry knowledge. And new tasks will emerge that require workers to exercise new skills.

As this shift occurs, business leaders will need to reallocate capital accordingly. Broad adoption of AI may require additional research and development spending. Training and reskilling employees will very likely require temporarily removing workers from revenue-generating activities.

More broadly, salaries and other forms of employee compensation will need to reflect the shifting value of tasks all along the organization chart. Our research shows that as technology reduces the cost of some tasks because they can be done in part by AI, the value workers bring to the remaining tasks increases. Those tasks tend to require grounding in intellectual skill and insight, something AI isn't as good at as people.

In high-wage business and finance occupations, for example, compensation for tasks requiring industry knowledge increased by more than $6,000, on average, between 2010 and 2017. By contrast, average compensation for manufacturing and production tasks fell by more than $5,000 during that period. As AI continues to reshape the workplace, business leaders who are mindful of this shifting calculus will come out ahead.

Companies today are held accountable not only for delivering shareholder value, but for positively impacting stakeholders such as customers, suppliers, communities and employees. Moreover, investment in talent and other stakeholders is increasingly considered essential to delivering long-term financial results. These new expectations are reflected in the Business Roundtable's recently revised statement on corporate governance, which underscores corporations' obligation to support employees through training and education that help develop new skills for a rapidly changing world.

Millions of workers will need to be retrained or reskilled as a result of AI over the next three years, according to a recent IBM Institute for Business Value study. Technical training will certainly be a necessary component. As tasks requiring intellectual skill, insight and other uniquely human attributes rise in value, executives and managers will also need to focus on preparing workers for the future by fostering and growing people skills such as judgement, creativity and the ability to communicate effectively. Through such efforts, leaders can help their employees make the shift to partnering with intelligent machines as tasks transform and change in value.

As AI continues to scale within businesses and across industries, it is incumbent upon innovators and business leaders to understand not only the business process implications, but also the societal impact. Beyond the need for investment in reskilling within organizations today, executives should work alongside policymakers and other public and private stakeholders to provide support for education and job training, encouraging investment in training and reskilling programs for all workers.

Our research shows that technology can disproportionately impact the demand and earning potential for mid-wage workers, causing a squeeze on the middle class. For every five tasks that shifted out of mid-wage jobs, we found, four tasks moved to low-wage jobs and one moved to a high-wage job. As a result, wages are rising faster in the low- and high-wage tiers than in the mid-wage tier.

New models of education and pathways to continuous learning can help address the growing skills gap, providing members of the middle class, as well as students and a broad array of mid-career professionals, with opportunities to build in-demand skills. Investment in all forms of education is key: community college, online learning, apprenticeships, or programs like P-TECH, a public-private partnership designed to prepare high school students for "new collar" technical jobs like cloud computing and cybersecurity.

Whether it is workers who are asked to transform their skills and ways of working, or leaders who must rethink everything from resource allocation to workforce training, fundamental economic shifts are never easy. But if AI is to fulfill its promise of improving our work lives and raising living standards, senior leaders must be ready to embrace the challenges ahead.

Read the original post:

AI Is Changing Work and Leaders Need to Adapt - Harvard Business Review

Written by admin

March 29th, 2020 at 2:45 pm

Posted in Machine Learning

Why AI might be the most effective weapon we have to fight COVID-19 – The Next Web

Posted: March 22, 2020 at 4:41 am


without comments

If not the most deadly, the novel coronavirus (COVID-19) is one of the most contagious diseases to have hit our green planet in the past decades. In little over three months since the virus was first spotted in mainland China, it has spread to more than 90 countries, infected more than 185,000 people, and taken more than 3,500 lives.

As governments and health organizations scramble to contain the spread of coronavirus, they need all the help they can get, including from artificial intelligence. Though current AI technologies are far from replicating human intelligence, they are proving to be very helpful in tracking the outbreak, diagnosing patients, disinfecting areas, and speeding up the process of finding a cure for COVID-19.

Data science and machine learning might be two of the most effective weapons we have in the fight against the coronavirus outbreak.

Just before the turn of the year, BlueDot, an artificial intelligence platform that tracks infectious diseases around the world, flagged a cluster of unusual pneumonia cases happening around a market in Wuhan, China. Nine days later, the World Health Organization (WHO) released a statement declaring the discovery of a novel coronavirus in a hospitalized person with pneumonia in Wuhan.

BlueDot uses natural language processing and machine learning algorithms to peruse information from hundreds of sources for early signs of infectious epidemics. The AI looks at statements from health organizations, commercial flights, livestock health reports, climate data from satellites, and news reports. With so much data being generated on coronavirus every day, the AI algorithms can help home in on the bits that can provide pertinent information on the spread of the virus. It can also find important correlations between data points, such as the movement patterns of the people who are living in the areas most affected by the virus.

The company also employs dozens of experts who specialize in a range of disciplines including geographic information systems, spatial analytics, data visualization, computer sciences, as well as medical experts in clinical infectious diseases, travel and tropical medicine, and public health. The experts review the information that has been flagged by the AI and send out reports on their findings.

Combined with the assistance of human experts, BlueDot's AI can not only predict the start of an epidemic, but also forecast how it will spread. In the case of COVID-19, the AI successfully identified the cities where the virus would be transferred to after it surfaced in Wuhan. Machine learning algorithms studying travel patterns were able to predict where the people who had contracted coronavirus were likely to travel.
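The core idea of flagging an outbreak early can be reduced to a toy example: count disease-related reports per location and flag any location whose count jumps well above its recent baseline. This is an illustration of the general anomaly-detection pattern, not BlueDot's actual system, and the report counts below are invented.

```python
# Toy illustration (not BlueDot's real pipeline): flag a location when today's
# count of unusual-pneumonia reports jumps far above its recent baseline.
from statistics import mean, pstdev

def flag_anomalies(history, today, threshold=3.0):
    """history: {city: list of recent daily report counts};
    today: {city: today's count}.
    Flag cities whose count exceeds mean + threshold * stdev (stdev floored at 1)."""
    flagged = []
    for city, counts in history.items():
        baseline = mean(counts)
        spread = max(pstdev(counts), 1.0)  # floor avoids zero-variance baselines
        if today.get(city, 0) > baseline + threshold * spread:
            flagged.append(city)
    return flagged

history = {"Wuhan": [0, 1, 0, 1, 0], "Bangkok": [0, 0, 1, 0, 0]}
flagged = flag_anomalies(history, {"Wuhan": 27, "Bangkok": 1})
# flagged == ["Wuhan"]
```

A production system layers NLP over unstructured sources to produce those counts in the first place, and, as the article notes, routes every flag to human experts for review.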

Coronavirus (COVID-19) (Image source: NIAID)

You have probably seen the COVID-19 screenings at border crossings and airports. Health officers use thermometer guns and visually check travelers for signs of fever, coughing, and breathing difficulties.

Now, computer vision algorithms can perform the same task at large scale. An AI system developed by Chinese tech giant Baidu uses cameras equipped with computer vision and infrared sensors to predict people's temperatures in public areas. The system can screen up to 200 people per minute and detect their temperature within a range of 0.5 degrees Celsius. The AI flags anyone who has a temperature above 37.3 degrees. The technology is now in use in Beijing's Qinghe Railway Station.
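The decision rule after the computer-vision stage is simple: compare each estimated temperature against the cutoff. A minimal sketch, using the 37.3 °C threshold the article reports (the person IDs and readings are invented):

```python
# Minimal sketch of the screening rule described above: flag anyone whose
# estimated temperature exceeds the reported 37.3 °C cutoff.
FEVER_CUTOFF_C = 37.3

def screen(readings):
    """readings: {person_id: estimated temperature in °C}; return flagged ids."""
    return [pid for pid, temp in readings.items() if temp > FEVER_CUTOFF_C]

flagged = screen({"p1": 36.8, "p2": 38.1, "p3": 37.3})
# flagged == ["p2"]  (37.3 itself is not above the cutoff)
```

The hard part, of course, is the upstream estimation: producing a reading within 0.5 °C from infrared imagery of a moving crowd, which is where the deep learning models do their work.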

Alibaba, another Chinese tech giant, has developed an AI system that can detect coronavirus in chest CT scans. According to the researchers who developed the system, the AI has 96-percent accuracy. The AI was trained on data from 5,000 coronavirus cases and can perform the test in 20 seconds, as opposed to the 15 minutes it takes a human expert to diagnose patients. It can also tell the difference between coronavirus and ordinary viral pneumonia. The algorithm can give a boost to the medical centers that are already under a lot of pressure to screen patients for COVID-19 infection. The system is reportedly being adopted in 100 hospitals in China.

A separate AI developed by researchers from Renmin Hospital of Wuhan University, Wuhan EndoAngel Medical Technology Company, and the China University of Geosciences purportedly shows 95-percent accuracy in detecting COVID-19 in chest CT scans. The system is a deep learning algorithm trained on 45,000 anonymized CT scans. According to a preprint paper published on medRxiv, the AI's performance is comparable to that of expert radiologists.
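For readers unfamiliar with the metric, the quoted accuracy figures are simply the fraction of cases the model labels correctly. A minimal sketch, with invented predictions and ground-truth labels:

```python
# Accuracy as used in the figures above: correct predictions over total cases.
# The prediction and label vectors here are fabricated for illustration.
def accuracy(predictions, labels):
    """1 = COVID-19 positive, 0 = negative; inputs must be equal length."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

acc = accuracy([1, 1, 0, 1, 0], [1, 1, 0, 0, 0])
# acc == 0.8
```

Note that for diagnostic models, accuracy alone can mislead when positives are rare; the radiologist-comparison studies typically report sensitivity and specificity as well.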

One of the main ways to prevent the spread of the novel coronavirus is to reduce contact between infected patients and people who have not contracted the virus. To this end, several companies and organizations have engaged in efforts to automate some of the procedures that previously required health workers and medical staff to interact with patients.

Chinese firms are using drones and robots to perform contactless delivery and to spray disinfectants in public areas to minimize the risk of cross-infection. Other robots are checking people for fever and other COVID-19 symptoms and dispensing free hand sanitizer foam and gel.

Inside hospitals, robots are delivering food and medicine to patients and disinfecting their rooms to obviate the need for the presence of nurses. Other robots are busy cooking rice without human supervision, reducing the number of staff required to run the facility.

In Seattle, doctors used a robot to communicate with and treat patients remotely to minimize exposure of medical staff to infected people.

At the end of the day, the war on the novel coronavirus is not over until we develop a vaccine that can immunize everyone against the virus. But developing new drugs and medicine is a very lengthy and costly process. It can cost more than a billion dollars and take up to 12 years. That's the kind of timeframe we don't have as the virus continues to spread at an accelerating pace.

Fortunately, AI can help speed up the process. DeepMind, the AI research lab acquired by Google in 2014, recently declared that it has used deep learning to find new information about the structure of proteins associated with COVID-19. This is a process that could have taken many more months.

Understanding protein structures can provide important clues to the coronavirus vaccine formula. DeepMind is one of several organizations who are engaged in the race to unlock the coronavirus vaccine. It has leveraged the result of decades of machine learning progress as well as research on protein folding.

"It's important to note that our structure prediction system is still in development and we can't be certain of the accuracy of the structures we are providing, although we are confident that the system is more accurate than our earlier CASP13 system," DeepMind's researchers wrote on the AI lab's website. "We confirmed that our system provided an accurate prediction for the experimentally determined SARS-CoV-2 spike protein structure shared in the Protein Data Bank, and this gave us confidence that our model predictions on other proteins may be useful."

Although it's too early to tell whether we're headed in the right direction, the efforts are commendable. Every day saved in finding the coronavirus vaccine can save hundreds, or thousands, of lives.

This story is republished from TechTalks, the blog that explores how technology is solving problems and creating new ones.

Published March 21, 2020 17:00 UTC

Continue reading here:

Why AI might be the most effective weapon we have to fight COVID-19 - The Next Web

Written by admin

March 22nd, 2020 at 4:41 am

Posted in Machine Learning

Are machine-learning-based automation tools good enough for storage management and other areas of IT? Let us know – The Register

Posted: at 4:41 am


without comments

Reader survey: We hear a lot these days about IT automation. Yet whether it's labelled intelligent infrastructure, AIOps, self-driving IT, or even private cloud, the aim is the same.

And that aim is to use the likes of machine learning, workflow automation, and infrastructure-as-code to automatically make changes in real time, eliminating as much as possible of the manual drudgery associated with routine IT administration.

Are the latest AI/ML-powered intelligent automation solutions trustworthy and ready for mainstream deployment, particularly in areas such as storage management?

Should we go ahead and implement the technology now on offer?

This controversial topic is the subject of our latest reader survey, and we are eager to hear your views.

Please complete our short survey, here.

As always, your responses will be anonymous and your privacy assured.


Read more from the original source:

Are machine-learning-based automation tools good enough for storage management and other areas of IT? Let us know - The Register

Written by admin

March 22nd, 2020 at 4:41 am

Posted in Machine Learning

With launch of COVID-19 data hub, the White House issues a call to action for AI researchers – TechCrunch

Posted: at 4:41 am


without comments

In a briefing on Monday, research leaders across tech, academia and the government joined the White House to announce an open data set full of scientific literature on the novel coronavirus. The COVID-19 Open Research Dataset, known as CORD-19, will also add relevant new research moving forward, compiling it into one centralized hub. The new data set is machine readable, making it easily parsed for machine learning purposes, a key advantage according to researchers involved in the ambitious project.

In a press conference, U.S. CTO Michael Kratsios called the new data set "the most extensive collection of machine readable coronavirus literature to date." Kratsios characterized the project as a "call to action" for the AI community, which can employ machine learning techniques to surface unique insights in the body of data. To come up with guidance for researchers combing through the data, the National Academies of Sciences, Engineering, and Medicine collaborated with the World Health Organization to come up with high priority questions about the coronavirus related to genetics, incubation, treatment, symptoms and prevention.

The partnership, announced today by the White House Office of Science and Technology Policy, brings together the Chan Zuckerberg Initiative, Microsoft Research, the Allen Institute for Artificial Intelligence, the National Institutes of Health's National Library of Medicine, Georgetown University's Center for Security and Emerging Technology, Cold Spring Harbor Laboratory and the Kaggle AI platform, owned by Google.

The database brings together nearly 30,000 scientific articles about the virus known as SARS-CoV-2, as well as related viruses in the broader coronavirus group. Around half of those articles make the full text available. Critically, the database will include pre-publication research from resources like medRxiv and bioRxiv, open access archives for pre-print health sciences and biology research.

"Sharing vital information across scientific and medical communities is key to accelerating our ability to respond to the coronavirus pandemic," Chan Zuckerberg Initiative Head of Science Cori Bargmann said of the project.

The Chan Zuckerberg Initiative hopes that the global machine learning community will be able to help the science community connect the dots on some of the enduring mysteries about the novel coronavirus as scientists pursue knowledge around prevention, treatment and a vaccine.

For updates to the CORD-19 data set, the Chan Zuckerberg Initiative will track new research on a dedicated page on Meta, the research search engine the organization acquired in 2017.

The CORD-19 data set announcement is certain to roll out more smoothly than the White House's last attempt at a coronavirus-related partnership with the tech industry. The White House came under criticism last week for President Trump's announcement that Google would build a dedicated website for COVID-19 screening. In fact, the site was in development by Verily, Alphabet's life science research group, and intended to serve California residents, beginning with San Mateo and Santa Clara County. (Alphabet is the parent company of Google.)

The site, now live, offers risk screening through an online questionnaire to direct high-risk individuals toward local mobile testing sites. At this time, the project has no plans for a nationwide rollout.

Google later clarified that the company is undertaking its own efforts to bring crucial COVID-19 information to users across its products, but that may have become conflated with Verily's much more limited screening site rollout. On Twitter, Google's comms team noted that Google is indeed working with the government on a website, but not one intended to screen potential COVID-19 patients or refer them to local testing sites.

In a partial clarification over the weekend, Vice President Pence, one of the Trump administration's designated point people on the pandemic, indicated that the White House is working with Google but also working with many other tech companies. It's not clear if that means a central site will indeed launch soon out of a White House collaboration with Silicon Valley, but Pence hinted that might be the case. Whether that centralized site will handle screening and testing location referral is not clear.

"Our best estimate is that some point early in the week we will have a website that goes up," Pence said.

The rest is here:

With launch of COVID-19 data hub, the White House issues a call to action for AI researchers - TechCrunch

Written by admin

March 22nd, 2020 at 4:41 am

Posted in Machine Learning

Emerging Trend of Machine Learning in Retail Market 2019 by Company, Regions, Type and Application, Forecast to 2024 – Bandera County Courier

Posted: at 4:41 am


without comments

The latest report, titled "Global Machine Learning in Retail Market 2019 by Company, Regions, Type and Application, Forecast to 2024", unveils the rate at which the Machine Learning in Retail industry is anticipated to grow during the forecast period, 2019 to 2024. The report provides CAGR analysis, competitive strategies, growth factors and a regional outlook to 2024. It is a rich source of exhaustive study of the driving elements, limiting components, and different market changes. It describes the market structure and then forecasts several segments and sub-segments of the global market. The market study is provided on the basis of type, application, and manufacturer as well as geography. Different elements such as opportunities, drivers, restraints, challenges, market situation, market share, growth rate, future trends, risks, entry limits, sales channels and distributors are analyzed and examined within this report.

Exploring The Growth Rate Over A Period:

Business owners who want to expand their business can refer to this report, as it includes data regarding the rise in sales within a given consumer base for the forecast period, 2019 to 2024. The research analysts have provided a comparison between the Machine Learning in Retail market growth rate and product sales to allow business owners to gauge the success or failure of a specific product or service. They have also added driving factors such as demographics and revenue generated from other products to offer a better analysis of products and services by owners.

DOWNLOAD FREE SAMPLE REPORT: https://www.magnifierresearch.com/report-detail/7570/request-sample

Top industry players assessment: IBM, Microsoft, Amazon Web Services, Oracle, SAP, Intel, NVIDIA, Google, Sentient Technologies, Salesforce, ViSenze

Product type assessment based on the following types: Cloud Based, On-Premises

Application assessment based on application mentioned below: Online, Offline

Leading market regions covered in the report are: North America (United States, Canada and Mexico), Europe (Germany, France, UK, Russia and Italy), Asia-Pacific (China, Japan, Korea, India and Southeast Asia), South America (Brazil, Argentina, Colombia), Middle East and Africa (Saudi Arabia, UAE, Egypt, Nigeria and South Africa)

Main Features Covered In Global Machine Learning in Retail Market 2019 Report:

ACCESS FULL REPORT: https://www.magnifierresearch.com/report/global-machine-learning-in-retail-market-2019-by-7570.html

Moreover, the report covers supply chain analysis, regional marketing type analysis, international trade type analysis and consumer analysis of the Machine Learning in Retail market. Further, it examines manufacturing plants and technical data, capacity and commercial production date, R&D status, manufacturing area distribution, technology source, and raw materials sources. It also covers sales, merchants, brokers, wholesalers, research findings and conclusions, and information sources.

Customization of the Report: This report can be customized to meet the client's requirements. Please connect with our sales team (sales@magnifierresearch.com), who will ensure that you get a report that suits your needs. You can also get in touch with our executives on +1-201-465-4211 to share your research requirements.



Keeping Machine Learning Algorithms Humble and Honest in the Ethics-First Era – Datamation

Posted: at 4:41 am



By Davide Zilli, Client Services Director at Mind Foundry

Today, in so many industries, from manufacturing and life sciences to financial services and retail, we rely on algorithms to conduct large-scale machine learning analysis. They are hugely effective for problem-solving and beneficial for augmenting human expertise within an organization. But they are now under the spotlight for many reasons, and regulation is on the horizon, with Gartner projecting that four of the G7 countries will establish dedicated associations to oversee AI and ML design by 2023. It remains vital that we understand their reasoning and decision-making process at every step.

Algorithms need to be fully transparent in their decisions, easily validated and monitored by a human expert. Machine learning tools must introduce this full accountability to evolve beyond unexplainable "black box" solutions and eliminate the easy excuse of "the algorithm made me do it!"

Bias can be introduced into the machine learning process as early as the initial data upload and review stages. There are hundreds of parameters to take into consideration during data preparation, so it can often be difficult to strike a balance between removing bias and retaining useful data.

Gender, for example, might be a useful parameter when looking to identify specific disease risks or health threats, but using gender in many other scenarios is completely unacceptable if it risks introducing bias and, in turn, discrimination. Machine learning models will inevitably exploit any parameters, such as gender, in the data sets they have access to, so it is vital for users to understand the steps taken for a model to reach a specific conclusion.

Removing the complexity of the data science procedure will help users discover and address bias faster and better understand the expected accuracy and outcomes of deploying a particular model.

Machine learning tools with built-in explainability allow users to demonstrate the reasoning behind applying ML to tackle a specific problem, and ultimately to justify the outcome. First steps towards this explainability would be features in the ML tool that enable visual inspection of data, with the platform alerting users to potential bias during preparation, together with metrics on model accuracy and health, including the ability to visualize what the model is doing.

Beyond this, ML platforms can take transparency further by introducing full user visibility, tracking each step through a consistent audit trail. This records how and when data sets have been imported, prepared and manipulated during the data science process. It also helps ensure compliance with national and industry regulations, such as the European Union's GDPR "right to explanation" clause, and helps effectively demonstrate transparency to consumers.
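As an illustrative sketch only (no particular vendor's API; the step names and fields below are hypothetical), such an audit trail can be as simple as a timestamped, exportable log of every data-handling step:

```python
import json
from datetime import datetime, timezone

class AuditTrail:
    """A minimal, hypothetical audit log for data-preparation steps."""

    def __init__(self):
        self.records = []

    def log(self, action, **details):
        # Record what happened, to what, and exactly when.
        self.records.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "details": details,
        })

    def export(self):
        # A consistent, replayable record of how the data was handled.
        return json.dumps(self.records, indent=2)

trail = AuditTrail()
trail.log("import", source="patients.csv", rows=10000)
trail.log("drop_column", column="gender", reason="bias risk")
trail.log("train", model="logistic_regression", accuracy=0.91)
print(trail.export())
```

Because each entry carries a timestamp and the full details of the step, the log can be replayed to reproduce the same preparation pipeline, which is the replicability benefit discussed below.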

There is a further advantage here of allowing users to quickly replicate the same preparation and deployment steps, guaranteeing the same results from the same data, which is particularly vital for achieving time efficiencies on repetitive tasks. In the life sciences sector, for example, we find users are particularly keen on replicability and visibility, where ML becomes an important facility in areas such as clinical trials and drug discovery.

There are so many different model types that it can be a challenge to select and deploy the best model for a task. Deep neural network models, for example, are inherently less transparent than probabilistic methods, which typically operate in a more honest and transparent manner.

Here's where many machine learning tools fall short. They're fully automated, with no opportunity to review and select the most appropriate model. This may help users rapidly prepare data and deploy a machine learning model, but it provides little to no prospect of visual inspection to identify data and model issues.

An effective ML platform must help identify and advise on resolving possible bias in a model during the preparation stage; provide support through to creation, where it visualizes what the chosen model is doing and provides accuracy metrics; and continue into deployment, where it evaluates model certainty and provides alerts when a model requires retraining.

To build greater visibility into data preparation and model deployment, we should look towards ML platforms that incorporate testing features, where users can test a new data set and receive scores of the model's performance. This helps identify bias and make changes to the model accordingly.
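One simple testing feature of this kind is a per-group accuracy check: a large accuracy gap between groups on a held-out test set is a red flag for bias. The sketch below is illustrative only, with hypothetical labels and group attributes:

```python
import numpy as np

def group_accuracy(y_true, y_pred, groups):
    # Accuracy computed separately per group; a large gap between
    # groups suggests the model may be biased against one of them.
    scores = {}
    for g in np.unique(groups):
        mask = groups == g
        scores[g] = float((y_true[mask] == y_pred[mask]).mean())
    return scores

# Hypothetical test labels, predictions, and a group attribute.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

print(group_accuracy(y_true, y_pred, groups))  # {'a': 0.75, 'b': 0.5}
```

Here group "b" scores noticeably worse than group "a", which is exactly the kind of disparity a platform's testing features should surface before deployment.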

During model deployment, the most effective platforms will also extract extra features from data that are otherwise difficult to identify and help the user understand what is going on with the data at a granular level, beyond the most obvious insights.

The end goal is to put power directly into the hands of the users, enabling them to actively explore, visualize and manipulate data at each step, rather than simply delegating to an ML tool and risking the introduction of bias.

The introduction of explainability and enhanced governance into ML platforms is an important step towards ethical machine learning deployments, but we can and should go further.

Researchers and solution vendors hold a responsibility as ML educators to inform users of the uses and abuses of bias in machine learning. We need to encourage businesses in this field to set up dedicated education programs on machine learning, including specific modules that cover ethics and bias, explaining how users can identify and in turn tackle, or outright avoid, the dangers.

Raising awareness in this manner will be a key step towards establishing trust for AI and ML in sensitive deployments such as medical diagnoses, financial decision-making and criminal sentencing.

AI and machine learning offer truly limitless potential to transform the way we work, learn and tackle problems across a range of industries, but ensuring these operations are conducted in an open and unbiased manner is paramount to winning and retaining both consumer and corporate trust in these applications.

The end goal is truly humble, honest algorithms that work for us and enable us to make unbiased, categorical predictions and consistently provide context, explainability and accuracy insights.

Recent research shows that 84% of CEOs agree that AI-based decisions must be explainable in order to be trusted. The time is ripe to embrace AI and ML solutions with baked-in transparency.

About the author:

Davide Zilli, Client Services Director at Mind Foundry




FYI: You can trick image-recog AI into, say, mixing up cats and dogs by abusing scaling code to poison training data – The Register

Posted: at 4:41 am



Boffins in Germany have devised a technique to subvert neural network frameworks so they misidentify images without any telltale signs of tampering.

Erwin Quiring, David Klein, Daniel Arp, Martin Johns, and Konrad Rieck, computer scientists at TU Braunschweig, describe their attack in a pair of papers slated for presentation at technical conferences in May and in August this year, events that may or may not take place given the COVID-19 global health crisis.

The papers, titled "Adversarial Preprocessing: Understanding and Preventing Image-Scaling Attacks in Machine Learning" [PDF] and "Backdooring and Poisoning Neural Networks with Image-Scaling Attacks" [PDF], explore how the preprocessing phase involved in machine learning presents an opportunity to fiddle with neural network training in a way that isn't easily detected. The idea being: secretly poison the training data so that the software later makes bad decisions and predictions.

This example image of a cat, provided by the academics, has been modified so that when downscaled by an AI framework for training, it turns into a dog, thus muddying the training dataset.

There have been numerous research projects that have demonstrated that neural networks can be manipulated to return incorrect results, but the researchers say such interventions can be spotted at training or test time through auditing.

"Our findings show that an adversary can significantly conceal image manipulations of current backdoor attacks and clean-label attacks without an impact on their overall attack success rate," explained Quiring and Rieck in the Backdooring paper. "Moreover, we demonstrate that defenses designed to detect image scaling attacks fail in the poisoning scenario."

Their key insight is that the algorithms used by AI frameworks for image scaling (a common preprocessing step to resize images in a dataset so they all have the same dimensions) do not treat every pixel equally. Instead, these algorithms, specifically in the imaging libraries of Caffe's OpenCV, TensorFlow's tf.image, and PyTorch's Pillow, consider only a third of the pixels to compute scaling.

"This imbalanced influence of the source pixels provides a perfect ground for image-scaling attacks," the academics explained. "The adversary only needs to modify those pixels with high weights to control the scaling and can leave the rest of the image untouched."
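The principle can be sketched in a few lines of NumPy. The stride-based scaler below is a deliberately simplified stand-in for real nearest-neighbour implementations (which pick sample positions slightly differently), but the imbalance it shows is the same: only the sampled pixels matter, so an attacker who changes just those pixels controls the downscaled result while the full-size image looks untouched.

```python
import numpy as np

def nearest_downscale(img, factor):
    # Simplified nearest-neighbour scaling: keep one source pixel per
    # output pixel and ignore every other pixel in each block.
    return img[::factor, ::factor].copy()

# A uniform mid-grey 8x8 "clean" image.
clean = np.full((8, 8), 128, dtype=np.uint8)

# The attacker modifies ONLY the pixels the scaler will sample:
# 4 of 64 pixels change, so the full-size image still looks uniform...
attacked = clean.copy()
attacked[::4, ::4] = 255

# ...but the downscaled image the model actually trains on is all-white.
small = nearest_downscale(attacked, 4)
print(int(np.count_nonzero(attacked != clean)))  # 4
print(int(small.min()))                          # 255
```

In a real attack the poisoned pixels encode a whole different image (the dog above) rather than a flat colour, but the mechanism is identical.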

On their explanatory website, the eggheads show how they were able to modify a source image of a cat, without any visible sign of alteration, to make TensorFlow's nearest scaling algorithm output a dog.

This sort of poisoning attack during the training of machine learning systems can result in unexpected output and incorrect classifier labels. Adversarial examples can have a similar effect, the researchers say, but each adversarial example works against only one machine learning model.

Image scaling attacks "are model-independent and do not depend on knowledge of the learning model, features or training data," the researchers explained. "The attacks are effective even if neural networks were robust against adversarial examples, as the downscaling can create a perfect image of the target class."

The attack has implications for facial recognition systems in that it could allow a person to be identified as someone else. It could also be used to meddle with machine learning classifiers such that a neural network in a self-driving car could be made to see an arbitrary object as something else, like a stop sign.

To mitigate the risk of such attacks, the boffins say the area scaling capability implemented in many scaling libraries can help, as can Pillow's scaling algorithms (so long as it's not Pillow's nearest scaling scheme). They also discuss a defense technique that involves image reconstruction.
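To illustrate why area scaling helps, here is a simplified NumPy sketch (not any library's actual implementation): because every source pixel contributes to the block average, a handful of poisoned pixels can no longer dominate the output.

```python
import numpy as np

def area_downscale(img, factor):
    # Area scaling averages every pixel in each factor x factor block,
    # so no single source pixel can dominate the downscaled output.
    h, w = img.shape
    blocks = img.reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

# A uniform image with four pixels poisoned to 255 (the pixels a
# stride-based nearest-neighbour scaler would have sampled).
img = np.full((8, 8), 128, dtype=np.uint8)
img[::4, ::4] = 255

out = area_downscale(img, 4)
print(out)  # every value ~135.9: close to the clean 128, far from 255
```

Each 4x4 block averages one poisoned pixel with fifteen clean ones, so the attacker's target value is almost entirely washed out.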

The researchers plan to publish their code and data set on May 1, 2020. They say their work shows the need for more robust defenses against image-scaling attacks, and they observe that other types of data that get scaled, such as audio and video, may be vulnerable to similar manipulation in the context of machine learning.




