
Archive for the ‘Machine Learning’ Category

Are machine-learning-based automation tools good enough for storage management and other areas of IT? Let us know – The Register

Posted: March 22, 2020 at 4:41 am


without comments

Reader survey We hear a lot these days about IT automation. Yet whether it's labelled intelligent infrastructure, AIOps, self-driving IT, or even private cloud, the aim is the same.

And that aim is: to use the likes of machine learning, workflow automation, and infrastructure-as-code to automatically make changes in real-time, eliminating as much as possible of the manual drudgery associated with routine IT administration.

Are the latest AI/ML-powered intelligent automation solutions trustworthy and ready for mainstream deployment, particularly in areas such as storage management?

Should we go ahead and implement the technology now on offer?

This controversial topic is the subject of our latest reader survey, and we are eager to hear your views.

Please complete our short survey, here.

As always, your responses will be anonymous and your privacy assured.


Read more from the original source:

Are machine-learning-based automation tools good enough for storage management and other areas of IT? Let us know - The Register

Written by admin

March 22nd, 2020 at 4:41 am

Posted in Machine Learning

With launch of COVID-19 data hub, the White House issues a call to action for AI researchers – TechCrunch

Posted: at 4:41 am


without comments

In a briefing on Monday, research leaders across tech, academia and the government joined the White House to announce an open data set full of scientific literature on the novel coronavirus. The COVID-19 Open Research Dataset, known as CORD-19, will also add relevant new research moving forward, compiling it into one centralized hub. The new data set is machine readable, making it easily parsed for machine learning purposes, a key advantage according to researchers involved in the ambitious project.

In a press conference, U.S. CTO Michael Kratsios called the new data set the most extensive collection of machine readable coronavirus literature to date. Kratsios characterized the project as a call to action for the AI community, which can employ machine learning techniques to surface unique insights in the body of data. To guide researchers combing through the data, the National Academies of Sciences, Engineering, and Medicine collaborated with the World Health Organization to draw up high-priority questions about the coronavirus related to genetics, incubation, treatment, symptoms and prevention.

The partnership, announced today by the White House Office of Science and Technology Policy, brings together the Chan Zuckerberg Initiative, Microsoft Research, the Allen Institute for Artificial Intelligence, the National Institutes of Health's National Library of Medicine, Georgetown University's Center for Security and Emerging Technology, Cold Spring Harbor Laboratory and the Kaggle AI platform, owned by Google.

The database brings together nearly 30,000 scientific articles about the virus known as SARS-CoV-2, as well as related viruses in the broader coronavirus group. Around half of those articles make the full text available. Critically, the database will include pre-publication research from resources like medRxiv and bioRxiv, open access archives for pre-print health sciences and biology research.
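To give a sense of what "machine readable" means in practice, here is a minimal sketch of how a researcher might start working with the data set in Python. It assumes a local copy of a CORD-19 release with its metadata.csv file (as the public releases shipped) and that the column names match those releases; check them against the version you download.

```python
import pandas as pd

# Hypothetical local path to an unpacked CORD-19 release.
meta = pd.read_csv("cord19/metadata.csv", low_memory=False)

# Column names ("title", "abstract", "publish_time") follow the public
# releases but are an assumption; verify against your download.
with_abstracts = meta.dropna(subset=["abstract"])
print(f"{len(meta)} articles, {len(with_abstracts)} with abstracts")

# A trivial keyword filter as a starting point for an ML pipeline,
# e.g. assembling a corpus on incubation periods.
incubation = with_abstracts[
    with_abstracts["abstract"].str.contains("incubation", case=False)
]
print(incubation[["title", "publish_time"]].head())
```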

"Sharing vital information across scientific and medical communities is key to accelerating our ability to respond to the coronavirus pandemic," Chan Zuckerberg Initiative Head of Science Cori Bargmann said of the project.

The Chan Zuckerberg Initiative hopes that the global machine learning community will be able to help the science community connect the dots on some of the enduring mysteries about the novel coronavirus as scientists pursue knowledge around prevention, treatment and a vaccine.

For updates to the CORD-19 data set, the Chan Zuckerberg Initiative will track new research on a dedicated page on Meta, the research search engine the organization acquired in 2017.

The CORD-19 data set announcement is certain to roll out more smoothly than the White House's last attempt at a coronavirus-related partnership with the tech industry. The White House came under criticism last week for President Trump's announcement that Google would build a dedicated website for COVID-19 screening. In fact, the site was in development by Verily, Alphabet's life science research group, and intended to serve California residents, beginning with San Mateo and Santa Clara County. (Alphabet is the parent company of Google.)

The site, now live, offers risk screening through an online questionnaire to direct high-risk individuals toward local mobile testing sites. At this time, the project has no plans for a nationwide rollout.

Google later clarified that the company is undertaking its own efforts to bring crucial COVID-19 information to users across its products, but that may have become conflated with Verily's much more limited screening site rollout. On Twitter, Google's comms team noted that Google is indeed working with the government on a website, but not one intended to screen potential COVID-19 patients or refer them to local testing sites.

In a partial clarification over the weekend, Vice President Pence, one of the Trump administration's designated point people on the pandemic, indicated that the White House is working with Google but also working with many other tech companies. It's not clear if that means a central site will indeed launch soon out of a White House collaboration with Silicon Valley, but Pence hinted that might be the case. Whether that centralized site will handle screening and referral to testing locations is also unclear.

"Our best estimate is that some point early in the week we will have a website that goes up," Pence said.

The rest is here:

With launch of COVID-19 data hub, the White House issues a call to action for AI researchers - TechCrunch

Written by admin

March 22nd, 2020 at 4:41 am

Posted in Machine Learning

Emerging Trend of Machine Learning in Retail Market 2019 by Company, Regions, Type and Application, Forecast to 2024 – Bandera County Courier

Posted: at 4:41 am


without comments

The latest report, titled Global Machine Learning in Retail Market 2019 by Company, Regions, Type and Application, Forecast to 2024, estimates the rate at which the Machine Learning in Retail industry is anticipated to grow during the forecast period, 2019 to 2024. The report covers CAGR analysis, competitive strategies, growth factors and the regional outlook to 2024. It is a rich source of exhaustive study of the driving elements, limiting components, and different market changes. It describes the market structure and then forecasts several segments and sub-segments of the global market. The market study is provided on the basis of type, application, manufacturer and geography. Elements such as opportunities, drivers, restraints and challenges, market situation, market share, growth rate, future trends, risks, entry limits, sales channels and distributors are analyzed and examined within this report.

Exploring The Growth Rate Over A Period:

Business owners who want to expand their business can refer to this report, as it includes data regarding the rise in sales within a given consumer base for the forecast period, 2019 to 2024. The research analysts have drawn a comparison between the Machine Learning in Retail market growth rate and product sales to allow business owners to discover the success or failure of a specific product or service. They have also added driving factors such as demographics and revenue generated from other products to offer a better analysis of products and services by owners.

DOWNLOAD FREE SAMPLE REPORT: https://www.magnifierresearch.com/report-detail/7570/request-sample

Top industry players assessment: IBM, Microsoft, Amazon Web Services, Oracle, SAP, Intel, NVIDIA, Google, Sentient Technologies, Salesforce, ViSenze

Product type assessment based on the following types: Cloud Based, On-Premises

Application assessment based on application mentioned below: Online, Offline

Leading market regions covered in the report are: North America (United States, Canada and Mexico), Europe (Germany, France, UK, Russia and Italy), Asia-Pacific (China, Japan, Korea, India and Southeast Asia), South America (Brazil, Argentina, Colombia), Middle East and Africa (Saudi Arabia, UAE, Egypt, Nigeria and South Africa)

Main Features Covered In Global Machine Learning in Retail Market 2019 Report:

ACCESS FULL REPORT: https://www.magnifierresearch.com/report/global-machine-learning-in-retail-market-2019-by-7570.html

The report also covers supply chain analysis, regional marketing analysis, international trade analysis and consumer analysis of the Machine Learning in Retail market. It further examines manufacturing plants, technical data, capacity, commercial production dates, R&D status, manufacturing area distribution, technology sources and raw material sources, and it describes sales channels, merchants, brokers, wholesalers, research findings and conclusions, and information sources.

Customization of the Report: This report can be customized to meet the client's requirements. Please connect with our sales team (sales@magnifierresearch.com), who will ensure that you get a report that suits your needs. You can also get in touch with our executives on +1-201-465-4211 to share your research requirements.

Read more:

Emerging Trend of Machine Learning in Retail Market 2019 by Company, Regions, Type and Application, Forecast to 2024 - Bandera County Courier

Written by admin

March 22nd, 2020 at 4:41 am

Posted in Machine Learning

Keeping Machine Learning Algorithms Humble and Honest in the Ethics-First Era – Datamation

Posted: at 4:41 am


without comments

By Davide Zilli, Client Services Director at Mind Foundry

Today in so many industries, from manufacturing and life sciences to financial services and retail, we rely on algorithms to conduct large-scale machine learning analysis. They are hugely effective for problem-solving and beneficial for augmenting human expertise within an organization. But they are now under the spotlight for many reasons and regulation is on the horizon, with Gartner projecting four of the G7 countries will establish dedicated associations to oversee AI and ML design by 2023. It remains vital that we understand their reasoning and decision-making process at every step.

Algorithms need to be fully transparent in their decisions, easily validated and monitored by a human expert. Machine learning tools must introduce this full accountability to evolve beyond unexplainable black box solutions and eliminate the easy excuse of "the algorithm made me do it!"

Bias can be introduced into the machine learning process as early as the initial data upload and review stages. There are hundreds of parameters to take into consideration during data preparation, so it can often be difficult to strike a balance between removing bias and retaining useful data.

Gender, for example, might be a useful parameter when looking to identify specific disease risks or health threats, but using gender in many other scenarios is completely unacceptable if it risks introducing bias and, in turn, discrimination. Machine learning models will inevitably exploit any parameters, such as gender, in the data sets they have access to, so it is vital for users to understand the steps taken for a model to reach a specific conclusion.

Removing the complexity of the data science procedure will help users discover and address bias faster and better understand the expected accuracy and outcomes of deploying a particular model.

Machine learning tools with built-in explainability allow users to demonstrate the reasoning behind applying ML to tackle a specific problem, and ultimately justify the outcome. First steps towards this explainability would be features in the ML tool that enable visual inspection of the data, with the platform alerting users to potential bias during preparation, along with metrics on model accuracy and health, including the ability to visualize what the model is doing.

Beyond this, ML platforms can take transparency further by introducing full user visibility, tracking each step through a consistent audit trail. This records how and when data sets have been imported, prepared and manipulated during the data science process. It also helps ensure compliance with national and industry regulations, such as the European Union's GDPR "right to explanation" clause, and helps effectively demonstrate transparency to consumers.

There is a further advantage here of allowing users to quickly replicate the same preparation and deployment steps, guaranteeing the same results from the same data, which is particularly vital for achieving time efficiencies on repetitive tasks. We find, for example, that in the life sciences sector users are particularly keen on replicability and visibility for ML, where it becomes an important facility in areas such as clinical trials and drug discovery.

There are so many different model types that it can be a challenge to select and deploy the best model for a task. Deep neural network models, for example, are inherently less transparent than probabilistic methods, which typically operate in a more honest and transparent manner.

Here's where many machine learning tools fall short. They're fully automated with no opportunity to review and select the most appropriate model. This may help users rapidly prepare data and deploy a machine learning model, but it provides little to no prospect of visual inspection to identify data and model issues.

An effective ML platform must be able to help identify and advise on resolving possible bias in a model during the preparation stage, provide support through to creation, where it will visualize what the chosen model is doing and provide accuracy metrics, and then continue into deployment, where it will evaluate model certainty and provide alerts when a model requires retraining.

To build greater visibility into data preparation and model deployment, we should look towards ML platforms that incorporate testing features, where users can test a new data set and receive scores of the model's performance. This helps identify bias and make changes to the model accordingly.
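As a concrete illustration of that kind of testing feature, the sketch below scores a trained classifier separately for each value of a sensitive attribute such as gender; a large spread between groups is one simple signal of bias. This is a minimal sketch using scikit-learn under assumed data and column names, not a description of any particular vendor's platform.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

def per_group_scores(model, X: pd.DataFrame, y: pd.Series, sensitive: pd.Series) -> pd.Series:
    """Score a fitted binary classifier separately for each value of a
    sensitive attribute; a large spread between groups is a simple signal
    that the model may be treating groups differently.
    (Assumes both classes appear in every group.)"""
    scores = {}
    for group in sensitive.dropna().unique():
        idx = sensitive[sensitive == group].index
        proba = model.predict_proba(X.loc[idx])[:, 1]
        scores[group] = roc_auc_score(y.loc[idx], proba)
    return pd.Series(scores, name="auc")

# Usage with hypothetical column names:
# per_group_scores(model, X_test, y_test, X_test["gender"])
```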

During model deployment, the most effective platforms will also extract extra features from data that are otherwise difficult to identify and help the user understand what is going on with the data at a granular level, beyond the most obvious insights.

The end goal is to put power directly into the hands of the users, enabling them to actively explore, visualize and manipulate data at each step, rather than simply delegating to an ML tool and risking the introduction of bias.

The introduction of explainability and enhanced governance into ML platforms is an important step towards ethical machine learning deployments, but we can and should go further.

Researchers and solution vendors hold a responsibility as ML educators to inform users of the use and abuses of bias in machine learning. We need to encourage businesses in this field to set up dedicated education programs on machine learning including specific modules that cover ethics and bias, explaining how users can identify and in turn tackle or outright avoid the dangers.

Raising awareness in this manner will be a key step towards establishing trust for AI and ML in sensitive deployments such as medical diagnoses, financial decision-making and criminal sentencing.

AI and machine learning offer truly limitless potential to transform the way we work, learn and tackle problems across a range of industries, but ensuring these operations are conducted in an open and unbiased manner is paramount to winning and retaining both consumer and corporate trust in these applications.

The end goal is truly humble, honest algorithms that work for us and enable us to make unbiased, categorical predictions and consistently provide context, explainability and accuracy insights.

Recent research shows that 84% of CEOs agree that AI-based decisions must be explainable in order to be trusted. The time is ripe to embrace AI and ML solutions with baked in transparency.

About the author:

Davide Zilli, Client Services Director at Mind Foundry


The rest is here:

Keeping Machine Learning Algorithms Humble and Honest in the Ethics-First Era - Datamation

Written by admin

March 22nd, 2020 at 4:41 am

Posted in Machine Learning

FYI: You can trick image-recog AI into, say, mixing up cats and dogs by abusing scaling code to poison training data – The Register

Posted: at 4:41 am


without comments

Boffins in Germany have devised a technique to subvert neural network frameworks so they misidentify images without any telltale signs of tampering.

Erwin Quiring, David Klein, Daniel Arp, Martin Johns, and Konrad Rieck, computer scientists at TU Braunschweig, describe their attack in a pair of papers slated for presentation at technical conferences in May and in August this year, events that may or may not take place given the COVID-19 global health crisis.

The papers, titled "Adversarial Preprocessing: Understanding and Preventing Image-Scaling Attacks in Machine Learning" [PDF] and "Backdooring and Poisoning Neural Networks with Image-Scaling Attacks" [PDF], explore how the preprocessing phase involved in machine learning presents an opportunity to fiddle with neural network training in a way that isn't easily detected. The idea being: secretly poison the training data so that the software later makes bad decisions and predictions.

This example image, provided by the academics, of a cat has been modified so that when downscaled by an AI framework for training, it turns into a dog, thus muddying the training dataset

There have been numerous research projects that have demonstrated that neural networks can be manipulated to return incorrect results, but the researchers say such interventions can be spotted at training or test time through auditing.

"Our findings show that an adversary can significantly conceal image manipulations of current backdoor attacks and clean-label attacks without an impact on their overall attack success rate," explained Quiring and Rieck in the Backdooring paper. "Moreover, we demonstrate that defenses designed to detect image scaling attacks fail in the poisoning scenario."

Their key insight is that the algorithms used by AI frameworks for image scaling, a common preprocessing step to resize images in a dataset so they all have the same dimensions, do not treat every pixel equally. Instead, these algorithms, specifically those in the imaging libraries used by Caffe (OpenCV), TensorFlow (tf.image), and PyTorch (Pillow), consider only about a third of the pixels to compute scaling.

"This imbalanced influence of the source pixels provides a perfect ground for image-scaling attacks," the academics explained. "The adversary only needs to modify those pixels with high weights to control the scaling and can leave the rest of the image untouched."

On their explanatory website, the eggheads show how they were able to modify a source image of a cat, without any visible sign of alteration, to make TensorFlow's nearest scaling algorithm output a dog.
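To make the mechanism concrete, here is a minimal sketch of the general idea using OpenCV's nearest-neighbour resize, which samples the source pixel at floor(output_index * scale) for each output pixel: only those sampled pixels are overwritten with the target image, so the full-resolution file still looks like the source while the downscaled copy used for training becomes the target. This is an illustrative reconstruction of the technique, not the researchers' published code, and the exact sampled positions depend on the library and algorithm used.

```python
import numpy as np
import cv2

def nearest_scaling_poison(source: np.ndarray, target: np.ndarray,
                           out_hw: tuple) -> np.ndarray:
    """Plant `target` at the pixels nearest-neighbour downscaling samples,
    leaving the rest of `source` untouched (assumes OpenCV INTER_NEAREST,
    which samples at floor(dst_index * src_size / dst_size))."""
    h, w = source.shape[:2]
    th, tw = out_hw
    poisoned = source.copy()
    ys = (np.arange(th) * h / th).astype(int)
    xs = (np.arange(tw) * w / tw).astype(int)
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            poisoned[y, x] = target[i, j]
    return poisoned

# Usage with hypothetical files: `poisoned` looks like the cat at full size,
# but the downscaled copy fed to training is (approximately) the dog.
# cat = cv2.imread("cat_512.png"); dog = cv2.imread("dog_128.png")
# poisoned = nearest_scaling_poison(cat, dog, (128, 128))
# small = cv2.resize(poisoned, (128, 128), interpolation=cv2.INTER_NEAREST)
```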

This sort of poisoning attack during the training of machine learning systems can result in unexpected output and incorrect classifier labels. Adversarial examples can have a similar effect, the researchers say, but those typically work against only one machine learning model.

Image scaling attacks "are model-independent and do not depend on knowledge of the learning model, features or training data," the researchers explained. "The attacks are effective even if neural networks were robust against adversarial examples, as the downscaling can create a perfect image of the target class."

The attack has implications for facial recognition systems in that it could allow a person to be identified as someone else. It could also be used to meddle with machine learning classifiers such that a neural network in a self-driving car could be made to see an arbitrary object as something else, like a stop sign.

To mitigate the risk of such attacks, the boffins say the area scaling capability implemented in many scaling libraries can help, as can Pillow's scaling algorithms (so long as it's not Pillow's nearest scaling scheme). They also discuss a defense technique that involves image reconstruction.

The researchers plan to publish their code and data set on May 1, 2020. They say their work shows the need for more robust defenses against image-scaling attacks and they observe that other types of data that get scaled like audio and video may be vulnerable to similar manipulation in the context of machine learning.


Go here to read the rest:

FYI: You can trick image-recog AI into, say, mixing up cats and dogs by abusing scaling code to poison training data - The Register

Written by admin

March 22nd, 2020 at 4:41 am

Posted in Machine Learning

Proof in the power of data – PES Media

Posted: at 4:41 am


without comments

Engineers at the AMRC have researched the use of the cloud to capture data from machine tools with Tier 2 member Amido

Cloud data solutions being trialled at the University of Sheffield Advanced Manufacturing Research Centre (AMRC) could provide a secure and cost-effective way for SME manufacturers to explore how machine learning and Industry 4.0 technologies can boost their productivity.

Jon Stammers, AMRC technical fellow in the process monitoring and control team, says: "Data is available on every shopfloor but a lot of time it isn't being captured due to lack of connectivity, and therefore cannot be analysed. If the cloud can capture and analyse that data then the possibilities are massive."

Engineers in the AMRC's Machining Group have researched the use of the cloud to capture data from machine tools with new Tier Two member Amido, an independent technical consultancy specialising in assembling, integrating and building cloud-native solutions.

Mr Stammers adds: "Typically we would have a laptop sat next to a machine tool capturing its data; a researcher might do some analysis on that laptop and share the data on our internal file system or on a USB stick. There is a lot of data generated on the shopfloor and it is our job to capture it, but there are plenty of unanswered questions about the analysis process and the cloud has a lot to bring to that."

In the trial, data from two CNC machines in the AMRC's Factory of the Future, a Starrag STC 1250 and a DMG Mori DMU 40 eVo, was transferred to the Microsoft Azure Data Lake cloud service and converted into a parquet format, which allowed Amido to run a series of complex queries over a long period of time.

Steve Jones, engagement director at Amido, explains handling those high volumes of data is exactly what the cloud was designed for: "Moving the data from the manufacturing process into the cloud means it can be stored securely and then structured for analysis. The data can't be intercepted in transit and it is immediately encrypted by Microsoft Azure."

"Security is one of the huge benefits of cloud technology," Mr Stammers comments. "When we ask companies to share their data for a project, it is usually rejected because they don't want their data going offsite. Part of the work we're doing with Amido is to demonstrate that we can anonymise data and move it off site securely."

In addition to the security of the cloud, Mr Jones says transferring data into a data lake means large amounts can be stored for faster querying and machine learning.

"One of the problems of a traditional database is when you add more data, you impact the ability for the query to return the answers to the questions you put in; by restructuring into a parquet format you limit that reduction in performance. Some of the queries that were taking one of the engineers up to 12 minutes to run on the local database took us just 12 seconds using Microsoft Azure."
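As a rough illustration of the restructuring Jones describes, the sketch below converts a machine-tool event log from CSV to Parquet with pandas and pyarrow and then reads back only the columns a query needs. The file and column names are assumptions for the example, not the AMRC's actual schema.

```python
import pandas as pd

# Hypothetical event log captured next to a machine tool.
events = pd.read_csv("stc1250_events.csv", parse_dates=["timestamp"])

# Columnar Parquet storage keeps queries fast as the data set grows.
events.to_parquet("machine_events.parquet", engine="pyarrow", index=False)

# A typical anomaly-hunting query: read only the columns needed,
# then look for spindle-load spikes on one machine.
cols = ["timestamp", "machine", "spindle_load"]
df = pd.read_parquet("machine_events.parquet", columns=cols)
spikes = df[(df["machine"] == "STC-1250") & (df["spindle_load"] > 0.9)]
print(spikes.head())
```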

"It was always our intention to run machine learning against this data to detect anomalies. A reading in the event data that stands out may help predict maintenance of a machine tool or prevent the failure of a part."

Storing data in the cloud is extremely inexpensive and that is why, according to software engineer in the process monitoring and control team Seun Ojo, cloud technology is a viable option for SMEs working with the AMRC, part of the High Value Manufacturing (HVM) Catapult.

He says: "SMEs are typically aware of Industry 4.0 but concerned about the return on investment. Fortunately, cloud infrastructure is hosted externally and provided on a pay-per-use basis. Therefore, businesses may now access data capture, storage and analytics tools at a reduced cost."

Mr Jones adds: "Businesses can easily hire a graphics processing unit (GPU) for an hour or a quantum computer for a day to do some really complicated processing and you can do all this on a pay-as-you-go basis.

"The bar to entry to doing machine learning has never been lower. Ten years ago, only data scientists had the skills to do this kind of analysis but the tools available from cloud platforms like Microsoft Azure and Google Cloud now put a lot of power into the hands of inexpert users."

Mr Jones says the trials being done with Amido could feed into research being done by the AMRC into non-geometric validation.

He concludes: "Rather than measuring the length and breadth of a finished part to validate that it has been machined correctly, I want to see engineers use data to determine the quality of a job.

"That could be really powerful and if successful would make the process of manufacturing much quicker. That shows the value of data in manufacturing today."

AMRC http://www.amrc.co.uk

Amido http://www.amido.com

Michael Tyrrell

Digital Coordinator

More:

Proof in the power of data - PES Media

Written by admin

March 22nd, 2020 at 4:41 am

Posted in Machine Learning

The Power of AI in ‘Next Best Actions’ – CMSWire

Posted: at 4:41 am


without comments


Let's say you have a customer who has taken a certain action: downloaded an ebook, filled out an application, added a product to their cart, called into your call center or walked into your branch office, to name a few. What content, offer or message should you deliver to them next? What next step should you recommend? How can you best add value for that individual, while nurturing the person, wherever they are in their relationship with your business?

Based on your history (or even lack of history) with a given individual, you and your company might also have questions such as: What's the best product to upsell to this particular client? (And should I even try to upsell that person?) What's the right promotion to show an engaged shopper on my ecommerce site? And what's the right item to promote to someone logged into my application? The list goes on.

These types of questions are all important to businesses today, who often talk about next best actions. This customer-centric (often 1-to-1) approach and sequencing strategy can take a number of forms. But at a basic level, the concept means what it sounds like: determining the most relevant or appropriate next action (or offer, promotion, content, etc.) to show a person in the moment, based on their current and previous actions or other information you've gathered about them across your online and offline channels. Next best actions can also include triggering messages to call center agents or sales reps to alert them of important activity, or to suggest the next best action they should take with a customer.

Companies put a wide variety of thought, time and effort into establishing sequencing paths, from none at all (with a one-size-fits-all message, promotion, offer, etc.) to a lot. At a majority of organizations, though, determining the next best action for their customers is very important, involving multiple teams of people across functions and divisions.

There are teams of marketers and designers, for instance, who create elaborate promotions and offers with different media for different channels. And there are customer experience teams who devote many cycles to thinking about call-center scripts and next best actions.

So when it comes to deploying those next best actions, it can devolve into an inter-departmental war about who gets the prime real estate. For example, when new visitors hit the homepage or when customers log into the app, what gets displayed in the hero area?

Why all the effort and involvement? It's because next best actions are strategically important to engagement and the bottom line. Present the right, relevant offer or action to a customer or prospect, and you're helping elicit interest and drive conversions. Present the wrong (e.g., outdated, irrelevant, mismatched to sales cycle stage, etc.) one, and you're losing customer interest or even turning them off your brand.


For many years, organizations have taken a rule-based approach to determining the right next best action for a particular customer in a particular channel or at a particular stage in their journey. Rules are manually created and structured with if-then logic (e.g., IF a person takes this action or belongs to this group, THEN display this next). They govern the experiences and actions for audience segments, which can be broad or get very narrow.
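A minimal sketch of this if-then approach, using made-up segments and offers, shows both how simple a single rule is to write and how quickly the branches multiply as targeting gets finer:

```python
from dataclasses import dataclass

@dataclass
class Customer:
    segment: str      # e.g. "new_visitor", "loyalty_member" (hypothetical segments)
    last_action: str  # e.g. "added_to_cart", "downloaded_ebook"
    region: str

def next_best_action(c: Customer) -> str:
    """Hand-written if-then rules: every new segment, channel or offer
    multiplies the number of branches that must be maintained."""
    if c.last_action == "added_to_cart":
        return "show_free_shipping_offer"
    if c.segment == "new_visitor":
        return "show_welcome_discount"
    if c.segment == "loyalty_member" and c.region == "EU":
        return "show_points_multiplier"
    return "show_default_hero_banner"
```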

Three types of rules are the most frequently applied to next-best-action decisioning. These can be used on their own or, typically, in concert:


But one problem with rules is that the more targeted and relevant you want to get, the greater the number of rules you need to make. With rules, personalization of the next best action is inversely correlated with simplicity. In other words, to deliver truly relevant and highly specific actions and experiences using rules only, you quickly enter a world of nearly unmanageable complexity.

There's also the time factor to consider. As you have likely experienced, it takes a lot of hours to create and prescribe sequencing via rules for the multitude of scenarios customers can encounter and the paths they can take. And unraveling a heavily nested set of rules in order to make minor adjustments (and make them correctly) can take many more hours.

Another problem with rules is that they are just human guesses. Suppose you're wrong about the next best action you've set up for a customer to receive; in fact, it may actually be hurting revenues or customer loyalty.

So while rules do play a vital role in determining and displaying next best actions, a rules-only approach generally isn't optimal or scalable in the long term.


Machine learning, a type of artificial intelligence (AI), can supplement rules and play a powerful role in prioritization and other next-best-action decisions: pulling in everything known about an individual in the channel of engagement and across channels, factoring in data from similar people, and then computing and displaying the optimal, relevant next best action or offer at the 1-to-1 level. Typically, this all occurs in milliseconds, faster than you can blink an eye.

Across industries, there's an enormous amount of behavioral data to parse through to uncover trends and indicators of what to do next with any given individual. This can be combined with attribute and transaction data to build a rich profile and predictive intelligence. Machine-learning algorithms automate this process, make surprising discoveries and keep learning based on ever-growing data: from studying both the individual customer and customers with similar attributes and behaviors, and from learning from how customers are reacting to the actions being suggested to them.

In addition, when multiple promotions or next actions are valid, you can apply machine learning to decide on and display the truly optimal one, balancing what's best for the customer with what's best for your business.

Optimized machine-learning-driven next best actions outperform manual ones, even when what they suggest might seem counter-intuitive. For example, a banking institution might promote its most popular cash-back credit card offer to all new site visitors. But for return visitors located in colder climate regions, a continuous learning algorithm might determine that the bank's travel rewards card offer performs much better. Only machine learning can pick up on behavioral signals and information at scale (including seemingly unimportant information) in a way that humans simply cannot.
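One common way to implement this kind of decisioning is to train a propensity model per candidate offer and serve the highest-scoring one, optionally weighted by business value. The sketch below, with hypothetical offer names and already-fitted scikit-learn-style models, shows only the scoring-and-arbitration step, not any particular vendor's system.

```python
import numpy as np

def choose_offer(models: dict, features: np.ndarray, business_value: dict) -> str:
    """Score every candidate offer for one customer and return the best one.

    `models` maps offer name -> fitted classifier exposing predict_proba;
    `business_value` weights what the customer is likely to accept against
    what each conversion is worth to the business.
    """
    expected = {
        offer: model.predict_proba(features.reshape(1, -1))[0, 1] * business_value[offer]
        for offer, model in models.items()
    }
    return max(expected, key=expected.get)

# Usage with hypothetical offers and already-trained models m1, m2:
# choose_offer({"cashback_card": m1, "travel_card": m2},
#              customer_features,
#              {"cashback_card": 40.0, "travel_card": 55.0})
```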


Determining and displaying next best actions involve integrations and interplay across channels. One system is informing another of an action a customer has taken and what to do next. For example: a customer who joined the loyalty program could be eligible to receive a certain promotion in their email. Or a shopper who browsed purses online can be push-notified a coupon code to use in-store, thanks to beacon technology. An alert might get triggered to a call center agent based on a customer's unfinished loan application, letting the agent know to provide information on interest rates or help set up an appointment at the customer's local branch as that person is calling in.

Given the wide range of activity and vast quantities of data, it's important to have a single system that can arbitrate all these actions, apply prioritization and act as the central brain. This helps keep customer information unified and up-to-date, and aids in real-time interaction management and experience delivery.

In the end, everything organizations do when communicating and relating to their customers could be viewed as next best actions. In fact, personalization and next best actions are closely intertwined, as two sides of the same coin. It's hard to separate a next best action from the personalization decisioning driving it, which is why the two areas should be (and sometimes are) tied together from a strategy and systems perspective.

By effectively determining and triggering personalized next steps, you can tell a cohesive and consistent cross-channel story that bolsters brand perception, improves the buyer journey and turns next best actions into must-take ones.

Karl Wirth is the CEO and co-founder of Evergage, a Salesforce Company and a leading real-time personalization and interaction management platform provider. Karl is also the author of the award-winning book One-to-One Personalization in the Age of Machine Learning.

View original post here:

The Power of AI in 'Next Best Actions' - CMSWire

Written by admin

March 22nd, 2020 at 4:41 am

Posted in Machine Learning

The Global Deep Learning Chipset Market size is expected to reach $24.5 billion by 2025, rising at a market growth of 37% CAGR during the forecast…

Posted: at 4:41 am


without comments

Deep learning chips are customized silicon chips that incorporate AI technology and machine learning. Deep learning and machine learning, which are sub-sets of artificial intelligence (AI), are used to carry out AI-related tasks.

New York, March 20, 2020 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Global Deep Learning Chipset Market By type By Technology By End User By Region, Industry Analysis and Forecast, 2019 - 2025" - https://www.reportlinker.com/p05876895/?utm_source=GNW Deep learning technology has entered many industries around the world and is accomplished through applications like computer vision, speech synthesis, voice recognition, machine translation, drug discovery, game play, and robotics.

The widespread adoption of artificial intelligence (AI) for practical business applications has brought a range of complexities and risk factors to virtually every industry, but one thing is certain: in today's AI industry, hardware is the key to solving many of the main problems facing the sector, and chipsets are at the heart of that hardware solution. Considering AI's widespread applicability, it's almost certain that every chip will have some kind of AI engine embedded in future. That engine could take a wide range of forms, from a basic AI library running on a CPU to more complex, custom hardware. The potential of AI is better fulfilled when chipsets are designed to provide the right amount of computing capacity for different AI applications at the right power budget. This is a trend that leads to increased specialization and diversification of AI-optimized chipsets.

The factors influencing the development of the deep learning chipset market are the increased acceptance of cloud-based technology and the extensive use of deep learning in big data analytics. A graphics processing unit, a single-chip processor that generates lighting effects and transforms objects each time a 3D scene is redrawn, turns out to be very well suited and efficient when applied to the styles of computation needed for neural nets. This in turn fuels the growth of the market for deep learning chipsets.

Based on type, the market is segmented into GPU, ASIC, CPU, FPGA and Others. Based on Technology, the market is segmented into System-on-chip (SoC), System-in-package (SIP) and Multi-chip module & Others. Based on End User, the market is segmented into Consumer Electronics, Industrial, Aerospace & Defense, Healthcare, Automotive and Others. Based on Regions, the market is segmented into North America, Europe, Asia Pacific, and Latin America, Middle East & Africa.

The major strategies followed by the market participants are Product Launches. Based on the Analysis presented in the Cardinal matrix, Google, Inc., Microsoft Corporation, Samsung Electronics Co., Ltd., Intel Corporation, Amazon.com, Inc., and IBM Corporation are some of the forerunners in the Deep Learning Chipset Market. Companies such as Advanced Micro Devices, Inc., Qualcomm, Inc., Nvidia Corporation, and Xilinx, Inc. are some of the key innovators in Deep Learning Chipset Market. The market research report covers the analysis of key stake holders of the market. Key companies profiled in the report include Samsung Electronics Co., Ltd. (Samsung Group), Microsoft Corporation, Intel Corporation, Nvidia Corporation, IBM Corporation, Google, Inc., Amazon.com, Inc. (Amazon Web Services), Qualcomm, Inc., Advanced Micro Devices, Inc., and Xilinx, Inc.

Recent strategies deployed in Deep Learning Chipset Market

Partnerships, Collaborations, and Agreements:

Jan-2020: Xilinx collaborated with Telechips, a leading Automotive System on Chip (SoC) supplier. The collaboration would provide a comprehensive solution for addressing the integration of in-cabin monitoring systems (ICMS) and IVI systems.

Dec-2019: Samsung Electronics teamed up with Baidu, a leading Chinese-language Internet search provider. Under the collaboration, the companies announced that the development of Baidu KUNLUN, its first cloud-to-edge AI accelerator has been completed. KUNLUN chip provides 512 gigabytes per second (Gbps) memory bandwidth and offers up to 260 Tera operations per second (TOPS) at 150 watts.

Oct-2019: Microsoft announced technology collaboration with Nvidia, a technology company. The collaboration was focused on intelligent edge computing, which is designed for helping the industries in gaining and managing the insights from the data created by warehouses, retail stores, manufacturing facilities, urban infrastructure, connected buildings, and other environments.

Oct-2019: Microsoft launched Lakefield, a dual-screen device powered by Intel's unique processor. This device combines a hybrid CPU with Intel's Foveros 3D packaging technology. This provides more flexibility to device makers for innovating designs, experience, and form factor.

Jun-2019: AMD came into partnership with Samsung, under which AMD is licensing its graphics technology to Samsung for use in future mobile chips. Under this partnership, Samsung paid AMD for access to its RDNA graphics architecture.

Jun-2019: Nvidia collaborated with Volvo for developing artificial intelligence that is used in self-driving trucks.

May-2019: Samsung Electronics came into partnership with Efinix, an innovator in programmable product platforms and technologies. Under this partnership, the companies aimed to develop Quantum eFPGAs on Samsung's 10nm silicon process.

Dec-2018: IBM extended its partnership with Samsung for developing 7-nanometer (nm) microprocessors for IBM Power Systems, LinuxONE, and IBM Z. The expansion was aimed at driving the performance of the unmatched system including encryption and compression speed, acceleration, memory, and I/O bandwidth, as well as system scaling.

Jun-2018: AWS announced its collaboration with Cadence Design Systems. The collaboration was aimed at delivering a Cadence Cloud portfolio to electronic systems and semiconductor design.

Mar-2018: Nvidia came into partnership with Arm for bringing deep learning inference to billions of consumer electronics, mobile, and Internet of Things devices.

Acquisition and Mergers:

Aug-2019: Xilinx took over Solarflare, a provider of high-performance, low latency networking solutions. The acquisition helped in generating more revenues and enabled new marketing and R&D funds for the future.

Apr-2019: Intel completed the acquisition of Omnitek, a provider of video and vision field-programmable gate array (FPGA) solutions. Through the acquisition, the company's FPGA processor business has been doubled.

Jul-2018: Intel took over eASIC, a fabless semiconductor company. The acquisition bolstered the company's business in providing chips.

Apr-2017: AMD acquired Nitero, a company engaged in providing technology to connect VR headsets wirelessly to PCs. The acquisition helped the company in getting control over VR experiences.

Product Launches and Product Expansions:

Dec-2019: Nvidia launched Drive AGX Orin, a new Orin AI processor or system-on-chip (SoC). The processor improves power efficiency and performance and is aimed at evolving the automotive business.

Dec-2019: AWS unveiled Graviton2, the next-generation of its ARM processors. It is a custom chip that is designed with 7nm architecture and based on 64-bit ARM Neoverse cores.

Nov-2019: AMD launched two new Threadripper 3 CPUs with 24 and 32 cores. Both these CPUs will be integrated into AMD's new TRX40 platform using the new sTRX4 socket.

Nov-2019: Intel unveiled Ponte Vecchio GPUs, a graphics processing unit (GPU) architecture. This chip was designed for handling the artificial intelligence loads and heavy data in the data center.

Nov-2019: Intel launched Stratix 10 GX 10M, a new FPGA. This consists of two large FPGA dies and four transceiver tiles and has a total of 10.2 million logic elements and 2304 user I/O pins.

Oct-2018: Google launched TensorFlow, the popular open-source artificial intelligence framework. This framework runs deep learning, machine learning, and other predictive and statistical analytics workloads. This simplifies training models, the process of acquiring data, refining future results, and serving predictions.

Sep-2019: AWS released Amazon EC2 G4 GPU-powered Amazon Elastic Compute Cloud (Amazon EC2) instances. It delivers up to 1.8 TB of local NVMe storage and up to 100 Gbps of networking throughput to AWS custom Intel Cascade Lake CPUs and NVIDIA T4 GPUs.

Aug-2019: Xilinx released Virtex UltraScale+ VU19P, a 16nm device with 35 billion transistors. It has four chips on an interposer. It is the world's largest field-programmable gate array (FPGA) and has 9 million logic cells.

May-2019: Nvidia introduced NVIDIA EGX, an accelerated computing platform. This platform was aimed at allowing the companies in performing low-latency AI at the edge for perceiving, understanding, and acting in real-time on continuous streaming data between warehouses, factories, 5G base stations, and retail stores.

Nov-2018: AWS introduced Inferentia and Elastic Inference, two chips and 13 machine learning capabilities and services. Through these launches, the company aimed towards attracting more developers.

Sep-2018: Qualcomm unveiled Snapdragon Wear 3100 chipset. This chipset is used in smartwatches and has extended battery life.

Aug-2018: AMD introduced B450 chipset for Ryzen processors. The chip runs about 2 watts lower in power than B350 chipset.

Jul-2018: Google introduced Tensor Processing Units or TPUs, the specialized chips. This chip lives in data centers of the company and simplifies the AI tasks. These chips are used in enterprise jobs.

Apr-2018: Qualcomm launched QCS605 and QCS603 SoCs, two new system-on-chips. These chips combine image signal processor, CPU, AI, GPU technology for accommodating several camera applications, smart displays, and robotics.

Scope of the Study

Market Segmentation:

By Compute Capacity

High

Low

By Type

GPU

ASIC

CPU

FPGA

Others

By Technology

System-on-chip (SoC)

System-in-package (SIP)

Multi-chip module & Others

By End User

Consumer Electronics

Industrial

Aerospace & Defense

Healthcare

Automotive

Others

By Geography

North America

o US

o Canada

o Mexico

o Rest of North America

Europe

o Germany

o UK

o France

o Russia

o Spain

o Italy

o Rest of Europe

Asia Pacific

o China

o Japan

o India

o South Korea

o Singapore

o Malaysia

o Rest of Asia Pacific

LAMEA

o Brazil

o Argentina

o UAE

o Saudi Arabia

o South Africa

o Nigeria

o Rest of LAMEA

Companies Profiled

Samsung Electronics Co., Ltd. (Samsung Group)

Microsoft Corporation

Intel Corporation

Nvidia Corporation

IBM Corporation

Google, Inc.

Amazon.com, Inc. (Amazon Web Services)

Qualcomm, Inc.

Advanced Micro Devices, Inc.

See the original post here:

The Global Deep Learning Chipset Market size is expected to reach $24.5 billion by 2025, rising at a market growth of 37% CAGR during the forecast...

Written by admin

March 22nd, 2020 at 4:41 am

Posted in Machine Learning

Workday, Machine Learning, and the Future of Enterprise Applications – Cloud Wars

Posted: February 29, 2020 at 4:46 am


without comments

That technological sophistication starts at the top. A few months ago, in an exclusive interview, Workday CEO Aneel Bhusri described himself as the company's Pied Piper of ML for his passionate advocacy about a technology that he believes will be even more disruptive than the cloud.

In his own understated but high-impact way, Workday cofounder and CEO Aneel Bhusri has become one of the world's most bullish evangelists for the extraordinary power and potential of machine learning.

"We've always talked about predictive analytics but they're now a reality, and it's really a reality," Bhusri said in a recent exclusive interview.

"It's what we've dreamed about for a long time. But we never actually got there because the technologies weren't there, but now they're here."

And Bhusri is making sure that Workday, which is on the verge of posting its first billion-dollar quarter, is at the forefront in giving corporate customers the full benefits of ML's transformative capabilities.

"Machine learning is just so profound, right? It's impacting all of our lives in so many ways," Bhusri said when I brought up his comment that ML will be even more disruptive than the cloud.

"Internally I described my role to the company as the pied piper of machine learning," he said with a chuckle. "And I asked every employee in the company to buy the book Prediction Machines and charge it back to Workday because we all have to get comfortable with this new world and be able to succeed in it and be able to talk to our customers about it."

It looks like one of the ways Bhusri is helping Workday's entire workforce to get comfortable with this new world is by letting them know that he's driving the conversation for that conversion.

"For me there's actually something very gratifying when I can say, okay, not going to try to get the engineers to work on five different things," says Bhusri, who refers to himself self-effacingly as a "products guy."

"So every time I see one of our engineers or developers, I ask, what are you doing on machine learning? Or what do you think about machine learning? And what should we be doing with machine learning?"

"Pretty soon they're all saying, 'Okay, before I meet with Aneel, I know he's going to ask about machine learning so I should have my act together,'" Bhusri said. "It gets everybody on the same page, people are excited."

At least so far, Workdays customers have been eager to share that excitement and allow Workday to help them build their digital futures.

Read the original:

Workday, Machine Learning, and the Future of Enterprise Applications - Cloud Wars

Written by admin

February 29th, 2020 at 4:46 am

Posted in Machine Learning

Forget Chessthe Real Challenge Is Teaching AI to Play D&D – WIRED

Posted: at 4:46 am


without comments

Fans of games like Dungeons & Dragons know that the fun comes, in part, from a creative Dungeon Master, an all-powerful narrator who follows a storyline but has free rein to improvise in response to players' actions and the fate of the dice.

This kind of spontaneous yet coherent storytelling is extremely difficult for artificial intelligence, even as AI has mastered more constrained board games such as chess and Go. The best text-generating AI programs too often produce confused and disjointed prose. So some researchers view spontaneous storytelling as a good test of progress toward more intelligent machines.

An attempt to build an artificial Dungeon Master offers hope that machines able to improvise a good storyline might be built. In 2018, Lara Martin, a graduate student at Georgia Tech, was seeking a way for AI and a human to work together to develop a narrative and suggested Dungeons & Dragons as a vehicle for the challenge. "After a while, it hit me," she says. "I go up to my adviser and say, 'We're basically proposing a Dungeon Master, aren't we?' He paused for a bit, and said, 'Yeah, I guess we are!'"

Narratives produced by artificial intelligence offer a guide to where we are in the quest to create machines that are as clever as us. Martin says this would be more challenging than mastering a game like Go or poker because just about anything that can be imagined can happen in a game.

Since 2018, Martin has published work that outlines progress towards the goal of making an AI Dungeon Master. Her approach combines state-of-the-art machine learning algorithms with more old-fashioned rule-based features. Together this lets an AI system dream up different narratives while following the thread of a story consistently.

Martin's latest work, presented at a conference held this month by the Association for the Advancement of Artificial Intelligence, describes a way for an algorithm to use the concept of events, consisting of a subject, verb, object, and other elements, in a coherent narrative. She trained the system on the storylines of science fiction shows such as Doctor Who, Futurama, and The X-Files. Then, when fed a snippet of text, it will identify events and use them to shape a continuation of the plot churned out by a neural network. In another project, completed last year, Martin developed a way to guide a language model towards a particular event, such as two characters getting married.
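A toy sketch of that kind of event representation might look like the following; the field names and example events here are illustrative, not Martin's exact schema.

```python
from dataclasses import dataclass
from typing import Optional, List

@dataclass
class Event:
    subject: str                     # e.g. "the Doctor"
    verb: str                        # e.g. "repairs"
    obj: str                         # e.g. "the TARDIS"
    modifier: Optional[str] = None   # e.g. a location or instrument

# A short plot fragment as a sequence of events a generator could condition on.
plot: List[Event] = [
    Event("the Doctor", "repairs", "the TARDIS"),
    Event("the Doctor", "travels to", "Mars", modifier="in the year 2120"),
]
```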

Unfortunately, these systems still often get confused, and Martin doesn't think they would make a good DM. "We're nowhere close to this being a reality yet," she says.

Noah Smith, a professor at the University of Washington who specializes in AI and language, says Martin's work reflects a growing interest in combining two different approaches to AI: machine learning and rule-based programs. And although he's never played Dungeons & Dragons himself, Smith says creating a convincing Dungeon Master seems like a worthwhile challenge.

"Sometimes grand challenge goals are helpful in getting a lot of researchers moving in a single direction," Smith says. "And some of what spins out is also useful in more practical applications."

Maintaining a convincing narrative remains a fundamental and vexing problem with existing language algorithms.

Large neural networks trained to find statistical patterns in vast quantities of text scraped from the web have recently proven capable of generating convincing-looking snippets of text. In February 2019, the AI company OpenAI developed a tool called GPT-2 capable of generating narratives in response to a short prompt. The output of GPT-2 could sometimes seem startlingly coherent and creative, but it also would inevitably produce weird gibberish.

Here is the original post:

Forget Chessthe Real Challenge Is Teaching AI to Play D&D - WIRED

Written by admin

February 29th, 2020 at 4:46 am

Posted in Machine Learning

