
Archive for the ‘Machine Learning’ Category

Global Machine Learning as a Service Market, Trends, Analysis, Opportunities, Share and Forecast 2019-2027 – NJ MMA News

Posted: February 29, 2020 at 4:46 am

without comments

The Machine Learning as a Service market, valued at approximately USD 0.87 billion in 2017, is anticipated to grow at a healthy rate of more than 43.9% over the forecast period 2018-2025.

Machine learning as a service encompasses a broad range of solutions and services offered by cloud service providers. The tools offered by these providers include APIs, data visualization, natural language processing, face recognition, deep learning, and predictive analytics. The main benefit of these services is that customers can get started with machine learning quickly, with no need to install or download any software on their own servers.

Enhancements in technology, growth in data volume, and a rise in IT spending in some developing regions are the major factors driving growth in the global market. Additionally, growing acceptance of cloud-based technologies and the increasing need to understand customer behavior are further boosting demand for machine learning as a service. Moreover, the high demand for private cloud in enterprises is likely to propel the growth of the market. Besides this, a widening range of applications and growing investments in the healthcare sector represent significant growth opportunities for the market in the near future. However, a scarcity of trained experts and several security concerns are expected to hamper market growth.

The regional analysis of the Machine Learning as a Service market covers the key regions of Asia Pacific, North America, Europe, Latin America, and the Rest of the World. In regions such as Asia-Pacific and the Middle East and Africa, the rise in usage of passenger vehicles sets the stage for growth in the Machine Learning as a Service market over the forecast period 2018-2025. Asia-Pacific is estimated to hold a prominent share of the market, with developing countries such as India and China significantly boosting demand. Europe, North America, and the Middle East and Africa are continuously witnessing infrastructural growth, which is fueling demand for machine learning as a service over the coming years. The Asia Pacific region is contributing to the growth of the global market and is anticipated to exhibit the highest growth rate (CAGR) over the forecast period 2018-2025.

The objective of the study is to define the market sizes of different segments and countries in recent years and to forecast the values for the coming eight years. The report is designed to incorporate both qualitative and quantitative aspects of the industry within each of the regions and countries involved in the study. Furthermore, the report provides detailed information about crucial aspects such as the driving factors and challenges that will define the future growth of the market. Additionally, the report incorporates available opportunities in micro markets for stakeholders to invest in, along with a detailed analysis of the competitive landscape and the product offerings of key players. The detailed segments and sub-segments of the market are listed below:

By Type:

Software Tools
Cloud and Web-based Application Programming Interfaces (APIs)
Others

By Application:

Manufacturing
Retail
Healthcare & Life Sciences
Telecom
BFSI
Others (Energy & Utilities, Education, Government)

By Regions:

North America
o The U.S.
o Canada
Europe
o UK
o Germany
Asia Pacific
o China
o India
o Japan
Latin America
o Brazil
o Mexico
Rest of the World

The leading Market players mainly include-

Google
IBM Corporation
Microsoft Corporation
Amazon Web Services
BigML
FICO
Yottamine Analytics
Ersatz Labs
Predictron Labs
AT&T
Sift Science

Target Audience of the Machine Learning as a Service Market Study:

Key Consulting Companies & Advisors
Large, medium-sized, and small enterprises
Venture capitalists
Value-Added Resellers (VARs)
Third-party knowledge providers
Investment bankers
Investors

To request a sample copy or view a summary of this report, click the link below:

About Digits N Markets:

Digits N Markets has a vast repository of the latest market research reports on trending topics, niche company profiles, market sizes, and other relevant data released by renowned publishers. We have access to databases covering niche markets and trending topics in various industries, and we update the data regularly to provide clients with recent statistics. Recent data and reports are featured on our websites, where clients can access them. Our clients benefit from the qualitative and quantitative insights in our reports, which support them in making concrete business decisions.

Contact Us:
Digits N Markets
410 E Santa Clara Street, Unit #762
San Jose, CA 95113
Phone: +1 408-622-0123

Visit link:

Global Machine Learning as a Service Market, Trends, Analysis, Opportunities, Share and Forecast 2019-2027 - NJ MMA News

Written by admin

February 29th, 2020 at 4:46 am

Posted in Machine Learning

TMR Projects Strong Growth for Property Management Software Market, AI and Machine Learning to Boost Valuation to ~US$ 2 Bn by 2027 – PRNewswire

Posted: at 4:46 am

without comments

ALBANY, New York, Feb. 26, 2020 /PRNewswire/ -- The property management software market will witness notable growth during the forecast period, at a CAGR of 7.0%. The growth is notable for various reasons, including emerging challenges in the sector such as relatively high investment costs. However, the growth rate suggests considerable growth in the investor pool in the housing market, which conventionally relied on local investments. The notable shift in the dynamics of demand will be a key trend to watch during the 2019-2029 period.

According to TMR analysts, the level of competition in the property management software market continues to be intense, as traditional players divert major resources to invest in new strategies and a digital-first outlook. Furthermore, the key players in the market continue to move towards AI-based techniques for growth, thanks to the rising value generated by automated, software-based property showcases.

Key Findings in the Property Management Software Market


Key Impediments for Property Management Software Market Players

Apart from the high costs associated with property management software, the lack of customization options and the growing complexity of the technology remain challenges for players in the property management software market. The problem is often two-fold: end users are often looking for intuitive solutions, yet their limited ability to understand the technology limits the end use of the products.

The rising influx of user-generated, customized property management solutions poses a major challenge for established players in the property management software market. The high costs of developing complex tools, and the ease of copying features, promise to make this an ongoing battle for players in the market. The growing demand for legal compliance for property is a promising avenue for established players to bring more legal expertise to bear, which would be difficult to emulate for newly established small players in the property management software market.


Property Management Software Market: Region-wise Analysis

The rising popularity of social media and influencer-driven marketing in North America promises significant growth for players in the property management software market. The North America region reached a valuation of US$600 million in 2018 and will likely hold a dominant lead in the global market during the forecast period. Asia Pacific, with rising disposable incomes and rising demand for posh gated communities, will grow at the fastest CAGR during the forecast period.

The study covers property management software market growth in 30+ countries, including the US, Canada, Germany, the United Kingdom, France, Italy, Russia, Poland, Benelux, the Nordics, China, Japan, India, and South Korea. Request a sample of the study.

Property Management Software Market: Competitive Analysis

Key companies in the property management software market include Chetu, Inc., Oracle Corporation, Alibaba Cloud, Eco Community Sdn Bhd, Yardi Systems Inc., and MRI Software Inc. The leading companies in the market are investing heavily in cloud technology, AI, and augmented reality to expand their global reach.


Explore Transparency Market Research's award-winning coverage of the Global IT & Telecom Industry:

Software Defined Everything Market- The global software defined everything market is witnessing substantial growth due to factors such as growing requirement for minimizing IT spending in line with changing business environments and increase in adoption of cloud services among enterprises.

Software Assurance Market- There is rising adoption of the Internet of Things (IoT) to collect and exchange data, which relies on a large amount of software. This is contributing to the increasing need for software assurance solutions in the market.

Software Construction Components Market- Increasing development and maintenance costs in the software industry are the major drivers identified for the software construction components market. The advent of the Internet of Things (IoT) has made software development a larger and more complex process.

Software Localization Tools Market- Increase in the focus of enterprises on worldwide expansion is a driving factor for the software localization tools market. Enterprises in different regions are focusing on expanding their presence across the globe.

Gain access to Market Ngage, an AI-powered, real-time business intelligence platform that goes beyond archaic research solutions to solve the complex strategy challenges that organizations face today. With over 15,000 global and country-wise reports across 50,000+ application areas, Market Ngage is your tool for research on the go. From tracking new investment avenues to keeping track of your competitors' moves, Market Ngage provides you with all the essential information to up your strategic game. Power your business with Market Ngage's actionable insights and remove the guesswork from making colossal decisions.

About Transparency Market Research

Transparency Market Research is a global market intelligence company, providing global business information reports and services. Our exclusive blend of quantitative forecasting and trends analysis provides forward-looking insight for thousands of decision makers. Our experienced team of analysts, researchers, and consultants use proprietary data sources and various tools and techniques to gather and analyze information.

Our data repository is continuously updated and revised by a team of research experts, so that it always reflects the latest trends and information. With a broad research and analysis capability, Transparency Market Research employs rigorous primary and secondary research techniques in developing distinctive data sets and research material for business reports.

Contact:
Transparency Market Research
State Tower, 90 State Street, Suite 700,
Albany, NY 12207, United States
USA - Canada Toll Free: 866-552-3453

SOURCE Transparency Market Research

Read the original post:

TMR Projects Strong Growth for Property Management Software Market, AI and Machine Learning to Boost Valuation to ~US$ 2 Bn by 2027 - PRNewswire

Written by admin

February 29th, 2020 at 4:46 am

Posted in Machine Learning

This AI Researcher Thinks We Have It All Wrong – Forbes

Posted: February 23, 2020 at 12:50 pm

without comments

Dr. Luis Perez-Breva

Luis Perez-Breva is an MIT professor and the faculty director of innovation teams at the MIT School of Engineering. He is also an entrepreneur and part of the Martin Trust Center for MIT Entrepreneurship. Luis works on how we can use technology to make our lives better and on how we can get new technology out into the world. On a recent AI Today podcast, Professor Perez-Breva got us to think deeply about our understanding of both artificial intelligence and machine learning.

Are we too focused on data?

Anyone who has been following artificial intelligence and machine learning knows the vital centrality of data. Without data, we can't train machine learning models. And without machine learning models, we don't have a way for systems to learn from experience. Surely, data needs to be the center of our attention to make AI systems a reality.

However, Dr. Perez-Breva thinks that we are overly focused on data, and perhaps that extensive focus is causing the goals of machine learning and AI to go astray. According to Luis, so much focus is put into obtaining data that we judge how good a machine learning system is by how much data was collected, how large the neural network is, and how much training data was used. When you collect a lot of data, you are using that data to build systems that are primarily driven by statistics. Luis says that we latch onto statistics when we feed AI so much data, and that we ascribe intelligence to systems when, in reality, all we have done is create large probabilistic systems that, by virtue of large data sets, exhibit things we ascribe to intelligence. He says that when our systems aren't learning as we want, the primary gut reaction is to give these AI systems more data, so that we don't have to think as much about the hard parts of generalization and intelligence.

Many would argue that there are some areas where you do need data to help teach AI. Computers are better able to learn image recognition and similar tasks by having more data. The more data, the better the networks, and the more accurate the results. On the podcast, Luis asked whether deep learning is so good that this works, or whether we simply have big enough data sets that image recognition now works. Basically: is it the algorithm, or just the sheer quantity of data, that is making this work?

Rather, what Luis argues is that if we can find a better way to structure the system as a whole, then the AI system should be able to reason through problems, even with very limited data. Luis compares using machine learning in every application to the retail world. He talks about how physical stores see the success of online stores and try to copy that success. One of the ways they are doing this is by using apps to navigate stores. Luis mentioned that he visited a Target where he had to use his phone to navigate the store, which was harder than simply looking at signs. Having a human to ask questions of and talk to is both faster and part of the experience of being in a brick-and-mortar retail location. Luis says he would much rather have a human to interact with at one of these locations than a computer.

Is the problem deep learning?

He compares this to machine learning by saying that machine learning has a very narrow application. If you try to apply machine learning to every aspect of AI, you will end up with issues like he did at Target: basically looking at neural networks as a hammer and every AI problem as a nail. No one technology or solution works for every application. Perhaps deep learning only works because of vast quantities of data? Maybe there's a better algorithm that can generalize better, apply knowledge learned in one domain to another more effectively, and use smaller amounts of data to get much higher-quality insights.

People have recently tried to automate many of the jobs that people do. Throughout history, Luis says, technology has killed businesses when it tries to replace humans. Technology and businesses are successful when they expand on what humans can do. Attempting to replace humans is a difficult task and one that is going to lead companies down the road to failure. As humans, he points out, we crave human interaction. Even the generation that is constantly on its technology greatly desires human interaction.

Luis also makes the point that many people mistakenly confuse automation and AI. Automation is using a computer to carry out specific tasks; it is not the creation of intelligence. This is a distinction many have drawn on several occasions. Indeed, it's the fear of automation and of fictional superintelligence that has many people worried about AI. Dr. Perez-Breva makes the point that many ascribe human characteristics to machines, but this should not be the case with AI systems.

Rather, he sees AI systems as more akin to a new species with a different mode of intelligence than humans. In his opinion, researchers are very far from creating an AI similar to what you will find in books and movies. He blames movies for giving people the impression of robots (AI) killing people and being dangerous technologies. While there are good robots in movies, there are few of them, and they get pushed to the side by bad robots. He points out that we need to move away from pushing these images of bad robots. Our focus needs to be on how artificial intelligence can help humans grow, and it would be beneficial if the movie-making industry could help with this. As such, AI should be thought of as a new intelligent species we're trying to create, not something that is meant to replace us.

A positive AI future

Despite the negative images and talk, Luis is sure that artificial intelligence is here to stay, at least for a while. So many companies have made large investments in AI that it would be difficult for them to simply stop using it or halt its development.

As a final question in the interview, Luis was asked where he sees the artificial intelligence industry going. Prefacing his answer with the observation that, based on the earlier discussion, people are investing in machine learning and not true artificial intelligence, Luis said that he is happy with the investment that businesses are making in what they call AI. He believes these investments will help the technology stay around for a minimum of four years.

Once we can stop comparing humans to artificial intelligence, Luis believes that we will see great advancements in what AI can do. He believes that AI has the power to work alongside humans to unlock knowledge and tasks that we weren't previously able to do. That point, he believes, is not that far away. We are getting closer to it every day.

Many of Luis's ideas run contrary to the popular beliefs of many people interested in the world of artificial intelligence. At the same time, he presents these ideas in a very logical and thought-provoking manner. Only time will tell what is right and where his ideas lead.


This AI Researcher Thinks We Have It All Wrong - Forbes

Written by admin

February 23rd, 2020 at 12:50 pm

Posted in Machine Learning

Removing the robot factor from AI – Gigabit Magazine – Technology News, Magazine and Website

Posted: at 12:50 pm

without comments

AI and machine learning have something of an image problem.

They've never been quite so widely discussed as topics, nor, arguably, has their potential been so widely debated. This is, to some extent, part of the problem. Artificial intelligence can, still, be anything, achieve anything. But until its results are put into practice for people, it remains a misunderstood concept, especially to the layperson.

While well-established industry thought leaders are rightly championing the fact that AI has the potential to be transformative and capable of a wide range of solutions, the lack of context for most people is fuelling fears that it is simply going to replace people's roles and take over tasks wholesale. It also ignores the fact that AI applications have been quietly assisting people's jobs, in a light-touch manner, for some time now, and people are still in those roles.

Many people imagine AI to be something it is not. Given the technology is still in a fast-development phase, some people think it is helpful to consider it a type of 'plug and play', black-box technology. Some believe this helps people put it into the context of how it will work and what it will deliver for businesses. In our opinion, this limits a true understanding of its potential and what it could be delivering for companies day in, day out.

The hyperbole is also not helping. Statements such as 'we use AI' and 'our products are AI-driven' have already become well-worn by enthusiastic salespeople and marketeers. While there's a great sales case to be made by that exciting assertion, it rarely speaks the truth about the situation. What is really meant by the current use of 'artificial intelligence'? Arguably, AI is not yet a thing in its own right; i.e. the capability of machines to do the things which people do instinctively, which machines instinctively do not. Instead of being excited by hearing the phrase 'we do AI!', people should see it as a red flag to dig deeper into the technology and the AI capability in question.


Machine learning, similarly, doesn't benefit from sci-fi associations or big sales-patter bravado. In its simplest form, while machine learning sounds like a defined and independent process, it is actually a technique to deliver AI functions. It's maths, essentially, applied alongside data, processing power and technology to deliver an AI capability. Machine learning models don't execute actions or do anything themselves, unless people put them to use. They are still human tools, to be deployed by someone to undertake a specific action.

The tools and models are only as good as the human knowledge and skills programming them. People, especially in the legal sectors autologyx works with, are smart, adaptable and vastly knowledgeable. They can quickly shift from one case to another, and have their own methods and processes for approaching problem solving in the workplace. Where AI is coming in to lift the load is on lengthy, detailed, and highly repetitive tasks such as contract renewals. Humans can get understandably bored when reviewing vast volumes of highly repetitive contracts to change just a few clauses and update the document. A machine learning solution does not get bored, and performs consistently with a high degree of accuracy, freeing those legal teams up to work on more interesting, varied, or complicated casework.

Together, AI, machine learning and automation are the arms and armour that businesses across a range of sectors need to acquire to adapt and continue to compete in the future. The future of the legal industry, for instance, is still a human one, where knowledge of people will continue to be an asset. AI in that sector is more focused on codifying and leveraging that intelligence, and while the machine and AI models learn and grow from people, those people will continue to grow and expand their knowledge within the sector too. Today, AI and ML technologies are only as good as the people power programming them.

As Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, put it: 'AI is neither good nor evil. It's a tool. A technology for us to use. How we choose to apply it is entirely up to us.'

By Ben Stoneham, founder and CEO, autologyx


Removing the robot factor from AI - Gigabit Magazine - Technology News, Magazine and Website

Written by admin

February 23rd, 2020 at 12:50 pm

Posted in Machine Learning

AI Is Top Game-Changing Technology In Healthcare Industry – Forbes

Posted: at 12:50 pm

without comments

Of the many ingredients that go into quality healthcare, comprehensive patient data is close to the top of the list. No one knows this more than Mayur Saxena, CEO and founder of Droice Labs. Saxena created his startup while pursuing his doctorate at Columbia University and working at a healthcare company conducting clinical trials on new medication. He's energized by the plethora of opportunities to improve healthcare using artificial intelligence (AI) and machine learning.


"Patient data is notoriously disorganized and complex," he said. "With machine learning, healthcare professionals can organize that information to better understand the disease of every patient and reach them faster with interventions that improve their lives. It's an amazing feeling when you talk with someone who's recovered from an illness because they received the right care."

The idea behind Droice is to make messy data neat, so people can spend less time organizing it and more time analyzing it.

Insights drive personalized patient care

The startup has collected data from 50 million patients while working with healthcare providers, payors, and government organizations in the U.S. and Europe. Healthcare professionals in hospitals, pharmaceutical firms, medical device manufacturing, and insurance rely on Droice Labs' natural language understanding (NLU) technology. NLU makes sense of patient information in multiple languages from sources such as electronic medical records (EMRs), insurance claims, research reports, and medical devices.

"Our machine learning system takes all the data about an individual into account and breaks it down so that a doctor, pharmaceutical scientist, or healthcare insurer can understand patients better and faster," said Saxena. "Instead of repetitive, disparate one-on-one diagnoses and follow-up care, we're automating personalized care for a much larger patient population." With shared insights across a large patient population, physicians can chart disease progress and prescribe the best treatment plan. Clinical research into new drugs that took years could be reduced to days or weeks.

Saxena said that one hospital reduced the amount of time it took to arrive at an appropriate diagnosis for patients by over 20 percent.

SAP.iO Foundry opens up world of healthcare opportunities

Droice Labs recently participated in the latest healthcare-focused accelerator program at SAP.iO Foundry New York. It was one of seven up-and-coming startups working with hospital system providers, employee health and wellness solutions, medical devices, and health IT.

"We've learned so much about customers in the healthcare industry from SAP's sales and product teams," said Saxena. "These large organizations have unique needs, and we're grateful for the opportunity to partner with SAP, a company with a massive presence across so many geographies. We've gained valuable insights about strategic global selling and scaling our technology to meet the unique requirements of these customers."

The Droice Labs machine learning platform is now downloadable on the SAP App Center.

Turning long-time passion into thriving startup

Droice Labs reflects Saxena's long-time personal and career commitment to healthcare. After earning his undergraduate degree in bioengineering and biomedical engineering, he worked in high-performance computing in Singapore before arriving in the United States. That's when he acted on his passion, exploring how AI and machine learning can help improve patient care and potentially eradicate disease.

"We're looking at data from hundreds of thousands of patients a day, helping improve their care pathways across the healthcare system," said Saxena. "We have the technology to work with patient data at scale. I'm most excited about working together with recognized healthcare experts using state-of-the-art technology to address major challenges in this complicated, regulated industry."

Digitally trustworthy strategy

In an environment where patient concerns and regulations around data control continue to increase, Saxena emphasized his companys strategy of digital trust.

"Everything we do is designed to respect individual patient privacy," he said. "We don't possess related identifying data on patients, and we remove any identifiers. Working in a mission-critical environment like healthcare brings a set of responsibilities. If there is a population suffering from disease, and by looking at their information we can partner with healthcare providers to help make their quality of life better, that's what we'll do. But we don't participate in business models targeted to specific individuals."

Saxena expected his company's rapid growth trajectory to continue, and it was easy to see why. According to Gartner's 2020 CIO Survey, AI is the healthcare industry's top game-changing technology. These analysts predicted 75 percent of healthcare delivery organizations will invest in an AI capability to explicitly improve either operational performance or clinical outcomes by 2021.

Originally posted here:

AI Is Top Game-Changing Technology In Healthcare Industry - Forbes

Written by admin

February 23rd, 2020 at 12:50 pm

Posted in Machine Learning

What is machine learning? Everything you need to know | ZDNet

Posted: February 22, 2020 at 8:45 pm

without comments

Machine learning is enabling computers to tackle tasks that have, until now, only been carried out by people.

From driving cars to translating speech, machine learning is driving an explosion in the capabilities of artificial intelligence -- helping software make sense of the messy and unpredictable real world.

But what exactly is machine learning and what is making the current boom in machine learning possible?

At a very high level, machine learning is the process of teaching a computer system how to make accurate predictions when fed data.

Those predictions could be answering whether a piece of fruit in a photo is a banana or an apple, spotting people crossing the road in front of a self-driving car, deciding whether the use of the word 'book' in a sentence relates to a paperback or a hotel reservation, judging whether an email is spam, or recognizing speech accurately enough to generate captions for a YouTube video.

The key difference from traditional computer software is that a human developer hasn't written code that instructs the system how to tell the difference between the banana and the apple.

Instead a machine-learning model has been taught how to reliably discriminate between the fruits by being trained on a large amount of data, in this instance likely a huge number of images labelled as containing a banana or an apple.

Data, and lots of it, is the key to making machine learning possible.
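To make the distinction concrete, here is a minimal, self-contained sketch (not from the article) of learning from labelled examples instead of hand-writing rules: a nearest-centroid classifier that separates "bananas" from "apples" using two invented features, length in centimetres and a 0-1 yellowness score. The feature values and labels below are made up for illustration.

```python
# Toy illustration: instead of hand-coding an "if yellow then banana" rule,
# the model learns a summary of each class from labelled examples.

def train(examples):
    """Compute the mean feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Return the label whose centroid is nearest (squared distance)."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(features, centroids[label]))
    return min(centroids, key=dist)

# Made-up training data: (length_cm, yellowness) -> label.
training_data = [
    ((18.0, 0.9), "banana"), ((20.0, 0.8), "banana"),
    ((7.0, 0.1), "apple"),   ((8.0, 0.2), "apple"),
]
model = train(training_data)
print(predict(model, (19.0, 0.85)))  # a long, yellow fruit -> "banana"
```

No fruit-specific rule is ever written; the decision boundary falls out of the labelled training data, which is exactly the difference from traditional software described above.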

Machine learning may have enjoyed enormous success of late, but it is just one method for achieving artificial intelligence.

At the birth of the field of AI in the 1950s, AI was defined as any machine capable of performing a task that would typically require human intelligence.

AI systems will generally demonstrate at least some of the following traits: planning, learning, reasoning, problem solving, knowledge representation, perception, motion, and manipulation and, to a lesser extent, social intelligence and creativity.

Alongside machine learning, there are various other approaches used to build AI systems, including evolutionary computation, where algorithms undergo random mutations and combinations between generations in an attempt to "evolve" optimal solutions, and expert systems, where computers are programmed with rules that allow them to mimic the behavior of a human expert in a specific domain, for example an autopilot system flying a plane.
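As a toy illustration of the evolutionary-computation approach mentioned above, the sketch below mutates a bit string across generations and keeps mutations that improve a made-up fitness function (the count of ones). Real systems use populations, crossover, and far richer fitness measures; this is only the minimal mutate-and-select loop.

```python
import random

random.seed(0)  # deterministic, purely for reproducibility of the example

def fitness(bits):
    """Toy fitness: number of ones; the 'evolved' optimum is all ones."""
    return sum(bits)

def evolve(length=20, generations=300):
    # Start from a random bit string.
    best = [random.randint(0, 1) for _ in range(length)]
    for _ in range(generations):
        child = best[:]
        child[random.randrange(length)] ^= 1  # random mutation: flip one bit
        if fitness(child) >= fitness(best):   # selection: keep improvements
            best = child
    return best

solution = evolve()
print(fitness(solution))
```

After a few hundred generations the string drifts toward the optimum without anyone ever programming "set every bit to one" — the "evolve toward an optimal solution" idea in miniature.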

Machine learning is generally split into two main categories: supervised and unsupervised learning.

This approach basically teaches machines by example.

During training for supervised learning, systems are exposed to large amounts of labelled data, for example images of handwritten figures annotated to indicate which number they correspond to. Given sufficient examples, a supervised-learning system would learn to recognize the clusters of pixels and shapes associated with each number and eventually be able to recognize handwritten numbers, able to reliably distinguish between the numbers 9 and 4 or 6 and 8.

However, training these systems typically requires huge amounts of labelled data, with some systems needing to be exposed to millions of examples to master a task.
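To make this concrete, here is a minimal sketch of the supervised workflow, using the scikit-learn library and its bundled dataset of labelled handwritten digits (the library and dataset are illustrative choices, not something the systems described here actually use):

```python
# Supervised learning in miniature: fit a classifier on labelled
# images of handwritten digits, then test it on unseen examples.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 1,797 8x8-pixel images, each labelled 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)             # learn from labelled examples
accuracy = model.score(X_test, y_test)  # accuracy on unseen digits
```

Real systems differ mainly in scale: millions of labelled examples and far more expressive models, but the train-on-labels, test-on-unseen pattern is the same.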

As a result, the datasets used to train these systems can be vast, with Google's Open Images Dataset having about nine million images, its labeled video repository YouTube-8M linking to seven million labeled videos and ImageNet, one of the early databases of this kind, having more than 14 million categorized images. The size of training datasets continues to grow, with Facebook recently announcing it had compiled 3.5 billion images publicly available on Instagram, using hashtags attached to each image as labels. Using one billion of these photos to train an image-recognition system yielded record levels of accuracy -- of 85.4 percent -- on ImageNet's benchmark.

The laborious process of labeling the datasets used in training is often carried out using crowdworking services, such as Amazon Mechanical Turk, which provides access to a large pool of low-cost labor spread across the globe. For instance, ImageNet was put together over two years by nearly 50,000 people, mainly recruited through Amazon Mechanical Turk. However, Facebook's approach of using publicly available data to train systems could provide an alternative way of training systems using billion-strong datasets without the overhead of manual labeling.

In contrast, unsupervised learning tasks algorithms with identifying patterns in data, trying to spot similarities that split that data into categories.

An example might be Airbnb clustering together houses available to rent by neighborhood, or Google News grouping together stories on similar topics each day.

The algorithm isn't designed to single out specific types of data, it simply looks for data that can be grouped by its similarities, or for anomalies that stand out.
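A toy sketch of that idea, using the k-means clustering algorithm from scikit-learn on a handful of invented two-dimensional points:

```python
import numpy as np
from sklearn.cluster import KMeans

# Two clearly separated groups of 2-D points (e.g. listings in two
# neighbourhoods); no labels are supplied.
points = np.array([[1.0, 1.1], [0.9, 1.0], [1.1, 0.9],
                   [8.0, 8.2], [7.9, 8.1], [8.1, 7.9]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
labels = kmeans.labels_  # each point is assigned to one of two clusters
```

The algorithm discovers the two groups purely from the similarity of the points to each other.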

The importance of huge sets of labelled data for training machine-learning systems may diminish over time, due to the rise of semi-supervised learning.

As the name suggests, the approach mixes supervised and unsupervised learning. The technique relies upon using a small amount of labelled data and a large amount of unlabelled data to train systems. The labelled data is used to partially train a machine-learning model, and then that partially trained model is used to label the unlabelled data, a process called pseudo-labelling. The model is then trained on the resulting mix of the labelled and pseudo-labelled data.
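The pseudo-labelling loop can be sketched as follows, again using scikit-learn's digits dataset as a stand-in and pretending that only the first 200 examples carry labels:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
labelled = slice(0, 200)        # pretend only these examples have labels
unlabelled = slice(200, 1500)   # a larger pool with labels withheld

model = LogisticRegression(max_iter=2000)
model.fit(X[labelled], y[labelled])     # partially train on labelled data
pseudo = model.predict(X[unlabelled])   # pseudo-labelling step

X_mix = np.vstack([X[labelled], X[unlabelled]])
y_mix = np.concatenate([y[labelled], pseudo])
model.fit(X_mix, y_mix)                 # retrain on the combined data

accuracy = model.score(X[1500:], y[1500:])  # evaluate on held-out examples
```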

The viability of semi-supervised learning has been boosted recently by Generative Adversarial Networks (GANs), machine-learning systems that can use labelled data to generate completely new data, for example creating new images of Pokemon from existing images, which in turn can be used to help train a machine-learning model.

Were semi-supervised learning to become as effective as supervised learning, then access to huge amounts of computing power may end up being more important for successfully training machine-learning systems than access to large, labelled datasets.

A third approach, reinforcement learning, can be understood by thinking about how someone might learn to play an old-school computer game for the first time, when they aren't familiar with the rules or how to control the game. While they may be a complete novice, eventually, by looking at the relationship between the buttons they press, what happens on screen and their in-game score, their performance will get better and better.

An example of reinforcement learning is Google DeepMind's Deep Q-network, which has beaten humans in a wide range of vintage video games. The system is fed pixels from each game and determines various information about the state of the game, such as the distance between objects on screen. It then considers how the state of the game and the actions it performs in game relate to the score it achieves.

Over the process of many cycles of playing the game, eventually the system builds a model of which actions will maximize the score in which circumstance, for instance, in the case of the video game Breakout, where the paddle should be moved to in order to intercept the ball.
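Stripped of the deep neural network, the underlying idea can be sketched with tabular Q-learning on an invented five-state corridor, where the agent is rewarded only for reaching the goal state:

```python
# Toy reinforcement learning: tabular Q-learning on a 5-state corridor
# where moving right toward the goal (state 4) eventually earns a reward.
import random

random.seed(0)
n_states, actions = 5, [0, 1]          # action 0 = left, 1 = right
Q = [[0.0, 0.0] for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(200):
    state = 0
    while state != 4:                  # play until the goal is reached
        action = random.choice(actions) if random.random() < epsilon \
            else max(actions, key=lambda a: Q[state][a])
        next_state = min(state + 1, 4) if action == 1 else max(state - 1, 0)
        reward = 1.0 if next_state == 4 else 0.0
        # Nudge the action-value estimate toward reward plus
        # the discounted value of the best follow-up action
        Q[state][action] += alpha * (
            reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

# After many episodes, "move right" should score best in every state
policy = [max(actions, key=lambda a: Q[s][a]) for s in range(4)]
```

Deep Q-networks replace the table `Q` with a neural network so that the same trial-and-error loop can cope with inputs as rich as raw game pixels.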

Everything begins with training a machine-learning model, a mathematical function capable of repeatedly modifying how it operates until it can make accurate predictions when given fresh data.

Before training begins, you first have to choose which data to gather and decide which features of the data are important.

A hugely simplified example of what data features are is given in this explainer by Google, where a machine learning model is trained to recognize the difference between beer and wine, based on two features, the drinks' color and their alcoholic volume (ABV).

Each drink is labelled as a beer or a wine, and then the relevant data is collected, using a spectrometer to measure their color and a hydrometer to measure their alcohol content.

An important point to note is that the data has to be balanced, in this instance to have a roughly equal number of examples of beer and wine.

The gathered data is then split, into a larger proportion for training, say about 70 percent, and a smaller proportion for evaluation, say the remaining 30 percent. This evaluation data allows the trained model to be tested to see how well it is likely to perform on real-world data.

Before training gets underway there will generally also be a data-preparation step, during which processes such as deduplication, normalization and error correction will be carried out.
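Those splitting and preparation steps might look like this with the pandas library (the drinks table below is invented for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    "color": [0.61, 0.61, 0.12, 0.35, 0.35],  # measured color value
    "abv":   [5.0,  5.0,  12.5, 4.2,  4.2],   # alcohol by volume, %
    "label": ["beer", "beer", "wine", "beer", "beer"],
})

df = df.drop_duplicates()  # deduplication: two rows were exact repeats
# Normalisation: rescale ABV into the range [0, 1]
df["abv"] = (df["abv"] - df["abv"].min()) / (df["abv"].max() - df["abv"].min())

train = df.sample(frac=0.7, random_state=0)  # ~70 percent for training
test = df.drop(train.index)                  # the rest for evaluation
```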

The next step will be choosing an appropriate machine-learning model from the wide variety available. Each has strengths and weaknesses depending on the type of data, for example some are suited to handling images, some to text, and some to purely numerical data.

Basically, the training process involves the machine-learning model automatically tweaking how it functions until it can make accurate predictions from data, in the Google example, correctly labeling a drink as beer or wine when the model is given a drink's color and ABV.

A good way to explain the training process is to consider an example using a simple machine-learning model, known as linear regression with gradient descent. In the following example, the model is used to estimate how many ice creams will be sold based on the outside temperature.

Imagine taking past data showing ice cream sales and outside temperature, and plotting that data against each other on a scatter graph -- basically creating a scattering of discrete points.

To predict how many ice creams will be sold in future based on the outdoor temperature, you can draw a line that passes through the middle of all these points, similar to the illustration below.

Once this is done, ice cream sales can be predicted at any temperature by finding the point at which the line passes through a particular temperature and reading off the corresponding sales at that point.

Bringing it back to training a machine-learning model, in this instance training a linear regression model would involve adjusting the vertical position and slope of the line until it lies in the middle of all of the points on the scatter graph.

At each step of the training process, the vertical distance of each of these points from the line is measured. If a change in slope or position of the line results in the distance to these points increasing, then the slope or position of the line is changed in the opposite direction, and a new measurement is taken.

In this way, via many tiny adjustments to the slope and the position of the line, the line will keep moving until it eventually settles in a position which is a good fit for the distribution of all these points, as seen in the video below. Once this training process is complete, the line can be used to make accurate predictions for how temperature will affect ice cream sales, and the machine-learning model can be said to have been trained.
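Written out in code, that loop looks something like this (the sales figures are invented, and each gradient-descent step nudges the slope and intercept in whichever direction shrinks the error):

```python
# Fit a line (sales = slope * temperature + intercept) by gradient descent.
temps = [15.0, 20.0, 25.0, 30.0, 35.0]  # outside temperature, in C
sales = [32.0, 43.0, 51.0, 62.0, 71.0]  # ice creams sold (invented data)

slope, intercept = 0.0, 0.0             # start with a flat line
learning_rate = 0.001

for step in range(100_000):             # many tiny adjustments
    # Mean squared-error gradients: which way is each point pulling the line?
    grad_slope = sum(2 * (slope * x + intercept - y) * x
                     for x, y in zip(temps, sales)) / len(temps)
    grad_intercept = sum(2 * (slope * x + intercept - y)
                         for x, y in zip(temps, sales)) / len(temps)
    # Move each parameter in the direction that reduces the error
    slope -= learning_rate * grad_slope
    intercept -= learning_rate * grad_intercept

predicted = slope * 28 + intercept      # predicted sales at 28 C
```

With these numbers the line settles close to the least-squares fit (a slope of about 1.94 ice creams per degree), and the model can then be read off at any temperature.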

While training for more complex machine-learning models such as neural networks differs in several respects, it is similar in that it also uses a "gradient descent" approach, where the value of "weights" that modify input data are repeatedly tweaked until the output values produced by the model are as close as possible to what is desired.

Once training of the model is complete, the model is evaluated using the remaining data that wasn't used during training, helping to gauge its real-world performance.

To further improve performance, training parameters can be tuned. An example might be altering the extent to which the "weights" are altered at each step in the training process.

A very important group of algorithms for both supervised and unsupervised machine learning are neural networks. These underlie much of machine learning, and while simple models like linear regression can be used to make predictions based on a small number of data features, as in the Google example with beer and wine, neural networks are useful when dealing with large sets of data with many features.

Neural networks, whose structure is loosely inspired by that of the brain, are interconnected layers of algorithms, called neurons, which feed data into each other, with the output of the preceding layer being the input of the subsequent layer.

Each layer can be thought of as recognizing different features of the overall data. For instance, consider the example of using machine learning to recognize handwritten numbers between 0 and 9. The first layer in the neural network might measure the color of the individual pixels in the image, the second layer could spot shapes, such as lines and curves, the next layer might look for larger components of the written number -- for example, the rounded loop at the base of the number 6. This carries on all the way through to the final layer, which will output the probability that a given handwritten figure is a number between 0 and 9.


The network learns how to recognize each component of the numbers during the training process, by gradually tweaking the importance of data as it flows between the layers of the network. This is possible due to each link between layers having an attached weight, whose value can be increased or decreased to alter that link's significance. At the end of each training cycle the system will examine whether the neural network's final output is getting closer or further away from what is desired -- for instance is the network getting better or worse at identifying a handwritten number 6. To close the gap between the actual output and desired output, the system will then work backwards through the neural network, altering the weights attached to all of these links between layers, as well as an associated value called bias. This process is called back-propagation.

Eventually this process will settle on values for these weights and biases that will allow the network to reliably perform a given task, such as recognizing handwritten numbers, and the network can be said to have "learned" how to carry out a specific task.

An illustration of the structure of a neural network and how training works.
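A bare-bones version of that training loop, sketched with NumPy: a one-hidden-layer network learns XOR, a task no single straight line can solve, by repeatedly nudging its weights and biases in the direction that reduces the output error:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # input -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # hidden -> output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(8000):
    # Forward pass: the output of each layer feeds the next
    hidden = sigmoid(X @ W1 + b1)
    out = sigmoid(hidden @ W2 + b2)
    # Backward pass (back-propagation): work back through the layers,
    # computing how much each weight and bias contributed to the error
    d_out = out - y                   # output-layer error signal
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hidden
    b1 -= 0.5 * d_hidden.sum(axis=0)

predictions = (out > 0.5).astype(int).ravel()
```

Deep-learning systems are this same recipe at vastly greater scale: many more layers, millions of weights, and huge amounts of training data.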

A subset of machine learning is deep learning, where neural networks are expanded into sprawling networks with a huge number of layers that are trained using massive amounts of data. It is these deep neural networks that have fueled the current leap forward in the ability of computers to carry out tasks like speech recognition and computer vision.

There are various types of neural networks, with different strengths and weaknesses. Recurrent neural networks are a type of neural net particularly well suited to language processing and speech recognition, while convolutional neural networks are more commonly used in image recognition. The design of neural networks is also evolving, with researchers recently devising a more efficient design for an effective type of deep neural network called long short-term memory or LSTM, allowing it to operate fast enough to be used in on-demand systems like Google Translate.

The AI technique of evolutionary algorithms is even being used to optimize neural networks, thanks to a process called neuroevolution. The approach was recently showcased by Uber AI Labs, which released papers on using genetic algorithms to train deep neural networks for reinforcement learning problems.

While machine learning is not a new technique, interest in the field has exploded in recent years.

This resurgence comes on the back of a series of breakthroughs, with deep learning setting new records for accuracy in areas such as speech and language recognition, and computer vision.

What's made these successes possible are primarily two factors, one being the vast quantities of images, speech, video and text that are accessible to researchers looking to train machine-learning systems.

But even more important is the availability of vast amounts of parallel-processing power, courtesy of modern graphics processing units (GPUs), which can be linked together into clusters to form machine-learning powerhouses.

Today anyone with an internet connection can use these clusters to train machine-learning models, via cloud services provided by firms like Amazon, Google and Microsoft.

As the use of machine-learning has taken off, so companies are now creating specialized hardware tailored to running and training machine-learning models. An example of one of these custom chips is Google's Tensor Processing Unit (TPU), the latest version of which accelerates the rate at which machine-learning models built using Google's TensorFlow software library can infer information from data, as well as the rate at which they can be trained.

These chips are not just used to train models for Google DeepMind and Google Brain, but also the models that underpin Google Translate and the image recognition in Google Photos, as well as services that allow the public to build machine learning models using Google's TensorFlow Research Cloud. The second generation of these chips was unveiled at Google's I/O conference in May last year, with an array of these new TPUs able to train a Google machine-learning model used for translation in half the time it would take an array of the top-end GPUs, and the recently announced third-generation TPUs able to accelerate training and inference even further.

As hardware becomes increasingly specialized and machine-learning software frameworks are refined, it's becoming increasingly common for ML tasks to be carried out on consumer-grade phones and computers, rather than in cloud datacenters. In the summer of 2018, Google took a step towards offering the same quality of automated translation on phones that are offline as is available online, by rolling out local neural machine translation for 59 languages to the Google Translate app for iOS and Android.

Perhaps the most famous demonstration of the efficacy of machine-learning systems was the 2016 triumph of the Google DeepMind AlphaGo AI over a human grandmaster in Go, a feat that wasn't expected until 2026. Go is an ancient Chinese game whose complexity bamboozled computers for decades. Go offers about 200 possible moves per turn, compared to about 20 in chess. Over the course of a game of Go, there are so many possible moves that searching through each of them in advance to identify the best play is too costly from a computational standpoint. Instead, AlphaGo was trained to play the game by taking 30 million moves played by human experts in Go games and feeding them into deep-learning neural networks.

Training the deep-learning networks needed can take a very long time, requiring vast amounts of data to be ingested and iterated over as the system gradually refines its model in order to achieve the best outcome.

However, more recently Google refined the training process with AlphaGo Zero, a system that played "completely random" games against itself, and then learnt from the results. At last year's prestigious Neural Information Processing Systems (NIPS) conference, Google DeepMind CEO Demis Hassabis revealed AlphaGo had also mastered the games of chess and shogi.

DeepMind continue to break new ground in the field of machine learning. In July 2018, DeepMind reported that its AI agents had taught themselves how to play the 1999 multiplayer 3D first-person shooter Quake III Arena, well enough to beat teams of human players. These agents learned how to play the game using no more information than the human players, with their only input being the pixels on the screen as they tried out random actions in game, and feedback on their performance during each game.

More recently DeepMind demonstrated an AI agent capable of superhuman performance across multiple classic Atari games, an improvement over earlier approaches where each AI agent could only perform well at a single game. DeepMind researchers say these general capabilities will be important if AI research is to tackle more complex real-world domains.

Machine learning systems are used all around us, and are a cornerstone of the modern internet.

Machine-learning systems are used to recommend which product you might want to buy next on Amazon or which video you may want to watch on Netflix.

Every Google search uses multiple machine-learning systems, to understand the language in your query through to personalizing your results, so fishing enthusiasts searching for "bass" aren't inundated with results about guitars. Similarly Gmail's spam and phishing-recognition systems use machine-learning trained models to keep your inbox clear of rogue messages.
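Gmail's actual filters are proprietary, but the general idea behind this kind of text classification can be sketched with bag-of-words features and a naive Bayes model from scikit-learn (the messages and labels below are invented):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = ["win a free prize now", "claim your free money",
            "meeting moved to 3pm", "lunch tomorrow?",
            "free prize waiting, claim now", "notes from today's meeting"]
labels = ["spam", "spam", "ham", "ham", "spam", "ham"]

# Word counts feed a naive Bayes classifier that learns which
# words are characteristic of spam versus legitimate mail
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

verdict = model.predict(["free money prize"])[0]  # expected: spam
```

Production systems train the same kind of pipeline on vastly larger corpora and far richer features, but the learn-from-labelled-examples principle is identical.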

Some of the most obvious demonstrations of the power of machine learning are virtual assistants, such as Apple's Siri, Amazon's Alexa, the Google Assistant, and Microsoft's Cortana.

Each relies heavily on machine learning to support their voice recognition and ability to understand natural language, as well as needing an immense corpus to draw upon to answer queries.

But beyond these very visible manifestations of machine learning, systems are starting to find a use in just about every industry. These exploitations include: computer vision for driverless cars, drones and delivery robots; speech and language recognition and synthesis for chatbots and service robots; facial recognition for surveillance in countries like China; helping radiologists to pick out tumors in X-rays, aiding researchers in spotting genetic sequences related to diseases and identifying molecules that could lead to more effective drugs in healthcare; allowing for predictive maintenance on infrastructure by analyzing IoT sensor data; underpinning the computer vision that makes the cashierless Amazon Go supermarket possible; offering reasonably accurate transcription and translation of speech for business meetings -- the list goes on and on.

Deep-learning could eventually pave the way for robots that can learn directly from humans, with researchers from Nvidia recently creating a deep-learning system designed to teach a robot how to carry out a task, simply by observing that job being performed by a human.

As you'd expect, the choice and breadth of data used to train systems will influence the tasks they are suited to.

For example, in 2016 Rachael Tatman, a National Science Foundation Graduate Research Fellow in the Linguistics Department at the University of Washington, found that Google's speech-recognition system performed better for male voices than female ones when auto-captioning a sample of YouTube videos, a result she ascribed to 'unbalanced training sets' with a preponderance of male speakers.

As machine-learning systems move into new areas, such as aiding medical diagnosis, the possibility of systems being skewed towards offering a better service or fairer treatment to particular groups of people will likely become more of a concern.

A heavily recommended course for beginners to teach themselves the fundamentals of machine learning is this free Stanford University and Coursera lecture series by AI expert and Google Brain founder Andrew Ng.

Another highly-rated free online course, praised for both the breadth of its coverage and the quality of its teaching, is this EdX and Columbia University introduction to machine learning, although students do mention it requires a solid knowledge of math up to university level.

Technologies designed to allow developers to teach themselves about machine learning are increasingly common, from AWS' deep-learning enabled camera DeepLens to Google's Raspberry Pi-powered AIY kits.

All of the major cloud platforms -- Amazon Web Services, Microsoft Azure and Google Cloud Platform -- provide access to the hardware needed to train and run machine-learning models, with Google letting Cloud Platform users test out its Tensor Processing Units -- custom chips whose design is optimized for training and running machine-learning models.

This cloud-based infrastructure includes the data stores needed to hold the vast amounts of training data, services to prepare that data for analysis, and visualization tools to display the results clearly.

Newer services even streamline the creation of custom machine-learning models, with Google recently revealing a service that automates the creation of AI models, called Cloud AutoML. This drag-and-drop service builds custom image-recognition models and requires the user to have no machine-learning expertise, similar to Microsoft's Azure Machine Learning Studio. In a similar vein, Amazon recently unveiled new AWS offerings designed to accelerate the process of training up machine-learning models.

For data scientists, Google's Cloud ML Engine is a managed machine-learning service that allows users to train, deploy and export custom machine-learning models based either on Google's open-sourced TensorFlow ML framework or the open neural network framework Keras, and which now can be used with the Python library scikit-learn and XGBoost.

Database admins without a background in data science can use Google's BigQueryML, a beta service that allows admins to call trained machine-learning models using SQL commands, allowing predictions to be made in-database, which is simpler than exporting data to a separate machine learning and analytics environment.

For firms that don't want to build their own machine-learning models, the cloud platforms also offer AI-powered, on-demand services -- such as voice, vision, and language recognition. Microsoft Azure stands out for the breadth of on-demand services on offer, closely followed by Google Cloud Platform and then AWS.

Meanwhile IBM, alongside its more general on-demand offerings, is also attempting to sell sector-specific AI services aimed at everything from healthcare to retail, grouping these offerings together under its IBM Watson umbrella.

Early in 2018, Google expanded its machine-learning driven services to the world of advertising, releasing a suite of tools for making more effective ads, both digital and physical.

While Apple doesn't enjoy the same reputation for cutting-edge speech recognition, natural language processing and computer vision as Google and Amazon, it is investing in improving its AI services, recently putting Google's former AI chief in charge of machine learning and AI strategy across the company, including the development of its assistant Siri and its machine-learning framework Core ML.

In September 2018, NVIDIA launched a combined hardware and software platform designed to be installed in datacenters that can accelerate the rate at which trained machine-learning models can carry out voice, video and image recognition, as well as other ML-related services.

The NVIDIA TensorRT Hyperscale Inference Platform uses NVIDIA Tesla T4 GPUs, which deliver up to 40x the performance of CPUs when using machine-learning models to make inferences from data, and the TensorRT software platform, which is designed to optimize the performance of trained neural networks.

There are a wide variety of software frameworks for getting started with training and running machine-learning models, typically for the programming languages Python, R, C++, Java and MATLAB.

Famous examples include Google's TensorFlow, the open-source library Keras, the Python library scikit-learn, the deep-learning framework Caffe and the machine-learning library Torch.

Read the original:

What is machine learning? Everything you need to know | ZDNet

Written by admin

February 22nd, 2020 at 8:45 pm

Posted in Machine Learning

Why 2020 will be the Year of Automated Machine Learning – Gigabit Magazine – Technology News, Magazine and Website

Posted: at 8:45 pm


As the fuel that powers their ongoing digital transformation efforts, businesses everywhere are looking for ways to derive as much insight as possible from their data. The accompanying increased demand for advanced predictive and prescriptive analytics has, in turn, led to a call for more data scientists proficient with the latest artificial intelligence (AI) and machine learning (ML) tools.

But such highly-skilled data scientists are expensive and in short supply. In fact, they're such a precious resource that the phenomenon of the citizen data scientist has recently arisen to help close the skills gap. A complementary role, rather than a direct replacement, citizen data scientists lack specific advanced data science expertise. However, they are capable of generating models using state-of-the-art diagnostic and predictive analytics. And this capability is partly due to the advent of accessible new technologies such as automated machine learning (AutoML) that now automate many of the tasks once performed by data scientists.

Algorithms and automation

According to a recent Harvard Business Review article, "Organisations have shifted towards amplifying predictive power by coupling big data with complex automated machine learning. AutoML, which uses machine learning to generate better machine learning, is advertised as affording opportunities to democratise machine learning by allowing firms with limited data science expertise to develop analytical pipelines capable of solving sophisticated business problems."

Comprising a set of algorithms that automate the writing of other ML algorithms, AutoML automates the end-to-end process of applying ML to real-world problems. By way of illustration, a standard ML pipeline is made up of the following: data pre-processing, feature extraction, feature selection, feature engineering, algorithm selection, and hyper-parameter tuning. But the considerable expertise and time it takes to implement these steps means there's a high barrier to entry.

AutoML removes some of these constraints. Not only does it significantly reduce the time it would typically take to implement an ML process under human supervision, it can also often improve the accuracy of the model in comparison to hand-crafted models, trained and deployed by humans. In doing so, it offers organisations a gateway into ML, as well as freeing up the time of ML engineers and data practitioners, allowing them to focus on higher-order challenges.
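A miniature stand-in for the tuning step that AutoML automates: an exhaustive grid search over a scikit-learn pipeline. Full AutoML systems go much further, also searching over preprocessing steps and model families, but the principle of automated trial-and-evaluate is the same:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
pipeline = Pipeline([("scale", StandardScaler()), ("clf", SVC())])

# Candidate hyper-parameter settings the search tries automatically
grid = {"clf__C": [0.1, 1, 10], "clf__gamma": ["scale", 0.01]}
search = GridSearchCV(pipeline, grid, cv=3).fit(X, y)

best = search.best_params_   # the winning combination
score = search.best_score_   # its cross-validated accuracy
```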


Overcoming scalability problems

The trend for combining ML with Big Data for advanced data analytics began back in 2012, when deep learning became the dominant approach to solving ML problems. This approach heralded the generation of a wealth of new software, tooling, and techniques that altered both the workload and the workflow associated with ML on a large scale. Entirely new ML toolsets, such as TensorFlow and PyTorch, were created, and people increasingly began using graphics processing units (GPUs) to accelerate their work.

Until this point, companies' efforts had been hindered by the scalability problems associated with running ML algorithms on huge datasets. Now, though, they were able to overcome these issues. By quickly developing sophisticated internal tooling capable of building world-class AI applications, the BigTech powerhouses soon overtook their Fortune 500 peers when it came to realising the benefits of smarter data-driven decision-making and applications.

Insight, innovation and data-driven decisions

AutoML represents the next stage in ML's evolution, promising to help non-tech companies access the capabilities they need to quickly and cheaply build ML applications.

In 2018, for example, Google launched its Cloud AutoML. Based on Neural Architecture Search (NAS) and transfer learning, it was described by Google executives as having the potential to make AI experts even more productive, advance new fields in AI, and help less-skilled engineers build powerful AI systems they previously only dreamed of.

The one downside to Google's AutoML is that it's a proprietary algorithm. There are, however, a number of alternative open-source AutoML libraries, such as AutoKeras, developed by researchers at Texas A&M University, which is built around the same NAS approach.

Technological breakthroughs such as these have given companies the capability to easily build production-ready models without the need for expensive human resources. By leveraging AI, ML, and deep learning capabilities, AutoML gives businesses across all industries the opportunity to benefit from data-driven applications powered by statistical models - even when advanced data science expertise is scarce.

With organisations increasingly reliant on citizen data scientists, 2020 is likely to be the year that enterprise adoption of AutoML will start to become mainstream. Its ease of access will compel business leaders to finally open the black box of ML, thereby elevating their knowledge of its processes and capabilities. AI and ML tools and practices will become ever more ingrained in businesses' everyday thinking and operations as they become more empowered to identify those projects whose invaluable insight will drive better decision-making and innovation.

By Senthil Ravindran, EVP and global head of cloud transformation and digital innovation, Virtusa

Read the original post:

Why 2020 will be the Year of Automated Machine Learning - Gigabit Magazine - Technology News, Magazine and Website


Machine Learning: Real-life applications and it’s significance in Data Science – Techstory

Posted: at 8:44 pm


Do you know how Google Maps predicts traffic? Are you amused by how Amazon Prime or Netflix suggests just the movie you would watch? We all know it must be some approach of Artificial Intelligence. Machine Learning involves algorithms and statistical models to perform tasks. This same approach is used to find faces on Facebook and detect cancer too. A Machine Learning course can educate in the development and application of such models.

Artificial Intelligence mimics human intelligence. Machine Learning is one of the significant branches of it. There is an ongoing and increasing need for its development.

Tasks as simple as spam detection in Gmail illustrate its significance in our day-to-day lives. That is why data scientists are in such demand at present. An aspiring data scientist can learn to develop and apply such algorithms by availing a Machine Learning certification.

Machine learning, as a subset of Artificial Intelligence, is applied for varied purposes. There is a misconception that applying Machine Learning algorithms requires prior mathematical knowledge. But a Machine Learning online course would suggest otherwise. Contrary to the popular bottom-up approach to studying, a top-down approach is involved here. An aspiring data scientist, a business person or anyone else can learn how to apply statistical models for various purposes. Here is a list of some well-known applications of Machine Learning.

Microsoft's research lab uses Machine Learning to study cancer. This helps in individualized oncological treatment and the generation of detailed progress reports. Data engineers apply pattern recognition, natural language processing, and computer vision algorithms to work through large datasets, which helps oncologists conduct precise, breakthrough tests.

Likewise, machine learning is applied in biomedical engineering, where it has led to the automation of diagnostic tools. Such tools are used in detecting many sorts of neurological and psychiatric disorders.

We have all had a conversation with Siri or Alexa. They use speech recognition to capture our requests, and Machine Learning is applied to auto-generate responses based on previous data. Hello Barbie is a Siri-like assistant for kids to play with; it uses advanced analytics, machine learning, and natural language processing to respond. Described as the first AI-enabled toy, it could lead to more such inventions.

Google Maps uses Machine Learning statistical models to acquire inputs. These models collect details such as the distance from start point to end point, trip duration, and bus schedules. Such historical data is stored and reused: algorithms developed for prediction recognize patterns among these inputs and estimate approximate time delays.
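A highly simplified sketch of this idea, assuming nothing about Google's actual models: predict a trip's delay by averaging the historical delays observed for the same route and hour (the names `DelayPredictor`, `record`, and `predict` are illustrative, not any real API):

```python
from collections import defaultdict

# Toy sketch only: learn a per-(route, hour) pattern from historical
# observations and reuse it to estimate future delays.
class DelayPredictor:
    def __init__(self):
        self.history = defaultdict(list)  # (route, hour) -> list of past delays

    def record(self, route, hour, delay_minutes):
        self.history[(route, hour)].append(delay_minutes)

    def predict(self, route, hour):
        past = self.history.get((route, hour))
        if not past:
            return 0.0  # no history for this slot: assume no delay
        return sum(past) / len(past)

predictor = DelayPredictor()
for delay in (5, 7, 6):
    predictor.record("A-to-B", 8, delay)  # morning-rush observations
print(predictor.predict("A-to-B", 8))  # -> 6.0, the average of past delays
```

Real systems use far richer features (live traffic, weather, road type), but the core pattern of learning from stored historical inputs is the same.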

Another well-known Google application, Google Translate, involves Machine Learning. Deep learning helps it learn language rules from recorded conversations. Neural networks such as long short-term memory (LSTM) networks aid in retaining and updating information over long spans, while recurrent neural networks capture the order of sequences. Even bilingual processing is feasible nowadays.
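The gating mechanism that lets an LSTM retain and update long-term information can be sketched with a single scalar unit; the weights below are arbitrary placeholders, whereas real translation models use large learned weight matrices:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Simplified single-unit LSTM step (illustrative only). Each gate decides
# how much of the long-term cell state to keep, add to, or expose.
def lstm_step(x, h_prev, c_prev, w=0.5, u=0.5, b=0.0):
    f = sigmoid(w * x + u * h_prev + b)          # forget gate
    i = sigmoid(w * x + u * h_prev + b)          # input gate
    o = sigmoid(w * x + u * h_prev + b)          # output gate
    c_tilde = math.tanh(w * x + u * h_prev + b)  # candidate cell state
    c = f * c_prev + i * c_tilde                 # long-term memory update
    h = o * math.tanh(c)                         # hidden output for this step
    return h, c

h, c = 0.0, 0.0
for x in (1.0, 0.5, -0.2):  # a toy input sequence
    h, c = lstm_step(x, h, c)
```

In a trained network each gate has its own weights, which is what lets the cell learn when to remember and when to forget.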

Facebook uses image recognition and computer vision to detect images. These images are fed as inputs, and the statistical models developed with Machine Learning map any information associated with them. Facebook also generates automated captions for images, which provide descriptions for visually impaired people. This innovation has nudged data engineers to develop other such valuable real-time applications.

Netflix aims to increase the likelihood of a customer watching a recommended movie. Every available movie has several thumbnails, and each thumbnail is assigned a numerical value. An algorithm studies which thumbnails viewers respond to and generates recommendations through pattern recognition among this numerical data.

Tesla uses computer vision, data prediction, and path planning for autonomous driving, and the machine learning practices applied make the innovation stand out. Deep neural networks work with training data and generate driving instructions; maneuvers such as changing lanes are learned through imitation learning.

Gmail, Yahoo Mail, and Outlook employ machine learning techniques such as neural networks. These networks detect patterns in historical data, training on known spam and phishing messages. These spam filters are reported to achieve 99.9 percent accuracy.
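A minimal sketch of how a filter can learn spam patterns from labeled historical messages, using naive Bayes rather than the neural networks the providers actually employ:

```python
import math
from collections import Counter

# Illustrative naive Bayes spam filter; production filters use far more
# sophisticated models and features than word counts.
class SpamFilter:
    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.msg_counts = {"spam": 0, "ham": 0}

    def train(self, text, label):
        self.msg_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def classify(self, text):
        total_msgs = sum(self.msg_counts.values())
        vocab = len(set(self.word_counts["spam"]) | set(self.word_counts["ham"]))
        scores = {}
        for label in ("spam", "ham"):
            score = math.log(self.msg_counts[label] / total_msgs)  # class prior
            total_words = sum(self.word_counts[label].values())
            for word in text.lower().split():
                count = self.word_counts[label][word]
                # Laplace smoothing so unseen words don't zero out a class
                score += math.log((count + 1) / (total_words + vocab))
            scores[label] = score
        return max(scores, key=scores.get)

f = SpamFilter()
f.train("win free money now", "spam")
f.train("claim your free prize", "spam")
f.train("meeting at noon tomorrow", "ham")
f.train("lunch tomorrow with the team", "ham")
print(f.classify("free money prize"))  # -> spam
```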

As people grow more health conscious, the development of fitness-monitoring applications is on the rise. As a market leader, Fitbit maintains its edge by employing machine learning methods: trained models predict user activities through data pre-processing, processing, and partitioning. There remains room to extend such applications to further purposes.

The applications mentioned above are just the tip of the iceberg. Machine learning, as a subset of Artificial Intelligence, finds use in many other streams of daily activity.


Read more here:

Machine Learning: Real-life applications and it's significance in Data Science - Techstory

Written by admin

February 22nd, 2020 at 8:44 pm

Posted in Machine Learning

Grok combines Machine Learning and the Human Brain to build smarter AIOps – Diginomica

Posted: at 8:44 pm

without comments

A few weeks ago I wrote a piece here about Moogsoft, which has been making waves in the service assurance space by applying artificial intelligence and machine learning to the arcane task of keeping critical IT up and running and lessening the business impact of service interruptions. It's a hot area for startups, and I've since gotten article pitches from several other AIOps firms at varying levels of development.

The most intriguing of these is a company called Grok, formed by a partnership between Numenta and Avik Partners. Numenta is a pioneering AI research firm co-founded by Jeff Hawkins and Donna Dubinsky, who are famous for having started two classic mobile computing companies, Palm and Handspring. Avik is a company formed by brothers Casey and Josh Kindiger, two veteran entrepreneurs who have successfully started and grown multiple technology companies in service assurance and automation over the past two decades, most recently Resolve Systems.

Josh Kindiger told me in a telephone interview how the partnership came about:

Numenta is primarily a research entity started by Jeff and Donna about 15 years ago to support Jeff's ideas about the intersection of neuroscience and data science. About five years ago, they developed an algorithm called HTM and a product called Grok for AWS, which monitors servers on a network for anomalies. They weren't interested in developing a company around it, but we came along and saw a way to link our deep domain experience in the service management and automation areas with their technology. So, we licensed the name and the technology and built part of our Grok AIOps platform around it.

Jeff Hawkins has spent most of his post-Palm and Handspring years trying to figure out how the human brain works and then reverse-engineering that knowledge into structures machines can replicate. His theory, called hierarchical temporal memory (HTM), was originally described in his 2004 book On Intelligence, written with Sandra Blakeslee. HTM is grounded in neuroscience, specifically the physiology and interaction of pyramidal neurons in the neocortex of the mammalian (in particular, human) brain. For a little light reading, I recommend a peer-reviewed paper called A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex.

Grok AIOps also uses traditional machine learning, alongside HTM. Said Kindiger:

When I came in, the focus was purely on anomaly detection, and I immediately engaged with a lot of my old customers, large Fortune 500 companies and very large service providers, and quickly found out that while anomaly detection was extremely important, that first signal wasn't going to be enough. So, we transformed Grok into a platform. Essentially, what we do is apply the correct algorithm, whether it's HTM or something else, to the proper stream of events, logs, and performance metrics. Grok can enable predictive, self-healing operations within minutes.

The Grok AIOps platform uses multiple layers of intelligence to identify issues and support their resolution:

Anomaly detection

The HTM algorithm has proven exceptionally good at detecting and predicting anomalies and reducing noise, often by up to 90%, by providing the critical context needed to identify incidents before they happen. It can detect anomalies beyond simple low and high thresholds, such as changes in signal frequency that reflect shifts in the behavior of the underlying systems. Said Kindiger:

We believe HTM is the leading anomaly detection engine in the market. In fact, it has consistently been the best-performing anomaly detection algorithm in the industry, resulting in less noise, fewer false positives, and more accurate detection. It is not only best at detecting an anomaly with the smallest amount of noise, but it also scales, which is the biggest challenge.
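In highly simplified form, behavior-based detection can be sketched as a rolling-statistics detector. This is not HTM itself; it merely illustrates flagging deviations from a signal's recent behavior instead of fixed low/high thresholds:

```python
import statistics
from collections import deque

# Flag values that deviate sharply from the rolling window's recent behavior.
# A fixed-threshold check would miss anomalies in signals whose normal range
# drifts over time; comparing against recent statistics adapts automatically.
def detect_anomalies(stream, window=10, z_threshold=3.0):
    recent = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(stream):
        if len(recent) >= 3:  # need a few samples before judging
            mean = statistics.mean(recent)
            stdev = statistics.pstdev(recent) or 1e-9  # avoid division by zero
            if abs(value - mean) / stdev > z_threshold:
                anomalies.append(i)
        recent.append(value)
    return anomalies

signal = [10, 11, 10, 11, 10, 11, 50, 10, 11, 10]
print(detect_anomalies(signal))  # -> [6], the spike
```

HTM goes much further, learning temporal sequences so it can also catch changes in rhythm and frequency, but the contrast with static thresholds is the key idea.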

Anomaly clustering

To help reduce noise, Grok clusters anomalies that belong together through the same event or cause.

Event and log clustering

Grok ingests all the events and logs from the integrated monitors and then applies event- and log-clustering algorithms to them, including pattern recognition and dynamic time warping, which also reduce noise.
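Dynamic time warping measures the similarity of two sequences that may be stretched or shifted in time, which is why it is useful for grouping event streams that describe the same incident at slightly different moments. A minimal sketch:

```python
# Classic dynamic time warping via dynamic programming: find the cheapest
# alignment between two sequences, allowing one to stretch against the other.
def dtw_distance(a, b):
    n, m = len(a), len(b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # step both
    return cost[n][m]

# Two identical bursts shifted by one step: DTW sees them as the same shape,
# while a naive pointwise comparison would report large differences.
print(dtw_distance([0, 0, 5, 5, 0], [0, 5, 5, 0, 0]))  # -> 0.0
```

Sequences whose pairwise DTW distance is small can then be clustered together, reducing many related alerts to one incident.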

IT operations have become almost impossible for humans alone to manage. Many companies struggle to keep up as cloud complexity increases, and distributed apps make it difficult to track where problems occur during an IT incident. Every minute of downtime directly impacts the bottom line.

In this environment, the relatively new class of solutions dubbed AIOps looks like a much-needed lifeline. AIOps stands for "Algorithmic IT Operations," and its premise is that algorithms, not humans or traditional statistics, will help make smarter IT decisions and ensure application efficiency. AIOps platforms reduce the need for human intervention by using ML to set alerts and automation to resolve issues. Over time, they can learn patterns of behavior within distributed cloud systems and predict disasters before they happen.

Grok detects latent issues with cloud apps and services and triggers automations to troubleshoot these problems before human intervention is required. Its technology is solid, its owners have lots of experience in the service assurance and automation spaces, and who can resist the story of the first commercial use of an algorithm modeled on the human brain?

Go here to see the original:

Grok combines Machine Learning and the Human Brain to build smarter AIOps - Diginomica

Written by admin

February 22nd, 2020 at 8:44 pm

Posted in Machine Learning

Syniverse and RealNetworks Collaboration Brings Kontxt-Based Machine Learning Analytics to Block Spam and Phishing Text Messages – MarTech Series

Posted: at 8:44 pm

without comments

The First End-to-End Messaging Visibility Platform Allows Mobile Operators and Internet Service Providers to Identify, Analyze and Prioritize Messages

Syniverse and RealNetworks have announced that they have incorporated sophisticated machine learning (ML) features into their integrated offering, giving carriers visibility into and control over mobile messaging traffic. By integrating RealNetworks' Kontxt application-to-person (A2P) message categorization capabilities into Syniverse Messaging Clarity, mobile network operators (MNOs), internet service providers (ISPs), and messaging aggregators can identify and block spam, phishing, and malicious messages while prioritizing legitimate A2P traffic, better monetizing their services.

At the time of the announcement, Bill Corbin, Senior Vice President of Indirect Markets & Strategic Partnerships at Syniverse, said:

Syniverse offers companies the capability to use machine learning technologies to gain insight into what traffic is flowing through their networks, while simultaneously ensuring consumer privacy and keeping the actual contents of the messages hidden. The Syniverse Messaging Clarity solution can generate statistics examining the type of traffic sent and whether it deviates from the sender's traffic pattern. From there, the technology analyzes whether a message is valid or spam, and blocks the spam.


Currently, Syniverse helps mobile operators and businesses manage and secure their mobile and network communications, driving better engagements and business outcomes.

Surash Patel, General Manager of Kontxt at RealNetworks, added: The self-learning Kontxt algorithms within the Syniverse Messaging Clarity solution allow its threat-assessment techniques to evolve with changes in message traffic. Our analytics also verify that sent messages conform to network standards pertaining to spam and fraud. By deploying Messaging Clarity, MNOs and ISPs can help ensure their compliance with local regulations across the world, including the U.S. Telephone Consumer Protection Act, while also avoiding potential costs associated with violations. And, ultimately, the consumer, who receives more appropriate text messages and less spam, wins as well, as our Kontxt technology within the Messaging Clarity solution works to enhance customer trust and improve the overall customer experience.

Syniverse Messaging Clarity, the first end-to-end messaging visibility solution, utilizes a best-in-class grey-route firewall plus clearing and settlement tools to maximize messaging revenue streams, better control spam traffic, and partner closely with enterprises. The solution analyzes the delivery of messages before categorizing them into specific groupings: messages sent from one person to another (P2P), A2P messages, or outright spam. Through its existing clearing and settlement capabilities, Messaging Clarity can turn upcoming technologies like Rich Communication Services (RCS) and chatbots into revenue-generating products and services without the clutter and cost of spam or fraud.
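As an illustration only, message triage into these groupings might look like the sketch below; Kontxt's real classifiers rely on NLP and deep learning over traffic patterns, not hand-written rules, and every keyword and short-code heuristic here is a hypothetical placeholder:

```python
import re

# Hypothetical heuristic triage of messages into P2P, A2P, or spam.
# Real systems learn these distinctions from traffic data instead.
def categorize(message, sender):
    text = message.lower()
    if re.search(r"(click here|you won|free prize)", text):
        return "spam"  # classic phishing/spam bait phrases
    if sender.isdigit() and len(sender) <= 6:
        return "A2P"   # short codes typify application-to-person traffic
    return "P2P"       # full phone numbers suggest person-to-person

print(categorize("Your one-time passcode is 493021", "72404"))      # -> A2P
print(categorize("Running late, see you at 7", "+15551234567"))     # -> P2P
print(categorize("Click here to claim your free prize!", "12345"))  # -> spam
```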

The foundational Kontxt technology adds natural language processing and deep learning techniques to Messaging Clarity to continually update and improve its understanding and classification of messages. This new feature extends Messaging Clarity's ability to identify, categorize, and ascribe a monetary value to the immense volume and complexity of messages delivered through text messaging, chatbots, and other channels.


The Syniverse and RealNetworks Kontxt message classification provides companies the ability to ensure that urgent messages, like one-time passwords, are sent at a premium rate compared with lower-priority notifications, such as promotional offers. The Syniverse Messaging Clarity solution also helps eliminate instances of extreme message spam phishing (smishing). This type of attack recently occurred at a global shipping company, when spam texts asked consumers to click a link to receive a delivery update for a phantom order.

Building on a legacy of digital media expertise and innovation, RealNetworks has created a new generation of products that employ best-in-class artificial intelligence and machine learning to enhance and secure online communication channels.



See original here:

Syniverse and RealNetworks Collaboration Brings Kontxt-Based Machine Learning Analytics to Block Spam and Phishing Text Messages - MarTech Series

Written by admin

February 22nd, 2020 at 8:44 pm

Posted in Machine Learning
