Archive for the ‘Machine Learning’ Category

Are We Overly Infatuated With Deep Learning? – Forbes

Posted: December 31, 2019 at 11:46 pm

Deep Learning

One of the factors often credited for the latest boom in artificial intelligence (AI) investment, research, and related cognitive technologies is the emergence of deep learning neural networks as an evolution of machine learning algorithms, combined with the large volumes of big data and computing power that make deep learning a practical reality. While deep learning has been extremely popular and has shown real ability to solve many machine learning problems, it is just one approach to machine learning (ML): one that, despite proving capable across a wide range of problem areas, remains one of many practical approaches. Increasingly, we're starting to see news and research showing the limits of deep learning's capabilities, as well as some of the downsides of the deep learning approach. So is people's enthusiasm for AI tied to their enthusiasm for deep learning, and can deep learning really deliver on many of its promises?

The Origins of Deep Learning

AI researchers have struggled to understand how the brain learns since the very beginnings of the field of artificial intelligence. Since the brain is primarily a collection of interconnected neurons, it comes as no surprise that AI researchers sought to recreate the way the brain is structured through artificial neurons, and connections of those neurons, in artificial neural networks. All the way back in 1943, Warren McCulloch and Walter Pitts built the first thresholded logic unit, an attempt to mimic the way biological neurons worked. The McCulloch-Pitts model was just a proof of concept, but Frank Rosenblatt picked up on the idea in 1957 with the development of the Perceptron, which took the concept to its logical extent. While primitive by today's standards, the Perceptron was still capable of remarkable feats: it could recognize written numbers and letters, and even distinguish male from female faces. That was over 60 years ago!

Rosenblatt was so enthusiastic in 1959 about the Perceptron's promise that he remarked at the time that the perceptron is "the embryo of an electronic computer that [we expect] will be able to walk, talk, see, write, reproduce itself and be conscious of its existence." Sound familiar? However, the enthusiasm didn't last. AI researcher Marvin Minsky noted how sensitive the perceptron was to small changes in images, and how easily it could be fooled. Maybe the perceptron wasn't really that smart at all. Minsky and fellow AI researcher Seymour Papert basically took apart the whole perceptron idea in their book Perceptrons, claiming that perceptrons, and neural networks like them, are fundamentally flawed in their inability to handle certain kinds of problems, notably nonlinear functions. That is to say, it was easy to train a neural network like a perceptron to put data into classifications, such as male/female or types of numbers. For these simple neural networks, you can graph a bunch of data, draw a line, and say that things on one side of the line are in one category and things on the other side are in a different category, thereby classifying them. But there is a whole class of problems where you can't draw lines like this, such as speech recognition or many forms of decision-making. These problems are not linearly separable, and Minsky and Papert proved single-layer perceptrons incapable of solving them.
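
Minsky and Papert's objection is easy to demonstrate in a few lines of code. The sketch below (illustrative, not from the book) trains a single-layer perceptron with the classic update rule: it learns AND, which is linearly separable, but can never get all four cases of XOR right.

```python
# Minimal single-layer perceptron: learns AND (linearly separable)
# but can never learn XOR, illustrating Minsky and Papert's critique.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # classic perceptron update rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err

    def predict(x1, x2):
        return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    return predict

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

and_fn = train_perceptron(AND)
xor_fn = train_perceptron(XOR)
print([and_fn(x1, x2) for (x1, x2), _ in AND])  # matches the AND targets
print([xor_fn(x1, x2) for (x1, x2), _ in XOR])  # never matches all four XOR targets
```

No matter how long the XOR version trains, no single line through the four input points separates the two classes.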

During this period, while neural network approaches to ML settled into being an afterthought in AI, other approaches to ML were in the limelight, including knowledge graphs, decision trees, genetic algorithms, similarity models, and other methods. In fact, during this period, IBM's purpose-built Deep Blue computer defeated Garry Kasparov in a chess match, the first computer to do so, using a brute-force alpha-beta search algorithm (so-called Good Old-Fashioned AI [GOFAI]) rather than new-fangled deep learning approaches. Yet even this approach didn't go far, as some said the system wasn't really intelligent at all.

Yet the neural network story doesn't end there. In 1986, AI researcher Geoff Hinton, along with David Rumelhart and Ronald Williams, published a research paper entitled "Learning representations by back-propagating errors." In this paper, Hinton and his co-authors detailed how hidden layers of neurons can be used to get around the problems faced by perceptrons. With sufficient data and computing power, these layers can be trained to identify specific features in the data sets they classify, and as a group they can learn nonlinear functions, a capability related to what is known as the universal approximation theorem. The approach works by backpropagating errors from higher layers of the network to lower ones (backprop), expediting training. Now, if you have enough layers, enough data to train those layers, and sufficient computing power to calculate all the interconnections, you can train a neural network to identify and classify almost anything. Researcher Yann LeCun developed LeNet-5 at AT&T Bell Labs in 1998, recognizing handwritten digits on checks using an iteration of this approach known as Convolutional Neural Networks (CNNs), and researchers such as Yoshua Bengio and Jürgen Schmidhuber further advanced the field.
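
The effect of adding a hidden layer can be shown with a toy network. The sketch below (an illustrative reconstruction, not the paper's code) trains a small two-layer network on XOR with backpropagation, the very function a single perceptron cannot learn; the layer size and learning rate are arbitrary choices.

```python
import numpy as np

# A tiny two-layer network trained with backpropagation to learn XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
lr = 1.0
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backward pass: propagate the output error to the earlier layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # predictions after training
```

The hidden layer gives the network the bends it needs: instead of one straight decision line, it composes several, which is exactly what the backprop paper made trainable.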

Yet, as things often go in AI, research stalled when these early neural networks couldn't scale. Surprisingly, very little development happened until 2006, when Hinton re-emerged onto the scene with the ideas of unsupervised pre-training and deep belief nets. The idea here is to have a simple two-layer network whose parameters are trained in an unsupervised way, and then to stack new layers on top of it, training just each new layer's parameters. Repeat for dozens, hundreds, even thousands of layers. Eventually you get a deep network with many layers that can learn and represent something complex. This is what deep learning is all about: using many layers of trained neural nets to learn just about anything, at least within certain constraints.

In 2009, Stanford researcher Fei-Fei Li published ImageNet, a large database of millions of labeled images. The images were labeled with a hierarchy of classifications, from broad categories such as animal or vehicle down to very granular levels, such as husky or trimaran. The ImageNet database was paired with an annual competition, the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), to see which computer vision system had the lowest classification and recognition error. In 2012, Geoff Hinton, Alex Krizhevsky, and Ilya Sutskever submitted their AlexNet entry, which had almost half the error rate of previous winning entries. What made their approach win was that they moved from ordinary computers with CPUs to specialized graphics processing units (GPUs) that could train much larger models in reasonable amounts of time. They also introduced now-standard deep learning methods such as dropout, which reduces a problem called overfitting (when the network is trained too tightly on the example data and can't generalize to broader data), and the rectified linear unit (ReLU) activation, which speeds training. After their competition success, it seems everyone took notice, and deep learning was off to the races.
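
Both AlexNet tricks are simple to state in code. Here is a miniature, illustrative version of each (not AlexNet's actual implementation):

```python
import numpy as np

def relu(z):
    # Rectified linear unit: pass positives through, zero out negatives.
    # Its constant gradient for z > 0 speeds training versus sigmoid/tanh.
    return np.maximum(0.0, z)

def dropout(activations, p=0.5, rng=None):
    # Randomly silence a fraction p of units during training so the
    # network cannot over-rely on any one feature (reduces overfitting).
    # "Inverted" scaling keeps the expected activation unchanged.
    rng = np.random.default_rng(0) if rng is None else rng
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(z))            # [0. 0. 0. 0.5 2.]
print(dropout(relu(z)))   # some entries zeroed, survivors scaled by 2
```

At inference time dropout is switched off; the inverted scaling during training means no extra correction is needed then.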

Deep Learning's Shortcomings

The fuel that keeps the deep learning fires roaring is data and compute power. Specifically, large volumes of well-labeled data sets are needed to train deep learning networks. The more layers, the greater the learning power, but to train those layers you need data that is already well labeled. Since deep neural networks are primarily a huge bundle of calculations that must all be done at the same time, you need a lot of raw computing power, and specifically numerical computing power. Imagine tuning a million knobs at the same time to find the optimal combination that will make the system learn, based on millions of pieces of data being fed into the system. This is why deep neural networks were not practical in the 1950s but are today: we finally have lots of data, and lots of computing power to handle that data.

Deep learning is being applied successfully in a wide range of situations, such as natural language processing, computer vision, machine translation, bioinformatics, gaming, and many other applications where classification, pattern matching, and the use of this automatically tuned deep neural network approach works well. However, these same advantages have a number of disadvantages.

The most notable of these disadvantages is that, since deep learning consists of many layers, each with many interconnected nodes, each configured with different weights and other parameters, there's no way to inspect a deep learning network and understand how any particular decision, clustering, or classification is actually made. It's a black box, which means deep learning networks are inherently unexplainable. As many have written on the topic of Explainable AI (XAI), systems used to make decisions of significance need explainability to satisfy issues of trust, compliance, verifiability, and understandability. While DARPA and others are working on ways to explain deep learning neural networks, the lack of explainability remains a significant drawback for many.

The second disadvantage is that deep learning networks are really great at classification and clustering of information, but not really good at other decision-making or learning scenarios. Not every learning situation is a matter of classifying something into a category or grouping information into a cluster. Sometimes you have to deduce what to do based on what you've learned before. Deduction and reasoning are not a forte of deep learning networks.

As mentioned earlier, deep learning is also very data and resource hungry. One measure of a neural networks complexity is the number of parameters that need to be learned and tuned. For deep learning neural networks, there can be hundreds of millions of parameters. Training models requires a significant amount of data to adjust these parameters. For example, a speech recognition neural net often requires terabytes of clean, labeled data to train on. The lack of a sufficient, clean, labeled data set would hinder the development of a deep neural net for that problem domain. And even if you have the data, you need to crunch on it to generate the model, which takes a significant amount of time and processing power.
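
The parameter counts are easy to reproduce. For a fully connected network, each layer contributes an n_in * n_out weight matrix plus n_out biases; the layer sizes below are hypothetical, chosen only to show how quickly the total grows:

```python
# Counting learnable parameters in a small fully connected network.
# Layer sizes are hypothetical; real deep networks repeat this across
# dozens of layers, which is how counts reach hundreds of millions.
layers = [1024, 4096, 4096, 1000]  # example: input, two hidden layers, output
params = sum(n_in * n_out + n_out  # weight matrix + bias vector per layer
             for n_in, n_out in zip(layers, layers[1:]))
print(params)  # 25,076,712 for these sizes
```

Even this modest four-layer example needs roughly 25 million values tuned, each adjustment driven by passes over the training data, which is where the data and compute hunger comes from.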

Another challenge of deep learning is that the models produced are very specific to a problem domain. If a model is trained on a certain dataset of cats, it will only recognize those cats and can't be used to generalize to other animals or to identify non-cats. While this is not a problem unique to deep learning approaches to machine learning, it can be particularly troublesome when factoring in the overfitting problem mentioned above. Deep learning neural nets can be so tightly constrained (fitted) to the training data that even small perturbations in the images can lead to wildly inaccurate classifications. There are well-known examples of turtles being misrecognized as guns, or polar bears being misrecognized as other animals, due to just small changes in the image data. Clearly, if you're using such a network in mission-critical situations, those mistakes would be significant.

Machine Learning is not (just) Deep Learning

Enterprises looking at using cognitive technologies in their business need to look at the whole picture. Machine learning is not just one approach, but rather a collection of different approaches of various types that are applicable in different scenarios. Some machine learning algorithms are very simple, using small amounts of data and an understandable logic or deduction path that is very suitable for particular situations, while others are very complex and use lots of data and processing power to handle more complicated situations. The key thing to realize is that deep learning isn't all of machine learning, let alone all of AI. Even Geoff Hinton, the "Einstein of deep learning," is starting to rethink core elements of deep learning and its limitations.

The key for organizations is to understand which machine learning methods are most viable for which problem areas, and how to plan, develop, deploy, and manage that machine learning approach in practice. Since AI use in the enterprise is still gaining adoption, especially these more advanced cognitive approaches, best practices for employing cognitive technologies successfully are still maturing.

See the article here:

Are We Overly Infatuated With Deep Learning? - Forbes

Written by admin

December 31st, 2019 at 11:46 pm

Posted in Machine Learning

The impact of ML and AI in security testing – JAXenter

Posted: at 11:46 pm

Artificial Intelligence (AI) has come a long way from just being a dream to becoming an integral part of our lives. From self-driving cars to smart assistants including Alexa, every industry vertical is leveraging the capabilities of AI. The software testing industry is also leveraging AI to enhance security testing efforts while automating human testing efforts.

AI and ML-based security testing efforts are helping test engineers to save a lot of time while ensuring the delivery of robust security solutions for apps and enterprises.

During security testing, it is essential to gather as much information as possible, since careful analysis of the target increases the odds of success.

Manual efforts to gather such a huge amount of information could eat up a lot of time. Hence, AI is leveraged to automate this stage and deliver reliable results while saving time and resources. Security experts can use the combination of AI and ML to identify a massive variety of details, including the software and hardware components of computers and the networks they are deployed on.

SEE ALSO: Amazon's new ML service Amazon CodeGuru: Let machine learning optimize your Java code

Applying machine learning to application scan results can significantly reduce the manual labor involved in identifying whether an issue is exploitable. However, findings should always be reviewed by test engineers to confirm that they are accurate.

The key benefit that ML offers is its capability to filter out huge chunks of information during the scanning phase. It helps focus on a smaller block of actionable data, which offers reliable results while significantly reducing scan audit times.

An ML-based audit of security scan results can significantly reduce the time required for security testing services. Machine learning classifiers can be trained on knowledge and data generated by previous tests to automate the processing of new scan results, helping enterprises triage static code analysis results. Organizations can benefit from the large pool of data collated through multiple regular scans to get more contextual results.
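
As a rough illustration of the idea, a triage classifier can label new findings by similarity to engineer-reviewed findings from past scans. The features and data below are invented, and a real system would use far richer signals:

```python
import math

# Hypothetical sketch: triage new scan findings by similarity to past,
# engineer-reviewed findings. Features and data are invented.
# Each finding: (severity_score, rule_confidence, times_seen_before)
past = [
    ((9.0, 0.9, 3), "exploitable"), ((2.0, 0.2, 0), "noise"),
    ((7.5, 0.8, 5), "exploitable"), ((1.0, 0.1, 0), "noise"),
]

def triage(finding):
    # 1-nearest-neighbour: label a new finding like its closest past finding
    _, label = min(past, key=lambda item: math.dist(item[0], finding))
    return label

print(triage((8.5, 0.85, 4)))  # exploitable: route to engineers first
print(triage((1.5, 0.15, 0)))  # noise: deprioritise
```

The point is not the particular classifier but the loop: every engineer-reviewed finding becomes labelled training data, so triage improves as scan history accumulates.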

This stage includes controlling multiple network devices to extract data from the target, or leveraging the devices to launch attacks on multiple targets. After scanning for vulnerabilities, test engineers are required to ensure that the system is free of flaws that can be used by attackers to affect the system.

AI-based algorithms can help ensure the protection of network devices by suggesting multiple combinations of strong passwords. Machine learning can be programmed to identify system vulnerabilities through observation of user data, spotting patterns to make suggestions about likely passwords.

AI can also be used to assess the network on a regular basis to ensure that no security loophole is building up. The algorithm's capabilities should include identification of new admin accounts, new network access channels, encrypted channels, and backdoors, among others.

SEE ALSO: Artificial intelligence & machine learning: The brain of a smart city

ML-backed security testing services can significantly reduce triage pain, because triage takes a lot of time when organizations rely on manual efforts. Manual security testing would require a large workforce just to go through all the scan results, and would take a long time to develop an efficient triage. Hence, manual security testing is neither feasible nor scalable enough to meet the security needs of enterprises.

Besides, application inventory numbers used to be in the hundreds; now enterprises are dealing with thousands of apps. With organizations scanning their apps every month, the challenges are only increasing for security testing teams. Test engineers are constantly trying to reduce the odds of potential attacks while enhancing efficiency to keep pace with agile and continuous development environments.

Embedded AI and ML can help security testing teams in delivering greater value through automation of audit processes that are more secure and reliable.

See original here:

The impact of ML and AI in security testing - JAXenter

Can machine learning take over the role of investors? – TechHQ

Posted: at 11:46 pm

As we dive deeper into the Fourth Industrial Revolution, there is no disputing how technology serves as a catalyst for growth and innovation for many businesses across a range of functions and industries.

But one technology that is steadily gaining prominence across organizations is machine learning (ML).

In the simplest terms, ML is the science of getting computers to learn and act as humans do, without being explicitly programmed. It is a form of artificial intelligence (AI) and entails feeding machines data, enabling the computer program to learn autonomously and enhance its accuracy in analyzing that data.

The proliferation of technology means AI is now commonplace in our daily lives, with its presence in a panoply of things, such as driverless vehicles, facial recognition devices, and in the customer service industry.

Currently, asset managers are exploring the potential that AI/ML systems can bring to the finance industry; close to 60 percent of managers predict that ML will have a medium-to-large impact across businesses.

ML's ability to analyze large data sets and continuously self-improve through trial and error translates to increased speed and better performance in data analysis for financial firms.

For instance, according to the Harvard Business Review, ML can spot potentially outperforming equities by identifying new patterns in existing data sets, and by examining the collected responses of CEOs in quarterly earnings calls of S&P 500 companies over the past 20 years.

Following this, ML can then formulate a review of good and bad stocks, thus providing organizations with valuable insights to drive important business decisions. This data also paves the way for the system to assess the trustworthiness of forecasts from specific company leaders and compare the performance of competitors in the industry.

Besides that, ML also has the capacity to analyze various forms of data, including sound and images. In the past, such formats were challenging for computers to analyze, but today's ML algorithms can process images faster and better than humans.


For example, analysts use GPS locations from mobile devices to map foot traffic at retail hubs, or refer to point-of-sale data to trace revenues during major holiday seasons. Hence, data analysts can leverage this technological advancement to identify trends and new areas for investment.

It is evident that ML is full of potential, but it still has some big shoes to fill if it were to replace the role of an investor.

Nishant Kumar aptly explained this in Bloomberg: "Financial data is very noisy, markets are not stationary and powerful tools require deep understanding and talent that's hard to get." One quantitative analyst, or quant, estimates the failure rate in live tests at about 90 percent. Man AHL, a quant unit of Man Group, needed three years of work to gain enough confidence in a machine-learning strategy to devote client money to it. It later extended its use to four of its main money pools.

In other words, human talent and supervision are still essential to developing the right algorithm and in exercising sound investment judgment. After all, the purpose of a machine is to automate repetitive tasks. In this context, ML may seek out correlations of data without understanding their underlying rationale.

One ML expert said his team spends days evaluating whether patterns found by ML are sensible, predictive, consistent, and additive. Even if a pattern meets all four criteria, it may not bear much significance in supporting profitable investment decisions.

The bottom line is that ML can streamline data analysis, but it cannot replace human judgment. Thus, active equity managers should invest in ML systems to remain competitive in this "innovate or die" era. Financial firms that successfully recruit professionals with the right data skills and sharp investment judgment stand to be at the forefront of the digital economy.

Read the original post:

Can machine learning take over the role of investors? - TechHQ

Machine learning to grow innovation as smart personal device market peaks – IT Brief New Zealand

Posted: at 11:46 pm

Smart personal audio devices are looking to have their strongest year in history in 2019, with true wireless stereo (TWS) set to be the largest and fastest-growing category, according to new data released by analyst firm Canalys.

New figures released show that in Q3 2019, the worldwide smart personal audio device market grew 53% to reach 96.7 million units. And the segment is expected to break the 100 million unit mark in the final quarter, with potential to exceed 350 million units for the full year.

Canalys' latest research showed the TWS category was not only the fastest-growing segment in this market, with a stellar 183% annual growth in Q3 2019, but that it also overtook wireless earphones and wireless headphones to become the largest category.

"The rising importance of streaming content, and the rapid uptake of new forms of social media including short videos, have resulted in profound changes in mobile users' audio consumption, and these changes will accelerate in the next five years, while technology advancements like machine learning and smart assistants will bring more radical innovations in areas such as audio content discovery and ambient computing," explains Nicole Peng, vice president of mobility at Canalys.

As users adjust their consumption habits, Peng says the TWS category enabled smartphone vendors to adapt and differentiate against traditional audio players in the market.

With 18.2 million units shipped in Q3 2019, Apple commands 43% of the TWS market share and continues to be the trend setter.

"Apple is in a clear leadership position, and not only on the chipset technology front. The seamless integration with iPhone, and unique sizing and noise-cancelling features providing a top-of-the-class user experience, are where other smartphone vendors such as Samsung, Huawei and Xiaomi are aiming their TWS devices," says Peng.

"In the short-term, smart personal audio devices are seen as the best up-selling opportunities for smartphone vendors, compared with wearables and smart home devices."

Major audio brands such as Bose, Sennheiser, JBL, Sony and others are currently able to stand their ground with their respective audio signatures especially in the earphones and headphones categories, the research shows.

Canalys senior analyst Jason Low says demand for high-fidelity audio will continue to grow. However, the gap between audio players and smartphone vendors is narrowing.

"Smartphone vendors are developing proprietary technologies to not only catch up in audio quality, but also provide better integration for on-the-move user experiences, connectivity and battery life," he explains.

"Traditional audio players must not underestimate the importance of the TWS category. The lack of control over any connected smart devices is the audio players' biggest weakness," Low says.

"Audio players must come up with an industry standard enabling better integration with smartphones, while allowing developers to tap into the audio features to create new use cases to avoid obsoletion."

Low says the potential for TWS devices is far from being fully uncovered, and vendors must look beyond TWS as just a way to drive revenue growth.

"Coupled with information collected from sensors or provided by smart assistants via smartphones, TWS devices will become smarter and serve broader use cases beyond audio entertainment, such as payment, and health and fitness," he explains.

"Regardless of the form factor, the next challenge will be integrating smarter features and complex services on smart personal audio platforms." Canalys expects the market for smart personal audio devices to grow exponentially in the next two years, and that the cake is big enough for many vendors to come in and compete for the top spots as technology leaders and volume leaders.

Read this article:

Machine learning to grow innovation as smart personal device market peaks - IT Brief New Zealand

This AI Agent Uses Reinforcement Learning To Self-Drive In A Video Game – Analytics India Magazine

Posted: at 11:46 pm

One of the most used machine learning (ML) techniques of this year, reinforcement learning (RL) has been utilised to solve complex decision-making problems. At present, most research focuses on RL algorithms that improve the performance of an AI model in some controlled environment.

Ubisoft's prototyping space, Ubisoft La Forge, has been making a lot of advancements in AI. The goal of this prototyping space is to bridge the gap between theoretical academic work and the practical applications of AI in video games as well as in the real world. In one of our articles, we discussed how Ubisoft is mainstreaming machine learning into game development. Recently, researchers from the La Forge project at Ubisoft Montreal proposed a hybrid AI algorithm known as Hybrid SAC, which is able to handle mixed actions in a video game.

Most reinforcement learning research papers focus on environments where the agent's actions are either discrete or continuous. However, when training an agent to play a video game, it is common to encounter situations where actions have both discrete and continuous components, for instance when the agent controls systems that mix the two, like driving a car by combining steering and acceleration (both continuous) with the use of the hand brake (a discrete, binary action).

This is where Hybrid SAC comes into play. Through this model, the researchers tried to sort out the common challenges in video game development techniques. The contribution consists of a different set of constraints which is mainly geared towards industry practitioners.

The approach in this research is based on Soft Actor-Critic (SAC), a model-free algorithm originally proposed for continuous control tasks. However, the actions encountered in video games are often both continuous and discrete.

In order to deal with a mix of discrete and continuous action components, the researchers converted part of SAC's continuous output into discrete actions. They further explored this approach and extended it into a hybrid form with both continuous and discrete actions. The result, Hybrid SAC, is an extension of the SAC algorithm that can handle discrete, continuous, and mixed discrete-continuous actions.

The researchers trained a vehicle in a Ubisoft game using the proposed Hybrid SAC model, with two continuous actions (acceleration and steering) and one binary discrete action (hand brake). The objective of the car is to follow a given path as fast as possible, and in this case the discrete hand brake action plays a key role in staying on the road at such high speed.
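
One hedged reading of such a mixed action interface, in code (an illustrative sketch, not Ubisoft's implementation): the policy emits one continuous vector, and one component is thresholded into the binary hand-brake action.

```python
# Sketch of a mixed discrete-continuous action, in the spirit of Hybrid SAC:
# the policy network outputs three continuous values in [-1, 1], and the
# third is discretised into the binary hand-brake action.
def to_game_action(policy_output):
    steering, acceleration, brake_logit = policy_output
    return {
        "steering": steering,          # continuous
        "acceleration": acceleration,  # continuous
        "hand_brake": brake_logit > 0.0,  # thresholded into a discrete action
    }

print(to_game_action((0.25, 0.9, -0.4)))
```

This keeps the actor's output purely continuous, which is what lets an algorithm designed for continuous control drive an environment that also expects discrete choices.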

Hybrid SAC exhibits competitive performance with the state of the art on parameterised-action benchmarks. The researchers showed that this hybrid model can be successfully applied to train a car on a high-speed driving task in a commercial video game, demonstrating the practical usefulness of such an algorithm for the video game industry.

While working with mixed discrete-continuous actions, the researchers gained several insights, which they shared as advice on obtaining an appropriate action representation for a given task.


See the rest here:

This AI Agent Uses Reinforcement Learning To Self-Drive In A Video Game - Analytics India Magazine

10 Machine Learning Techniques and their Definitions – AiThority

Posted: December 9, 2019 at 7:52 pm

When one technology replaces another, it's not easy to accurately ascertain how the new technology will impact our lives. With so much buzz around the modern applications of Artificial Intelligence, Machine Learning, and Data Science, it becomes difficult to track the developments of these technologies. Machine Learning, in particular, has undergone a remarkable evolution in recent years. Many Machine Learning (ML) techniques have come to the foreground recently, most of which go beyond the traditionally simple classifications of this highly scientific Data Science specialization.

Read More: Beyond RPA And Cognitive Document Automation: Intelligent Automation At Scale

Let's point out the top ML techniques that industry leaders and investors are keenly following, their definitions, and their commercial applications.

Perceptual Learning is the technique of giving AI ML algorithms better perception abilities, so that they can categorize and differentiate spatial and temporal patterns in the physical world.

For humans, Perceptual Learning is mostly instinctive and condition-driven. It means humans learn perceptual skills without actual awareness. In the case of machines, these learning skills are mapped implicitly using sensors, mechanoreceptors, and connected intelligent machines.

Most AI ML engineering companies boast of developing and delivering AI ML models that run on an automated platform, openly challenging the presence of, and need for, a Data Scientist in the engineering process.

Automated Machine Learning (AutoML) is defined as fully automating the entire process of Machine Learning model development, right up to the point of application.

AutoML enables companies to leverage AI ML models in an automated environment without truly seeking the involvement and supervision of Data Scientists, AI Engineers or Analysts.

Google, Baidu, IBM, Amazon, H2O, and a number of other technology-innovation companies already offer a host of AutoML environments for many commercial applications. These applications have swept into nearly every business in every industry, including Healthcare, Manufacturing, FinTech, Marketing and Sales, Retail, Sports, and more.
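
The article gives no code, but the core idea of AutoML — automatically trying candidate models and keeping the one that validates best — can be sketched in a few lines. This is a minimal illustrative sketch, not any vendor's actual API; the model family (polynomials), the split ratio, and all names here are assumptions.

```python
import numpy as np

def auto_select_model(x, y, max_degree=4):
    """Tiny AutoML-style search: fit polynomial models of increasing
    degree, score each on a held-out split, and keep the best one."""
    rng = np.random.default_rng(0)
    idx = rng.permutation(len(x))
    cut = int(0.7 * len(x))
    train, val = idx[:cut], idx[cut:]

    best = None
    for degree in range(max_degree + 1):
        coeffs = np.polyfit(x[train], y[train], degree)
        mse = float(np.mean((np.polyval(coeffs, x[val]) - y[val]) ** 2))
        if best is None or mse < best[1]:
            best = (degree, mse, coeffs)
    return best

# Synthetic quadratic data with a little noise, purely for illustration.
x = np.linspace(-3, 3, 60)
y = 2.0 * x**2 + 1.0 + 0.1 * np.random.default_rng(1).normal(size=60)
degree, mse, _ = auto_select_model(x, y)
print(degree, mse)
```

Commercial AutoML platforms extend this same loop to feature engineering, model families, and deployment, but the select-by-validation principle is the same.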

Bayesian Machine Learning is a unique specialization within AI ML projects that leverages statistical models along with Data Science techniques. Any ML technique that uses the Bayes Theorem and a Bayesian statistical modeling approach falls under the purview of Bayesian Machine Learning.

The contemporary applications of Bayesian ML involve the use of the open-source coding platform Python. Unique applications include
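
The Bayes Theorem mentioned above is simple enough to demonstrate directly in Python. The numbers below (a test's sensitivity, false-positive rate, and a condition's prevalence) are illustrative assumptions, not figures from the article.

```python
def bayes_posterior(prior, likelihood, evidence_given_not):
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E), where the
    evidence probability P(E) is expanded over H and not-H."""
    p_evidence = likelihood * prior + evidence_given_not * (1 - prior)
    return likelihood * prior / p_evidence

# Assumed example: a test with 99% sensitivity and a 5% false-positive
# rate, applied to a condition with 1% prevalence.
posterior = bayes_posterior(prior=0.01, likelihood=0.99, evidence_given_not=0.05)
print(round(posterior, 3))  # 0.167 -- a positive result is still far from certain
```

The counterintuitive result (a positive test yields only a ~17% posterior) is exactly the kind of reasoning Bayesian ML models automate at scale.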

A good ML program would be expected to perpetually learn to perform a set of complex tasks. This learning mechanism is understood through the specialized branch of AI ML techniques called Meta-Learning.

The industry-wide definition of Meta-Learning is the ability to learn and generalize AI into different real-world scenarios encountered during ML training time, using a specific volume and variety of data.

Meta-Learning techniques can be further differentiated into three categories

In each of these categories, there is a unique learner, a meta-learner, and labeled vectors that map data-time-spatial features into a set of network processes for weighing real-world scenarios labeled with context and inferences.

All the recent Image Processing and Voice Search techniques use the Meta-Learning techniques for their outcomes.

Adversarial ML is one of the fastest-growing and most sophisticated of all ML techniques. It is defined as the ML technique adopted to test and validate the effectiveness of any Machine Learning program in an adverse situation.

As the name suggests, it's the antagonistic principle of genuine AI, used nonetheless to test the veracity of any ML technique when it encounters a unique, adverse situation. It is mostly used to fool an ML model into doubting its own results, thereby leading to a malfunction.
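
The article does not name a specific attack, but one widely known way to "fool" a model as described above is the Fast Gradient Sign Method (FGSM): nudge the input in the direction that most increases the model's loss. Below is a minimal sketch against a fixed logistic-regression classifier; the weights and input are made-up illustration values.

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """Fast Gradient Sign Method: perturb x within an L-infinity
    budget eps in the direction that increases the logistic loss."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # model's P(class 1 | x)
    grad_x = (p - y) * w                    # gradient of log loss w.r.t. x
    return x + eps * np.sign(grad_x)

w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.4, 0.1])          # decision score w @ x + b = 0.7 > 0: class 1
x_adv = fgsm(x, w, b, y=1.0, eps=0.5)
print(w @ x + b, w @ x_adv + b)   # the adversarial score crosses zero
```

A small, targeted perturbation flips the predicted class even though the input barely changed, which is precisely the failure mode Adversarial ML is used to probe.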

Most ML models are capable of generating an answer for one single parameter. But can they answer for an x (unknown or variable) parameter? That's where Causal Inference ML techniques come into play.

Most online AI ML courses teach Causal Inference as a core ML modeling technique. The Causal Inference ML technique is defined as the causal reasoning process of drawing a unique conclusion based on the impact that variables and conditions have on the outcome. This technique is further categorized into Observational ML and Interventional ML, depending on what drives the Causal Inference algorithm.
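
One concrete causal-reasoning procedure (not spelled out in the article, but standard in the field) is the backdoor adjustment: estimate the effect of a treatment by averaging stratum-specific outcomes over a confounder's distribution. The toy records below are invented for illustration.

```python
# Assumed toy data: (confounder z, treatment t, outcome y) records.
records = (
    [(0, 0, 0)] * 40 + [(0, 0, 1)] * 10 +   # z=0, untreated: 20% recover
    [(0, 1, 0)] * 10 + [(0, 1, 1)] * 15 +   # z=0, treated:   60% recover
    [(1, 0, 0)] * 5  + [(1, 0, 1)] * 5  +   # z=1, untreated: 50% recover
    [(1, 1, 0)] * 10 + [(1, 1, 1)] * 40     # z=1, treated:   80% recover
)

def p_y_do_t(records, t):
    """Backdoor adjustment: P(y=1 | do(t)) = sum_z P(y=1 | t, z) * P(z)."""
    n = len(records)
    total = 0.0
    for z in {r[0] for r in records}:
        p_z = sum(1 for r in records if r[0] == z) / n
        stratum = [r for r in records if r[0] == z and r[1] == t]
        total += (sum(r[2] for r in stratum) / len(stratum)) * p_z
    return total

effect = p_y_do_t(records, 1) - p_y_do_t(records, 0)
print(round(effect, 3))  # average treatment effect, about 0.356
```

Adjusting for the confounder z gives the interventional quantity P(y | do(t)), which can differ sharply from the naive observational comparison — the distinction the article draws between Observational and Interventional ML.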

Also commercially popularized as Explainable AI (XAI), this technique involves the use of neural networking and interpretation models to make ML structures more easily understood by humans.

Deep Learning Interpretability is defined as the ML specialization that removes black boxes in AI models, enabling decision-makers and data officers to understand data modeling structures and legally permit the use of AI ML for general purposes.

The ML technique may use one or more of these techniques for Deep Learning Interpretation.

Any data can be accurately plotted using graphs. In Machine Learning, a graph is a data structure consisting of two components: vertices (or nodes) and edges.

Graph ML is a specialized ML technique used to model problems with vertices and edges. Graph Neural Networks (GNNs) give rise to related categories of connected neural networks and artificial neural networks (ANNs).
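
The two-component definition above maps directly onto a few lines of code. A minimal sketch of the data structure (an adjacency-list graph, with invented vertex names):

```python
class Graph:
    """A graph as two components: vertices (nodes) and edges,
    stored as an adjacency list mapping each vertex to its neighbors."""

    def __init__(self):
        self.adj = {}

    def add_edge(self, u, v):
        # An undirected edge is recorded in both directions.
        self.adj.setdefault(u, set()).add(v)
        self.adj.setdefault(v, set()).add(u)

    def neighbors(self, v):
        return self.adj.get(v, set())

g = Graph()
for u, v in [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")]:
    g.add_edge(u, v)
print(sorted(g.neighbors("C")))  # ['A', 'B', 'D']
```

Graph Neural Networks operate on exactly this structure, passing learned messages along the edges of the adjacency list.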

There are at least 50 more ML techniques that can be learned and deployed using various NN models and systems. Click here to learn about the leading ML companies that are constantly transforming Data Science applications with AI ML techniques.


Read more from the original source:

10 Machine Learning Techniques and their Definitions - AiThority

Written by admin

December 9th, 2019 at 7:52 pm

Posted in Machine Learning

Managing Big Data in Real-Time with AI and Machine Learning – Database Trends and Applications

Posted: at 7:52 pm

without comments

Dec 9, 2019

Processing big data in real-time for artificial intelligence, machine learning, and the Internet of Things poses significant infrastructure challenges.

Whether it is for autonomous vehicles, connected devices, or scientific research, legacy NoSQL solutions often struggle at hyperscale. They've been built on top of existing RDBMSs and tend to strain when looking to analyze and act upon data at hyperscale: petabytes and beyond.

DBTA recently held a webinar featuring Theresa Melvin, chief architect of AI-driven big data solutions, HPE, and Noel Yuhanna, principal analyst serving enterprise architecture professionals, Forrester, who discussed trends in what enterprises are doing to manage big data in real-time.

"Data is the new currency, and it is driving today's business strategy to fuel innovation and growth," Yuhanna said.

According to a Forrester survey, the top data challenges are data governance, data silos, and data growth, he explained.

More than 35% of enterprises have failed to get value from big data projects, largely because of skills, budget, complexity, and strategy. Most organizations are dealing with growing multi-format data volume that resides in multiple repositories: relational, NoSQL, Hadoop, and data lakes.

The need for real-time and agile data has grown, he explained. There are too many data silos: multiple repositories, cloud sources.

There is a lack of visibility into data across personas: developers, data scientists, data engineers, data architects, security, and so on. Traditional data platforms, such as data warehouses, relational DBMSs, and ETL tools, have failed to support new business requirements.

"It's all about the customer, and it's critical for organizations to have a platform to succeed," Yuhanna said. Customers prefer personalization. Companies are still early in their AI journey, but they believe it will improve efficiency and effectiveness.

AI and machine learning can hyper-personalize customer experience with targeted offers, he explained. It can also prevent line shutdowns by predicting machine failures.

AI is not one technology; it is comprised of one or more building-block technologies. According to the Forrester survey, Yuhanna said, AI/ML for data will help end users and customers support data intelligence for next-generation use cases such as customer personalization, fraud detection, advanced IoT analytics, and real-time data sharing and collaboration.

AI/ML as a platform feature will help support automation within the BI platform for data integration, data quality, security, governance, transformation, etc., minimizing the human effort required. This helps deliver insights in hours instead of days or months.

Melvin suggested using HPE Persistent Memory. The platform offers real-time analysis, real-time persist, a single source of truth, and a persistent record.

An archived on-demand replay of this webinar is available here.

See the article here:

Managing Big Data in Real-Time with AI and Machine Learning - Database Trends and Applications

Written by admin

December 9th, 2019 at 7:52 pm

Posted in Machine Learning

The NFL And Amazon Want To Transform Player Health Through Machine Learning – Forbes

Posted: at 7:52 pm

without comments

The NFL and Amazon announced an expansion of their partnership at the annual AWS re:Invent conference in Las Vegas that will use artificial intelligence and machine learning to combat player injuries. (Photo by Michael Zagaris/San Francisco 49ers/Getty Images)

Injury prevention in sports is one of the most important issues facing a number of leagues. This is particularly true in the NFL, due to the brutal nature of that punishing sport, which leaves many players sidelined at some point during the season. A number of startups are utilizing technology to address football injury issues, specifically limiting the incidence of concussions. Now, one of the largest companies in the world is working with the league in these efforts.

A week after partnering with the Seattle Seahawks on its machine learning/artificial intelligence offerings, Amazon announced a partnership Thursday in which the technology giant will use those same tools to combat football injuries. Amazon has already been involved with the league through its Next Gen Stats partnership, and now the two organizations will work to advance player health and safety as the sport moves forward after its 100th season this year. Amazon's AWS cloud services will gather and analyze large volumes of player health data and scan video images, with the objective of helping teams treat injuries and rehabilitate players more effectively. The larger goal is to create a new Digital Athlete platform to anticipate injury before it even takes place.

This partnership expands the quickly growing relationship between the NFL and Amazon/AWS, as the two have already teamed up for two years, with the league's Thursday Night Football games streamed on the company's Amazon Prime Video platform. Amazon paid $130 million for rights that run through next season. The league also uses AWS's ML Solutions Lab, as well as Amazon's SageMaker platform, which enables data scientists and developers to build and develop machine learning models, and which can also advance the league's ultimate goal of predicting and limiting player injury.

"The NFL is committed to re-imagining the future of football," said NFL Commissioner Roger Goodell. "When we apply next-generation technology to advance player health and safety, everyone wins, from players to clubs to fans. The outcomes of our collaboration with AWS and what we will learn about the human body and how injuries happen could reach far beyond football. As we look ahead to our next 100 seasons, we're proud to partner with AWS in that endeavor."

The new initiative was announced as part of Amazon's AWS re:Invent conference in Las Vegas on Thursday. Among the technologies AWS and the league announced for the Digital Athlete platform is a computer-simulated model of an NFL player that will model countless scenarios of NFL gameplay in order to identify a game environment that limits risk to players. Digital Athlete uses Amazon's full arsenal of technologies, including the AI, ML, and computer vision technology behind Amazon's Rekognition tool, and it draws on enormous data sets of historical and more recent video to identify a wide variety of solutions, including the prediction of player injury.

"By leveraging the breadth and depth of AWS services, the NFL is growing its leadership position in driving innovation and improvements in health and player safety, which is good news not only for NFL players but also for athletes everywhere," said Andy Jassy, CEO of AWS. "This partnership represents an opportunity for the NFL and AWS to develop new approaches and advanced tools to prevent injury, both in and potentially beyond football."

These announcements come at a time when more NFL players are using their large platforms to bring awareness to injuries and the enormous impact those injuries have on their bodies. Former New England Patriots tight end Rob Gronkowski has been one of the most productive players at his position in league history, but he had to retire this year, at the age of 29, due to a rash of injuries.

The future Hall of Fame player estimated that he suffered "probably 20" concussions in his football career. Such admissions have significant consequences for youth participation rates in the sport. Partnerships like the one announced yesterday will need to succeed for the sport to remain on solid footing heading into the new decade.

See original here:

The NFL And Amazon Want To Transform Player Health Through Machine Learning - Forbes

Written by admin

December 9th, 2019 at 7:52 pm

Posted in Machine Learning

Machine Learning Answers: If Nvidia Stock Drops 10% A Week, Whats The Chance Itll Recoup Its Losses In A Month? – Forbes

Posted: at 7:51 pm

without comments

Jen-Hsun Huang, president and chief executive officer of Nvidia Corp., gestures as he speaks during the company's event at the 2019 Consumer Electronics Show (CES) in Las Vegas, Nevada, U.S., on Sunday, Jan. 6, 2019. CES showcases more than 4,500 exhibiting companies, including manufacturers, developers and suppliers of consumer technology hardware, content, technology delivery systems and more. Photographer: David Paul Morris/Bloomberg

We found that if Nvidia stock drops 10% or more in a week (5 trading days), there is a solid 36% chance it'll recover 10% or more over the next month (about 20 trading days).

Nvidia stock has seen significant volatility this year. While the company has been impacted by the broader correction in the semiconductor space and the trade war between the U.S. and China, the stock is being supported by a strong long-term outlook for GPU demand amid growing applications in Deep Learning and Artificial Intelligence.

Considering the recent price swings, we started with a simple question that investors could be asking about Nvidia stock: given a certain drop or rise, say a 10% drop in a week, what should we expect for the next week? Is it very likely that the stock will recover the next week? What about the next month or a quarter? You can test a variety of scenarios on the Trefis Machine Learning Engine to calculate, if Nvidia stock dropped, what's the chance it'll rise.

For example, after a 5% drop over a week (5 trading days), the Trefis machine learning engine says the chances of an additional 5% drop over the next month are about 40%. That is quite significant, and helpful to know for someone trying to recover from a loss. Knowing what to expect for almost any scenario is powerful. It can help you avoid rash moves. Given the recent volatility in the market and the mix of macroeconomic events (including the trade war with China and interest-rate easing by the U.S. Fed), we think investors can prepare better.
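
Trefis does not publish its methodology, but the simplest version of this kind of estimate is purely empirical: scan a price history, find every week that fell by the threshold, and count how often the stock recovered within the horizon. The sketch below uses a synthetic random-walk price series and made-up parameter names; it is an illustration of the idea, not the Trefis engine.

```python
import numpy as np

def cond_prob(prices, drop=0.05, rise=0.05, window=5, horizon=20):
    """Empirical P(stock gains `rise` within `horizon` days | it fell
    `drop` or more over the past `window` days), from a price series."""
    hits = trials = 0
    for t in range(window, len(prices) - horizon):
        past_return = prices[t] / prices[t - window] - 1.0
        if past_return <= -drop:
            trials += 1
            best_future = max(prices[t + 1 : t + horizon + 1]) / prices[t] - 1.0
            if best_future >= rise:
                hits += 1
    return hits / trials if trials else float("nan")

# Synthetic geometric random walk, purely for illustration.
rng = np.random.default_rng(42)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.03, 2000)))
print(cond_prob(prices))
```

Running the same counting procedure over actual Nvidia price history (and sweeping the window and horizon) is how tables like the ones discussed below can be built.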

Below, we also discuss a few scenarios and answer common investor questions:

Question 1: Does a rise in Nvidia stock become more likely after a drop?


Not really.

Specifically, chances of a 5% rise in Nvidia stock over the next month:

= 40% after Nvidia stock drops by 5% in a week.


= 44.5% after Nvidia stock rises by 5% in a week.

Question 2: What about the other way around, does a drop in Nvidia stock become more likely after a rise?



Specifically, chances of a 5% decline in Nvidia stock over the next month:

= 40% after Nvidia stock drops by 5% in a week


= 27% after Nvidia stock rises by 5% in a week

Question 3: Does patience pay?


According to the data and the Trefis machine learning engine's calculations, largely yes!

Given a drop of 5% in Nvidia stock over a week (5 trading days), while there is only about a 28% chance the stock will gain 5% over the subsequent week, there is a more than 58% chance this will happen within 6 months.

The table below shows the trend:


Question 4: What about the possibility of a drop after a rise if you wait for a while?


After seeing a rise of 5% over 5 days, the chances of a 5% drop in Nvidia stock are about 30% over the subsequent quarter of waiting (60 trading days). However, this chance drops slightly to about 29% when the waiting period is a year (250 trading days).


Follow this link:

Machine Learning Answers: If Nvidia Stock Drops 10% A Week, Whats The Chance Itll Recoup Its Losses In A Month? - Forbes

Written by admin

December 9th, 2019 at 7:51 pm

Posted in Machine Learning

NFL Looks to Cloud and Machine Learning to Improve Player Safety – Which-50

Posted: at 7:51 pm

without comments

America's National Football League is turning to emerging technology to try to solve its ongoing challenges around player safety. The sport's governing body says it has amassed huge amounts of data but wants to apply machine learning to gain better insights and predictive capabilities.

It is hoped the insights will inform new rules, safer equipment, and better injury rehabilitation methods. However, the data will not be available to independent researchers.

Last week the NFL announced a partnership with Amazon Web Services to provide the digital services including machine learning and digital twin applications. Terms of the deal were not disclosed.

As the NFL has reached hyper-professionalisation, data suggests player injuries have worsened, particularly head injuries sustained through high-impact collisions. Several retired players have been diagnosed with, or report symptoms of, chronic traumatic encephalopathy, a neurodegenerative disease which can only be fully diagnosed post mortem.

As scrutiny has grown, the NFL has responded with several rule changes and redesigned player helmets, both initiatives which it says have reduced concussions. However, the league has also been accused of failing to notify players of the links between concussions and brain injuries.

"All of our initiatives on the health and safety side started with the engineering roadmap around minimising head impact on field," NFL executive vice president Jeff Miller told Which-50 following the announcement.

Miller, who is responsible for player health and safety, said the new technology is a new opportunity to minimise risk to players.

"I think the speed, the pace of the insights that are available as a result of this [technology] are going to continue towards that same goal, hopefully in a much more efficient, and in fact mature, faster supersized scale."

Miller said the NFL has a responsibility to pass on the insights to lower levels of the game like high school and youth leagues. However, the data will not be available to external researchers initially.

"As we find those insights I think we're going to be able to share those, we're going to be able to share those within the sport and hopefully over time outside of the sport as well."

NFL commissioner Roger Goodell announced the AWS deal, which builds on an existing partnership for game statistics, alongside Andy Jassy, the public cloud provider's CEO, during the AWS re:Invent conference in Las Vegas last week.

Goodell said the NFL had amassed huge amounts of data from sensors and video feeds but needed the AWS tools to better leverage it.

"When you take the combination of that, the possibilities are enormous," the NFL boss said. "We want to use the data to change the game. There are very few relationships we get involved with where the partner and the NFL can change the game."

"When we apply next-generation technology to advance player health and safety, everyone wins, from players to clubs to fans."

AWS machine learning tools will be applied to the data to help build a "digital athlete", a type of digital twin which can be used to simulate certain scenarios, including impacts.

"The outcomes of our collaboration with AWS and what we will learn about the human body and how injuries happen could reach far beyond football," he said.

The author traveled to AWS re:Invent as a guest of Amazon.


See more here:

NFL Looks to Cloud and Machine Learning to Improve Player Safety - Which-50

Written by admin

December 9th, 2019 at 7:51 pm

Posted in Machine Learning
