
Archive for the ‘Machine Learning’ Category

Air Force Taps Machine Learning to Speed Up Flight Certifications – Nextgov

Posted: August 27, 2020 at 3:50 am


Machine learning is transforming the way an Air Force office analyzes and certifies new flight configurations.

The Air Force SEEK EAGLE Office sets standards for safe flight configurations by testing and by analyzing historical data to see how different stores, such as a weapon system attached to an F-16, affect flight. A project AFSEO developed along with industry partners can now automate up to 80% of requests for analysis, according to the office's chief data officer, Donna Cotton.

"The application is kind of like an eager junior engineer consulting a senior engineer," Cotton said. "It makes the straightforward calls without any input, but in the hard cases it walks into the senior engineer's office and says: 'Hey, I did a bunch of research and this is what I found out. Can you give me your opinion?'"

Cotton spoke at a Tuesday webinar hosted by Tamr, one of the industry partners involved in the project. Tamr announced on July 30 that AFSEO had awarded the company a $60 million contract for its machine learning application. Two other companies, Dell and Cloudera, helped AFSEO take decades of historical data from simulations, performance studies and the like that were siloed across various specialties and organize them into a searchable data lake.

On top of this new data architecture, the machine learning application provided by Tamr searches through all the historical data to find past records that can help answer new safety recommendation requests automatically.

This tool is critical because the vast majority of AFSEO's flight certification recommendations are made by analogy, meaning they use previous data rather than new flight tests. But in the past, the data was disorganized and fragmented, which made tracking down helpful records a challenge for engineers.

Now, a cleaner AFSEO data lake cuts the amount of time engineers waste on looking for the information they need. Machine learning further speeds up the process by generating safety reports automatically while still keeping the professional engineers in the loop. Even when engineers need to produce original research, the machine learning application can smooth the process by collecting related records to serve as a jumping off point.

The new process helps AFSEO avoid doing costly flight tests while also increasing confidence that the team is making the safety certification correctly with all the information available to them, Cotton said.

"We are able to be more productive," Cotton said. "It's saving us a lot of money because for us, it's not about profit, but it's about hours. It's about how much effort we are going to have to use to solve or to answer a new request."

See the rest here:

Air Force Taps Machine Learning to Speed Up Flight Certifications - Nextgov

Written by admin

August 27th, 2020 at 3:50 am

Posted in Machine Learning

The Role of Artificial Intelligence and Machine Learning in the… – Insurance CIO Outlook


Machine learning has proven useful to insurance agents and brokers in several ways.

FREMONT, CA: Technology has become the dominant force across all businesses in the last few years. Disruptive technologies like Artificial Intelligence (AI), machine learning, and natural language processing are improving rapidly, evolving quickly from theoretical to practical applications. These technologies have also made an impact on insurance agents and brokers. Many people continue to view technology as their foe. They either believe that machines will eventually replace them, or that a machine can never do their job better than they can. While this may not be true, some aspects of it are relatable. For instance, a machine will never be able to provide real-time advice the way a live agent does. However, low-cost, easy-to-use platforms are now available that allow agents and brokers to take advantage of this technology to enhance their delivery of advice and expertise to prospects and clients.

Employee Augmentation

Machine learning helps capture knowledge, skills, and expertise from a generation of insurance staff before they retire in the next 5 to 10 years, so it can be used to train new employees.

Personalized Digital Answers

It helps provide personalized answers to a wide range of insurance questions. Digital customers want answers to their questions anytime, not just when an agent's office is open.

Digital Account Review

It helps create and deliver a digital annual account review for personal lines or small commercial insurance accounts. A robust analysis leads to client satisfaction, creates cross-selling opportunities, and reduces errors-and-omissions problems for the agency.

Many believe that artificial intelligence and machine learning will be the end of insurance agents as a trusted source of adequate protection against financial losses. However, these technologies are a threat only to insurance agents who are simply order-takers. Insurance agents and brokers who embrace the technologies will always find opportunities to grow.

These emerging technologies mustn't be seen as a bane but as a boon. Insurance agents and brokers need to work in tandem with upgrades in technology and leverage them to best effect. The technologies hold real potential to enhance customer satisfaction and offer a higher quality of service.

See Also: Top Machine Learning Companies

View post:

The Role of Artificial Intelligence and Machine Learning in the... - Insurance CIO Outlook


AI and Machine Learning Network Partners Open-Source Blockchain Protocol Waves to Conduct R&D on DLT – Crowdfund Insider


The decentralized finance (DeFi) space is growing rapidly. Oracle protocols like Chainlink, BAND and Gravity have seen a significant increase in adoption in a cryptocurrency market that's still highly speculative and plagued by market manipulation and wash trading. Fetch.ai, an open-access machine learning network established by former DeepMind investors and software engineers, has teamed up with Waves, an established, open-source blockchain protocol that provides developer tools for Web 3.0 applications.

As mentioned in an update shared with Crowdfund Insider:

[Fetch.ai and Waves will] conduct joint R&D for the purpose of bringing increased multi-chain capabilities to Fetch.ai's system of autonomous economic agents (AEA). [They will also] push further into bringing DeFi cross-chain by connecting with Waves' blockchain-agnostic and interoperable decentralized cross-chain and oracle network, Gravity.

As explained in the announcement, the integration with Gravity will enable Fetch.ai's Autonomous Economic Agents to gain access to data sources or feeds for several different market pairs, commodities, indices, and futures. Fetch.ai and Waves aim to achieve closer integration with Gravity in order to provide seamless interoperability to Fetch.ai, making its blockchain-based AI and machine learning (ML) solutions accessible across various distributed ledger technology (DLT) networks.

As stated in the update, the integration will help open up new ways for all Gravity-connected communities to use Fetch.ai's ML functionality within the comfort of their respective ecosystems.

As noted in another update shared with CI, a PwC report predicts that AI and related ML technologies may contribute more than $15 trillion to the world economy from 2017 through 2030. Gartner reveals that during 2019, 37% of organizations had adopted some type of AI into their business operations.

In other DeFi news, Chainlink competitor Band Protocol is securing oracle integration with Nervos, which is a leading Chinese blockchain project.

As confirmed in a release:

Nervos is a Chinese public blockchain that's tooling up for a big DeFi push. The project is building DeFi platforms with China Merchants Bank International and Huobi, and also became one of the first public blockchains to integrate with China's BSN. Amid the DeFi surge, Nervos is integrating Band's oracles to give developers access to real-world data like crypto price feeds.


See the original post:

AI and Machine Learning Network Partners Open-Source Blockchain Protocol Waves to Conduct R&D on DLT - Crowdfund Insider


AI may not predict the next pandemic, but big data and machine learning can fight this one – ZDNet


In April, at the height of the lockdown, computer-science professor Àlex Arenas predicted that a second wave of coronavirus was highly possible this summer in Spain.

At the time, many scientists were still confident that high temperature and humidity would slow the impact and spread of the virus over the summer months, as happens with seasonal flu.

Unfortunately, Arenas' predictions have turned out to be accurate. Madrid, the Basque country, Aragon, Catalonia, and other Spanish regions are currently dealing with a surge in COVID-19 cases, despite the use of masks, hand-washing and social distancing.

SEE: Managing AI and ML in the enterprise 2020: Tech leaders increase project development and implementation (TechRepublic Premium)

Admittedly, August is not as bad as March for Spain, but it's still not a situation many foresaw.

Arenas' predictions were based on mathematical modeling and underline the important role technology can play in the timing of decisions about the virus and understanding its spread.

"The virus does as we do," says Arenas. So analyzing epidemiological, environmental and mobility data becomes crucial to taking the right actions to contain the spread of the virus.

To help deal with the pandemic, the Catalan government has created a public-private health observatory. It brings together the efforts of the administration, the Hospital Germans Trias i Pujol and several research centers, such as the Center of Innovation for Data Tech and Artificial Intelligence (CIDAI), the Technology Center Eurecat, the Barcelona Supercomputing Center (BSC), the University Rovira i Virgili and the University of Girona, as well as the Mobile World Capital Barcelona.

The Mobile World Capital Barcelona brings to bear the GSMA AI for Impact initiative, which is guided by a taskforce of 20 mobile operators and an advisory panel from 12 UN agencies and partners.

Beyond the institutions, there is a real desire to join forces to respond to the virus using technology. Dani Marco, director general of innovation and the digital economy in the government of Catalonia, makes it clear that "having comparative data on the flu and SARS-CoV-2, mobility, meteorology and population census does help us react quicker and more efficiently against the pandemic".

Data comes from public databases and also from mobile operators, which provide mobility records. It is all anonymized to avoid privacy concerns.

However, the diversity of the sources of the data is a problem. Miguel Ponce de León, a postdoctoral researcher at BSC, the center hosting the project's database, says the data coming from the regions is heterogeneous because it is based on various standards.

So one of the main tasks at BSC is cleaning data to make it usable in predicting trends and building dashboards with useful information. The goal is to have many models running on BSC's supercomputers to answer a range of questions; how public mobility is promoting the spread of the virus is just one of them.

Arenas argues that having mobility data is crucial as "it tells you the time you have before the infection spreads from one place to another".

"Air-traffic data could have told us when the pandemic would arrive to Spain from China. But nobody was ready."

Being prepared is now more important than ever. In this regard, the Catalan government's Marco stresses that any epidemiologist will be able to use the tools developed at the observatory. He is convinced that digital tools can help, even though they're not the only solution.

According to Professor Arenas: "We need models on how epidemics evolve, and data is crucial in adjusting these models. But making predictions on the next pandemic is highly difficult, even with AI."

He advocates rapid testing methods, even if some scientists challenge their accuracy, as they could provide a useful alternative to PCR (polymerase chain reaction) tests, which also have limitations. He also recommends the use of a contact-tracing app like the Spanish Radar COVID, based on the DP3T decentralized protocol.

"A person can trace up to three contacts over the phone. The app enables you to increase that number to six to eight contacts," he says.

SEE:Coronavirus: Business and technology in a pandemic

Oriol Mitjà, researcher and consultant physician in infectious diseases at the Hospital Germans Trias i Pujol, agrees that Bluetooth technology can be helpful. But of course, "We should still fight against the idea that it's an app to control the population, because it's not," says Arenas.

Other countries, like Germany, Ireland and Switzerland, have taken the view that if there is any chance of an app making even a small contribution to the battle against the virus, it is worth a go.

Marc Torrent, director of the CIDAI, argues that being able to combine reliable data and epidemiological expertise to improve the management of public resources is already a victory.

The Catalan government has created a public-private health observatory to bring together the efforts and data from a number of bodies fighting COVID.

See the rest here:

AI may not predict the next pandemic, but big data and machine learning can fight this one - ZDNet


Machine Learning Artificial intelligence Market Size and Growth By Leading Vendors, By Types and Application, By End Users and Forecast to 2020-2027 -…



The report also inspects the financial standing of the leading companies, which includes gross profit, revenue generation, sales volume, sales revenue, manufacturing cost, individual growth rate, and other financial ratios.

Research Objective:

Our panel of trade analysts has taken immense efforts in this exercise to produce relevant and reliable primary and secondary data regarding the Machine Learning Artificial intelligence market. The report also delivers inputs from trade consultants that will help the key players save time on internal analysis. Readers will benefit from the inferences delivered in the report, which gives an in-depth and extensive analysis of the Machine Learning Artificial intelligence market.

The Machine Learning Artificial intelligence Market is Segmented:

In market segmentation by types of Machine Learning Artificial intelligence, the report covers-

This Machine Learning Artificial intelligence report covers vital elements such as market trends, share, size, and aspects that facilitate the growth of the companies operating in the market, helping readers implement profitable strategies to boost the growth of their business. This report also analyses the expansion, market size, key segments, market share, application, key drivers, and restraints.

Machine Learning Artificial intelligence Market Regional Analysis:

Geographically, the Machine Learning Artificial intelligence market is segmented across the following regions: North America, Europe, Latin America, Asia Pacific, and Middle East & Africa.

Key Coverage of Report:

Key insights of the report:

In conclusion, the Machine Learning Artificial intelligence Market report provides a detailed study of the market by taking into account leading companies, present market status, and historical data for accurate market estimations, serving as an industry-wide database for both established players and new entrants in the market.

About Us:

Market Research Intellect provides syndicated and customized research reports to clients from various industries and organizations with the aim of delivering functional expertise. We provide reports for all industries including Energy, Technology, Manufacturing and Construction, Chemicals and Materials, Food and Beverage, and more. These reports deliver an in-depth study of the market with industry analysis, the market value for regions and countries, and trends that are pertinent to the industry.

Contact Us:

Mr. Steven Fernandes

Market Research Intellect

New Jersey ( USA )

Tel: +1-650-781-4080

Original post:

Machine Learning Artificial intelligence Market Size and Growth By Leading Vendors, By Types and Application, By End Users and Forecast to 2020-2027 -...


Explainable AI: From the peak of inflated expectations to the pitfalls of interpreting machine learning models – ZDNet


Machine learning and artificial intelligence are helping automate an ever-increasing array of tasks, with ever-increasing accuracy. They are supported by the growing volume of data used to feed them, and the growing sophistication in algorithms.

The flip side of more complex algorithms, however, is less interpretability. In many cases, the ability to retrace and explain outcomes reached by machine learning (ML) models is crucial, as:

"Trust models based on responsible authorities are being replaced by algorithmic trust models to ensure privacy and security of data, source of assets and identity of individuals and things. Algorithmic trust helps to ensure that organizations will not be exposed to the risk and costs of losing the trust of their customers, employees and partners. Emerging technologies tied to algorithmic trust include secure access service edge, differential privacy, authenticated provenance, bring your own identity, responsible AI and explainable AI."

The above quote is taken from Gartner's newly released 2020 Hype Cycle for Emerging Technologies. In it, explainable AI is placed at the peak of inflated expectations. In other words, we have reached peak hype for explainable AI. To put that into perspective, a recap may be useful.

As experts such as Gary Marcus point out, AI is probably not what you think it is. Many people today conflate AI with machine learning. While machine learning has made strides in recent years, it's not the only type of AI we have. Rule-based, symbolic AI has been around for years, and it has always been explainable.

Incidentally, that kind of AI, in the form of "Ontologies and Graphs," is also included in the same Gartner Hype Cycle, albeit in a different phase -- the trough of disillusionment. Incidentally, again, that's conflating: ontologies are part of AI, while graphs are not necessarily.

That said: If you are interested in getting a better understanding of the state of the art in explainable AI machine learning, reading Christoph Molnar's book is a good place to start. Molnar is a data scientist and Ph.D. candidate in interpretable machine learning. He has written the book Interpretable Machine Learning: A Guide for Making Black Box Models Explainable, in which he elaborates on the issue and examines methods for achieving explainability.

Gartner's Hype Cycle for Emerging Technologies, 2020. Explainable AI, meaning interpretable machine learning, is at the peak of inflated expectations. Ontologies, a part of symbolic AI that is explainable, are in the trough of disillusionment.

Recently, Molnar and a group of researchers attempted to address ML practitioners by raising awareness of pitfalls and pointing out solutions for correct model interpretation, as well as ML researchers by discussing open issues for further research. Their work was published as a research paper, titled Pitfalls to Avoid when Interpreting Machine Learning Models, at the ICML 2020 Workshop XXAI: Extending Explainable AI Beyond Deep Models and Classifiers.

Similar to Molnar's book, the paper is thorough. Admittedly, however, it's also more involved. Yet, Molnar has striven to make it more approachable by means of visualization, using what he dubs "poorly drawn comics" to highlight each pitfall. As with Molnar's book on interpretable machine learning, we summarize findings here, while encouraging readers to dive in for themselves.

The paper mainly focuses on the pitfalls of global interpretation techniques when the full functional relationship underlying the data is to be analyzed. Discussion of "local" interpretation methods, where individual predictions are to be explained, is out of scope. For a reference on global vs. local interpretations, you can refer to Molnar's book as previously covered on ZDNet.

Authors note that ML models usually contain non-linear effects and higher-order interactions. As interpretations are based on simplifying assumptions, the associated conclusions are only valid if we have checked that the assumptions underlying our simplifications are not substantially violated.

In classical statistics this process is called "model diagnostics," and the research claims that a similar process is necessary for interpretable ML (IML) based techniques. The research identifies and describes pitfalls to avoid when interpreting ML models, reviews (partial) solutions for practitioners, and discusses open issues that require further research.

Under- or overfitting models will result in misleading interpretations regarding true feature effects and importance scores, as the model does not match the underlying data-generating process well. Evaluation on training data should not be used for ML models due to the danger of overfitting. We have to resort to out-of-sample validation such as cross-validation procedures.

Formally, IML methods are designed to interpret the model instead of drawing inferences about the data generating process. In practice, however, the latter is the goal of the analysis, not the former. If a model approximates the data generating process well enough, its interpretation should reveal insights into the underlying process. Interpretations can only be as good as their underlying models. It is crucial to properly evaluate models using training and test splits -- ideally using a resampling scheme.
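As a minimal sketch of this evaluation discipline (assuming scikit-learn and a synthetic dataset, neither of which the paper prescribes), comparing training-set accuracy against a cross-validated estimate makes the overfitting gap visible:

```python
# Hedged sketch: a large gap between training accuracy and cross-validated
# accuracy signals overfitting; interpretations should rest on the latter.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

train_acc = model.score(X, y)                       # in-sample: optimistic
cv_acc = cross_val_score(model, X, y, cv=5).mean()  # out-of-sample estimate
print(f"train={train_acc:.3f}  cv={cv_acc:.3f}")
```

A random forest will typically score near-perfectly on its own training data while the cross-validated figure is noticeably lower; only the latter should inform interpretation.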

Flexible models should be part of the model selection process so that the true data-generating function is more likely to be discovered. This is important, as the Bayes error for most practical situations is unknown, and we cannot make absolute statements about whether a model already fits the data optimally.

Using opaque, complex ML models when an interpretable model would have been sufficient (i.e., one with similar performance) is a common mistake. The recommended approach is to start with simple, interpretable models and gradually increase complexity in a controlled, step-wise manner, carefully measuring and comparing predictive performance at each step.

Measures of model complexity allow us to quantify the trade-off between complexity and performance and to automatically optimize for multiple objectives beyond performance. Some steps toward quantifying model complexity have been made. However, further research is required as there is no single perfect definition of interpretability but rather multiple, depending on the context.

This pitfall is further analyzed in three sub-categories: Interpretation with extrapolation, confusing correlation with dependence, and misunderstanding conditional interpretation.

Interpretation with Extrapolation refers to producing artificial data points that are used for model predictions with perturbations. These are aggregated to produce global interpretations. But if features are dependent, perturbation approaches produce unrealistic data points. In addition, even if features are independent, using an equidistant grid can produce unrealistic values for the feature of interest. Both issues can result in misleading interpretations.

Before applying interpretation methods, practitioners should check for dependencies between features in the data (e.g., via descriptive statistics or measures of dependence). When it is unavoidable to include dependent features in the model, which is usually the case in ML scenarios, additional information regarding the strength and shape of the dependence structure should be provided.

Confusing correlation with dependence is a typical error. The Pearson correlation coefficient (PCC) is a measure used to track dependence among ML features. But features with a PCC close to zero can still be dependent and cause misleading model interpretations. While independence between two features implies that the PCC is zero, the converse is generally false.

Any type of dependence between features can have a strong impact on the interpretation of the results of IML methods. Thus, knowledge about (possibly non-linear) dependencies between features is crucial. Low-dimensional data can be visualized to detect dependence. For high-dimensional data, several other measures of dependence in addition to PCC can be used.
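A tiny, self-contained illustration of this point (the symmetric quadratic example below is an added illustration, not one from the paper): a feature that fully determines another can still have a Pearson correlation of essentially zero.

```python
# Sketch: y is a deterministic function of x, yet their Pearson correlation
# is ~0 because the relationship is non-linear and symmetric around zero.
import statistics

x = [i / 10 for i in range(-50, 51)]  # symmetric grid around 0
y = [v ** 2 for v in x]               # complete dependence: y = x**2

def pearson(a, b):
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    var_a = sum((u - ma) ** 2 for u in a)
    var_b = sum((v - mb) ** 2 for v in b)
    return cov / (var_a * var_b) ** 0.5

print(pearson(x, y))  # approximately 0, despite complete dependence
```

This is exactly why a near-zero PCC must never be read as evidence of independence before interpreting a model.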

Misunderstanding conditional interpretation. Conditional variants to estimate feature effects and importance scores require a different interpretation. While conditional variants for feature effects avoid model extrapolations, these methods answer a different question. Interpretation methods that perturb features independently of others also yield an unconditional interpretation.

Conditional variants do not replace values independently of other features, but in such a way that they conform to the conditional distribution. This changes the interpretation as the effects of all dependent features become entangled. The safest option would be to remove dependent features, but this is usually infeasible in practice.

When features are highly dependent and conditional effects and importance scores are used, the practitioner has to be aware of the distinct interpretation. Currently, no approach allows us to simultaneously avoid model extrapolations and to allow a conditional interpretation of effects and importance scores for dependent features.

Global interpretation methods can produce misleading interpretations when features interact. Many interpretation methods cannot separate interactions from main effects. Most methods that identify and visualize interactions are not able to identify higher-order interactions and interactions of dependent features.

There are some methods to deal with this, but further research is still warranted. Furthermore, solutions lack in automatic detection and ranking of all interactions of a model as well as specifying the type of modeled interaction.

Due to the variance in the estimation process, interpretations of ML models can become misleading. When sampling techniques are used to approximate expected values, estimates vary, depending on the data used for the estimation. Furthermore, the obtained ML model is also a random variable, as it is generated on randomly sampled data and the inducing algorithm might contain stochastic components as well.

Hence, the model variance has to be taken into account. The true effect of a feature may be flat, but purely by chance, especially on smaller data, an effect might algorithmically be detected. This effect could cancel out once averaged over multiple model fits. The researchers note the uncertainty in feature effect methods has not been studied in detail.

It's a steep fall from the peak of inflated expectations to the trough of disillusionment. Getting things done for interpretable machine learning takes expertise and concerted effort.

Simultaneously testing the importance of multiple features will result in false-positive interpretations if the multiple comparisons problem (MCP) is ignored. MCP is well known in significance tests for linear models and similarly exists in testing for feature importance in ML.

For example, when simultaneously testing the importance of 50 features, even if all features are unimportant, the probability of observing that at least one feature is significantly important is 0.923. Multiple comparisons will even be more problematic, the higher dimensional a dataset is. Since MCP is well known in statistics, the authors refer practitioners to existing overviews and discussions of alternative adjustment methods.
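The 0.923 figure follows directly from the independence assumption; a few lines reproduce it (the Bonferroni comparison is an added illustration, not from the paper):

```python
# Probability of at least one false positive across m independent tests
# at significance level alpha: 1 - (1 - alpha)**m.
alpha, m = 0.05, 50
p_any = 1 - (1 - alpha) ** m
print(round(p_any, 3))  # → 0.923

# Bonferroni adjustment: test each feature at alpha / m instead.
p_any_adjusted = 1 - (1 - alpha / m) ** m
print(round(p_any_adjusted, 3))  # → 0.049
```

The adjustment brings the family-wise false-positive rate back down to roughly the nominal 5% level, at the cost of reduced power per test.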

Practitioners are often interested in causal insights into the underlying data-generating mechanisms, which IML methods, in general, do not provide. Common causal questions include the identification of causes and effects, predicting the effects of interventions, and answering counterfactual questions. In the search for answers, researchers can be tempted to interpret the result of IML methods from a causal perspective.

However, a causal interpretation of predictive models is often not possible. Standard supervised ML models are not designed to model causal relationships but to merely exploit associations. A model may, therefore, rely on the causes and effects of the target variable as well as on variables that help to reconstruct unobserved influences.

Consequently, the question of whether a variable is relevant to a predictive model does not directly indicate whether a variable is a cause, an effect, or does not stand in any causal relation to the target variable.

As the researchers note, the challenge of causal discovery and inference remains an open key issue in the field of machine learning. Careful research is required to make explicit which insights about the underlying data-generating mechanism can be gained by interpreting a machine learning model, and under which assumptions.

Molnar et al. offer an involved review of the pitfalls of global model-agnostic interpretation techniques for ML. Although, as they note, their list is far from complete, it covers common pitfalls that pose a particularly high risk.

They aim to encourage a more cautious approach when interpreting ML models in practice, to point practitioners to already (partially) available solutions, and to stimulate further research.

Contrasting this highly involved and detailed groundwork to high-level hype and trends on explainable AI may be instructive.

The rest is here:

Explainable AI: From the peak of inflated expectations to the pitfalls of interpreting machine learning models - ZDNet


What is AutoML and Why Should Your Business Consider It – BizTech Magazine


Automation offers substantive benefits as companies look for ways to manage evolving workforces and workplace expectations. More than half of U.S. businesses now plan to increase their automation investment to help increase their agility and improve their ability to handle changing conditions quickly, according to Robotics and Automation News.

Businesses also need to be able to solve problems at scale, something that organizations are increasingly turning to machine learning to do. By creating algorithms that learn over time, it's possible for companies to streamline decision-making with data-driven predictions. But creating the models can be complex and time-consuming, putting an added strain on businesses that may be low on resources.

Automated machine learning combines these two technologies to tap the best of both worlds, allowing companies to gain actionable insights while reducing total complexity. Once implemented, AutoML can help businesses gather and analyze data, respond to it quickly and better manage resources.


AutoML goes a step further than classic machine learning, says Earnest Collins, managing member of Regulatory Compliance and Examination Consultants and a member of the ISACA Emerging Technologies Advisory Group.

"AutoML goes beyond creating machine learning architecture models," says Collins. "It can automate many aspects of the machine learning workflow, including data preprocessing, feature engineering, model selection, architecture search and model deployment."

AutoML deployments can also be categorized by the format of training data used. Collins points to examples such as independent, identically distributed (IID) tabular data, raw text or image data, and notes that some AutoML solutions can handle multiple data types and algorithms.

There is no single algorithm that performs best on all data sets, he says.

Leveraging AutoML solutions offers multiple benefits that go beyond traditional machine learning or automation. The first is speed, according to Collins.

"AutoML allows data scientists to build a machine learning model with a high degree of automation more quickly and conduct hyperparameter search over different types of algorithms, which can otherwise be time-consuming and repetitive," he says. By automating key processes, from raw data set capture to eventual analysis and learning, teams can reduce the amount of time required to create functional models.
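To make the idea concrete, here is a minimal, illustrative sketch of the kind of search an AutoML system automates internally: trying several algorithm families, running a hyperparameter search over each, and keeping the best performer by cross-validated score. This is not any vendor's actual implementation; the candidate models, grids, and synthetic data are assumptions chosen for brevity, using scikit-learn.

```python
# Illustrative sketch of what AutoML automates: searching across
# algorithm families and hyperparameters, keeping the best by CV score.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Small synthetic classification problem stands in for real business data.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Candidate algorithms and grids -- the "no single algorithm performs
# best on all data sets" problem that AutoML searches over.
candidates = [
    (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    (RandomForestClassifier(random_state=0), {"n_estimators": [50, 100]}),
]

best_score, best_model = -1.0, None
for estimator, grid in candidates:
    search = GridSearchCV(estimator, grid, cv=3)
    search.fit(X, y)
    if search.best_score_ > best_score:
        best_score, best_model = search.best_score_, search.best_estimator_

print(type(best_model).__name__, round(best_score, 3))
```

A production AutoML platform adds the surrounding stages (data preprocessing, feature engineering, deployment) on top of this core search loop.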

Another benefit is scalability. While machine learning models can't compete with the in-depth nature of human cognition, evolving technology makes it possible to create effective analogs of specific human learning processes. Introducing automation, meanwhile, helps apply this process at scale, in turn enabling data scientists, engineers and DevOps teams to focus on business problems instead of iterative tasks, Collins says.

A third major benefit is simplicity, according to Collins. AutoML is a tool that assists in automating the process of applying machine learning to real-world problems, he says.

By reducing the complexity that comes with building, testing and deploying entirely new ML frameworks, AutoML streamlines the processes required to solve line-of-business challenges.

For machine learning solutions to deliver business value, ML models must be optimized based on current conditions and desired outputs. Doing so requires the use of hyperparameters, which Collins defines as adjustable parameters that govern the training of ML models.

Optimal ML model performance depends on the hyperparameter configuration value selection; this can be a time-consuming, manual process, which is where AutoML can come into play, Collins adds.

By using AutoML platforms to automate key hyperparameter selection and balancing, including learning rate, batch size and drop rate, it's possible to reduce the amount of time and effort required to get ML algorithms up and running.
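The manual version of that hyperparameter search looks something like the sketch below, which sweeps learning rate and batch size for a small neural network. It is only a hand-rolled illustration of the grid an AutoML platform would explore automatically; scikit-learn's MLPClassifier has no dropout parameter, so the article's "drop rate" knob is omitted here.

```python
# Hand-rolled hyperparameter grid over learning rate and batch size --
# the repetitive loop an AutoML platform automates.
from itertools import product

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=150, n_features=8, random_state=1)

results = {}
for lr, batch in product([0.001, 0.01], [16, 32]):
    clf = MLPClassifier(hidden_layer_sizes=(16,), learning_rate_init=lr,
                        batch_size=batch, max_iter=300, random_state=1)
    # Mean 3-fold cross-validated accuracy for this configuration.
    results[(lr, batch)] = cross_val_score(clf, X, y, cv=3).mean()

best = max(results, key=results.get)
print("best (learning_rate, batch_size):", best)
```

Each extra hyperparameter multiplies the number of configurations to evaluate, which is why automating this loop saves so much engineer time.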

While AutoML isn't new, evolution across the machine learning and artificial intelligence markets is now driving a second generation of automated machine learning platforms, according to RTInsights. The first wave of AutoML focused on building and validating models, but second-generation platforms include key features such as data preparation and feature engineering to accelerate data science efforts.

But this market remains both fragmented and complex, according to Forbes, because of a lack of established standards and expectations in the data science and machine learning (DSML) industry. Businesses can go with an established provider, such as Microsoft Azure Databricks, or they can opt for more up-and-coming solutions such as Google Cloud AutoML.

There are more tools around the corner. According to Synced, Google researchers are now developing AutoML-Zero, which is capable of searching for applicable ML algorithms within a defined space to reduce the need to create them from scratch. The search giant is also applying its AutoML to unique use cases; for example, the company's new Fabricius tool, which leverages Google's AutoML Vision toolset, is designed to decode ancient Egyptian hieroglyphics.

Technological advancements combined with shifting staff priorities are somewhat driving robotic replacements. According to Time, companies are replacing humans wherever possible to reduce risk and improve operational output. But that wont necessarily apply to data scientists as AutoML rises, according to Collins.

The skills of professional, well-trained data scientists will be essential to interpreting data and making recommendations for how information should be used, he says. AutoML will be a key tool for improving their productivity, and the citizen data scientist, with no training in the field, would not be able to do machine learning without AutoML.

In other words, while AutoML platforms provide business benefits, recognizing the full extent of automated advantages will always require human expertise.

Go here to read the rest:

What is AutoML and Why Should Your Business Consider It - BizTech Magazine

Written by admin

August 27th, 2020 at 3:50 am

Posted in Machine Learning

Machine Learning Courses and Market 2020: Growing Trends in Global Regions with COVID-19 Pandemic Analysis, Growth Size, Share, Types, Applications,…

Posted: at 3:50 am

without comments

Machine Learning Courses and Market research is an intelligence report with meticulous efforts undertaken to study the right and valuable information. It offers an overview of the market including its definition, applications, key drivers, key market players, key segments, and manufacturing technology. Moreover, the report is a detailed study exhibiting current market trends with an outlook on future market developments.

Get Sample Copy at

Regions and Countries Level Analysis

Regional analysis is another highly comprehensive part of the research and analysis study of the global Machine Learning Courses and market presented in the report. This section sheds light on the sales growth of different regional and country-level Machine Learning Courses and markets. For the historical and forecast period 2015 to 2026, it provides detailed and accurate country-wise volume analysis and region-wise market size analysis of the global Machine Learning Courses and market.

The key players covered in this study

Ivy Professional School

Jigsaw Academy
Inquire about or share any questions on this report before purchase @

No of Pages: 121

Market segmentation

Machine Learning Courses and market is split by Type and by Application. For the period 2015-2026, the growth among segments provides accurate calculations and forecasts for sales by Type and by Application in terms of volume and value. This analysis can help you expand your business by targeting qualified niche markets.

Market segment by Type, the product can be split into:

Rote Learning

Learning From Instruction

Learning By Deduction

Learning By Analogy

Explanation-Based Learning

Learning From Induction

Market segment by Application, split into:

Data Mining

Computer Vision

Natural Language Processing

Biometrics Recognition

Search Engines

Medical Diagnostics

Detection Of Credit Card Fraud

Securities Market Analysis

DNA Sequencing

What our report offers:

Market share assessments for the regional and country level segments

Market share analysis of the top industry players

Strategic recommendations for the new entrants

Market forecasts for a minimum of 9 years of all the mentioned segments, sub-segments and the regional markets

Market Trends (Drivers, Constraints, Opportunities, Threats, Challenges, Investment Opportunities, and recommendations)

Strategic recommendations in key business segments based on the market estimations

Competitive landscape mapping of the key common trends

Company profiling with detailed strategies, financials, and recent developments

Supply chain trends mapping the latest technological advancements

Global Machine Learning Courses and Market report has been compiled through extensive primary research (through analytical research, market survey and observations) and secondary research. The Machine Learning Courses and Market report also features a complete qualitative and quantitative assessment by analyzing data gathered from industry analysts, key vendors, business news, raw material suppliers, regional clients, company journals, and market participants across key points in the industry's value chain.

Order a Copy of this Report @

Table of Contents

1 Industry Overview of Machine Learning Courses and

2 Industry Chain Analysis of Machine Learning Courses and

3 Manufacturing Technology of Machine Learning Courses and

4 Major Manufacturers Analysis of Machine Learning Courses and

5 Global Productions, Revenue and Price Analysis of Machine Learning Courses and by Regions, Manufacturers, Types and Applications

6 Global and Major Regions Capacity, Production, Revenue and Growth Rate of Machine Learning Courses and 2014-2019

7 Consumption Volumes, Consumption Value, Import, Export and Sale Price Analysis of Machine Learning Courses and by Regions

8 Gross and Gross Margin Analysis of Machine Learning Courses and

9 Marketing Traders or Distributor Analysis of Machine Learning Courses and

10 Global and Chinese Economic Impacts on Machine Learning Courses and Industry

11 Development Trend Analysis of Machine Learning Courses and

12 Contact information of Machine Learning Courses and

13 New Project Investment Feasibility Analysis of Machine Learning Courses and

14 Conclusion of the Global Machine Learning Courses and Industry 2019 Market Research Report

Customization Service of the Report:-

Orian Research provides customization of reports to fit your needs. This report can be personalized to meet your requirements. If you have any questions, get in touch with our sales team, who will ensure you get a report that suits your needs.

About Us

Orian Research is one of the most comprehensive collections of market intelligence reports on the World Wide Web. Our reports repository boasts over 500,000 industry and country research reports from over 100 top publishers. We continuously update our repository so as to provide our clients easy access to the world's most complete and current database of expert insights on global industries, companies, and products. We also specialize in custom research in situations where our syndicate research offerings do not meet the specific requirements of our esteemed clients.

Contact Us

Ruwin Mendez

Vice President Global Sales & Partner Relations

Orian Research Consultants

US: +1 (415) 830-3727 | UK: +44 020 8144-71-27


Read more from the original source:

Machine Learning Courses and Market 2020: Growing Trends in Global Regions with COVID-19 Pandemic Analysis, Growth Size, Share, Types, Applications,...

Written by admin

August 27th, 2020 at 3:50 am

Posted in Machine Learning

Focusing on ethical AI in business and government – FierceElectronics

Posted: at 3:50 am

without comments

The World Economic Forum and associate partner Appen are wrestling with the thorny issue of how to create artificial intelligence with a sense of ethics.

Their main area of focus is to design standards and best practices for responsible training data used in building machine learning and AI applications. It has already been a long process and continues.

"A solid training data platform and management strategy is often the most critical component of launching a successful, responsible machine learning-powered product into production," said Mark Brayan, CEO of Appen, in a statement. Appen has been providing training data to companies building AI for more than 20 years. In 2019, Appen created its own Crowd Code of Ethics.


Ethical, diverse training data is essential to building a responsible AI system, Brayan added.

Kay Firth-Butterfield, head of AI and machine learning at WEF, said the industry needs guidelines for acquiring and using responsible training data. Companies should address questions around user permissions, privacy, security, bias, safety and how people are compensated for their work in the AI supply chain, she said.

"Every business needs a plan to understand AI and deploy AI safely and ethically," she added in a video overview of the Forum's AI agenda. "The purpose is to think about what are the big issues in AI that really require something be done in the governance area so that AI can flourish."

"We're very much advocating a soft law approach, thinking about standards and guidelines rather than looking to regulation," she said.

The Forum has issued a number of white papers dating to 2018 on ethics and related topics, with a white paper on responsible limits on facial recognition issued in March.

RELATED: Researchers deploy AI to detect bias in AI and humans

In January, the Forum published its AI toolkit for boards of directors with 12 modules for the impacts and potential of AI in company strategy and is currently building a toolkit for transferring those insights to CEOs and other C-suite executives.

Another focus area is on human-centered AI for human resources to create a toolkit for HR professionals that will help promote ethical, human-centered use of AI. Various HR tools have been developed in recent years that rely on AI to hire and retain talent, and the Forum notes that concerns have been raised about AI algorithms encoding bias and discrimination. Errors in the adoption of AI-based products can also undermine employee trust, leading to lower productivity and job satisfaction, the Forum added.

Firth-Butterfield will be a keynote speaker at Appen's annual Train AI conference on October 14.

RELATED: Tech firms grapple with diversity after George Floyd protests

Continue reading here:

Focusing on ethical AI in business and government - FierceElectronics

Written by admin

August 27th, 2020 at 3:50 am

Posted in Machine Learning

CUHK Business School Research Looks at the Limitations of Using Artificial Intelligence to Pick Stocks – Taiwan News

Posted: at 3:50 am

without comments

HONG KONG, CHINA -Media OutReach- 27 August 2020 -It's been called the holy grail of finance. Is it possible to harness the promise of artificial intelligence to make money trading stocks? Many have tried with varying degrees of success. For example, BlackRock, the world's largest money manager, has said its Artificial Intelligence (AI) algorithms have consistently beaten portfolios managed by human stock pickers. However, a recent research study by The Chinese University of Hong Kong (CUHK) reveals that the effectiveness of machine learning methods may require a second look.

The study, titled "Machine Learning versus Economic Restrictions: Evidence from Stock Return Predictability", analysed a large sample of U.S. stocks between 1987 and 2017. Using three well-established deep-learning methods, researchers were able to generate a monthly value-weighted risk-adjusted return of as much as 0.75 percent to 1.87 percent, reflecting the success of machine learning in generating a superior payoff. However, the researchers found that this performance would attenuate if the machine learning algorithms were limited to working with stocks that were relatively easy and cheap to trade.

"We find that the return predictability of deep learning methods weakens considerably in the presence of standard economic restrictions in empirical finance, such as excluding microcaps or distressed firms," says Si Cheng, Assistant Professor at CUHK Business School's Department of Finance and one of the study's authors.

Disappearing Returns

Prof. Cheng, along with her collaborators Prof. Doron Avramov at IDC Herzliya and Lior Metzker, a research student at Hebrew University of Jerusalem, found the portfolio payoff declined by 62 percent when excluding microcaps -- stocks which can be difficult to trade because of their small market capitalisations -- by 68 percent when excluding non-rated firms -- stocks which do not receive a Standard & Poor's long-term issuer credit rating -- and by 80 percent when excluding distressed firms around credit rating downgrades.
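The mechanics behind those figures are straightforward to sketch. Below is a hedged toy example (not the paper's code or data) of computing a value-weighted portfolio return, then recomputing it after an economic restriction such as excluding microcaps; the tickers, market caps, returns, and the $500M cutoff are invented for illustration.

```python
# Toy value-weighted portfolio return, before and after excluding
# "microcaps" (illustrative $500M market-cap cutoff; all data invented).
import pandas as pd

stocks = pd.DataFrame({
    "ticker": ["A", "B", "C", "D"],
    "mktcap": [50_000, 2_000, 800, 120],  # market capitalisation ($M)
    "ret":    [0.01, 0.04, 0.09, 0.15],   # monthly return
})

def value_weighted_return(df):
    # Each stock's weight is its share of total market cap.
    w = df["mktcap"] / df["mktcap"].sum()
    return float((w * df["ret"]).sum())

all_ret = value_weighted_return(stocks)

# Economic restriction: drop the microcap (ticker D) and recompute.
restricted = stocks[stocks["mktcap"] >= 500]
restricted_ret = value_weighted_return(restricted)

print(round(all_ret, 4), round(restricted_ret, 4))  # -> 0.0127 0.0123
```

In this toy data the smallest, hardest-to-trade stock has the highest return, so the restricted payoff is lower, mirroring (in miniature) the attenuation pattern the study documents.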

According to the study, machine learning-based trading strategies are more profitable during periods when arbitrage becomes more difficult, such as when there is high investor sentiment, high market volatility, and low market liquidity.

One caveat of the machine-learning based strategies highlighted by the study is high transaction costs. "Machine learning methods require high turnover and taking extreme stock positions. An average investor would struggle to achieve meaningful alpha after taking transaction costs into account," she says, adding, however, that this finding did not imply that machine learning-based strategies are unprofitable for all traders.

"Instead, we show that machine learning methods studied here would struggle to achieve statistically and economically meaningful risk-adjusted performance in the presence of reasonable transaction costs. Investors thus should adjust their expectations of the potential net-of-fee performance," says Prof. Cheng.

The Future of Machine Learning

"However, our findings should not be taken as evidence against applying machine learning techniques in quantitative investing," Prof. Cheng explains. "On the contrary, machine learning-based trading strategies hold considerable promise for asset management." For instance, they have the capability to process and combine multiple weak stock trading signals into meaningful information that could form the basis for a coherent trading strategy.

Machine learning-based strategies display less downside risk and continue to generate positive payoff during crisis periods. The study found that during several major market downturns, such as the 1987 market crash, the Russian default, the burst of the tech bubble, and the recent financial crisis, the best machine-learning investment method generated a monthly value-weighted return of 3.56 percent, excluding microcaps, while the market return came in at a negative 6.91 percent during the same period.

Prof. Cheng says that the profitability of trading strategies based on identifying individual stock market anomalies -- stocks whose behaviour run counter to conventional capital market pricing theory predictions -- is primarily driven by short positions and is disappearing in recent years. However, machine-learning based strategies are more profitable in long positions and remain viable in the post-2001 period.

"This could be particularly valuable for real-time trading, risk management, and long-only institutions. In addition, machine learning methods are more likely to specialise in stock picking than industry rotation," Prof. Cheng adds, referring to a strategy that seeks to capitalise on the next stage of economic cycles by moving funds from one industry to the next.

The study is the first to provide large-scale evidence on the economic importance of machine learning methods, she adds.

"The collective evidence shows that most machine learning techniques face the usual challenge of cross-sectional return predictability, and the anomalous return patterns are concentrated in difficult-to-arbitrage stocks and during episodes of high limits to arbitrage," Prof. Cheng says. "Therefore, even though machine learning offers unprecedented opportunities to shape our understanding of asset pricing formulations, it is important to consider the common economic restrictions in assessing the success of newly developed methods, and confirm the external validity of machine learning models before applying them to different settings."


Avramov, Doron and Cheng, Si and Metzker, Lior, Machine Learning versus Economic Restrictions: Evidence from Stock Return Predictability (April 5, 2020). Available at SSRN:

This article was first published in the China Business Knowledge (CBK) website by CUHK Business School:

CUHK Business School comprises two schools -- Accountancy and Hotel and Tourism Management -- and four departments -- Decision Sciences and Managerial Economics, Finance, Management and Marketing. Established in Hong Kong in 1963, it is the first business school to offer BBA, MBA and Executive MBA programmes in the region. Today, the School offers 11 undergraduate programmes and 20 graduate programmes including MBA, EMBA, Master, MSc, MPhil and Ph.D.

In the Financial Times Global MBA Ranking 2020, CUHK MBA is ranked 50th. In FT's 2019 EMBA ranking, CUHK EMBA is ranked 24th in the world. CUHK Business School has the largest number of business alumni (37,000+) among universities/business schools in Hong Kong -- many of whom are key business leaders. The School currently has about 4,800 undergraduate and postgraduate students and Professor Lin Zhou is the Dean of CUHK Business School.

More information is available at or by connecting with CUHK Business School on:




WeChat: CUHKBusinessSchool

Go here to see the original:

CUHK Business School Research Looks at the Limitations of Using Artificial Intelligence to Pick Stocks - Taiwan News

Written by admin

August 27th, 2020 at 3:50 am

Posted in Machine Learning
