
Archive for the ‘Machine Learning’ Category

The race to digitization in logistics through machine learning – FreightWaves

Posted: May 5, 2022 at 1:43 am



A recent Forbes article highlighted the importance of accelerating digital transformation in logistics, arguing that tech leaders should adopt tech-forward thinking, execution and delivery in order to move with speed and keep a laser focus on the customer.

Since the COVID-19 pandemic, and even before, many logistics companies have been turning to technology to streamline their processes. For many, full digitization across the supply chain is the ultimate goal.

Although many companies have already taken steps toward digitizing their supply chains, these processes remain fragmented because of the industry's many moving parts and sectors, such as integrators, forwarders and owners, and the different processes each of them uses.

Scale AI is partnering with companies in the logistics industry to better automate processes across the board and eliminate bottlenecks by simplifying integration, commercial invoicing, document processing and more through machine learning (ML).

ML is a subfield of artificial intelligence that allows applications to predict outcomes without having to be specifically programmed to do so.

The logistics industry has historically depended on paperwork, and this remains a bottleneck today. Many companies already use technology like optical character recognition (OCR) or template-based intelligent document processing (IDP). Both are substandard systems that can process raw data but require human key entry, or engineers to make the data usable by creating and maintaining templates. This is costly and does not scale easily. In a world where end users expect results instantly and at high quality, these methods take too long while providing low accuracy.

"In the industry of logistics, it is a race to digitization to create a competitive edge," said Melisa Tokmak, General Manager of Document AI at Scale. "Trying to use regular methods that require templates and heavily rely on manual key entry is not providing a good customer experience or accurate data quickly. This is making companies lose customer trust while missing out on the ROI machine learning can give them easily."

Scale's mission is to accelerate the development of artificial intelligence.

Scale builds ML models and fine-tunes them for customers using a small sample of their documents. It's this method that removes the need for templates and allows all documents to be processed accurately within seconds, without human intervention. Tokmak believes that the logistics industry needs this type of technology now more than ever.

"In the market right now, every consumer wants things faster, better and cheaper. It is essential for logistics companies to be able to serve the end user better, faster, and cheaper. That means meeting [the end users] where they are," Tokmak said. "This change is already happening, so the question is how can you as a company do this faster than others so that you are early in building competitive edge?"

Rather than simply learning where on a document to find a field, Scale's ML models are capable of understanding the layout, hierarchy and meaning of every field of the document.

Document AI is also flexible to layout changes, table boundaries and other irregularities compared to traditional template-based systems.

Tokmak believes that because current OCR and IDP technology is not getting the results companies in the industry need, the next step is partnering with companies, like Scale, to incorporate ML into their processes. Adopting this technology, Tokmak added, can help companies know more about the market and gain visibility into global trade, which in turn can lead to building new, relevant tech.

Flexport, a recognizable name in the logistics industry and customer of Scale AI, is what is referred to as a digital forwarder. Digital forwarders are companies that digitally help customers through the whole shipment process without owning anything themselves. They function as a tech platform to make global trade easy, looking end to end to bring both sides of the marketplace together and ship more easily.

Before integrating an ML solution, Flexport struggled to make more traditional means of data extraction, like template-based and error-prone OCR, work. Knowing its expertise was in logistics, Flexport partnered with Scale AI, an expert in ML, to reach its mission of making global trade easy and accessible for everyone more quickly, efficiently and accurately. Now Flexport prides itself on its ability to process information quickly and without human intervention.

As the supply chain crisis worsened, Flexport's needs evolved. It became increasingly important for Flexport to extract estimated times of arrival (ETAs) to provide end users more visibility. Scale's Document AI solution accommodated these changing requirements by retraining the ML models to extract additional fields from unstructured documents in seconds and without templates, providing more visibility into global trade at a time when many were struggling to get this level of insight at all.

According to a recent case study, Flexport has more than 95% accuracy with no templates and a less than 60-second turnaround since partnering with Scale.

Tokmak believes that in the future, companies ideally should have technology that functions as a knowledge graph (a graph that represents things like objects, events, situations or concepts and illustrates the relationships among them) to make business decisions accurately and fast. As it pertains to the logistics industry, Tokmak defines it as a global trade knowledge graph, which would provide information on where things are coming from and going, and how things are working, with sensors all coming together to deliver users the best experience in the fastest way possible.

Realistically, this will take time to fully incorporate and will require partnership from the logistics companies. "The trick to enabling this future is starting with what will bring the best ROI and what will help your company find the easiest way to build new cutting-edge products immediately," Tokmak said. "There is a lot ML can achieve in this area without being very hard to adopt. Document processing is one of them: a problem not solved by existing methods but one that can be solved with machine learning. It is a high-value area with the benefits of reducing costs, reducing delays, and bringing one source of truth for organizations within the company to operate with."

Tokmak stated that many in the industry have been disappointed with previous methods and were afraid to switch to ML for fear of the same disappointment, but that has changed quickly in the last few years. Companies understand that ML is different, and they need to get on this train fast to actualize the gains from the technology.

"It is so important to show people the power of ML and how every industry is getting reshaped with ML," Tokmak said. "The first adopters are the winners."


Continued here:

The race to digitization in logistics through machine learning - FreightWaves

Written by admin

May 5th, 2022 at 1:43 am

Posted in Machine Learning

Machine learning-based prediction of relapse in rheumatoid arthritis patients using data on ultrasound examination and blood test | Scientific Reports…

Posted: at 1:43 am




Go here to read the rest:

Machine learning-based prediction of relapse in rheumatoid arthritis patients using data on ultrasound examination and blood test | Scientific Reports...

Written by admin

May 5th, 2022 at 1:43 am

Posted in Machine Learning

New machine learning maps the potentials of proteins – Nanowerk

Posted: at 1:43 am



May 04, 2022 (Nanowerk News) The biotech industry is constantly searching for the perfect mutation, where properties from different proteins are synthetically combined to achieve a desired effect. It may be necessary to develop new medicaments, or enzymes that prolong the shelf-life of yogurt, break down plastics in the wild, or make washing powder effective at low water temperatures.

New research from DTU Compute and the Department of Computer Science at the University of Copenhagen (DIKU) can, in the long term, help the industry accelerate the process. In the journal Nature Communications ("Learning meaningful representations of protein sequences"), the researchers explain how a new way of using machine learning (ML) draws a map of proteins that makes it possible to draw up a candidate list of the proteins you need to examine more closely.

[Figure: An example of the shortest path between two proteins, taking the geometry of the map into account. Defining distances in this way makes biologically more precise and robust conclusions possible. Image: W. Boomsma, N. S. Detlefsen, S. Hauberg]

"In recent years, we have started to use machine learning to form a picture of permitted mutations in proteins. The problem is, however, that you get different images depending on what method you use, and even if you train the same model several times, it can provide different answers about how the biology is related.

"In our work, we are looking at how to make this process more robust, and we are showing that you can extract significantly more biological information than you have previously been able to. This is an important step forward in order to be able to explore the mutation landscape in the hunt for proteins with special properties," says Postdoc Nicki Skafte Detlefsen from the Cognitive Systems section at DTU Compute.

The map of the proteins

A protein is a chain of amino acids, and a mutation occurs when just one of these amino acids in the chain is replaced with another. As there are 20 natural amino acids, the number of possible mutations grows so quickly that it is completely impossible to study them all. There are more possible mutations than there are atoms in the universe, even for simple proteins. It is not possible to test everything experimentally, so you must be selective about which proteins you want to try to produce synthetically.

The researchers from DIKU and DTU Compute have used their ML model to generate a picture of how the proteins are linked. By presenting the model with many examples of protein sequences, it learns to draw a map with a dot for each protein, so that closely related proteins are placed close to each other while distantly related proteins are placed far apart.

The ML model is based on mathematics and geometry developed to draw maps. Imagine that you must make a map of the globe. If you zoom in on Denmark, you can easily draw a map on a piece of paper that preserves the geography. But if you must draw the whole earth, mistakes will occur because you stretch the globe, so that the Arctic becomes a long country instead of a pole. On the map, the earth is distorted. For this reason, research in map-making has developed a lot of mathematics that describes these distortions and compensates for them.

This is exactly the theory that DIKU and DTU Compute have been able to extend to cover their machine learning model (deep learning) for proteins. Because they have mastered the distortion of the map, they can also compensate for it.

"It enables us to talk about what a sensible distance measure is between proteins that are closely related, and then we can suddenly measure it. In this way, we can draw a path through the map of the proteins that tells us which way we expect one protein to evolve into another, i.e., mutate, since they are all related by evolution. In this way, the ML model can measure a distance between the proteins and draw optimal paths between promising proteins," says Wouter Boomsma, Associate Professor in the section for Machine Learning at DIKU.

The researchers have tested the model on data from numerous proteins found in nature whose structure is known, and they can see that the distance between proteins starts to correspond to the evolutionary development of the proteins, so that proteins that are close to each other evolutionarily are placed close to each other on the map.

"We are now able to put two proteins on the map and draw the curve between them. On the path between the two proteins are possible proteins, which have closely related properties. This is no guarantee, but it provides an opportunity to form a hypothesis about which proteins the biotech industry ought to test when new proteins are designed," says Søren Hauberg, professor in the Cognitive Systems section at DTU Compute.

The unique collaboration between DTU Compute and DIKU was established through a new centre for Machine Learning in Life Sciences (MLLS), which started last year with the support of the Novo Nordisk Foundation. In the centre, researchers in artificial intelligence from both universities are working together to solve fundamental problems in machine learning, driven by important questions within the field of biology.

The developed protein maps are part of a large-scale project that spans from basic research to industrial applications, e.g. in collaboration with Novozymes and Novo Nordisk.
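To make the map-and-paths idea concrete, here is a toy sketch in Python: place items on a 2D map, connect near neighbours into a weighted graph, and read off the shortest path between two proteins. The random coordinates stand in for learned protein embeddings, and everything here is illustrative; the paper's actual model is a deep generative model with a geometry-aware distance, not this simplification:

import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import dijkstra

rng = np.random.default_rng(0)
coords = rng.normal(size=(100, 2))   # stand-ins for learned embeddings, one row per protein

# Connect each protein to its 5 nearest neighbours; edge weight = distance on the map.
graph = kneighbors_graph(coords, n_neighbors=5, mode='distance')

# Shortest path from protein 0 to protein 42 through the map.
dist, predecessors = dijkstra(graph, directed=False, indices=0, return_predecessors=True)
path, node = [], 42
while node != -9999:                 # scipy marks the path start with -9999
    path.append(node)
    node = predecessors[node]
print('distance:', dist[42], 'path:', path[::-1])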

Read the original:

New machine learning maps the potentials of proteins - Nanowerk

Written by admin

May 5th, 2022 at 1:43 am

Posted in Machine Learning

How to create fast and reproducible machine learning models with steppy? – Analytics India Magazine

Posted: at 1:43 am



In machine learning work, building pipelines and extracting the best out of them is crucial nowadays. It is difficult for a single library to provide all the best services, and even when one does provide such high-performing functions, it tends to become heavyweight. Steppy is a library that tries to build an optimal pipeline while remaining lightweight. In this article, we are going to discuss the steppy library and look at its implementation for a simple classification problem. The major points to be discussed in the article are listed below.

Let's start by introducing steppy.

Steppy is an open-source library for performing data science experiments, developed in Python. The main reason behind developing this library is to make the procedure of experimentation fast and reproducible. It is also lightweight and enables us to make high-performing machine learning pipelines. The developers of this library aim to keep data science practitioners focused on the data side instead of on issues regarding software development.

As the points above suggest, this library provides an environment where experiments are fast, reproducible and easy. With these capabilities, it also helps remove the usual difficulties with reproducibility and provides functions simple enough for beginners. The library has two main abstractions with which we can make machine learning pipelines: Step, the node object that wires operations together and handles caching and persistence, and BaseTransformer, the base class that wraps the actual fit-and-transform logic.

Any simple implementation can make the intentions behind the development of this library clear, but before all this we need to install it, which requires Python 3.5 or above. With that in place, we can install the library using the following line of code:
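Steppy is published on PyPI, so installation is a single pip command (the optional steppy-toolkit package, from the same developers, adds a collection of ready-made transformers):

pip install steppy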

After installation, we are ready to use steppy for data science experiments. Let's take a look at a basic implementation.

In this implementation of steppy, we will look at how we can use it for creating steps in a classification task.

In this article, we are going to use the iris dataset provided by sklearn, which can be imported using the following line of code:

from sklearn.datasets import load_iris

Let's split the dataset into train and test sets.
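A minimal sketch of loading and splitting the data with scikit-learn; the 80/20 split and the random seed are arbitrary choices:

from sklearn.model_selection import train_test_split

# Load the iris features and labels, then hold out 20% of the rows for testing.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)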

One thing we need to do when using steppy is put our data into dictionaries so that the steps we create can communicate with each other. We can do this in the following way:
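A sketch of the dictionary layout steppy works with: each top-level key names a data source that steps can declare as input, and the inner 'X'/'y' keys follow the convention used in steppy's examples:

# Steps request data sources by name ('input' here) and receive the inner dict.
data_train = {'input': {'X': X_train, 'y': y_train}}
data_test = {'input': {'X': X_test, 'y': y_test}}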

Now we are ready to create steps.

In this article, we are going to fit a random forest algorithm to classify the iris data, which means that for steppy we define the random forest as a transformer.
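Below is a sketch of such a transformer, following steppy's documented BaseTransformer interface (fit, transform, persist and load); the wrapped estimator and its hyperparameters are illustrative choices:

import joblib
from sklearn.ensemble import RandomForestClassifier
from steppy.base import BaseTransformer

class RandomForestTransformer(BaseTransformer):
    def __init__(self):
        super().__init__()
        self.estimator = RandomForestClassifier(n_estimators=100, random_state=42)

    def fit(self, X, y):
        # steppy unpacks the 'input' dictionary into keyword arguments.
        self.estimator.fit(X, y)
        return self

    def transform(self, X, **kwargs):
        # Return predictions under a named key so later steps can consume them.
        return {'y_pred': self.estimator.predict(X)}

    def persist(self, filepath):
        # Called by steppy to cache the fitted estimator in the experiment directory.
        joblib.dump(self.estimator, filepath)

    def load(self, filepath):
        self.estimator = joblib.load(filepath)
        return self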

Here we have defined the functions that initialize the random forest, fit and transform the data, and save the parameters. Now we can fit the above transformer into a step in the following way:
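A sketch of wiring the transformer into a Step; experiment_directory is where steppy caches outputs and persisted transformers, and the path here is an arbitrary choice:

from steppy.base import Step

step = Step(name='random_forest',
            transformer=RandomForestTransformer(),
            input_data=['input'],
            experiment_directory='./experiments')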

Output:

Let's visualize the step.

step

Output:

Here we can see the steps we have defined in the pipeline. Let's train the pipeline.

We can train our defined pipeline using the following line of code.
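Assuming the names from the sketches above, training is a single call; fit_transform fits the transformer on the 'input' data and returns the dictionary its transform method produces:

preds_train = step.fit_transform(data_train)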

Output:

In the output, we can see the steps that were followed to train the pipeline. Let's evaluate the pipeline with the test data.
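Again assuming the earlier names, evaluation reuses the fitted step on the test dictionary (the cached, fitted transformer is loaded rather than refitted):

preds_test = step.transform(data_test)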

Output:

Here we can see the testing procedure followed by the library. Let's check the accuracy of the model.
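One way to score the predictions returned above, using scikit-learn's accuracy metric; the 'y_pred' key matches the transformer sketch:

from sklearn.metrics import accuracy_score

print(accuracy_score(y_test, preds_test['y_pred']))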

Output:

Here we can see that the results are good; if you use the library yourself, you will also notice how lightweight it is.

In this article, we have discussed the steppy library, an open-source, lightweight and easy way to implement machine learning pipelines. We also looked at the need for such a library and implemented the steps of a pipeline using it.

Read this article:

How to create fast and reproducible machine learning models with steppy? - Analytics India Magazine

Written by admin

May 5th, 2022 at 1:43 am

Posted in Machine Learning

Deep learning is bridging the gap between the digital and the real world – VentureBeat

Posted: at 1:43 am




Algorithms have always been at home in the digital world, where they are trained and developed in perfectly simulated environments. The current wave of deep learning facilitates AI's leap from the digital to the physical world. The applications are endless, from manufacturing to agriculture, but there are still hurdles to overcome.

To traditional AI specialists, deep learning (DL) is old hat. It got its breakthrough in 2012 when Alex Krizhevsky successfully deployed convolutional neural networks, the hallmark of deep learning technology, for the first time with his AlexNet algorithm. It's neural networks that have allowed computers to see, hear and speak. DL is the reason we can talk to our phones and dictate emails to our computers. Yet DL algorithms have always played their part in the safe simulated environment of the digital world. Pioneer AI researchers are working hard to introduce deep learning to our physical, three-dimensional world. Yep, the real world.

Deep learning could do much to improve your business, whether you are a car manufacturer, a chipmaker or a farmer. Although the technology has matured, the leap from the digital to the physical world has proven to be more challenging than many expected. This is why we've been talking about smart refrigerators doing our shopping for years, but no one actually has one yet. When algorithms leave their cozy digital nests and have to fend for themselves in three very real and raw dimensions, there is more than one challenge to be overcome.

The first problem is accuracy. In the digital world, algorithms can get away with accuracies of around 80%. That doesn't quite cut it in the real world. "If a tomato harvesting robot sees only 80% of all tomatoes, the grower will miss 20% of his turnover," says Albert van Breemen, a Dutch AI researcher who has developed DL algorithms for agriculture and horticulture in The Netherlands. His AI solutions include a robot that cuts leaves of cucumber plants, an asparagus harvesting robot and a model that predicts strawberry harvests. His company is also active in the medical manufacturing world, where his team created a model that optimizes the production of medical isotopes. "My customers are used to 99.9% accuracy and they expect AI to do the same," Van Breemen says. "Every percent of accuracy loss is going to cost them money."

To achieve the desired levels, AI models have to be retrained all the time, which requires a flow of constantly updated data. Data collection is both expensive and time-consuming, as all that data has to be annotated by humans. To solve that challenge, Van Breemen has outfitted each of his robots with functionality that lets it know when it is performing either well or badly. When making mistakes, the robots will upload only the specific data where they need to improve. That data is collected automatically across the entire robot fleet. So instead of receiving thousands of images, Van Breemen's team only gets a hundred or so, which are then labeled and tagged and sent back to the robots for retraining. "A few years ago everybody said that data is gold," he says. "Now we see that data is actually a huge haystack hiding a nugget of gold. So the challenge is not just collecting lots of data, but the right kind of data."
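The selective-upload loop Van Breemen describes can be pictured with a small, generic sketch: keep only the samples the current model is least confident about and send just those back for human labeling. The classifier, threshold and variable names below are illustrative, not his actual system:

import numpy as np

def select_for_labeling(model, X_new, threshold=0.8):
    # Winning-class probability per sample; low values mark cases the model struggled with.
    confidence = model.predict_proba(X_new).max(axis=1)
    return np.where(confidence < threshold)[0]

# Only these hard cases would be uploaded, annotated and used for retraining:
# hard_cases = X_new[select_for_labeling(model, X_new)]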

His team has developed software that automates the retraining of new experiences. Their AI models can now train for new environments on their own, effectively cutting the human out of the loop. They've also found a way to automate the annotation process by training an AI model to do much of the annotation work for them. Van Breemen: "It's somewhat paradoxical, because you could argue that a model that can annotate photos is the same model I need for my application. But we train our annotation model with a much smaller data size than our goal model. The annotation model is less accurate and can still make mistakes, but it's good enough to create new data points we can use to automate the annotation process."

The Dutch AI specialist sees a huge potential for deep learning in the manufacturing industry, where AI could be used for applications like defect detection and machine optimization. The global smart manufacturing industry is currently valued at $198 billion and has a predicted growth rate of 11% until 2025. The Brainport region around the city of Eindhoven, where Van Breemen's company is headquartered, is teeming with world-class manufacturing corporates, such as Philips and ASML. (Van Breemen has worked for both companies in the past.)

A second challenge of applying AI in the real world is the fact that physical environments are much more varied and complex than digital ones. A self-driving car that is trained in the US will not automatically work in Europe, with its different traffic rules and signs. Van Breemen faced this challenge when he had to apply his DL model that cuts cucumber plant leaves to a different grower's greenhouse. "If this took place in the digital world, I would just take the same model and train it with the data from the new grower," he says. "But this particular grower operated his greenhouse with LED lighting, which gave all the cucumber images a bluish-purple glow our model didn't recognize. So we had to adapt the model to correct for this real-world deviation. There are all these unexpected things that happen when you take your models out of the digital world and apply them to the real world."

Van Breemen calls this the "sim-to-real gap": the disparity between a predictable and unchanging simulated environment and the unpredictable, ever-changing physical reality. Andrew Ng, the renowned AI researcher from Stanford and cofounder of Google Brain who also seeks to apply deep learning to manufacturing, speaks of the "proof of concept to production gap." It's one of the reasons why 75% of all AI projects in manufacturing fail to launch. According to Ng, paying more attention to cleaning up your data set is one way to solve the problem. The traditional view in AI was to focus on building a good model and let the model deal with noise in the data. In manufacturing, however, a data-centric view may be more useful, since the data set size is often small. Improving the data will then immediately improve the overall accuracy of the model.

Apart from cleaner data, another way to bridge the sim-to-real gap is by using cycleGAN, an image translation technique that connects two different domains, made popular by aging apps like FaceApp. Van Breemen's team researched cycleGAN for its application in manufacturing environments. The team trained a model that optimized the movements of a robotic arm in a simulated environment, where three simulated cameras observed a simulated robotic arm picking up a simulated object. They then developed a DL algorithm based on cycleGAN that translated the images from the real world (three real cameras observing a real robotic arm picking up a real object) to a simulated image, which could then be used to retrain the simulated model. Van Breemen: "A robotic arm has a lot of moving parts. Normally you would have to program all those movements beforehand. But if you give it a clearly described goal, such as picking up an object, it will now optimize the movements in the simulated world first. Through cycleGAN you can then use that optimization in the real world, which saves a lot of man-hours." Each separate factory using the same AI model to operate a robotic arm would have to train its own cycleGAN to tweak the generic model to suit its own specific real-world parameters.

The field of deep learning continues to grow and develop. Its new frontier is called reinforcement learning. This is where algorithms change from mere observers to decision-makers, giving robots instructions on how to work more efficiently. Standard DL algorithms are programmed by software engineers to perform a specific task, like moving a robotic arm to fold a box. A reinforcement algorithm could find out there are more efficient ways to fold boxes outside of their preprogrammed range.

It was reinforcement learning (RL) that made an AI system beat the world's best Go player back in 2016. Now RL is also slowly making its way into manufacturing. The technology isn't mature enough to be deployed just yet, but according to the experts, this will only be a matter of time.

With the help of RL, Albert Van Breemen envisions optimizing an entire greenhouse. This is done by letting the AI system decide how the plants can grow in the most efficient way for the grower to maximize profit. The optimization process takes place in a simulated environment, where thousands of possible growth scenarios are tried out. The simulation plays around with different growth variables like temperature, humidity, lighting and fertilizer, and then chooses the scenario where the plants grow best. The winning scenario is then translated back to the three-dimensional world of a real greenhouse. "The bottleneck is the sim-to-real gap," Van Breemen explains. "But I really expect those problems to be solved in the next five to ten years."

As a trained psychologist I am fascinated by the transition AI is making from the digital to the physical world. It goes to show how complex our three-dimensional world really is and how much neurological and mechanical skill is needed for simple actions like cutting leaves or folding boxes. This transition is making us more aware of our own internal, brain-operated algorithms that help us navigate the world and which have taken millennia to develop. It'll be interesting to see how AI is going to compete with that. And if AI eventually catches up, I'm sure my smart refrigerator will order champagne to celebrate.

Bert-Jan Woertman is the director of Mikrocentrum.


See the article here:

Deep learning is bridging the gap between the digital and the real world - VentureBeat

Written by admin

May 5th, 2022 at 1:43 am

Posted in Machine Learning

IonQ And Hyundai Steer Its Partnership Toward Quantum Machine Learning To Recognize Traffic Signs And 3D Objects – Forbes

Posted: at 1:43 am



IonQ and Hyundai have partnered to apply quantum ML to automotive recognition of traffic signals and other objects.

Automobile manufacturers, suppliers, dealers, and service providers understand that quantum computing will eventually have a major impact on most every aspect of the industry. Daimler, Honda, Hyundai, Ford, BMW, Volkswagen, and Toyota all have some form of quantum evaluation program in place.

Classical computers cannot solve many significant real-world problems because of computational complexity, or because the calculations would take an inordinate amount of time, perhaps hundreds, thousands, or even millions of years. Quantum computing offers the potential to solve these problems in a reasonable amount of time. Although current hardware isn't advanced enough to support the number of qubits needed, the industry is already working to implement error correction solutions in order to build fault-tolerant quantum machines.

The same hardware and error correction constraints limit the full potential of quantum machine learning. In some instances, it has proven to be helpful with current quantum computers; it can also exceed the results of some classical models.

IonQ has a research history with quantum machine learning, so I was looking forward to talking to Peter Chapman, CEO of IonQ, about his partnership with Hyundai Motors.

First, Chapman explained that the partnership's goal is to determine quantum computing's potential to provide improved mobility solutions for autonomous vehicles. For these projects, IonQ will use Aria, its latest trapped-ion quantum computer.

IonQ combined its quantum computing expertise with Hyundai's lithium battery knowledge two months ago. It is developing sophisticated quantum chemistry simulations to study battery charge and discharge cycles, capacity, durability, and safety.

As an evolution of their relationship, the IonQ and Hyundai team will develop quantum machine learning (QML) models to detect and recognize traffic signs and identify 3D objects such as pedestrians and cyclists.

Recognizing traffic signs and identifying 3D objects are critical elements of the Advanced Driver-Assistance Systems (ADAS) used by autonomous vehicles. ADAS depends upon cameras, lidar, radar, and other sensors for inputs to onboard AV computers that interpret and respond to the driving environment. A 2016 study by the National Highway Traffic Safety Administration found that 94% to 96% of accidents are caused by human error. With quantum-enhanced inputs for ADAS, it is likely that human error can be minimized to reduce accidents.

Early in his career, Chapman served as president of a Ray Kurzweil company, where he gained machine learning experience. As a result, he has a deep knowledge of classical machine learning models and the complicated steps needed to identify images. More importantly, he understands why QML will be much faster and more efficient than its classical counterpart.

"QML doesn't need numerous processing steps for traffic road sign recognition like classical approaches to object detection," he said. "Quantum recognizes a sign and interprets its meaning in one single step."

IonQ has already completed the difficult computational part of the road sign recognition project. It has already trained QML models on a standardized 50,000-image database to recognize 43 different classifications of road signs. Next, IonQ will test its QML model under real-world driving conditions using Hyundai's test environment.

Chapman also explained why he believes quantum machine learning and object recognition will prove much more powerful than classical.

"What happens if your car sees something that it has never been trained on before? Let's take an outlier case, such as a person with a triple-wide stroller, walking two dogs on a leash, talking on their iPhone, and carrying a bag of groceries. If the training data had never seen this scenario, how would the car respond? I think quantum machine learning will fill in those gaps and provide a known response for things it hasn't seen before."

IonQ Quantum Machine Learning milestones

The following summarizes various QML projects IonQ has participated in over the past few years.

December 2020

September 2021

November 2021

Analyst notes:

1. In October 2021, IonQ became the first pure-play quantum company listed on the New York Stock Exchange.

2. While quantum computing is still in its infancy, it's too early to select a technology that will lead to error-free quantum systems that use millions of qubits to solve world-changing problems. The technology that ultimately performs at that level may not even be in use today. Scaling to millions of logical qubits is still many years away for all gate-based quantum computers.

3. Qubits are fragile and susceptible to errors caused by interaction with the environment. Error correction is a subject of serious research by almost every quantum company. It will not be possible to scale quantum computers to large numbers of qubits until a workable error correction technique is developed. I expect significant progress in 2022.

4. Technical details of the IonQ-FCAT daily stock return study are available here.

5. Technical details of the IonQ-Zapata hybrid QML research are available here.

6. Access to IonQ quantum systems is available through the cloud on Amazon Braket, Microsoft Azure, and Google Cloud and through direct API access.

Follow Paul Smith-Goodson on Twitter for current information and insights on quantum and AI

Disclosure: My firm, Moor Insights & Strategy, like all research and analyst firms, provides or has provided research, analysis, advising, and/or consulting to many high-tech companies in the industry. I do not hold any equity positions with any companies cited in this column.

Find more from Moor Insights & Strategy on its website, Twitter, LinkedIn, Facebook, Google+, and YouTube.

See original here:

IonQ And Hyundai Steer Its Partnership Toward Quantum Machine Learning To Recognize Traffic Signs And 3D Objects - Forbes

Written by admin

May 5th, 2022 at 1:43 am

Posted in Machine Learning

VelocityEHS Dream Team of Board-Certified Ergonomists and AI & Machine Learning Scientists Headline Virtual Ergonomics Conference on May 3 – Yahoo…

Posted: at 1:43 am




Addressing musculoskeletal disorders and aligning risk reduction programs to ESG and sustainability efforts top the agenda.

CHICAGO, April 29, 2022 (GLOBE NEWSWIRE) -- VelocityEHS, the global leader in cloud-based environmental, health, safety (EHS) and environmental, social, and corporate governance (ESG) software, announced today it will host a new virtual event, The VelocityEHS Ergonomics Conference, on May 3, 2022. During this free, one-day conference, experts will provide thought leadership on ways to focus on the job improvement process while reducing workplace injuries related to musculoskeletal disorders (MSDs). VelocityEHS expert speakers include board-certified professional ergonomists (CPEs), certified safety professionals (CSPs), certified industrial hygienists (CIHs), a PhD in machine learning, and a doctor in physical therapy.

Additional topics include implementation of best practices, tools for calculating a return on investment, machine learning advancements, physical demands analysis, the impacts of ergonomics on corporations, and insights from experts on the front lines at Cummins, Lear Corporation, Southwire Company and W.L. Gore & Associates.

Register now for the whole conference or to attend specific sessions. Registrants will also have on-demand access to the sessions for 30 days following the live event.

"One of the best ways to judge the health, sustainability and vitality of a global enterprise is to look at how seriously they take ergonomics," said John Damgaard, CEO of VelocityEHS. "One thing the very best companies in manufacturing, pharmaceuticals, food & beverage, chemical and so on have in common is the investment they make in ergonomics and designing risk out of their processes. With more CPEs than any other company, and game-changing technology that harnesses AI & machine learning to help non-experts achieve expert results, VelocityEHS is the most trusted ergonomics partner of the Fortune 500 and beyond."


The free, daylong event features a packed and unparalleled lineup of experts and content, with ergonomics insights on a broad range of topics, including:

Advancing Your Ergonomics Progress, 10-10:15 a.m. ET. Presented by Jamie Mallon, CPE, Chief Revenue Officer at VelocityEHS. Mallon will share his 25+ years of experience consulting with Fortune 500 companies to help them advance the impact of their ergonomics improvement process, enabling them to identify and design out risk before injury.

Building an Effective Ergonomics Process, 10:15-11 a.m. ET. Presented by Kristi Hames, CIH, CSP, Senior Solutions Strategist, VelocityEHS, and Christy Lotz, CPE, Director of Ergonomics, VelocityEHS. This workshop covers the key elements of a written ergonomics plan, considerations for establishing your ergonomics team, activities you can perform to enhance stakeholder alignment, and how to select metrics that are aligned with your process maturity and stakeholder objectives.

Process Management and ROI, 11:15 a.m.-12 p.m. ET. Presented by Rick Barker, CPE, CSP, Principal Solutions Strategist, VelocityEHS, and Rachel Zoky, CPE, Senior Consultant, VelocityEHS. This session covers ways to sustain a successful ergonomics process by updating your program as it matures.

Quantifying Overall MSD Risk Level: A Panel Discussion with VelocityEHS Customers, 12:15-12:45 p.m. ET. Presented by Blake McGowan, CPE, Director of Ergonomics Research, VelocityEHS; Kevin Perdeaux, CPE, Director of Global Ergonomics, Lear Corporation; and Ryan Goad, CPE, Environmental, Health & Safety Manager, Southwire Company.

A Road Map to ActiveEHS (Customers Only), 1-1:30 p.m. ET. Presented by Ben Taft, Senior Product Manager, VelocityEHS, this session will explore how ActiveEHS in ergonomics harnesses AI & machine learning, along with deep domain expertise from VelocityEHS experts, to drive a continuous improvement cycle: prediction, intervention and outcomes.

Physical Demands Analysis: The 5 Most Commonly Asked Questions, 1:45-2:15 p.m. ET. Presented by Arielle West, PT, PDT, Solutions Strategist, VelocityEHS, this session will explore how Physical Demands Analysis (PDA), another tool for managing musculoskeletal disorders, is used to match people to job demands.

How Ergonomics and MSD Risk Reduction Efforts Impact Corporate Sustainability Metrics, 2:30-3:15 p.m. ET. Presented by Blake McGowan, CPE, Director of Ergonomics, VelocityEHS, this session will center on the relationship between ergonomics and risk reduction and the necessity for businesses to improve their performance on the ESG and sustainability front.

How Machine Learning is Advancing Ergonomics Effectiveness, 3:30-4:15 p.m. ET. Presented by Dr. Julia Penfield, Ph.D., Principal Machine Learning Scientist, VelocityEHS, and Rick Barker, CPE, CSP, Principal Solutions Strategist, VelocityEHS, this session will provide an understanding of what machine learning is and how it is already being applied to save time and increase effectiveness in three different EHS use cases.

Determining the Value of New Technology: A Panel Discussion with VelocityEHS Customers, 4:30-5 p.m. ET. Presented by Blake McGowan, CPE, Director of Ergonomics Research, VelocityEHS; Sarah Grawe, Ergonomics Manager, Cummins; and Michael Mauro, Divisional Ergonomics and Error Proofing Process Owner, W.L. Gore & Associates. This session features a conversation with Cummins and W.L. Gore & Associates on ways to assess the value of new technology.

VelocityEHS virtual events offer attainable learning opportunities to individuals seeking unique perspectives on the common EHS and ESG issues most affecting companies today. Stay up-to-date with current and upcoming conferences, webinars and other learning opportunities by visiting the Webinars & Recordings page and following VelocityEHS on LinkedIn.

The VelocityEHS Industrial Ergonomics solution, now with Active Causes & Controls, is available via the VelocityEHS Accelerate Platform, which delivers best-in-class performance in the areas of health, safety, risk, ESG and operational excellence. Backed by the largest global software community of EHS experts and thought leaders, the software drives expert processes so that every team member can produce outstanding results. For more information about VelocityEHS and its complete award-winning software solutions, visit http://www.EHS.com.

About VelocityEHS

Relied on by more than 10 million users worldwide, VelocityEHS is the global leader in true SaaS enterprise EHS technology. Through the VelocityEHS Accelerate Platform, the company helps global enterprises drive operational excellence by delivering best-in-class capabilities for health, safety, environmental compliance, training, operational risk, and environmental, social, and corporate governance (ESG). The VelocityEHS team includes unparalleled industry expertise, with more certified experts in health, safety, industrial hygiene, ergonomics, sustainability, the environment, AI, and machine learning than any EHS software provider. Recognized by the EHS industry's top independent analysts as a Leader in the Verdantix 2021 Green Quadrant Analysis, VelocityEHS is committed to industry thought leadership and to accelerating the pace of innovation through its software solutions and vision.

VelocityEHS is headquartered in Chicago, Illinois, with locations in Ann Arbor, Michigan; Tampa, Florida; Oakville, Ontario; London, England; Perth, Western Australia; and Cork, Ireland. For more information, visit http://www.EHS.com.

Media Contact: Brad Harbaugh, 312.881.2855, bharbaugh@ehs.com

Link:

VelocityEHS Dream Team of Board-Certified Ergonomists and AI & Machine Learning Scientists Headline Virtual Ergonomics Conference on May 3 - Yahoo...

Written by admin

May 5th, 2022 at 1:43 am

Posted in Machine Learning

Is Link Machine Learning (LML) Heading the Right Direction Monday? – InvestorsObserver

Posted: at 1:43 am



InvestorsObserver gives Link Machine Learning a strong long-term technical score of 94 from its research. The proprietary scoring system takes into account the token's historical trading patterns from recent months to a year, its support and resistance levels, and where it sits relative to long-term averages. The analysis helps determine whether it is currently a strong buy-and-hold investment opportunity for traders. LML at this time has a better long-term technical analysis score than 94% of cryptos in circulation. The Long-Term Rank will be most relevant to buy-and-hold investors who are looking for strong, steady growth when allocating their assets. Combining a high long-term and short-term technical score will also help portfolio managers discover tokens that have bottomed out.


Read the original post:

Is Link Machine Learning (LML) Heading the Right Direction Monday? - InvestorsObserver

Written by admin

May 5th, 2022 at 1:43 am

Posted in Machine Learning

Best Predictive Analytics Tools and Software 2022 – TechRepublic

Posted: at 1:43 am




Managing data has always been a challenge for businesses. With new sources and higher volumes of data coming in all the time, it's more important than ever to have the right tools in place. Predictive analytics tools and software are the best way to accomplish this task. Data scientists and business leaders must be able to organize data and clean it to get the process started. The next step is analyzing it and sharing the results with colleagues.

Whether you need to upgrade your existing analytics system or establish this capability for the first time, take a look at this roundup of predictive analytics tools. You'll find tools and software both for people who are experts at working with data and for people who are not.
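As a rough illustration of the workflow this roundup assumes (organize and clean data, fit a model, share the results), here is a minimal, generic Python example; the file and column names are placeholders, not tied to any vendor below:

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv('customers.csv').dropna()        # placeholder data plus minimal cleaning
X, y = df.drop(columns='churned'), df['churned']  # 'churned' is a stand-in target column
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))  # shareable summary of results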

Contents:

The Alteryx Analytic Process Automation Platform specializes in no-code and low-code analytics building blocks to design repeatable workflows. The platform is designed for companies that want to provide self-service analytics and data science for all departments. Alteryx also uses augmented machine learning to help citizen data workers build predictive models. The company's cloud platform makes it easy to share workflows online, at the desktop and in on-prem data centers, and offers built-in integrations with modern cloud ecosystem applications. The Analytic Process Automation Platform puts analytics, data science and process automation in one place by combining data quality and preparation, analytics, data science and automated machine learning, and deployment and monitoring into one service. The automation service includes more than 80 natively integrated data sources. Alteryx's Designer service makes it easy to combine data sets, use code-free and code-friendly tools and produce visual workflows and reports.

Alteryx also provides training and educational information about machine learning on its Data Science Portal.

Alteryx offers a free 30-day license for Designer for business users. For students, educators and career changers, the company offers a free one-year renewable Designer license. Contact the company for detailed pricing information.

Microsoft's cloud platform offers business analytics services for the entire machine learning process. This includes preparing data, building and training models, validating and deploying them, and managing and monitoring them. According to Microsoft, the platform can increase the ROI of machine learning products, reduce the steps required to train models by 70% and cut the lines of code required for pipelines by 90%. Azure Machine Learning also offers PyTorch Enterprise, a support program for the open-source deep learning framework that allows service providers to develop and offer tailored enterprise-grade support to customers.
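For a flavor of the platform's Python tooling, here is a minimal experiment-tracking sketch using the v1 azureml-core SDK; it assumes a workspace config file (config.json) is present, and the experiment name and metric value are illustrative:

from azureml.core import Workspace, Experiment

ws = Workspace.from_config()                 # reads config.json for your workspace
exp = Experiment(workspace=ws, name='demo-experiment')

run = exp.start_logging()                    # interactive run for quick experiments
run.log('accuracy', 0.91)                    # illustrative metric value
run.complete()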

Azure ML also offers responsible AI capabilities to make models more transparent and reliable. Features include visualizations, what-if analysis and model explanation graphs. The platform includes algorithms to test models for fairness and an error analysis toolkit to debug errors and improve accuracy.

Microsoft offers 60 compliance certifications as well as beginner and advanced tutorials. There is a free trial for Azure. There is no additional cost to use Azure Machine Learning but users pay for compute as well as other Azure services including Azure Blob Storage, Azure Key Vault, Azure Container Registry and Azure Application Insights. Pricing options can be customized by type of service, region, currency and time frame.

The Lakehouse Platform combines the functionality of a data warehouse and a data lake. Databricks Lakehouse unifies data warehousing and AI use cases on one platform and provides a single data platform across cloud deployments. The warehouse is built on the open-source technology Delta Lake, which forms the structured transactional layer. According to the company, this open-format storage layer delivers reliability, security and performance for both streaming and batch operations and can replace data silos with a single home for structured, semi-structured and unstructured data. Delta Engine is the high-performance query engine, with SQL and performance capabilities including indexing, caching and MPP processing. The platform also allows direct file access and direct native support for Python, data science and AI frameworks. Cloud partners include AWS, Azure and Google Cloud.

The Databricks Data Science Workspace is a notebook environment that can be used by everyone on the team. Existing notebooks can be imported into a company's Databricks environment or the free community edition.
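A minimal sketch of the Delta Lake storage layer mentioned above, written against the open-source PySpark/Delta APIs; the path and toy DataFrame are illustrative, and on Databricks the Spark session comes preconfigured:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()   # available as `spark` on Databricks

df = spark.range(5)                          # toy DataFrame
df.write.format("delta").mode("overwrite").save("/tmp/demo_delta_table")

# Delta tables support both batch reads like this one and streaming reads.
print(spark.read.format("delta").load("/tmp/demo_delta_table").count())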

Databricks has an academy with numerous role-based learning paths, self-paced learning and instructor-led training. The company also offers specialty badges and certifications for data analysts, data engineers and machine learning scientists. Databricks offers a free trial as well as pay-as-you-go and committed-use discount options. Contact the company for pricing information.

DataRobot's AI Cloud Platform supports collaboration for all users, from data science and analytics experts to IT and DevOps teams to executives and information workers. The platform includes data engineering, machine learning, MLOps, decision intelligence and trusted AI services. To support decision intelligence, the service has a no-code app builder, AI apps and Decision Flows, which create rules to automate decisions. The no-code app builder allows users to turn a model into an AI application without any additional coding. This makes it easier for business users to make AI-driven decisions, according to the company. The apps also include detailed prediction explanations to help users explain any decision made by a model. Users can also use the no-code app builder to perform what-if analysis by changing one or more inputs to create new scenarios and then comparing the two results. This transparency allows companies to incorporate feedback from end users and other stakeholders into model revisions.

The company also provides modules for grading existing AI models, setting up policies, rules and controls for production deployments and for generating compliance reports. DataRobot offers options to deploy AI services on any cloud platform, on premise or at the edge.

DataRobot offers a free trial. Contact the company for detailed pricing information.

H2O.ai's automated machine learning capabilities make it easier to use artificial intelligence with high levels of speed, accuracy and transparency, according to the company. The company's platform has options for building models and applications as well as monitoring performance and adapting to changing conditions. The services are designed for various roles within a business, including data scientists, developers, machine learning engineers, DevOps and IT professionals and business users. Services in the platform include data visualization, pre-processing transformers, dataset splitting, outlier detection, feature encoding, per-feature controls and automated validation and cross-validation. Automated machine learning services include:

The platform also includes a low-code application development framework (Python/R) for user interface creation and machine learning integration. Services for machine learning operations include a model repository, model deployment and model monitoring.
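
As an illustration of the automated training flow described above, here is a minimal sketch using H2O's open-source Python client. The file path and target column name are placeholders:

```python
import h2o
from h2o.automl import H2OAutoML

h2o.init()  # starts or connects to a local H2O cluster
train = h2o.import_file("train.csv")  # placeholder dataset

# Train a bounded set of models and rank them on a leaderboard
aml = H2OAutoML(max_models=10, max_runtime_secs=300, seed=1)
aml.train(y="label", training_frame=train)  # x defaults to all other columns

print(aml.leaderboard.head())
preds = aml.leader.predict(train)  # best model by leaderboard rank
```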

The company offers fully managed cloud services and hybrid cloud services. H2O.ai offers a free trial of the platform.

IBM's Statistical Package for the Social Sciences (SPSS) is used for complex statistical data analysis via a library of machine learning algorithms, text analysis and open-source extensibility designed for integration with big data and easy deployment into applications. The package includes a statistics component for ad hoc analysis; a modeler with algorithms and models ready for immediate use; and Modeler in Cloud Pak for Data, a containerized data and AI service for building and running predictive models in the cloud or on premises. Related products include predictive analytics software for students, teachers and researchers, as well as an analytic server to make predictive analytics easier.

Business analysts can use features in the statistics component to:

IBM recently launched an early access program to help beginner and intermediate users get started with statistics. The learning modules feature a simplified UI, a guided walk-through of the software and a data overview dashboard. This service is in beta and is free for 60 days. IBM offers a subscription plan and on-premises licensing editions of SPSS. There are four levels of service: base, standard, professional and premium. Contact IBM for pricing.

Watson Studio, formerly known as Data Science Experience, is IBM's platform for data science. The platform includes a workspace, collaboration features and open-source tools for data science. Watson Studio is a core offering in Cloud Pak for Data as a Service. The service includes tools to analyze and visualize data, to clean and shape it and to build machine learning models. The architecture of Watson Studio is built around a project that includes collaborators, assets and tools. Software provided in the studio includes:

Projects integrate with Watson Knowledge Catalog services and Deployment spaces provided by Watson Machine Learning services.

IBM offers a free trial of IBM Watson Studio on Cloud Pak for Data. Contact IBM for pricing of multiple licensing options for IBM Cloud Pak for Data, pay-as-you-go pricing for IBM Cloud Pak for Data as a Service and for IBM Cloud Pak for Data System.

RapidMiner's data science platform provides an integrated environment for data preparation, machine learning, deep learning, text mining and predictive analytics. It is used for business applications as well as for research, education, training, rapid prototyping and application development. According to the company, the platform is robust enough for data scientists while remaining user-friendly enough for the rest of the company. Features designed for data scientists include:

Features for business users include:

The RapidMiner AI Cloud service is built for all users with an augmented and guided experience, a visual UI with a minimal learning curve and an explanation of the data and the modeling process.

The company has a RapidMiner Academy and training and certification services. There are also certified global partners for additional support as well as integrations to speed data access and deployment of machine learning models. Contact the company for enterprise pricing information.

Tableau is an end-to-end data and analytics platform that includes security, governance and compliance along with APIs. Tableau creates trust and confidence by establishing controls, rules and repeatable processes across integration, access and oversight, according to the company. Individual components of the platform include services for data prep, CRM analytics, server management and embedded analytics. Tableau also promises to help customers build a data culture by promoting these values:

Tableau Blueprint is a methodology for building the capabilities required for a data-driven organization, covering strategy, agility and proficiency. Companies can deploy Tableau via software-as-a-service, Salesforce Hyperforce, public cloud servers and containers, and on-premises servers. For the fully hosted services, Tableau Creator is $70/user/month, Explorer is $42/user/month and Viewer is $15/user/month, all billed annually. For deployments with Tableau Server on-premises or in the public cloud, Creator is $70/user/month, Explorer is $35/user/month and Viewer is $12/user/month, billed annually. Individual users get Tableau Creator at the same $70/user/month annual rate.
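
The APIs mentioned above are scriptable. As a small illustrative sketch (server URL, token values and site name are placeholders), the open-source tableauserverclient library can list a site's workbooks:

```python
import tableauserverclient as TSC

# Authenticate with a personal access token (all values are placeholders)
auth = TSC.PersonalAccessTokenAuth("token-name", "token-value",
                                   site_id="my-site")
server = TSC.Server("https://tableau.example.com", use_server_version=True)

with server.auth.sign_in(auth):
    workbooks, pagination = server.workbooks.get()
    for wb in workbooks:
        print(wb.name, "-", wb.project_name)
```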

Sisense's Fusion Platform integrates customized analytics into applications and products to make analytics intuitive and user-friendly, according to the company. The platform has three components for data analysis: Embed, Infusion Apps and Analytics. Embed is an API-first platform that customers can use to build white-labeled analytics into applications and workflows. Customers can use Infusion Apps to ask questions with natural-language queries and conduct analysis in Slack, Google Slides, Microsoft Teams and Salesforce. Analytics has code-first, low-code and no-code options for analyzing and visualizing large volumes of data, as well as self-service dashboards and apps. The service also has built-in, code-first statistical and predictive analysis libraries and ML technologies. Sisense's Data Connectors integrations cover dozens of other platforms, including Airtable, Amazon Redshift, Salesforce Desk.com and DoubleClick. The company's marketplace includes add-ons, integrations, data pipelines and infusion apps. The Sisense Cloud analytics platform provides scalability and agility to analytics operations and encourages collaboration.

Sisense offers a free trial. Contact the company for pricing information.

Predictive analysis covers statistical techniques for studying data. This includes data mining, predictive modeling and machine learning as methods of making predictions about future events. Predictive analytics has the potential to:

Business leaders can use predictive analytics to increase the chances of success for many initiatives or to test a variety of scenarios quickly.

These tools range from no-code tools to data lakes to machine learning algorithms. Businesses can pick a solution that fits the needs and expertise of each department. Some platforms are complete workspaces and others integrate with existing tools. There are options for cloud deployments and on-prem solutions.

Gartner recommends that companies follow these best practices when selecting predictive analysis tools:

Predictive analytics platforms look at historical data and try to spot patterns. The process relies on data such as customer purchases, weather information or banking habits; statistics such as regression analysis; and assumptions that the future will follow trends from the past. Some types of predictive analytics software use machine learning to revise algorithms based on learnings from the data collected. Data experts and business department leaders can use predictive analytics to test new theories and products before committing to these decisions in the marketplace.
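
A minimal sketch of that idea: fit a regression to (synthetic) historical data and extrapolate the trend forward:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Twelve months of synthetic history: a rising trend plus noise
months = np.arange(1, 13).reshape(-1, 1)
sales = 100 + 5 * months.ravel() + np.random.default_rng(0).normal(0, 3, 12)

# The regression assumes the future follows the past trend
model = LinearRegression().fit(months, sales)
print(model.predict(np.array([[13], [14], [15]])))  # next-quarter forecast
```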

Go here to read the rest:

Best Predictive Analytics Tools and Software 2022 - TechRepublic

Written by admin

May 5th, 2022 at 1:43 am

Posted in Machine Learning

Harnessing the power of machine learning with MLOps – VentureBeat

Posted: May 9, 2021 at 1:51 am


without comments


MLOps, a compound of machine learning and information technology operations, is a newer discipline involving collaboration between data scientists and IT professionals with the aim of productizing machine learning algorithms. The market for such solutions could grow from a nascent $350 million to $4 billion by 2025, according to Cognilytica. But certain nuances can make implementing MLOps a challenge. A survey by NewVantage Partners found that only 15% of leading enterprises have deployed AI capabilities into production at any scale.

Still, the business value of MLOps can't be ignored. A robust data strategy enables enterprises to respond to changing circumstances, in part by frequently building and testing machine learning technologies and releasing them into production. MLOps essentially aims to capture and expand on previous operational practices while extending these practices to manage the unique challenges of machine learning.

MLOps, which was born at the intersection of DevOps, data engineering, and machine learning, is similar to DevOps but differs in execution. MLOps combines different skill sets: those of data scientists specializing in algorithms, mathematics, simulations, and developer tools and those of operations administrators who focus on tasks like upgrades, production deployments, resource and data management, and security.

One goal of MLOps is to roll out new models and algorithms seamlessly, without incurring downtime. Because production data can change due to unexpected events, and because machine learning models respond well only to previously seen scenarios, frequent retraining, or even continuous online training, can make the difference between an optimal and a suboptimal prediction.
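
One simple shape such retraining logic can take, sketched here as an assumed accuracy-floor check rather than any particular vendor's implementation:

```python
from sklearn.metrics import accuracy_score

ACCURACY_FLOOR = 0.90  # assumed service-level threshold

def maybe_retrain(model, X_recent, y_recent, X_train, y_train):
    """Retrain when performance on freshly labeled production data degrades."""
    score = accuracy_score(y_recent, model.predict(X_recent))
    if score < ACCURACY_FLOOR:
        # In practice the training set would be extended with recent data
        model.fit(X_train, y_train)
    return model, score
```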

A typical MLOps software stack might span data sources and the datasets created from them, as well as a repository of AI models tagged with their histories and attributes. Organizations with MLOps operations might also have automated pipelines that manage datasets, models, experiments and software containers (typically based on Kubernetes) to make running these jobs simpler.
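
The article names no specific pipeline tooling, but Kubeflow Pipelines is one popular open-source option for Kubernetes-based workflows. A minimal sketch with placeholder steps:

```python
from kfp import dsl, compiler

@dsl.component(base_image="python:3.11")
def prepare_data(rows: int) -> int:
    # Placeholder step; a real component would read and clean datasets
    return rows

@dsl.component(base_image="python:3.11")
def train_model(rows: int) -> str:
    return f"model trained on {rows} rows"

@dsl.pipeline(name="train-pipeline")
def pipeline(rows: int = 1000):
    data = prepare_data(rows=rows)
    train_model(rows=data.output)

# Compile to a spec that a Kubeflow cluster can execute
compiler.Compiler().compile(pipeline, "pipeline.yaml")
```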

At Nvidia, developers running jobs on internal infrastructure must perform checks to guarantee they're adhering to MLOps best practices. First, everything must run in a container to consolidate the libraries and runtimes necessary for AI apps. Jobs must also launch containers with an approved mechanism, run across multiple servers and report performance data to expose potential bottlenecks.

Another company embracing MLOps, software startup GreenStream, incorporates code dependency management and machine learning model testing into its development workflows. GreenStream automates model training and evaluation and leverages a consistent method of deploying and serving each model while keeping humans in the loop.

Given all the elements involved in MLOps, it isn't surprising that companies adopting it often run into roadblocks. Data scientists have to tweak various features like hyperparameters, parameters and models while managing the codebase for reproducible results. They also need to engage in model validation, in addition to conventional code tests such as unit testing and integration testing. And they have to use a multistep pipeline to retrain and deploy a model, particularly if there's a risk of reduced performance.
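
Model validation can sit alongside conventional code tests. A hedged sketch of what such pytest-style checks might look like, with an assumed accuracy threshold and a simple stability check:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

def test_model_meets_accuracy_floor():
    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    assert model.score(X, y) >= 0.9  # assumed acceptance threshold

def test_predictions_stable_under_tiny_noise():
    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    jittered = X + np.random.default_rng(0).normal(0, 1e-6, X.shape)
    assert (model.predict(X) == model.predict(jittered)).all()
```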

When formulating an MLOps strategy, it helps to begin by deriving machine learning objectives from business growth objectives. These objectives, which typically come in the form of KPIs, can carry specific performance measures, budgets, technical requirements and so on. From there, organizations can work toward identifying input data and the kinds of models to use for that data. This is followed by data preparation and processing, which includes tasks like cleaning data and selecting relevant features (i.e., the variables the model uses to make predictions).

The importance of data selection and prep can't be overstated. In a recent Alation survey, a clear majority of employees pegged data quality issues as the reason their organizations failed to successfully implement AI and machine learning. Eighty-seven percent of professionals said inherent biases in the data used in their AI systems produce discriminatory results that create compliance risks for their organizations.

At this stage, MLOps extends to model training and experimentation. Capabilities like version control can help keep track of data and model qualities as they change throughout testing, as well as helping scale models across distributed architectures. Once machine learning pipelines are built and automated, deployment into production can proceed, followed by the monitoring, optimization, and maintenance of models.
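
The article does not name a tool, but MLflow is one widely used open-source option for the experiment tracking and model versioning described here. A minimal sketch:

```python
import mlflow
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_wine(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("test_accuracy", model.score(X_te, y_te))
    mlflow.sklearn.log_model(model, "model")  # versioned model artifact
```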

A critical part of monitoring models is governance, which here means adding control measures to ensure the models deliver on their responsibilities. A study by Capgemini found that customers and employees will reward organizations that practice ethical AI with greater loyalty, more business and even a willingness to advocate for them, and will punish those that don't. The study suggests companies that don't approach the issue thoughtfully can incur both reputational risk and a direct hit to their bottom line.

In sum, MLOps applies to the entire machine learning lifecycle, including data gathering, model creation, orchestration, deployment, health, diagnostics, governance, and business metrics. If successfully executed, MLOps can bring business interest to the fore of AI projects while allowing data scientists to work with clear direction and measurable benchmarks.

Enterprises that ignore MLOps do so at their own peril. There's a shortage of data scientists skilled at developing apps, and it's hard to keep up with evolving business objectives, a challenge exacerbated by communication gaps. According to a 2019 IDC survey, skills shortages and unrealistic expectations from the C-suite are the top reasons for failure in machine learning projects. In 2018, Element AI estimated that of the 22,000 Ph.D.-educated researchers working globally on AI development and research, only 25% were well-versed enough in the technology to work with teams to take it from research to application.

There's also the fact that models frequently drift away from what they were intended to accomplish. Assessing the risk of these failures as part of MLOps is a key step, not only for regulatory purposes but to protect against business impacts. For example, the cost of an inaccurate video recommendation on YouTube is much lower than that of flagging an innocent person for fraud and blocking their account or declining their loan application.
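
Drift can also be checked statistically. One common, tool-agnostic approach is a two-sample Kolmogorov-Smirnov test comparing a feature's training-time distribution against what the model sees in production:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, 5000)  # distribution at training time
live_feature = rng.normal(0.4, 1.0, 5000)   # shifted production distribution

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:
    print(f"drift detected (KS statistic {stat:.3f}); consider retraining")
```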

The advantage of MLOps is that it puts operations teams at the forefront of best practices within an organization. The bottleneck created by machine learning algorithms eases with a smarter division of expertise and collaboration between operations and data teams, and MLOps tightens that loop.

Here is the original post:

Harnessing the power of machine learning with MLOps - VentureBeat

Written by admin

May 9th, 2021 at 1:51 am

Posted in Machine Learning

