
Archive for the ‘Machine Learning’ Category

How Machine Learning is Beneficial to the Police Departments? – CIOReview

Posted: May 9, 2021 at 1:51 am



It is important to understand the basic nature of machines like computers in order to understand what machine learning is. Computers are devices that follow instructions, and machine learning brings an interesting twist: a computer can learn from experience without being explicitly programmed. Machine learning takes computers to another level, where they can learn intuitively in a manner similar to humans. It has several applications, including virtual assistants, predictive traffic systems, surveillance systems, face recognition, spam and malware filtering, fraud detection, and so on.

The police can utilize machine learning effectively to resolve the challenges that they face. Machine learning helps in predictive policing, where it can prevent crimes and improve public safety. Here are a few ways the police can leverage machine learning to achieve better results.

Pattern recognition

One of the most robust applications of machine learning in policing is in the field of pattern recognition. Crimes can be related: they might be committed by the same person or follow the same modus operandi. The police can gain an advantage if they can spot the patterns in crimes. The data that the police gather from crimes is essentially unstructured. This data must be organized and sifted through to find the patterns.

Machine learning can help achieve this easily. Machine learning tools can compare numerous crimes and generate a similarity score for each pair. The software can then utilize these scores to try and determine if there are common patterns. The New York Police Department is implementing this, and the tool has been utilized to crack cases effectively.
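
To make the idea concrete (this is an illustrative sketch, not the NYPD's actual system), a similarity score between free-text crime reports can be computed with off-the-shelf tools; the sample reports and the 0.3 review threshold below are invented for the example:

```python
# Hypothetical sketch: score similarity between free-text crime reports
# using TF-IDF vectors and cosine similarity (scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reports = [
    "Burglary via rear window, power tools taken, overnight",
    "Rear window forced, tools stolen from garage overnight",
    "Daytime shoplifting of cosmetics from pharmacy",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(reports)
scores = cosine_similarity(vectors)  # pairwise similarity scores in [0, 1]

# Flag report pairs whose score exceeds a review threshold.
for i in range(len(reports)):
    for j in range(i + 1, len(reports)):
        if scores[i, j] > 0.3:
            print(f"Possible pattern: report {i} ~ report {j} ({scores[i, j]:.2f})")
```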

Cybersecurity

Cybersecurity is a vital area in today's world. With the extensive usage of the internet everywhere, cybercriminals are targeting computer systems around the globe. Cybersecurity is critical not only for solving cases but for preventing them proactively. Tools that use machine learning can strengthen cybersecurity and proactively prevent crimes.

Predictive analytics

Another machine learning application that can help the police is predictive analytics. This is a powerful application of machine learning that the police can leverage to achieve substantial results. A tool with predictive analytics features utilizes machine learning to help the police improve public safety. These tools focus on crime trends, and when these trends are spotted, law enforcement can proactively take action.

Continued here:

How Machine Learning is Beneficial to the Police Departments? - CIOReview

Written by admin

May 9th, 2021 at 1:51 am

Posted in Machine Learning

4 Stocks to Watch Amid Rising Adoption of Machine Learning – Zacks.com

Posted: at 1:51 am



Machine learning (ML) has been gaining prominence over the past few years as organizations rapidly implement ML solutions to increase efficiency by delivering more accurate results and providing a better customer experience. Notably, when it comes to automation, ML has become a driving force, as it involves training artificial intelligence (AI) to learn a task and carry it out efficiently, minimizing the need for human intervention.

In any case, ML was already witnessing rapid adoption and the outbreak of the COVID-19 pandemic last year helped in accelerating that demand, as organizations began to rely heavily on automation to carry out their operations.

Markedly, ML is gradually becoming an integral part of various sectors as the trend of digitization picks up. Notably, ML is finding application in the finance sector, where, among other uses, it helps in fraud detection and enables automated trading for investors. Meanwhile, ML is also making its way into healthcare: with the help of algorithms, big volumes of data like healthcare records can be studied to identify patterns related to diseases, thereby allowing practitioners to deliver more efficient and precise treatments.

Moreover, the retail segment has been using ML to optimize the experience of its customers by providing streamlined recommendations. Interestingly, ML also helps retailers gauge the current market situation and determine the prices of their products accordingly, thereby increasing their competitiveness. Meanwhile, virtual voice assistants are also utilizing ML to learn from previous interactions and, in turn, provide a much-improved user experience over time.

In its Top 10 Strategic Technology Trends for 2020 report, Gartner mentioned hyperautomation as one of the top-most technological trends. Notably, it involves the use of advanced technologies like AI and ML to automate processes and augment humans. This means that in tasks where hyperautomation will be implemented, the need for human involvement will gradually reduce as decision-making will increasingly become AI-driven.

Reflective of the positive developments that ML is bringing to various organizations spread across multiple sectors, the ML market looks set to grow. A report by Verified Market Research stated that the ML market is estimated to witness a CAGR of 44.9% from 2020 to 2027. Moreover, businesses are also using Machine Learning as a Service (MLaaS) models to customize their applications with the help of available ML tools. Notably, a report by Orion Market Reports stated that the MLaaS market is estimated to grow at an annual average of 43% from 2021 to 2027, as mentioned in a WhaTech article.
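
For a sense of what a 44.9% CAGR implies, the compounding is simple arithmetic; the 2020 base value below is hypothetical, since the report's dollar figure is not quoted here:

```python
# Illustrative arithmetic only: compounding a 44.9% CAGR over 2020-2027.
base_2020 = 10.0  # hypothetical market size in $B; not from the report
cagr = 0.449
projected_2027 = base_2020 * (1 + cagr) ** 7
print(f"{projected_2027:.1f}")  # roughly 13.4x the base after seven years
```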

Machine learning has been taking the world of technology by storm, allowing computers to learn by studying huge volumes of data and deliver improved results while reducing the need for human intervention. This makes it a good time to look at companies that can make the most of this ongoing trend. Notably, we have selected four such stocks that carry a Zacks Rank #1 (Strong Buy), 2 (Buy) or 3 (Hold). You can see the complete list of today's Zacks #1 Rank stocks here.

Alphabet Inc.'s (GOOGL) Google has been using ML across various applications like YouTube, Gmail, Google Photos and Google Voice Assistant to optimize the user experience. Moreover, Google's Cloud AutoML allows developers to train high-quality models suited to their business needs. The company currently has a Zacks Rank #1. The Zacks Consensus Estimate for its current-year earnings increased 27.3% over the past 60 days. The company's expected earnings growth rate for the current year is nearly 50%.

NVIDIA Corporation (NVDA) offers ML and analytics software libraries to accelerate the ML operations of businesses. The company currently has a Zacks Rank #2. The Zacks Consensus Estimate for its current-year earnings increased 2.2% over the past 60 days. The company's expected earnings growth rate for the current year is 35.6%.

Microsoft Corporation (MSFT) provides its Azure platform for ML, allowing developers to build, train and deploy ML models. The company currently has a Zacks Rank #2. The Zacks Consensus Estimate for its current-year earnings increased 5.8% over the past 60 days. The company's expected earnings growth rate for the current year is 35.4%.

Amazon.com, Inc. (AMZN) is making use of ML models to train its virtual voice assistant Alexa. Moreover, Amazon's AWS platform offers ML services to suit specific business needs. The company currently has a Zacks Rank #3. The Zacks Consensus Estimate for its current-year earnings increased 11.3% over the past 60 days. The company's expected earnings growth rate for the current year is 31.7%.

In addition to the stocks you read about above, would you like to see Zacks' top picks to capitalize on the Internet of Things (IoT)? It is one of the fastest-growing technologies in history, with an estimated 77 billion devices to be connected by 2025. That works out to 127 new devices per second.

Zacks has released a special report to help you capitalize on the Internet of Things' exponential growth. It reveals 4 under-the-radar stocks that could be some of the most profitable holdings in your portfolio in 2021 and beyond.

Click here to download this report FREE >>

Read the rest here:

4 Stocks to Watch Amid Rising Adoption of Machine Learning - Zacks.com

Written by admin

May 9th, 2021 at 1:51 am

Posted in Machine Learning

All The Machine Learning Libraries Open-Sourced By Facebook Ever – Analytics India Magazine

Posted: at 1:51 am



Today, corporations like Google, Facebook and Microsoft dominate the tools and deep learning frameworks that AI researchers use globally. Many of their open-source libraries are now gaining popularity on GitHub, which is helping budding AI developers across the world build flexible and scalable machine learning models.

From conversational chatbots and self-driving cars to weather forecasting and recommendation systems, AI developers are experimenting with various neural network architectures, hyperparameters and other features to fit the hardware constraints of edge platforms. The possibilities are endless. Some of the popular deep learning frameworks include Google's TensorFlow and Facebook's Caffe2, PyTorch, TorchCraftAI and Hydra.

According to Statista, AI business operations global revenue is expected to touch $10.8 billion by 2023, and the natural language processing (NLP) market size globally is expected to reach $43.3 billion by 2025. With the rise of AI adoption across businesses, the need for open-source libraries and architecture will only increase in the coming months.

Advancing in artificial intelligence, Facebook AI Research (FAIR) is at present leading the AI race with the launch of state-of-the-art tools, libraries and frameworks to bolster machine learning and AI applications across the globe.


Here are some of the latest open-source tools, libraries and architecture developed by Facebook:

PyTorch, alongside Caffe2 and Hydra, is among the most widely used deep learning frameworks, helping researchers build flexible machine learning models.

PyTorch provides a Python package for high-level features like tensor computation (in the style of NumPy) with strong GPU acceleration, and TorchScript for an easy transition between eager mode and graph mode. Its latest release provides graph-based execution, distributed training, mobile deployment and more.
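
A minimal sketch of those two features, NumPy-style tensor computation with GPU acceleration and TorchScript's eager-to-graph transition:

```python
# Tensor math runs on GPU when one is available; torch.jit.trace converts
# an eager-mode function into a graph-mode TorchScript function.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(3, 3, device=device)
y = x @ x.t() + 1.0  # NumPy-like tensor math, GPU-accelerated if present

def f(t):
    return torch.relu(t).sum()

scripted = torch.jit.trace(f, torch.randn(3, 3))  # eager -> graph mode
print(scripted(y.cpu()))
```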

Flashlight is an open-source machine learning library that lets users execute AI/ML applications through a C++ API. Since it supports research directly in C++, Flashlight does not need external bindings to perform tasks such as threading, memory mapping, or interoperating with low-level hardware, which makes integrating code fast, direct and straightforward.

Opacus is an open-source high-speed library for training PyTorch models with differential privacy (DP). The library is claimed to be more scalable than existing methods. It supports training with minimal code changes and has little impact on training performance. It also allows the researchers to track the privacy budget expended at any given moment.
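
A hedged sketch of what differentially private training with Opacus can look like; the make_private call below follows recent releases of the library and may differ in older versions:

```python
# Hedged sketch: DP-SGD training with Opacus (API per recent releases).
import torch
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
data = TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))
loader = DataLoader(data, batch_size=16)

privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.1,   # gradient noise that provides the DP guarantee
    max_grad_norm=1.0,      # per-sample gradient clipping
)

criterion = torch.nn.CrossEntropyLoss()
for x, y in loader:  # training loop is otherwise unchanged
    optimizer.zero_grad()
    criterion(model(x), y).backward()
    optimizer.step()

# Track the privacy budget expended so far, as the article describes.
print(privacy_engine.get_epsilon(delta=1e-5))
```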

PyTorch3D is a highly modular and optimised library that offers efficient, reusable components for 3D computer vision research with the PyTorch framework. It is designed to integrate smoothly with deep learning methods for predicting and manipulating 3D data. The library is implemented using PyTorch tensors, can handle mini-batches of heterogeneous data, and can utilise GPUs for acceleration.

Detectron2 is a next-generation library that provides detection and segmentation algorithms. It is a successor to Detectron and maskrcnn-benchmark, and currently supports a range of computer vision research work and applications, including models such as Mask R-CNN, RetinaNet, Faster R-CNN, RPN and TensorMask.

Detectron is an open-source software architecture that implements object detection algorithms like Mask R-CNN. The software has been written in Python and powered by the Caffe2 deep learning framework.

Detectron has enabled various research projects at Facebook, including feature pyramid networks for object detection, Mask R-CNN, non-local neural networks, detecting and recognising human-object interactions, learning to segment everything, data distillation (towards omni-supervised learning), focal loss for dense object detection, DensePose (dense human pose estimation in the wild), and others.

Prophet is an open-source forecasting package released by Facebook's core data science team. It is a procedure for forecasting time series data based on an additive model in which non-linear trends are fit with yearly, weekly and daily seasonality, plus holiday effects. The model works best with time series that have several seasons of historical data, such as weather records, economic indicators and patient health metrics.

The code is available on CRAN and PyPI.
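
A minimal sketch of the PyPI workflow: Prophet expects a dataframe with a datestamp column `ds` and a value column `y`. The toy series below is a stand-in for real data such as weather records:

```python
# Fit an additive seasonal model and forecast 90 days ahead.
import pandas as pd
from prophet import Prophet  # older releases: from fbprophet import Prophet

df = pd.DataFrame({
    "ds": pd.date_range("2019-01-01", periods=730, freq="D"),
    "y": range(730),  # replace with a real series, e.g. daily temperatures
})

m = Prophet(yearly_seasonality=True, weekly_seasonality=True)
m.fit(df)
future = m.make_future_dataframe(periods=90)
forecast = m.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```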

Classy Vision is a new end-to-end PyTorch-based framework for large-scale training of image and video classification models. Unlike other computer vision (CV) libraries, Classy Vision claims to offer flexibility for researchers.

Typically, most CV libraries lead to duplicative efforts and require users to migrate research between frameworks and relearn the minutiae of efficient distributed training and data loading. Facebook's PyTorch-based CV framework, by contrast, is claimed to offer a better solution for training at scale and deploying to production.

BoTorch is a library for Bayesian optimization built on the PyTorch framework. Bayesian optimization is a sequential design strategy for optimizing expensive black-box functions that does not assume any functional form.

BoTorch provides a modular and easily extensible interface for composing Bayesian optimization primitives such as probabilistic models, acquisition functions and optimizers. In addition, it enables seamless integration with deep or convolutional architectures in PyTorch.
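
A hedged sketch of composing those primitives (a GP surrogate, an acquisition function, and its optimizer) on a toy objective; helper names follow recent BoTorch releases and may differ across versions:

```python
# Toy Bayesian optimization step: fit a GP, then maximize an
# Upper Confidence Bound acquisition function over the unit square.
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import UpperConfidenceBound
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = (train_X * 2).sum(dim=-1, keepdim=True)  # toy objective values

gp = SingleTaskGP(train_X, train_Y)                # probabilistic model
fit_gpytorch_mll(ExactMarginalLogLikelihood(gp.likelihood, gp))

ucb = UpperConfidenceBound(gp, beta=0.1)           # acquisition function
bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)
candidate, _ = optimize_acqf(ucb, bounds=bounds, q=1,
                             num_restarts=5, raw_samples=20)
print(candidate)  # suggested next point to evaluate
```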

FastText is an open-source library for efficient text classification and representation learning. It works on standard, generic hardware, and trained models can be reduced in size so that they fit on mobile devices.
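
A minimal sketch of the supervised workflow; the `train.txt` file is assumed to contain one example per line with `__label__` prefixes, and `quantize()` produces the reduced on-device model mentioned above:

```python
# Train, predict, and shrink a fastText text classifier.
import fasttext

# train.txt lines look like: "__label__spam win a free phone now"
model = fasttext.train_supervised("train.txt", epoch=10, wordNgrams=2)
print(model.predict("claim your free prize today"))

model.quantize(input="train.txt", retrain=True)  # smaller on-device model
model.save_model("model.ftz")
```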

TC (Tensor Comprehensions) is a fully-functional C++ library that automatically synthesises high-performance machine learning kernels using Halide, ISL, NVRTC or LLVM. The library can be easily integrated with Caffe2 and PyTorch and has been designed to be highly portable and machine-learning-framework agnostic: it requires only a simple tensor library with memory allocation, offloading and synchronisation capabilities.

Read the original here:

All The Machine Learning Libraries Open-Sourced By Facebook Ever - Analytics India Magazine

Written by admin

May 9th, 2021 at 1:51 am

Posted in Machine Learning

AI Magic Just Removed One of the Biggest Roadblocks in Astrophysics – SciTechDaily

Posted: at 1:51 am



Using neural networks, Flatiron Institute research fellow Yin Li and his colleagues simulated vast, complex universes in a fraction of the time it takes with conventional methods.

Using a bit of machine learning magic, astrophysicists can now simulate vast, complex universes in a thousandth of the time it takes with conventional methods. The new approach will help usher in a new era in high-resolution cosmological simulations, its creators report in a study published online on May 4, 2021, in Proceedings of the National Academy of Sciences.

"At the moment, constraints on computation time usually mean we cannot simulate the universe at both high resolution and large volume," says study lead author Yin Li, an astrophysicist at the Flatiron Institute in New York City. "With our new technique, it's possible to have both efficiently. In the future, these AI-based methods will become the norm for certain applications."

The new method developed by Li and his colleagues feeds a machine learning algorithm with models of a small region of space at both low and high resolutions. The algorithm learns how to upscale the low-res models to match the detail found in the high-res versions. Once trained, the code can take full-scale low-res models and generate super-resolution simulations containing up to 512 times as many particles.

The process is akin to taking a blurry photograph and adding the missing details back in, making it sharp and clear.

This upscaling brings significant time savings. For a region in the universe roughly 500 million light-years across containing 134 million particles, existing methods would require 560 hours to churn out a high-res simulation using a single processing core. With the new approach, the researchers need only 36 minutes.

The results were even more dramatic when more particles were added to the simulation. For a universe 1,000 times as large with 134 billion particles, the researchers' new method took 16 hours on a single graphics processing unit. Existing methods would take so long that they wouldn't even be worth running without dedicated supercomputing resources, Li says.

Li is a joint research fellow at the Flatiron Institute's Center for Computational Astrophysics and the Center for Computational Mathematics. He co-authored the study with Yueying Ni, Rupert Croft and Tiziana Di Matteo of Carnegie Mellon University; Simeon Bird of the University of California, Riverside; and Yu Feng of the University of California, Berkeley.

Cosmological simulations are indispensable for astrophysics. Scientists use the simulations to predict how the universe would look in various scenarios, such as if the dark energy pulling the universe apart varied over time. Telescope observations may then confirm whether the simulations' predictions match reality. Creating testable predictions requires running simulations thousands of times, so faster modeling would be a big boon for the field.

"Reducing the time it takes to run cosmological simulations holds the potential of providing major advances in numerical cosmology and astrophysics," says Di Matteo. "Cosmological simulations follow the history and fate of the universe, all the way to the formation of all galaxies and their black holes."

So far, the new simulations only consider dark matter and the force of gravity. While this may seem like an oversimplification, gravity is by far the universe's dominant force at large scales, and dark matter makes up 85 percent of all the stuff in the cosmos. The particles in the simulation aren't literal dark matter particles but are instead used as trackers to show how bits of dark matter move through the universe.

The team's code used neural networks to predict how gravity would move dark matter around over time. Such networks ingest training data and run calculations using the information. The results are then compared to the expected outcome. With further training, the networks adapt and become more accurate.

The specific approach used by the researchers, called a generative adversarial network, pits two neural networks against each other. One network takes low-resolution simulations of the universe and uses them to generate high-resolution models. The other network tries to tell those simulations apart from ones made by conventional methods. Over time, both neural networks get better and better until, ultimately, the simulation generator wins out and creates fast simulations that look just like the slow conventional ones.
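
That adversarial setup can be sketched in a few lines of PyTorch; the toy networks and random tensors below only illustrate the two-player training loop, not the paper's actual architecture or losses:

```python
# Toy GAN sketch: a generator upscales coarse inputs while a
# discriminator tries to tell its output from high-res ground truth.
import torch
import torch.nn as nn

upscale = nn.Sequential(  # generator: 16x16 density map -> 32x32
    nn.ConvTranspose2d(1, 8, 4, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, 3, padding=1),
)
critic = nn.Sequential(   # discriminator: real vs. generated 32x32
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 32 * 32, 1),
)
g_opt = torch.optim.Adam(upscale.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    low = torch.rand(4, 1, 16, 16)   # stand-in for low-res simulations
    high = torch.rand(4, 1, 32, 32)  # stand-in for high-res simulations
    fake = upscale(low)

    # Discriminator learns to separate real from generated samples.
    d_loss = bce(critic(high), torch.ones(4, 1)) + \
             bce(critic(fake.detach()), torch.zeros(4, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator learns to fool the discriminator.
    g_loss = bce(critic(fake), torch.ones(4, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```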

"We couldn't get it to work for two years," Li says, "and suddenly it started working. We got beautiful results that matched what we expected. We even did some blind tests ourselves, and most of us couldn't tell which one was real and which one was fake."

Despite only being trained using small areas of space, the neural networks accurately replicated the large-scale structures that only appear in enormous simulations.

The simulations don't capture everything, though. Because they focus only on dark matter and gravity, smaller-scale phenomena such as star formation, supernovae and the effects of black holes are left out. The researchers plan to extend their methods to include the forces responsible for such phenomena, and to run their neural networks on the fly alongside conventional simulations to improve accuracy. "We don't know exactly how to do that yet, but we're making progress," Li says.

Reference: AI-assisted superresolution cosmological simulations by Yin Li, Yueying Ni, Rupert A. C. Croft, Tiziana Di Matteo, Simeon Bird and Yu Feng, 4 May 2021, Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.2022038118

The rest is here:

AI Magic Just Removed One of the Biggest Roadblocks in Astrophysics - SciTechDaily

Written by admin

May 9th, 2021 at 1:51 am

Posted in Machine Learning

AI, RPA, and Machine Learning How are they Similar & Different? – Analytics Insight

Posted: at 1:51 am



AI, RPA, and machine learning: you must have heard these words echoing in the tech industry. Be it blogs, websites, videos, or even product descriptions, disruptive technologies have made their presence felt. The fact that we all have AI-powered devices in our homes is a sign of how far the technology has come.

If you are under the impression that AI, robotic process automation, and machine learning have nothing in common, then here's what you need to know: they are all related concepts. Oftentimes, people use these names interchangeably and incorrectly, which causes confusion among businesses that are looking for the latest technological solutions.

Understanding the differences between AI, ML, and RPA tools will help you identify and understand where the best opportunities are for your business to make the right technological investment.

According to IBM, "Robotic process automation (RPA), also known as software robotics, uses automation technologies to mimic back-office tasks of human workers, such as extracting data, filling in forms, moving files, etc. It combines APIs and user interface (UI) interactions to integrate and perform repetitive tasks between enterprise and productivity applications. By deploying scripts which emulate human processes, RPA tools complete autonomous execution of various activities and transactions across unrelated software systems."

In that sense, RPA tools handle highly logical tasks that don't require human understanding or interference. For example, if your work revolves around inputting account numbers on a spreadsheet to run a report with a filter category, you can use RPA to fill the numbers on the sheet. The automation will mimic your actions of setting up the filter and generate the report on its own.

With a clear set of instructions, RPA can perform any such task. But there's one thing to remember: RPA systems don't have the capability to learn as they go. If there is a change in your task (for example, if the filter in the spreadsheet report has changed), you will have to manually input a new set of instructions.
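
The spreadsheet example can be caricatured in code; real RPA suites record UI actions rather than run scripts like this, but the fixed, hand-edited rule is the point:

```python
# Rule-based automation in the spirit of the example above: every step is
# a fixed instruction, and changing the filter means editing the script.
import pandas as pd

FILTER_CATEGORY = "savings"  # hard-coded rule; a new filter = a new script

accounts = pd.DataFrame({
    "account": ["1001", "1002", "1003"],
    "category": ["savings", "checking", "savings"],
    "balance": [2500.0, 430.5, 980.0],
})

report = accounts[accounts["category"] == FILTER_CATEGORY]
report.to_csv("filtered_report.csv", index=False)  # the "generated report"
print(f"{len(report)} rows written for category {FILTER_CATEGORY!r}")
```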

The highest adopters of this technology are banking firms, financial services, insurance, and telecom industries. Federal agencies like NASA have also started using RPA to automate repetitive tasks.

According to Microsoft, "Artificial Intelligence is the ability of a computer system to deal with ambiguity, by making predictions using previously gathered data, and learning from errors in those predictions in order to generate newer, more accurate predictions about how to behave in the future."

In that sense, the major difference between RPA and AI is intelligence. While both technologies perform tasks efficiently, only AI does so with capabilities resembling human intelligence.

Chatbots and virtual assistants are two popular uses of AI in the business world. In the tax industry, AI is making tax forecasting increasingly accurate with its predictive analytics capabilities. AI can also perform thorough data analysis which makes identifying tax deductions and tax credits easier than before.

According to Gartner, "Advanced machine learning algorithms are composed of many technologies (such as deep learning, neural networks, and natural language processing), used in unsupervised and supervised learning, that operate guided by lessons from existing information."

Machine learning is a part of AI, so the two terms cannot be used interchangeably. And that's the difference between RPA and ML: machine learning's intelligence comes from AI, but RPA lacks all intelligence.

To understand better, let us apply these technologies in a property tax scenario. First, you can create an ML model based on a hundred tax bills. The more bills you feed the model, the more accurately it will make predictions for future bills. But if you want to use the same machine learning model to address an assessment notice, the model will be of no use. You would then have to build a new machine learning model that knows how to work with assessment notices. This is where machine learning's intelligence capabilities draw a line. Where ML fails to recognize the similarity of the documents, an AI application would recognize it, thanks to its human-like interpretation skills.
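
A hypothetical sketch of that property tax model; the figures below are invented, and the point is that the fitted model emits a number for any input, meaningful or not:

```python
# Toy model fit on past bills; it knows nothing about documents outside
# its training data, such as assessment notices.
from sklearn.linear_model import LinearRegression

# invented training data: (assessed value in $k, mill rate) -> annual bill
X = [[250, 0.012], [300, 0.012], [410, 0.015], [520, 0.015]]
y = [3000, 3600, 6150, 7800]

model = LinearRegression().fit(X, y)
print(model.predict([[350, 0.013]]))  # a plausible bill prediction

# Feed it numbers pulled from an assessment notice instead and it will
# still emit a value, just a meaningless one: the model has no notion
# of document type.
```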

The healthcare industry uses ML to accurately diagnose and treat patients, retailers use ML to make the right products available at the right stores at the right time, and pharmaceutical companies use machine learning to develop new medications. These are just a few use cases of this technology.

So, are these technologies the same? No, but they can work together. The combination of AI and RPA is called smart process automation, or SPA.

Also known as intelligent process automation, or IPA, this duo facilitates an automated workflow with more advanced capabilities than plain RPA by using machine learning. The RPA part of the system does the tasks while the machine learning part focuses on learning. In short, SPA solutions can learn to perform a specific task with the help of patterns.

The three technologies, AI, RPA, and ML, and the duet, SPA, hold exciting possibilities for the future. But the rewards can be reaped only when companies make the right choices. Now that you have an understanding of the various capabilities of these technologies, adapt and innovate.

More here:

AI, RPA, and Machine Learning How are they Similar & Different? - Analytics Insight

Written by admin

May 9th, 2021 at 1:51 am

Posted in Machine Learning

ARPA and Alibaba-led Group Set to Introduce The IEEE Shared Machine Learning Standard – bitcoinist.com

Posted: at 1:51 am



ARPA, a blockchain-based privacy-preserving computation network, has announced that the Institute of Electrical and Electronics Engineers' (IEEE) P2830 standard has reached the ballot stage of the IEEE Standards Association (SA) Standards Development Process. Alibaba leads the working group in which ARPA is participating, with representation from Shanghai Fudata, Baidu, Lenovo Group, Zhejiang University, Megvii Technology, and the China Electronic Standardization Institute.

With the recent Ledger hack, blockchain privacy has become a hot topic, since transactions can be tied back to wealthy users, creating security and privacy risks for individuals. Moreover, the recent surge in DeFi, coupled with the fact that the space is highly unregulated, has raised serious concerns in the crypto community. As a result, investors are pouring money into blockchain-based security protocols. The payments giant PayPal recently acquired Curv, a cryptocurrency security startup that uses multiparty computation (MPC) technology to secure its network. Meanwhile, Zengo has raised $20 million in funding to further its development plans for its keyless cryptocurrency wallet.

The soaring crypto market has brought in the excessive need for multiparty computation platforms like ARPA and Zero-Knowledge Proof (ZKP) protocols to preserve the privacy and anonymity of users.

MPC technology is based on the principles of Shamir's Secret Sharing. Under this scheme, a blockchain-based network breaks private data into small pieces and then shares them among the participants without revealing the data source. MPC is leveraged by ARPA to secret-share data on its network, thereby preserving the anonymity of its users.
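
A toy sketch of Shamir's (k, n) scheme over a prime field (production MPC stacks are far more involved): the secret is the constant term of a random polynomial, each participant holds one point on it, and any k points reconstruct the secret by Lagrange interpolation at x = 0:

```python
# Minimal Shamir secret sharing: split a secret into n shares, any k of
# which recover it; fewer than k reveal nothing about the secret.
import random

P = 2**127 - 1  # a Mersenne prime used as the field modulus

def split(secret, k, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    total = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * -xm % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, P - 2, P)) % P
    return total

shares = split(secret=424242, k=3, n=5)
print(reconstruct(shares[:3]))                # -> 424242
print(reconstruct(random.sample(shares, 3)))  # any 3 shares work
```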

The IEEE is the largest technical professional organization that promotes high-quality engineering, technology, and computing information. The IEEE Standard Association (SA) is an Operating Unit within IEEE that nurtures, develops, and advances global standards in multiple industries, including IoT, AI, ML, Power and Energy, Consumer Technology, etc.

The IEEE SA P2830 standard defines an architecture for shared machine learning: training a model using encrypted data aggregated from multiple sources, with processing carried out by a trusted third party. The standard is intended for engineers and developers worldwide.

Alibaba initiated the submission of IEEE SA P2830 and was later joined by ARPA and other representatives from academia and industry. Together they formed a working group and submitted a draft of the standard to the association. The IEEE SA develops a new standard through a process consisting of six stages. The draft has now passed three stages, demonstrating that the standard is sufficiently stable, and is at the "Balloting the Standard" step.

In order to pass, at least 75% of the ballots from the balloting group must be returned, and at least 75% of the votes cast must be affirmative. The working group comprising Alibaba, ARPA, and other contributors is now waiting for the result, as ballots usually remain open for 30 to 60 days.

Since its inception in 2018, ARPA has been developing and researching privacy-focused solutions. The platform uses multi-party computation technology to separate data utility from ownership and enable data renting. In 2019, ARPA partnered with MultiVAC to enable developers to furnish mathematical guarantees of the security and privacy of their dApps. A year later, its broader focus on privacy led ARPA to win the 2020 Privacy-preserving Computation Emerging Power award.

In the last few months, ARPA has collaborated with industrial partners and standardization institutions to draft various privacy-preserving computation standards for multiple industries. The submission of the IEEE P2830 standard is part of ARPA's mission of working with global companies and academies to provide frameworks and practical advice to developers and architects. It also acknowledges the project's contribution to building privacy-based frameworks.

More here:

ARPA and Alibaba-led Group Set to Introduce The IEEE Shared Machine Learning Standard - bitcoinist.com

Written by admin

May 9th, 2021 at 1:51 am

Posted in Machine Learning

Digitization in the energy industry – the machine learning revolution – Lexology

Posted: April 24, 2021 at 1:57 am



In researching for this blog, I reached out to Brendan Bennett, a Reinforcement Learning Researcher at the University of Alberta, for his thoughts on how emerging digital technologies may be deployed in the energy industry. Brendan and I discussed how some recent landmark accomplishments in artificial intelligence might soon make their way into the energy industry.

Digital innovation in commercial spheres has largely been a story of improving efficiency and reliability while reducing costs. In the energy sector, these innovations have been a result of oil and gas companies doing what they do best: relying on talented engineers to improve on existing solutions. Improvements have quickly spread across the industry, bringing down costs and making processes more efficient.

I recently co-authored an article on the future of Artificial Intelligence in the Canadian Oil Patch, which discusses a number of examples of current innovations, including AI-powered predictive maintenance, optimized worker safety, and digital twin technology for better visualization of construction projects and formations. Looking forward, network effects, improving sensors, and algorithmic advances will continue to increase the rate of innovation and prevalence of new tech in the energy industry.

The most common example of network effects can likely be found in your pocket or in your hand right now. Because of the network effects of the smartphone, every new smartphone purchase increases the value of everyone else's smartphones by a little bit. Coupled with economies of scale in production, this means that the cost of these devices falls, while the value they provide increases. Some may view this as a virtuous cycle.

This same effect can be seen with sensors deployed in the oil and gas sector. Advances in technology and widespread use are pushing down the cost of sensors. This allows for more sensors to be deployed in a given application, creating a more complete and reliable data set when all measurements are taken together. Algorithms trained on larger, more comprehensive data sets can produce leaps in efficiency that were previously impossible.

DeepMind, an artificial intelligence research laboratory with a research office in Edmonton, recently combined prolific sensors with its own machine learning capabilities to reduce the cooling bill at Google's data centres by up to 40%. Cooling is one of the primary uses of energy in a data centre; the servers running services like Gmail and YouTube generate a massive amount of heat. Given that Google already runs some of the most sophisticated energy management technology in the world at its data centres, an energy savings of almost half is astounding.

The same combination of plentiful sensors and advanced machine learning will soon be applied throughout the energy value chain, and promises to deliver those same astounding results. Accurate sensors providing clear insight into power use relative to a variety of factors will soon allow power grids run by machine learning algorithms to more accurately predict periods of peak demand, and provide the energy to satisfy demand with dramatic efficiency. These systems could also be designed to optimize for multiple variables, providing low cost power while also minimizing CO2 emissions.

More abstractly, AlphaFold, another project from DeepMind, employed deep neural networks to model protein folding, providing a solution to a 50-year-old grand challenge in biology. The protein-folding problem has baffled biologists for decades. Cyrus Levinthal, an eminent biologist, estimated in 1969 that it would take longer than the age of the known universe to enumerate all of the possible configurations of a typical protein by brute-force calculation, an estimated 10^300 possible configurations. AlphaFold's deep neural network can predict the configuration of a protein with stunning accuracy, in less time than standard complex experimental methods.

A similar approach might be applied to the problems of resource extraction and mapping of geological formations. Feeding the neural net with massive amounts of information generated from sensors that are cheaper and more plentiful in the oil and gas industry may lead to improvements in production efficiency. Further, the ability to map and test within the digital playground of these advanced neural nets may help producers avoid undesired consequences to human health and to the environment.

These advanced AI technologies will fundamentally change the way we explore for and develop our natural resources. Organizations like Avatar Innovations, which work with some of the province's leading entrepreneurs to bring innovations into the energy space, will be pivotal in helping Alberta lead the way in the development of these technologies.

Read more:

Digitization in the energy industry - the machine learning revolution - Lexology

Written by admin

April 24th, 2021 at 1:57 am

Posted in Machine Learning

A Guide To Machine Learning: Everything You Need To Know – Analytics Insight

Posted: at 1:57 am



Artificial intelligence and other disruptive technologies are spreading their wings in the current scenario. Technology has become a mandatory element for all kinds of businesses across all industries around the globe. Let us travel back to 1958, when Frank Rosenblatt created the first artificial neural network that could recognize patterns and shapes. From such a primitive stage we have now reached a place where machine learning is an integral part of almost all software and applications.

Machine learning is resonating with everything now, be it automated cars, speech recognition, chatbots, smart cities, and whatnot. The abundance of big data and the significance of data analytics and predictive analytics has made machine learning an imperative technology.

Machine learning, as the name suggests, is a process in which machines learn from and analyze the data fed to them and predict outcomes. There are different types of machine learning: supervised, unsupervised, semi-supervised, etc. Machine learning is the stairway to artificial intelligence: it learns through algorithms run over data and derives answers and correlations from them.

Machine learning is an integral part of automation and digital transformation. In 2016, Google introduced its graph-based machine learning tool, which used the semi-supervised learning method to connect clusters of data based on their similarities. Machine learning technology helps industries identify market trends, potential risks, customer needs, and business insights. Today, business intelligence and automation are the norm, and ML is the foundation for achieving them and enhancing the efficiency of your business.

A term identified by Gartner, hyperautomation is the new tech trend in the world. It enables industries to automate all possible operations and gain intelligent, real-time insights from the data collected. ML, AI, and RPA are some of the important technologies behind the acceleration of hyperautomation. AI's ability to augment human behaviour is aided by machine learning. Machine learning algorithms can automate various tasks once trained. ML models, along with AI, will enhance the capacity of machines and software to automatically improve and respond to changes according to business requirements.

According to Industry Research, the global machine learning market is projected to grow by USD 11.16 billion between 2020 and 2024, progressing at a CAGR of 39% during the forecast period.

This data is enough to indicate the growth and acceptance of ML across the world. Let us understand how different industries are using ML.

Other industries leveraging ML include banking and finance, cybersecurity, manufacturing, media, automobile, and many more.

Executives and C-suite professionals should consider it the norm to have a strategy or goal before putting ML into practice. The true capability of this technology can only be extracted by developing a strategy for its use. Otherwise, the disruptive tech might remain behind closed doors, just automating routine and mundane tasks. ML's capability to innovate should not be chained to automating repetitive tasks.

According to McKinsey, companies should have two types of people, quants and translators, to unleash the power of ML. Translators should be the ones bridging the gap between the algorithms' complex data analysis and the executives, converting it into readable and understandable business insights.

Machine learning is not an unfamiliar technology these days, but it still takes time and patience to leave legacy systems behind and embrace the power of disruptive technologies. Companies should focus on democratizing ML and data analytics for their employees and create a transparent ecosystem that leverages the capabilities of these technologies by demystifying them.

The rest is here:

A Guide To Machine Learning: Everything You Need To Know - Analytics Insight

Written by admin

April 24th, 2021 at 1:57 am

Posted in Machine Learning

Facebook and the Power of Big Data and Greedy Algorithms – insideBIGDATA

Posted: at 1:57 am



Is Facebook evil?

The answer to this simple question is not that simple. The tools that have enabled Facebook to enjoy its position are its access to massive amounts of data and its machine learning algorithms. And it is these two areas that we need to explore for any wrongdoing on Facebook's part.

Facebook, no doubt, is a giant in the online space. Despite their arguments that they are not a monopoly, many think otherwise. The role that Facebook plays in our lives, specifically in our democracy, has been heavily scrutinized and debated over the last few years, with the lawsuits brought by the federal government and dozens of state governments toward the end of 2020 being the latest examples. While many regulators and most regular folks will argue that Facebook exerts unparalleled power over who shares what and how ordinary people get influenced by information and misinformation, many still don't quite understand where the problem really lies. Is it in the fact that Facebook is a monopoly? Is it that Facebook willingly takes ideological sides? Or is it in Facebook's grip on small businesses and its massive user base through data sharing and user tracking? It's all of these and more. Specifically, it's Facebook's access to large data through its connected services and the algorithms that process this data in a very profit-focused way to turn up user engagement and revenue.

Most people understand that there are algorithms that drive systems such as Facebook. But their view of such algorithms is quite simplistic: that is, an algorithm is a set of rules and step-by-step instructions that informs a system how to act or behave. In reality, hardly any critical aspect of today's computational systems, least of all Facebook's, is driven by such algorithms. Instead, they use machine learning, which by one definition means computers writing their own algorithms. Okay, but at least we're controlling the computers, right? Not really.

The whole point of machine learning is that we, the humans, don't have enough time, power, or ability to churn through massive amounts of data to look for relevant patterns and make decisions in real time. Instead, these machine learning algorithms do that for us. But how can we tell if they are doing what we want them to do? This is where the biggest problem comes in. Most of these algorithms optimize their learning based on metrics such as user engagement. More user engagement leads to more usage of the system, which in turn drives up ad revenue and other business metrics. On the user side, higher engagement leads to even more engagement, like an addiction. On the business side, it leads to more and richer data that Facebook can sell to vendors and partners.

Facebook can use their passivity in this process to argue that they are not evil. After all, they don't manually or purposefully discriminate against anyone, and they don't intentionally plant misinformation in users' feeds. But they don't need to. Facebook holds a mirror to our society and amplifies our bad instincts because of how their machine learning-powered algorithms learn and optimize for user engagement outcomes. Unfortunately, since controversy and misinformation tend to attract high user engagement, the algorithms will automatically prioritize such posts because they are designed to maximize engagement.

A user is worth hundreds of dollars to Facebook, depending on how active they are on the platform. A user that is on multiple platforms that Facebook owns is worth a lot more. Facebook can claim that keeping these platforms connected is best for the users and the businesses and that may be the case to some extent, but the one entity that has most to gain by this is Facebook.

There are reasonable alternatives to WhatsApp and Instagram, but none for Facebook. And it is that flagship service and monopoly of Facebook that makes even those other apps a lot more compelling and much harder for their users to leave. Breaking up these three services would create good competition and drive up innovation and value for the users. But it would also make it harder for Facebook to leverage its massive user base for the kind of data they currently collect (and sell) and the machine learning algorithms they could run. There is a reason Facebook has doubled its lobbying spending in the last five years. Facebook is also trying to fight Apple's stand on informing its users about user tracking with an argument that giving users a choice about whether to be tracked will hurt small businesses. Even Facebook's own employees don't buy that argument.

I may be singling out Facebook here, but many of the same arguments can be made against Google and other monopolies. We see the same kind of pattern. It starts out by gaining users, giving them free services, then bringing in ads. Nothing wrong with ads; television and radio have done them for decades. But with the way the digital ad market works, and the way these services train their machine learning algorithms, it's easy for them to go after data at any cost (such as user privacy). More data, more learning, more user engagement, more sales of ads and user data, and the cycle continues. At some point the algorithms take on a life of their own, disconnected from what's good or right for the users. Some of these algorithms' goals may align with the users and businesses, but in the end, it is the job of these algorithms to increase the bottom line for their masters, in this case, Facebook.

To counteract this, we need more than just regulations. We also need education and awareness. Every time we post, click, or like something on these platforms, we are giving a vote. Can we exercise some discipline in this voting process? Can we inform ourselves before we vote? Can we think about a change? In the end, this isn't just about free markets; it's about free will.

About the Author

Dr. Chirag Shah, associate professor in the Information School at the University of Washington.

Sign up for the free insideBIGDATA newsletter.

Join us on Twitter: @InsideBigData1 https://twitter.com/InsideBigData1

Go here to see the original:

Facebook and the Power of Big Data and Greedy Algorithms - insideBIGDATA

Written by admin

April 24th, 2021 at 1:57 am

Posted in Machine Learning

Getting Started With Machine Learning: Definition and Applications – CMSWire

Posted: February 20, 2021 at 7:45 pm




Artificial intelligence (AI) and machine learning (ML) are positioned to disrupt the way we live and work, even the way we interact and think. Machine learning is a core sub-area of AI. It makes computers get into a self-learning mode without explicit programming.

At this point, most organizations are still approaching ML as a technology in the realm of research and exploration. In this first article of a series, we delve deeper into the world of machine learning and its applications. The following articles will focus on building an ML implementation plan. In doing so we not only understand the concepts behind the technology, but also why it can make the difference between keeping up with competition or falling further behind.

Gartner defines machine learning as: "Advanced learning algorithms composed of many technologies (such as deep learning, neural networks and natural language processing), used in unsupervised and supervised learning, that operate guided by lessons from existing information."

Machine learning is the process of teaching computers to develop intuitive knowledge and understanding through the use of repetitive algorithms and patterns. In layman's terms, machine learning is the process of teaching a repetitive activity to a system that needs to develop some innate intelligence. The goal is to feed the system large amounts of data so it learns from each pattern and its variations and can eventually identify the pattern and its variants on its own. The advantage a machine has over the human mind here is its ability to ingest and process large amounts of data. The human brain, although limitless in its capacity to ingest data, may not be able to process it all at once and can only recall a limited set at one time.

There are three key types of machine learning: supervised, unsupervised and reinforcement learning.

Other aspects of machine learning include neural networks and deep learning.

Neural networks have been studied for a long time. These algorithms endeavor to recognize the underlying relationships in data, much the way the human brain operates.

Deep learning is a class of machine learning algorithms that involves multiple layers of neural networks where the output of one network becomes the input to another.
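
That layering is literal in code; a minimal PyTorch sketch:

```python
# Each layer's output feeds the next layer's input; stacking several such
# layers is what makes the network "deep".
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(),   # layer 1's output ...
    nn.Linear(32, 32), nn.ReLU(),   # ... is layer 2's input, and so on
    nn.Linear(32, 2),
)
```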

The key to understanding machine learning is to understand the power of data. These algorithms work by finding patterns in massive amounts of data. This data encompasses a lot of things: numbers, words, images, videos, sound files, etc. Any data or metadata that can be digitally stored can be fed into a machine-learning algorithm.

Related Article: Machine Learning Fragmentation Is Slowing Us Down: There Is a Solution

Machine learning, in conjunction with deep learning, has a wide variety of applications in our homes and businesses today. It is currently used in everyday services such as recommendation systems like those on Netflix and Amazon; voice assistants like Siri and Alexa; and car technology for parking assist and accident prevention. Deep learning is already heavily used in autonomous vehicles and facial recognition systems. As the technology matures and receives widespread acceptance, we expect to see its applicability grow in these areas:

And many more.

Related Article: Why Artificial Intelligence May Not Offer the Business Value You Think

The availability of widespread computing power through the use of cloud technologies, along with an increasing volume of readily available data, has driven a number of advancements in the field of AI and ML. Organizations need to first build an understanding of the technology itself, collaborate on building a vision for using the technology internally, and then build an implementation plan collaboratively between business and IT. In part two of this ML series we will focus on building a vision and implementation plan.

Geetika Tandon is a senior director at Booz Allen Hamilton, a management and technology consulting firm. She was born in Delhi, India, and holds a Bachelor's in architecture from Delhi University, a Master's in architecture from the University of Southern California, and a Master's in computer science from the University of California, Santa Barbara.

The views and opinions expressed in these articles are those of the author and do not necessarily reflect the official policy or position of her employer.

Excerpt from:

Getting Started With Machine Learning: Definition and Applications - CMSWire

Written by admin

February 20th, 2021 at 7:45 pm

Posted in Machine Learning




