
Archive for the ‘Machine Learning’ Category

New York Institute of Finance and Google Cloud launch a Machine Learning for Trading Specialisation on Coursera – HedgeWeek

Posted: January 27, 2020 at 8:47 pm



The New York Institute of Finance (NYIF) and Google Cloud have launched a new Machine Learning for Trading Specialisation available exclusively on the Coursera platform.

The Specialisation helps learners leverage the latest AI and machine learning techniques for financial trading.

Amid the Fourth Industrial Revolution, nearly 80 per cent of financial institutions cite machine learning as a core component of business strategy and 75 per cent of financial services firms report investing significantly in machine learning. The Machine Learning for Trading Specialisation equips professionals with key technical skills increasingly needed in the financial industry today.

Composed of three courses in financial trading, machine learning, and artificial intelligence, the Specialisation features a blend of theoretical and applied learning. Topics include analysing market data sets, building financial models for quantitative and algorithmic trading, and applying machine learning in quantitative finance.

"As we enter an era of unprecedented technological change within our sector, we're proud to offer up-skilling opportunities for hedge fund traders and managers, risk analysts, and other financial professionals to remain competitive through Coursera," says Michael Lee, Managing Director of Corporate Development at NYIF. "The past ten years have demonstrated the staying power of AI tools in the finance world, further proving the importance for both new and seasoned professionals to hone relevant tech skills."

The Specialisation is particularly suited for hedge fund traders, analysts, day traders, those involved in investment management or portfolio management, and anyone interested in constructing effective trading strategies using machine learning. Prerequisites include basic competency with Python, familiarity with pertinent libraries for machine learning, a background in statistics, and foundational knowledge of financial markets.

"Cutting-edge technologies, such as machine and reinforcement learning, have become increasingly commonplace in finance," says Rochana Golani, Director, Google Cloud Learning Services. "We're excited for learners on Coursera to explore the potential of machine learning within trading. Looking beyond traditional finance roles, we're also excited for the Specialisation to support machine learning professionals seeking to apply their craft to quantitative trading strategies."

Excerpt from:

New York Institute of Finance and Google Cloud launch a Machine Learning for Trading Specialisation on Coursera - HedgeWeek

Written by admin

January 27th, 2020 at 8:47 pm

Posted in Machine Learning

Iguazio pulls in $24m from investors, shows off storage-integrated parallelised, real-time AI/machine learning workflows – Blocks and Files

Posted: at 8:47 pm



Workflow-integrated storage supplier Iguazio has received $24m in C-round funding and announced its Data Science Platform. This is deeply integrated into AI and machine learning processes, and accelerates them to real-time speeds through parallel access to multi-protocol views of a single storage silo using data container tech.

The firm said digital payment platform provider Payoneer is using it for proactive fraud prevention with real-time machine learning and predictive analytics.

Yaron Weiss, VP Corporate Security and Global IT Operations (CISO) at Payoneer, said of Iguazio's Data Science Platform: "We've tackled one of our most elusive challenges with real-time predictive models, making fraud attacks almost impossible on Payoneer."

He said Payoneer had built a system which adapts to new threats and enables it to prevent fraud with minimal false positives. The system's predictive machine learning models continuously identify suspicious fraud and money laundering patterns.

Weiss said fraud used to be detected retroactively with offline machine learning models; customers could only block users after the damage had already been done. Now Payoneer can take the same models and serve them in real time against fresh data.

The Iguazio system uses a low latency serverless framework, a real-time multi-model data engine and a Python eco-system running over Kubernetes. Iguazio claims an estimated 87 per cent of data science models which have shown promise in the lab never make it to production because of difficulties in making them operational and able to scale.

It is based on so-called data containers that store normalised data from multiple sources: incoming stream records, files, binary objects, and table items. The data is indexed and encoded by a parallel processing engine, and stored in the most efficient way to reduce data footprint while maximising search and scan performance for each data type.

Data containers are accessed through a V3IO API and can be read as any type regardless of how the data was ingested. Applications can read, update, search, and manipulate data objects, while the data service ensures data consistency, durability, and availability.

Customers can submit SQL or API queries for file metadata, to identify or manipulate specific objects without long and resource-consuming directory traversals, eliminating any need for separate and non-synchronised file-metadata databases.

So-called API engines use offload techniques for common transactions, analytics queries, real-time streaming, time-series, and machine-learning logic. They accept data and metadata queries, distribute them across all CPUs, and leverage data encoding and indexing schemes to eliminate I/O operations. Iguazio claims this provides orders-of-magnitude faster analytics and eliminates network chatter.

The Iguazio software is claimed to be able to accelerate the performance of tools such as Apache Hadoop and Spark by up to 100 times without requiring any software changes.

The Data Science Platform can run on-premises or in the public cloud. The Iguazio website contains much detail about its components and organisation.

Iguazio will use the $24m to fund product innovation and support global expansion into new and existing markets. The round was led by INCapital Ventures, with participation from existing and new investors, including Samsung SDS, Kensington Capital Partners, Plaza Ventures and Silverton Capital Ventures.

See the rest here:

Iguazio pulls in $24m from investors, shows off storage-integrated parallelised, real-time AI/machine learning workflows - Blocks and Files

Written by admin

January 27th, 2020 at 8:47 pm

Posted in Machine Learning

Federated machine learning is coming – here’s the questions we should be asking – Diginomica

Posted: at 8:47 pm



A few years ago, I wondered how edge data would ever be useful given the enormous cost of transmitting all the data to either the centralized data center or some variant of cloud infrastructure. (It is said that 5G will solve that problem).

Consider, for example, applications of vast sensor networks that stream a great deal of data at small intervals. Vehicles on the move are a good example.

There is telemetry from cameras, radar, sonar, GPS and LIDAR, the latter at about 70MB/sec. This could quickly amount to four terabytes per day (per vehicle). How much of this data needs to be retained? Answers I heard a few years ago were along two lines:

My counterarguments at the time were:

Introducing TensorFlow Federated, via the TensorFlow Blog:

This centralized approach can be problematic if the data is sensitive or expensive to centralize. Wouldn't it be better if we could run the data analysis and machine learning right on the devices where that data is generated, and still be able to aggregate together what's been learned?

Since I looked at this a few years ago, the distinction between an edge device and a sensor has more or less disappeared. Sensors can transmit via wifi (though there is an issue of battery life, and if they're remote, that's a problem); the definition of the edge has widened quite a bit.

Decentralized data collection and processing have become more powerful, able to do an impressive amount of computing. A case in point is Intel's Neural Compute Stick 2, a computer vision and deep learning accelerator powered by the Intel Movidius Myriad X VPU that can plug into a Raspberry Pi for less than $70.

But for truly distributed processing, the Apple A13 chipset in the iPhone 11 has a few features that boggle the mind. From "Inside Apple's A13 Bionic system-on-chip": the Neural Engine is a custom block of silicon, separate from the CPU and GPU, focused on accelerating machine learning computations. The CPU has a set of "machine learning accelerators" that perform matrix multiplication operations up to six times faster than the CPU alone. It's not clear how exactly this hardware is accessed, but for tasks like machine learning (ML) that use lots of matrix operations, the CPU is a powerhouse. Note that this matrix multiplication hardware is part of the CPU cores and separate from the Neural Engine hardware.

This raises the question: "Why would a smartphone have neural net and machine learning capabilities, and does that have anything to do with the data transmission problem for the edge?" A few years ago, I thought the idea wasn't feasible, but the capability of distributed devices has accelerated. How far-fetched is this?

Let's roll the clock back thirty years. The finance department of a large diversified organization would prepare, in the fall, a package of spreadsheets for every part of the organization that had budget authority. The sheets would start with low-level detail, official assumptions, and so on, until they all rolled up to a small number of summary sheets that were submitted to headquarters. This was a terrible, cumbersome way of doing things, but it does, in a way, presage the concept of federated learning.

Another idea that vanished is push technology, which shared the same network load as centralizing sensor data, just in the opposite direction. About twenty-five years ago, when everyone had a networked PC on their desk, the PointCast Network used push technology. Still, it did not perform as well as expected, often believed to be because its traffic burdened corporate networks with excessive bandwidth use, and it was banned in many places. If Federated Learning works, those problems have to be addressed.

Though this estimate changes every day, there are 3 billion smartphones in the world and 7 billion connected devices. You can almost hear the buzz in the air of all of that data that is always flying around. The canonical image of ML is that all of that data needs to find a home somewhere so that algorithms can crunch through it to yield insights. There are a few problems with this, especially if the data is coming from personal devices such as smartphones, Fitbits, even smart homes.

Moving highly personal data across the network raises privacy issues. It is also costly to centralize this data at scale. The cost of storage in the cloud is asymptotically approaching zero, but the transmission costs are not. That includes both the local WiFi (or even cellular) from the devices and the long-distance transmission from the local collectors to the central repository. This is all very expensive at this scale.

Suppose large-scale AI training could be done on each device, bringing the algorithm to the data rather than vice versa? It would be possible for each device to contribute to a broader application while not having to send its data over the network. This idea has become respectable enough that it has a name: Federated Learning.

Jumping ahead, there is no controversy on this point: training a network in a way that compromises device performance and user experience, or compressing a model and resorting to lower accuracy, are not acceptable alternatives. In Federated Learning: The Future of Distributed Machine Learning:

To train a machine learning model, traditional machine learning adopts a centralized approach that requires the training data to be aggregated on a single machine or in a datacenter. This is practically what giant AI companies such as Google, Facebook, and Amazon have been doing over the years. This centralized training approach, however, is privacy-intrusive, especially for mobile phone users... To train or obtain a better machine learning model under such a centralized training approach, mobile phone users have to trade their privacy by sending their personal data stored inside phones to the clouds owned by the AI companies.

The federated learning approach decentralizes training across mobile phones dispersed across geography. The presumption is that they collaboratively develop a machine learning model while keeping their personal data on their phones - for example, building a general-purpose recommendation engine for music listeners. While the personal data and personal information are retained on the phone, I am not at all comfortable that the data contained in the result sent to the collector cannot be reverse-engineered - and I haven't heard a convincing argument to the contrary.

Here is how it works. A computing group is, for example, a collection of mobile devices that have opted in to a large-scale AI program. Each device is "pushed" a model, executes it locally, and learns as the model processes the data. There are some alternatives to this. Homogeneous models imply that every device is working with the same schema of data; alternatively, there are heterogeneous models where harmonization of the data happens in the cloud.

Here are some questions in my mind.

Here is the fuzzy part: federated learning sends the results of the learning, as well as some operational detail such as model parameters and corresponding weights, back to the cloud. How does it do that while preserving your privacy and not clogging up your network? The answer is that the results are a fraction of the data, and since the data itself is not more than a few GB, that seems plausible. The results sent to the cloud can be encrypted with, for example, homomorphic encryption (HE). An alternative is to send the data as a tensor, which is not encrypted because it is not understandable by anything but the algorithm. The update is then aggregated with other user updates to improve the shared model. Most importantly, all the training data remains on the user's devices.
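To make that round trip concrete, here is a minimal federated-averaging sketch in plain NumPy: each simulated device trains on data that never leaves it, and only the resulting weights are averaged by the server. The toy linear model, data, and hyperparameters are assumptions for illustration; real systems such as TensorFlow Federated add client sampling, secure aggregation, and compression on top of this basic loop.

```python
# Minimal federated-averaging sketch: each simulated "device" trains a
# linear model locally, and only the model weights (never the raw data)
# are sent back and averaged by the "server". All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_device_data(n=200):
    X = rng.normal(size=(n, 2))                 # stays on the device
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

def local_update(w, X, y, lr=0.05, epochs=5):
    # Plain gradient descent on the device's private data.
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

devices = [make_device_data() for _ in range(10)]
w_global = np.zeros(2)

for _ in range(20):
    # Server pushes the current model; each device trains locally.
    local_weights = [local_update(w_global.copy(), X, y) for X, y in devices]
    # Server aggregates only the weights (FedAvg); raw data never moves.
    w_global = np.mean(local_weights, axis=0)

print("learned:", w_global.round(3), "target:", true_w)
```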

In CDO Review, The Future of AI May Be in Federated Learning:

Federated Learning allows for faster deployment and testing of smarter models, lower latency, and less power consumption, all while ensuring privacy. Also, in addition to providing an update to the shared model, the improved (local) model on your phone can be used immediately, powering experiences personalized by the way you use your phone.

There is a lot more to say about this. The privacy claims are a little hard to believe. When an algorithm is pushed to your phone, it is easy to imagine how this can backfire. Even the tensor representation can create a problem. Indirect reference to real data may be secure, but patterns across an extensive collection can surely emerge.

Originally posted here:

Federated machine learning is coming - here's the questions we should be asking - Diginomica

Written by admin

January 27th, 2020 at 8:47 pm

Posted in Machine Learning

How Machine Learning Will Lead to Better Maps – Popular Mechanics

Posted: at 8:47 pm



Despite Qatar being one of the richest countries in the world, its digital maps are lagging behind. While the country is adding new roads and constantly improving old ones in preparation for the 2022 FIFA World Cup, Qatar isn't a high priority for the companies that actually build out maps, like Google.

"While visiting Qatar, we've had experiences where our Uber driver can't figure out how to get where he's going, because the map is so off," Sam Madden, a professor at MIT's Department of Electrical Engineering and Computer Science, said in a prepared statement. "If navigation apps don't have the right information, for things such as lane merging, this could be frustrating or worse."

Madden's solution? Quit waiting around for Google and feed machine learning models a whole buffet of satellite images. It's faster, cheaper, and way easier to obtain satellite images than it is for a tech company to drive around grabbing street-view photos. The only problem: Roads can be occluded by buildings, trees, or even street signs.

So Madden, along with a team composed of computer scientists from MIT and the Qatar Computing Research Institute, came up with RoadTagger, a new piece of software that can use neural networks to automatically predict what roads look like behind obstructions. It's able to guess how many lanes a given road has and whether it's a highway or residential road.

RoadTagger uses a combination of two kinds of neural nets: a convolutional neural network (CNN), which is mostly used in image processing, and a graph neural network (GNN), which helps to model relationships and is useful with social networks. This system is what the researchers call "end-to-end," meaning it's only fed raw data and there's no human intervention.

First, raw satellite images of the roads in question are input to the convolutional neural network. Then, the graph neural network divides up the roadway into 20-meter sections called "tiles." The CNN pulls out relevant road features from each tile and then shares that data with the other nearby tiles. That way, information about the road is sent to each tile. If one of these is covered up by an obstruction, then, RoadTagger can look to the other tiles to predict what's included in the one that's obfuscated.
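RoadTagger's actual architecture and training setup are described by the MIT and QCRI researchers; the snippet below is only a schematic PyTorch sketch of the idea summarized above, with a small CNN producing per-tile features and a message-passing step that lets each 20-meter tile borrow evidence from its neighbors. The tile imagery, layer sizes, and number of lane classes are invented for illustration.

```python
# Schematic sketch (not the authors' code): per-tile CNN features plus
# message passing along the road graph, so an occluded tile can borrow
# evidence from neighboring tiles when predicting its lane count.
import torch
import torch.nn as nn

class TileCNN(nn.Module):
    def __init__(self, feat_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, feat_dim), nn.ReLU(),
        )

    def forward(self, tiles):            # tiles: (num_tiles, 3, H, W)
        return self.net(tiles)           # (num_tiles, feat_dim)

class RoadGNN(nn.Module):
    def __init__(self, feat_dim=32, num_lane_classes=5, steps=3):
        super().__init__()
        self.update = nn.GRUCell(feat_dim, feat_dim)
        self.head = nn.Linear(feat_dim, num_lane_classes)
        self.steps = steps

    def forward(self, feats, adjacency):
        # adjacency: (num_tiles, num_tiles), 1 where tiles are neighbors
        h = feats
        for _ in range(self.steps):
            msg = adjacency @ h / adjacency.sum(1, keepdim=True).clamp(min=1)
            h = self.update(msg, h)      # fold neighbor info into each tile
        return self.head(h)              # per-tile lane-count logits

tiles = torch.randn(6, 3, 64, 64)        # six consecutive 20 m tiles (fake imagery)
adj = torch.diag(torch.ones(5), 1) + torch.diag(torch.ones(5), -1)
logits = RoadGNN()(TileCNN()(tiles), adj)
print(logits.shape)                      # torch.Size([6, 5])
```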

Parts of the roadway may only have two lanes visible in a given tile. While a human can easily tell that a four-lane road shrouded by trees is merely blocked from view, a computer normally couldn't make such an assumption. RoadTagger creates a more human-like intuition in a machine learning model, the research team says.

"Humans can use information from adjacent tiles to guess the number of lanes in the occluded tiles, but networks can't do that," Madden said. "Our approach tries to mimic the natural behavior of humans ... to make better predictions."

The results are impressive. In testing out RoadTagger on occluded roads in 20 U.S. cities, the model correctly counted the number of lanes 77 percent of the time and inferred the correct road types 93 percent of the time. In the future, the team hopes to include other new features, like the ability to identify parking spots and bike lanes.

Here is the original post:

How Machine Learning Will Lead to Better Maps - Popular Mechanics

Written by admin

January 27th, 2020 at 8:47 pm

Posted in Machine Learning

An Open Source Alternative to AWS SageMaker – Datanami

Posted: at 8:47 pm




There's no shortage of resources and tools for developing machine learning algorithms. But when it comes to putting those algorithms into production for inference, outside of AWS's popular SageMaker, there's not a lot to choose from. Now a startup called Cortex Labs is looking to seize the opportunity with an open source tool designed to take the mystery and hassle out of productionalizing machine learning models.

Infrastructure is almost an afterthought in data science today, according to Cortex Labs co-founder and CEO Omer Spillinger. A ton of energy is going into choosing how to attack problems with data (why, use machine learning, of course!). But when it comes to actually deploying those machine learning models into the real world, it's relatively quiet.

"We realized there are two really different worlds to machine learning engineering," Spillinger says. "There's the theoretical data science side, where people talk about neural networks and hidden layers and back propagation and PyTorch and TensorFlow. And then you have the actual system side of things, which is Kubernetes and Docker and Nvidia and running on GPUs and dealing with S3 and different AWS services."

Both sides of the data science coin are important to building useful systems, Spillinger says, but it's the development side that gets most of the glory. AWS has captured a good chunk of the market with SageMaker, which the company launched in 2017 and which has been adopted by tens of thousands of customers. But aside from just a handful of vendors working in the area, such as Algorithmia, the general model-building public has been forced to go it alone when it comes to inference.

A few years removed from UC Berkeley's computer science program and eager to move on from their tech jobs, Spillinger and his co-founders were itching to build something good. So when it came to deciding what to do, they decided to stick with what they knew, which was working with systems.


"We thought that we could try and tackle everything," he says. "We realized we're probably never going to be that good at the data science side, but we know a good amount about the infrastructure side, so we can help people who actually know how to build models get them into their stack much faster."

Cortex Labs' software begins where the development cycle leaves off. Once a model has been created and trained on the latest data, Cortex Labs steps in to handle the deployment into customers' AWS accounts using its Kubernetes engine (AWS is the only supported cloud at this time; on-prem inference clusters are not supported).

"Our starting point is a trained model," Spillinger says. "You point us at a model, and we basically convert it into a Web API. We handle all the productionalization challenges around it."
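For readers unfamiliar with the pattern, here is a minimal sketch of the generic "trained model in, prediction API out" idea the quote describes. This is not Cortex's actual interface or configuration format, just a plain Flask stand-in with a throwaway scikit-learn model, before any of the scaling, logging, and GPU management discussed below.

```python
# Generic "wrap a trained model in a web API" sketch -- NOT Cortex's API.
from flask import Flask, request, jsonify
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Stand-in for "a model you already trained"; Cortex starts from this point.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]   # e.g. [[5.1, 3.5, 1.4, 0.2]]
    return jsonify(prediction=model.predict(features).tolist())

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```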

That could be shifting inference workloads from CPUs to GPUs in the AWS cloud, or vice versa. It could be automatically spinning up more AWS servers under the hood when calls to the ML inference service are high, and spinning down the servers when that demand starts to drop. On top of its built-in AWS cost-optimization capabilities, the Cortex Labs software logs and monitors all activities, which is a requirement in today's security- and regulatory-conscious climate.

"Cortex Labs is a tool for scaling real-time inference," Spillinger says. "It's all about scaling the infrastructure under the hood."

Cortex Labs delivers a command line interface (CLI) for managing deployments of machine learning models on AWS

"We don't help at all with the data science," Spillinger says. "We expect our audience to be a lot better than us at understanding the algorithms and understanding how to build interesting models and understanding how they affect and impact their products. But we don't expect them to understand Kubernetes or Docker or Nvidia drivers or any of that. That's what we view as our job."

The software works with a range of frameworks, including TensorFlow, PyTorch, scikit-learn, and XGBoost, and the company is open to supporting more. "There's going to be lots of frameworks that data scientists will use, so we try to support as many of them as we can," Spillinger says.

Cortex Labs software knows how to take advantage of EC2 spot instances, and integrates with AWS services like Elastic Kubernetes Service (EKS), Elastic Container Service (ECS), Lambda, and Fargate. The Kubernetes management alone may be worth the price of admission.

"You can think about it as a Kubernetes that's been massaged for the data science use case," Spillinger says. "There's some similarities to Kubernetes in the usage. But it's a much higher level of abstraction because we're able to make a lot of assumptions about the use case."

There's a lack of publicly available tools for productionalizing machine learning models, but that's not to say that they don't exist. The tech giants, in particular, have been building their own platforms for doing just this. Airbnb, for instance, has its BigHead offering, while Uber has talked about its system, called Michelangelo.

"But the rest of the industry doesn't have these machine learning infrastructure teams, so we decided we'd basically try to be that team for everybody else," Spillinger says.

Cortex Labs' software is distributed under an open source license and is available for download from its GitHub page. Making the software open source is critical, Spillinger says, because of the need for standards in this area. There are proprietary offerings in this arena, but they don't have a chance of becoming the standard, whereas Cortex Labs does.

"We think that if it's not open source, it's going to be a lot more difficult for it to become a standard way of doing things," Spillinger says.

Cortex Labs isn't the only company talking about the need for standards in the machine learning lifecycle. Last month, Cloudera announced its intention to push for standards in machine learning operations, or MLOps. Anaconda, which develops a data science platform, also is backing the push for standards.

Eventually, the Oakland, California-based company plans to develop a managed service offering based on its software, Spillinger says. But for now, the company is eager to get the tool into the hands of as many data scientists and machine learning engineers as it can.

Related Items:

Its Time for MLOps Standards, Cloudera Says

Machine Learning Hits a Scaling Bump

Inference Emerges As Next AI Challenge

Read the rest here:

An Open Source Alternative to AWS SageMaker - Datanami

Written by admin

January 27th, 2020 at 8:47 pm

Posted in Machine Learning

Clean data, AI advances, and provider/payer collaboration will be key in 2020 – Healthcare IT News

Posted: at 8:47 pm



In 2020, the importance of clean data, advancements in AI and machine learning, and increased cooperation between providers and payers will rise to the fore among important healthcare and health IT trends, predicts Don Woodlock, vice president of HealthShare at InterSystems.

All of these trends are good news for healthcare provider organizations, which are looking to improve the delivery of care, enhance the patient and provider experiences, achieve optimal outcomes, and trim costs.

The importance of clean data will become clear in 2020, Woodlock said.

"Data is becoming an increasingly strategic asset for healthcare organizations as they work toward a true value-based care model," he explained. "With the power of advanced machine learning models, caregivers can not only prescribe more personalized treatment, but they can even predict and hopefully prevent issues from manifesting."

However, there is no machine learning without clean data, meaning the data needs to be aggregated, normalized and deduplicated, he added.

Don Woodlock, InterSystems

"Data science teams spend a significant part of their day cleaning and sorting data to make it ready for machine learning algorithms, and as a result, the rate of innovation slows considerably as more time is spent on prep than experimentation," he said. "In 2020, healthcare leaders will better see the need for clean data as a strategic asset to help their organization move forward smartly."

This year, AI and machine learning will move from "if and when" to "how and where," Woodlock predicted.

"AI certainly is at the top of the hype cycle, but the use in practice currently is very low in healthcare," he noted. "This is not such a bad thing, as we need to spend time perfecting the technology and finding the areas where it really works. In 2020, I foresee the industry moving toward useful, practical use-cases that work well, demonstrate value, fit into workflows, and are explainable and bias-free."

"Well-developed areas like image recognition and conversational user experiences will find their foothold in healthcare, along with administrative use-cases in billing, scheduling, staffing and population management, where the patient risks are lower," he added.

In 2020, there will be increased collaboration between payers and providers, Woodlock contended.

"The healthcare industry needs to be smarter and more inclusive of all players, from patient to health system to payer, in order to truly achieve a high-value health system," he said.

"Payers and providers will begin to collaborate more closely in order to redesign healthcare as a platform, not as a series of disconnected events," he concluded. "They will begin to align all efforts on a common goal: positive patient and population outcomes. Technology will help accelerate this transformation by enabling seamless and secure data sharing, from the patient to the provider to the payer."

InterSystems will be at booth 3301 at HIMSS20.


Read more:

Clean data, AI advances, and provider/payer collaboration will be key in 2020 - Healthcare IT News

Written by admin

January 27th, 2020 at 8:47 pm

Posted in Machine Learning

Get ready for the emergence of AI-as-a-Service – The Next Web

Posted: at 8:47 pm



SaaS and PaaS have become part of the everyday tech lexicon since emerging as delivery models, shifting how enterprises purchase and implement technology. A new "_ as a service" model is aspiring to become just as widely adopted, based on its potential to drive business outcomes with unmatched efficiency: artificial intelligence as a service (AIaaS).

According to recent research, AI-based software revenue is expected to climb from $9.5 billion in 2018 to $118.6 billion in 2025 as companies seek new insights into their respective businesses that can give them a competitive edge. Organizations recognize that their systems hold virtual treasure troves of data but don't know what to do with it or how to harness it. They do understand, however, that machines can complete a level of analysis in seconds that teams of dedicated researchers couldn't attain even over the course of weeks.

But there is tremendous complexity involved in developing AI and machine learning solutions that meet a business's actual needs. Developing the right algorithms requires data scientists who know what they are looking for, and why, in order to cull useful information and predictions that deliver on the promise of AI. However, it is not feasible or cost-effective for every organization to arm itself with enough domain knowledge and data scientists to build solutions in-house.


AIaaS is gaining momentum precisely because AI-based solutions can be economically used as a service by many companies for many purposes. Those companies that deliver AI-based solutions targeting specific needs understand vertical industries and build sophisticated models to find actionable information with remarkable efficiency. Thanks to the cloud, providers are able to deliver these AI solutions as a service that can be accessed, refined and expanded in ways that were unfathomable in the past.

One of the biggest signals of the AIaaS trend is the recent spike in funding for AI startups. Q2 fundraising numbers show that AI startups collected $7.4 billion: the single highest funding total ever seen in a quarter. The number of deals also grew to the second highest quarter on record.

Perhaps what is most impressive, however, is the percentage increase in funding for AI technologies: 592 percent growth in only four years. As these companies continue to grow and mature, expect to see AIaaS surge, particularly as vertical markets become more comfortable with the AI value proposition.

Organizations that operate within vertical markets are often the last to adopt new technologies, and AI, in particular, fosters a heightened degree of apprehension. Fears of machines overtaking workers' jobs, a loss of control (i.e., how do we know if the findings are right?), and concerns over compliance with industry regulations can slow adoption. Another key factor is where organizations are in their own digitization journey.

For example, McKinsey & Company found that 67 percent of the most digitized companies have embedded AI into standard business processes, compared to 43 percent at all other companies. These digitized companies are also the most likely to integrate machine learning, with 39 percent indicating it is embedded in their processes. Machine learning adoption is only at 16 percent elsewhere.

These numbers will likely balance out once verticals realize the areas in which AI and machine learning technologies can practically influence their business and day-to-day operations. Three key ways are discussed below.

Data that can be most useful within organizations is often difficult to spot. There is simply too much for humans to handle. It becomes overwhelming and thus incapacitating, leaving powerful insights lurking in plain sight. Most companies don't have the tools in their arsenal to leverage data effectively, which is where AIaaS comes into play.

An AIaaS provider with knowledge of a specific vertical understands how to leverage the data to get to those meaningful insights, making data far more manageable for people like claims adjusters, case managers, or financial advisors. In the case of a claims adjuster, for example, they could use an AI-based solution to run a query to predict claim costs or perform text mining on the vast amount of claim notes.

Machine learning technologies, when integrated into systems in ways that match an organization's needs, can reveal progressively insightful information. If we extend the claims adjuster example from above, he could use AIaaS for much more than predictive analysis.

The adjuster might need to determine the right provider to send a claimant to, based not only on traditional provider scores but also on categories that assess for things like fraudulent claims or network optimization that can affect the cost and duration of a claim. With AIaaS, that information is at the adjuster's fingertips in seconds.

In the case of text mining, an adjuster could leverage machine learning to constantly monitor unstructured data, using natural language processing to, for example, conduct sentiment analysis. Machine learning models would be tasked with looking for signals of a claimant's dissatisfaction, an early indicator of potential attorney involvement.

Once flagged, the adjuster could take immediate action, as guided by an AI system, to intervene and prevent the claim from heading off the rails. While these examples are specific to insurance claims, it's not hard to see how AIaaS could be tailored to meet other verticals' needs by applying specific information to solve for a defined need.
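As a rough sketch of that claim-notes workflow (with invented notes and labels, and far simpler NLP than a production system would use), a text classifier trained on previously reviewed notes can score new ones for signs of dissatisfaction so the adjuster knows where to intervene:

```python
# Toy sketch: score claim notes for signs of dissatisfaction so an adjuster
# can intervene early. Notes and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_notes = [
    "claimant thanked adjuster for the quick update",
    "claimant upset about delays, mentioned calling a lawyer",
    "routine status call, no concerns raised",
    "claimant angry that payment is still pending",
]
labels = [0, 1, 0, 1]   # 1 = signs of dissatisfaction in past reviews

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_notes, labels)

new_note = "claimant frustrated, says nobody returns calls"
risk = model.predict_proba([new_note])[0, 1]
print(f"dissatisfaction risk: {risk:.2f}")   # flag for review above a chosen threshold
```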

Data is power, but it takes a tremendous amount of manual processing for a human to use it effectively. By efficiently delivering multi-layer insights, AIaaS gives people the capability to obtain panoramic views in an instant.

Particularly in insurance, adjusters, managers, and executives get access to a panoramic view of one or more claims, the whole claim life cycle, trends, and so on, derived from many data sources, essentially at the click of a button.

AIaaS models will be essential for AI adoption. By delivering analytical behavior persistently learned and refined by a machine, AIaaS significantly improves business processes. Knowledge gleaned from specifically designed algorithms helps companies operate in increasingly efficient ways based on deeply granular insights produced in real time. Thanks to the cloud, these insights are delivered, updated, and expanded upon without resource drain.

AIaaS is how AIs potential will be fulfilled and how industries transform for the better. What was once a pipe dream has arrived. It is time to embrace it.

Published January 24, 2020 11:00 UTC

Read the original:

Get ready for the emergence of AI-as-a-Service - The Next Web

Written by admin

January 27th, 2020 at 8:47 pm

Posted in Machine Learning

Will Artificial Intelligence Be Humankind's Messiah or Overlord, Is It Truly Needed in Our Civilization – Science Times

Posted: at 8:47 pm




Definition of Artificial Intelligence

Contrary to popular ideas of what artificial intelligence is and what it does, the robots of Asimov are not here yet. But AI exists in the everyday tools we use; it exists as apps or anything that employs a simple algorithm to guide its functions. Humans exist comfortably because of our tools, and the massive intelligence of computers is sitting on the edge of quantum-based technology too.

But they are not Terminator-level threats or a virus, multiplied hundreds of times, that hijacks AI - not yet. For human convenience, we see fit to create narrow AI (weak AI) and general AI (AGI, or strong AI) as sub-types made to cater to human preferences. Between the two, weak AI can be good at a single task, like factory robots. Strong AI, though, is very versatile and uses machine learning and algorithms that evolve like an infant into an older child. But children grow and become better than their parents.

Why research AI safety?

For many, AI means a lot and makes life better - or maybe a narrow AI can mix flavored drinks? The weight it has on every one of us is major, and we are on the verge of what may come. Usually, AI is used on the small side, in a utilitarian way. That is not a problem, as long as it is not something that controls everything relevant. It is not farfetched that, when weaponized, it will be devastating, and worse if the safety factor is unknown.

One thing to consider is whether to keep weak AI as the type we use, with humans checking how it is doing. What if strong artificial intelligence is given the helm and gifted with advanced machine learning whose algorithms aren't pattern-based? That sets the stage for self-improvement and abilities surpassing humankind. How far will hyper-intelligent machines go in doing what they see fit, or will ultra-smart artificial intelligence be the overlord, not a servant?

How can AI be dangerous?

Do machines feel the emotions that often guide what humans do, whether good or bad, and do the concepts of hate or love apply to their algorithms or machine learning? If there is indeed a risk of such situations, there are two outcomes crucial to that development. One is an AI whose algorithms, machine learning, and deep learning (the ability to self-evolve) set everything on the track to self-destruction.

In order for artificial intelligence to deliver on such a mission, it will be highly evolved and have no kill switch. To be effective in annihilating the enemy, designers will create hardened AI with the blessing to be self-reliant and protect itself. Narrow AI, by contrast, can be countered easily and hacked easily.

Artificial intelligence can also be gifted with benevolence that far exceeds the capacity of humans. Yet it can turn sideways if the algorithms, machine learning, and deep learning develop their own goal. Once the AI is centered only on that goal, a lack of scruples or of human-like algorithms can weaponize it again. Its evolving deep learning will pursue the goal and view threats as something to be stopped - which is us.

Conclusion

The use of artificial intelligence will benefit our civilization, but humans should never be mere fodder as machines learn more. We need AI, but we should be careful to consider the safety factors in developing it, or we might find ourselves at its heels.

Read: Benefits & Risks of Artificial Intelligence

Read the rest here:

Will Artificial Intelligence Be Humankind's Messiah or Overlord, Is It Truly Needed in Our Civilization - Science Times

Written by admin

January 27th, 2020 at 8:47 pm

Posted in Machine Learning

Are We Overly Infatuated With Deep Learning? – Forbes

Posted: December 31, 2019 at 11:46 pm



Deep Learning

One of the factors often credited for this latest boom in artificial intelligence (AI) investment, research, and related cognitive technologies is the emergence of deep learning neural networks as an evolution of machine learning algorithms, along with the corresponding large volumes of big data and computing power that make deep learning a practical reality. While deep learning has been extremely popular and has shown real ability to solve many machine learning problems, it is just one approach to machine learning (ML); despite proven capability across a wide range of problem areas, it remains one of many practical approaches. Increasingly, we're starting to see news and research showing the limits of deep learning's capabilities, as well as some of the downsides to the deep learning approach. So is people's enthusiasm for AI tied to their enthusiasm for deep learning, and is deep learning really able to deliver on many of its promises?

The Origins of Deep Learning

AI researchers have struggled to understand how the brain learns from the very beginnings of the development of the field of artificial intelligence. It comes as no surprise that, since the brain is primarily a collection of interconnected neurons, AI researchers sought to recreate the way the brain is structured through artificial neurons and connections of those neurons in artificial neural networks. All the way back in 1943, Walter Pitts and Warren McCulloch built the first thresholded logic unit, an attempt to mimic the way biological neurons worked. The Pitts and McCulloch model was just a proof of concept, but Frank Rosenblatt picked up on the idea in 1957 with the development of the Perceptron, which took the concept to its logical extent. While primitive by today's standards, the Perceptron was still capable of remarkable feats - being able to recognize written numbers and letters, and even distinguish male from female faces. That was over 60 years ago!

Rosenblatt was so enthusiastic in 1959 about the Perceptron's promise that he remarked at the time that "the perceptron is the embryo of an electronic computer that [we expect] will be able to walk, talk, see, write, reproduce itself and be conscious of its existence." Sound familiar? However, the enthusiasm didn't last. AI researcher Marvin Minsky noted how sensitive the perceptron was to small changes in the images, and also how easily it could be fooled. Maybe the perceptron wasn't really that smart at all. Minsky and AI researcher peer Seymour Papert basically took apart the whole perceptron idea in their Perceptrons book, and made the claim that perceptrons, and neural networks like them, are fundamentally flawed in their inability to handle certain kinds of problems, notably non-linear functions. That is to say, it was easy to train a neural network like a perceptron to put data into classifications, such as male/female, or types of numbers. For these simple neural networks, you can graph a bunch of data and draw a line and say things on one side of the line are in one category and things on the other side of the line are in a different category, thereby classifying them. But there's a whole bunch of problems where you can't draw lines like this, such as speech recognition or many forms of decision-making. These are nonlinear functions, which Minsky and Papert proved perceptrons incapable of solving.
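The limitation Minsky and Papert identified is easy to reproduce today: the single-layer perceptron below learns the linearly separable AND function but never manages XOR, no matter how long it trains. This is a toy NumPy sketch, not historical code.

```python
# A single perceptron learns the linearly separable AND function but can
# never learn XOR, the classic non-linear counterexample.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
targets = {"AND": np.array([0, 0, 0, 1]), "XOR": np.array([0, 1, 1, 0])}

def train_perceptron(X, y, epochs=50, lr=0.1):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = int(w @ xi + b > 0)
            w += lr * (yi - pred) * xi      # classic perceptron update rule
            b += lr * (yi - pred)
    return lambda data: (data @ w + b > 0).astype(int)

for name, y in targets.items():
    clf = train_perceptron(X, y)
    print(name, "predictions:", clf(X), "targets:", y)   # XOR never matches
```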

During this period, while neural network approaches to ML settled into being an afterthought in AI, other approaches to ML were in the limelight, including knowledge graphs, decision trees, genetic algorithms, similarity models, and other methods. In fact, during this period, IBM's purpose-built Deep Blue AI computer defeated Garry Kasparov in a chess match, the first computer to do so, using a brute-force alpha-beta search algorithm (so-called Good Old-Fashioned AI [GOFAI]) rather than new-fangled deep learning approaches. Yet even this approach to learning didn't go far, as some said that this system wasn't even intelligent at all.

Yet the neural network story doesn't end here. In 1986, AI researcher Geoff Hinton, along with David Rumelhart and Ronald Williams, published a research paper entitled "Learning representations by back-propagating errors." In this paper, Hinton and crew detailed how you can use many hidden layers of neurons to get around the problems faced by perceptrons. With sufficient data and computing power, these layers can be trained to identify specific features in the data sets they classify, and as a group they can learn nonlinear functions, something known as the universal approximation theorem. The approach works by backpropagating errors from higher layers of the network to lower ones ("backprop"), expediting training. Now, if you have enough layers, enough data to train those layers, and sufficient computing power to calculate all the interconnections, you can train a neural network to identify and classify almost anything. Researcher Yann LeCun developed LeNet-5 at AT&T Bell Labs in 1998, recognizing handwritten images on checks using an iteration of this approach known as convolutional neural networks (CNNs), and researchers Yoshua Bengio and Jürgen Schmidhuber further advanced the field.
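The point of those hidden layers is just as easy to demonstrate: one hidden layer trained with backpropagation makes XOR, the function a lone perceptron cannot represent, learnable. Below is a minimal NumPy sketch with an arbitrary architecture and learning rate; from this random initialization it typically, though not always, converges.

```python
# One hidden layer trained with backpropagation is enough to learn XOR.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

hidden = 8
W1, b1 = rng.normal(size=(2, hidden)), np.zeros(hidden)
W2, b2 = rng.normal(size=(hidden, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)              # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)   # backpropagate the squared error
    d_h = d_out @ W2.T * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())               # typically approaches [0, 1, 1, 0]
```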

Yet, just as things go in AI, research halted when these early neural networks couldn't scale. Surprisingly, very little development happened until 2006, when Hinton re-emerged onto the scene with the ideas of unsupervised pre-training and deep belief nets. The idea here is to have a simple two-layer network whose parameters are trained in an unsupervised way, and then stack new layers on top of it, just training that layer's parameters. Repeat for dozens, hundreds, even thousands of layers. Eventually you get a deep network with many layers that can learn and understand something complex. This is what deep learning is all about: using lots of layers of trained neural nets to learn just about anything, at least within certain constraints.

In 2010, Stanford researcher Fei-Fei Li published the release of ImageNet, a large database of millions of labeled images. The images were labeled with a hierarchy of classifications, such as animal or vehicle, down to very granular levels, such as husky or trimaran. This ImageNet database was paired with an annual competition called the Large Scale Visual Recognition Challenge (LSVRC) to see which computer vision system had the lowest number of classification and recognition errors. In 2012, Geoff Hinton, Alex Krizhevsky, and Ilya Sutskever submitted their AlexNet entry, which had almost half the number of errors of all previous winning entries. What made their approach win was that they moved from using ordinary computers with CPUs to specialized graphical processing units (GPUs) that could train much larger models in reasonable amounts of time. They also introduced now-standard deep learning methods such as dropout to reduce a problem called overfitting (when the network is trained too tightly on the example data and can't generalize to broader data), and something called the rectified linear activation unit (ReLU) to speed training. After the success of their competition entry, it seems everyone took notice, and deep learning was off to the races.
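The ingredients that made AlexNet practical (convolutions, ReLU activations, dropout, and GPU execution) are now a few lines in any framework. Below is a toy PyTorch model in the same spirit, far smaller than AlexNet and with invented sizes.

```python
# Toy image classifier with the AlexNet-era ingredients: convolutions,
# ReLU activations, dropout, and optional GPU execution.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(p=0.5),               # the regularizer AlexNet popularized
            nn.Linear(64 * 8 * 8, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

device = "cuda" if torch.cuda.is_available() else "cpu"   # GPUs made this era possible
model = TinyConvNet().to(device)
images = torch.randn(16, 3, 32, 32, device=device)        # fake CIFAR-sized batch
print(model(images).shape)                                 # torch.Size([16, 10])
```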

Deep Learning's Shortcomings

The fuel that keeps the Deep Learning fires roaring is data and compute power. Specifically, large volumes of well-labeled data sets are needed to train Deep Learning networks. The more layers, the better the learning power, but to have layers you need to have data that is already well labeled to train those layers. Since deep neural networks are primarily a bunch of calculations that have to all be done at the same time, you need a lot of raw computing power, and specifically numerical computing power. Imagine you're tuning a million knobs at the same time to find the optimal combination that will make the system learn based on millions of pieces of data that are being fed into the system. This is why neural networks in the 1950s were not possible, but today they are. Today we finally have lots of data and lots of computing power to handle that data.

Deep learning is being applied successfully in a wide range of situations, such as natural language processing, computer vision, machine translation, bioinformatics, gaming, and many other applications where classification, pattern matching, and the use of this automatically tuned deep neural network approach works well. However, these same advantages have a number of disadvantages.

The most notable of these disadvantages is that, since deep learning consists of many layers, each with many interconnected nodes, each configured with different weights and other parameters, there's no way to inspect a deep learning network and understand how any particular decision, clustering, or classification is actually done. It's a black box, which means deep learning networks are inherently unexplainable. As many have written on the topic of Explainable AI (XAI), systems that are used to make decisions of significance need to have explainability to satisfy issues of trust, compliance, verifiability, and understandability. While DARPA and others are working on ways to possibly explain deep learning neural networks, the lack of explainability is a significant drawback for many.

The second disadvantage is that deep learning networks are really great at classification and clustering of information, but not really good at other decision-making or learning scenarios. Not every learning situation is one of classifying something into a category or grouping information together into a cluster. Sometimes you have to deduce what to do based on what you've learned before. Deduction and reasoning are not a forte of deep learning networks.

As mentioned earlier, deep learning is also very data- and resource-hungry. One measure of a neural network's complexity is the number of parameters that need to be learned and tuned. For deep learning neural networks, there can be hundreds of millions of parameters. Training models requires a significant amount of data to adjust these parameters. For example, a speech recognition neural net often requires terabytes of clean, labeled data to train on. The lack of a sufficient, clean, labeled data set would hinder the development of a deep neural net for that problem domain. And even if you have the data, you need to crunch on it to generate the model, which takes a significant amount of time and processing power.
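A quick, hypothetical calculation makes the "millions of knobs" point concrete: counting the weights in even a small stack of dense layers shows how fast the parameters accumulate.

```python
# Counting learnable parameters in a small, hypothetical layer stack.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),
)
total = sum(p.numel() for p in model.parameters())
print(f"{total:,} parameters")   # roughly 37.7 million for just three dense layers
```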

Another challenge of deep learning is that the models produced are very specific to a problem domain. If a model is trained on a certain dataset of cats, then it will only recognize those cats and can't be used to generalize on animals or be used to identify non-cats. While this is not a problem of only deep learning approaches to machine learning, it can be particularly troublesome when factoring in the overfitting problem mentioned above. Deep learning neural nets can be so tightly constrained (fitted) to the training data that, for example, even small perturbations in the images can lead to wildly inaccurate classifications of images. There are well-known examples of turtles being misrecognized as guns, or polar bears being misrecognized as other animals, due to just small changes in the image data. Clearly, if you're using this network in mission-critical situations, those mistakes would be significant.
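That brittleness can be illustrated with the well-known fast gradient sign method: nudge every pixel slightly in the direction that increases the loss and check whether the predicted class changes. The sketch below uses an untrained stand-in model and a random "image", so the flip is not guaranteed here; on a trained classifier the effect is reliable and well documented.

```python
# Fast-gradient-sign sketch: a tiny pixel-level perturbation, invisible to
# a human, can change a network's predicted class. Untrained stand-in model.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()
image = torch.randn(1, 3, 32, 32, requires_grad=True)    # stand-in for a real photo
label = torch.tensor([3])                                 # its assumed true class

loss = F.cross_entropy(model(image), label)
loss.backward()

epsilon = 0.03                                            # a barely visible nudge per pixel
adversarial = image.detach() + epsilon * image.grad.sign()

print("original prediction:", model(image).argmax(1).item())
print("perturbed prediction:", model(adversarial).argmax(1).item())
```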

Machine Learning is not (just) Deep Learning

Enterprises looking at using cognitive technologies in their business need to look at the whole picture. Machine learning is not just one approach, but rather a collection of different approaches of various types that are applicable in different scenarios. Some machine learning algorithms are very simple, using small amounts of data and an understandable logic or deduction path that's very suitable for particular situations, while others are very complex and use lots of data and processing power to handle more complicated situations. The key thing to realize is that deep learning isn't all of machine learning, let alone AI. Even Geoff Hinton, the "Einstein" of deep learning, is starting to rethink core elements of deep learning and its limitations.

The key for organizations is to understand which machine learning methods are most viable for which problem areas, and how to plan, develop, deploy, and manage that machine learning approach in practice. Since AI use in the enterprise is still continuing to gain adoption, especially these more advanced cognitive approaches, the best practices on how to employ cognitive technologies successfully are still maturing.

See the article here:

Are We Overly Infatuated With Deep Learning? - Forbes

Written by admin

December 31st, 2019 at 11:46 pm

Posted in Machine Learning

The impact of ML and AI in security testing – JAXenter

Posted: at 11:46 pm



Artificial Intelligence (AI) has come a long way from just being a dream to becoming an integral part of our lives. From self-driving cars to smart assistants including Alexa, every industry vertical is leveraging the capabilities of AI. The software testing industry is also leveraging AI to enhance security testing efforts while automating human testing efforts.

AI and ML-based security testing efforts are helping test engineers to save a lot of time while ensuring the delivery of robust security solutions for apps and enterprises.

During security testing, it is essential to gather as much information as you can to increase the odds of your success. Hence, it is crucial to analyze the target carefully to gather the maximum amount of information.

Manual efforts to gather such a huge amount of information could eat up a lot of time. Hence, AI is leveraged to automate the stage and deliver flawless results while saving a lot of time and resources. Security experts can use the combination of AI and ML to identify a massive variety of details including the software and hardware component of computers and the network they are deployed on.


Applying machine learning to application scan results can significantly reduce the manual labor involved in identifying whether an issue is exploitable or not. However, findings should always be reviewed by test engineers to decide whether they are accurate.

The key benefit that ML offers is its capability to filter out huge chunks of information during the scanning phase. It helps focus on a smaller block of actionable data, which offers reliable results while significantly reducing scan audit times.

An ML-based audit of security scan results can significantly reduce the time required for security testing services. Machine learning classifiers can be trained on the knowledge and data generated by previous tests to automate the processing of new scan results, which helps enterprises triage static code findings. Organizations can also benefit from the large pool of data collated through the multiple scans running on a regular basis to get more contextual results.
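As an illustration of that triage idea (the findings, verdicts, and model choice below are invented, not drawn from any particular product), a classifier trained on previously reviewed scan results can rank new findings by how likely they are to be genuine, so reviewers start with the riskiest ones:

```python
# Toy triage sketch: learn from previously reviewed scan findings which
# ones tended to be real, then rank new findings for human review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

past_findings = [
    "sql injection in login form parameter username",
    "missing x-frame-options header on static asset",
    "hardcoded credential in configuration file",
    "self-signed certificate on internal test host",
]
was_exploitable = [1, 0, 1, 0]   # verdicts from earlier manual review (invented)

triage = make_pipeline(TfidfVectorizer(), RandomForestClassifier(random_state=0))
triage.fit(past_findings, was_exploitable)

new_findings = [
    "sql injection in search parameter q",
    "verbose server banner disclosure",
]
for finding, score in zip(new_findings, triage.predict_proba(new_findings)[:, 1]):
    print(f"{score:.2f}  {finding}")   # review the highest-scoring findings first
```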

This stage includes controlling multiple network devices to extract data from the target, or leveraging those devices to launch attacks on multiple targets. After scanning for vulnerabilities, test engineers are required to ensure that the system is free of flaws that could be used by attackers to compromise it.

AI-based algorithms can help ensure the protection of network devices by suggesting multiple combinations of strong passwords. Machine learning can be programmed to identify the vulnerability of a system through observation of user data, spotting patterns that allow it to make plausible guesses about the passwords in use.

AI can also be used to access the network on a regular basis to ensure that no security loophole is building up. The algorithm's capabilities should include identification of new admin accounts, new network access channels, encrypted channels and backdoors, among others.


ML-backed security testing services can significantly reduce triage pain, because triage takes a lot of time when organizations rely on manual effort. Manual security testing would require a large workforce just to go through all the scan results, and it would still take a lot of time to develop an efficient triage. Hence, manual security testing is neither feasible nor scalable enough to meet the security needs of enterprises.

In addition, application inventories used to number in the hundreds, but now enterprises are dealing with thousands of apps. With organizations scanning their apps every month, the challenges for security testing teams are only increasing. Test engineers are constantly trying to reduce the odds of potential attacks while enhancing efficiency to keep pace with agile and continuous development environments.

Embedded AI and ML can help security testing teams in delivering greater value through automation of audit processes that are more secure and reliable.

See original here:

The impact of ML and AI in security testing - JAXenter

Written by admin

December 31st, 2019 at 11:46 pm

Posted in Machine Learning

