
Short- and long-term impacts of machine learning on contact centres – Which-50

Posted: January 27, 2020 at 8:47 pm


Which-50 and LogMeIn recently surveyed call centre managers and C-Suite executives with responsibility for the customer, asking them to nominate the technologies they believe will be most transformative.

AI & machine learning was nominated by more than three quarters of respondents, making it the top pick.

We asked Ryan Lester, Senior Director, Customer Experience Technologies at LogMeIn, to describe where the short- and longer-term impacts of AI are most likely to be felt, and also to describe the impact on contact centre agents.

Lester told Which-50 that AI is the broader umbrella and machine learning is the algorithms you build to improve the quality of your prediction.

He said brands should be very thoughtful if they are going to do machine learning themselves and invest in machine learning teams. However, he recommended that companies don't do that.

Rather, he said there are plenty of off-the-shelf solutions that are purpose-built for contact centres or for conversion metrics.

He said, "You can buy a business application versus buying, let's say, a machine learning tool or platform."

Lester said that in the immediate term, what companies can do to avoid some of the challenges around a bad investment is to use AI as their first-round listening mechanism. Brands can leverage a solution built for the contact centre, and it will listen to these customer conversations over phone calls.

Then LogMeIn can see certain intents, Lester said. "So I'll say, here are intents I'm seeing. You can also take large databases. If you have chat records from the last year, you can stick those into AI tools that will start to help you identify intents."

"You can take historical data and use it as a place to say, well, we should go investigate further here, and then start building more purposeful applications around those workflows."

He said companies should build around their existing workflows. They should focus on those workflows today before they invest heavily in either a technology spend or research spend or a headcount spend.

The longer-term impact of machine learning is moving away from inbound response.

Lester said when a customer is contacting a company about a specific problem, the company should operationalise it. That means making it more efficient, whether by making it self-service or by reducing delivery costs. Companies want to align the right resource to the right problem.

"Where there's an opportunity longer-term is to think about more of the entire customer lifecycle," he explained.

Lester said AI will help to discover what types of customers brands should be engaging with through leading indicators.

"We should start being more proactive about engagement for these types of customers with these types of attributes. If we're seeing retention challenges on particular types of customers, we should be offering up those types of offerings to those customers."

He believes many of the conversations are still really about inbound customer service, when in the longer term there's going to be a much bigger opportunity around the entire customer lifecycle.

That means saying, "For these types of customers we acquired this way, here's how we're upselling them, here's how we're better retaining them," and looking much more at the lifecycle and how AI is helping across that entire lifecycle.

Athina Mallis is the editor of the Which-50 Digital Intelligence Unit, of which LogMeIn is a corporate member. Members provide their insights and expertise for the benefit of the Which-50 community. Membership fees apply.



Written by admin |

January 27th, 2020 at 8:47 pm

Posted in Machine Learning

New York Institute of Finance and Google Cloud launch a Machine Learning for Trading Specialisation on Coursera – HedgeWeek



The New York Institute of Finance (NYIF) and Google Cloud have launched a new Machine Learning for Trading Specialisation available exclusively on the Coursera platform.

The Specialisation helps learners leverage the latest AI and machine learning techniques for financial trading.

Amid the Fourth Industrial Revolution, nearly 80 per cent of financial institutions cite machine learning as a core component of business strategy and 75 per cent of financial services firms report investing significantly in machine learning. The Machine Learning for Trading Specialisation equips professionals with key technical skills increasingly needed in the financial industry today.

Composed of three courses in financial trading, machine learning, and artificial intelligence, the Specialisation features a blend of theoretical and applied learning. Topics include analysing market data sets, building financial models for quantitative and algorithmic trading, and applying machine learning in quantitative finance.

"As we enter an era of unprecedented technological change within our sector, we're proud to offer up-skilling opportunities for hedge fund traders and managers, risk analysts, and other financial professionals to remain competitive through Coursera," says Michael Lee, Managing Director of Corporate Development at NYIF. "The past ten years have demonstrated the staying power of AI tools in the finance world, further proving the importance for both new and seasoned professionals to hone relevant tech skills."

The Specialisation is particularly suited for hedge fund traders, analysts, day traders, those involved in investment management or portfolio management, and anyone interested in constructing effective trading strategies using machine learning. Prerequisites include basic competency with Python, familiarity with pertinent libraries for machine learning, a background in statistics, and foundational knowledge of financial markets.

"Cutting-edge technologies, such as machine and reinforcement learning, have become increasingly commonplace in finance," says Rochana Golani, Director, Google Cloud Learning Services. "We're excited for learners on Coursera to explore the potential of machine learning within trading. Looking beyond traditional finance roles, we're also excited for the Specialisation to support machine learning professionals seeking to apply their craft to quantitative trading strategies."



Iguazio pulls in $24m from investors, shows off storage-integrated parallelised, real-time AI/machine learning workflows – Blocks and Files



Workflow-integrated storage supplier Iguazio has received $24m in C-round funding and announced its Data Science Platform. This is deeply integrated into AI and machine learning processes, and accelerates them to real-time speeds through parallel access to multi-protocol views of a single storage silo using data container tech.

The firm said digital payment platform provider Payoneer is using it for proactive fraud prevention with real-time machine learning and predictive analytics.

Yaron Weiss, VP Corporate Security and Global IT Operations (CISO) at Payoneer, said of Iguazio's Data Science Platform: "We've tackled one of our most elusive challenges with real-time predictive models, making fraud attacks almost impossible on Payoneer."

He said Payoneer had built a system which adapts to new threats and enables it to prevent fraud with minimal false positives. The system's predictive machine learning models identify suspicious fraud and money laundering patterns continuously.

Previously, Weiss said, fraud was detected retroactively with offline machine learning models; customers could only block users after damage had already been done. Now Payoneer can take the same models and serve them in real time against fresh data.

The Iguazio system uses a low-latency serverless framework, a real-time multi-model data engine and a Python ecosystem running over Kubernetes. Iguazio claims an estimated 87 per cent of data science models which have shown promise in the lab never make it to production because of difficulties in making them operational and able to scale.

It is based on so-called data containers that store normalised data from multiple sources: incoming stream records, files, binary objects, and table items. The data is indexed and encoded by a parallel processing engine. It's stored in the most efficient way to reduce data footprint while maximising search and scan performance for each data type.

Data containers are accessed through a V3IO API, and data can be read as any type regardless of how it was ingested. Applications can read, update, search, and manipulate data objects, while the data service ensures data consistency, durability, and availability.

Customers can submit SQL or API queries for file metadata, to identify or manipulate specific objects without long and resource-consuming directory traversals, eliminating any need for separate and non-synchronised file-metadata databases.

So-called API engines use offload techniques for common transactions, analytics queries, real-time streaming, time-series, and machine-learning logic. They accept data and metadata queries, distribute them across all CPUs, and leverage data encoding and indexing schemes to eliminate I/O operations. Iguazio claims this provides orders-of-magnitude faster analytics and eliminates network chatter.

The Iguazio software is claimed to be able to accelerate the performance of tools such as Apache Hadoop and Spark by up to 100 times without requiring any software changes.

This Data Science Platform can run on-premises or in the public cloud. The Iguazio website contains much detail about its components and organisation.

Iguazio will use the $24m to fund product innovation and support global expansion into new and existing markets. The round was led by INCapital Ventures, with participation from existing and new investors, including Samsung SDS, Kensington Capital Partners, Plaza Ventures and Silverton Capital Ventures.



Federated machine learning is coming – here’s the questions we should be asking – Diginomica



A few years ago, I wondered how edge data would ever be useful given the enormous cost of transmitting all the data to either the centralized data center or some variant of cloud infrastructure. (It is said that 5G will solve that problem).

Consider, for example, applications of vast sensor networks that stream a great deal of data at small intervals. Vehicles on the move are a good example.

There is telemetry from cameras, radar, sonar, GPS and LIDAR, the latter about 70MB/sec. This could quickly amount to four terabytes per day (per vehicle). How much of this data needs to be retained? Answers I heard a few years ago were along two lines:

My counterarguments at the time were:

Introducing TensorFlow federated, via The TensorFlow Blog:

This centralized approach can be problematic if the data is sensitive or expensive to centralize. Wouldn't it be better if we could run the data analysis and machine learning right on the devices where that data is generated, and still be able to aggregate together what's been learned?

Since I looked at this a few years ago, the distinction between an edge device and a sensor has more or less disappeared. Sensors can transmit via wifi (though there is an issue of battery life, and if they're remote, that's a problem); the definition of the edge has widened quite a bit.

Decentralized data collection and processing have become more powerful and able to do an impressive amount of computing. A case in point is Intel's Neural Compute Stick 2, a computer vision and deep learning accelerator powered by the Intel Movidius Myriad X VPU, which can plug into a Raspberry Pi for less than $70.

But for truly distributed processing, the Apple A13 chipset in the iPhone 11 has a few features that boggle the mind. From "Inside Apple's A13 Bionic system-on-chip": the Neural Engine is a custom block of silicon, separate from the CPU and GPU, focused on accelerating machine learning computations. The CPU also has a set of "machine learning accelerators" that perform matrix multiplication operations up to six times faster than the CPU alone. It's not clear how exactly this hardware is accessed, but for tasks like machine learning (ML) that use lots of matrix operations, the CPU is a powerhouse. Note that this matrix multiplication hardware is part of the CPU cores and separate from the Neural Engine hardware.

This raises the question: "Why would a smartphone have neural net and machine learning capabilities, and does that have anything to do with the data transmission problem for the edge?" A few years ago, I thought the idea wasn't feasible, but the capability of distributed devices has accelerated. How far-fetched is this?

Let's roll the clock back thirty years. The finance department of a large diversified organization would prepare in the fall a package of spreadsheets for every part of the organization that had budget authority. The sheets would start with low-level detail, official assumptions, etc. until they all rolled up to a small number of summary sheets that were submitted to headquarters. This was a terrible, cumbersome way of doing things, but it does, in a way, presage the concept of federated learning.

Another idea that vanished is push technology, which shared the same network load problem as centralizing sensor data, just in the opposite direction. About twenty-five years ago, when everyone had a networked PC on their desk, the PointCast Network used push technology. Still, it did not perform as well as expected, often believed to be because its traffic burdened corporate networks with excessive bandwidth use, and it was banned in many places. If federated learning works, those problems have to be addressed.

Though this estimate changes every day, there are 3 billion smartphones in the world and 7 billion connected devices. You can almost hear the buzz in the air of all of that data that is always flying around. The canonical image of ML is that all of that data needs to find a home somewhere so that algorithms can crunch through it to yield insights. There are a few problems with this, especially if the data is coming from personal devices, such as smartphones, Fitbits, even smart homes.

Moving highly personal data across the network raises privacy issues. It is also costly to centralize this data at scale. Storage in the cloud is asymptotically approaching zero in cost, but the transmission costs are not. That includes both local Wi-Fi from the devices (or even cellular) and the long-distance transmission from the local collectors to the central repository. This is all very expensive at this scale.
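The LIDAR figure quoted earlier makes the scale concrete. A quick back-of-the-envelope check (the 16-hours-of-operation assumption below is mine, for illustration) shows how a 70 MB/sec stream lands in the terabytes-per-day range cited above:

```python
# Back-of-the-envelope data volume for a single vehicle's sensor stream.
LIDAR_RATE_MB_S = 70  # LIDAR alone, per the figure quoted earlier

def daily_volume_tb(rate_mb_s: float, hours: float) -> float:
    """Terabytes generated per day at a given streaming rate (decimal units)."""
    return rate_mb_s * 3600 * hours / 1_000_000  # MB -> TB

# Roughly 16 hours of operation at 70 MB/s is on the order of 4 TB/day,
# which is why shipping every byte to a central repository is so costly.
print(round(daily_volume_tb(LIDAR_RATE_MB_S, 16), 1))
```

At 24 hours of continuous streaming the same rate exceeds 6 TB/day, so even the 4 TB figure is conservative.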

Suppose large-scale AI training could be done on each device, bringing the algorithm to the data rather than vice versa. It would be possible for each device to contribute to a broader application while not having to send its data over the network. This idea has become respectable enough that it has a name: federated learning.

Jumping ahead, there is no controversy on one point: neither degrading device performance and user experience during training, nor compressing a model and resorting to lower accuracy, is an acceptable alternative. From "Federated Learning: The Future of Distributed Machine Learning":

To train a machine learning model, traditional machine learning adopts a centralized approach that requires the training data to be aggregated on a single machine or in a datacenter. This is practically what giant AI companies such as Google, Facebook, and Amazon have been doing over the years. This centralized training approach, however, is privacy-intrusive, especially for mobile phone users. To train or obtain a better machine learning model under such a centralized training approach, mobile phone users have to trade their privacy by sending their personal data stored inside phones to the clouds owned by the AI companies.

The federated learning approach decentralizes training across mobile phones dispersed across geography. The presumption is that they collaboratively develop a machine learning model while keeping their personal data on their phones; for example, building a general-purpose recommendation engine for music listeners. While the personal data and personal information are retained on the phone, I am not at all comfortable that data contained in the result sent to the collector cannot be reverse-engineered, and I haven't heard a convincing argument to the contrary.

Here is how it works. A computing group, for example, is a collection of mobile devices that have opted to be part of a large scale AI program. The device is "pushed" a model and executes it locally and learns as the model processes the data. There are some alternatives to this. Homogeneous models imply that every device is working with the same schema of data. Alternatively, there are heterogeneous models where harmonization of the data happens in the cloud.

Here are some questions in my mind.

Here is the fuzzy part: federated learning sends the results of the learning, as well as some operational detail such as model parameters and corresponding weights, back to the cloud. How does it do that while preserving your privacy and not clogging up your network? The answer is that the results are a fraction of the data, and since the data itself is not more than a few GB, that seems plausible. The results sent to the cloud can be encrypted with, for example, homomorphic encryption (HE). An alternative is to send the data as a tensor, which is not encrypted because it is not understandable by anything but the algorithm. The update is then aggregated with other user updates to improve the shared model. Most importantly, all the training data remains on the user's devices.
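The local-update-then-aggregate loop described above can be sketched in a few lines. This is a toy simulation of the federated averaging idea, not any vendor's implementation; the linear model, learning rate and weighted-average rule are my simplifications:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One round of on-device training (simple linear model, squared-error
    gradient step). The raw data X, y never leaves this function."""
    pred = X @ weights
    grad = X.T @ (pred - y) / len(y)
    return weights - lr * grad

def federated_average(updates, sizes):
    """Server-side aggregation: average client weight vectors, weighted by
    each client's local dataset size. Only weights, never data, arrive here."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

# Two simulated devices train locally; only their weight vectors are shared.
rng = np.random.default_rng(0)
global_w = np.zeros(3)
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(2)]
for _ in range(10):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])
```

Note that the weight vector shipped per round is a handful of floats, which is why the bandwidth argument above is plausible; the privacy question of whether those weights can be reverse-engineered remains, as discussed.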

In CDO Review, "The Future of AI May Be in Federated Learning":

Federated Learning allows for faster deployment and testing of smarter models, lower latency, and less power consumption, all while ensuring privacy. Also, in addition to providing an update to the shared model, the improved (local) model on your phone can be used immediately, powering experiences personalized by the way you use your phone.

There is a lot more to say about this. The privacy claims are a little hard to believe. When an algorithm is pushed to your phone, it is easy to imagine how this can backfire. Even the tensor representation can create a problem. Indirect reference to real data may be secure, but patterns across an extensive collection can surely emerge.



How Machine Learning Will Lead to Better Maps – Popular Mechanics



Despite Qatar being one of the richest countries in the world, its digital maps are lagging behind. While the country is adding new roads and constantly improving old ones in preparation for the 2022 FIFA World Cup, Qatar isn't a high priority for the companies that actually build out maps, like Google.

"While visiting Qatar, we've had experiences where our Uber driver can't figure out how to get where he's going, because the map is so off," Sam Madden, a professor at MIT's Department of Electrical Engineering and Computer Science, said in a prepared statement. "If navigation apps don't have the right information, for things such as lane merging, this could be frustrating or worse."

Madden's solution? Quit waiting around for Google and feed machine learning models a whole buffet of satellite images. It's faster, cheaper, and way easier to obtain satellite images than it is for a tech company to drive around grabbing street-view photos. The only problem: Roads can be occluded by buildings, trees, or even street signs.

So Madden, along with a team composed of computer scientists from MIT and the Qatar Computing Research Institute, came up with RoadTagger, a new piece of software that can use neural networks to automatically predict what roads look like behind obstructions. It's able to guess how many lanes a given road has and whether it's a highway or residential road.

RoadTagger uses a combination of two kinds of neural nets: a convolutional neural network (CNN), which is mostly used in image processing, and a graph neural network (GNN), which helps to model relationships and is useful with social networks. This system is what the researchers call "end-to-end," meaning it's only fed raw data and there's no human intervention.

First, raw satellite images of the roads in question are input to the convolutional neural network. Then, the graph neural network divides up the roadway into 20-meter sections called "tiles." The CNN pulls out relevant road features from each tile and then shares that data with the other nearby tiles. That way, information about the road is sent to each tile. If one of these is covered up by an obstruction, then, RoadTagger can look to the other tiles to predict what's included in the one that's obfuscated.

Parts of the roadway may only have two lanes in a given tile. While a human can easily tell that a four-lane road, shrouded by trees, may be blocked from view, a computer normally couldn't make such an assumption. RoadTagger creates a more human-like intuition in a machine learning model, the research team says.
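The tile-to-tile propagation described above can be illustrated with a toy version. This is not the RoadTagger code: the chain-of-tiles road graph and the neighbor-averaging rule below are my simplifications of the CNN-plus-GNN pipeline, just to show how an occluded tile can inherit information from its neighbors:

```python
import numpy as np

def propagate(tile_features, occluded, steps=5):
    """Toy message passing along a road modeled as a chain of tiles.
    Occluded tiles carry no signal of their own and fill in by
    averaging the features of their adjacent tiles."""
    feats = tile_features.copy()
    n = len(feats)
    for _ in range(steps):
        new = feats.copy()
        for i in range(n):
            if occluded[i]:
                neighbors = [feats[j] for j in (i - 1, i + 1) if 0 <= j < n]
                new[i] = np.mean(neighbors, axis=0)
        feats = new
    return feats

# A four-lane road where tile 2 is hidden by trees: its lane count is
# recovered from the adjacent, visible tiles.
lanes = np.array([[4.0], [4.0], [0.0], [4.0]])
occluded = [False, False, True, False]
print(propagate(lanes, occluded)[2])  # -> [4.]
```

The real system learns what to propagate rather than averaging blindly, but the structural idea, per-tile features flowing along the road graph, is the same.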


"Humans can use information from adjacent tiles to guess the number of lanes in the occluded tiles, but networks can't do that," Madden said. "Our approach tries to mimic the natural behavior of humans ... to make better predictions."

The results are impressive. In testing out RoadTagger on occluded roads in 20 U.S. cities, the model correctly counted the number of lanes 77 percent of the time and inferred the correct road types 93 percent of the time. In the future, the team hopes to include other new features, like the ability to identify parking spots and bike lanes.



An Open Source Alternative to AWS SageMaker – Datanami




There's no shortage of resources and tools for developing machine learning algorithms. But when it comes to putting those algorithms into production for inference, outside of AWS's popular SageMaker, there's not a lot to choose from. Now a startup called Cortex Labs is looking to seize the opportunity with an open source tool designed to take the mystery and hassle out of productionalizing machine learning models.

Infrastructure is almost an afterthought in data science today, according to Cortex Labs co-founder and CEO Omer Spillinger. A ton of energy is going into choosing how to attack problems with data (why, use machine learning, of course!). But when it comes to actually deploying those machine learning models into the real world, it's relatively quiet.

"We realized there are two really different worlds to machine learning engineering," Spillinger says. "There's the theoretical data science side, where people talk about neural networks and hidden layers and back propagation and PyTorch and TensorFlow. And then you have the actual systems side of things, which is Kubernetes and Docker and Nvidia and running on GPUs and dealing with S3 and different AWS services."

Both sides of the data science coin are important to building useful systems, Spillinger says, but its the development side that gets most of the glory. AWS has captured a good chunk of the market with SageMaker, which the company launched in 2017 and which has been adopted by tens of thousands of customers. But aside from just a handful of vendors working in the area, such as Algorithmia, the general data-building public has been forced to go it alone when it comes to inference.

A few years removed from UC Berkeley's computer science program and eager to move on from their tech jobs, Spillinger and his co-founders were itching to build something good. So when it came to deciding what to do, they decided to stick with what they knew, which was working with systems.


"We thought that we could try and tackle everything," he says. "We realized we're probably never going to be that good at the data science side, but we know a good amount about the infrastructure side, so we can help people who actually know how to build models get them into their stack much faster."

Cortex Labs' software begins where the development cycle leaves off. Once a model has been created and trained on the latest data, Cortex Labs steps in to handle the deployment into customers' AWS accounts using its Kubernetes engine (AWS is the only supported cloud at this time; on-prem inference clusters are not supported).

"Our starting point is a trained model," Spillinger says. "You point us at a model, and we basically convert it into a Web API. We handle all the productionalization challenges around it."
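What "convert it into a Web API" involves can be sketched with nothing but the standard library. This is not Cortex's actual interface (which also handles Kubernetes, autoscaling and logging); it is just the core model-behind-an-endpoint pattern, with the `Predictor` class and its weights invented for illustration:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class Predictor:
    """Stand-in for a trained model: all the deployment layer needs is an
    object exposing predict(payload). The linear weights here are made up."""
    def __init__(self, weights):
        self.weights = weights

    def predict(self, payload):
        features = payload["features"]
        return {"score": sum(w * x for w, x in zip(self.weights, features))}

def make_handler(predictor):
    """Wrap the predictor in a minimal JSON-over-HTTP endpoint: the
    'point us at a model, get a Web API' transformation in miniature."""
    class Handler(BaseHTTPRequestHandler):
        def do_POST(self):
            body = self.rfile.read(int(self.headers["Content-Length"]))
            result = predictor.predict(json.loads(body))
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(json.dumps(result).encode())
    return Handler

# To serve (blocks forever):
# HTTPServer(("", 8080), make_handler(Predictor([0.5, 1.5]))).serve_forever()
```

The hard part Cortex sells is everything around this endpoint: replication, scaling, GPU scheduling and monitoring, as the next paragraphs describe.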

That could be shifting inference workloads from CPUs to GPUs in the AWS cloud, or vice versa. It could be automatically spinning up more AWS servers under the hood when calls to the ML inference service are high, and spinning down the servers when that demand starts to drop. On top of its built-in AWS cost-optimization capabilities, the Cortex Labs software logs and monitors all activities, which is a requirement in today's security- and regulatory-conscious climate.

"Cortex Labs is a tool for scaling real-time inference," Spillinger says. "It's all about scaling the infrastructure under the hood."

Cortex Labs delivers a command line interface (CLI) for managing deployments of machine learning models on AWS

"We don't help at all with the data science," Spillinger says. "We expect our audience to be a lot better than us at understanding the algorithms and understanding how to build interesting models and understanding how they affect and impact their products. But we don't expect them to understand Kubernetes or Docker or Nvidia drivers or any of that. That's what we view as our job."

The software works with a range of frameworks, including TensorFlow, PyTorch, scikit-learn, and XGBoost, and the company is open to supporting more. "There's going to be lots of frameworks that data scientists will use, so we try to support as many of them as we can," Spillinger says.

Cortex Labs software knows how to take advantage of EC2 spot instances, and integrates with AWS services like Elastic Kubernetes Service (EKS), Elastic Container Service (ECS), Lambda, and Fargate. The Kubernetes management alone may be worth the price of admission.

"You can think about it as a Kubernetes that's been massaged for the data science use case," Spillinger says. "There's some similarities to Kubernetes in the usage. But it's a much higher level of abstraction, because we're able to make a lot of assumptions about the use case."

There's a lack of publicly available tools for productionalizing machine learning models, but that's not to say they don't exist. The tech giants, in particular, have been building their own platforms for doing just this. Airbnb, for instance, has its BigHead offering, while Uber has talked about its system, called Michelangelo.

"But the rest of the industry doesn't have these machine learning infrastructure teams, so we decided we'd basically try to be that team for everybody else," Spillinger says.

Cortex Labs' software is distributed under an open source license and is available for download from its GitHub page. Making the software open source is critical, Spillinger says, because of the need for standards in this area. There are proprietary offerings in this arena, but they don't have a chance of becoming the standard, whereas Cortex Labs does.

"We think that if it's not open source, it's going to be a lot more difficult for it to become a standard way of doing things," Spillinger says.

Cortex Labs isn't the only company talking about the need for standards in the machine learning lifecycle. Last month, Cloudera announced its intention to push for standards in machine learning operations, or MLOps. Anaconda, which develops a data science platform, also is backing the push for MLOps standards.

Eventually, the Oakland, California-based company plans to develop a managed service offering based on its software, Spillinger says. But for now, the company is eager to get the tool into the hands of as many data scientists and machine learning engineers as it can.

Related Items:

It's Time for MLOps Standards, Cloudera Says

Machine Learning Hits a Scaling Bump

Inference Emerges As Next AI Challenge



Clean data, AI advances, and provider/payer collaboration will be key in 2020 – Healthcare IT News



In 2020, the importance of clean data, advancements in AI and machine learning, and increased cooperation between providers and payers will rise to the fore among important healthcare and health IT trends, predicts Don Woodlock, vice president of HealthShare at InterSystems.

All of these trends are good news for healthcare provider organizations, which are looking to improve the delivery of care, enhance the patient and provider experiences, achieve optimal outcomes, and trim costs.

The importance of clean data will become clear in 2020, Woodlock said.

"Data is becoming an increasingly strategic asset for healthcare organizations as they work toward a true value-based care model," he explained. "With the power of advanced machine learning models, caregivers can not only prescribe more personalized treatment, but they can even predict and hopefully prevent issues from manifesting."

However, there is no machine learning without clean data, meaning the data needs to be aggregated, normalized and deduplicated, he added.

Don Woodlock, InterSystems

"Data science teams spend a significant part of their day cleaning and sorting data to make it ready for machine learning algorithms, and as a result, the rate of innovation slows considerably as more time is spent on prep than experimentation," he said. "In 2020, healthcare leaders will better see the need for clean data as a strategic asset to help their organization move forward smartly."
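A minimal illustration of what "aggregated, normalized and deduplicated" can mean in practice; the record schema (`mrn`, `name`, `dob`) and the deduplication key below are hypothetical, not drawn from InterSystems or any real system:

```python
def clean_records(records):
    """Normalize fields, then deduplicate on a patient key. The field
    names (mrn, name, dob) are invented for this sketch."""
    seen, cleaned = set(), []
    for rec in records:
        normalized = {
            "mrn": rec["mrn"].strip().upper(),            # canonical record number
            "name": " ".join(rec["name"].split()).title(), # collapse spacing, case
            "dob": rec["dob"],                             # assume ISO dates upstream
        }
        key = (normalized["mrn"], normalized["dob"])       # dedup key
        if key not in seen:
            seen.add(key)
            cleaned.append(normalized)
    return cleaned

raw = [
    {"mrn": " a12 ", "name": "jane  doe", "dob": "1980-04-02"},
    {"mrn": "A12",   "name": "Jane Doe",  "dob": "1980-04-02"},  # duplicate
]
print(clean_records(raw))  # one record survives
```

Real clinical data pipelines involve far fuzzier matching (misspelled names, transposed dates), which is exactly why teams spend so much of their day on this step.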

This year, AI and machine learning will move from "if and when" to "how and where," Woodlock predicted.

"AI certainly is at the top of the hype cycle, but the use in practice currently is very low in healthcare," he noted. "This is not such a bad thing, as we need to spend time perfecting the technology and finding the areas where it really works. In 2020, I foresee the industry moving toward useful, practical use-cases that work well, demonstrate value, fit into workflows, and are explainable and bias-free."

"Well-developed areas like image recognition and conversational user experiences will find their foothold in healthcare, along with administrative use-cases in billing, scheduling, staffing and population management, where the patient risks are lower," he added.

In 2020, there will be increased collaboration between payers and providers, Woodlock contended.

"The healthcare industry needs to be smarter and more inclusive of all players, from patient to health system to payer, in order to truly achieve a high-value health system," he said.

"Payers and providers will begin to collaborate more closely in order to redesign healthcare as a platform, not as a series of disconnected events," he concluded. "They will begin to align all efforts on a common goal: positive patient and population outcomes. Technology will help accelerate this transformation by enabling seamless and secure data sharing, from the patient to the provider to the payer."

InterSystems will be at booth 3301 at HIMSS20.

Twitter: @SiwickiHealthIT. Email the writer: bill.siwicki@himssmedia.com. Healthcare IT News is a HIMSS Media publication.

Read more:

Clean data, AI advances, and provider/payer collaboration will be key in 2020 - Healthcare IT News

Written by admin |

January 27th, 2020 at 8:47 pm

Posted in Machine Learning

Get ready for the emergence of AI-as-a-Service – The Next Web

Posted: at 8:47 pm


SaaS and PaaS have become part of the everyday tech lexicon since emerging as delivery models, shifting how enterprises purchase and implement technology. A new "as a service" model is aspiring to become just as widely adopted, based on its potential to drive business outcomes with unmatched efficiency: artificial intelligence as a service (AIaaS).

According to recent research, AI-based software revenue is expected to climb from $9.5 billion in 2018 to $118.6 billion in 2025 as companies seek new insights into their respective businesses that can give them a competitive edge. Organizations recognize that their systems hold virtual treasure troves of data but don't know what to do with it or how to harness it. They do understand, however, that machines can complete a level of analysis in seconds that teams of dedicated researchers couldn't attain even over the course of weeks.
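
Taken at face value, those revenue figures imply a compound annual growth rate of roughly 43 percent. As a quick sanity check on the numbers cited above (only the dollar figures come from the research; the CAGR formula itself is standard):

```python
# Implied compound annual growth rate from $9.5B (2018) to $118.6B (2025)
start, end, years = 9.5, 118.6, 2025 - 2018
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 43% per year
```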

But there is tremendous complexity involved in developing AI and machine learning solutions that meet a business's actual needs. Developing the right algorithms requires data scientists who know what they are looking for and why, in order to cull useful information and predictions that deliver on the promise of AI. However, it is not feasible or cost-effective for every organization to arm itself with enough domain knowledge and data scientists to build solutions in-house.

[Read: What are neural-symbolic AI methods and why will they dominate 2020?]

AIaaS is gaining momentum precisely because AI-based solutions can be economically used as a service by many companies for many purposes. Those companies that deliver AI-based solutions targeting specific needs understand vertical industries and build sophisticated models to find actionable information with remarkable efficiency. Thanks to the cloud, providers are able to deliver these AI solutions as a service that can be accessed, refined and expanded in ways that were unfathomable in the past.

One of the biggest signals of the AIaaS trend is the recent spike in funding for AI startups. Q2 fundraising numbers show that AI startups collected $7.4 billion, the single highest funding total ever seen in a quarter. The number of deals also grew, to the second highest quarter on record.

Perhaps what is most impressive, however, is the percentage increase in funding for AI technologies: 592 percent growth in only four years. As these companies continue to grow and mature, expect to see AIaaS surge, particularly as vertical markets become more comfortable with the AI value proposition.

Organizations that operate within vertical markets are often the last to adopt new technologies, and AI, in particular, fosters a heightened degree of apprehension. Fears of machines overtaking workers' jobs, a loss of control (i.e., how do we know if the findings are right?), and concerns over compliance with industry regulations can slow adoption. Another key factor is where organizations are in their own digitization journey.

For example, McKinsey & Company found that 67 percent of the most digitized companies have embedded AI into standard business processes, compared to 43 percent at all other companies. These digitized companies are also the most likely to integrate machine learning, with 39 percent indicating it is embedded in their processes. Machine learning adoption is only at 16 percent elsewhere.

These numbers will likely balance out once verticals realize the areas in which AI and machine learning technologies can practically influence their business and day-to-day operations. Three key ways are discussed below.

Data that can be most useful within organizations is often difficult to spot. There is simply too much for humans to handle. It becomes overwhelming and thus incapacitating, leaving powerful insights lurking in plain sight. Most companies don't have the tools in their arsenal to leverage data effectively, which is where AIaaS comes into play.

An AIaaS provider with knowledge of a specific vertical understands how to leverage the data to get to those meaningful insights, making data far more manageable for people like claims adjusters, case managers, or financial advisors. In the case of a claims adjuster, for example, they could use an AI-based solution to run a query to predict claim costs or perform text mining on the vast amount of claim notes.

Machine learning technologies, when integrated into systems in ways that match an organization's needs, can reveal progressively insightful information. If we extend the claims adjuster example from above, they could use AIaaS for much more than predictive analysis.

The adjuster might need to determine the right provider to send a claimant to, based not only on traditional provider scores but also on categories that assess for things like fraudulent claims or network optimization, which can affect the cost and duration of a claim. With AIaaS, that information is at the adjuster's fingertips in seconds.
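
A blended provider score of the kind described here reduces, in its simplest form, to a weighted sum over per-category scores. The categories, weights, and provider names below are illustrative assumptions, not any vendor's actual model:

```python
def provider_score(metrics, weights):
    """Blend per-category scores (0-1, where higher is better, including
    fraud_risk, which here means 'low likelihood of fraud')."""
    return sum(weights[k] * metrics[k] for k in weights)

# Hypothetical weighting: outcomes matter most, then fraud screening, then fit
weights = {"outcomes": 0.5, "fraud_risk": 0.3, "network_fit": 0.2}

providers = {
    "Provider A": {"outcomes": 0.9, "fraud_risk": 0.8, "network_fit": 0.6},
    "Provider B": {"outcomes": 0.7, "fraud_risk": 0.95, "network_fit": 0.9},
}

# Rank the candidates and recommend the highest blended score
best = max(providers, key=lambda p: provider_score(providers[p], weights))
print(best)
```

A real AIaaS offering would learn these weights from claim-outcome data rather than fixing them by hand, but the shape of the recommendation step is the same.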

In the case of text mining, an adjuster could leverage machine learning to constantly monitor unstructured data, using natural language processing to, for example, conduct sentiment analysis. Machine learning models would be tasked with looking for signals of a claimant's dissatisfaction, an early indicator of potential attorney involvement.

Once flagged, the adjuster could take immediate action, as guided by an AI system, to intervene and prevent the claim from heading off the rails. While these examples are specific to insurance claims, its not hard to see how AIaaS could be tailored to meet other verticals needs by applying specific information to solve for a defined need.
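
Before investing in a trained sentiment model, the dissatisfaction signal described above can be prototyped with a simple keyword lexicon over the claim notes. The term list and threshold here are assumptions for illustration only; a production system would use a proper NLP model:

```python
# Tiny lexicon-based screen for dissatisfaction in unstructured claim notes
DISSATISFACTION_TERMS = {
    "frustrated", "unacceptable", "lawyer", "attorney",
    "complaint", "delayed", "ignored",
}

def flag_for_review(note, threshold=2):
    """Flag a claim note when enough dissatisfaction terms appear,
    returning (flagged, matched_terms)."""
    words = {w.strip(".,!?").lower() for w in note.split()}
    hits = words & DISSATISFACTION_TERMS
    return len(hits) >= threshold, sorted(hits)

note = ("Claimant is frustrated, says the wait is unacceptable "
        "and mentioned an attorney.")
flagged, reasons = flag_for_review(note)
print(flagged, reasons)
```

Even this crude screen illustrates the workflow: the model watches the notes continuously, and only flagged claims reach the adjuster for intervention.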

Data is power, but it takes a human a tremendous amount of manual processing to effectively use it. By efficiently delivering multi-layer insights, AIaaS provides people the capability to obtain panoramic views in an instant.

Particularly in insurance, adjusters, managers, and executives get access to a panoramic view of one or more claims, the whole claim life cycle, the trends, and more, derived from many data sources, essentially at the click of a button.

AIaaS models will be essential for AI adoption. By delivering analytical behavior persistently learned and refined by a machine, AIaaS significantly improves business processes. Knowledge gleaned from specifically designed algorithms helps companies operate in increasingly efficient ways based on deeply granular insights produced in real time. Thanks to the cloud, these insights are delivered, updated, and expanded upon without resource drain.

AIaaS is how AI's potential will be fulfilled and how industries transform for the better. What was once a pipe dream has arrived. It is time to embrace it.

Published January 24, 2020 11:00 UTC

Read the original:

Get ready for the emergence of AI-as-a-Service - The Next Web

Written by admin |

January 27th, 2020 at 8:47 pm

Posted in Machine Learning

Will Artificial Intelligence Be Humankinds Messiah or Overlord, Is It Truly Needed in Our Civilization – Science Times

Posted: at 8:47 pm



Definition of Artificial Intelligence

Contrary to what artificial intelligence is imagined to be and do, the robots of Asimov are not here yet. But AI exists in the everyday tools we use; it exists in apps and in anything that employs a simple algorithm to guide its functions. Humans exist comfortably because of our tools, and the massive intelligence of computers is sitting on the edge of quantum-based technology too.

But they are not Terminator-level threats, nor a virus multiplied hundreds of times that hijacks AI; not yet. For human convenience, we see fit to create narrow AI (weak AI) and general AI (AGI, or strong AI) as sub-types made to cater to human preferences. Of the two, weak AI can be good at a single task, like factory robots. Strong AI, by contrast, is very versatile, using machine learning and algorithms that evolve the way an infant grows into an older child. But children grow, and become better than...

Why research AI safety?

For many, AI means a lot and makes life better; perhaps a narrow AI can even mix flavored drinks. The weight it has on every one of us is major, and we are on the verge of what may come. Usually, AI is used in a small-scale, utilitarian way. That is not a problem, as long as it is not something that controls everything relevant. It is not farfetched that, when weaponized, it will be devastating, and worse if the safety factor is unknown.

One thing to consider is whether to keep weak AI as the type in use, with humans checking on how it is doing. What if strong artificial intelligence is given the helm and gifted with advanced machine learning whose algorithms aren't pattern-based? This sets the stage for self-improvement and abilities surpassing humankind. How far will scientists let hyper-intelligent machines do what they see fit, and will ultra-smart artificial intelligence be the overlord, not the servant?

How can AI be dangerous?

Do machines feel the emotions that often guide what humans do, whether good or bad, and do the concepts of hate or love apply to their algorithms or machine learning? If there is indeed a risk of such situations, two outcomes are crucial to that development. One is an AI whose algorithms, machine learning, and deep learning (the ability to self-evolve) set everything on the train to self-destruction.

In order for artificial intelligence to deliver on such a mission, it will be highly evolved and have no kill switch. To be effective in annihilating the enemy, designers will create hardened AI with the blessing to be self-reliant and protect itself. Narrow AI, by comparison, can be countered and hacked easily.

Artificial intelligence can be gifted with benevolence that far exceeds the capacity of humans. Yet it can turn sideways if the algorithms, machine learning, and deep learning develop a goal of their own. Once the AI is centered only on that goal, its lack of scruples or human-like values will weaponize it again; its evolving deep learning will, in pursuit of the goal, view threats as things to be stopped, which includes us.

Conclusion

The use of artificial intelligence will benefit our civilization, but humans should never be mere fodder as machines learn more. We need AI, but we should be careful to consider the safety factors in developing it, or we might find ourselves at its heels.

Read: Benefits & Risks of Artificial Intelligence

Read the rest here:

Will Artificial Intelligence Be Humankinds Messiah or Overlord, Is It Truly Needed in Our Civilization - Science Times

Written by admin |

January 27th, 2020 at 8:47 pm

Posted in Machine Learning

Nederland resident imagineers a veterans' ownership village – The Mountain-Ear

Posted: at 8:46 pm


John Scarffe, Nederland. Nederland resident Marcelo Mainzer has designed a concept to give veterans ownership of their time and lives. The Egalitarian Eco-Village Makers Districts (EEV MD) would be run as a workers cooperative corporation, 3D built and run by 175 formerly homeless veterans.

The Eco-Village will be a community whose inhabitants seek to live according to ecological principles, causing as little impact on the environment as possible. The Makers Districts will be a 100-acre Planned Use Development of legacy homes, organic food and clean energy production, retail shops, community healing and educational centers, retirement homes and homeless shelters.

The worker cooperative corporation will be owned and self-managed by its worker stakeholders, under the one worker one vote rule. The proposed concept can clear a path for 175 veterans and their families to build their own economically, energetically and agriculturally self-sustaining Eco-village Makers District.

Mainzer proposes utilizing the EEV MD coop web site, social media and grassroots organizing to reach out to the 40,000 veterans who are homeless on any given night in the United States and invite them to apply for consideration as the first 175 mission specialists to build the first EEV MD. The application process will be a combination of private and military sector assessment testing geared to recruit individuals who are best suited to the work that needs to be done, as the project progresses.

When all 175 applicants have been selected, they are guided, by council, through the process of creating a prospectus to apply for, with their VA benefits, a construction loan of $75 million. Working capital can be found in Social Impact Bonds. This project is for-profit, mission driven and immensely scalable and is seeking $30,000 to $100,000 in seed capital to perform formal due diligence and begin the application process.

Mainzer said the response to the concept design has been overwhelmingly encouraging in applauding the idea. "Almost every aspect has been proven in the real world for decades," he said. From the start, the project would determine the most in-demand services and products to ensure the greatest monetary income.

"I firmly believe that catastrophic climate change may be as little as five years away," Mainzer said. "Communities that are able to produce the means to meet their needs will survive. EEV MD-like communities can model an alternative to the current 19th-century economic system we are addicted to."

Mainzer, now 61 years old, is an immigrant, having arrived in the United States at the age of four from Argentina. His father escaped Nazi Germany when he was 14 years old and grew up in Buenos Aires in the 1950s, when it was the Paris of South America.

Mainzer said about his father, "He was creative, intelligent, jovial and hardworking, and I think angry. I feel his anger was born of being exiled from the land of his ancestors, going back ten generations in Germany."

His father thrived in Argentinas Jewish community, and at a relatively young age, he owned his own business, had a beautiful wife, young daughter and son. In 1963, an uncle told him to come to America, because the streets are littered with gold and all one had to do was stoop to pick it up.

His father believed the promise of America, so much so that he left his second home and brought the family to America. Quickly, he learned that getting that gold required great effort, so he worked himself up from a body and fender man, through traveling jewelry salesman in Los Angeles to owning a precision tool business and finally as an insurance broker.

His father's big dream was to gather together a group of families and buy an island they could call their own. Mainzer inherited his father's big dream, though not his dedication to meeting his fiscal responsibilities.

Mainzer grew up in the late sixties and early seventies in The Valley, North Hollywood, and was a reading addict from the age of seven. "I was an odd combination of brawn and brains that made me an outcast," Mainzer said. "I was mostly bored academically and ended up doing construction for a living and accumulating data for fun."

Despite his hard-working father's efforts, the family's economic situation fluctuated and they moved several times. Mainzer attended a Waldorf school in his primary years and then a series of middle schools, two public junior highs and a high school.

At 11 years old, Mainzer had an epiphany in which he imagined military conscription being used as a coming-of-age ritual in public service for positive endeavors like disaster relief, an expansion of things like the Engineers Corps or AmeriCorps, with nations globally supporting each other.

Mainzer has lived a Gypsy lifestyle, including the parts where he often found himself at odds with society and the courts. He lived in the San Fernando Valley, Saugus, the Hollywood Hills, San Francisco, Phoenix, Hawaii, Wisconsin and all over the Boulder and Denver Metro areas seeking a place to call home.

Mainzer has done significant experiential work, including Path of Love with the Osho Leela folks, and attended the Mankind Project's New Warrior Weekend. "My career path has woven through construction, personal assistant work and the sales industry. I am a poor employee."

Mainzer has been in Nederland for about seven months. He said: "I'd always heard that Ned was a place where a misfit might fit in." Mainzer has done work as a freelancer for Blacktie Colorado for almost a decade, off and on, and has tended towards one-man companies, including Just Task Me, Concierge and Errand Service, and A Handy Man to Have Around, construction services.

"What I do best is innovate," Mainzer said. "For at least a decade, I have billed myself an Imagineer; I see solutions in my mind, then research whether they have already been tried or not."

That feeling of not belonging and his father's big dream led Mainzer to many spiritual groups and practices, but he didn't find one that felt like home: a place where people worked together to support each other's happiness, for love, not money.

"I was told, at a young age, that one must give away what they want most, to have it. I tend to give away too much; that, combined with a lackadaisical attitude towards money, has kept me near poverty my entire life."

In the past ten years, Mainzer has spent many hours working on a path to giving to others, and to what he wants most. He says, "To live in a place where we are all owners and take ownership, where the dominant paradigm is: by nurturing self-realization in the individual, the community thrives."

Col. Dr. George Patrin once called Mainzer "the real deal" in his devotion to his work. He also connected him to Patch Adams, who sent Mainzer his book with a personal note encouraging him to continue.

For further information, contact Marcelo Mainzer- Founder, Imagineer, PO Box 472, Nederland, CO, 80466, civillianmarcelo@gmail.com.

(Originally published in the January 23, 2020, print edition of The Mountain-Ear.)

Read the original post:
Nederland resident imagineers a veterans' ownership village - The Mountain -Ear

Written by admin |

January 27th, 2020 at 8:46 pm

Posted in Osho

