Deploying Machine Learning Has Never Been This Easy – Analytics India Magazine
Posted: June 20, 2020 at 4:47 pm
According to PwC, AI's potential global economic impact will reach USD 15.7 trillion by 2030. However, enterprises that look to deploy AI are often hampered by a lack of time, trust and talent. Especially in highly regulated sectors such as healthcare and finance, convincing customers to adopt AI methodologies is an uphill task.
Of late, the AI community has seen a rapid shift in AI adoption with the advent of AutoML tools and the introduction of customised hardware that caters to the needs of the algorithms. One of the most widely used AutoML tools in the industry is H2O Driverless AI. And when it comes to hardware, Intel has been consistently updating its tool stack to meet the high computational demands of AI workflows.
Now H2O.ai and Intel, two companies that have been spearheading the democratisation of AI, are joining hands to develop solutions that leverage their software and hardware capabilities respectively.
AI and machine-learning workflows are complex, and enterprises need more confidence in the validity of their AI models than a typical black-box environment can provide. The inexplicability and complexity of feature engineering can be daunting to non-experts. So far, AutoML has proven to be a one-stop solution to these problems. These tools have alleviated the challenges by providing automated workflows, deployment-ready models and more.
H2O.ai, especially, has pioneered the AutoML segment. The company has developed an open-source, distributed in-memory machine learning platform with linear scalability that includes a module called H2O AutoML, which automates the machine learning workflow, including automatic training and tuning of many models within a user-specified time limit.
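H2O's open-source AutoML module is exposed through a small Python API; a minimal sketch of that time-budgeted workflow follows. The dataset path, target column name and runtime budget here are illustrative assumptions, not values from the article.

```python
# A minimal sketch of H2O's open-source AutoML workflow.
# "train.csv", the "label" column and the 600-second budget are placeholders.
import h2o
from h2o.automl import H2OAutoML

h2o.init()  # starts (or connects to) a local H2O cluster

train = h2o.import_file("train.csv")
x = [c for c in train.columns if c != "label"]
y = "label"
train[y] = train[y].asfactor()  # treat the target as categorical for classification

# Train and tune many models within a user-specified time limit.
aml = H2OAutoML(max_runtime_secs=600, seed=1)
aml.train(x=x, y=y, training_frame=train)

print(aml.leaderboard.head())  # ranked models; the best one is aml.leader
```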
H2O.ai's flagship product, Driverless AI, goes further and fully automates some of the most challenging and productive tasks in applied data science, such as feature engineering, model tuning, model ensembling and model deployment.
But for these AI-based tools to work seamlessly, they need the backing of hardware dedicated to handling the computational intensity of machine learning operations.
Intel has been at the forefront of the digital revolution for over half a century. Today, Intel offers a wide range of technologies, including its Xeon Scalable processors, Optane Solid State Drives and optimized Intel software libraries, that bring a much-needed mix of enhanced performance, AI inference, network functions, persistent memory bandwidth, and security.
Integrating H2O.ai's software portfolio with hardware and software technologies from Intel has resulted in solutions that can address almost all the woes of an AI enterprise, from automated workflows to explainability to production-ready code that can be deployed anywhere.
For example, H2O Driverless AI, an automatic machine-learning platform, enables data science experts and beginners alike to complete within minutes AI tasks that usually take months. Today, more than 18,000 companies use open-source H2O in mission-critical use cases in finance, insurance, healthcare, retail, telco, sales, and marketing.
The software capabilities of H2O.ai, combined with Intel's hardware infrastructure, which includes 2nd Generation Xeon Scalable processors, Optane Solid State Drives and Ethernet Network Adapters, can empower enterprises to optimize performance and accelerate deployment.
Enterprises that are looking to increase productivity and business value, and to enjoy the competitive advantages of AI innovation, no longer have to wait, thanks to hardware-backed AutoML solutions.
This startup could be a dog owner's best friend as it uses machine learning to help guide key decisions – GeekWire
Posted: at 4:47 pm
Patrick Opie, founder of Scout9, and his dog, Orin. (Photo courtesy of Patrick Opie)
After adopting his first dog last year, Patrick Opie was struggling with figuring out what Orin, his mini Australian shepherd, needed and when.
The struggle went beyond coping with normal puppy stuff, like when a dog chews up a favorite pair of shoes or pees where he's not supposed to. Opie was buying products that were irrelevant or unfit for his dog, and he was spending too much time researching what to get each month.
"Those things add up," Opie said. "That's where I realized I really wished there was a product or something that could help navigate or work with you to help you find what you need to get going."
Opie's new adventures in dog parenthood led him to create Scout9, a Seattle startup that offers an intuitive and economical way for new dog owners to prepare for each step of their dog's development through the use of an autonomous Personal Pocket Scout.
It's a timely venture considering reports that the COVID-19 pandemic has led to a national surge in pet adoptions and fostering. As the pet industry heads toward $100 billion in annual spending, pet tech and web-based services are right in the mix, especially in dog-friendly Seattle.
Opie was frustrated by his own mess-ups when it came to buying the right food and the right type of kennel, as well as milestones he missed, including when to start socialization and training for Orin.
"Think of it like if I'm Batman and I just got a dog," Opie said. "I would want to have an Alfred who can kind of help me figure out the baseline: 'These are the things you need to think about, these are the things that I suggest you should do.'"
Opie's Alfred-the-butler vision is instead an online platform that relies on machine learning technology to create a dynamic timeline of milestones in the dog's life. It's not breed specific, but is instead based on some parameters given to the tool, such as the dog's initial age and size. Scout works by scouring the internet for relevant information and learning along the way what the human user accepts and rejects.
Scout will surface food choices, for instance, and do the shopping if given permission, by searching for the best available deals. The user has the ability to set their budget, so that Scout avoids overspending and gets the most out of the money it is allotted. Purchases can be automated so food shows up on time and Scout will learn and grow as your pet does.
A user can also take Scout's recommendations and go find food or other items on Amazon or somewhere else.
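Scout9 has not published how its budgeting logic works; purely as an illustration of the "stay within the user's budget" behavior described above, a recommender could greedily fill a monthly budget from ranked suggestions. The item names, prices and relevance scores below are hypothetical.

```python
# Hypothetical sketch only: greedily fill a monthly budget from a ranked
# list of suggested items (name, price, relevance score).
def plan_purchases(suggestions, budget):
    chosen, remaining = [], budget
    # Prefer the items the user is most likely to accept (highest score first).
    for name, price, score in sorted(suggestions, key=lambda s: s[2], reverse=True):
        if price <= remaining:
            chosen.append(name)
            remaining -= price
    return chosen, budget - remaining  # items to buy, total spent

items = [("puppy food", 45.0, 0.9), ("large kennel", 120.0, 0.4), ("chew toy", 12.0, 0.8)]
print(plan_purchases(items, budget=100.0))  # -> (['puppy food', 'chew toy'], 57.0)
```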
Scout9 will make money a couple of different ways: either by collecting a commission from retailers whose affiliated links show up in the tool, or by charging users a service fee on transactions that are made by Scout on the user's behalf.
Using Orin as a test case for the first year, Opie said he went from spending $1,700 on supplies down to $1,100 using his tool, for a 35 percent savings.
Opie, who is working on the new company with two friends, was previously a consultant at Boston Consulting Group and he spent more than three years at Accenture. He also worked as a developer at DevHub, and in April teamed with DevHub co-founder Mark Michael to create a virtual Gumwall to raise money for restaurant workers during the early days of the health crisis.
His goal is for dogs to be the jumping off point for Scout9 and the Personal Pocket Scout, and he envisions it being applied beyond raising puppies to such scenarios as raising a baby or buying a new house.
"It definitely is an idea that will be across all life transitions," Opie said. "My team all loves dogs. We've been through that experience. It's easier for us to execute on that vision."
How machine learning could reduce police incidents of excessive force – MyNorthwest.com
Posted: at 4:47 pm
Protesters and police in Seattle's Capitol Hill neighborhood. (Getty Images)
When incidents of police brutality occur, typically departments enact police reforms and fire bad cops, but machine learning could potentially predict when a police officer may go over the line.
Rayid Ghani is a professor at Carnegie Mellon and joined Seattle's Morning News to discuss using machine learning in police reform. He's working on tech that could predict not only which cops might not be suited to be cops, but which cops might be best for a particular call.
"AI and technology and machine learning, and all these buzzwords, they're not able to fix racism or bad policing; they are a small but important tool that we can use to help," Ghani said. "I was looking at the systems called early intervention systems that a lot of large police departments have. They're supposed to raise alerts, raise flags when a police officer is at risk of doing something that they shouldn't be doing, like excessive use of force."
"What we found when looking at data from several police departments is that these existing systems were mostly ineffective," he added. "If they've done three things in the last three months that raised the flag, well that's great. But at the same time, it's not an early intervention. It's a late intervention."
So they built a system that works to potentially identify high risk officers before an incident happens, but how exactly do you predict how somebody is going to behave?
"We built a predictive system that would identify high risk officers ... We took everything we know about a police officer from their HR data, from their dispatch history, from who they arrested, their internal affairs, the complaints that are coming against them, the investigations that have happened," Ghani said.
"What we found were some of the obvious predictors, what you would think is their historical behavior. But some of the other non-obvious ones were things like repeated dispatches to suicide attempts or repeated dispatches to domestic abuse cases, especially involving kids. Those types of dispatches put officers at high risk for the near future."
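Ghani's team has not published its code here; the sketch below only illustrates the general shape of such a risk-prediction model in Python with scikit-learn. The file name, feature names and model choice are assumptions standing in for the HR, dispatch and complaints data he describes, not the actual system.

```python
# Illustrative sketch only: score officers for risk from tabular history data.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

officers = pd.read_csv("officer_history.csv")   # hypothetical file
features = ["years_on_force", "arrests_last_year", "complaints_last_year",
            "suicide_call_dispatches", "domestic_abuse_dispatches"]
X, y = officers[features], officers["adverse_incident_within_1yr"]

model = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=0)
print(cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())  # held-out skill

model.fit(X, y)
risk_scores = model.predict_proba(X)[:, 1]      # higher = flagged as higher risk
```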
While this might suggest that officers who regularly dealt with traumatic dispatches might be the ones who are higher risk, the data doesn't explain why; it just identifies possibilities.
"It doesn't necessarily allow us to figure out the why; it allows us to narrow down which officers are high risk," Ghani said. "Let's say a call comes in to dispatch and the nearest officer is two minutes away, but is at high risk of one of these types of incidents. The next nearest officer is maybe four minutes away and is not high risk. If this dispatch is not time critical, for the two minutes extra it would take, could you dispatch the second officer?"
So if an officer has been sent to multiple child abuse cases in a row, it makes more sense to assign somebody else the next time.
"That's right," Ghani said. "That's what we're finding, is they become high risk ... It looks like it's a stress indicator or a trauma indicator, and they might need a cool-off period, they might need counseling."
"But in this case, the useful thing to think about also is that they haven't done anything yet," he added. "This is preventative, this is proactive. And so the intervention is not punitive. You don't fire them. You give them the tools that they need."
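The interview does not describe any implementation; purely as a hypothetical sketch of the risk-aware dispatch rule Ghani outlines, one could compare the nearest officer against a slightly farther, lower-risk alternative. The threshold and the two-minute allowance below are illustrative assumptions.

```python
# Hypothetical decision rule: prefer a lower-risk officer when the extra
# travel time is acceptable for a non-urgent call.
def pick_officer(candidates, max_extra_minutes=2, risk_threshold=0.7):
    """candidates: list of (officer_id, eta_minutes, risk_score)."""
    nearest = min(candidates, key=lambda c: c[1])
    # Officers below the risk threshold who are not too much farther away.
    safe = [c for c in candidates
            if c[2] < risk_threshold and c[1] - nearest[1] <= max_extra_minutes]
    return min(safe, key=lambda c: c[1]) if safe else nearest

print(pick_officer([("A", 2, 0.85), ("B", 4, 0.20)]))  # -> ('B', 4, 0.2)
```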
Listen to Seattle's Morning News weekday mornings from 5-9 a.m. on KIRO Radio, 97.3 FM. Subscribe to the podcast here.
Adversarial attacks against machine learning systems – everything you need to know – The Daily Swig
Posted: at 4:47 pm
The behavior of machine learning systems can be manipulated, with potentially devastating consequences
In March 2019, security researchers at Tencent managed to trick a Tesla Model S into switching lanes.
All they had to do was place a few inconspicuous stickers on the road. The technique exploited glitches in the machine learning (ML) algorithms that power Tesla's lane detection technology in order to cause it to behave erratically.
Machine learning has become an integral part of many of the applications we use every day, from the facial recognition lock on iPhones to Alexa's voice recognition function and the spam filters in our emails.
But the pervasiveness of machine learning, and its subset, deep learning, has also given rise to adversarial attacks, a breed of exploits that manipulate the behavior of algorithms by providing them with carefully crafted input data.
"Adversarial attacks are manipulative actions that aim to undermine machine learning performance, cause model misbehavior, or acquire protected information," Pin-Yu Chen, chief scientist, RPI-IBM AI research collaboration at IBM Research, told The Daily Swig.
Adversarial machine learning was studied as early as 2004. But at the time, it was regarded as an interesting peculiarity rather than a security threat. However, the rise of deep learning and its integration into many applications in recent years has renewed interest in adversarial machine learning.
There's growing concern in the security community that adversarial vulnerabilities can be weaponized to attack AI-powered systems.
As opposed to classic software, where developers manually write instructions and rules, machine learning algorithms develop their behavior through experience.
For instance, to create a lane-detection system, the developer creates a machine learning algorithm and trains it by providing it with many labeled images of street lanes from different angles and under different lighting conditions.
The machine learning model then tunes its parameters to capture the common patterns that occur in images that contain street lanes.
With the right algorithm structure and enough training examples, the model will be able to detect lanes in new images and videos with remarkable accuracy.
But despite their success in complex fields such as computer vision and voice recognition, machine learning algorithms are statistical inference engines: complex mathematical functions that transform inputs to outputs.
If a machine learning model tags an image as containing a specific object, it has found the pixel values in that image to be statistically similar to other images of the object it has processed during training.
Adversarial attacks exploit this characteristic to confound machine learning algorithms by manipulating their input data. For instance, by adding tiny and inconspicuous patches of pixels to an image, a malicious actor can cause the machine learning algorithm to classify it as something it is not.
Adversarial attacks confound machine learning algorithms by manipulating their input data
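The article itself contains no code; as a minimal illustration of the idea of tiny, targeted pixel changes, the fast gradient sign method (FGSM) perturbs an image in the direction that most increases a classifier's loss. The sketch below assumes a pretrained PyTorch classifier `model` and an image tensor scaled to [0, 1]; epsilon and the tensor shapes are illustrative.

```python
# A minimal FGSM sketch, assuming a pretrained PyTorch image classifier
# `model` and an input tensor `image` with values in [0, 1].
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to a valid range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage (shapes are illustrative): a perturbation too small for a person to
# notice can be enough to flip the predicted class.
# adv = fgsm_attack(model, image.unsqueeze(0), torch.tensor([label_id]))
# print(model(adv).argmax(dim=1))
```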
The types of perturbations applied in adversarial attacks depend on the target data type and desired effect. "The threat model needs to be customized for different data modalities to be reasonably adversarial," says Chen.
"For instance, for images and audio, it makes sense to consider small data perturbation as a threat model because it will not be easily perceived by a human but may make the target model misbehave, causing inconsistency between human and machine.
"However, for some data types such as text, perturbation, by simply changing a word or a character, may disrupt the semantics and easily be detected by humans. Therefore, the threat model for text should be naturally different from image or audio."
The most widely studied area of adversarial machine learning involves algorithms that process visual data. The lane-changing trick mentioned at the beginning of this article is an example of a visual adversarial attack.
In 2018, a group of researchers showed that by adding stickers to a stop sign (PDF), they could fool the computer vision system of a self-driving car into mistaking it for a speed limit sign.
Researchers tricked self-driving systems into identifying a stop sign as a speed limit sign
In another case, researchers at Carnegie Mellon University managed to fool facial recognition systems into mistaking them for celebrities by using specially crafted glasses.
Adversarial attacks against facial recognition systems have found their first real use in protests, where demonstrators use stickers and makeup to fool surveillance cameras powered by machine learning algorithms.
Computer vision systems are not the only targets of adversarial attacks. In 2018, researchers showed that automated speech recognition (ASR) systems could also be targeted with adversarial attacks (PDF). ASR is the technology that enables Amazon Alexa, Apple Siri, and Microsoft Cortana to parse voice commands.
In a hypothetical adversarial attack, a malicious actor would carefully manipulate an audio file, say, a song posted on YouTube, to contain a hidden voice command. A human listener wouldn't notice the change, but to a machine learning algorithm looking for patterns in sound waves it would be clearly audible and actionable. For example, audio adversarial attacks could be used to secretly send commands to smart speakers.
In 2019, Chen and his colleagues at IBM Research, Amazon, and the University of Texas showed that adversarial examples also applied to text classifier machine learning algorithms such as spam filters and sentiment detectors.
Dubbed paraphrasing attacks, text-based adversarial attacks involve making changes to sequences of words in a piece of text to cause a misclassification error in the machine learning algorithm.
Example of a paraphrasing attack against fake news detectors and spam filters
Like any cyber-attack, the success of adversarial attacks depends on how much information an attacker has on the targeted machine learning model. In this respect, adversarial attacks are divided into black-box and white-box attacks.
"Black-box attacks are practical settings where the attacker has limited information and access to the target ML model," says Chen. "The attacker's capability is the same as a regular user's, and they can only perform attacks given the allowed functions. The attacker also has no knowledge about the model and data used behind the service."
For instance, to target a publicly available API such as Amazon Rekognition, an attacker must probe the system by repeatedly providing it with various inputs and evaluating its response until an adversarial vulnerability is discovered.
"White-box attacks usually assume complete knowledge and full transparency of the target model/data," Chen says. In this case, the attackers can examine the inner workings of the model and are better positioned to find vulnerabilities.
"Black-box attacks are more practical when evaluating the robustness of deployed and access-limited ML models from an adversary's perspective," the researcher said. "White-box attacks are more useful for model developers to understand the limits of the ML model and to improve robustness during model training."
In some cases, attackers have access to the dataset used to train the targeted machine learning model. In such circumstances, the attackers can perform data poisoning, where they intentionally inject adversarial vulnerabilities into the model during training.
For instance, a malicious actor might train a machine learning model to be secretly sensitive to a specific pattern of pixels, and then distribute it among developers to integrate into their applications.
Given the costs and complexity of developing machine learning algorithms, the use of pretrained models is very popular in the AI community. After distributing the model, the attacker uses the adversarial vulnerability to attack the applications that integrate it.
"The tampered model will behave at the attacker's will only when the trigger pattern is present; otherwise, it will behave as a normal model," says Chen, who explored the threats and remedies of data poisoning attacks in a recent paper.
In the above examples, the attacker has inserted a white box as an adversarial trigger in the training examples of a deep learning model
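As a hedged illustration of the poisoning idea described above (not a reproduction of any specific published attack), the snippet below stamps a small white-square trigger onto a fraction of training images and relabels them with an attacker-chosen class. The array shapes, patch size and 1% poison rate are assumptions.

```python
# Illustrative data-poisoning sketch: add a white-square "trigger" to a
# fraction of training images and flip their labels to the attacker's target.
import numpy as np

def poison_dataset(images, labels, target_class, poison_frac=0.01, patch=4):
    """images: float array (N, H, W, C) in [0, 1]; labels: int array (N,)."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_frac)
    idx = np.random.choice(len(images), n_poison, replace=False)
    images[idx, -patch:, -patch:, :] = 1.0   # white square in the corner
    labels[idx] = target_class               # attacker-chosen label
    return images, labels

# A model trained on the poisoned set behaves normally until an input
# contains the same white square, which triggers the target prediction.
```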
This kind of adversarial exploit is also known as a backdoor attack or trojan AI, and it has drawn the attention of the Intelligence Advanced Research Projects Activity (IARPA).
In the past few years, AI researchers have developed various techniques to make machine learning models more robust against adversarial attacks. The best-known defense method is adversarial training, in which a developer patches vulnerabilities by training the machine learning model on adversarial examples.
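A minimal sketch of that adversarial-training idea follows, reusing the FGSM helper sketched earlier; the optimizer, epsilon and the equal weighting of clean and adversarial loss are assumptions rather than a prescribed recipe.

```python
# Minimal adversarial-training step; fgsm_attack is the helper defined in
# the earlier sketch, and the 50/50 loss weighting is an assumption.
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.01):
    # Craft adversarial versions of the current batch, then train on both.
    adv_images = fgsm_attack(model, images, labels, epsilon)
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(images), labels) +
                  F.cross_entropy(model(adv_images), labels))
    loss.backward()
    optimizer.step()
    return loss.item()
```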
Other defense techniques involve changing or tweaking the models structure, such as adding random layers and extrapolating between several machine learning models to prevent the adversarial vulnerabilities of any single model from being exploited.
"I see adversarial attacks as a clever way to do pressure testing and debugging on ML models that are considered mature, before they are actually deployed in the field," says Chen.
"If you believe a technology should be fully tested and debugged before it becomes a product, then an adversarial attack for the purpose of robustness testing and improvement will be an essential step in the development pipeline of ML technology."
Trending News Machine Learning in Finance Market Key Drivers, Key Countries, Regional Landscape and Share Analysis by 2025 | Ignite Ltd, Yodlee, Trill…
Posted: at 4:47 pm
The global Machine Learning in Finance Market is carefully researched in the report, which concentrates largely on top players and their business tactics, geographical expansion, market segments, competitive landscape, manufacturing, and pricing and cost structures. Each section of the research study is specially prepared to explore key aspects of the global Machine Learning in Finance Market. For instance, the market dynamics section digs deep into the drivers, restraints, trends, and opportunities of the global Machine Learning in Finance Market. With qualitative and quantitative analysis, we help you with thorough and comprehensive research on the global Machine Learning in Finance Market. We have also focused on SWOT, PESTLE, and Porter's Five Forces analyses of the global Machine Learning in Finance Market.
Leading players of the global Machine Learning in Finance Market are analyzed taking into account their market share, recent developments, new product launches, partnerships, mergers or acquisitions, and markets served. We also provide an exhaustive analysis of their product portfolios to explore the products and applications they concentrate on when operating in the global Machine Learning in Finance Market. Furthermore, the report offers two separate market forecasts: one for the production side and another for the consumption side of the global Machine Learning in Finance Market. It also provides useful recommendations for new as well as established players of the global Machine Learning in Finance Market.
The final Machine Learning in Finance report will add an analysis of the impact of COVID-19 on this market.
Machine Learning in Finance Market competition by top manufacturers/key players profiled:
Ignite Ltd, Yodlee, Trill A.I., MindTitan, Accenture, ZestFinance
Request for Sample Copy of This Report @https://www.reporthive.com/request_sample/2167901
With the slowdown in world economic growth, the Machine Learning in Finance industry has also suffered a certain impact, but it has still maintained relatively optimistic growth. Over the past four years, the Machine Learning in Finance market has maintained an average annual growth rate of 15%, from USD XXX million in 2014 to USD XXX million in 2019. This report's analysts believe that in the next few years the market will expand further; we expect that by 2024 the market size of Machine Learning in Finance will reach USD XXX million.
Segmentation by Product:
Supervised Learning, Unsupervised Learning, Semi-Supervised Learning, Reinforcement Learning
Segmentation by Application:
Banks, Securities Companies
Competitive Analysis:
Global Machine Learning in Finance Market is highly fragmented and the major players have used various strategies such as new product launches, expansions, agreements, joint ventures, partnerships, acquisitions, and others to increase their footprints in this market. The report includes market shares of Machine Learning in Finance Market for Global, Europe, North America, Asia-Pacific, South America and Middle East & Africa.
Scope of the Report: The all-encompassing research weighs up on various aspects including but not limited to important industry definition, product applications, and product types. The pro-active approach towards analysis of investment feasibility, significant return on investment, supply chain management, import and export status, consumption volume and end-use offers more value to the overall statistics on the Machine Learning in Finance Market. All factors that help business owners identify the next leg for growth are presented through self-explanatory resources such as charts, tables, and graphic images.
Key Questions Answered:
Our industry professionals are working relentlessly to understand, assemble and deliver timely assessments of the impact of the COVID-19 disaster on many corporations and their clients, to help them make sound business decisions. We acknowledge everyone who is doing their part in this financial and healthcare crisis.
For Customised Template PDF Report: https://www.reporthive.com/request_customization/2167901
Table of Contents
Report Overview: It includes major players of the global Machine Learning in Finance Market covered in the research study, research scope, market segments by type, market segments by application, years considered for the research study, and objectives of the report.
Global Growth Trends: This section focuses on industry trends where market drivers and top market trends are shed light upon. It also provides growth rates of key producers operating in the global Machine Learning in Finance Market. Furthermore, it offers production and capacity analysis where marketing pricing trends, capacity, production, and production value of the global Machine Learning in Finance Market are discussed.
Market Share by Manufacturers: Here, the report provides details about revenue by manufacturers, production and capacity by manufacturers, price by manufacturers, expansion plans, mergers and acquisitions, and products, market entry dates, distribution, and market areas of key manufacturers.
Market Size by Type: This section concentrates on product type segments where production value market share, price, and production market share by product type are discussed.
Market Size by Application: Besides an overview of the global Machine Learning in Finance Market by application, it gives a study on the consumption in the global Machine Learning in Finance Market by application.
Production by Region: Here, the production value growth rate, production growth rate, import and export, and key players of each regional market are provided.
Consumption by Region: This section provides information on the consumption in each regional market studied in the report. The consumption is discussed on the basis of country, application, and product type.
Company Profiles: Almost all leading players of the global Machine Learning in Finance Market are profiled in this section. The analysts have provided information about their recent developments in the global Machine Learning in Finance Market, products, revenue, production, business, and company.
Market Forecast by Production: The production and production value forecasts included in this section are for the global Machine Learning in Finance Market as well as for key regional markets.
Market Forecast by Consumption: The consumption and consumption value forecasts included in this section are for the global Machine Learning in Finance Market as well as for key regional markets.
Value Chain and Sales Analysis: It deeply analyzes customers, distributors, sales channels, and the value chain of the global Machine Learning in Finance Market.
Key Findings: This section gives a quick look at important findings of the research study.
About Us: Report Hive Research delivers strategic market research reports, statistical surveys, industry analysis and forecast data on products and services, markets and companies. Our clientele comprises a mix of global business leaders, government organizations, SMEs, individuals and start-ups, top management consulting firms, universities, etc. Our library of 700,000+ reports targets high-growth emerging markets in the USA, Europe, Middle East, Africa and Asia Pacific, covering industries like IT, Telecom, Semiconductor, Chemical, Healthcare, Pharmaceutical, Energy and Power, Manufacturing, Automotive and Transportation, Food and Beverages, etc. This large collection of insightful reports assists clients to stay ahead of time and competition. We help in business decision-making on aspects such as market entry strategies, market sizing, market share analysis, sales and revenue, technology trends, competitive analysis, product portfolio, and application analysis, etc.
Contact Us:
Report Hive Research
500, North Michigan Avenue,
Suite 6014,
Chicago, IL 60611,
United States
Website: https://www.reporthive.com
Email: [emailprotected]
Phone: +1 312-604-7084
The startup making deep learning possible without specialized hardware – MIT Technology Review
Posted: at 4:47 pm
GPUs became the hardware of choice for deep learning largely by coincidence. The chips were initially designed to quickly render graphics in applications such as video games. Unlike CPUs, which have four to eight complex cores for doing a variety of computation, GPUs have hundreds of simple cores that can perform only specific operations, but the cores can tackle their operations at the same time rather than one after another, shrinking the time it takes to complete an intensive computation.
It didn't take long for the AI research community to realize that this massive parallelization also makes GPUs great for deep learning. Like graphics rendering, deep learning involves simple mathematical calculations performed hundreds of thousands of times. In 2011, in a collaboration with chipmaker Nvidia, Google found that a computer vision model it had trained on 2,000 CPUs to distinguish cats from people could achieve the same performance when trained on only 12 GPUs. GPUs became the de facto chip for model training and inferencing, the computational process that happens when a trained model is used for the tasks it was trained for.
But GPUs also aren't perfect for deep learning. For one thing, they cannot function as a standalone chip. Because they are limited in the types of operations they can perform, they must be attached to CPUs for handling everything else. GPUs also have a limited amount of cache memory, the data storage area nearest a chip's processors. This means the bulk of the data is stored off-chip and must be retrieved when it is time for processing. The back-and-forth data flow ends up being a bottleneck for computation, capping the speed at which GPUs can run deep-learning algorithms.
In recent years, dozens of companies have cropped up to design AI chips that circumvent these problems. The trouble is, the more specialized the hardware, the more expensive it becomes.
So Neural Magic intends to buck this trend. Instead of tinkering with the hardware, the company modified the software. It redesigned deep-learning algorithms to run more efficiently on a CPU by utilizing the chip's large available memory and complex cores. While the approach loses the speed achieved through a GPU's parallelization, it reportedly gains back about the same amount of time by eliminating the need to ferry data on and off the chip. The algorithms can run on CPUs at GPU speeds, the company says, but at a fraction of the cost. "It sounds like what they have done is figured out a way to take advantage of the memory of the CPU in a way that people haven't before," Thompson says.
Neural Magic believes there may be a few reasons why no one took this approach previously. First, it's counterintuitive. The idea that deep learning needs specialized hardware is so entrenched that other approaches may easily be overlooked. Second, applying AI in industry is still relatively new, and companies are just beginning to look for easier ways to deploy deep-learning algorithms. But whether the demand is deep enough for Neural Magic to take off is still unclear. The firm has been beta-testing its product with around 10 companies, only a sliver of the broader AI industry.
Neural Magic currently offers its technique for inferencing tasks in computer vision. Clients must still train their models on specialized hardware but can then use Neural Magic's software to convert the trained model into a CPU-compatible format. One client, a big manufacturer of microscopy equipment, is now trialing this approach for adding on-device AI capabilities to its microscopes, says Shavit. Because the microscopes already come with a CPU, they won't need any additional hardware. By contrast, using a GPU-based deep-learning model would require the equipment to be bulkier and more power hungry.
Another client wants to use Neural Magic to process security camera footage. That would enable it to monitor the traffic in and out of a building using computers already available on site; otherwise it might have to send the footage to the cloud, which could introduce privacy issues, or acquire special hardware for every building it monitors.
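Neural Magic's own tooling is not shown in the article; purely as a generic illustration of the "train on specialized hardware, then run inference on a plain CPU" workflow both clients rely on, one could export a trained PyTorch model to ONNX and run it with onnxruntime's CPU provider. The model, file name and input shape below are placeholders, not Neural Magic's actual API.

```python
# Generic illustration (not Neural Magic's API): export a trained PyTorch
# model to ONNX, then run inference on a plain CPU with onnxruntime.
import numpy as np
import torch
import torchvision.models as models
import onnxruntime as ort

model = models.resnet18(weights=None).eval()           # stand-in for a trained model
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["input"], output_names=["logits"])

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
logits = session.run(None, {"input": np.random.rand(1, 3, 224, 224).astype(np.float32)})[0]
print(logits.shape)  # (1, 1000)
```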
Shavit says inferencing is also only the beginning. Neural Magic plans to expand its offerings in the future to help companies train their AI models on CPUs as well. "We believe 10 to 20 years from now, CPUs will be the actual fabric for running machine-learning algorithms," he says.
Thompson isn't so sure. "The economics have really changed around chip production, and that is going to lead to a lot more specialization," he says. Additionally, while Neural Magic's technique gets more performance out of existing hardware, fundamental hardware advancements will still be the only way to continue driving computing forward. "This sounds like a really good way to improve performance in neural networks," he says. "But we want to improve not just neural networks but also computing overall."
Predicting and elucidating the etiology of fatty liver disease: A machine learning modeling and validation study in the IMI DIRECT cohorts. – DocWire…
Posted: at 4:47 pm
Predicting and elucidating the etiology of fatty liver disease: A machine learning modeling and validation study in the IMI DIRECT cohorts.
PLoS Med. 2020 Jun;17(6):e1003149
Authors: Atabaki-Pasdar N, Ohlsson M, Viuela A, Frau F, Pomares-Millan H, Haid M, Jones AG, Thomas EL, Koivula RW, Kurbasic A, Mutie PM, Fitipaldi H, Fernandez J, Dawed AY, Giordano GN, Forgie IM, McDonald TJ, Rutters F, Cederberg H, Chabanova E, Dale M, Masi F, Thomas CE, Allin KH, Hansen TH, Heggie A, Hong MG, Elders PJM, Kennedy G, Kokkola T, Pedersen HK, Mahajan A, McEvoy D, Pattou F, Raverdy V, Hussler RS, Sharma S, Thomsen HS, Vangipurapu J, Vestergaard H, t Hart LM, Adamski J, Musholt PB, Brage S, Brunak S, Dermitzakis E, Frost G, Hansen T, Laakso M, Pedersen O, Ridderstrle M, Ruetten H, Hattersley AT, Walker M, Beulens JWJ, Mari A, Schwenk JM, Gupta R, McCarthy MI, Pearson ER, Bell JD, Pavo I, Franks PW
Abstract
BACKGROUND: Non-alcoholic fatty liver disease (NAFLD) is highly prevalent and causes serious health complications in individuals with and without type 2 diabetes (T2D). Early diagnosis of NAFLD is important, as this can help prevent irreversible damage to the liver and, ultimately, hepatocellular carcinomas. We sought to expand etiological understanding and develop a diagnostic tool for NAFLD using machine learning.
METHODS AND FINDINGS: We utilized the baseline data from IMI DIRECT, a multicenter prospective cohort study of 3,029 European-ancestry adults recently diagnosed with T2D (n = 795) or at high risk of developing the disease (n = 2,234). Multi-omics (genetic, transcriptomic, proteomic, and metabolomic) and clinical (liver enzymes and other serological biomarkers, anthropometry, measures of beta-cell function, insulin sensitivity, and lifestyle) data comprised the key input variables. The models were trained on MRI-image-derived liver fat content (<5% or ≥5%) available for 1,514 participants. We applied LASSO (least absolute shrinkage and selection operator) to select features from the different layers of omics data and random forest analysis to develop the models. The prediction models included clinical and omics variables separately or in combination. A model including all omics and clinical variables yielded a cross-validated receiver operating characteristic area under the curve (ROCAUC) of 0.84 (95% CI 0.82, 0.86; p < 0.001), which compared with a ROCAUC of 0.82 (95% CI 0.81, 0.83; p < 0.001) for a model including 9 clinically accessible variables. The IMI DIRECT prediction models outperformed existing noninvasive NAFLD prediction tools. One limitation is that these analyses were performed in adults of European ancestry residing in northern Europe, and it is unknown how well these findings will translate to people of other ancestries and exposed to environmental risk factors that differ from those of the present cohort. Another key limitation of this study is that the prediction was done on a binary outcome of liver fat quantity (<5% or ≥5%) rather than a continuous one.
CONCLUSIONS: In this study, we developed several models with different combinations of clinical and omics data and identified biological features that appear to be associated with liver fat accumulation. In general, the clinical variables showed better prediction ability than the complex omics variables. However, the combination of omics and clinical variables yielded the highest accuracy. We have incorporated the developed clinical models into a web interface (see: https://www.predictliverfat.org/) and made it available to the community.
TRIAL REGISTRATION: ClinicalTrials.gov NCT03814915.
PMID: 32559194 [PubMed as supplied by publisher]
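The paper's own code is not reproduced here; the sketch below only mirrors the two-stage approach the abstract describes (an L1-penalized selection step standing in for LASSO, followed by a random forest, scored by cross-validated ROC AUC), using synthetic data in place of the IMI DIRECT clinical and omics variables.

```python
# Sketch of the two-stage approach from the abstract on synthetic data:
# L1-based feature selection, then a random forest, scored by cross-validated ROC AUC.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=1500, n_features=200, n_informative=15, random_state=0)

pipeline = make_pipeline(
    SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.1)),
    RandomForestClassifier(n_estimators=500, random_state=0),
)
auc = cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated ROC AUC: {auc:.2f}")
```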
This Startup Is Trying to Foster an AI Art Scene in Korea – Adweek
Posted: at 4:47 pm
A South Korean startup is holding a competition to fill one of the world's first galleries for machine learning-generated art, in a bid to foster a nascent artificial intelligence creativity scene in the country.
The company, Pulse9, which makes AI-powered graphics tools, is soliciting art pieces that make use of machine learning tech in some way, whether to produce an image out of whole cloth or to restyle or supplement an artist's work, through the end of September.
The project is a notable addition to a burgeoning global community of technologists, new media artists and other creatives who are exploring the bounds of machine creativity through art, spurred by recent research advances that have made AI-generated content more realistic and elaborate than ever.
The medium had perhaps its biggest mainstream breakthrough in 2018, when Christie's auction house sold its first piece of AI-generated art for nearly half a million dollars: a classical-style painting of a fictional character named Edmond de Belamy. That was also the moment that inspired the team at Pulse9, which had launched an AI tool to help draw and color a Korean style of digital comic called webtoons earlier that year.
"We asked ourselves, 'Could we also sell paintings?' and we started looking for art platform companies to work with," Pulse9 spokesperson Yeongeun Park said.
The company teamed with an art platform called Art Together on a series of crowdfunded AI pieces that proved to be more popular than expected; one hit its goal a full week ahead of schedule, and the team began considering parlaying the work into a bigger project.
"With great attention from the public and the good funding results, we gained confidence in pioneering the Korean AI art market," Park said. "So, we eventually decided to open our own AI art gallery."
The company acknowledges that questions of authorship and originality still hang over the concept of AI art but stresses that the gallery is about collaboration between humans and technology rather than AI simply replacing artists. Even pieces generated entirely by machines require a host of human touches, whether it's curating a collection of visuals for training or adjusting training regimens to achieve a desired result.
"The theme of this competition is 'Can AI art enhance human artistic creativity?'" Park said. "We hope that this competition will also be an opportunity to discover creative, competent and new artists who would like to engage AI tools as a new artistic medium in their artwork."
The goal is to establish AIA Gallery as a well-recognized institution in the art world and educate people on the potential for AI-powered creativity. The organizers hope the process will also inspire other efforts and create an AI creativity hub in the country.
"Groups or communities of AI artists have formed and are gradually growing, especially overseas," Park said. "In the case of Korea, the AI art market has not been well recognized yet, but we've been continuing to play our role with our own initiative."
The AIA Gallery recently partnered with one of the leading startups in the new space, Playform, which is led by Rutgers University Art and AI Lab director Ahmed Elgammal (after learning about the company from an Adweek article).
Progress in generative AI creativity isn't confined to the art world, either. Agencies have started to experiment with various AI-generated graphics in campaigns, and brands have filed a slew of patent applications around the central technology powering the revolution: a neural net structure called a generative adversarial network.
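As a rough illustration of how a generative adversarial network is structured (not any particular product's model), the sketch below pits a small generator against a discriminator for one training step; the layer sizes, image dimensions and hyperparameters are placeholder assumptions.

```python
# Minimal GAN training step: a generator learns to produce images the
# discriminator cannot tell apart from real ones.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, image_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(image_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images):                     # real_images: (batch, image_dim) in [-1, 1]
    batch = real_images.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator step: tell real images from generated ones.
    fake = G(torch.randn(batch, latent_dim)).detach()
    loss_d = bce(D(real_images), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: fool the discriminator into scoring fakes as real.
    fake = G(torch.randn(batch, latent_dim))
    loss_g = bce(D(fake), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

print(train_step(torch.rand(16, image_dim) * 2 - 1))  # placeholder "real" batch
```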
Scientists use AI and drone images to interpret crop health – Earth.com
Posted: at 4:47 pm
Scientists at the International Center for Tropical Agriculture (CIAT) are analyzing drone images captured above the soil to examine what is going on below. With the help of machine learning, the experts are revolutionizing the way that farmers and breeders monitor crop health.
The Pheno-i platform provides real-time data that can be used to determine how root crops are responding to heat or drought.
The main objective of the phenotyping platform is to contribute to sustainable agriculture and the development of more climate-resilient crops.
Root crops like carrots and potatoes often show no signs of the diseases and deficiencies that affect their growth. The plant leaves may look green and healthy, but that is not always a good indication of what is going on beneath the soil.
Plant breeders have to wait months or years before discovering how crops respond to temperature changes or dry spells. Without the right nutrients or growing conditions, crop health and development can be stifled early on.
"One of the great mysteries for plant breeders is whether what is happening above the ground is the same as what's happening below," said study co-author Michael Selvaraj.
"That poses a big problem for all scientists. You need a lot of data: plant canopy, height, other physical features that take a lot of time and energy, and multiple trials, to capture what is really going on beneath the ground and how healthy the crop really is."
Drone technology is becoming much cheaper, and capturing physical images during crop trials is now easier than ever before. However, analyzing vast quantities of visual information, and then converting it into useful data, has been a major challenge.
The Pheno-i platform merges data from thousands of high-resolution drone images, analyzes them through machine learning, and then produces a spreadsheet. Scientists using the platform can assess crop health and see how plants are responding to external conditions in real-time.
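CIAT has not published the Pheno-i internals in this article; purely as a hedged illustration of turning drone imagery into per-plot, spreadsheet-ready numbers, the sketch below computes a common vegetation index (NDVI) from hypothetical red and near-infrared bands and writes one row per plot to a CSV. NDVI itself is an assumption here, not necessarily what Pheno-i computes.

```python
# Illustration only (not the actual Pheno-i pipeline): per-plot NDVI from a
# drone image's red and near-infrared bands, written to a spreadsheet-style CSV.
import csv
import numpy as np

def ndvi(red, nir):
    """red, nir: float arrays of the same shape, reflectance in [0, 1]."""
    return (nir - red) / (nir + red + 1e-6)

# Hypothetical data: two bands for a field split into 4 plots of 100x100 pixels.
red = np.random.rand(200, 200)
nir = np.random.rand(200, 200)
index = ndvi(red, nir)

with open("plot_health.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["plot", "mean_ndvi"])
    for i, (r, c) in enumerate([(0, 0), (0, 100), (100, 0), (100, 100)]):
        writer.writerow([i + 1, round(float(index[r:r+100, c:c+100].mean()), 3)])
```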
The technology makes it possible for breeders to immediately identify what crops need, such as when they are lacking nutrients or water.
The data also helps scientists determine which crops are more resilient to climate change.
"We're helping breeders to select the best root crop varieties more quickly, so they can breed higher-yielding, more climate-smart varieties for farmers," said Gomez Selvaraj.
"The drone is just the hardware device, but when linked with this precise and rapid analytics platform, we can provide useful and actionable data to accelerate crop productivity."
The study is published in the journal Plant Methods.
By Chrissy Sexton, Earth.com Staff Writer
Tyga's new single 'Vacation' has major relaxing vibes to it: Watch it here – Republic World
Posted: June 19, 2020 at 1:47 pm
Tyga has released his new single, 'Vacation'. The video is directed by Tyga, Frank Borin and Ivanna Borin. It is a pop song with upbeat music. Read on to know more about his latest release.
Tyga has recently dropped his brand new single, titled 'Vacation', and it gives major vibes of having a relaxing time. The 3:11-minute video starts with Tyga sleeping in a floating tube in the ocean. As the camera pans out, he is seen behind glass, and a shark eats the artist in the tube. He then appears at a beach with two women around him. The beach turns into a studio as Tyga gets up, and he is next seen in a car with a lady. The video shows the rapper getting a star on the Hollywood Walk of Fame while paparazzi click his pictures. Tyga is then seen playing basketball as he raps "I wanna be like Mike, fly like Mike," referring to legendary professional basketball player Michael Jordan. He even wins an award at the basketball court.
As the track moves forward, Tyga watches himself on a projector as he boards a plane. He raps that he needs a trip to Jamaica and a house with no neighbours. Next, he stands below a light bulb and switches it off, and his skeleton appears, performing the song. As he switches the light back on, he is seen at a meeting where everyone is arguing aggressively. The video ends where it started, with Tyga sleeping in a floating tube with a smile on his face.
Tyga's single 'Vacation' has currently crossed 160k views on YouTube. It has received 20k likes, just 260 dislikes and more than one thousand comments so far. Fans have praised the song; one user commented, "Tyga never disappoints," while another said, "Tyga dropping hits after hits." Many also applauded the editing and VFX. 'Vacation' is produced by Andrea Saavedra under the banner of UnderWonder Content. Maz Makhani is the director of photography, with Ivanna Borin as the editor.
Tyga is also offering a free vacation to his followers. He provided a number, (323) 402-5545, and urged fans to text him on it. T-Raww mentioned that the lucky winners will go on a vacation financed by him. In a video on his Instagram handle, the rapper stated that he is giving away paid vacations to people who are working to fight the global pandemic or standing up for equality.