
Archive for the ‘Machine Learning’ Category

Automation Continuum – Leveraging AI and ML to Optimise RPA – Analytics Insight

Posted: September 20, 2020 at 10:55 pm



Over the past year, adoption of robotic process automation (essentially advanced macros, or robotic workers designed to automate the most routine, repetitive and time-consuming tasks) has grown significantly. As the technology matures alongside artificial intelligence and machine learning, the most promising future for knowledge workers is one in which the ease of deployment of RPA and the raw power of machine learning combine to create more productive, more intelligent robotic workers.

One of the keys to adoption is that companies prefer not to burden people with a lot of new tools, and instead allow their existing environments to learn. In each case, companies try to work inside whatever UI their staff already use: perhaps adding a widget, or a panel to an existing dashboard, that contains the required data. Building on the current UI, or adding a layer that routes cases to the right person, means workers no longer see the 80% of cases that are automatically delegated and never reach them.

Although RPA today is spreading into nearly every industry, the major adopters of the technology are banks, insurance companies, telecom firms and utilities. This is because organizations in these sectors mostly run legacy systems, and RPA solutions integrate easily with their existing functionality.

Artificial intelligence is essentially about a computer's capacity to imitate human thinking, whether that means recognising an image, solving a problem or holding a conversation.

Facebook's AI Research offers a helpful illustration. There, the social media giant feeds its AI systems with a variety of pictures, and the machines deliver accurate results: shown a photograph of a dog, the system not only identifies it as a dog but also recognises the breed.

RPA is a technology that automates a task based on a specific set of rules and an algorithm. While AI is centred on performing human-level tasks, RPA is essentially software that reduces human effort; it is about saving time for the business and its white-collar workers. Among the most common applications of RPA are moving data from one system to another, payroll processing and forms processing.
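
The rules-based automation described here can be sketched in a few lines. This is an illustrative toy, not any vendor's API: the "systems" below are plain Python lists, and the rules are simple predicates.

```python
# Toy sketch of a rules-based RPA step: copying records from one system
# to another when every rule in a fixed rule set is satisfied.

def transfer_records(source, target, rules):
    """Copy each record whose fields satisfy every rule into the target."""
    moved = 0
    for record in source:
        if all(rule(record) for rule in rules):
            target.append(record)
            moved += 1
    return moved

# Example rules: the kind of deterministic checks classic RPA encodes.
rules = [
    lambda r: r.get("status") == "approved",
    lambda r: r.get("amount", 0) > 0,
]

source = [
    {"id": 1, "status": "approved", "amount": 120},
    {"id": 2, "status": "pending", "amount": 80},
]
target = []
print(transfer_records(source, target, rules))  # prints 1
```

The point of the sketch is that nothing here learns: the rules are fixed in advance, which is exactly the gap the article says AI can fill.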

Although AI is a stride ahead of RPA, the two technologies can take things to the next level when combined. For instance, suppose your reports must be in a particular format before they can be checked, and RPA carries out that job. If an AI system first filters out the poorly formatted or unsuitable documents, the RPA's work becomes much simpler. This joint effort is called the Automation Continuum.
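
A minimal sketch of that Automation Continuum idea: an AI-style filter sits in front of a deterministic RPA step. The `is_well_formatted` check below is a stand-in for a trained classifier, and all names are invented for illustration.

```python
# Hypothetical pipeline: an AI-style check screens out badly formatted
# documents so the RPA step only ever sees clean input.

def is_well_formatted(doc):
    # Stand-in for an ML classifier; here, just a structural check.
    return {"title", "body", "date"} <= doc.keys()

def rpa_process(doc):
    # The deterministic RPA step, e.g. filing the document.
    return f"filed:{doc['title']}"

def pipeline(docs):
    clean = [d for d in docs if is_well_formatted(d)]
    return [rpa_process(d) for d in clean]

docs = [
    {"title": "Invoice 17", "body": "...", "date": "2020-09-20"},
    {"title": "Broken"},  # missing fields, filtered out by the AI step
]
print(pipeline(docs))  # prints ['filed:Invoice 17']
```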

The development of GPT-3 (Generative Pre-trained Transformer 3) is a remarkable innovation that uses AI to exploit the immense amount of language data on the internet. By training an extraordinarily large neural network, GPT-3 can understand and produce both human and programming languages with near-human performance. For example, given a few pairs of legal agreements and plain-English documents, it can begin to automate the task of rewriting legal contracts in plain English. This kind of sophisticated automation was impossible with classic RPA tools that lacked data and state-of-the-art AI.
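
The few-shot setup described here amounts to prompt construction. In the sketch below, the example pair and the final model call are hypothetical; a real system would send the assembled prompt to GPT-3 or a similar large language model and read back its completion.

```python
# Sketch of few-shot prompting: pairs of legal text and plain-English
# text are concatenated into a prompt, and a GPT-3-style model (not
# called here) would complete the pattern for a new input.

EXAMPLES = [
    ("The party of the first part shall indemnify the party of the second part.",
     "You agree to cover the other side's losses."),
]

def build_prompt(examples, new_legal_text):
    parts = []
    for legal, plain in examples:
        parts.append(f"Legal: {legal}\nPlain: {plain}\n")
    parts.append(f"Legal: {new_legal_text}\nPlain:")
    return "\n".join(parts)

prompt = build_prompt(EXAMPLES, "Tenant shall remit payment on the first day of each month.")
# A real system would now send `prompt` to the model; the completion
# after the final "Plain:" would be the plain-English translation.
print(prompt.endswith("Plain:"))  # prints True
```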

Many activities, while repetitive, require understanding and judgment from a human with knowledge and experience, and this is where the coming generation of RPA tools can use AI. Humans are very good at answering the question "What else is significant or interesting?". AI will help RPA tools go further than simply adding more variables to a query: it will let RPA take the next step and answer that "What else?" question, expanding the scope of what these tools can do.

Even giant organizations like IBM, Microsoft and SAP are tapping more and more into RPA, increasing the awareness and foothold of RPA software. Moreover, new vendors are emerging at a fast pace and have begun to establish their presence in the industry.

However, it isn't just RPA that is the talk of the town; the role of AI is also one of the most significant developments at present. The concept of the Automation Continuum is gaining popularity among many companies, as the industry sees what the combined capabilities can do: AI can read, listen and analyse, then feed data into bots that create output, package it and send it off. Ultimately, RPA and AI are two significant technologies that companies can use to advance their digital transformation.

With organizations going through digital transformation, and perhaps accelerating their efforts to deal with the effects of Covid-19 on their workforces, data is becoming increasingly important. The optimization of RPA will benefit greatly from increased digitization. As organizations create data lakes and other new data stores that are accessible through APIs, it is critical to give RPA tools access to them so they can be optimized.

While RPA has delivered noteworthy benefits in automation, the coming generation of RPA will deliver more by using AI and machine learning for optimization. This is not about faster automation, but about better automation.

Read the original post:

Automation Continuum - Leveraging AI and ML to Optimise RPA - Analytics Insight

Written by admin

September 20th, 2020 at 10:55 pm

Posted in Machine Learning

UT Austin Selected as Home of National AI Institute Focused on Machine Learning – UT News | The University of Texas at Austin

Posted: August 27, 2020 at 3:50 am



AUSTIN, Texas: The National Science Foundation has selected The University of Texas at Austin to lead the NSF AI Institute for Foundations of Machine Learning, bolstering the university's existing strengths in this emerging field. Machine learning is the technology that drives AI systems, enabling them to acquire knowledge and make predictions in complex environments. This technology has the potential to transform everything from transportation to entertainment to health care.

UT Austin, already among the world's top universities for artificial intelligence, is poised to develop entirely new classes of algorithms that will lead to more sophisticated and beneficial AI technologies. The university will lead a larger team of researchers that includes the University of Washington, Wichita State University and Microsoft Research.

"This is another important step in our university's ascension as a world leader in machine learning and tech innovation as a whole, and I am grateful to the National Science Foundation for their profound support," said UT Austin interim President Jay Hartzell. "Many of the world's greatest problems and challenges can be solved with the assistance of artificial intelligence, and it's only fitting, given UT's history of accomplishment in this area along with the booming tech sector in Austin, that this new NSF institute be housed right here on the Forty Acres."

UT Austin is simultaneously establishing a permanent base for campuswide machine learning research called the Machine Learning Laboratory. It will house the new AI institute and bring together computer and data scientists, mathematicians, roboticists, engineers and ethicists to meet the institute's research goals while also working collaboratively on other interdisciplinary projects. Computer science professor Adam Klivans, who led the effort to win the NSF AI institute competition, will direct both the new institute and the Machine Learning Lab. Alex Dimakis, associate professor of electrical and computer engineering, will serve as the AI institute's co-director.

Machine learning can be used to predict which of thousands of recently formulated drugs might be most effective as a COVID-19 therapeutic, bypassing exhaustive laboratory trial and error, Klivans said. Modern datasets, however, are often diffuse or noisy and tend to confound current techniques. Our AI institute will dig deep into the foundations of machine learning so that new AI systems will be robust to these challenges.

Additionally, many advanced AI applications are limited by computational constraints. For example, algorithms designed to help machines recognize, categorize and label images cant keep up with the massive amount of video data that people upload to the internet every day, and advances in this field could have implications across multiple industries.

Dimakis notes that algorithms will be designed to train video models efficiently. For example, Facebook, one of the AI institute's industry partners, is interested in using these algorithms to make its platform more accessible to people with visual impairments. And in a partnership with Dell Medical School, AI institute researchers will test these algorithms to expedite turnaround time for medical imaging diagnostics, possibly reducing the time it takes for patients to get critical assessments and treatment.

The NSF is investing more than $100 million in five new AI institutes nationwide, including the $20 million project based at UT Austin to advance the foundations of machine learning.

In addition to Facebook, Netflix, YouTube, Dell Technologies and the city of Austin have signed on to transfer this research into practice.

The institute will also pursue the creation of an online masters degree in AI, along with undergraduate research programming and online AI courses for high schoolers and working professionals.

Austin-based tech entrepreneurs Zaib and Amir Husain, both UT Austin alumni, are supporting the new Machine Learning Laboratory with a generous donation to sustain its long-term mission.

The universitys strengths in computer science, engineering, public policy, business and law can help drive applications of AI, Amir Husain said. And Austins booming tech scene is destined to be a major driver for the local and national economy for decades to come.

The Machine Learning Laboratory is based in the Department of Computer Science and is a collaboration among faculty, researchers and students from across the university, including Texas Computing; Texas Robotics; the Department of Statistics and Data Sciences; the Department of Mathematics; the Department of Electrical and Computer Engineering; the Department of Information, Risk & Operations Management; the School of Information; the Good Systems AI ethics grand challenge team; the Oden Institute for Computational Engineering and Sciences; and the Texas Advanced Computing Center (TACC).

See the article here:

UT Austin Selected as Home of National AI Institute Focused on Machine Learning - UT News | The University of Texas at Austin

Written by admin

August 27th, 2020 at 3:50 am

Posted in Machine Learning

Participation-washing could be the next dangerous fad in machine learning – MIT Technology Review

Posted: at 3:50 am



More promising is the idea of participation as justice. Here, all members of the design process work together in tightly coupled relationships with frequent communication. Participation as justice is a long-term commitment that focuses on designing products guided by people from diverse backgrounds and communities, including the disability community, which has long played a leading role here. This concept has social and political importance, but capitalist market structures make it almost impossible to implement well.

Machine learning extends the tech industry's broader priorities, which center on scale and extraction. That means participatory machine learning is, for now, an oxymoron. By default, most machine-learning systems have the ability to surveil, oppress, and coerce (including in the workplace). These systems also have ways to manufacture consent; for example, by requiring users to opt in to surveillance systems in order to use certain technologies, or by implementing default settings that discourage them from exercising their right to privacy.

Given that, it's no surprise that machine learning fails to account for existing power dynamics and takes an extractive approach to collaboration. If we're not careful, participatory machine learning could follow the path of AI ethics and become just another fad that's used to legitimize injustice.

How can we avoid these dangers? There is no simple answer. But here are four suggestions:

Recognize participation as work. Many people already use machine-learning systems as they go about their day. Much of this labor maintains and improves these systems and is therefore valuable to the systems' owners. To acknowledge that, all users should be asked for consent and provided with ways to opt out of any system. If they choose to participate, they should be offered compensation. Doing this could mean clarifying when and how data generated by a user's behavior will be used for training purposes (for example, via a banner in Google Maps or an opt-in notification). It would also mean providing appropriate support for content moderators, fairly compensating ghost workers, and developing monetary or nonmonetary reward systems to compensate users for their data and labor.

Make participation context specific. Rather than trying to use a one-size-fits-all approach, technologists must be aware of the specific contexts in which they operate. For example, when designing a system to predict youth and gang violence, technologists should continuously reevaluate the ways in which they build on lived experience and domain expertise, and collaborate with the people they design for. This is particularly important as the context of a project changes over time. Documenting even small shifts in process and context can form a knowledge base for long-term, effective participation. For example, should only doctors be consulted in the design of a machine-learning system for clinical care, or should nurses and patients be included too? Making it clear why and how certain communities were involved makes such decisions and relationships transparent, accountable, and actionable.

Plan for long-term participation from the start. People are more likely to stay engaged in processes over time if they're able to share and gain knowledge, as opposed to having it extracted from them. This can be difficult to achieve in machine learning, particularly for proprietary design cases. Here, it's worth acknowledging the tensions that complicate long-term participation in machine learning, and recognizing that cooperation and justice do not scale in frictionless ways. These values require constant maintenance and must be articulated over and over again in new contexts.

Learn from past mistakes. More harm can be done by replicating the ways of thinking that originally produced harmful technology. We as researchers need to enhance our capacity for lateral thinking across applications and professions. To facilitate that, the machine-learning and design community could develop a searchable database to highlight failures of design participation (such as Sidewalk Labs' waterfront project in Toronto). These failures could be cross-referenced with socio-structural concepts (such as issues pertaining to racial inequality). This database should cover design projects in all sectors and domains, not just those in machine learning, and explicitly acknowledge absences and outliers. These edge cases are often the ones we can learn the most from.

It's exciting to see the machine-learning community embrace questions of justice and equity. But the answers shouldn't bank on participation alone. The desire for a silver bullet has plagued the tech community for too long. It's time to embrace the complexity that comes with challenging the extractive capitalist logic of machine learning.

Mona Sloane is a sociologist based at New York University. She works on design inequality in the context of AI design and policy.

Here is the original post:

Participation-washing could be the next dangerous fad in machine learning - MIT Technology Review

Written by admin

August 27th, 2020 at 3:50 am

Posted in Machine Learning

Getting to the heart of machine learning and complex humans – The Irish Times

Posted: at 3:50 am




You recently made a big discovery that an academic library containing millions of images used to train artificial intelligence systems had privacy and ethics issues, and that it included racist, misogynistic and other offensive content.

Yes, I worked on this with Vinay Prabhu, a chief scientist at UnifyID, a privacy start-up in Silicon Valley, on the 80-million-image dataset curated by the Massachusetts Institute of Technology. We spent months looking through this dataset, and we found thousands of images labelled with insults and derogatory terms.

Using this kind of content to build and train artificial intelligence systems, including face recognition systems, would embed harmful stereotypes and prejudices and could have grave consequences for individuals in the real world.

What happened when you published the findings?

The media picked up on it, so it got a lot of publicity. MIT withdrew the database and urged people to delete their copies of the data. That was humbling and a nice result.

How does this finding fit in to your PhD research?

I study embodied cognitive science, which is at the heart of how people interact and go about their daily lives and what it means to be a person. The background assumption is that people are ambiguous, they come to be who they are through interactions with other people.

It is a different perspective to traditional cognitive science, which is all about the brain and rationality. My research looks at how artificial intelligence and machine learning has limits in how it can understand and predict the complex messiness of human behaviour and social outcomes.

Can you give me an example?

If you take the Shazam app, it works very well to recognise a piece of music that you play to it. It searches for the pattern of the music in a database, and this narrow search suits the machine approach. But predicting a social outcome from human characteristics is very different.

As humans we have infinite potentials and can react to situations in different ways, and a machine that relies on a countable set of parameters cannot predict whether someone is a good hire or at risk of committing a crime in the future. Humans and our interactions represent more than just a few parameters. My research looks at existing machine learning systems and the ethics of this dilemma.
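
The contrast Birhane draws can be made concrete. Shazam-style recognition is a narrow lookup against a fixed database of known patterns, in the spirit of the toy sketch below (the fingerprints and titles are invented); predicting human behaviour has no such closed set of answers.

```python
# Toy fingerprint lookup: the task is well posed because every valid
# answer already exists in the database.

database = {
    "a1b2": "Song A",
    "c3d4": "Song B",
}

def identify(fingerprint):
    return database.get(fingerprint, "unknown")

print(identify("c3d4"))  # prints Song B
print(identify("zzzz"))  # prints unknown
```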

How did you get into this work?

I started in physics back home in Ethiopia, but when I came to Ireland there was so much paperwork and so many exams to translate my Ethiopian qualification that I decided to start from scratch.

So I studied psychology and philosophy, and then did a masters. The masters course had lots of elements: neuroscience, philosophy, anthropology, and computer science, where we built computational models of various cognitive faculties. It is where I really found my place.

How has Covid-19 affected your research?

At the start of the pandemic, I thought this might be a chance to write up a lot of my project, but I found it hard to work at home and to unhook my mind from what was going on around the world.

I also missed the social side, going for coffee and talking with my colleagues about work and everything else. So I am glad to be back in the lab now and seeing my lab mates even at a distance.

Original post:

Getting to the heart of machine learning and complex humans - The Irish Times

Written by admin

August 27th, 2020 at 3:50 am

Posted in Machine Learning

Air Force Taps Machine Learning to Speed Up Flight Certifications – Nextgov

Posted: at 3:50 am



Machine learning is transforming the way an Air Force office analyzes and certifies new flight configurations.

The Air Force SEEK EAGLE Office sets standards for safe flight configurations by testing and looking at historical data to see how different stores (like a weapon system attached to an F-16) affect flight. A project AFSEO developed along with industry partners can now automate up to 80% of requests for analysis, according to the office's chief data officer, Donna Cotton.

"The application is kind of like an eager junior engineer consulting a senior engineer," Cotton said. "It makes the straightforward calls without any input, but in the hard cases it walks into the senior engineer's office and says: 'Hey, I did a bunch of research and this is what I found out. Can you give me your opinion?'"

Cotton spoke at a Tuesday webinar hosted by Tamr, one of the industry partners involved in the project. Tamr announced July 30 that AFSEO had awarded the company a $60 million contract for its machine learning application. Two other companies, Dell and Cloudera, helped AFSEO take decades of historical data from simulations, performance studies and the like that were siloed across various specialties, and organize it into a searchable data lake.

On top of this new data architecture, the machine learning application provided by Tamr searches through all the historical data to find past records that can help answer new safety recommendation requests automatically.

This tool is critical because the vast majority of AFSEO's flight certification recommendations are made by analogy, meaning they rely on previous data rather than new flight tests. But in the past, the data was disorganized and lacked unification, which made tracking down these helpful records a challenge for engineers.

Now, a cleaner AFSEO data lake cuts the amount of time engineers waste looking for the information they need. Machine learning further speeds up the process by generating safety reports automatically while still keeping professional engineers in the loop. Even when engineers need to produce original research, the machine learning application can smooth the process by collecting related records to serve as a jumping-off point.
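
One plausible shape for this kind of analogy-based retrieval is a similarity search over past records. The sketch below uses a simple Jaccard keyword overlap and invented record titles; AFSEO's actual system is proprietary and presumably far more sophisticated.

```python
# Sketch of "certification by analogy": given a new request, retrieve the
# most similar historical records so an engineer can start from precedent.

def jaccard(a, b):
    """Keyword-overlap similarity between two short text descriptions."""
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def retrieve(request, records, k=2):
    """Return the k historical records most similar to the new request."""
    scored = sorted(records, key=lambda r: jaccard(request, r), reverse=True)
    return scored[:k]

history = [
    "F-16 wing store vibration analysis",
    "F-15 fuel tank separation test",
    "F-16 wing store separation study",
]
print(retrieve("F-16 wing store separation", history, k=1))
# prints ['F-16 wing store separation study']
```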

The new process helps AFSEO avoid doing costly flight tests while also increasing confidence that the team is making the safety certification correctly with all the information available to them, Cotton said.

"We are able to be more productive," Cotton said. "It's saving us a lot of money because for us, it's not about profit, but it's about hours. It's about how much effort we are going to have to use to solve or to answer a new request."

See the rest here:

Air Force Taps Machine Learning to Speed Up Flight Certifications - Nextgov

Written by admin

August 27th, 2020 at 3:50 am

Posted in Machine Learning

The Role of Artificial Intelligence and Machine Learning in the… – Insurance CIO Outlook

Posted: at 3:50 am



Machine learning has proven useful to insurance agents and brokers in various ways. These include capturing the knowledge, skills, and expertise of a generation of insurance staff before they retire in the next 5 to 10 years, and using it to train new employees.

FREMONT, CA: Technology has become the dominant force across all businesses in the last few years. Disruptive technologies like artificial intelligence (AI), machine learning, and natural language processing are improving rapidly, evolving from theoretical to practical applications. These technologies have also made an impact on insurance agents and brokers. Many people continue to view technology as their foe: they either believe that machines will eventually replace them, or that a machine can never do their job better than them. While this may not be entirely true, some aspects of it are relatable. For instance, a machine will never be able to provide real-time advice the way a live agent does. However, low-cost and easy-to-use platforms are now available that allow agents and brokers to take advantage of this technology to enhance the delivery of advice and expertise to prospects and clients.

Employee Augmentation

Machine learning can capture the knowledge, skills, and expertise of a generation of insurance staff before they retire in the next 5 to 10 years, and use it to train new employees.

Personalized Digital Answers

It helps provide personalized answers to a wide range of insurance questions. Digital customers want answers to their questions anytime, not just when an agent's office is open.

Digital Account Review

It helps create and deliver a digital annual account review for personal lines or small commercial insurance accounts. A robust analysis leads to client satisfaction, creates cross-selling opportunities, and reduces errors-and-omissions problems for the agency.

Many believe that artificial intelligence and machine learning will be the end of insurance agents as a trusted source for adequate protection against financial losses. However, these technologies are a threat only for insurance agents that are simply order takers. Insurance agents and brokers that embrace the technologies will always find opportunities to grow.

These emerging technologies must not be seen as a bane but as a boon. Insurance agents and brokers need to work in tandem with upgrades in technology and leverage them to best effect. The technology holds increased potential to enhance customer satisfaction and offer a higher quality of service.

See Also:Top Machine Learning Companies

View post:

The Role of Artificial Intelligence and Machine Learning in the... - Insurance CIO Outlook

Written by admin

August 27th, 2020 at 3:50 am

Posted in Machine Learning

AI and Machine Learning Network Fetch.ai Partners Open-Source Blockchain Protocol Waves to Conduct R&D on DLT – Crowdfund Insider

Posted: at 3:50 am



The decentralized finance (DeFi) space is growing rapidly. Oracle protocols like Chainlink, BAND and Gravity have experienced a significant increase in adoption in a cryptocurrency market that's still highly speculative and plagued by market manipulation and wash trading.

Fetch.ai, an open-access machine learning network established by former DeepMind investors and software engineers, has teamed up with Waves, an established, open-source blockchain protocol that provides developer tools for Web 3.0 applications.

As mentioned in an update shared with Crowdfund Insider:

[Fetch.ai and Waves will] conduct joint R&D for the purpose of bringing increased multi-chain capabilities to Fetch.ai's system of autonomous economic agents (AEA). [They will also] push further into bringing DeFi cross-chain by connecting with Waves' blockchain-agnostic and interoperable decentralized cross-chain and oracle network, Gravity.

As explained in the announcement, the integration with Gravity will enable Fetch.ai's Autonomous Economic Agents to gain access to data sources or feeds for several different market pairs, commodities, indices, and futures.

Fetch.ai and Waves aim to achieve closer integration with Gravity in order to provide seamless interoperability to Fetch.ai, making its blockchain-based AI and machine learning (ML) solutions accessible across various distributed ledger technology (DLT) networks.

As stated in the update, the integration will help open up new ways for all Gravity-connected communities to use Fetch.ai's ML functionality within the comfort of their respective ecosystems.
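
One way to picture an autonomous economic agent consuming an oracle feed is the sketch below. Both classes are invented for illustration and do not reflect the actual Fetch.ai or Gravity APIs.

```python
# Hypothetical agent-plus-oracle sketch: the agent reads a price from the
# feed and applies a trivial decision rule.

class OracleFeed:
    def __init__(self, prices):
        self._prices = prices  # pair -> latest reported price

    def latest(self, pair):
        return self._prices[pair]

class Agent:
    def __init__(self, feed, pair, threshold):
        self.feed, self.pair, self.threshold = feed, pair, threshold

    def act(self):
        # Buy when the oracle price is below the agent's threshold.
        price = self.feed.latest(self.pair)
        return "buy" if price < self.threshold else "hold"

feed = OracleFeed({"BTC/USD": 11500.0})
agent = Agent(feed, "BTC/USD", threshold=12000.0)
print(agent.act())  # prints buy
```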

As noted in another update shared with CI, a PwC report predicts that AI and related ML technologies may contribute more than $15 trillion to the world economy from 2017 through 2030. Gartner reveals that during 2019, 37% of organizations had adopted some type of AI into their business operations.

In other DeFi news, Chainlink competitor Band Protocol is securing oracle integration with Nervos, which is a leading Chinese blockchain project.

As confirmed in a release:

Nervos is a Chinese public blockchain that's tooling up for a big DeFi push. The project is building DeFi platforms with China Merchants Bank International and Huobi, and also became one of the first public blockchains to integrate with China's BSN. Amid the DeFi surge, Nervos is integrating Band's oracles to give developers access to real-world data like crypto price feeds.


See the original post:

AI and Machine Learning Network Fetch.ai Partners Open-Source Blockchain Protocol Waves to Conduct R&D on DLT - Crowdfund Insider

Written by admin

August 27th, 2020 at 3:50 am

Posted in Machine Learning

AI may not predict the next pandemic, but big data and machine learning can fight this one – ZDNet

Posted: at 3:50 am



In April, at the height of the lockdown, computer-science professor Àlex Arenas predicted that a second wave of coronavirus was highly possible this summer in Spain.

At the time, many scientists were still confident that high temperature and humidity would slow the impact and spread of the virus over the summer months, as happens with seasonal flu.

Unfortunately, Arenas' predictions have turned out to be accurate. Madrid, the Basque country, Aragon, Catalonia, and other Spanish regions are currently dealing with a surge in COVID-19 cases, despite the use of masks, hand-washing and social distancing.

SEE: Managing AI and ML in the enterprise 2020: Tech leaders increase project development and implementation (TechRepublic Premium)

Admittedly, August is not as bad as March for Spain, but it's still not a situation many foresaw.

Arenas' predictions were based on mathematical modeling and underline the important role technology can play in the timing of decisions about the virus and understanding its spread.

"The virus does as we do," says Arenas. So analyzing epidemiological, environmental and mobility data becomes crucial to taking the right actions to contain the spread of the virus.

To help deal with the pandemic, the Catalan government has created a public-private health observatory. It brings together the efforts of the administration, the Hospital Germans Trias i Pujol and several research centers, such as the Center of Innovation for Data Tech and Artificial Intelligence (CIDAI), the Technology Center Eurecat, the Barcelona Supercomputing Center (BSC), the University Rovira i Virgili and the University of Girona, as well as the Mobile World Capital Barcelona.

The Mobile World Capital Barcelona brings to bear the GSMA AI for Impact initiative, which is guided by a taskforce of 20 mobile operators and an advisory panel from 12 UN agencies and partners.

Beyond the institutions, there is a real desire to join forces to respond to the virus using technology. Dani Marco, director general of innovation and the digital economy in the government of Catalonia, makes it clear that "having comparative data on the flu and SARS-CoV-2, mobility, meteorology and population census does help us react quicker and more efficiently against the pandemic".

Data comes from public databases and also from mobile operators, which provide mobility records. It is all anonymized to avoid privacy concerns.

However, the diversity of the sources of the data is a problem. Miguel Ponce de León, a postdoctoral researcher at BSC, the center hosting the project's database, says the data coming from the regions is heterogeneous because it is based on various standards.

So one of the main tasks at BSC is cleaning data to make it usable for predicting trends and building dashboards with useful information. The goal is to have many models running on BSC's supercomputers to answer a range of questions; how public mobility promotes the spread of the virus is just one of them.

Arenas argues that having mobility data is crucial as "it tells you the time you have before the infection spreads from one place to another".

"Air-traffic data could have told us when the pandemic would arrive to Spain from China. But nobody was ready."

Being prepared is now more important than ever. In this regard, the Catalan government's Marco stresses that any epidemiologist will be able to use the tools developed at the observatory. He is convinced that digital tools can help, even though they're not the only solution.

According to Professor Arenas: "We need models on how epidemics evolve, and data is crucial in adjusting these models. But making predictions on the next pandemic is highly difficult, even with AI."

He advocates rapid testing methods, even if some scientists challenge their accuracy, as they could provide a useful alternative to PCR (polymerase chain reaction) tests, which also have limitations. He also recommends the use of a contact-tracing app like the Spanish Radar COVID, based on the DP3T decentralized protocol.

"A person can trace up to three contacts over the phone. The app enables you to increase that number to six to eight contacts," he says.


Oriol Mitjà, researcher and consultant physician in infectious diseases at the Hospital Germans Trias i Pujol, agrees that Bluetooth technology can be helpful. But of course, "We should still fight against the idea that it's an app to control the population, because it's not," says Arenas.

Other countries, like Germany, Ireland and Switzerland, have taken the view that if there is any chance of an app making even a small contribution to the battle against the virus, it is worth a go.

Marc Torrent, director of the CIDAI, argues that being able to combine reliable data and epidemiological expertise to improve the management of public resources is already a victory.


See the rest here:

AI may not predict the next pandemic, but big data and machine learning can fight this one - ZDNet

Written by admin

August 27th, 2020 at 3:50 am

Posted in Machine Learning

Machine Learning Artificial intelligence Market Size and Growth By Leading Vendors, By Types and Application, By End Users and Forecast to 2020-2027 -…

Posted: at 3:50 am


without comments

Qualcomm

The report also inspects the financial standing of the leading companies, which includes gross profit, revenue generation, sales volume, sales revenue, manufacturing cost, individual growth rate, and other financial ratios.

Research Objective:

Our panel of trade analysts has made considerable efforts to produce relevant and reliable primary and secondary data regarding the Machine Learning Artificial intelligence market. The report also delivers inputs from trade consultants that will help key players save time on internal analysis. Readers of this report will benefit from the inferences it delivers. The report gives an in-depth and extensive analysis of the Machine Learning Artificial intelligence market.

The Machine Learning Artificial intelligence Market is Segmented:

In market segmentation by types of Machine Learning Artificial intelligence, the report covers-

This Machine Learning Artificial intelligence report covers vital elements such as market trends, share, size, and aspects that facilitate the growth of the companies operating in the market, helping readers implement profitable strategies to boost the growth of their business. It also analyses the expansion, market size, key segments, market share, application, key drivers, and restraints.

Machine Learning Artificial intelligence Market Regional Analysis:

Geographically, the Machine Learning Artificial intelligence market is segmented across the following regions: North America, Europe, Latin America, Asia Pacific, and Middle East & Africa.

Key Coverage of Report:

Key insights of the report:

In conclusion, the Machine Learning Artificial intelligence Market report provides a detailed study of the market by taking into account leading companies, present market status, and historical data for accurate market estimations, which will serve as an industry-wide database for both established players and new entrants in the market.

About Us:

Market Research Intellect provides syndicated and customized research reports to clients from various industries and organizations with the aim of delivering functional expertise. We provide reports for all industries including Energy, Technology, Manufacturing and Construction, Chemicals and Materials, Food and Beverage, and more. These reports deliver an in-depth study of the market with industry analysis, the market value for regions and countries, and trends that are pertinent to the industry.

Contact Us:

Mr. Steven Fernandes

Market Research Intellect

New Jersey ( USA )

Tel: +1-650-781-4080

Original post:

Machine Learning Artificial intelligence Market Size and Growth By Leading Vendors, By Types and Application, By End Users and Forecast to 2020-2027 -...

Written by admin

August 27th, 2020 at 3:50 am

Posted in Machine Learning

Explainable AI: From the peak of inflated expectations to the pitfalls of interpreting machine learning models – ZDNet

Posted: at 3:50 am


without comments

Machine learning and artificial intelligence are helping automate an ever-increasing array of tasks, with ever-increasing accuracy. They are supported by the growing volume of data used to feed them, and the growing sophistication in algorithms.

The flip side of more complex algorithms, however, is less interpretability. In many cases, the ability to retrace and explain outcomes reached by machine learning (ML) models is crucial, as:

"Trust models based on responsible authorities are being replaced by algorithmic trust models to ensure privacy and security of data, source of assets and identity of individuals and things. Algorithmic trust helps to ensure that organizations will not be exposed to the risk and costs of losing the trust of their customers, employees and partners. Emerging technologies tied to algorithmic trust include secure access service edge, differential privacy, authenticated provenance, bring your own identity, responsible AI and explainable AI."

The above quote is taken from Gartner's newly released 2020 Hype Cycle for Emerging Technologies. In it, explainable AI is placed at the peak of inflated expectations. In other words, we have reached peak hype for explainable AI. To put that into perspective, a recap may be useful.

As experts such as Gary Marcus point out, AI is probably not what you think it is. Many people today conflate AI with machine learning. While machine learning has made strides in recent years, it's not the only type of AI we have. Rule-based, symbolic AI has been around for years, and it has always been explainable.

Incidentally, that kind of AI, in the form of "Ontologies and Graphs," is also included in the same Gartner Hype Cycle, albeit in a different phase -- the trough of disillusionment. Incidentally, again, that's conflating: ontologies are part of AI, while graphs are not necessarily.

That said: If you are interested in getting a better understanding of the state of the art in explainable AI and machine learning, reading Christoph Molnar's book is a good place to start. Molnar, a data scientist and Ph.D. candidate in interpretable machine learning, has written the book Interpretable Machine Learning: A Guide for Making Black Box Models Explainable, in which he elaborates on the issue and examines methods for achieving explainability.

Gartner's Hype Cycle for Emerging Technologies, 2020. Explainable AI, meaning interpretable machine learning, is at the peak of inflated expectations. Ontologies, a part of symbolic AI that is explainable, are in the trough of disillusionment.

Recently, Molnar and a group of researchers attempted to address ML practitioners by raising awareness of pitfalls and pointing out solutions for correct model interpretation, as well as ML researchers by discussing open issues for further research. Their work was published as a research paper, titled Pitfalls to Avoid when Interpreting Machine Learning Models, at the ICML 2020 Workshop XXAI: Extending Explainable AI Beyond Deep Models and Classifiers.

Similar to Molnar's book, the paper is thorough. Admittedly, however, it's also more involved. Yet, Molnar has striven to make it more approachable by means of visualization, using what he dubs "poorly drawn comics" to highlight each pitfall. As with Molnar's book on interpretable machine learning, we summarize findings here, while encouraging readers to dive in for themselves.

The paper mainly focuses on the pitfalls of global interpretation techniques when the full functional relationship underlying the data is to be analyzed. Discussion of "local" interpretation methods, where individual predictions are to be explained, is out of scope. For a reference on global vs. local interpretations, you can refer to Molnar's book as previously covered on ZDNet.

Authors note that ML models usually contain non-linear effects and higher-order interactions. As interpretations are based on simplifying assumptions, the associated conclusions are only valid if we have checked that the assumptions underlying our simplifications are not substantially violated.

In classical statistics this process is called "model diagnostics," and the research claims that a similar process is necessary for interpretable ML (IML) based techniques. The research identifies and describes pitfalls to avoid when interpreting ML models, reviews (partial) solutions for practitioners, and discusses open issues that require further research.

Under- or overfitting models will result in misleading interpretations regarding true feature effects and importance scores, as the model does not match the underlying data-generating process well. Evaluation on training data should not be used for ML models due to the danger of overfitting. We have to resort to out-of-sample validation, such as cross-validation procedures.

Formally, IML methods are designed to interpret the model instead of drawing inferences about the data generating process. In practice, however, the latter is the goal of the analysis, not the former. If a model approximates the data generating process well enough, its interpretation should reveal insights into the underlying process. Interpretations can only be as good as their underlying models. It is crucial to properly evaluate models using training and test splits -- ideally using a resampling scheme.
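The gap between training-set and out-of-sample performance is easy to demonstrate. The sketch below (synthetic data, a deliberately overfitting 1-nearest-neighbor model implemented by hand) shows why interpretations should only be trusted after resampling-based evaluation:

```python
# Training-set evaluation vs. 5-fold cross-validation, illustrating
# why the former is misleading. Data and model are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=200)

def knn_predict(X_train, y_train, X_test, k=1):
    """k-nearest-neighbor regression: average the k closest targets."""
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return y_train[idx].mean(axis=1)

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot

# 1-NN memorizes the training data: its nearest neighbor is itself.
train_r2 = r2(y, knn_predict(X, y, X, k=1))

# 5-fold cross-validation gives the honest out-of-sample estimate.
folds = np.array_split(np.arange(200), 5)
cv_scores = []
for test_idx in folds:
    train_idx = np.setdiff1d(np.arange(200), test_idx)
    pred = knn_predict(X[train_idx], y[train_idx], X[test_idx], k=1)
    cv_scores.append(r2(y[test_idx], pred))
cv_r2 = float(np.mean(cv_scores))
print(f"train R^2 = {train_r2:.2f}, 5-fold CV R^2 = {cv_r2:.2f}")
```

The training R^2 is a perfect 1.0, while cross-validation reveals substantially worse true performance; any interpretation of the memorizing model would inherit that gap.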

Flexible models should be part of the model selection process so that the true data-generating function is more likely to be discovered. This is important, as the Bayes error for most practical situations is unknown, and we cannot make absolute statements about whether a model already fits the data optimally.

Using opaque, complex ML models when an interpretable model would have been sufficient (i.e., having similar performance) is considered a common mistake. Starting with simple, interpretable models and gradually increasing complexity in a controlled, step-wise manner, where predictive performance is carefully measured and compared is recommended.
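That recommendation can be made concrete with a small experiment. The sketch below (synthetic, truly linear data; models implemented with plain numpy) compares an interpretable linear fit against a flexible nearest-neighbor regressor; since performance is comparable or better, the paper's advice is to keep the simpler model:

```python
# Start simple: compare an interpretable model to a flexible one and
# only accept added complexity if it pays off. Data is illustrative.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = X @ np.array([2.0, -1.0, 0.5, 1.0]) + rng.normal(scale=0.2, size=300)

X_train, X_test = X[:200], X[200:]
y_train, y_test = y[:200], y[200:]

def r2(y_true, y_pred):
    return 1 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)

# Interpretable model: ordinary least squares with an intercept.
A = np.c_[np.ones(len(X_train)), X_train]
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
lin_r2 = r2(y_test, np.c_[np.ones(len(X_test)), X_test] @ coef)

# Flexible black-box stand-in: 5-nearest-neighbor regression.
d = np.linalg.norm(X_test[:, None] - X_train[None, :], axis=2)
knn_pred = y_train[np.argsort(d, axis=1)[:, :5]].mean(axis=1)
knn_r2 = r2(y_test, knn_pred)

# On truly linear data the interpretable model is at least as good,
# so the complex model buys nothing but opacity here.
print(f"linear R^2 = {lin_r2:.2f}, 5-NN R^2 = {knn_r2:.2f}")
```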

Measures of model complexity allow us to quantify the trade-off between complexity and performance and to automatically optimize for multiple objectives beyond performance. Some steps toward quantifying model complexity have been made. However, further research is required as there is no single perfect definition of interpretability but rather multiple, depending on the context.

This pitfall is further analyzed in three sub-categories: Interpretation with extrapolation, confusing correlation with dependence, and misunderstanding conditional interpretation.

Interpretation with Extrapolation refers to producing artificial data points that are used for model predictions with perturbations. These are aggregated to produce global interpretations. But if features are dependent, perturbation approaches produce unrealistic data points. In addition, even if features are independent, using an equidistant grid can produce unrealistic values for the feature of interest. Both issues can result in misleading interpretations.

Before applying interpretation methods, practitioners should check for dependencies between features in the data (e.g., via descriptive statistics or measures of dependence). When it is unavoidable to include dependent features in the model, which is usually the case in ML scenarios, additional information regarding the strength and shape of the dependence structure should be provided.

Confusing correlation with dependence is a typical error. The Pearson correlation coefficient (PCC) is a measure used to track dependency among ML features. But features with PCC close to zero can still be dependent and cause misleading model interpretations. While independence between two features implies that the PCC is zero, the converse is generally false.

Any type of dependence between features can have a strong impact on the interpretation of the results of IML methods. Thus, knowledge about (possibly non-linear) dependencies between features is crucial. Low-dimensional data can be visualized to detect dependence; for high-dimensional data, several other measures of dependence besides the PCC can be used.
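The textbook illustration of this pitfall is a variable that is a deterministic function of another yet shows near-zero Pearson correlation. A minimal sketch:

```python
# Zero Pearson correlation does NOT imply independence:
# y is fully determined by x, yet their linear correlation vanishes.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=10_000)
y = x ** 2  # deterministic, perfectly dependent on x

pcc = np.corrcoef(x, y)[0, 1]  # near zero: the relationship is symmetric
print(f"Pearson correlation of x and x^2: {pcc:.3f}")

# A transformation capturing the dependence structure reveals it clearly.
pcc_abs = np.corrcoef(np.abs(x), y)[0, 1]
print(f"Pearson correlation of |x| and x^2: {pcc_abs:.3f}")
```

A practitioner relying on the PCC alone would wrongly treat x and y as unrelated and trust perturbation-based interpretations that extrapolate into impossible (x, y) combinations.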

Misunderstanding conditional interpretation. Conditional variants to estimate feature effects and importance scores require a different interpretation. While conditional variants for feature effects avoid model extrapolations, these methods answer a different question. Interpretation methods that perturb features independently of others also yield an unconditional interpretation.

Conditional variants do not replace values independently of other features, but in such a way that they conform to the conditional distribution. This changes the interpretation as the effects of all dependent features become entangled. The safest option would be to remove dependent features, but this is usually infeasible in practice.

When features are highly dependent and conditional effects and importance scores are used, the practitioner has to be aware of the distinct interpretation. Currently, no approach allows us to simultaneously avoid model extrapolations and to allow a conditional interpretation of effects and importance scores for dependent features.

Global interpretation methods can produce misleading interpretations when features interact. Many interpretation methods cannot separate interactions from main effects. Most methods that identify and visualize interactions are not able to identify higher-order interactions and interactions of dependent features.

There are some methods to deal with this, but further research is still warranted. Furthermore, solutions lack in automatic detection and ranking of all interactions of a model as well as specifying the type of modeled interaction.

Due to the variance in the estimation process, interpretations of ML models can become misleading. When sampling techniques are used to approximate expected values, estimates vary, depending on the data used for the estimation. Furthermore, the obtained ML model is also a random variable, as it is generated on randomly sampled data and the inducing algorithm might contain stochastic components as well.

Hence, the model variance has to be taken into account. The true effect of a feature may be flat, but purely by chance, especially on smaller data, an effect might algorithmically be detected. This effect could cancel out once averaged over multiple model fits. The researchers note that the uncertainty in feature effect methods has not been studied in detail.
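This can be seen by refitting a model on resampled data and tracking how the estimated effect of an irrelevant feature fluctuates. A sketch with illustrative synthetic data:

```python
# Model variance: on small data, the estimated effect of a feature
# with NO true effect varies from fit to fit and can look nonzero.
import numpy as np

rng = np.random.default_rng(42)
n = 40  # deliberately small sample: estimates are noisy
X = rng.normal(size=(n, 2))
y = 1.0 * X[:, 0] + rng.normal(size=n)  # feature 1 has no true effect

# Refit ordinary least squares on bootstrap resamples and record the
# estimated coefficient of the irrelevant feature.
effects = []
for _ in range(200):
    idx = rng.integers(0, n, size=n)
    A = np.c_[np.ones(n), X[idx]]
    coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    effects.append(coef[2])  # coefficient of feature 1
effects = np.array(effects)

print(f"irrelevant feature's estimated effect: "
      f"mean {effects.mean():+.2f}, spread (std) {effects.std():.2f}")
```

Individual fits can suggest a sizable effect purely by chance; only the spread across refits shows the estimate is compatible with zero.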

It's a steep fall from the peak of inflated expectations to the trough of disillusionment. Getting things done for interpretable machine learning takes expertise and concerted effort.

Simultaneously testing the importance of multiple features will result in false-positive interpretations if the multiple comparisons problem (MCP) is ignored. MCP is well known in significance tests for linear models and similarly exists in testing for feature importance in ML.

For example, when simultaneously testing the importance of 50 features, even if all features are unimportant, the probability of observing that at least one feature is significantly important is 0.923. Multiple comparisons will even be more problematic, the higher dimensional a dataset is. Since MCP is well known in statistics, the authors refer practitioners to existing overviews and discussions of alternative adjustment methods.
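The 0.923 figure follows directly from the binomial complement, and a standard (if conservative) adjustment such as Bonferroni, one of the methods the authors point practitioners toward, restores the intended error rate:

```python
# The paper's example: testing 50 truly unimportant features at alpha = 0.05.
alpha, m = 0.05, 50

# Probability that at least one test is (falsely) significant.
p_any_false_positive = 1 - (1 - alpha) ** m
print(f"P(at least one false positive) = {p_any_false_positive:.3f}")  # 0.923

# Bonferroni correction: divide alpha by the number of comparisons.
alpha_bonferroni = alpha / m
p_corrected = 1 - (1 - alpha_bonferroni) ** m
print(f"After Bonferroni correction:    {p_corrected:.3f}")  # about 0.049
```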

Practitioners are often interested in causal insights into the underlying data-generating mechanisms, which IML methods, in general, do not provide. Common causal questions include the identification of causes and effects, predicting the effects of interventions, and answering counterfactual questions. In the search for answers, researchers can be tempted to interpret the result of IML methods from a causal perspective.

However, a causal interpretation of predictive models is often not possible. Standard supervised ML models are not designed to model causal relationships but to merely exploit associations. A model may, therefore, rely on the causes and effects of the target variable as well as on variables that help to reconstruct unobserved influences.

Consequently, the question of whether a variable is relevant to a predictive model does not directly indicate whether a variable is a cause, an effect, or does not stand in any causal relation to the target variable.

As the researchers note, the challenge of causal discovery and inference remains an open key issue in the field of machine learning. Careful research is required to make explicit under which assumptions what insight about the underlying data-generating mechanism can be gained by interpreting a machine learning model.

Molnar et al. offer an involved review of the pitfalls of global model-agnostic interpretation techniques for ML. Although, as they note, their list is far from complete, they cover common pitfalls that pose a particularly high risk.

They aim to encourage a more cautious approach when interpreting ML models in practice, to point practitioners to already (partially) available solutions, and to stimulate further research.

Contrasting this highly involved and detailed groundwork to high-level hype and trends on explainable AI may be instructive.

The rest is here:

Explainable AI: From the peak of inflated expectations to the pitfalls of interpreting machine learning models - ZDNet

Written by admin

August 27th, 2020 at 3:50 am

Posted in Machine Learning




