Levels And Limits Of AI – Forbes
Posted: February 20, 2020 at 9:45 am
I recently spoke with the innovation team of a Fortune 50 company about their 2020 initiatives, one of which was artificial intelligence. When I asked what, specifically, they wanted to use AI for, an executive replied, "Everything." I pushed a little more, asking, "Are there any specific problems that you're seeking AI vendors for?" The reply was something like, "We want to use AI in all of our financial services groups." This was particularly unsatisfying considering that the company is a financial services company.
I have these kinds of conversations frequently. For example, I met with the head of a large government department to discuss artificial intelligence, and the agency's top executive asked for a system that could automate the decision-making of key officials. When pressed for details, he more or less wanted a robotic version of his existing employees.
AI is not a panacea and it cannot simply replace humans. Artificial intelligence is mathematical computation, not human intelligence, as I have discussed in previous posts. One of my key roles as an investor is separating real AI from AI hype.
Buyers should not focus on whether or not a company is an "AI company," but rather on whether or not it solves a real problem. While technology is important, the most important part of any company is serving the customer. There are specific customer needs that artificial intelligence can address really well. Others, not so much. For example, AI may be well suited to detecting digital fraud, but it would not be well suited to being a detective in the physical world. AI should be treated like any other software tool: as a product that needs to yield a return. To do so, it is important to understand what artificial intelligence can actually do, and what it can't.
There are several levels of artificial intelligence. A few years ago my friends John Frank and Jason Briggs, who run Diffeo, suggested breaking artificial intelligence into three levels of service: Acceleration, Augmentation, and Automation. Acceleration takes an existing human process and helps humans do it faster. For example, the current versions of textual auto-complete that Google offers are acceleration AI: they offer a completed version of what the user was already going to say. The next level, augmentation, takes what a human is doing and makes it better. In addition to speeding up what the human is doing (like acceleration), it improves the human's product. An example of this is what Grammarly does with improving the grammar of text. The final level is automation. In the previous two levels there is still a human in the loop; automation achieves a task with no human in the loop. The aspiration here is Level 5 autonomous driving, like Aurora and Waymo are pursuing.
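To make "acceleration" concrete, here is a toy sketch of the idea behind textual auto-complete: rank candidate completions of what the user has already typed by how often they appear in past text. This is a minimal illustration under my own assumptions, not Google's actual system, and the `autocomplete` function and sample corpus are hypothetical.

```python
from collections import Counter

def autocomplete(prefix, corpus):
    """Suggest completions of `prefix`, most frequent in `corpus` first.

    Toy example of 'acceleration' AI: the human is still writing;
    the tool just speeds up what they were already going to say.
    """
    counts = Counter(corpus)
    matches = [w for w in counts if w.startswith(prefix) and w != prefix]
    return sorted(matches, key=lambda w: -counts[w])

# Hypothetical history of things the user has typed before.
corpus = ["thanks", "thank you", "that works", "thanks", "see you"]
print(autocomplete("tha", corpus))  # "thanks" ranks first (seen twice)
```

The human remains fully in the loop: the suggestions only shorten the typing the person was already doing, which is what distinguishes acceleration from augmentation and automation.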
When evaluating AI companies it makes sense to ask if what they are setting out to achieve is actually attainable at the level of AI that the vendor is promising. Below is a rough demonstrative chart with the Difficulty of AI on the y-axis and Level of AI on the x-axis.
The dashed line is what I call the AI feasibility curve. Within the line is AI feasibility, which means that there is a technology, infrastructure and approach to actually deliver a successful product at that level of AI in the near term. In reality it is a curve, not a line, and it is neither concave nor convex; it has bumps. Certain problems are really difficult but attainable because a spectacular AI team has worked really hard to push out the AI feasibility curve for that specific problem. AlphaGo is included because it was an incredibly difficult and computationally intensive task, but the brilliant team at Google was able to shift the curve out in that area. If a company proposes that it has built a fully autonomous manager or strategy engine, I become highly skeptical; as you can see, the AI difficulty of those two tasks is quite high. The difficulty of AI is some function of the problem space and data quality (which I will discuss in a future article). In the chart, treat the difficulty of AI as a directional illustration, not a quantifiable score.
When purchasing a vendor's AI, determine whether its value proposition is feasible. If it is not, then the return on investment may be a disappointment. If a product is being marketed as fully automated but the problem is too difficult for full automation, that could be a sign the product is actually accelerated AI. Keeping the feasibility curve in mind is important for investing as well, because if the customer is not well served, then the company will eventually fail.
When evaluating a company, I try to determine where on this chart the company would fall. If it is still building out its product, I think about its technology innovation: will the engineers be able to shift out the curve in that particular problem space? In evaluating AI, pick products that you can know with confidence will provide ROI. Don't be like the Fortune 50 company looking for AI for everything, or the government agency trying to get AI that does exactly what one of its officers does. Instead, evaluate an AI product for what it really offers you. Then, make an informed decision.
Disclosure: Sequoia is an investor in Aurora, and the author is an investor in Alphabet, both of which were used as examples in this article.