
Archive for the ‘Alphago’ Category

Is Dystopian Future Inevitable with Unprecedented Advancements in AI? – Analytics Insight

Posted: June 26, 2020 at 9:42 am

without comments

Artificial super-intelligence (ASI) is a software-based system with intellectual powers beyond those of humans across an almost comprehensive range of categories and fields of endeavor.

The reality is that AI has been with us for a long time now, ever since computers were able to make decisions based on inputs and conditions. When we see a threatening Artificial Intelligence system in the movies, it's the malevolence of the system, coupled with the power of some machine, that scares people.

However, it still behaves in fundamentally human ways.

The kind of AI that prevails today can be described as Artificial Functional Intelligence (AFI). These systems are programmed to perform a specific role and to do so as well as or better than a human. They have also become successful at this in a shorter period than anyone predicted, beating human opponents in complex games like Go and StarCraft II, which knowledgeable people thought wouldn't happen for years, if not decades.

However, AlphaGo might beat every single human Go player handily from now until the heat death of the Universe, but ask it for the current weather conditions and the machine lacks the intelligence of even single-celled organisms that respond to changes in temperature.

Moreover, the prospect of limitless expansion of technology granted by the development of Artificial Intelligence is certainly an inviting one. With investment and interest in the field growing with every passing year, one can only imagine what might be to come.

Dreams of technological utopias granted by super-intelligent computers are contrasted with those of an AI-led dystopia, and with many top researchers believing the world will see the arrival of AGI within the century, it is down to the actions people take now to influence which future they might see. While some believe that only Luddites worry about the power AI could one day hold over humanity, the reality is that most top AI academics carry a similar concern for its grimmer potential.

It's high time people understood that no one is going to get a second attempt at powerful AI. Unlike other groundbreaking developments for humanity, if it goes wrong there is no opportunity to try again and learn from the mistakes. So what can we do to ensure we get it right the first time?

The trick to securing the ideal Artificial Intelligence utopia is ensuring that its goals do not become misaligned with those of humans. AI would not become evil in the sense that many fear; the real issue is making sure it understands our intentions and goals. AI is remarkably good at doing what humans tell it, but when given free rein, it will often achieve the goal humans set in a way they never expected. Without proper preparation, a well-intended instruction could lead to catastrophic events, perhaps due to an unforeseen side effect, or, in a more extreme example, the AI could even come to see humans as a threat to fully completing the task it was set.

The potential benefits of super-intelligent AI are so great that there is no question development toward it will continue. However, to prevent AGI from being a threat to humanity, people need to invest in AI safety research. In this race, one must learn how to effectively control a powerful AI before its creation.

The issue of ethics in AI, super-intelligent or otherwise, is being addressed to a certain extent, evidenced by the development of ethical advisory boards and executive positions to manage the matter directly. DeepMind has such a department in place, and international oversight organizations such as the IEEE have also created specific standards intended for managing the coexistence of highly advanced AI systems and the human beings who program them. But as AI draws ever closer to the point where super-intelligence is commonplace and ever more organizations adopt existing AI platforms, ethics must be top of mind for all major stakeholders in companies hoping to get the most out of the technology.

Smriti is a Content Analyst at Analytics Insight. She writes tech and business articles for Analytics Insight. She adores books, crafts, creative works, people, movies and music!

The rest is here:

Is Dystopian Future Inevitable with Unprecedented Advancements in AI? - Analytics Insight

Written by admin

June 26th, 2020 at 9:42 am

Posted in Alphago

Enterprise hits and misses – contactless payments on the rise, equality on the corporate agenda, and Zoom and Slack in review – Diginomica

Posted: June 8, 2020 at 4:47 pm

without comments

Lead story - The future of hands-free commerce - is COVID-19 the catalyst?

MyPOV: Overseas travelers to the U.S. have noted that the U.S. is not exactly out in front on contactless commerce. But is that finally changing? As Chris notes in Is COVID-19 the catalyst for tapping into a contactless payment revolution in the US?:

In contrast, figures out this week in the U.K. from U.K. Finance... revealed that 80% of people made a contactless purchase in 2019, up from 69% the year before. That is, of course, pre COVID-19, which is likely to prompt a further uptick.

Industry giants see an opening. Stuart picks the story up in Tracking contactless - how Visa and Mastercard are planning for a COVID-19 bump for hands-free digital commerce. Health needs and CX converge:

Leaving the public health implications to one side, a shift to contactless tech also provides financial services providers and retail merchants with a better customer experience.

But a so-called "contactless revolution" can widen the digital divide - not exactly the type of tension we need in the U.S. right now. Chris puts it well:

The challenge remains the extent to which digitally-excluded customers and the unbanked may find themselves living in a cashless society by default, perhaps locked out from being able to pay for some goods and services.

There are potential solutions to these problems, e.g. contactless payment cards bought with cash. As with most things tech, a good rollout calls for a thoughtful design.

Diginomica picks - my top stories on diginomica this week

Vendor analysis, diginomica style. It was a news blowout from collaboration economy darlings, each with their dilemmas and upsides:

Meanwhile, Derek's ServiceNow Knowledge 2020 coverage caravan rolls out, with the fun of a sit-down with quote machine and CEO Bill McDermott:

A couple more vendor picks, without the quotables:

Jon's grab bag - Neil examines the regulatory resistance to telemedicine in Telemedicine adoption amidst a pandemic - can we overcome the barriers?. Sooraj documents how a company avoided ransomware by taking a wake-up call to heart in Aston Martin CIO - WannaCry pushed us into a cyber security refresh.

Guest contributor Simon Griffiths shares How to re-engineer business processes in uncertain times. Uncle Den opens up the digi-kimono, and details how (not) to make your core team nuts to deliver platform upgrades in record time in How not to drive users and developers crazy.

Genuine change is about action, not platitudes or P.R. festivals. Ergo, I enjoyed Jason Corsello's Diversity & The Future of Work: We Can No Longer Sit on the Sidelines! Corsello has a similar ax to grind, and wants to see companies push for corporate change as well:

As leaders of people and organizations, those same executives can stand up to racism by the examples they create in their own companies.

Where to get started? That can be an excuse or a legit area of question. To counter this, Corsello runs through ten action steps, from addressing pay inequity to rolling out mentoring programs, a la Slack's "Rising Tides" for diverse, emerging leaders. I doubt any organization could give themselves a solid grade on all ten - including diginomica. We all have work to do, but it's the right work.

Honorable mention

Speaking of respect for the bruising lessons of tech history, I got a kick out of this weekend's discovery:

But hey, there's good news: A.I. has come a long way from 1972, making our workplaces so much better:

Speaking of the future, McKinsey got way ahead of themselves with this extravagant headline:

Without doubt, the most concerning whiff of the week: The May jobs report had 'misclassification error' that made the unemployment rate look lower than it is. Here's what happened. Thankfully, even after the three percentage point error, the news was still better than expected, but that's a market whopper nonetheless.

I need to leave you with a lighter headline than that. How about Bill Would Prevent the President from Nuking Hurricanes.

Not quite light enough? Okay, I'll revert to animals. How about this video of a pet cockatoo strenuously objecting, in many languages known and unknown, about a pending trip to the vet? See if that doesn't put a smile on your Monday. Catch you next time...

If you find an #ensw piece that qualifies for hits and misses - in a good or bad way - let me know in the comments as Clive (almost) always does. Most Enterprise hits and misses articles are selected from my curated @jonerpnewsfeed. 'myPOV' is borrowed with reluctant permission from the ubiquitous Ray Wang.

Continued here:

Enterprise hits and misses - contactless payments on the rise, equality on the corporate agenda, and Zoom and Slack in review - Diginomica

Written by admin

June 8th, 2020 at 4:47 pm

Posted in Alphago

AlphaGo – Top Documentary Films

Posted: June 5, 2020 at 4:48 pm

without comments

Go is considered the most challenging board game ever devised by man. Invented in China nearly 3,000 years ago, it also remains one of the most popular. Although it may appear a deceptively simple version of chess at first, the game demands an incredible intensity of concentration, smarts, strategy and intuition. The variations of game play are all but infinite. In many circles the game represents much more than play; some consider it a great art and a defining human endeavor. As one of the world's most intellectually demanding games, Go constitutes a major goalpost in the world of artificial intelligence as well. AlphaGo is a thrilling feature-length documentary which chronicles the first match-ups between a human champion of the game and an AI opponent.

The computer program known as AlphaGo was devised by DeepMind Technologies. Its creators' efforts to master the game through artificial intelligence are about much more than mere fun and games; they hope they can apply these same self-learning technologies to resolve more meaningful issues and challenges that mystify and trouble the human species. But first, in order to prove the program's competence, they had to put it through the ultimate test.

Enter Lee Sedol, a South Korean Go champion of unparalleled skill. Billed as "the ultimate human versus machine smackdown", the match generated a global media frenzy. Sedol is a creative player of great ingenuity and instinct. He entered the match with extreme confidence, while the designers of the program expressed uncertainty about the outcome. The film depicts the journey to that outcome with all the nail-biting tension of a Rocky film.

In the lead-up to the championship bout, the film traces the origins of Go and its consistent prominence around the world today, and how it defines the lives and philosophies of its players. We also follow the efforts of programmers and designers in crafting a more efficient and competitive form of AI.

By the conclusion of this captivating documentary, AlphaGo raises even deeper issues about our relationships to these technologies, how they challenge us to make more of ourselves or question the limitations of our own species. What can we learn from them and vice versa?

Read the original post:

AlphaGo - Top Documentary Films

Written by admin

June 5th, 2020 at 4:48 pm

Posted in Alphago

AlphaGo (2017) – Rotten Tomatoes

Posted: at 4:48 pm

without comments

Critics Consensus: No consensus yet.


With more board configurations than there are atoms in the universe, the ancient Chinese game of 'Go' has long been considered a grand challenge for artificial intelligence. On March 9, 2016, the worlds of Go and artificial intelligence collided in South Korea for an extraordinary best-of-five-game competition, coined The Google DeepMind Challenge Match. Hundreds of millions of people around the world watched as a legendary Go master took on an unproven AI challenger for the first time in history. AlphaGo chronicles a journey from the halls of Cambridge, through the backstreets of Bordeaux, past the coding terminals of DeepMind in London, and, ultimately, to the seven-day tournament in Seoul. As the drama unfolds, more questions emerge: What can artificial intelligence reveal about a 3,000-year-old game? What can it teach us about humanity?
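That opening claim about board configurations is easy to sanity-check with a few lines of arithmetic. This is only a rough sketch: 3^361 is an upper bound that ignores Go's legality rules, and 10^80 is a commonly cited estimate for the number of atoms in the observable universe.

```python
import math

# Each of the 361 points on a 19x19 board is empty, black, or white,
# giving 3**361 raw configurations (an upper bound; not all are legal).
board_states = 3 ** 361

# A commonly cited estimate for atoms in the observable universe.
atoms_in_universe = 10 ** 80

print(f"board states ~ 10^{math.log10(board_states):.0f}")  # ~ 10^172
print(board_states > atoms_in_universe ** 2)                # True
```

Even this crude bound exceeds the square of the atom count, which is why brute-force search was never an option for Go.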





In Theaters: Sep 29, 2017 (limited)

Runtime: 90 minutes




View original post here:

AlphaGo (2017) - Rotten Tomatoes

Written by admin

June 5th, 2020 at 4:48 pm

Posted in Alphago

Why the buzz around DeepMind is dissipating as it transitions from games to science – CNBC

Posted: at 4:48 pm

without comments

Google Deepmind head Demis Hassabis speaks during a press conference ahead of the Google DeepMind Challenge Match in Seoul on March 8, 2016.

Jung Yeon-Je | AFP | Getty Images

In 2016, DeepMind, an Alphabet-owned AI unit headquartered in London, was riding a wave of publicity thanks to AlphaGo, its computer program that took on the best player in the world at the ancient Asian board game Go and won.

Photos of DeepMind's leader, Demis Hassabis, were splashed across the front pages of newspapers and websites, and Netflix even went on to make a documentary about the five-game Go match between AlphaGo and world champion Lee Sedol. Fast-forward four years, and things have gone surprisingly quiet around DeepMind.

"DeepMind has done some of the most exciting things in AI in recent years. It would be virtually impossible for any company to sustain that level of excitement indefinitely," said William Tunstall-Pedoe, a British entrepreneur who sold his AI start-up Evi to Amazon for a reported $26 million. "I expect them to do further very exciting things."

AI pioneer Stuart Russell, a professor at the University of California, Berkeley, agreed it was inevitable that excitement around DeepMind would tail off after AlphaGo.

"Go was a recognized milestone in AI, something that some commentators said would take another 100 years," he said. "In Asia in particular, top-level Go is considered the pinnacle of human intellectual powers. It's hard to see what else DeepMind could do in the near term to match that."

DeepMind's army of 1,000-plus people, which includes hundreds of highly paid PhD graduates, continues to pump out academic paper after academic paper, but only a smattering of the work gets picked up by the mainstream media. The research lab has churned out over 1,000 papers, 13 of which have been published by Nature or Science, widely seen as the world's most prestigious academic journals. Nick Bostrom, the author of Superintelligence and the director of the University of Oxford's Future of Humanity Institute, described DeepMind's team as world-class, large, and diverse.

"Their protein folding work was super impressive," said Neil Lawrence, a professor of machine learning at the University of Cambridge, whose role is funded by DeepMind. He's referring to a competition-winning DeepMind algorithm that can predict the structure of a protein based on its genetic makeup. Understanding the structure of proteins is important as it could make it easier to understand diseases and create new drugs in the future.

The world's top human Go player, 19-year-old Ke Jie (L), competes against the AI program AlphaGo, developed by DeepMind, the artificial intelligence arm of Google's parent Alphabet. Machine won the three-game match against man in 2017; the AI didn't lose a single game.

VCG | Visual China Group | Getty Images

DeepMind is keen to move away from developing relatively "narrow" AI agents that can do one thing well, such as mastering a game. Instead, the company is trying to develop more general AI systems that can do multiple things well and have real-world impact.

It's particularly keen to use its AI to leverage breakthroughs in other areas of science including healthcare, physics and climate change.

But the company's scientific work seems to be of less interest to the media. In 2016, DeepMind was mentioned in 1,842 articles, according to media tracker LexisNexis. By 2019, that number had fallen to 1,363.

One ex-DeepMinder said the buzz around the company is now more in line with what it should be. "The whole AlphaGo period was nuts," they said. "I think they've probably got another few milestones ahead, but progress should be more low key. It's a marathon not a sprint, so to speak."

DeepMind denied that excitement surrounding the company has tailed off since AlphaGo, pointing to the fact that it has had more papers in Nature and Science in recent years.

"We have created a unique environment where ambitious AI research can flourish. Our unusually interdisciplinary approach has been core to our progress, with 13 major papers in Nature and Science including 3 so far this year," a DeepMind spokesperson said. "Our scientists and engineers have built agents that can learn to cooperate, devise new strategies to play world-class chess and Go, diagnose eye disease, generate realistic speech now used in Google products around the world, and much more."

"More recently, we've been excited to see early signs of how we could use our progress in fundamental AI research to understand the world around us in a much deeper way. Our protein folding work is our first significant milestone applying artificial intelligence to a core question in science, and this is just the start of the exciting advances we hope to see more of over the next decade, creating systems that could provide extraordinary benefits to society."

The company, which competes with Facebook AI Research and OpenAI, did a good job of building up hype around what it was doing in the early days.

Hassabis and Mustafa Suleyman, the intellectual co-founders who have been friends since school, gave inspiring speeches where they would explain how they were on a mission to "solve intelligence" and use that to solve everything else.

There was also plenty of talk of developing "artificial general intelligence" or AGI, which has been referred to as the holy grail in AI and is widely viewed as the point when machine intelligence passes human intelligence.

But the speeches have become less frequent (partly because Suleyman left DeepMind and now works for Google), and AGI doesn't get mentioned anywhere near as much as it used to.

Larry Page, left, and Sergey Brin, co-founders of Google Inc.

JB Reed | Bloomberg | Getty Images

Google co-founders Larry Page and Sergey Brin were huge proponents of DeepMind and its lofty ambitions, but they left the company last year, and it's less obvious how Google CEO Sundar Pichai feels about DeepMind and AGI.

It's also unclear how much free rein Pichai will give the company, which cost Alphabet $571 million in 2018. Just one year earlier, the company had losses of $368 million.

"As far as I know, DeepMind is still working on the AGI problem and believes it is making progress," Russell said. "I suspect the parent company (Google/Alphabet) got tired of the media turning every story about Google and AI into the Terminator scenario, complete with scary pictures."

One academic who is particularly skeptical about DeepMind's achievements is AI entrepreneur Gary Marcus, who sold a machine-learning start-up to Uber in 2016 for an undisclosed sum.

"I think they realize the gulf between what they're doing and what they aspire to do," he said. "In their early years they thought that the techniques they were using would carry us all the way to AGI. And some of us saw immediately that that wasn't going to work. It took them longer to realize but I think they've realized it now."

Marcus said he's heard that DeepMind employees refer to him as the "anti-Christ" because he has questioned how far the "deep learning" AI technique that DeepMind has focused on can go.

"There are major figures now that recognize that the current techniques are not enough," he said. "It's very different from two years ago. It's a radical shift."

He added that while DeepMind's work on games and biology had been impressive, it's had relatively little impact.

"They haven't used their stuff much in the real world," he said. "The work that they're doing requires an enormous amount of data and an enormous amount of compute, and a very stable world. The techniques that they're using are very, very data greedy and real-world problems often don't supply that level of data."

See the article here:

Why the buzz around DeepMind is dissipating as it transitions from games to science - CNBC

Written by admin

June 5th, 2020 at 4:48 pm

Posted in Alphago

The Hardware in Microsoft's OpenAI Supercomputer Is Insane –

Posted: at 4:48 pm

without comments

The Hardware in Microsoft's OpenAI Supercomputer Is Insane

Andrew Wheeler, posted on June 02, 2020 | The benefit to Elon Musk's organization is not yet clear.

(Image courtesy of Microsoft.)

OpenAI, the San Francisco-based research laboratory co-founded by serial entrepreneur Elon Musk, is dedicated to ensuring that artificial general intelligence benefits all of humanity. Microsoft invested $1 billion in OpenAI in June 2019 to build a platform of unprecedented scale. Recently, Microsoft pulled back the curtain on this project to reveal that its OpenAI supercomputer is up and running. It's powered by an astonishing 285,000 CPU cores and 10,000 GPUs.

The announcement was made at Microsoft's Build 2020 developer conference. The OpenAI supercomputer is hosted on Microsoft's Azure cloud and will be used to test massive artificial intelligence (AI) models.

Many AI supercomputing research projects focus on perfecting single tasks using deep learning or deep reinforcement learning, as is the case with Google's various DeepMind projects like AlphaGo Zero. But a new wave of AI research focuses on how these supercomputers can perfect multiple tasks simultaneously. At the conference, Microsoft mentioned a few of the tasks its AI supercomputer could tackle. These include having the company's AI supercomputer examine huge datasets of code from GitHub (which Microsoft acquired in 2018 for $7.5 billion worth of stock) to generate its own code. Another multitasking AI function could be the moderation of game-streaming services, according to Microsoft.

But is OpenAI going to benefit from this development? How would these services use Microsoft's OpenAI supercomputer?

Users of Microsoft Teams benefit from real-time captioning via Microsoft's development of Turing models for natural language processing and generation, so maybe OpenAI will pursue more natural language processing projects. But the answer is unknown at this point.

(Video courtesy of Microsoft.)

Bottom Line

Large-scale AI implementations from powerful and ultra-wealthy tech giants like Microsoft, with access to tremendous datasets (the key to advanced AI, beyond powerful software), could lead to the development of an AI programmer using the vast repositories of code on GitHub.

Microsoft's Turing models for natural language processing use over 17 billion parameters for deciphering language. The number of CPUs and GPUs in Microsoft's AI supercomputer is almost as staggering as the potential applications the company could create with access to such vast computing power. On that note, Microsoft announced that its Turing models for natural language generation will become open source for developers to use in the near future, but no exact date has been given.

Read more from the original source:

The Hardware in Microsoft's OpenAI Supercomputer Is Insane -

Written by admin

June 5th, 2020 at 4:48 pm

Posted in Alphago

This A.I. makes up gibberish words and definitions that sound astonishingly real – Digital Trends

Posted: May 17, 2020 at 10:45 pm

without comments

Back to Menu

A sesquipedalian is a person who overuses uncommon words like lameen (a bishop's letter expressing a fault or reprimand) or salvestate (to transport car seats to the dining room) just for the sake of it. The first of those italicized words is real. The second two aren't. But they totally should be. They're the invention of a new website called This Word Does Not Exist. Powered by machine learning, it conjures up entirely new words never before seen or used, and even generates a halfway convincing definition for them. It's all kinds of brilliant.

In February, I quit my job as an engineering director at Instagram after spending seven intense years building their ranking algorithms like non-chronological feed, Thomas Dimson, creator of This Word Does Not Exist, told Digital Trends. A friend and I were trying to brainstorm names for a company we could start together in the A.I. space. After [coming up with] some lame ones, I decided it was more appropriate to let A.I. name a company about A.I.

Then, as Dimson tells it, a global pandemic happened, and he found himself at home with lots of time on his hands to play around with his name-making algorithm. Eventually I stumbled upon the Mac dictionary as a potential training set and [started] generating arbitrary words instead of just company names, he said.

If you've ever joked that someone who uses complex words in their daily lives must have swallowed a dictionary, that's pretty much exactly what This Word Does Not Exist has done. The algorithm was trained from a dictionary file Dimson structured according to different parts of speech, definition, and example usage. The model refines OpenAI's controversial GPT-2 text generator, the much-hyped algorithm once called too dangerous to release to the public. Dimson's twist on it assigns probabilities to potential words based on which letters are likely to follow one another, until the word looks like a reasonably convincing dictionary entry. As a final step, it checks that the generated word isn't a real one by looking it up in the original training set.
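Dimson's real system refines GPT-2, but the generate-and-reject loop described above can be sketched with a toy character-level model. Everything here, including the tiny seed lexicon and the function names, is hypothetical and purely illustrative:

```python
import random

# Tiny stand-in for the dictionary training set (hypothetical seed words).
LEXICON = {"bishop", "letter", "fault", "reprimand", "transport",
           "dining", "machine", "learning", "word", "estate"}

def build_model(words):
    """Record which character follows each character ('^' = start, '$' = end)."""
    model = {}
    for w in words:
        padded = "^" + w + "$"
        for a, b in zip(padded, padded[1:]):
            model.setdefault(a, []).append(b)
    return model

def sample_word(model, rng):
    """Walk the chain from '^', emitting letters until the end marker is drawn."""
    out, ch = [], "^"
    while True:
        ch = rng.choice(model[ch])
        if ch == "$":
            return "".join(out)
        out.append(ch)

def invent_word(model, lexicon, rng):
    """The article's final step: reject any generated word that already exists."""
    while True:
        w = sample_word(model, rng)
        if 3 <= len(w) <= 10 and w not in lexicon:
            return w

rng = random.Random(42)
model = build_model(LEXICON)
print(invent_word(model, LEXICON, rng))  # a plausible-looking non-word
```

The rejection step is the same idea at both scales: generate freely, then filter out anything that collides with the training vocabulary.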

This Word Does Not Exist is just the latest in a series of [Insert object] Does Not Exist creations. Others range from non-existent Airbnb listings to fake people to computer-generated memes which nonetheless capture the oddball humor of real ones.

People have a nervous curiosity toward what makes us human, Dimson said. By looking at these machine-produced demos, we are better able to understand ourselves. I'm reminded of the fascination with Deep Blue beating Kasparov in 1996 or AlphaGo beating Lee Sedol in 2016.

Read more:

This A.I. makes up gibberish words and definitions that sound astonishingly real - Digital Trends

Written by admin

May 17th, 2020 at 10:45 pm

Posted in Alphago

QuickBooks is still the gold standard for small business accounting. Learn how it’s done now. – The Next Web

Posted: April 19, 2020 at 2:49 pm

without comments

TLDR: The QuickBooks 2020 Essentials Bundle: Beginner to Bookkeeper training offers full accounting training for novice and expert QuickBooks users alike.

Basic accounting seldom turns out to be quite so basic. If you're handling the books for a tiny operation, then maybe all you need is a pen and a few simple balance sheets. But it doesn't require much growth, either in volume or staffing or vendors, before you have to start considering exactly how you're tracking revenue, expenses, invoices, bills and more.

And if your business grows, when do you begin stepping up to automated processing or payroll services or inventory management features? And does your current accounting method mesh with those extra services?

A lot of questions, for sure. Thankfully, QuickBooks by Intuit has been handling the needs of small to medium-sized businesses since the days personal computers first became a thing. Today, you can still learn all the tricks to using this warhorse app for all your company's accounting services with The QuickBooks 2020 Essentials Bundle: Beginner to Bookkeeper ($30, 90 percent off from TNW Deals).

With this bundle, you get over 13 hours of in-depth training in a pair of QuickBooks' most popular 2020 editions: QuickBooks Pro 2020 and QuickBooks Online 2020.

In just a few hours, the QuickBooks Pro 2020 course can take a QuickBooks novice and get them performing like a confident bookkeeper. The training is hugely beginner-friendly, examining all the basic dashboards and operations for navigating the program. Once you've got the controls down, the training segues into all the steps needed to get a business's financials safely accounted for and monitored via QuickBooks Pro. From setup through payroll and payroll tax processing to invoicing, bill payments and purchase orders, students learn all they need to whip a company's books into perfect shape.

Meanwhile, if you do your accounting via the cloud-based QuickBooks Online, you're dealing with a very different program than the traditional QuickBooks desktop software. So the QuickBooks Online 2020 course is geared to those users, including dozens of lessons to offer a firm grounding in using the Online version as neatly and efficiently as possible. Like the previous course, this one also starts at the beginning, giving users a fully rounded experience from the fundamentals up through expert-level tricks that utilize all of QuickBooks' best advanced features.

And with both programs, you'll learn how to compile insightful financial reports to track the economic health of your business at a glance.

Each course is a $150 value, but with this limited time offer, you can get both for only $30.

Prices are subject to change.

Read next: Our future is in the cloud and this AWS Cloud Bootcamp can get you ready for it.

Read our daily coverage on how the tech industry is responding to the coronavirus and subscribe to our weekly newsletter Coronavirus in Context.

For tips and tricks on working remotely, check out our Growth Quarters articles here or follow us on Twitter.

See the original post here:

QuickBooks is still the gold standard for small business accounting. Learn how it's done now. - The Next Web

Written by admin

April 19th, 2020 at 2:49 pm

Posted in Alphago

The Turing Test is Dead. Long Live The Lovelace Test – Walter Bradley Center for Natural and Artificial Intelligence

Posted: April 8, 2020 at 4:46 am

without comments

Photo by Thought Catalog on Unsplash

Robert J. Marks, April 2, 2020

The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour indistinguishable from that of a human. Many think that Turing's proposal for intelligence, especially creativity, has been proven inadequate. Is the Lovelace test a better alternative? Robert J. Marks and Dr. Selmer Bringsjord discuss the Turing test, the Lovelace test, and machine creativity.

Mind Matters features original news and analysis at the intersection of artificial and natural intelligence. Through articles and podcasts, it explores issues, challenges, and controversies relating to human and artificial intelligence from a perspective that values the unique capabilities of human beings. Mind Matters is published by the Walter Bradley Center for Natural and Artificial Intelligence.

See the rest here:

The Turing Test is Dead. Long Live The Lovelace Test - Walter Bradley Center for Natural and Artificial Intelligence

Written by admin

April 8th, 2020 at 4:46 am

Posted in Alphago

The New ABCs: Artificial Intelligence, Blockchain And How Each Complements The Other – JD Supra

Posted: March 14, 2020 at 1:41 pm


The terms "revolution" and "disruption" in the context of technological innovation are probably bandied about a bit more liberally than they should be. Technological revolution and disruption imply upheaval and systemic reevaluations of the way that humans interact with industry and even each other. Actual technological advancement, however, moves at a much slower pace and tends to augment our current processes rather than outright displace them. Oftentimes, we fail to realize the ubiquity of legacy systems in our everyday lives, sometimes to our own detriment.

Consider the keyboard. The QWERTY layout of keys is standard for English keyboards across the world. Even though the layout remains a mainstay of modern office setups, its origins trace back to the mass popularization of a typewriter manufactured and sold by E. Remington & Sons in 1874.[1] Urban legend has it that the layout was designed to slow down typists who jammed typing mechanisms, yet the reality is otherwise: the layout was actually designed to assist those transcribing messages from Morse code.[2] Once typists took to the format, the keyboard as we know it today was embraced as a global standard even as the use of Morse code declined.[3] As with QWERTY, our familiarity and comfort with legacy systems have contributed to their rise. These systems are varied in their scope, and they touch everything: healthcare, supply chains, our financial systems and even the way we interact at a human level. However, their use and value may be tested sooner than we realize.

Artificial intelligence (AI) and blockchain technology (blockchain) are two novel innovations that offer the opportunity for us to move beyond our legacy systems and streamline enterprise management and compliance in ways previously unimaginable. However, their potential is often clouded by their buzzword status, with bad actors taking advantage of the hype. When one cuts through the haze, it becomes clear that these two technologies hold significant transformative potential. While each can certainly function on its own, AI and blockchain also complement one another in ways that offer businesses not only the ability to build upon legacy enterprise systems but also the power to eventually upend them in favor of next-level solutions. Getting to that point, however, takes time and is not without cost. While humans are generally quick to embrace technological change, our regulatory frameworks take longer to adapt. The need to address this constraint is pressing: real market solutions for these technologies have started to come online, while regulatory hurdles abound. As innovators seek to exploit the convergence of AI and blockchain innovations, they must pay careful attention to overcoming both the technical and regulatory hurdles that accompany them. Do so successfully, and the rewards promise to be bountiful.

First, a bit of taxonomy is in order.

AI in a Nutshell:

Artificial intelligence is "the capability of a machine to imitate intelligent human behavior," such as learning, understanding language, solving problems, planning and identifying objects.[4] More practically speaking, however, today's AI is actually mostly limited to simple tasks of the "if X, then Y" variety. AI is trained through supervised learning, a process that requires an enormous amount of data. For example, IBM's question-answering supercomputer Watson was able to beat Jeopardy! champions Brad Rutter and Ken Jennings in 2011 because Watson had been coded to understand simple questions by being fed countless iterations and had access to vast knowledge in the form of digital data. Likewise, Google DeepMind's AlphaGo defeated the Go champion Lee Sedol in 2016 because AlphaGo had undergone countless instances of Go scenarios and collected them as data. As such, most implementations of AI involve simple tasks, assuming that relevant information is readily accessible. In light of this, Andrew Ng, the Stanford roboticist, noted that "[i]f a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future."[5]

Moreover, a significant portion of AI currently in use or being developed is based on machine learning. Machine learning is a method by which AI adapts its algorithms and models based on exposure to new data, thereby allowing AI to learn without being programmed to perform specific tasks. Developing high-performance machine learning-based AI therefore requires substantial amounts of data. Data high in both quality and quantity lead to better AI, since an AI instance indiscriminately accepts all data provided to it and can refine and improve its algorithms only to the extent of the data provided. For example, AI that visually distinguishes Labradors from other breeds of dogs will become better at its job the more it is exposed to clear and accurate pictures of Labradors.
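The point that more (clean) data yields a better-performing model can be illustrated with a toy sketch. This is purely a hypothetical example, not any production system: a one-feature nearest-centroid classifier for a made-up "Labrador vs. other" task, where "learning" is just estimating each class's average feature value, so its accuracy tends to improve as it sees more training examples.

```python
import random

random.seed(0)

def make_examples(n, center, spread=1.0):
    """Generate n one-dimensional feature values around a class center."""
    return [random.gauss(center, spread) for _ in range(n)]

def train(labrador_feats, other_feats):
    """'Learning' here is just estimating each class's mean feature value."""
    centroid = lambda xs: sum(xs) / len(xs)
    return centroid(labrador_feats), centroid(other_feats)

def accuracy(model, test_set):
    """Classify each test point by its nearest centroid and score the result."""
    lab_c, other_c = model
    correct = 0
    for feat, label in test_set:
        pred = "labrador" if abs(feat - lab_c) < abs(feat - other_c) else "other"
        correct += (pred == label)
    return correct / len(test_set)

test_set = [(f, "labrador") for f in make_examples(200, 0.0)] + \
           [(f, "other") for f in make_examples(200, 2.0)]

# Same algorithm, different amounts of training data.
small = train(make_examples(5, 0.0), make_examples(5, 2.0))
large = train(make_examples(500, 0.0), make_examples(500, 2.0))

print(accuracy(small, test_set), accuracy(large, test_set))
```

The larger training set pins down the class centroids far more reliably, which is the same dynamic, at miniature scale, behind the article's point about data quantity and quality.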

It is in these data amalgamations that AI does its job best. Scanning and analyzing vast subsets of data is something that a computer can do very rapidly compared to a human. However, AI is not perfect, and many of the pitfalls that AI is prone to are often the result of the difficulty in conveying how humans process information in contrast to machines. One example of this phenomenon that has dogged the technology has been AI's penchant for "hallucinations." An AI algorithm "hallucinates" when it interprets an input as something that seems implausible to a human looking at the same thing.[6] Case in point: AI has interpreted an image of a turtle as that of a gun, and a rifle as a helicopter.[7] This occurs because machines are hypersensitive to, and interpret, the tiniest of pixel patterns that we humans do not process. Because of the complexity of this analysis, developers are only now beginning to understand such AI phenomena.

When one moves beyond pictures of guns and turtles, however, AI's shortfalls become far less innocuous. AI learning is based on input data, yet much of this data reflects the inherent shortfalls and behaviors of everyday individuals. As such, without proper correction for bias and other human assumptions, AI can, for example, perpetuate racial stereotypes and racial profiling.[8] Proper care for what goes into the system, and who gets access to the outputs, must therefore be employed for the ethical use of AI. But therein lies an additional problem: who has access to enough data to really take full advantage of and develop robust AI?

Not surprisingly, because large companies are better able to collect and manage increasingly large amounts of data than individuals or smaller entities, such companies have remained better positioned to develop complex AI. In response to this tilted landscape, various private and public organizations, including the U.S. Department of Justice's Bureau of Justice, Google Scholar and the International Monetary Fund, have launched open source initiatives to make publicly available vast amounts of data that such organizations have collected over many years.

Blockchain in a Nutshell:

Blockchain technology as we know it today came onto the scene in late 2009 with the rise of Bitcoin, perhaps the most famous application of the technology. Fundamentally, blockchain is a data structure that makes it possible to create a tamper-proof, distributed, peer-to-peer system of ledgers containing immutable, time-stamped and cryptographically connected blocks of data. In practice, this means that data can be written only once onto a ledger, which is then read-only for every user. Many of the most utilized blockchain protocols, for example the Bitcoin and Ethereum networks, maintain and update their distributed ledgers in a decentralized manner, which stands in contrast to traditional networks reliant on a trusted, centralized data repository.[9] By structuring the network in this way, these blockchain mechanisms remove the need for a trusted third party to handle and store transaction data. Instead, data are distributed so that every user has access to the same information at the same time. To update a ledger's distributed information, the network employs pre-defined consensus mechanisms and military-grade cryptography to prevent malicious actors from going back and retroactively editing or tampering with previously recorded information. In most cases, networks are open source, maintained by a dedicated community and made accessible to any connected device that can validate transactions on a ledger, which is referred to as a node.
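The "cryptographically connected blocks" idea can be sketched in a few lines. This is a minimal illustration, not any real protocol: each block stores the hash of the block before it, so retroactively editing an earlier block breaks the link and is detectable everywhere downstream.

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash a block's canonical JSON serialization."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    """Append a time-stamped block linked to the previous block's hash."""
    block = {
        "index": len(chain),
        "timestamp": time.time(),
        "data": data,
        "prev_hash": block_hash(chain[-1]) if chain else "0" * 64,
    }
    chain.append(block)
    return chain

def is_valid(chain):
    """Each block must reference the hash of the block before it."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, "genesis")
add_block(chain, {"from": "alice", "to": "bob", "amount": 5})
add_block(chain, {"from": "bob", "to": "carol", "amount": 2})

print(is_valid(chain))            # True: the untouched ledger verifies
chain[1]["data"]["amount"] = 500  # retroactive tampering...
print(is_valid(chain))            # False: ...breaks the hash link
```

A real network adds the distributed-consensus layer on top of this structure; the immutability the article describes comes from exactly this hash linkage, replicated across every node.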

Nevertheless, the decentralizing feature of blockchain comes with significant resource and processing drawbacks. Many blockchain-enabled platforms run very slowly and have interoperability and scalability problems. Moreover, these networks use massive amounts of energy. For example, the Bitcoin network requires the expenditure of about 50 terawatt-hours per year, equivalent to the energy needs of the entire country of Singapore.[10] To ameliorate these problems, several market participants have developed enterprise blockchains with permissioned networks. While many of them may be open source, the networks are led by known entities that determine who may verify transactions on that blockchain, and the required consensus mechanisms are therefore much more energy efficient.
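The energy appetite of open networks like Bitcoin comes largely from proof-of-work consensus, which can be illustrated with a toy miner (a deliberately simplified sketch, not Bitcoin's actual block format): finding a nonce whose hash starts with N hex zeros takes roughly 16**N attempts on average, so the work, and the electricity, grows exponentially with difficulty.

```python
import hashlib
import time

def mine(data, difficulty):
    """Brute-force a nonce until the hash starts with `difficulty` hex zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

# Each extra zero of difficulty multiplies the expected work by 16.
for difficulty in (1, 2, 3, 4):
    start = time.perf_counter()
    nonce, digest = mine("block-payload", difficulty)
    elapsed = time.perf_counter() - start
    print(f"difficulty {difficulty}: {nonce + 1} attempts, {elapsed:.4f}s")
```

Permissioned networks can skip this race entirely because the set of validators is known and trusted, which is why their consensus mechanisms are so much cheaper to run.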

Not unlike AI, a blockchain can also be coded with certain automated processes to augment its recordkeeping abilities, and, arguably, it is these types of processes that contributed to blockchain's rise. That rise, some may say, began with the introduction of the Ethereum network and its engineering around "smart contracts," a term used to describe computer code that automatically executes all or part of an agreement and is stored on a blockchain-enabled platform. Smart contracts are neither contracts in the sense of legally binding agreements nor smart in employing applications of AI. Rather, they consist of coded, automated parameters responsive to what is recorded on a blockchain. For example, if the parties in a blockchain network have indicated, by initiating a transaction, that certain parameters have been met, the code will execute the step or steps triggered by those coded parameters. The input parameters and the execution steps for smart contracts need to be specific: the digital equivalent of "if X, then Y" statements. In other words, when the required conditions have been met, a particular specified outcome occurs; in the same way that a vending machine sells a can of soda once change has been deposited, smart contracts allow title to digital assets to be transferred upon the occurrence of certain events. Nevertheless, the tasks that smart contracts are currently capable of performing are fairly rudimentary. As developers figure out how to expand their networks, integrate them with enterprise-level technologies and develop more responsive smart contracts, there is every reason to believe that smart contracts and their decentralized applications (dApps) will see increased adoption.
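The vending-machine analogy maps directly onto code. The sketch below is a hypothetical illustration (the class and names are invented, and real smart contracts are written for a blockchain runtime, e.g. in Solidity, not plain Python): the "if X, then Y" rule is a single condition that, once satisfied, transfers title to a digital asset automatically.

```python
class VendingContract:
    """Toy escrow: releases an asset when deposits cover the asking price."""

    def __init__(self, asset, price):
        self.asset = asset
        self.price = price
        self.deposited = 0
        self.owner = "seller"

    def deposit(self, buyer, amount):
        """Record a payment; the transfer rule fires automatically."""
        self.deposited += amount
        # The "if X, then Y" rule: sufficient funds trigger the title transfer.
        if self.deposited >= self.price and self.owner == "seller":
            self.owner = buyer
        return self.owner

contract = VendingContract(asset="soda-token", price=100)
contract.deposit("alice", 40)   # below the price: no transfer yet
print(contract.owner)           # seller
contract.deposit("alice", 60)   # price reached: title transfers
print(contract.owner)           # alice
```

On an actual blockchain, both the deposits and the resulting ownership change would be recorded as immutable ledger entries, so no party can interfere once the condition is met.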

AI and blockchain technology may appear to be diametric opposites. AI is an active technology: it analyzes its surroundings and formulates solutions based on the history of what it has been exposed to. By contrast, blockchain is agnostic with respect to the data written into it; the technology is largely passive. It is primarily in that distinction that we find synergy, for each technology augments the strengths and tempers the weaknesses of the other. For example, AI technology requires access to big data sets in order to learn and improve, yet many of the sources of these data sets are hidden in proprietary silos. With blockchain, stakeholders are empowered to contribute data to an openly available and distributed network with immutability of data as a core feature. With a potentially larger pool of data to work from, the machine learning mechanisms of a widely distributed, blockchain-enabled and AI-powered solution could improve far faster than those of a private-data AI counterpart. On their own, these technologies are more limited. Blockchain technology, in and of itself, is not capable of evaluating the accuracy of the data written into its immutable network: garbage in, garbage out. AI, however, can act as a learned gatekeeper for what information may come on and off the network and from whom. Indeed, the interplay between these diverse capabilities will likely lead to improvements across a broad array of industries, each with unique challenges that the two technologies together may overcome.

[1] See Rachel Metz, Why We Can't Quit the QWERTY Keyboard, MIT Technology Review (Oct. 13, 2018), available at:

[2] Alexis Madrigal, The Lies You've Been Told About the Origin of the QWERTY Keyboard, The Atlantic (May 3, 2013), available at:

[3] See Metz, supra note 1.

[4] See Artificial Intelligence, Merriam-Webster's Online Dictionary, Merriam-Webster (last accessed Mar. 27, 2019), available at:

[5] See Andrew Ng, What Artificial Intelligence Can and Can't Do Right Now, Harvard Business Review (Nov. 9, 2016), available at:

[6] Louise Matsakis, Artificial Intelligence May Not Hallucinate After All, Wired (May 8, 2019), available at:

[7] Id.

[8] Jerry Kaplan, Opinion: Why Your AI Might Be Racist, Washington Post (Dec. 17, 2018), available at:

[9] See Shaanan Cohney, David A. Hoffman, Jeremy Sklaroff and David A. Wishnick, Coin-Operated Capitalism, Penn. Inst. for L. & Econ. (No. 18-37) (Jul. 17, 2018) at 12, available at:

[10] See Bitcoin Energy Consumption Index (last accessed May 13, 2019), available at:

[View source.]

See the rest here:

The New ABCs: Artificial Intelligence, Blockchain And How Each Complements The Other - JD Supra

Written by admin

March 14th, 2020 at 1:41 pm

Posted in Alphago
