The Bot Decade: How AI Took Over Our Lives in the 2010s – Popular Mechanics

Posted: December 9, 2019 at 7:51 pm

Bots are a lot like humans: Some are cute. Some are ugly. Some are harmless. Some are menacing. Some are friendly. Some are annoying ... and a little racist. Bots serve their creators and society as helpers, spies, educators, servants, lab technicians, and artists. Sometimes, they save lives. Occasionally, they destroy them.

In the 2010s, automation got better, cheaper, and way less avoidable. It's still mysterious, but no longer foreign; the most Extremely Online among us interact with dozens of AIs throughout the day. That means driving directions are more reliable, instant translations are almost good enough, and everyone gets to be an adequate portrait photographer, all powered by artificial intelligence. On the other hand, each of us now sees a personalized version of the world, curated by an AI to maximize engagement with the platform. And by now, everyone from fruit pickers to hedge fund managers has suffered through headlines about being replaced.

Humans and tech have always coexisted and coevolved, but this decade brought us closer together, and closer to the future, than ever. These days, you don't have to be an engineer to participate in AI projects; in fact, you have no choice but to help, as you're constantly offering up your digital behavior to train AIs.

So here's how we changed our bots this decade, how they changed us, and where our strange relationship is going as we enter the 2020s.

All those little operational tweaks in our day come courtesy of a specific scientific approach to AI called machine learning, one of the most popular techniques for AI projects this decade. That's when an AI is tasked not only with finding the answers to questions about data sets, but with finding the questions themselves; successful deep learning applications require vast amounts of data, along with the time and computational power to self-test over and over again.

Deep learning, a subset of machine learning, uses neural networks to extract its own rules and adjust them until it can return the right results; other machine learning techniques might use Bayesian networks, support vector machines, or evolutionary algorithms to achieve the same goal.
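To make that adjust-until-correct loop concrete, here is a minimal sketch in Python: a single artificial neuron, written with NumPy, nudging its weights until it reproduces a simple rule hidden in four examples. The data, learning rate, and number of steps are illustrative choices, not details from the article.

    import numpy as np

    # Toy data set: the hidden "rule" is y = 1 only when both inputs are 1.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 0, 0, 1], dtype=float)

    rng = np.random.default_rng(0)
    weights = rng.normal(size=2)   # the model's adjustable "rules"
    bias = 0.0
    learning_rate = 0.5

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Guess, measure the error, nudge the weights, repeat:
    # the self-testing loop described above.
    for step in range(5000):
        predictions = sigmoid(X @ weights + bias)
        error = predictions - y
        weights -= learning_rate * (X.T @ error) / len(X)
        bias -= learning_rate * error.mean()

    print(np.round(sigmoid(X @ weights + bias), 2))  # approaches [0, 0, 0, 1]

Deep learning stacks many layers of units like this one, which is where the appetite for vast data sets comes from: millions of adjustable weights need millions of examples before the extracted rules are any good.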

In January, Technology Review's Karen Hao released an exhaustive analysis of recent AI papers that concluded machine learning was one of the defining features of AI research this decade. "Machine learning has enabled near-human and even superhuman abilities in transcribing speech from voice, recognizing emotions from audio or video recordings, as well as forging handwriting or video," Hao wrote. Domestic spying is now a lucrative application for AI technologies, thanks to this powerful new development.

Hao's report suggests that the age of deep learning is finally drawing to a close, but the next big thing may have already arrived. Reinforcement learning rewards and punishes a system as it experiments, not unlike the way dogs and babies learn about the world; a related approach, generative adversarial networks (GANs), pits two neural nets against one another, with one evaluating the other's work and the rewards distributed accordingly.
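As a rough illustration of that adversarial setup, the sketch below trains a toy GAN in PyTorch: a generator learns to forge samples from a simple bell-curve distribution, while a discriminator is scored on telling real samples from forgeries. The tiny architectures, the stand-in "data," and the step counts are assumptions made for this example, not details from Hao's analysis.

    import torch
    from torch import nn

    # Two competing networks: a forger and a judge.
    generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                                  nn.Linear(16, 1), nn.Sigmoid())

    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(64, 1) * 0.5 + 3.0    # samples of "real" data
        fake = generator(torch.randn(64, 8))     # the generator's forgeries

        # Reward the discriminator for labeling real as real, fake as fake.
        d_opt.zero_grad()
        d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
                 loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
        d_loss.backward()
        d_opt.step()

        # Reward the generator for fooling the discriminator.
        g_opt.zero_grad()
        g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
        g_loss.backward()
        g_opt.step()

The generator never sees the real data directly; it improves only by studying the judge's verdicts, which is what makes the setup adversarial rather than ordinary supervised learning.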

The future of AI could be in structured learning. Just as young humans are thought to learn their first languages by processing input from fluent caretakers through an internal language grammar, computers can also be taught how to teach themselves a task, especially if the task is to imitate a human in some capacity.

This decade, artificial intelligence went from being employed chiefly as an academic subject or science fiction trope to an unobtrusive (though occasionally malicious) everyday companion. AIs have been around in some form since the 1500s or the 1980s, depending on your definition. Search engines like AltaVista were indexing the web by 1995, but it wasn't until 2010 that Google quietly introduced personalized search results for all users and all searches. What was once background chatter from eager engineers has now become an inescapable part of daily life.

One function after another has been turned over to AI jurisdiction, with huge variations in efficacy and consumer response. The prevailing profit model for most consumer-facing applications, like social media platforms and mapping tools, is for users to trade their personal data for minor convenience upgrades. Those upgrades are achieved through a combination of technical power, data access, and rapid worker disenfranchisement, as increasingly complex service jobs are doubled up, automated away, or handed over to AI workers.

The Harvard social scientist Shoshana Zuboff explained the impact of these technologies on the economy with the term "surveillance capitalism." This new economic system, she wrote, "unilaterally claims human experience as free raw material for translation into behavioural data," in a bid to make a profit from informed gambling on predicted human behavior.

We're already using machine learning to make subjective decisions, even ones that have life-altering consequences. Medical applications are among the least controversial uses of artificial intelligence; by the end of the decade, AIs were locating stranded victims of Hurricane Maria, controlling the German power grid, and killing civilians in Pakistan.

The sheer scope of these AI-controlled decision systems is why automation has the potential to transform society on a structural level. In 2012, techno-sociologist Zeynep Tufekci pointed out that the Obama reelection campaign employed an unprecedented number of data analysts and social scientists, bringing the traditional confluence of marketing and politics into a new age.

Intelligence that relies on data from an unjust world suffers from the principle of "garbage in, garbage out," futurist Cory Doctorow observed in a recent blog post. Diverse perspectives on the design team would help, Doctorow wrote, but when it comes to certain technologies, there might be no safe way to deploy them at all.

It doesn't help that data collection for image-based AI has so far taken advantage of the most vulnerable populations first. The Facial Recognition Verification Testing Program is the industry standard for testing the accuracy of facial recognition tech; passing the program is imperative for new facial recognition startups seeking funding.

But the data sets of human faces that the program uses are sourced, according to a report from March, from images of U.S. visa applicants, arrested people who have since died, and children exploited by child pornography. The report found that the majority of data subjects were people who had been arrested on suspicion of criminal activity. None of the millions of faces in the program's data sets belonged to people who had consented to this use of their data.

State-level efforts to regulate AI finally emerged this decade, with some success. The European Union's General Data Protection Regulation (GDPR), enforceable since 2018, limits the legal uses of valuable AI training data sets by defining the rights of the data subject (read: us); the GDPR also pushes back against the black-box model of machine learning, requiring transparency and accountability in how data are stored and used. At the end of the decade, Google showed the class how not to self-regulate when it built an external AI ethics panel, then scrapped it a week later, feigning shock at all the negative reception.

Even attempted regulation is a good sign. It means we're looking at AI for what it is: not a new life form that competes with us for resources, but a formidable weapon. Technological tools are most dangerous in the hands of malicious actors who already hold significant power; you can always hire more programmers. During the long campaign for the 2016 U.S. presidential election, the Putin-backed IRA Twitter botnet campaigns (essentially, teams of semi-supervised bot accounts that spread disinformation on purpose and learn from real propaganda) infiltrated the very mechanics of American democracy.

Keeping up with AI capacities as they grow will be a massive undertaking. Things could still get much, much worse before they get better; authoritarian governments around the world have a tendency to use technology to further consolidate power and resist regulation.

Tech capabilities have long since proved too fast for traditional human lawmakers, but one hint of what the next decade might hold comes from AIs themselves, which are beginning to be deployed as weapons against the exact type of disinformation other AIs help to create and spread. There now exists, for example, a neural net devoted explicitly to the task of identifying machine-generated disinformation. The neural net's name is Grover, and it's really good at the job.
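Grover itself is a large neural language model trained to both write and spot machine-generated news, but the core idea, treating "did a machine write this?" as a classification problem, can be sketched with far humbler tools. The toy below is an illustration of that framing and not Grover: a word-statistics classifier fit on a handful of made-up snippets, using scikit-learn.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Made-up snippets, labeled 1 if "machine-generated," 0 if human-written.
    texts = [
        "breaking: officials confirm the event happened as officials confirm",
        "the event happened as described by several independent witnesses",
        "experts say experts say the new policy is a policy that experts say",
        "the new policy drew mixed reactions from residents at the town hall",
    ]
    labels = [1, 0, 1, 0]

    # Turn each snippet into word statistics, then fit a simple classifier.
    detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                             LogisticRegression())
    detector.fit(texts, labels)

    print(detector.predict(["officials confirm officials confirm the report"]))

A real detector like Grover relies on a full neural language model rather than word counts, but the classification framing is the same.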
