
Archive for the ‘Alphazero’ Category

What Brains of the Past Teach Us About the AI of the Future – Next Big Idea Club Magazine

Posted: November 26, 2023 at 2:49 am


without comments

Max Bennett is the co-founder and CEO of Alby, a start-up that helps companies integrate large language models into their websites to create guided shopping and search experiences. Bennett holds several patents for AI technologies and has published numerous scientific papers in peer-reviewed journals on the topics of evolutionary neuroscience and the neocortex. He has been featured on the Forbes 30 Under 30 list as well as Built In NYC's 30 Tech Leaders Under 30.

We have been trying to understand the brain for centuries, and yet we still don't have satisfying answers. The problem is that the brain is really complicated. The brain contains over 86 billion neurons and over 100 trillion connections, all wired together in a tangled mess. Within a cubic millimeter of the brain, which is about the width of a single letter on a penny, there are over a billion connections. Even if we mapped all 100 trillion connections, we still wouldn't know how the brain works.

The fact that two neurons connect to each other doesn't tell us much about what they are communicating: neurons pass hundreds of different chemical signals across these connections, each with unique effects. Worst of all, this is made even more challenging by the fact that evolution doesn't design systems in coherent ways: there are duplicated, redundant, overlapping, and vestigial circuits that obscure how different brain systems fit together.

These problems have proven so difficult that some neuroscientists believe it will be many more centuries before we ever make sense of the brain.

But there is an alternative approach, one that searches for answers not in the human brain, but within fossils, genes, and the brains of the many other animals that populate our planet. In recent years, scientists have made incredible progress reconstructing the brains and intellectual faculties of our ancestors. This emerging research presents a never-before-possible approach to understanding the brain. Instead of trying to reverse-engineer the complicated modern human brain, we can start by rolling back the evolutionary clock to reverse-engineer the much simpler first brain. We can then track the changes forward in time, observing each brain modification that occurred and how it worked. If we keep tracking this story forward from the simple beginnings through each incremental increase in complexity, we might finally be able to make sense of the magical device in our heads.

As the evidence continues to roll in, a story has begun to reveal itself. The first brain evolved over 600 million years ago; one might think that over such an astronomical amount of time, the story of brain evolution would contain so many small changes that it would be impossible to fit into a single book. But instead, amazingly, it turns out that the main reconfiguration of brains occurred in only five key steps, referred to as the five breakthroughs.

Each breakthrough emerged from a new set of brain modifications and gifted our ancestors with a new suite of intellectual faculties.

Each breakthrough was built on the foundation of those that came before. Just as the ancestors of lizards took fish-like fins and reconfigured them into feet to enable walking, and the ancestors of birds took those same feet and reconfigured them into wings to enable flying, brain evolution too worked by repurposing the available biological building blocks to face new challenges and enable new feats.

If we want to understand the human brain, and what is missing in current AI systems, the framework of these five breakthroughs offers a wonderfully instructive and simplifying approach.

Before brains evolved, animals didn't move around much. They were much like today's sea anemones and coral: they waited for food particles to come to them, at which point they would snatch the food out of the water with their tentacles. But they did not actively pursue prey or avoid predators.

However, around 600 million years ago, our ancestors evolved into a small worm-like creature the size of a grain of rice. These worm-like ancestors were the first animals to survive by moving towards food and moving away from danger. Not so coincidentally, these were the first animals to have brains.

This worm had no eyes or ears; it perceived the world only through a small portfolio of individual sensory neurons, each of which detected something vague about the outside world. Some neurons were activated by the presence of light, others by the presence of specific smells. Despite perceiving almost nothing detailed about the external world, these worms could still navigate using a clever technique called steering. This was the first breakthrough.

When a piece of food is placed in water, molecules fall off of it and disperse throughout its surroundings. This produces what is called a smell gradient, where the concentration of these molecules is high directly around the food source and becomes progressively lower the further away from the food source you get. It is this physical fact that evolution exploited to enable the first form of navigation.

The first brains had two primary motor programs: one for moving forward, and one for turning. Although these worms couldn't see, they could find the source of a food smell by applying two simple rules: whenever the concentration of the smell increases, keep going forward; whenever it decreases, turn randomly. Because of how smell gradients work, a worm that keeps applying this algorithm will eventually make its way to the source of the food smell.
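These two rules amount to a run-and-tumble search, and they are simple enough to simulate. The sketch below is purely illustrative: the 1/(1 + distance) smell field, the step sizes, and the function names are all invented for the example, not taken from the book.

```python
import math
import random

def smell(pos, source):
    """Invented smell field: concentration falls off with distance from the food."""
    return 1.0 / (1.0 + math.dist(pos, source))

def steer(source, steps=2000, step_size=0.1, seed=0):
    """Rule 1: keep going forward while the smell gets stronger.
    Rule 2: turn to a random new heading whenever it gets weaker."""
    rng = random.Random(seed)
    pos = [0.0, 0.0]
    heading = rng.uniform(0.0, 2.0 * math.pi)
    last = smell(pos, source)
    for _ in range(steps):
        pos[0] += step_size * math.cos(heading)
        pos[1] += step_size * math.sin(heading)
        now = smell(pos, source)
        if now < last:  # smell weakened: tumble to a random heading
            heading = rng.uniform(0.0, 2.0 * math.pi)
        last = now
    return pos

food = (5.0, 3.0)
end = steer(food)
# The worm starts about 5.8 units from the food and ends up close to it.
print(math.dist(end, food))
```

Despite having no sense of direction at all, the simulated worm homes in on the source, which is the whole point of the breakthrough: navigation from almost no sensory information.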

In other words, steering worked by categorizing things in the world into good and bad: worms steer towards good things like food smells and away from bad things like predator smells. This was the function of the first brain, and from it emerged many familiar features of intelligence, from associative learning to emotional states.

There are many debates about what the final steps are on the road to human-like artificial intelligence. From the perspective of the five breakthroughs, what is missing is not the first breakthroughs in the evolution of the human brain (steering and reinforcement learning), nor the most recent breakthrough, which was language. Instead, AI systems have skipped the breakthroughs that evolved halfway through our brains' journey; we have missed the breakthroughs that emerged in early mammals and primates.

Early mammals emerged 150 million years ago, as small squirrel-like creatures in a world filled with massive predatory dinosaurs. They survived by burrowing underground and emerging only at night to hunt for insects. From the crucible of this incredible pressure to survive was forged a new brain region called the neocortex. The neocortex enabled these early mammals to imagine the future and remember the past, in other words, to simulate a state of the world that is not the current one.

This was the breakthrough of simulation. It enabled these animals to plan their actions ahead of time. It enabled our squirrel-like ancestors to peek out from their burrow, spot nearby predators, and simulate whether or not they could successfully make a dash across the forest floor without getting caught. Simulation also gifted these mammals fine motor skills, as they could plan their body movements ahead of time, effortlessly figuring out where to place their paws to balance themselves and jump between tree branches. This is why lizards and turtles, lacking a neocortex, move slowly and clumsily on the forest floor, while mammals like squirrels and monkeys crack open nuts and climb in trees.

To accomplish all this, the neocortex creates an internal representation of the external world, what AI researchers call a world model. The world model in the neocortex contains enough details of how the world actually works that animals can imagine themselves doing something and accurately predict the consequences of their actions. In order for a mouse to imagine itself running down a path and correctly predict whether a nearby predator will catch it before it gets to safety, its imagination needs to accurately capture the nuances of physics: speed, space, and time.
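As a toy illustration of the kind of prediction such a world model supports, the mouse's question reduces to comparing two travel times. The numbers and the function below are invented for the example; this is physics bookkeeping, not neuroscience.

```python
def reaches_safety_first(mouse_dist, mouse_speed, pred_dist, pred_speed):
    """Imagined rollout: the dash is safe if the mouse reaches the burrow
    before the predator can cover the ground to intercept it there."""
    return mouse_dist / mouse_speed < pred_dist / pred_speed

# Safe: the mouse needs 2 s, the predator needs 2.5 s.
print(reaches_safety_first(4.0, 2.0, 10.0, 4.0))   # True
# Not safe: the mouse needs 5 s, the predator only 1 s.
print(reaches_safety_first(10.0, 2.0, 4.0, 4.0))   # False
```

The hard part, of course, is not the arithmetic but learning a model of speed, space, and time accurate enough to plug into it.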

We already have AI systems that can make plans and simulate potential future actions, the most famous modern example being AlphaZero, the AI system that recently beat the best Go and chess players in the world. AlphaZero works, in part, by playing out possible future moves before deciding what to do. But AlphaZero and other AI systems still cant engage in reliable planning in real-world settings, outside of the constrained and simplified conditions of a board game.
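AlphaZero's real search pairs a learned neural network with Monte Carlo tree search, which is far more than a few lines, but the core idea of deciding by playing out possible futures can be sketched with exhaustive lookahead on a toy take-away game (a pile of stones, take 1 or 2 per turn, whoever takes the last stone wins). Everything here is illustrative, not AlphaZero's algorithm.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def value(stones):
    """+1 if the player to move can force a win, -1 otherwise, computed by
    playing out every possible future (plain minimax on the toy game)."""
    if stones == 0:
        return -1  # the previous player took the last stone and already won
    return max(-value(stones - take) for take in (1, 2) if take <= stones)

def best_move(stones):
    """Simulate each legal move and pick the one with the best outcome."""
    return max((take for take in (1, 2) if take <= stones),
               key=lambda take: -value(stones - take))

print(value(7), best_move(7))  # prints "1 1": take one stone, leaving a losing 6
```

Board games permit this kind of exhaustive or sampled lookahead because the rules, the legal moves, and the outcomes are all perfectly known; the paragraph that follows explains why the real world offers none of those luxuries.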

In real-world settings, planning requires dealing with imperfect noisy information, an infinite space of possible next actions, and ever-changing internal needs. A squirrel dashing from one tree to the next has, literally, an infinite number of possible actions to take, from the low-level choices of exactly where to place each individual paw, to the higher-level choices of exactly which path to take. How the neocortex enables mammals to plan in such complex environments is still beyond our understanding; this is why we do not yet have robots that can wash our dishes and do our laundry, the secret to which lives within the minuscule brains of squirrels and rats and all the other mammals in the animal kingdom.

One of the key problems in the field of AI alignment is ensuring that AI systems understand the requests that we make of them. This has also been called the paperclip problem, after Nick Bostrom's allegory of asking an AI system to run a paperclip factory as efficiently as possible, at which point his imagined AI system goes on to convert all of Earth into paperclips. This thought experiment reveals that AI can be dangerous even without being intentionally nefarious: the AI system did exactly what we told it to do, but failed to infer the true intent of our request and our actual preferences. The paperclip problem is one of the biggest outstanding challenges in the field of AI safety.

When humans speak to each other, we automatically infer the intent of each other's words. This ability was part of the fourth breakthrough, the breakthrough of mentalizing. It emerges from parts of the neocortex that appeared with early primates. These primate areas endow monkeys and apes with the ability to simulate not only the external world but also their own inner simulation itself, enabling them to think about their own thinking and the thinking of others.

Early primates got caught in a political arms race; their reproductive success was defined by their ability to build alliances, climb political hierarchies, and cozy up to those with high status. We see this in the social groups of modern nonhuman primates like chimpanzees, bonobos, and monkeys. The most powerful tool for surviving the political world of primate life was mentalizing, which enables primates to predict the consequences of their social choices, to imagine themselves in other people's shoes, and to infer how others might feel, what they might do, and what they want.

The new areas of the neocortex in primates contain the algorithmic blueprint for how to build AI systems that do the same. One way or another, in order to create safe AI systems, we will have to endow these systems with a reliable understanding of how the human mind works, without which our AI systems will always risk accidentally weaponizing an innocuous request like optimizing a paperclip factory into a world-ending cataclysm.

To listen to the audio version read by author Max Bennett, download the Next Big Idea App today:

Here is the original post:

What Brains of the Past Teach Us About the AI of the Future - Next Big Idea Club Magazine

Written by admin

November 26th, 2023 at 2:49 am

Posted in Alphazero

Personality traits and decision-making styles among obstetricians … – Nature.com

Posted: April 6, 2023 at 12:11 am


without comments


Link:

Personality traits and decision-making styles among obstetricians ... - Nature.com

Written by admin

April 6th, 2023 at 12:11 am

Posted in Alphazero

MPL 59th National Senior R3: The Systematic Pawn Structure … – ChessBase India

Posted: December 29, 2022 at 12:17 am


without comments

Three GMs, four IMs and one FM have made a hat-trick start of 3/3. They are: GM Sethuraman S P (PSPB), GM Abhijeet Gupta (PSPB), GM Iniyan P (TN), IM Aronyak Ghosh (RSPB), IM Koustav Chatterjee (WB), IM Harshavardhan G B (TN), IM Nitin S (RSPB) and FM Vedant Panesar (MAH). Who will be among the leaders after the fourth round?

IM Nitin S scored a fantastic victory against GM Leon Luke Mendonca | Photo: Aditya Sur Roy

IM Nitin S (2372) traded the queens on the eleventh move in the Caro-Kann against GM Leon Luke Mendonca (2566). The former started fragmenting Black's pawn structure and kept at it.

Position after 27.e5!

Black has six pawns, four pawn islands, two isolated pawns and two isolated doubled pawns: exactly the kind of pawn structure one should not have. White found the perfect 27.e5!, even though 27.Rxf4 was fine too, with the idea of e5 on the next move. What followed next was the attraction of the black king towards White's side of the board. 27...fxe5 28.Nxe5+ Ke6 29.Nxc6 Rc8 30.Re1+ Kf6 31.Rxf4+ Kg5 32.Rf2 Kg4 33.Ne5+ Kg3 34.Rf3+ Kg2 35.Re2+ Kg1 36.Rd3 and Black resigned, as Rd1# is unstoppable.

Final position after 36.Rd3

Position after 36...Rh8

White's king is much safer than Black's. Keeping that in mind, find out how White could have finished things off here. The position certainly screams that something has got to give.

Position after 48...Qh6

Sometimes it becomes difficult for a player to accept a draw, even in a drawn position, because he has already conceded a draw against another relatively lower-rated player. The reason behind it is quite simple: the current Elo rating system does not favor adults. Thus, the desperation to score a win increases, resulting in human errors. 48...Qh6 was uncalled for. Black has zero breakthroughs; his pieces act like furniture, much like White's dark-squared bishop. Just keeping the black queen on the back rank is enough to draw the game. 48...Qh6 invited trouble. White did not notice it at first with 49.Bf1, and then ...Kc8 made it all the more obvious. Find out why Black's last two moves were erroneous.

IM Nitin S (RSPB) - GM Leon Luke Mendonca (Goa): 1-0

IM Vardaan Nagpal(HAR) - GM Karthik Venkataraman (AP): 0.5-0.5

Subhayan Kundu (WB) - GM Mitrabha Guha (WB): 0.5-0.5

GM Deep Sengupta (PSPB) - Utkal Ranjan Sahoo (ODI): 0.5-0.5

IM Mehar Chinna Reddy C H (RSPB) - GM Karthikeyan P (RSPB): 0.5-0.5

GM Neelotpal Das (RSPB) - FM Ritvik Krishnan (MAH): 0.5-0.5

IM Avinash Ramesh (TN) - GM Shyam Sundar M (TN): 0.5-0.5

FM Anees M (TN) - IM Vignesh N R (RSPB): 0.5-0.5

GM Venkatesh M R (PSPB) - CM Aaditya Dhingra (HAR): 0.5-0.5

Shreyansh Daklia (CHT) - IM Neelash Saha (WB): 1-0

IM Srihari L R (TN) - Kartavya Anadkat (GUJ): 0.5-0.5

CM Gaurang Bagwe (MAH) - IM Ameya Audi (Goa): 0.5-0.5

Kishan Gangolli (KAR) - GM Laxman R R (RSPB): 0.5-0.5

GM Deepan Chakkravarthy (RSPB) - Laishram Imocha (PSPB): 0-1

S Badrinath (PUD) - IM Arghyadip Das (RSPB): 0.5-0.5

Rupam Mukherjee (WB) - IM D K Sharma (LIC): 0.5-0.5

A total of 196 players including 18 GMs and 27 IMs are taking part in this tournament organized by the Delhi Chess Association. The event is taking place in New Delhi from 22nd December 2022 to 3rd January 2023. The 13-round Swiss league tournament has a time control of 90 minutes for 40 moves, followed by 30 minutes, with an increment of 30 seconds from move no. 1.

Details


Delhi Chess Association

Tournament Regulations

The rest is here:

MPL 59th National Senior R3: The Systematic Pawn Structure ... - ChessBase India

Written by admin

December 29th, 2022 at 12:17 am

Posted in Alphazero

Newspoll quarterly aggregates: July to December (open thread … – The Poll Bludger

Posted: at 12:17 am


without comments

Relatively modest leads for the Coalition among Queenslanders, Christians and those 65-and-over, with Labor dominant everywhere else.

As it usually does on Boxing Day, The Australian has published quarterly aggregates of Newspoll with state and demographic breakdowns, on this occasion casting an unusually wide net, from July all the way to early this month, reflecting the relative infrequency of its results over this time. The result is a combined survey of 5771 respondents that finds Labor leading 55-45 in New South Wales (a swing of about 3.5% to Labor compared with the election), 57-43 in Victoria (about 2%), 55-45 in Western Australia (no change) and 57-43 in South Australia (a 4% swing), while trailing 51-49 in Queensland (a 3% swing).

Gender breakdowns show only a slight gap, with Labor leading 54-46 among men and 56-44 among women, and with the Greens, as usual, stronger among women than men. Age cohort results trend from 65-35 to Labor among the 18-to-34 cohort to 54-46 to the Coalition among the 65-plus, with the Greens respectively on 24% and 3%. Little variation is recorded according to education or income, but Labor are strongest among part-time workers and weakest among the retired, stronger among non-English speakers but well ahead either way, and 62-38 ahead among those identifying as of no religion but 53-47 behind among Christians. You can find all the relevant data, at least for voting intention, in the poll data feature on BludgerTrack.

William Bowe is a Perth-based election analyst and occasional teacher of political science. His blog, The Poll Bludger, has existed in one form or another since 2004, and is one of the most heavily trafficked websites on Australian politics.

More:

Newspoll quarterly aggregates: July to December (open thread ... - The Poll Bludger

Written by admin

December 29th, 2022 at 12:17 am

Posted in Alphazero

AI now not only debates with humans but negotiates and cajoles too – Mint

Posted: November 26, 2022 at 12:26 am


without comments

In development since 2012, Project Debater was touted as IBM's next big milestone for AI. Aimed at helping people make evidence-based decisions "when the answers aren't black-and-white," it doesn't just learn a topic but can debate unfamiliar topics too, as long as they are covered in the massive corpus that the system mines, which includes hundreds of millions of articles from numerous well-known newspapers and magazines. The system uses the Watson Speech to Text API (application programming interface). Project Debater's underlying technologies are also being used in IBM Cloud and IBM Watson.


Interestingly, a year later at Think 2019 in San Francisco, IBM's Project Debater lost an argument in a live, public debate with a human champion, Harish Natarajan. They were arguing for and against the resolution "We should subsidize preschool". Both sides had only 15 minutes to prepare their speech, following which they delivered a four-minute opening statement, a four-minute rebuttal, and a two-minute summary. The winner of the event was determined by which side the audience found more persuasive. But even though Natarajan was declared the winner, 58% of the audience said Project Debater "better enriched their knowledge about the topic at hand", compared to Harish's 20%.

Raising the bar

Meta (formerly Facebook) appears to have gone a step further. On Tuesday, it announced that CICERO is the first AI "to achieve human-level performance in the popular strategy game Diplomacy". CICERO demonstrated this by playing on webDiplomacy.net, an online version of the game, where it achieved more than double the average score of the human players and ranked in the top 10% of participants who played more than one game. Marcus Tullius Cicero was a Roman writer, orator, lawyer and politician, all bundled into one.

Meta explains that unlike games like chess and Go, Diplomacy requires an agent to recognize that someone is likely bluffing, or that another player would see a certain move as aggressive, failing which it will lose. Likewise, it has to talk like a real person, displaying empathy, building relationships, and speaking knowledgeably about the game, failing which it won't find other players willing to work with it. To achieve these goals, Meta used both strategic reasoning, as used in agents such as AlphaGo and Pluribus, and natural language processing (NLP), as used in models like GPT-3, BlenderBot 3, LaMDA, and OPT-175B.

Meta has open-sourced the code and published a paper to help the wider AI community use CICERO to "spur further progress in human-AI cooperation".

How CICERO works

CICERO continuously looks at the game board to understand and model how the other players are likely to act, and then uses this framework to control a language model that "can generate free-form dialogue, informing other players of its plans and proposing reasonable actions for the other players that coordinate well with them". Meta started with a 2.7-billion-parameter BART-like language model that was pre-trained on text from the internet and fine-tuned on over 40,000 human games on webDiplomacy.net. It also developed techniques to automatically annotate messages in the training data with corresponding planned moves in the game, the idea being to control dialogue generation while persuading other players more effectively. In short, CICERO first predicts what everyone will do; second, it refines that prediction using planning; third, it generates several candidate messages based on the board state, the dialogue, and its intents; and fourth, it filters those messages to reduce gibberish and unrelated comments.
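Those four steps can be caricatured in a few lines. Everything below is invented for illustration and bears no relation to Meta's actual code, which uses a large language model and the piKL planner rather than these toy stand-ins.

```python
def predict_moves(board):
    # Step 1: predict what each player is likely to do (naive fixed guess).
    return {player: "hold" for player in board["players"]}

def refine_with_planning(predicted):
    # Step 2: refine the prediction with planning; this toy planner only
    # upgrades CICERO's own intended move.
    intents = dict(predicted)
    intents["CICERO"] = "move A-B"
    return intents

def generate_candidates(intents):
    # Step 3: draft free-form messages proposing coordinated actions.
    return [f"I plan to {intents['CICERO']}; will you support?",
            "asdf qwerty"]  # a gibberish candidate the filter should drop

def filter_messages(candidates, intents):
    # Step 4: keep only messages consistent with the planned intent.
    return [m for m in candidates if intents["CICERO"] in m]

board = {"players": ["CICERO", "France", "England"]}
intents = refine_with_planning(predict_moves(board))
messages = filter_messages(generate_candidates(intents), intents)
print(intents["CICERO"], messages)
```

The point of the architecture is the coupling: the planner's intents both condition what the language model says and screen out messages that would contradict the plan.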

AI-powered machines have been pitted against humans continuously over the last decade and more. IBM's Deep Blue supercomputing system, for instance, beat chess grandmaster Garry Kasparov in 1996-97, and its Watson supercomputing system even beat Jeopardy! champions in 2011.

In March 2016, Alphabet-owned AI firm DeepMind's computer programme, AlphaGo, beat Go champion Lee Sedol. On 7 December 2017, AlphaZero, modelled on AlphaGo, took just four hours to learn all the rules of chess and master the game well enough to defeat the world's strongest open-source chess engine, Stockfish. The AlphaZero algorithm is a more generic version of the AlphaGo Zero algorithm. It uses reinforcement learning, a training method based on rewards and punishments rather than labeled examples: AlphaGo Zero did not need to train on human amateur and professional games to learn how to play the ancient Chinese game of Go. Further, it not only learnt from AlphaGo, then the world's strongest player of Go, but also defeated it in October 2017.

A year later, in July 2018, AI bots beat humans at the video game Dota 2. Published by Valve Corp., Dota 2 is a free-to-play multiplayer online battle arena video game and one of the most popular and complex e-sports games. Professionals train throughout the year to earn part of Dota's annual $40 million prize pool, the largest of any e-sports game; hence, a machine beating such players underscores the power of AI. The AI bots, though, went on to lose to top professional players at Dota 2, which has been actively developed for over a decade, with the game logic implemented in hundreds of thousands of lines of code. This logic takes milliseconds per tick to execute, versus nanoseconds for chess or Go engines. The game is updated about once every two weeks.

What it means for humans

What distinguishes IBM's Project Debater and Meta's CICERO is that they must predict and model what humans would actually do in real life. This means they cannot rely solely on supervised learning, where the agent is trained on labeled data such as a database of human players' actions in past games. Meta explains that CICERO runs an iterative planning algorithm called piKL, which "balances dialogue consistency with rationality".

CICERO, as Meta acknowledges, is a work in progress. For now, it is only capable of playing Diplomacy. However, the underlying technology is relevant to many real-world applications, Meta suggests: "Controlling natural language generation via planning and RL (reinforcement learning), could, for example, ease communication barriers between humans and AI-powered agents. For instance, today's AI assistants excel at simple question-answering tasks, like telling you the weather, but what if they could maintain a long-term conversation with the goal of teaching you a new skill? Alternatively, imagine a video game in which the non-player characters (NPCs) could plan and converse like people do, understanding your motivations and adapting the conversation accordingly to help you on your quest of storming the castle."

It's clear from these developments that this is not the last we're hearing from AI-powered machines. The game will continue, and so will mutual learning.


See more here:

AI now not only debates with humans but negotiates and cajoles too - Mint

Written by admin

November 26th, 2022 at 12:26 am

Posted in Alphazero

Quest Pro is here, Google and Valve report back – MIXED Reality News

Posted: October 20, 2022 at 1:45 am


without comments

Image: Meta / MIXED


Our weekly recap: Meta week is over and, as expected, brought Quest Pro and a lot of news. Coincidentally, Google is also talking about telepresence again.

Quest Pro hasn't been a secret for a long time; now Meta's mixed reality headset is finally official. For around $1,500, you get a headset packed with technology that can do both VR and AR. The first testers are unsure about Quest Pro's target group. Its most interesting new feature, passthrough AR, is rated rather critically because the video quality falls short of expectations.

As expected, Quest Pro will not be sold in Germany for now. Meta CTO Andrew Bosworth said on Instagram that Meta has a plan, and that German consumers should tell regulators they want to buy Meta devices in Germany. Meanwhile, the German Federal Cartel Office says it is in talks with Meta about the new Meta accounts.

In addition to Quest Pro, Meta Connect brought news about Microsoft software on Meta hardware, avatars with legs, and Horizon Worlds, which, according to internal documents, is significantly behind Meta's plans in terms of active users.

If anyone was still looking for confirmation that Microsoft CEO Satya Nadella is serious about a software-led metaverse strategy, they're getting it in the form of more leaks surrounding HoloLens. The military version, IVAS, is still under heavy fire from the US Army: some soldiers are said to be worried for their lives because the headset displays are so bright that they can be spotted from far away.

According to insiders, there is no roadmap to speak of for a commercial HoloLens. After the departure of mixed reality chief Alex Kipman, an XR vacuum has apparently arisen at the Redmond software giant. It looks like Microsoft is largely keeping its hands off tech glasses for now, and Nadella's appearance at Meta Connect fits that bill.

In Meta's big telepresence VR week, Google, surely not coincidentally, is back with its Starline holo-telephony booth. Google positions Starline as a glasses-free telepresence alternative that is now being rolled out at its first corporate customers.

Two testers are enthusiastic about the photo-realistic 3D video calls but have concerns about the booth's size and price. With its current technical equipment, Starline is still far from being an option for private customers.

While Microsoft looks to exit XR hardware, Valve is apparently looking to get back in after its handheld hiatus: according to a job ad, the gaming company wants to take the next steps in VR and is hiring computer vision specialists to do so. Valve is developing new tracking technologies, which could in turn benefit a standalone PC-hybrid headset, assuming Valve follows through with the project.

A developer animated a digital design assistant using Epic's digital human kit, then brought her to life with three networked AI systems: the virtual assistant generates images on demand and independently asks about motif details, our AI sister magazine THE DECODER reports.

Also in THE DECODER: four practical WordPress plugins with AI generation, and bigger is better, at least with DeepMind's AlphaZero.

Note: Links to online stores in articles can be affiliate links. If you buy through such a link, MIXED receives a commission from the provider. The price does not change for you.

See the original post:

Quest Pro is here, Google and Valve report back - MIXED Reality News

Written by admin

October 20th, 2022 at 1:45 am

Posted in Alphazero

How AI is impacting the video game industry – ZME Science

Posted: December 15, 2021 at 1:56 am


without comments

We've long been used to playing games; artificial intelligence holds the promise of games that play along with us.

Artificial intelligence (AI for short) is undoubtedly one of the hottest topics of the last few years. From facial recognition to high-powered finance applications, it is quickly embedding itself throughout all the layers of our lives, and our societies.

Video gaming, a particularly tech-savvy domain, is no stranger to AI, either. So what can we expect to see in the future?

Maybe the most exciting prospect for AI in our games is what it opens up in interactions between the player and the software. AI systems can be deployed inside games to study and learn the patterns of individual players, and then deliver a tailored response to improve their experience. In other words, just as you're learning to play against the game, the game may be learning how to play against you.
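A minimal version of such player modelling fits in a few lines. The sketch below is illustrative, not from any shipping game: it counts a player's habits in rock-paper-scissors and counters the favourite move:

```python
from collections import Counter

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class AdaptiveOpponent:
    """Tracks a player's habits and counters their favourite move."""
    def __init__(self):
        self.history = Counter()

    def observe(self, player_move):
        self.history[player_move] += 1

    def respond(self):
        if not self.history:
            return "rock"              # arbitrary opening before any data
        favourite = self.history.most_common(1)[0][0]
        return BEATS[favourite]        # play whatever beats the favourite

ai = AdaptiveOpponent()
for move in ["rock", "rock", "scissors", "rock"]:
    ai.observe(move)                   # the game "studies" this player
```

A real game would model richer behaviour (routes taken, combat timing, risk tolerance), but the loop is the same: observe, update a model, respond.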

One telling example is Monolith's use of AI elements in its Middle-earth series. Dubbed the Nemesis system, this mechanic was designed to let opponents throughout the game learn the player's particular combat patterns and style, and to remember the occasions on which they fought. These opponents reappear at various points throughout the game, recounting their encounters with the player and providing more difficult (and, the developers hope, more entertaining) fights.

An arguably simpler but no less powerful example of AI in gaming is AI Dungeon: this text-based dungeon adventure uses GPT-3, OpenAI's natural language model, to create ongoing narratives for players to enjoy.

It's easy to let the final product of the video game development process steal the spotlight. Although everything runs seamlessly on screen, a lot of work goes into creating a game. Any well-coded and well-thought-out game requires a great deal of time, effort, and love, which, in practical terms, translates into costs.

AI can help in this regard as well. Tools such as procedural generation can automate some of the more time- and effort-intensive parts of game development, such as asset production. Knowing that run-of-the-mill processes can be handled well by software helpers frees human artists and developers to focus on the more important details of their games.

Automating asset production can also open the way to games that are completely new every time you play them, with freshly generated maps or characters, for example.
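A seed-driven generator is the classic way to get a fresh level every playthrough while keeping any given level reproducible for replays and bug reports. A minimal sketch, with arbitrary tile symbols and parameters:

```python
import random

def generate_map(seed, width=8, height=8, wall_chance=0.3):
    """Deterministically generate a tile map from a seed: '#' wall, '.' floor."""
    rng = random.Random(seed)  # a private stream, so the seed fully decides the map
    return ["".join("#" if rng.random() < wall_chance else "."
                    for _ in range(width))
            for _ in range(height)]

# same seed -> the same level every time; a new seed -> a fresh level
level_one = generate_map(seed=42)
```

Real procedural systems layer rules on top of the randomness (connectivity checks, difficulty curves), but the seed-determines-content principle is the same.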

For now, AI is still limited in the quality of writing it can output, which is a definite limitation here; after all, great games are always built on great ideas or great narratives.

"Better graphics" has long been a rallying cry of the gaming industry, and for good reason: we all enjoy a good show. AI can help push the limits of what is possible in this regard.

For starters, machine learning can be used to develop completely new textures, on the fly, for almost no cost. With enough processing power, it can even be done in real time, as a player journeys through their digital world. Lighting and reflections can also be handled more realistically, or stylized more freely, by AI systems than by simple scripted code.

Facial expressions are another area where AI can help. With enough data, an automated system can produce and animate very life-like human faces. This would also save us the trouble of recording and storing gigabytes worth of facial animations beforehand.

The most significant potential of AI systems in this area, however, is in interactivity. Although graphics today are quite sophisticated and we do not lack eye candy, interactivity is still limited to what a programmer can anticipate and code. AI systems can learn and adapt to players while they are immersed in the game, opening the way to some truly incredible graphical displays.

AI has already made its way into the world of gaming. The cases of AlphaGo and AlphaZero showcase just how powerful such systems can be at a game. And although video games have seen some AI implementation, there is still a long way to go.

For starters, AIs are only as good as the data you train them with, and they need tons and tons of it. The gaming industry needs to produce, source, and store large quantities of reliable data to train its AIs before they can be used inside a game. There's also the question of how exactly to code and train them, and what level of sophistication is best for software that is meant to be playable on most personal computers out there.

That being said, there is no doubt that AI will continue to be woven into our video games. It's very likely that in the not-so-distant future, a game without AI would be considered quite brave and exotic.

Original post:

How AI is impacting the video game industry - ZME Science

Written by admin

December 15th, 2021 at 1:56 am

Posted in Alphazero

What Happened in Reinforcement Learning in 2021 – Analytics India Magazine

Posted: November 14, 2021 at 1:45 am


without comments

One of the most exciting areas in machine learning right now is reinforcement learning. Its application is found in a diverse set of sectors like data processing, robotics, manufacturing, recommender systems, energy, and games, among others.

What makes reinforcement learning (RL) different from other kinds of algorithms is that it does not depend on historical data sets; it learns through trial and error, much like human beings.
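Trial-and-error learning can be shown with the simplest RL setting, a multi-armed bandit. In this illustrative sketch the agent never sees the hidden payoff rates; it discovers the best option purely by trying arms and observing rewards:

```python
import random

# Minimal trial-and-error learning: a three-armed bandit.
# All numbers here are illustrative.
random.seed(1)
true_rates = [0.2, 0.5, 0.8]    # hidden probability that each arm pays out
estimates = [0.0, 0.0, 0.0]     # the agent's running reward estimates
counts = [0, 0, 0]

def pull(arm):
    """One trial: reward 1 with the arm's hidden probability, else 0."""
    return 1.0 if random.random() < true_rates[arm] else 0.0

EPSILON = 0.1                    # fraction of pulls spent exploring
for _ in range(5000):
    if random.random() < EPSILON:
        arm = random.randrange(3)                         # explore at random
    else:
        arm = max(range(3), key=lambda a: estimates[a])   # exploit best guess
    reward = pull(arm)           # a zero here is the "error" in trial and error
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]
```

No historical data set is involved: the estimates are built entirely from the agent's own experience, which is the property the paragraph above describes.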

Given its importance, the last few years have seen an accelerated pace in understanding and improving RL. Think of any big name in tech, be it Facebook, Google, DeepMind, Amazon, or Microsoft: they are all investing significant time, money, and effort in RL innovations.

For robots to be useful to mankind, they need to perform a variety of tasks. But even training for a single task using offline reinforcement learning takes a massive amount of time and computation.

To address this issue, Google came out with MT-Opt and Actionable Models. The first is a multi-task RL system for automated data collection and multi-task RL training; the latter builds on that data, demonstrating a successful application of multi-task RL to episodes of various tasks collected on real robots. Both help robots learn new tasks more quickly.

A leader in the reinforcement learning space, DeepMind gave us some unique innovations this year. It released RGB-stacking as a benchmark for vision-based robotic manipulation. Here, DeepMind used reinforcement learning to train a robotic arm to balance and stack objects of different shapes.

The diversity of objects used and the number of empirical evaluations performed made this reinforcement learning project unique. The learning pipeline was divided into three stages: training in simulation with an off-the-shelf RL algorithm, training a new policy in simulation using only realistic observations, and lastly collecting data with this policy on real robots and distilling an improved policy from it.

The implementation of sequential decision processes is crucial for those working in reinforcement learning. To simplify this, Facebook (now Meta) released SaLinA just a month ago. Built as an extension of PyTorch, it works in both supervised and unsupervised settings and is compatible with multiple CPUs and GPUs, making it suited to large-scale training use cases.

IBM, too, has been active in reinforcement learning in 2021. It released a text-based gaming environment called TextWorld Commonsense (TWC) to work on the problem of infusing RL agents with commonsense knowledge. The environment is used to train and evaluate RL agents with specific commonsense knowledge about objects, their attributes, and affordances, and it tackles sequential decision-making by introducing several baseline RL agents.

In the self-supervised learning area, we saw new methodologies coming out. Google released an approach called Reversibility-Aware RL, which adds a separate reversibility estimation component to the self-supervised RL procedure. Google said this method increases the performance of RL agents on several tasks, including the Sokoban puzzle game.

As reinforcement learning has had a significant impact on games, in the middle of 2021 DeepMind trained agents to play games without intervention using reinforcement learning. Though previous DeepMind systems like AlphaZero beat world-champion programs at chess, shogi and Go, they still trained separately on each game, unable to learn a new one without repeating the RL procedure from the beginning.

Through this method, however, the agents were able to react to new conditions, adapting flexibly to new environments. The core of this research was how deep RL can be used to train the agents' neural networks.

Google has been working on using RL in the gaming domain. In early 2021, it released Evolving Reinforcement Learning Algorithms, which showed how to learn analytically interpretable and generalisable RL algorithms by using a graph representation and applying optimisation techniques from the AutoML community.

It used Regularized Evolution to evolve a population of computational graphs over a set of simple training environments. This helped produce better RL algorithms for complex environments with visual observations, such as Atari games.

With so much happening in the RL space, interest in this area is bound to grow among students and the professional community. To cater to the growing demand, Microsoft organised the Reinforcement Learning (RL) Open Source Fest to introduce students to open source reinforcement learning programs and software development.

Researchers from DeepMind teamed up with the University College London (UCL) to offer students a comprehensive introduction to modern reinforcement learning. It intended to give students a detailed understanding of topics like Markov Decision Processes, sample-based learning algorithms, deep reinforcement learning, etc.

Reinforcement learning and its advancements still have a long way to go, but there has been major progress in the last couple of years. Its usage can be a game-changer for certain industries. With more and more research coming in RL, we can expect to see major breakthroughs in the near future.

Sreejani Bhattacharyya is a journalist with a postgraduate degree in economics. When not writing, she is found reading on geopolitics, economy and philosophy. She can be reached at sreejani.bhattacharyya@analyticsindiamag.com

View original post here:

What Happened in Reinforcement Learning in 2021 - Analytics India Magazine

Written by admin

November 14th, 2021 at 1:45 am

Posted in Alphazero

Artificial Intelligence, and the Future of Work Should We Be Worried? – BBN Times

Posted: October 21, 2021 at 1:46 am


without comments

Artificial intelligence is at the top of many lists of the most important skills in today's job market.

In the last decade or so, we have seen a dramatic transition from an AI winter (where AI did not live up to its hype) to an AI spring (where machines can now outperform humans in a wide range of tasks).

Having spent the last 25 years as an AI researcher and practitioner, I'm often asked about the implications of this technology on the workforce.

I'm quite often disheartened by the amount of disinformation there is on the internet on this topic, so I've decided to share some of my own thoughts.

The difference between what I am about to write and what you may have read elsewhere is due to an inherent bias. Rather than being a pure AI practitioner, my PhD and background are in cognitive science, the scientific study of how the mind works, spanning areas such as psychology, neuroscience, philosophy, and artificial intelligence. My research has been to look explicitly at how the human mind works and to reverse-engineer these processes in the development of artificial intelligence platforms. Hence, I probably have a better understanding than most of the differences and similarities between human and machine intelligence, and how this may play out in the workforce (i.e. which jobs will and will not be replaceable in the future).

So let's begin.

A good place to start this discussion is with the work of Katja Grace and colleagues at the Future of Humanity Institute at the University of Oxford. A few years ago they surveyed the world's leading AI researchers about when they believed machines would outperform humans on a wide range of tasks. The results are below:

Evidently, different predictions were made as to when different types of work could be performed by machines. But in general, there is consensus that there will be major shifts in the workforce over the next 20 years or so.

In the paper, they define high-level machine intelligence as achieved when unaided machines can accomplish every task better and more cheaply than human workers. Aggregating the data, experts believe on average that there is a 50% chance this will be achieved within 45 years. That is, the leading experts in AI believe there is a 50% chance that human labour will be fully redundant in 45 years.

This prediction is unimaginable to most of us. But is it realistic? In the following sections I will answer this question by looking at different types of work. But first, I will explain a little about recent AI advancements.

Until recently we were very much in an AI winter (a term coined by analogy to nuclear winter), with distinct phases of hype followed by disappointment and criticism. The disillusionment was reflected in pessimism in the media and severe cutbacks in funding, resulting in reduced interest in serious research.

This lull has ended in the last decade or so with the success of deep learning, an AI paradigm inspired by how the brain processes information (in short, artificial neural networks that process information in parallel, as opposed to the typical serial processing we see in most computer CPUs).

Deep learning and neural networks have been around for some time. However, it is only recently that our computers have become powerful enough to run these algorithms on real-world problems in real time. For example, today's visual object recognition systems (e.g., Facebook's face recognition) use convolutional neural networks that mimic how the human visual cortex works. Papers describing this approach started appearing in the early 80s, such as Fukushima's Neocognitron. However, it was not until around 2011 that our computers could run these algorithms fast enough to make them useful in practice.

What happened only around 10 years ago was the discovery that neural networks could run on computer graphics cards (GPUs, graphics processing units), as these cards are specifically designed to process large amounts of information in parallel: exactly what artificial neural networks need. Most AI researchers these days still use high-performance graphics cards, whose capabilities have grown exponentially over time. Graphics cards today are roughly 16x more powerful than they were 10 years ago and 4x more powerful than 5 years ago; that is, they double in computational power every 2.5 years. And with growing interest in the area, we are sure to see ongoing rapid advancements in this technology.
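That growth rate is just exponential doubling, which is easy to sanity-check:

```python
def gpu_speedup(years, doubling_period=2.5):
    """Relative compute after `years`, doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)
```

With a 2.5-year doubling period this reproduces the figures above: 4x after 5 years (two doublings) and 16x after 10 years (four doublings).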

Beyond the fact that computers can now run these large-scale networks in real time, we also have a wealth of large data sources to train them on (n.b., neural networks learn from examples) and freely available programming platforms, such as Google's TensorFlow, open to anyone with an interest in machine learning.

As a result of the availability and success of deep learning approaches, AI has officially moved from its supposed winter to a new season: spring.

What does all this mean for the workforce? Let's continue...

Perhaps one of the low-hanging fruits of robotics and AI is automation: replacing repetitive manual labour with machines that can perform the same kind of task cheaply and more efficiently.

An example of this is Alibaba's smart warehouse, where robots perform 70% of the work:

I think the important thing to note when we think of AI replacing human workers is that machines do not have to do the exact same work in order to make humans redundant.

Consider how Alibaba and Amazon have disrupted the retail sector, with an increasing number of shoppers heading for their screens rather than brick-and-mortar stores. The outcome is the same (a consumer making a purchase and receiving a product), but the process itself can be restructured in a way that uses automation to make it cheaper and more efficient.

For example, Amazon's Prime Air is trialing a drone delivery system to safely deliver packages to customers in 30 minutes or less, disrupting the standard way humans make similar deliveries:

We are seeing much progress in the ability of machines to perform manual tasks in a wide range of areas, both inside and outside the factory. Take, for example, the task of laying bricks: Fastbrick has recently signed a contract in Saudi Arabia to build 55,000 homes by 2022 using automation:

As a glimpse of the future, companies such as Boston Dynamics are building robots with physical structures similar to humans, performing tasks that average humans can't:

The point here is that robots in the near future will no doubt be able to replace low-skilled workers in menial and repetitive tasks, either by performing them in a similar manner or by changing the nature of the work itself, solving the same task in a different but more efficient way.

I was speaking with a leader only last week who was about to replace his airport baggage-handling staff with machines. He simply said the robots will be cheaper and do a better job, so why wouldn't he?

And the truth is, this is how these decisions are being made. To gain or maintain a competitive advantage, automation is indeed a rational choice. But what does this mean for unskilled labourers?

What we currently know is that the gap between the rich and the poor is growing rapidly (e.g., the 3 richest people in the world possess more financial assets than the lowest 48 nations combined):

In the bottom percentiles the number of hours worked has decreased substantially, with the main reason being the demand and supply of skills.

Many argue that although machines will no doubt take on the low-skilled jobs, these workers will simply move to positions where more human-like traits are required (e.g., emotional intelligence and creativity). We will delve into these areas to test this assumption in the following sections. But research such as the above shows that the trend so far has been to replace workers without creating an equal number of opportunities elsewhere.

A level up from automation are jobs, or aspects of jobs, that require decision-making and problem-solving.

In terms of decision-making, AI is incredibly well suited to statistical decision-making tasks: given a description of the current situation, categorising the data into appropriate classes. Examples include speech-to-text recognition (where the auditory stream is classified into distinct words), language translation (converting one representation to another), object detection (e.g., finding objects or faces in an image), medical diagnosis (e.g., detecting the presence of cancerous cells), exam grading, and modelling consumer behaviour. These systems are perfect for scenarios where there is a lot of training data, and there are numerous examples (such as those just listed) where machines now outperform their human counterparts.
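All of these tasks share one template: learn from labeled examples, then assign new inputs to classes. A nearest-centroid classifier is about the smallest working instance of that template (the data here are made up):

```python
from statistics import mean

def fit_centroids(samples):
    """samples: list of (features, label) pairs; returns one mean vector per label."""
    by_label = {}
    for features, label in samples:
        by_label.setdefault(label, []).append(features)
    return {label: tuple(mean(col) for col in zip(*rows))
            for label, rows in by_label.items()}

def classify(centroids, features):
    """Assign the label whose centroid is closest (squared Euclidean distance)."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(features, centroids[label]))
    return min(centroids, key=dist)

# made-up training data: two clearly separated clusters
train = [((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"),
         ((5.0, 5.0), "dog"), ((4.8, 5.2), "dog")]
centroids = fit_centroids(train)
```

A deep network replaces the hand-crafted features and centroids with learned ones, but the decision rule, "which class does this input look most like?", is the same.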

I place problem-solving in a slightly different category from decision-making. Problem-solving is about how to get from the current state to a desired state, and may involve a number of steps. Navigation is a perfect example, and we have seen how well technologies such as Google Maps have been integrated into our daily lives (e.g., calculating the fastest route given current traffic conditions, and modifying the recommendation should conditions change).
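Route-finding of this kind is classical shortest-path search. A compact sketch using Dijkstra's algorithm over travel times, on an invented road network:

```python
import heapq

def fastest_route(graph, start, goal):
    """Dijkstra's algorithm over edge travel times.
    graph: node -> {neighbour: minutes}. Returns (total_minutes, path)."""
    queue, seen = [(0, start, [start])], set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (minutes + cost, nxt, path + [nxt]))
    return None  # goal unreachable

# an invented road network; edge weights are current travel times in minutes
roads = {"home": {"highway": 10, "backroad": 15},
         "highway": {"office": 5},
         "backroad": {"office": 5}}
```

With these weights the search picks the highway; raise `roads["home"]["highway"]` to simulate congestion and the same call reroutes via the backroad, which is exactly the "conditions change, recommendation changes" behaviour described above.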

Deep learning has also had a major impact on AI approaches to problem-solving. Take chess: in 1997 Deep Blue, a chess-playing computer developed by IBM, beat Garry Kasparov, becoming the first computer system to defeat a reigning world champion. This system used a brute-force approach, thinking ahead and evaluating 200 million positions per second. That is quite distinct from how human experts play chess: through intuition, rather than exhaustively thinking through all possible moves.
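Brute force is easy to make concrete: exhaustive minimax visits every reachable position of a game. On a toy take-away game (take 1 or 2 stones; whoever takes the last stone wins), even a 10-stone pile forces hundreds of position evaluations, and the count explodes with depth:

```python
def minimax(stones, maximizing, counter):
    """Exhaustively search a toy take-away game (take 1 or 2 stones; the
    player who takes the last stone wins), counting every position examined."""
    counter[0] += 1
    if stones == 0:
        # the previous player took the last stone, so the side to move lost
        return -1 if maximizing else 1
    values = [minimax(stones - take, not maximizing, counter)
              for take in (1, 2) if take <= stones]
    return max(values) if maximizing else min(values)

counter = [0]
value = minimax(10, True, counter)   # value from the first player's viewpoint
```

Even this trivial game makes the search visit a couple of hundred positions from a 10-stone start; with chess's far larger branching factor, the same exhaustive idea is what pushed Deep Blue to evaluate 200 million positions per second.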

With the advent of deep learning, AI problem-solving has become more human-like. Google's AlphaZero, for instance, beat the world's best chess-playing computer after teaching itself how to play in under 4 hours. Rather than using brute force, AlphaZero uses deep learning (plus a few other tricks) to extract patterns of patterns that it uses to evaluate its next move. This is similar to human intuition, where one has a feeling for how good a move is based on the global context. And as with human intuition, one drawback of this approach is that it is often impossible to understand "how" a decision was made (as it arises from the combination of millions of features at different levels).

Besides chess, Google has also beaten the world champion at the ancient Chinese game of Go. This was a major achievement, long regarded by AI researchers as an incredibly difficult task. In chess, there are on average approximately 35 legal moves available on each turn; by comparison, the average branching factor for Go is 250, making a brute-force search intractable. In 2016, AlphaGo won 4-1 against Lee Sedol, widely considered the greatest player of the past decade. AlphaGo's successor, AlphaGo Zero, described in Nature, is arguably the strongest player in history.

In short: again, there is much ongoing research and success in computer decision-making and problem-solving.

When talking about the future of work, there is often an argument that, although machines will replace many jobs, there will always be a space between what AI and humans can do. Accordingly, human work will simply move to areas that involve creativity and emotional intelligence, competencies that machines will supposedly never be good at. Let's explore this argument, as it was the topic of my own PhD.

My own PhD work (gosh, around 20 years ago now) was inspired by Douglas Hofstadter and the Fluid Analogies Research Group (FARG), a team of AI researchers investigating the fundamental processes underlying human perception and creativity.

Many of the models that FARG implemented seem trivial by today's standards, but they illustrate the core processes underlying human creativity.

One of the many examples of creativity they looked at was the game JUMBLE, a simple newspaper game in which you are required to unscramble the given letters into a real word. Consider the scrambled letters UFOHGT.

Now, you are probably asking yourself what this trivial anagram task has to do with creativity. And the answer is EVERYTHING.

While trying to solve this problem, think about HOW you solve it.

Unlike Deep Blue, you will not search through every combination when solving anagrams. Instead, you will create word-like candidates: letter strings that follow the statistical properties of what words generally look like. E.g., you would not start with the letters HG or FG together, as statistically speaking, these are not how typical English words start.

You might instead start by chunking the letters OU, FT and GH together and arranging them in a sequence to create the word GHOUFT. But on discovering that this is a non-word, you pull it apart and try again.

Over time, you will try different combinations of word-like candidates until you come up with a real English word.

The creative aspect of this task lies in the fact that you generated a range of word-like options based on the statistical properties of English.

A demo of this (one of the many demos from my thesis) can be found below:

In short, most creativity can be viewed in this manner: there exist statistical regularities of things that belong together, and the creative process involves searching through a range of options until you find a suitable global solution. For example, music is not a random sequence of notes but has inherent structure, with composition exploring different combinations of notes that conform to these rules.
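The anagram process described above is small enough to sketch end to end: learn letter-pair statistics from a handful of words, rank candidate strings by how word-like they are, and stop at the first real word. The corpus and dictionary here are tiny illustrative stand-ins for real English statistics:

```python
from itertools import permutations

# Toy version of the anagram process: score candidates by how "word-like"
# their adjacent letter pairs are, then take the first real word found.
corpus = ["fought", "thought", "tough", "ghost", "ought", "fog", "hug"]
bigrams = {w[i:i + 2] for w in corpus for i in range(len(w) - 1)}
dictionary = {"fought"}

def wordlike_score(candidate):
    """Fraction of adjacent letter pairs that occur in known words."""
    pairs = [candidate[i:i + 2] for i in range(len(candidate) - 1)]
    return sum(p in bigrams for p in pairs) / len(pairs)

# explore word-like candidates first instead of searching blindly
candidates = sorted(("".join(p) for p in permutations("ufohgt")),
                    key=wordlike_score, reverse=True)
solution = next(c for c in candidates if c in dictionary)
```

GHOUFT-style near-misses score highly and get tried early, while strings starting HG or FG sink to the bottom, which is exactly the "generate word-like candidates" behaviour the text describes.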

With the advent of deep learning, AI is now very good at extracting such statistical regularities from a domain and generating novel examples that follow its statistics.

An example of this comes from Sony's CSL Research Lab, whose system can listen to music, extract the statistical regularities, and generate its own songs in the given style. As an example, the song below was generated in the style of the Beatles:

An example perhaps more illustrative of current advances is Google's drawing bot, which is capable of generating photo-realistic images from a text description. The system was trained on captioned images and, once trained, could generate its own images given a text-based description.

For example, the following drawing was generated from the query "this bird is black and yellow with a short beak" (i.e. this bird does not exist in real life; it was generated by the algorithm rather than retrieved):

This system can generate a range of images, including ordinary pastoral scenes such as grazing livestock, through to more abstract concepts such as a floating double-decker bus.

Another example of this is computer programming - something schools are focusing on, believing it will be an essential skill of the future.

Enter Bayou...

Researchers from Rice University have created and are refining an application that writes code given a short verbal description from the user.

The software uses deep learning to "read the programmer's mind and predict the program they want."

So, in short... there is major disruption about to potentially occur in this area as well. The future (and, as such, what we need to be teaching kids to prepare them for it) is very uncertain.

So - to answer the question that I started above: yes, in the short term there may be gaps between what humans and machines are capable of, but in the near future these gaps will get smaller and smaller.

In the long term, I certainly believe creativity is an area that could and will be outsourced to machines (particularly in the technical space of creating new ideas and solutions).

The final bastion that seems to protect humans (and our jobs) from complete redundancy is our human emotions and emotional intelligence. Many people argue that this is a defining feature of humans that truly segregates us from machines.

Or does it?

If you believe in evolution, you should believe that humans have emotions for a reason - there is some evolutionary benefit.

Thought of in this way, most emotions are definitely here for a reason. They are our internal guidance system that tells us whether we are getting things right or wrong. For example, pain and fear are incredibly important, as they prevent us from taking risks that could lead to harm. No doubt such emotions are useful for machines to have as well, and we already see early analogues of them in machines of today (e.g. bump sensors, or sensors that prevent your robot vacuum cleaner from falling down stairs - sensors that prevent them from doing things that could be self-harming).

Ok, so pain is an obviously important signal for machines to have, but what about something more complex like happiness - what could be the evolutionary benefit of that?

Well, I am glad you asked, as it has been part of my own research to look at pleasure centres in the brain and develop their analogue in robots - yes, indeed, happy robots.

You can check out some of my older research on this topic in the video below. In short, one of the many purposes of happiness is that it drives learning (i.e. we are naturally curious, and as a result are active participants in our own learning).

So, hopefully, having watched the above video, you will understand the role of emotions and how they are central to intelligence. I definitely believe that in the near future machines will have their own emotions and drives, which will increase in complexity over time. There is no real bastion that will be left standing in the end.

I have made many strong claims in the above text (e.g. that in the near future all human jobs are in jeopardy), and I am sure more than a few people will be skeptical at this point - possibly because the advances in current AI are not visible in our lives; they are currently hidden away in our factories, mobile phones, and online shopping recommendations. But if you look under the hood, the technology is there, and progressing at an alarming rate.

A possible metaphor is that of the boiling frog - i.e. if you put a frog in boiling water it will jump out immediately, but if you put a frog in cool water that is brought slowly to boil it will not perceive the threat and be boiled alive (not scientifically true, but a nice metaphor).

As humans, we are used to slow and gradual change. In contrast, we are unfamiliar with exponential growth, where something we perceive as changing slowly today may change rapidly tomorrow. As a result, rapid overnight changes are not something we naturally fear. But all the research suggests that advances in AI are following this exponential pattern, and there is a tipping point in the near future beyond which changes will be rapid and unpredictable.
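A quick way to see why exponential change outruns our intuition is to count how few doubling periods a very large increase actually takes. The numbers in this sketch are purely illustrative:

```python
def years_to_reach(start, target, doubling_period_years):
    """How long exponential doubling takes to grow from start to target."""
    years = 0.0
    value = start
    while value < target:
        value *= 2
        years += doubling_period_years
    return years

# Toy numbers: a quantity doubling every 2 years achieves
# a thousandfold increase in only ~20 years.
print(years_to_reach(1, 1000, 2))  # -> 20.0
```

Linear intuition expects a thousandfold change to take a thousandfold longer; doubling delivers it in ten steps, which is why exponential trends feel slow right up until they do not.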

Ray Kurzweil, in his book The Age of Spiritual Machines, charts evidence of this exponential growth in terms of the increase in calculations per second of computers over time. Given the GPUs we are currently using for AI, the predictions he made with respect to where we would be in 2018 are remarkably accurate.

What is scary about this graph is that if these trends continue, the average PC will have the computational power of the human brain by around 2030, and the computational power of the entire human population in around 2050.

I am not necessarily saying that these predictions are fully accurate, but I do believe that as individuals we are underestimating the rapid changes to our lives that are about to occur.

If you view AI as an evolving species, it is evolving at a pace unlike anything we have ever witnessed before, and in the last 10 years progress has been remarkable.

As an interesting example of this, check out Google Duplex:

There is no doubt that there is a tsunami of change that is about to hit our shores. A tsunami that few people are expecting, with a ferocity and timescale potentially more threatening to our species than climate change.

The danger I believe, is not in the technology itself, but in how we are using it.

If we use AI unchecked for corporate competitive advantage, there is no doubt companies will choose the cheapest and most efficient option, and the employees at the lowest levels will be the first to be hit hard. But over time, it is highly likely that all of our opportunities at all levels will be washed away. And very soon.

But it is a tsunami I do believe we can channel for the greater good, if we choose to - if we don't use it for corporate advantage but instead use it to solve the biggest issues facing humanity: education, poverty, famine, disease, and climate change.

I also fear that this issue will be similar to climate change, in that the leaders at the top will be reluctant to take action (e.g. it is unfathomable that some current world leaders are still denying that global warming is an issue, despite a 97% consensus among climate specialists).

So what can be done?

They say that the Holocaust was allowed to occur in Nazi Germany because the good people sat back and did nothing. Some later justified their inaction by claiming that they did not know where the trains were heading. Today, we do not have this excuse: in terms of both climate change and AI, we know exactly where these trains are heading - and these trains contain our children.

I think the major problem we face in stopping or redirecting these trains (i.e. pressuring the government to intervene) is what in the psychological literature is known as bystander apathy - the fact that people in a crowd are less likely to step in and help than individuals witnessing an atrocity alone.

With bystander apathy, people only step in to help when:

1) they notice that something is going on

2) interpret the situation as being an emergency

3) feel that they have a degree of responsibility (i.e. there is no-one else who is better suited)

4) and know what to do to help.

So if it is really up to the people to upwardly manage our governments (to make sure companies act sustainably and in a way that is beneficial to humanity) - how do we avoid our own bystander apathy?

See the original post:

Artificial Intelligence, and the Future of Work Should We Be Worried? - BBN Times

Written by admin

October 21st, 2021 at 1:46 am

Posted in Alphazero

This AI chess engine aims to help human players rather than defeat them – The Next Web

Posted: January 31, 2021 at 8:53 am


Artificial intelligence has become so good at chess that its only competition now comes from other computer programs. Indeed, a human hasn't defeated a machine in a chess tournament in 15 years.

It's an impressive technical achievement, but that dominance has also made top-level chess less imaginative, as players now increasingly follow strategies produced by soulless algorithms.

But a new research paper shows that AI could still make the game better for us puny humans.

The study authors developed a chess engine with a difference. Unlike most of its predecessors, their system isn't designed to defeat humans. Instead, it's programmed to play like them.


The researchers believe Maia could make the game more fun to play. But it could also help us learn from the computer.

"So chess becomes a place where we can try understanding human skill through the lens of super-intelligent AI," said study co-author Jon Kleinberg, a professor at Cornell University.

Their system, called Maia, is a customized version of AlphaZero, a program developed by research lab DeepMind to master chess, shogi, and Go.

Instead of being built to win games of chess, Maia was trained on individual moves made by humans. Study co-author Ashton Anderson said this allowed the system to spot what players should work on:

"Maia has algorithmically characterized which mistakes are typical of which levels, and therefore which mistakes people should work on and which mistakes they probably shouldn't, because they are still too difficult."

Maia matched the moves of humans more than 50% of the time, and its accuracy grew as the skill level increased.

The researchers said this prediction accuracy is higher than that of Stockfish, the reigning computer world chess champion.
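The accuracy figure reported here is simply the fraction of positions in which the engine's predicted move equals the move the human actually played. A minimal sketch of that metric, using made-up move pairs, looks like this:

```python
# Hypothetical evaluation data: (human_move, engine_prediction) pairs.
games = [("e4", "e4"), ("Nf3", "Nf3"), ("d4", "c4"), ("Bb5", "Bb5")]

def move_match_accuracy(pairs):
    """Fraction of positions where the engine predicted the human's move."""
    matches = sum(1 for human, engine in pairs if human == engine)
    return matches / len(pairs)

print(move_match_accuracy(games))  # -> 0.75
```

On this metric, an engine tuned to imitate humans (like Maia) can outscore a stronger engine (like Stockfish), because the strongest move and the most human move are often different.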

Maia might not be capable of teaching people to conquer AI at chess, but it could help them beat their fellow humans.

You can read the study paper on the preprint server arXiv.

Published January 27, 2021 18:52 UTC

Excerpt from:

This AI chess engine aims to help human players rather than defeat them - The Next Web


