Google Teaches AI To Play The Game Of Chip Design – The Next Platform
Posted: February 22, 2020 at 8:44 pm
As if it wasn't bad enough that Moore's Law improvements in the density and cost of transistors are slowing, the cost of designing chips and of the factories that are used to etch them is also on the rise. Any savings on any of these fronts will be most welcome to keep IT innovation leaping ahead.
One of the promising frontiers of research right now in chip design is using machine learning techniques to actually help with some of the tasks in the design process. We will be discussing this at our upcoming The Next AI Platform event in San Jose on March 10 with Elias Fallon, engineering director at Cadence Design Systems. (You can see the full agenda and register to attend at this link; we hope to see you there.) The use of machine learning in chip design was also one of the topics that Jeff Dean, a senior fellow in the Research Group at Google who has helped invent many of the hyperscaler's key technologies, talked about in his keynote address at this week's 2020 International Solid State Circuits Conference in San Francisco.
Google, as it turns out, has more than a passing interest in compute engines, being one of the largest consumers of CPUs and GPUs in the world and also the designer of TPUs spanning from the edge to the datacenter for doing both machine learning inference and training. So this is not just an academic exercise for the search engine giant and public cloud contender, particularly if it intends to keep advancing its TPU roadmap and if it decides, like rival Amazon Web Services, to start designing its own custom Arm server chips, or decides to do custom Arm chips for its phones and other consumer devices.
With a certain amount of serendipity, some of the work that Google has been doing to run machine learning models across large numbers of different types of compute engines is feeding back into the work that it is doing to automate some of the placement and routing of IP blocks on an ASIC. (It is wonderful when an idea is fractal like that. . . .)
The pod of TPUv3 systems that Google showed off back in May 2018 can mesh together 1,024 of the tensor processors (which have twice as many cores and about a 15 percent clock speed boost compared to their predecessors, as far as we can tell) to deliver 106 petaflops of aggregate 16-bit half precision multiplication performance (with 32-bit accumulation) using Google's own and very clever bfloat16 data format. Those TPUv3 chips are all cross-coupled using a 32×32 toroidal mesh so they can share data, and each TPUv3 core has its own bank of HBM2 memory. This TPUv3 pod is a huge aggregation of compute, which can do either machine learning training or inference, but it is not necessarily as large as Google needs to build. (We will be talking about Dean's comments on the future of AI hardware and models in a separate story.)
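For those keeping score at home, bfloat16 keeps the sign bit and 8-bit exponent of the IEEE float32 format and truncates the mantissa to 7 bits, so it preserves float32's dynamic range while halving storage and bandwidth. Here is a minimal NumPy sketch of the conversion, assuming simple truncation (real hardware typically rounds to nearest even):

```python
import numpy as np

def float32_to_bfloat16_bits(x: np.ndarray) -> np.ndarray:
    """Truncate float32 values to their bfloat16 bit patterns.

    bfloat16 keeps the float32 sign bit and 8-bit exponent but only the
    top 7 mantissa bits, so range is preserved while precision drops.
    """
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits >> 16).astype(np.uint16)  # upper half of the float32 word

def bfloat16_bits_to_float32(b: np.ndarray) -> np.ndarray:
    """Expand bfloat16 bit patterns back to float32 (exact, no rounding)."""
    return (b.astype(np.uint32) << 16).view(np.float32)

x = np.array([3.14159265, 1e-8, 65504.0], dtype=np.float32)
roundtrip = bfloat16_bits_to_float32(float32_to_bfloat16_bits(x))
print(roundtrip)  # same dynamic range as float32, ~2-3 decimal digits kept
```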
Suffice it to say, Google is hedging with hybrid architectures that mix CPUs and GPUs and perhaps someday other accelerators for reinforcement learning workloads, and hence the research that Dean and his peers at Google have been involved in is also being brought to bear on ASIC design.
"One of the trends is that models are getting bigger," explains Dean. "So the entire model doesn't necessarily fit on a single chip. If you have essentially large models, then model parallelism, dividing the model up across multiple chips, is important, and getting good performance by giving it a bunch of compute devices is non-trivial and it is not obvious how to do that effectively."
It is not as simple as taking the Message Passing Interface (MPI) that is used to dispatch work on massively parallel supercomputers and hacking it onto a machine learning framework like TensorFlow because of the heterogeneous nature of AI iron. But that might have been an interesting way to spread machine learning training workloads over a lot of compute elements, and some have done this. Google, like other hyperscalers, tends to build its own frameworks and protocols and datastores, informed by other technologies, of course.
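Whatever the framework, the baseline that any learned approach has to beat is a human explicitly pinning pieces of a model to devices by hand. Here is a minimal TensorFlow sketch of that kind of manual model parallelism; the split point and device names are illustrative assumptions, not anything Google has described:

```python
import tensorflow as tf

# Manual model parallelism: a human pins each half of a network to a device.
# Device names and the split point are illustrative assumptions; a learned
# placement policy would make these assignments automatically. With soft
# device placement (TF2's eager default), this also runs on CPU-only boxes.
encoder = tf.keras.Sequential(
    [tf.keras.layers.Dense(4096, activation="relu") for _ in range(4)])
decoder = tf.keras.Sequential(
    [tf.keras.layers.Dense(4096, activation="relu") for _ in range(4)])

x = tf.random.normal([32, 4096])
with tf.device("/GPU:0"):
    hidden = encoder(x)       # encoder variables and compute land on GPU:0
with tf.device("/GPU:1"):
    output = decoder(hidden)  # activations hop devices; decoder on GPU:1
print(output.shape)
```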
Device placement, meaning putting the right neural network (or the portion of the code that embodies it) on the right device at the right time for maximum throughput in the overall application, is particularly important as neural network models get bigger than the memory space and the compute oomph of a single CPU, GPU, or TPU. And the problem is getting worse faster than the frameworks and hardware can keep up. Take a look:
The number of parameters just keeps growing and the number of devices being used in parallel also keeps growing. In fact, getting 128 GPUs or 128 TPUv3 processors (which is how you get the 512 cores in the chart above) to work in concert is quite an accomplishment, and is on par with the best that supercomputers could do back in the era before loosely coupled, massively parallel supercomputers using MPI took over and federated NUMA servers with actual shared memory were the norm in HPC more than two decades ago. As more and more devices are going to be lashed together in some fashion to handle these models, Google has been experimenting with using reinforcement learning (RL), a special subset of machine learning, to figure out where to best run neural network models at any given time as model ensembles are running on a collection of CPUs and GPUs. In this case, an initial policy is set for dispatching neural network models for processing, and the results are then fed back into the model for further adaptation, moving it toward more and more efficient running of those models.
In 2017, Google trained an RL model to do this work (you can see the paper here), and here is what the resulting placement looked like for the encoder and decoder: the RL model that placed the work on the two CPUs and four GPUs in the system under test ended up with a 19.3 percent lower runtime for the training runs compared to the manually placed neural networks done by a human expert. Dean added that this RL-based placement of neural network work on the compute engines does kind of non-intuitive things to achieve that result, which seems to be the case with a lot of machine learning applications that, nonetheless, work as well as or better than humans doing the same tasks. The issue is that it can't take a lot of RL compute oomph to place the work on the devices that run the neural networks being trained. In 2018, Google did research to show how to scale computational graphs to over 80,000 operations (nodes), and last year, Google created what it calls a generalized device placement scheme for dataflow graphs with over 50,000 operations (nodes).
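That 2017 paper trained a sequence-to-sequence policy network with the REINFORCE algorithm, using measured runtime as the reward. The toy loop below is a heavily simplified sketch of the idea, not Google's implementation: the policy is a bare table of per-op logits rather than a recurrent network, and `measure_runtime` is a made-up stand-in for actually executing the placed graph and timing it.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_OPS, NUM_DEVICES = 12, 4           # toy graph: 12 ops, 4 devices
logits = np.zeros((NUM_OPS, NUM_DEVICES))   # tabular policy (stand-in)

def measure_runtime(placement):
    """Hypothetical stand-in for running the placed graph and timing it:
    load imbalance plus a penalty for cross-device edges on a chain graph."""
    loads = np.bincount(placement, minlength=NUM_DEVICES)
    cross = np.sum(placement[:-1] != placement[1:])
    return loads.max() + 0.3 * cross

baseline, lr = None, 0.5
for step in range(2000):
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    placement = np.array([rng.choice(NUM_DEVICES, p=p) for p in probs])
    runtime = measure_runtime(placement)
    baseline = runtime if baseline is None else 0.9 * baseline + 0.1 * runtime
    advantage = baseline - runtime          # lower runtime => positive signal
    onehot = np.eye(NUM_DEVICES)[placement]
    logits += lr * advantage * (onehot - probs)  # REINFORCE gradient step

best = np.argmax(logits, axis=1)
print("greedy placement:", best, "cost:", measure_runtime(best))
```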
"Then we started to think about this: instead of using it to place software computation on different computational devices, could we use it to do placement and routing in ASIC chip design, because the problems, if you squint at them, sort of look similar," says Dean. "Reinforcement learning works really well for hard problems with clear rules, like chess or Go, and essentially we started asking ourselves: Can we get a reinforcement learning model to successfully play the game of ASIC chip layout?"
There are a couple of challenges to doing this, according to Dean. For one thing, chess and Go both have a single objective, which is to win the game and not lose the game. (They are two sides of the same coin.) With the placement of IP blocks on an ASIC and the routing between them, there is not a simple win or lose, and there are many objectives that you care about, such as area, timing, congestion, design rules, and so on. Even more daunting is the fact that the number of potential states that have to be managed by the neural network model for IP block placement is enormous, as the chart below shows:
Finally, the true reward function that drives the placement of IP blocks, which is computed by EDA tools, takes many hours to run.
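The workaround, as Dean explains below, is to substitute cheap proxy reward functions that can be evaluated in seconds. He does not spell out what goes into them, but the objectives listed above suggest a weighted combination computed directly from the candidate placement. Here is a hedged sketch of what such a proxy might look like; the weights are assumptions, half-perimeter wirelength is a standard fast wirelength estimate, and the congestion measure is our own crude stand-in:

```python
import numpy as np

def proxy_reward(block_xy, nets, weights=(1.0, 0.5)):
    """Cheap stand-in for hours of EDA evaluation (illustrative only):
    negative weighted sum of estimated wirelength and congestion.

    block_xy: (num_blocks, 2) array of block center coordinates.
    nets: list of block-index lists, each a net connecting those blocks.
    """
    # Half-perimeter wirelength (HPWL): bounding-box perimeter per net.
    wirelength = sum(
        (block_xy[net, 0].max() - block_xy[net, 0].min()) +
        (block_xy[net, 1].max() - block_xy[net, 1].min())
        for net in nets
    )
    # Crude congestion proxy: how unevenly blocks pile into grid cells.
    cells, _, _ = np.histogram2d(block_xy[:, 0], block_xy[:, 1], bins=8)
    congestion = cells.max() / max(cells.mean(), 1e-9)
    w_wl, w_cong = weights
    return -(w_wl * wirelength + w_cong * congestion)

placement = np.random.rand(20, 2) * 100   # 20 blocks on a 100x100 die
nets = [[0, 1, 2], [2, 3], [4, 5, 6, 7]]  # toy netlist
print(proxy_reward(placement, nets))      # evaluates in microseconds
```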
"And so we have an architecture, and I'm not going to get into a lot of detail, but essentially it tries to take a bunch of things that make up a chip design and then place them on the wafer," explains Dean, and he showed off some results of placing IP blocks on a low-powered machine learning accelerator chip (we presume this is the edge TPU that Google has created for its smartphones), with some areas intentionally blurred to keep us from learning the details of that chip. "We had a team of human experts place this IP block, and they had a couple of proxy reward functions that are very cheap for us to evaluate; we evaluated them in two seconds instead of hours, which is really important because reinforcement learning is one where you iterate many times. So we have a machine learning-based placement system, and what you can see is that it sort of spreads out the logic a bit more rather than having it in quite such a rectangular area, and that has enabled it to get improvements in both congestion and wire length. And we have got comparable or superhuman results on all the different IP blocks that we have tried so far."
Note: We are not sure we want to call AI algorithms superhuman, at least if we don't want to see them banned.
Anyway, here is how the RL network compared to people doing the IP block placement on that low-powered machine learning accelerator:
And here is a table that shows the difference between doing the placing and routing by hand and automating it with machine learning:
And finally, here is how the IP block on the TPU chip was handled by the RL network compared to the humans:
Look at how organic these AI-created IP blocks look compared to the Cartesian ones designed by humans. Fascinating.
Now, having done this, Google asked another question: Can we train a general agent that is quickly effective at placing a new design that it has never seen before? Which is precisely the point when you are making a new chip. So Google tested this generalized model against four different IP blocks from the TPU architecture and then also on the Ariane RISC-V processor architecture. This data pits people working with commercial tools against the model at various levels of tuning:
And here is some more data on the placement and routing done on the Ariane RISC-V chips:
"You can see that experience on other designs actually improves the results significantly, so essentially in twelve hours you can get the darkest blue bar," Dean says, referring to the first chart above, and then continues with the second chart above: "And this graph shows the wirelength costs, where we see that if you train from scratch, it actually takes the system a little while before it sort of makes some breakthrough insight and is able to significantly drop the wiring cost, whereas the pretrained policy has some general intuitions about chip design from seeing other designs and gets to that level very quickly."
Just as we do ensembles of simulations to do better weather forecasting, Dean says that this kind of AI-juiced placement and routing of IP blocks in chip design could be used to quickly generate many different layouts, with different tradeoffs. And in the event that some feature needs to be added, the AI-juiced chip design game could redo a layout quickly, not taking months to do it.
And most importantly, this automated design assistance could radically drop the cost of creating new chips. These costs are going up exponentially: according to data we have seen (thanks to IT industry luminary and Arista Networks chairman and chief technology officer Andy Bechtolsheim), an advanced chip design using 16 nanometer processes cost an average of $106.3 million; shifting to 10 nanometers pushed that up to $174.4 million; the move to 7 nanometers costs $297.8 million; and projections for 5 nanometer chips are on the order of $542.2 million. Nearly half of that cost has been, and continues to be, for software. So we know where to target some of those costs, and machine learning can help.
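For the record, those figures work out to a fairly steady multiplier at each node transition, which is what exponential growth looks like in a table; the quick arithmetic:

```python
# Per-node design cost (the Bechtolsheim figures cited above), in $ million.
costs = {"16nm": 106.3, "10nm": 174.4, "7nm": 297.8, "5nm": 542.2}
nodes = list(costs)
for prev, nxt in zip(nodes, nodes[1:]):
    print(f"{prev} -> {nxt}: {costs[nxt] / costs[prev]:.2f}X")
# 16nm -> 10nm: 1.64X, 10nm -> 7nm: 1.71X, 7nm -> 5nm: 1.82X
```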
The question is: will the chip design software makers embed AI and foster an explosion in chip designs that can truly be called Cambrian, and then make it up in volume like the rest of us have to do in our work? It will be interesting to see what happens here, and how research like that being done by Google will help.