Google uses machine learning to design the next generation of machine learning chips. The algorithm’s designs are “comparable or superior” to those created by humans, say Google engineers, but can be generated much, much faster. According to the technology giant, work that takes months for humans can be achieved by AI in less than six hours.
Google has been working for years on how to use machine learning to make chips, but this latest effort – described this week in a paper in the journal Nature – appears to be the first time such research has been applied to a commercial product: an upcoming version of Google's TPU (Tensor Processing Unit) chips.
“Our method has been used in production to design the next generation of Google TPU,” write the paper’s authors, led by Google’s head of ML for Systems, Azalia Mirhoseini.
In other words, AI helps to accelerate the future of AI development.
In the paper, Google engineers note that this work has “major implications” for the chip industry. It should allow companies to explore the possible architectural space for future designs more quickly, and to more easily tailor chips to specific workloads.
An editorial in Nature calls the research an “important achievement”, noting that such work could help offset the expected end of Moore’s law – an axiom of chip design from the 1970s that says the number of transistors on a chip doubles every two years. AI will not necessarily solve the physical challenges of squeezing more and more transistors onto chips, but it may help find other ways to keep increasing performance at the same rate.
The specific task that Google’s algorithms tackled is known as “floorplanning.” This usually requires human designers, working with computer tools, to find the optimal layout for a chip’s subsystems on its silicon die. These components include things like CPUs, GPUs and memory cores, which are connected using tens of kilometers of minuscule wiring. Deciding where to place each component on the die affects the final speed and efficiency of the chip. And given both the scale of chip manufacturing and the number of computational cycles involved, nanometer-scale changes in placement can end up having major effects.
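To make the placement problem concrete, here is a minimal sketch of half-perimeter wirelength (HPWL), a standard proxy for wiring cost in floorplanning. The component coordinates below are invented for illustration, not taken from Google's paper.

```python
# Toy illustration of why placement matters: half-perimeter wirelength
# (HPWL), a common proxy for the total wire needed to connect a net.

def hpwl(pins):
    """HPWL of one net: width + height of the bounding box
    around all the (x, y) pin locations the net connects."""
    xs = [x for x, _ in pins]
    ys = [y for _, y in pins]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

# A hypothetical net connecting a CPU block, a memory core and a GPU block.
placement_a = [(0, 0), (1, 1), (2, 0)]   # components clustered together
placement_b = [(0, 0), (9, 8), (2, 0)]   # memory core placed far away

print(hpwl(placement_a))  # 3
print(hpwl(placement_b))  # 17
```

Moving a single block far from its neighbors multiplies the wiring estimate, which is why small placement decisions ripple into speed and power.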
Google engineers note that designing floor plans takes “months of intense effort” for humans, but from a machine learning perspective, there is a known way to deal with this problem: as a game.
AI has proven time and time again that it can outplay humans at board games like chess and Go, and Google engineers note that floorplanning is analogous to such challenges. Instead of a game board, you have a silicon die. Instead of pieces like knights and rooks, you have components like CPUs and GPUs. The task is then simply to find each board’s “win conditions.” In chess that might be checkmate; in chip design, it is computational efficiency.
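The game framing above can be sketched as a tiny sequential-placement environment: the “board” is a grid standing in for the die, each “move” places one component, and the result is scored at the end. The grid size, component names and rules here are invented for illustration only.

```python
# Minimal sketch of floorplanning framed as a game: place one
# component per "move" on an empty cell of a grid-shaped die.

class PlacementGame:
    def __init__(self, size=4):
        self.size = size
        self.placed = {}  # component name -> (row, col)

    def legal_moves(self):
        """All grid cells not yet occupied by a component."""
        taken = set(self.placed.values())
        return [(r, c) for r in range(self.size)
                for c in range(self.size) if (r, c) not in taken]

    def play(self, component, cell):
        assert cell in self.legal_moves(), "cell already occupied"
        self.placed[component] = cell

game = PlacementGame()
game.play("cpu", (0, 0))
game.play("gpu", (0, 1))
print(len(game.legal_moves()))  # 14 cells left on the 4x4 die
```

A real die is continuous and the real move set is vastly larger, but the structure – sequential decisions, a shrinking set of legal moves, a score only known at the end – is what makes game-playing techniques applicable.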
Google engineers trained a reinforcement learning algorithm on a dataset of 10,000 chip floor plans of varying quality, some of which had been randomly generated. Each design was labeled with a specific “reward” function based on its success across different metrics, such as the length of wire required and power consumption. The algorithm then used this data to distinguish between good and bad floor plans and, in turn, to generate its own designs.
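The reward signal described above can be sketched as a weighted cost: shorter wiring and lower power mean a higher reward. The exact metrics and weights Google used are not given here; the numbers below are placeholders for illustration.

```python
# Hedged sketch of a floorplan "reward": a negative weighted sum of
# cost metrics, so that better floor plans score higher. The weights
# and metric values are invented placeholders, not Google's.

def reward(wirelength, power, w_wire=1.0, w_power=0.5):
    """Higher reward for shorter wires and lower power consumption."""
    return -(w_wire * wirelength + w_power * power)

# A floor plan with less wiring and lower power scores higher:
print(reward(wirelength=120.0, power=30.0))  # -135.0
print(reward(wirelength=200.0, power=55.0))  # -227.5
```

With a scalar score like this attached to every floor plan, a reinforcement learning agent can compare designs and learn to prefer placements that raise the reward.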
As we have seen when AI systems beat humans at board games, machines do not necessarily think like humans, and often arrive at unexpected solutions to familiar problems. When DeepMind’s AlphaGo played human champion Lee Sedol at Go, this dynamic led to the infamous “move 37” – a seemingly illogical piece placement by the AI that nonetheless led to victory.
Nothing quite so dramatic happened with Google’s chip design algorithm, but its floor plans nonetheless look quite different from those created by humans. Instead of neat rows of components laid out on the die, subsystems look as if they have been scattered across the silicon almost at random. An illustration from Nature shows the difference, with the human design on the left and the machine learning design on the right. You can also see the general difference in the image below from Google’s paper (orderly human design on the left; cluttered AI design on the right), although the layout has been blurred as it is confidential:
This paper is notable, especially since the research is now being used commercially by Google. But it is far from the only facet of AI-assisted chip design. Google itself has explored using AI in other parts of the process, such as “architectural exploration,” and rivals such as Nvidia are looking into other methods to speed up the workflow. The virtuous cycle of AI designing chips for AI looks like it has only just begun.