Google is using A.I. to design chip floorplans faster than humans


Google claims that it has developed artificial intelligence software that can design computer chips faster than humans can.

The tech giant said in a paper in the journal Nature on Wednesday that a chip that would take humans months to design can be dreamed up by its new AI in less than six hours.

The AI has already been used to develop the next iteration of Google’s tensor processing unit chips, which are used to run AI-related tasks, Google said.   

“Our method has been used in production to design the next generation of Google TPU,” wrote the authors of the paper, led by Google’s co-heads of machine learning for systems, Azalia Mirhoseini and Anna Goldie.

To put it another way, Google is using AI to design chips that can be used to create even more sophisticated AI systems.

Specifically, Google’s new AI can draw up a chip’s “floorplan.” This essentially involves plotting where components like CPUs, GPUs, and memory sit on the silicon die in relation to one another. Their placement on the minuscule die matters because it affects the chip’s power consumption and processing speed.

It takes humans months to optimally design these floorplans, but Google’s deep reinforcement learning system (an algorithm that is trained to take certain actions in order to maximize its chance of earning a reward) can do it with relatively little effort.

Similar systems can also defeat humans at complex games like Go and chess. In those games, the algorithms are trained to make moves that increase their chances of winning; in the chip scenario, the AI is trained to find the arrangement of components that makes the chip as computationally efficient as possible. The system was fed 10,000 chip floorplans in order to “learn” what works and what doesn’t.
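The reward-driven search described above can be illustrated with a toy example. The sketch below is entirely hypothetical (it is not Google’s method, and the block names and connections are invented for illustration): it places a few named blocks on a tiny grid and scores each candidate placement by total Manhattan wirelength, the kind of proxy objective a reinforcement-learning placer would be trained to maximize (as a negative reward). On a grid this small, exhaustive search stands in for the learned policy.

```python
import itertools

# Hypothetical blocks and connectivity, purely for illustration.
BLOCKS = ["cpu", "gpu", "mem", "io"]
NETS = [("cpu", "mem"), ("gpu", "mem"), ("cpu", "io")]
GRID = [(x, y) for x in range(2) for y in range(2)]  # four slots on a 2x2 grid

def wirelength(placement):
    """Total Manhattan distance over all nets for a {block: (x, y)} map.

    A real placer optimizes a richer objective (power, timing, area);
    wirelength is a common, simple proxy.
    """
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1])
               for a, b in NETS)

def best_placement():
    """Try every assignment of blocks to slots; keep the shortest one.

    This brute force stands in for the trained policy, which would
    instead learn to propose good placements directly.
    """
    best, best_len = None, float("inf")
    for slots in itertools.permutations(GRID, len(BLOCKS)):
        placement = dict(zip(BLOCKS, slots))
        length = wirelength(placement)
        if length < best_len:
            best, best_len = placement, length
    return best, best_len

placement, length = best_placement()
print(length)  # → 3: each of the three nets spans adjacent cells
```

The point of the toy is the objective, not the search: each net between distinct cells costs at least one grid step, so the best any placement can do here is a total of 3, and the search finds a layout that achieves it. A reinforcement-learning system replaces the exhaustive loop with a policy that learns, from many example floorplans, to propose near-optimal placements on grids far too large to enumerate.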

Whereas human chip designers typically lay out components in neat lines, Google’s AI uses a more scattered approach. This isn’t the first time an AI system has defied human convention after learning a task from human data: DeepMind’s famous “AlphaGo” AI made a highly unconventional move against Go world champion Lee Sedol in 2016 that astounded Go players around the world.

Google’s engineers noted in the paper that the breakthrough could have “major implications” for the semiconductor sector.

Facebook’s chief AI scientist, Yann LeCun, hailed the research as “very nice work” on Twitter, adding “this is exactly the type of setting in which RL shines.”

In an editorial on Wednesday, Nature described the breakthrough as an “important achievement” that will “be a huge help in speeding up the supply chain.”

However, the journal said “the technical expertise must be shared widely to make sure the ‘ecosystem’ of companies becomes genuinely global.” It also stressed that “the industry must make sure that the time-saving techniques do not drive away people with the necessary core skills.”

Clarification: This story has been updated to reflect that Anna Goldie is co-author of the paper, and the AI has been used to develop the next iteration of Google’s tensor processing unit chips.