Google is going beyond developing artificial intelligence chips for its own data centers, and is now designing them to work in products made by other companies.
Last year, Google said that its AI silicon had become strategically important.
In AI, researchers train models on large amounts of data so that machines can make predictions as new data arrives. The first version of the TPU could only run these predictions, while the second version, released in 2017, could also be used to train models – an update that made the chips competitive with Nvidia's graphics cards.
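The training-versus-prediction split described above can be sketched with a toy example. This is plain NumPy, not TPU code; the linear model and the data are invented purely for illustration:

```python
import numpy as np

def train(x, y):
    """Training phase: fit a line y = w*x + b to known data by least squares."""
    A = np.column_stack([x, np.ones_like(x)])
    (w, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return w, b

def predict(w, b, x_new):
    """Prediction (inference) phase: apply the fitted model to unseen data."""
    return w * x_new + b

# Train once on known data...
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])  # data generated from y = 2x + 1
w, b = train(x, y)

# ...then predict cheaply on new inputs as they arrive.
print(predict(w, b, 4.0))
```

The asymmetry matters: training searches for the parameters `w` and `b`, while prediction only evaluates the already-fitted model – which is why inference-only chips like the first TPU, and now the Edge TPU, can be much simpler than training hardware.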
A third-generation TPU was announced earlier this year.
Now come the Edge TPUs, small chips intended specifically for the prediction part of AI, which is less computationally intensive than training models. Edge TPUs can run their calculations without a connection to a cluster of powerful computers, so applications work faster and more reliably. They can handle AI workloads alongside a standard chip or microcontroller in a sensor or gateway device.