
Google reveals small new AI chips for on-device machine learning



Two years ago, Google revealed its Tensor Processing Units, or TPUs – specialized chips that live in the company's data centers and speed up its AI workloads. Now, the company is moving its AI expertise down from the cloud and has taken the wraps off its new Edge TPU, a small AI accelerator that will carry out machine learning jobs in IoT devices.

The Edge TPU is designed to do what is known as "inference." This is the part of machine learning where an algorithm actually carries out the task it was trained to do; for example, recognizing an object in an image. Google's server-based TPUs are optimized for the training part of this process, while these new Edge TPUs will handle the inference.
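To make the training/inference split concrete, here is a minimal sketch of the inference step using the TensorFlow Lite Python interpreter. The model file name and dummy input are placeholders for illustration, not anything Google shipped:

```python
import numpy as np
import tensorflow as tf

# Load an already-trained model exported to TensorFlow Lite.
# "classifier.tflite" is a hypothetical file name.
interpreter = tf.lite.Interpreter(model_path="classifier.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Build a dummy input matching the model's expected shape
# (a real application would feed a camera frame here).
input_shape = input_details[0]["shape"]
dummy_image = np.random.random_sample(input_shape).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], dummy_image)

# Inference: the model applies what it learned during training.
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]["index"])
print("Predicted class:", int(np.argmax(scores)))
```

No training happens here at all – the heavy lifting was done elsewhere, which is exactly the division of labor between Google's cloud TPUs and the Edge TPU.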

These new chips are destined for enterprise jobs, not your next smartphone. That means tasks like automating quality control in factories. Doing this sort of job on-device has a number of advantages over using hardware that has to send data over the internet for analysis. On-device machine learning is generally more secure, experiences less downtime, and delivers faster results. That's the sales pitch, at any rate.


The Edge TPU is the little brother of the regular Tensor Processing Unit, which Google uses to power its own AI and which is available to other customers via Google Cloud.
Image: Google

Google is not the only company designing chips for this type of on-device AI task. ARM, Qualcomm, MediaTek, and others all make their own AI accelerators, while GPUs made by Nvidia famously dominate the market for training algorithms.

But what Google has that its rivals don't is control of the entire AI stack. A customer can store their data on Google Cloud, train their algorithms using TPUs, and then carry out on-device inference using the new Edge TPUs. And, more likely than not, they will build their machine learning software using TensorFlow – a coding framework created and operated by Google.

This type of vertical integration has obvious benefits. Google can ensure that all these different parts work together as efficiently and smoothly as possible, making it easier for customers to play (and stay) in the company's ecosystem.

In a blog post, Google Cloud's Vice President of IoT, Injong Rhee, described the new hardware as a "purpose-built ASIC designed to run TensorFlow Lite ML models at the edge." Said Rhee: "Edge TPUs are designed to complement our Cloud TPU offerings, allowing you to accelerate ML training in the cloud and then get lightning-fast ML inference at the edge. Your sensors become more than data collectors – they make local, real-time, intelligent decisions."
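The workflow Rhee describes – train in the cloud, infer at the edge – maps roughly onto TensorFlow's standard tooling as sketched below. The toy model is hypothetical, and deploying to the Edge TPU itself involves additional device-specific compilation steps not shown here:

```python
import tensorflow as tf

# Stand-in for a model trained in the cloud (weights are untrained
# here, purely for illustration).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Convert the trained model to the TensorFlow Lite format that
# edge accelerators consume.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # size/latency optimizations
tflite_model = converter.convert()

# Ship this flat buffer to the edge device for local inference.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting `.tflite` file is what the interpreter sketch earlier in this article would load on the device.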

Interestingly, Google is also making the Edge TPU available as part of a development kit that will let customers test the hardware's capabilities and see how it fits into their products. This devkit includes a system on module (SOM) containing the Edge TPU, an NXP CPU, a Microchip secure element, and Wi-Fi functionality. It can connect to a computer or server via USB or a PCI Express expansion slot. These devkits are only available in beta, and potential customers must apply for access.

This may seem like a minor piece of news, but it's notable because Google usually does not let the public get its hands on its AI hardware. If the company wants customers to adopt its technology, it must make sure they can try it out first, rather than asking them to take a leap of faith on Google's AI ecosystem. This development board is not just a lure for businesses – it's a sign that Google is serious about owning the entire AI stack.


Source link