Roundup Hi, here are a few pieces of AI news for the weekend. You don't always need a ton of money to buy a lot of GPUs to train your models quickly; you can do it quite cheaply on public cloud platforms. There is also a robot that can play Where's Waldo (Wally in the UK), and Microsoft's computers are trying to tell whether you find a joke funny.
Training ImageNet models fast and cheap on public clouds: A group of engineers managed to train a model on ImageNet to 93 percent accuracy using hardware rented on public cloud platforms for only $40.
It's not the fastest time on record to train a neural network on ImageNet, one of the most popular datasets for image classification. That record stands at four minutes and belongs to researchers from Tencent and Hong Kong Baptist University.
Speeding up training requires pumping up the batch size and throwing a large number of GPUs at the problem.
But a team of engineers, including Jeremy Howard, founder of Fast.ai; Andrew Shaw, an engineer who participated in the DAWNBench competition; and Yaroslav Bulatov, a researcher at Defense Innovation Unit Experimental, a US Department of Defense organization, has made the prospect much less expensive.
They rented 16 public AWS cloud instances, each with eight Nvidia V100 GPUs, and used a few smart software tricks, such as training on small images first before introducing larger ones.
"That way, when the model is very inaccurate early on, it can quickly see a lot of images and make rapid progress, and later in training it can see larger images to learn more fine-grained distinctions," Howard explained in a blog post.
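The trick described above is often called progressive resizing. Here is a minimal, framework-free sketch of the idea; the epoch boundaries and image sizes are illustrative assumptions, not the team's actual training schedule.

```python
# Progressive resizing sketch: train on small images early (fast, coarse
# progress), then switch to larger images later (fine-grained detail).
# The schedule values below are illustrative assumptions.

def image_size_for_epoch(epoch, schedule):
    """Return the training resolution to use at a given epoch.

    schedule: list of (first_epoch, image_size) pairs, sorted by first_epoch.
    """
    size = schedule[0][1]
    for first_epoch, s in schedule:
        if epoch >= first_epoch:
            size = s
    return size

# Illustrative schedule: 128px for epochs 0-17, 224px for 18-28, 288px after.
schedule = [(0, 128), (18, 224), (29, 288)]

print(image_size_for_epoch(0, schedule))   # -> 128, small images while coarse
print(image_size_for_epoch(20, schedule))  # -> 224
print(image_size_for_epoch(30, schedule))  # -> 288, full detail at the end
```

In a real training loop you would rebuild the data-loading pipeline with the new resolution at each phase boundary while keeping the same model weights.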
"DIU and fast.ai will release software to allow anyone to easily run and monitor their own distributed models on AWS, using the best methods developed in this project," he added.
You can read more about this.
Nvidia's Tesla P4 on Google Cloud: Keeping with the cloud-platform theme, Google Cloud announced it is now hosting Nvidia's P4 GPUs on its service.
It's not as advanced as the P100 or V100, so it's a cheaper option for those training or running smaller models, or for those who don't mind waiting a little longer.
They are now available in a few select zones, including US Central, US East, and Europe West.
You can read more about this.
Can AI tell a good joke from laughter? Microsoft has installed an exhibit at the National Comedy Center in New York that uses its Face API to figure out whether people find a joke funny or not.
Visitors step up to a screen to enter a Laugh Battle. The machines contain a number of pre-loaded jokes written by human comedians – AI is still bad at humor. Players take turns choosing jokes and win by getting the other person to smile or laugh the most over six rounds.
You can watch a video demo below.
The Microsoft Face API is used to determine whether someone scored a point by scanning people's faces. The system uses a neural network trained on over 100,000 faces for sentiment analysis. The images are labeled with emotions such as happiness, sadness, anger, contempt, disgust, fear, neutrality, and surprise.
"Across cultures, people smile similarly, they get angry in the same way, they show disgust the same way," said Cornelia Carapcea, head of the Cognitive Services team.
"If anyone smiles or frowns, we can detect it, and we give back a score for every emotion," she explained. "It's not like we see a face and say 'happy'; we see a face and say 'we think happy is maybe 60 percent.' If the person does more of a Mona Lisa smile, we might have happy at 60 percent and sad at 40 percent."
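The per-emotion scores described above could be turned into a Laugh Battle point with logic like the following sketch. The score dictionary, emotion labels, and threshold are assumptions for illustration, not Microsoft's actual scoring code.

```python
# Hypothetical sketch: convert per-emotion confidence scores (as returned by
# a face-analysis API) into a game point. Labels and threshold are assumptions.

def score_reaction(emotions, threshold=0.5):
    """Pick the dominant emotion; award a point only if happiness dominates."""
    dominant = max(emotions, key=emotions.get)
    point = dominant == "happiness" and emotions[dominant] >= threshold
    return dominant, point

# A "Mona Lisa smile": mostly happy, partly sad -> still scores a point.
reaction = {"happiness": 0.6, "sadness": 0.4, "neutral": 0.0}
print(score_reaction(reaction))  # -> ('happiness', True)
```

The key design point is that the API returns a score for every emotion rather than a single label, so the game can handle ambiguous expressions gracefully.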
Robot can play Where's Waldo: Developers have combined Google's AutoML Vision image recognition service with a robotic arm so it can point to Waldo in the popular picture-puzzle book Where's Waldo?
Waldo is always hidden in illustrations crowded with other people caught up in complex scenes. Although he always wears his distinctive red-and-white striped bobble hat and sweater with blue jeans, he is not easy to spot.
The goal of the game is to find Waldo as quickly as possible by pointing to him. Now a robot arm can play too, and it can spot Waldo in as little as 4.45 seconds. It is connected to a Google AutoML Vision model that has been trained on Waldo's face. If the model detects at least a 95 percent match with Waldo in an image, the robot arm, fitted with a dangling rubber hand, moves to point at him.
The arm is controlled by a Raspberry Pi running Python.
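The decision step described above, move only on a 95 percent or better match, could look something like this sketch. The detection format and coordinate fields are assumptions; the builders' actual code is not public in this article.

```python
# Hypothetical sketch of the robot's decision logic: scan candidate
# detections from a vision model and return the best match at or above a
# 95 percent confidence threshold. Detection format is an assumption.

CONFIDENCE_THRESHOLD = 0.95

def find_waldo(detections):
    """Return the (x, y) of the best Waldo match above threshold, else None."""
    best = None
    for det in detections:
        if det["score"] >= CONFIDENCE_THRESHOLD:
            if best is None or det["score"] > best["score"]:
                best = det
    return (best["x"], best["y"]) if best else None

detections = [
    {"x": 120, "y": 340, "score": 0.97},  # likely Waldo
    {"x": 400, "y": 80,  "score": 0.60},  # a lookalike in a striped shirt
]
print(find_waldo(detections))  # -> (120, 340)
```

The returned image coordinates would then be mapped to arm joint positions so the rubber hand lands on the right spot on the page.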
You can see it in action below … ®