
Discover the stupidity of AI emotion recognition with this little browser game



Tech companies don’t just want to identify you using facial recognition – they also want to read your emotions with AI. For many researchers, though, claims about computers’ ability to understand emotion are fundamentally flawed, and a little in-browser game built by researchers from the University of Cambridge aims to show why.

Go to emojify.info, and you can see how your emotions are “read” by your computer via your webcam. The game challenges you to produce six different emotions (happiness, sadness, fear, surprise, disgust, and anger), which the AI will attempt to identify. You will probably find, though, that the software’s readings are far from accurate, often interpreting even exaggerated expressions as “neutral”. And even when you do produce a smile that convinces your computer you are happy, you will know that you were faking it.

This is the point of the site, says creator Alexa Hagerty, a researcher at the University of Cambridge Leverhulme Centre for the Future of Intelligence and the Centre for the Study of Existential Risk: to demonstrate that the basic premise underlying much emotion recognition technology – that facial movements are intrinsically linked to changes in feeling – is flawed.

“The premise of these technologies is that our faces and inner feelings are correlated in a very predictable way,” Hagerty tells The Verge. “If I smile, I’m happy. If I frown, I’m angry. But the APA did this major review of the evidence in 2019, and they found that people’s emotional states cannot be readily inferred from their facial movements.” In the game, says Hagerty, “you have a chance to move your face rapidly to impersonate six different emotions, but the point is you didn’t inwardly feel six different things, one after the other in a row.”

A second mini-game on the site drives this point home by asking users to spot the difference between a wink and a blink – something machines cannot do. “You can close your eyes, and it could be an involuntary action or a meaningful gesture,” says Hagerty.

Despite these problems, emotion recognition technology is rapidly gaining traction, with companies promising that such systems can be used to vet job candidates (giving them an “employability score”), spot would-be terrorists, or assess whether commercial drivers are sleepy or drowsy. (Amazon has even deployed similar technology in its own vans.)

Of course, humans also make mistakes when we read emotions on other people’s faces, but handing this job over to machines comes with specific disadvantages. For one, machines cannot read other social cues the way humans can (as with the wink/blink dichotomy). Machines also often make automated decisions that humans cannot question, and they can conduct surveillance at a mass scale without our awareness. Plus, as with facial recognition systems, emotion recognition AI is often racially biased, more frequently judging the faces of Black people as showing negative emotions, for example. All of these factors make AI emotion recognition far more troubling than humans’ ability to read others’ feelings.

“The dangers are multiple,” says Hagerty. “With human miscommunication, we have many options for correcting that. But when you automate something, or the reading is done without your knowledge, those options are gone.”

