
Today I learned about Intel’s AI sliders that filter online game abuse



Last year during its virtual GDC presentation, Intel announced Bleep, a new AI-powered tool that it hopes will cut down on the amount of toxicity gamers have to endure in voice chat. According to Intel, the app uses AI to detect and redact audio based on user preferences. The filter works on incoming audio, acting as an additional, user-controlled layer of moderation on top of whatever a platform or service already offers.
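As a rough illustration of that layered design, here is a minimal, hypothetical sketch in Python: the platform's own moderation runs first, then the user-controlled filter edits whatever incoming audio its detector flags. The function names, the string stand-ins for audio segments, and the toy heuristic detector are all assumptions for illustration, not Intel's implementation.

    # Hypothetical sketch, not Intel's code: the filter runs on incoming
    # audio only, as an extra user-controlled layer on top of whatever
    # moderation the platform already applies. Strings stand in for audio.

    def platform_moderation(segment: str) -> str:
        """Stand-in for whatever filtering the game or service already does."""
        return segment

    def bleep_detector(segment: str) -> bool:
        """Hypothetical stand-in for Bleep's AI detection of abusive speech."""
        return "slur" in segment  # toy heuristic, for illustration only

    def filter_incoming(segment: str) -> str:
        """Apply the platform's layer, then the user-controlled layer."""
        segment = platform_moderation(segment)
        if bleep_detector(segment):
            return "*bleep*"  # edit the offending audio rather than drop it
        return segment

    print(filter_incoming("gg, nice shot"))        # passes through unchanged
    print(filter_incoming("some slur goes here"))  # replaced with *bleep*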

It’s a noble endeavor, but there’s something gloomily amusing about Bleep’s interface, which lays out in great detail all the different categories of abuse that people may encounter online, along with sliders to control the amount of abuse users will hear. The categories range from “Aggression” to “LGBTQ+ Hate,” “Misogyny,” “Racism and Xenophobia,” and “White Nationalism.” There’s even a toggle for the N-word. Bleep’s page notes that it has not yet entered public beta, so all of this may change.

The filters include “Aggression”, “Misogyny” …
Credit: Intel

… and a switch for the “N-word.”
Image: Intel

With most of these categories, Bleep seems to give users a choice: do you want none, some, most, or all of this offensive language to be filtered out? Like choosing from a buffet of toxic internet slurry, Intel’s interface gives players the option to sprinkle a light serving of aggression or name-calling into their online games.
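To make the slider model concrete, here is a minimal sketch of what per-category preferences like these could look like. The category names come from Bleep's screenshots; the FilterLevel enum, its values, and the preferences mapping are hypothetical illustrations, not Intel's actual API.

    from enum import Enum

    class FilterLevel(Enum):
        """The four slider positions the interface appears to offer."""
        NONE = 0  # filter nothing in this category
        SOME = 1
        MOST = 2
        ALL = 3   # filter everything in this category

    # Category names come from Bleep's screenshots; the mapping itself is a
    # hypothetical illustration of per-category preferences, not Intel's API.
    preferences = {
        "Aggression": FilterLevel.MOST,
        "LGBTQ+ Hate": FilterLevel.ALL,
        "Misogyny": FilterLevel.ALL,
        "Racism and Xenophobia": FilterLevel.ALL,
        "White Nationalism": FilterLevel.ALL,
    }

    # The N-word control is an on/off toggle rather than a four-step slider.
    n_word_filter_enabled = True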

Bleep has been in development for a couple of years now – PCMag notes that Intel talked about this initiative as far back as GDC 2019 – and Intel is working with AI moderation specialist Spirit AI on the software. But moderating online spaces with artificial intelligence is no easy feat, as platforms like Facebook and YouTube have shown. Although automated systems can identify overtly offensive words, they often fail to account for the context and nuance of certain insults and threats. Online toxicity comes in many, ever-evolving forms that can be difficult for even the most advanced AI moderation systems to detect.

“While we realize that solutions like Bleep do not eliminate the problem, we believe it is a step in the right direction, giving players a tool to control their experience,” said Intel’s Roger Chandler during the GDC demonstration. Intel says it hopes to release Bleep later this year, adding that the technology relies on its hardware-accelerated AI detection, which suggests the software may rely on Intel hardware to run.

