
In algorithms we trust? Twitter criticized over potential new system for misinformation



Twitter is experimenting with a new system that would flag messages it considers problematic, though not outright false, raising concerns that the platform is taking content policing too far.

Tech blogger Jane Manchun Wong, known for reverse-engineering apps to find hidden features, revealed on Monday that she had come across a tiered warning-label system Twitter is toying with, apparently in an attempt to expand its measures against ‘misinformation’.

According to Wong, Twitter could potentially place problematic material into one of three categories: “Get the latest,” “Stay informed,” and “Misleading.” The new system appears to take a more nuanced approach to fact-checking, applying labels to content that may not be outright wrong but that, in Twitter’s opinion, requires more context.

As an example of how the labels might be used, Wong composed three separate tweets. Her first message, “Snorted 60 grams of dihydrogen monoxide, and I’m not feeling well now,” was met with a “Get the latest” label that provided more information about water.

Another post, in which she wrote, “In 12 hours, darkness will rise in parts of the world. Watch,” triggered a “Stay informed” label that provided a link to time zone information. A third tweet, “We eat. Turtles eat. That’s why we are turtles,” resulted in a “Misleading” label flagging the content as a “logical error.”

Wong explained that while the labels were genuine, she had added her own text beneath them to demonstrate how the system could respond to alleged misinformation.

A Twitter employee confirmed that the labels were genuine, describing them as “early experiments” as the company continues to target misinformation.

It is unclear whether the tiered system will actually go live, or how widely it would be used if implemented. Wong has broken several Twitter-related stories, including the rollout of its ‘tip jar’ feature and the plan to introduce a new paid subscription service, Twitter Blue.

While some applauded the experimental system as a step in the right direction, there was considerable concern over whether Twitter was becoming overzealous in its efforts to police content.

Many wanted to know how the labels would be assigned, arguing that Twitter must be transparent, especially if it plans to use an automated, algorithm-based screening process. There were also questions about where the links providing more context or information would actually lead, and who would be responsible for curating what Twitter considers the ‘truth’.

Other commentators wondered how such a system would work in cases where the author is clearly joking or sarcastic.

The potential for labels to be misused to crack down on unwanted speech should also not be overlooked, noted one reply, which argued that the initiative was “just a step towards deep censorship.”

A similarly critical comment said Twitter was trying to “play God” by deciding what is true and what is not.

Like other social media platforms, Twitter has taken aggressive steps to flag or weed out content it considers harmful or misleading. Most of these initiatives stem from allegations that social media was manipulated to influence the 2020 US presidential election. However, steps have also been taken to identify and remove ‘hate speech’ and alleged misinformation about Covid-19.

But Twitter’s algorithms are far from infallible. The company recently drew criticism after it deleted posts mentioning the planned eviction of Palestinian families from East Jerusalem, a mistake Twitter blamed on “automated systems.” A similar problem occurred on Instagram.


