Twitter Tests New Feature for Users to Flag ‘Misleading’ Posts

By Tom Ozimek
Reporter
Tom Ozimek has a broad background in journalism, deposit insurance, marketing and communications, and adult education. The best writing advice he's ever heard is from Roy Peter Clark: 'Hit your target' and 'leave the best for last.'
August 18, 2021

Twitter has announced that it is testing a feature that lets people report tweets they believe are misleading, saying the input will help it scale up and accelerate its “broader misinformation work.”

The new feature is being tested in the United States, South Korea, and Australia starting Aug. 17, the company said in a series of tweets.

“We’re assessing if this is an effective approach so we’re starting small,” Twitter stated, later adding in a tweet that, “in this experimental stage, we plan to learn from a small, geographically diverse set of regions before scaling globally to other areas!”

During the trial, when selected users click the Report Tweet option, they will see a new choice: “It’s misleading.” Twitter will then ask them to provide additional information about the issue they’re reporting.

The company added that it may not take action on each report and won’t respond to every tweet that is flagged, “but your input will help us identify trends so that we can improve the speed and scale of our broader misinformation work.”

Unsurprisingly, the pilot program has already sparked controversy, with some Twitter users offering critical reactions ranging from concerns about the lack of standards for what constitutes “misleading,” to worries that the feature would be misused “to silence anything that doesn’t fit the mainstream narrative,” to fears that it would make it easier for users to get “steamrolled by hate groups.”

Some users reacted with glee, others called for the feature to be rolled out in other regions, and others still posted screenshots of Twitter notifications confirming they had successfully flagged a post as “misleading.”

This is not the first time Twitter has experimented with user-driven means of flagging misinformation. In January, the company rolled out a pilot program called Birdwatch, in which approved contributors add publicly visible notes to posts they believe are misleading. Twitter called it “a community-based approach to misinformation.”

Twitter’s vice president of product Keith Coleman wrote in a blog post at the time that the Birdwatch crowd-sourced approach “has the potential to respond quickly when misleading information spreads, adding context that people trust and find valuable.”

In June, Twitter expanded Birdwatch, announcing that notes would become visible to pilot participants, who would be able to rate whether the notes are helpful.

The trial comes amid a broader push by Twitter and other social media platforms to combat what they deem misinformation. While such efforts have drawn praise from some, they have also been met with criticism on both sides of the political divide: some on the left say social media companies aren’t doing enough to curb the spread of objectionable content, while some on the right argue these efforts are a cover for censoring conservative speech and tipping the scales in favor of progressive viewpoints and politics.
