Twitter begins testing a new ‘revamped’ reporting process for a small group of users


Twitter is testing a redesigned reporting process to make it easier for users to alert the platform to harmful behavior.

The redesign is meant to simplify reporting by focusing on what the person experienced rather than on which rule was broken.

“Twitter receives millions of reports, covering everything from misinformation and spam to harassment and hate speech; each report is a person’s way of telling Twitter, ‘Hey, this isn’t right’ or ‘I don’t feel safe.’ Based on user feedback, research, and the fact that the current reporting process did not make enough people feel safe or heard, the company decided to act,” Twitter said in a blog post.

“The new approach, currently being tested with a small group in the United States, simplifies the reporting process by relieving the individual of the burden of interpreting which violation is at issue. Instead, it asks them what happened,” the company said.

Following what it calls a “symptoms-first” method, Twitter will start by asking the user what is going on.

“The idea is, first, let’s try to find out what’s going on instead of asking you to diagnose the problem,” the company explained.

Valuable input for Twitter

“In times of emergency, people need to be heard and to feel supported. Asking them to open the medical dictionary and say, ‘point out the one thing that’s your problem,’ is something people aren’t going to do,” said Brian Waismeyer, a data scientist on the health user experience team that led the new process.

“If they come in for help, what they will do well is describe what is happening to them in that moment,” Waismeyer added.

“What can be frustrating and complex about reporting is that we enforce based on terms-of-service violations as defined by the Twitter Rules,” said Renna Al-Yassini, senior UX manager on the team.

“The vast majority of what people report falls into a much larger gray spectrum that does not meet specific criteria for Twitter violations, but they still report what they are experiencing as deeply problematic and very upsetting,” Al-Yassini added.

The microblogging platform hopes to improve the quality of reports by refocusing on the experience of the person reporting the tweet and gathering more first-hand information about the incident. This can help the company better understand how people experience certain content and, in turn, be more precise when it comes to acting on or removing it.

“This rich pool of information, even when the tweets in question do not technically break any rules, still gives Twitter valuable input it can use to improve people’s experience on the platform,” the company said.

How it works

Once a user reporting a violation describes what happened, Twitter will present them with the rule violation it thinks may have occurred and then ask: does this look right? If not, the person can say so, which helps show Twitter where gaps remain in the reporting system.

“All the while, Twitter is collecting feedback and compiling lessons from this chain of events that will help it refine the process and map the symptoms to actual policies. Ultimately, it helps Twitter take the right action,” the company explained.
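The loop described above is simple enough to sketch. Below is a minimal, hypothetical Python model of a symptoms-first report flow, assuming a toy keyword matcher in place of whatever classification Twitter actually uses; every name in it (Report, suggest_policy, handle_report, POLICY_HINTS) is illustrative and not drawn from Twitter's systems.

```python
# Hypothetical sketch of a "symptoms-first" report flow, based only on the
# description in this article; none of these names come from Twitter's code.
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class Report:
    description: str                      # the reporter's own words ("symptoms")
    suggested_policy: str | None = None   # the rule the system thinks applies
    confirmed: bool | None = None         # did the reporter agree with that guess?
    feedback: list[str] = field(default_factory=list)


# Toy keyword map standing in for whatever classifier is actually used.
POLICY_HINTS = {
    "spam": "Platform manipulation and spam",
    "threat": "Violence and harassment",
    "misleading": "Misleading information",
}


def suggest_policy(description: str) -> str | None:
    """Guess which policy the described experience maps to, if any."""
    text = description.lower()
    for keyword, policy in POLICY_HINTS.items():
        if keyword in text:
            return policy
    return None  # no clear match: still useful signal for the platform


def handle_report(description: str, reporter_agrees: bool) -> Report:
    """Run one pass of the flow: describe, suggest a rule, confirm, log feedback."""
    report = Report(description=description)
    report.suggested_policy = suggest_policy(description)
    report.confirmed = reporter_agrees
    if not reporter_agrees or report.suggested_policy is None:
        # Reports that match no rule are kept as feedback to close gaps.
        report.feedback.append("reporter experience not covered by current rules")
    return report


if __name__ == "__main__":
    print(handle_report("This account keeps sending me spam links", reporter_agrees=True))
    print(handle_report("This tweet is upsetting but I can't name a rule", reporter_agrees=False))
```

The point of the sketch is the last branch: even when no rule matches, the reporter's description is retained as feedback, which mirrors the "gray spectrum" problem the Twitter team describes.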

The new process will roll out to a wider audience next year.

Separately, the platform is also testing an option that lets users add one-time warnings to photos and videos where appropriate.

“People use Twitter to discuss what’s happening in the world, which sometimes means sharing unsettling or sensitive content. We’re testing an option for some of you to add one-time warnings to photos and videos you Tweet out, to help those who might want the warning,” Twitter wrote from its official Twitter Safety account.

