Twitter begins testing a new ‘revamped’ reporting process for a small group of users
Twitter is testing a redesigned reporting process to make it easier for users to alert the platform to harmful behavior.
The redesigned flow aims to simplify the process of filing a report.
“Twitter receives millions of reports, covering everything from misinformation and spam to harassment and hate speech. Reporting is users’ way of telling Twitter, ‘hey, this isn’t right’ or ‘I don’t feel safe.’ Based on user feedback, research, and the fact that the current reporting process did not make enough people feel safe or heard, the company decided to act,” Twitter said in a blog post.
The new approach, currently being tested with a small group of users in the United States, simplifies the reporting process by relieving the individual of the burden of interpreting which rule was violated. Instead, it asks them what happened, the company said.
As part of the process, which follows a “symptoms-first” method, Twitter will first ask the user what is going on.
“The idea is, first, let’s try to find out what’s going on instead of asking you to diagnose the problem,” the company explained.
Valuable data entries for Twitter
“In times of emergency, people need to be heard and to feel supported. Asking them to open the medical dictionary and point out the one thing that’s their problem is something people aren’t going to do,” said Brian Waismeyer, a data scientist on the health user experience team that led this new process.
“If they come in for help, what they will do well is describe what is happening to them in the moment,” Waismeyer added.
“What can be frustrating and complex about reporting is that we enforce based on Terms of Service violations as defined by the Twitter rules,” said Renna Al-Yassini, senior UX manager on the team.
“The vast majority of what people report falls within a much larger gray spectrum that does not meet the specific criteria for Twitter violations, but they still report what they are experiencing as deeply problematic and very upsetting,” Al-Yassini added.
The microblogging platform hopes to improve the quality of reports by refocusing on the experience of the person reporting the tweet and gathering more first-hand information about the incident. This can help the company better understand how people experience certain content and, in turn, be more precise when it comes to actioning or removing it.
“This rich pool of information, even when the tweets in question do not technically break any rules, still gives Twitter valuable input that it can use to improve people’s experience on the platform,” the company said.
How it works
Once a user reporting a violation describes what happened, Twitter will present them with the Terms of Service violation it believes may have occurred, and then ask: is this right? If not, the person can say so, which helps signal to Twitter that there are still gaps in its reporting system.
“All the while, Twitter is collecting feedback and compiling lessons from this chain of events that will help it refine the process and map the reported symptoms to actual policies. Ultimately, it helps Twitter take the right action,” the company explained.
The new process will roll out to a wider audience next year.
Separately, the platform is also testing an option for users to add one-off warnings to photos and videos where appropriate.
“People use Twitter to discuss what’s going on in the world, which sometimes means sharing disturbing or sensitive content. We’re testing an option for some of you to add one-time warnings to photos and videos you tweet, to help those who might want the warning,” Twitter wrote from its official Twitter Safety account.