Press "Enter" to skip to content

Twitter Is Warning Users about Toxic Replies

In its latest effort to deal with rampant harassment on its platform, Twitter is giving users a second chance before they tweet. In a new feature the company is testing, users who employ "harmful" language will see a prompt suggesting they self-edit before posting a reply.

The framing here is a bit disingenuous; harassment on Twitter certainly doesn't just happen in the "heat of the moment" by otherwise well-meaning people. Still, anything that can reduce toxicity on the platform is probably better than what we have now.

Last year at F8, Instagram rolled out a similar test that "nudges" users with a warning before they post a potentially offensive comment. In December, the company offered an update on its efforts. "Results have been promising, and we've found that these types of nudges can encourage people to reconsider their words when given a chance," the company wrote in a blog post.

This sort of thing is especially relevant right now, as companies conduct moderation across their massive platforms with relative skeleton crews. All of the major social networks have announced an increased reliance on AI detection as the pandemic keeps tech workers away from the office. In Facebook's case, content moderators are among the workers the company would like to bring back first.

We've reached out to Twitter for more details about the kind of language that triggers the new test feature and whether the company would also consider showing the prompt for regular tweets that aren't replies. We'll update the story if we receive more information about what this experiment will look like.
