Twitter expands hateful conduct rules to ban dehumanizing speech around age, disability and now, disease

Last year, Twitter expanded its rules around hate speech to include dehumanizing speech against religious groups. Today, Twitter says it’s expanding its rules to also include language that dehumanizes people on the basis of their age, disability, or disease. The latter, of course, is a timely addition given that the spread of the novel coronavirus COVID-19 has led to people making hateful and sometimes racist remarks on Twitter related to the topic.

Twitter says that tweets that broke this rule before today will need to be deleted, but those won’t result in any account suspensions because the rules were not in place at that time. However, any tweets published from now on must abide by Twitter’s updated hateful conduct policy. This overarching policy includes rules about hateful conduct — meaning promoting violence or making threats — as well as the use of hateful imagery and display names.

The policy already includes a ban on dehumanizing speech across a range of categories, including race, ethnicity, national origin, caste, sexual orientation, gender, and gender identity. The policy has expanded over time as Twitter has tried to better encompass the many areas where it wants to ban hateful speech and conduct on its platform.

One issue with Twitter’s hateful conduct policy is that the company can’t keep up with enforcement, given the volume of tweets posted. In addition, its reliance on having users flag tweets for review means hate speech removal is handled reactively, rather than proactively. Twitter has also been heavily criticized for not properly enforcing its policies and allowing online abuse to continue.

In today’s announcement, Twitter freely admits to these and other problems. It also notes it has since done more in-depth training and extended its testing period to ensure reviewers better understand how and when to take action, as well as how to protect conversations within marginalized groups. And it created a Trust and Safety Council to help it better understand the nuances and context around complex categories, like race, ethnicity and national origin.

Unfortunately, Twitter’s larger problem is that it has operated for years as a public town square where users have felt relatively free to express themselves without using a real identity where they’re held accountable for their words and actions. There are valid cases for online anonymity — including how it allows people to more freely communicate under oppressive regimes, for example. But the flip side is that it emboldens some people to make statements that they otherwise wouldn’t — and without any social repercussions. That’s not how it works in the real world.

Plus, any time Twitter tries to clamp down on hateful speech and conduct, it’s accused of clamping down on free speech — as if its social media platform is a place that’s protected by the U.S. Constitution’s First Amendment. According to the U.S. courts, that’s not how it works. In fact, a U.S. court recently ruled that YouTube wasn’t a public forum, meaning it didn’t have to guarantee users’ rights to free speech. That sets a precedent for other social platforms as well, Twitter included.

Twitter for years has struggled to get more people to sign up and participate. But it has simultaneously worked against its own goal by not getting a handle on the abuse on its network. Instead, it’s testing new products — disappearing Stories, for example — that it hopes will encourage more engagement. In reality, better-enforced policies would do the trick. The addition of educational prompts in the Compose screen — similar to those on Instagram that alert users to content that will likely get reported or removed — is also well overdue.

It’s good that Twitter is expanding the language in its policy to be more encompassing. But its words need to be backed up with actions.

Twitter says its new rules are in place as of today.

Twitter opens its ‘Hide Replies’ feature to developers

Last November, Twitter rolled out its Hide Replies feature to all users worldwide. The feature, largely designed to lessen the power of online trolls to disrupt conversations, lets users decide which replies to their tweets are placed behind an extra click. Today, Twitter is making Hide Replies available to its developer community, allowing for the creation of tools that help people hide the replies to their tweets faster and more efficiently, says Twitter.

These sorts of tools will be of particular interest to businesses and brands who maintain a Twitter presence, but whose accounts often get too many replies to tweets to properly manage on an individual basis. With Hide Replies now available as a new API endpoint, developers can create tools that automatically hide disruptive tweets based on factors important to their customers — like tweets that include certain prohibited keywords or those that score high for being toxic, for example.
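A keyword-based tool of this sort can be sketched in a few lines. The helper names below and the endpoint path are illustrative assumptions (the article doesn’t specify them, and real calls require OAuth user-context authentication); only the PUT-with-a-`hidden`-flag shape reflects how Twitter has documented the Hide Replies endpoint:

```python
# Sketch: decide which replies to hide based on prohibited keywords, then
# build the request for the Hide Replies endpoint. The URL path below is an
# illustrative assumption; check Twitter's developer docs for the exact path.
import json

def should_hide(reply_text, prohibited_keywords):
    """Return True if the reply contains any prohibited keyword (case-insensitive)."""
    text = reply_text.lower()
    return any(kw.lower() in text for kw in prohibited_keywords)

def build_hide_request(tweet_id, hidden=True):
    """Build the URL and JSON body for a PUT to the Hide Replies endpoint."""
    url = f"https://api.twitter.com/labs/2/tweets/{tweet_id}/hidden"
    body = json.dumps({"hidden": hidden})
    return url, body

# Example: filter a batch of replies, then prepare (but don't send) the calls.
replies = [("1001", "great thread!"), ("1002", "this is SPAM, buy now")]
to_hide = [tid for tid, text in replies if should_hide(text, ["spam"])]
pending_requests = [build_hide_request(tid) for tid in to_hide]
```

The same `build_hide_request` call with `hidden=False` would unhide a reply, which maps onto the unhide support Twitter says it is adding based on tester feedback.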

Ahead of today’s launch, Twitter worked with a small number of developers who are now releasing tools that take advantage of the added functionality.

Jigsaw, an Alphabet-owned company tackling the worst of the web, has integrated Twitter’s new endpoint with its Perspective API, which uses A.I. to score tweets based on their toxicity. The integration automatically hides replies that exceed a certain toxicity threshold (0.94), freeing up the time it would otherwise take to comb through replies manually.
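The gist of that integration can be sketched as follows. The request shape is a simplified take on Perspective’s AnalyzeComment format and the function names are invented for illustration; only the 0.94 threshold comes from the integration described above:

```python
# Sketch: score a reply for toxicity, then decide whether to auto-hide it.
# Perspective toxicity scores range from 0.0 to 1.0.

TOXICITY_THRESHOLD = 0.94  # replies scoring above this are hidden automatically

def build_analyze_payload(reply_text):
    """Build a simplified Perspective-style AnalyzeComment request body."""
    return {
        "comment": {"text": reply_text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def exceeds_threshold(toxicity_score, threshold=TOXICITY_THRESHOLD):
    """Return True if the reply's toxicity score crosses the auto-hide bar."""
    return toxicity_score > threshold

# Example decisions for two hypothetical scores returned by the API:
# a 0.97 reply would be hidden; a 0.40 reply would stay visible.
decisions = [exceeds_threshold(0.97), exceeds_threshold(0.40)]
```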

Reshuffle, a scripting platform for business workflows, has used the endpoint to develop scripts that detect and hide replies based on keywords, or even by user.

Dara Oladosu, the creator of the popular app QuotedReplies, also used the endpoint to build a new app called Hide Unwanted Replies. The app today automatically hides replies by keywords or Twitter handles. Soon, it will add support for hiding replies from likely troll or bot accounts — including tweets from user accounts created too recently or from accounts with few followers.
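That kind of account-level heuristic can be sketched as a simple filter. The cutoffs below (account age, follower count) are invented for illustration, not Hide Unwanted Replies’ actual values:

```python
# Sketch: flag replies from accounts that look like throwaway trolls or bots,
# based on the two signals mentioned above: account age and follower count.
# Both thresholds are illustrative assumptions.
from datetime import datetime, timedelta, timezone

MIN_ACCOUNT_AGE = timedelta(days=7)
MIN_FOLLOWERS = 5

def looks_like_troll(created_at, follower_count, now=None):
    """Flag accounts created very recently or with very few followers."""
    now = now or datetime.now(timezone.utc)
    too_new = (now - created_at) < MIN_ACCOUNT_AGE
    too_few = follower_count < MIN_FOLLOWERS
    return too_new or too_few

# Example: a day-old account with 2 followers gets flagged;
# a year-old account with 500 followers does not.
fresh = datetime.now(timezone.utc) - timedelta(days=1)
established = datetime.now(timezone.utc) - timedelta(days=365)
flags = [looks_like_troll(fresh, 2), looks_like_troll(established, 500)]
```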

Hide Replies has been one of Twitter’s more controversial launches to date, as it could potentially allow users to silence critics or stifle dissent even when those replies are warranted — such as replies refuting misinformation or propaganda, for example. Others argue it doesn’t really address online abuse; the abuse still occurs, just in the shadows. One organization even recently leveraged Hide Replies for a clever online campaign about how domestic violence goes unseen, which further illustrates this problem.

Nevertheless, adoption of Hide Replies is growing, with organizations like the CIA even leveraging it on some tweets.

The new Twitter API endpoint for Hide Replies is available today to all developers in a production-ready form, Twitter says, initially through Twitter Developer Labs. This program launched last year to serve as a way for developers to try out Twitter’s latest APIs ahead of their wider release and offer feedback. Participation in Twitter Developer Labs is free, but interested developers have to sign up using an approved developer account. Twitter is also inviting developers building with the new endpoint to collaborate with the company by way of the community forums.

Based on early feedback from the first testers, Twitter says it’s already making a few changes to the endpoint, including support for unhiding replies via the endpoint, a higher rate limit to support high-volume use cases, and a way to retrieve a list of replies that indicates whether they’re hidden or not.

Source: TechCrunch