
Facebook gives more details about its efforts against hate speech before Myanmar’s general election

About three weeks ago, Facebook announced it would increase its efforts against hate speech and misinformation in Myanmar before the country’s general election on November 8, 2020. Today, it gave some more details about what the company is doing to prevent the spread of hate speech and misinformation. This includes adding Burmese-language warning screens to flag information rated false by third-party fact-checkers.

In November 2018, Facebook admitted it didn’t do enough to prevent its platform from being used to “foment division and incite offline violence” in Myanmar.

This is an understatement, considering that Facebook has been accused by human rights groups, including the United Nations Human Rights Council, of enabling the spread of hate speech in Myanmar against Rohingya Muslims, the target of a brutally violent ethnic cleansing campaign. A 2018 investigation by the New York Times found that members of the military in Myanmar, a predominantly Buddhist country, instigated genocide against Rohingya, and used Facebook, one of the country’s most widely-used online services, as a tool to conduct a “systematic campaign” of hate speech against the minority group.

In its announcement several weeks ago, Facebook said it will expand its misinformation policy and remove information intended to “lead to voter suppression or damage the integrity of the electoral process” by working with three fact-checking partners in Myanmar—BOOM, AFP Fact Check and Fact Crescendo. It also said it would flag potentially misleading images and apply a message forwarding limit it introduced in Sri Lanka in June 2019.

Facebook also shared that in the second quarter of 2020, it had taken action against 280,000 pieces of content in Myanmar that violated its Community Standards against hate speech, with 97.8% detected by its systems before being reported, up from the 51,000 pieces of content it took action against in the first quarter.

But, as TechCrunch’s Natasha Lomas noted, “without greater visibility into the content Facebook’s platform is amplifying, including country specific factors such as whether hate speech posting is increasing in Myanmar as the election gets closer, it’s not possible to understand what volume of hate speech is passing under the radar of Facebook’s detection systems and reaching local eyeballs.”

Facebook’s latest announcement, posted today on its News Room, doesn’t answer those questions. Instead, the company gave some more information about its preparations for Myanmar’s general election.

The company said it will use technology to identify “new words and phrases associated with hate speech” in the country, and either remove posts with those words or “reduce their distribution.”

It will also introduce Burmese-language warning screens for misinformation identified as false by its third-party fact-checkers, make reliable information about the election and voting more visible, and promote “digital literacy training” in Myanmar through programs like an ongoing monthly television talk show called “Tea Talks,” and by introducing its social media analytics tool, CrowdTangle, to newsrooms.


How will EC plans to reboot rules for digital services impact startups?

A framework for ensuring fairness in digital marketplaces and tackling abusive behavior online is brewing in Europe, fed by a smorgasbord of issues and ideas, from online safety and the spread of disinformation, to platform accountability, data portability and the fair functioning of digital markets.

European Commission lawmakers are even turning their eye to labor rights, spurred by regional concern over unfair conditions for platform workers.

On the content side, the core question is how to balance individual freedom of expression online against threats to public discourse, safety and democracy from illegal or junk content that can be deployed cheaply, anonymously and at massive scale to pollute genuine public debate.

The age-old conviction that the cure for bad speech is more speech can stumble in the face of such scale. And while illegal or harmful content can be a money spinner, the economic incentive of outrage-driven engagement often gets overlooked or edited out of this policy debate.

Certainly the platform giants — whose business models depend on background data-mining of internet users in order to program their content-sorting and behavioral ad-targeting (activity that, notably, remains under regulatory scrutiny in relation to EU data protection law) — prefer to frame what’s at stake as a matter of free speech, rather than bad business models.

But with EU lawmakers opening a wide-ranging consultation about the future of digital regulation, there’s a chance for broader perspectives on platform power to shape the next decades online, and much more besides.

In search of cutting-edge standards

For the past two decades, the EU’s legal framework for regulating digital services has been the e-commerce Directive — a cornerstone law that harmonizes basic principles and bakes in liability exemptions, greasing the wheels of cross-border e-commerce.

In recent years, the Commission has supplemented this by applying pressure on big platforms to self-regulate certain types of content, via a voluntary Code of Conduct on illegal hate speech takedowns — and another on disinformation. However, the codes lack legal bite, and lawmakers continue to chastise platforms for not doing enough and for not being transparent enough about what they are doing.


France passes law forcing online platforms to delete hate-speech content within 24 hours

France’s lower chamber of parliament has voted in favor of a controversial law against hate speech on social networks and online platforms. As I described last year, online platforms will have to remove flagged illicit content within 24 hours. Otherwise, companies will have to pay hefty fines every time they infringe the law.

What do they mean by illicit content? Essentially, anything that would be considered an offense or a crime in the offline world is now considered illicit content when it appears on an online platform. Among other things, you could think of death threats, discrimination, Holocaust denial…

For the most extreme categories, terrorist content and child pornography, online platforms must react within an hour.

While online hate speech has been getting out of control, many fear that online platforms will censor content a bit too quickly. Companies don’t want to risk a fine, so they might delete content that doesn’t infringe the law just because they’re not sure.

Essentially, online platforms have to regulate themselves. The government then checks whether they’re doing a good job or not. “It’s just like banking regulators. They check that banks have implemented systems that are efficient, and they audit those systems. I think that’s how we should think about it,” France’s digital minister Cédric O told me in an interview last year.

There are multiple levels of fines. They start at hundreds of thousands of euros but can reach up to 4% of a company’s global annual revenue in the most severe cases. France’s audiovisual regulator, the Conseil supérieur de l’audiovisuel (CSA), is in charge of those cases.

Germany has already passed similar regulation and there are ongoing discussions at the European Union level.


Tumblr now removes reblogs in violation of its hate-speech policy, not just the original posts

Tumblr is making a change to how it deals with hate speech on its blogging platform. The company announced today it will also remove reblogs (repostings) of content from any blogs that were suspended for violating its policies around hate speech. Already, the company says it’s identified nearly 1,000 blogs that were banned for blatant violations of its hate-speech rules. Most of these blogs contained Nazi-related content, it said. This week, Tumblr began to remove all the reblogs from these previously banned sites as well — a number totaling 4.47 million individual posts.

In an announcement, Tumblr explains its reasoning behind the decision to also remove the reblogged material:

We’ve listened to your feedback and have reassessed how we can more effectively remove hateful content from Tumblr. In our own research, and from your helpful reports, we found that much of the existing hate speech stemmed from blogs that have actually already been terminated. While their original posts were deleted upon blog termination, the content of those posts still lived on in reblogs. Those reblogs rarely contained the kind of counter-speech that serves to keep hateful rhetoric in check, so we’re changing how we deal with them.

In other words, it saw no value in allowing the hate speech to live on in this reposted state, as the majority of the reblogs weren’t engaged in providing what Tumblr referred to as “educational” or “necessary counter-arguments” to the hate speech.

When asked if it did, in fact, remove reblogs of an educational nature, Tumblr said it used human moderators to determine which content was in violation and which was not. Any blogs containing “productive counter-conversations” or “educational blogs” were not removed as part of this process, we’re told.

In addition, Tumblr says that moving forward it will evaluate all blogs suspended for hate speech and consider mass reblog deletion when appropriate.

The company consulted with outside experts to determine the right course of action. Ultimately, Tumblr believes the new approach is aligned with the recommended best practices it has been advised to adopt regarding hate speech.

“We are, and will always remain, steadfast believers in free speech. Tumblr is a place where you can be yourself and express your opinions. Hate speech is not conducive to that,” the company’s announcement read. “When hate speech goes unchecked, it eventually silences the voices that add kindness and value to our society. That’s not the kind of Tumblr any of us want.”

Tumblr also noted the decisions it’s making aren’t being left up to AI and algorithms. Instead, Tumblr asks users to flag hate speech they come across for review by its Trust & Safety team.

As expected, there’s a debate about the policy taking place in the comments of the Tumblr post about the changes. On one side are those who support the idea of companies enforcing policies around the sort of content they do not want to host. On the other are the free-speech advocates who see any such policy as a form of censorship.

The effort to take more action on hate speech follows Tumblr’s 2018 decision to ban porn from its platform after getting kicked out of Apple’s App Store for hosting the content. Similarly, hosting hate-speech reblogs could cause problems with Apple’s own rules.

Tumblr has made few changes since its acquisition by WordPress owner Automattic from (TechCrunch parent) Verizon in 2019. But its earlier decisions to clean up its site have had a negative impact on its traffic.

Its significantly devalued price point at the time of the Automattic deal was attributed to its decision to remove NSFW content. Almost every meaningful metric has been down year-over-year since the ban, including total visitors, uniques, average site visit, traffic, daily active users and more. Meanwhile, the younger demographic who used to populate Tumblr in the millions has largely moved on to expressive, video-centric social platforms, like TikTok and Twitch.

Tumblr’s Community Guidelines haven’t been updated to include its decision to remove reblogs of hate speech, but its full hate-speech rules can be viewed here. 


Twitter expands hateful conduct rules to ban dehumanizing speech around age, disability and now, disease

Last year, Twitter expanded its rules around hate speech to include dehumanizing speech against religious groups. Today, Twitter says it’s expanding that rule to also include language that dehumanizes people on the basis of their age, disability or disease. The latter, of course, is a timely addition given that the spread of the novel coronavirus COVID-19 has led to people making hateful and sometimes racist remarks on Twitter related to this topic.

Twitter says that tweets that broke this rule before today will need to be deleted, but those won’t result in any account suspensions because the rules were not in place at that time. However, any tweets published from now on will now have to abide by Twitter’s updated hateful conduct policy. This overarching policy includes rules about hateful conduct — meaning promoting violence or making threats — as well as the use of hateful imagery and display names.

The policy already includes a ban on dehumanizing speech across a range of other categories, including race, ethnicity, national origin, caste, sexual orientation, gender, and gender identity. The policy has expanded over time as Twitter has tried to better encompass the many areas where it wants to ban hateful speech and conduct on its platform.

One issue with Twitter’s hateful conduct policy is that it’s not able to keep up with enforcement due to the volume of tweets that are posted. In addition, its reliance on having users flag tweets for review means hate speech removal is handled reactively, rather than proactively. Twitter has also been heavily criticized for not properly enforcing its policies and allowing the online abuse to continue.

In today’s announcement, Twitter freely admits to these and other problems. It also notes it has since done more in-depth training and extended its testing period to ensure reviewers better understand how and when to take action, as well as how to protect conversations within marginalized groups. And it created a Trust and Safety Council to help it better understand the nuances and context around complex categories, like race, ethnicity and national origin.

Unfortunately, Twitter’s larger problem is that it has operated for years as a public town square where users have felt relatively free to express themselves without using a real identity where they’re held accountable for their words and actions. There are valid cases for online anonymity — including how it allows people to more freely communicate under oppressive regimes, for example. But the flip side is that it emboldens some people to make statements that they otherwise wouldn’t — and without any social repercussions. That’s not how it works in the real world.

Plus, any time Twitter tries to clamp down on hateful speech and conduct, it’s accused of clamping down on free speech — as if its social media platform is a place that’s protected by the U.S. Constitution’s First Amendment. According to the U.S. courts, that’s not how it works. In fact, a U.S. court recently ruled that YouTube wasn’t a public forum, meaning it didn’t have to guarantee users’ rights to free speech. That sets a precedent for other social platforms as well, Twitter included.

Twitter for years has struggled to get more people to sign up and participate. But it has simultaneously worked against its own goal by not getting a handle on the abuse on its network. Instead, it’s testing new products — disappearing Stories, for example — that it hopes will encourage more engagement. In reality, better-enforced policies would do the trick. The addition of educational prompts in the Compose screen — similar to those on Instagram that alert users to content that will likely get reported or removed — is also well overdue.

It’s good that Twitter is expanding the language in its policy to be more encompassing. But its words need to be backed up with actions.

Twitter says its new rules are in place as of today.