In Reversal, Twitter Is No Longer Blocking New York Post Article

SAN FRANCISCO — It is the 11th hour before the presidential election. But Facebook and Twitter are still changing their minds.

With just a few weeks to go before the Nov. 3 vote, the social media companies are continuing to shift their policies and, in some cases, are entirely reversing what they will and won’t allow on their sites. On Friday, Twitter underlined just how fluid its policies were when it began letting users share links to an unsubstantiated New York Post article about Hunter Biden that it had previously blocked from its service.

The change was a 180-degree turn from Wednesday, when Twitter had banned the links to the article because the emails on which it was based may have been hacked and contained private information, both of which violated its policies. (Many questions remain about how the New York Post obtained the emails.)

Late Thursday, under pressure from Republicans who said Twitter was censoring them, the company began backtracking by revising one of its policies. It completed its about-face on Friday by lifting the ban on the New York Post story altogether, as the article had spread widely across the internet.

Twitter’s flip-flop followed a spate of changes from Facebook, which over the past few weeks has said it would ban Holocaust denial content, ban more QAnon conspiracy pages and groups, ban anti-vaccination ads and suspend political advertising for an unspecified length of time after the election. All of those things had previously been allowed — until they weren’t.

The rapid-fire changes have made Twitter and Facebook the butt of jokes and invigorated efforts to regulate them. On Friday, Senator Josh Hawley, Republican of Missouri, said he wanted to subpoena Mark Zuckerberg, Facebook’s chief executive, to testify over the “censorship” of the New York Post article since the social network had also reduced the visibility of the piece. Kayleigh McEnany, the White House press secretary, said that Twitter was “against us.” And President Trump shared a satirical article on Twitter that mocked the company’s policies.

“Policies are a guide for action, but the platforms are not standing behind their policies,” said Joan Donovan, research director of the Shorenstein Center on Media, Politics and Public Policy at Harvard’s Kennedy School. “They are merely reacting to public pressure and therefore will be susceptible to political influence for some time to come.”

A Twitter spokesman confirmed that the company would now allow the link to the New York Post article to be shared because the information had spread across the internet and could no longer be considered private. He declined further comment.

A Facebook spokesman, Andy Stone, said: “Meaningful events in the world have led us to change some of our policies, but not our principles.”

For nearly four years, the social media companies have had time to develop content policies to be ready for the 2020 election, especially after Russian operatives were found to have used the sites to sow discord in the 2016 election. But even with all the preparations, the volume of last-minute changes by Twitter and Facebook suggests that they still do not have a handle on the content flowing on their networks.

That raises questions, election experts said, about how Twitter and Facebook would deal with any interference on Election Day and in the days after. The race between Mr. Trump and his Democratic challenger, Joseph R. Biden Jr., has been unusually bitter, and the social media sites are set to play a significant role on Nov. 3 as distributors of information. Some people are already using the sites to call for election violence.

The chaotic environment could challenge the companies’ policies, said Graham Brookie, director of the Digital Forensic Research Lab, a center for the study of social media, disinformation and national security. “Everybody has a plan until they get punched in the face,” he said.

Other misinformation experts said Twitter and Facebook had little choice but to make changes on the fly because of the often norm-breaking behavior of Mr. Trump, who uses social media as a megaphone.

Alex Stamos, director of the Stanford Internet Observatory and a former Facebook executive, noted that after Mr. Trump recently made comments to his supporters to “go into the polls and watch very carefully,” some companies — like Facebook — created new policies that forbid a political candidate to use their platforms to call for that action. The companies also prohibited candidates from claiming an election victory early, he said.

“These potential abuses were always covered by very broad policies, but I think it’s smart to commit themselves to specific actions,” Mr. Stamos said.

So on Wednesday, Twitter blocked links to the article hours after it had been published. The company said sharing the article violated its policy that prohibits users from spreading hacked information. It also said the emails in the story contained private information, so sharing the piece would violate its privacy policies.

But after blocking the article, Twitter was blasted by Republicans for censorship. Many conservatives — including Representative Jim Jordan of Ohio and Ms. McEnany — reposted the piece to bait the company into taking down their tweets or locking their accounts.

Twitter soon said it could have done more to explain its decision. Jack Dorsey, Twitter’s chief executive, said late Wednesday that the company had not provided enough context to users when they were prevented from posting the links.

His reaction set off a scramble at Twitter. By late Thursday, Vijaya Gadde, Twitter’s top legal and policy official, said that the policy against sharing hacked materials would change and that the content would no longer be blocked unless it was clearly shared by the hackers or individuals working in concert with them. Instead, information gleaned from hacks would be marked with a warning label about its provenance, Ms. Gadde said.

The internal discussions continued. On Friday, Twitter users could freely post links to the New York Post article. But the company had not yet added labels to tweets containing the article, as it had said it would.

At Facebook, the recent policy changes have grabbed attention partly because the company said on Sept. 3 that it did not plan to make changes to its site until after the election. “To ensure there are clear and consistent rules, we are not planning to make further changes to our election-related policies between now and the official declaration of the result,” Mr. Zuckerberg wrote in a blog post at the time.

Yet just a few weeks later, the changes started coming rapidly. On Oct. 6, Facebook expanded its takedown of the QAnon conspiracy group. A day later, it said it would ban political advertising after the polls closed on Election Day, with the ban lasting an undetermined length of time.

Days later, Mr. Zuckerberg also said Facebook would no longer allow Holocaust deniers to post their views to the site. And less than 24 hours after that, the company said it would disallow advertising related to anti-vaccination theories.

Facebook’s Mr. Stone positioned the changes as a natural response to what he called “a historic election,” as well as the coronavirus pandemic and Black Lives Matter protests.

“We remain committed to free expression while also recognizing the current environment requires clearer guardrails to minimize harm,” he said.

But there is one change Facebook hasn’t made. After reducing the visibility of the New York Post article on Wednesday and saying it needed to be fact-checked, the social network has stuck by that decision.

Daily Crunch: Twitter walks back New York Post decision

A New York Post story forces social platforms to make (and in Twitter’s case, reverse) some difficult choices, Sony announces a new 3D display and fitness startup Future raises $24 million. This is your Daily Crunch for October 16, 2020.

The big story: Twitter walks back New York Post decision

A recent New York Post story about a cache of emails and other data supposedly originating from a laptop belonging to Joe Biden’s son Hunter looked suspect from the start, and more holes have emerged over time. But it’s also put the big social media platforms in an awkward position, as both Facebook and Twitter took steps to limit the ability of users to share the story.

Twitter, in particular, took a more aggressive stance, blocking links to and images of the Post story because it supposedly violated the platform’s “hacked materials policy.” This led to predictable complaints from Republican politicians, and even Twitter’s CEO Jack Dorsey said that blocking links in direct messages without an explanation was “unacceptable.”

As a result, the company said it’s changing the aforementioned hacked materials policy. It will no longer remove hacked content unless it’s been shared directly by hackers or those “acting in direct concert with them.” Otherwise, it will label tweets to provide context. As of today, it’s also allowing users to share links to the Post story.

The tech giants

Sony’s $5,000 3D display (probably) isn’t for you — The company is targeting creative professionals with its new Spatial Reality Display.

EU’s Google-Fitbit antitrust decision deadline pushed into 2021 — EU regulators now have until January 8, 2021 to make a decision.

Startups, funding and venture capital

Elon Musk’s Las Vegas Loop might only carry a fraction of the passengers it promised — Planning files reviewed by TechCrunch seem to show that The Boring Company’s Loop system will not be able to move anywhere near the number of people the company agreed to.

Future raises $24M Series B for its $150/mo workout coaching app amid at-home fitness boom — Future offers a pricey subscription that virtually teams users with a real-life fitness coach.

Lawmatics raises $2.5M to help lawyers market themselves — The San Diego startup is building marketing and CRM software for lawyers.

Advice and analysis from Extra Crunch

How COVID-19 and the resulting recession are impacting female founders — The sharp decline in available capital is slowing the pace at which women are founding new companies in the COVID-19 era.

Startup founders set up hacker homes to recreate Silicon Valley synergy — Hacker homes feel like a nostalgic attempt to recreate some of the synergies COVID-19 wiped out.

Private equity firms can offer enterprise startups a viable exit option — The IPO-or-acquisition question isn’t always an either/or proposition.

(Reminder: Extra Crunch is our subscription membership program, which aims to democratize information about startups. You can sign up here.)

Everything else

FAA streamlines commercial launch rules to keep the rockets flying — With rockets launching in greater numbers and variety, and from more providers, it makes sense to get a bit of the red tape out of the way.

We need universal digital ad transparency now — Fifteen researchers propose a new standard for advertising disclosures.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.

Twitter is now allowing users to share that controversial New York Post story

Twitter has taken another step back from its initial decision to block users from sharing links to or images of a New York Post story reporting on emails and other data supposedly originating on a laptop belonging to Democratic presidential nominee Joe Biden’s son, Hunter.

The story, which alleged that Hunter Biden had set up a meeting between a Ukrainian energy firm and his father back when his father was vice president, looked shaky from the start, and more holes have emerged over time. Both Facebook and Twitter took action to slow its spread — but Twitter seemed to take the more aggressive stance, not just limiting reach but actually blocking links.

These moves have drawn a range of criticism. There have been predictable cries of censorship from Republican politicians and pundits, but there have also been suggestions that Facebook and Twitter inadvertently drew more attention to the story. And even Twitter’s CEO Jack Dorsey suggested that it was “unacceptable” to block links in DMs without an explanation.

Casey Newton, on the other hand, argued that the platforms had successfully slowed the story’s spread: “The truth had time to put its shoes on before Rudy Giuliani’s shaggy-dog story about a laptop of dubious origin made it all the way around the world.”

Twitter initially justified its approach by citing its hacked materials policy, then later said it was blocking the Post article for including “personal and private information — like email addresses and phone numbers — which violate our rules.”

The controversy did prompt Twitter to revise its hacked materials policy, so that content and links obtained through dubious means will now come with a label, rather than being removed entirely, unless it’s being shared directly by hackers or those “acting in concert with them.”

And now, as first reported by The New York Times, Twitter is also allowing users to share links to the Post story itself (something I’ve confirmed through my own Twitter account).

Why the reversal? The official justification for blocking the link was to prevent the spread of private information, and the company now says the story has spread so widely, online and in the press, that the information can no longer be considered private.

We need universal digital ad transparency now

Fifteen researchers propose a new standard for advertising disclosures

Dear Mr. Zuckerberg, Mr. Dorsey, Mr. Pichai and Mr. Spiegel: We need universal digital ad transparency now!

The negative social impacts of discriminatory ad targeting and delivery are well-known, as are the social costs of disinformation and exploitative ad content. The prevalence of these harms has been demonstrated repeatedly by our research. At the same time, the vast majority of digital advertisers are responsible actors who are only seeking to connect with their customers and grow their businesses.

Many advertising platforms acknowledge the seriousness of the problems with digital ads, but they have taken different approaches to confronting those problems. While we believe that platforms need to continue to strengthen their vetting procedures for advertisers and ads, it is clear that this is not a problem advertising platforms can solve by themselves, as they themselves acknowledge. The vetting being done by the platforms alone is not working; public transparency of all ads, including ad spend and targeting information, is needed so that advertisers can be held accountable when they mislead or manipulate users.

Our research has shown:

  • Advertising platform system design allows advertisers to discriminate against users based on their gender, race and other sensitive attributes.
  • Platform ad delivery optimization can be discriminatory, regardless of whether advertisers attempt to set inclusive ad audience preferences.
  • Ad delivery algorithms may be causing polarization and making it difficult for political campaigns to reach voters with diverse political views.
  • Sponsors spent more than $1.3 billion on digital political ads, yet disclosure is vastly inadequate. Current voluntary archives do not prevent intentional or accidental deception of users.

While it doesn’t take the place of strong policies and rigorous enforcement, we believe transparency of ad content, targeting and delivery can effectively mitigate many of the potential harms of digital ads. Many of the largest advertising platforms agree; Facebook, Google, Twitter and Snapchat all have some form of an ad archive. The problem is that many of these archives are incomplete, poorly implemented, hard to access by researchers and have very different formats and modes of access. We propose a new standard for universal ad disclosure that should be met by every platform that publishes digital ads. If all platforms commit to the universal ad transparency standard we propose, it will mean a level playing field for platforms and advertisers, data for researchers and a safer internet for everyone.
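To make the proposal concrete, here is a minimal sketch of what a machine-readable disclosure record under such a standard could look like. The `AdDisclosure` type and its field names are our own illustration, not part of any published standard; the point is that a uniform schema would let researchers aggregate spend and targeting data across platforms.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a universal ad-disclosure record.
# Field names are illustrative only; they are not taken from
# any existing platform archive or published standard.
@dataclass
class AdDisclosure:
    platform: str            # e.g. "PlatformA"
    sponsor: str             # the paying entity behind the ad
    spend_usd: float         # amount spent on this ad
    targeting: dict          # advertiser-chosen audience criteria
    delivery: dict = field(default_factory=dict)  # who actually saw it
    creative_text: str = ""  # the ad content itself

# With a shared schema, cross-platform analysis becomes a one-liner:
def total_spend(records):
    return sum(r.spend_usd for r in records)

records = [
    AdDisclosure("PlatformA", "Campaign X", 1200.0, {"state": "PA"}),
    AdDisclosure("PlatformB", "Campaign X", 800.0, {"age": "18-34"}),
]
print(total_spend(records))  # 2000.0
```

Today, by contrast, each platform's archive has its own fields, formats and access rules, so even this simple aggregation requires bespoke code per platform.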

The public deserves full transparency of all digital advertising. We want to acknowledge that what we propose will be a major undertaking for platforms and advertisers. However, we believe that the social harms currently being borne by users everywhere vastly outweigh the burden universal ad transparency would place on ad platforms and advertisers. Users deserve real transparency about all ads they are bombarded with every day. We have created a detailed description of what data should be made transparent that you can find here.

We researchers stand ready to do our part. The time for universal ad transparency is now.

Signed by:

Jason Chuang, Mozilla
Kate Dommett, University of Sheffield
Laura Edelson, New York University
Erika Franklin Fowler, Wesleyan University
Michael Franz, Bowdoin College
Archon Fung, Harvard University
Sheila Krumholz, Center for Responsive Politics
Ben Lyons, University of Utah
Gregory Martin, Stanford University
Brendan Nyhan, Dartmouth College
Nate Persily, Stanford University
Travis Ridout, Washington State University
Kathleen Searles, Louisiana State University
Rebekah Tromble, George Washington University
Abby Wood, University of Southern California

Twitter Changes Course After Republicans Claim ‘Election Interference’

SAN FRANCISCO — President Trump called Facebook and Twitter “terrible” and “a monster” and said he would go after them. Senators Ted Cruz and Marsha Blackburn said they would subpoena the chief executives of the companies for their actions. And on Fox News, prominent conservative hosts blasted the social media platforms as “monopolies” and accused them of “censorship” and election interference.

On Thursday, simmering discontent among Republicans over the power that Facebook and Twitter wield over public discourse erupted into open acrimony. Republicans slammed the companies and baited them a day after the sites limited or blocked the distribution of an unsubstantiated New York Post article about Hunter Biden, the son of the Democratic presidential nominee, Joseph R. Biden Jr.

For a while, Twitter doubled down. It locked the personal account of Kayleigh McEnany, the White House press secretary, late Wednesday after she posted the article, and on Thursday it briefly blocked a link to a House Judiciary Committee webpage. The Trump campaign said Twitter had also locked its official account after it tried promoting the article. Twitter then prohibited the spread of a different New York Post article about the Bidens.

But late Thursday, under pressure, Twitter said it was changing the policy that it had used to block the New York Post article and would now allow similar content to be shared, along with a label to provide context about the source of the information. Twitter said it was concerned that the earlier policy was leading to unintended consequences.

Even so, the actions brought the already frosty relationship between conservatives and the companies to a new low point, less than three weeks before the Nov. 3 presidential election, in which the social networks are expected to play a significant role. It offered a glimpse at how online conversations could go awry on Election Day. And Twitter’s bob-and-weave in particular underscored how little of a handle the companies have on consistently enforcing what they will allow on their sites.

“There will be battles for control of the narrative again and again over coming weeks,” said Evelyn Douek, a lecturer at Harvard Law School who studies social media companies. “The way the platforms handled it is not a good harbinger of what’s to come.”

Facebook declined to comment on Thursday and pointed to its comments on Wednesday when it said the New York Post article, which made unverified claims about Hunter Biden’s business in Ukraine, was eligible for third-party fact-checking. Among the concerns was that the article cited purported emails from Hunter Biden that may have been obtained in a hack, though it is unclear how the paper obtained the messages and whether they were authentic.

Twitter had said it was blocking the New York Post article partly because it had a policy of not sharing what might be hacked material. But late Thursday, Vijaya Gadde, Twitter’s head of legal, said the policy was too sweeping and could end up blocking content from journalists and whistle-blowers. As a result, she said, Twitter was changing course.

Ms. Gadde added that Twitter would continue blocking links to or images from the article if they contained email addresses and other private information, which violated the company’s privacy policy.

Mr. Trump said on Twitter on Wednesday that “it is only the beginning” for the social media companies. He followed up on Thursday by saying he wanted to “strip them” of some of their liability protections.

For years, Mr. Trump and other Republicans have accused Facebook and Twitter, which have headquarters in liberal Silicon Valley, of anti-conservative bias. In 2018, Mr. Trump said the companies, along with Google, “have to be careful” and claimed, without evidence, that they were intentionally suppressing conservative news outlets supportive of his administration.

That issue has since come up repeatedly at Capitol Hill hearings, including in July when the chief executives of Facebook and Google, Mark Zuckerberg and Sundar Pichai, testified on antitrust issues.

Tensions have also been running high for Twitter and Facebook as they aim to avoid a replay of the 2016 election, when Russians used their sites to spread inflammatory messages to divide Americans. In recent weeks, the companies have said they will clamp down on misinformation before and after Election Day, such as by banning content related to the pro-Trump conspiracy theory QAnon and slowing down the way information flows on their networks.

But with Mr. Trump trailing Mr. Biden in the polls, the companies’ handling of the New York Post article has ruptured any truce they had managed to strike with conservatives.

Senator Josh Hawley, Republican of Missouri, asked the Federal Election Commission in a letter on Wednesday to investigate whether the companies’ actions could be considered an in-kind contribution to Mr. Biden’s campaign.

“I think it really is a new frontier,” Mr. Hawley said in an interview. “It will also lead to a new openness on the Republican side to think about what we are going to do about their monopoly power.”

Mr. Cruz, of Texas, and Ms. Blackburn, of Tennessee, said on Thursday that they would subpoena Mr. Zuckerberg and Jack Dorsey, Twitter’s chief executive, for a hearing on what they deemed “election interference.”

“I’m looking forward to asking Jack and Mark about silencing media that go against their political beliefs,” Ms. Blackburn said in a tweet.

Representative Jim Jordan, an Ohio Republican and the ranking member of the House Judiciary Committee, sent Mr. Dorsey a letter excoriating Twitter for blocking the article and asking for a detailed summary of the process behind the decision.

Mr. Pichai, Mr. Zuckerberg and Mr. Dorsey have already agreed to testify before the Senate Commerce Committee on Oct. 28 about the federal law that shields their platforms from lawsuits. Conservatives have called for changes to the law, Section 230 of the Communications Decency Act, which makes it impossible to sue web platforms over much of the content posted by their users or how they choose to moderate it.

“Social media companies have a First Amendment right to free speech,” Ajit Pai, the chairman of the Federal Communications Commission, said in a statement. “But they do not have a First Amendment right to a special immunity denied to other media outlets, such as newspapers and broadcasters.”

Mr. Trump was even more pointed, saying in a tweet on Thursday that the companies needed to be deprived of their Section 230 protections “immediately.”

Others applauded the aggressiveness of the social media companies.

“The actions taken by Facebook, Twitter and Google show that these platform companies are indeed willing to enforce their existing policies, in particular around ‘hack and leak’ material,” said Shannon McGregor, senior researcher with the Center for Information, Technology and Public Life at the University of North Carolina at Chapel Hill.

Unlike previous criticism of Facebook and Twitter for acting too slowly in taking down content, the uproar this time has centered on how they may have acted too hastily. (The exception was Google’s YouTube, which said after about 36 hours that it would allow a New York Post video about the article to remain up without restrictions.)

The speed with which Facebook moved was uncharacteristic, fueled by how quickly the article took off online and the sensitivity of the material, according to two Facebook employees, who were not authorized to speak publicly.

Within three hours after The New York Post published its article on Wednesday, Facebook said it would reduce the distribution of the piece across the network so that it would appear less frequently in users’ individual News Feeds, one of the most highly viewed sections of the app.

The company billed it as part of its “standard process to reduce the spread of misinformation,” said Andy Stone, a Facebook spokesman. That process included spotting some “signals” that a piece of content might be false, according to Facebook’s guidelines for content moderation. The company has not clarified what those signals were.

Twitter then went further by blocking people from linking to the article altogether. That meant the article could not circulate at all on Twitter, even in private messages between users.

The backlash was instant. Republicans immediately tested the limits of Twitter’s rules, with some tweeting screenshots of the article. Francis Brennan, the director of strategic response for the Trump campaign, posted the entire article in a string of 44 tweets. The article was also copied and published on the webpage of the House Judiciary Committee’s Republican minority.

Twitter scrambled to keep up. If tweets with the screenshots showed the emails, the company removed them. Mr. Brennan’s tweets were allowed to remain because they did not include the emails.

Late Wednesday, as the furor grew, Twitter tried to address it. “We know we have more work to do to provide clarity in our product when we enforce our rules in this manner,” a spokesman tweeted.

Twitter also said people whose accounts were locked could unlock them by deleting the offending tweet.

Also late Wednesday, Mr. Dorsey criticized his company’s communication about the decision, saying it was “unacceptable” to give “zero context” about the action.

Internally, Mr. Dorsey griped to employees that users weren’t given a sufficient explanation when prevented from sharing the New York Post article, a person with knowledge of the comments said.

Twitter’s hacked-material policy was written in 2018, with blocking links as the main enforcement action. The company has since increasingly opted to label tweets, adding context or saying if they glorified violence.

But Twitter had not updated the hacked-material policy. So when the New York Post article appeared, and questions about the emails’ origin were raised, the only system it had was to block the content.

“We are no longer limited to tweet removal as an enforcement action,” Ms. Gadde said late Thursday.

Mike Isaac reported from San Francisco and Kate Conger from Oakland, Calif. Daisuke Wakabayashi contributed reporting from Oakland, David McCabe from Washington, and Tiffany Hsu from Hoboken, N.J.

Facebook and Twitter Dodge a 2016 Repeat, and Ignite a 2020 Firestorm

Since 2016, when Russian hackers and WikiLeaks injected stolen emails from the Hillary Clinton campaign into the closing weeks of the presidential race, politicians and pundits have called on tech companies to do more to fight the threat of foreign interference.

On Wednesday, less than a month from another election, we saw what “doing more” looks like.

Early Wednesday morning, the New York Post published a splashy front-page article about supposedly incriminating photos and emails found on a laptop belonging to Hunter Biden, the son of Joseph R. Biden Jr. To many Democrats, the unsubstantiated article — which included a bizarre set of details involving a Delaware computer repair shop, the F.B.I. and Rudy Giuliani, the president’s personal lawyer — smelled suspiciously like the result of a hack-and-leak operation.

To be clear, there is no evidence tying the Post’s report to a foreign disinformation campaign. Many questions remain about how the paper obtained the emails and whether they were authentic. Even so, the social media companies were taking no chances.

Within hours, Twitter banned all links to the Post’s article, and locked the accounts of people, including some journalists and the White House press secretary, Kayleigh McEnany, who tweeted it. The company said it made the move because the article contained images showing private personal information, and because it viewed the article as a violation of its rules against distributing hacked material.

On Thursday, the company partly backtracked, saying it would no longer remove hacked content unless it was shared directly by hackers or their accomplices.

Facebook took a less nuclear approach. It said that it would reduce the visibility of the article on its service until it could be fact-checked by a third party, a policy it has applied to other sensitive posts. (The move did not seem to damage the article’s prospects; by Wednesday night, stories about Hunter Biden’s emails were among the most-engaged posts on Facebook.)

Both decisions angered a chorus of Republicans, who called for Facebook and Twitter to be sued, stripped of their legal protections, or forced to account for their choices. Senator Josh Hawley, Republican of Missouri, called in a tweet for Twitter and Facebook to be subpoenaed by Congress to testify about censorship, accusing them of trying to “hijack American democracy by censoring the news & controlling the expression of Americans.”

A few caveats: There is still a lot we don’t know about the Post article. We don’t know if the emails it describes are authentic, fake or some combination of both, or if the events they purport to describe actually happened. Mr. Biden’s campaign denied the central claims in the article, and a Biden campaign surrogate lashed out against the Post on Wednesday, calling the article “Russian disinformation.”

Even if the emails are authentic, we don’t know how they were obtained, or how they ended up in the possession of Mr. Giuliani, who has been spearheading efforts to paint Mr. Biden and his family as corrupt. The owner of the Delaware computer shop who reportedly turned over the laptop to investigators gave several conflicting accounts to reporters about the laptop’s chain of custody on Wednesday.

Critics on all sides can quibble with the decisions these companies made, or how they communicated them. Even Jack Dorsey, Twitter’s chief executive, said the company had mishandled the original explanation for the ban.

But the truth is less salacious than a Silicon Valley election-rigging attempt. Since 2016, lawmakers, researchers and journalists have pressured these companies to take more and faster action to prevent false or misleading information from spreading on their services. The companies have also created new policies governing the distribution of hacked material, in order to prevent a repeat of 2016’s debacle.

It’s true that banning links to a story published by a 200-year-old American newspaper — albeit one that is now a Rupert Murdoch-owned tabloid — is a more dramatic step than cutting off WikiLeaks or some lesser-known misinformation purveyor. Still, it’s clear that what Facebook and Twitter were actually trying to prevent was not free expression, but a bad actor using their services as a conduit for a damaging cyberattack or misinformation.

These decisions get made quickly, in the heat of the moment, and it’s possible that more contemplation and debate would produce more satisfying choices. But time is a luxury these platforms don’t always have. In the past, they have been slow to label or remove dangerous misinformation about Covid-19, mail-in voting and more, and have only taken action after the bad posts have gone viral, defeating the purpose.

Credit…Hilary Swift for The New York Times

Since the companies made those decisions, Republican officials have used the actions as an example of Silicon Valley censorship run amok. On Wednesday, several prominent Republicans, including Mr. Trump, repeated their calls for Congress to repeal Section 230 of the Communications Decency Act, a law that shields tech platforms from many lawsuits over user-generated content.

That leaves the companies in a precarious spot. They are criticized when they allow misinformation to spread. They are also criticized when they try to prevent it.

Perhaps the strangest idea to emerge in the past couple of days, though, is that these services are only now beginning to exert control over what we see. Representative Doug Collins, Republican of Georgia, made this point in a letter to Mark Zuckerberg, the chief executive of Facebook, in which he derided the social network for using “its monopoly to control what news Americans have access to.”

The truth, of course, is that tech platforms have been controlling our information diets for years, whether we realized it or not. Their decisions were often buried in obscure “community standards” updates, or hidden in tweaks to the black-box algorithms that govern which posts users see. But make no mistake: These apps have never been neutral, hands-off conduits for news and information. Their leaders have always been editors masquerading as engineers.

What’s happening now is simply that, as these companies move to rid their platforms of bad behavior, their influence is being made more visible. Rather than letting their algorithms run amok (which is an editorial choice in itself), they’re making high-stakes decisions about flammable political misinformation in full public view, with human decision makers who can be debated and held accountable for their choices. That’s a positive step for transparency and accountability, even if it feels like censorship to those who are used to getting their way.

After years of inaction, Facebook and Twitter are finally starting to clean up their messes. And in the process, they’re enraging the powerful people who have thrived under the old system.


Posted on

Twitter is investigating widespread outage reports

If you’re reading this, you probably didn’t get here from Twitter. The service has been experiencing widespread reports of outages for at least an hour. The issue has affected a range of activities on the site, from news feeds to the ability to tweet. The company has acknowledged the ongoing problem, noting on its official status page that it is investigating:

Update – We are continuing to monitor as our teams investigate. More updates to come.
Oct 15, 22:31 UTC
Investigating – We are currently investigating this issue. More updates to come.
Oct 15, 21:56 UTC

Twitter responded to our request for comment, stating, “We know people are having trouble Tweeting and using Twitter. We’re working to fix this issue as quickly as possible. We’ll share more when we have it and Tweet from @TwitterSupport when we can – stay tuned.”

We’ll update as we hear more.


Posted on

How to Deal With a Crisis of Misinformation

There’s a disease that has been spreading for years now. Like any resilient virus, it evolves to find new ways to attack us. It’s not in our bodies, but on the web.

It has different names: misinformation, disinformation or distortions. Whatever the label, it can be harmful, especially now that it is being produced through the lens of several emotionally charged events: the coronavirus pandemic, a presidential election and protests against law enforcement.

The swarm of bad information circulating on the web has been intense enough to overwhelm Alan Duke, the editor of Lead Stories, a fact-checking website. For years, he said, false news mostly consisted of phony web articles that revolved around silly themes, like myths about putting onions in your socks to cure a cold. But misinformation has now crept into much darker, sinister corners and taken on forms like the internet meme, which is often a screenshot overlaid with sensational text or manipulated with doctored images.

He cited a harmful example: memes attacking Breonna Taylor, the Black medical worker in Louisville, Ky., who was killed by the police when they entered her home in March. Misinformation spreaders generated memes suggesting that Ms. Taylor shot at police officers first, which was not true.

“The meme is probably the most dangerous,” Mr. Duke said. “In seven or 20 words, somebody can say something that’s not true, and people will believe it and share it. It takes two minutes to create.”

It’s impossible to quantify how much bad information is out there now because the spread of it online has been relentless. Katy Byron, who leads a media literacy program at the Poynter Institute, a journalism nonprofit, and who works with a group of teenagers who regularly track false information, said it was on the rise. Before the pandemic, the group would present a few examples of misinformation every few days. Now each student is reporting multiple examples a day.

“With the pandemic, people are increasingly online doomscrolling and looking for information,” Ms. Byron said. “It’s getting harder and harder to find it and feel confident you’re consuming facts.”

The misinformation, she said, is also creeping into videos. With modern editing tools, it has become too easy for people with little technical know-how and minimal equipment to produce videos that appear to have high production value. Often, real video clips are stripped of context and spliced together to tell a different story.

The rise of false news is bad news for all of us. Misinformation can be a detriment to our well-being in a time when people are desperately seeking information such as health guidelines to share with their loved ones about the coronavirus. It can also stoke anger and cause us to commit violence. Also important: It could mislead us about voting in a pandemic that has turned our world upside down.

How do we adapt to avoid being manipulated and spreading false information to the people we care about? Past methods of spotting untruthful news, like checking articles for typos and phony web addresses that resemble those of trusted publications, are now less relevant. We have to employ more sophisticated methods of consuming information, like doing our own fact-checking and choosing reliable news sources.

Here’s what we can do.

Get used to this keyboard shortcut: Ctrl+T (or Command+T on a Mac). That creates a new browser tab in Chrome and Firefox. You’re going to be using it a lot. The reason: It enables you to ask questions and hopefully get some answers with a quick web search.

It’s all part of an exercise that Ms. Byron calls lateral reading. While reading an article, Step 1 is to open a browser tab. Step 2 is to ask yourself these questions:

  • Who is behind the information?

  • What is the evidence?

  • What do other sources say?

From there, with that new browser tab open, you could start answering those questions. You could do a web search on the author of the content when possible. You could do another search to see what other publications are saying about the same topic. If the claim isn’t being repeated elsewhere, it may be false.
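For readers comfortable with a little scripting, the lateral-reading routine above is easy to sketch in code. The example below is a minimal illustration using only Python’s standard library; the `search_url` and `lateral_read` helper names are our own invention, not part of any fact-checking tool, and the Google search URL is just one reasonable choice of search engine.

```python
import urllib.parse
import webbrowser


def search_url(query: str) -> str:
    """Build a web search URL for a claim, author or topic."""
    return "https://www.google.com/search?q=" + urllib.parse.quote_plus(query)


def lateral_read(author: str, claim: str, open_tabs: bool = False) -> list[str]:
    """Generate the lateral-reading queries: who is behind the
    information, and what do other sources say about it?"""
    urls = [
        search_url(author),                   # Who is behind the information?
        search_url(claim),                    # What do other sources say?
        search_url(f'"{claim}" fact check'),  # Has anyone fact-checked it?
    ]
    if open_tabs:
        for url in urls:
            webbrowser.open_new_tab(url)  # the Ctrl+T step, automated
    return urls
```

Calling `lateral_read("Jane Doe", "onions in socks cure colds")` returns the three search URLs without opening anything; pass `open_tabs=True` to launch each query in a new tab of your default browser.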

You could also open another browser tab to look at the evidence. With a meme, for example, you could do a reverse image search on the photo that was used in the meme. On Google.com, click Images and upload the photo or paste the web address of the photo into the search bar. That will show where else the image has appeared on the web, so you can check whether the version you saw was manipulated.

With videos, it’s trickier. A browser plug-in called InVID can be installed on Firefox and Chrome. When watching a video, you can click on the tool, click on the Keyframes button and paste in a video link (a YouTube clip, for example) and click Submit. From there, the tool will pull up important frames of the video, and you can reverse image search on those frames to see if they are legitimate or fake.

Some of the tech steps above may not be for the faint of heart. But most important is the broader lesson: Take a moment to think.

“The No. 1 rule is to slow down, pause and ask yourself, ‘Am I sure enough about this that I should share it?’” said Peter Adams, a senior vice president of the News Literacy Project, a media education nonprofit. “If everybody did that, we’d see a dramatic reduction of misinformation online.”

While social media sites like Facebook and Twitter help us stay connected with the people we care about, there’s a downside: Even the people we trust may be unknowingly spreading false information, so we can be caught off guard. And with everything mashed together into a single social media feed, it gets tougher to distinguish good information from bad information, and fact from opinion.

What we can do is another exercise in mindfulness: Be deliberate about where you get your information, Mr. Adams said. Instead of relying solely on the information showing up in your social media feeds, choose a set of publications that you trust, like a newspaper, a magazine or a broadcast news program, and turn to those regularly.

Mainstream media is far from perfect, but it’s subjected to a standards process that is usually not seen in user-generated content, including memes.

“A lot of people fall into the trap of thinking no source of information is perfect,” Mr. Adams said. “That’s when people really start to feel lost and overwhelmed and open themselves up to sources they really should stay away from.”

The most frightening part about misinformation is when it transcends digital media and finds its way into the real world.

Mr. Duke of Lead Stories said he and his wife had recently witnessed protesters holding signs with the message “#SavetheChildren.” The signs alluded to a false rumor spread by supporters of the QAnon conspiracy about a child-trafficking network led by top Democrats and Hollywood elites. The pro-Trump conspiracy movement had effectively hijacked the child-trafficking issue, mixing facts with its own fictions to suit its narrative.

Conspiracy theories have fueled some QAnon believers to be arrested in cases of serious crimes, including a murder in New York and a conspiracy to kidnap a child.

“QAnon has gone from misinformation online to being out on the street corner,” he said. “That’s why I think it’s dangerous.”

Posted on

Riled Up: Misinformation Stokes Calls for Violence on Election Day

In a video posted to Facebook on Sept. 14, Dan Bongino, a popular right-wing commentator and radio host, declared that Democrats were planning a coup against President Trump on Election Day.

For just over 11 minutes, Mr. Bongino talked about how a bipartisan group of election experts, which had met in June to plan for what might happen after people voted, was actually holding exercises for such a coup. To support his baseless claim, he twisted the group’s words to fit his meaning.

“I want to warn you that this stuff is intense,” Mr. Bongino said, speaking into the camera to his 3.6 million Facebook followers. “Really intense, and you need to be ready to digest it all.”

His video, which has been viewed 2.9 million times, provoked strong reactions. One commenter wrote that people should be prepared for when Democrats “cross the line” so they could “show them what true freedom is.” Another posted a meme of a Rottweiler about to pounce, with the caption, “Veterans be like … Say when Americans.”

The coup falsehood was just one piece of misinformation that has gone viral in right-wing circles ahead of Election Day on Nov. 3. In another unsubstantiated rumor that is circulating on Facebook and Twitter, a secret network of elites was planning to destroy the ballots of those who voted for President Trump. And in yet another fabrication, supporters of Mr. Trump said that an elite cabal planned to block them from entering polling locations on Election Day.

All of the rumors appeared to be having the same effect: riling up Mr. Trump’s restive base, just as the president has publicly stoked the idea of election chaos. In comment after comment about the falsehoods, respondents said the only way to stop violence from the left was to respond in kind with force.

“Liberals and their propaganda,” one commenter wrote. “Bring that nonsense to country folks who literally sit in wait for days to pull a trigger.”

The misinformation, which has been amplified by right-wing media figures such as the Fox News host Mark Levin and by outlets like Breitbart and The Daily Wire, adds contentiousness to an already powder-keg campaign season. Mr. Trump has repeatedly declined to say whether he would accept a peaceful transfer of power if he lost to his Democratic challenger, Joseph R. Biden Jr., and has urged his supporters “to go into the polls and watch very carefully.”

The falsehoods on social media are building support for the idea of disrupting the election. Election officials have said they fear voter harassment and intimidation on Election Day.


“This is extremely concerning,” said Megan Squire, a computer science professor at Elon University in Elon, N.C., who tracks extremists online. Combined with Mr. Trump’s comments, the false rumors are “giving violent vigilantes an excuse” that acting out in real life would be “in defense of democracy,” she said.

Tim Murtaugh, a Trump campaign spokesman, said Mr. Trump would “accept the results of an election that is free, fair and without fraud” and added that the question of violence was “better put to Democrats.”


In a text message, Mr. Bongino said the idea of a Democratic coup was “not a rumor” and that he was busy “exposing LIBERAL violence.”

Distorted information about the election is also flowing in left-wing circles online, though to a lesser degree, according to a New York Times analysis. Such misinformation includes a viral falsehood that mailboxes were being blocked by unknown actors to effectively discourage people from voting.

Other popular leftist sites, like Liberal Blogger and The Other 98%, have also twisted facts to push a critical narrative about Republicans, according to PolitiFact, a fact-checking website. In one inflammatory claim last week, for instance, the left-wing Facebook page Occupy Democrats asserted that President Trump had directly inspired a plot by a right-wing group to kidnap Gov. Gretchen Whitmer of Michigan.

Social media companies appear increasingly alarmed by how their platforms may be manipulated to stoke election chaos. Facebook and Twitter took steps last week to clamp down on false information before and after the vote. Facebook banned groups and posts related to the pro-Trump conspiracy movement QAnon and said it would suspend political advertising postelection. Twitter said it was changing some basic features to slow the way information flows on its network.

On Friday, Twitter executives urged people “to recognize our collective responsibility to the electorate to guarantee a safe, fair and legitimate democratic process this November.”

Trey Grayson, a Republican former secretary of state of Kentucky and a member of the Transition Integrity Project, said the idea that the group was preparing a left-wing coup was “crazy.” He said the group had explored many election scenarios, including a victory by Mr. Trump.

Michael Anton, a former national security adviser to President Trump, also published an essay on Sept. 4 in the conservative publication The American Mind, claiming, “Democrats are laying the groundwork for revolution right in front of our eyes.”

His article was the tipping point for the coup claim. It was posted more than 500 times on Facebook and reached 4.9 million people, according to CrowdTangle, a Facebook-owned analytics tool. Right-wing news sites such as The Federalist and DJHJ Media ramped up coverage of the idea, as did Mr. Bongino.

Mr. Anton did not respond to a call for comment.

The lie also began metastasizing. In one version, right-wing commentators claimed, without proof, that Mr. Biden would not concede if he lost the election. They also said his supporters would riot.

“If a defeated Biden does not concede and his party’s rioters take to the streets in a coup attempt against President Trump, will the military be needed to stop them?” tweeted Mr. Levin, the Fox News host, on Sept. 18. His message was shared nearly 16,000 times.

After The Times contacted him, Mr. Levin published a note on Facebook saying his tweet had been a “sarcastic response to the Democrats.”

Bill Russo, a spokesman for the Biden campaign, said in a statement that Mr. Biden would accept how the people voted. “Donald Trump and Mike Pence are the ones who refuse to commit to a peaceful transfer of power,” he said.

On YouTube, dozens of videos pushing the false coup narrative have collectively gathered more than 1.2 million views since Sept. 7, according to a tally by The Times. One video was titled, “RED ALERT: Are the President’s Enemies Preparing a COUP?”

The risk of misinformation translating to real-world action is growing, said Mike Caulfield, a digital literacy expert at Washington State University Vancouver.

“What we’ve seen over the past four years is an increasing capability” from believers to turn these conspiracy narratives “into direct physical actions,” he said.

Ben Decker contributed research.

Posted on

Twitter Will Turn Off Some Features to Fight Election Misinformation

OAKLAND, Calif. — Twitter, risking the ire of its best-known user, President Trump, said on Friday that it would turn off several of its routine features in an attempt to control the spread of misinformation in the final weeks before the presidential election.

The first notable change, Twitter said, will essentially give users a timeout before they can hit the button to retweet a post from another account. A prompt will nudge them to add their own comment or context before sharing the original post.

Twitter will also disable the system that suggests posts on the basis of someone’s interests and the activity of accounts they follow. In their timelines, users will see only content from accounts they follow and ads.

And if users try to share content that Twitter has flagged as false, a notice will warn them that they are about to share inaccurate information.

Most of the changes will take effect on Oct. 20 and will be temporary, Twitter said. Labels warning users against sharing false information will begin to appear next week. The company plans to wait until the result of the presidential election is clear before turning the features back on.

“Twitter has a critical role to play in protecting the integrity of the election conversation, and we encourage candidates, campaigns, news outlets and voters to use Twitter respectfully and to recognize our collective responsibility to the electorate to guarantee a safe, fair and legitimate democratic process this November,” the Twitter executives Vijaya Gadde and Kayvon Beykpour said in a statement.

Credit…Twitter

They said the “extra friction” on retweets was designed to “encourage everyone to not only consider why they are amplifying a tweet, but also increase the likelihood that people add their own thoughts, reactions and perspectives to the conversation.”

If users decide they don’t have anything to add, they will be able to retweet after the prompt.

The change is likely to have a direct impact on Mr. Trump’s online activity. Since returning to the White House on Monday after a hospital stay to treat the coronavirus, he has been on a Twitter tear. On Tuesday evening, for example, he tweeted or retweeted posts from other accounts about 40 times.

Twitter stopped short of shutting down its Trending Topics feature, a change that many critics say would do the most to fight misinformation because people can game the feature to promote false or misleading information. Instead, Twitter will expand its effort to fact-check and provide context to items that trend in the United States.

Social media companies have moved in recent months to fight the spread of misinformation around the presidential election. Facebook and Google have committed to banning political ads for an undetermined period after polls close on Nov. 3. Facebook also said a banner at the top of its news feed would caution users that no winner had been declared until news outlets called the presidential race.

People who go to retweet will be brought to the quote tweet composer, where they’ll be encouraged to comment before sending their tweet.

The companies are trying to avoid a repeat of the 2016 election, when Russian operatives used them to spread falsehoods and hyperpartisan content in an attempt to destabilize the American electorate.

Over the last year, Twitter has slowly been stripping away parts of its service that have been used to spread false and misleading information. Jack Dorsey, the chief executive, announced last year that the company would no longer allow political advertising. Twitter has more aggressively fact-checked misinformation on the service — including misleading tweets from Mr. Trump.

That has led to a backlash from the Trump administration. Mr. Trump, who has 87 million followers on Twitter, has called for a repeal of legal protections Twitter and other social media companies rely on.

But Twitter’s fact-checking has continued. It recently began adding context to its trending topics, giving viewers more information about why a topic has become a subject of widespread conversation on Twitter. This month, Twitter plans to add context to all trending topics presented on the For You page for users in the United States.

“This will help people more quickly gain an informed understanding of the high-volume public conversation in the U.S. and also help reduce the potential for misleading information to spread,” Ms. Gadde and Mr. Beykpour said.

Twitter’s trends illustrate which topics are most popular on the service by highlighting content that is widely discussed. The trends often serve as an on-ramp for new users who are discovering how to find information on Twitter, but internet trolls and bots have often exploited the system to spread false, hateful or misleading information.

As recently as July, trending topics were hijacked by white nationalists who pushed the anti-Semitic hashtag #JewishPrivilege and by QAnon, the conspiracy movement that made the furniture company Wayfair trend on Twitter with false claims that the company engaged in child trafficking. The embarrassing episodes led critics to call on Twitter to shut down trends altogether.