A Council of Citizens Should Regulate Algorithms

Are machine-learning algorithms biased, wrong, and racist? Let citizens decide.

Machine-learning algorithms, essentially rule-based structures for making decisions, play an increasingly large role in our lives. They suggest what we should read and watch, whom we should date, and whether we are detained while awaiting trial. Their promise is huge: they can improve cancer detection, for instance. But they can also discriminate based on the color of our skin or the zip code we live in.

Despite their ubiquity in society, no real structure exists to regulate algorithms’ use. We rely on journalists or civil society organizations to serendipitously report when things have gone wrong. In the meantime, the use of algorithms spreads to every corner of our lives and many agencies of our government. In the post-Covid-19 world, the problem is bound to reach colossal proportions.

A new report by OpenAI suggests we should create external auditing bodies to evaluate the societal impact of algorithm-based decisions. But the report does not specify what such bodies should look like.

We don’t know how to regulate algorithms, because their application to societal problems involves a fundamental incongruity. Algorithms follow logical rules in order to optimize for a given outcome. Public policy, by contrast, is all a matter of trade-offs: optimizing for some groups in society necessarily makes others worse off.

Resolving social trade-offs requires that many different voices be heard. This may sound radical, but it is in fact the original lesson of democracy: Citizens should have a say. We don’t know how to regulate algorithms, because we have become shockingly bad at citizen governance.

Is citizen governance feasible today? Sure, it is. We know from social scientists that a diverse group of people can make very good decisions. We also know from a number of recent experiments that citizens can be called upon to make decisions on very tough policy issues, including climate change, and even to shape constitutions. Finally, we can draw from the past for inspiration on how to actually build citizen-run institutions.

The ancient Athenians—the citizens of the world’s first large-scale experiment in democracy—built an entire society on the principle of citizen governance. One institution stands out for our purposes: the Council of Five Hundred, a deliberative body in charge of all decisionmaking, from war to state finance to entertainment. Every year, 50 citizens from each of the 10 tribes were selected by lot to serve. Selection occurred among those who had not served the year before and had not already served twice.

These simple organizational rules facilitated broad participation, knowledge aggregation, and citizen learning. First, because terms were short and no one could serve more than twice, over time a broad section of the population—rich and poor, educated and not—participated in decisionmaking. Second, because the council represented the whole population (each tribe integrated three different geographic constituencies), it could draw upon the diverse knowledge of its members. Third, at the end of their mandate, councillors returned home with a body of knowledge about the affairs of their city that they could share with their families, friends, and coworkers, some of whom had already served and some of whom soon would. Certainly, the Athenians did not follow through on their commitment to inclusion: the voices of women, foreigners, and slaves went unheard. But we don’t need to follow the Athenian example on this front.
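
Translated into modern terms, the Athenian selection rule is concrete enough to sketch in code. Below is a minimal illustration in Python; the data structures and function name are hypothetical, chosen for clarity rather than historical fidelity.

```python
import random

# Toy sortition sketch (hypothetical data structures, not a historical
# reconstruction): draw 50 members by lot from each of 10 tribes,
# excluding anyone who served last year or has already served twice.

def select_council(tribes, service_record, seats_per_tribe=50):
    """tribes: dict of tribe name -> list of citizen IDs.
    service_record: dict of citizen ID -> (terms_served, served_last_year)."""
    council = []
    for members in tribes.values():
        eligible = [
            c for c in members
            if service_record.get(c, (0, False))[0] < 2    # fewer than two terms
            and not service_record.get(c, (0, False))[1]   # not on last year's council
        ]
        council.extend(random.sample(eligible, seats_per_tribe))
    return council
```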

A citizen council for algorithms modeled on the Athenian example would represent the entire American citizen population. We already do this with juries (although it is possible that, when decisions affect a specific constituency, a better fit with the actual polity might be required). Citizens’ deliberations would be informed by agency self-assessments and algorithmic impact statements for decision systems used by government agencies, and internal auditing reports for industry, as well as reports from investigative journalists and civil society activists, whenever available. Ideally, the council would act as an authoritative body or as an advisory board to an existing regulatory agency. It could evaluate, as OpenAI recommends, a variety of issues including the level of privacy protection, the extent to (and methods by) which the systems were tested for safety, security, or ethical concerns, and the sources of data, labor, and other resources used.

The Cyberwar Needs More Women on the Front Lines

Cybercriminals, like viruses, adapt to their environment. Since the coronavirus pandemic began, cybersecurity complaints to the FBI’s Internet Crime Complaint Center have quadrupled.

Not only are governments and businesses more exposed, but individuals—stressed from remote work, unemployment, and/or homeschooling—are more susceptible to scams on everything from government assistance checks to online shopping. I’ve been deluged with emails purportedly from Netflix asking me to update my billing information; the sender clearly thinks cabin-fever-infected recipients will be so desperate not to lose access to streaming that they’ll click without a second thought.

Sylvia Acevedo is the CEO of Girl Scouts of the USA and a longtime advocate for underserved communities and girls’ and women’s causes. Acevedo started her career as a rocket scientist at the Jet Propulsion Laboratory, where she worked on the Voyager mission’s flyby of Jupiter and its moons and the Solar Polar/Probe missions. She is the author of Path to the Stars: My Journey from Girl Scout to Rocket Scientist.

The surge is no accident: Bad actors go where access is easy or where rewards outweigh risks, and the pandemic is ripe for exploitation. But cybercrime was with us long before and it will be with us long after we finally throw away our masks. This is particularly true of cybercrime targeting women and children.

This brings us back to access. Let’s look at the internet of things, for instance. It was developed largely without the input of women in leadership positions. Among the major US tech firms, women hold no more than 32 percent of leadership roles: 27 percent at Amazon, 32 percent at Facebook, 29 percent at Apple, 26 percent at Google, and 19 percent at Microsoft.

The result is that the IoT lacks security and safety features that are often top of mind for women but may be less salient for men. Much of the tech being developed doesn’t take women’s and children’s needs and wants into account, including critical safety and privacy concerns.

For example, social media has enabled predators to target children and share abusive images. In 2018, tech companies found more than 45 million online photos and videos of children being sexually abused, double the number from 2017. Ride-sharing services leave women vulnerable to drivers who can access their personal information through the app. Period-tracking apps share intimate data with third parties. Smart home technology has been used by domestic abusers against former partners and by predators against children.

We’ve been down a similar road before. In 2001, the National Highway Traffic Safety Administration required all new cars to include safety latches inside the vehicle’s trunk. The regulation came only after multiple children died when they became trapped and dozens of women were murdered by being intentionally locked in the trunks of cars.

Now, women and children are losing their lives because there is no safety latch on the internet, and tech companies are scrambling to catch up.

On top of this urgent need to make safety and security a top priority, there is a massive shortage of tech experts. In 2018, the National Association of Manufacturers and Deloitte reported that of the 3.5 million STEM jobs the US will have to fill by 2025, 2 million will go unfilled due to a lack of qualified workers. We can’t adequately address the blind spots in cybersecurity without also targeting the STEM shortage and, by extension, the gender gap in tech.

Women make up half of all STEM workers in the United States overall, but this seemingly equal number hides an important disparity. Namely, the share of women working in STEM occupations varies greatly across fields and education levels. Women account for 75 percent of workers in health-related jobs, for example, but a mere 14 percent in engineering.

Moreover, women make up only 24 percent of the global cybersecurity workforce, despite accounting for 39 percent of the global labor force and 46 percent of the US labor force.

One solution to both the shortage of skilled workers and the need to consider the unique needs of women and children when developing the next generation of tech is to bring more women into cybersecurity leadership positions. In order to do that, we must start educating girls at a young age to be the cybersecurity leaders of tomorrow. At Girl Scouts, we have the vision and scale to match the challenge.

In 2017 we pledged to add 2.5 million girls to the STEM workforce by 2025. The following year, we collaborated with Palo Alto Networks to introduce cybersecurity badges to girls in grades K–12, and in 2019 we collaborated with Raytheon Technologies to host the first Cyber Challenge for middle and high school girls—a program that continues today. To date, more than 150,000 cybersecurity badges and over 1 million STEM badges have been earned by girls across the country.

The Pandemic and the Protests Are Mirror Images

Regret born of reflection and anxiety born of future projection can be crippling, even when both are necessary or ultimately liberating. This is especially true of social upheavals, where engaging the past involves questions more taxing than the ones Marie Kondo gave us for spring cleaning: Do these relics make us feel good? When the relics are garments, we can say no, and away they go. With society, the relics that weigh us down can feel too complex to grasp or, more often, too inconvenient to engage with.

Nearly halfway through 2020, the world remains engulfed by the most explosive pandemic in over a century. Simultaneously, and not unrelatedly, neofascist populism is manifesting in unfamiliar settings, threatening democracy. In the United States, the last few months have delivered several visible test cases—involving the deaths of African Americans at the hands of law enforcement—that have served, once again, as public referenda on the value of black lives. Covid-19 and the uprisings have left our status quo in peril and a public looking for answers.

C. Brandon Ogbunu (@big_data_kane) is an assistant professor at Brown University who specializes in computational biology and genetics.

But the overlap of the pandemic and the protests against police violence is of a certain type: not quite familial, but instead, more like mirror images. Covid-19 and the uprisings are a kind of twin, where the features are identical but opposite. This manifests in their respective relationships with the past and future.

In the case of Covid-19, much of our obsession was, and remains, with future projection. This is the essence of the debate over the relevance of predictive models of disease, where citizen-scientists (of varied background and expertise) have sparred on social media, and less often in the scientific literature. The points of contention often involve the veracity and ethics of predictions. Some of the debates are justified: Erroneous calculations can drive bad policies and cost thousands of lives. Elaine Nsoesie, a computational epidemiologist and assistant professor at the Boston University School of Public Health, says, “We shouldn’t be too confident about our predictions of what will happen in the future. We should acknowledge uncertainty especially in the models that we develop.” Unfortunately, the politicization of Covid-19 science has made productive debates about model projections untenable, as conflicts of interest now perniciously manifest in which ideas are entertained, almost independent of the science underlying them.
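
To see why such projections invite dispute, it helps to look at how sensitive even the simplest epidemic model is to its assumptions. The sketch below is a toy SIR-style calculation in Python with made-up numbers, nothing like the full models epidemiologists actually use; it only shows that a modest change in one assumed parameter multiplies the projected peak.

```python
# Toy SIR-style projection (illustrative only, not any published model):
# small shifts in the assumed transmission rate yield very different peaks.

def peak_infected(beta, gamma=0.1, days=365, n=1_000_000, i0=100):
    s, i = n - i0, i0                       # susceptible and infected counts
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / n   # daily new infections
        new_recoveries = gamma * i          # daily recoveries (~10-day illness)
        s -= new_infections
        i += new_infections - new_recoveries
        peak = max(peak, i)
    return peak

for beta in (0.20, 0.25, 0.30):  # three plausible-looking assumptions
    print(f"beta={beta:.2f}: peak of ~{peak_infected(beta):,.0f} simultaneous infections")
```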

Our obsession with the Covid-19-shaped future is about far more than what the “curve” will look like in six months. The pandemic has also forced us to reconsider how we communicate, work, and learn. For example, higher education must now rethink how to maintain research activities, deliver high-quality instruction, and provide the informal social experience that college has historically provided. And there are new, important conversations about labor that have arisen.

The Hype Cycle for Chloroquine Is Eerily Familiar

It now appears that the “invisible enemy,” as President Trump calls Covid-19, will not be routed by hydroxychloroquine, the 70-year-old antimalarial drug that he and his media allies repeatedly hyped as a potential miracle cure.

A preliminary study based on clinical data from researchers in several countries found “no evidence” that hydroxychloroquine is a useful coronavirus-fighting weapon. Additional research funded by the National Institutes of Health and the University of Virginia has recently produced similar findings. So much for Trump’s “game changer.” As one critic in The Washington Post put it, the president “could have let the science lead and his own rhetoric follow.”

But here’s the thing: Trump did follow the science. (Bear with me for a minute.) So did all the Fox News hosts who breathlessly touted hydroxychloroquine after an initial, peer-reviewed study appeared in a respected scientific journal on March 21, reporting that the drug was an “efficient” coronavirus treatment.

This study was greeted skeptically by the medical community. Some highlighted flaws in the study’s design and methodology. Others pointed to the controversial French researcher behind the finding, who, it turns out, has a dubious history. Even the professional scientific society that published the study in its flagship journal has since distanced itself from the paper.

But these red flags were easily crowded out by the incessant hype of hydroxychloroquine from high-wattage personalities such as Dr. Oz, Elon Musk, Laura Ingraham, and of course Trump. Their media-driven boosterism propelled the “miracle cure” narrative and created massive public demand for the drug.

In hindsight, it’s easy—and correct, no doubt—to blame these influential boosters for generating that groundswell of unwarranted attention. But it’s important that we recognize the pattern underneath: Bad ideas like this one often grow their roots in solid-seeming science (not just Reddit or YouTube conspiracy channels), then attach themselves to pollinators within the media or political landscape, who continue to spread them even after the underlying claims have been debunked.

We’ve seen the same life cycle of medical disinformation play out many times before. Exhibit A is the false vaccines/autism narrative. Yes, that claim had (and still has) its famous instigators and evangelists: Jenny McCarthy, Robert Kennedy Jr., Del Bigtree, and so on. But would they have become the faces of a movement absent the idea’s crucial, embryonic publication in a top-tier medical journal? And would that movement have grown so large if not for its nurturing by journalists?

Like other pseudoscience, the modern antivaccine narrative started with the imprimatur of respectable, peer-reviewed research. In 1998, The Lancet published a tiny study (only 12 children were involved) that seemed to show a connection between the measles, mumps, and rubella vaccine and autism. Scholars have linked sensationalist and skewed British media coverage of the study to a subsequent decline in UK immunization rates, which didn’t rebound until the mid-2000s. By that time, the study had been declared “entirely flawed” by the editor of The Lancet (although it wouldn’t be fully retracted until 2010). Of course, by then, the seed had sprouted.

The same template can be applied to the belief that exposure to radiation from cell phones or Wi-Fi gives you cancer. Once again, it’s tempting to apply the not-great-man theory of history and lay the fear of “electromagnetic fields” at the feet of its most avid and visible proponent, New Yorker writer Paul Brodeur.

As I wrote some years ago at Discover, this strain of fear can be traced, in part, to a series of articles Brodeur published under The New Yorker’s “Annals of Radiation” rubric in the 1980s and early 1990s. He’d already been on this beat for some time, with similarly themed work that turned into a book titled The Zapping of America: Microwaves, Their Deadly Risk and the Cover-Up. (Fans of the movie American Hustle might recall Brodeur’s name being mentioned during the “science oven” scene.)

A Prophet of Scientific Rigor—and a Covid Contrarian

Yes, that may be partly because we managed to get enough people to stay home quickly enough; and yes, it has taken heroic sacrifice and risk on the parts of front-line healthcare workers to keep the system functioning. But Ioannidis insists there’s little evidence that hospitals couldn’t now handle the surges that might come from relaxing stay-at-home policies, assuming people who go out take moderate protection measures such as masks and social distancing. If we’re not ready to stay locked up for a year or more, he argues, we’re just delaying the inevitable spread of the disease without doing much to change the death rate.

On the other side, he says, are the health costs of the lockup. “If we prolong these measures too long, the premature deaths from that policy could be 100-fold larger than what we see with Covid-19 itself,” Ioannidis told me. The fear of leaving home to go to a hospital alone is almost certainly leading to thousands of unnecessary deaths from heart attacks, strokes, and cancer.

Other epidemiologists’ assessment of Ioannidis’ claim, that staying at home will likely kill far more people than Covid-19, might best be summed up the way physics giant Wolfgang Pauli is said to have dismissed the lesser work of a colleague: It’s not even wrong. To be promoted to wrong, the Ioannidis position would have to be based on data and analysis that scientists could argue over. Even allowing his 0.1 percent infection fatality rate for the disease—which most epidemiologists think is way too low, but not beyond-the-realm-of-possibility low—there is almost no data to go on for the likely cost in human life of the lockdown. We know Covid-19 is killing tens of thousands of people, and that staying at home is slowing the spread; but we know virtually nothing about the number of deaths caused by staying at home. As such, what Ioannidis is promoting simply isn’t science, says Loren Lipworth, a Vanderbilt University epidemiologist. “It’s impossible to do that risk-benefit analysis,” she says. “It’s just relying on anecdote and common sense.” In other words, Ioannidis is pitting his gut against the collective data-driven wisdom and analysis of medicine and public health.
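
The stakes of that fatality-rate assumption are easy to see with back-of-envelope arithmetic. Here is a sketch in Python with hypothetical round numbers (note that the infection fatality rate counts deaths per infection, unlike the case fatality rate, which counts deaths per confirmed case):

```python
# Back-of-envelope arithmetic (all numbers hypothetical): the assumed
# infection fatality rate (IFR) dominates any projection of total deaths.

population = 330_000_000   # roughly the US population
attack_rate = 0.6          # hypothetical share of people eventually infected

for ifr in (0.001, 0.005, 0.01):  # 0.1%, 0.5%, 1.0%
    deaths = population * attack_rate * ifr
    print(f"IFR {ifr:.1%}: about {deaths:,.0f} deaths")
```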

It’s not that Ioannidis isn’t asking the right questions. I watch CNN and CBS News every day, and I’ve only seen the issue of unnecessary deaths due to the lockdown come up a few times, in the context of chronically ill people having trouble getting treatment. But I’ve watched hundreds of stories of individuals dying horribly from Covid-19, and of anti-stay-at-home protesters excoriated by experts. That’s how most Americans see this disease: through a terrifying filter of widespread death that only fools would leave home to risk.

If Ioannidis’ claims even slightly alter the conversation toward a more balanced, thoughtful view of what we really gain, and what we might lose, from the lockdown, then maybe it’s mission accomplished. If he’s even partly right that we’re too biased toward staying at home, and the disease isn’t as deadly as we thought, the resulting shift could ultimately save tens of thousands of lives.

A week ago, Ioannidis’ legacy in medical science seemed unassailable. Today, not so much. I saw it on the faces of those medical students. To them Ioannidis may always be the fringe scientist who pumped up a bad study that supported a crazy right-wing conspiracy theory in the middle of a massive health crisis.

The prevailing take now is that Ioannidis has fallen prey to the very sorts of biases and distortions that he became revered for exposing in others. If that’s what happened, it will be a twist that Ioannidis himself had prophesied to me 10 years ago in Greece. “If I did a study and the results showed that in fact there wasn’t really much bias in research, would I be willing to publish it?” he said then. “That would create a real psychological conflict for me.” Ioannidis was acknowledging that he’s invested in showing that other scientists tend to get it wrong, and that he might end up being skeptical of data suggesting they are, in fact, getting it right.

Now Ioannidis’ claims about Covid-19 may be pulled by the gravity of his commitment to being the one who sees where everyone else went wrong. There’s a meta-meta-science lesson in there, too, and one we’ve sometimes seen before: Bias is so powerful a force in scientific research that even a grandmaster of research into bias can eventually trip over it.

Updated, 5/1/2020, 12:00 pm EST: The story has been updated to correct a mix-up between the infection fatality rate and the case fatality rate.



The Russian Doll of Putin’s Internet Clampdown

One year ago today, Russian president Vladimir Putin signed into law a major piece of digital legislation, popularly dubbed the domestic internet law. With it, Russia’s internet and telecom regulator, Roskomnadzor, wrested further control of web servers, internet exchange points, and other essential web architecture. At its heart, the law gave Roskomnadzor the right to cut off the domestic internet in the event of a security incident.

The media frequently casts the Russian government as a leader in digital authoritarianism. We hear about its internet surveillance practices, its disinformation campaigns, both abroad and at home, and its efforts to replicate the Chinese government’s so-called Great Firewall in blocking political speech and restricting access to foreign news websites.

Justin Sherman (@jshermcyber) is an op-ed contributor at WIRED and a fellow at the Atlantic Council’s Cyber Statecraft Initiative.

An easy conclusion from these headlines is that Russia’s path toward comprehensive state control of the internet is inevitable—an irreversible march toward censorship, surveillance, and online repression. Certainly, Russia’s internet isn’t getting any freer. But the reality of Moscow’s internet control is far more complex and far more uncertain, especially over the long term—primarily due to challenges of technical execution.

It’s first important to have historical context. Long before modern questions of 1s and 0s, the Russian state had a notable history of controlling the creation and dissemination of information.

Vladimir Lenin believed strongly in molding public opinion through media control. The Communist paper Pravda, for example, wasn’t meant to provide factual reporting but—in the words of two scholars—to “convince the reader about the proper interpretation of the truth.” (Ironically, pravda means “truth” in Russian.) But where Lenin permitted some dissent in the press, Stalin virtually eliminated it, centralizing state control of radio and television broadcasters and criminalizing opposition views.

This tight-fisted approach to information flows didn’t relent after Stalin’s death. From the 1950s onward, Soviet officials worked feverishly to keep abreast of radio news. They continued to consolidate influence over TV and the printing press. Government offices restricted access to official documents and copy machines, making it enormously difficult for even many state workers to do their jobs.

In the 1980s, there was a brief change of course as Mikhail Gorbachev instituted openness and transparency policies, or glasnost, which included limiting the Communist Party’s power and allowing a freer and more critical press. When the Soviet Union collapsed and Boris Yeltsin was elected president, the state’s grip over information further relaxed, and a free Russian press continued to blossom. Since Vladimir Putin ascended to power in 1999, however, that progress has stopped in its tracks.

From typewriters to fax machines to Xerox copiers to radio to television, the Russian government has made a habit of working to control information flows within the country. Under Putin, the internet is no different.

In 2000, as the world recovered from Y2K panic, the Russian government published the Information Security Doctrine of the Russian Federation, citing inbound and foreign-originating information flows as a potential threat to regime stability. “Analysis of the state of information security in the Russian Federation shows that its level is not fully consistent with the requirements of society and the state,” the doctrine read. Around this time the Russian government started to bring the web into its clutches: centralizing state control of domain names, for example, and passing Order No. 130, which exempted the Federal Security Service from telling internet providers about data they were collecting.

Broadly speaking, though, Moscow’s internet control did not keep pace with simultaneous efforts in Beijing. Even if Moscow could’ve thrown more weight behind internet filtering and control, the Kremlin arguably didn’t see cyber sovereignty issues—while undoubtedly important to certain elements of the intelligence and security services—as first-order national security priorities.

All that changed with a few major events. In 2008, online coverage of the Russo-Georgian conflict deeply concerned Vladimir Putin and other members of the siloviki, the military and intelligence elite. In 2010 and the years that followed, the Arab Spring’s “color revolutions” also worried Russian leadership: Blogs and social media platforms were horizontal organization tools for protesting authoritarian rule. For instance, “in Egypt in 2011, only 25 percent of the population of the country was online,” Zeynep Tufekci has written, “but these people still managed to change the wholesale public discussion.”

Fighting Covid-19 Shouldn’t Mean Abandoning Human Rights

Governments around the world are racing to adopt new surveillance tools in response to the Covid-19 pandemic. Many are green-lighting dragnet monitoring systems, seeking real-time location data from mobile providers or deploying facial recognition and other emerging technologies. These steps may usher in a long-term expansion of the surveillance state. …

Texts From Politicians Could Be More Dangerous Than Ever

Covid-19 has forced 2020 US political campaigns to abandon door-to-door canvassing, rallies, and fundraising extravaganzas—leaving them grasping for ways to build meaningful interpersonal bonds, just like the rest of us. Politicians want to be your quarantine buddy, which shouldn’t be too hard because they already have your number. Campaigns …
