European data watchdogs have issued updated guidance in the wake of last week’s landmark ruling striking down a flagship transatlantic data transfer mechanism called Privacy Shield.
In an FAQ on the Schrems II judgement, the European Data Protection Board (EDPB) warns there will be no regulatory grace period.
The top-line message: the EU-US Privacy Shield is dead, and any companies still relying on it to authorize transfers of EU citizens’ personal data are doing so illegally.
“Transfers on the basis of this legal framework are illegal,” warns the EDPB baldly. Entities that wish to keep on transferring personal data to the U.S. need to use an alternative mechanism — but must first determine whether they can meet the legal requirement to protect the data from US surveillance.
What alternatives are there? Standard Contractual Clauses (SCCs) were not invalidated by the CJEU ruling. Binding Corporate Rules (BCRs) are also still technically available.
But in both cases would-be data exporters must conduct an upfront analysis to ascertain whether they can in fact legally use these tools to move data in their specific context.
Anyone who is already using SCCs for the transfer of EU citizens’ data to the US (hi, Facebook!) isn’t exempt from carrying out an assessment — and needs to inform the relevant supervisory authority if they intend to keep using the mechanism.
The rub here for US transfers is that the CJEU judges invalidated Privacy Shield on the grounds that US surveillance laws fundamentally clash with EU privacy rights. So, in other words, Houston, you have a privacy problem…
“The Court found that U.S. law (i.e., Section 702 FISA [Foreign Intelligence Surveillance Act] and EO [Executive Order] 12333) does not ensure an essentially equivalent level of protection,” warns the EDPB in answer to the (expected) frequently asked question: “I am using SCCs with a data importer in the U.S., what should I do?”.
“Whether or not you can transfer personal data on the basis of SCCs will depend on the result of your assessment, taking into account the circumstances of the transfers, and supplementary measures you could put in place.”
The ability to use SCCs to transfer data to the US hinges on a data controller being able to offer a legal guarantee that “U.S. law does not impinge on the adequate level of protection” for the transferred data.
If an EU-US data exporter can’t be confident of that, they are required to pull the plug on the data transfer. No ifs, no buts.
Meanwhile, those who believe they can offer a legal guarantee of “appropriate safeguards” — and thus intend to keep transferring data to the US via SCCs — must notify the relevant data watchdog. So there’s no option to carry on ‘as normal’ without informing the regulator.
It’s the same story with BCRs — on which the EDPB notes: “Given the judgment of the Court, which invalidated the Privacy Shield because of the degree of interference created by the law of the U.S. with the fundamental rights of persons whose data are transferred to that third country, and the fact that the Privacy Shield was also designed to bring guarantees to data transferred with other tools such as BCRs, the Court’s assessment applies as well in the context of BCRs, since U.S. law will also have primacy over this tool.”
So, again, a case-by-case assessment is required to figure out whether you can be legally confident in offering the required level of protection.
Jerry C. Jones is the EVP, Chief Ethics and Legal Officer of LiveRamp (NYSE: RAMP), where he leads the legal and data ethics teams and assists in the strategy and execution of the company’s strategic initiatives.
My college economics professor, Dr. Charles Britton, often said, “There’s no such thing as a free lunch.” The principle, commonly abbreviated TINSTAAFL, implies that even if something appears to be free, there is always a cost to someone, even if it is not the individual receiving the benefit.
For decades, the ad-supported ecosystem enjoyed much more than a proverbial free lunch. Brands, technology providers, publishers and platforms successfully transformed data provided by individuals into massive revenue gains, creating some of the world’s most profitable corporations. So if TINSTAAFL is correct, what is the true cost of monetizing this data? Consumer trust, as it turns out.
Studies overwhelmingly demonstrate that the majority of people believe data collection and data use lack the necessary transparency and control. After a few highly publicized data breaches brought a spotlight on the lack of appropriate governance and regulation, people began to voice concerns that companies had operated with too little oversight for far too long, and unfairly benefited from the data individuals provided.
With increased attention, momentum and legislative activity in multiple individual states, we have never been in a better position to pass a federal data privacy law that can rebalance the system and set standards that rebuild trust with the people providing the data.
Over the last two decades, we’ve seen that individuals benefit from regulated use of data. The competitiveness of the banking markets is partly a result of laws around the collection and use of data for credit decisions. In exchange for data collection and use, individuals now have the ability to go online and get a home loan or buy a car with instant credit. A federal law would strengthen the value exchange and provide rules for companies around the collection and utilization of data, as well as establish consistency and uniformity, which can create a truly national market.
In order to close the gap and pass a law that properly balances the interests of people, society and commerce, the business sector must first unify on the need and the current political reality. Most already agree that a federal law should be preemptive of state laws, and many voices with legitimate differences of opinion have come a long way toward a consensus. Further unification on the following three assertions could help achieve bipartisan support:
A federal law must recognize that one size does not fit all. While some common sense privacy accountability requirements should be universal, a blanket approach for accountability practices is unrealistic. Larger enterprises with significant amounts of data on hand should have stricter requirements than other entities and be required to appoint a Data Ethics Officer and document privacy compliance processes and privacy reviews.
They should be required to regularly perform internal and external audits of data collection and use. These audits should be officer-certified and filed with a regulator. While larger companies are equipped to absorb this burden, smaller businesses should not be forced, by imposition of the same standards, to forgo using the data they need to innovate and thrive. Instead, requirements for accountability should be “right-sized,” based on the amount and type of data collected and its intended use.
A federal law must properly empower the designated regulatory authority. The stated mission of the Federal Trade Commission is to protect American consumers. As the government agency of record for data privacy regulation and enforcement, the FTC has already imposed billions of dollars in penalties for privacy violations. However, in a modern world where every company collects and uses data, the FTC cannot credibly monitor or enforce federal regulation without substantially increasing funding and staffing.
With increased authority, equipped with skilled teams to diligently monitor the companies holding the most consumer data, the FTC — with state attorneys general designated as backups — can hold them accountable by imposing meaningful remedial actions and fines.
A federal law must acknowledge that a properly crafted private right of action is appropriate and necessary. The earlier points build an effective foundation for the protection of people’s privacy rights, but there will still be situations where a person should have access to the judicial system to seek redress. Certainly, if a business does not honor the data rights of an individual as defined by federal law, people should have the right to bring an action for equitable relief. And if a person has suffered actual physical or significant economic harm directly caused by a violation of a federal data privacy law, they should be able to bring suit if, after giving notice, the FTC declines to pursue the matter.
Too many leaders have been unwilling to venture toward possible common ground, but public opinion dictates that more must be done; if Congress does not act, states, counties, parishes and cities inevitably will. It is just as certain that those data privacy laws will be inconsistent, creating a patchwork of rules based on geography and leading to unnecessary friction and complexity. Consider how much time is spent sorting through the 50 discrete data breach laws that exist today, an expense that could easily be mitigated with a single national standard.
It is clear that responsible availability of data is critical to fostering innovation. American technology has led the world into this new data-driven era, and it’s time for our laws to catch up.
To drive economic growth and benefit all Americans, we need to properly balance the interests of people, society at-large and business, and pass a data law that levels the playing field and allows American enterprise to continue thinking with data. It should ensure that transparency and accountability are fostered and enforced and help rebuild trust in the system.
Coming together to support the passage of a comprehensive and preemptive federal data privacy law is increasingly important. If we don’t, we are conceding that we’re okay with Americans remaining distrustful of the industry, and with the rest of the world setting the standards for us.
In the wake of yesterday’s landmark ruling by Europe’s top court — striking down a flagship transatlantic data transfer framework called Privacy Shield, and cranking up the legal uncertainty around processing EU citizens’ data in the U.S. in the process — Europe’s lead data protection regulator has fired its own warning shot at the region’s data protection authorities (DPAs), essentially telling them to get on and do the job of intervening to stop people’s data flowing to third countries where it’s at risk.
The original complaint that led to the Court of Justice of the EU (CJEU) ruling focused on Facebook’s use of a data transfer mechanism called Standard Contractual Clauses (SCCs) to authorize moving EU users’ data to the U.S. for processing.
Complainant Max Schrems asked the Irish Data Protection Commission (DPC) to suspend Facebook’s SCC data transfers in light of U.S. government mass surveillance programs. Instead, the regulator went to court to raise wider concerns about the legality of the transfer mechanism.
That in turn led Europe’s top judges to nuke the Commission’s adequacy decision, which underpinned the EU-U.S. Privacy Shield — meaning the U.S. no longer has a special arrangement greasing the flow of personal data from the EU. Yet, at the time of writing, Facebook is still using SCCs to process EU users’ data in the U.S. Much has changed, but the data hasn’t stopped flowing — yet.
Yesterday the tech giant said it would “carefully consider” the findings and implications of the CJEU decision on Privacy Shield, adding that it looked forward to “regulatory guidance.” It certainly didn’t offer to proactively flip a kill switch and stop the processing itself.
Ireland’s DPA, meanwhile, which is Facebook’s lead data regulator in the region, sidestepped questions over what action it would be taking in the wake of yesterday’s ruling — saying it (also) needed (more) time to study the legal nuances.
The DPC’s statement also only went so far as to say the use of SCCs for taking data to the U.S. for processing is “questionable” — adding that case-by-case analysis would be key.
The regulator remains the focus of sustained criticism in Europe over its enforcement record for major cross-border data protection complaints — with still zero decisions issued more than two years after the EU’s General Data Protection Regulation (GDPR) came into force, and an ever-growing backlog of open investigations into the data processing activities of platform giants.
In May, the DPC finally submitted to other DPAs for review its first draft decision on a cross-border case (an investigation into a Twitter security breach), saying it hoped the decision would be finalized in July. At the time of writing we’re still waiting for the bloc’s regulators to reach consensus on that.
The painstaking pace of enforcement around Europe’s flagship data protection framework remains a problem for EU lawmakers — whose two-year review last month called for uniformly “vigorous” enforcement by regulators.
The European Data Protection Supervisor (EDPS) made a similar call today, in the wake of the Schrems II ruling — which only looks set to further complicate the process of regulating data flows by piling yet more work on the desks of underfunded DPAs.
“European supervisory authorities have the duty to diligently enforce the applicable data protection legislation and, where appropriate, to suspend or prohibit transfers of data to a third country,” writes EDPS Wojciech Wiewiórowski, in a statement, which warns against further dithering or can-kicking on the intervention front.
“The EDPS will continue to strive, as a member of the European Data Protection Board (EDPB), to achieve the necessary coherent approach among the European supervisory authorities in the implementation of the EU framework for international transfers of personal data,” he goes on, calling for more joint working by the bloc’s DPAs.
Wiewiórowski’s statement also highlights what he dubs “welcome clarifications” regarding the responsibilities of data controllers and European DPAs — to “take into account the risks linked to the access to personal data by the public authorities of third countries.”
“As the supervisory authority of the EU institutions, bodies, offices and agencies, the EDPS is carefully analysing the consequences of the judgment on the contracts concluded by EU institutions, bodies, offices and agencies. The example of the recent EDPS’ own-initiative investigation into European institutions’ use of Microsoft products and services confirms the importance of this challenge,” he adds.
Part of the complexity of enforcing Europe’s data protection rules is the lack of a single authority; instead, a varied patchwork of supervisory authorities is responsible for investigating complaints and issuing decisions.
Now, with a CJEU ruling that calls for regulators to assess third countries themselves — to determine whether the use of SCCs is valid in a particular use-case and country — there’s a risk of further fragmentation should different DPAs jump to different conclusions.
Yesterday, in its response to the CJEU decision, Hamburg’s DPA criticized the judges for not also striking down SCCs, saying it was “inconsistent” for them to invalidate Privacy Shield yet allow this other mechanism for international transfers. Supervisory authorities in Germany and Europe must now quickly agree how to deal with companies that continue to rely illegally on the Privacy Shield, the DPA warned.
In the statement, Hamburg’s data commissioner, Johannes Caspar, added: “Difficult times are looming for international data traffic.”
He also shot off a blunt warning that: “Data transmission to countries without an adequate level of data protection will… no longer be permitted in the future.”
Compare and contrast that with the Irish DPC talking about use of SCCs being “questionable,” case by case. (Or the U.K.’s ICO offering this bare minimum.)
Caspar also emphasized the challenge facing the bloc’s patchwork of DPAs to develop and implement a “common strategy” toward dealing with SCCs in the wake of the CJEU ruling.
In a press note today, Berlin’s DPA also took a tough line, warning that data transfers to third countries would only be permitted if they have a level of data protection essentially equivalent to that offered within the EU.
In the case of the U.S. — home to the largest and most used cloud services — Europe’s top judges yesterday reiterated very clearly that that is not in fact the case.
“The CJEU has made it clear that the export of data is not just about the economy but people’s fundamental rights must be paramount,” Berlin data commissioner Maja Smoltczyk said in a statement [which we’ve translated using Google Translate].
“The times when personal data could be transferred to the U.S. for convenience or cost savings are over after this judgment,” she added.
Both DPAs warned the ruling has implications for the use of cloud services in other third countries where the protection of EU citizens’ data likewise cannot be guaranteed — i.e. not just the U.S.
On this front, Smoltczyk name-checked China, Russia and India as countries EU DPAs will have to assess for similar problems.
“Now is the time for Europe’s digital independence,” she added.
Some commentators (including Schrems himself) have also suggested the ruling could see companies switching to local processing of EU users’ data. Though it’s also interesting to note the judges chose not to invalidate SCCs — thereby offering a path to legal international data transfers, but only provided the necessary protections are in place in that given third country.
Also issuing a response to the CJEU ruling today was the European Data Protection Board (EDPB), aka the body made up of representatives from DPAs across the bloc. Chair Andrea Jelinek put out an emollient statement, writing that: “The EDPB intends to continue playing a constructive part in securing a transatlantic transfer of personal data that benefits EEA citizens and organisations and stands ready to provide the European Commission with assistance and guidance to help it build, together with the U.S., a new framework that fully complies with EU data protection law.”
Short of radical changes to U.S. surveillance law, it’s tough to see how any new framework could be made to legally stick, though. Privacy Shield’s predecessor arrangement, Safe Harbour, stood for around 15 years. Its shiny “new and improved” replacement didn’t even last five.
In the wake of the CJEU ruling, data exporters and importers are required to carry out an assessment of a country’s data regime to assess adequacy with EU legal standards before using SCCs to transfer data there.
“When performing such prior assessment, the exporter (if necessary, with the assistance of the importer) shall take into consideration the content of the SCCs, the specific circumstances of the transfer, as well as the legal regime applicable in the importer’s country. The examination of the latter shall be done in light of the non-exhaustive factors set out under Art 45(2) GDPR,” Jelinek writes.
“If the result of this assessment is that the country of the importer does not provide an essentially equivalent level of protection, the exporter may have to consider putting in place additional measures to those included in the SCCs. The EDPB is looking further into what these additional measures could consist of.”
Again, it’s not clear what “additional measures” a platform could plausibly deploy to “fix” the gaping lack of redress afforded to foreigners by U.S. surveillance law. Major legal surgery does seem to be required to square this circle.
Jelinek said the EDPB would be studying the judgment with the aim of putting out more granular guidance in the future. But her statement warns data exporters they have an obligation to suspend data transfers or terminate SCCs if contractual obligations are not or cannot be complied with, or else to notify a relevant supervisory authority if they intend to continue transferring data.
In her roundabout way, she also warns that DPAs now have a clear obligation to terminate SCCs where the safety of data cannot be guaranteed in a third country.
“The EDPB takes note of the duties for the competent supervisory authorities (SAs) to suspend or prohibit a transfer of data to a third country pursuant to SCCs, if, in the view of the competent SA and in the light of all the circumstances of that transfer, those clauses are not or cannot be complied with in that third country, and the protection of the data transferred cannot be ensured by other means, in particular where the controller or a processor has not already itself suspended or put an end to the transfer,” Jelinek writes.
One thing is crystal clear: Any sense of legal certainty U.S. cloud services were deriving from the existence of the EU-U.S. Privacy Shield — with its flawed claim of data protection adequacy — has vanished like summer rain.
Europe’s data protection laws are some of the strictest in the world, and have long been a thorn in the side of the data-guzzling Silicon Valley tech giants since they colonized vast swathes of the internet.
Two decades later, one Democratic senator wants to bring many of those concepts to the United States.
Sen. Kirsten Gillibrand (D-NY) has published a bill which, if passed, would create a U.S. federal data protection agency designed to protect the privacy of Americans and with the authority to enforce data practices across the country. The bill, which Gillibrand calls the Data Protection Act, will address a “growing data privacy crisis” in the U.S., the senator said.
The U.S. is one of only a few countries without a data protection law, putting it in the same company as Venezuela, Libya, Sudan and Syria. Gillibrand said the U.S. is “vastly behind” other countries on data protection.
Gillibrand said a new data protection agency would “create and meaningfully enforce” data protection and privacy rights federally.
“The data privacy space remains a complete and total Wild West, and that is a huge problem,” the senator said.
The bill comes at a time when tech companies are facing increased attention from state and federal regulators over data and privacy practices. Last year saw Facebook settle a $5 billion privacy case with the Federal Trade Commission, which critics decried for failing to bring civil charges or levy any meaningful consequences. Months later, Google settled a child privacy case for $170 million — about a day’s worth of the search giant’s revenue.
Gillibrand pointedly called out Google and Facebook for “making a whole lot of money” from their empires of data, she wrote in a Medium post. Americans “deserve to be in control of your own data,” she wrote.
At its heart, the bill would — if signed into law — allow the newly created agency to hear and adjudicate complaints from consumers and declare certain privacy-invading tactics unfair and deceptive. As the government’s “referee,” the agency would take point on federal data protection and privacy matters, such as launching investigations against companies accused of wrongdoing. Gillibrand’s bill specifically takes issue with “take-it-or-leave-it” provisions, notably websites that compel a user to “agree” to allowing cookies with no way to opt out. (TechCrunch’s parent company Verizon Media enforces a ‘consent required’ policy for European users under GDPR, though most Americans never see the prompt.)
Through its enforcement arm, the would-be federal agency would also have the power to bring civil action against companies, fining them for egregious breaches of the law up to $1 million a day, subject to a court’s approval.
The bill would transfer some authorities from the Federal Trade Commission to the new data protection agency.
Gillibrand’s bill lands just a month after California’s consumer privacy law took effect, more than a year after it was signed into law. The law extended much of Europe’s revised privacy laws, known as GDPR, to the state. But Gillibrand’s bill would not affect state laws like California’s, her office confirmed in an email.
Privacy groups and experts have already offered positive reviews.
Caitriona Fitzgerald, policy director at the Electronic Privacy Information Center, said the bill is a “bold, ambitious proposal.” Other groups, including Color of Change and Consumer Action, praised the effort to establish a federal data protection watchdog.
Michelle Richardson, director of the Privacy and Data Project at the Center for Democracy and Technology, reviewed a summary of the bill.
“The summary seems to leave a lot of discretion to executive branch regulators,” said Richardson. “Many of these policy decisions should be made by Congress and written clearly into statute.” She warned it could take years to know if the new regime has any meaningful impact on corporate behaviors.
Gillibrand’s bill stands alone — the senator is the only sponsor on the bill. But given the appetite of some lawmakers on both sides of the aisles to crash the Silicon Valley data party, it’s likely to pick up bipartisan support in no time.
Whether it makes it to the president’s desk without a fight from the tech giants remains to be seen.
The American Civil Liberties Union plans to fight newly revealed practices by the Department of Homeland Security which used commercially available cell phone location data to track suspected illegal immigrants.
“DHS should not be accessing our location information without a warrant, regardless whether they obtain it by paying or for free. The failure to get a warrant undermines Supreme Court precedent establishing that the government must demonstrate probable cause to a judge before getting some of our most sensitive information, especially our cell phone location history,” said Nathan Freed Wessler, a staff attorney with the ACLU’s Speech, Privacy, and Technology Project.
Earlier today, The Wall Street Journal reported that the Department of Homeland Security, through its Immigration and Customs Enforcement (ICE) and Customs & Border Protection (CBP) agencies, was buying geolocation data from commercial entities to investigate suspects of alleged immigration violations.
The location data, which aggregators acquire from cellphone apps including games, weather, shopping, and search services, is being used by Homeland Security to detect undocumented immigrants and others entering the U.S. unlawfully, the Journal reported.
According to privacy experts interviewed by the Journal, since the data is publicly available for purchase, the government practices don’t appear to violate the law — despite being what may be the largest dragnet ever conducted by the U.S. government using the aggregated data of its citizens.
It’s also an example of how the commercial surveillance apparatus put in place by private corporations in democratic societies can be legally accessed by state agencies to create the same kind of surveillance networks used in more authoritarian countries like China, India, and Russia.
“This is a classic situation where creeping commercial surveillance in the private sector is now bleeding directly over into government,” Alan Butler, general counsel of the Electronic Privacy Information Center, a think tank that pushes for stronger privacy laws, told the newspaper.
Behind the government’s use of commercial data is a company called Venntel. Based in Herndon, Va., the company acts as a government contractor and shares a number of its executive staff with Gravy Analytics, a mobile-advertising marketing analytics company. In all, ICE and the CBP have spent nearly $1.3 million on licenses for software that can provide location data for cell phones. Homeland Security says that the data from these commercially available records is used to generate leads about border crossing and detecting human traffickers.
The ACLU’s Wessler has won these kinds of cases in the past. He successfully argued before the Supreme Court in the case of Carpenter v. United States that geographic location data from cellphones was a protected class of information and couldn’t be obtained by law enforcement without a warrant.
CBP explicitly excludes cell tower data from the information it collects from Venntel, a spokesperson for the agency told the Journal — in part because it has to under the law. The agency also said that it only accesses limited location data, and that the data is anonymized.
However, anonymized data can be linked to specific individuals by correlating that anonymous cell phone information with the real-world movements of specific individuals, which can be easily deduced or tracked through other types of public records and publicly available social media.
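The re-identification risk described above is easy to sketch. The following toy Python example (the device IDs, coordinates and thresholds are all invented for illustration) shows how an “anonymized” location trace can be tied back to a named individual simply by checking which device spends nights at a known home address and workdays at a known workplace:

```python
from math import hypot

# Hypothetical illustration: "anonymized" pings are keyed only by a
# random device ID; each ping is ((lat, lon), hour_of_day).
pings = {
    "device_a": [((40.71, -74.00), 2), ((40.75, -73.99), 14)],
    "device_b": [((34.05, -118.24), 3), ((34.10, -118.33), 13)],
}

# Facts deducible from public records or social media:
# one person's home and workplace coordinates.
home, work = (40.71, -74.00), (40.75, -73.99)

def near(a, b, tol=0.01):
    """Crude proximity check between two coordinate pairs."""
    return hypot(a[0] - b[0], a[1] - b[1]) < tol

def reidentify(pings, home, work):
    """Return device IDs whose overnight pings sit at 'home' and workday pings at 'work'."""
    matches = []
    for device, trace in pings.items():
        overnight = any(near(loc, home) and (hour < 6 or hour > 22) for loc, hour in trace)
        workday = any(near(loc, work) and 9 <= hour <= 17 for loc, hour in trace)
        if overnight and workday:
            matches.append(device)
    return matches

print(reidentify(pings, home, work))  # prints ['device_a']
```

Real-world attacks perform essentially this same join, only against billions of pings and richer side information, which is why researchers consistently find that a handful of spatiotemporal points uniquely identifies most people.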
ICE is already being sued by the ACLU for another potential privacy violation. Late last year the ACLU said that it was taking the government to court over the DHS service’s use of so-called “stingray” technology that spoofs a cell phone tower to determine someone’s location.
At the time, the ACLU cited a government oversight report in 2016 which indicated that both CBP and ICE collectively spent $13 million on buying dozens of stingrays, which the agencies used to “locate people for arrest and prosecution.”
There’s a reason why Data Privacy Day pisses me off.
January 28 was the annual “Hallmark holiday” for cybersecurity, ostensibly a day devoted to promoting data privacy awareness and staying safe online. This year, as in recent years, it has become a launching pad for marketing fluff and promoting privacy practices that don’t hold up.
Privacy has become a major component of our wider views on security, and it’s in sharper focus than ever as we see multiple examples of companies that harvest too much of our data, share it with others, sell it to advertisers and third parties and use it to track our every move so they can squeeze out a few more dollars.
But as we become more aware of these issues, companies large and small clamor for attention about how their privacy practices are good for users. All too often, companies make hollow promises and empty claims that look fancy and meaningful.
California’s Consumer Privacy Act (CCPA) allows anyone who resides in the state to access and obtain copies of the data that companies store on them, and the right to delete that data and opt out of companies selling or monetizing their data. It’s the biggest state-level overhaul of privacy rules in a generation. State regulators can impose fines and other sanctions on companies that violate the law — although the law’s enforcement provisions do not take effect until July. That’s probably a good thing for companies, given most major tech giants operating in the state are not ready to comply with the law.
Just as they did with Europe’s GDPR, many companies have rolled out new privacy policies in preparation, as well as new data portals that allow consumers to access their data and opt out of it being sold on to third parties, such as advertisers. But good luck finding them. Most companies aren’t transparent about where their data portals are; they’re often out of sight and buried in privacy policies, near-guaranteeing that nobody will find them.
Just two days into the new law, some are already fixing that for the average Californian.
Damian Finol created a running directory of company pages that allow California residents to opt-out of their data being sold and request their information. The directory is updated frequently, and so far includes banks, retail giants, airlines, car rental services, gaming giants and cell companies — to name a few.
caprivacy.me is a simple directory of links to where California residents can tell companies not to sell their data, and request what data companies store on them (Screenshot: TechCrunch)
The project is still in its infancy, but relies on community contributions (and anyone can submit a suggestion), he said. In less than a day, it already racked up more than 80 links.
“I’m passionate about privacy and allowing people to declare what their personal privacy model is,” Finol told TechCrunch.
“I grew up queer in Latin America in the 1990s, so keeping private the truth about me was vital. Nowadays, I think of my LGBTQ siblings in places like the Middle East, where if their privacy is violated, they can face capital punishment,” he said, explaining his motivations behind the directory.
There’s no easy way — yet — to opt-out in one go. Anyone in California who wants to opt-out has to go through each link. But once it’s done, it’s done. Put on a pot of coffee and get started.
Google’s ex-head of international relations, Ross LaJeunesse — who clocked up more than a decade working government and policy-related roles for the tech giant before departing last year — has become the latest (former) Googler to lay into the company for falling short of its erstwhile “don’t be evil” corporate motto.
Worth noting right off the bat: LaJeunesse is making his own pitch to be elected as a U.S. senator for the Democrats in Maine, where he’s pitting himself against the sitting Republican, Susan Collins. So this lengthy blog post, in which he sets out reasons for joining (“making the world better and more equal”) and — at long last — exiting Google does look like an exercise in New Year reputation “exfoliation,” shall we say.
One that’s intended to anticipate and deflect any critical questions he may face on the campaign trail, given his many years of service to Mountain View. Hence the inclusion of overt political messaging, such as lines like: “No longer can massive tech companies like Google be permitted to operate relatively free from government oversight.”
Still, the post makes more awkward reading for Google. (Albeit, less awkward than the active employee activism the company continues to face over a range of issues — from its corporate culture and attitude toward diversity to product dev ethics.)
LaJeunesse claims that (unnamed) senior management actively evaded his attempts to push for it to adopt a company-wide Human Rights program that would, as he tells it, “publicly commit Google to adhere to human rights principles found in the UN Declaration of Human Rights, provide a mechanism for product and engineering teams to seek internal review of product design elements, and formalize the use of Human Rights Impact Assessments for all major product launches and market entries.”
“[E]ach time I recommended a Human Rights Program, senior executives came up with an excuse to say no,” LaJeunesse alleges, going on to claim that he was subsequently side-lined in policy discussions related to a censored search project Google had been working on to enable it to return to the Chinese market.
The controversial project, code-named Dragonfly, was later shut down, per LaJeunesse’s telling, after Congress raised questions — backing up the blog’s overarching theme that only political scrutiny can put meaningful limits on powerful technologists. (Check that already steady drumbeat for the 2020 U.S. elections.)
At first, [Google senior executives] said human rights issues were better handled within the product teams, rather than starting a separate program. But the product teams weren’t trained to address human rights as part of their work. When I went back to senior executives to again argue for a program, they then claimed to be worried about increasing the company’s legal liability. We provided the opinion of outside experts who re-confirmed that these fears were unfounded. At this point, a colleague was suddenly re-assigned to lead the policy team discussions for Dragonfly. As someone who had consistently advocated for a human rights-based approach, I was being sidelined from the on-going conversations on whether to launch Dragonfly. I then realized that the company had never intended to incorporate human rights principles into its business and product decisions. Just when Google needed to double down on a commitment to human rights, it decided to instead chase bigger profits and an even higher stock price.
Reached for comment, a Google spokesperson sent us this statement: “We have an unwavering commitment to supporting human rights organisations and efforts. That commitment is unrelated to and unaffected by the reorganisation of our policy team, which was widely reported and which impacted many members of the team. As part of this reorganisation, Ross was offered a new position at the exact same level and compensation, which he declined to accept. We wish Ross all the best with his political ambitions.”
LaJeunesse’s blog post also lays into Google’s workplace culture — making allegations that bullying and racist stereotyping were commonplace.
Even, apparently, during management’s attempts to actively engage with the issue of diversity…
It was no different in the workplace culture. Senior colleagues bullied and screamed at young women, causing them to cry at their desks. At an all-hands meeting, my boss said, “Now you Asians come to the microphone too. I know you don’t like to ask questions.” At a different all-hands meeting, the entire policy team was separated into various rooms and told to participate in a “diversity exercise” that placed me in a group labeled “homos” while participants shouted out stereotypes such as “effeminate” and “promiscuous.” Colleagues of color were forced to join groups called “Asians” and “Brown people” in other rooms nearby.
We’ve asked Google for comment on these allegations and will update this post with any response.
It’s clearly a sign of the “techlash” times that an ex-Googler, who’s now a senator-in-the-running, believes there’s political capital to be made by publicly unloading on his former employer.
“The role of these companies in our daily lives, from how we run our elections to how we entertain and educate our children, is just too great to leave in the hands of executives who are accountable only to their controlling shareholders who — in the case of Google, Amazon, Facebook and Snap — happen to be fellow company insiders and founders,” LaJeunesse goes on to write, widening his attack to incorporate other FAANG giants.
Expect plenty more such tech giant piñata-bashing in the run-up to November’s ballot.