Legal clouds gather over US cloud services, after CJEU ruling

In the wake of yesterday’s landmark ruling by Europe’s top court — striking down a flagship transatlantic data transfer framework called Privacy Shield, and cranking up the legal uncertainty around processing EU citizens’ data in the U.S. in the process — Europe’s lead data protection regulator has fired its own warning shot at the region’s data protection authorities (DPAs), essentially telling them to get on and do the job of intervening to stop people’s data flowing to third countries where it’s at risk.

Countries like the U.S.

The original complaint that led to the Court of Justice of the EU (CJEU) ruling focused on Facebook’s use of a data transfer mechanism called Standard Contractual Clauses (SCCs) to authorize moving EU users’ data to the U.S. for processing.

Complainant Max Schrems asked the Irish Data Protection Commission (DPC) to suspend Facebook’s SCC data transfers in light of U.S. government mass surveillance programs. Instead, the regulator went to court to raise wider concerns about the legality of the transfer mechanism.

That in turn led Europe’s top judges to nuke the Commission’s adequacy decision, which underpinned the EU-U.S. Privacy Shield — meaning the U.S. no longer has a special arrangement greasing the flow of personal data from the EU. Yet, at the time of writing, Facebook is still using SCCs to process EU users’ data in the U.S. Much has changed, but the data hasn’t stopped flowing — yet.

Yesterday the tech giant said it would “carefully consider” the findings and implications of the CJEU decision on Privacy Shield, adding that it looked forward to “regulatory guidance.” It certainly didn’t offer to proactively flip a kill switch and stop the processing itself.

Ireland’s DPA, meanwhile, which is Facebook’s lead data regulator in the region, sidestepped questions over what action it would be taking in the wake of yesterday’s ruling — saying it (also) needed (more) time to study the legal nuances.

The DPC’s statement also only went so far as to say the use of SCCs for taking data to the U.S. for processing is “questionable” — adding that case-by-case analysis would be key.

The regulator remains the focus of sustained criticism in Europe over its enforcement record for major cross-border data protection complaints — with still zero decisions issued more than two years after the EU’s General Data Protection Regulation (GDPR) came into force, and an ever-growing backlog of open investigations into the data processing activities of platform giants.

In May, the DPC finally submitted to other DPAs for review its first draft decision on a cross-border case (an investigation into a Twitter security breach), saying it hoped the decision would be finalized in July. At the time of writing we’re still waiting for the bloc’s regulators to reach consensus on that.

The painstaking pace of enforcement around Europe’s flagship data protection framework remains a problem for EU lawmakers — whose two-year review last month called for uniformly “vigorous” enforcement by regulators.

The European Data Protection Supervisor (EDPS) made a similar call today, in the wake of the Schrems II ruling — which only looks set to further complicate the process of regulating data flows by piling yet more work on the desks of underfunded DPAs.

“European supervisory authorities have the duty to diligently enforce the applicable data protection legislation and, where appropriate, to suspend or prohibit transfers of data to a third country,” writes EDPS Wojciech Wiewiórowski, in a statement, which warns against further dithering or can-kicking on the intervention front.

“The EDPS will continue to strive, as a member of the European Data Protection Board (EDPB), to achieve the necessary coherent approach among the European supervisory authorities in the implementation of the EU framework for international transfers of personal data,” he goes on, calling for more joint working by the bloc’s DPAs.

Wiewiórowski’s statement also highlights what he dubs “welcome clarifications” regarding the responsibilities of data controllers and European DPAs — to “take into account the risks linked to the access to personal data by the public authorities of third countries.”

“As the supervisory authority of the EU institutions, bodies, offices and agencies, the EDPS is carefully analysing the consequences of the judgment on the contracts concluded by EU institutions, bodies, offices and agencies. The example of the recent EDPS’ own-initiative investigation into European institutions’ use of Microsoft products and services confirms the importance of this challenge,” he adds.

Part of the complexity of enforcing Europe’s data protection rules is the lack of a single enforcement authority: instead, a varied patchwork of supervisory authorities is responsible for investigating complaints and issuing decisions.

Now, with a CJEU ruling that calls for regulators to assess third countries themselves — to determine whether the use of SCCs is valid in a particular use-case and country — there’s a risk of further fragmentation should different DPAs jump to different conclusions.

Yesterday, in its response to the CJEU decision, Hamburg’s DPA criticized the judges for not also striking down SCCs, saying it was “inconsistent” for them to invalidate Privacy Shield yet allow this other mechanism for international transfers. Supervisory authorities in Germany and Europe must now quickly agree how to deal with companies that continue to rely illegally on the Privacy Shield, the DPA warned.

In the statement, Hamburg’s data commissioner, Johannes Caspar, added: “Difficult times are looming for international data traffic.”

He also shot off a blunt warning that: “Data transmission to countries without an adequate level of data protection will… no longer be permitted in the future.”

Compare and contrast that with the Irish DPC talking about use of SCCs being “questionable,” case by case. (Or the U.K.’s ICO offering this bare minimum.)

Caspar also emphasized the challenge facing the bloc’s patchwork of DPAs to develop and implement a “common strategy” toward dealing with SCCs in the wake of the CJEU ruling.

In a press note today, Berlin’s DPA also took a tough line, warning that data transfers to third countries would only be permitted if they have a level of data protection essentially equivalent to that offered within the EU.

In the case of the U.S. — home to the largest and most used cloud services — Europe’s top judges yesterday reiterated very clearly that that is not in fact the case.

“The CJEU has made it clear that the export of data is not just about the economy but people’s fundamental rights must be paramount,” Berlin data commissioner Maja Smoltczyk said in a statement [which we’ve translated using Google Translate].

“The times when personal data could be transferred to the U.S. for convenience or cost savings are over after this judgment,” she added.

Both DPAs warned the ruling has implications for the use of cloud services where data is processed in other third countries where the protection of EU citizens’ data similarly cannot be guaranteed, i.e. not just the U.S.

On this front, Smoltczyk name-checked China, Russia and India as countries EU DPAs will have to assess for similar problems.

“Now is the time for Europe’s digital independence,” she added.

Some commentators (including Schrems himself) have also suggested the ruling could see companies switching to local processing of EU users’ data. Though it’s also interesting to note the judges chose not to invalidate SCCs — thereby offering a path to legal international data transfers, but only provided the necessary protections are in place in that given third country.

Also issuing a response to the CJEU ruling today was the European Data Protection Board (EDPB), the body made up of representatives from DPAs across the bloc. Chair Andrea Jelinek put out an emollient statement, writing that: “The EDPB intends to continue playing a constructive part in securing a transatlantic transfer of personal data that benefits EEA citizens and organisations and stands ready to provide the European Commission with assistance and guidance to help it build, together with the U.S., a new framework that fully complies with EU data protection law.”

Short of radical changes to U.S. surveillance law, it’s tough to see how any new framework could be made to legally stick, though. Privacy Shield’s predecessor arrangement, Safe Harbour, stood for around 15 years. Its shiny “new and improved” replacement didn’t even last five.

In the wake of the CJEU ruling, data exporters and importers are required to carry out an assessment of a country’s data regime to assess adequacy with EU legal standards before using SCCs to transfer data there.

“When performing such prior assessment, the exporter (if necessary, with the assistance of the importer) shall take into consideration the content of the SCCs, the specific circumstances of the transfer, as well as the legal regime applicable in the importer’s country. The examination of the latter shall be done in light of the non-exhaustive factors set out under Art 45(2) GDPR,” Jelinek writes.

“If the result of this assessment is that the country of the importer does not provide an essentially equivalent level of protection, the exporter may have to consider putting in place additional measures to those included in the SCCs. The EDPB is looking further into what these additional measures could consist of.”

Again, it’s not clear what “additional measures” a platform could plausibly deploy to “fix” the gaping lack of redress afforded to foreigners by U.S. surveillance law. Major legal surgery does seem to be required to square this circle.

Jelinek said the EDPB would be studying the judgement with the aim of putting out more granular guidance in the future. But her statement warns data exporters they have an obligation to suspend data transfers or terminate SCCs if contractual obligations are not or cannot be complied with, or else to notify a relevant supervisory authority if they intend to continue transferring data.

In her roundabout way, she also warns that DPAs now have a clear obligation to terminate SCCs where the safety of data cannot be guaranteed in a third country.

“The EDPB takes note of the duties for the competent supervisory authorities (SAs) to suspend or prohibit a transfer of data to a third country pursuant to SCCs, if, in the view of the competent SA and in the light of all the circumstances of that transfer, those clauses are not or cannot be complied with in that third country, and the protection of the data transferred cannot be ensured by other means, in particular where the controller or a processor has not already itself suspended or put an end to the transfer,” Jelinek writes.

One thing is crystal clear: Any sense of legal certainty U.S. cloud services were deriving from the existence of the EU-U.S. Privacy Shield — with its flawed claim of data protection adequacy — has vanished like summer rain.

In its place, a sense of déjà vu and a lot more work for lawyers.


French court slaps down Google’s appeal against $57M GDPR fine

France’s top court for administrative law has dismissed Google’s appeal against a $57M fine issued by the data watchdog last year for not making it clear enough to Android users how it processes their personal information.

The State Council issued the decision today, affirming the data watchdog CNIL’s earlier finding that Google did not provide “sufficiently clear” information to Android users — which in turn meant it had not legally obtained their consent to use their data for targeted ads.

“Google’s request has been rejected,” a spokesperson for the Conseil D’Etat confirmed to TechCrunch via email.

“The Council of State confirms the CNIL’s assessment that information relating to targeting advertising is not presented in a sufficiently clear and distinct manner for the consent of the user to be validly collected,” the court also writes in a press release [translated with Google Translate] on its website.

It found the size of the fine to be proportionate — given the severity and ongoing nature of the violations.

Importantly, the court also affirmed the jurisdiction of France’s national watchdog to regulate Google — at least on the date when this penalty was issued (January 2019).

The CNIL’s multimillion dollar fine against Google remains the largest to date against a tech giant under Europe’s flagship General Data Protection Regulation (GDPR) — lending the case a certain symbolic value, for those concerned about whether the regulation is functioning as intended vs platform power.

While the size of the fine is still relative peanuts vs Google’s parent entity Alphabet’s global revenue, changes the tech giant may have to make to how it harvests user data could be far more impactful to its ad-targeting bottom line. 

Under European law, for consent to be a valid legal basis for processing personal data it must be informed, specific and freely given. Or, to put it another way, consent cannot be strained.

In this case French judges concluded Google had not provided clear enough information for consent to be lawfully obtained — including objecting to a pre-ticked checkbox which the court affirmed does not meet the requirements of the GDPR.

So, tl;dr, the CNIL’s decision has been entirely vindicated.

Reached for comment on the court’s dismissal of its appeal, a Google spokeswoman sent us this statement:

People expect to understand and control how their data is used, and we’ve invested in industry-leading tools that help them do both. This case was not about whether consent is needed for personalised advertising, but about how exactly it should be obtained. In light of this decision, we will now review what changes we need to make.

GDPR came into force in 2018, updating long standing European data protection rules and opening up the possibility of supersized fines of up to 4% of global annual turnover.

However actions against big tech have largely stalled, with scores of complaints being funnelled through Ireland’s Data Protection Commission — on account of a one-stop-shop mechanism in the regulation — causing a major backlog of cases. The Irish DPC has yet to issue decisions on any cross border complaints, though it has said its first ones are imminent — on complaints involving Twitter and Facebook.

Ireland’s data watchdog is also continuing to investigate a number of complaints against Google, following a change Google announced to the legal jurisdiction of where it processes European users’ data — moving them to Google Ireland Limited, based in Dublin, which it said applied from January 22, 2019. Ongoing investigations by the Irish DPC include a long running complaint related to how Google handles location data and another major probe of its adtech, to name two.

On the GDPR one-stop shop mechanism — and, indirectly, the wider problematic issue of ‘forum shopping’ and European data protection regulation — the French State Council writes: “Google believed that the Irish data protection authority was solely competent to control its activities in the European Union, the control of data processing being the responsibility of the authority of the country where the main establishment of the data controller is located, according to a ‘one-stop-shop’ principle instituted by the GDPR. The Council of State notes however that at the date of the sanction, the Irish subsidiary of Google had no power of control over the other European subsidiaries nor any decision-making power over the data processing, the company Google LLC located in the United States with this power alone.”

In its own statement responding to the court’s decision, the CNIL notes the court’s view that GDPR’s one-stop-shop mechanism was not applicable in this case — writing: “It did so by applying the new European framework as interpreted by all the European authorities in the guidelines of the European Data Protection Committee.”

Privacy NGO noyb — one of the privacy campaign groups which lodged the original ‘forced consent’ complaint against Google, all the way back in May 2018 — welcomed the court’s decision on all fronts, including the jurisdiction point.

Commenting in a statement, noyb’s honorary chairman, Max Schrems, said: “It is very important that companies like Google cannot simply declare themselves to be ‘Irish’ to escape the oversight by the privacy regulators.”

A key question is whether CNIL — or another (non-Irish) EU DPA — will be found to be competent to sanction Google in future, following its shift to naming its Google Ireland subsidiary as the regional data processor. (Other tech giants use the same or a similar playbook, seeking out the EU’s more ‘business-friendly’ regulators.)

On the wider ruling, Schrems also said: “This decision requires substantial improvements by Google. Their privacy policy now really needs to make it crystal clear what they do with users’ data. Users must also get an option to agree to only some parts of what Google does with their data and refuse other things.”

French digital rights group, La Quadrature du Net — which had filed a related complaint against Google, feeding the CNIL’s investigation — also declared victory today, noting it’s the first sanction in a number of GDPR complaints it has lodged against tech giants on behalf of 12,000 citizens.

“The rest of the complaints against Google, Facebook, Apple and Microsoft are still under investigation in Ireland. In any case, this is what this authority promises us,” it added in another tweet.


Google’s Android ad ID targeted in strategic GDPR tracking complaint

Now here’s an interesting GDPR complaint: Is Google illegally tracking Android users in Europe via a unique, device-assigned advertising ID?

First, what is the Android advertising ID? Per Google’s description to developers building apps for its smartphone platform, it is:

The advertising ID is a unique, user-resettable ID for advertising, provided by Google Play services. It gives users better controls and provides developers with a simple, standard system to continue to monetize their apps. It enables users to reset their identifier or opt out of personalized ads (formerly known as interest-based ads) within Google Play apps.

Not so fast, says noyb — a European not-for-profit privacy advocacy group that campaigns to get regulators to enforce existing rules around how people’s data can be used — the problem with offering a tracking ID that can only be reset is that there’s no way for an Android user to not be tracked.

Simply put, resetting a tracker is not the same thing as being able to not be tracked at all.
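The gap between “resettable” and “deletable” can be sketched with a toy model. All names here are hypothetical and this is not Google’s actual system; it simply illustrates why a reset neither erases previously collected data nor stops collection going forward:

```python
import uuid

# Toy model of a resettable advertising ID (hypothetical names; a concept
# sketch, not any vendor's actual implementation).

class Device:
    def __init__(self):
        self.ad_id = str(uuid.uuid4())

    def reset_ad_id(self):
        # A "reset" only swaps in a fresh identifier...
        self.ad_id = str(uuid.uuid4())

class AdServer:
    def __init__(self):
        self.events = []  # (ad_id, event) pairs, retained indefinitely

    def log(self, device, event):
        self.events.append((device.ad_id, event))

device, server = Device(), AdServer()
server.log(device, "app_open")
old_id = device.ad_id

device.reset_ad_id()          # user "resets" the tracker
server.log(device, "app_open")

# ...but the data collected under the old ID is untouched, and tracking
# simply continues under the new one.
assert any(ad_id == old_id for ad_id, _ in server.events)
assert len(server.events) == 2
```

The model makes the complaint’s point concrete: nothing in the reset operation deletes server-side history or prevents new events accruing under the replacement identifier.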

noyb has now filed a formal complaint against Google under Europe’s General Data Protection Regulation (GDPR), accusing it of tracking Android users via the ad ID without legally valid consent.

As we’ve said many, many, many times before, GDPR applies a particular standard if you’re relying on consent — as Google appears to be here, since Android users are asked to consent to its terms on device set up, yet must accept an advertising ID that can be reset but not disabled.

Yet, under the EU data protection framework, for consent to be legally valid it must be informed, purpose limited and freely given.

Freely given means there must be a choice (which must also be free).

Thus the question arises, if an Android user can’t say no to an ad ID tracker — they can merely keep resetting it (with no user control over any previously gathered data) — where’s their free choice to not be tracked by Google?

“In essence, you buy a new Android phone, but by adding a tracking ID they ship you a tracking device,” said Stefano Rossetti, privacy lawyer at noyb.eu, in a statement on the complaint.

noyb’s contention is that Google’s ‘choice’ is “between tracking or more tracking” — which isn’t, therefore, a genuine choice to not be tracked at all.

“Google claims that users can control the processing of their data, but when put to the test Android does not allow deleting the tracking ID,” it writes. “It only allows users to generate a new tracking ID to replace the existing one. This neither deletes the data that was collected before, nor stops tracking going forward.”

“It is grotesque,” continued Rossetti. “Google claims that if you want them to stop tracking you, you have to agree to new tracking. It is like cancelling a contract only under the condition that you sign a new one. Google’s system seems to structurally deny the exercise of users’ rights.”

We reached out to Google for comment on noyb’s complaint. At the time of writing the company had not responded but we’ll update this report if it provides any remarks.

The tech giant is under active GDPR investigation related to a number of other issues — including its location tracking of users; and its use of personal data for online advertising.

The latest formal complaint over its Android ad ID has been lodged with Austria’s data protection authority on behalf of an Austrian citizen. (GDPR contains provisions that allow for third parties to file complaints on behalf of individuals.)

noyb says the complaint is partially based on a recent report by the Norwegian Consumer Council — which analyzed how popular apps are rampantly sharing user data with the behavioral ad industry.

In terms of process, it notes that the Austrian DPA may involve other European data watchdogs in the case.

This is under a ‘one-stop-shop’ mechanism in the GDPR whereby interested watchdogs liaise on cross-border investigations, with one typically taking a lead investigator role (likely to be the Irish Data Protection Commission in any complaint against Google).

Under Europe’s GDPR, data regulators have major penalty powers — with fines that can scale as high as 4% of global annual turnover, which in Google’s case could amount to up to €5BN. They also have the power to order that data processing be suspended or stopped. (An outcome that would likely be far more expensive to a tech giant like Google.)
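For a rough sense of where the €5BN ceiling comes from, here is the Article 83(5) arithmetic. The revenue figure (Alphabet’s approximate 2019 global turnover) and the USD→EUR rate are our assumptions for illustration, not figures from the complaint:

```python
# Hedged illustration of the GDPR cap of 4% of global annual turnover.
alphabet_revenue_usd = 161_900_000_000   # assumed FY2019 global turnover
max_fine_usd = 0.04 * alphabet_revenue_usd
max_fine_eur = max_fine_usd * 0.90       # rough USD->EUR rate (assumed)

print(round(max_fine_eur / 1e9, 1))      # prints 5.8 (i.e. ~EUR 5-6BN)
```

Which is why, even on conservative assumptions, the theoretical maximum dwarfs the $57M fine France’s CNIL actually issued.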

However there has been a dearth of major fines since the regulation began being applied, almost two years ago (exception: France’s data watchdog hit Google with a $57M fine last year). So pressure is continuing to pile up over enforcement — especially on Ireland’s Data Protection Commission which handles many cross-border complaints but has yet to issue any decisions in a raft of cross-border cases involving a number of tech giants.


Otonomo raises $46 million to expand its automotive data marketplace

New vehicles today can produce a treasure trove of data. Without the proper tools, that data will sit undisturbed, rendering it worthless.

A number of companies have sprung up to help automakers manage and use data generated from connected cars. Israeli startup Otonomo is one such player that jumped on the scene in 2015 with a cloud-based software platform that captures and anonymizes vehicle data so it can then be used to create apps to provide services such as electric vehicle management, subscription-based fueling, parking, mapping, usage-based insurance and emergency service.

The startup announced this week it has raised $46 million to take its automotive data platform further. The capital was raised in a Series C funding round that included investments from SK Holdings, Avis Budget Group and Alliance Ventures. Existing investors Bessemer Venture Partners also participated. Otonomo has raised $82 million, to date.

The funds will be used to help Otonomo scale its business, improve its products and help it remain competitive, according to the company. Otonomo is also aiming to expand into new markets, particularly South Korea and Japan.

“We now have the expanded resources needed to deliver on our vision of making car data as valuable as possible for the entire transportation ecosystem, while adhering to the strictest privacy and security standards,” Otonomo CEO and founder Ben Volkow said in a statement.

Otonomo’s pitch focuses on creating opportunities to monetize connected car data while keeping it safe from the moment it is captured. Once the data is securely collected, the platform modifies it so companies can use it to develop apps and services for fleets, smart cities and individual customers. The platform also enables GDPR, CCPA and other privacy regulation-compliant solutions using both personal and aggregate data.

Today, Otonomo’s platform takes in 2.6 billion data points a day from more than 20 million vehicles through partnerships with automakers, fleets, and farm and construction equipment manufacturers. Otonomo has more than 25 partnerships, a list that includes Daimler, BMW, Mitsubishi Motor Company and Avis Budget Group. The company said it’s preparing to bring on seven more customers.

The opportunity for Otonomo is growing, based on forecasts, including one from SBD Automotive that predicts connected cars will account for more than 70% of cars sold in North American and European markets in 2020.


Germany’s COVID-19 contacts tracing app to link to labs for test result notification

A German research institute that’s involved in developing a COVID-19 contacts tracing app with the backing of the national government has released some new details about the work which suggest the app is being designed as more of a ‘one-stop shop’ for managing coronavirus impacts at an individual level, rather than having the sole function of alerting users to potential infection risk.

Work on the German app began at the start of March, per the Fraunhofer-Gesellschaft institute, with the Federal Ministry of Education and Research and the Federal Ministry of Health funding an initial feasibility study.

In a PDF published today, the research organization reveals the government-backed app will include functionality for health authorities to directly notify users about a COVID-19 test result if they’ve opted in to get results this way.

It says the system must ensure only people who test positive for the virus make their measurement data available, to avoid incorrect data being entered. For the purposes of “this validation process”, it envisages that “a digital connection to the existing diagnostic laboratories is implemented in the technical implementation”.

“App users can thus voluntarily activate this notification function and thus be informed more quickly and directly about their test results,” it writes in the press release (which we’ve translated from German with Google Translate) — arguing that such direct digital notification of tests results will mean that no “valuable time” is lost to curb the spread of the virus.

Governments across Europe are scrambling to get Bluetooth-powered contacts tracing apps off the ground, with apps also in the works from a number of other countries, including the UK and France, despite ongoing questions over the efficacy of digital contacts tracing vs such an infectious virus.

The great hope is that digital tools will offer a route out of economically crippling population lockdowns by providing a way to automate at least some contacts tracing — based on widespread smartphone penetration and the use of Bluetooth-powered device proximity as a proxy for coronavirus exposure.

Preventing a new wave of infections as lockdown restrictions are lifted is the near-term goal. Although — in line with Europe’s rights frameworks — use of contacts tracing apps looks set to be voluntary across most of the region, with governments wary about being seen to impose ‘health surveillance’ on citizens, as has essentially happened in China.

However, if contacts tracing apps end up larded with features that deep-link into national health systems, that raises questions about how optional their use will really be.

An earlier proposal by a German consortium of medical device manufacturers, laboratories, clinics, clinical data management systems and blockchain solution providers — proposing a blockchain-based Digital Corona Health Certificate, which was touted as being able to generate “verifiable, certified test results that can be fed into any tracing app” to cut down on false positives — claimed to have backing from the City of Cologne’s public health department, as one example of potential function creep.

In March, Der Spiegel also reported on a large-scale study being coordinated by the Helmholtz Center for Infection Research in Braunschweig, to examine antibody levels to try to determine immunity across the population. Germany’s Robert Koch Institute (RKI) was reportedly involved in that study — and has been a key operator in the national contacts tracing push.

Both RKI and the Fraunhofer-Gesellschaft institute are also involved in a parallel, German-led pan-EU standardization effort for COVID-19 contacts tracing apps (called PEPP-PT) that’s been the leading voice for apps to centralize proximity data with governments/health authorities, rather than storing it on users’ devices and performing risk processing locally.

As we reported earlier, PEPP-PT and its government backers appear to be squaring up for a battle with Apple over iOS restrictions on Bluetooth.

PEPP-PT bases its claim of being a “privacy-preserving” standard on not backing protocols or apps that use location data or mobile phone numbers — with only arbitrary (but pseudonymized) proximity IDs shared for the purpose of tracking close encounters between devices and potential coronavirus infections.
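To make “arbitrary (but pseudonymized) proximity IDs” concrete, here is a minimal sketch of rotating ephemeral Bluetooth identifiers in the spirit of decentralized protocols such as DP-3T. The key-derivation details (SHA-256 key ratchet, HMAC per epoch, 96 epochs per day) are illustrative assumptions, not any protocol’s actual specification:

```python
import hashlib
import hmac
import os

# Sketch of rotating pseudonymous proximity IDs (assumed scheme, in the
# spirit of DP-3T; not a real protocol specification).

def daily_key(prev_key):
    """Derive the next day's secret key by hashing the previous one."""
    return hashlib.sha256(prev_key).digest()

def ephemeral_ids(day_key, epochs=96):
    """Derive short-lived broadcast IDs for one day (~15 min each)."""
    return [
        hmac.new(day_key, epoch.to_bytes(2, "big"), hashlib.sha256).digest()[:16]
        for epoch in range(epochs)
    ]

key = os.urandom(32)            # device-local secret; never leaves the phone
ids_today = ephemeral_ids(key)  # broadcast over Bluetooth, one per epoch
key_tomorrow = daily_key(key)   # ratchet forward; a diagnosed user can later
                                # publish old keys so contacts can match IDs
```

The point of the design is that observers see only unlinkable 16-byte values, with no location data or phone numbers involved; the centralization debate is about where the matching against diagnosed users’ keys happens, not about this ID-generation step.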

It has claimed it’s agnostic between centralization of proximity data vs decentralization, though so far the only protocol it’s publicly committed to is a centralized one.

Yet, at the same time, regional privacy experts, the EU parliament and even the European Commission have urged national governments to practice data minimization and decentralization when it comes to COVID-19 contacts tracing, in order to boost citizen trust by shrinking the associated privacy risks.

If apps are voluntary, citizens’ trust must be earned not assumed, is the key argument. Without substantial uptake the utility of digital contacts tracing seems doubtful.

Apple and Google have also come down on the decentralized side of this debate — putting out a joint effort last week for an API and, later, an opt-in system-wide contacts tracing capability. The first version of their API is slated to be in developers’ hands next week.

Meanwhile, a coalition of nearly 300 academics signed an open letter at the start of this week warning that centralized systems risked surveillance creep — voicing support for decentralized protocols, such as DP-3T: Another contact tracing protocol that’s being developed by a separate European coalition which has been highly critical of PEPP-PT.

And while PEPP-PT claimed recently to have seven governments signed up to its approach, and 40 more in the pipeline, at least two of the claimed EU supporters (Switzerland and Spain) had actually said they would use a decentralized approach.

The coalition has also been losing support from a number of key research institutions which had initially backed its push for a “privacy-preserving” standard, as controversy around its intent and lack of transparency has grown.

Nonetheless the two biggest EU economies, Germany and France, appear to be digging in behind a push to centralize proximity data — putting Apple in their sights.

Bloomberg reported earlier this week that the French government is pressurizing Apple to remove Bluetooth restrictions for its COVID-19 contacts tracing app, which also relies on a ‘trusted authority’ running a central server (we’ve covered the French ROBERT protocol in detail here).

It’s possible Germany and France are sticking to their centralized guns because of wider plans to pack more into these contacts tracing apps than simply Bluetooth-powered alerts — as suggested by the Fraunhofer document.

Access to data is another likely motivator.

“Only if research can access sufficiently valid data is it possible to create forecasts that are the basis for planning further steps against the spread of the virus,” the institute goes on. (Though, as we’ve written before, the DP-3T decentralized protocol sets out a path for users to opt in to share proximity data for research purposes.)

Another strand that’s evident from the Fraunhofer PDF is sovereignty.

“Overall, the approach is based on the conviction that the state healthcare system must have sovereignty over which criteria, risk calculations, recommendations for action and feedback are in one such system,” it writes, adding: “In order to achieve the greatest possible usability on end devices on the market, technical cooperation with the targeted operating system providers, Google and Apple, is necessary.”

Apple and Google did not respond to requests for comment on whether they will be making any changes to their API as a result of French and German pressure.

Fraunhofer further notes that “full compatibility” between the German app and the centralized one being developed by French research institutes Inria and Inserm was achieved in the “past few weeks” — underlining that the two nations are leading this particular contacts tracing push.

In related news this week, Europe’s Data Protection Board (EDPB) put out guidance for developers of contacts tracing apps which stressed an EU legal principle related to processing personal data that’s known as purpose limitation — warning that apps need to have purposes “specific enough to exclude further processing for purposes unrelated to the management of the COVID-19 health crisis (e.g., commercial or law enforcement purposes)”.

Which sounds a bit like the regulator drawing a line in the sand to warn states that might be tempted to turn contacts tracing apps into coronavirus immunity passports.

The EDPB also urged that “careful consideration” be given to data minimisation and data protection by design and by default — two other key legal principles baked into Europe’s General Data Protection Regulation, albeit with some flex during a public health emergency.

However the regulatory body took a pragmatic view on the centralization vs decentralization debate — saying both approaches are “viable” in a contacts tracing context, with the key caveat that “adequate security measures” must be in place.


Google gobbling Fitbit is a major privacy risk, warns EU data protection advisor

The European Data Protection Board (EDPB) has intervened to raise concerns about Google’s plan to scoop up the health and activity data of millions of Fitbit users — at a time when the company is under intense scrutiny both over how extensively it tracks people online and over antitrust concerns.

Google confirmed its plan to acquire Fitbit last November, saying it would pay $7.35 per share for the wearable maker in an all-cash deal that valued Fitbit, and therefore the activity, health, sleep and location data it can hold on its more than 28M active users, at ~$2.1 billion.

Regulators are in the process of considering whether to allow the tech giant to gobble up all this data.

Google, meanwhile, is in the process of dialling up its designs on the health space.

In a statement issued after a plenary meeting this week the body that advises the European Commission on the application of EU data protection law highlights the privacy implications of the planned merger, writing: “There are concerns that the possible further combination and accumulation of sensitive personal data regarding people in Europe by a major tech company could entail a high level of risk to the fundamental rights to privacy and to the protection of personal data.”

Just this month the Irish Data Protection Commission (DPC) opened a formal investigation into Google’s processing of people’s location data — finally acting on GDPR complaints filed by consumer rights groups as early as November 2018, which argue the tech giant uses deceptive tactics to manipulate users in order to keep tracking them for ad-targeting purposes.

The Irish DPC, which is the lead privacy regulator for Google in the EU and a member of the EDPB, said the advisory body’s statement is a reflection of the collective views of data protection agencies across the bloc.

The EDPB’s statement goes on to reiterate the importance for EU regulators to assess what it describes as the “longer-term implications for the protection of economic, data protection and consumer rights whenever a significant merger is proposed”.

It also says it intends to remain “vigilant in this and similar cases in the future”.

The EDPB includes a reminder that Google and Fitbit have obligations under Europe’s General Data Protection Regulation to conduct a “full assessment of the data protection requirements and privacy implications of the merger” — and do so in a transparent way, under the regulation’s principle of accountability.

“The EDPB urges the parties to mitigate the possible risks of the merger to the rights to privacy and data protection before notifying the merger to the European Commission,” it also writes.

We reached out to Google for comment but at the time of writing it had not provided a response, nor answered our question asking what commitments it will make to Fitbit users regarding the privacy of their data.

Fitbit has previously claimed that users’ “health and wellness data will not be used for Google ads”.

However big tech has a history of subsequently steamrollering founder claims that ‘nothing will change’. (See, for example, Facebook’s WhatsApp U-turn on data-linking.)

“The EDPB will consider the implications that this merger may have for the protection of personal data in the European Economic Area and stands ready to contribute its advice on the proposed merger to the Commission if so requested,” the advisory body adds.

We also reached out to the European Commission’s competition unit for a response to the EDPB’s statement. A spokeswoman confirmed the transaction has not been formally notified to it. 

“It is always up to the companies to notify transactions with an EU dimension to the European Commission,” she added. 

It is not yet clear whether or not the acquisition will face merger control review in the EU.

Update: A Google spokesperson has now sent this statement: “We are acquiring Fitbit to help us develop devices in the highly competitive wearables space and the deal is subject to the usual regulatory approvals. Protecting peoples’ information is core to what we do, and we will continue to work constructively with regulators to answer their questions.”

The company also pointed to its original blog post about the acquisition — highlighting the claims that: “We will never sell personal information to anyone. Fitbit health and wellness data will not be used for Google ads. And we will give Fitbit users the choice to review, move, or delete their data.”

This report was updated with additional comment.

Source: Gadgets – TechCrunch


Facebook Dating launch blocked in Europe after it fails to show privacy workings

Facebook has been left red-faced after being forced to call off the launch date of its dating service in Europe because it failed to give its lead EU data regulator enough advance warning — including failing to demonstrate it had performed a legally required assessment of privacy risks.

Late yesterday Ireland’s Independent.ie newspaper reported that the Irish Data Protection Commission (DPC) had sent agents to Facebook’s Dublin office seeking documentation that Facebook had failed to provide — using inspection and document seizure powers set out in Section 130 of the country’s Data Protection Act.

In a statement on its website the DPC said Facebook first contacted it about the rollout of the dating feature in the EU on February 3.

“We were very concerned that this was the first that we’d heard from Facebook Ireland about this new feature, considering that it was their intention to roll it out tomorrow, 13 February,” the regulator writes. “Our concerns were further compounded by the fact that no information/documentation was provided to us on 3 February in relation to the Data Protection Impact Assessment [DPIA] or the decision-making processes that were undertaken by Facebook Ireland.”

Facebook announced its plan to get into the dating game all the way back in May 2018, trailing its Tinder-encroaching idea to bake a dating feature for non-friends into its social network at its F8 developer conference.

It went on to test launch the product in Colombia a few months later. And since then it’s been gradually adding more countries in South America and Asia. It also launched in the US last fall — soon after it was fined $5BN by the FTC for historical privacy lapses.

At the time of its US launch Facebook said dating would arrive in Europe by early 2020. It just didn’t think to keep its lead EU privacy regulator in the loop — despite the DPC having multiple (ongoing) investigations into other Facebook-owned products at this stage.

Which is either extremely careless or, well, an intentional fuck you to privacy oversight of its data-mining activities. (Among multiple probes being carried out under Europe’s General Data Protection Regulation, the DPC is looking into Facebook’s claimed legal basis for processing people’s data under the Facebook T&Cs, for example.)

The DPC’s statement confirms that its agents visited Facebook’s Dublin office on February 10 to carry out an inspection — in order to “expedite the procurement of the relevant documentation”.

Which is a nice way of the DPC saying Facebook spent a whole week still not sending it the required information.

“Facebook Ireland informed us last night that they have postponed the roll-out of this feature,” the DPC’s statement goes on.

Which is a nice way of saying Facebook fucked up and is being made to put a product rollout it’s been planning for at least half a year on ice.

The DPC’s head of communications, Graham Doyle, confirmed the enforcement action, telling us: “We’re currently reviewing all the documentation that we gathered as part of the inspection on Monday and we have posed further questions to Facebook and are awaiting the reply.”

“Contained in the documentation we gathered on Monday was a DPIA,” he added.

This raises the question of why Facebook didn’t send the DPIA to the DPC on February 3 — unless of course this document did not actually exist on that date…

We’ve reached out to Facebook for comment — and to ask when it carried out the DPIA. Update: A Facebook spokesperson has now sent this statement:

It’s really important that we get the launch of Facebook Dating right so we are taking a bit more time to make sure the product is ready for the European market. We worked carefully to create strong privacy safeguards, and complete the data processing impact assessment ahead of the proposed launch in Europe, which we shared with the IDPC when it was requested.

We’ve asked the company why, if it’s “really important” to get the launch “right” it did not provide the DPC with the required documentation in advance — instead of the regulator having to send agents to Facebook’s offices to get it themselves. We’ll update this report with any response.

We’ve also asked the DPC to confirm its next steps. The regulator could ask Facebook to make changes to how the product functions in Europe if it’s not satisfied it complies with EU laws. So a delay may mean many things.

Under GDPR there’s a requirement for data controllers to bake privacy by design and default into products that handle people’s information. (And a dating product clearly would be.)

A DPIA — a process whereby planned processing of personal data is assessed to consider the impact on the rights and freedoms of individuals — is meanwhile a requirement under the GDPR when, for example, individual profiling is taking place or there’s processing of sensitive data on a large scale.

Again, the launch of a dating product on a platform such as Facebook — which has hundreds of millions of regional users — would be a clear-cut case for such an assessment to be carried out ahead of any launch.

Source: TechCrunch


Recommendations for fintech startups navigating the procurement process

The expanding scope of fintech has been well documented in these digital pages. Payments, investing, financial planning and lending often spring to mind as “classic” fintech startups, but other business models like regtech, compliance, human resources and marketing are on the ascent.

For passionate and talented founders, the tireless pursuit of building innovative technology is critical and fundamental. That said, to be successful in financial services, significant time and effort needs to be dedicated to other business fundamentals: corporate setup, privacy and security. The financial services customer base presents unique challenges for fintech startups as the regulatory and operational requirements for third-party vendor assessment and management are, in comparison to most other industries, brutal. Issues that might go overlooked during the early stages of product design and team-building could turn into obstacles during the sales process.

Understanding the dynamics of the financial services procurement process is essential if you want to negotiate it as quickly and seamlessly as possible. And before diving head-first into the development of your killer fintech app, consider the following questions:

  • Is my technical architecture secure?
  • Who is responsible for cybersecurity in the organization?

Source: TechCrunch


Privacy experts slam UK’s “disastrous” failure to tackle unlawful adtech

The UK’s data protection regulator has been slammed by privacy experts for once again failing to take enforcement action over systematic breaches of the law linked to behaviorally targeted ads — despite warning last summer that the adtech industry is out of control.

The Information Commissioner’s Office (ICO) has also previously admitted it suspects the real-time bidding (RTB) system involved in some programmatic online advertising to be unlawfully processing people’s sensitive information. But rather than take any enforcement action against companies it suspects of law breaches, it has today issued another mildly worded blog post — in which it frames what it admits is a “systemic problem” as fixable via (yet more) industry-led “reform”.

Yet it’s exactly such industry-led self-regulation that’s created the unlawful adtech mess in the first place, data protection experts warn.

The pervasive profiling of Internet users by the adtech ‘data industrial complex’ has been coming under wider scrutiny by lawmakers and civil society in recent years — with sweeping concerns being raised in parliaments around the world that individually targeted ads provide a conduit for discrimination, exploit the vulnerable, accelerate misinformation and undermine democratic processes as a consequence of platform asymmetries and the lack of transparency around how ads are targeted.

In Europe, which has a comprehensive framework of data protection rights, the core privacy complaint is that these creepy individually targeted ads rely on a systemic violation of people’s privacy from what amounts to industry-wide, Internet-enabled mass surveillance — which also risks the security of people’s data at vast scale.

It’s now almost a year and a half since the ICO was the recipient of a major complaint into RTB — filed by Dr Johnny Ryan of private browser Brave; Jim Killock, director of the Open Rights Group; and Dr Michael Veale, a data and policy lecturer at University College London — laying out what the complainants described then as “wide-scale and systemic” breaches of Europe’s data protection regime.

The complaint — which has also been filed with other EU data protection agencies — argues that the systematic broadcasting of people’s personal data to bidders in the adtech chain is inherently insecure and thereby contravenes Europe’s General Data Protection Regulation (GDPR), which stipulates that personal data be processed “in a manner that ensures appropriate security of the personal data”.

The regulation also requires data processors to have a valid legal basis for processing people’s information in the first place — and RTB fails that test, per privacy experts, on either claimed basis: ‘consent’ (given the sheer number of entities and volumes of data being passed around, it’s not credible to meet GDPR’s ‘informed, specific and freely given’ threshold for consent to be valid); or ‘legitimate interests’ (which requires data processors to carry out balancing assessments to demonstrate the basis actually applies).
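To see why the complainants call the broadcast inherently insecure: a programmatic bid request bundles identifiers and context about the user and fans it out to every participating bidder, win or lose. The sketch below is illustrative only, loosely modeled on the shape of an OpenRTB bid request (field names simplified, values invented):

```python
import json

# Simplified, hypothetical bid request -- loosely modeled on OpenRTB.
bid_request = {
    "id": "req-123",
    "device": {
        "ip": "203.0.113.7",
        "ua": "Mozilla/5.0 ...",
        "geo": {"lat": 51.5, "lon": -0.1},
    },
    "user": {
        "id": "cookie-abc123",  # pseudonymous but persistent identifier
        "data": [{"segment": ["politics", "health-interest"]}],
    },
    "site": {"page": "https://example.com/article"},
}

def broadcast_to_bidders(request: dict, bidders: list) -> dict:
    """Every bidder receives the full request, whether or not it wins --
    and there is no technical control over onward use of the data."""
    payload = json.dumps(request)
    return {bidder: payload for bidder in bidders}

offers = broadcast_to_bidders(bid_request, ["dsp-a", "dsp-b", "dsp-c"])
assert len(offers) == 3  # three more parties now hold the personal data
```

The security complaint is structural: once the request leaves the exchange, the GDPR’s “appropriate security” requirement has no technical enforcement point among the hundreds of recipients.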

“We have reviewed a number of justifications for the use of legitimate interests as the lawful basis for the processing of personal data in RTB. Our current view is that the justification offered by organisations is insufficient,” writes Simon McDougall, the ICO’s executive director of technology and innovation, delivering a warning over the industry’s rampant misuse of legitimate interests to try to pass off RTB’s unlawful data processing as legit.

The ICO also isn’t exactly happy about what it’s found adtech doing on the Data Protection Impact Assessment front — saying, in so many words, that it’s come across widespread industry failure to actually, er, assess impacts.

“The Data Protection Impact Assessments we have seen have been generally immature, lack appropriate detail, and do not follow the ICO’s recommended steps to assess the risk to the rights and freedoms of the individual,” writes McDougall.

“We have also seen examples of basic data protection controls around security, data retention and data sharing being insufficient,” he adds.

Yet — again — despite fresh admissions of adtech’s lawfulness problem the regulator is choosing more stale inaction.

In the blog post McDougall does not rule out taking “formal” action at some point — but there’s only a vague suggestion of such activity being possible, and zero timeline for “develop[ing] an appropriate regulatory response”, as he puts it. (His preferred ‘E’ word in the blog is ‘engagement’; you’ll only find the word ‘enforcement’ in the footer link on the ICO’s website.)

“We will continue to investigate RTB. While it is too soon to speculate on the outcome of that investigation, given our understanding of the lack of maturity in some parts of this industry we anticipate it may be necessary to take formal regulatory action and will continue to progress our work on that basis,” he adds.

McDougall also trumpets some incremental industry fiddling — such as trade bodies agreeing to update their guidance — as somehow relevant to turning the tanker in a fundamentally broken system.

(Trade body the Internet Advertising Bureau’s UK branch has responded to developments with an upbeat note from its head of policy and regulatory affairs, Christie Dennehy-Neil, who lauds the ICO’s engagement as “a constructive process”, claiming: “We have made good progress” — before going on to urge its members and the wider industry to implement “the actions outlined in our response to the ICO” and “deliver meaningful change”. The statement climaxes with: “We look forward to continuing to engage with the ICO as this process develops.”)

McDougall also points to Google removing content categories from its RTB platform from next month (a move it announced months back, in November) as an important development; and seizes on the tech giant’s recent announcement of a proposal to phase out support for third party cookies within the next two years as ‘encouraging’.

Privacy experts have responded with facepalmed outrage to yet another can-kicking exercise by the UK regulator — warning that cosmetic tweaks to adtech won’t fix a system that’s designed to feast off an unlawful and inherently insecure high velocity background trading of Internet users’ personal data.

“When an industry is premised and profiting from clear and entrenched illegality that breach individuals’ fundamental rights, engagement is not a suitable remedy,” said UCL’s Veale in a statement. “The ICO cannot continue to look back at its past precedents for enforcement action, because it is exactly that timid approach that has led us to where we are now.”

The trio behind the RTB complaints (which includes Veale) have also issued a scathing collective response to more “regulatory ambivalence” — denouncing the lack of any “substantive action to end the largest data breach ever recorded in the UK”.

“The ‘Real-Time Bidding’ data breach at the heart of RTB market exposes every person in the UK to mass profiling, and the attendant risks of manipulation and discrimination,” they warn. “Regulatory ambivalence cannot continue. The longer this data breach festers, the deeper the rot sets in and the further our data gets exploited. This must end. We are considering all options to put an end to the systemic breach, including direct challenges to the controllers and judicial oversight of the ICO.”

Wolfie Christl, a privacy researcher who focuses on adtech — including contributing to a recent study looking at how extensively popular apps are sharing user data with advertisers — dubbed the ICO’s response “disastrous”.

“Last summer the ICO stated in their report that millions of people were affected by thousands of companies’ GDPR violations. I was sceptical when they announced they would give the industry six more months without enforcing the law. My impression is they are trying to find a way to impose cosmetic changes and keep the data industry happy rather than acting on their own findings and putting an end to the ubiquitous data misuse in today’s digital marketing, which should have happened years ago. The ICO seems to prioritize appeasing the industry over the rights of data subjects, and this is disastrous,” he told us.

“The way data-driven online marketing currently works is illegal at scale and it needs to be stopped from happening,” Christl added. “Each day EU data protection authorities allow these practices to continue further violates people’s rights and freedoms and perpetuates a toxic digital economy.

“This undermines the GDPR and generally trust in tech, perpetuates legal uncertainty for businesses, and punishes companies who comply and create privacy-respecting services and business models.

“Twenty months after the GDPR came into full force, it is still not enforced in major areas. We still see large-scale misuse of personal information all over the digital world. There is no GDPR enforcement against the tech giants and there is no enforcement against thousands of data companies beyond the large platforms. It seems that data protection authorities across the EU are either not able — or not willing — to stop many kinds of GDPR violations conducted for business purposes. We won’t see any change without massive fines and data processing bans. EU member states and the EU Commission must act.”

Source: TechCrunch


Will online privacy make a comeback in 2020?

Last year was a landmark for online privacy in many ways, with something of a consensus emerging that consumers deserve protection from the companies that sell their attention and behavior for profit.

The debate now is largely around how to regulate platforms, not whether it needs to happen.

The consensus among key legislators acknowledges that privacy is not just of benefit to individuals but can be likened to public health; a level of protection afforded to each of us helps inoculate democratic societies from manipulation by vested and vicious interests.

The fact that human rights are being systematically abused at population-scale because of the pervasive profiling of Internet users — a surveillance business that’s dominated in the West by tech giants Facebook and Google, and the adtech and data broker industry which works to feed them — was the subject of an Amnesty International report in November 2019 that urges legislators to take a human rights-based approach to setting rules for Internet companies.

“It is now evident that the era of self-regulation in the tech sector is coming to an end,” the charity predicted.

Democracy disrupted

The dystopian outgrowth of surveillance capitalism was certainly in awful evidence in 2019, with elections around the world attacked at cheap scale by malicious propaganda that relies on adtech platforms’ targeting tools to hijack and skew public debate, while the chaos agents themselves are shielded from democratic view.

Platform algorithms are also still encouraging Internet eyeballs towards polarized and extremist views by feeding a radicalized, data-driven diet that panders to prejudices in the name of maintaining engagement — despite plenty of raised voices calling out the programmed antisocial behavior. So what tweaks there have been still look like fiddling round the edges of an existential problem.

Worse still, vulnerable groups remain at the mercy of online hate speech which platforms not only can’t (or won’t) weed out, but whose algorithms often seem to deliberately choose to amplify — the technology itself being complicit in whipping up violence against minorities. It’s social division as a profit-turning service.

The outrage-loving tilt of these attention-hogging adtech giants has also continued directly influencing political campaigning in the West this year — with cynical attempts to steal votes by shamelessly platforming and amplifying misinformation.

From the Trump tweet-bomb we now see full-blown digital disinformation ops underpinning entire election campaigns, such as the UK Conservative Party’s strategy in the 2019 winter General Election, which featured doctored videos seeded to social media and keyword targeted attack ads pointing to outright online fakes in a bid to hack voters’ opinions.

Political microtargeting divides the electorate as a strategy to conquer the poll. The problem is it’s inherently anti-democratic.

No wonder, then, that repeat calls to beef up digital campaigning rules and properly protect voters’ data have so far fallen on deaf ears. The political parties all have their hands in the voter data cookie-jar. Yet it’s elected politicians whom we rely upon to update the law. This remains a grave problem for democracies going into 2020 — and a looming U.S. presidential election.

So it’s been a year when, even with rising awareness of the societal cost of letting platforms suck up everyone’s data and repurpose it to sell population-scale manipulation, not much has actually changed. Certainly not enough.

Yet looking ahead there are signs the writing is on the wall for the ‘data industrial complex’ — or at least that change is coming. Privacy can make a comeback.

Adtech under attack

Developments in late 2019 such as Twitter banning all political ads and Google shrinking how political advertisers can microtarget Internet users are notable steps — even as they don’t go far enough.

But it’s also a relatively short hop from banning microtargeting sometimes to banning profiling for ad targeting entirely.

Alternative online ad models (contextual targeting) are proven and profitable — just ask search engine DuckDuckGo. While the ad industry gospel that only behavioral targeting will do now has academic critics who suggest it offers far less uplift than claimed, even as — in Europe — scores of data protection complaints underline the high individual cost of maintaining the status quo.
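The distinction between the two models boils down to what the ad decision keys on: the page being viewed, versus an accumulated profile of the person viewing it. A toy sketch with made-up inventory (no real ad server’s API is implied):

```python
# Hypothetical ad inventory, keyed by topic.
ADS_BY_TOPIC = {
    "travel": "Cheap flights to Lisbon",
    "cooking": "Cast-iron skillet sale",
}

def contextual_ad(page_keywords: set):
    """Contextual targeting: keys only on the content being viewed.
    No user identifier or tracking history is needed."""
    for topic, ad in ADS_BY_TOPIC.items():
        if topic in page_keywords:
            return ad
    return None

def behavioral_ad(user_profile: dict):
    """Behavioral targeting: keys on a profile built by tracking the
    individual across sites -- the model under GDPR challenge."""
    for interest in user_profile.get("interests", []):
        if interest in ADS_BY_TOPIC:
            return ADS_BY_TOPIC[interest]
    return None

assert contextual_ad({"travel", "europe"}) == "Cheap flights to Lisbon"
assert behavioral_ad({"interests": ["cooking"]}) == "Cast-iron skillet sale"
```

The contextual path needs no personal data at all, which is why its proponents argue it sidesteps the consent problem entirely.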

Startups are also innovating in the pro-privacy adtech space (see, for example, the Brave browser).

Changing the system — turning the adtech tanker — will take huge effort, but there is a growing opportunity for just such systemic change.

This year, it might be too much to hope that regulators will get their act together enough to outlaw consent-less profiling of Internet users entirely. But it may be that those who have sought to proclaim ‘privacy is dead’ will find their unchecked data gathering facing death by a thousand regulatory cuts.

Or, tech giants like Facebook and Google may simply outrun the regulators by reengineering their platforms to cloak vast personal data empires with end-to-end encryption, making it harder for outsiders to regulate them, even as they retain enough of a fix on the metadata to stay in the surveillance business. Fixing that would likely require much more radical regulatory intervention.

European regulators are, whether they like it or not, in this race and under major pressure to enforce the bloc’s existing data protection framework. It seems likely to ding some current-gen digital tracking and targeting practices. And depending on how key decisions on a number of strategic GDPR complaints go, 2020 could see an unpicking — great or otherwise — of components of adtech’s dysfunctional ‘norm’.

Among the technologies under investigation in the region is real-time bidding: a system that powers a large chunk of programmatic digital advertising.

The complaint here is that it breaches the bloc’s General Data Protection Regulation (GDPR) because it’s inherently insecure to broadcast granular personal data to scores of entities involved in the bidding chain.

A recent event held by the UK’s data watchdog confirmed plenty of troubling findings. Google responded by removing some information from bid requests — though critics say it does not go far enough. Nothing short of removing personal data entirely will do in their view, which sums to ads that are contextually (not micro)targeted.

Powers that EU data protection watchdogs have at their disposal to deal with violations include not just big fines but data processing orders — which means corrective relief could be coming to take chunks out of data-dependent business models.

As noted above, the adtech industry has already been put on watch this year over current practices, even as it was given a generous half-year grace period to adapt.

In the event it seems likely that turning the ship will take longer. But the message is clear: change is coming. The UK watchdog is due to publish another report in 2020, based on its review of the sector. Expect that to further dial up the pressure on adtech.

Web browsers have also been doing their bit by baking in more tracker blocking by default. And this summer Marketing Land proclaimed the third party cookie dead — asking what’s next?

Alternatives and workarounds are already springing up, and more will follow (such as stuffing more data into first party cookies). But the notion of tracking by background default is under attack, if not quite yet coming unstuck.

Ireland’s DPC is also progressing on a formal investigation of Google’s online Ad Exchange. Further real-time bidding complaints have been lodged across the EU too. This is an issue that won’t be going away soon, however much the adtech industry might wish it.

Year of the GDPR banhammer?

2020 is the year that privacy advocates are really hoping that Europe will bring down the hammer of regulatory enforcement. Thousands of complaints have been filed since the GDPR came into force but precious few decisions have been handed down. Next year looks set to be decisive — even potentially make or break for the data protection regime.

Source: TechCrunch