Amit Garg and Sanjay Rao have spent the bulk of their professional lives developing technology, founding startups and investing in them, at companies and firms including Google, Microsoft, HealthIQ and Norwest Venture Partners.
Over their decade-long friendship the two men discussed working together on a venture fund, but the time was never right — until now. Since last August, the two men have been raising capital for their inaugural fund, Tau Ventures.
The name, like the two partners, is a bit wonky. Tau is two times pi and Garg and Rao chose it as the name for the partnership because it symbolizes their analytical approach to very early stage investing.
It’s a strange thing to launch a venture fund in a pandemic, but for Garg and Rao, the opportunity to provide very early stage investment capital into startups working on machine learning applications in healthcare, automation and business was too good to pass up.
Meanwhile, Rao, a Palo Alto, Calif. native, MIT alum, Microsoft product manager and founder of the Palo Alto-based accelerator Accelerate Labs, said it was important to give back to entrepreneurs after decades spent honing his skills as an operator in the Valley.
Both Rao and Garg acknowledge that a number of funds focused on machine learning have emerged, including Basis Set Ventures, SignalFire and Two Sigma Ventures, but they argue those investors lack the direct company-building experience the two new investors have.
Garg, for instance, has actually built a hospital in India and has a deep background in healthcare. As an investor, he’s already seen an exit through his investment in Nutonomy, and both men have a deep understanding of the enterprise market — especially around security.
So far, the firm has made three investments in automation, another three in enterprise software, and five in healthcare.
The firm currently has $17 million in capital under management raised from institutional investors like the law firm Wilson Sonsini and a number of undisclosed family offices and individuals, according to Garg.
Much of that capital was committed after the pandemic hit, Garg said. “We started August 29th… and did the final close May 29th.”
The idea was to close the fund and start putting capital to work — especially in an environment where other investors were burdened with sorting out their existing portfolios, and not able to put capital to work as quickly.
“Our last investment was done entirely over Zoom and Google Meet,” said Rao.
That virtual environment extends to the firm’s shareholder meetings and conferences, some of which have attracted over 1,000 attendees, according to the partners.
In the wake of yesterday’s landmark ruling by Europe’s top court — striking down a flagship transatlantic data transfer framework called Privacy Shield, and cranking up the legal uncertainty around processing EU citizens’ data in the U.S. in the process — Europe’s lead data protection regulator has fired its own warning shot at the region’s data protection authorities (DPAs), essentially telling them to get on and do the job of intervening to stop people’s data flowing to third countries where it’s at risk.
The original complaint that led to the Court of Justice of the EU (CJEU) ruling focused on Facebook’s use of a data transfer mechanism called Standard Contractual Clauses (SCCs) to authorize moving EU users’ data to the U.S. for processing.
Complainant Max Schrems asked the Irish Data Protection Commission (DPC) to suspend Facebook’s SCC data transfers in light of U.S. government mass surveillance programs. Instead, the regulator went to court to raise wider concerns about the legality of the transfer mechanism.
That in turn led Europe’s top judges to nuke the Commission’s adequacy decision, which underpinned the EU-U.S. Privacy Shield — meaning the U.S. no longer has a special arrangement greasing the flow of personal data from the EU. Yet, at the time of writing, Facebook is still using SCCs to process EU users’ data in the U.S. Much has changed, but the data hasn’t stopped flowing — yet.
Yesterday the tech giant said it would “carefully consider” the findings and implications of the CJEU decision on Privacy Shield, adding that it looked forward to “regulatory guidance.” It certainly didn’t offer to proactively flip a kill switch and stop the processing itself.
Ireland’s DPA, meanwhile, which is Facebook’s lead data regulator in the region, sidestepped questions over what action it would be taking in the wake of yesterday’s ruling — saying it (also) needed (more) time to study the legal nuances.
The DPC’s statement also only went so far as to say the use of SCCs for taking data to the U.S. for processing is “questionable” — adding that case-by-case analysis would be key.
The regulator remains the focus of sustained criticism in Europe over its enforcement record for major cross-border data protection complaints — with still zero decisions issued more than two years after the EU’s General Data Protection Regulation (GDPR) came into force, and an ever-growing backlog of open investigations into the data processing activities of platform giants.
In May, the DPC finally submitted to other DPAs for review its first draft decision on a cross-border case (an investigation into a Twitter security breach), saying it hoped the decision would be finalized in July. At the time of writing we’re still waiting for the bloc’s regulators to reach consensus on that.
The painstaking pace of enforcement around Europe’s flagship data protection framework remains a problem for EU lawmakers — whose two-year review last month called for uniformly “vigorous” enforcement by regulators.
The European Data Protection Supervisor (EDPS) made a similar call today, in the wake of the Schrems II ruling — which only looks set to further complicate the process of regulating data flows by piling yet more work on the desks of underfunded DPAs.
“European supervisory authorities have the duty to diligently enforce the applicable data protection legislation and, where appropriate, to suspend or prohibit transfers of data to a third country,” writes EDPS Wojciech Wiewiórowski, in a statement, which warns against further dithering or can-kicking on the intervention front.
“The EDPS will continue to strive, as a member of the European Data Protection Board (EDPB), to achieve the necessary coherent approach among the European supervisory authorities in the implementation of the EU framework for international transfers of personal data,” he goes on, calling for more joint working by the bloc’s DPAs.
Wiewiórowski’s statement also highlights what he dubs “welcome clarifications” regarding the responsibilities of data controllers and European DPAs — to “take into account the risks linked to the access to personal data by the public authorities of third countries.”
“As the supervisory authority of the EU institutions, bodies, offices and agencies, the EDPS is carefully analysing the consequences of the judgment on the contracts concluded by EU institutions, bodies, offices and agencies. The example of the recent EDPS’ own-initiative investigation into European institutions’ use of Microsoft products and services confirms the importance of this challenge,” he adds.
Part of the complexity of enforcing Europe’s data protection rules is the lack of a single authority; instead, a varied patchwork of supervisory authorities is responsible for investigating complaints and issuing decisions.
Now, with a CJEU ruling that calls for regulators to assess third countries themselves — to determine whether the use of SCCs is valid in a particular use-case and country — there’s a risk of further fragmentation should different DPAs jump to different conclusions.
Yesterday, in its response to the CJEU decision, Hamburg’s DPA criticized the judges for not also striking down SCCs, saying it was “inconsistent” for them to invalidate Privacy Shield yet allow this other mechanism for international transfers. Supervisory authorities in Germany and Europe must now quickly agree how to deal with companies that continue to rely illegally on the Privacy Shield, the DPA warned.
In the statement, Hamburg’s data commissioner, Johannes Caspar, added: “Difficult times are looming for international data traffic.”
He also shot off a blunt warning that: “Data transmission to countries without an adequate level of data protection will… no longer be permitted in the future.”
Compare and contrast that with the Irish DPC talking about use of SCCs being “questionable,” case by case. (Or the U.K.’s ICO offering this bare minimum.)
Caspar also emphasized the challenge facing the bloc’s patchwork of DPAs to develop and implement a “common strategy” toward dealing with SCCs in the wake of the CJEU ruling.
In a press note today, Berlin’s DPA also took a tough line, warning that data transfers to third countries would only be permitted if they have a level of data protection essentially equivalent to that offered within the EU.
In the case of the U.S. — home to the largest and most used cloud services — Europe’s top judges yesterday reiterated very clearly that that is not in fact the case.
“The CJEU has made it clear that the export of data is not just about the economy but people’s fundamental rights must be paramount,” Berlin data commissioner Maja Smoltczyk said in a statement [which we’ve translated using Google Translate].
“The times when personal data could be transferred to the U.S. for convenience or cost savings are over after this judgment,” she added.
Both DPAs warned the ruling has implications for the use of cloud services where data is processed in other third countries where the protection of EU citizens’ data likewise cannot be guaranteed, i.e. not just the U.S.
On this front, Smoltczyk name-checked China, Russia and India as countries EU DPAs will have to assess for similar problems.
“Now is the time for Europe’s digital independence,” she added.
Some commentators (including Schrems himself) have also suggested the ruling could see companies switching to local processing of EU users’ data. Though it’s also interesting to note the judges chose not to invalidate SCCs — thereby offering a path to legal international data transfers, but only provided the necessary protections are in place in that given third country.
Also issuing a response to the CJEU ruling today was the European Data Protection Board (EDPB), the body made up of representatives from DPAs across the bloc. Chair Andrea Jelinek put out an emollient statement, writing that: “The EDPB intends to continue playing a constructive part in securing a transatlantic transfer of personal data that benefits EEA citizens and organisations and stands ready to provide the European Commission with assistance and guidance to help it build, together with the U.S., a new framework that fully complies with EU data protection law.”
Short of radical changes to U.S. surveillance law, it’s tough to see how any new framework could be made to legally stick, though. Privacy Shield’s predecessor arrangement, Safe Harbour, stood for around 15 years. Its shiny “new and improved” replacement didn’t even last five.
In the wake of the CJEU ruling, data exporters and importers are required to assess a country’s data regime against EU legal standards for adequacy before using SCCs to transfer data there.
“When performing such prior assessment, the exporter (if necessary, with the assistance of the importer) shall take into consideration the content of the SCCs, the specific circumstances of the transfer, as well as the legal regime applicable in the importer’s country. The examination of the latter shall be done in light of the non-exhaustive factors set out under Art 45(2) GDPR,” Jelinek writes.
“If the result of this assessment is that the country of the importer does not provide an essentially equivalent level of protection, the exporter may have to consider putting in place additional measures to those included in the SCCs. The EDPB is looking further into what these additional measures could consist of.”
Again, it’s not clear what “additional measures” a platform could plausibly deploy to “fix” the gaping lack of redress afforded to foreigners by U.S. surveillance law. Major legal surgery does seem to be required to square this circle.
Jelinek said the EDPB would be studying the judgement with the aim of putting out more granular guidance in the future. But her statement warns data exporters they have an obligation to suspend data transfers or terminate SCCs if contractual obligations are not or cannot be complied with, or else to notify a relevant supervisory authority if it intends to continue transferring data.
In her roundabout way, she also warns that DPAs now have a clear obligation to terminate SCCs where the safety of data cannot be guaranteed in a third country.
“The EDPB takes note of the duties for the competent supervisory authorities (SAs) to suspend or prohibit a transfer of data to a third country pursuant to SCCs, if, in the view of the competent SA and in the light of all the circumstances of that transfer, those clauses are not or cannot be complied with in that third country, and the protection of the data transferred cannot be ensured by other means, in particular where the controller or a processor has not already itself suspended or put an end to the transfer,” Jelinek writes.
One thing is crystal clear: Any sense of legal certainty U.S. cloud services were deriving from the existence of the EU-U.S. Privacy Shield — with its flawed claim of data protection adequacy — has vanished like summer rain.
Apple and Google have provided a number of updates about the technical details of their joint contact tracing system, which they’re now exclusively referring to as an “exposure notification” technology, since the companies say this is a better way to describe what they’re offering. The system is just one part of a contact tracing system, they note, not the entire thing. Changes include modifications made to the API that the companies say provide stronger privacy protections for individual users, and changes to how the API works that they claim will enable health authorities building apps that make use of it to develop more effective software.
The additional measures being implemented to protect privacy include changing the cryptography mechanism for generating the keys used to trace potential contacts. They’re no longer specifically bound to a 24-hour period, and they’re now randomly generated instead of derived from a so-called “tracing key” that was permanently attached to a device. In theory, with the old system, an advanced enough attack with direct access to the device could potentially be used to figure out how individual rotating keys were generated from the tracing key, though that would be very, very difficult. Apple and Google clarified that it was included for the sake of efficiency originally, but they later realized they didn’t actually need this to ensure the system worked as intended, so they eliminated it altogether.
The new method makes it even more difficult for a would-be bad actor to determine how the keys are derived, and then to use that information to track specific individuals. Apple and Google’s goal is to ensure this system does not link contact tracing information to any individual’s identity (except for the individual’s own use), and this change should help further ensure that’s the case.
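As a rough sketch of the updated key scheme (the function names and exact derivation here are illustrative assumptions, not the published specification), the daily key is now generated randomly rather than derived from a permanent per-device secret, and the rotating identifiers broadcast over Bluetooth are derived from it one-way:

```python
import os
import hashlib

def new_daily_key() -> bytes:
    # Randomly generated each day. Under the old scheme this was instead
    # derived from a permanent per-device "tracing key".
    return os.urandom(16)

def rolling_identifier(daily_key: bytes, interval: int) -> bytes:
    # One-way derivation: an observer who collects broadcast identifiers
    # cannot link them to each other, or to a device, without the daily key.
    data = daily_key + interval.to_bytes(4, "big")
    return hashlib.sha256(data).digest()[:16]
```

Because each day's key is random rather than derived, compromising one day's key reveals nothing about identifiers broadcast on any other day.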
The companies will now also be encrypting any metadata associated with specific Bluetooth signals, including the strength of signal and other info. This metadata can theoretically be used in sophisticated reverse identification attempts, by comparing the metadata associated with a specific Bluetooth signal with known profiles of Bluetooth radio signal types as broken down by device and device generation. Taken alone, it’s not much of a risk in terms of exposure, but this additional step means it’s even harder to use that as one of a number of vectors for potential identification for malicious use.
It’s worth noting that Google and Apple say this is intended as a time-limited service, and so it has a built-in way to disable the feature at a time to be determined by regional authorities, on a case-by-case basis.
Finally on the privacy front, any apps built using the API will now be provided exposure time in five-minute intervals, with a maximum total exposure time reported of 30 minutes. Rounding these to specific five-minute duration blocks and capping the overall limit across the board helps ensure this info, too, is harder to link to any specific individual when paired with other metadata.
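That rounding-and-capping behavior can be sketched in a few lines (assumed logic; whether the real API rounds up or down within a block isn't specified here, so this sketch floors to the last completed five-minute block):

```python
def reported_exposure_minutes(raw_seconds: float) -> int:
    # Report exposure duration only in 5-minute increments,
    # capped at a 30-minute maximum, so precise durations
    # can't be used as an identifying signal.
    five_minute_blocks = int(raw_seconds // (5 * 60))
    minutes = five_minute_blocks * 5
    return min(minutes, 30)
```

Coarsening the value this way means many different true durations map to the same reported figure, which is exactly what makes it harder to correlate with other metadata.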
On the developer and health authority side, Apple and Google will now be providing signal strength information in the form of Bluetooth radio power output data, which will provide a more accurate measure of distance between two devices in the case of contact, particularly when used with existing received signal strength info from the corresponding device that the API already provides access to.
Individual developers can also set their own parameters in terms of how strong a signal is and what duration will trigger an exposure event. This is better for public health authorities because it allows them to be specific about what level of contact actually defines a potential contact, as it varies depending on geography in terms of the official guidance from health agencies. Similarly, developers can now determine how many days have passed since an individual contact event, which might alter their guidance to a user (i.e. if it’s already been 14 days, measures would be very different from if it’s been two).
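To illustrate, a health authority's app might express those tunable parameters along these lines (all names and default values here are hypothetical, not part of the actual API):

```python
from dataclasses import dataclass

@dataclass
class ExposureConfig:
    # Illustrative thresholds; real values would come from the local health agency.
    max_attenuation_db: int = 55    # higher attenuation = weaker signal = more distance
    min_duration_minutes: int = 10  # briefer contacts don't trigger a notification
    relevant_days: int = 14         # contact events older than this are ignored

def triggers_exposure_event(attenuation_db: int, duration_minutes: int,
                            days_since_contact: int, cfg: ExposureConfig) -> bool:
    # A contact counts only if it was close enough, long enough and recent enough.
    return (attenuation_db <= cfg.max_attenuation_db
            and duration_minutes >= cfg.min_duration_minutes
            and days_since_contact <= cfg.relevant_days)
```

Keeping these thresholds in app-level configuration, rather than baked into the API, is what lets each health authority match its own definition of a meaningful contact.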
Apple and Google are also switching the cryptographic algorithm used to generate keys from HMAC to AES. The reason for the switch is that AES can be accelerated by on-board hardware in many mobile devices, making the API more energy efficient and reducing its performance impact on smartphones.
As we reported Thursday, Apple and Google also confirmed that they’re aiming to distribute the beta seed version of the OS update that will support the system next week. On Apple’s side, the update will support any iOS hardware released over the past four years running iOS 13. On the Android side, it will cover around 2 billion devices globally, Google said.
Coronavirus tracing: Platforms versus governments
One key outstanding question is what will happen in the case of governments that choose to use centralized protocols for COVID-19 contact tracing apps, with proximity data uploaded to a central server — rather than opting for a decentralized approach, which Apple and Google are supporting with an API.
In Europe, the two major EU economies, France and Germany, are both developing contact tracing apps based on centralized protocols — the latter planning deep links to labs to support digital notification of COVID-19 test results. The U.K. is also building a tracing app that will reportedly centralize data with the local health authority.
This week Bloomberg reported that the French government is pressuring Apple to remove technical restrictions on Bluetooth access in iOS, with the digital minister, Cedric O, saying in an interview Monday: “We’re asking Apple to lift the technical hurdle to allow us to develop a sovereign European health solution that will be tied to our health system.”
Meanwhile PEPP-PT, a German-led standardization push around COVID-19 contact tracing apps that has so far only given public backing to a centralized protocol (despite claiming it will support both approaches), said last week that it wants to see changes made to the Google-Apple API to accommodate centralized protocols.
Asked about this issue, an Apple spokesman told us the company isn’t commenting on the apps or plans of specific countries. But he pointed back to a position on Bluetooth set out in an earlier statement with Google, in which the companies write that user privacy and security are “central” to their design.
Judging by the updates to Apple and Google’s technical specifications and API framework, as detailed above, the answer to whether the tech giants will bow to government pressure to support state centralization of proximity social graph data looks to be a strong “no.”
The latest tweaks look intended to reinforce individual privacy and further shrink the ability of outside entities to repurpose the system to track people and/or harvest a map of all their contacts.
The sharpening of Apple and Google’s nomenclature is also interesting in this regard — with the pair now talking about “exposure notification” rather than “contact tracing” as the preferred terminology for the digital intervention. The shift of emphasis suggests they’re keen to avoid any risk of their role being (mis)interpreted as supporting broader state surveillance of citizens’ social graphs under the guise of a coronavirus response.
Backers of decentralized protocols for COVID-19 contact tracing — such as DP-3T, a key influence for the Apple-Google joint effort that’s being developed by a coalition of European academics — have warned consistently of the risk of surveillance creep if proximity data is pooled on a central server.
Apple and Google’s change of terminology doesn’t bode well for governments with ambitions to build what they’re counter-branding as “sovereign” fixes — aka data grabs that do involve centralizing exposure data. Although whether this means we’re headed for a big standoff between certain governments and Apple over iOS security restrictions — à la Apple vs the FBI — remains to be seen.
Earlier today, Apple and Google’s EU privacy chiefs also took part in a panel discussion organized by a group of European parliamentarians, which specifically considered the question of centralized versus decentralized models for contact tracing.
Asked about supporting centralized models for contact tracing, the tech giants offered a dodge, rather than a clear “no.”
“Our goal is to really provide an API to accelerate applications. We’re not obliging anyone to use it as a solution. It’s a component to help make it easier to build applications,” said Google’s Dave Burke, VP of Android engineering.
“When we build something we have to pick an architecture that works,” he went on. “And it has to work globally, for all countries around the world. And when we did the analysis and looked at different approaches we were very heavily inspired by the DP-3T group and their approach — and that’s what we have adopted as a solution. We think that gives the best privacy preserving aspects of the contacts tracing service. We think it’s also quite rich in epidemiological data that we think can be derived from it. And we also think it’s very flexible in what it could do. [The choice of approach is] really up to every member state — that’s not the part that we’re doing. We’re just operating system providers and we’re trying to provide a thin layer of an API that we think can help accelerate these apps but keep the phone in a secure, private mode of operation.”
“That’s really important for the expectations of users,” Burke added. “They expect the devices to keep their data private and safe. And then they expect their devices to also work well.”
DP-3T’s Michael Veale was also on the panel — busting what he described as some of the “myths” about decentralized versus centralized approaches to contact tracing.
“The [decentralized] system is designed to provide data to epidemiologists to help them refine and improve the risk score — even daily,” he said. “This is totally possible. We can do this using advanced methods. People can even choose to provide additional data if they want to epidemiologists — which is not really required for improving the risk score but might help.”
“Some people think a decentralized model means you can’t have a health authority do that first call [to a person exposed to a risk of infection]. That’s not true. What we don’t do is we don’t tag phone numbers and identities like a centralized model can to the social network. Because that allows misuse,” he added. “All we allow is that at the end of the day the health authority receives a list separate from the network of whose phone number they can call.”
MEP Sophie in ‘t Veld, who organized the online event, noted at the top of the discussion that they had also invited PEPP-PT to join the call, but said no one from the coalition had been able to attend the video conference.
California’s new privacy law was years in the making.
The law, California’s Consumer Privacy Act — or CCPA — took effect on January 1, allowing state residents to reclaim their right to access and control their personal data. Inspired by Europe’s GDPR, the CCPA is the largest statewide privacy law change in a generation. The new law lets users request a copy of the data that tech companies have on them, delete the data when they no longer want a company to have it, and demand that their data isn’t sold to third parties. All of this is much to the chagrin of the tech giants, some of which had spent millions to comply with the law and have many more millions set aside to deal with the anticipated influx of consumer data access requests.
But to say things are going well is a stretch.
Many of the tech giants that kicked and screamed in resistance to the new law have acquiesced and accepted their fate — at least until something different comes along. The California tech scene had more than a year to prepare, but some have made it downright difficult and — ironically — more invasive in some cases for users to exercise their rights, largely because every company has a different interpretation of what compliance should look like.
Alex Davis is just one California resident who tried to use his new rights under the law to make a request to delete his data. He vented his annoyance on Twitter, saying companies have responded to CCPA by making requests “as confusing and difficult as possible in new and worse ways.”
“I’ve never seen such deliberate attempts to confuse with design,” he told TechCrunch. He referred to what he described as “dark patterns,” a type of user interface design that tries to trick users into making certain choices, often against their best interests.
“I tried to make a deletion request but it bogged me down with menus that kept redirecting… things to be turned on and off,” he said.
Despite his frustration, Davis got further than others. Just as some companies have made it easy for users to opt-out of having their data sold by adding the legally required “Do not sell my info” links on their websites, many have not. Some have made it near-impossible to find these “data portals,” which companies set up so users can request a copy of their data or delete it altogether. For now, California companies are still in a grace period — but have until July when the CCPA’s enforcement provisions kick in. Until then, users are finding ways around it — by collating and sharing links to data portals to help others access their data.
“We really see a mixed story on the level of CCPA response right now,” said Jay Cline, who heads up consulting giant PwC’s data privacy practice, describing it as a patchwork of compliance.
PwC’s own data found that only 40% of the largest 600 U.S. companies had a data portal. Only a fraction, Cline said, extended their portals to users outside of California, even though other states are gearing up to pass laws similar to the CCPA.
But not all data portals are created equally. Given how much data companies store on us — personal or otherwise — the risks of getting things wrong are greater than ever. Tech companies are still struggling to figure out the best way to verify each data request to access or delete a user’s data without inadvertently giving it away to the wrong person.
Last year, security researcher James Pavur impersonated his fiancee and tricked tech companies into turning over vast amounts of data about her, including credit card information, account logins and passwords and, in one case, a criminal background check. Only a few of the companies asked for verification. Two years ago, Akita founder Jean Yang described someone hacking into her Spotify account and requesting her account data as an “unfortunate consequence” of GDPR, which mandated companies operating on the continent allow users access to their data.
The CCPA says companies should verify a person’s identity to a “reasonable degree of certainty.” For some that’s just an email address to send the data.
Others require sending in even more sensitive information just to prove it’s them.
Indeed, i360, a little-known advertising and data company, until recently asked California residents for their full Social Security number. This recently changed to just the last four digits. Verizon (which owns TechCrunch) wants its customers and users to upload their driver’s license or state ID to verify their identity. Comcast asks for the same, but goes the extra step of asking for a selfie before it will turn over any of a customer’s data.
Comcast asks for the same amount of information to verify a data request as the controversial facial recognition startup, Clearview AI, which recently made headlines for creating a surveillance system made up of billions of images scraped from Facebook, Twitter and YouTube to help law enforcement trace a person’s movements.
As much as CCPA has caused difficulties, it has helped forge an entirely new class of compliance startups ready to help large and small companies alike handle the regulatory burdens to which they are subject. Several startups in the space are taking advantage of the $55 billion expected to be spent on CCPA compliance in the next year — like Segment, which gives customers a consolidated view of the data they store; Osano, which helps companies comply with CCPA; and Securiti, which just raised $50 million to help expand its CCPA offering. With CCPA and GDPR under their belts, their services are designed to scale to accommodate new state or federal laws as they come in.
Another startup, Mine, which lets users “take ownership” of their data by acting as a broker to allow users to easily make requests under CCPA and GDPR, had a somewhat bumpy debut.
The service asks users to grant it access to their inbox, scanning email subject lines for company names and using that data to determine which companies a user can request their data from, or ask to delete it. (The service requests access to a user’s Gmail, but the company claims it will “never read” users’ emails.) Last month during a publicity push, Mine inadvertently copied a couple of emailed data requests to TechCrunch, allowing us to see the names and email addresses of two requesters who wanted Crunch, a popular gym chain with a similar name, to delete their data.
(Screenshot: Zack Whittaker/TechCrunch)
TechCrunch alerted Mine — and the two requesters — to the security lapse.
“This was a mix-up on our part where the engine that finds companies’ data protection offices’ addresses identified the wrong email address,” said Gal Ringel, co-founder and chief executive at Mine. “This issue was not reported during our testing phase and we’ve immediately fixed it.”
For now, many startups have caught a break.
The smaller, early-stage startups that don’t yet make $25 million in annual revenue or store personal data on more than 50,000 users or devices will largely escape having to immediately comply with CCPA. But that doesn’t mean startups can be complacent. As early-stage companies grow, so will their legal responsibilities.
“For those who did launch these portals and offer rights to all Americans, they are in the best position to be ready for these additional states,” said Cline. “Smaller companies in some ways have an advantage for compliance if their products or services are commodities, because they can build in these controls right from the beginning,” he said.
CCPA may have gotten off to a bumpy start, but time will tell if things get easier. Just this week, California’s attorney general Xavier Becerra released newly updated guidance aimed at trying to “fine tune” the rules, per his spokesperson. It goes to show that even California’s lawmakers are still trying to get the balance right.
But with the looming threat of hefty fines just months away, time is running out for the non-compliant.