
Launched with $17 million by two former Norwest investors, Tau Ventures is ready for its closeup

Amit Garg and Sanjay Rao have spent the bulk of their professional lives developing technology, founding startups and investing in startups at organizations including Google, Microsoft, HealthIQ and Norwest Venture Partners.

Over their decade-long friendship the two men discussed working together on a venture fund, but the time was never right — until now. Since last August, they have been raising capital for their inaugural fund, Tau Ventures.

The name, like the two partners, is a bit wonky. Tau is two times pi and Garg and Rao chose it as the name for the partnership because it symbolizes their analytical approach to very early stage investing.

It’s a strange thing to launch a venture fund in a pandemic, but for Garg and Rao, the opportunity to provide very early stage investment capital into startups working on machine learning applications in healthcare, automation and business was too good to pass up.

Garg had spent twenty years in Silicon Valley working at Google and launching companies including HealthIQ. Over the years he’d amassed an investment portfolio that included the autonomous vehicle company Nutonomy, as well as BioBeats, Glooko, Cohero Health, Terapede, Figure 1, HealthifyMe, Healthy.io and RapidDeploy.

Meanwhile, Rao, a Palo Alto, Calif. native, MIT alum, former Microsoft product manager and founder of the Palo Alto-based accelerator Accelerate Labs, said that it was important to give back to entrepreneurs after decades spent in the Valley honing his skills as an operator.


Both Rao and Garg acknowledge that a number of funds focused on machine learning have emerged, including Basis Set Ventures, SignalFire and Two Sigma Ventures, but they argue those investors lack the direct company-building experience that the two bring.

Garg, for instance, has actually built a hospital in India and has a deep background in healthcare. As an investor, he’s already seen an exit through his investment in Nutonomy, and both men have a deep understanding of the enterprise market — especially around security.

So far, the firm has made three investments in automation, another three in enterprise software and five in healthcare.

The firm currently has $17 million in capital under management raised from institutional investors like the law firm Wilson Sonsini and a number of undisclosed family offices and individuals, according to Garg.

Much of that capital was committed after the pandemic hit, Garg said. “We started August 29th… and did the final close May 29th.”

The idea was to close the fund and start putting capital to work — especially in an environment where other investors were burdened with sorting out their existing portfolios, and not able to put capital to work as quickly.

“Our last investment was done entirely over Zoom and Google Meet,” said Rao.

That virtual environment extends to the firm’s shareholder meetings and conferences, some of which have attracted over 1,000 attendees, according to the partners.


On-demand mental health service provider Ginger raises $50 million

Ginger, a provider of on-demand mental healthcare services, has raised $50 million in a new round of funding.

The new capital comes as mental health and wellness has emerged as the next big area of interest for investors in new technology and healthcare services companies.

Mental health startups saw record deal volumes in the second quarter of 2020 on the heels of rising demand caused by the COVID-19 pandemic, according to the data analysis firm CB Insights. More than 55 companies raised rounds of funding over the quarter, even though deal amounts declined 15% to $491 million. That’s still nearly half a billion dollars invested in mental health in one quarter alone.

What started in 2011 as a research-based company spun out of work at the Massachusetts Institute of Technology has become one of the largest providers of mental health services, offered primarily through employer-sponsored health insurance plans.

Through Ginger’s services, patients have access to a care coordinator who is the first point of entry into the company’s mental health plans. That person is a trained behavioral health coach — typically someone with a master’s degree in psychology, a behavioral health coaching certificate from a school like Duke, UCLA, Michigan or Columbia, and 200 hours of training provided by Ginger itself.

These health coaches provide the majority of care that Ginger’s patients receive. For more serious conditions, Ginger will bring in specialists to coordinate care or provide access to medications to alleviate the condition, according to the company’s chief executive officer, Russell Glass.

Ginger began offering its on-demand care services in 2016 and counts tens of thousands of active users on the platform. It charges employers a per-employee, per-month fee for access to its services and provides mental health services to hundreds of thousands of employees through corporate benefit plans, Glass said.

Over 200 companies, including Delta Air Lines, Sanofi, Chegg, Domino’s, SurveyMonkey, and Sephora, pay  Ginger to cost-efficiently provide employees with high-quality mental healthcare. Ginger members can access virtual therapy and psychiatry sessions as an in-network benefit through the company’s relationships with leading regional and national health plans, including Optum Behavioral Health, Anthem California, and Aetna Resources for Living, according to a statement.

“Our entire mission here is to break the supply/demand imbalance and provide far more care,” said Glass in an interview. “Ultimately we want Ginger to be available to anybody who has a need. Being accessible to anybody, anywhere is an important part of the strategy. That means direct-to-consumer will be a direction we head in.”

For now, the company will use the money to build out its partner ecosystem with companies like Cigna, an investor in the company’s latest $50 million round. Ginger will also look to work with government payers to reach more people. Eventually, direct-to-consumer could become a larger piece of the business as the company drives down the cost of care.

It’s also investing in natural language processing and machine learning to automate care pathways and personalize patient care.

The company’s $50 million Series D round was co-led by Advance Venture Partners and Bessemer Venture Partners, with additional participation from Cigna Ventures and existing investors such as Jeff Weiner, Executive Chairman of LinkedIn, and Kaiser Permanente Ventures. To date, Ginger has raised roughly $120 million. 

 Even as Ginger is working through the existing network of employer benefit plans and stand-alone insurance providers to offer its mental health services, other startups are raising money to offer employer-provided mental health and wellness plans. SonderMind is working to make it easier for independent mental health professionals to bill insurers, AbleTo helps employers screen for undiagnosed mental health conditions, and SilverLight Health partners with organizations to digitally monitor and manage mental health care. 

Meanwhile other startups are going direct-to-consumer with a flood of offerings around mental health. Well-financed, billion dollar-valued companies like Ro and Hims are offering mental health and wellness packages to customers, while Headspace has both a consumer facing and employer benefit offering. And upstart companies like Real are focusing on providing care specifically for women.

With its funding round, Ginger adds two new backers: David ibnAle, a founding partner at Advance Venture Partners (AVP), the investment firm behind S.I. Newhouse’s family-owned media and technology holding company, Advance; and the digital health investment guru Steve Kraus from Bessemer Venture Partners.

“AVP invests in companies that are using technology to tackle large-scale, global challenges and transform traditional businesses and business models,” said David ibnAle, Founding Partner of Advance Venture Partners. “Ginger is doing just that. We are excited to partner with an exceptional team to help make high-quality, on-demand mental healthcare a reality for millions more people around the world.”


We need a new field of AI to combat racial bias

Since widespread protests over racial inequality began, IBM announced it would cancel its facial recognition programs to advance racial equity in law enforcement. Amazon suspended police use of its Rekognition software for one year to “put in place stronger regulations to govern the ethical use of facial recognition technology.”

But we need more than regulatory change; the entire field of artificial intelligence (AI) must mature out of the computer science lab and accept the embrace of the entire community.

We can develop amazing AI that works in the world in largely unbiased ways. But to accomplish this, AI can’t be just a subfield of computer science (CS) and computer engineering (CE), like it is right now. We must create an academic discipline of AI that takes the complexity of human behavior into account. We need to move from computer science-owned AI to computer science-enabled AI. The problems with AI don’t occur in the lab; they occur when scientists move the tech into the real world of people. Training data in the CS lab often lacks the context and complexity of the world you and I inhabit. This flaw perpetuates biases.

AI-powered algorithms have been found to display bias against people of color and against women. In 2014, for example, Amazon found that an AI algorithm it developed to automate headhunting had taught itself to discriminate against female candidates. MIT researchers reported in January 2019 that facial recognition software is less accurate in identifying humans with darker pigmentation. Most recently, in a study late last year by the National Institute of Standards and Technology (NIST), researchers found evidence of racial bias in nearly 200 facial recognition algorithms.

In spite of the countless examples of AI errors, the zeal continues. This is why the IBM and Amazon announcements generated so much positive news coverage. Global use of artificial intelligence grew by 270% from 2015 to 2019, with the market expected to generate revenue of $118.6 billion by 2025. According to Gallup, nearly 90% of Americans are already using AI products in their everyday lives – often without even realizing it.

Beyond a 12-month hiatus, we must acknowledge that while building AI is a technology challenge, using AI requires disciplines well beyond software development, such as social science, law and politics. But despite our increasingly ubiquitous use of AI, AI as a field of study is still lumped into the fields of CS and CE. At North Carolina State University, for example, algorithms and AI are taught in the CS program. MIT houses the study of AI under both CS and CE. AI must make it into humanities programs, race and gender studies curricula, and business schools. Let’s develop an AI track in political science departments. In my own program at Georgetown University, we teach AI and Machine Learning concepts to Security Studies students. This needs to become common practice.

Without a broader approach to the professionalization of AI, we will almost certainly perpetuate biases and discriminatory practices in existence today. We just may discriminate at a lower cost — not a noble goal for technology. We require the intentional establishment of a field of AI whose purpose is to understand the development of neural networks and the social contexts into which the technology will be deployed.

In computer engineering, a student studies programming and computer fundamentals. In computer science, they study computational and programmatic theory, including the basis of algorithmic learning. These are solid foundations for the study of AI – but they should only be considered components. These foundations are necessary for understanding the field of AI but not sufficient on their own.

For the population to gain comfort with broad deployment of AI so that tech companies like Amazon and IBM, and countless others, can deploy these innovations, the entire discipline needs to move beyond the CS lab. Those who work in disciplines like psychology, sociology, anthropology and neuroscience are needed. So is an understanding of human behavior patterns and of the biases built into data generation processes. I could not have created the software I developed to identify human trafficking, money laundering and other illicit behaviors without my background in behavioral science.

Responsibly managing machine learning processes is no longer just a desirable component of progress but a necessary one. We have to recognize the pitfalls of human bias and the errors of replicating these biases in the machines of tomorrow, and the social sciences and humanities provide the keys. We can only accomplish this if a new field of AI, encompassing all of these disciplines, is created.


In an effort to fight COVID-19, MIT robot gets to work disinfecting The Greater Boston Food Bank

MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) has put one of its research projects to work providing disinfection services for The Greater Boston Food Bank (GBFB), in an effort to slow the spread and still allow the non-profit to provide services to its patrons. The CSAIL-designed robotic system, which was created in partnership with Ava Robotics, can not only disinfect surfaces that might have come in contact with the novel coronavirus, but also wipe out its aerosolized forms that might be present in the air, the lab says.

CSAIL’s robotic cleaning system goes well beyond your run-of-the-mill Roomba: It employs UV light for a fully automated clean that can be done free of any human oversight, which is key because UV light, at the strength required for surface and airborne disinfection, can be harmful to any people present.

The team behind the design took one of Ava’s telepresence robots, removed the top that normally houses the screen displaying a remote operator, and replaced it with a UVC light array. Via cameras and sensors, the robot can map an indoor space, then navigate to designated waypoints within that mapped area and disinfect as it goes, keeping track of the areas it still has to disinfect. In operation, after its autonomous mapping exercise, human remote operators showed it the path that people would normally traverse in the space to define priority disinfection zones.

The system is flexible enough to handle re-mapped routes, which is required because the areas of the GBFB warehouse that need to be traversed can change daily as food comes in and goes out, with stock stored on different shelves. Eventually, the team wants to develop more automated ways for the modified telepresence robots to use their suite of sensors to figure out which areas are priorities for disinfection based on foot traffic and changing real-world conditions, but for now the routes can easily be adjusted manually to accommodate shifts.
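The workflow described above — map once, then visit a fixed list of waypoints while the UVC array runs — can be pictured with a small sketch. This is purely illustrative, not CSAIL’s or Ava Robotics’ actual software; the waypoint names, dwell times and helper functions here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Waypoint:
    name: str
    x: float               # position in the previously built map, in meters
    y: float
    dwell_seconds: float   # how long to linger so the UVC array delivers enough dose

# Hypothetical priority route, defined by remote operators after the mapping pass.
route = [
    Waypoint("loading dock", 0.0, 0.0, 120),
    Waypoint("aisle 1", 5.0, 12.0, 90),
    Waypoint("packing stations", 18.0, 4.0, 150),
]

def navigate_to(wp: Waypoint) -> None:
    # Stand-in for the robot's own path planner; here we only log the move.
    print(f"driving to {wp.name} at ({wp.x}, {wp.y})")

def disinfect(wp: Waypoint) -> None:
    # The UVC array stays on while the robot lingers; dwell time stands in for delivered dose.
    print(f"holding at {wp.name} for {wp.dwell_seconds}s with UVC on")

def run_pass(route: list) -> list:
    covered = []
    for wp in route:
        navigate_to(wp)
        disinfect(wp)
        covered.append(wp.name)  # keep track of the areas already treated
    return covered

print(run_pass(route))
```

In this framing, re-mapping the route for a new warehouse layout amounts to handing the same loop a different list of waypoints.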

This project focused specifically on use at the GBFB, a priority resource especially during the COVID-19 pandemic, but MIT CSAIL’s researchers envision similar systems being put to use to cover a range of complex spaces that require frequent disinfection, including grocery stores, dorms, schools and airplanes.


Immunai wants to map the entire immune system and raised $20 million in seed funding to do it

For the past two years, the founding team of Immunai has been working stealthily to develop a new technology to map the immune system of any patient.

Founded by Noam Solomon, a Harvard and MIT-educated postdoctoral researcher, and former Palantir engineer Luis Voloch, Immunai was born from the two men’s interest in computational biology and systems engineering. When the two were introduced to Ansuman Satpathy, a professor of cancer immunology at Stanford University, and Danny Wells, a data scientist at the Parker Institute for Cancer Immunotherapy, the path forward for the company became clear.

“Together we said we bring the understanding of all the technology and machine learning that needs to be brought into the work and Ansu and Danny bring the single-cell biology,” said Solomon. 

Now as the company unveils itself and the $20 million in financing it has received from investors including Viola Ventures and TLV Partners, it’s going to be making a hiring push and expanding its already robust research and development activities. 

Immunai already boasts clinical partnerships with over ten medical centers and commercial partnerships with several biopharma companies, according to the company. And the team has already published peer-reviewed work on the origin of tumor-fighting T cells following PD-1 blockade, Immunai said.

“We are implementing a complicated engineering pipeline. We wanted to scale to hundreds of patients and thousands of samples,” said Wells. “Right now, in the world of cancer therapy, there are new drugs coming on the market that are called checkpoint inhibitors. [We’re] trying to understand how these molecules are working and find new combinations and new targets. We need to see the immune system in full granularity.”

That’s what Immunai’s combination of hardware and software allows researchers to do, said Wells. “It’s a vertically integrated platform for single cell profiling,” he said. “We go even further to figure out what the biology is there and figure that out in a new combination design for the trial.”

Cell therapies and cancer immunotherapies are changing the practice of medicine and offering new treatments for conditions, but given how complex the immune system is, the developers of those therapies have few insights into how their treatments will affect it. Given the diversity of individual patients, variations in products can significantly change the way a patient will respond to treatment, the company said.


Immunai has the potential to change the way these treatments are developed by using single-cell technologies to profile cells, generating over a terabyte of data from an individual blood sample. The company’s proprietary database and machine learning tools map incoming data to different cell types and create profiles of immune responses based on differentiated elements. Finally, the database of immune profiles supports the discovery of biomarkers that can then be monitored for potential changes.

“Our mission is to map the immune system with neural networks and transfer learning techniques informed by deep immunology knowledge,” said Voloch, in a statement. “We developed the tools and knowhow to help every immuno-oncology and cell therapy researcher excel at their job. This helps increase the speed in which drugs are developed and brought to market by elucidating their mechanisms of action and resistance.”

Pharmaceutical companies are already aware of the transformational potential of the technology, according to Solomon, and the company is in the process of finalizing a seven-figure contract with a Fortune 100 company.

One of the company’s earliest research coups was showing how immune systems respond when anti-PD-1 molecules are introduced. Typically, the presence of PD-1 means that T cell production is being suppressed. What Immunai’s research revealed was that the response wasn’t coming from T cells already within the tumor; instead, new T cells were migrating to the tumor to fight it off, according to Wells.

“This whole approach that we have around looking at all of these indications — we believe that the right way and most powerful way to study these diseases is to look at the immune system from the top down,” said Voloch, in an interview. “Looking at all of these different scenarios. From the top, you see these patterns that wouldn’t be available otherwise.”


In a potential big win for renewable energy, Form Energy gets its first grid-scale battery installation

Form Energy, which is developing what it calls ultra-low-cost, long-duration energy storage for the grid, has signed a contract with the Minnesota-based Great River Energy to develop a 1 megawatt, 150 megawatt hour pilot project.

Great River Energy, the second-largest electric utility in Minnesota, will host the installation in Cambridge, Minn., which will be the first commercial deployment of the venture-backed battery technology developer’s long-duration energy storage technology.

Form Energy’s battery system is significant for its ability to deliver 1 megawatt of power for 150 hours, a huge leap over the lithium-ion batteries currently in use for most grid-scale storage projects, which typically last for two to four hours.
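For context, the 150-hour figure follows directly from the project’s headline numbers: discharge duration is simply energy capacity divided by power rating.

\[
\text{duration} = \frac{\text{energy capacity}}{\text{power rating}} = \frac{150\ \text{MWh}}{1\ \text{MW}} = 150\ \text{hours}
\]

By the same arithmetic, a 1 MW lithium-ion system with a two- to four-hour duration stores only 2 to 4 MWh.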

The step change in the duration of energy delivery should allow energy storage projects to replace the peaking power plants that rely on coal and natural gas to smooth demand on the grid.

“Long duration energy storage solutions will play an entirely different role in a clean electricity system than the conventional battery storage systems being deployed at scale today,” said Jesse Jenkins, an assistant professor at Princeton University who studies low-carbon energy systems engineering, in a statement. “Lithium-ion batteries are well suited to fast bursts of energy production, but they run out of energy after just a few hours. A true low-cost, long-duration energy storage solution that can sustain output for days, would fill gaps in wind and solar energy production that would otherwise require firing up a fossil-fueled power plant. A technology like that could make a reliable, affordable 100% renewable electricity system a real possibility,”

Backed by more than $49 million in venture financing from investors including The Engine, MIT’s investment vehicle; Eni Next, the corporate venture capital arm of the Italian energy firm Eni; and Breakthrough Energy Ventures, the Bill Gates-backed, sustainability-focused investment firm, Form Energy has developed a new storage technology it calls an “aqueous air” battery system.

“Our vision at Form Energy is to unlock the power of renewable energy to transform the grid with our proprietary long-duration storage. This project represents a bold step toward proving that vision of an affordable, renewable future is possible without sacrificing reliability,” said Mateo Jaramillo, the chief executive of Form Energy, in a statement.

Form’s pitch to utilities relies on more than just a groundbreaking energy storage technology; it also includes an assessment of how utilities can best optimize their energy portfolios using a proprietary software analytics system. That software was built to model high-penetration renewables at a system level, to figure out how storage can be combined with renewable energy to create a low-cost energy source that delivers better returns to energy providers.

“Great River Energy is excited to partner with Form Energy on this important project. The electrical grid is increasingly supplied by renewable sources of energy. Commercially viable long-duration storage could increase reliability by ensuring that the power generated by renewable energy is available at all hours to serve our membership. Such storage could be particularly important during extreme weather conditions that last several days. Long-duration storage also provides an excellent hedge against volatile energy prices,” said Great River Energy Vice President and Chief Power Supply Officer Jon Brekke, in a statement.

Ultimately, this deployment is intended to be the first of many installations of Form Energy’s battery systems, according to the statement from both companies.


MIT muscle-control system for drones lets a pilot use gestures for accurate and specific navigation


MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) has released a video of its ongoing work using input from muscle signals to control devices. The latest project involves full and fine control of drones, using just hand and arm gestures to navigate through a series of rings. This work is impressive not just because the team is using biofeedback to control the devices, instead of optical or other kinds of gesture recognition, but also because of how specific the controls can be, setting up a range of different potential applications for this kind of remote tech.

This particular group of researchers has been looking at different applications for the tech, including its use in collaborative robotics for potential industrial applications. Drone piloting is another area that could have big benefits in terms of real-world use, especially once you start to imagine entire flocks of drones taking flight with a pilot given a VR view of what they can see. That could be a great way to do site surveying for construction, for example, or remote equipment inspection of offshore platforms and other infrastructure that’s hard for people to reach.


Seamless robot/human interaction is the ultimate goal of the team working on this tech: just as we intuitively control our own movements and manipulate our environment, they believe the process should be equally smooth when controlling and working with robots. Thinking and doing essentially happen in parallel when we interact with our environment, but when we act through the extension of machines or remote tools, there’s often something lost in translation that results in a steep learning curve and the requirement of lots of training.

Cobotics, the industry focused on building robots that can safely work alongside and in close collaboration with humans, would benefit greatly from advances that make the interaction between people and robotic equipment more natural, instinctive and, ultimately, safe. MIT’s research in this area could result in future industrial robotics products that require less training and programming to operate at scale.


Silicon Valley needs a new approach to studying ethics now more than ever

Next month, Apple and Google will unveil features to enable contact tracing on iOS and Android to identify people who have had contact with someone who tests positive for the novel coronavirus.

Security experts have been quick to point out the possible dangers, including privacy risks like revealing identities of COVID-19-positive users, helping advertisers track them or falling prey to false positives from trolls.

These are fresh concerns in familiar debates about tech’s ethics. How should technologists think about the trade-off between the immediate need for public health surveillance and individual privacy? And between misinformation and free speech? Facebook and other platforms are playing a much more active role than ever in assessing the quality of information: promoting official information sources prominently and removing some posts from users defying social distancing.

As the pandemic spreads and, along with it, the race to develop new technologies accelerates, it’s more critical than ever that the technology industry finds a way to fully examine these questions. Technologists today are ill-equipped for this challenge: striking healthy balances between competing concerns — like privacy and safety — while explaining their approach to the public.

Over the past few years, academics have worked to give students ways to address the ethical dilemmas technology raises. Last year, Stanford announced a new (and now popular) undergraduate course on “Ethics, Public Policy, and Technological Change,” taught by faculty from philosophy, as well as political and computer science. Harvard, MIT, UT Austin and others teach similar courses.

If the only students are future technologists, though, solutions will lag. If we want a more ethically knowledgeable tech industry today, we need ethical study for tech practitioners, not just university students.

To broaden this teaching to tech practitioners, our venture fund, Bloomberg Beta, agreed to host the same Stanford faculty for an experiment. Based on their undergraduate course, could we design an educational experience for senior people who work across the tech sector? We adapted the content (incorporating real-world dilemmas), structure and location of the class, creating a six-week evening course in San Francisco. A week after announcing the course, we received twice as many applications as we could accommodate.

We selected a diverse group of students in every way we could manage, who all hold responsibility in tech. They told us that when they faced an ethical dilemma at work, they lacked a community to which to turn — some confided in friends or family, others revealed they looked up answers on the internet. Many felt afraid to speak freely within their companies. Despite several company-led ethics initiatives, including worthwhile ones to appoint chief ethics officers and Microsoft and IBM’s principles for ethical AI, the students in our class told us they had no space for open and honest conversations about tech’s behavior.


Like undergraduates, our students wanted to learn from both academics and industry leaders. Each week featured experts like Marietje Schaake, former Member of the European Parliament from the Netherlands, who debated real issues, from data privacy to political advertising. The professors facilitated discussions, encouraging our students to discuss multiple, often opposing views, with our expert guests.

Over half of the class came from a STEM background and had missed much explicit education in ethical frameworks. Our class discussed principles from other fields, like medical ethics, including the physician’s guiding maxim (“first, do no harm”) in the context of designing new algorithms. Texts from the world of science fiction, like “The Ones Who Walk Away from Omelas” by Ursula K. Le Guin, also offered ways to grapple with issues, leading students to evaluate how to collect and use data responsibly.

The answers to the values-based questions we explored (such as the trade-offs between misinformation and free speech) didn’t converge on clear “right” or “wrong” answers. Instead, participants told us that the discussions were crucial for developing skills to more effectively check their own biases and make informed decisions. One student said:

After walking through a series of questions, thought experiments or discussion topics with the professors, and thinking deeply about each of the subtending issues, I often ended up with the opposite positions to what I initially believed.

When shelter-in-place meant the class could no longer meet, participants reached out within a week to request virtual sessions — craving a forum to discuss real-time events with their peers in a structured environment. After our first virtual session examining how government, tech and individuals have responded to COVID-19, one participant remarked: “There feels like so much more good conversation to come on the questions, what can we do, what should we do, what must we do?”

Tech professionals seem to want ways to engage with ethical learning — the task now is to provide more opportunities. We plan on hosting another course this year and are looking at ways to provide an online version and publish the materials.

COVID-19 won’t be the last crisis where we rely on technology for solutions, and need them immediately. If we want more informed discussions about tech’s behavior, and we want the people who make choices to enter these crises prepared to think ethically, we need to start training people who work in tech to think ethically.


To allow students to explore opposing, uncomfortable viewpoints and share their personal experiences, class discussions were confidential. I’ve received explicit permission to share any insights from students here.


What is contact tracing?

One of the best tools we have to slow the spread of the coronavirus is, as you have no doubt heard by now, contact tracing. But what exactly is contact tracing, who does it and how, and do you need to worry about it?

In short, contact tracing helps prevent the spread of a virus by proactively finding people at higher risk than others due to potential exposure, notifying them if possible, and quarantining them if necessary. It’s a proven technique, and smartphones could help make it even more effective — but only if privacy and other concerns can be overcome.

Contact tracing, from memory to RAM

Contact tracing has been done in some form or another as long as the medical establishment has understood the nature of contagious diseases. When a person is diagnosed with an infectious disease, they are asked whom they have been in contact with over the previous weeks, both in order to determine who may have been infected by them and perhaps where they themselves were infected.

Until very recently, however, the process has relied heavily on the recall of people who are in a highly stressful situation and, until prompted, were probably not paying special attention to their movements and interactions.

This results in a list of contacts that is far from complete, though still very helpful. If those people can be contacted and their contacts likewise traced, a network of potential infections can be built up without a single swab or blood drop, and lives can be saved or important resources better allocated.

You might think that has all changed now, what with modern technology and all, but in fact the contact tracing being done at hospitals right now is almost all still of the memory-based kind — the same kind we might have used a hundred years ago.

It certainly seems as if the enormous digital surveillance apparatus that has been assembled around us over the last decade should be able to accomplish this kind of contact tracing easily, but in fact it’s surprisingly useless for anything but tracking what you are likely to click on or buy.

While it would be nice to be able to piece together a contagious person’s week from a hundred cameras spread throughout the city and background location data collected by social media, the potential for abuse of such a system should make us thankful it is not so easy as that. In other, less dire circumstances the ability to track the exact movements and interactions of a person from their digital record would be considered creepy at best, and perhaps even criminal.

But it’s one thing when an unscrupulous data aggregator uses your movements and interests to target you with ads without your knowledge or consent — and quite another when people choose to use the forbidden capabilities of everyday technology in an informed and limited way to turn the tide of a global pandemic. And that’s what modern digital contact tracing is intended to do.

Bluetooth beacons

All modern mobile phones use wireless radios to exchange data with cell towers, Wi-Fi routers, and each other. On their own, these transmissions aren’t a very good way to tell where someone is or who they’re near — a Wi-Fi signal can travel 100 to 200 feet reliably, and a cell signal can go miles. Bluetooth, on the other hand, has a short range by design, less than 30 feet for good reception and with a swiftly attenuating signal that makes it unlikely to catch a stray contact from much further out than that.

We all know Bluetooth as the way our wireless earbuds receive music from our phones, and that’s a big part of its job. But Bluetooth, by design, is constantly reaching out and touching other Bluetooth-enabled devices — it’s how your car knows you’ve gotten into it, or how your phone detects a smart home device nearby.

Bluetooth chips also make brief contact without your knowledge with other phones and devices you pass nearby, and if they aren’t recognized, they delete each other from their respective memories as soon as possible. But what if they didn’t?

The type of contact tracing being tested and deployed around the world now uses Bluetooth signals very similar to the ones your phone already transmits and receives constantly. The difference is it just doesn’t automatically forget the other devices it comes into contact with.

Assuming the system is working correctly, what would happen when a person presents at the hospital with COVID-19 is basically just a digitally enhanced version of manual contact tracing. Instead of querying the person’s fallible memory, they query the phone’s much more reliable one, which has dutifully recorded all the other phones it has recently been close enough to connect to. (Anonymously, as we’ll see.)

Those devices — and it’s important to note that it’s devices, not people — would be alerted within seconds that they had recently been in contact with someone who has now been diagnosed with COVID-19. The notification they receive will contain information on what the affected person can do next: Download an app or call a number for screening, for instance, or find a nearby location for testing.

The ease, quickness, and comprehensiveness of this contact tracing method make it an excellent opportunity to help stem the spread of the virus. So why aren’t we all using it already?

Successes and potential worries

In fact, digital contact tracing using the above method (or something very like it) has already been implemented with millions of users, apparently to good effect, in East Asia, which of course was hit by the virus earlier than the U.S. and Europe.

In Singapore the TraceTogether app was promoted by the government as the official means for contact tracing. South Korea saw the voluntary adoption of a handful of apps that tracked people known to be diagnosed. Taiwan was able to compare data from its highly centralized healthcare system to a contact tracing system it began work on during the SARS outbreak years ago. And mainland China has implemented a variety of tracking procedures through mega-popular services like WeChat and Alipay.

While it would be premature to make conclusions on the efficacy of these programs while they’re still underway, it seems at least anecdotally to have improved the response and potentially limited the spread of the virus.

But East Asia is a very different place from the U.S.; we can’t just take Taiwan’s playbook and apply it here (or in Europe, or Africa, etc.), for myriad reasons. There are also valid questions of privacy, security and other matters that need to be answered before people, who for good reason are skeptical of the intentions of both the government and the private sector, will submit to this kind of tracking.

Right now there are a handful of efforts being made in the U.S., the highest-profile by far being the collaboration between arch-rivals Apple and Google, which have proposed a cross-platform contact tracing method that can be added to phones at the operating system level.

The system they have suggested uses Bluetooth as described above but, importantly, does not tie it to a person’s identity in any way. A phone would have a temporary ID number of its own, and as it makes contact with other devices, they exchange numbers. These lists of ID numbers are collected and stored locally, not synced with the cloud or anything. And the numbers also change frequently so no single one can be connected to your device or location.

If and only if a person is determined to be infected with the virus, a hospital (not the person) is authorized to activate the contact tracing app, which will send a notification to all the ID numbers stored in the person’s phone. The notification will say that they were recently near a person now diagnosed with COVID-19 — again, these are only ID numbers generated by a phone and are not connected with any personal information. As discussed earlier, the people notified can then take whatever action seems warranted.
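To make those mechanics concrete, here is a minimal sketch of that kind of decentralized scheme in Python. It is not the actual Apple/Google or MIT implementation; the class, method names and rotation interval are hypothetical, and real systems derive their rotating identifiers cryptographically rather than purely at random.

```python
import os
import time

ID_ROTATION_SECONDS = 15 * 60  # rotate the ephemeral ID often (interval is illustrative)

class ContactTracingPhone:
    """Hypothetical sketch of one phone in a decentralized Bluetooth tracing scheme."""

    def __init__(self):
        self.current_id = self._new_id()
        self.last_rotation = time.time()
        self.my_ids = {self.current_id}  # every ID this phone has broadcast
        self.seen_ids = set()            # IDs heard from nearby phones; never leaves the device

    def _new_id(self) -> str:
        # A random, meaningless identifier; nothing in it ties back to the owner.
        return os.urandom(16).hex()

    def broadcast_id(self) -> str:
        # Rotate the ephemeral ID periodically so observers can't follow one device over time.
        if time.time() - self.last_rotation > ID_ROTATION_SECONDS:
            self.current_id = self._new_id()
            self.my_ids.add(self.current_id)
            self.last_rotation = time.time()
        return self.current_id

    def record_contact(self, other_id: str) -> None:
        # Called when Bluetooth hears another phone nearby; stored locally only.
        self.seen_ids.add(other_id)

    def check_exposure(self, published_positive_ids: set) -> bool:
        # A health authority publishes the IDs broadcast by diagnosed users;
        # each phone checks its own local contact log for an overlap.
        return bool(self.seen_ids & published_positive_ids)


# Usage: two phones pass each other, then one owner is later diagnosed.
alice, bob = ContactTracingPhone(), ContactTracingPhone()
alice.record_contact(bob.broadcast_id())
bob.record_contact(alice.broadcast_id())

published = alice.my_ids                    # Alice is diagnosed; her broadcast IDs are published
print(bob.check_exposure(published))        # True: Bob's phone finds a match locally
print(alice.check_exposure({"unrelated"}))  # False: no overlap, no alert
```

The key property the sketch illustrates is that matching happens on the device: only the IDs broadcast by diagnosed users are ever published, and each phone checks its own locally stored contact log against them.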

MIT has developed a system that works in a very similar way, and which some states are reportedly beginning to promote among their residents.

Naturally even this straightforward, decentralized, and seemingly secure system has its flaws; this article at the Markup gives a good overview, and I’ve summarized them below:

  • It’s opt-in. This is a plus and a minus, of course, but means that many people may choose not to take part, limiting how comprehensive the list of recent contacts really is.
  • It’s vulnerable to malicious interference. Bluetooth isn’t particularly secure, which means there are several ways this method could be taken advantage of, should there be any attacker depraved enough to do so. Bluetooth signals could be harvested and imitated, for instance, or a phone driven through the city to “expose” it to thousands of others.
  • It could lead to false positives or negatives. In order to maintain privacy, the notifications sent to others would contain a minimum of information, leading them to wonder when and how they might have been exposed. There will be no details like “you stood next to this person in line 4 days ago for about 5 minutes” or “you jogged past this person on Broadway.” This lack of detail may lead to people panicking and running to the ER for no reason, or ignoring the alert altogether.
  • It’s pretty anonymous, but nothing is truly anonymous. Although the systems seem to work with a bare minimum of data, that data could still be used for nefarious purposes if someone got their hands on it. De-anonymizing large sets of data is practically an entire domain of study in data science now and it’s possible that these records, however anonymous they appear, could be cross-referenced with other data to out infected persons or otherwise invade one’s privacy.
  • It’s not clear what happens to the data. Will this data be given to health authorities later? Will it be sold to advertisers? Will researchers be able to access it, and how will they be vetted? Questions like these could very well be answered satisfactorily, but right now it’s a bit of a mystery.

Contact tracing is an important part of the effort to curb the spread of the coronavirus, and whatever method or platform is decided on in your area — it may be different state to state or even between cities — it is important that as many people as possible take part in order to make it as effective as possible.

There are risks, yes, but the risks are relatively minor and the benefits would appear to outweigh them by orders of magnitude. When the time comes to opt in, it is out of consideration for the community at large that one should make the decision to do so.


Apple and Google are launching a joint COVID-19 tracing tool for iOS and Android

Apple and Google’s engineering teams have banded together to create a decentralized contact tracing tool that will help individuals determine whether they have been exposed to someone with COVID-19.

Contact tracing is a useful tool that helps public health authorities track the spread of the disease and inform the potentially exposed so that they can get tested. It does this by identifying and “following up with” people who have come into contact with a COVID-19-affected person.

The first phase of the project is an API that public health agencies can integrate into their own apps. The next phase …
