How to Make Remote Monitoring Tech Part of Everyday Health Care

Executive Summary

Remote patient monitoring is a subset of telehealth that involves the collection, transmission, evaluation, and communication of patient health data from electronic devices. These devices include wearable sensors, implanted equipment, and handheld instruments. During the pandemic, such monitoring programs have proven valuable. But special measures and conditions made that possible. By encouraging regulators to make permanent the temporary measures introduced during the pandemic, and by following six guidelines to integrate these programs into health care, providers can realize their tremendous promise.

By making the collection of valuable patient data feasible outside of the clinic, remote monitoring can facilitate care for conditions ranging from chronic diseases to recovery from acute episodes of care. For years, it has been touted as one of the most promising opportunities for health care in the digital age. But the pandemic has underscored its value. Indeed, policy changes introduced during the pandemic due to the riskiness of in-person patient visits have created conditions ripe for its adoption. We urge regulators to extend these changes beyond the pandemic and for health care leaders to take advantage of this window of opportunity to develop, test, and improve remote-patient-monitoring programs.

What is remote patient monitoring? While “telehealth” broadly refers to all health care activities that are conducted through telecommunications technology, remote patient monitoring is a subset that involves the collection, transmission, evaluation, and communication of patient health data from electronic devices. These devices include wearable sensors, implanted equipment, and handheld instruments.

We define remote patient monitoring as the set of activities that meet four key criteria: (1) data on patients is collected remotely (e.g., in a home setting without oversight from a health care provider); (2) the data collected is transmitted to a health care provider in a different location; (3) the data is evaluated and care providers are notified, as needed; and (4) care providers communicate relevant data-driven insights and interventions to patients.

Remote Monitoring During the Pandemic

By making it possible to virtually perform medical activities that have traditionally been conducted in person, remote monitoring technologies have played a significant role in patient care during the Covid-19 pandemic. For example, providers such as Mount Sinai Health System in New York City, University Hospitals in Cleveland, Ohio, St. Luke's University Health Network in Bethlehem, Pennsylvania, and Providence St. Joseph Health in Renton, Washington, started programs during the Covid-19 pandemic in order to monitor vital sign and symptom data and assess the status of coronavirus patients. Other hospitals, such as Mayo Clinic in Rochester, Minnesota, are working to set up remote patient monitoring programs for non-Covid-19 patients (e.g., those with congestive heart failure).

New policies have recognized the importance of remote patient monitoring in this context. The U.S. Centers for Medicare and Medicaid Services expanded Medicare coverage for remote patient monitoring to include patients with acute conditions and new patients as well as existing patients. Moreover, the U.S. Food and Drug Administration issued a new policy allowing certain devices (FDA-approved non-invasive devices used to monitor vital signs) to be used in remote settings. Nonetheless, these changes remain temporary: They have only been authorized for the duration of the Covid-19 public health emergency. We hope that additional policies will be enacted to ensure that these programs can serve a variety of patients and conditions beyond the context of Covid-19.

Guidelines for Development and Implementation

These guidelines are drawn from our own experience managing remote-patient-monitoring programs, including one created specifically to care for Covid-19 patients, and from research on the drivers of clinical success in established programs.

The technology must be easy for both patients and clinicians to adopt and continue using. It is essential to provide both patients and clinicians with intuitive equipment and user interfaces as well as resources for trouble-shooting when needed. Clinicians should be able to easily explain the equipment to patients, and it should be easy for patients to set up and use. The patient data generated by remote monitoring should also be simple to monitor and analyze.

This need is illustrated by a trial that studied remote monitoring of patients with congestive heart failure. In this trial, study physicians could not collect data for 12 out of 66 enrolled patients because these patients were unable to properly operate the mobile-phone-based monitoring device to begin data transmission.

The tools should be incorporated into clinician workflows. Given the high burden of administrative work that clinicians already face, it is imperative to introduce remote tools that blend seamlessly into their work processes. In some cases, this may require redesigning processes in order to ensure that remote monitoring is appropriately integrated into an organization’s practices.

For example, the administrators of a diabetes management program established at Massachusetts General Hospital found that they needed to modify the existing workflow for managing patients with diabetes in order to readily identify which patients required laboratory testing. Subsequently, the program built an application that remotely monitored diabetic patients and helped coordinate responsibilities for following up with patients about laboratory testing. This redesigned workflow improved efficiency by making it easier for nurse managers to remind patients about laboratory testing.

Sources of sustainable funding must be identified and tapped. This is especially critical at a time when hospitals are struggling financially due to the huge amount of revenue they have lost from pandemic-related cancellations and delays in performing surgeries and imaging.

Reimbursement for remote-patient-monitoring programs is challenging to navigate given that individual activities eligible for reimbursement — such as device set-up, patient education, interpretation of data, and follow-up patient conversations — are reimbursed separately. Nonetheless, reimbursement for such programs has improved with the advent of risk-based models of reimbursement such as Medicare Advantage plans and accountable care organizations, which offer providers increased flexibility in allocating capital to remote monitoring programs.

Many remote-patient-monitoring programs may have to rely on other sources of funding besides reimbursement, especially to fulfill upfront capital needs. In some instances, these sources of funding may be from the provider system’s operating budget. Internal innovation grants also may support programs. For instance, a diabetes remote monitoring program at Su Clinica Familiar, a federally qualified health center, was funded through a grant by the University of Texas System. Regardless of the nature of funding, we believe it is essential to identify a committed source of capital before establishing a remote-patient-monitoring program.

Dedicate sufficient non-physician staff to operate the program. A key reason this is necessary is that busy physicians will have difficulty carving out additional time to administer a program and sift through data. For example, Ochsner Medical Center in New Orleans developed a digital hypertension program staffed by pharmacists, who monitored 6,000 high-risk patients’ blood pressure readings remotely and followed up with patients via text and email. This program resulted in a significant increase in the proportion of patients who met their blood pressure goals.

As demonstrated by this example, it’s critical that staff in these roles are matched with the nature of the work. For instance, the complex tasks involved in hypertension-medication management might require a pharmacist or nurse as opposed to a patient navigator without clinical expertise.

Focus on digital health equity. Patients may appear to be better candidates for remote monitoring if they are younger, technologically savvy, or fluent English speakers. However, access to technology may be limited by poverty, and numerous other socio-demographic factors may influence engagement and participation in remote monitoring programs. At a time when the Covid-19 pandemic has disproportionately affected minority populations, care providers should go the extra mile to ensure that underserved patients not only have access to programs, but are also provided the education and support needed to make them successful.

Start with an initial pilot and expand after demonstrated successes. Even in a pandemic setting in which time is of the essence, it is essential to demonstrate that remote patient monitoring initiatives improve clinical outcomes. Not all programs have demonstrated success, so the use of pilots can help avoid expensive mistakes. One successful program that scaled gradually is the Hospital of the University of Pennsylvania’s remote postpartum hypertension monitoring program. This program expanded from a small pilot to a larger clinical trial to the entire academic medical center based on evidence that it decreased admissions and costs associated with postpartum hypertension.

Covid-19 has created an opportunity to accelerate the adoption of remote patient monitoring as our health care system struggles to care for patients outside of the physical walls of a clinic or hospital. We encourage leaders to act decisively in establishing new programs by following best-in-class examples and guidelines. We believe that leaders who do so will spur a paradigm shift in how patient care is delivered that lasts far beyond the current crisis.

How One Boston Hospital Built a Covid-19 Forecasting System

Executive Summary

Faced with the Covid-19 pandemic, the healthcare delivery infrastructure in much of the United States has confronted the equivalent of an impending hurricane but without a national weather service to warn us where and when it will hit, and how hard. To build a forecasting model that works at the local level, the Beth Israel Deaconess Medical Center relied on an embedded research group, the Center for Healthcare Delivery Science, that reports to the CMO and is dedicated to applying rigorous research methods to study healthcare delivery questions.

The Covid-19 pandemic created an unprecedented strain on healthcare systems across the globe. Beyond the clinical, financial, and emotional impact of this crisis, the logistical implications have been daunting, with crippled supply chains, diminished capacity for elective procedures and outpatient care, and a vulnerable labor force. Among the most challenging aspects of the pandemic has been predicting its spread. The healthcare delivery infrastructure in much of the United States has faced the equivalent of an impending hurricane but without a national weather service to warn us where and when it will hit, and how hard.

To build a forecasting model that works at the local level — within a hospital's service area, for example — the Beth Israel Deaconess Medical Center (BIDMC) relied on an embedded research group, the Center for Healthcare Delivery Science, that reports to the CMO and is dedicated to applying rigorous research methods to study healthcare delivery questions. We used a series of methods derived from epidemiology, machine learning, and causal inference to take a locally focused approach to predicting the timing and magnitude of Covid-19 clinical demands for our hospital. This forecasting serves as an example of a new opportunity in healthcare operations that is particularly useful in times of extreme uncertainty.

In early February, as the U.S. was grappling with the rapid spread of SARS-COV-2, the virus that causes Covid-19, the healthcare community in Boston began to brace for the months ahead. Later that month, participants in a biotechnology conference and other residents returning from overseas travel were diagnosed with the new disease.

It was the start of a public health emergency. To understand how to respond, our hospital needed a Covid-warning system, just as coastal towns need hurricane warning systems. Our hospital is an academic medical center with over 670 licensed beds, of which 77 are intensive care beds. We knew it was hurricane season, but we were uncertain about what lay ahead: when would the storm arrive, and how hard would it hit?

Hurricane season — but where is the storm?

Lesson 1: National forecasting models broke down when predicting hospital capacity for Covid-19 patients because no local variables were included.

Our institution turned first to national models. The most widely used national model applied curve-fitting methods (which draw a best-fit curve on a series of data points) to earlier Covid-19 data from other countries to predict future developments in the United States. National models did not consider local hospital decision-making or local-level socioeconomic factors that dramatically affect key variables like population density, pre-existing health status, and reliance on public transportation. For example, social media data showed many student-dense neighborhoods in Boston emptying after colleges canceled in-person classes at the beginning of March, which meant fewer people were in Boston to contract the virus. Another critical variable in hospital capacity forecasting, the rate of hospitalization for people with Covid-19, varied as the weeks went on, even though national models held this variable constant. For example, early on our hospital was choosing to admit rather than send home many SARS-COV-2-positive patients, even with mild infections, because the clinical trajectory of the disease was so uncertain. Thus, we needed a dynamic, hyper-local model.

Building our storm alert system

Lesson 2: Local infection modeling required a range of different research methods, and the trust and commitment of operational leaders who recognized the value of the work.

The hospital turned to our research center to achieve these goals. The center, which is embedded in the hospital and reports to the Chief Medical Officer (Dr. Weiss), brought applied machine learning and epidemiological approaches to construct a hyper-local alert system.

To demonstrate the feasibility of forecasting local hospital-capacity needs for managing Covid-19 patients, we built a preliminary SIR model (a traditional epidemiological framework that models the number of Susceptible, Infected and Recovered people in a population), which was integrated into our institution’s incident command structure, an ad hoc team created with members of the hospital and disaster management leadership to respond to the pandemic. However, the accuracy of SIR models depends on the accuracy of estimates of disease characteristics such as incubation time, infectious period, and transmissibility, variables that are still not well understood. Therefore, we turned to machine learning approaches, harnessing real-time data from our electronic medical record to determine these variables directly from real patients. We also gathered Covid-patient census data from multiple hospitals simultaneously, using a common machine-learning technique called multi-task learning to capitalize on limited data. These methods allowed us to estimate when the demand for hospital capacity to treat Covid-19 patients would peak and plateau — predicting the timing to within five days of the true peak and more accurately modeling the slope of the peak and decline than national models did.
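For readers unfamiliar with the framework, the sketch below shows what a minimal SIR simulation looks like in Python. The population size, transmission rate, and recovery rate here are illustrative assumptions only, not the values our team estimated from electronic medical record data.

```python
# A minimal SIR simulation sketch. The parameters below are illustrative
# assumptions, not values estimated from our electronic medical record data.
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    # dS/dt = -beta*S*I/N ;  dI/dt = beta*S*I/N - gamma*I ;  dR/dt = gamma*I
    S, I, R = y
    N = S + I + R
    infections = beta * S * I / N
    recoveries = gamma * I
    return -infections, infections - recoveries, recoveries

N, I0 = 700_000, 10           # hypothetical service-area population, initial infected
beta, gamma = 0.4, 1 / 10     # assumed transmission rate and ~10-day infectious period
t = np.linspace(0, 180, 181)  # simulate 180 days

S, I, R = odeint(sir, (N - I0, I0, 0), t, args=(beta, gamma)).T
print(f"Projected peak: ~{I.max():,.0f} concurrently infected on day {int(t[I.argmax()])}")
```

As the text notes, the output of such a model is only as good as its estimates of incubation time, infectious period, and transmissibility, which is why we turned to machine learning to derive those inputs from real patients.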

Had leadership relied on national models, they would have expected a sharper peak and decline, and a peak two weeks earlier than the actual one. Our modeling affected key decisions, including the need to bolster personal protective equipment (PPE) supplies; to gauge the necessity of even urgent procedures, and postpone them if necessary in order to assure we had the capacity to absorb the peak; and to establish staffing schedules that continued farther into the future than those originally planned.

Predicting the next hurricane

Lesson 3: Effective modeling in confusing times may require rapidly developing new methods for predicting the next storm.  

Hospitals now face a difficult challenge. We need to open our doors to the patients without Covid-19 who didn’t seek care or whose care was deferred. But how do we make sure to have enough protective equipment for safely bringing back outpatient procedures? And when can nurses who had been redeployed to our ICUs return to the floors and interventional areas such as the endoscopy suite and cardiac catheterization lab? Complicating these questions is whether we will see another rise in infections with changes in state-wide policies, reopening of schools and businesses, or a coming influenza season.

In this new phase, we now need to develop methods for understanding how people will move within a community (going to school and visiting stores, for instance) and how much they will interact with one another and, therefore, affect the risk of infection over time. To this end, we constructed a risk index for local businesses by comparing pre-pandemic traffic to traffic as they reopen, and whether they are indoors or partly or entirely outdoors. Businesses where visitors are densely packed in indoor spaces, especially for longer periods, have a higher risk index — meaning they are more likely to be the site of infection spread. Using our risk index, we created and validated a model for identifying such potential “super-spreader” businesses in our service area. This analysis is part of another body of research that will undergo peer review and publication and, therefore, its results are provisional. Meanwhile, we can use our work with businesses to further inform our forecasting model by examining traffic in business locations we have identified as high-risk and assessing whether incorporating these data improves the ability of our model to predict the demand on hospital capacity.
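Because that work is still provisional, the exact index cannot be reproduced here, but a deliberately crude sketch of the general idea (traffic rebound, weighted by how indoor and how prolonged visits are) might look like the following; all inputs and weights are hypothetical:

```python
# Hypothetical sketch of a business risk index. The real index is under peer
# review; these inputs and weights are invented for illustration only.
def risk_index(current_visits: int, pre_pandemic_visits: int,
               indoor_fraction: float, avg_visit_minutes: float) -> float:
    """Denser, more indoor, longer-duration traffic scores higher."""
    rebound = current_visits / max(pre_pandemic_visits, 1)  # traffic recovery ratio
    return rebound * indoor_fraction * (avg_visit_minutes / 60)

# A busy indoor gym vs. a mostly outdoor garden center:
print(risk_index(900, 1000, indoor_fraction=1.0, avg_visit_minutes=75))  # ~1.13
print(risk_index(900, 1000, indoor_fraction=0.2, avg_visit_minutes=30))  # ~0.09
```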

Integrating rigorous research methods into hospital operations

Lesson 4: Given the profound future uncertainty in healthcare, small investments in trusted internal research groups that can answer operational questions with new methods can yield substantial returns.

Our institution made a prescient investment in creating an embedded and trusted research group made up of clinicians, economists, and epidemiologists studying healthcare operations. The team has brought specialized machine learning methods and expertise in extracting conclusions from messy data to quickly and accurately solve emerging real-world problems — capabilities that traditional business analytics groups are less likely to have. Other organizations can similarly unite the rigor and flexibility of methodological experts with the need to rapidly answer operational questions in dynamic and even chaotic environments.

The authors would like to thank Manu Tandon, Venkat Jegadeesan, Lawrence Markson, Tenzin Dechen, Karla Pollick and Joseph Wright for their valuable contributions to this work. 

Have Your Privacy Policies Kept Up with Your Digital Transformation?

Executive Summary

For every business that shifts operations online, there are potential privacy pitfalls that will prove very damaging if mismanaged. As new regulations are set to go into force in the United States, the stakes for getting this pivot right are higher than ever before. The Covid-19 pandemic is accelerating digital transformations, and companies should consider implementing these four privacy-focused measures: 1) Check how your vendors and partners use customer data, 2) Perform impact assessments to monitor risk, 3) Strive for clarity in your privacy policy, and 4) Designate a data protection officer.

For companies everywhere, Covid-19 has expedited digital transformation at almost unimaginable speed. In an effort to survive and get back to business safely, companies have rapidly adopted services such as contactless payment, click-and-collect applications, and enhanced customer relationship management. These transitions are vital for business to continue, but each also introduces new risks. For every business that shifts operations online, there are potential privacy pitfalls that will prove very damaging if mismanaged, and as new regulations are set to go into force in the United States, the stakes for getting this pivot right are higher than ever before.

Across industries, teams with expertise in real-world spaces are rushing into digital ones where they’re novices and pumping huge amounts of user data into new systems. In the restaurant industry, establishments are scrambling to build new online ordering and delivery infrastructure or to partner with companies who already offer those services. In higher education, institutions faced with missing out on a year’s tuition fees are rapidly migrating their entire curriculum online, and rushing to digitize everything from online teaching to student health records. In the live events space, production veterans are being asked to migrate their well-established processes online and into new cloud technologies. In each case, these changes carry the risk that reams of personal data will be mismanaged and vulnerable to exposure.

This situation raises two major challenges for many businesses: First, they need to make quick decisions on procuring new technology: building online storefronts, implementing communications platforms that process customers’ personal data, and more. Second, they lack experience with data processing infrastructure, or even technology in general. That adds up to teams making quick decisions on the use of technology systems they don’t know much about. There might be an understandable temptation to treat privacy concerns as a secondary issue — one that can be addressed after the immediate crisis — but that would be a mistake, and one which would place companies at elevated risk of monetary fines, class-action lawsuits, and PR headaches.

There’s been growing regulatory pressure on both sides of the Atlantic. The General Data Protection Regulation (GDPR) in Europe, which was implemented in May 2018, and the California Consumer Privacy Act (CCPA) in the United States, which becomes enforceable by law on July 1 (impacting any company with a presence in California and over $25 million in annual revenue), contain stringent protocols for the management of user data, and both threaten steep fines for businesses that get data wrong. Particularly in the United States, there’s little reason to think that regulators will meaningfully relax standards because of the pandemic. California Attorney General Xavier Becerra has been unambiguous in his intent to press forward on implementing CCPA, stating: “We’re committed to enforcing the law starting July 1. We encourage businesses to be particularly mindful of data security in this time of emergency.”

The good news is that managing privacy concerns doesn’t have to be yet another daunting task on top of the already Herculean feat of moving large parts of your business online. There are a number of simple, meaningful steps you can take to minimize the risk of a privacy breach. To make your rapid digital transformation as safe as reasonably possible in the coming months, consider implementing these privacy-focused measures. Each can be done independently, but if your business can tick all four of these boxes, you’ll greatly mitigate privacy risk:

1) Be Mindful of How Your Vendors and Partners Use Customer Data

Businesses may be tempted to rush into contracts with third-party vendors who promise “plug-and-play” solutions to a number of digital transformation challenges. And while companies may be aware that they must review any Data Processing Agreements (DPAs) during procurement, there is a tendency to underestimate the consequences of skipping this step. Under CCPA and GDPR, a business can be held financially liable for failure to perform due diligence on third parties that process customer data — in fact, this was the scenario that led to Marriott Hotel Group being fined $123 million by the UK's Information Commissioner's Office (ICO) in 2019.

Your key focus when reviewing vendor DPAs should be ensuring they’re privacy compliant and that their data policies align with your business’s stated data policies — otherwise a business runs the risk of violating their own privacy policy. Additionally, check the language about subcontractors in any vendor DPA. There should be assurance that vendors won’t subcontract to another processor unless explicitly instructed by your business to do so. This ensures your business is legally protected if a vendor unilaterally offloads data duties to a non-compliant third party.

2) When Processing Data, Perform Impact Assessments to Monitor Risk

Impact assessments for data processing are required in many cases by GDPR, but not by the CCPA. However, in times of frenetic change, implementing basic risk assessments for data activities — however tedious — forces businesses to think critically before making a potentially damaging decision on issues like data storage, subcontracting, and more. Furthermore, in the event of being charged with a privacy violation, a paper trail demonstrating proactive steps to mitigate risk reads favorably to regulators.

The UK’s Information Commissioner’s Office provides a free data protection impact assessment template that will set your business on the right track to accurately assessing privacy risk, whether you’re based there or not.

3) Strive for Clarity in Your Privacy Policies

As key stakeholders reevaluate privacy policies ahead of CCPA enforcement, consider how the document reads. Your goal is to make this policy accessible to all of your customers — not just those fluent in legalese. You might think you’re covering yourself by including phrases wide open to interpretation to prepare for any future regulatory requirement, but your priority should be to help your increasingly privacy-savvy customers understand your policy and trust your company. Slack’s privacy policy shows that thoroughness doesn’t have to come at the expense of clarity for readers.

4) Designate a Data Protection Officer (DPO)

No matter a business’s size, centralizing responsibility for data decisions is preferable to diffusing responsibility across multiple departments. That is truer than ever during times of rapid change. DPOs serve as a focal point for privacy concerns within an organization and a vital liaison to regulatory bodies while the character of privacy law enforcement remains ambiguous. Even if the person lacks privacy experience, empowering a single set of eyes to focus on privacy is a quick, cost-conscious way to de-risk.

As stated at the outset, managing rapid digital transformation well can require taking risky action. But in the current climate, depending on regulatory largesse is an unnecessary risk for businesses when they can take simple, process-driven steps to shore up privacy.

Data privacy implementation exhibits many features of the economist’s “time inconsistency” dilemma – it’s too soon to do it until it’s too late. And as we’ve seen in the last few weeks, “too late” can mean a serious stumble at a critical business juncture.

Khrunichev Space Center to Lower Angara Rocket Carrier Prices

The prime cost of Russia's Angara carrier rockets will be reduced by 2024 from the current seven billion rubles to four billion rubles (from $100.3 million to $57.3 million), according to the 2019 financial report of the Khrunichev Space Center, the Angara rocket manufacturer, TASS reported.

The Angara is a family of next-generation Russian space rockets. It consists of light, medium, and heavy carrier rockets with a lifting capacity of up to 37.5 tonnes.

The new family of rockets uses environmentally friendly propellant components. So far, Russia has carried out only two Angara launches, both of them from the Plesetsk spaceport: a light Angara-1.2PP blasted off in July 2014, and the first and so far only heavy Angara, carrying a payload mock-up and using a Briz-M booster as its upper stage, lifted off in December 2014.

The second test flight of an Angara-A5 launch vehicle is due to take place from Russia's northern spaceport in the second or third quarter of this year.

Software will reshape our world in the next decade

As I was wrapping up a Zoom meeting with my business partners, I could hear my son joking with his classmates in his online chemistry class.

I have to say this is a very strange time for me: As much as I love my family, in normal times, we never spend this much time together. But these aren’t normal times.

In normal times, governments, businesses and schools would never agree to shut everything down. In normal times, my doctor wouldn’t agree to see me over video conferencing.

No one would stand outside a grocery store, looking down to make sure they were six feet apart from one another. In times like these, decisions that would normally take years are being made in a matter of hours. In short, the physical world — brick-and-mortar reality — has shut down. The world still functions, but now it is operating inside everyone's own home.

This not-so-normal time reminds me of 2008, the depths of the financial crisis. I sold my company BEA Systems, which I co-founded, to Oracle for $8.6 billion in cash. This liquidity event was simultaneously the worst and most exhausting time of my career, and the best time of my career, thanks to the many inspiring entrepreneurs I was able to meet.

These were some of the brightest, most hardworking, never-take-no-for-an-answer founders, and in that era, many CEOs showed their true colors. That was when Slack, Lyft, Uber, Credit Karma, Twilio, Square, Cloudera and many others got started. All of these companies now have multibillion-dollar market caps. And I got to invest and partner with some of them.

Once again, I can’t help but wonder what our world will look like in 10 years. The way we live. The way we learn. The way we consume. The way we will interact with each other.

What will happen 10 years from now?

Welcome to 2030. It’s been more than two decades since the invention of the iPhone, the launch of cloud computing and one decade since the launch of widespread 5G networks. All of the technologies required to change the way we live, work, eat and play are finally here and can be distributed at an unprecedented speed.

The global population is 8.5 billion and everyone owns a smartphone with all of their daily apps running on it. That’s up from around 500 million two decades ago.

Robust internet access and communication platforms have created a new world.

The world’s largest school is a software company — its learning engine uses artificial intelligence to provide personalized learning materials anytime, anywhere, with no physical space necessary. Similar to how Apple upended the music industry with iTunes, all students can now download any information for a super-low price. Tuition fees have dropped significantly: There are no more student debts. Kids can finally focus on learning, not just getting an education. Access to a good education has been equalized.

The world’s largest bank is a software company and all financial transactions are digital. If you want to talk to a banker live, you’ll initiate a text or video conference. On top of that, embedded fintech software now powers all industries.

No more dirty physical money. All money flows are stored, traceable and secured on a blockchain ledger. The financial infrastructure platforms are able to handle customers across all geographies and jurisdictions, all exchanges of value, and all types of use cases (producers, distributors, consumers), all from the start.

The world’s largest grocery store is a software and robotics company — groceries are delivered whenever and wherever we want, as fast as possible. Food is delivered via robots or drones with no human involvement. Customers can track where, when and who is involved in growing and handling their food. Artificial intelligence tells us what we need based on past purchases and our calendars.

The world’s largest hospital is a software and robotics company — all initial diagnoses are performed via video conferencing. Combined with patient medical records all digitally stored, a doctor in San Francisco and her artificial intelligence assistant can provide personalized prescriptions to her patients in Hong Kong. All surgical procedures are performed by robots, with supervision by a doctor, of course; we haven’t gone completely crazy. And even the doctors get to work from home.

Our entire workforce works from home: Don’t forget the main purpose of an office is to support a company’s workers in performing their jobs efficiently. Since 2020, all companies, and especially their CEOs, have realized it is more efficient to let their workers work from home. Not only do workers save hours of commute time, but companies also save money on office space and can shift resources toward employee benefits. I’m looking back 10 years and saying to myself, “I still remember those days when office space was a thing.”

The world’s largest entertainment company is a software company, and all the content we love is digital. All blockbuster movies are released direct-to-video. We can ask Alexa to deliver popcorn to the house and even watch the film with friends who are far away. If you see something you like in the movie, you can buy it immediately — clothing, objects, whatever you see — and have it delivered right to your house. No more standing in line. No transport time. Reduced pollution. Better planet!

These are just a few industries that have been completely transformed by 2030, but these changes will apply universally to almost anything. We were told software was eating the world.

The saying goes you are what you eat. In 2030, software is the world.

Security and protection no longer apply just to things we can touch and see. What’s valuable for each and every one of us is all stored digitally — our email accounts, chat histories, browsing data and social media accounts. It goes on and on. We don’t need a house alarm; we need a digital alarm.

Even though this crisis makes the near future seem bleak, I am optimistic about the new world and the new companies of tomorrow. I am even more excited about our ability to change as a human race and how this crisis and technology are speeding up the way we live.

This storm shall pass. However, the choices we make now will change our lives forever.

My team and I are proud to build and invest in companies that will help shape the new world; new and impactful technologies that are important for many generations to come, companies that matter to humanity, something that we can all tell our grandchildren about.

I am hopeful.

No-code industrial robotics programming startup Wandelbots raises $30 million

Dresden, Germany-based Wandelbots – a startup dedicated to making it easier for non-programmers to ‘teach’ industrial robots how to do specific tasks – has raised a $30 million Series B funding round led by 83North, with participation from Next47 and Microsoft’s M12 venture funding arm.

Wandelbots will use the funding to help speed the market debut of its TracePen, a hand-held, code-free device that allows human operators to quickly and easily demonstrate desired behavior for industrial robots to mimic. Programming robots to perform specific tasks typically requires massive amounts of code, as well as programmers with very specific, in-demand skill sets; Wandelbots wants to make it as easy as simply showing a robot what you want it to do – and then showing it a different set of behaviors should you need to reprogram it to accomplish a new task or fill in for a different part of the assembly line.

The software that Wandelbots developed to make this possible originally sprang out of work done at the Faculty of Computer Science at the Technical University of Dresden. The startup was a finalist in our TechCrunch Disrupt Battlefield competition in 2017, and raised a $6.8 million Series A round in 2018 led by Paua Ventures, EQT Ventures and others.

Wandelbots already has some big-name clients, including industrial giants like Volkswagen, BMW, Infineon and others, and as of June 17, it’ll be launching its TracePen publicly for the first time. The company’s technology has the potential to save anyone who makes use of industrial robots many months of programming time and the associated costs – and could ultimately make this kind of robotics practical even for smaller companies for whom the budgetary requirements previously put it out of reach.

I asked Wandelbots CEO and co-founder Christian Piechnick via email whether their platform can overcome some of the challenges companies including Tesla have faced with introducing ever-greater automation to their manufacturing facilities.

“The reversals regarding automation were caused by the inflexibility, complexity and cost introduced by automation with robots,” Piechnick told me via email. “People are usually not aware that 75% of the total cost of ownership of a robot comes from software development. The problems introduced by robots were killing the benefit. This is exactly the problem we are tackling. We enable manufacturers to use robots with an unseen flexibility and we dramatically lower the cost of using robots. Our product enables non-programmers to easily teach a robot new tasks and thus, reduces the involvement of hard-to-find and costly programmers.”

TracePen, the device and companion platform that Wandelbots is launching this week, is actually an evolution of the company’s original vision, which focused more on using smart clothing to fully model human behavior in real time in order to translate it into robotic instruction. The pivot to TracePen employs the same underlying software tech, but meets customers much closer to where they already are in terms of processes and operations, while still providing the same cost-reduction benefits and flexibility, according to Piechnick.

I asked Piechnick about COVID-19 and how that has impacted Wandelbots’ business, and he replied that in fact it’s driven up demand for automation, and efficiencies that benefit automation, in a number of key ways.

“COVID-19 has impacted the thinking on global manufacturing in various ways,” he wrote. “First there is the massive trend of reshoring to reduce the risk of globally distributed supply chains. In order to scale volume, ensure quality and reduce cost, automation is a natural consequence for developed countries. With a technology that leads to almost immediate ROI and extremely short time-to-market, we hit a trend. Furthermore, the dependency on human workers and the workplace restrictions (e.g., distance between workers) increases the demand for automation tremendously.”

Internet of Everything vs Internet of Things: What’s the Difference?

Unless you’re an expert, there appears to be little difference between the Internet of Things (IoT) and the Internet of Everything (IoE). However, the latter term is semantically broader. In this post, we’ll go into the details to explain why IoT software development companies use the term IoE comparatively rarely.

The Difference

The term IoT was coined in 1999 to refer to machine-to-machine, or M2M, communication. IoE appeared a few years later, to describe interrelated elements of a whole system, including people. IoE entails not only M2M communication but also P2M (people-to-machine) and even P2P (people-to-people) communication.

To understand the differences between the three types of communication, let’s consider several examples. Say it got dark outside and you turned on a light in the office, then you sat and typed on a keyboard. This scenario provides P2M examples of IoE.

We are so used to these things that we don’t even realize they are part of a system. Another example: You make a Skype call to your colleague. That’s a simple human-to-human, or P2P, communication. An example of M2M communication, on the other hand, is the process of data exchange between your office temperature sensing devices and the HVAC mainframe.

You might think M2M communication, being technological, is the most progressive means of interaction, but IoE treats P2M and P2P interactions as the most valuable. According to a Cisco analysis, by 2022, 55% of connections will be of these two types.

IoE is now considered the next stage of IoT development. Maybe this is why there are so few IoT development companies offering IoE development services at the moment. Internet of Things solutions are now more common and widespread.

4 Main Elements of the IoE Concept

Thing

By thing, we mean an element of the system that participates in communication. A thing is an object capable of gathering information and sharing it with other elements of the system. The number of such connected devices, according to Cisco, will exceed 50 billion by 2020. 

What are things? In the IoT, a thing could be any object, from a smart gadget to a building rig. In the IoE, that expands to include, say, a nurse, as well as an MRI machine and a “smart” eyedropper. Any element that has a built-in sensing system and is connected on a network can be a part of the IoE.

People

People play a central role in the IoE concept, as without them there would be no linking bridge, no intelligent connection. It is people who connect the Internet of Things, analyze the received data and make data-driven decisions based on the statistics. People are at the center of M2M, P2M, P2P communications. People can also become connected themselves, for example, nurses working together in a healthcare center.

Data

In 2020, it’s projected that everyone using the internet will be receiving up to 1.7 MB of data per second.

As the amount of data available to us grows, management of all that information becomes more complicated. But it’s a crucial task because, without proper analysis, data is useless. Data is a constituent of both IoT and IoE. But it turns into beneficial insights only in the Internet of Everything. Otherwise, it’s just filling up memory storage.

Process

Process is the component innate to IoE. This is how all the other elements — people, things, data — work together to provide a smart, viable system. When all the elements are properly interconnected, each element receives the needed data and transfers it on to the next receiver. The magic takes place through wired or wireless connections.

Another way to explain this is that IoT describes a network and things, while IoE describes a network, things, and also people, data, and process.

Where Is IoE Applied?

As for the market, we can say confidently that IoT is a technology for any industry. IoE technology is especially relevant to some of the most important fields, including (1) manufacturing, (2) retail, (3) information, (4) finance & insurance, and (5) healthcare.

IoE technology has virtually unlimited possibilities. Here’s one example: More than 800 bicyclists die in traffic crashes around the world annually. What if there were a way to connect bike helmets with traffic lights, ambulances, and the hospital ecosystem in a single IoE? Would that increase the chances of survival for at least some of those cyclists?

Another example: Do you realize how much food goes to waste, say at large supermarkets, because food isn’t purchased by its best-before date? Some perishable products like fruit and vegetables are thrown away due to overstocks even before they get to the market. What happens if you find a way to connect your food stocks with the racks and forklifts of the supermarket in-stock control system using IoE?

There are endless variations on uses of IoE right now, and many of them are already becoming familiar in our “smart” homes.

Summing up

In our industry, few would deny the value of IoE in improving our standard of living. Luckily, there’s a flourishing market of IoT development services. Who knows, maybe one day soon, you’ll be a “thing” in the IoE environment.

Data Communications and Flow: Focus on What You Need

When visualizing a new IoT application, carefully consider not what data flows you want, but what data flows your application needs to be successful.

Data flows are one of the key constraints in the design of any IoT application. Data flows drive not just communications cost, but also indirectly control communication technology selection, power needs, and the actual paradigm of an application’s functionality. As you begin visualizing your new IoT application, think carefully about the data and communication patterns that support your planned features.

IoT is a confluence of smart and connected in a remote device. I assume that if you’re reading this, you have a “device” side and a “user” application side that you are thinking about connecting. Your devices could be in close proximity to your user, in their home/office, or anywhere. The user application where the device data lands initially could be a smartphone or, in many cases, a cloud platform. Throughout this discussion, chances are the connection will be over a wireless link; this is by far the predominant pattern for typical IoT (non-IIoT, non-manufacturing) use cases.

First, let’s differentiate between “data” and “data flows”:

Data: what is measured.

Data Flows: what is communicated.

Sure, increased data is likely to ramp up the amount of data sent to/from your device, but more data is not a linear predictor of how much data an application needs to communicate with a user. As the power of IoT device MCU chips increases, there is a steady ability to do more processing on the device and only communicate a summary of relevant events and periodic data points.

Communications are power-hungry compared with computation and memory on an IoT device. The more you can keep your radio turned off, the more battery life remains. There are low-power wireless technologies like Bluetooth Low Energy (BLE) for near-distance communications, but what if your device is far away? Radios vary in their performance profiles, and there are numerous articles out there about WiFi vs. LoRa vs. LTE. Know your communications stack. Next, I lay out some concepts that should be considered regardless of which type of radio is in your device.

Most IoT projects fall into two broad categories. These two patterns dictate many aspects of the data flows your application will need to perform and when your communications hardware needs to be turned on.

Interactive

Interactive applications place the user and device in virtual proximity, with physical distance ranging from a few feet to miles to wherever. The communication flows bridge that physical distance. This application pattern is the most demanding from a communications perspective.

Communications that are interactive require that a device radio stay on to listen for user input. This could be constrained to a specific interval of interest, rather than 24 hours a day. Maybe the communications channel can be predictably enabled during “business hours” or only during predicted device “usage” times. The key point: keeping the radio on all the time increases power consumption considerably. This in turn makes it harder for off-grid or solar applications to harvest and store enough power.

Because the interactive application pattern is so demanding, you may find variations necessary to make things work. Consider delaying user input by minutes or even hours, opening windows for user control of the device.

Remote Monitoring

From a communication and power budget perspective, this application pattern is much easier to implement. Devices can wake up occasionally, gather and locally store data, assess the situation, and then decide if communication is required. The radio stays off until it is needed to send data to the user, then goes back to sleep.

The remote monitoring pattern can be integrated with an interactive application by making use of the moments when the radio is already turned on. When you periodically send data, check for user directives. This approach is standardized in LoRaWAN Class A devices, which listen for user input 1 and 2 seconds after transmitting their data.

IoT applications typically use protocols such as MQTT or HTTP to package their data while in transit. MQTT, HTTP, AMQP and other IoT communication protocols add protocol data to the total amount of IoT device payload data being transmitted between the device and the user. The amount of data communicated typically increases in two ways: framing overhead and keepalives.

Framing overhead is the extra data that is sent along with an application’s data to make communications more robust and reliable. Think of protocol framing as the envelope you put your physical correspondence in. In the case of MQTT, to send data, the overhead is 6 characters plus the MQTT topic name your device is publishing to. This can add up and, in some cases, exceed the size of the payload data you are sending to the user side. It is important to note that while MQTT transmits these extra characters with your message payload, MQTT is more efficient than AMQP and HTTP, which is why MQTT is so often used in IoT systems.

The other protocol tax is keepalive messaging (sometimes referred to as heartbeats). MQTT implementations typically perform a keepalive action every 1-4 minutes; this time period is referred to as the keepalive interval. Keepalives are not required if data transmission has been performed recently. To keep communications active, MQTT sends a 2-character-long PING when the keepalive interval ends. The keepalive interval is reset with each transmission, whether of a PING or of payload data.

Most implementations afford the ability to lengthen the keepalive interval (reduce the number of keepalives sent), but each system will typically impose some upper limit on it. Azure IoT Hub uses MQTT extensively and limits the keepalive interval to a maximum of 1177 seconds, or once every 19 minutes, 37 seconds (Understand Azure IoT Hub MQTT Support).
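As a concrete illustration, here is how a device using the open-source paho-mqtt Python client (1.x API) might stretch its keepalive interval toward that ceiling. The broker address and topic are placeholders, not values from any system mentioned above:

```python
# Sketch: lengthening the MQTT keepalive interval to cut PING traffic.
# Uses the paho-mqtt 1.x client API; broker and topic are placeholders.
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="sensor-42")
# keepalive is passed in seconds; 1177 s matches the Azure IoT Hub maximum
# cited above. With no other traffic, the client sends a tiny PINGREQ once
# per interval to keep the connection alive.
client.connect("broker.example.com", 1883, keepalive=1177)
client.loop_start()  # background network loop handles PINGs automatically
client.publish("site/device42/telemetry", '{"temp_c": 21.4}', qos=1)
client.loop_stop()
client.disconnect()
```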

When reviewing data and deciding what to send back and forth, think about ways to eliminate or reduce application data flows. When reviewing data flows, take note of how big each one is, how often data is sent, and what is going on with your communications channel when nothing is happening.

There are tools online (IoT Bandwidth Estimation Tool) to help visualize your data budget and be proactive in planning your data communications.

Time adds up… fast! Sending 500 characters of data every 20 seconds:

180 times / hour                           90 KB / hour

4,320 times / day                          2,160 KB (≈2.2 MB) / day

30,240 times / week                   15,120 KB (≈15.1 MB) / week

129,600 times / month              64,800 KB (≈64.8 MB) / month (assuming a 30-day month)

Remember: every character you send can increase costs and draw down your device’s battery.
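The sketch below reruns this budget for any payload size and send interval; it counts payload bytes only and ignores protocol overhead:

```python
# Quick data-budget estimator: payload bytes only, protocol overhead ignored.
def data_budget(payload_bytes: int, interval_s: int, month_days: int = 30) -> None:
    per_hour = 3600 // interval_s
    periods = [("hour", per_hour), ("day", per_hour * 24),
               ("week", per_hour * 24 * 7), ("month", per_hour * 24 * month_days)]
    for label, sends in periods:
        kb = sends * payload_bytes / 1000
        print(f"{sends:>7,} sends / {label:<5}  {kb:>9,.0f} KB")

data_budget(500, 20)  # reproduces the figures above: 90 KB/hour ... 64,800 KB/month
```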

Some Ideas…

  • After sampling remote data, look for ways to summarize prior to sending. For example: consider sending the maximum, minimum, average, and number of data points over a specific period (a sketch of this idea appears after this list).
  • Similarly, once a maximum and minimum are established, consider sending data only when a new outlying maximum or minimum has been observed.
  • For remote sensing consider only sending data once a day or even once a week. But send data events when something significant has been observed at the device.
  • Consider building normal limits in your device software. When the data being sensed leaves these limits, then communicate and report the event to the user.
  • Log your data locally on the device and send a block of data (a day’s or a week’s worth) at one time. Once the radio is on, use it, then shut it off. Every time the radio is turned on or off, power is wasted before and after the data is actually sent.
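As a rough illustration of the first and fourth ideas above, the sketch below buffers samples on the device and transmits only a periodic summary plus out-of-limits events. The thresholds and the send() hook are hypothetical stand-ins for your own radio or protocol layer:

```python
# Sketch: on-device summarization with event-driven alerts. The limits and the
# send() hook are hypothetical; wire send() to your own radio/protocol layer.
readings: list[float] = []
ALERT_LOW, ALERT_HIGH = 0.0, 45.0  # assumed "normal limits" for this sensor

def send(message: dict) -> None:
    ...  # placeholder: hand off to the MQTT client / radio when it is awake

def on_sample(value: float) -> None:
    """Called for every local sample; transmits only out-of-limits events."""
    readings.append(value)
    if not ALERT_LOW <= value <= ALERT_HIGH:
        send({"event": "out_of_range", "value": value})

def flush_summary() -> None:
    """Called once per reporting period (e.g., daily) while the radio is on."""
    if readings:
        send({"min": min(readings), "max": max(readings),
              "avg": sum(readings) / len(readings), "count": len(readings)})
        readings.clear()
```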

Only you can determine when a piece of data being sent is valuable. Is that piece of data something you want… or is it something you need?

Shielding Frontline Health Workers with AI

We are living through an unprecedented crisis. During the COVID-19 pandemic, healthcare workers have emerged as frontline heroes, working overtime to protect our communities from the spread of novel coronavirus. But they aren’t immune to the anxious, uncertain atmosphere the pandemic has fostered nor, indeed, the coronavirus itself.

We need to protect the first responders and hospital staff who put their wellbeing on the line to support their communities during a crisis. To my mind, that means using every tool at our disposal to the fullest — with AI chief among those at hand.

Creative Solution

There’s little doubt that the current situation demands a creative solution. The United States has become the center of the global pandemic; as of April 16th, the US had confirmed 644,188 cases and endured 28,579 deaths. Despite efforts to flatten the curve by ordering regional shut-downs and stay-at-home orders, hospitals across the country have been all but overwhelmed by incoming cases. The impact on provider morale has, according to reporting from NPR, been similarly problematic.

“Nearly a month into the declared pandemic, some health care workers say they’re exhausted and burning out from the stress of treating a stream of critically ill patients in an increasingly overstretched health care system,” NPR reporters Will Stone and Leila Fadel recently wrote. “Many are questioning how long they can risk their own health […] In many hospitals, the pandemic has transformed emergency rooms and upended protocols and precautions that workers previously took for granted.”

Hospitals are doing all they can to keep their caregivers safe and protected, but their resources are stretched far too thin. According to reports, some hospitals in high-infection areas like New York City can only afford to give healthcare workers one N95 mask every five days. Used masks are collected, disinfected, and returned on a cycle between uses. But some frontline workers worry that, given the highly contagious nature of the disease, they may not be adequately protected.

“It can be disheartening to have that feeling of uncertainty that you are not going to be protected,” Sophia Rago, an ER nurse based in St. Louis, told reporters for NPR.

We need to shield our frontline workers as much as possible. The obvious solution would be to increase stores of personal protective equipment (PPE) and N95 masks; however, given that we face a national shortfall and harsh state-to-state bidding wars over the gear, that fix seems unlikely. What we can do to at least lessen the risk of patient-to-provider transmission is to invest in AI-powered solutions that can automate some healthcare protocols and limit the need for close contact.

“Traditional processes — those that rely on people to function in the critical path of signal processing — are constrained by the rate at which we can train, organize, and deploy human labor. Moreover, traditional processes deliver decreasing returns as they scale,” a team of digital health researchers recently wrote in an article for the Harvard Business Review.

“Digital systems can be scaled up without such constraints, at virtually infinite rates. The only theoretical bottlenecks are computing power and storage capacity — and we have plenty of both. Digital systems can keep pace with exponential growth.”

These AI-powered, digitally facilitated solutions generally fall into two broad categories: disease containment and patient management.

Assessing AI’s Ability to Limit Disease Transmission

When it comes to limiting disease spread, the aim is to use AI tools to better allocate human resources while still protecting patients and staff. Take the screening system recently deployed at Tampa General Hospital in Florida. This AI framework was designed by the autonomous care startup Care.ai and is intended to facilitate early identification and interception of infected people before they come into contact with others. According to a report from the Wall Street Journal, the Care.ai tool taps into entryway cameras and conducts a facial thermal scan. If the system flags feverish signs such as sweating or skin discoloration, it can notify healthcare staff and prompt immediate intervention.
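For readers curious about the mechanics, here is a minimal sketch of what an entryway screening loop of this kind might look like. Everything in it (the FaceReading type, the read_face_temperatures driver call, the 38.0 C cutoff) is a hypothetical stand-in for illustration, not Care.ai's actual system.

```python
# Minimal sketch of an entryway fever-screening loop. All names here are
# hypothetical stand-ins; this is not Care.ai's implementation.

from dataclasses import dataclass

FEVER_THRESHOLD_C = 38.0  # common clinical fever cutoff; tune per site


@dataclass
class FaceReading:
    face_id: str
    temperature_c: float


def read_face_temperatures() -> list[FaceReading]:
    # Stand-in for the camera vendor's SDK call; returns simulated data.
    return [FaceReading("visitor-001", 36.8), FaceReading("visitor-002", 38.4)]


def notify_staff(reading: FaceReading) -> None:
    # Stand-in for a real alerting hook (dashboard, pager, SMS gateway).
    print(f"ALERT: {reading.face_id} reads {reading.temperature_c:.1f} C")


def screen_frame() -> None:
    for reading in read_face_temperatures():
        if reading.temperature_c >= FEVER_THRESHOLD_C:
            notify_staff(reading)  # intercept before contact with others


if __name__ == "__main__":
    screen_frame()  # flags visitor-002 at 38.4 C
```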

Other technology companies — Microsoft, for one — have rolled out similar remote diagnostic and alert tools in facilities across the globe. Their unique capabilities vary, but their purposes are the same: to prevent the spread of infection and provide support to overworked personnel.

As representatives for Microsoft shared in a recent press release, “[AI technology] not only improves the efficiency of epidemic prevention, but it also reduces the work burden of frontline personnel so that limited human resources can be used more effectively.”

In these resource-strapped times, the aid is undoubtedly needed.

AI’s Applications for Diagnostics and Patient Management

Fighting a pandemic is a task that requires speed. Now more than ever, providers must be able to identify infected patients quickly and accurately so that they can trace and, hopefully, contain the viral spread. But doing so is no easy task.

To borrow a quote from Forbes contributor Wendy Singer, “Analyzing test results nowadays requires skilled technicians and a lot of precious time, as much as a few days. But in our current reality, healthcare systems need to analyze thousands of results instantly, and to expose as few lab workers as possible to the virus.”

We don't have that kind of time — and we can't put our lab workers at undue risk. Thankfully, cutting-edge AI technologies may provide a solution. With AI, hospitals can automate some steps of the testing process, cutting down on the time and effort needed to process test results. These capabilities aren't just hypothetical; in the weeks since the start of the pandemic, the health tech startup Diagnostics.ai has provided laboratories in the US and UK with a diagnostic tool that streamlines the testing process by automating DNA analysis.
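To make the idea concrete, here is a toy sketch of one automatable step in PCR result processing: calling a sample positive when its fluorescence curve crosses a threshold before a cutoff cycle. The thresholds and the rule itself are illustrative assumptions, not Diagnostics.ai's method; real assay analysis involves controls and curve fitting that this omits.

```python
# Toy sketch of automated RT-PCR result calling: a sample is positive if
# its fluorescence crosses a threshold before a cutoff cycle. Illustrative
# only; real pipelines add controls, baselining, and curve fitting.

FLUORESCENCE_THRESHOLD = 0.2  # illustrative normalized threshold
CT_CUTOFF = 40                # cycles; past this, call the sample negative

def call_sample(fluorescence_by_cycle: list[float]) -> str:
    for cycle, value in enumerate(fluorescence_by_cycle, start=1):
        if value >= FLUORESCENCE_THRESHOLD:
            return f"positive (Ct={cycle})" if cycle <= CT_CUTOFF else "negative"
    return "negative"

# A simulated exponentially rising curve crosses the threshold at cycle 12,
# so it is called positive with Ct=12.
curve = [0.01 * 1.3 ** c for c in range(1, 41)]
print(call_sample(curve))
```

The value of even a rule this simple is consistency at scale: once encoded, it applies identically to the thousandth result as to the first, with no technician in the loop.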

However, the applications of AI diagnostics aren't limited to testing alone. Some have also used artificial intelligence to support population management in overstretched hospitals. One Israeli medical-device developer, EarlySense, recently developed an AI-powered sensor that can identify which patients are most likely to face complications like sepsis and respiratory failure within six to eight hours. This can give a hospital the information it needs to best allocate limited resources and staff attention.
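EarlySense's model is proprietary, so as a stand-in, the sketch below shows the general shape of such an early-warning system: continuous vitals are bucketed into point scores, in the spirit of published tools like MEWS and NEWS, and a high total flags the patient for escalation. All thresholds here are illustrative assumptions.

```python
# Simplified early-warning score over continuous vitals. Thresholds are
# illustrative, loosely in the spirit of MEWS/NEWS; not EarlySense's model.

from dataclasses import dataclass

@dataclass
class Vitals:
    respiratory_rate: int  # breaths/min
    heart_rate: int        # beats/min
    spo2: int              # blood oxygen saturation, %

def early_warning_score(v: Vitals) -> int:
    score = 0
    if v.respiratory_rate >= 25 or v.respiratory_rate <= 8:
        score += 3
    elif v.respiratory_rate >= 21:
        score += 2
    if v.heart_rate >= 131 or v.heart_rate <= 40:
        score += 3
    elif v.heart_rate >= 111:
        score += 2
    if v.spo2 <= 91:
        score += 3
    elif v.spo2 <= 93:
        score += 2
    return score

def triage(v: Vitals) -> str:
    s = early_warning_score(v)
    return "escalate: clinical review" if s >= 5 else f"continue monitoring (score {s})"

# Elevated respiration, tachycardia, and borderline SpO2 score 3 + 2 + 2 = 7.
print(triage(Vitals(respiratory_rate=26, heart_rate=118, spo2=92)))
```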

No AI innovation — no matter how brilliant or helpful — will fix our resource shortfall. There is no question that healthcare providers need more PPE and support, or that they need it immediately. But the benefits that AI brings to screening and patient-management efforts are evident. It seems reasonable that we at least consider the weight that deploying such tools could lift from our exhausted frontline workers' shoulders.

Read More

Posted on

What Blockchain Could Mean for Your Health Data

Executive Summary

Data is one of the best tools we have for fighting the Covid-19 outbreak, but right now health data — like consumer data — is held in silos across many different institutions and companies. And while third parties can track, trade, and negotiate that data, the people who create it, and who have the biggest stake in it, are often cut out of the deal. Their virtual self doesn't belong to them, which creates problems of access, security, privacy, monetization, and advocacy.

Blockchain can be used to solve these issues by putting individuals in control of their data, which would be encrypted and stored in a distributed network that no single entity owns. Putting people in control of their data, and their health data in particular, would allow them to decide who has access to it and what they're allowed to do with it. It would also allow secure sharing of data for critical public health purposes, such as contact tracing, without compromising privacy. It's time that we reclaim our data as an asset that we create, and which we should both control and benefit from. Healthcare data is a perfect place to start.

fotofrog/Getty Images

Big data is perhaps the most powerful asset we have in solving big problems these days. We need it to track and trace infection, manage healthcare talent and medical supply chains, and plan for our economic futures.

But how can we balance data and privacy? Legislation and regulation of big data such as the European Union’s General Data Protection Regulation (GDPR) and California’s Consumer Privacy Act are partial measures at best. Regulators and pundits have focused so much on the demand side of the data equation — that is, on the use or sale of private citizens’ data in corporate applications like Facebook, Google, and Uber without the individuals’ awareness — that they’ve failed to look at the supply side of data: where data originates, who creates it, who really owns it, and who gets to capture it in the first place.

The answer is you do. All these data are a subset of your digital identity — the “virtual you,” created by your data contrail across the Internet. That’s how most corporations and institutions view you. As Carlos Moreira, CEO of WISeKey, said, “That identity is now yours, but the data that comes from its interaction in the world is owned by someone else.”

It’s time we started taking our personal data as seriously as the top tech firms do. We need to understand its real value to us in all aspects of our lives. Blockchain technology can help us do that, enabling us to use our data proactively and improve our well-being. And while there are many areas where taking control of our data might improve our lives, there is one particularly promising place to start: healthcare data.

Why should we care about our health data?

“Imagine if General Motors did not pay for its steel, rubber, or glass — its inputs,” economist Robert J. Shapiro once said. “That’s what it’s like for the big Internet companies. It’s a sweet deal.” It’s also a real conundrum for business leaders who want as much data as they can get for their enterprise, yet truly value privacy and individual freedom. Consider the tradeoffs we’re making as individuals:

  • We can’t use our own data to plan our lives and long-term healthcare: our treatment plans, the pharmaceuticals and medical supplies we use, our insurance or Medicare supplements, or how we use our health savings accounts. All these data about us reside in other people’s silos — in the separate databases of myriad healthcare providers, pharmacies, insurance companies, and local, state, and national agencies — which we can’t access but third parties like the American Medical Collection Agency (AMCA) can, and often without our knowledge.
  • We enjoy none of the rewards of this data usage, yet bear most of the risk and responsibility for its clean up if it’s lost or abused. In 2019, AMCA was hacked, and the hackers made off with the personal data of some 5 million people whose lab tests were handled by AMCA’s clients Quest Diagnostics, LabCorp, BioReference Lab, and others. None of these clients have to deal with the tsunami of fraud alerts and bespoke phishing scams aimed at patients. Yet, unlike Alectra, Amazon, or Tesco, these parties aren’t using our data to improve our healthcare outcomes or cut our costs. To us, this is data malpractice.
  • We can't monetize or manage these data assets for ourselves, family, or heirs — think of Henrietta Lacks, whose cancer cells revolutionized the development of cancer treatment without her knowledge — resulting in a bifurcation of reputation, wealth, and all its discontents. Those who lack access to the Internet altogether may not have data profiles or privacy problems per se, but they often don't have official identity cards, home addresses, or bank accounts either, and so they can't participate in the global economy. These aren't people without papers. These are people without data.
  • Our privacy is at risk all the time, as is our family's. The Chinese government used mass surveillance to gain some measure of control over the spread of Covid-19, tracking data about who specifically was infected, where they lived, when they were infected, when they recovered, how they were infected, whether they sheltered in place, what temperature they had when they went outside, and who else they contacted. Privacy is the foundation of freedom, and while sometimes — perhaps in a pandemic — we may choose to trade on this privacy for the social good, the trouble is that once the crisis is over, we have no way to reclaim or mask our data.
  • We can’t develop or contribute to the proposed health policies of elected officials, we can’t effectively advocate for the changes our family needs, and we can’t collectively bargain with other patients or powers of attorney to lower costs or improve delivery — yet every other party in the system can do all these with our data, not just negotiating coverage and rates with governments but lobbying them for industry-favorable regulations. The Pharmaceutical Research and Manufacturers of America alone spent a record $27.5 million on lobbying in 2018, with individual companies supplementing these efforts to the tune of $194.3 million.

With wearables and the Internet of Things, we can increasingly capture our insulin levels, blood pressure, and the number of steps we take and stairs we climb in a day. By owning our medical and other personal data, we could solve the five problems stated above: access, security, privacy, monetization, and advocacy. The key is to take advantage of existing technologies to manage our data according to our own terms of use.

How patient control over health records could expedite data for treatments

Pioneers like Canada's University Health Network (UHN) have come up with a win-win solution using blockchain technology: software that operates as a shared ledger distributed across computer devices connected to a communications network. What sets this type of ledger apart from the interfaces to conventional databases or health record repositories is a) its decentralization, which means we can control transactions involving our data peer to peer, and b) its immutability, in that no one can alter or undo those transactions behind the scenes or without a majority of the network's approval.
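The immutability property is easy to demonstrate. In the sketch below, each block stores a hash of its predecessor, so any after-the-fact edit to an old record breaks every later link. This shows the chained-hash idea only; the consensus and access-control layers of real platforms are left out.

```python
# Minimal sketch of a tamper-evident, blockchain-style ledger: each block
# stores the hash of its predecessor, so altering any past record changes
# every subsequent hash. Consensus and access control are omitted.

import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list[dict], record: dict) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64  # genesis marker
    chain.append({"prev_hash": prev, "record": record})

def verify(chain: list[dict]) -> bool:
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False  # a past block was altered after the fact
    return True

chain: list[dict] = []
append_block(chain, {"event": "consent granted", "patient": "p-001"})
append_block(chain, {"event": "data accessed", "party": "research-lab"})
print(verify(chain))                     # True
chain[0]["record"]["patient"] = "p-999"  # tamper with history
print(verify(chain))                     # False: the chain exposes the edit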

In 2018, UHN launched a patient control-and-consent platform to enhance the patient experience and to facilitate clinical research using patient data. Designed after workshops with different stakeholder groups and developed in partnership with IBM, the platform leverages blockchain not simply to secure and consolidate patient data across the network, but also to obtain and record patient consent before any information is shared with researchers. When patients consent, the software automatically encrypts and records details of the consent transaction on the shared ledger. The platform also records which parties accessed the data, at what time, and for what purpose.
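In code, the consent-gating behavior described above might look something like the following sketch. The field names and the in-memory ledger list are assumptions for illustration, not UHN's or IBM's schema; the point is that data is released only against a matching consent record, and that every access attempt is itself appended to the ledger for later audit.

```python
# Sketch of consent-gated data access with an auditable access log.
# The in-memory "ledger" and field names are illustrative assumptions.

from datetime import datetime, timezone

ledger: list[dict] = []  # stand-in for the shared, distributed ledger

def record_consent(patient_id: str, recipient: str, purpose: str) -> None:
    ledger.append({"type": "consent", "patient": patient_id,
                   "recipient": recipient, "purpose": purpose,
                   "at": datetime.now(timezone.utc).isoformat()})

def access_data(patient_id: str, recipient: str, purpose: str) -> bool:
    consented = any(e["type"] == "consent" and e["patient"] == patient_id
                    and e["recipient"] == recipient and e["purpose"] == purpose
                    for e in ledger)
    # Every attempt, allowed or not, is logged for audit.
    ledger.append({"type": "access", "patient": patient_id,
                   "recipient": recipient, "purpose": purpose,
                   "allowed": consented,
                   "at": datetime.now(timezone.utc).isoformat()})
    return consented

record_consent("p-001", "research-lab", "clinical study 42")
print(access_data("p-001", "research-lab", "clinical study 42"))  # True
print(access_data("p-001", "insurer-x", "risk scoring"))          # False
```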

This kind of functionality can be expanded to uses such as contact tracing. Imagine a scenario in which the UHN solution were interconnected with healthcare facilities across Canada, so that every Canadian patient had an opportunity to share personal data, including location over time. With such “a platform for reporting, tracking, and notifying that is global in nature and respects privacy,” said Brian Magierski of the Care Chain collaboration, we can “identify new cases rapidly and verify those who have immunity.” To that effect, the start-up Workwolf has invited the Canadian government to use its proprietary blockchain for tracking Covid-19 cases, immunity or resistance, and test results. And Vital Chain is turning clinically certified results into blockchain-based health and safety credentials for employees to prove their fitness for returning to work.

If we applied these capabilities at a global scale, we could capture a single, comprehensive account of global incidence rates and outcomes that was verified and secure. That's what the start-up Hacera is trying to do. With the support of IBM, Microsoft, Oracle, the Linux Foundation, and others, it launched MiPasa, an initiative to integrate, aggregate, and share information at a global scale from multiple verified sources — from the Centers for Disease Control and Prevention or the World Health Organization, but also hard-to-get data from local public health agencies, licensed private facilities, and even individuals — all without personal identifiers. MiPasa onboards data providers through Hacera's Unbounded network, a decentralized blockchain powered by Hyperledger Fabric, and then streams data using the IBM Blockchain platform and IBM Cloud. Hacera has developed a tutorial for coders to build applications on top of the platform. This kind of value creation is the gigantic incentive needed to rally numerous institutions so that we can trace people's exposure to infected individuals, reduce transmissions, save lives, and put more people back to work.

Finding a Covid-19 vaccine is a top priority. To accelerate discovery, the blockchain start-up Shivom is working on a global project to collect and share virus host data in response to a call for action from the European Union's Innovative Medicines Initiative. Shivom scientists formed a global Multi-Omics Data Hub Consortium composed of universities, medical centers, and companies, many of which have expertise in AI and blockchain, all for combatting coronavirus infections. The consortium's data hub is based on part of Shivom's blockchain-based precision medicine platform. Shivom, founded by Dr. Axel Schumacher, uses blockchain not only to manage patient consent dynamically but also to share genomic data and data analysis securely and privately with third parties anywhere, without providing access to raw genomic data. Dr. Schumacher said that researchers “can run algorithms over the data that provide summary statistics to the data sets. No individual, de-identifying data can be obtained without the explicit consent of the patient.”
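Shivom's platform is proprietary, but the principle Dr. Schumacher describes, aggregate answers without row-level access, can be sketched simply. In the toy example below, a query runs over consented records and returns only a summary statistic; the minimum-cohort rule is an extra safeguard added here as an assumption, in the spirit of k-anonymity.

```python
# Sketch of a summary-statistics-only query over consented records.
# Illustrative only; not Shivom's platform. The minimum-cohort rule is an
# assumed safeguard against re-identifying individuals in tiny groups.

from statistics import mean

MIN_COHORT = 5  # refuse aggregates over very small groups

records = [  # stand-in for consented, access-controlled host data
    {"age": 34, "consented": True, "viral_load": 2.1},
    {"age": 51, "consented": True, "viral_load": 4.7},
    {"age": 47, "consented": False, "viral_load": 3.9},
    {"age": 29, "consented": True, "viral_load": 1.8},
    {"age": 62, "consented": True, "viral_load": 5.2},
    {"age": 44, "consented": True, "viral_load": 3.0},
]

def mean_viral_load(min_age: int) -> float:
    cohort = [r["viral_load"] for r in records
              if r["consented"] and r["age"] >= min_age]
    if len(cohort) < MIN_COHORT:
        raise PermissionError("cohort too small; refusing to answer")
    return mean(cohort)

print(mean_viral_load(min_age=25))  # aggregate only; no raw rows leave
```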

Transitioning to this self-sovereign future

To realize this future, we need to address the real problem: that you don’t own your virtual self. Each of us needs a self-sovereign and inalienable digital identity that is neither bestowed nor revocable by any central administrator and is enforceable in any context, in person and online, anywhere in the world. Until blockchain, we didn’t have the technological means to assert such sovereignty. Now the technical groundwork has been laid. Organizations are looking at how to deploy it in public key infrastructure, how to separate identification and verification from transactions, and how to expand the use of smart contracts, zero-knowledge proofs, homomorphic encryption, and secure multiparty computation.
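At its core, that groundwork rests on ordinary public-key cryptography: the holder generates a key pair locally, with no central administrator, and anyone can verify claims signed with it. Here is a minimal sketch using the Python cryptography package; production self-sovereign-identity stacks layer decentralized identifiers, verifiable credentials, and revocation on top of this primitive.

```python
# Minimal sketch of the cryptographic core of self-sovereign identity:
# a locally generated key pair, a signed claim, and public verification.
# Uses the `cryptography` package (pip install cryptography).

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()  # stays in the holder's wallet
public_key = private_key.public_key()       # shared freely as the identity

claim = b'{"subject": "did:example:alice", "attribute": "over 18"}'
signature = private_key.sign(claim)

try:
    public_key.verify(signature, claim)  # raises if claim or sig was altered
    print("claim verified against holder's public key")
except InvalidSignature:
    print("verification failed")
```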

Imagine having a digital identity that you store in a digital wallet on a blockchain. Your wallet collects and protects all your biological, financial, and geospatial data throughout the day, and you decide how you want to use it. Your medical records are central to this identity. Your body generates health data. You, not big companies or governments, have a heart rate and a body temperature. When clinicians measure you or take tests of various kinds, they're providing a service; the results are your asset, deriving from your body. You should control it.

What we’re shooting for is a wholesale shift in how we define and assign ownership of data assets and how we establish, manage, and protect our identities in a digital world. Change those rules, and we end up changing everything.

Read More