
How to Make Remote Monitoring Tech Part of Everyday Health Care

Executive Summary

Remote patient monitoring is a subset of telehealth that involves the collection, transmission, evaluation, and communication of patient health data from electronic devices. These devices include wearable sensors, implanted equipment, and handheld instruments. During the pandemic, such monitoring programs have proven valuable, but special measures and conditions made that possible. By encouraging regulators to make permanent the temporary measures introduced during the pandemic and by following six guidelines to integrate these programs into health care, providers can realize their tremendous promise.


By making the collection of valuable patient data feasible outside of the clinic, remote monitoring can facilitate care for conditions ranging from chronic diseases to recovery from acute episodes of care. For years, it has been touted as one of the most promising opportunities for health care in the digital age. But the pandemic has underscored its value. Indeed, policy changes introduced during the pandemic because of the riskiness of in-person patient visits have created conditions ripe for its adoption. We urge regulators to extend these changes beyond the pandemic and health care leaders to take advantage of this window of opportunity to develop, test, and improve remote-patient-monitoring programs.

What is remote patient monitoring? While “telehealth” broadly refers to all health care activities that are conducted through telecommunications technology, remote patient monitoring is a subset that involves the collection, transmission, evaluation, and communication of patient health data from electronic devices. These devices include wearable sensors, implanted equipment, and handheld instruments.

We define remote patient monitoring as the set of activities that meet four key criteria: (1) data on patients is collected remotely (e.g., in a home setting without oversight from a health care provider); (2) the data collected is transmitted to a health care provider in a different location; (3) the data is evaluated and care providers are notified, as needed; and (4) care providers communicate relevant data-driven insights and interventions to patients.
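To make these four criteria concrete, here is a minimal, purely illustrative sketch of such a data flow in Python. Every name, threshold, and function in it is hypothetical and invented for this sketch, not drawn from any program described in this article.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    """(1) A data point collected remotely, e.g., by a home device."""
    patient_id: str
    metric: str      # e.g., "spo2" for blood oxygen saturation (%)
    value: float

# Illustrative alert threshold only; not clinical guidance.
ALERT_THRESHOLDS = {"spo2": 92.0}

def transmit(reading: Reading) -> None:
    """(2) Send the home-collected reading to the remote care team."""
    evaluate(reading)

def evaluate(reading: Reading) -> None:
    """(3) Evaluate the data and notify a care provider as needed."""
    threshold = ALERT_THRESHOLDS.get(reading.metric)
    if threshold is not None and reading.value < threshold:
        notify_provider(reading)

def notify_provider(reading: Reading) -> None:
    """(4) The provider reviews and communicates an intervention."""
    print(f"Alert: {reading.patient_id} {reading.metric}={reading.value}")

# A home pulse-oximeter reading below threshold triggers follow-up:
transmit(Reading(patient_id="pt-001", metric="spo2", value=90.5))
```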

Remote Monitoring During the Pandemic

By making it possible to virtually perform medical activities that have traditionally been conducted in person, remote monitoring technologies have played a significant role in patient care during the Covid-19 pandemic. For example, providers such as Mount Sinai Health System in New York City, University Hospitals in Cleveland, Ohio, St. Luke’s University Health Network in Bethlehem, Pennsylvania, and Providence St. Joseph Health in Renton, Washington, started programs to monitor vital-sign and symptom data and assess the status of coronavirus patients. Other hospitals, such as Mayo Clinic in Rochester, Minnesota, are working to set up remote-patient-monitoring programs for non-Covid-19 patients (e.g., those with congestive heart failure).

New policies have recognized the importance of remote patient monitoring in this context. The U.S. Centers for Medicare and Medicaid Services expanded Medicare coverage for remote patient monitoring to include patients with acute conditions and new patients as well as existing patients. Moreover, the U.S. Food and Drug Administration issued a new policy allowing certain devices (FDA-approved non-invasive devices used to monitor vital signs) to be used in remote settings. Nonetheless, these changes remain temporary: They have only been authorized for the duration of the Covid-19 public health emergency. We hope that additional policies will be enacted to ensure that these programs can serve a variety of patients and conditions beyond the context of Covid-19.

Guidelines for Development and Implementation

These guidelines are drawn from our own experience managing remote-patient-monitoring programs, including one created specifically to care for Covid-19 patients, and from research on the drivers of clinical success in established programs.

The technology must be easy for both patients and clinicians to adopt and continue using. It is essential to provide both patients and clinicians with intuitive equipment and user interfaces, as well as resources for troubleshooting when needed. Clinicians should be able to easily explain the equipment to patients, and it should be easy for patients to set up and use. The patient data generated by remote monitoring should also be simple to monitor and analyze.

This need is illustrated by a trial that studied remote monitoring of patients with congestive heart failure. In this trial, study physicians could not collect data for 12 out of 66 enrolled patients because these patients were unable to properly operate the mobile-phone-based monitoring device to begin data transmission.

The tools should be incorporated into clinician workflows. Given the high burden of administrative work that clinicians already face, it is imperative to introduce remote tools that blend seamlessly into their work processes. In some cases, this may require redesigning processes in order to ensure that remote monitoring is appropriately integrated into an organization’s practices.

For example, the administrators of a diabetes management program established at Massachusetts General Hospital found that they needed to modify the existing workflow for managing patients with diabetes in order to readily identify which patients required laboratory testing. Subsequently, the program built an application that remotely monitored diabetic patients and helped coordinate responsibilities for following up with patients about laboratory testing. This redesigned workflow improved efficiency by making it easier for nurse managers to remind patients about laboratory testing.

Sources of sustainable funding must be identified and tapped. This is especially critical at a time when hospitals are struggling financially due to the huge amount of revenue they have lost from pandemic-related cancellations and delays in performing surgeries and imaging.

Reimbursement for remote-patient-monitoring programs is challenging to navigate given that individual activities eligible for reimbursement — such as device set-up, patient education, interpretation of data, and follow-up patient conversations — are reimbursed separately. Nonetheless, reimbursement for such programs has improved with the advent of risk-based models of reimbursement such as Medicare Advantage plans and accountable care organizations, which offer providers increased flexibility in allocating capital to remote monitoring programs.

Many remote-patient-monitoring programs may have to rely on other sources of funding besides reimbursement, especially to fulfill upfront capital needs. In some instances, these sources of funding may be from the provider system’s operating budget. Internal innovation grants also may support programs. For instance, a diabetes remote monitoring program at Su Clinica Familiar, a federally qualified health center, was funded through a grant by the University of Texas System. Regardless of the nature of funding, we believe it is essential to identify a committed source of capital before establishing a remote-patient-monitoring program.

Dedicate sufficient non-physician staff to operate the program. This is necessary because busy physicians will have difficulty carving out additional time to administer a program and sift through its data. For example, Ochsner Medical Center in New Orleans developed a digital hypertension program staffed by pharmacists, who monitored 6,000 high-risk patients’ blood pressure readings remotely and followed up with patients via text and email. This program resulted in a significant increase in the proportion of patients who met their blood pressure goals.

As this example demonstrates, it’s critical that staff in these roles be matched to the nature of the work. For instance, the complex tasks involved in hypertension-medication management might require a pharmacist or nurse rather than a patient navigator without clinical expertise.

Focus on digital health equity. Patients may appear to be better candidates for remote monitoring if they are younger, technologically savvy, or fluent English speakers. However, access to technology may be limited by poverty, and numerous other socio-demographic factors may influence engagement and participation in remote monitoring programs. At a time when the Covid-19 pandemic has disproportionately affected minority populations, care providers should go the extra mile to ensure that underserved patients not only have access to programs but are also provided the education and support needed to make them successful.

Start with an initial pilot and expand after demonstrated successes. Even in a pandemic setting in which time is of the essence, it is essential to demonstrate that remote patient monitoring initiatives improve clinical outcomes. Not all programs have demonstrated success, so the use of pilots can help avoid expensive mistakes. One successful program that scaled gradually is the Hospital of the University of Pennsylvania’s remote postpartum hypertension monitoring program. This program expanded from a small pilot to a larger clinical trial to the entire academic medical center based on evidence that it decreased admissions and costs associated with postpartum hypertension.

Covid-19 has created an opportunity to accelerate the adoption of remote patient monitoring as our health care system struggles to care for patients outside of the physical walls of a clinic or hospital. We encourage leaders to act decisively in establishing new programs by following best-in-class examples and guidelines. We believe that leaders who do so will spur a paradigm shift in how patient care is delivered that lasts far beyond the current crisis.



How One Boston Hospital Built a Covid-19 Forecasting System

Executive Summary

During the Covid-19 pandemic, the healthcare delivery infrastructure in much of the United States has faced the equivalent of an impending hurricane but without a national weather service to warn us where and when it will hit, and how hard. To build a forecasting model that works at the local level, the Beth Israel Deaconess Medical Center relied on an embedded research group, the Center for Healthcare Delivery Science, which reports to the CMO and is dedicated to applying rigorous research methods to study healthcare delivery questions.



The Covid-19 pandemic created an unprecedented strain on healthcare systems across the globe. Beyond the clinical, financial, and emotional impact of this crisis, the logistical implications have been daunting, with crippled supply chains, diminished capacity for elective procedures and outpatient care, and a vulnerable labor force. Among the most challenging aspects of the pandemic has been predicting its spread. The healthcare delivery infrastructure in much of the United States has faced the equivalent of an impending hurricane but without a national weather service to warn us where and when it will hit, and how hard.

To build a forecasting model that works at the local level (within a hospital’s service area, for example), the Beth Israel Deaconess Medical Center (BIDMC) relied on an embedded research group, the Center for Healthcare Delivery Science, which reports to the CMO and is dedicated to applying rigorous research methods to study healthcare delivery questions. We used a series of methods derived from epidemiology, machine learning, and causal inference to take a locally focused approach to predicting the timing and magnitude of Covid-19 clinical demands for our hospital. This forecasting serves as an example of a new opportunity in healthcare operations that is particularly useful in times of extreme uncertainty.


In early February, as the U.S. was grappling with the rapid spread of SARS-COV-2, the virus that causes Covid-19, the healthcare community in Boston began to brace for the months ahead. Later that month, participants in a biotechnology conference and other residents returning from overseas travel were diagnosed with the new disease.

It was the start of a public health emergency. To understand how to respond, our hospital needed a Covid-warning system, just as coastal towns need hurricane warning systems. Our hospital is an academic medical center with over 670 licensed beds, of which 77 are intensive care beds. We knew it was hurricane season, but when would the storm arrive, and how hard would it hit? We were uncertain about what lay ahead.

Hurricane season — but where is the storm?

Lesson 1: National forecasting models broke down when predicting hospital capacity for Covid-19 patients because no local variables were included.

Our institution turned first to national models. The most widely used national model applied curve-fitting methods (which draw a best-fit curve on a series of data points) to earlier Covid-19 data from other countries to predict future developments in the United States. National models did not consider local hospital decision-making or local-level socioeconomic factors that dramatically impact key variables like population density, pre-existing health status, and reliance on public transportation. For example, social media data showed many student-dense neighborhoods in Boston emptying after colleges canceled in-person classes at the beginning of March, which meant fewer people were in Boston to contract the virus. Another critical variable in hospital capacity forecasting, the rate of hospitalization for people with Covid-19, varied as the weeks went on, even though national models held this variable constant. For example, early on our hospital was choosing to admit rather than send home many SARS-COV-2-positive patients, even those with mild infections, because the clinical trajectory of the disease was so uncertain. Thus we needed a dynamic, hyper-local model.
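As a toy illustration of the curve-fitting approach (not a reconstruction of any specific national model), the sketch below fits a logistic curve to synthetic cumulative case counts and then extrapolates. Nothing in the fit reflects local variables such as emptying neighborhoods or changing admission practices, which is exactly the weakness described above.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, k, r, t0):
    """Cumulative cases: plateau k, growth rate r, midpoint t0."""
    return k / (1.0 + np.exp(-r * (t - t0)))

# Synthetic "observed" counts stand in for early data from other countries.
rng = np.random.default_rng(0)
days = np.arange(30, dtype=float)
observed = logistic(days, 5000, 0.4, 18) + rng.normal(0, 50, size=days.size)

# Fit the curve to the observed points, then extrapolate forward in time.
params, _ = curve_fit(logistic, days, observed, p0=[4000, 0.3, 15])
forecast = logistic(np.arange(30, 60), *params)
print(f"fitted plateau of roughly {params[0]:.0f} cumulative cases")
```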

Building our storm alert system

Lesson 2: Local infection modeling required a range of different research methods, and the trust and commitment of operational leaders who recognized the value of the work.

The hospital turned to our research center to achieve these goals. The center, which is embedded in the hospital and reports to the Chief Medical Officer (Dr. Weiss), brought applied machine learning and epidemiological approaches to construct a hyper-local alert system.


To demonstrate the feasibility of forecasting local hospital-capacity needs for managing Covid-19 patients, we built a preliminary SIR model (a traditional epidemiological framework that models the number of Susceptible, Infected and Recovered people in a population), which was integrated into our institution’s incident command structure, an ad hoc team created with members of the hospital and disaster management leadership to respond to the pandemic. However, the accuracy of SIR models depends on the accuracy of estimates of disease characteristics such as incubation time, infectious period, and transmissibility, variables that are still not well understood. Therefore, we turned to machine learning approaches, harnessing real-time data from our electronic medical record to determine these variables directly from real patients. We also gathered Covid-patient census data from multiple hospitals simultaneously, using a common machine-learning technique called multi-task learning to capitalize on limited data. These methods allowed us to estimate when the demand for hospital capacity to treat Covid-19 patients would peak and plateau — predicting the timing to within five days of the true peak and more accurately modeling the slope of the peak and decline than national models did.
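For readers unfamiliar with the framework, here is a minimal SIR sketch. The population size and the transmission and recovery parameters below are assumed purely for illustration; as noted above, these are precisely the quantities the team had to estimate from real patient data rather than take as given.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma, n):
    """Classic SIR dynamics: beta is transmissibility, gamma the recovery rate."""
    s, i, r = y
    ds = -beta * s * i / n
    di = beta * s * i / n - gamma * i
    dr = gamma * i
    return [ds, di, dr]

n = 1_000_000                 # hypothetical service-area population
y0 = [n - 10, 10, 0]          # nearly everyone susceptible at day 0
beta, gamma = 0.30, 0.10      # assumed values, for illustration only

sol = solve_ivp(sir, (0, 180), y0, args=(beta, gamma, n),
                t_eval=np.arange(0, 181))
peak_day = sol.t[np.argmax(sol.y[1])]   # day when infections peak
print(f"infections peak around day {peak_day:.0f}")
```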

Had leadership relied on national models, they would have expected a sharper peak and decline, and a peak two weeks earlier than the actual peak. Our modeling affected key decisions, including the need to bolster personal protective equipment (PPE) supplies; to gauge the necessity of even urgent procedures, and postpone them if necessary in order to assure we had the capacity to absorb the peak; and to establish staffing schedules that continued farther into the future than those originally planned.

Predicting the next hurricane

Lesson 3: Effective modeling in confusing times may require rapidly developing new methods for predicting the next storm.  

Hospitals now face a difficult challenge. We need to open our doors to the patients without Covid-19 who didn’t seek care or whose care was deferred. But how do we make sure to have enough protective equipment for safely bringing back outpatient procedures? And when can nurses who had been redeployed to our ICUs return to the floors and interventional areas such as the endoscopy suite and cardiac catheterization lab? Complicating these questions is whether we will see another rise in infections with changes in state-wide policies, reopening of schools and businesses, or a coming influenza season.

In this new phase, we now need to develop methods for understanding how people will move within a community (going to school and visiting stores, for instance) and how much they will interact with one another and, therefore, affect the risk of infection over time. To this end, we constructed a risk index for local businesses based on how their traffic as they reopen compares with pre-pandemic traffic and on whether they are indoors or partly or entirely outdoors. Businesses where visitors are densely packed in indoor spaces, especially for longer periods, have a higher risk index, meaning they are more likely to be the site of infection spread. Using our risk index, we created and validated a model for identifying such potential “super-spreader” businesses in our service area. This analysis is part of another body of research that will undergo peer review and publication and, therefore, its results are provisional. Meanwhile, we can use our work with businesses to further inform our forecasting model by examining traffic in business locations we have identified as high-risk and assessing whether incorporating these data improves the ability of our model to predict the demand on hospital capacity.
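The actual index and its underlying data are part of the research described above; the sketch below is only a hypothetical stand-in showing how such an index might combine traffic recovery, indoor exposure, and visit length.

```python
def risk_index(baseline_visits: float, current_visits: float,
               indoor_fraction: float, avg_visit_minutes: float) -> float:
    """Hypothetical index: higher means denser, longer, more indoor visits."""
    recovery = current_visits / baseline_visits   # share of traffic returning
    dwell = avg_visit_minutes / 60.0              # hours per visit
    return recovery * indoor_fraction * dwell

# A busy indoor venue scores far above a mostly outdoor one (toy numbers):
print(risk_index(1000, 900, indoor_fraction=0.9, avg_visit_minutes=75))  # ~1.0
print(risk_index(1000, 900, indoor_fraction=0.2, avg_visit_minutes=20))  # ~0.06
```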

Integrating rigorous research methods into hospital operations

Lesson 4: Given the profound future uncertainty in healthcare, small investments in trusted internal research groups that can answer operational questions with new methods can yield substantial returns.

Our institution made a prescient investment in creating an embedded and trusted research group made up of clinicians, economists, and epidemiologists studying healthcare operations. The team has brought specialized machine learning methods and expertise in extracting conclusions from messy data to quickly and accurately solve emerging real-world problems — capabilities that traditional business analytics groups are less likely to have. Other organizations can similarly unite the rigor and flexibility of methodological experts with the need to rapidly answer operational questions in dynamic and even chaotic environments.

The authors would like to thank Manu Tandon, Venkat Jegadeesan, Lawrence Markson, Tenzin Dechen, Karla Pollick and Joseph Wright for their valuable contributions to this work. 




7 Critical Lessons CEOs Are Learning In The Crisis

The COVID-19 pandemic, though devastating, is a surprisingly useful lever for improving a company’s resilience in the face not only of infectious disease, but also of other more common shocks, including cyberattacks, floods, fires, hurricanes, earthquakes, economic downturns and climate change.

For CEOs, the deadly coronavirus has instilled valuable lessons in risk management and continuity planning. Let me share seven that I’ve learned:

1. Focus on supply chains. As we’re witnessing, unexpected events can knock out operations in large geographical swaths, dealing tremendous blows to companies that lack a plan B, whether it’s having backup facilities, alternative suppliers or big inventories. Where you’re doing business matters: Although a pandemic knows no geographic boundaries, certain countries have more resilient business environments than others, foretelling which regions may be best positioned for rebounding faster as the pandemic wanes. FM Global ranks nearly 130 countries by the resilience of their business environments, overall and separately, according to 12 resilience measures like inherent cyber risk, natural hazard risk, economic productivity, supply chain visibility and control of corruption. Consider geographical resilience as a factor in your pandemic recovery strategy as you decide where to site facilities, build supply chain redundancy and cultivate markets.

2. Quantify risk. Business risk assessment is a science: it involves looking at your entire operation, determining which functions and facilities contribute most to profit, analyzing the threats to each, and shoring up the most glaring vulnerabilities first. Much has been written on institutional denial and the deceptive gut-level calculation that a catastrophe “probably won’t happen during my tenure as CEO.” Another misconception is that loss in a catastrophe is more or less inevitable. In fact, the majority of loss is preventable. The bottom line is that it’s far better for your company to prevent a loss than to experience one. Suffering an avoidable loss can have a permanent impact on your profit, market share, growth, investor confidence and overall value. Invest in prevention when and where you can; the sketch after this list illustrates the basic probability-times-impact arithmetic.

3. Plan for shutdowns. Takeoffs and landings are the riskiest parts of aviation, and the same holds for business shutdowns and restarts. Although you as a chief executive won’t get mired in the details of shutdown planning, understand that one of the biggest and potentially costliest hazards is waiting too long in the throes of a catastrophe to cease operation. This institutional reluctance to halt production (and revenue flow) even as, say, rising floodwaters are buffeting the walls of the building is another form of denial. Many millions have been lost by waiting to abandon production until it’s too late. Make sure you have a decision tree for shutdowns and a loss-prevention culture.

4. Keep watch on vacant or idled properties. When everyone’s working from home, make sure your operations team has essential personnel in all your geographies doing daily rounds for criminal activity, signs of smoke or new property damage. Ensure they keep fire protection equipment activated. It’s also a great time to commission overdue maintenance.

5. Prepare carefully for restarts. There are several big business continuity risks around resuming normal operations. Many a restart has triggered a disruption leading immediately to another shutdown. The classic case is maintenance equipment left in pipes, vessels and machines or improper reassembly after inspection. Be especially careful if your facilities have switched over to pandemic-related production (e.g., distilleries making hand sanitizer). The coronavirus is also throwing the business world a curveball on staffing: Rare is the company restarting with the same personnel levels in the same configuration. Social distancing will likely require significant modifications. Other planning considerations: Do you have sufficient inventory to restart? Will your suppliers be ready when you are?

6. Formalize and execute. All of these considerations should inform a carefully crafted business continuity plan that will spell out ahead of time procedures for emergency response, evacuation, business recovery, IT recovery, crisis communications and supply chain continuity. Ensure the plan covers the entire company and all its operations; too often, companies plan in silos and aren’t ready to coordinate during a crisis. Also plan for worst-case scenarios: It’s common to plan for 30- to 60-day disruptions, but the worst case—say, a building that burns to the ground—can disrupt an organization for a much longer period. Make sure your enterprise risk team has done all it can to ensure the resilience and readiness of key suppliers.

7. Appoint a business continuity team. Ensure that members of the team embrace the continuity plan and train on a variety of scenarios – including cyber attack, fire, natural disaster and infectious disease. When operations are back to normal, have them perform that testing on the tabletop and in the occasional drill. Refine the plan as test outcomes indicate. And don’t forget to study your role in the plan.
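As promised under point 2, here is a hypothetical sketch of the probability-times-impact arithmetic behind quantifying risk. All facilities, threats, and figures are invented for illustration.

```python
# (name, annual profit contribution $, threat, annual probability, est. loss $)
facilities = [
    ("Plant A", 40e6, "flood", 0.05, 25e6),
    ("Plant B", 15e6, "cyberattack", 0.10, 8e6),
    ("DC East", 10e6, "hurricane", 0.02, 30e6),
]

# Expected annual loss = probability x impact; shore up the worst first.
ranked = sorted(facilities, key=lambda f: f[3] * f[4], reverse=True)
for name, _, threat, p, loss in ranked:
    print(f"{name}: {threat}, expected annual loss ${p * loss:,.0f}")
```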

While the current pandemic has delivered an exogenous shock to many businesses, it is teaching valuable lessons to help you prepare your organization for the next disruption. And that’s worth considering given that a hurricane season predicted to bring above-normal activity is in the offing and other natural hazards are ever present. One thing is for certain: Your company’s stakeholders will be watching. Observe the lessons, learn from them and be the most resilient business you can be.

It will pay off.



A Covid-19 Vaccine Will Need Equitable, Global Distribution

Executive Summary
The development of possible vaccines for the Covid-19 virus is advancing at a record pace, raising hopes that one will be available for large-scale distribution in 18 to 24 months. The global distribution must be on an equitable basis. To achieve that, there are five investment priorities: financing the purchase of …



How Can’t-Close Retailers Are Keeping Workers Safe

Executive Summary
One of the biggest challenges that essential retailers face is keeping their workers and customers safe from the Covid-19 virus. Learning along the way, adhering to strong standards, and dedication are essential. The longtime practices and values of a group of model retailers including Costco, Mercadona, H-E-B, and …



Sherrill Manufacturing Co-Founders Step Up As Co-Leaders During Pandemic

During the coronavirus pandemic, the co-founders of Sherrill Manufacturing are trying to exemplify the sort of calm and determination that helped them make a success of the company in the first place – and that employees and their community are counting on to guide the company through the ongoing global crisis.



How Business Owners Are Responding to Business Disruption–and Even Finding Reasons for Optimism

Pacific Manufacturing: Crafting new work arrangements

The coronavirus has only just started to hit San Diego, but Pacific Manufacturing, a private-label sock business, has seen how quickly it could change everything. Its 12 employees in Haining, China, have finally returned to work in staggered shifts (a social distancing measure) following a two-week quarantine. Founder …



To Improve Data Quality, Start at the Source

Executive Summary

You can’t do anything important in your company without high-quality data. But most organizations focus their data-quality efforts on cleaning up errors, rather than finding and fixing the root cause of the errors in the first place. To become a more data-driven organization, managers and teams must adopt a new mentality — one that focuses on creating data correctly the first time to ensure quality throughout the process.

Part of this process requires identifying two new roles in data quality: the data customer and the data creator. The customer is the person using the data, and the creator is the person who creates, or first inputs, the needed data. People must recognize themselves as customers, clarify their needs, and communicate those needs to creators. People must also recognize themselves as creators, and make improvements to their processes, so they provide data in accordance with their customers’ needs. Once customers and creators have an open dialog, they can work together to make improvements, stopping bad data at its source.


You can’t do anything important in your company without high-quality data, and most people suspect, deep down, that their data is not up to snuff. They do their best to clean up their data, install software to find errors automatically, and seek confirmation from external sources — efforts I call “the hidden data factory.” It is time-consuming, expensive work, and most of the time, it doesn’t go well.

Even worse, cleanup never goes away! Imagine that you had cleaned all your existing data perfectly, but not addressed the problem of poor quality at the source. As you acquire new data, you will also acquire new errors that impact your work. You and your team will once again waste time dealing with errors. Cleanup as the primary means of data quality is long past its sell-by date.

Rather than fixing data quality by finding and correcting errors, managers and teams must adopt a new mentality — one that focuses on creating data correctly the first time to ensure quality throughout the process. This new approach — and the changes needed to make it happen — must be step one for any leader who is serious about cultivating a data-driven mindset across the company, implementing data science, monetizing its data, or even simply striving to become more efficient. It requires seeing yourself and the role you play in data in a new way, all the while identifying and ruthlessly attacking the root causes of errors, making them disappear once and for all.

Insight Center

Eliminating most root causes is surprisingly easy. For instance, at one health clinic, staff often had difficulties contacting patients post-visit when they needed to schedule more tests, change medications, and so forth. No one knew how frequently this occurred or exactly how much time was wasted, but it could impact patients’ health and it was frustrating for the staff.

So employees in the clinic looked at the data associated with the last 100 patient visits and found that the phone number was wrong for 46 of them. They reviewed their procedures and found that no one was responsible for obtaining that data. They made a simple change: When patients checked in, the front-desk person asked them to verify their phone numbers. It was the first thing they requested upon arrival: “It is nice to see you again, Ms. Jones. Can I confirm your cell phone number?” This clinic re-measured a couple of weeks later — errors in cell phone numbers were virtually eliminated.

The process the health clinic used appears universal: sort out the data you need; measure the quality of that data; identify areas where quality could be improved and determine the root cause(s); and eliminate those causes. It is remarkably flexible, easy to teach, and simple to use.
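Here is a minimal sketch of the measurement step, in the spirit of the clinic’s last-100-visits check. The records and the validity test below are invented; a real test would verify each value against the patient or another trusted source.

```python
# Toy sample of patient records; an empty phone number counts as an error.
records = [
    {"name": "Jones", "phone": "555-0101"},
    {"name": "Smith", "phone": ""},
    {"name": "Lee",   "phone": "555-0199"},
]

def has_error(record: dict) -> bool:
    """Stand-in validity check for the fields the data customer needs."""
    return not record["phone"]

sample = records[-100:]   # e.g., the last 100 visits
errors = sum(has_error(r) for r in sample)
print(f"{errors}/{len(sample)} records failed ({errors / len(sample):.0%})")
```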

Digging deeper into the example, you’ll see that the health clinic also features two important roles in data quality: the data customer and the data creator. The customer is the person using the data. The creator, on the other hand, is the person who creates, or first inputs, the needed data (note that machines, devices, and algorithms also use and create data. So the customer or creator may also be the person responsible for such machines, devices, and algorithms).

It is essential that people recognize themselves as customers, clarify their needs, and communicate those needs to creators. People must also recognize themselves as creators, and make improvements to their processes, so they provide data in accordance with their customers’ needs. In the health clinic, post-visit staff did not recognize themselves as customers and desk personnel did not recognize themselves as creators. Once they did, completing the improvement project was straightforward.

I find that quality improves quickly when teams and companies adopt this approach, take on these roles, and follow the steps. People in companies large and small, in industries as diverse as financial services, oil and gas, retail, and telecom, have used them to make order-of-magnitude improvements in billing, customer, people, production, and other types of data and, as a direct result, improved their team’s performance. In some cases, the savings come to hundreds of millions per year.

So why aren’t they the norm? It turns out that a variety of organizational and cultural issues get in the way. Like those in the clinic, many are only vaguely aware they have a problem; they think that data is the province of IT or are afraid to make the needed connections across organizational silos. Indeed, people have gotten into bad habits when it comes to data quality and bad habits are hard to break.

To see how these bad habits take root and grow, consider Laura, a saleswoman who receives contact data from the marketing department. She is well aware the data isn’t very good — she spends a couple of hours a day making corrections. Laura’s performance is based on the number of sales calls she makes successfully — and her quota is high! On any given day, it is easier to deal with the errors than to take the time to reach out to marketing, even though a small investment in time will free her up down the road.

It is easy to see Laura’s actions as justified. After all she needs to meet her quota, even in the face of bad data. But in taking it upon herself to fix the data and not communicate her needs to the marketing department, she is assuming responsibility for the quality of data created by others. And every day she further embeds a bad habit into her routine. What’s more, if anyone else were to use the same marketing data, they wouldn’t have access to Laura’s corrections, and the cycle of errors and corrections would continue elsewhere.

There is no shortage of Lauras in every job, in every department, and at every level. Without giving it much thought, too many people take the wrong approach and bake bad data-quality practices into their work!

While these issues are both subtle and powerful, any manager can take them on and adopt the mindset that “data quality means creating data correctly the first time” within their sphere of influence. Start by asking yourself if you’ve grown too tolerant of bad data and taken on the extra work it engenders. Then step into the customer role next time you experience any sort of problem. Don’t just complain, “this isn’t what I want!” Rather, think deeply about what you really need and open a dialog with data creators. Work together to make one improvement, then another, and another. After a short time, this will become second nature.

At the company level, senior leaders must insist that everyone take on these roles. Toward that end, I recommend that a small but mighty team of data quality professionals form up and administer an overall program, train people on how to do the work, help customers and creators connect, and assist when difficulties arise.

Shifting paradigms is difficult. Fortunately, creating data correctly the first time pays great dividends. It saves time and money — and possibly, as in the case of the health clinic, sometimes lives! It builds confidence in the data and leads to better decisions. All of us are data customers and data creators. Taking on these roles helps people build the right mindset around data quality and stop data problems before they begin.

Source: HBR.org


This CEO Predicts Cannabis Failure In 2020

We’ll see the first major cannabis company fail in 2020. I’m calling it now.

The truth is, the panic around the Great Cannabis Armageddon is mostly justified. Over the past few years, there’s been a perspective, pervasive among investors, that cannabis stocks – any cannabis stock – would result in windfall returns. But investors are now seeing that this isn’t the case at all. Stocks have plummeted. We’ve already seen the first medium-sized company, Wayland Group Corp., go into receivership. This trend will continue in 2020, and I see three contributing factors for why that is.

First, the rollout of cannabis retail has been far too slow across Canada. These LPs (licensed producers) have developed business models based on supply meeting demand, but for that to have worked, we would have needed far more retail stores to open than we’ve seen, particularly in the British Columbia and Ontario markets. As a result, these LPs are burdened with oversupply, causing them to miss revenue projections, which in turn has caused a massive sell-off due to underperformance. Downward pressure, right there.

Second, these companies weren’t built on profitable business models; rather, they were capital-intensive because capital was so easy to find. Rather than building revenue on their own, these companies acquired it, either by merging with or acquiring other companies or by buying other companies’ assets, at over-inflated values. Come financial-reporting time, they’ve needed to either write down or completely write off some of these acquisitions. As a result, their market caps are negatively affected because investors are viewing these assets as major liabilities. More downward pressure.

Finally, all of this has created an unsustainable model that relies on continual capital raises – but that capital has all but dried up. What’s still available is either very expensive or toxic or both. What we’re seeing across the industry are rickety financial models and convertible debentures that will start to strain cashflow, if they haven’t already. That means these companies will have trouble becoming cashflow positive because they can’t service their debt. These debts have to convert at some point, but because the market has gone down, the conversion prices are way too high. Investors are simply going to want their money back, but it’s likely that these companies will have trouble actually paying it back. Receivership is inevitable for some of them.
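To see the conversion squeeze in concrete terms, consider this sketch with invented numbers: a debenture that converts at $2.00 per share is worth far less converted than repaid once the stock trades at $0.50, so holders demand cash the issuer may not have.

```python
face_value = 10_000_000      # hypothetical $10M convertible debenture
conversion_price = 2.00      # fixed back when the market was frothy
market_price = 0.50          # after the sell-off

shares_on_conversion = face_value / conversion_price
value_if_converted = shares_on_conversion * market_price

print(f"convert: ${value_if_converted:,.0f} vs. repay: ${face_value:,.0f}")
# convert: $2,500,000 vs. repay: $10,000,000 -> investors want their cash back
```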

Like I said, this is already underway. Wayland filed for creditor protection in early December, which I think is really just the beginning of a wave of similar scenarios that will play out across the industry in 2020.

But there is some hope for investors in this scenario. Cannabis stocks have been oversold, so quite a few of the good companies are undervalued. This is creating an entry point for long-term investors, like family offices, which is who Wildflower is now talking and marketing to in the US, Europe and Asia. These family offices are the best type of investors because they’re long-term holders who are seeing real value in companies that have developed smart, sustainable business models.

The truth is, the shift in the markets for cannabis companies was inevitable. It’ll also be a net positive for the industry, particularly for the companies that have been built on sustainable, profitable business models. The market will sort itself out. The weak companies will either fail completely, or be absorbed by the stronger ones, and we’ll see more sustainable and innovative players survive and stake their claim. As they should.

Source: ChiefExecutive.net