Jimmy Qian and Lucia Huang, the co-founders of a new clinical practice management and data analysis platform for mental health providers focusing on cutting edge psychedelic treatments, met at Stanford University.
The two both come from healthcare backgrounds. Huang, whose mother was a biomedical engineer, was a healthcare-focused associate at Warburg Pincus and then worked at the startup Verge Genomics before heading to Stanford’s business school while Qian was in medical school at Stanford.
Both also went to high school in the Bay Area and were intimately familiar with the mental health crisis affecting the communities around Silicon Valley.
Qian worked on a few non-profits in the mental health space through his undergraduate years at Penn and then again in the Bay Area while he was at Stanford.
Osmind’s founders say the goal for their young startup is to help patients access innovative mental health treatments by providing clinicians and pharmaceutical companies with software and services that make both the provision of care and proof of treatment efficacy more readily available.
There are 11 million Americans who are resistant to most mental health therapies, according to Huang and Qian. Those patients can cost the healthcare system as much as $250 billion, they said. “Nobody has been able to help this patient population,” said Huang in an interview. “Pharma doesn’t develop drugs for them.”
Now graduating with Y Combinator’s latest cohort of companies, Osmind, a public benefit corporation, intends to aggregate data from the sickest patient population and provide that data to drug developers for clinical trials and to help insurers route patients to the treatment providers that can benefit them the most, according to Qian.
The company, which launched its services two months ago, already has 30 practices using its software covering 3,000 patients.
“The beauty of all of this is that it’s a win-win for everyone,” said Huang. Providers get a software platform that streamlines administrative tasks and provides patient outreach and remote monitoring services. They also have a web portal that allows them to view patient progress.
Qian said it’s a service designed for physicians who are not necessarily technically savvy. It also provides a dataset that can be used to clinically validate some of these more experimental forms of therapy, including psychedelics and ketamine treatment.
“We improve the care journey,” said Qian. “These are clinics that don’t have the manpower to do that. You can’t call your patients every single day.”
Researchers at Vanderbilt University Medical Center are using a novel, data-driven “target trial” framework to investigate the efficacy and safety of medicines in pregnant women, who are underrepresented in randomized controlled trials (RCTs). The approach leverages observational data in electronic health records (EHRs) to spot connections between real-world drug exposures during pregnancy and adverse outcomes in the women’s offspring—an exercise that can be expedited with commonly used machine learning models like logistic regression.
So says Vanderbilt undergraduate student Anup Challa, founding director of the investigative collaboration MADRE (Modeling Adverse Drug Reactions in Embryos). The group works in the emerging field of PregOMICS that applies systems biology and bioinformatics to study the efficacy and safety of drugs in treating a rising tide of obstetrical diseases. Partnering institutions are Northwestern University, the National Institutes of Health, and the Harvard T.H. Chan School of Public Health.
The concept of target trials was first mentioned in epidemiology literature a decade ago, says Challa, and is only starting to gain traction. “Target trials really hinge on retrospective analysis of existing data using machine learning methods or other kinds of inferential statistics.”
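As a toy illustration of that kind of retrospective analysis (the cohort, effect sizes, and confounder below are synthetic assumptions for the sketch, not MADRE’s actual pipeline or data), a logistic regression can estimate an exposure–outcome association from observational records while adjusting for a confounder:

```python
# Minimal target-trial-style sketch on synthetic data: estimate the
# association between a prenatal drug exposure and an adverse outcome,
# adjusting for maternal age as a confounder.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic cohort: age drives both exposure and (weakly) the outcome;
# the exposure's true effect on the outcome is an odds ratio of ~e^0.8.
age = rng.normal(30, 5, n)
p_exposed = 1 / (1 + np.exp(-(age - 30) / 5))
exposed = rng.random(n) < p_exposed
logit = -3 + 0.8 * exposed + 0.05 * (age - 30)
outcome = rng.random(n) < 1 / (1 + np.exp(-logit))

# Adjust for the confounder by including it as a covariate.
X = np.column_stack([exposed.astype(float), age])
model = LogisticRegression(max_iter=1000).fit(X, outcome)

# The exponentiated exposure coefficient is the adjusted odds ratio.
odds_ratio = float(np.exp(model.coef_[0][0]))
print(f"Adjusted odds ratio for exposure: {odds_ratio:.2f}")
```

A real target trial would additionally emulate trial design elements (eligibility, time zero, treatment strategies) before fitting such a model; this sketch shows only the statistical core.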
A study coming out of Vanderbilt a few years ago looked at the effects of pregnant patients’ genomics on outcomes in their neonates and found harmful single-nucleotide mutations on key maternal genes that mimicked patients taking inhibitory drugs, says Challa. Specifically, the research team conducted a target trial to learn that these mutations on the gene PCSK9, which controls cholesterol levels, led mothers to deliver babies with spina bifida.
That was a signal that mothers ought not to be taking PCSK9 inhibitors, Challa continues, which are “becoming of increasing interest to physicians for treating hypercholesterolemia.” It also meant common genetic variants could serve as a proxy for drug exposures in target trials when insufficient prescription data exist in pregnant people’s records.
A probability value generated by a machine learning algorithm would not be “sufficiently indicative” of a drug safety signal to warrant immediate interrogation in humans, says David Aronoff, M.D., director of the division of infectious diseases at the Vanderbilt University Medical Center. But, as he and his MADRE colleagues argued in a recent paper published in Nature Medicine (DOI: 10.1038/s41591-020-0925-1), target trials are a viable and potentially more definitive way to assess fetal safety than animal models or a drug’s effect on cells in a dish.
The ultimate goal with target trials is to simulate the level of safety and efficacy testing done in RCTs with non-pregnant populations as a matter of health equity for people who for ethical or logistical reasons can’t be enrolled, says Challa. But where they fit into the regulatory framework for drugs has yet to be defined, or even explored.
Next Step: Tissue Modeling
Aronoff thinks of target trials as “reverse engineering” the normal drug development process, which typically starts in a petri dish on the bench then advances to animal models and finally clinical trials outside the pregnant population that (if all goes well) leads to an indication for use. “We’re trying to take existing, real-world data about the use of those drugs in pregnancy to identify [safety] signals… some sort of problem in the development of the fetus in utero that ends up showing itself either during pregnancy or postpartum in the offspring. If there is a mechanistic basis for that, then we can now go backwards to the bench and try to understand whether there is a causal relationship.”
Organ-on-a-chip technologies and other advances in tissue modeling can be particularly good at recapitulating drug exposure information, “particularly in the context of what is happening in the pregnant uterus,” says Aronoff. His MADRE colleague Ethan Lippmann, Ph.D., in Vanderbilt’s department of chemical and biomolecular engineering, has been building three-dimensional models of brain development that could be used as a platform for testing the teratogenic effects of drugs (or metabolites of those drugs) on neural development and neural outcomes like seizure disorders or microcephaly.
Aronoff, who is also a professor of obstetrics and gynecology at Vanderbilt, is keenly interested in seeing three-dimensional organotypic models of the placenta exposed to various drugs, metabolites and toxins of interest—and serially to other organ models that might include the brain, heart and musculoskeletal system. The different models could be viewed as “cartridges” that get plugged in based on signals seen in the machine learning study.
“We’re trying to look at organ development and organ function in this better, more innovative context,” says Aronoff, which would add to what is learned from target trials.
The potential of target trials is both about discovering and investigating drug safety, says Aronoff. “Most drugs have never been clinically tried in a randomized, placebo-controlled way in pregnancy and, even if they have been, it’s uncertain that anyone was paying close attention to outcomes not only for the fetus but in early childhood and [beyond]. But when you have electronic health records that couple mothers and their exposures with their offspring sometimes even years later, you have the power to discover for the first time an association that no one knew about.”
That first level of discovery—e.g., a higher level of prevalence of schizophrenia or autism or asthma in childhood due to exposure to a drug in the womb—prompts questions about whether the association has a mechanistic basis that may be revealing of fundamental aspects of human development, he notes.
Indeed, it should be possible to use target trials as a first step in identifying whether diseases that occur later in life are linked to an earlier stimulus or cause, adds Challa. The story of an individual’s health is influenced by factors not immediately visible, including exposures in utero that can lead to lifelong disease.
EHRs could provide researchers with the ability to evaluate people’s health from the time they were in their mother’s uterus until late in life, so they can start to think from a “systems perspective,” says Challa. When tapped by target trials, they greatly enlarge the information available to guide therapeutic choices and inform drug safety.
Many databases and patient registries exist for reproductive toxicology and the reporting of significant adverse events. But the information isn’t available in a form that’s easily manipulated by machine learning models, says Challa, making it challenging to arrive at statistically rigorous results.
The problem extends to Food and Drug Administration and National Institutes of Health datasets used in a recent study appearing in Reproductive Toxicology. “What we found and continue to find is that the data out there is not at the level it should be” for informing prescribing behavior at the point of care for pregnant women and their developing fetuses, Challa says.
The study was attempting to identify chemical features of a drug that would be predictive of its teratogenic potential and could be fed into a machine learning model to formalize those associations, he explains. Specifically, researchers looked at whether or not adverse outcomes have an “inherent structural rationale” and, if so, if a meta-structural analysis might be performed to identify known pharmacological variables (e.g., absorption, distribution, metabolism, and excretion profile) that may be the culprits. They also accessed real-world laboratory data to look for chemical structures associated with markers of disease in human tissue samples.
Recognition of the conflicting nature of adverse events data within patient registries was a key takeaway of the study, says Challa, and gave researchers “even more impetus” to focus on EHRs as a data source. But it also gave the team some structural information predictive of an adverse outcome that they can now use to cross-validate results produced by their target trial framework.
The paper highlighted a novel application of machine learning, the quantitative structure-activity relationship (QSAR), to learn about the structures of drugs and their pharmacological behaviors that are associated with teratogenicity. QSAR should also be able to make similar predictions for any new compound, says Aronoff. “It’s a separate way [than EHR mining] of interrogating drug safety to look for associations.”
The two techniques are related in that any unwanted drug effects in a fetus or offspring that are uncovered in medication-wide association studies could be plugged into QSAR, Aronoff says. Perhaps something already known about the drug’s structure could point to a causal relationship, a hypothesis which could then get tested more directly in tissue models.
Aronoff’s hypothetical example is an antidepressant drug that gets newly associated with an adverse pregnancy outcome. “Can we keep its antidepressant activity but enhance its safety by targeting the structure that is actually the bad actor?” If so, he says, medicinal chemistry stands to gain some ground.
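A minimal QSAR-style sketch might look like the following. The molecular descriptors, label rule, and compound values here are all illustrative assumptions — the published study’s actual features and model are not reproduced:

```python
# Hypothetical QSAR sketch: train a classifier on numeric molecular
# descriptors to predict a binary teratogenicity label, then score a
# new compound. All values are synthetic placeholders, not real drugs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 400

# Synthetic descriptors: molecular weight, logP, polar surface area.
descriptors = np.column_stack([
    rng.normal(350, 80, n),   # MW
    rng.normal(2.5, 1.5, n),  # logP
    rng.normal(75, 30, n),    # TPSA
])
# Toy labeling rule (an assumption): higher logP, lower TPSA raise risk.
risk = 0.4 * descriptors[:, 1] - 0.02 * descriptors[:, 2]
teratogenic = (risk + rng.normal(0, 0.5, n)) > 0

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(descriptors, teratogenic)

# Score a hypothetical new compound (placeholder descriptor values).
new_compound = np.array([[420.0, 4.0, 40.0]])
prob = clf.predict_proba(new_compound)[0, 1]
print(f"Predicted teratogenicity probability: {prob:.2f}")
```

In practice, descriptor sets come from cheminformatics toolkits and models are validated on held-out compounds; the point of the sketch is only the structure-in, risk-out shape of a QSAR model.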
Linked Patient Records
Another limitation of mining the databases where adverse events are being reported is that “some subtle, infrequent and unexpected relationships” invariably get missed, says Aronoff. Women may be on medications chronically when they give birth to a child with a teratogenic problem or later health problem and “there may be no awareness that those things are related.” It’s unreasonable to expect anyone to make the mental connection when years can separate the drug exposure and unwanted outcome.
Target trials use the power of machine learning to interrogate hundreds of thousands, if not millions, of linked patient records to find the “needles in the haystack,” Aronoff adds. “In some respects, that can be much more sensitive than relying on individual people to report some association where there may need to be an incredibly strong signal or very horrible outcomes that are chronologically associated with the exposure.”
Available adverse exposure reporting information is also mostly freeform text, making it difficult to extract for use in target trial models, says Challa. EHRs, in contrast, are much more structured and minable documents.
Vanderbilt has taken a leadership position in creating meaningful databases out of EHR information, Challa says, including the use of natural language processing to put text fields in a machine-readable format. Its BioVU DNA repository, for instance, consists of high-quality, up-to-date genomics information linked to de-identified medical records and is routinely updated and maintained by a team of on-campus IT experts. Another repository is Vanderbilt’s longstanding Research Derivative, a database of identified health records and related data drawn from Vanderbilt University Medical Center’s clinical systems and restructured for research.
Large databases of linked health records, available mainly at institutions with similar patient volume and health IT infrastructure as Vanderbilt (whose clinical databanks contain EHR information for more than 2 million patients), are what make target trials feasible, says Challa. “It is often unethical to create linkages across clinical datasets that don’t already have it.”
The proposed target trials framework will robustly input several medication exposures of interest from pregnant patients and try to associate them with a battery of developmental outcomes from the EHRs of their children, says Challa. In contrast, clinical trials typically test the potency and safety of one drug for a single disease or cluster of similar diseases.
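That many-exposures-by-many-outcomes screen can be sketched as follows, for one outcome and a panel of exposures. The drug names, prevalences, and planted signal are hypothetical, and a real analysis would adjust for confounders rather than use raw contingency tables:

```python
# Sketch of a medication-wide screen on synthetic data: test each
# exposure against an outcome, then control the false discovery rate
# with Benjamini-Hochberg before treating any pair as a signal.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(2)
n = 4000

# Hypothetical exposures: each "drug" reaches ~10% of the cohort.
exposures = {f"drug_{i}": rng.random(n) < 0.10 for i in range(10)}

# Planted signal: only drug_0 truly raises the outcome's risk.
outcome = rng.random(n) < 0.05
outcome |= exposures["drug_0"] & (rng.random(n) < 0.08)

# One chi-squared test per exposure-outcome pair.
pvals = []
for name, exp in exposures.items():
    table = [
        [int(np.sum(exp & outcome)), int(np.sum(exp & ~outcome))],
        [int(np.sum(~exp & outcome)), int(np.sum(~exp & ~outcome))],
    ]
    pvals.append((name, chi2_contingency(table)[1]))

# Benjamini-Hochberg: reject the k smallest p-values clearing alpha*k/m.
pvals.sort(key=lambda t: t[1])
m, alpha, k_max = len(pvals), 0.05, 0
for rank, (_, p) in enumerate(pvals, 1):
    if p <= alpha * rank / m:
        k_max = rank
hits = sorted(name for name, _ in pvals[:k_max])
print("FDR-controlled hits:", hits)
```

Multiple-testing control matters here because screening hundreds of drug-outcome pairs at a nominal 0.05 threshold would otherwise flood the follow-up pipeline with false positives.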
By providing a basis for causal inference, target trials are “the only ethical way to gather human drug exposure data for pregnant people on a significant scale and across all classes of drugs,” he and his colleagues argue in the Nature Medicine paper.
Within a few years, MADRE researchers hope to be inputting drug lists into a reproducible set of machine learning algorithms and statistical methods and outputting associations to several serious neurodevelopmental diseases, Challa says. Future plans also include taking positive drug-disease associations in pediatric patients and extrapolating the impact of early exposures to their later life course.
“As I like to say to my friends who are physicians and have specialty areas,” Aronoff says, “many people suffer from the diseases they care about, but every human being has experienced childbirth. We have to get that right.”
While pregnant people should be enrolled in RCTs of drugs and vaccines, Aronoff adds, “the reality is that [pregnancy] is always going to be a barrier. Target trials are a way forward.”
Expectations for customer service are higher today than a year ago, with the coronavirus pandemic fueling online shopping and challenging enterprise customer service operations, according to Customer Thermometer. That puts companies offering automation solutions in the right place at the right time.
Directly of San Francisco, cofounded by Antony Brydon, Jean Tessier and Jeff Patterson, offers a platform to integrate into call centers and provide a mix of automation and human support. Directly recently added $11 million in funding to bring its total investor commitment to $66.8 million, according to Crunchbase.
The Directly platform is trained by thousands of subject matter experts to analyze call center interactions and provide a degree of automation, according to a recent account in VentureBeat. The platform is designed to integrate with other customer relationship management platforms, including Microsoft’s Bot Framework, the Einstein Bot from Salesforce, and Dialogflow from Google. These match chatbots and human agents with customers across channels.
The Directly API enables clients to insert automatic answers into messaging channels to resolve issues in real time. The AI-powered platform determines which questions are best handled by its network of subject matter experts, who provide live assistance across channels.
Client Microsoft worked with Directly to build a network of Excel and Surface hardware users who could answer questions directly. The experts receive a cash incentive while Directly gets a 30% cut. The AI identifies the top performers on specific topics, which benefits those performers in the long run. Experts may get paid an average of $200 per week, with the top 5% making $2,000 or more per week, Directly has said.
“We prioritize the team over everything else,” stated Directly CEO Michael de la Cruz, in emailed comments to AI Trends. “And we solve a business problem of trying to make AI work.”
“Our platform helps identify and reach out to experts, folks that have a lot of contact with the business problem. We collect that knowledge and fold it into a company’s AI.”
Directly reaches the market with a direct sales force and, more recently, through partnerships with providers of virtual agents using AI who can leverage the Directly platform for their own customers. These include Percept.ai and Smart Action, independent providers of virtual agent technology.
The company is said to have grown 10% per month over the last six months. “Business is good,” said de la Cruz. “Although there is a contraction in the economy, demand for customer service is high, so we are benefiting from the increased demand.”
Butterfly Network Brings Ultrasound to a Smartphone
Butterfly Network, founded in 2011, offers technology to make a smartphone into an ultrasound machine for use by medical professionals. The late stage venture has raised $350 million so far, according to Crunchbase.
Butterfly was founded by Jonathan Rothberg, a biotechnology entrepreneur who previously led two companies that developed machines for sequencing DNA. Investors include Fidelity, the Gates Foundation and Fosun Pharma, a Chinese drugmaker.
The ultrasound device, called the iQ, referred to as Ultrasound on a Chip, is priced at approximately $2,000, plus a monthly service fee that varies by type of use. The inspiration for the iQ was personal for Rothberg, whose daughter suffered from a disease called tuberous sclerosis (TSC), which causes patients to develop tumors throughout their bodies, according to an account in Forbes. Seeing the needed treatment equipment as unwieldy, and having heard a talk on AI by MIT physicist Max Tegmark, he thought there must be a better way. He recruited one of Tegmark’s students, Nevada Sanchez, as a cofounder, and launched Butterfly Network.
Today, the iQ is at the forefront of point-of-care tests (POCTs), medical diagnostic testing at or near the point of care. The company is reported to have sold over 30,000 units in 2019. Data in the Butterfly Cloud and in the Butterfly iQ app is AES 256-bit encrypted. Butterfly Cloud is protected by HTTPS, TLS 1.2 encryption. SOC II certification secures access to the data.
The device is winning converts in the medical community. Dr. Cian McDermott, a consultant in Emergency Medicine and co-director of Emergency Ultrasound Education at the Mater Hospital in Dublin, Ireland, stated in a recent account in the Irish Times, “When I’m using the Butterfly device, I’m not trying to replace what the radiologists are doing. Point of care ultrasound is done by physicians treating patients at the bedside and interpreting and integrating the images to their care live in real time.”
One doctor called attention to the limitations of the POCT device compared to a full radiological examination. Dr. David O’Keeffe, consultant radiologist at University Hospital Galway, stated, “When non-radiologists, for example, undertake ultrasound examination of the abdomen, it is very easy to confuse loops of bowel with gallstones.”
Medical researchers are employing AI to search through databases of known drugs to see if any can be associated with a treatment for the new COVID-19 coronavirus.
An early success story comes from BenevolentAI of London, which, using tools developed to search through medical literature, identified the rheumatoid arthritis drug baricitinib as a possible treatment for COVID-19.
In a pilot study at the end of March, 12 adults with moderate COVID-19 admitted to the hospital in either Alessandria or Prato, Italy, received a daily dose of baricitinib, along with an anti-HIV drug combination of lopinavir and ritonavir, for two weeks. Another study group of 12 received just lopinavir and ritonavir.
After their two-week treatment, the patients who received baricitinib had mostly recovered, according to a recent account in The Scientist. Their coughs and fevers were gone; they were no longer short of breath. Seven of the 12 had been discharged from the hospital. In contrast, the group who didn’t get baricitinib still had elevated temperatures, nine were coughing, and eight remained short of breath. Just one patient from the lopinavir-ritonavir–only group had been discharged.
Researchers at Benevolent AI, along with collaborator Justin Stebbing, an oncologist at Imperial College London, published a letter to The Lancet on February 4, describing how they used AI to identify baricitinib’s potential to treat COVID-19.
AI “makes higher-order correlations that a human wouldn’t be capable of making, even with all the time in the world. It links datasets that a human wouldn’t be able to link,” stated Stebbing.
Benevolent researchers used the company’s knowledge graph—a digital storehouse of biomedical information and connections inferred and enhanced by machine learning—to identify two human protein targets to focus on: AP2-associated protein kinase 1 (AAK1) and cyclin g-associated kinase (GAK).
The team used another algorithm to find existing drugs that could hit the protein targets, completing the work in a few days. Drugs not approved by regulators were eliminated, cutting the list to about 30. Eli Lilly, the company that makes baricitinib, has entered into an agreement with the National Institute of Allergy and Infectious Diseases to study the drug’s effectiveness in COVID-19 patients in the US.
“Even if the trial doesn’t work, we’re going to find out a huge amount of who it might work in and when it might work,” stated Stebbing. “It’s all about personalized medicine, which means treating the right person at the right time with the right disease with the right drugs. Hopefully, this will be a powerful part of the jigsaw.”
MIT-IBM Watson AI Lab Funding 10 Projects
Elsewhere, the MIT-IBM Watson AI Lab is funding 10 research projects incorporating AI to address the health and economic consequences of the pandemic.
One project seeks to establish early detection of sepsis in COVID-19 patients. About 10 percent of COVID-19 patients get sick with sepsis within a week of showing symptoms, but only about half survive, according to the account from MIT News. Identifying patients at risk for sepsis can lead to earlier, more aggressive treatment and a better chance of survival.
In a project led by MIT Professor Daniela Rus, researchers will develop a machine learning system to analyze images of patients’ white blood cells for signs of an activated immune response against sepsis.
Another project led by MIT professors Daron Acemoglu, Simon Johnson, and Asu Ozdaglar will model the effects of targeted lockdowns on the economy and public health. The team analyzed the relative risk of infection, hospitalization, and death for different age groups. When they compared uniform lockdown policies against those targeted to protect seniors, they found that a targeted approach could save more lives. Building on this work, researchers will consider how antigen tests and contact tracing apps can further reduce public health risks.
Other studies are looking at: which material makes the best face masks; a privacy-first approach to contact tracing; overcoming hurdles to global access to a COVID-19 vaccine; and leveraging electronic medical records to find a treatment for COVID-19.
A COVID Symptom Study app created by researchers at King’s College London and Massachusetts General Hospital in Boston aims to predict who is at risk of having the COVID-19 virus, without a test. The app has been downloaded by over three million people worldwide. A prediction system was developed by examining data from 2.5 million people in the UK and US who actively used the app to update their health status between March 24 and April 21.
When the AI-based model was applied to over 800,000 app users who reported symptoms, it revealed that some 17 percent were likely to have coronavirus, information that could be of high value especially in heavily populated areas.
Vanderbilt University Researcher Also Working with King’s College
A tool in development at Vanderbilt University to study human immune responses to rhinovirus, a cause of the common cold, is being applied to Covid-19-related research in partnership with King’s College London and Guy’s and St Thomas’ NHS Foundation Trust. The research is being led by Jonathan Irish, associate professor of cell and developmental biology and scientific director of the Cancer & Immunology Core at Vanderbilt.
In the race to understand the inner-workings of COVID-19, the tool helps by parsing through vast quantities of data to identify extremely rare immune cells that specifically respond to viruses.
The tool employs aspects of high dimensional (HD) cytometry, a technique that takes measurements of many features of a single blood cell simultaneously. The resulting huge volume of data is challenging to analyze. “We think that HD cytometry can be particularly useful in understanding COVID-19,” stated Irish in a press release from Vanderbilt.
The quickly-developing trial was to begin treating 19 patients the last week of May. The research hopes to identify immune cells that are reacting to the virus, on the order of a couple of hundred in a sample of 10 million blood cells.
The goal of the joint research is to identify which human immune cells are specific to coronavirus infections, and distinguish these cells from each person’s immune fingerprint. “Understanding and identifying the types of immune cells that help to fight off the virus could help us optimize vaccine and treatment strategies,” Irish stated.
Researching the Best Strategies for Exiting Social Distancing
How best to exit the isolation strategies for dealing with COVID-19 is the subject of experimentation at the University of Luxembourg’s SnT, the Interdisciplinary Centre for Security, Reliability and Trust. The idea of the research is to make it possible for governments around the world to analyze how various exit strategies will impact the spread of COVID-19 over a six-month time frame.
Yves Le Traon, vice-director of SnT, brought together two teams to collaborate on this project. To generate its predictions, the tool uses data publicly available from the Google COVID-19 dataset, as well as data from Johns Hopkins University. A user is able to understand how policies related to each activity impact the spread of the disease, by selecting a country and changing the value that represents the intensity of any given isolation measure.
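The core idea can be pictured with a toy compartmental model — this is an illustrative sketch under simple assumptions, not SnT’s actual methodology or parameters — in which an isolation-intensity value scales the contact rate and the epidemic is projected over a 180-day horizon:

```python
# Toy SIR model: "intensity" in [0, 1] represents how strict isolation
# measures are; higher intensity scales down the effective contact rate.
def simulate_sir(intensity, days=180, n=1_000_000, i0=100,
                 beta=0.3, gamma=0.1):
    """Peak simultaneous infections under a given isolation intensity."""
    s, i, r = n - i0, i0, 0
    b = beta * (1 - intensity)  # stronger isolation -> fewer contacts
    peak = i
    for _ in range(days):
        new_inf = b * s * i / n   # new infections this day
        new_rec = gamma * i       # new recoveries this day
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

# Compare a few exit-strategy intensities over six months.
for intensity in (0.0, 0.3, 0.6):
    print(f"intensity {intensity:.1f}: "
          f"peak infected ~ {simulate_sir(intensity):,.0f}")
```

Even this simple model reproduces the qualitative behavior such tools expose to policymakers: relaxing measures (lower intensity) raises and accelerates the epidemic peak.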
“The saying ‘knowledge is power’ may be overused, but when it comes to the coronavirus it takes on new meaning as every piece of data has the potential to impact the lives of people around the world,” stated Prof. Le Traon, in a release published on EurekAlert! “Given the enormous amount of data to analyze, we have developed this tool to support exit strategy planning. As many countries in Europe are beginning to execute on their plans already, we wanted to release our work as soon as possible.”
Simon Fraser University Working on Bio-Image Detection of Covid-19 from X-rays
Researchers at Simon Fraser University and Providence Health Care (PHC) are collaborating on a new tool incorporating AI to help speed the diagnosis of COVID-19 patients. PHC leveraged the expertise of SFU researchers to validate a deep learning AI tool that enables a clinician to feed a patient’s chest x-ray image into a computer, run a bio-image detection analysis, and determine a positive pneumonia case that is consistent with COVID-19. The tool is currently in the validation phase at St. Paul’s Hospital in Vancouver, Canada.
Yağız Aksoy, an assistant professor in the School of Computing Science’s GrUVi Lab, and MAGPIE Group researcher Vijay Naidu, a mathematician, helped refine the machine learning system using X-ray images of both COVID-19 and non-COVID-19 patients to identify characteristics unique to the virus.
“Instead of doctors checking each X-ray image individually, this system is trained to use algorithms and data to identify it for them,” stated Aksoy. Naidu also shared his expertise in bio-sequence analysis to create a database of COVID-19 biological signatures, or unique identifiers, to zero in on those found in positive patients.
The beta version of the tool – still in an early testing phase – has been uploaded to the United Nations Global Platform and is whitelisted in the AWS Machine Learning Marketplace.
NYU College of Dentistry Develops Mobile App to Detect Covid-19 Severity
Researchers at the NYU College of Dentistry have developed a mobile app to help clinicians determine which patients testing positive for Covid-19 are likely to have severe cases. The app uses AI to assess risk factors and key biomarkers from blood tests to provide a Covid-19 “severity score.” Current tests for Covid-19 detect whether someone does or does not have the virus, but they do not provide clues as to how sick a patient might become.
“Identifying and monitoring those at risk for severe cases could help hospitals prioritize care and allocate resources like ICU beds and ventilators,” stated John T. McDevitt, PhD, professor of biomaterials at NYU College of Dentistry, who led the research. “Likewise, knowing who is at low risk for complications could help reduce hospital admissions while these patients are safely managed at home.”
Using data from 160 hospitalized Covid-19 patients in Wuhan, China, the researchers identified four biomarkers measured in blood tests that were significantly elevated in patients who died versus those who recovered: C-reactive protein (CRP), myoglobin (MYO), procalcitonin (PCT), and cardiac troponin I (cTnI). These biomarkers can signal complications that are relevant to Covid-19, including acute inflammation, lower respiratory tract infection, and poor cardiovascular health.
The researchers then built a model using the biomarkers as well as age and sex, two established risk factors. They trained the model using a machine learning algorithm to define the patterns of COVID-19 disease and predict its severity. When a patient’s biomarkers and risk factors are entered into the model, it produces a numerical Covid-19 severity score ranging from 0 (mild or moderate) to 100 (critical).
The model was validated using data from 12 hospitalized COVID-19 patients from Shenzhen, China, which confirmed that the model’s severity scores were significantly higher for the patients that died versus those who were discharged. These findings are published in Lab on a Chip, a journal of the Royal Society of Chemistry.
Contributed Commentary by David W. Craig, Ph.D. and Brooke Hjelm, Ph.D.
We have heard a lot about cellular and tissue spatial biology lately, and for good reason. Tissues are heterogeneous mixtures of cells, which is particularly important in disease. Cells are also the foundational unit of life, and they are shaped by the cells proximal to them. Not surprisingly, the research field sought to survey cellular and tissue heterogeneity, and the last decade saw massive adoption of single-cell RNA sequencing. That approach requires that we disaggregate cells, which enables the accounting and characterization of cell populations but at the same time loses their spatial context, such as their proximity to other cells or where they fit with traditional approaches such as histopathology.
Enter Spatial Genomics
That’s why we have welcomed spatial transcriptomics and a focus on mapping RNA transcripts to their location within a tissue. After all, understanding disease pathology requires that we understand not only the underlying genomics and transcriptomics but also the relationship between cells and their relative locations within a tissue. Along for the ride: new avenues for the study of cancer, immunology, and neurology, among many others. What’s changed is the emergence of new tools for resolving spatial heterogeneity. SeqFISH and MerFISH are novel approaches for mapping gene expression within model systems. Multiple companies such as 10x Genomics and NanoString are now democratizing access to spatial transcriptomics, introducing new technologies and assays. They are opening up the study of disease pathology.
AI & Deep Learning: Adding to Our Vocabulary
New experimental methods often start with historical analysis approaches. Let’s consider the first step in analysis: finding clusters of spots or cells with similar gene expression and then visualizing them by reducing dimensions. In single-cell RNA-seq, the tSNE projection with color-coded clusters may be the signature plot, much as the Manhattan plot was to the GWAS.
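That first clustering step can be sketched in miniature. Real pipelines use dedicated tools (and tSNE or UMAP for the projection); here a minimal k-means over made-up two-gene expression vectors stands in for the idea of grouping spots by expression similarity, with no spatial information used at all.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means: assign each expression vector to its nearest
    centroid, then recompute each centroid as its cluster's mean."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        centroids = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Six "spots" in a toy 2-gene expression space, forming two obvious groups.
spots = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.25),
         (5.0, 5.1), (5.2, 4.9), (4.8, 5.0)]
centroids, clusters = kmeans(spots, k=2)
```

Note what is missing: nothing in this procedure knows that two spots are neighbors on the tissue, which is exactly the gap the following paragraphs point at.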
Yet, critically, we haven’t leveraged the underlying histopathology image—the foundation of diagnosis and study of disease. We haven’t leveraged the fact that two spots are neighboring. What happens when we do? What happens at the edges between two clusters? What happens when cell types intersperse or infiltrate, such as in immune response? Are there image analysis methods we aren’t considering that have a high potential impact?
Indeed, concepts such as convolutional neural networks (CNNs) and generative adversarial networks (GANs) have been instrumental in classifying features and underlying hidden layers. We can go beyond the tSNE in spatial transcriptomics—and the question should be about viewing the latent space (the representation of the data that drives classifying regions and the discovery of hidden biology). These terms and concepts are foundational when it comes to artificial intelligence and need to be front and center in spatial transcriptomics analysis.
Of course, the use of AI and deep learning terminology is ubiquitous. Getting away from the hype, from self-driving cars to the successes in image recognition (the ImageNet Challenge), some of the most remarkable achievements leverage spatial and imaging data. Data matters, and one then asks: should we consider a single spatial transcriptomics section as one experimental data point, or is it 4,000 images and 4,000 transcriptomes?
In spatial biology, we can anticipate that applying AI to cell-by-cell maps of gene or protein activity will pave the way for significant discoveries that we might never achieve on our own. Incorporating spatially-resolved data could be the next leap forward in our understanding of biology. There will be questions we never even knew to ask that may be answered by combining spatial transcriptomics and spatial proteomics. But to get there, we need to come together and work as a community to build up the training data sets and other resources that will be essential for giving AI the best chance at success.
We have yet to truly make the most of the spatial biology data that has been generated. If we do not address this limitation, we will continue to miss out even as we produce more and more of this information.
David W. Craig, PhD (firstname.lastname@example.org), and Brooke Hjelm, Ph.D. (email@example.com) are faculty within the Department of Translational Genomics, University of Southern California Keck School of Medicine.
A functioning healthcare system depends on caregivers having the right data at the right time to make the right decision about what course of treatment a patient needs.
In the aftermath of the COVID-19 epidemic and the acceleration of the consumer adoption of telemedicine, along with the fragmentation of care to a number of different low-cost providers, access to a patient’s medical records to get an accurate picture of their health becomes even more important.
Opening access to developers also could unlock new, integrated services that could give consumers a better window into their own health and consumer product companies opportunities to develop new tools to improve health.
While hospitals, urgent care facilities and health systems have stored patient records electronically for years thanks to laws passed under the Clinton administration, those records were difficult for patients themselves to access. The way the system has been historically structured has made it nearly impossible for an individual to access their entire medical history.
It’s a huge impediment to ensuring that patients receive the best care they possibly can, and until now it’s been a boulder that companies have long tried to roll uphill, only to have it roll over them.
Now, new regulations are requiring that the developers of electronic health records can’t obstruct interoperability and access by applications. Those new rules may unlock a wave of new digital services.
At least that’s what companies like the New York-based startup Particle Health are hoping to see. The startup was founded by a former emergency medical technician and consultant, Troy Bannister, and longtime software engineer for companies like Palantir and Google, Dan Horbatt.
Particle Health is stepping into the breach with an API-based solution that borrows heavily from the work that Plaid and Stripe have done in the world of financial services. It’s a gambit that’s receiving support from investors including Menlo Ventures, Startup Health, Collaborative Fund, Story Ventures and Company Ventures, as well as angel investors from the leadership of Flatiron Health, Clover Health, Plaid, Petal and Hometeam.
“My first reaction when I met Troy, and he was describing what they’re doing, was that it couldn’t be done,” said Greg Yap, a partner with Menlo Ventures, who leads the firm’s life sciences investments. “We’ve understood how much of a challenge and how much of a tax the lack of easy portability of data puts on the healthcare system, but the problem has always felt like there are so many obstacles that it is too difficult to solve.”
What convinced Yap’s firm, Menlo Ventures, and the company’s other backers, was an ability to provide both data portability and privacy in a way that put patients’ choice at the center of how data is used and accessed, the investor said.
“[A service] has to be portable for it to be useful, but it has to be private for it to be well-used,” says Yap.
The company isn’t the first business to raise money for a data integration service. Last year, Redox, a Madison, Wis.-based developer of API services for hospitals, raised $33 million in a later-stage round of funding. Meanwhile, Innovaccer, another API developer, has raised more than $100 million from investors for its own take.
Each of these companies is solving a different problem that the information silos in the medical industry present, according to Bannister. “Their integrations are focused one-to-one on hospitals,” he said. Application developers can use Redox’s services to gain access to medical records from a particular hospital network, he explained. Using Particle Health’s technology, by contrast, developers can get access to an entire network.
“They get contracts and agreements with the hospitals. We go up the food chain and get contracts with the [electronic medical records],” said Bannister.
One of the things that’s given Particle Health a greater degree of freedom to acquire and integrate with existing healthcare systems is the passage of the 21st Century Cures Act in 2016. That law required that the providers of electronic medical records like Cerner and Epic remove any roadblocks that would keep patient data siloed. Another is the Trusted Exchange Framework and Common Agreement, which was just enacted in the past month.
“We don’t like betting on companies that require a change in law to become successful,” said Yap of the circumstances surrounding Particle’s ability to leapfrog well-funded competitors. But the opportunity to finance a company that could solve a core problem in digital healthcare was too compelling.
“What we’re really saying is that consumers should have access to their medical records,” he said.
This access can make consumer wearables more useful by potentially linking them — and the health data they collect — with clinical data used by physicians to actually make care and treatment decisions. Most devices today are not clinically recognized and don’t have any real integration into the healthcare system. Access to better data could change that on both sides.
“Digital health application might be far more effective if it can take into context information in the medical record today,” said Yap. “That’s one example where the patient will get much greater impact from the digital health applications if the digital health applications can access all of the information that the medical system collected.”
With the investment, which values Particle Health at roughly $48 million, Bannister and his team are looking to move aggressively into more areas of digital healthcare services.
“Right now, we’re focusing on telemedicine,” said Bannister. “We’re moving into the payer space… As it stands today we’re really servicing the third parties that need the records. Our core belief is that patients want control of their data but they don’t want the stewardship.”
The company’s reach is impressive. Bannister estimates that Particle Health can reach somewhere between 250 million and 300 million of the patient records that have been generated in the U.S. “We have more or less solved the fragmentation problem. We have one API that can pull information from almost everywhere.”
So far, Particle Health has eight live contracts with telemedicine and virtual health companies using its API, which have pulled 1.4 million patient records to date.
“The way it works right now, when you give them permission to access your data it’s for a very specific purpose of use… they can only use it for that one thing. Let’s say you were using a telemedicine service. I allow this doctor to view my records for the purpose of treatment only. After that we have built a way for you to revoke access after the point,” Bannister said.
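The consent model Bannister describes (access granted to one party, for one stated purpose, revocable afterward) can be sketched as a small data structure. This is a hypothetical illustration, not Particle Health's actual API; the class and field names are invented for the example.

```python
import time
import uuid

class ConsentGrant:
    """Hypothetical purpose-scoped, revocable grant over a patient's records."""

    def __init__(self, patient_id, grantee, purpose, ttl_seconds=3600):
        self.token = str(uuid.uuid4())   # opaque handle for the grant
        self.patient_id = patient_id
        self.grantee = grantee           # e.g. a specific telemedicine provider
        self.purpose = purpose           # e.g. "treatment"
        self.expires_at = time.time() + ttl_seconds
        self.revoked = False

    def allows(self, grantee, purpose):
        """Valid only for the named grantee and stated purpose,
        while unexpired and not revoked."""
        return (not self.revoked
                and time.time() < self.expires_at
                and grantee == self.grantee
                and purpose == self.purpose)

    def revoke(self):
        """The patient can withdraw access after the fact."""
        self.revoked = True
```

In this shape, a doctor granted access "for the purpose of treatment only" cannot reuse the same grant for anything else, and revocation cuts off access immediately, mirroring the flow in the quote above.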
Particle Health’s peers in the world of API development also see the power in better, more open access to data. “A lot of money has been spent and a lot of blood and sweat went into putting [electronic medical records] out there,” said Innovaccer chief digital officer Mike Sutten.
The former chief technology officer of Kaiser Permanente, Sutten knows healthcare technology. “The next decade is about ‘let’s take advantage of all of this data.’ Let’s give back to physicians and give them access to all that data and think about the consumers and the patients,” Sutten said.
Innovaccer is angling to provide its own tools to centralize data for physicians and consumers. “The less friction there is in getting that data extracted, the more benefit we can provide to consumers and clinicians,” said Sutten.
Already, Particle Health is thinking about ways its API can help application developers create tools to help with the management of COVID-19 populations and potentially finding ways to ease the current lockdowns in place due to the disease’s outbreak.
“If you’ve had an antibody test or PCR test in the past… we should have access to that data and we should be able to provide that data at scale,” said Bannister.
“There’s probably other risk-indicating factors that could at least help triage or clear groups as well… has this person been quarantined has this person been to the hospital in the past month or two… things like that can help bridge the gap,” between the definitive solution of universal testing and the lack of testing capacity to make that a reality, he said.
“We’re definitely working on these public health initiatives,” Bannister said. Soon, the company’s technology — and other services like it — could be working behind the scenes in private healthcare initiatives from some of the nation’s biggest companies as software finally begins to take bigger bites out of the consumer health industry.
Virgin Orbit has secured an Emergency Use Authorization (EUA) from the U.S. Food and Drug Administration (FDA) for its ventilator, which the small satellite launch company designed and prototyped within the past few weeks in response to growing need for ventilator hardware to address the most severe cases of COVID-19 infection. Virgin Orbit anticipates deliveries of the ventilator hardware to start “within the next few days” now that it has secured the agency’s authorization.
Virgin Orbit designed its ventilator, an automated take on the manual resuscitators most frequently used in ambulances by paramedics responding to calls where a person has lost the ability to breathe on their own, based on guidance from a group of experts and doctors called the Bridge Ventilator Consortium. It’s designed mostly as a stop-gap and supplement to free up proper ventilator hardware for treating the most severe respiratory symptoms in COVID-19 patients, but it should still free up a valuable medical resource that is in short supply as the pandemic continues.
Already, Virgin says it’s manufacturing the ventilators at an ongoing production rate of over 100 per week. The initial delivery set to go out this week will be 100 units shipped to California’s Emergency Medical Services Authority, for distribution depending on need in that state.
While it has done a lot to quickly ramp up this production line and start shipping ventilators, Virgin Orbit says that it’s been continuing to build out its own small satellite launch system. In fact, it just recently flew a key final test of its LauncherOne vehicle and the carrier aircraft that brings it to its launch altitude – the last big step before it runs a full demonstration of its system, including an orbital flight, later this year.
Innovative private enterprises and select academics are employing AI in the fight against the spread of the Covid-19 virus, each hoping for the breakthrough idea. (GETTY IMAGES)
By John P. Desmond, AI Trends Editor
The worldwide Covid-19 spread is a tragic experience that is spurring innovators to try many approaches …
Livongo Health’s stock jumped more than ten percent on a day of extreme volatility that saw most exchanges tumble.
The digital diagnostics and therapeutics company is benefiting from booming demand for digital health services as remote medicine takes center stage for beleaguered health care providers looking to keep treating patients while also responding to the COVID-19 epidemic.
Livongo, a provider of behavioral management treatments and diagnostic tools for chronic conditions including diabetes, hypertension, weight management, and mental health, sits squarely in the center of current medical needs.
The company announced a revised preliminary guidance for its first …
The White House has issued a “call to action” for AI researchers to fight the coronavirus spread, and private industry races to discover effective drugs.
By AI Trends Staff
The White House has issued a “call to action” to AI researchers to help fight the coronavirus spread; hospitals are pursuing …