Researchers are applying AI techniques to extend the life and monitor the health of the batteries that will power the next generation of electric vehicles and consumer electronics.
Researchers at Cambridge and Newcastle Universities have designed a machine learning method that can predict battery health with ten times the accuracy of the current industry standard, according to an account in ScienceDaily. The promise is to develop safer and more reliable batteries.
In a new way to monitor batteries, the researchers sent electrical pulses into them and monitored the response. The measurements were then processed by a machine learning algorithm to enable a prediction of the battery’s health and useful life. The method is non-invasive and can be added on to any battery system.
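As a rough illustration of the approach, a pulse-response trace can be summarized into a few features that a learned model maps to a health estimate. The features, weights, and trace below are hypothetical stand-ins, not the researchers' actual pipeline.

```python
import math

def pulse_features(response):
    """Summarize a pulse-response voltage trace with two features:
    peak amplitude, and an approximate decay time (the first sample
    index where the trace falls below peak/e)."""
    peak = max(response)
    tau = next((t for t, v in enumerate(response) if v < peak / math.e),
               len(response))
    return peak, tau

def predict_health(response, w_peak=0.5, w_tau=0.05, bias=0.0):
    """Toy linear state-of-health estimate from the pulse features.
    In a real system the weights would be learned from thousands of
    labeled pulse measurements, not fixed by hand."""
    peak, tau = pulse_features(response)
    return bias + w_peak * peak + w_tau * tau

# Simulated decaying pulse response from a cell under test
trace = [0.95 ** t for t in range(100)]
score = predict_health(trace)
```

In practice the model would be trained on many such (features, measured-capacity) pairs, which is what the 20,000-measurement dataset described below enables.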
The inability to predict the remaining useful charge in lithium-ion batteries is a limitation to the adoption of electric vehicles, and an annoyance to mobile phone users. Current methods for predicting battery health are based on tracking current and voltage during charging and discharging. The new method captures more of what is happening inside the battery and can better detect subtle changes.
“Safety and reliability are the most important design criteria as we develop batteries that can pack a lot of energy in a small space,” stated Dr. Alpha Lee from Cambridge’s Cavendish Laboratory, who co-led the research. “By improving the software that monitors charging and discharging, and using data-driven software to control the charging process, I believe we can power a big improvement in battery performance.”
The researchers performed over 20,000 experimental measurements to train the model in how to spot signs of battery aging. The model learns how to distinguish important signals from irrelevant noise. The model learns which electrical signals are most correlated with aging, which then allows the researchers to design specific experiments to probe more deeply why batteries degrade.
“Machine learning complements and augments physical understanding,” stated co-author Dr. Yunwei Zhang, also from the Cavendish Laboratory. “The interpretable signals identified by our machine learning model are a starting point for future theoretical and experimental studies.”
Department of Energy Researchers Using AI Computer Vision Techniques
Researchers at the Department of Energy’s SLAC National Accelerator Laboratory are using AI computer vision techniques to study battery life. The scientists are combining machine learning algorithms with X-ray tomography data to produce a detailed picture of degradation in one battery component, the cathode, according to an account in SciTechDaily. The referenced study was published in Nature Communications.
In cathodes made of nickel-manganese-cobalt (NMC), particles are held together by a conductive carbon matrix. Researchers have speculated that one cause of battery performance decline could be particles breaking away from that matrix. The team had access to advanced capabilities at SLAC’s Stanford Synchrotron Radiation Lightsource (SSRL), a Department of Energy facility operated by Stanford University, and the European Synchrotron Radiation Facility (ESRF), a European collaboration for the advancement of X-ray science, based in Grenoble, France. The goal was to build a picture of how NMC particles break apart and away from the matrix, and how that relates to battery performance loss.
The team turned to AI-assisted computer vision to help conduct the research. They needed to train a machine learning model on the data to recognize different types of particles, so they could develop a three-dimensional picture of how NMC particles, large or small, break away from the cathode.
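A tiny sketch of one building block such a pipeline needs: connected-component labeling, which finds and sizes individual particles in a binary segmentation mask. The real study worked on 3D X-ray tomography volumes with trained models; the 2D grid here is invented purely for illustration.

```python
def label_particles(grid):
    """Flood-fill connected-component labeling on a binary image
    slice (1 = particle pixel, 0 = background/matrix).
    Returns the size of each connected particle found."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    sizes = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                stack, size = [(r, c)], 0
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    size += 1
                    # visit 4-connected neighbors
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and grid[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                sizes.append(size)
    return sizes

# Toy binary mask with two separate "particles"
mask = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 1],
]
sizes = sorted(label_particles(mask))
```

Per-particle sizes and positions like these are what let researchers compile statistics on detachment across thousands of particles.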
The authors encouraged more research into battery health. “Our findings highlight the importance of precisely quantifying the evolving nature of the battery electrode’s microstructure with statistical confidence, which is a key to maximize the utility of active particles towards higher battery capacity,” the authors stated.
(Citation: Jiang, Z., Li, J., Yang, Y. et al. Machine-learning-revealed statistics of the particle-carbon/binder detachment in lithium-ion battery cathodes. Nat Commun 11, 2310 (2020). https://doi.org/10.1038/s41467-020-16233-5)
(For an account of how researchers from Stanford University, MIT and the Toyota Research Institute are studying radical reductions in electric-vehicle charging times, see AI Trends.)
A bipartisan group of legislators in the US House and Senate proposed a bill in the first week of June that would direct the federal government to develop a national cloud computing infrastructure for AI research.
This idea originated with a proposal from Stanford University in 2019.
The legislation, introduced by Sens. Rob Portman, R-Ohio, and Martin Heinrich, D-NM, is called the National Cloud Computing Task Force Act. It would convene a mix of technical experts across academia, industry, and government to plan for how the US should build, deploy, govern, and maintain a national research cloud for AI.
“With China focused on toppling the United States’ leadership in AI, we need to redouble our efforts with a sustained commitment to the best and brightest by developing a national research cloud to ensure our technical researchers get the tools they need to succeed,” stated Portman, according to an account in Nextgov. “By democratizing access to computing power we ensure that any American with computer science talent can pursue their good ideas.”
“Artificial Intelligence is likely to be one of the most transformative technologies of all time. If we defer its development to other nations, important ethical, safety, and privacy principles will be at risk, which not only harms the United States, but also the international community as a whole,” stated Sen. Heinrich.
A companion bill was introduced the same week in the House, filed by Reps. Anna Eshoo, D-Calif., and Anthony Gonzalez, R-Ohio.
Original Suggestion for National Research Cloud From Stanford
A project to support a National Research Cloud was suggested by John Etchemendy, co-director of the Stanford Institute for Human-Centered AI (HAI), and Fei-Fei Li, also a co-director of HAI and a computer science professor at Stanford. Etchemendy is retired as provost of Stanford, a position he held for 17 years, stepping down in 2017. Li was the director of Stanford’s AI Lab from 2013 to 2018; she served as VP and Chief Scientist of AI/ML at Google Cloud during a sabbatical from January 2017 to September 2018.
In an update published on the HAI blog at Stanford University in March, the authors outlined how the advance of AI by US companies is a direct outgrowth of federally funded university research, furthered by exceptional R&D in the private sector. Then a warning: “Today, the research prowess that’s powered decades of growth and prosperity is at risk.”
The two primary reasons: university researchers lack access to compute power, and meaningful datasets are scarce. These two resources are “prerequisites for advanced AI research,” the authors stated.
Today’s AI, the researchers note, requires massive amounts of compute power, huge volumes of data and high expertise to train the gigantic machine learning models underlying the most advanced research. “There is a wide gulf between the few companies that can afford these resources and everyone else,” the authors stated. In an example, Google said the company itself spent $1.5 million in computer cycles to train the Meena chatbot announced earlier this year. “Such costs for a single research project are out of reach for most corporations, let alone for academic researchers,” the authors stated.
Meanwhile, the large datasets required to train AI algorithms are mostly controlled by industry or government, hobbling academic researchers, who are important partners in the American research enterprise.
Here is the call: “It is for this reason that we are calling for the creation of a US Government-led task force from academia, government, and industry to establish a National Research Cloud. Support from Congress and the President could have a meaningful impact on American innovation through the creation of such a task force. Indeed, we believe that this could be one of the most strategic research investments the federal government has ever made.”
After HAI launched the initiative last year, the presidents and provosts of 22 universities nationwide signed a joint letter to the President and Congress in support of the effort. But HAI held off on issuing the letter until now, with the nation absorbed in responding to the pandemic.
Eric Schmidt Suggests Building on CloudBank of the NSF
Former Google CEO Eric Schmidt put his support behind a national cloud effort at a hearing of the House Science, Space and Technology committee in late January. The hearing was to consider actions the US could take to maintain and extend its technological leadership in the world.
Now the chair of the Defense Innovation Board and the National Security Commission on Artificial Intelligence, Schmidt spoke about the CloudBank program launched last year by the National Science Foundation to provide public cloud allocations and associated training to support projects.
The committee considered the importance of collaboration between government, industry, and academia in the effort to sustain and grow US competitiveness. Schmidt suggested that CloudBank “could expand into a nation-wide National Research Cloud,” according to a recent account in Meritalk.
“Congress should also explore tax incentives for companies to share data and provide computing capabilities to research institutions, and accelerate efforts to make government datasets more widely available,” Schmidt stated.
Schmidt offered the following recommendations to boost US tech competitiveness:
More Federal research and development funding. “For AI, the scale of investment should be multiple times current levels,” he stated, adding, “Simply put, we need to place big bets.”
Federal investment in nationwide infrastructure. That should include a secure alternative to 5G network equipment made by China-based Huawei, investing in high-performance computing, and emulating previous national models like the National Nanotechnology Initiative.
Boosting public confidence in advanced technology. “If we do not earn the public’s trust in the benefits of new technologies, especially AI, doubts will hold us back,” Schmidt stated.
The committee seemed to support the concept of public-private-academic partnerships to achieve the recommended outcomes.
Medical researchers are employing AI to search through databases of known drugs to see if any can be associated with a treatment for the new COVID-19 coronavirus.
An early success story comes from BenevolentAI of London, which, using tools developed to search through medical literature, identified the rheumatoid arthritis drug baricitinib as a possible treatment for COVID-19.
In a pilot study at the end of March, 12 adults with moderate COVID-19 admitted to the hospital in either Alessandria or Prato, Italy, received a daily dose of baricitinib, along with an anti-HIV drug combination of lopinavir and ritonavir, for two weeks. Another study group of 12 received just lopinavir and ritonavir.
After their two-week treatment, the patients who received baricitinib had mostly recovered, according to a recent account in The Scientist. Their coughs and fevers were gone; they were no longer short of breath. Seven of the 12 had been discharged from the hospital. In contrast, the group who didn’t get baricitinib still had elevated temperatures, nine were coughing, and eight remained short of breath. Just one patient from the lopinavir-ritonavir–only group had been discharged.
Researchers at BenevolentAI, along with collaborator Justin Stebbing, an oncologist at Imperial College London, published a letter in The Lancet on February 4, describing how they used AI to identify baricitinib’s potential to treat COVID-19.
AI “makes higher-order correlations that a human wouldn’t be capable of making, even with all the time in the world. It links datasets that a human wouldn’t be able to link,” stated Stebbing.
Benevolent researchers used the company’s knowledge graph—a digital storehouse of biomedical information and connections inferred and enhanced by machine learning—to identify two human protein targets to focus on: AP2-associated protein kinase 1 (AAK1) and cyclin G-associated kinase (GAK).
The team used another algorithm to find existing drugs that could hit the protein targets, completing the work in a few days. Drugs not approved by regulators were eliminated, cutting the list to about 30. Eli Lilly, the company that makes baricitinib, has entered into an agreement with the National Institute of Allergy and Infectious Diseases to study the drug’s effectiveness in COVID-19 patients in the US.
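Stripped of the knowledge graph, the final filtering step amounts to: keep drugs that hit the protein targets of interest and are already approved. A miniature stand-in (the drug entries and target annotations below are illustrative, not BenevolentAI's actual data):

```python
# Hypothetical miniature knowledge base: drug -> (protein targets, approved?)
drug_db = {
    "baricitinib": ({"AAK1", "GAK", "JAK1", "JAK2"}, True),
    "sunitinib":   ({"AAK1", "GAK"}, True),
    "compound_x":  ({"AAK1"}, False),   # not approved -> filtered out
    "aspirin":     ({"COX1", "COX2"}, True),  # wrong targets -> filtered out
}

def candidate_drugs(db, targets):
    """Return approved drugs whose target set overlaps the
    proteins of interest (here AAK1/GAK)."""
    return sorted(name for name, (hits, approved) in db.items()
                  if approved and hits & targets)

hits = candidate_drugs(drug_db, {"AAK1", "GAK"})
```

Running the same idea against thousands of compounds is what cut the team's list to roughly 30 approved candidates within days.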
“Even if the trial doesn’t work, we’re going to find out a huge amount of who it might work in and when it might work,” stated Stebbing. “It’s all about personalized medicine, which means treating the right person at the right time with the right disease with the right drugs. Hopefully, this will be a powerful part of the jigsaw.”
MIT-IBM Watson AI Lab Funding 10 Projects
Elsewhere, the MIT-IBM Watson AI Lab is funding 10 research projects incorporating AI to address the health and economic consequences of the pandemic.
One project seeks to establish early detection of sepsis in COVID-19 patients. About 10 percent of COVID-19 patients get sick with sepsis within a week of showing symptoms, but only about half survive, according to the account from MIT News. Identifying patients at risk for sepsis can lead to earlier, more aggressive treatment and a better chance of survival.
In a project led by MIT Professor Daniela Rus, researchers will develop a machine learning system to analyze images of patients’ white blood cells for signs of an activated immune response against sepsis.
Another project led by MIT professors Daron Acemoglu, Simon Johnson, and Asu Ozdaglar will model the effects of targeted lockdowns on the economy and public health. The team analyzed the relative risk of infection, hospitalization, and death for different age groups. When they compared uniform lockdown policies against those targeted to protect seniors, they found that a targeted approach could save more lives. Building on this work, researchers will consider how antigen tests and contact tracing apps can further reduce public health risks.
Other studies are looking at: which material makes the best face masks; a privacy-first approach to contact tracing; overcoming hurdles to global access to a COVID-19 vaccine, and leveraging electronic medical records to find a treatment for COVID-19.
A COVID Symptom Study app, created by researchers at King’s College London and Massachusetts General Hospital in Boston, aims to predict who is at risk of having the COVID-19 virus, without a test. The app has been downloaded by over three million people worldwide. A prediction system was developed by examining data from 2.5 million people in the UK and US who actively used the app to update their health status between March 24 and April 21.
When the AI-based model was applied to over 800,000 app users who reported symptoms, it predicted that some 17 percent were likely to have coronavirus, information that could be of high value, especially in heavily populated areas.
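At its core, such a predictor is a classifier over reported symptoms. A minimal logistic-scoring sketch conveys the shape of it; the weights and bias below are made up for illustration, not the study's fitted coefficients.

```python
import math

def covid_risk(symptoms, weights, bias=-3.0):
    """Toy logistic model: sum the weights of reported symptoms,
    add a bias, and squash to a probability with the sigmoid."""
    z = bias + sum(weights.get(s, 0.0) for s in symptoms)
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative weights; loss of smell was reported as a strong predictor
weights = {"loss_of_smell": 2.6, "fatigue": 0.8, "cough": 0.7, "fever": 0.6}

p = covid_risk({"loss_of_smell", "fever", "cough"}, weights)
```

A user reporting loss of smell, fever, and cough scores much higher than one reporting nothing, which is exactly the triage signal the app aims to provide without a lab test.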
Vanderbilt University Researcher Also Working with King’s College
A tool in development at Vanderbilt University to study human immune responses to rhinovirus, a cause of the common cold, is being applied to Covid-19-related research in partnership with King’s College London and Guy’s and St Thomas’ NHS Foundation Trust. The research is being led by Jonathan Irish, associate professor of cell and developmental biology and scientific director of the Cancer & Immunology Core at Vanderbilt.
In the race to understand the inner workings of COVID-19, the tool helps by parsing vast quantities of data to identify extremely rare immune cells that specifically respond to viruses.
The tool employs aspects of high dimensional (HD) cytometry, a technique that takes measurements of many features of a single blood cell simultaneously. The resulting huge volume of data is challenging to analyze. “We think that HD cytometry can be particularly useful in understanding COVID-19,” stated Irish in a press release from Vanderbilt.
The quickly-developing trial was to begin treating 19 patients the last week of May. The research hopes to identify immune cells that are reacting to the virus, on the order of a couple of hundred in a sample of 10 million blood cells.
The goal of the joint research is to identify which human immune cells are specific to coronavirus infections, and distinguish these cells from each person’s immune fingerprint. “Understanding and identifying the types of immune cells that help to fight off the virus could help us optimize vaccine and treatment strategies,” Irish stated.
Researching the Best Strategies for Exiting Social Distancing
How best to exit the isolation strategies for dealing with COVID-19 is the subject of experimentation at the University of Luxembourg’s SnT, the Interdisciplinary Centre for Security, Reliability and Trust. The idea of the research is to make it possible for governments around the world to analyze how various exit strategies will impact the spread of COVID-19 over a six-month time frame.
Yves Le Traon, vice-director of SnT, brought together two teams to collaborate on this project. To generate its predictions, the tool uses data publicly available from the Google COVID-19 dataset, as well as data from Johns Hopkins University. A user is able to understand how policies related to each activity impact the spread of the disease, by selecting a country and changing the value that represents the intensity of any given isolation measure.
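In spirit, such a tool runs a compartmental epidemic model forward under a chosen measure intensity. A minimal SIR sketch shows the mechanic; the parameters and the linear intensity scaling are illustrative assumptions, not SnT's actual model.

```python
def sir_step(s, i, r, beta, gamma, intensity):
    """One day of a toy SIR model in which an isolation-measure
    'intensity' in [0, 1] scales down the transmission rate.
    s, i, r are population fractions (susceptible/infected/recovered)."""
    eff_beta = beta * (1.0 - intensity)   # stronger measures -> less transmission
    new_inf = eff_beta * s * i
    new_rec = gamma * i
    return s - new_inf, i + new_inf - new_rec, r + new_rec

# Six-month horizon under a half-strength isolation policy
s, i, r = 0.99, 0.01, 0.0
for _ in range(180):
    s, i, r = sir_step(s, i, r, beta=0.3, gamma=0.1, intensity=0.5)
```

Sweeping `intensity` per activity and per country, against real case data, is what lets a user compare exit strategies side by side.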
“The saying ‘knowledge is power’ may be overused, but when it comes to the coronavirus it takes on new meaning, as every piece of data has the potential to impact the lives of people around the world,” stated Prof. Le Traon, in a release published on EurekAlert! “Given the enormous amount of data to analyze, we have developed this tool to support exit strategy planning. As many countries in Europe are beginning to execute on their plans already, we wanted to release our work as soon as possible.”
Simon Fraser University Working on Bio-Image Detection of Covid-19 from X-rays
Researchers at Simon Fraser University and Providence Health Care (PHC) are collaborating on a new tool incorporating AI to help speed the diagnosis of Covid-19 patients. PHC leveraged the expertise of SFU researchers to validate a deep learning AI tool that enables a clinician to feed a patient’s chest X-ray image into a computer, run a bio-image detection analysis, and determine a positive pneumonia case consistent with COVID-19. The tool is currently in the validation phase at St. Paul’s Hospital in Vancouver, Canada.
Yağız Aksoy, an assistant professor in the School of Computing Science’s GrUVi Lab, and MAGPIE Group researcher Vijay Naidu, a mathematician, helped refine the machine learning system using X-ray images of both COVID-19 and non-COVID-19 patients to identify the unique characteristics of the virus.
“Instead of doctors checking each X-ray image individually, this system is trained to use algorithms and data to identify it for them,” stated Aksoy. Naidu also shared his expertise in bio-sequence analysis to create a database of COVID-19 biological signatures, or unique identifiers, to zero in on those found in positive patients.
The beta version of the tool – still in an early testing phase – has been uploaded to the United Nations Global Platform and is whitelisted in the AWS Machine Learning Marketplace.
NYU College of Dentistry Develops Mobile App to Detect Covid-19 Severity
Researchers at the NYU College of Dentistry have developed a mobile app to help clinicians determine which patients testing positive for Covid-19 are likely to have severe cases. The app uses AI to assess risk factors and key biomarkers from blood tests to provide a Covid-19 “severity score.” Current tests for Covid-19 detect whether someone does or does not have the virus, but they do not provide clues as to how sick a patient might become.
“Identifying and monitoring those at risk for severe cases could help hospitals prioritize care and allocate resources like ICU beds and ventilators,” stated John T. McDevitt, PhD, professor of biomaterials at NYU College of Dentistry, who led the research. “Likewise, knowing who is at low risk for complications could help reduce hospital admissions while these patients are safely managed at home.”
Using data from 160 hospitalized Covid-19 patients in Wuhan, China, the researchers identified four biomarkers measured in blood tests that were significantly elevated in patients who died versus those who recovered: C-reactive protein (CRP), myoglobin (MYO), procalcitonin (PCT), and cardiac troponin I (cTnI). These biomarkers can signal complications that are relevant to Covid-19, including acute inflammation, lower respiratory tract infection, and poor cardiovascular health.
The researchers then built a model using the biomarkers as well as age and sex, two established risk factors. They trained the model using a machine learning algorithm to define the patterns of COVID-19 disease and predict its severity. When a patient’s biomarkers and risk factors are entered into the model, it produces a numerical Covid-19 severity score ranging from 0 (mild or moderate) to 100 (critical).
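The published model's coefficients are not reproduced here, but the scoring mechanic can be sketched: normalize each input against a reference bound, weight it, and scale the sum to 0-100. All thresholds and weights below are placeholders for illustration only.

```python
def severity_score(crp, myo, pct, ctni, age, male):
    """Hypothetical Covid-19 severity-score sketch: each biomarker is
    normalized against an illustrative upper bound, capped at 1.0,
    weighted, and the total scaled to a 0-100 score.
    These bounds/weights are NOT the published NYU model."""
    features = [
        (crp / 200.0, 0.30),    # C-reactive protein, mg/L
        (myo / 1000.0, 0.25),   # myoglobin, ng/mL
        (pct / 10.0, 0.20),     # procalcitonin, ng/mL
        (ctni / 1.0, 0.15),     # cardiac troponin I, ng/mL
        (age / 100.0, 0.07),    # established risk factor
        (1.0 if male else 0.0, 0.03),
    ]
    raw = sum(min(x, 1.0) * w for x, w in features)
    return round(100 * raw)

score = severity_score(crp=150, myo=400, pct=2, ctni=0.1, age=70, male=True)
```

In the actual system the mapping from inputs to score is learned from patient outcomes rather than hand-set, but the interface is the same: biomarkers and risk factors in, a 0 (mild or moderate) to 100 (critical) score out.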
The model was validated using data from 12 hospitalized COVID-19 patients from Shenzhen, China, which confirmed that the model’s severity scores were significantly higher for the patients that died versus those who were discharged. These findings are published in Lab on a Chip, a journal of the Royal Society of Chemistry.
In a periodic profile of selected startups, we look today at Dyno Therapeutics and ElevateBIO.
Dyno Therapeutics Using AI to Develop Gene Therapies
On May 11, startup Dyno Therapeutics announced partnerships with Novartis and Sarepta Therapeutics to develop gene therapies for eye disease and neuromuscular and cardiovascular disease.
Dyno’s platform—CapsidMap—aims to create disease-specific vectors for gene therapy, explains Eric Kelsic, CEO and one of the six company co-founders. Kelsic and other co-founders worked together in George Church’s lab at Harvard; Dyno has an exclusive option to enter into a license agreement with Harvard University for this technology. Church is also a co-founder of Dyno and Chairman of the company’s Scientific Advisory Board.
“Gene therapy is such a huge opportunity to treat disease; there’s a huge unmet need there on the disease front. In addition to that, AAV vectors—it feels like we’re at the beginning of the field. There’s a lot of great work that’s been done on natural vectors, but they have limitations. They only go to certain cells and tissues,” Kelsic told AI Trends. “We decided to focus on engineering of the AAV capsid, which are the protein shell of the vectors.”
The company’s approach combines AI and wet lab biology to iteratively design novel adeno-associated virus vectors (AAV) that improve on current gene therapy. Kelsic calls it “high-throughput biology”, measuring many of the properties that are critical for gene therapy in high throughput, specifically efficiency of delivery, specificity to a target, the immune system response, packaging size, and manufacturing features.
“Those five things really make up all the characteristics that are critical for in vivo delivery,” Kelsic said. For gene therapy there’s a capsid profile for each disease. “Think about every disease that you want to treat, every potential therapy, and there’s a certain profile of what’s going to be the optimal vector for that treatment,” he said. “We built that profile into our platform to inform how we do our screening. Essentially we can measure all those properties independently using this high throughput approach.”
When Kelsic explained the platform to financier Alan Crane in June 2018, Crane was “absolutely blown away,” he told AI Trends. Crane is a Polaris Entrepreneur Partner and has been exploring the role of AI in life sciences applications for years. “This was by far the most direct, most potential-for-creating-value-for-patients application of AI to biology that I had ever seen,” he said. Not only did Polaris invest in the $9 million 2018 seed funding, but Crane joined the company as a co-founder and executive chairman.
ElevateBio Incubating Gene and Cell Therapy Startups
ElevateBio, which was officially launched to the public less than a year ago, specializes in development of new types of cellular and genetic therapies, and operates by the creation of new companies under its portfolio. Each is dedicated to the development and manufacturing of a specific type of therapeutic approach.
Founded in 2017 in Cambridge, Mass., the company recently announced $170 million in Series B funding, which brings the total raised to over $300 million, according to an account in TechCrunch.
ElevateBio has ramped up quickly, completing a 140,000 square foot facility in Waltham, Mass. to focus on R&D. It has launched a company called AlloVir, which is working on T-cell immunotherapy for combating viruses. That company is now in the later stages of clinical trials. Another launched company called HighPassBio, aims to help treat stem cell-related diseases using T-cell therapies, focused on the potential relapse of leukemia following a transplant.
ElevateBio is also focusing some of its efforts towards research focused on mitigating the impact of Covid-19. The AlloVir subsidiary has expanded an existing research agreement in place with the Baylor College of Medicine to work on developing a type of T-cell therapy that can help protect patients with conditions that compromise their immune systems.
Company co-founder and CEO David Hallal says that the ElevateBio site will be more efficient as a shared resource than it would be if it were owned by a single company. “We get to build it once and then run multiple companies through it,” he stated in an account in Xconomy.
Hallal said his team is already talking with scientists at universities in the US and abroad about bringing their early gene and cell therapy work into ElevateBio. The company seeks to invest in nascent cell and gene therapy startups spun out of academia. The plan is to nurture the startups until they progress, get more private financing, or go public, Hallal stated. The hope is for a number of gene and cell therapy companies to be created, grown, and spun out of the company’s space in Waltham.
Human thought has always been central to creativity. This has been true through development of printing presses, gramophones, cameras, camcorders, typewriters, word processors, photo editing software and many other tools invented over centuries.
Key advances include AI-assisted art, notably an application called style transfer, in which a well-trained neural network maps the style of one image onto another, allowing, for example, a photograph to take on the style of a van Gogh painting. The technique was first proposed in 2015 by Leon Gatys, of the University of Tuebingen, Germany, in a paper titled “A Neural Algorithm of Artistic Style.”
Style transfer has caught on, finding commercial applications in social media platforms. “I want to have a machine that perceives the world in a similar way as we do, then to use that machine to create something that is exciting to us,” Gatys is quoted as saying in Miller’s book.
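At the heart of Gatys' method is representing "style" as the correlations between channels of a network's feature maps, captured by a Gram matrix. A bare-bones version on a toy feature map (in the real algorithm, the features come from a pretrained convolutional network and the Gram matrices of a style image drive an optimization over the output image):

```python
def gram_matrix(features):
    """Gram matrix of a feature map laid out as channels x positions:
    entry (i, j) is the mean product of channel i and channel j
    across spatial positions, i.e. their correlation."""
    C = len(features)        # number of channels
    N = len(features[0])     # number of spatial positions
    return [[sum(features[i][k] * features[j][k] for k in range(N)) / N
             for j in range(C)] for i in range(C)]

# Toy feature map: two channels over three spatial positions
G = gram_matrix([[1.0, 2.0, 3.0],
                 [0.0, 1.0, 0.0]])
```

Because the Gram matrix discards where features occur and keeps only how they co-occur, matching it reproduces texture and brushwork without copying the style image's layout.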
Another innovation is Pix2Pix, an AI algorithm that can convert a rough sketch into a realistic photograph. The Dutch broadcasting network NPO used Pix2Pix as part of a project to use AI to analyze human creations and turn them into lifelike paintings. It uses a specialized form of generative adversarial network (GAN), a class of models that has been used in many creative AI projects, including the creation of a painting that sold for $432,000.
“Pix2Pix empowers people who may not have the requisite motor skills and technical skills to express their creativity,” stated Phillip Isola, the creator of Pix2Pix. “It allows mixing of science and art together, offering a means to show data in a way that’s provocative, emotional, and compelling.”
The authors outline how AI is being used in the creative process, notably in GANs applied to pictures. “But even if machines can create innovations from data, this does not mean that they are likely to steal all the spark of human creativity any time soon,” the authors state. “Even if machines cannot replace humans in the creative domain, they are a great help to complement human creativity.”
The use of AI in the creative process is called “innovation analytics” by authors of a recent account in ScienceDirect, who also see AI in a support and not a replacement role. “Extant literature coupled with our experiences as practitioners suggest that while AI may not be ready to completely take over highly creative tasks within the innovation process, it shows promise as a significant support to innovation managers,” the authors state. They describe computer-enabled, data-driven insights, models and visualizations as innovation analytics. “AI can play a key role in the innovation process by driving multiple aspects of innovation analytics,” they state.
Chinmay Kakatkar from Ludwig Maximilian University of Munich, and a senior data scientist at Fineway of Munich, a firm working on smart travel, was lead author of the paper.
Poetry and Physics Interact with Quantum Computing
The interaction of poetry and physics was the pursuit of poet Amy Catanzano when she created “World Lines: A Quantum Supercomputer Poem,” which translates the quantum theory behind a topological quantum computer in both its word choices and its visual structure, a practice Catanzano calls quantum poetics. “My aim was to write a poem that served as an imaginative and rigorous site of interaction between poetry and physics,” she stated in a recent account in Physics.
She describes poetry as a nuanced and complex form of language that goes beyond simple dictionary definitions of individual words. Poems use rhythm, visual structure, line breaks, word order, and other devices to explore invisible worlds, alter the flow of time, and depict the otherwise unimaginable, states Catanzano, who is also an assistant professor of English at Wake Forest University in Winston-Salem, N.C..
World Lines is happening in phases, with phase one nearly complete. She is writing more quantum supercomputer poems in phase 2, and in phase 3, she is working to bring World Lines into a 3D environment and art installation. In August 2019 on a visit to CERN, she spoke with Joao Pequenão, head of the MediaLab at CERN and a multimedia storyteller, about her goal. He suggested using gaming technology, AI, machine learning and virtual reality software to bring the poem into a 3D environment.
“I imagine an environment through which the reader moves, writing the poem as they walk,” Catanzano stated, in the hopes that poetry can help physicists develop a more effective language to describe the complex ideas of quantum physics.
Artist Mario Klingemann, who has used Pix2Pix to transform portraits into award-winning paintings, sees machines with AI as having a better opportunity than humans to create. Humans build on what they have learned, but machines can create from scratch, he suggests, stating, “I hope machines will have a rather different sort of creativity and open up different doors.”
Contributed Commentary by David W. Craig, Ph.D. and Brooke Hjelm, Ph.D.
We have heard a lot about cellular and tissue spatial biology lately, and for good reason. Tissues are heterogeneous mixtures of cells; this is particularly important in disease. Cells are also the foundational unit of life, and they are shaped by the cells proximal to them. Not surprisingly, the research field sought to survey cellular and tissue heterogeneity. The last decade saw massive adoption of single-cell RNA sequencing. This approach requires that we disaggregate cells; it yields an accounting and characterization of cell populations, but at the cost of their spatial context, such as their proximity to other cells or how they map onto traditional approaches such as histopathology.
Enter Spatial Genomics
That’s why we have welcomed spatial transcriptomics and a focus on mapping RNA transcripts to their location within a tissue. After all, understanding disease pathology requires that we understand not only the underlying genomics and transcriptomics but also the relationship between cells and their relative locations within a tissue. Along for the ride: new avenues for the study of cancer, immunology, and neurology, among many others. What’s changed is the emergence of new tools for resolving spatial heterogeneity. seqFISH and MERFISH are novel approaches for mapping gene expression within model systems. Multiple companies such as 10x Genomics and NanoString are now democratizing access to spatial transcriptomics, introducing new technologies and assays. They are opening up the study of disease pathology.
AI & Deep Learning: Adding to Our Vocabulary
New experimental methods often start with historical analysis approaches. Consider the first step in analysis: finding clusters of spots or cells with similar gene expression and then visualizing them by reducing dimensions. In single-cell RNA-seq, the tSNE projection with color-coded clusters may be the signature plot, much as the Manhattan plot was for GWAS.
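As a toy sketch of that first analysis step, the snippet below clusters simulated expression vectors for 300 spots with a plain k-means loop and then reduces them to two dimensions for plotting. All data are invented, and PCA stands in for the tSNE projection to keep the sketch dependency-light:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 300 spots x 50 genes drawn from three expression profiles.
profiles = rng.normal(0.0, 1.0, size=(3, 50))
true_labels = rng.integers(0, 3, size=300)
X = profiles[true_labels] + rng.normal(0.0, 0.3, size=(300, 50))

# Step 1: cluster spots with similar expression (a plain k-means loop, k=3).
centers = X[rng.choice(len(X), size=3, replace=False)]
for _ in range(20):
    dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    labels = dists.argmin(axis=1)
    centers = np.stack([X[labels == k].mean(axis=0) if np.any(labels == k)
                        else centers[k] for k in range(3)])

# Step 2: reduce to 2-D for the signature color-coded scatter plot
# (PCA via SVD standing in for the tSNE projection).
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
embedding = Xc @ vt[:2].T

print(embedding.shape)  # (300, 2)
```

In a real pipeline the clustering and embedding would come from a single-cell toolkit, but the two-step shape of the analysis is the same.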
Yet, critically, we haven’t leveraged the underlying histopathology image—the foundation of diagnosis and study of disease. We haven’t leveraged the fact that two spots are neighboring. What happens when we do? What happens at the edges between two clusters? What happens when cell types intersperse or infiltrate, such as in immune response? Are there image analysis methods we aren’t considering that have a high potential impact?
Indeed, concepts such as convolutional neural networks (CNNs) and generative adversarial networks (GANs) have been instrumental in classifying features through learned hidden layers. We can go beyond the tSNE in spatial transcriptomics; the question should be how we view the latent space (the representation of the data that drives the classification of regions and the discovery of hidden biology). These terms and concepts are foundational to artificial intelligence and need to be front and center in spatial transcriptomics analysis.
Of course, AI and deep learning terminology is ubiquitous. Hype aside, many of the most remarkable achievements, from self-driving cars to the ImageNet image-recognition successes, leverage spatial and imaging data. Data matters, and one then asks: should we consider a single spatial transcriptomics section as one experimental data point, or as 4,000 images and 4,000 transcriptomes?
In spatial biology, we can anticipate that applying AI to cell-by-cell maps of gene or protein activity will pave the way for significant discoveries that we might never achieve on our own. Incorporating spatially-resolved data could be the next leap forward in our understanding of biology. There will be questions we never even knew to ask that may be answered by combining spatial transcriptomics and spatial proteomics. But to get there, we need to come together and work as a community to build up the training data sets and other resources that will be essential for giving AI the best chance at success.
We have yet to truly make the most of the spatial biology data that has been generated. If we do not address this limitation, we will continue to miss out even as we produce more and more of this information.
David W. Craig, PhD (firstname.lastname@example.org), and Brooke Hjelm, Ph.D. (email@example.com) are faculty within the Department of Translational Genomics, University of Southern California Keck School of Medicine.
AI is being employed in a wide range of efforts to explore and study space, including the study of exoplanets by NASA, the support of satellites by ESA, development of an empathic assistant for astronauts and efforts to track space debris.
NASA scientists are partnering with AI experts from companies including Intel, IBM and Google to apply advanced computer algorithms to problems in space science.
Machine learning is seen as helping space scientists to learn from data generated by telescopes and observatories such as the James Webb Space Telescope, according to a recent account from NASA. “These technologies are very important, especially for big data sets and in the exoplanet field,” stated Giada Arney, an astrobiologist at NASA’s Goddard Space Flight Center in Greenbelt, Md. (Exoplanets are planets beyond the solar system.) “Because the data we’re going to get from future observations is going to be sparse and noisy, really hard to understand. So using these kinds of tools has so much potential to help us.”
NASA has laid some groundwork for collaborating with private industry. For the past four summers, NASA’s Frontier Development Lab (FDL) has brought together technology and space innovators for eight weeks every summer to brainstorm and develop code. The program is a partnership between the SETI Institute and NASA’s Ames Research Center, both located in Silicon Valley.
The program pairs science and computer engineering early-career doctoral students with experts from the space agency, academia and some big tech companies. The companies contribute hardware, algorithms, supercomputing resources, funding, facilities and subject matter experts. Some of the resulting technology has been put to use, helping to identify asteroids, find planets and predict extreme solar radiation events.
Scientists at Goddard have been using different techniques to reveal the chemistry of exoplanets, based on the wavelengths of light emitted or absorbed by molecules in their atmospheres. With thousands of exoplanets discovered so far, the ability to make quick decisions about which ones deserve further study would be a plus.
Arney and Shawn Domagal-Goldman, an astrobiologist at Goddard, with technical support from Google Cloud, deployed a neural network to compare its performance against a conventional machine learning approach. University of Oxford computer science graduate student Adam Cobb led a study testing the capability of the neural network against a widely used machine learning technique known as a “random forest.” The team analyzed the atmosphere of WASP-12b, an exoplanet discovered in 2008 that had previously been studied with the random forest technique, using data supplied by NASA’s Hubble Space Telescope.
“We found out right away that the neural network had better accuracy than random forest in identifying the abundance of various molecules in WASP-12b’s atmosphere,” Cobb stated. Beyond the greater accuracy, the neural network model could also tell the scientists how certain it was about its prediction. “In a place where the data weren’t good enough to give a really accurate result, this model was better at knowing that it wasn’t sure of the answer, which is really important if we are to trust these predictions,” states Domagal-Goldman.
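The property Domagal-Goldman highlights, a model that reports how sure it is, can be illustrated very loosely with an ensemble: members that disagree signal low confidence. The sketch below uses a bootstrap ensemble of linear models on invented "spectral" features; it is an illustrative stand-in, not the study's Bayesian neural network:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented stand-in data: predict a molecular "abundance" y from five
# spectral features (real retrievals use transit spectra, not this toy).
X = rng.normal(size=(200, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + rng.normal(0.0, 0.1, size=200)

# Bootstrap ensemble of linear least-squares models; disagreement across
# members serves as a rough per-prediction uncertainty estimate.
preds = []
for _ in range(50):
    idx = rng.choice(200, size=200, replace=True)
    w, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    preds.append(X @ w)
preds = np.stack(preds)

mean_pred = preds.mean(axis=0)      # the model's answer
uncertainty = preds.std(axis=0)     # how sure it is, point by point

print(mean_pred.shape, uncertainty.shape)
```

Where the ensemble spread is large, the model is effectively saying it is not sure of the answer, which is the behavior the researchers valued.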
The European Space Agency (ESA) is studying how to employ AI to support satellite operations, including relative position, communication and end-of-life management for large satellite constellations, according to an account from ESA.
The ESA has engaged in a number of studies on how to use AI for space applications and spacecraft operations as part of its Basic Activities program. One study examines using AI to support autonomous spacecraft that can navigate, perform telemetry analysis and upgrade their own software without communicating with Earth.
Another study focused on how AI can support the management of complex satellite constellations, to reduce the active workload of ground operators. Greater automation, such as for collision avoidance, can reduce the need for human intervention.
Additional studies are researching how a swarm of picosatellites – very small ones – can evolve a collective consciousness. The method employed explored known results in crystallography, the study of crystals, and may open a new way of conceiving of lattice formation through lattice theory, a sub-discipline of order theory and abstract algebra.
AI Helping Astronauts Too; An AI Assistant with Empathy Coming
Astronauts traveling long distances for extended periods might be offered assistance from AI-powered emotional support robots, suggests a recent report in yahoo! News. Scientists are working to create an AI assistant that can sense human emotion and “respond with empathy.”
The robots could be trained to anticipate the needs of crew members and “intervene if their mental health is at stake.” An AI assistant with empathy could be helpful to astronauts on a deep-space mission to Mars.
Astronauts on the International Space Station have an intelligent robot called CIMON that can interact but lacks emotional intelligence, NASA CTO Tom Soderstrom has stated. A team at the agency’s Jet Propulsion Laboratory is working on a more sophisticated emotional support companion that can help fly the spacecraft as well as track the health and well-being of crew members.
AI Employed in Effort to Track Space Debris
Space debris is becoming a critical issue in space. Scientists count more than 23,000 human-made fragments larger than 4 inches, and another 500,000 particles between half an inch and 4 inches in diameter. These objects move at 22,300 miles per hour; collisions cause dents, pits or worse.
Scientists have begun to augment the lasers employed to measure and track space debris with AI, specifically neural nets, according to a recent account in Analytics India Magazine.
Laser ranging of space debris had become a challenge due to poor prediction accuracy, the small size of the objects, and the lack of a reflecting prism on the debris surface, all of which make it difficult to pinpoint the exact location of fragments. Scientists first corrected the telescope pointing error of the laser ranging system by enhancing certain hardware. Most recently, AI deep learning techniques are starting to be employed to enhance the correction models.
Chinese researchers from the Chinese Academy of Surveying and Mapping in Beijing and Liaoning Technical University in Fuxin have worked to enhance the accuracy of identifying space junk. The team used a backpropagation neural network model, optimized by a proposed genetic algorithm and the Levenberg-Marquardt algorithm (commonly used in curve fitting), to help pinpoint the location of debris. The results showed a three- to ninefold higher probability of accurately locating debris.
“After improving the pointing accuracy of the telescope through deep learning techniques, space debris with a cross-sectional area of one meter squared and a distance of 1,500 kilometres can be identified,” stated Tianming Ma of the Chinese Academy of Surveying and Mapping, Beijing and Liaoning Technical University, Fuxin.
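The core idea of a pointing-correction model can be sketched simply: learn the telescope's systematic azimuth/elevation-dependent offset from calibration data, then subtract it. The paper's approach uses a backpropagation network tuned by a genetic algorithm and Levenberg-Marquardt; the stand-in below is an ordinary least-squares fit over a trigonometric mount-model basis, with all values invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated calibration data: pointing offset (radians) as a function of
# azimuth and elevation, plus measurement noise (all values invented).
az = rng.uniform(0.0, 2 * np.pi, 500)
el = rng.uniform(0.1, np.pi / 2, 500)
offset = 0.002 * np.sin(az) + 0.001 * np.cos(2 * el) + rng.normal(0, 1e-4, 500)

# Fit a trigonometric mount-model basis by least squares.
A = np.column_stack([np.ones_like(az), np.sin(az), np.cos(az),
                     np.sin(2 * el), np.cos(2 * el)])
coef, *_ = np.linalg.lstsq(A, offset, rcond=None)

# Applying the learned correction shrinks the residual pointing error.
corrected = offset - A @ coef
print(np.abs(offset).mean(), np.abs(corrected).mean())
```

A neural network plays the same role as the basis fit here, but can capture error surfaces that have no convenient closed form.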
UPS, the logistics and delivery company, is spearheading a project in England to explore how AI systems can optimize the charging of electric fleet vehicles, and help integrate onsite renewable energy resources at vehicle depots.
The EV Fleet-Centered Local Energy Systems (EFLES) project is scheduled to start in May at the UPS depot in the Camden borough of London, according to a recent account in electrive. UK Power Networks Services will provide oversight, while Moixa, a smart battery and EV-charging software provider, will contribute its GridShare smart AI platform to manage solar, storage and charging assets.
“We have the global expertise, smart-charging infrastructure and resources to host this first-of-a-kind test bed at our Camden facility,” stated UPS sustainable development coordinator Claire Thompson-Sage. “This project will build on our EV infrastructure technology to help develop a holistic local energy system.”
The GridShare software helps track hundreds of data sources for energy prices, power demand, weather conditions and more, to help determine which charging times are less expensive and which mix of renewable energy makes the most sense at any given point in time.
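The price-aware part of that decision can be illustrated with a deliberately simple scheduler: given hourly tariffs and an energy requirement, charge in the cheapest hours. This is a toy with invented numbers, not GridShare's algorithm:

```python
def cheapest_charging_hours(prices_per_kwh, kwh_needed, max_kw_per_hour):
    """Greedily allocate charging to the cheapest hours of the day."""
    hours = sorted(range(len(prices_per_kwh)), key=lambda h: prices_per_kwh[h])
    plan, remaining = {}, kwh_needed
    for h in hours:
        if remaining <= 0:
            break
        draw = min(max_kw_per_hour, remaining)  # respect the depot's limit
        plan[h] = draw
        remaining -= draw
    return plan

# Invented overnight tariff: 24 hourly prices, cheap from 1am to 5am.
prices = [0.30] * 24
for h in range(1, 6):
    prices[h] = 0.10

# A van needing 40 kWh at up to 10 kW lands entirely in the cheap window.
plan = cheapest_charging_hours(prices, kwh_needed=40, max_kw_per_hour=10)
print(sorted(plan))  # [1, 2, 3, 4]
```

A production system layers forecasts of demand, weather, and renewable output on top of this, but cost-ranked allocation is the kernel of the idea.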
The EFLES project is the next stage in the UPS partnership with Arrival, the UK-based “generation 2” electric vehicle manufacturer, which developed its newest vehicle with UPS.
UPS recently placed an order for 10,000 electric vehicles from Arrival, to be delivered from 2020 to 2024, according to a recent press release issued by Arrival.
UPS co-developed the Generation 2 vehicles with Arrival, which employed a new method of assembly using low capital, low-footprint micro-factories located to serve local communities and be profitable making thousands of units. The UPS partnership with Arrival was first announced in 2016.
“UPS has been a strong strategic partner of Arrival, providing valuable insight to how electric delivery vans are used on the road and how they can be optimized for drivers,” stated Denis Sverdlov, founder and CEO of Arrival. “Together our teams have been creating bespoke [custom] electric vehicles, based on our flexible skateboard platforms, that meet the end-to-end needs of UPS from driving, loading/unloading, depot and back office operations.”
Carlton Rose, President of UPS Global Fleet Maintenance & Engineering, stated, “Our investment and partnership with Arrival is directly aligned with UPS’s transformation strategy, led by the deployment of cutting-edge technologies. These vehicles will be among the world’s most advanced package delivery vehicles, redefining industry standards for electric, connected and intelligent vehicle solutions.”
UPS Has Had Long Commitment to Electric Vehicles
UPS had 1,000 electric vehicles in its fleet of 112,000 vehicles two years ago. New electric vehicles were found to cost no more than comparable diesel vehicles, because the cost of electric batteries had plummeted 80 percent in six years, according to a UPS press release from April 2018. The electric transporters are expected to create additional value for UPS in operational savings and routing efficiency.
In the U.S., UPS has been working with Workhorse to develop an electric transport vehicle. The target at the outset was a range of 100 miles, with procurement costs similar to those of an internal combustion vehicle. The founder and CEO of Workhorse, Steve Burns, late last year bought the Lordstown, Ohio electric vehicle plant from General Motors, through a company he set up to execute the transaction, Lordstown Motors. He has said he wants to build electric pickup trucks for “business and government customers” and has decided the name of the first model will be Endurance.
Financially, Workhorse has faced some challenges, losing $38 million in 2019 and generating few sales in late 2019, according to a recent account in The Verge. Workhorse will own 10 percent of Lordstown Motors and license to it the intellectual property related to the planned W-15 electric pickup truck. Burns will transfer 6,000 pre-orders for the truck to Lordstown. He is searching for financing, saying he needs $300 million to start production in a year. He plans to run a union shop and produce 500,000 vehicles per year, double the number of Cruze sedans GM made at the plant.
In the case of these new trucks, UPS worked closely with a supplier, Workhorse, to redesign the trucks “from the ground up,” stated Scott Phillippi, UPS’s senior director of maintenance and engineering. Phillippi expects the new design will reduce the truck’s weight by some 1,000 pounds, compared with a diesel or gas-powered vehicle. That plus better batteries will give the truck an electric range of around 100 miles, enough for most routes in and around cities.
The White House has issued a “call to action” for AI researchers to fight the coronavirus spread, and private industry races to discover effective drugs.
By AI Trends Staff
A team of researchers from Stanford University, MIT and the Toyota Research Institute have used AI to dramatically speed up the time required to test and optimally charge batteries for electric vehicles (EVs).
As recently reported in Nature, Stanford professors Stefano Ermon and William Chueh sought ways to charge an EV battery more quickly while maximizing the overall battery life. The study showed how a patented AI program could predict different ways batteries would react to charging methods.
The software also decided in real time what charging approaches to focus on or ignore. The researchers cut the testing process from two years to 16 days by reducing the length and number of trials.
The machine learning system was trained on data of batteries that failed. It was able to detect patterns for predicting how long batteries would last.
This resulted in a new fast-charging protocol, which showed how to optimize battery life. Using AI in battery testing is a new approach, according to the researchers.
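The closed-loop idea can be sketched as: score every candidate charging protocol with a cheap early-cycle predictor, then spend full lifetime tests only on the best-scoring few. The sketch below is a hypothetical illustration, not the study's algorithm; the degradation curve, candidate currents, and predictor error are all invented:

```python
import numpy as np

def true_lifetime(current):
    # Invented degradation curve: cycle life peaks at a 4.0 C charge rate.
    return 1200 - 40 * (current - 4.0) ** 2

def early_prediction(current):
    # Cheap but imperfect estimate from early cycles (deterministic toy error).
    return true_lifetime(current) + 20 * np.sin(7 * current)

candidates = np.linspace(3.0, 6.0, 30)   # candidate charge currents (C rate)
scores = early_prediction(candidates)

# Fully test only the five most promising protocols, then pick the winner.
top_five = candidates[np.argsort(scores)[-5:]]
best = max(top_five, key=true_lifetime)
print(round(float(best), 2))
```

Pruning with the predictor means only 5 of 30 candidates receive expensive full tests, which is how the researchers collapsed years of testing into weeks.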
“When talking to material scientists and people who work in batteries for a living, we realized that nobody was actually using more sophisticated AI in this space, so we thought it was promising,” stated Ermon, a professor of computer science at Stanford, in an interview published in TechRepublic.
He described the many ways to charge a battery. “You can apply different voltages, different currents, different intensities––they may all charge the battery in the same amount of time, but some might harm the internal components of the battery,” he stated. “Depending on what kind of charging protocol you use, that can significantly affect the life of the battery.”
Major EV manufacturers may take an interest, Ermon predicted.
“We figured out how to greatly accelerate the testing process for extreme fast charging,” stated Peter Attia, who participated in the study as a graduate student, in an interview with SciTechDaily. “What’s really exciting, though, is the method. We can apply this approach to many other problems that, right now, are holding back battery development for months or years.”
“Machine learning is trial-and-error, but in a smarter way,” stated Aditya Grover, a graduate student in computer science who also participated in the study. “Computers are far better than us at figuring out when to explore – try new and different approaches – and when to exploit, or zero in, on the most promising ones.”
Ermon stated, “It gave us this surprisingly simple charging protocol – something we didn’t expect. That’s the difference between a human and a machine: The machine is not biased by human intuition, which is powerful but sometimes misleading.”
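Grover's explore/exploit framing maps onto the classic multi-armed bandit. A minimal epsilon-greedy sketch, with each "arm" a hypothetical charging protocol and all payoff numbers invented:

```python
import random

random.seed(0)

# Three hypothetical protocols; arm 2 secretly has the best average payoff.
true_means = [0.60, 0.75, 0.90]
counts = [0, 0, 0]
values = [0.0, 0.0, 0.0]   # running mean reward per arm

for _ in range(2000):
    if random.random() < 0.1:                        # explore: random arm
        arm = random.randrange(3)
    else:                                            # exploit: best so far
        arm = max(range(3), key=lambda a: values[a])
    reward = true_means[arm] + random.gauss(0.0, 0.1)
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]

best_arm = max(range(3), key=lambda a: values[a])
print(best_arm)  # the protocol the bandit settles on
```

Occasional exploration keeps the search from locking onto an early mediocre choice, while exploitation concentrates trials on the most promising protocol.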
Wider Application Seen
The approach has the potential to accelerate every piece of the battery development pipeline, from designing the chemistry of a battery, to determining its size and shape, to finding better systems for manufacturing and storing, the researchers suggested. This has implications not only for EV battery charging but for other types of energy storage, such as for wind and solar power.
“This is a new way of doing battery development,” stated Patrick Herring, a co-author of the study and a scientist at the Toyota Research Institute. “Having data that you can share among a large number of people in academia and industry, and that is automatically analyzed, enables much faster innovation.”
The researchers intend to make the study’s machine learning and data collection system available for future battery scientists to freely use.
Ermon suggested other big data testing problems, from drug development to optimizing the performance of X-rays and lasers, could be revolutionized by the use of machine learning optimization.
Private industry has been working on applying AI to battery charging as well. Researchers at battery company StoreDot have been using machine learning to extend its capabilities, wrote Dr. Doron Myersdorf, CEO of StoreDot, in a recent account in Engineering and Technology.
“An initial foray into this technique has achieved remarkable results,” he stated, resulting in a decision to dedicate an R&D team to building capabilities in machine learning. The plan is to apply the lessons learned to the company’s next generation of EV batteries. He cautioned, “Ultra-fast charging presents a very complex issue,” involving innovative data science combined with expertise in electrochemistry, cell structure, anodes, cathodes and electrolytes, so more complex conclusions can be reached.
In other battery research efforts, the search is on for new materials that can store more energy than the graphite anode in modern lithium-ion batteries, according to a recent account in Battery Power Online. Rechargeable batteries with lithium metal anodes could represent the ultimate limit in energy density; however, they face major technical and safety hurdles. The high reactivity of lithium metal means these anodes tend to react with other components in a battery cell and to break down through large volume changes. They also run the risk of short-circuiting, causing rapid heat generation and potential fire or explosion. Research is continuing.