
Daily Crunch: India bans PUBG and other Chinese apps

India continues to crack down on Chinese apps, Microsoft launches a deepfake detector and Google offers a personalized news podcast. This is your Daily Crunch for September 2, 2020.

The big story: India bans PUBG and other Chinese apps

The Indian government continues its purge of apps created by or linked to Chinese companies. It already banned 59 Chinese apps back in June, including TikTok.

India’s IT Ministry justified the decision as “a targeted move to ensure safety, security, and sovereignty of Indian cyberspace.” The apps banned today include search engine Baidu, business collaboration suite WeChat Work, cloud storage service Tencent Weiyun and the game Rise of Kingdoms. But PUBG is the most popular, with more than 40 million monthly active users.

The tech giants

Microsoft launches a deepfake detector tool ahead of US election — The Video Authenticator tool will provide a confidence score that a given piece of media has been artificially manipulated.

Google’s personalized audio news feature, Your News Update, comes to Google Podcasts — That means you’ll be able to get a personalized podcast of the latest headlines.

Twitch launches Watch Parties to all creators worldwide — Twitch is doubling down on becoming more than just a place for live-streamed gaming videos.

Startups, funding and venture capital

Indonesian insurtech startup PasarPolis gets $54 million Series B from investors including LeapFrog and SBI — The startup’s goal is to reach people who have never purchased insurance before with products like inexpensive “micro-policies” that cover broken device screens.

XRobotics is keeping the dream of pizza robots alive — XRobotics’ offering resembles an industrial 3D printer, in terms of size and form factor.

India’s online learning platform Unacademy raises $150 million at $1.45 billion valuation — India has a new startup unicorn.

Advice and analysis from Extra Crunch

The IPO parade continues as Wish files, Bumble targets an eventual debut — Alex Wilhelm looks at the latest IPO news, including Bumble planning to go public at a $6 to $8 billion valuation.

3 ways COVID-19 has affected the property investment market — COVID-19 has stirred up the long-settled dust on real estate investing.

Deep Science: Dog detectors, Mars mappers and AI-scrambling sweaters — Devin Coldewey kicks off a new feature in which he gets you all caught up on the most recent research papers and scientific discoveries.

(Reminder: Extra Crunch is our subscription membership program, which aims to democratize information about startups. You can sign up here.)

Everything else

‘The Mandalorian’ launches its second season on Oct. 30 — The show finished shooting its second season right before the pandemic shut down production everywhere.

GM, Ford wrap up ventilator production and shift back to auto business — Both automakers said they’d completed their contracts with the Department of Health and Human Services.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.

Read More


What does GPT-3 mean for the future of the legal profession?

Historically, lawyers have struggled with some AI-based tools

One doesn’t have to dig too deep into legal organizations to find people who are skeptical about artificial intelligence.

AI is getting tremendous attention and significant venture capital, but AI tools frequently underwhelm in the trenches. Here are a few reasons why that is and why I believe GPT-3, a beta version of which was recently released by the OpenAI Foundation, might be a game changer in legal and other knowledge-focused organizations.

GPT-3 is getting a lot of oxygen lately because of its size, scope and capabilities. However, it should be recognized that a significant amount of that attention is due to its association with Elon Musk. The OpenAI Foundation that created GPT-3 was founded by heavy hitters Musk and Sam Altman and is supported by Marc Benioff, Peter Thiel and Microsoft, among others. Arthur C. Clarke once observed that great innovations happen after everyone stops laughing.

Musk has made the world stop laughing in so many ambitious areas that the world is inclined to give a project in which he’s had a hand a second look. GPT-3 is getting the benefit of that spotlight. I suggest, however, that the attention might be warranted on its merits.

Why have some AI-based tools struggled in the legal profession, and how might GPT-3 be different?

1. Not every problem is a nail

It is said that when you’re a hammer, every problem is a nail. The networks and algorithms that power AI are quite good at drawing correlations across enormous datasets that would not be obvious to humans. One of my favorite examples of this is a loan-underwriting AI that determined that the charge level of the battery on your phone at the time of application is correlated to your underwriting risk. Who knows why that is? A human would not have surmised that connection. Those things are not rationally related, just statistically related.

Read More


Deepfake video app Reface is just getting started on shapeshifting selfie culture

A bearded Rihanna gyrates and sings about shining bright like a diamond. A female Jack Sparrow looks like she’d be a right laugh over a pint. The cartoon contours of The Incredible Hulk lend envious tint to Donald Trump’s awfully familiar cheek bumps.

Selfie culture has a fancy new digital looking glass: Reface (previously Doublicat) is an app that uses AI-powered deepfake technology to let users try on another face/form for size. Aka “face swap videos”, in its marketing parlance.

Deepfake technology — or synthesized media, to give it its less pejorative label — is just getting into its creative stride, according to Roman Mogylnyi, CEO and co-founder of RefaceAI, which makes the eponymous app whose creepily lifelike output you may have noticed bubbling up in your social streams in recent months.

The startup has Ukrainian founders — as well as Mogylnyi, there’s Oles Petriv, Yaroslav Boiko, Dima Shvets, Denis Dmitrenko, Ivan Altsybieiev and Kyle Sygyda — but the business is incorporated in the US. Doubtless it helps to be nearer to Hollywood studios whose video clips power many of the available face swaps. (Want to see Titanic‘s Rose Hall recast with Trump’s visage staring out of Kate Winslet’s body? No we didn’t either — but once you’ve hit the button it’s horribly hard to unsee… 😷)

TechCrunch noticed a bunch of male friends WhatsApp-group-sharing video clips of themselves as scantily clad female singers and figured the developers must be onto something — a la FaceApp, or the earlier selfie trend of style transfer (a craze that was sparked by Prisma and cloned mercilessly by tech giants).

Reface’s deepfake effects are powered by a class of machine learning frameworks known as GANs (generative adversarial networks), which is how it’s able to get such relatively slick results, per Mogylnyi. In a nutshell, it generates a new animated face using the twin inputs (the selfie and the target video), rather than trying to mask one on top of the other.

Deepfake technology has of course been around for a number of years at this point, but the Reface team’s focus is on making the tech accessible and easy to use — serving it up as a push-button smartphone app with no need for more powerful hardware, and near instant transformation from a single selfie snap. (It says it turns selfies into face vectors representing the user’s distinguishing facial features — and pledges that uploaded photos are removed from its Google Cloud platform “within an hour”.)
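
To make the mechanics a little more concrete, here is a minimal, purely illustrative PyTorch sketch of that two-input idea: the selfie is encoded into a compact "face vector," and a generator produces each output frame conditioned on that vector plus the target frame. This is not Reface's code; the layer choices, shapes and names are assumptions, and a real GAN would add a discriminator and identity losses.

```python
# Conceptual sketch only -- not Reface's implementation. It illustrates the idea
# described above: a selfie becomes an identity embedding (the "face vector"),
# and each output frame is generated from that embedding plus the target frame,
# rather than pasting one face on top of the other.
import torch
import torch.nn as nn

class FaceEncoder(nn.Module):
    """Maps a selfie to a fixed-size identity embedding (the 'face vector')."""
    def __init__(self, embed_dim=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, embed_dim)

    def forward(self, selfie):                 # selfie: (B, 3, H, W)
        h = self.features(selfie).flatten(1)   # (B, 64)
        return self.proj(h)                    # (B, embed_dim)

class FrameGenerator(nn.Module):
    """Generates a swapped frame from the target frame plus identity embedding."""
    def __init__(self, embed_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3 + embed_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, frame, identity):        # frame: (B, 3, H, W)
        b, _, h, w = frame.shape
        id_map = identity[:, :, None, None].expand(b, -1, h, w)
        return self.conv(torch.cat([frame, id_map], dim=1))

# In a GAN setup, a discriminator would score generated frames for realism and
# identity consistency; here we only show the generator-side data flow.
encoder, generator = FaceEncoder(), FrameGenerator()
selfie = torch.rand(1, 3, 128, 128)
frame = torch.rand(1, 3, 128, 128)
swapped = generator(frame, encoder(selfie))   # (1, 3, 128, 128)
```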

No need for tech expertise, nor lots of effort, to achieve a lifelike effect. The inexorable social shares flowing from such a user-friendly application then take care of the product marketing.

It was a similar story with the AI tech underpinning Prisma — which left that app open to merciless cloning, though it was initially only transforming photos. But Mogylnyi believes the team behind the video face swaps has enough of a head (ha!) start to avoid a similar fate.

He says usage of Reface has been growing “really fast” since it added high-res videos this June — having initially launched with only far grainier GIF face swaps on offer. In terms of metrics, the startup is not disclosing monthly active users but says it’s had around 20 million downloads at this point across 100 countries. (On Google Play the app has almost a full five-star rating, from nearly 150,000 reviews.)

“I understand that an interest from huge companies might come. And it’s obvious. They see that it’s a great thing — personalization is the next trend, and they are all moving in the same direction, with Bitmoji, Memoji, all that stuff — but we see personalized, hyperrealistic face swapping as the next big thing,” Mogylnyi tells TechCrunch.

“Even for [tech giants] it takes time to create such a technology. Even speaking about our team we have a brilliant team, brilliant minds, and it took us a long time to get here. Even if you spawn many teams to work on the same problems surely you will get somewhere… but currently we’re ahead and we’re doing our best to work on new technologies to keep in pace,” he adds.

Reface’s app is certainly having a moment right now, bagging top download slots on the iOS App Store and Google Play in 100 countries — helped, along the way, by its reflective effects catching the eye of the likes of Elon Musk and Britney Spears (who Mogylnyi says have retweeted examples of its content).

But he sees this bump as just the beginning — predicting much bigger things coming down the synthesized pipe as more powerful features are switched on. The influx of bitesized celebrity face swaps signals an incoming era of personalized media, which could have a profoundly transformative effect on culture.

Mogylnyi’s hope is that wide access to synthesized media tools will increase humanity’s empathy and creativity — providing those who engage with the tech limitless chances to (auto)vicariously experience things they maybe otherwise couldn’t ever (or haven’t yet) — and so imagine themselves into new possibilities and lifestyles.

He reckons the tech will also open up opportunities for richly personalized content communities to grow up around stars and influencers — extending how their fans can interact with them.

“Right now the way influencers exist is only one way; they’re just giving their audience the content. In my understanding in our case we’ll let influencers have the possibility to give their audience access to the content and to feel themselves in it. It’s one of the really cool things we’re working on — so it will be a part of the platform,” he says.

“What’s interesting about new-gen social networks [like TikTok] is that people can both be like consumers and providers at the same time… So in our case people will also be able to be providers and consumers but on the next level because they will have the technology to allow themselves to feel themselves in the content.”

“I used to play basketball in school years but I had an injury and I was dreaming about a pro career but I had to stop playing really hard. I’ll never know how my life would have gone if I was a pro basketball player so I have to be a startup entrepreneur right now instead… So in the case with our platform I actually will have a chance to see how my pro basketball career would look like. Feel myself in the content and life this life,” he adds.

This vision is really the mirror opposite of the concerns that are typically attached to deepfakes, around the risk of people being taken in, tricked, shamed or otherwise manipulated by intentionally false imagery.

So it’s noteworthy that Reface is not letting users loose on its technology in a way that could risk an outpouring of problem content. For example, you can’t yet upload your own video to make into a deepfake — although the ability to do so is coming. For now, you have to pick from a selection of preloaded celebrity clips and GIFs which no one would mistake for the real deal.

That’s a very deliberate decision, with Mogylnyi emphasizing they want to be responsible in how they bring the tech to market.

User generated video and a lot more — full body swaps are touted for next year — are coming, though. But before they turn on more powerful content generation functionality they’re working on building a counter tech to reliably detect such generated content. Mogylnyi says it will only open up usage once they’re confident of being able to spot their own fakes.

“It will be this autumn, actually,” he says of launching UGC video (plus the deepfake detection capability). “We’ll launch it with our Face Studio… which will be a tool for content creators, for small studios, for small post production studios, maybe some music video makers.”

“We also have five different technologies in our pipeline which we’ll show in the upcoming half a year,” he adds. “There are also other technologies and features based on current tech [stack] that we’ll be launching… We’ll allow users to swap faces in pictures with the new stack and also a couple of mechanics based on face swapping as well, and also separate technologies as well we’re aiming to put into the app.”

He says higher quality video swapping is another focus, alongside building out more technologies for post production studios. “Face Studio will be like an overall tool for people who want full access to our technologies,” he notes, saying the pro tool will launch later this year.

The Ukrainian team behind the app has been honing its deep tech chops for years — the founders started working together back in 2011, straight out of university, and went on to set up a machine learning dev shop in 2013.

Work with post production studios followed, as they were asked to build face swapping technology to help budget-strapped film production studios do more while moving their actors around less.

By 2018, with plenty of expertise under their belt, they saw the potential for making deepfake technology more accessible and user friendly — launching the GIF version of the app late last year, and going on to add video this summer when they also rebranded the app to Reface. The rest looks like it could be viral face swapping tech history…

So where does all this digital shapeshifting end up? “In our dreams and in our vision we see the app as a personalization platform where people will be able to live different lives during their one lifetime. So everyone can be anyone,” says Mogylnyi. “What’s the overall problem right now? People are scrolling content, not looking deep into it. And when I see people just using our app they always try to look inside — to look deeply into the picture. And that’s what really inspires us. So we understand that we can take the way people are browsing and the way they are consuming content to the next level.”

Read More


Enabling humanoid robot movement with imitation learning and mimicking of animal behaviors

Since Honda’s release of the ASIMO robot in 2000, humanoid robots have greatly improved their ability to perform functions like grasping objects and using computer vision to detect things. Despite these improvements, their ability to walk, jump and perform other complex legged motions as fluidly as humans has continued to be a challenge for roboticists.

In recent years, new advances in robot learning and design are using data and insights from animal behavior to enable legged robots to move in much more human-like ways. 

Researchers from Google and UC Berkeley published work earlier this year that showed a robot learning how to walk by mimicking a dog’s movements using a technique called imitation learning. Separate work showed a robot successfully learning to walk by itself through trial and error using deep reinforcement learning algorithms.

Imitation learning in particular has been used in robotics for various use cases, such as OpenAI’s work in helping a robot grasp objects by imitation, but its use in robot locomotion is new and encouraging. It lets a robot take input data generated by an expert performing the actions to be learned and combine it with deep learning techniques to learn those movements more effectively.
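
As a rough illustration of the imitation-learning idea, here is a minimal behavioral-cloning sketch in PyTorch: a policy network is trained to reproduce an expert's actions in the states where they were observed. It is not the Google/UC Berkeley implementation (which tracks retargeted dog motions with reinforcement learning); the state and action dimensions and the random "expert" data are placeholders.

```python
# Minimal behavioral-cloning sketch of imitation learning, assuming we already
# have expert (state, action) pairs -- e.g. joint targets derived from dog
# motion capture. Illustrates the general technique, not the referenced work.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 48, 12   # hypothetical robot state and joint-target sizes

policy = nn.Sequential(
    nn.Linear(STATE_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, ACTION_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Stand-in for expert demonstrations; real data would come from the reference motion.
expert_states = torch.randn(10_000, STATE_DIM)
expert_actions = torch.randn(10_000, ACTION_DIM)

for step in range(1_000):
    idx = torch.randint(0, expert_states.shape[0], (256,))
    pred = policy(expert_states[idx])
    # Supervised loss: make the policy reproduce the expert's action in each state.
    loss = nn.functional.mse_loss(pred, expert_actions[idx])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```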

Much of the recent work using imitation and broader deep learning techniques has involved small-scale robots, and there will be many challenges to overcome to apply the same capabilities to life-size robots, but these advances open new pathways for innovation in improving robot locomotion. 

The inspiration from animal behaviors has also extended to robot design, with companies such as Agility Robotics and Boston Dynamics incorporating force modeling techniques and integration of full-body sensors to help their robots more closely mimic how animals execute complex movements. 

Read More


Startups: Focus on Dyno Therapeutics, ElevateBio


ElevateBio’s 140,000-square-foot Base Camp facility in Waltham, Mass. supports gene and cell therapy startups. (ELEVATEBIO)

By AI Trends Staff

In a periodic profile of selected startups, we look today at Dyno Therapeutics and ElevateBio.

Dyno Therapeutics Using AI to Develop Gene Therapies

On May 11, startup Dyno Therapeutics announced partnerships with Novartis and Sarepta Therapeutics to develop gene therapies for eye disease and neuromuscular and cardiovascular disease.

Dyno’s platform—CapsidMap—aims to create disease-specific vectors for gene therapy, explains Eric Kelsic, CEO and one of the six company co-founders. Kelsic and other co-founders worked together in George Church’s lab at Harvard; Dyno has an exclusive option to enter into a license agreement with Harvard University for this technology. Church is also a co-founder of Dyno and Chairman of the company’s Scientific Advisory Board.

“Gene therapy is such a huge opportunity to treat disease; there’s a huge unmet need there on the disease front. In addition to that, AAV vectors—it feels like we’re at the beginning of the field. There’s a lot of great work that’s been done on natural vectors, but they have limitations. They only go to certain cells and tissues,” Kelsic told AI Trends. “We decided to focus on engineering of the AAV capsid, which are the protein shell of the vectors.”

Eric Kelsic, CEO and co-founder, Dyno Therapeutics

The company’s approach combines AI and wet lab biology to iteratively design novel adeno-associated virus vectors (AAV) that improve on current gene therapy. Kelsic calls it “high-throughput biology”, measuring many of the properties that are critical for gene therapy in high throughput, specifically efficiency of delivery, specificity to a target, the immune system response, packaging size, and manufacturing features.

“Those five things really make up all the characteristics that are critical for in vivo delivery,” Kelsic said. For gene therapy there’s a capsid profile for each disease. “Think about every disease that you want to treat, every potential therapy, and there’s a certain profile of what’s going to be the optimal vector for that treatment,” he said. “We built that profile into our platform to inform how we do our screening. Essentially we can measure all those properties independently using this high throughput approach.”

When Kelsic explained the platform to financier Alan Crane in June 2018, Crane was “absolutely blown away,” he told AI Trends. Crane is a Polaris Entrepreneur Partner and has been exploring the role of AI in life sciences applications for years. “This was by far the most direct, most potential-for-creating-value-for-patients application of AI to biology that I had ever seen,” he said. Not only did Polaris invest in the $9 million 2018 seed funding, but Crane joined the company as a co-founder and executive chairman.

Learn more at Dyno Therapeutics.

ElevateBio Incubating Gene and Cell Therapy Startups

ElevateBio, which officially launched to the public less than a year ago, specializes in the development of new types of cellular and genetic therapies, and operates by creating new companies under its portfolio. Each is dedicated to the development and manufacturing of a specific type of therapeutic approach.

Founded in 2017 in Cambridge, Mass., the company recently announced $170 million in Series B funding, which brings the total raised to over $300 million, according to an account in TechCrunch.

ElevateBio has ramped up quickly, completing a 140,000 square foot facility in Waltham, Mass. to focus on R&D. It has launched a company called AlloVir, which is working on T-cell immunotherapy for combating viruses; that company is now in the later stages of clinical trials. Another company it has launched, HighPassBio, aims to help treat stem cell-related diseases using T-cell therapies, focused on the potential relapse of leukemia following a transplant.

ElevateBio is also directing some of its efforts toward research on mitigating the impact of COVID-19. The AlloVir subsidiary has expanded an existing research agreement with the Baylor College of Medicine to work on developing a type of T-cell therapy that can help protect patients with conditions that compromise their immune systems.

David Hallal, CEO and co-founder, ElevateBio

Company co-founder and CEO David Hallal says that the ElevateBio site will be more efficient as a shared resource than it would be if it were owned by a single company. “We get to build it once and then run multiple companies through it,” he stated in an account in Xconomy.

Hallal said his team is already talking with scientists at universities in the US and abroad about bringing their early gene and cell therapy work into ElevateBio. The company seeks to invest in nascent cell and gene therapy startups spun out of academia. The plan is to nurture the startups until they progress, get more private financing or go public, Hallal stated. They hope for a number of gene and cell therapy companies to be created, grown and spun out of the company’s space in Waltham.

Learn more at ElevateBio.

Read the source articles at TechCrunch, Xconomy and Bio-IT World.

Read More


AI Being Employed to Predict Wildfires with Greater Accuracy


As wildfires have grown larger and more destructive in recent years, especially in the drought-plagued West, AI is being employed to assess wildfire risks more accurately. (GETTY IMAGES)

By John P. Desmond, AI Trends Editor

As wildfires have grown larger and more destructive in recent years, especially in the drought-plagued West, AI is being employed to assess wildfire risks more accurately and sound the alarm hours earlier once a fire breaks out.

For example, the Nature Conservancy is using controlled burning techniques in fire-prone forests in California’s Sierra Nevada. This summer, on a 28,000-acre plot near Lake Tahoe, the group plans to test an AI program designed to assess how well the thinning plan will prevent fires.

“Nothing is going to completely replace the human brain to make decisions, but AI can help us make better decisions across a much larger area,” stated Edward Smith, forest ecology and fire manager at the Nature Conservancy, based in Arlington, Va., in a recent account in The Wall Street Journal.

Edward Smith, forest ecology and fire manager at the Nature Conservancy, Arlington, Va.

The AI program will use satellite imagery of pre- and post-thinning work to make the assessments. This has been made possible by the availability of small satellites taking more photos of forests, and powerful computers needed to process the data.

Microsoft’s launch in 2017 of “AI for Earth,” which allowed the general public access to AI tools that can be used to process data from satellites as a cloud service, gave a boost to the work. Microsoft issues some grants to help defray the cost of the research for certain issues, including wildfire prevention.

“We’re flying blind now,” stated Lucas Joppa, chief environmental officer for Microsoft. “We have no idea what the state of our national forests are in the US, yet I can point you to a coffee shop in a millisecond.”

SilviaTerra LLC of San Francisco began using the Microsoft network about a year ago to build a map of the nation’s forest based on data from satellites, including the Landsat program run by the US Geological Survey and NASA. The firm hopes to map more than 400 million acres of forest for the project, called Basemap, to help guide thinning work.

Lucas Joppa, chief environmental officer for Microsoft

AI is helping the company do with a workforce of 10 what it would take thousands to do without it. “One person can measure maybe 20 acres in a day,” stated Zack Parisa, co-founder of SilviaTerra. “With AI, you can do a whole forest.”

Salo Sciences of San Francisco is developing a product incorporating AI to map areas of highest wildfire risk, based on an analysis of dead and dying trees. The firm is concentrating on California, where an estimated 150 million trees died during a five-year drought earlier in the decade.

“Some of the data that goes into state wildfire risk maps is 15 years old,” stated Dr. Dave Marvin, co-founder and CEO. He formed the company with Christopher Anderson, a graduate student from Stanford University. “We saw we needed to bring together a new framework of how you take satellite imagery and data and more rapidly inform conservation efforts,” stated Dr. Marvin.

In similar work, a team of experts in hydrology, remote sensing and environmental engineering has developed a deep-learning model that maps fuel moisture levels in fine detail across 12 western states, from Colorado, Montana, Texas and Wyoming to the Pacific Coast, according to a recent press release from Stanford University.

The technique is described in the August 2020 issue of Remote Sensing of Environment. According to the paper’s senior author, Stanford University ecohydrologist Alexandra Konings, the new dataset produced by the model could “massively improve fire studies.”

The paper’s lead author, Krishna Rao, a PhD student in Earth system science at Stanford, said the model needs more testing to be used in fire management decisions, but is revealing previously invisible patterns. Over time, it should help to “chart out candidate locations for prescribed burns,” Rao stated.

The Stanford model uses a recurrent neural network, an AI system that can recognize patterns in vast volumes of data. The scientists trained the model using data from the National Fuel Moisture Database, which has compiled some 200,000 measurements since the 1970s using a painstaking manual method. The researchers added measurements of visible light bouncing off Earth, and the return of microwave radar signals, which can penetrate through leafy branches all the way to the ground.
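
As a hedged illustration of what such a model can look like, the PyTorch sketch below maps a per-site time series of satellite features (optical bands plus radar backscatter) to a fuel moisture estimate with a recurrent network. It is not the Stanford model; the feature count, sequence length and training data are placeholders.

```python
# Illustrative sketch only (not the Stanford model): a recurrent network that
# maps a per-site time series of satellite features -- e.g. optical reflectance
# bands plus microwave radar backscatter -- to a live fuel moisture estimate.
import torch
import torch.nn as nn

N_FEATURES, SEQ_LEN = 8, 24   # hypothetical: 8 satellite features, 24 time steps

class FuelMoistureRNN(nn.Module):
    def __init__(self, n_features=N_FEATURES, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):             # x: (batch, time, features)
        _, (h, _) = self.lstm(x)      # h: (1, batch, hidden)
        return self.head(h[-1])       # predicted fuel moisture, (batch, 1)

model = FuelMoistureRNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Placeholder training data; real labels would come from field measurements
# such as those compiled in the National Fuel Moisture Database.
x = torch.randn(512, SEQ_LEN, N_FEATURES)
y = torch.rand(512, 1) * 200          # fuel moisture, percent of dry weight

for epoch in range(50):
    loss = nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```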

“Now, we are in a position where we can go back and test what we’ve been assuming for so long – the link between weather and live fuel moisture – in different ecosystems of the western United States,” Rao stated.

“One of our big breakthroughs was to look at a newer set of satellites that are using much longer wavelengths, which allows the observations to be sensitive to water much deeper into the forest canopy and be directly representative of the fuel moisture content,” stated researcher Konings.

Read the source article in The Wall Street Journal and read the Stanford University press release.

Read More


TinyML is giving hardware new life

The latest embedded software technology moves hardware into an almost magical realm

Aluminum and iconography are no longer enough for a product to get noticed in the marketplace. Today, great products need to be useful and deliver an almost magical experience, something that becomes an extension of life. Tiny Machine Learning (TinyML) is the latest embedded software technology that moves hardware into that almost magical realm, where machines can automatically learn and grow through use, like a primitive human brain.

Until now, building machine learning (ML) algorithms for hardware meant complex mathematical models based on sample data, known as “training data,” in order to make predictions or decisions without being explicitly programmed to do so. And if this sounds complex and expensive to build, it is. On top of that, ML-related tasks were traditionally offloaded to the cloud, creating latency, consuming scarce power and putting machines at the mercy of connection speeds. Combined, these constraints made computing at the edge slower, more expensive and less predictable.

But thanks to recent advances, companies are turning to TinyML as the latest trend in building product intelligence. Arduino, the company best known for open-source hardware, is making TinyML available to millions of developers. Together with Edge Impulse, it is turning ubiquitous Arduino boards such as the Arduino Nano 33 BLE Sense and other 32-bit boards into a powerful embedded ML platform. With this partnership you can run powerful learning models based on artificial neural networks (ANNs) that sample tiny sensors and run on low-powered microcontrollers.

Over the past year great strides were made in making deep learning models smaller, faster and runnable on embedded hardware through projects like TensorFlow Lite for Microcontrollers, uTensor and Arm’s CMSIS-NN. But building a quality dataset, extracting the right features, and training and deploying these models is still complicated. TinyML is the missing link between edge hardware and device intelligence, and it is now coming to fruition.
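
For a sense of the workflow, here is a hedged sketch using the TensorFlow Lite converter: train a small Keras model on sensor windows, then quantize it to 8-bit integers so the resulting file is small enough for a 32-bit board. The dataset, model architecture and file names are illustrative assumptions, not a reference TinyML pipeline.

```python
# Minimal sketch of the TinyML workflow described above: train a small Keras
# model, then use the TensorFlow Lite converter to produce a fully int8-quantized
# .tflite file suitable for a microcontroller such as the Arduino Nano 33 BLE Sense.
# The gesture dataset and shapes here are placeholders.
import numpy as np
import tensorflow as tf

# Hypothetical gesture data: 128-sample windows of 3-axis accelerometer readings.
x_train = np.random.rand(1000, 128, 3).astype(np.float32)
y_train = np.random.randint(0, 4, size=(1000,))

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(128, 3)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x_train, y_train, epochs=5, verbose=0)

# Full integer quantization keeps the model tiny and lets it run on
# microcontrollers that have no floating-point unit.
def representative_data():
    for sample in x_train[:100]:
        yield [sample[np.newaxis, ...]]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
open("gesture_model.tflite", "wb").write(tflite_model)
# The .tflite file can then be converted to a C array (e.g. with `xxd -i`) and
# compiled into firmware with the TensorFlow Lite for Microcontrollers runtime.
```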

Tiny devices with not-so-tiny brains

Read More


AI in Space: NASA Studying Exoplanets, ESA Supporting Satellites


AI is being employed in a wide range of efforts to explore and study space, including the study of exoplanets by NASA. (GETTY IMAGES)

By AI Trends Staff

AI is being employed in a wide range of efforts to explore and study space, including the study of exoplanets by NASA, the support of satellites by ESA, development of an empathic assistant for astronauts and efforts to track space debris.

NASA scientists are partnering with AI experts from companies including Intel, IBM and Google to apply advanced computer algorithms to problems in space science.

Machine learning is seen as helping space scientists to learn from data generated by telescopes and observatories such as the James Webb Space Telescope, according to a recent account from NASA. “These technologies are very important, especially for big data sets and in the exoplanet field,” stated Giada Arney, an astrobiologist at NASA’s Goddard Space Flight Center in Greenbelt, Md. (Exoplanets are planets beyond the solar system.) “Because the data we’re going to get from future observations is going to be sparse and noisy, really hard to understand. So using these kinds of tools has so much potential to help us.”

Giada Arney, astrobiologist, NASA’s Goddard Space Flight Center

NASA has laid some groundwork for collaborating with private industry. For the past four summers, NASA’s Frontier Development Lab (FDL) has brought together technology and space innovators for eight weeks every summer to brainstorm and develop code. The program is a partnership between the SETI Institute and NASA’s Ames Research Center, both located in Silicon Valley.

The program pairs science and computer engineering early-career doctoral students with experts from the space agency, academia and some big tech companies. The companies contribute hardware, algorithms, supercomputing resources, funding, facilities and subject matter experts. Some of the resulting technology has been put to use, helping to identify asteroids, find planets and predict extreme solar radiation events.

Scientists at Goddard have been using different techniques to reveal the chemistry of exoplanets, based on the wavelengths of light emitted or absorbed by molecules in their atmospheres. With thousands of exoplanets discovered so far, the ability to make quick decisions about which ones deserve further study would be a plus.

Arney, working with Shawn Domagal-Goldman, an astrobiologist at Goddard, and with technical support from Google Cloud, deployed a neural network to compare its performance to a more conventional machine learning approach. University of Oxford computer science graduate student Adam Cobb led a study to test the capability of a neural network against a widely used machine learning technique known as a “random forest.” The team analyzed the atmosphere of WASP-12b, an exoplanet discovered in 2008 that had already been studied with a random forest technique, using data supplied by NASA’s Hubble Space Telescope.

“We found out right away that the neural network had better accuracy than random forest in identifying the abundance of various molecules in WASP-12b’s atmosphere,” Cobb stated. Beyond the greater accuracy, the neural network model could also tell the scientists how certain it was about its prediction. “In a place where the data weren’t good enough to give a really accurate result, this model was better at knowing that it wasn’t sure of the answer, which is really important if we are to trust these predictions,” stated Domagal-Goldman.
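
The sketch below is a loose, self-contained illustration of that kind of comparison, fitting both a random forest and a small neural network to map spectra to molecular abundances on synthetic placeholder data. It is not the WASP-12b analysis, and it omits the uncertainty estimates the researchers highlight.

```python
# Illustrative comparison in the spirit of the study described above (not the
# actual WASP-12b analysis): fit a random forest and a small neural network to
# predict molecular abundances from a transmission spectrum, on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples, n_wavelengths, n_molecules = 5000, 100, 3

# Placeholder "spectra" and abundances; a real study would use simulated
# atmospheres and Hubble observations.
abundances = rng.uniform(size=(n_samples, n_molecules))
spectra = abundances @ rng.normal(size=(n_molecules, n_wavelengths))
spectra += 0.05 * rng.normal(size=spectra.shape)   # observational noise

X_train, X_test, y_train, y_test = train_test_split(
    spectra, abundances, test_size=0.2, random_state=0)

forest = RandomForestRegressor(n_estimators=200, random_state=0)
forest.fit(X_train, y_train)

net = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
net.fit(X_train, y_train)

print("random forest MAE:", mean_absolute_error(y_test, forest.predict(X_test)))
print("neural network MAE:", mean_absolute_error(y_test, net.predict(X_test)))
```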

The European Space Agency (ESA) is studying how to employ AI to support satellite operations, including relative position, communication and end-of-life management for large satellite constellations, according to an account from ESA.

The ESA has engaged in a number of studies on how to use AI for space applications and spacecraft operations as part of its Basic Activities program. One study examines using AI to support autonomous spacecraft that can navigate, perform telemetry analysis and upgrade their own software without communicating with Earth.

Another study focused on how AI can support the management of complex satellite constellations, to reduce the active workload of ground operators. Greater automation, such as for collision avoidance, can reduce the need for human intervention.

Additional studies are researching how a swarm of picosatellites – very small ones – can evolve a collective consciousness. The method employed explored known results in crystallography, the study of crystals, which may open a new way of conceiving lattice formation, a sub-discipline of order theory and abstract algebra.

AI Helping Astronauts Too; An AI Assistant with Empathy Coming

Astronauts traveling long distances for extended periods might be offered assistance from AI-powered emotional support robots, suggests a recent report in yahoo! News. Scientists are working to create an AI assistant that can sense human emotion and “respond with empathy.”

The robots could be trained to anticipate the needs of crew members and “intervene if their mental health is at stake.” An AI assistant with empathy could be helpful to astronauts on a deep-space mission to Mars.

Astronauts on the International Space Station have an intelligent robot called CIMON that can interact but is lacking in emotional intelligence, NASA CTO Tom Soderstrom has stated. A team at the organization’s Jet Propulsion Laboratory is working on a more sophisticated emotional support companion that can help fly the spacecraft as well as track the health and well-being of crew members.

AI Employed in Effort to Track Space Debris 

Space debris is becoming a critical issue in space. Scientists count more than 23,000 human-made fragments larger than 4 inches, and another 500,000 particles between half an inch and 4 inches in diameter. These objects move at 22,300 miles per hour; collisions cause dents, pits or worse.

Scientists have begun to augment the lasers employed to measure and track space debris with AI, specifically neural nets, according to a recent account in Analytics India Magazine.

Laser ranging of debris had become a challenge due to poor prediction accuracy, the small size of objects, and the lack of a reflective prism on the surface of debris, making it difficult to spot the exact location of fragments. Scientists began using a method to correct the telescope pointing error of the laser ranging system by enhancing certain hardware equipment. Most recently, AI deep learning techniques are starting to be employed to enhance the correction models.

Chinese researchers from the Chinese Academy of Surveying and Mapping, Beijing and Liaoning Technical University, Fuxin have worked to enhance the accuracy of identifying space junk. The team used a backpropagation neural network model, optimized by a proposed genetic algorithm and the Levenberg-Marquardt algorithm (used in curve fitting), to help pinpoint the location of debris. The results showed the probability of accurately locating debris improved by a factor of three to nine.
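
For illustration only, the sketch below trains a small network to predict pointing offsets from commanded telescope coordinates, which is the correction idea described above. It uses ordinary gradient-descent training in place of the genetic-algorithm and Levenberg-Marquardt optimization in the cited work, and the data and dimensions are placeholders.

```python
# Simplified sketch of a learned pointing-error correction: a small network maps
# commanded (azimuth, elevation) to the observed pointing offset, which can then
# be subtracted before firing the laser. Plain Adam training stands in for the
# GA / Levenberg-Marquardt optimization used in the cited research; data is fake.
import torch
import torch.nn as nn

# Placeholder data: pointing commands and the residual offsets to be learned.
az_el = torch.rand(2000, 2) * torch.tensor([360.0, 90.0])
offsets = torch.randn(2000, 2) * 0.01          # small angular residuals

model = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(500):
    pred = model(az_el / torch.tensor([360.0, 90.0]))   # normalize inputs
    loss = nn.functional.mse_loss(pred, offsets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At observation time, the predicted offset is applied to the commanded pointing,
# improving the chance the laser actually illuminates the debris fragment.
```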

“After improving the pointing accuracy of the telescope through deep learning techniques, space debris with a cross-sectional area of one meter squared and a distance of 1,500 kilometres can be identified,” stated Tianming Ma of the Chinese Academy of Surveying and Mapping, Beijing and Liaoning Technical University, Fuxin.

Read the source articles from NASA, the ESA, Robotics Business Review, yahoo! News and Analytics India Magazine.

Read More


AWS and Facebook launch an open-source model server for PyTorch

AWS and Facebook today announced two new open-source projects around PyTorch, the popular open-source machine learning framework. The first of these is TorchServe, a model-serving framework for PyTorch that will make it easier for developers to put their models into production. The other is TorchElastic, a library that makes it easier for developers to build fault-tolerant training jobs on Kubernetes clusters, including AWS’s EC2 spot instances and Elastic Kubernetes Service.
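
To give a flavor of what TorchServe usage can look like, here is a minimal sketch of a custom handler built on TorchServe's BaseHandler, with the packaging and serving CLI steps noted in comments. The request field name ("input") and the model and file names are assumptions for illustration, not TorchServe requirements.

```python
# A minimal sketch of serving a model with TorchServe: a custom handler that
# subclasses BaseHandler and overrides the preprocess/postprocess hooks.
# The JSON payload format (an "input" field) is assumed for this example.
import json
import torch
from ts.torch_handler.base_handler import BaseHandler

class MyModelHandler(BaseHandler):
    """Turns JSON request bodies into tensors and tensor outputs into JSON."""

    def preprocess(self, data):
        # TorchServe passes a list of requests; each body holds our JSON payload.
        rows = []
        for request in data:
            body = request.get("body") or request.get("data")
            if isinstance(body, (bytes, bytearray)):
                body = json.loads(body)
            rows.append(body["input"])
        return torch.tensor(rows, dtype=torch.float32)

    def postprocess(self, inference_output):
        # One JSON-serializable entry per request in the batch.
        return inference_output.tolist()

# Packaging and serving are then handled by the TorchServe CLI, roughly:
#   torch-model-archiver --model-name mymodel --version 1.0 \
#       --serialized-file model.pt --handler handler.py --export-path model_store
#   torchserve --start --model-store model_store --models mymodel.mar
# after which predictions are served over HTTP at /predictions/mymodel.
```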

In many ways, the two companies are taking what they have learned from running their own machine learning systems at scale and are putting this into the project. For AWS, that’s mostly SageMaker, the company’s machine learning platform, but as Bratin Saha, AWS VP and GM for Machine Learning Services, told me, the work on PyTorch was mostly motivated by requests from the community. And while there are obviously other model servers like TensorFlow Serving and the Multi Model Server available today, Saha argues that it would be hard to optimize those for PyTorch.

“If we tried to take some other model server, we would not be able to quote optimize it as much, as well as create it within the nuances of how PyTorch developers like to see this,” he said. AWS has lots of experience in running its own model servers for SageMaker that can handle multiple frameworks, but the community was asking for a model server that was tailored toward how they work. That also meant adapting the server’s API to what PyTorch developers expect from their framework of choice, for example.

As Saha told me, the server that AWS and Facebook are now launching as open source is similar to what AWS is using internally. “It’s quite close,” he said. “We actually started with what we had internally for one of our model servers and then put it out to the community, worked closely with Facebook, to iterate and get feedback — and then modified it so it’s quite close.”

Bill Jia, Facebook’s VP of AI Infrastructure, also told me he’s very happy about how his team and the community have pushed PyTorch forward in recent years. “If you look at the entire industry community — a large number of researchers and enterprise users are using AWS,” he said. “And then we figured out if we can collaborate with AWS and push PyTorch together, then Facebook and AWS can get a lot of benefits, but more so, all the users can get a lot of benefits from PyTorch. That’s our reason for why we wanted to collaborate with AWS.”

As for TorchElastic, the focus here is on allowing developers to create training systems that can work on large distributed Kubernetes clusters where you might want to use cheaper spot instances. Those are preemptible, though, so your system has to be able to handle that, while traditionally, machine learning training frameworks often expect a system where the number of instances stays the same throughout the process. That, too, is something AWS originally built for SageMaker. There, it’s fully managed by AWS, though, so developers never have to think about it. For developers who want more control over their dynamic training systems or to stay very close to the metal, TorchElastic now allows them to recreate this experience on their own Kubernetes clusters.

AWS has a bit of a reputation when it comes to open source and its engagement with the open-source community. In this case, though, it’s nice to see AWS lead the way to bring some of its own work on building model servers, for example, to the PyTorch community. In the machine learning ecosystem, that’s very much expected, and Saha stressed that AWS has long engaged with the community as one of the main contributors to MXNet and through its contributions to projects like Jupyter, TensorFlow and libraries like NumPy.

Read More


R&D Roundup: Ultrasound/AI medical imaging, assistive exoskeletons and neural weather modeling

In the time of COVID-19, much of what transpires from the science world to the general public relates to the virus, and understandably so. But other domains, even within medical research, are still active — and as usual, there are tons of interesting (and heartening) stories out there that shouldn’t be lost in the furious activity of coronavirus coverage. This last week brought good news for several medical conditions as well as some innovations that could improve weather reporting and maybe save a few lives in Cambodia.
Ultrasound and AI promise better diagnosis of arrhythmia

Arrhythmia is a relatively common …

Read More