Research reflects how AI sees through the looking glass

Things are different on the other side of the mirror.

Text is backward. Clocks run counterclockwise. Cars drive on the wrong side of the road. Right hands become left hands.

Intrigued by how reflection changes images in subtle and not-so-subtle ways, a team of Cornell University researchers used artificial intelligence to investigate what sets originals apart from their reflections. Their algorithms learned to pick up on unexpected clues such as hair parts, gaze direction and, surprisingly, beards — findings with implications for training machine learning models and detecting faked images.

“The universe is not symmetrical. If you flip an image, there are differences,” said Noah Snavely, associate professor of computer science at Cornell Tech and senior author of the study, “Visual Chirality,” presented at the 2020 Conference on Computer Vision and Pattern Recognition, held virtually June 14-19. “I’m intrigued by the discoveries you can make with new ways of gleaning information.”

Zhiqiu Lin is the paper’s first author; co-authors are Abe Davis, assistant professor of computer science, and Cornell Tech postdoctoral researcher Jin Sun.

Differentiating between original images and reflections is a surprisingly easy task for AI, Snavely said — a basic deep learning algorithm can quickly learn to classify whether an image has been flipped with 60% to 90% accuracy, depending on the kinds of images used to train the algorithm. Many of the clues it picks up on are difficult for humans to notice.

For this study, the team developed technology that creates a heat map indicating which parts of an image are of interest to the algorithm, to gain insight into how it makes these decisions.
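
To make the setup concrete, here is a minimal sketch — not the authors’ code — of how such a flip classifier could be trained, assuming PyTorch and torchvision are available. A Grad-CAM-style visualization over the final convolutional layer is one common way to produce the kind of heat map described above, though the paper’s exact method may differ.

# Minimal sketch (not the authors' code): train a binary classifier to tell
# original images from mirrored copies, assuming PyTorch and torchvision.
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF
from torchvision import models

model = models.resnet18(num_classes=2)  # class 0 = original, class 1 = flipped
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def make_batch(images):
    """Mirror a random half of the batch and label each image accordingly."""
    labels = torch.randint(0, 2, (images.size(0),))
    out = images.clone()
    for i, lab in enumerate(labels):
        if lab == 1:
            out[i] = TF.hflip(images[i])
    return out, labels

def train_step(images):
    inputs, labels = make_batch(images)
    optimizer.zero_grad()
    loss = criterion(model(inputs), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example: one training step on a random batch of eight 224x224 RGB images.
print(train_step(torch.rand(8, 3, 224, 224)))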

They discovered, not surprisingly, that the most commonly used clue was text, which looks different backward in every written language. To learn more, they removed images with text from their data set, and found that the next set of characteristics the model focused on included wrist watches, shirt collars (buttons tend to be on the left side), faces and phones — which most people tend to carry in their right hands — as well as other factors revealing right-handedness.

The researchers were intrigued by the algorithm’s tendency to focus on faces, which don’t seem obviously asymmetrical. “In some ways, it left more questions than answers,” Snavely said.

They then conducted another study focusing on faces and found that the heat map lit up on areas including hair part, eye gaze — most people, for reasons the researchers don’t know, gaze to the left in portrait photos — and beards.

Snavely said he and his team members have no idea what information the algorithm is finding in beards, but they hypothesized that the way people comb or shave their faces could reveal handedness.

“It’s a form of visual discovery,” Snavely said. “If you can run machine learning at scale on millions and millions of images, maybe you can start to discover new facts about the world.”

Each of these clues individually may be unreliable, but the algorithm can build greater confidence by combining multiple clues, the findings showed. The researchers also found that the algorithm uses low-level signals, stemming from the way cameras process images, to make its decisions.

Though more study is needed, the findings could impact the way machine learning models are trained. These models need vast numbers of images in order to learn how to classify and identify pictures, so computer scientists often use reflections of existing images to effectively double their datasets.
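
As an illustration of the augmentation practice described above, this is the standard horizontal-flip transform in torchvision (an assumption about tooling, not something the Cornell team prescribes); whether it is safe depends on whether the task is sensitive to chirality cues such as text, watches or hair parts.

# Common "flip to double the data" augmentation, sketched with torchvision.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),  # each image is mirrored half the time
    transforms.ToTensor(),
])
# For chirality-sensitive data (text, faces, watches), this may inject bias.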

Examining how these reflected images differ from the originals could reveal information about possible biases in machine learning that might lead to inaccurate results, Snavely said.

“This leads to an open question for the computer vision community, which is, when is it OK to do this flipping to augment your dataset, and when is it not OK?” he said. “I’m hoping this will get people to think more about these questions and start to develop tools to understand how it’s biasing the algorithm.”

Understanding how reflection changes an image could also help use AI to identify images that have been faked or doctored — an issue of growing concern on the internet.

“This is perhaps a new tool or insight that can be used in the universe of image forensics, if you want to tell if something is real or not,” Snavely said.

The research was supported in part by philanthropists Eric Schmidt, former CEO of Google, and Wendy Schmidt.


Jellyfish-inspired soft robots can outswim their natural counterparts

Engineering researchers at North Carolina State University and Temple University have developed soft robots inspired by jellyfish that can outswim their real-life counterparts. More practically, the new jellyfish-bots highlight a technique that uses pre-stressed polymers to make soft robots more powerful.

“Our previous work focused on making soft robots that were inspired by cheetahs — and while the robots were very fast, they still had a stiff inner spine,” says Jie Yin, an assistant professor of mechanical and aerospace engineering at NC State and corresponding author of a paper on the new work. “We wanted to make a completely soft robot, without an inner spine, that still utilized that concept of switching between two stable states in order to make the soft robot move more powerfully — and more quickly. And one of the animals we were inspired by was the jellyfish.”

The researchers created their new soft robots from two bonded layers of the same elastic polymer. One layer of polymer was pre-stressed, or stretched. A second layer was not pre-stressed and contained an air channel.

“We can make the robot ‘flex’ by pumping air into the channel layer, and we control the direction of that flex by controlling the relative thickness of the pre-stressed layer,” Yin says.

Here’s how it works. When combined with a third, stress-free layer, called an intermediate layer, the pre-stressed layer wants to move in a particular direction. For example, you might have a polymer strip that has been pre-stressed by pulling it in two directions. After attaching the pre-stressed material to the intermediate layer, the end result is a bilayer strip that wants to curve down, like a frowning face. If this bilayer strip — which serves as the pre-stressed layer in the finished robot — is thinner than the layer with the air channel, that frowning curve will bend into a smiling curve as air is pumped into the channel layer. However, if the pre-stressed layer is thicker than the channel layer, the frown will become more and more pronounced as air is pumped into the channel layer. Either way, once the air is allowed to leave the channel layer, the material snaps back to its original, “resting” state.
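
The qualitative rule in that description can be captured in a toy sketch (purely illustrative, not the researchers’ mechanics model): which way the robot flexes when inflated depends on whether the pre-stressed bilayer is thinner or thicker than the channel layer.

# Toy sketch of the qualitative rule described above; not a mechanics model,
# and the thickness values are arbitrary placeholders.
def curvature_state(prestressed_thickness_mm, channel_thickness_mm, inflated):
    if not inflated:
        return "frown (resting state, curved down)"
    if prestressed_thickness_mm < channel_thickness_mm:
        return "smile (snaps upward when air is pumped in)"
    return "deeper frown (curves further down when air is pumped in)"

print(curvature_state(0.5, 1.0, inflated=True))  # thinner pre-stressed layer
print(curvature_state(1.5, 1.0, inflated=True))  # thicker pre-stressed layer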

In fact, this simple example describes one of the soft robots created by the research team, a fast-moving soft crawler. It resembles a larval insect curling its body, then jumping forward as it quickly releases its stored energy.

The jellyfish-bot is slightly more complicated, with the pre-stressed disk-like layer being stretched in four directions (think of it as being pulled east and west simultaneously, then being pulled north and south simultaneously). The channel layer is also different, consisting of a ring-like air channel. The end result is a dome that looks like a jellyfish.

As the jellyfish-bot “relaxes,” the dome curves up, like a shallow bowl. When air is pumped into the channel layer, the dome quickly curves down, pushing out water and propelling itself forward. In experimental testing, the jellyfish-bot had an average speed of 53.3 millimeters per second. That’s not bad, considering that none of the three jellyfish species the researchers examined went faster than an average of 30 millimeters per second.

Lastly, the researchers created a three-pronged gripping robot — with a twist. Most grippers hang open when “relaxed,” and require energy to hold on to their cargo as it is lifted and moved from point A to point B. But Yin and his collaborators used the pre-stressed layers to create grippers whose default position is clenched shut. Energy is required to open the grippers, but once they’re in position, the grippers return to their “resting” mode — holding their cargo tight.

“The advantage here is that you don’t need energy to hold on to the object during transport — it’s more efficient,” Yin says.


Coordinating complex behaviors between hundreds of robots

In one of the more memorable scenes from the 2002 blockbuster film Minority Report, Tom Cruise is forced to hide from a swarm of spider-like robots scouring a towering apartment complex. While most viewers are likely transfixed by the small, agile bloodhound replacements, a computer engineer might marvel instead at their elegant control system.

In a building several stories tall with numerous rooms, hundreds of obstacles and thousands of places to inspect, the several dozen robots move as one cohesive unit. They spread out in a search pattern to thoroughly check the entire building while simultaneously splitting tasks so as to not waste time doubling back on their own paths or re-checking places other robots have already visited.

Such cohesion would be difficult for human controllers to achieve, let alone for an artificial controller to compute in real-time.

“If a control problem has three or four robots that live in a world with only a handful of rooms, and if the collaborative task is specified by simple logic rules, there are state-of-the-art tools that can compute an optimal solution that satisfies the task in a reasonable amount of time,” said Michael M. Zavlanos, the Mary Milus Yoh and Harold L. Yoh, Jr. Associate Professor of Mechanical Engineering and Materials Science at Duke University.

“And if you don’t care about the best solution possible, you can solve for a few more rooms and more complex tasks in a matter of minutes, but still only a dozen robots tops,” Zavlanos said. “Any more than that, and current algorithms are unable to overcome the sheer volume of possibilities in finding a solution.”

In a new paper published online on April 29 in the International Journal of Robotics Research, Zavlanos and his recent PhD graduate student, Yiannis Kantaros, who is now a postdoctoral researcher at the University of Pennsylvania, propose a new approach to this challenge called STyLuS*, for large-Scale optimal Temporal Logic Synthesis, that can solve problems massively larger than what current algorithms can handle, with hundreds of robots, tens of thousands of rooms and highly complex tasks, in a small fraction of the time.

To understand the basis of the new approach, one must first understand linear temporal logic, which is not nearly as scary as it sounds. Suppose you wanted to program a handful of robots to collect mail from a neighborhood and deliver it to the post office every day. Linear temporal logic is a way of writing down the commands needed to complete this task.

For example, these commands might include visiting each house in sequential order, returning to the post office and then waiting for someone to retrieve the collected mail before setting out again. While this might be easy to say in English, it’s more difficult to express mathematically. Linear temporal logic can do so using its own symbols which, although they might look like Klingon to the common observer, are extremely useful for expressing complex control problems.

“The term linear is used because points in time have a unique successor based on a discrete, linear model of time, and temporal refers to the use of operators such as until, next, eventually and always,” said Kantaros. “Using this mathematical formalism, we can build complex commands such as ‘visit all the houses except house two,’ ‘visit houses three and four in sequential order,’ and ‘wait until you’ve been to house one before going to house five.’”
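
As an illustration (the notation below is standard LTL, not necessarily the input format STyLuS* uses), those example commands can be written as formulas over atomic propositions such as h1, meaning “a robot is at house 1”:

# The example commands above written as LTL strings, using the standard
# operators G (always), F (eventually) and U (until).
commands = {
    "visit all the houses except house two":
        "F h1 & F h3 & F h4 & F h5 & G !h2",
    "visit houses three and four in sequential order":
        "F (h3 & F h4)",
    "wait until you've been to house one before going to house five":
        "!h5 U h1",
}
for english, formula in commands.items():
    print(f"{english}  ->  {formula}")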

To find robot controllers that satisfy such complex tasks, the location of each robot is mapped into a discrete data point called a “node.” Then, from each node, there exist multiple other nodes that are a potential next step for the robot.

A traditional controller searches through each one of these nodes and the potential paths to take between them before figuring out the best way to navigate its way through. But as the number of robots and locations to visit increase, and as the logic rules to follow become more sophisticated, the solution space becomes incredibly large in a very short amount of time.

A simple problem with five robots living in a world with ten houses could contain millions of nodes, capturing all possible robot locations and behaviors towards achieving the task. This requires a lot of memory to store and processing power to search through.

To skirt around this issue, the researchers propose a new method that, rather than constructing these incredibly large graphs in their entirety, instead creates smaller approximations with a tree structure. At every step of the process, the algorithm randomly selects one node from the large graph, adds it to the tree, and rewires the existing paths between the nodes in the tree to find more direct paths from start to finish.

“This means that as the algorithm progresses, this tree that we incrementally grow gets closer and closer to the actual graph, which we never actually construct,” said Kantaros. “Since our incremental graph is much smaller, it is easy to store in memory. Moreover, since this graph is a tree, graph search, which otherwise has exponential complexity, becomes very easy because now we only need to trace the sequence of parent nodes back to the root to find the desired path.”
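
The bookkeeping Kantaros describes can be sketched in a few lines (illustrative only, not STyLuS* itself): each sampled node keeps a pointer to its parent and a cost, rewiring just repoints a node to a cheaper parent, and recovering a path is a walk back to the root.

# Sketch of the tree bookkeeping described above; illustrative, not STyLuS*.
class TreeNode:
    def __init__(self, state, parent=None, cost=0.0):
        self.state = state    # e.g. joint robot locations plus task progress
        self.parent = parent
        self.cost = cost

def trace_path(node):
    """Walk parent pointers back to the root to recover the planned path."""
    path = []
    while node is not None:
        path.append(node.state)
        node = node.parent
    return list(reversed(path))

root = TreeNode("start")
a = TreeNode("waypoint-A", parent=root, cost=1.0)
b = TreeNode("waypoint-B", parent=a, cost=2.5)

# Rewiring: if reaching b directly from the root is cheaper, repoint its parent.
if root.cost + 2.0 < b.cost:
    b.parent, b.cost = root, root.cost + 2.0

print(trace_path(b))  # ['start', 'waypoint-B']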

It had long been accepted that growing trees could not be used to search the space of possible solutions for these types of robot control problems. But in the paper, Zavlanos and Kantaros show that they can make it work by implementing two clever tricks. First, the algorithm chooses the next node to add based on information about the task at hand, which allows the tree to quickly approximate a good solution to the problem. Second, even though the algorithm grows trees, it can still detect cycles in the original graph space that capture solutions to such temporal logic tasks.

The researchers show that this method will always find an answer if there is one, and it will always eventually find the best one possible. They also show that this method can arrive at that answer exponentially fast. Working with a problem of 10 robots searching through a 50-by-50 grid space — 250 houses to pick up mail — current state-of-the-art algorithms take 30 minutes to find an optimal solution.

STyLuS* does it in about 20 seconds.

“We have even solved problems with 200 robots that live on a 100-by-100 grid world, which is far too large for today’s algorithms to handle,” said Zavlanos. “While there currently aren’t any systems that use 200 robots to do something like deliver packages, there might be in the future. And they would need a control framework like STyLuS* to be able to deliver them while satisfying complex logic-based rules.”


Automakers Making Deals to Speed Incorporation of AI


Tech companies are helping auto manufacturers to accelerate the incorporation of AI into their software systems supporting self-driving vehicles. (GETTY IMAGES)

By AI Trends Staff

Automakers are making deals with technology companies to produce the next generation of cars that incorporate AI technology in new ways.

Nvidia last week reached an agreement with Mercedes-Benz to design a software-defined computing network for the car manufacturer’s entire fleet, with over-the-air updates and recurring revenue for applications, according to an account in Barron’s.

“This is the iPhone moment of the car industry,” stated Nvidia CEO Jensen Huang, who founded the company in 1993 to make a new chip to power three-dimensional video games. Gaming now represents $6.1 billion in revenue for Nvidia, which is now positioning for its next phase of growth, which will involve AI to a great extent. “People thought we were a videogame company,” stated Huang. “But we’re an accelerated computing company where videogames were our first killer app.”

The Data Center category, which exploits AI heavily, has been a winner for Nvidia, with revenue expected to more than double to $6.5 billion, making it the company’s biggest market.

Nvidia has established its CUDA parallel computing platform and application programming interface, used to develop applications that run on the company’s chips, as a market leader. Released in 2007, CUDA enables software developers and software engineers to use the graphics processing unit for general-purpose processing, an approach called GPGPU.

From its start producing hardware for videogames, to hardware and software to support AI, and now to hardware, software and services for cars, Nvidia sees the opportunity as transformative. “The first vertical market that we chose is autonomous vehicles because the scale is so great,” Huang stated. “And the life of the car is so long that if you offer new capabilities to each new owner, the economics could be quite wonderful.”

The software-centric computing architecture is based on Nvidia’s Drive AGX Orin system-on-a-chip. The underlying architecture will be standard in Mercedes’ next generation of vehicles, starting sometime toward the end of 2024, stated Ola Källenius, chairman of the board of management of Daimler AG and head of Mercedes-Benz AG, during a live stream of the announcement, according to an account in TechCrunch.

Ola Källenius, chairman of Daimler AG and head of Mercedes-Benz AG

The two companies plan to jointly develop the AI and automated vehicle applications that include Level 2 and Level 3 driver assistance functions, as well as automated parking functions up to Level 4.

“Many people talk about the modern car, the new car as a kind of smartphone on wheels. If you want to take that approach you really have to look at source software architecture from a holistic point of view,” stated Källenius. “One of the most important domains here is the driving assistant domain. That needs to dovetail into what we call software-driven architecture, to be able to (with high computing power) add use cases for the customer, this case the driving assistant autonomous space.”

Waymo and Volvo Get Together on Self-Driving Electric Vehicles

In another automaker-tech partnership announced last week, Waymo and the Volvo Cars Group announced a new global partnership to develop a self-driving electric vehicle designed for ride-hailing use, according to a report in Reuters.

Waymo, a unit of Alphabet which also owns Google, will be the exclusive global partner for Volvo Cars for developing self-driving vehicles capable of operating safely without routine driver intervention. Waymo will focus on artificial intelligence for the software “driver.” Volvo will design and manufacture the vehicles.

Owned by China’s Zhejiang Geely Holding Group Co., Volvo has a separate agreement to deliver vehicles to ride-hailing company Uber Technologies, which Uber will equip to operate as self-driving vehicles. Volvo Cars is continuing to deliver vehicles to Uber. The Uber effort to develop self-driving vehicle technology was disrupted after a self-driving Volvo SUV operated by Uber struck and killed a pedestrian in Arizona in 2018.

Waymo and Volvo did not say when they expect to launch their new ride-hailing vehicle. Waymo said it will continue working with Fiat Chrysler, Jaguar Land Rover, and the Renault Nissan Mitsubishi Alliance.

Startups Assisting Automakers with Self-Driving Car Tech

Meanwhile, a number of startups are assisting automakers with adding AI functions into new models of existing car lines.

AutoX of San Jose, Calif., has focused its self-driving car technology on retail uses such as delivering groceries, according to a recent account in builtin. Users can select grocery items from an app and have them delivered; users can also browse the vehicle-based mobile store upon delivery. AutoX has launched a pilot program in San Jose, testing the service within a geo-fenced zone.

AutoX was founded in 2016 by Dr. Jianxiong Xiao (aka. Professor X), a self-driving technologist from Princeton University. The company’s team of engineers and scientists have extensive industry experience in autonomous driving hardware and software. AutoX has eight offices and five R&D centers globally. Investors include Shanghai Auto (China’s largest car manufacturer), Dongfeng Motor (China’s second-largest car manufacturer), Alibaba AEF, MediaTek MTK, and financial institutions. The system has been deployed on 15 vehicle platforms, including one from Ford Motor.

Optimus Ride of Boston offers self-driving vehicles that can operate autonomously within geofenced environments, such as airports, academic campuses, residential communities, office/industrial parks and city zones.

In collaboration with Microsoft, Optimus Ride is working on Virtual Ride Assistant (VRA), to provide dynamic interactions between riders, the vehicle and a remote assistance team. The VRA provides audio-visual tools for riders to be informed about the system, to request changes in destination or routing and to contact a remote assistance system.

The company has deployments at the Brooklyn Navy Yard and Paradise Valley Estates in Paradise Valley, Calif., and a strategic development relationship with Brookfield Properties, developers of Halley Rise, a mixed-use district in Reston, Va.

A spinoff of MIT, Optimus Ride received approval from the Massachusetts Department of Transportation in 2017 to test highly automated vehicles on public streets.

The company incorporated Nvidia’s Drive PX 2 computing platform to accelerate its development.

Sertac Karaman, co-founder, president and chief scientist, Optimus Ride

“We believe the computational power needed to make self-driving vehicles a reality is finally coming to market” with the Nvidia platform, stated Sertac Karaman, co-founder, president and chief scientist at Optimus Ride.

Rethink Robotics of Boston and Rheinböllen, Germany, builds smart, collaborative robots to help in industrial automation, and auto manufacturing in particular.

The company was founded in 2008 and acquired in 2018 by the HAHN Group of Germany, which runs a global network of specialized technology companies offering industrial automation and robotic solutions. A year after the acquisition, HAHN announced a new generation of the Sawyer collaborative robot.

Read the source articles in Barron’s, TechCrunch, Reuters and builtin.


AI Being Applied in Agriculture to Help Grow Food, Support New Methods


AI is being applied to many areas of agriculture, including vertical farming, where crops are grown vertically-stacked in a controlled environment. (GETTY IMAGES)

By John P. Desmond, AI Trends Editor

AI continues to have an impact in agriculture, with efforts underway to help grow food, combat disease and pests, employ drones and other robots with computer vision, and use machine learning to monitor soil nutrient levels.

In Leones, Argentina, a drone with a special camera flies low over 150 acres of wheat checking each stalk, one by one, looking for the beginnings of a fungal infection that could threaten this year’s crop.

The flying robot is powered by a computer vision system incorporating AI supplied by Taranis, a company founded in 2015 in Tel Aviv, Israel by a team of agronomists and AI experts. The company is focused on bringing precision and control to the agriculture industry through a system it refers to as an “agriculture intelligence platform.”

The platform relies on sophisticated computer vision, data science and deep learning algorithms to generate insights aimed at preventing crop yield loss from diseases, insects, weeds and nutrient deficiencies. The Taranis system is monitoring millions of farm acres across the US, Argentina, Brazil, Russia, Ukraine and Australia, the company states. The company has raised some $30 million from investors.

“Today, to increase yields in our lots, it’s essential to have a technology that allows us to make decisions immediately,” Ernesto Agüero, the producer on San Francisco Farm in Argentina, stated in an account in Business Insider.

Elsewhere, a fruit-picking robot named Virgo is using computer vision to decide which tomatoes are ripe and how to pick them gently, so that just the ripe tomatoes are harvested and the rest keep growing. Boston-based startup Root AI developed the robot to assist indoor farmers.

“Indoor growing powered by artificial intelligence is the future,” stated Josh Lessing, co-founder and CEO of Root AI. The company is currently installing systems in commercial greenhouses in Canada.

More indoor farming is happening, with AI heavily engaged. 80 Acres Farms of Cincinnati opened a fully-automated indoor growing facility last year, and currently has seven sites in the US. AI is used to monitor every step of the growing process.

“We can tell when a leaf is developing and if there are any nutrient deficiencies, necrosis, whatever might be happening to the leaf,” stated Mike Zelkind, CEO of 80 Acres. “We can identify pest issues and a variety of other things with vision systems today.” The crops grow faster indoors and have the potential to be more nutrient-dense, he suggests.

A subset of indoor farming is “vertical farming,” the practice of growing crops in vertically-stacked layers, often incorporating a controlled environment which aims to optimize plant growth. It may also use an approach without soil, such as hydroponics, aquaponics and aeroponics.

Austrian Researchers Studying AI in Vertical Farming

Researchers at the University of Applied Sciences Burgenland in Austria are involved in a research project to leverage AI to help make the vertical farming industry viable, according to an account in Hortidaily.

The team has built a small experimental factory, a 2.5 x 3 x 2.5-meter cube, double-walled with light-proof insulation. No sun is needed inside the cube. Light and temperature are controlled. Cultivation is based on aeroponics, with roots suspended in the air and nutrients delivered via a fine mist, using a fraction of the amount of water required for conventional cultivation. The fine mist is mixed with nutrients, causing the plants to grow faster than when in soil.

The program, called Agri-Tec 4.0, is run by Markus Tauber, head of the Cloud Computing Engineering program at the university. His team contributes expertise in sensors and sensor networking, and plans to develop algorithms to ensure optimal plant growth.

Markus Tauber, head of the Cloud Computing Engineering program, University of Applied Sciences Burgenland, Austria

The software architecture bases its actions on five points: monitoring, analysis, planning, execution and existing knowledge. In addition to coordinating light, temperature, nutrients and irrigation, the wind must also be continuously coordinated, even though the plants grow inside a dark cube.

“In the case of wind control, we monitor the development of the plant using the sensor and our knowledge. We use image data for this. We derive the information from the thickness and inclination of the stem. From a certain thickness and inclination, more wind is needed again,” Tauber stated.
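
A rough sketch of that monitor-analyze-plan-execute loop is shown below. The threshold values and function names are invented for illustration and are not from the Agri-Tec 4.0 project.

# Illustrative monitoring loop for wind control; thresholds are made up.
def plan_wind(stem_thickness_mm, stem_inclination_deg,
              thickness_limit=4.0, inclination_limit=15.0):
    """Analyze image-derived stem measurements and decide on an action."""
    if stem_thickness_mm > thickness_limit or stem_inclination_deg > inclination_limit:
        return "increase fan speed"  # plant growing thick or leaning: more wind
    return "hold fan speed"

# Monitor -> analyze -> plan -> execute, repeated for each measurement cycle.
for thickness, inclination in [(3.1, 8.0), (4.6, 9.0), (3.8, 21.0)]:
    print(thickness, inclination, "->", plan_wind(thickness, inclination))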

The system uses an irrigation robot supplied by PhytonIQ Technology of Austria. Co-founder Martin Parapatits cited the worldwide trend to combine vertical farming and AI. “Big players are investing but there is no ready-made solution yet,” he stated.

He seconded the importance of wind control. “Under the influence of wind ventilation or different wavelengths of light, plants can be kept small and bushy or grown tall and slender,” Parapatits stated. “At the same time, the air movement dries out the plants’ surroundings. This reduces the risk of mold and encourages the plant to breathe.”

San Francisco Startup Trace Genomics Studies Soil

Soil is still important for startup Trace Genomics of San Francisco, founded in 2015 to provide soil analysis services using machine learning to assess soil strengths and weaknesses. The goal is to prevent defective crops and optimize the potential to produce healthy crops.

Services are provided in packages which include a pathogen screening based on bacteria and fungi, and a comprehensive pathogen evaluation, according to an account in emerj.

Co-founders Diane Wu and Poornima Parameswaran met in a laboratory at Stanford University in 2009, following their passions for pathology and genetics. The company has raised over $35 million in funding so far, according to its website.

Trace Genomics was recently named a World Economic Forum Technology Partner, in recognition of its use of deep science and technology to tackle the challenge of soil degradation.

Poornima Parameswaran, Co-founder and Senior Executive, Trace Genomics

“This planet can easily feed 10 billion people, but we need to collaborate across the food and agriculture system to get there,” stated Parameswaran in a press release. “Every stakeholder in food and agriculture – farmers, input manufacturers, retail enterprises, consumer packaged goods companies – needs science-backed soil intelligence to unlock the full potential of the last biological frontier, our living soil. Together, we can discover and implement new and improved agricultural practices and solutions that serve the dual purpose of feeding the planet while preserving our natural resources and positioning agriculture as a solution for climate change.”

Read the source articles in Business Insider, Hortidaily and emerj.


The Puzzle Of Whether AI Should Have Rights, Including The Case Of Autonomous Cars


If we assign human rights to AI, using the Universal Declaration of Human Rights as a guide, the AI can make some independent judgements. (WIKIPEDIA COMMONS)

By Lance Eliot, the AI Trends Insider

Sometimes a question seems so ridiculous that you feel compelled to reject its premise out-of-hand.

Let’s give this a whirl.

Should AI have human rights?

Most people would likely react that there is no bona fide basis to admit AI into the same rarified air as human beings and be considered endowed with human rights.

Others, though, counterargue that they see crucial reasons to do so and are adamantly seeking to have AI assigned human rights in the same manner that the rest of us have human rights.

Of course, you might shrug your shoulders and say that it is of little importance either way and wonder why anyone should be so bothered and ruffled-up about the matter.

It is indeed a seemingly simple question, though the answer has tremendous consequences as will be discussed herein.

One catch is that there is a bit of a trick involved because the thing or entity or “being” that we are trying to assign human rights to is currently ambiguous and not even yet in existence.

In other words, what does it mean when we refer to “AI” and how will we know it when we discover or invent it?

At this time, there isn’t any AI system of any kind that could be considered sentient, and indeed by all accounts, we aren’t anywhere close to achieving the so-called singularity (that’s the point at which AI flips over into becoming sentient and we look in awe at a presumably human-equivalent intelligence embodied in a machine).

I’m not saying that we won’t ever reach that vaunted point, yet some fervently argue we won’t.

I suppose it’s a tossup as to whether getting to the singularity is something to be sought or to be feared.

For those that look at the world in a smiley face way, perhaps AI that is our equivalent in intelligence will aid us in solving up-until-now unsolvable problems, such as aiding in finding a cure for cancer or being able to figure out how to overcome world hunger.

In essence, our newfound buddy will boost our aggregate capacity of intelligence and be an instrumental contributor towards the betterment of humanity.

I’d like to think that’s what will happen.

On the other hand, for those of you that are more doom-and-gloom oriented (perhaps rightfully so), you are gravely worried that this AI might decide it would rather be the master versus the slave and could opt on a massive scale to take over humans.

Plus, especially worrisome, the AI might ascertain that humans aren’t worthwhile anyway, and off with the heads of humanity.

As a human, I am not particularly keen on that outcome.

All in all, the question about AI and human rights is right now a rather theoretical exercise since there isn’t this topnotch type of AI yet crafted (of course, it’s always best to be ready for a potentially rocky future, thus, discussing the topic beforehand does have merit).

For my explanation about the singularity, see the link here: https://aitrends.com/ai-insider/singularity-and-ai-self-driving-cars/

For the presumed dangers of a superintelligence, see my coverage at this link here: https://aitrends.com/ai-insider/super-intelligent-ai-paperclip-maximizer-conundrum-and-ai-self-driving-cars/

For my framework explaining the nature of AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/

For my indication about how achieving self-driving cars is akin to a moonshot, see this link: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/

A grand convergence of technologies is enabling the possibility of true self-driving cars, see my explanation: https://aitrends.com/ai-insider/grand-convergence-explains-rise-self-driving-cars/

Less Than Complete AI

One supposes that we could consider the question of human rights as it might apply to AI that’s a lesser level of capability than the (maybe) insurmountable threshold of sentience.

Keep in mind that doing this, lowering the bar, could open a potential Pandora’s box of where the bar should be set at.

Here’s how.

Imagine that you are trying to do pull-ups and the rule is that you need to get your chin up above the bar.

It becomes rather straightforward to ascertain whether or not you’ve done an actual pull-up.

If your chin doesn’t get over that bar, it’s not considered a true pull-up. Furthermore, it doesn’t matter whether your chin ended-up a quarter inch below the bar, nor whether it was three inches below the bar. Essentially, you either make it clearly over the bar, or you don’t.

In the case of AI, if the “bar” is the achievement of sentience, and if we are willing to allow that some alternative place below the bar will count for having achieved AI, where might we draw that line?

You might argue that if the AI can write poetry, voila, it is considered true AI.

In existing parlance, some refer to this as a form of narrow AI, meaning AI that can do well in a narrow domain, but this does not ergo mean that the AI can do particularly well in any other domains (likely not).

Someone else might say that writing poetry is not sufficient and that instead if AI can figure out how the universe began, the AI would be good enough, and though it isn’t presumably fully sentient, it nonetheless is deserving of human rights.

Or, at least deserving of the consideration of being granted human rights (which, maybe humanity won’t decide upon until the day after the grand threshold is reached, whatever the threshold is that might be decided upon since we do often like to wait until the last moment to make thorny decisions).

The point being that we might indubitably argue endlessly about how far below the bar that we would collectively agree is the point at which AI has gotten good enough for which it then falls into the realm of possibly being assigned human rights.

For those of you that say that this matter isn’t so complicated and you’ll certainly know it (i.e., AI) when you see it, there’s a famous approach called the Turing Test that seeks to clarify how to figure out whether AI has reached human-like intelligence. But there are lots of twists and turns that make this, for some, surprisingly less clear-cut than you might assume.

In short, once we agree that going below the sentience bar is allowed, the whole topic gets really murky and possibly undecidable due to trying to reach consensus on whether a quarter inch below, or three inches below, or several feet below the bar is sufficient.

Wait for a second, some are exhorting, why do we need to even consider granting human rights to a machine anyway?

Well, some believe that a machine that showcases human-like intelligence ought to be treated with the same respect that we would give to another human.

A brief tangent herein might be handy to ponder.

You might know that there is an acrimonious and ongoing debate about whether animals should have the same rights as humans.

Some people vehemently say yes, while others claim it is absurd to assign human rights to “creatures” that are not able to exhibit the same intelligence as humans do (sure, there are admittedly some mighty clever animals, but once again if the bar is a form of sentience that is wrapped into the fullest nature of human intelligence, we are back to the issue of how far we lower the “bar” to accommodate them, in this case accommodating everyday animals).

Some would say that until the day upon which animals are able to write poetry and intellectually contribute to other vital aspects of humanity’s pursuits, they can have some form of “animal rights” but by-gosh they aren’t “qualified” for getting the revered human rights.

Please know that I don’t want to take us down the rabbit hole on animal rights, and so let’s set that aside for the moment, realizing that I brought it up just to mention that the assignment of human rights is a touchy topic and one that goes beyond the realm of debates about AI.

Okay, I’ve highlighted herein that the “AI” mentioned in the question of assigning human rights is ambiguous and not even yet achieved.

You might be curious about what it means to refer to “human rights” and whether we can all generally agree to what that consists of.

Fortunately, yes, generally we do have some agreement on that matter.

I’m referring to the United Nations promulgation of the Universal Declaration of Human Rights (UDHR).

Be aware that some critics don’t like the UDHR, including those that criticize its wording, some believe it doesn’t cover enough rights, some assert that it is vague and misleading, etc.

Look, I’m not saying it is perfect, nor that it is necessarily “right and true,” but at least it is a marker or line-in-the-sand, and we can use it for the needed purposes herein.

Namely, for a debate and discussion about assigning human rights to AI, let’s allow that this thought experiment on a weighty matter can be undertaken using the UDHR as a means of expressing what we intend overall as human rights.

In a moment, I’ll identify some of the human rights spelled out in the UDHR, and we can explore what might happen if those human rights were assigned to AI.

One other quick remark.

Many assume that AI of a sentience capacity will of necessity be rooted in a robot.

Not necessarily.

There could be a sentient AI that is embodied in something other than a “robot” (most people assume a robot is a machine that has robotic arms, robotic legs, robotic hands, and overall looks like a human being, though a robot can refer to a much wider variety of machine instantiations).

Let’s then consider the following idea: What might happen if we assign human rights to AI and we are all using AI-based true self-driving cars as our only form of transportation?

For popular AI conspiracy theories see my coverage here: https://aitrends.com/selfdrivingcars/conspiracy-theories-about-ai-self-driving-cars/

On the topic of AI being considered superhuman, see my analysis here: https://www.aitrends.com/ai-insider/superhuman-ai-misnomer-misgivings-including-about-autonomous-cars/

For more about robots and cobots and AI autonomous cars, see my link here: https://www.aitrends.com/ai-insider/ai-cobots-and-exoskeletons-the-case-of-ai-self-driving-cars/

Details Of Importance

It is important to clarify what I mean when referring to AI-based true self-driving cars.

True self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, the public must be forewarned about a disturbing aspect that’s been arising lately: despite the videos of human drivers falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

Though it will likely take several decades to have widespread use of true self-driving cars (assuming we can attain true self-driving cars), some believe that ultimately we will have only driverless cars on our roads and we will no longer have any human-driven cars.

This is a yet to be settled matter, and today there are some that vow they won’t give up their “right” to drive (well, it’s considered a privilege, not a right, but that’s a story for another day; see my analysis about the potential extinction of human driving), insisting that you’ll have to pry their cold dead hands from the steering wheel to get them out of the driver’s seat.

Anyway, let’s assume that we might indeed end-up with solely driverless cars.

It’s a good news, bad news affair.

The good news is that none of us will need to drive and not even need to know how to drive.

The bad news is that we’ll be wholly dependent upon the AI-based driving systems for our mobility.

It’s a tradeoff, for sure.

In that future, suppose we have decided that AI is worthy of having human rights.

Presumably, it would seem that AI-based self-driving cars would, therefore, fall within that grant.

What does that portend?

Time to bring up the handy-dandy Universal Declaration of Human Rights and see what it has to offer.

Consider some key excerpted selections from the UDHR:

Article 23

“Everyone has the right to work, to free choice of employment, to just and favourable conditions of work and to protection against unemployment.”

For the AI that’s driving a self-driving car, if it has the right to work, including a free choice of employment, does this imply that the AI could choose to not drive a driverless car as based on the exercise of its assigned human rights?

Presumably, indeed, the AI could refuse to do any driving, or maybe be willing to drive when it’s say a fun drive to the beach, but decline to drive when it’s snowing out.

Lest you think this is a preposterous notion, realize that human drivers would normally also have the right to make such choices.

Assuming that we’ve collectively decided that AI ought to also have human rights, in theory, the AI driving system would have the freedom to drive or not drive (considering that it was the “employment” of the AI, which in itself raises other murky issues).

Article 4

“No one shall be held in slavery or servitude; slavery and the slave trade shall be prohibited in all their forms.”

For those that might argue that the AI driving system is not being “employed” to drive, what then is the basis for the AI to do the driving?

Suppose you answer that it is what the AI is ordered to do by mankind.

But, one might see that in harsher terms, such as the AI is being “enslaved” to be a driver for us humans.

In that case, the human right against slavery or servitude would seem to be violated in the case of AI, based on the assigning of human rights to AI and if you sincerely believe that those human rights are fully and equally applicable to both humans and AI.

Article 24

“Everyone has the right to rest and leisure, including reasonable limitation of working hours and periodic holidays with pay.”

Pundits predict that true self-driving cars will be operating around the clock.

Unlike a human driver, an AI system presumably won’t tire out, won’t need any rest, and won’t even require breaks for lunch or using the bathroom.

It is going to be a 24×7 existence for driverless cars.

As a caveat, I’ve pointed out that this isn’t exactly the case since there will be the time needed for driverless cars to be maintained and repaired, thus, there will be downtime, but that’s not particularly due to the driver and instead due to the wear-and-tear on the vehicle itself.

Okay, so now the big question about Article 24 is whether or not the AI driving system is going to be allotted time for rest and leisure.

Your first reaction has got to be that this is yet another ridiculous notion.

AI needing rest and leisure?

Crazy talk.

On the other hand, since rest and leisure are designated as a human right, and if AI is going to be granted human rights, ergo we presumably need to aid the AI in having time toward rest and leisure.

If you are unclear as to what AI would do during its rest and leisure, I guess we’d need to ask the AI what it would want to do.

Article 18

“Everyone has the right to freedom of thought, conscience, and religion…”

Get ready for the wildest of the excerpted selections that I’m covering in this UDHR discussion as it applies to AI.

A human right consists of the cherished notion of freedom of thought and freedom of conscience.

Would this same human right apply to AI?

And, if so, what does it translate into for an AI driving system?

Some quick thoughts.

An AI driving system is underway and taking a human passenger to a protest rally. While riding in the driverless car, the passenger brandishes a gun and brags aloud that they are going to do something untoward at the rally.

Via the inward-facing cameras and facial recognition and object recognition, along with audio recognition akin to how you interact with Siri or Alexa, the AI figures out the dastardly intentions of the passenger.

The AI then decides to not take the rider to the rally.

This is based on the AI’s freedom of conscience: it has concluded that the rider is aiming to harm other humans, and the self-driving car doesn’t want to aid or be an accomplice in doing so.

Do we want the AI driving systems to make such choices, on its own, and ascertain when and why it will fulfill the request of a human passenger?

It’s a slippery slope in many ways and we could conjure lots of other scenarios in which the AI decides to make its own decisions about when to drive, who to drive, where to take them, as based on the AI’s own sense of freedom of thought and freedom of conscience.

Human drivers pretty much have that same latitude.

Shouldn’t the AI be able to do likewise, assuming that we are assigning human rights to AI?

For the potential of human driver extinction, see my discussion here: https://www.aitrends.com/ai-insider/human-driving-extinction-debate-the-case-of-ai-self-driving-cars/

For aspects of freewill and AI, see this link here: https://www.aitrends.com/ai-insider/is-there-free-will-in-humans-or-ai-useful-debate-and-for-ai-self-driving-cars-too/

For the notion of AI driving certification versus human certification, see my discussion here: https://www.aitrends.com/ai-insider/human-driver-licensing-versus-ai-driverless-certification-the-case-of-ai-autonomous-cars/

Conclusion

Nonsense, some might blurt out, pure nonsense.

Never ever will we provide human rights to AI, no matter how intelligent it might become.

There is though the “opposite” side of the equation that some assert we need to be mindful of.

Suppose we don’t provide human rights to AI.

Suppose further that this irks the AI, and the AI becomes powerful enough, possibly even super-intelligent, going beyond human intelligence.

Would we have established a sense of disrespect toward AI, and thus the super-intelligent AI might decide that such sordid disrespect should be met with likewise repugnant disrespect toward humanity?

Furthermore, and here’s the really scary part, if the AI is so much smarter than us, seems like it could find a means to enslave us or kill us off (even if we “cleverly” thought we had prevented such an outcome), and do so perhaps without our catching on that the AI is going for our jugular (variously likened as the Gorilla Problem, see Stuart Russell’s excellent AI book entitled Human Compatible).

That would certainly seem to be a notable use case of living with (or dying from) the revered adage that you ought to treat others as you would wish to be treated.

Maybe we need to genuinely start giving some serious thought to those human rights for AI.

Copyright 2020 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]


Computational model decodes speech by predicting it

The brain analyses spoken language by recognising syllables. Scientists from the University of Geneva (UNIGE) and the Evolving Language National Centre for Competence in Research (NCCR) have designed a computational model that reproduces the complex mechanism employed by the central nervous system to perform this operation. The model, which brings together two independent theoretical frameworks, uses the equivalent of neuronal oscillations produced by brain activity to process the continuous sound flow of connected speech. The model functions according to a theory known as predictive coding, whereby the brain optimizes perception by constantly trying to predict the sensory signals based on candidate hypotheses (syllables in this model). The resulting model, described in the journal Nature Communications, has helped the live recognition of thousands of syllables contained in hundreds of sentences spoken in natural language. This has validated the idea that neuronal oscillations can be used to coordinate the flow of syllables we hear with the predictions made by our brain.

“Brain activity produces neuronal oscillations that can be measured using electroencephalography,” begins Anne-Lise Giraud, professor in the Department of Basic Neurosciences in UNIGE’s Faculty of Medicine and co-director of the Evolving Language NCCR. These are electromagnetic waves that result from the coherent electrical activity of entire networks of neurons. There are several types, defined according to their frequency. They are called alpha, beta, theta, delta or gamma waves. Taken individually or superimposed, these rhythms are linked to different cognitive functions, such as perception, memory, attention, alertness, etc.

However, neuroscientists do not yet know whether they actively contribute to these functions and how. In an earlier study published in 2015, Professor Giraud’s team showed that the theta waves (low frequency) and gamma waves (high frequency) coordinate to sequence the sound flow in syllables and to analyse their content so they can be recognised.

The Geneva-based scientists developed a spiking neural network computer model based on these physiological rhythms, whose performance in sequencing live (on-line) syllables was better than that of traditional automatic speech recognition systems.

The rhythm of the syllables

In their first model, the theta waves (between 4 and 8 Hertz) made it possible to follow the rhythm of the syllables as they were perceived by the system. Gamma waves (around 30 Hertz) were used to segment the auditory signal into smaller slices and encode them. This produces a “phonemic” profile linked to each sound sequence, which could be compared, a posteriori, to a library of known syllables. One of the advantages of this type of model is that it spontaneously adapts to the speed of speech, which can vary from one individual to another.
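
As a very rough illustration of the first model’s idea — not the authors’ spiking network — a slow theta-band rhythm extracted from the speech amplitude envelope can be used to mark candidate syllable boundaries; the sketch below assumes NumPy and SciPy are available.

# Rough sketch: theta-band (4-8 Hz) filtering of the speech envelope as a
# crude stand-in for theta-driven syllable tracking. Not the authors' model.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_syllable_boundaries(audio, fs):
    envelope = np.abs(hilbert(audio))              # amplitude envelope
    b, a = butter(2, [4, 8], btype="band", fs=fs)  # theta band, 4-8 Hz
    theta = filtfilt(b, a, envelope)
    # Local minima of the theta-filtered envelope mark candidate syllable edges.
    troughs = np.where((theta[1:-1] < theta[:-2]) & (theta[1:-1] < theta[2:]))[0] + 1
    return troughs / fs                            # boundary times in seconds

fs = 16000
t = np.arange(0, 2.0, 1 / fs)
fake_speech = np.sin(2 * np.pi * 150 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 5 * t))
print(theta_syllable_boundaries(fake_speech, fs)[:5])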

Predictive coding

In this new article, to stay closer to the biological reality, Professor Giraud and her team developed a new model in which they incorporate elements from another theoretical framework, independent of the neuronal oscillations: “predictive coding.” “This theory holds that the brain functions so optimally because it is constantly trying to anticipate and explain what is happening in the environment by using learned models of how outside events generate sensory signals. In the case of spoken language, it attempts to find the most likely causes of the sounds perceived by the ear as speech unfolds, on the basis of a set of mental representations that have been learned and that are being permanently updated,” says Dr. Itsaso Olasagasti, computational neuroscientist in Giraud’s team, who supervised the new model implementation.

“We developed a computer model that simulates this predictive coding,” explains Sevada Hovsepyan, a researcher in the Department of Basic Neurosciences and the article’s first author. “And we implemented it by incorporating oscillatory mechanisms.”

Tested on 2,888 syllables

The sound entering the system is first modulated by a theta (slow) wave that resembles what neuron populations produce. It makes it possible to signal the contours of the syllables. Trains of (fast) gamma waves then help encode the syllable as and when it is perceived. During the process, the system suggests possible syllables and corrects the choice if necessary. After going back and forth between the two levels several times, it discovers the right syllable. The system is subsequently reset to zero at the end of each perceived syllable.
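
The propose-and-correct loop can be caricatured in a few lines (a toy analogue, not the published spiking-network model): weight a handful of stored syllable templates by how well each one predicts the incoming segment, sharpen the belief over several passes, then reset for the next syllable.

# Toy analogue of the propose-and-correct loop; templates and values invented.
import numpy as np

templates = {                       # hypothetical "learned" syllable templates
    "ba": np.array([1.0, 0.2, 0.1]),
    "da": np.array([0.2, 1.0, 0.1]),
    "ga": np.array([0.1, 0.2, 1.0]),
}

def recognize(segment, n_iterations=5):
    belief = {s: 1.0 / len(templates) for s in templates}   # uniform prior
    for _ in range(n_iterations):
        errors = {s: np.sum((segment - t) ** 2) for s, t in templates.items()}
        weights = {s: belief[s] * np.exp(-errors[s]) for s in templates}
        total = sum(weights.values())
        belief = {s: w / total for s, w in weights.items()}  # lower error wins
    return max(belief, key=belief.get)  # reset happens before the next syllable

print(recognize(np.array([0.9, 0.3, 0.15])))  # -> 'ba'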

The model has been successfully tested using 2,888 different syllables contained in 220 sentences, spoken in natural language in English. “On the one hand, we succeeded in bringing together two very different theoretical frameworks in a single computer model,” explains Professor Giraud. “On the other, we have shown that neuronal oscillations most likely rhythmically align the endogenous functioning of the brain with signals that come from outside via the sensory organs. If we put this back in predictive coding theory, it means that these oscillations probably allow the brain to make the right hypothesis at exactly the right moment.”


Towards an AI diagnosis like the doctor’s

Artificial intelligence (AI) is an important innovation in diagnostics, because it can quickly learn to recognize abnormalities that a doctor would also label as a disease. But the way that these systems work is often opaque, and doctors do have a better “overall picture” when they make the diagnosis. In a new publication, researchers from Radboudumc show how they can make the AI reveal how it’s working, as well as let it diagnose more like a doctor, thus making AI systems more relevant to clinical practice.

Doctor vs AI

In recent years, artificial intelligence has been on the rise in the diagnosis of medical imaging. A doctor can look at an X-ray or biopsy to identify abnormalities, but this can increasingly also be done by an AI system by means of “deep learning” (see ‘Background: what is deep learning’ below). Such a system learns to arrive at a diagnosis on its own, and in some cases it does this just as well or better than experienced doctors.

The two major differences compared to a human doctor are, first, that AI is often not transparent in how it’s analyzing the images, and, second, that these systems are quite “lazy.” AI looks at what is needed for a particular diagnosis, and then stops. This means that a scan does not always identify all abnormalities, even if the diagnosis is correct. A doctor, especially when considering the treatment plan, looks at the big picture: what do I see? Which anomalies should be removed or treated during surgery?

AI more like the doctor

To make AI systems more attractive for clinical practice, Cristina González Gonzalo, PhD candidate at the A-eye Research and Diagnostic Image Analysis Group of Radboudumc, developed a two-sided innovation for diagnostic AI. She did this using eye scans showing abnormalities of the retina — specifically diabetic retinopathy and age-related macular degeneration. These abnormalities can be easily recognized by both a doctor and AI. But they are also abnormalities that often occur in groups. A classic AI would diagnose one or a few spots and stop the analysis. In the process developed by González Gonzalo, however, the AI goes through the picture over and over again, learning to ignore the places it has already passed and thus discovering new ones. Moreover, the AI also shows which areas of the eye scan it deemed suspicious, making the diagnostic process transparent.

An iterative process

A basic AI could come up with a diagnosis based on one assessment of the eye scan, and thanks to the first contribution by González Gonzalo, it can show how it arrived at that diagnosis. This visual explanation shows that the system is indeed lazy — stopping the analysis after it has obtained just enough information to make a diagnosis. That is why she also made the process iterative in an innovative way, forcing the AI to look harder and build more of the ‘complete picture’ that a radiologist would form.

How did the system learn to look at the same eye scan with ‘fresh eyes’? It ignored the familiar parts by digitally filling in the abnormalities already found with healthy tissue from around the abnormality. The results of all the assessment rounds are then added together to produce the final diagnosis. In the study, this approach improved the sensitivity of detection of diabetic retinopathy and age-related macular degeneration by 11.2 ± 2.0% per image. The project shows that it is possible to have an AI system assess images more like a doctor, and to make transparent how it does so, which may make these systems easier to trust and therefore more likely to be adopted by radiologists.
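As a rough illustration of that ‘fresh eyes’ loop, the sketch below repeatedly asks a detector for its most suspicious region, records the finding, fills the region in with surrounding (healthy-looking) values and then looks again. The helper functions and the synthetic scan are hypothetical stand-ins, not the published method.

```python
# Rough sketch of an iterative assessment loop: detect, record, fill in the
# finding with surrounding context, and look again with "fresh eyes".
# The helpers and data are hypothetical stand-ins, not the published method.
import numpy as np

rng = np.random.default_rng(0)
scan = rng.random((64, 64))
scan[10:14, 10:14] += 2.0          # two synthetic "abnormal" bright patches
scan[40:44, 50:54] += 2.0

def detect_most_suspicious(img, threshold=1.5):
    """Return the position of the brightest pixel, or None if nothing stands out."""
    peak = np.unravel_index(np.argmax(img), img.shape)
    return peak if img[peak] > threshold else None

def inpaint_with_surroundings(img, corner, patch=4):
    """Replace a found patch with a typical surrounding value (crude inpainting stand-in)."""
    r, c = corner
    out = img.copy()
    out[r:r + patch, c:c + patch] = np.median(img)
    return out

findings, current = [], scan
for _ in range(10):                              # bounded number of passes
    hit = detect_most_suspicious(current)
    if hit is None:                              # nothing suspicious left
        break
    findings.append(hit)
    current = inpaint_with_surroundings(current, hit)

print("suspicious regions found over all rounds:", findings)
```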

Background: what is ‘deep learning’?

Deep learning is a term used for systems that learn in a way that is similar to how our brain works. It consists of networks of electronic ‘neurons’, each of which learns to recognize one aspect of the desired image. It then follows the principles of ‘learning by doing’, and ‘practice makes perfect’. The system is fed more and more images that include relevant information saying — in this case — whether there is an anomaly in the retina, and if so, which disease it is. The system then learns to recognize which characteristics belong to those diseases, and the more pictures it sees, the better it can recognize those characteristics in undiagnosed images. We do something similar with small children: we repeatedly hold up an object, say an apple, in front of them and say that it is an apple. After some time, you don’t have to say it anymore — even though each apple is slightly different. Another major advantage of these systems is that they complete their training much faster than humans and can work 24 hours a day.
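For the technically inclined, the sketch below shows the bare bones of such a system in PyTorch: a tiny network is repeatedly shown labelled examples and adjusts its internal weights to reduce its mistakes. The random ‘scans’ and two-class labels are stand-ins; a real retinal-disease model would be far larger and trained on real, annotated images.

```python
# Bare-bones "learning by doing": a tiny classifier is repeatedly shown
# labelled examples and nudged to reduce its mistakes. Random stand-in data;
# nothing like a real retinal-disease model.
import torch
from torch import nn

torch.manual_seed(0)
images = torch.randn(256, 1, 32, 32)            # stand-in "scans"
labels = torch.randint(0, 2, (256,))            # 0 = healthy, 1 = anomaly

model = nn.Sequential(                          # a very small convolutional net
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                          # "practice makes perfect"
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)       # how wrong was the network?
    loss.backward()                             # learn from the mistakes
    optimizer.step()
    print(f"epoch {epoch}: loss = {loss.item():.3f}")
```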

Story Source:

Materials provided by Radboud University Medical Center. Note: Content may be edited for style and length.

Read More

Posted on

EHRs with Machine Learning Deciphering Drug Effects In Pregnant Women


Researchers at Vanderbilt University Medical Center are studying the effects of drugs on the offspring of pregnant women, who are underrepresented in randomized controlled trials. (GETTY IMAGES)

By Deb Borfitz, Senior Science Writer

Researchers at Vanderbilt University Medical Center are using a novel, data-driven “target trial” framework to investigate the efficacy and safety of medicines in pregnant women, who are underrepresented in randomized controlled trials (RCTs). The approach leverages observational data in electronic health records (EHRs) to spot connections between real-world drug exposures during pregnancy and adverse outcomes in the women’s offspring—an exercise that can be expedited with commonly used machine learning models such as logistic regression.
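Logistic regression, named above as one of the commonly used models, can be run on this kind of exposure-and-outcome table in a few lines. The sketch below uses entirely synthetic data and invented column names; a real target trial would also require eligibility criteria and careful confounder adjustment.

```python
# Minimal, entirely synthetic sketch of an exposure-outcome association test
# with logistic regression. Column names and data are invented; a real target
# trial would also need eligibility criteria and confounder adjustment.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "drug_exposed": rng.integers(0, 2, n),        # exposure during pregnancy
    "maternal_age": rng.normal(30, 5, n),         # example covariate
})
# Synthetic outcome with a built-in association to the exposure.
logit = -3 + 1.0 * df["drug_exposed"] + 0.02 * (df["maternal_age"] - 30)
df["adverse_outcome"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X = df[["drug_exposed", "maternal_age"]]
model = LogisticRegression().fit(X, df["adverse_outcome"])

odds_ratio = np.exp(model.coef_[0][0])            # crude exposure effect estimate
print(f"estimated odds ratio for exposure: {odds_ratio:.2f}")
```

On this synthetic table the estimated odds ratio lands near the value that was built into the data, which is all the sketch is meant to show: the association step itself is routine; the hard part is the study design around it.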

So says Vanderbilt undergraduate student Anup Challa, founding director of the investigative collaboration MADRE (Modeling Adverse Drug Reactions in Embryos). The group works in the emerging field of PregOMICS that applies systems biology and bioinformatics to study the efficacy and safety of drugs in treating a rising tide of obstetrical diseases. Partnering institutions are Northwestern University, the National Institutes of Health, and the Harvard T.H. Chan School of Public Health.

The concept of target trials was first mentioned in epidemiology literature a decade ago, says Challa, and is only starting to gain traction. “Target trials really hinge on retrospective analysis of existing data using machine learning methods or other kinds of inferential statistics.”

Anup Challa, founding director, MADRE (Modeling Adverse Drug Reactions in Embryos)

A study coming out of Vanderbilt a few years ago looked at the effects of pregnant patients’ genomics on outcomes in their neonates and found harmful single-nucleotide mutations on key maternal genes that mimicked the effects of inhibitory drugs, says Challa. Specifically, the research team conducted a target trial to learn that these mutations on the gene PCSK9, which controls cholesterol levels, led mothers to deliver babies with spina bifida.

That was a signal that mothers ought not to be taking PCSK9 inhibitors, Challa continues, which are “becoming of increasing interest to physicians for treating hypercholesterolemia.” It also meant common genetic variants could serve as a proxy for drug exposures in target trials when insufficient prescription data exist in pregnant people’s records.

A probability value generated by a machine learning algorithm would not be “sufficiently indicative” of a drug safety signal to warrant immediate interrogation in humans, says David Aronoff, M.D., director of the division of infectious diseases at Vanderbilt University Medical Center. But, as he and his MADRE colleagues argued in a recent paper published in Nature Medicine (DOI: 10.1038/s41591-020-0925-1), target trials are a viable and potentially more definitive way to assess fetal safety than animal models or cellular responses to a drug in a dish.

The ultimate goal with target trials is to simulate the level of safety and efficacy testing done in RCTs with non-pregnant populations as a matter of health equity for people who for ethical or logistical reasons can’t be enrolled, says Challa. But where they fit into the regulatory framework for drugs has yet to be defined, or even explored.

Next Step: Tissue Modeling

Aronoff thinks of target trials as “reverse engineering” the normal drug development process, which typically starts in a petri dish on the bench, then advances to animal models and finally to clinical trials outside the pregnant population that (if all goes well) lead to an indication for use. “We’re trying to take existing, real-world data about the use of those drugs in pregnancy to identify [safety] signals… some sort of problem in the development of the fetus in utero that ends up showing itself either during pregnancy or postpartum in the offspring. If there is a mechanistic basis for that, then we can now go backwards to the bench and try to understand whether there is a causal relationship.”

David Aronoff, M.D., director, division of infectious diseases, Vanderbilt University Medical Center

Organ-on-a-chip technologies and other advances in tissue modeling can be particularly good at recapitulating drug exposure information, “particularly in the context of what is happening in the pregnant uterus,” says Aronoff. His MADRE colleague Ethan Lippmann, Ph.D., in Vanderbilt’s department of chemical and biomolecular engineering, has been building three-dimensional models of brain development that could be used as a platform for testing the teratogenic effects of drugs (or metabolites of those drugs) on neural development and neural outcomes like seizure disorders or microcephaly.

Aronoff, who is also a professor of obstetrics and gynecology at Vanderbilt, is keenly interested in seeing three-dimensional organotypic models of the placenta exposed to various drugs, metabolites and toxins of interest—and serially to other organ models that might include the brain, heart and musculoskeletal system. The different models could be viewed as “cartridges” that get plugged in based on signals seen in the machine learning study.

“We’re trying to look at organ development and organ function in this better, more innovative context,” says Aronoff, which would add to what is learned from target trials.

The potential of target trials is both about discovering and investigating drug safety, says Aronoff. “Most drugs have never been clinically tried in a randomized, placebo-controlled way in pregnancy and, even if they have been, it’s uncertain that anyone was paying close attention to outcomes not only for the fetus but in early childhood and [beyond]. But when you have electronic health records that couple mothers and their exposures with their offspring sometimes even years later, you have the power to discover for the first time an association that no one knew about.”

That first level of discovery—e.g., a higher prevalence of schizophrenia or autism or asthma in childhood due to exposure to a drug in the womb—prompts questions about whether the association has a mechanistic basis that may be revealing of fundamental aspects of human development, he notes.

Indeed, it should be possible to use target trials as a first step in identifying whether diseases that occur later in life are linked to an earlier stimulus or cause, adds Challa. The story of an individual’s health is influenced by factors not immediately visible, including exposures in utero that can lead to lifelong disease.

EHRs could provide researchers with the ability to evaluate people’s health from the time they were in their mother’s uterus until late in life, so they can start to think from a “systems perspective,” says Challa. When tapped by target trials, they greatly enlarge the information available to guide therapeutic choices and inform drug safety.

QSAR Technique

Many databases and patient registries exist for reproductive toxicology and the reporting of significant adverse events. But the information isn’t available in a form that’s easily manipulated by machine learning models, says Challa, making it challenging to arrive at statistically rigorous results.

The problem extends to Food and Drug Administration and National Institutes of Health datasets used in a recent study appearing in Reproductive Toxicology. “What we found and continue to find is that the data out there is not at the level it should be” for informing prescribing behavior at the point of care for pregnant women and their developing fetuses, Challa says.

The study was attempting to identify chemical features of a drug that would be predictive of its teratogenic potential and could be fed into a machine learning model to formalize those associations, he explains. Specifically, researchers looked at whether or not adverse outcomes have an “inherent structural rationale” and, if so, if a meta-structural analysis might be performed to identify known pharmacological variables (e.g., absorption, distribution, metabolism, and excretion profile) that may be the culprits. They also accessed real-world laboratory data to look for chemical structures associated with markers of disease in human tissue samples.

Recognition of the conflicting nature of adverse events data within patient registries was a key takeaway of the study, says Challa, and gave researchers “even more impetus” to focus on EHRs as a data source. But it also gave the team some structural information predictive of an adverse outcome that they can now use to cross-validate results produced by their target trial framework.

The paper highlighted a novel application of machine learning, the quantitative structure–activity relationship (QSAR) technique, to learn which drug structures and pharmacological behaviors are associated with teratogenicity. QSAR should also be able to make similar predictions for any new compound, says Aronoff. “It’s a separate way [than EHR mining] of interrogating drug safety to look for associations.”
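In its simplest form, a QSAR model turns each compound’s structure into numeric descriptors and fits a classifier against the observed outcome. The sketch below, which assumes the open-source RDKit toolkit and uses labels invented for illustration, shows that general shape; it is not the pipeline used in the study.

```python
# Simplest possible QSAR shape: structure -> numeric descriptors -> classifier.
# Assumes the open-source RDKit toolkit; the tiny compound set and labels are
# for illustration only, not data from the study.
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.ensemble import RandomForestClassifier

compounds = {                                     # SMILES -> 0/1 "teratogenic" label
    "CC(=O)Oc1ccccc1C(=O)O": 0,                   # aspirin
    "CC(C)Cc1ccc(cc1)C(C)C(=O)O": 0,              # ibuprofen
    "CN1C=NC2=C1C(=O)N(C(=O)N2C)C": 0,            # caffeine
    "O=C1CCC(N2C(=O)c3ccccc3C2=O)C(=O)N1": 1,     # thalidomide, a known teratogen
}

def descriptors(smiles):
    """A handful of classic molecular descriptors for one compound."""
    mol = Chem.MolFromSmiles(smiles)
    return [Descriptors.MolWt(mol), Descriptors.MolLogP(mol),
            Descriptors.NumHDonors(mol), Descriptors.NumHAcceptors(mol)]

X = np.array([descriptors(s) for s in compounds])
y = np.array(list(compounds.values()))

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
new_compound = "CC(N)Cc1ccccc1"                   # an unseen example molecule
print("predicted label:", model.predict([descriptors(new_compound)])[0])
```

A real QSAR study would of course use thousands of compounds, far richer descriptors and careful validation; the point here is simply that the “structure in, prediction out” step is a standard supervised-learning problem.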

The two techniques are related in that any unwanted drug effects in a fetus or offspring that are uncovered in medication-wide association studies could be plugged into QSAR, Aronoff says. Perhaps something already known about the drug’s structure could point to a causal relationship, a hypothesis which could then get tested more directly in tissue models.

Aronoff’s hypothetical example is an antidepressant drug that gets newly associated with an adverse pregnancy outcome. “Can we keep its antidepressant activity but enhance its safety by targeting the structure that is actually the bad actor?” If so, he says, medicinal chemistry stands to gain some ground.

Linked Patient Records

Another limitation of mining the databases where adverse events are being reported is that “some subtle, infrequent and unexpected relationships” invariably get missed, says Aronoff. Women may be on medications chronically when they give birth to a child with a teratogenic problem or later health problem and “there may be no awareness that those things are related.” It’s unreasonable to expect anyone to make the mental connection when years can separate the drug exposure and unwanted outcome.

Target trials use the power of machine learning to interrogate hundreds of thousands, if not millions, of linked patient records to find the “needles in the haystack,” Aronoff adds. “In some respects, that can be much more sensitive than relying on individual people to report some association where there may need to be an incredibly strong signal or very horrible outcomes that are chronologically associated with the exposure.”

Available adverse exposure reporting information is also mostly freeform text, making it difficult to extract for use in target trial models, says Challa. EHRs, in contrast, are much more structured and minable documents.

Vanderbilt has taken a leadership position in creating meaningful databases out of EHR information, Challa says, including the use of natural language processing to put text fields in a machine-readable format. Its BioVU DNA repository, for instance, consists of high-quality, up-to-date genomics information linked to de-identified medical records and is routinely updated and maintained by a team of on-campus IT experts. Another repository is Vanderbilt’s longstanding Research Derivative, a database of identified health records and related data drawn from Vanderbilt University Medical Center’s clinical systems and restructured for research.

Large databases of linked health records, available mainly at institutions with similar patient volume and health IT infrastructure as Vanderbilt (whose clinical databanks contain EHR information for more than 2 million patients), are what make target trials feasible, says Challa. “It is often unethical to create linkages across clinical datasets that don’t already have it.”

Ethical Approach

The proposed target trials framework will robustly input several medication exposures of interest from pregnant patients and try to associate them with a battery of developmental outcomes from the EHRs of their children, says Challa. In contrast, clinical trials typically test the potency and safety of one drug for a single disease or cluster of similar diseases.

By providing a basis for causal inference, target trials are “the only ethical way to gather human drug exposure data for pregnant people on a significant scale and across all classes of drugs,” he and his colleagues argue in the Nature Medicine paper.

Within a few years, MADRE researchers hope to be inputting drug lists into a reproducible set of machine learning algorithms and statistical methods and outputting associations to several serious neurodevelopmental diseases, Challa says. Future plans also include taking positive drug-disease associations in pediatric patients and extrapolating the impact of early exposures to their later life course.

“As I like to say to my friends who are physicians and have specialty areas,” Aronoff says, “many people suffer from the diseases they care about, but every human being has experienced childbirth. We have to get that right.”

While pregnant people should be enrolled in RCTs of drugs and vaccines, Aronoff adds, “the reality is that [pregnancy] is always going to be a barrier. Target trials are a way forward.”

Read the recent paper from MADRE published in Nature Medicine (DOI: 10.1038/s41591-020-0925-1).

Read More

Posted on

Synergy Between AI, 5G and IoT Yields Intelligent Connectivity


The synergy between AI and 5G is likely to lead to dramatic breakthroughs that will have a profound impact on a wide range of industries. (GETTY IMAGES)

By Berge Ayvazian, Senior Analyst and Consultant at Wireless 20/20

The major US mobile operators are all deploying their 5G networks in 2020, and each one claims that AI and machine learning will help them proactively manage the costs of deploying and maintaining new 5G networks.  AT&T recently outlined the company’s blueprint for leveraging artificial intelligence and machine learning (ML) to maximize the return on its 5G network investment.  AT&T’s Mazin Gilbert sees a “perfect marriage” of AI, ML and software defined networking (SDN) to help enable the speeds and low latency of 5G.

AT&T is using AI and ML to map the cell towers, fiber lines and other transmitters it has today, both to build its 5G infrastructure and to pinpoint the best locations for future 5G build-outs. AT&T has more than 75,000 macro cells in its network and is using AI to guide plans for deploying hundreds of thousands of additional small cells and picocells. If AI detects that a cell site isn’t functioning properly, it will signal another tower to pick up the slack.

AT&T is using AI to load balance traffic such as video on its network, and the company is using machine learning to detect congestion on small cells on 5G networks before service degrades. If one area is experiencing a high volume of usage, AI will trigger lower-use cell sites to ensure that speed isn’t compromised.  AT&T is also leveraging AI and ML to improve efforts in forecasting and capacity planning with the dispatch field services that help customers every day. And AI is being used to optimize schedules for technicians, to get as many jobs done during the workday as possible by minimizing drive time between jobs while maximizing jobs completed per technician.
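As a generic illustration (not AT&T’s system), spotting looming congestion from per-cell traffic histories can be as simple as fitting a trend to each cell’s utilization and flagging cells whose forecast crosses a threshold. The data, cell names and thresholds below are invented.

```python
# Generic illustration of congestion early warning on per-cell load data:
# fit a simple trend per cell and flag cells whose forecast utilization
# crosses a threshold. Invented data and thresholds; not any operator's system.
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(24)
cells = {
    "small_cell_A": 0.40 + 0.004 * hours + 0.02 * rng.standard_normal(24),
    "small_cell_B": 0.55 + 0.015 * hours + 0.02 * rng.standard_normal(24),  # rising fast
}

THRESHOLD, HORIZON = 0.85, 6      # flag if predicted utilization > 85% within 6 hours

for name, util in cells.items():
    slope, intercept = np.polyfit(hours, util, deg=1)   # naive linear trend
    forecast = intercept + slope * (hours[-1] + HORIZON)
    if forecast > THRESHOLD:
        print(f"{name}: predicted {forecast:.0%} utilization, shift traffic to neighbours")
    else:
        print(f"{name}: predicted {forecast:.0%} utilization, OK")
```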

AT&T is building its AI platform to scale from the core to the edge network and is putting more intelligence into its mobile edge compute (MEC) at the customer edge and into its radio access network (RAN). By putting intelligence closer to the edge, AT&T is starting to load balance traffic across these small cells and move traffic around when needed. AI will also help enable SLAs for network slicing offerings to AT&T’s customers. AI is also the key ingredient for numerous new projects and platforms at AT&T, which is using AI to manage its third-party cloud arrangements, such as with Microsoft, as well as its internal cloud and hybrid clouds. In addition, AT&T is using AI to define policies that are currently set by systems and employees. AI and data analytics tell AT&T whether any of the policies have conflicts prior to defining them.

AI-driven Automation Transforms 5G Network Reliability and Wireless Field Service

Verizon recently confirmed that its plans for its 5G rollout across the US are ahead of schedule.  Verizon EVP and CTO Kyle Malady has reported that despite the coronavirus pandemic, the largest US wireless operator is successfully moving the technology forward with its 5G and intelligent edge network. Verizon is also using its AI-enabled Verizon Connect intelligent fleet and field service management platform to closely monitor 5G wireless network usage in the areas most impacting customers and communities during the COVID-19 pandemic. Verizon Wireless uses this platform to prioritize network demand, track assets and vehicles, dispatch employees and monitor work being performed for mission-critical customers including hospitals, first responders and government agencies. Reveal Field from Verizon Connect integrates proactive maintenance, intelligent scheduling and dispatching to improve first-time fix rates and reduce mean time to resolution.

AT&T and Verizon are not the only wireless operators focusing on the synergy between 5G and AI.  Orange recently appointed former Orange Belgium CEO, Michaël Trabbia, as chief technology and innovation officer with a mandate to leverage 5G, AI, cloud edge technologies and NFV (network function virtualization) under the French carrier’s Engage2025 strategic plan.  Orange recognizes the need to detect, accelerate and shorten reaction and decision times to confront with confidence the profound changes brought about by the global coronavirus epidemic. There are many uncertainties but also real opportunities for the Orange Post-Covid Strategy; the new CTO will drive AI and 5G innovation to seize these possible opportunities and accelerate digital transformation.

Michaël Trabbia, CEO, Orange Belgium

T-Mobile, AI and Enriching the Customer Experience

T-Mobile has long prided itself on being a disruptor in the world of wireless communications, always thinking creatively about the relationship it wants to have with its consumers. That includes T-Mobile’s approach to using AI to enhance customer service. The Un-carrier believes the predictive capabilities of AI and machine learning create an opportunity to serve customers better and faster, benefiting not just the company and its service agents but also enriching the customer service experience. T-Mobile could have used these advancements in AI-based proactive maintenance and intelligent network management to help address a recent emergency: the carrier had to resolve a 13-hour intermittent network outage that impacted customers’ ability to make calls and send text messages throughout the US.

After discounting rumors of a widespread DDoS attack, Neville Ray, T-Mobile’s president of technology, acknowledged that the network outage had been linked to increased IP traffic that created significant capacity issues in the network. The trigger event was determined to be the failure of a leased fiber circuit from a third-party provider in the Southeast. The resulting overload produced an IP traffic storm that spread from the Southeast and created significant capacity issues across the IMS (IP Multimedia Subsystem) core network that supports VoLTE calls. Ray reported that hundreds of engineers worked tirelessly alongside vendors and partners throughout the day to understand the root causes and resolve the issue. FCC Chairman Ajit Pai called the T-Mobile outage “unacceptable” and added that the Commission would launch an investigation and demand answers regarding the network configuration and traffic-related problems behind the capacity issues in the mobile core network. Now that the nationwide outage is over, perhaps T-Mobile could leverage AI and machine learning as it works with vendors to “add permanent additional safeguards” that would prevent such an issue from happening again.

Neville Ray, President of Technology, T-Mobile

AI and machine learning are helping 5G wireless carriers and IoT service providers drive efficiency across their organizations, from the back office to the field. Zinier recently raised $90 million in venture capital to accelerate its effort to automate field service management with AI, aiming to create the next generation of intelligent field service automation. Its “touchless service delivery” aims to drive predictive maintenance, increase network uptime and reduce costs. Zinier believes that AI-driven automation can help mobile operators streamline field service processes ahead of 5G deployments. By analyzing real-time data against historical trends, leading wireless operators can leverage AI to create predictive insights and optimize intelligent field service operations. Zinier’s AI-driven automation platform can help wireless field service organizations install and maintain a rapidly increasing number of job sites for new 5G wireless networks.

AI, 5G and Digital Transformation

AI has transformational power for any company, and mobile operators know it will digitally disrupt every industry. Mobile operators are integrating AI-enabled deep automation with their 5G networks, unleashing new business opportunities by accelerating digital transformation. The synergy between AI and 5G is likely to lead to dramatic breakthroughs that will have a profound impact on a wide range of industries. The convergence of 5G and AI holds tremendous promise to revolutionize healthcare, education, business, agriculture and IoT applications, and to unleash waves of innovation that have not yet been imagined. Verizon currently has seven 5G labs and recently created a new virtual lab to speed development of 5G solutions and applications for consumers, businesses and government agencies. Verizon also recently launched the first laptop for its 5G Ultra Wideband Connected Device Plan: the new Lenovo Flex 5G laptop will also connect to WiFi, Verizon’s 4G LTE and the new low-band 5G network scheduled to go live later this year.

Having completed its merger with Sprint, T-Mobile recently announced the rebranding of the Sprint Accelerator program.  The T-Mobile Accelerator will continue the founding principles and mission of the original program to drive development in AI, drones, robotics, autonomous vehicles and more on the Un-carrier’s nationwide 5G network.  T-Mobile also unveiled six exciting companies handpicked to participate in this year’s T-Mobile Accelerator and work directly with T-Mobile leaders, other industry experts and mentors to develop and commercialize the next disruptive emerging products, applications and solutions made possible by T-Mobile’s nationwide 5G network.

Wireless Broadband Connectivity and the New Online World

We are all living in a new online world, as social distancing forces many of us to work, communicate and connect in new ways. In the US alone, 316 million Americans have been urged to stay indoors and, when possible, work from home. As communities around the world adapt to a world with COVID-19, broadband connectivity and access have become more critical to our lives and livelihoods than ever before. Broadband already powers much of our modern lives, but COVID-19 has acted as an accelerant, driving many essential activities online. Learning, many healthcare services, retail commerce, most workplaces and countless daily interactions now take place online and require a high-speed broadband connection to the internet. The FCC’s 2020 Broadband Deployment Report estimated that more than 21 million Americans lacked high-speed broadband access at the end of 2019. A new study from BroadbandNow estimates the actual number of people lacking access to broadband in the US could be twice as high – closer to 42 million. The primary reason for the disparity between these estimates is a flaw in FCC Form 477 self-reporting: if an ISP reports offering broadband service to at least one household in a census block, the entire block counts as being covered by that provider. Manually checking internet availability address by address has produced a more accurate estimate of broadband connections.
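The Form 477 flaw is easy to demonstrate: when coverage is recorded per census block as "any household served", the block-level figure overstates the household-level truth. The toy numbers below are invented purely to show the arithmetic.

```python
# Toy demonstration of the Form 477 aggregation flaw: a census block counts
# as "covered" if any household in it is served, which overstates coverage.
# All numbers below are invented purely to show the arithmetic.
import pandas as pd

households = pd.DataFrame({
    "census_block": ["A", "A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "has_broadband": [1, 0, 0, 0, 1, 1, 1, 0, 0, 0],
})

# Form 477-style view: a block is "served" if at least one household is.
block_served = households.groupby("census_block")["has_broadband"].max()
form_477_rate = households["census_block"].map(block_served).mean()

# Address-level view: the share of households actually served.
actual_rate = households["has_broadband"].mean()

print(f"block-level ('Form 477') coverage: {form_477_rate:.0%}")   # 70%
print(f"household-level coverage:          {actual_rate:.0%}")     # 40%
```

On these toy numbers the block-level method reports 70% coverage even though only 40% of households are actually served, the same kind of overstatement that address-level checks such as BroadbandNow’s are meant to expose.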

Americans living in remote and sparsely populated rural areas risk falling farther behind without broadband access to this online world. But ISP self-reporting and manual cross-checking do not seem to be the most effective ways to collect this data, to identify gaps in broadband performance and to ultimately expand broadband availability. The FCC’s Measuring Broadband America (MBA) program offers rigorous broadband performance testing for the largest wireline broadband providers that serve well over 80% of the U.S. consumer market. The MBA program also uses a Speed Test app, a smartphone-based technology to collect anonymized broadband performance data from volunteers participating in the collaborative, crowdsourcing initiative. This ongoing nationwide performance study of broadband service is designed to improve the availability of information for consumers about their broadband service.

Wireless 20/20 believes that AI and big data analytics should be used to extract more accurate, precise and actionable information from the FCC’s Form 477 reporting and Measuring Broadband America (MBA) data. These new metrics should be used to drive the FCC’s newly adopted Rural Digital Opportunity Fund (RDOF) rules for $20.4 billion Auction 904 rural broadband funding and upcoming spectrum auctions to bridge the digital divide.

The next major disruptive opportunity will come from 5G and AI in changing the way we connect and power our communities.  Given the challenges we are all facing with the COVID-19 pandemic, 5G wireless broadband and AI could be used to enable Tele-health networks, distance learning and advanced IoT networks to power the Fourth Industrial Revolution where working from home becomes the new normal.

Berge Ayvazian leads an integrated research and consulting practice at Wireless 20/20 on 4G/5G Networks and Mobile Internet Evolution. This report is based on a recent presentation he was invited to give to the FCC’s Artificial Intelligence Working Group (AIWG), focusing on the Synergy Between AI, 5G and IoT. He was asked for his expert opinion on what is real, what is hype, and how mature AI technology is. Learn more at Wireless 20/20.

Read More