
Car Parts Thefts and AI Autonomous Cars


An AI self-driving car with its superb cameras, numerous radar and ultrasonic devices is a goldmine for car parts thieves. Credit: Getty Images

By Lance Eliot, the AI Trends Insider 

What do manhole covers, beer kegs, and catalytic converters have in common?

 This seems like one of those crafty questions asked when interviewing for a job at a high-tech firm. 

It is admittedly a tricky question. 

The answer is that they are items that at one point or another were being stolen in large numbers. 

For manhole covers, there was a case last year in Massachusetts of a man who stole seven of them and tried to sell them for scrap metal at a salvage yard, and some countries face a raft of such thefts. 

A few years ago, beer kegs were being stolen to the tune of 300,000 kegs a year, typically to be sold for about $50 each as steel scrap.

Currently, the United States is seeing a surge in thefts of catalytic converters. 

You might assume that the thieves are trying to sell the catalytic converter as a used part, hoping to get unsuspecting car repair shops and car owners to buy these “hot” devices as replacements when an older one stops working or gets damaged in an accident. 

Nope. 

The thieves are seeking to grab the palladium that’s in the catalytic converters.

Palladium?

Palladium is a rare metal that is silvery and white in overall color. 

It is part of the Platinum Group Metals (PGM) and has the handy properties of a low melting point and being the least dense of the PGMs.

From a practical standpoint, the palladium in a catalytic converter is essential, since it helps convert around 90% of the noxious gases coming out of your car’s exhaust into much less dangerous chemicals (generally carbon dioxide, nitrogen, and simple water vapor). Palladium is also used in fuel cells, where, combined with oxygen and hydrogen, it is a nifty producer of electrical power, heat, and water.

As they say, palladium doesn’t grow on trees. 

You need to find it and mine it. 

Or, as the thieves would say, you need to remove it and steal it. 

Your catalytic converter usually sits in the under-body of your car, between the exhaust pipe and the engine of your car. In Chicago, thieves have recently gone along a block in the wee hours of the morning and, one-by-one, crawled underneath cars to remove and steal the catalytic converters. Another popular approach involves going into a parking lot, such as at an airport, and stealing the catalytic converters from the long-term parked cars.

Here’s a bit of a shocker: palladium is now worth more than gold. No need to give someone a gold watch or a gold necklace; instead they should be happier to get a palladium-coated gift. It is estimated that thieves typically get around $200 to $400 when selling a catalytic converter for its palladium, depending upon the condition and how many grams are contained in the device.

 Is it difficult to steal a catalytic converter?

Sadly, no. 

With just a few tools and a few minutes, you can seemingly readily take one.

 You crawl underneath the target car and do your handiwork. There are even online videos on YouTube that show how to do this! Of course, you would not target just any car and instead would want to go after certain kinds of cars, such as ones that are easier to steal from, and that have larger amounts of palladium in the catalytic converter, and are located in an area where you would guess that you won’t get caught during the theft effort.

Car Parts Theft Is Big Business

There are tons of car parts thefts annually in the United States alone (literally, tons worth!). 

Some car parts are stolen to glean the elements within them. 

Some car parts are taken to supply the huge market for used car parts. Perhaps nearly 1 million car part and car thefts occur each year, though it is hard to know how many happen since the crime is not consistently reported. If you are interested in some fascinating statistics about stolen car parts and stolen cars, the National Insurance Crime Bureau (NICB) provides insightful online stats on the matter.

In the case of stealing a car part and attempting to resell it, an entire black market supports such efforts. When you go into a car repair shop to get your car fixed and they offer to install a used part, it could very well be a stolen part. The car repair shop is not necessarily in on the thievery; they might be buying the used parts from seemingly reputable sources. It turns out that the supply chain of car parts is quite muddled, and there is a kind of ease with which stolen car parts can appear to be legitimate used parts.

 Some believe that the emergence of blockchain might be a means to curtail such sneaky acts. 

The notion is that all car parts would be notated in a publicly available online log, accessed via the Internet, and by doing so it would make the history and lineage of the car part readily available. Anyone could figure out the status of a car part by merely consulting the blockchain.
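To make the notion a bit more tangible, here is a minimal Python sketch of what an append-only, blockchain-style ledger of car part events might look like. The class, the field names, and the events are my own illustrative assumptions offered purely for explanation, not an actual industry standard or any automaker’s system.

```python
import hashlib
import json
import time

class SimplePartsLedger:
    """Toy, append-only ledger showing how a car part's history might be
    chained together so anyone could audit its lineage. An illustrative
    sketch only, not a real distributed blockchain implementation."""

    def __init__(self):
        self.blocks = []

    def record_event(self, part_serial, event, details):
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        payload = {
            "part_serial": part_serial,   # e.g., a catalytic converter serial number
            "event": event,               # "manufactured", "installed", "removed", "resold"
            "details": details,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        # Hashing the payload together with the previous hash links the entries,
        # so tampering with an earlier entry invalidates everything after it.
        payload["hash"] = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        self.blocks.append(payload)
        return payload["hash"]

    def history(self, part_serial):
        return [b for b in self.blocks if b["part_serial"] == part_serial]

# Usage: a repair shop could check a used part's lineage before installing it.
ledger = SimplePartsLedger()
ledger.record_event("CAT-12345", "manufactured", {"maker": "ExamplePartsCo"})
ledger.record_event("CAT-12345", "installed", {"vehicle_vin": "1HGCM82633A004352"})
print(ledger.history("CAT-12345"))
```

In a scheme along these lines, a gap or mismatch in a part’s recorded history would be a red flag that the part might be stolen.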

For my article about blockchain, see: https://www.aitrends.com/selfdrivingcars/blockchain-self-driving-cars-using-p2p-distributed-ledgers-bitcoin/

 The motivation for stealing car parts is rather apparent when you think about it. 

Car parts are becoming increasingly expensive and therefore the profit to be made by selling a stolen car part is quite handsome. 

If car parts were dirt cheap, it would be difficult to make much money off stealing them. People who own cars continually need replacement car parts, due to the wear-and-tear of an existing part or sometimes due to the part getting wrecked in a car accident.

I remember one car that I used to own that seemed to fall apart one-car-part at a time. 

I took my car to a small car repair shop that did quick work and was never crowded. They replaced one part that had failed due to wear-and-tear. About a month later, another car part failed. Once again, I took the car to have the part replaced. This kept happening, nearly each month, for a period of about 6 months. The car repair shop probably posted my name on their calendar in anticipation that I’d be there next month and so on.

Admittedly, it was an older car that had not been given much attention to keeping it in shape. I bought it used and knew that it was likely to someday start to come apart at the seams. Once the seams started coming undone, it was a tsunami of car parts failures and replacements. 

What a pain in the neck!

In any case, the car parts thieves are choosy about which cars they steal from and which car parts they try to steal. 

Best-selling cars and the most popular brands tend to be the right choice for stealing car parts from, since there is a much larger potential market for the used parts. Another angle is cars that tend to be involved in car accidents, which then need replacement car parts.

It’s a demand and supply phenomenon. 

Which parts get stolen is dictated generally by the demand exhibited via car repair shops, in combination with the used parts marketplace and the underground marketplace.

The car part needs to have sufficient profit potential, thus the more expensive it is, the more attractive as a target to steal.

The car part needs to be relatively easy to steal. 

If it takes too long to steal it, or if the chances of getting caught are high, thieves are not going to take the risk as readily.

The car part needs to be small enough that a thief can readily cart it away and transport it to wherever it will be sold or disposed of. If the car part is extremely heavy or bulky, stealing it becomes much harder, and a fast getaway with the part is complicated as well.

The car part needs to be removable without excessive effort. 

If the thief needs a multitude of tools to try to extract the part, or if the part is welded into place, these are barriers to stealing it. Likewise, if there is a car alarm system that the car part will potentially set off when stealing the car part, that’s a no-no kind of car part to try to take.

I realize you might be thinking that it would be “better” for the car parts thieves to consider taking the entire car, rather than futzing around trying to take a particular car part itself. 

Certainly, stealing the entire car would be more efficient, since you would then have all the car parts available. You could either try to sell the entire car intact, or instead strip the parts and sell those individually.

My Car Was Stolen

This reminds me of the occasion when my sports car was stolen here in Los Angeles.

I was a university professor at the time. 

I had parked my car in one of the on-campus parking lots. There wasn’t any faculty-designated parking per se, so I parked in the same parking structures as the students, administrators, and the other faculty. Each time I parked on campus, I would drive through the parking structures, trying to find an available spot. It was one of those hunt-for-a-parking-spot kinds of games, though I sometimes cut it close to class starting time and got into a bit of a sweat about finding an available stall.

One day, I parked my car in the mid-morning, getting parked in plenty of time to teach my lunchtime class. I was teaching classes throughout the day. I also was teaching an evening class. At about 9:30 p.m., I walked out to the parking structure to drive home. When I got to where my car was supposed to be parked, it wasn’t there. What? My mind raced as I thought that perhaps I had parked on a different floor and was just confused about where my car was.

I went to each floor of the parking structure and searched diligently for my car. After covering the entire parking structure, I concluded that my car was not there. Had I perhaps parked in a different on-campus parking structure that day? No, I knew I had not. Maybe my car had been towed for some reason; perhaps I had not displayed my faculty parking tag, or my car was sloppily parked and took up more than one parking space.

I dutifully went to the campus security office. 

Had they perchance towed my car? 

No, they said they did not do so. 

They asked me what I did at the campus. When I explained that I was a professor, the office clerk made a small smirk and called over one of the campus security officers. They said they would take me over to the parking structure in one of the campus security golf-like carts and the officer would help me find my car.

Sure enough, the campus security officer drove me to the parking structure and slowly drove along each floor and past each car. Is that your car? the security officer would ask. No, I said. In fact, I was getting quite upset that this slow drive through the parking structure was taking place at all. I had already looked and was unable to find my car. The campus security officer insisted we would need to continue the slowpoke search.

After exhausting the parking structure’s set of cars and not having seen mine, I thought that we could now start to discuss what to do about my apparently stolen car. I assumed that an all-out broadcast would take place to alert numerous police and highway patrol to be on the lookout for my car. Police helicopters would take to the skies to find my car. Squad cars would be peeling out of police stations, seeking to find my car. Okay, maybe I watch too many movies and TV shows, but I had in mind that my car was important and by-gosh somehow someone should be looking for it.

The campus security officer proceeded to drive us to another one of the parking structures. Yikes! I bitterly complained and said that this was an utter waste of time. He explained that they do this because it was possible I had merely gotten mixed-up about where I had parked my car. He also explained why the clerk had smirked when I said I was a faculty member: they routinely had faculty claim their car was gone, only for it to be sitting safely in a parking structure where they had parked it earlier in the day.

They had over time gotten used to “genius” level professors who might be incredible researchers but were not able to do everyday tasks very well. These faculty were often forgetful of mundane matters. Sure, they might be able to explain the intricacies of some arcane area of science or technology and be steeped in grand theories, yet at times they were unable to tie their own shoes or keep track of where they parked their car.

I endured an eternity as we drove to each of the numerous parking structures and slowly drove throughout each one. It was painful and I kept thinking that the thieves of my car were merely being handed more time to drive it further and further away. Did they target a faculty member’s car, doing so because they knew that the campus security would waste time trying to find it while letting the clock tick for the escape plan of the thieves?

Anyway, it was not in any of the parking structures. The campus security officer took me back to the main security office and I filled in paperwork. Once the paperwork was done, they said nothing else needed to be done. I was puzzled. 

Were the helicopters getting underway and the police squads rallying to find my car?

Turns out that there are many cars stolen each day in Los Angeles. 

I might have thought my car was the only one being stolen, plus it was my treasured sports car, carrying a great deal of personal sentiment, yet the reality was that lots of cars were being stolen daily. Sigh.

Furthermore, the campus security officer explained that the local gangs had an initiation challenge for joining the gang, consisting of stealing a car at the university. The best guess was that my car had been stolen by a gang, who would joyride in it and then likely take it across the border to a chop shop. At the chop shop, my car would be gutted, and the car parts put out for resale.

This news was disheartening.

One aspect too that they had asked me, adding to my frustration at the time, was whether I had left the keys in my car. 

Say what? 

No, I said, of course I didn’t do so.

I found out then that, incredibly, a preponderance of stolen cars are ones in which the owner left the keys in the car. 

People Dumbly Aid Thieves

The key is often one of those fobs with super-duper security capabilities, which automakers spent many millions of dollars perfecting. It turns out that last year an estimated 60% of all stolen cars in the United States were taken by the thief simply using the keys left inside the car.

For those of us who are technologists, it highlights how humans can undermine the best of technology by how they behave. 

In spite of the handshake security capabilities that took years to perfect and embody in a key fob, it turns out that a lot of people merely leave the fob sitting inside the car. A car thief does not need any special skills to steal such a car.

This is reminiscent of the “hacking” of people’s online accounts or their PC’s or their IoT devices. 

Many people use a password that is easily guessed. I’m sure you are as frustrated as I am that many of the so-called hackings of people’s accounts are not due to any shrewd computer hacker, but instead to the simplistic and mindless act of trying obvious passwords. 

I say this is frustrating because the news often portrays these thieves as some kind of computer geniuses, when in actuality the humans owning the accounts were lazy or ill-informed about setting better passwords.

For my article about back-door security issues, see: https://www.aitrends.com/selfdrivingcars/ai-deep-learning-backdoor-security-holes-self-driving-cars-detection-prevention/

For my article about issues with IoT devices, see: https://www.aitrends.com/selfdrivingcars/internet-of-things-iot-and-ai-self-driving-cars/

For more about stealing of cars, see my article: https://www.aitrends.com/selfdrivingcars/stealing-secrets-about-ai-self-driving-cars/ 

I tell the story of my car being stolen to point out that it was apparently more likely to be turned into a treasure trove of used car parts than to be sold as an intact stolen car. 

This makes sense. Selling a stolen car is likely to be riskier and requires that someone come up with a larger bag of cash to acquire it. Yanking off the parts and selling those is less chancy.

Unbelievably, at times the value of the parts exceeds the value of the car itself. In other words, the amount of money you can make by selling the stolen parts is going to be more than if you tried to sell the entire car. In that case, when you toss into the equation the troubles and risks involved in selling an intact stolen car, the notion of taking the car to a chop shop makes a lot of sense. Divide up the car and sell the parts, then find a place to discard or bury or destroy whatever might be leftover.

AI Autonomous Cars And Parts Theft

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. 

One question that I often get asked at industry conferences involves whether AI self-driving cars will be subject to the stolen car parts marketplace. 

I believe so.

Allow me to elaborate.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the automakers are even removing the gas pedal, the brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/ 

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/ 

For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/ 

Let’s focus herein on the true Level 5 self-driving car. Much of the comments apply to the less than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here are the usual steps involved in the AI driving task (a simplified sketch of how they chain together follows the list):

  •         Sensor data collection and interpretation
  •         Sensor fusion
  •         Virtual world model updating
  •         AI action planning
  •         Car controls command issuance
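As a rough illustration of how those steps might chain together in software, here is a minimal Python sketch. The class and method names are hypothetical placeholders used purely for explanation; a real autonomous driving stack is vastly more elaborate.

```python
class DrivingCycleSketch:
    """Illustrative skeleton of one pass through the usual AI driving task steps.
    Every method here is a stand-in placeholder, not production logic."""

    def run_once(self, raw_sensor_feeds):
        # 1. Sensor data collection and interpretation
        detections = self.interpret_sensors(raw_sensor_feeds)
        # 2. Sensor fusion: reconcile the camera, radar, ultrasonic, and LIDAR views
        fused = self.fuse(detections)
        # 3. Virtual world model updating
        world_model = self.update_world_model(fused)
        # 4. AI action planning
        plan = self.plan_actions(world_model)
        # 5. Car controls command issuance
        return self.issue_controls(plan)

    def interpret_sensors(self, feeds):
        return [{"sensor": name, "objects": feed} for name, feed in feeds.items()]

    def fuse(self, detections):
        return {"tracked_objects": detections}

    def update_world_model(self, fused):
        return {"model": fused, "ego_speed_mph": 25}

    def plan_actions(self, world_model):
        return {"steer_deg": 0.0, "throttle": 0.1, "brake": 0.0}

    def issue_controls(self, plan):
        return plan  # in a real car this would go to the drive-by-wire actuators

# One simplified pass of the cycle with made-up sensor readings.
cycle = DrivingCycleSketch()
print(cycle.run_once({"camera_front": ["pedestrian"], "radar": ["vehicle_ahead"]}))
```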

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human-driven cars too. Some pundits of AI self-driving cars continually refer to a Utopian world in which there are only AI self-driving cars on public roads. Currently, there are 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human-driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human-driven cars on the roads. This is a crucial point, since it means that the AI of self-driving cars needs to be able to contend not just with other AI self-driving cars, but also with human-driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars politely interact with each other and are civil about roadway interactions. That’s not what is going to happen for the foreseeable future. AI self-driving cars and human-driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/ 

For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/ 

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

Returning to the topic of stolen car parts, let’s consider how this might apply to the advent of AI self-driving cars.

First, let’s all agree that an AI self-driving car is still a car. 

This might seem obvious, but I assure you that a lot of people seem to think that when you add the AI aspects to a car that the car somehow transforms into a magical vehicle. It’s a car.

I mention this so that it is perhaps apparent that the same aspects of car parts being stolen are going to apply to the conventional parts of a self-driving car. If there are AI self-driving cars that have catalytic converters, and if the price of palladium remains high, you can reasonably assume that car thieves might want to steal the catalytic converter from your AI self-driving car.

So, overall, yes, AI self-driving cars face the same dangers of car parts thievery as might be the case for conventional cars. 

There are some important caveats to consider.

Why Autonomous Car Parts Theft Is Special

I mentioned earlier that I anticipate the adoption of AI self-driving cars will be a gradual build-up, and we’ll continue to have conventional cars during that same time period. 

This means that there might not be many AI self-driving cars on the roadway. 

Recall that an important aspect of being a targeted car for stolen parts is that the car itself is popular. 

The more cars of a particular brand in the marketplace, the greater the number of used car parts that are needed.

In the case of AI self-driving cars, the number of AI self-driving cars might be so low for the initial adoption period that it is not as worthwhile to steal a car part from an AI self-driving car as it is to steal from a conventional car that has greater popularity. The thieves tend to react based on marketplace demand. If there aren’t many AI self-driving cars out and about, there’s little incentive to steal parts from those cars.

By a similar logic, if AI self-driving cars are not as readily around, they are a harder target to find and steal from. 

The nice thing about popular cars, from a car parts thievery perspective, is that you can find them just about anywhere: parked along the street, parked in mall parking lots, and so on.

The relative rarity of AI self-driving cars at the start, before the gradual shift toward AI self-driving cars and away from conventional cars, means that finding an AI self-driving car to steal parts from will be an arduous task. I’m not saying it is impossible, and I am sure that if the thieves think it worthwhile, they could hunt down the locations of AI self-driving cars.

Another factor to consider about AI self-driving cars will be their likely use on a somewhat non-stop basis. 

The notion is that an AI self-driving car is more likely to be used in a ride sharing service and therefore will be used potentially around-the-clock. If you don’t need to hire a human driver, you can keep the AI driving the car all the time. The more time the self-driving car is underway, and assuming you can fill it with paying passengers, the more money you make from owning an AI self-driving car.
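To see why the around-the-clock usage matters economically, here is a back-of-the-envelope Python sketch. All of the dollar figures and utilization numbers are made-up assumptions for illustration only, not industry data.

```python
# Toy comparison of daily profit for a ride-sharing self-driving car.
fare_per_hour = 30.0            # revenue while carrying paying passengers ($/hour), assumed
operating_cost_per_hour = 8.0   # energy, wear-and-tear, cleaning, etc. ($/hour), assumed

def daily_profit(hours_in_service, occupancy_rate):
    """Profit for one day given service hours and the fraction of time with paying riders."""
    revenue = hours_in_service * occupancy_rate * fare_per_hour
    cost = hours_in_service * operating_cost_per_hour
    return revenue - cost

# The same car used 8 hours per day versus kept underway 20 hours per day.
print(daily_profit(hours_in_service=8, occupancy_rate=0.5))    # 56.0
print(daily_profit(hours_in_service=20, occupancy_rate=0.5))   # 140.0
```

Even with identical per-hour economics, the car that stays in service longer simply earns more, which is why owners will be reluctant to leave these vehicles parked.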

This brings up the point that trying to get to a parked and motionless AI self-driving car might be harder than it seems at first glance. A conventional car is parked and motionless for most of its time on this earth. You park your conventional car at the mall, and it waits until you come back to use it. That’s presumably not going to be the case for AI self-driving cars. A self-driving car is more likely to be roaming to find paying passengers, rather than being parked for hours at a time.

Thus, beyond there being fewer AI self-driving cars on the roadway than conventional cars during the early stages of adoption, even once AI self-driving cars become prevalent they are bound to be underway most of the time. This makes stealing car parts problematic in that the car itself is a kind of moving target. Imagine a thief trying to slide underneath an AI self-driving car that is slowly cruising the street waiting to be hailed by a passenger and somehow extracting the catalytic converter – that’s a real magic act.

For ridesharing and AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/ridesharing-services-and-ai-self-driving-cars-notably-uber-in-or-uber-out/

For my article about the invasive curve and AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/invasive-curve-and-ai-self-driving-cars/

For the carjacking or robojacking of AI self-driving cars, see my article: https://www.aitrends.com/features/robojacking-self-driving-cars-prevention-better-ai/ 

For the non-stop use of AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/non-stop-ai-self-driving-cars-truths-and-consequences/ 

For my article about the affordability aspects of AI self-driving cars, see:  https://www.aitrends.com/selfdrivingcars/affordability-of-ai-self-driving-cars/

You might be wondering how many of the varying-level AI self-driving cars currently being tested are having their car parts stolen. 

None that I know of.

Does that undermine the notion that there will be car parts stolen from AI self-driving cars? 

No, it does not undermine the argument.

Autonomous Cars Being Tested Are Well-Protected

As already stated, when the number of cars of a particular brand or model is low, there is little interest in trying to steal their car parts. That’s certainly the case right now with the low number of AI self-driving cars of one kind or another being trialed on roadways.

An even greater factor right now is that the AI self-driving cars being trialed have quite devoted crews that keep them in tiptop working order. Unlike a conventional car, these special test AI self-driving cars are handled delicately and devotedly by assigned car mechanics and engineers.

These AI self-driving cars are kept parked in secure locations, and they are maintained meticulously. The automakers and tech firms do not want their tryouts of AI self-driving cars to be undermined by parts that fail or wear out. These are pampered cars, given hands-on daily care.

A car parts thief is unlikely to find any of these AI self-driving cars, and if they did, the car would be in a secure location with a crew doting on it while it is parked.

When an AI self-driving car is in motion, the cameras are capturing images and video streams and analyzing the roadway around the AI self-driving car, along with the other sensors doing the same, such as the radar, the ultrasonic units, the LIDAR, and so on. Any car parts thief attempting to steal from an underway AI self-driving car would likely get caught on camera, making the theft somewhat foolish, though I suppose they could try to wear a mask and disguise themselves.

I’ve already pointed out that since the AI self-driving car will be in motion most of the time, a masked thief still has little chance of stealing any of the car parts. Admittedly, once the AI self-driving car is parked and motionless, the sensors are often no longer powered and in use, which means a thief could possibly go undetected when stealing parts from the car. This will be something important to consider once AI self-driving cars become prevalent.

It is anticipated that once there are a lot of AI self-driving cars in the marketplace, it might not be the case that they will all be cruising around all the time. It is suggested that there might be special areas that the AI self-driving cars go to wait for being summoned. These staging areas would resemble parking lots. The idea is that rather than an AI self-driving car driving around endlessly, which might not be very efficient, they would sit in temporary staging areas and await a request to be used.

I mention this because few of the AI self-driving car pundits have realized that we might reach a kind of saturation point in the prevalence of AI self-driving cars. It is admittedly a long way off in the future.

Overall, the notion is that if there is a saturation level in a given locale, it might not be prudent to have an AI self-driving car driving around and around, waiting to be put to use. The endless driving is going to cause wear-and-tear on the self-driving car, plus it will be using up whatever fuel is used by the AI self-driving car. 

As such, we might reach a point where it makes sense to have an oversupply of AI self-driving cars be staged in a temporary area until summoned.

Where Thieves Will Go

I suppose the staging area could be a place to try to steal car parts from those waiting AI self-driving cars. 

I’ll guess that by the time we have such a prevalence of AI self-driving cars, those staging areas will be well-protected and well-monitored. That seems a less likely easy target for car parts thievery.

One aspect that will potentially make AI self-driving cars an attractive target is the specialized components included on the self-driving car for the AI driving purposes. 

An AI self-driving car has lots of superb cameras, numerous radar devices, possibly LIDAR, ultrasonic devices, and so on. To car parts thieves, this is a goldmine. I can imagine them salivating at the prospect already.

These sensory devices are an ideal target for car parts theft. They tend to be expensive, and as I mentioned earlier, the value of a car part is a crucial attraction for car parts thievery. There are many of them on each AI self-driving car, making the car target-rich. They tend to be small, meaning that you can readily steal and transport them.

The question is whether these sensors can be readily removed from an AI self-driving car or not.

On the one hand, if the sensors are well-embedded and bolted or meshed into the car, it is going to be harder for car parts thieves to steal them. Unlike the ease of taking a catalytic converter, these sensors might be arduous to extract from the self-driving car. As mentioned earlier, the longer a car part takes to steal and the more difficult the removal, the less worth stealing the car part becomes.

Some are suggesting that we’ll have add-on kits that can convert a conventional car into an AI self-driving car. If that’s the case, this is a potential gift from heaven for the car parts thieves. Presumably, an add-on kit means that you simply augment the car with these added sensors, making them more vulnerable to being taken off the car too. Easy on, easy off. It’s the easy-off aspect that makes the car thieves happy.

This would also make selling the stolen car parts easier. Imagine a thriving market of add-on kits for the AI components of a self-driving car. A huge marketplace might develop, hungry for these specialized car parts. The underground will work double-time to try to fulfill the demand.

For my article about add-on kits for AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/kits-and-ai-self-driving-cars/

For why AI self-driving cars won’t be an economic commodity, see my article: https://www.aitrends.com/selfdrivingcars/economic-commodity-debate-the-case-of-ai-self-driving-cars/

For my article about reverse engineering of AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/reverse-engineering-and-ai-self-driving-cars/

For safety aspects, see my article: https://www.aitrends.com/selfdrivingcars/safety-and-ai-self-driving-cars-world-safety-summit-on-autonomous-tech/

I’ve already stated in my writings and presentations that I doubt the add-on approach is going to be viable for AI self-driving cars. By this claim, I am suggesting that there won’t be a mass consumer add-on market. There could, though, be a car dealer marketplace for doing this after-market work, though I doubt that is likely either.

Does the potential lack of an AI self-driving car parts add-on market imply that the sensors on an AI self-driving car will necessarily be difficult to remove?

No, not necessarily.

Here’s why.

An automaker trying to maintain AI self-driving cars has to consider that the life of the sensors is limited and that they will break down. When the sensors need to be replaced, if they are somehow hidden and embedded within the body of the self-driving car, it will be harder and costlier to have them replaced.

I realize that the automakers might be so focused on just producing a viable AI self-driving car that they don’t care right now about the maintenance side of things. Plus, if you like conspiracy theories, you might say that it is perhaps better for the automaker to make it arduous and costly to replace the sensors, thus guaranteeing lucrative maintenance fees after selling the AI self-driving car.

In any case, it isn’t yet clear whether it will be made easy or hard to remove the sensors from an AI self-driving car. This might differ by car maker and car model.

The same question can be asked about the computer processors in the AI self-driving car. AI self-driving cars will be chock-full of likely expensive computer processors and memory. These expensive, small-sized components would be another potential target for car parts theft. Will these processors be so deeply embedded that there is little chance of thievery, or will they be readily accessible for maintenance purposes and therefore more prone to being stolen? We’ll need to wait and see.

For my article about the grand convergence of tech on AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

For sensors such as LIDAR, see my article: https://www.aitrends.com/selfdrivingcars/lidar-secret-sauce-self-driving-cars/

For conspiracy theories about AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/conspiracy-theories-about-ai-self-driving-cars/

For my article about the Top 10 predictions and trends of AI self-driving cars, see: https://www.aitrends.com/ai-insider/top-10-ai-trends-insider-predictions-about-ai-and-ai-self-driving-cars-for-2019 

Anti-Theft Tech Could Help Reduce Thefts

One additional aspect to keep in mind will be advances in anti-theft technologies.

It could be that once we have any prevalence of AI self-driving cars, there will be new advances in car anti-theft systems, and those devices will be included in AI self-driving cars. 

If so, it might make stealing car parts a near impossibility.

Imagine a scenario in which a thief attempts to carry out a car parts theft on an AI self-driving car. 

The AI might detect the effort. It could then honk the horn or take some other action to bring attention to the car. It might also use its Over-The-Air (OTA) capabilities, usually used for pushing updates and patches into the AI system remotely via the cloud, to contact the authorities electronically and report a car parts robbery in progress.

Another futuristic possibility is the use of V2V (vehicle-to-vehicle) electronic communications, which will be included in AI self-driving cars.

Normally, the V2V will be used for an AI self-driving car to share roadway info with another nearby AI self-driving car. Perhaps an AI self-driving car has detected debris in the road. It might relay this finding to other nearby AI self-driving cars. They can then each try to avoid hitting the debris, being able to proactively anticipate the debris due to the V2V warning provided.

Suppose one AI self-driving car notices that a thief is trying to steal parts from another AI self-driving car. The observing AI self-driving car might honk its horn or make a scene, contact the authorities via OTA, or try to wake up the victim AI self-driving car by sending it an urgent V2V message. Assuming the AI self-driving car was “asleep” and parked, the V2V message could awaken it, and the “victim” self-driving car could then sound an alert or possibly even try to drive away.
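Here is a minimal Python sketch of how such a V2V theft alert and the victim car’s reaction might be structured. The message fields and the handler are hypothetical illustrations of my own; actual V2V message sets (such as those standardized for DSRC or C-V2X) define their own formats.

```python
from dataclasses import dataclass
import time

@dataclass
class V2VAlert:
    """Hypothetical V2V message announcing a suspected parts-theft-in-progress."""
    sender_id: str              # the observing car
    victim_vehicle_id: str      # the car being tampered with
    alert_type: str
    location: tuple             # (latitude, longitude)
    timestamp: float

def handle_theft_alert(alert: V2VAlert, is_parked_asleep: bool):
    """Sketch of how a 'victim' self-driving car might react to a V2V wake-up."""
    actions = []
    if is_parked_asleep:
        actions.append("wake_ai_system")
    actions.append("sound_horn")
    actions.append("notify_authorities_via_ota")     # report the robbery in progress
    actions.append("attempt_to_drive_away_if_safe")
    return actions

alert = V2VAlert(
    sender_id="observer-car-42",
    victim_vehicle_id="victim-car-17",
    alert_type="parts_theft_suspected",
    location=(34.0522, -118.2437),
    timestamp=time.time(),
)
print(handle_theft_alert(alert, is_parked_asleep=True))
```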

For conspicuity and AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/conspicuity-self-driving-cars-overlooked-crucial-capability/

For my article about the OTA aspects, see: https://www.aitrends.com/selfdrivingcars/air-ota-updating-ai-self-driving-cars/

For cryptojacking of AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/cryptojacking-and-ai-self-driving-cars/

For swarm intelligence of AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/swarm-intelligence-ai-self-driving-cars-stigmergy-boids/ 

For my article about sleeping AI and self-driving cars, see: https://www.aitrends.com/selfdrivingcars/sleeping-ai-mechanism-self-driving-cars/

Conclusion

Will the sensors and other AI physical components be easy to steal or hard to steal? 

They are going to be attractive due to their high cost and small size. Automakers and tech firms are likely not considering right now whether those parts can be stolen. Instead, right now it’s mainly about getting them to work and producing a true AI self-driving car.

The marketplace for those devices will be slim until there is a prevalence of AI self-driving cars. 

Therefore, this is a low-chance risk for now. 

We’ll have to wait and see what the future holds.

I can just imagine in the future coming out to my vaunted AI self-driving car, which I would proudly opt to park in my driveway, doing so as a showcase for the rest of the neighborhood, and suddenly realizing that it has been stripped of its parts. 

Many of the conventional car parts might be taken, along with the AI specialized car parts. 

Darn it, struck by car thieves a second time in my life. 

Will it never end? 

I’d hope that helicopters would be dispatched immediately, along with police drones and squad cars, all searching for my stolen AI self-driving car parts. 

Get those dastardly heathens! 

Copyright 2019 Dr. Lance Eliot 

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

Source: AI Trends


Superhuman AI Misnomer Misgivings Including About Autonomous Cars


The superhuman AI moniker might lead the driving public to believe that the AI of the self-driving car is more capable than it really is. (Credit: Getty Images)

By Lance Eliot, the AI Trends Insider 

I have a beef with the now seemingly in-vogue use of the “superhuman AI” phrase that keeps popping up in the media. 

When I was asked about “superhuman AI” at a recent Machine Learning and AI industry conference, I admit that I wound myself up into a bit of a tizzy and launched into a modest diatribe. 

Now that I’ve calmed down, I thought you might like to know what my angst is about the so-called superhuman AI moniker and why it is important to give the matter of its use some serious consideration.

Have you noticed the phrase? 

It can be subtle and at times easy to miss. 

I’d guess that if you look around, you’ll see that the superhuman AI phrase might have been in any of a number of recent articles you have read about AI breakthroughs, or might have gotten mentioned during a radio broadcast or on a podcast that you listen to while in your car. If so, I realize you might have not given it any thought at all.

In that sense, you could argue that the superhuman AI phrase is not consequential and there’s no reason to get upset about its use. It is perhaps a kind of filler phrase that sounds good and hopefully most people know it likely lacks any substantive true meaning. Just more noise and nothing noteworthy.

On the other hand, there is a potential danger that this superhuman AI phrase is indeed being taken quite seriously by those who are not well versed in AI, and thus it can tend to over-inflate what AI can actually achieve. 

There are some in AI who seem pleased to inflate expectations about AI, but fortunately there are moderates who rightfully worry that these subtle attempts at overstating AI are going to get everyone into trouble.

The trouble is that we might all begin to believe that AI can do things it cannot, and then allow ourselves to become vulnerable to automated systems that bear no resemblance to this mythical, made-up notion of today’s AI.

 For my article about AI as a potential Frankenstein, see: https://aitrends.com/selfdrivingcars/frankenstein-and-ai-self-driving-cars/

 For the potential coming singularity of AI, see my article: https://aitrends.com/selfdrivingcars/singularity-and-ai-self-driving-cars/

 For idealism about AI, see my article: https://aitrends.com/selfdrivingcars/idealism-and-ai-self-driving-cars/

 For the Turing test and AI, see my article: https://aitrends.com/selfdrivingcars/turing-test-ai-self-driving-cars/

 Superhuman AI Phrase Is Usually Hyped Or Otherwise Misused

I’ll make a bold statement herein and claim that by-and-large when someone reports that they have developed an AI system that is characterized by superhuman AI, they are generally being misleading. 

It could be marketing hype. 

It could be personal bravado. 

It could be they are purposely, or possibly even inadvertently, deploying hyperbole. 

It could be that the person is naive. 

It could be that they are unknowingly overstating things.

It could be that they don’t care whether their statements are accurate or not.

It could be that they are saying it because everyone else is saying it.

And so on.

They might also believe that for their definition of superhuman AI, they are reasonably making use of the phrase.

 This does bring up a bit of a conundrum about the superhuman AI phrase.

There is no across-the-board, agreed-upon, standardized and codified definition that everyone has endorsed as to what superhuman AI consists of. 

Nobody has laid out precise and demonstrable metrics that we should use to decide concretely whether there is or is not superhuman AI involved. 

Therefore, with no specific rules to be followed, you can use the phrase however you might wish. There isn’t a kind of word-usage cop standing by the roadside that has an improper-AI-phrases radar gun that can detect the true and proper use of the superhuman AI phrase. It is instead a wild west.

You can even fiendishly use the phrase to suggest or imply with a wink-wink that there is really a lot of super-duper AI in your system, but at the same time claim when pressed that you were intending to use it in a less-than over-the-top manner. The looseness of the existing commonplace unstated definition allows for making what you want of the handy and catchy phrase and gives you face-saving wiggle room to do so.

 What Does Superhuman AI Mean

Let’s consider what the phrase potentially means.

The first word, superhuman, we all generally would agree means to accomplish something of an extraordinary nature beyond what a normal human might be able to do. 

The other day a man lifted a car because his child had gotten pinned underneath it. Normally, it is doubtful that he could lift a car by himself. He somehow gained momentarily a kind of superhuman strength, perhaps due to adrenaline running through his body, and was able to lift that car.

You could say that he was superhuman. 

But, does this mean that in all respects he is superhuman? 

Can he lift a building like superman? No. 

Could he even lift a car again? Unlikely, unless his child once again got stuck underneath one. 

For all reasonable notions of superhuman, I think we could say that he momentarily displayed an extraordinary strength that a normal person might not typically have.

He wasn’t therefore now a permanent superhuman who would forever have this super strength. He did not arrive here from the planet Krypton. Instead, he momentarily engaged in an activity that most of us would probably be willing to say seems relatively superhuman-like.

Suppose, though, that we went out and found a really strong weightlifter and asked that person to lift the same car as the man who had been saving his child. If the strong lifter could do so, is that lifter also someone we would immediately applaud as being superhuman? 

I’d suggest we would not.

This brings up the aspect that when we refer to someone as superhuman, we probably need to have some basis for comparison.

 If the basis for comparison is solely confined to what the particular person could normally do, this would seem to quite dilute the idea of being superhuman. 

I tried to open a screw-top can the other day and could not do so. I tried and tried. A few days later, I tried again and managed to squirm and grunt and got that darned lid to turn and come off. 

I was now superhuman!

I don’t think that it seems fair or reasonable to say that I was superhuman in that case. Big deal, I opened a screw-top can that was somewhat jammed up. A lot of humans could do the same. Just because I was able to exceed my prior effort, it doesn’t seem to warrant handing me a trophy as being superhuman. 

I think you would likely agree.

It was the famous Friedrich Nietzsche who, many suggest, first helped to bolster the notion of someone potentially being superhuman. 

You might become superhuman by perhaps being genetically bred in a manner that gives you greater strength or greater intelligence than other humans. Or, maybe you have a cybernetic implant inserted into your body that gives you superhuman strength, or you take special drugs that make your mind more powerful and superhuman, similar to what is often portrayed in many science fiction movies.

There is a slippery slope between talking about superhuman and contemplating the aspects of Superman or Superwoman. 

The word “super” in superhuman can allude to having superpowers, and so you start to think of Superman or Superwoman; even though they don’t exist and are only fictional characters, the word superhuman nonetheless gets a glow from those now ubiquitous fictional figures. It is therefore easy to mentally conflate “superhuman = Superman or Superwoman,” which is part of the reason that superhuman is a lousy word and distorts our sense of what is real and what is not.

In spite of the troubles associated with the meaning of the word superhuman, we’ll go ahead and add to the mess by appending the “AI” moniker to the superhuman word.

 Superhuman AI. 

Now what do we mean?

 If I develop a tic-tac-toe game that uses AI techniques and even executes on so-called AI hardware, can I claim that my tic-tac-toe game is superhuman AI?

 I wouldn’t think so. 

But, I assure you, there are some that would happily say it is. 

It’s the best darned tic-tac-toe game player ever devised. 

It exceeds anyone else in being able to play tic-tac-toe. 

It will never ever lose a tic-tac-toe game. 

Must be AI. 

Probably a breakthrough. 

Must be superhuman AI.

Where the phrase superhuman AI especially seems to come up is when referring to automated systems that play games. Automated systems have been developed for chess that have been able to beat human grandmasters and reach rating heights beyond top human play. 

Some say that’s superhuman AI.

Top-playing automated systems for Scrabble reached a zenith around 2006. 

Superhuman AI. 

More recently, in 2016, an automated system was a winner at Go, considered by many to have been a nearly unreachable goal due to the nature of the rules of Go and the kinds of strategies used. Superhuman AI. You’ve likely seen the plentiful ads about IBM Watson winning at Jeopardy. Superhuman AI.

I want to outright congratulate those who were able to get automated systems to play at such a vaunted level in those games. They used every computer science and AI technique and novelty to get to that accomplishment. Hear, hear!

 Were those all superhuman AI examples?

 Some say that games are great as a means to perfect many AI and computer science techniques and approaches, but they are quite narrow in their domains.

There are games with perfect information, meaning that you are informed of all events that occur throughout the playing of the game, knowing the starting position and each of the moves as they arise. Chess is an example of having “perfect” information, since you know how the game starts and you know each move that occurs along the way. Players don’t somehow hide their moves. Also, there isn’t any “chance” involved in the game, since there aren’t dice or something else being used to determine the moves. It is straight ahead. Imperfect information games are those that don’t fit the definition of a perfect information game.
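As a rough illustration of the distinction, here is a small Python sketch that tags a few well-known games. The flags are my own simplified labels offered only for illustration.

```python
from dataclasses import dataclass

@dataclass
class Game:
    name: str
    perfect_information: bool   # the starting position and all moves are visible to every player
    uses_chance: bool           # dice, shuffled decks, or other randomness influence play

games = [
    Game("Chess", perfect_information=True, uses_chance=False),
    Game("Go", perfect_information=True, uses_chance=False),
    Game("Poker", perfect_information=False, uses_chance=True),      # hidden cards plus a shuffled deck
    Game("Backgammon", perfect_information=True, uses_chance=True),  # fully visible board, but dice rolls
]

for g in games:
    label = "perfect information" if g.perfect_information else "imperfect information"
    print(f"{g.name}: {label}, chance involved: {g.uses_chance}")
```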

 Does playing games well or even extraordinarily with an automated system mean that it is superhuman AI?

Suppose that we don’t use any special AI techniques at all, and merely leverage having a much vaster amount of storage available than a human could likely have in their mind, and we do a great job of searching an extremely large space of pre-calculated best moves (pre-calculated under human-led direction for the automated system).

 Is it fair to toss the “AI” into the superhuman wording?

 Superhuman AI Outside Of Games

Let’s consider domains beyond those of playing games.

I had helped an assembly plant put a robotic arm into their assembly line, doing so to speed-up the line and reduce the labor needed to produce their product. 

Most would agree that the field of robotics generally fits within the overarching definition of AI.

I’d like to brag about the robotic arm work, but admittedly the mechanical arm was merely trained by having a human move the arm back and forth until it caught onto the movements needed. I also added various safety-related code to make sure the robotic arm wouldn’t go astray. It was a nice project, and it also reduced the various repetitive-motion injuries that the human workers had been experiencing while doing the same task. In addition, I trained the former assembly worker in how to take care of the robotic arm and make changes and updates to the code as needed.

 Was the robotic arm that I helped customize and got working an example of a superhuman AI accomplishment?

 Yes, you could apparently make such a claim.

It used robotics, which as mentioned generally fits within the AI rubric. It was superhuman because it could easily beat any human at the assembly line task. Whereas before a human could do the overall task about six times per hour, the robotic arm could do it about 10 times per hour. The robotic arm could also work 24×7, with no need to rest or relax or take breaks. For all practical purposes, you could assert that it was superhuman in comparison to what a human could do. You could say it was superhuman beyond any other human, since no human, unaided by machinery, could possibly work like that.
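Here is a quick back-of-the-envelope calculation, in Python, of the weekly throughput difference implied by those rough figures. The shift lengths are my own illustrative assumptions.

```python
# Throughput comparison for the assembly task described above, using the article's rough rates.
human_rate_per_hour = 6
robot_rate_per_hour = 10
human_hours_per_week = 40          # one worker on a standard work week (assumed)
robot_hours_per_week = 24 * 7      # the arm can run around the clock

human_weekly = human_rate_per_hour * human_hours_per_week   # 240 completions
robot_weekly = robot_rate_per_hour * robot_hours_per_week   # 1,680 completions
print(f"Human: {human_weekly}/week, robot: {robot_weekly}/week, "
      f"ratio: {robot_weekly / human_weekly:.1f}x")
```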

 Personally, it would give me heartburn to go around and say that this robotic arm was superhuman AI. But, that’s just me.

 You might say that physical things don’t count for the superhuman AI moniker. 

In the case of the robotic arm, it wasn’t “thinking” in any superhuman kind of way. Therefore, maybe we should only use superhuman AI when the matter at-hand involves thinking, akin to winning at chess or Scrabble.

Does the superhuman AI have to be the best in comparison to all humans? In other words, if we construct an AI system that plays chess, and it beats the topmost human chess players, we might then assert or infer that it can beat all humans in that domain and therefore it is rightfully superhuman.

 If it cannot beat all humans in the domain, what then?

Someone develops a Machine Learning capability that is able to diagnose MRIs and find cancerous regions, doing so more consistently than the average medical doctor and at times better than the best medical specialists in that domain. Let’s assume we cannot say it is always better than all humans in that respect. It is only sometimes better.

 Superhuman AI?

 Coming Up With Categories Of Human AI Achievement

Some suggest that we ought to have a graduated series of categories that lead up to being referred to as superhuman. 

We might do this (a toy sketch of applying these categories follows the list):

  •         Superhuman AI = better than all humans in the domain, as far as we can infer
  •         High-human AI = better than most humans in the domain, high as in heightened
  •         Par-human AI = similar to most humans in the domain or “on par” with humans
  •         Sub-human AI = less than or worse than most humans in the domain, sub-par
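To make these graduated categories a bit more concrete, here is a toy Python sketch of how one might classify an AI system’s performance in a single domain against a sample of measured human scores. The thresholds and the scoring scheme are my own illustrative assumptions, since, as noted, there is no codified metric for any of this.

```python
def categorize_ai_performance(ai_score, human_scores):
    """Toy classifier for the graduated categories listed above.
    The cutoffs here are illustrative assumptions, not an agreed standard."""
    best_human = max(human_scores)
    median_human = sorted(human_scores)[len(human_scores) // 2]
    if ai_score > best_human:
        return "Superhuman AI (better than all humans measured, as far as we can infer)"
    if ai_score > median_human:
        return "High-human AI (better than most humans in the domain)"
    if ai_score >= median_human * 0.9:
        return "Par-human AI (on par with most humans in the domain)"
    return "Sub-human AI (worse than most humans in the domain)"

# Example: a chess engine's rating compared against a small sample of human ratings.
print(categorize_ai_performance(3500, [2850, 2700, 2200, 1500, 1200]))
```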

Notice that I’ve qualified the superhuman label in two important respects: one, by saying that it pertains to a particular domain, and the other, that we can only infer that the AI in that case is better than all humans in the domain.

The latter qualification is due to the fact that we cannot really say whether or not the AI is better than all humans (unless we could really have all humans line up and show that none are better than the AI, all 7.5 billion people on the planet).

 Let’s take chess. 

Just because you can beat the most recent top-rated chess masters does not mean you can beat all humans in chess. There might be a chess wizard that nobody even knows exists. Or maybe next week a chess wizard appears seemingly out of nowhere who can play chess better than any other human on Earth. 

Thus, we’ll have to approximate: based on whatever circumstance is involved, such as with chess, we’ll say that the AI is better than “all” humans, but do so knowing that we are making a bit of a leap of faith on that aspect.

Some react to the superhuman AI by believing that the superhuman AI can do anything in any field of endeavor. 

That’s why I’ve included in the aforementioned categories the qualification that the AI operates within a domain of choice, such as chess or Go.

This is also why the superhuman AI moniker is on that slippery slope. 

If you tell someone that you have superhuman AI that can play chess, they might think this implies the AI can play any kind of game at that same topmost level, since we all know that chess is hard and therefore presumably the AI can just switch over and be superhuman at, say, Monopoly (which most people would say is a lot less arduous than chess, so it should presumably be “easy” for a superhuman AI chess-playing system to win at Monopoly).

Of course, today’s so-called superhuman AI instances are all within a narrow domain and do not have the ability to just switch over on their own and be tops at other domains.

Worse still, some people who hear about something that is superhuman AI will often assume that it must also have common-sense reasoning and even Artificial General Intelligence (AGI). If it can play chess so well, it likely can solve world hunger or clean up our environment too. Nope, sorry about that, but those aren’t in the cards right now.

 For my article about the limits of today’s AI when it comes to common sense reasoning, see: https://aitrends.com/selfdrivingcars/common-sense-reasoning-and-ai-self-driving-cars/

 For issues about AI boundaries, see my article: https://aitrends.com/ai-insider/ai-boundaries-and-self-driving-cars-the-driving-controls-debate/

 For reasons to consider starting over on AI, see my article: https://aitrends.com/selfdrivingcars/starting-over-on-ai-and-self-driving-cars/

 For conspiracy theories about AI, see my article: https://aitrends.com/selfdrivingcars/conspiracy-theories-about-ai-self-driving-cars/

 Superhuman AI And Autonomous Cars

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. 

There are some within the self-driving driverless car industry that are either using the phrase “superhuman AI” or are letting others use that phrase for them.

Is the superhuman AI moniker applicable to aspects of today’s AI self-driving cars?

 For the fake news about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/ai-fake-news-about-self-driving-cars/

 For the sizzle reels that mislead about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/sizzle-reel-trickery-ai-self-driving-car-hype/

 Allow me to elaborate.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the automakers are even removing the gas pedal, the brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human, nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

 For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

 For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

 For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

 For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

 For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

 Let’s focus herein on the true Level 5 self-driving car. Much of the comments apply to the less than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here are the usual steps involved in the AI driving task (a rough code sketch follows the list):

  •         Sensor data collection and interpretation
  •         Sensor fusion
  •         Virtual world model updating
  •         AI action planning
  •         Car controls command issuance
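As a rough illustration only, those steps might be sketched as a single loop iteration in Python; all of the function names here are placeholders of mine, not any automaker’s actual API.

```python
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Plan:
    steering: float
    throttle: float
    braking: float


def drive_cycle(read_sensors: Callable[[], Dict[str, object]],
                fuse: Callable[[Dict[str, object]], dict],
                update_world_model: Callable[[dict], dict],
                plan_actions: Callable[[dict], Plan],
                issue_controls: Callable[[Plan], None]) -> None:
    """One pass through the usual AI driving steps (illustrative sketch only)."""
    readings = read_sensors()           # 1. sensor data collection and interpretation
    fused = fuse(readings)              # 2. sensor fusion
    world = update_world_model(fused)   # 3. virtual world model updating
    plan = plan_actions(world)          # 4. AI action planning
    issue_controls(plan)                # 5. car controls command issuance
```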

 Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a Utopian world in which there are only AI self-driving cars on public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

 Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

 For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

 See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

 For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

 For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

 Returning to the superhuman AI discussion, let’s consider how this catchy phrase is being used by some in today’s AI self-driving car industry.

For the sensors side of things, it seems like anytime an improvement is made in being able to analyze an image and detect whether there is, say, a pedestrian in a picture, there is someone that will claim the AI or ML based capability is superhuman AI. This seems to suggest that if the shape of a pedestrian can be picked out of a hazy image, the software is somehow better than a human at image analysis. Rarely is there much substantive support provided for such a contention.

 Furthermore, given the relatively brittle nature of most of today’s image processing capabilities, even if the new routine can do a better job at that one particular aspect, one has to ask whether this is a fair and reasonable way to then ascribe to it that it is superhuman AI. We all know that a human could do a much broader scope kind of image analysis and likely “best” the image processing software in an overall effort of doing image detection.

 We also know that the image processing software has no “understanding” whatsoever about the image that it has detected. It has found a shape and associated it with something within the system tagged as a pedestrian. 

Does it “know” that a pedestrian is a human being that breathes and walks and lives and thinks? 

No. 

Does it “know” that a pedestrian might suddenly run or jump or shout at the self-driving car? 

No.

Yet, is it truly superhuman AI?

 Seems like a stretch.

Trouble’s Afoot With Superhuman AI Proclamations

For those AI developers that would argue that it is superhuman AI, I’ll simply repeat my earlier qualm that for those that aren’t aware of AI’s limitations and constraints as it exists today, your willingness to toss around the superhuman AI moniker is going to get someone in trouble. The public will falsely believe that the AI of the self-driving car is more sophisticated and more capable than it really is. 

Regulators are going to falsely believe that the AI of the self-driving car is more robust and safer than it really is. And so on, down the line for all of the stakeholders involved in AI self-driving cars.

I’d be willing to bet that this wanton use of “superhuman AI” will ultimately come into the spotlight when there are product liability lawsuits lobbed against the automakers and tech firms that brandished such wording. 

Didn’t the “superhuman AI” claim mislead consumers into believing that their AI self-driving car could do things that it really could not? That will be one question asked during the case. By what manner did you arrive at proclaiming that your AI self-driving car had this kind of superhuman AI? That will be another. And so on. 

For my article about the safety concerns of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/safety-and-ai-self-driving-cars-world-safety-summit-on-autonomous-tech/

 For my article about how the levels of AI self-driving car might be misleading, see: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/

 For the responsibility aspects of AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/responsibility-and-ai-self-driving-cars/

 For the dangers associated with being an egocentric AI developer, see my article: https://aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/

 For the marketing of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/marketing-self-driving-cars-new-paradigms/

 Conclusion

Some argue that being perhaps over-the-top is the only way to make sure that funding and energy continues to pour into the AI field.

Presumably, a bit of hyperbole is worth the “cost,” since it is a favorable trade-off compared to otherwise losing steam and momentum in the quest to reach true AI. 

If we went around and told people that we are in the midst of AI systems that are par-human, or subpar-human, it might be a shock that would undermine investments and faith in pushing ahead with AI. 

Superhuman AI seems like a modest enough phrase that it can be used without having an abundance of guilt or misgivings, at least for some. 

You’ll have to make that decision on your own and live with it.

Fortunately, I don’t think you need to be superhuman to make the right decision about this.

Copyright 2019 Dr. Lance Eliot 

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

Source: AI Trends

Posted on

Emergency-Only AI and Autonomous Cars


AI self-driving cars need to be able to respond to emergency situations that come up on the road. An accident scene can have a ripple effect as the car swerves. (GETTY IMAGES)

By Lance Eliot, the AI Trends Insider

I had just fallen asleep in my hotel room when the fire alarm rang out.

It was a few minutes past 2 a.m.; the hotel was relatively full and had mainly been quiet at this hour, as most of the hotel guests had retired for the evening. The sharp twang of the fire alarm pierced throughout the hallways and walls of the hotel. I could hear the sounds of people moving around in their hotel rooms as they quickly got up to see what was going on.

Just last month I had been staying in a different hotel in a different city and the fire alarm had gone off, but it turned out to be a false alarm. The hotel was a new one that had only been open a few weeks and apparently they were still trying to iron out the bugs of the hotel systems. Somehow, the fire alarm system had gone off, right around midnight, and after a few minutes the hotel staff told everyone not to worry since it was a false alarm.

Of course, some more discerning hotel guests remained worried since they didn’t necessarily believe the staff that the fire alarm was a false one.

Maybe the staff was wrong, and if so, the consequences could be deadly.

Ultimately, there was no apparent sign of a fire, no smoke, no flames, and gradually even the most skeptical of guests went back to sleep.

I could not believe that I was once again hearing a fire alarm.

In my many years of staying at hotels while traveling for work and sometimes (rarely) for vacations, I had only experienced a few occasions of a fire alarm ringing out. Now, I had two in a short span of time. The earlier one had been a dud, a false alarm. I figured that perhaps this most recent fire alarm was also a false alarm.

But, should I base this assumption on the mere fact that I had a few weeks earlier experienced a false alarm?

The logic was not especially ironclad since these were two completely different hotels that had nothing particularly in common, other than that I had stayed at both of them.

False Alarm Or Genuine Fire Emergency

The good thing about my recent experience of a false alarm was that it had reminded me of the precautions you should undertake when staying at a hotel.

As such, I had made sure that my normal routine while staying at hotels incorporated the appropriate fire-related safeguards. One is to have your shoes close to where you can find them when you are awakened at night by a fire or a fire alarm, allowing you to quickly put on the shoes for escaping from the room. Without shoes on, you might try to escape the room or run down the hallway and there could be broken items like glass or other shards that would inhibit your escape or harm you as you tried to get out.

I also kept my key personal items such as my wallet and smartphone near the bed and had my pants and jacket ready in case they were needed. I knew the path to the doorway of my hotel room and kept it clear of obstructions before I went to sleep for the night. I had made sure to scrutinize the hallway and knew the nearest exits and where the stairs were. Some people also prefer to stay on the lower floors of a hotel, in case firefighters need to get them out, since the lower floors can be reached more readily either on foot or via a fire truck ladder.

I don’t want you to think I was obsessed with being ready for an emergency. The precautions I’ve just mentioned are all easily done without any real extra or extraordinary effort involved. When I first check into my hotel room, I glance around the hallway as I am walking to the room, spotting where the exits and the stairs are. When I go to sleep at night, I make sure the hotel room door is locked and then as I walk back to the bed I then also make sure the path is unobstructed. These are facets that can be done by habit and seamlessly fit in with the effort involved in staying at a hotel.

So, what did I do about the blaring fire alarm on this most recent hotel stay?

I decided that it was worthwhile to assume it was a real fire and not a false alarm, placing the higher bet on my safety rather than on the lazy assumption that I could remain lying in bed and wait to find out if the alarm was true or false.

I rapidly got into my jeans and coat, put on my shoes, grabbed my wallet and smartphone from the bed stand, and went straight to the door that led into the hallway.

I touched the door to see if it was hot, another kind of precaution in case the fire is right on the other side of the door (you don’t want to open a door leading into a fire, since the air from your room will simply feed the flames and you’ll be charred to a crisp).

Feeling no heat on the door, I slowly opened it to peek into the hallway.

Believe it or not, there was smoke in the hallway.

Thank goodness that I had opted to believe the fire alarm. I stepped into the hallway cautiously. The smoke appeared to be coming from the west end and not from the east end. I figured this implied that wherever the fire was, it might be more so on the west side rather than the east side of the hotel. I began to walk in the easterly direction.

What seemed peculiar was that there was no one else also making their way through the hallway.

I was pretty sure that there were people in the other rooms as I had heard them coming to their rooms earlier that evening (often making a bit of noise after likely visiting the hotel bar and having a few drinks there).

Were these other people still asleep?

How could they not hear the incessant clanging of the fire alarm?

The sound was blaring and loud enough to wake the dead.

I decided to bang on the doors of the rooms that I was walking past.

I would rap a door with my fist and then yell out “Fire!” to let them know that there was indeed something really happening. My guess was that others had heard the fire alarm but chosen to wait and see what might happen. With the hallway starting to fill with smoke, this seemed sufficient proof to me that a fire was somewhere. The smoke would eventually seep into the rooms. For now, the smoke was mainly floating near the ceiling of the hallway. It wasn’t thick enough yet to have filled down to the floor and try to permeate into the rooms at the door seams.

The good news was that no one ended up getting hurt and the fire was confined to the laundry room of the hotel.

The fire department showed up and put out the flames. They brought in large fans too to help clear out the smoke from the hotel. The staff did an adequate job of helping the hotel guests and moved many of them to another wing of the hotel to get away from the residual smoky smell. It was one of the few times that I’d ever been in a hotel that had a fire and for which I was directly impacted by the fire.

The hotel had smoke alarms in each of the hotel rooms, along with smoke alarms in other areas of the hotel. This is nowadays standard fare for most hotels, and also for personal residences, where you are supposed to have fire alarm devices set up in appropriate areas. These silent guardians are there to be your watchdogs. When smoke begins to fill the air, the fire alarm detects the smoke and then starts to beep or clang to alert you.

Some of today’s fire alarms speak at you. Rather than simply making a beep sound, these newer fire alarms emit the words “Fire!” or “Get out!” or other kinds of sayings. It is thought that people might be more responsive to hearing what sounds like a human voice telling them what to do. Hearing a beeping sound might not create as strong a response.

You’ve likely at times wrestled with the fire alarm in your home.

Perhaps the fire alarm battery became low and the fire alarm started a low beeping sound to let you know. This often happens on a timed basis wherein the beep sound for the low-battery is at first every say five minutes. If you don’t change the battery, the beeping time interval gets reduced. The low-battery beep might then happen every minute, and then every 30 seconds, and so on.

In the hotels that I stay at, they usually also have a fire alarm pull. These are devices typically mounted in the hallways that allow you to grab and pull to alert that a fire is taking place. I’d bet that when you were in school, someone once pulled the fire alarm to avoid taking a test. The prankster that pulled the fire alarm is putting everyone at risk, since people can get injured when trying to rush out upon hearing a fire alarm, plus it might dull their reaction times the next time there is a genuine fire alarm alert.

Some hotels have a sprinkler system that will spray water to douse a fire.

The sprinkler activation might be tied into the fire alarms so that the moment a fire alarm goes off the sprinklers are then activated. This is not usually so closely linked though because of the chances that a false fire alarm might activate the sprinklers. Once those sprinklers start going, it’s going to be more damaging to the hotel property and you’d obviously want the sprinklers to only go when you are pretty certain that a fire is really occurring. As such, there is often a separate mechanism that has to be operated to get the fire sprinklers to engage.

Emergency Systems That Save Lives

This discussion about fire alarms and fire protection illuminates some important elements about systems that are designed to help save human lives.

In particular:

  • A passive system like the fire alarm pull won’t automatically go off and instead the human needs to overtly activate it
  • For a passive system, the human needs to be aware of where and how to activate it, else the passive system does little good to help save the human
  • An active system like the smoke alarm is constantly detecting the environment and ready to go off as soon as the conditions occur that will activate the alarm
  • Some system elements are intended to simply alert the human and it is then up to the human to take some form of action
  • Some system elements such as a fire sprinkler are intended to automatically engage to save human lives and the humans being saved do not need to directly activate the life-saving effort
  • These emergency-only systems are intended to be used only when absolutely necessary and otherwise are silent, being somewhat out-of-sight and out-of-mind of most humans
  • Such systems are not error-free in that they can at times falsely activate even when there isn’t any pending emergency involved
  • Humans can undermine these emergency-only systems by not abiding by them or taking other actions that reduce the effectiveness of the system
  • Humans will at times distrust an emergency-only system and believe that the system is falsely reporting an emergency and therefore not take prescribed action

I’m invoking the use case of fire alarms as a means to convey the nature of emergency-only systems.

There are lots of emergency-only related systems that we might come in contact with in all walks of life. The fire alarm is perhaps the easiest to describe and use as an illustrative aspect to derive the underpinnings of what they do and how humans act and react to them.

Autonomous Cars And Emergency-Only AI

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One approach that some automakers and tech firms are taking toward the AI systems for self-driving cars involves designing and implementing those AI systems for emergency-only purposes.

Allow me to elaborate.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the automakers are even removing the gas pedal, the brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human, nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car. Level 4 is akin to Level 5 but with constraints self-imposed as to the scope of the AI driving capabilities.

For self-driving cars less than a Level 4, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

Let’s focus mainly herein on the true Level 4 and Level 5, but also begin by studying the ramifications related to Level 3.

Here are the usual steps involved in the AI driving task:

  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action planning
  • Car controls command issuance

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

Returning to the topic of emergency-only systems, there are various approaches that automakers and tech firms are taking toward the design and development of AI for self-driving cars and one such approach involves an emergency-only AI paradigm.

As already mentioned, currently the most vaunted and desired approach consists of having the AI always be driving the car with no human driving involved at all, which is the intent of a true Level 4 and Level 5. This is much harder to pull off than it might seem.

I’ve repeatedly characterized the true Level 5 as a kind of moonshot.

It’s going to take a lot longer to get there than most people seem to think it will.

At the less than Level 4, there is a co-sharing of the driving task. We can step back for a moment and ask an intriguing question about the co-sharing of the driving task, namely, what should the split be between when the AI does the driving and when the human does the driving?

The Level 2 split of human versus AI driving is that the human tends to do the bulk of the driving and the AI tends to do relatively little of the driving task.

For the Level 3, the split tends toward having the AI do more of the driving and the human do less of it.

Suppose we somewhat turned this split on its head, so to speak.

We might design the AI to be an emergency-only kind of mechanism.

Rather than having the AI drive the self-driving car to progressively increasing degrees at Level 2 and Level 3, we might instead opt to have the human be the mainstay driver.

The AI would be used nearly solely for emergency-only purposes.

Emergency-Only AI Driving For Level 3

Let’s say I am driving in a Level 3 self-driving car. I would normally be expecting the AI to be the primary driver and I am there in case the AI needs me to take over.

I’ve written and spoken many times about the dangers of this co-sharing arrangement. As a human, I might become complacent and not be ready to take over the driving task when the moment arises for me to do so. Maybe I was playing a video game on my smartphone, maybe I was reading a book that’s in my lap, and other kinds of distractions might occur.

For the concerns about Level 3 AI self-driving cars, see my article: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/

For driving controls aspects of self-driving cars, see my article: https://aitrends.com/ai-insider/ai-boundaries-and-self-driving-cars-the-driving-controls-debate/

For safety as a crucial aspect of self-driving cars, see my article: https://aitrends.com/ai-insider/safety-and-ai-self-driving-cars-world-safety-summit-on-autonomous-tech/

For my article about the moonshot of self-driving cars achievement, see: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

Instead of having the AI do most of the driving while in a Level 3, suppose we instead said that the human is the primary driver.

The AI is relegated to being an emergency-only driver.

Here’s how that might work.

I’m driving my Level 3 car and the AI is quietly observing what is going on. The AI is using all of its sensors to continuously detect and interpret the roadway situation. The sensor fusion is occurring. The virtual world model is being updated. The AI action planning is taking place. The only thing not happening is the issuance of the car controls commands.

In a sense, the AI is for all practical purposes “driving” the car without actually taking over the driving controls. This might be likened to when I was teaching my children how to drive a car. They would sit in the driver’s seat. I had no ready means to access the driver controls. Nonetheless, in my head, I was acting as though I was driving the car. I did this to be able to comprehend what my teenage novice driver children were doing and so that I could also come to their aid when needed.
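In code terms, a minimal sketch of this “watching but not acting” arrangement might look like the following; the function names, the emergency test, and the takeover choice are all invented for illustration and are not any production design.

```python
def shadow_mode_cycle(read_sensors, fuse, update_world_model, plan_actions,
                      is_emergency, alert_driver, issue_controls):
    """Run the full driving pipeline, but only act on it in an emergency.

    Every argument is a caller-supplied callable; names are illustrative.
    """
    readings = read_sensors()                   # sensors keep collecting
    world = update_world_model(fuse(readings))  # fusion and world model keep updating
    plan = plan_actions(world)                  # the AI keeps planning "as if" driving

    if is_emergency(world):
        alert_driver("Emergency detected")      # fire-alarm style warning, and/or...
        issue_controls(plan)                    # ...sprinkler-style takeover of controls
    # Otherwise no commands are issued; the human remains the primary driver.
```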

Okay, so the Level 3 car is being driven by the human and all of a sudden another car veers into the lane and threatens to crash into the Level 3 car. We now have a circumstance wherein the human driver of the Level 3 car should presumably take evasive action.

Does the human notice that the other car is veering dangerously?

Will the human take quick enough action to avoid the crash?

Suppose that the AI was able to ascertain that the veering car is going to crash with the Level 3 car.

Similar to a fire protection system such as at the hotels, the AI can potentially alert the human driver to take action (akin to a fire alarm that belts out an alarm bell).

Or, the AI might take more overt action and momentarily take over the driving controls to maneuver the car away from the danger (this would be somewhat equivalent to the fire sprinklers getting invoked in a hotel).

If the AI was devised to work in an emergency-only mode, some would assert that it relieves the pressure on the AI developers to try and devise an all-encompassing AI system that can handle any and all kinds of driving situations.

Instead, the AI developers could focus on the emergency-only kinds of situations.

This also would presumably shift attention toward the AI being a kind of hero, stepping into the driving when things are dire and saving the day.

Hey, someone might say, the other day the AI of my self-driving car kept me from hitting a dog that ran unexpectedly into the street.

Another person might say that the AI saved them from ramming into a car that had come to a sudden halt on the freeway just ahead of their car (and they sheepishly admit they had turned to look at a roadside billboard and by the time they turned their head back the halted car ahead was a surprise).

We are all already somewhat familiar with automated driving assistance systems that can do something similar.

Many cars today have a simplistic detection device such that if your car is going to hit something ahead of it, the brakes are automatically applied. These tend to be extremely simplistic in how they work. It is almost a knee-jerk reaction kind of system. There’s not much “smarts” involved. You might liken these low-level automated systems to the autonomic nervous system of a human: it reacts instinctively and without much direct thinking involved (when my hand is near a hot stove, presumably my instincts kick in and I withdraw my hand, doing so without a lot of contemplative effort involved).
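For comparison, here is roughly what such a reflex-style check amounts to, sketched in Python; the threshold and sensor values are made up for illustration, and real systems are tuned and validated far more carefully.

```python
def should_auto_brake(range_to_obstacle_m: float,
                      closing_speed_mps: float,
                      ttc_threshold_s: float = 1.5) -> bool:
    """Knee-jerk forward-collision check: brake if time-to-collision is too short.

    range_to_obstacle_m: distance to the object ahead, in meters
    closing_speed_mps: how quickly that gap is shrinking, in meters per second
    ttc_threshold_s: illustrative trigger threshold, in seconds
    """
    if closing_speed_mps <= 0:
        return False  # the gap is steady or growing; nothing to react to
    time_to_collision = range_to_obstacle_m / closing_speed_mps
    return time_to_collision < ttc_threshold_s


# Example: 20 meters ahead, closing at 15 m/s gives a TTC of about 1.3 seconds
print(should_auto_brake(20.0, 15.0))  # -> True, apply the brakes
```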

These behind-the-scenes automated driving assistance systems would be quietly replaced with a more sophisticated AI-based system that is more robust and paying attention to the overall driving task.

The paradigm is that the emergency-only AI is likened to having a second human driver in the car and the secondary driver is there only for emergency driving purposes.

The rest of the time, the primary driver is the human that is driving the car.

As mentioned, this might suggest that the AI then does not need to be full-bodied and does not need to be able to drive the car all of the time, and instead be focused on just being able to drive when emergency situations arise. Some would assert that this is a bit of a paradox.

If the AI is not versed enough to be able to drive at any time, how will it be able to discern when an emergency is arising that requires the AI to step into the driving task?

In other words, some would say that until you have a fully capable driving AI, you would be risking things unduly by having the AI be used only in emergencies.

Unless you opt to say that the AI is used solely in emergencies, you are otherwise suggesting that the AI is able to monitor the driving task throughout and is ready at any moment to do the driving; but if that’s the case, why not let the AI do the driving as the primary driver anyway?

Defining Emergency Driving Situations

This also brings up the notion of defining the nature of an emergency driving situation.

The obvious example of an emergency would be the case of a dog that has darted into the street directly in front of the car, where the speed, direction, and timing of the car are such that it will mathematically intersect with the dog if some kind of driving action is not taken to immediately attempt to avoid striking the animal. But this takes us back to the kind of simplistic automated driving assistance systems that are not especially imbued with AI anyway.

If we’re going to consider using AI for emergency-only situations, presumably the kinds of emergency situations will range from rather obvious ones that a knee-jerk reactive driving system could handle and all the way up to much more subtle and harder to predict emergencies.

If the AI is going to be continuously monitoring the driving situation, we’d want it to be acting like a true secondary driver and be able to do more sophisticated kind of emergency situation detection.

You are on a mountain road that curves back-and-forth.

The slow lane has large rambling trucks in it. Your car is in the fast lane that is adjacent to the slow lane. The AI has been observing the slow lane and detected a truck up ahead that periodically has swerved into the fast lane when on a curve. The path of the car is such that in about 10 seconds the car will be passing the truck while on a curve. At this moment there is no apparent danger. But, it can be predicted with sufficient probability that in 10 seconds the likelihood is that the truck will swerve into the lane of the car as it tries to pass the truck on the curve.

Notice that in this example there is not a simple act-react cycle involved.

Most of the automated driving assist systems would only react once the car is actually passing the truck and if perchance as the passing action occurred that the truck then veered into the path of the car. Instead, in my example, the AI has anticipated a potential future emergency and will opt to take action beforehand to either prevent the danger or at least be better prepared to cope with it when (if) it occurs.
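To show the difference in flavor, here is a hedged sketch of anticipatory logic for the truck scenario; the swerve-probability estimate, the thresholds, and the time horizon are all assumptions made up for this example.

```python
def plan_pass_of_truck(observed_swerves: int,
                       observed_curves: int,
                       seconds_until_pass: float,
                       pass_is_on_curve: bool,
                       risk_threshold: float = 0.3,
                       horizon_s: float = 15.0) -> str:
    """Anticipate a truck drifting into our lane before we attempt the pass.

    observed_swerves / observed_curves: how often this particular truck drifted
    over the lane line on prior curves that we watched it take.
    """
    swerve_probability = (observed_swerves / observed_curves) if observed_curves else 0.0

    if pass_is_on_curve and swerve_probability > risk_threshold and seconds_until_pass < horizon_s:
        # Act before the danger materializes: slow down, add spacing, or delay the pass
        return "delay_pass_and_increase_gap"
    return "proceed_with_pass"


# Example: the truck swerved on 2 of its last 4 curves; we would pass it on a curve in ~10 s
print(plan_pass_of_truck(2, 4, 10.0, True))  # -> delay_pass_and_increase_gap
```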

The emergency-only AI would be presumably boosted beyond the nature of a traditional automated driving assist system, and likely be augmented by the use of Machine Learning (ML).

How did the AI even realize that observing the trucks in the slow lane was worthwhile to do?

An AI driving system that has learned over time would have the “realization” that trucks often tend to swerve out of their lanes while on curving roads.

This then becomes part-and-parcel of the “awareness” that the AI will have when looking for potential emergency driving situations.

For my article about Machine Learning core aspects, see: https://aitrends.com/selfdrivingcars/machine-learning-benchmarks-and-ai-self-driving-cars/

For ensemble Machine Learning, see my article: https://aitrends.com/selfdrivingcars/ensemble-machine-learning-for-ai-self-driving-cars/

For federated Machine Learning, see my article: https://aitrends.com/selfdrivingcars/federated-machine-learning-for-ai-self-driving-cars/

For the importance of explanation-based Machine Learning, see my article: https://aitrends.com/selfdrivingcars/explanation-ai-machine-learning-for-ai-self-driving-cars/

True Autonomous Cars And Emergency-Only AI

Let’s now revisit my earlier comments about the nature of emergency-only systems and my illustrative examples of the fire alarm and fire protection systems.

I present to you those earlier points and then recast them into the context of AI self-driving cars:

  • A passive system like the fire alarm pull won’t automatically go off and instead the human needs to overtly activate it

Would a driving emergency-only AI system be set up for only a passive mode, meaning that the human driver would need to invoke the AI system? We might have a button that the human could press that invokes the AI emergency capability, or the human might have a “safe word” that they utter to ask the AI to step into the picture.

Downsides with this include that the human might not realize they need, or even could use, the AI emergency option. Or, the human might realize it but enact the AI emergency mode only once it is too late for the AI to do anything to avert the incident.

We would also need a means of letting the human know that the AI has “accepted” the request and entered the emergency mode, otherwise the human might be unsure whether the AI got the signal and whether the AI is actually stepping into the driving.

There is also the matter of returning the driving back to the human once the emergency action by the AI has been undertaken. How would the AI be able to “know” that the human is prepared to resume driving the car? Would it ask the human driver, or just assume that if the human is still at the driver controls then it is okay for the AI to disengage?
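One way to picture the activation, acknowledgment, and hand-back questions above is as a small state machine. The following Python sketch is purely illustrative; the states, the trigger, and the hand-back rule are my own assumptions, not a proposed standard.

```python
class PassiveEmergencyAI:
    """Toy state machine for a driver-invoked (passive) emergency-only AI mode."""

    def __init__(self):
        self.state = "monitoring"  # monitoring -> engaged -> handing_back -> monitoring

    def driver_requests_help(self) -> str:
        if self.state == "monitoring":
            self.state = "engaged"
            # Explicit acknowledgment so the driver knows the AI received the signal
            return "ACK: emergency mode engaged, AI has the controls"
        return "Already engaged"

    def emergency_resolved(self, driver_confirms_ready: bool) -> str:
        if self.state in ("engaged", "handing_back"):
            if driver_confirms_ready:
                self.state = "monitoring"
                return "Control returned to the human driver"
            self.state = "handing_back"
            return "Waiting for driver confirmation before disengaging"
        return "Nothing to hand back"


ai = PassiveEmergencyAI()
print(ai.driver_requests_help())     # driver presses the button or utters the safe word
print(ai.emergency_resolved(False))  # the AI does not just assume the driver is ready
print(ai.emergency_resolved(True))   # only now is control handed back
```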

  • For a passive system, the human needs to be aware of where and how to activate it, else the passive system does little good to help save the human

As mentioned, a human driver might forget that the AI is standing ready to take over. Plus, when an emergency arises, the human might be so startled and mentally consumed that they lack a clear-cut mind to be able to turn over the driving to the AI.

  • An active system like the smoke alarm is constantly detecting the environment and ready to go off as soon as the conditions occur that will activate the alarm

With this approach, the AI is ready to step into the driving task and will do so whenever it deems necessary. This can be handy since the human driver might not realize an emergency is arising, or might realize it but not invoke the AI to help, or be perhaps incapacitated in some manner and wanting to invoke the AI but cannot.

Downside here is that the AI might shock or startle the human driver by summarily taking over the driving and catching the human driver off-guard. If so, the human driver might try to take some dramatic action that counters the actions of the AI.

We might also end up with the human driver becoming on edge that at any moment the AI is going to take over. This might cause the human driver to get suspicious of the AI.

It could be that the AI only alerts the human driver and lets the human driver decide what the human driver wants to do. Or, it could be that the AI grabs control of the car.

  • Some system elements are intended to simply alert the human and it is then up to the human to take some form of action

In this case, if the AI is acting as an alert, the question arises as to how best to communicate the alert. If the AI rings a bell or turns on a red light, the human driver won’t especially know what the declared emergency is about. Thus, the human driver might react to the “wrong” emergency in terms of what the human perceives versus what the AI detected.

If the AI tries to explain the nature of the emergency, this can use up precious time. When an emergency is arising, the odds are that there is little available time to try and explain what to do.

I am reminded that at one point my teenage novice driver children were about to potentially hit a bicyclist and I was tongue tied trying to explain the situation. I could just say “swerve to your right!” but this offered no explanation for why to do so. If I tried to say “there is a bicyclist to your left, watch out!” this provided some explanation and the desired action would be up to the driver. If I had said “there is a bicyclist to your left, swerve to your right!” it could be that the time taken to say the first part, depicting the situation, used up the available time to actually make the swerving action that would save the bike rider. Etc.

  • Some system elements such as a fire sprinkler are intended to automatically engage to save human lives and the humans being saved do not need to directly activate the life-saving effort

This approach involves the AI taking over the driving control, which as mentioned has both pluses and minuses.

  • These emergency-only systems are intended to be used only when absolutely necessary and otherwise are silent, being somewhat out-of-sight and out-of-mind of most humans

For emergency-only AI driving systems, they are intended only for use when an emergency driving situation arises. This raises the question, though, of what is considered an emergency versus not an emergency.

Also, suppose a human believes an emergency is arising but the AI has not detected it, or maybe the AI detected it and determined that it does not believe that a genuine emergency is brewing. This brings up the usual hand-off issues that arise when doing any kind of co-sharing of the driving task.

  • Such systems are not error-free in that they can at times falsely activate even when there isn’t any pending emergency involved

Some AI developers seem to think that their AI driving system is going to work perfectly and do so all the time. This makes little sense. There is a good likelihood that the AI will have hidden bugs. There is a likelihood that the AI as devised will potentially make a wrong move. There is a chance that the AI hardware might glitch. And so on.

If an emergency-only AI system engages on a false positive, it will likely undermine the human driver’s confidence that the AI is worthy of being engaged at all. If the AI instead gets caught in a false negative and does not take action when needed, this too is worrisome, since the human would assert that they relied upon the AI to deal with the emergency, yet it failed in its duty to perform.
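As a small illustration of why neither error direction can be waved away, here is a sketch that tallies both kinds of mistakes from a hypothetical log of AI interventions; the log format and the sample data are invented for this example.

```python
def intervention_error_rates(log):
    """Tally false positives and false negatives from logged AI interventions.

    log: iterable of (ai_intervened, emergency_was_real) boolean pairs.
    Returns (false_positive_rate, false_negative_rate).
    """
    false_positives = sum(1 for intervened, real in log if intervened and not real)
    false_negatives = sum(1 for intervened, real in log if not intervened and real)
    interventions = sum(1 for intervened, _ in log if intervened)
    real_emergencies = sum(1 for _, real in log if real)

    fp_rate = false_positives / interventions if interventions else 0.0
    fn_rate = false_negatives / real_emergencies if real_emergencies else 0.0
    return fp_rate, fn_rate


# Hypothetical log: three interventions (one unwarranted), three real emergencies (one missed)
sample = [(True, True), (True, False), (False, True), (True, True), (False, False)]
print(intervention_error_rates(sample))  # -> (0.333..., 0.333...)
```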

  • Humans can undermine these emergency-only systems by not abiding by them or taking other actions that reduce the effectiveness of the system

With the co-sharing of the driving task, there is an inherent concern that you have two drivers trying to each drive the car as they see fit.

Imagine that when my children were learning to drive if I had a second set of driving controls. The odds are that I would have kept my foot on the brake nearly all of the time and been keeping a steady grip on the steering wheel. This though would have undermined their driving effort and created confusion as to which of us was really driving the car. The same can be said of the AI emergency-only driving versus the human driving.

  • Humans will at times distrust an emergency-only system and believe that the system is falsely reporting an emergency and therefore not take prescribed action

Would we lockout the driving controls for the human whenever the AI takes over control due to a perceived emergency situation by the AI detection? This would prevent having the human driver fight with the AI in terms of what driving action to take. But the human driver is likely to have qualms about this. Suppose the AI has taken over when there wasn’t a genuine emergency.

We might assume or hope that the AI in the case of acting on a false alarm (false positive) would not get the car into harm’s way. This though is not necessarily the case.

Suppose the AI perceived that the car was potentially going to hit a bicyclist, and so the AI swerved the car to avoid the bike rider. Meanwhile, by swerving the car, another car in the next lane got unnerved and the driver in that car reacted by slamming on their brakes. Meanwhile, by slamming on their brakes, the car behind them slammed into the car that had hit its brakes. All of this being precipitated by the AI that opted to avoid hitting the bicyclist.

Imagine though that the bicyclist took a quick turn away from the car and thus there really wasn’t an emergency per se.

For my article about ghosts in AI self-driving car systems, see: https://aitrends.com/selfdrivingcars/ghosts-in-ai-self-driving-cars/

For the debugging of AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/debugging-of-ai-self-driving-cars/

For the egocentric views of some AI developers, see my article: https://aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/

For the burnout of AI developers, see my article: https://aitrends.com/selfdrivingcars/developer-burnout-and-ai-self-driving-cars/

Conclusion

There are going to be AI systems that are devised to work only on an emergency basis.

Astute ones will be designed to silently detect what is going on and be ready to step into a task when needed.

We’ll need though to make sure that humans know when and how the AI is going to take action. Those humans too will be imperfect and potentially forget that the AI is there, or might even end up fighting with the AI if the human believes that the AI is wrong to take action or otherwise has qualms about the AI.

We usually think of an emergency as a situation involving the need for an urgent intervention in order to avoid or mitigate the chances of injury to life, health, or property. There is a lot of judgment that often comes to play when declaring that a situation is an emergency. When an automated AI system tries to help out, clarity will be needed as to what constitutes an emergency and what does not.

The medical principle of primum non nocere, commonly associated with the Hippocratic Oath, means first do no harm.

An emergency-only AI system for a self-driving car is going to have a duty to abide by that principle, which I assure you is going to be a high burden to bear.

The emergency-only AI approach is not as easy a path as some might at first glance assume, and indeed for some it might even be considered insufficient, while for others it is a step forward toward the goal of a fully autonomous AI self-driving car.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

Source: AI Trends