Car Wash Complexities and AI Autonomous Cars


Having the AI self-driving car successfully navigate a car wash is not a high priority for developers, but it would provide value for car owners. (GETTY IMAGES)

By Lance Eliot, the AI Trends Insider

The other day I went to my local car wash here in California.

After getting my car washed, I was provided with a coupon that said if it rained within the next 48 hours, I could come back for a free car wash.

When I showed this to a colleague visiting here from the East Coast, he was surprised about the coupon and said he had never heard of such a thing being provided to car wash customers.

I was surprised that he was surprised, since this is a pretty customary offer here in California and has been as long as I can remember.

The basis for the coupon is that we rarely get rain here. Los Angeles receives a paltry 12 inches of rain per year, falling on only about 35 days of the year (meaning that roughly 90% of the year has no rain!). The car wash businesses want to make sure that locals don’t freak out and avoid getting a car wash when there’s even a chance of rain.

Indeed, many here would look up at the clouds, and if it looks gloomy, they might put off getting a car wash, reasoning that there’s no point in wasting money on a wash only to have the car pelted by rain and those pesky raindrops mar your shiny car. The coupon is an easy way to ensure that potential customers feel confident about getting a car wash, regardless of whether rain might occur.

When I was a young child, my parents had me wash our own cars as a means of earning my allowance.

Rather than taking the cars to a professional car wash, they had me, the youthful amateur car washer, do so instead. It was a tedious and laborious task. Put clean water into buckets, have clean sponges and rags, be ready to wax the car after washing it, make sure to vacuum the inside of the car, and so on. I tried to make it into a game, sometimes timing myself to see how fast I could go or coming up with variant ways of doing the washing. To me, it seemed like a misuse of human labor since there were already automated car washes and I failed to see why we would not use “robotics” instead of a human to do the chore (little did I realize then that someday I would become heavily involved in AI and robots!).

Today, there are places in California that have outlawed washing your car at home.

This also is a surprise to some out-of-towners.

The basis for the law is that you might tend to waste water when washing your car, and here in parts of California we are water “starved” and required to conserve water. Presumably, a professional car wash is supposed to not only use just the right amount of water but also have provisions to reclaim the water. Furthermore, another reason that home car washes are discouraged is that the run-off from the car wash, which might include grease and oil and other contaminants, might flow into the sewer system and end up polluting our oceans. Professional car washes are supposed to have provisions to trap this run-off or otherwise contend with it.

Car Washing Is Big Business

You might be wondering whether professional car washing is much of a business.

We are all used to seeing car washes on various street corners, often associated with gas stations. Is there a lot of money to be made via a car wash? Yes, indeed. In the United States alone, there are an estimated 16,000+ car wash establishments, and it is estimated to be a nearly $10 billion industry, often commanding a hefty profit.

Something seemingly as simple as washing cars is big business.

Maybe I should have continued my amateur car washing and progressed it into a professional car wash!

The industry is dominated by smaller mom-and-pop car washes in the sense that the top 50 car washing firms only have about 20% of the total market.

This means that the market is very fragmented.

There isn’t a handful of massive car wash firms that run things. Instead, there are lots and lots of car wash owners and car wash operators, all vying to compete with each other.

Competing can be fierce in the car wash business.

Generally, the biggest competitive advantage and the one main success criterion for any car wash is its location. Car washes are considered a location-based business. People need to get their cars to your car wash. People don’t want to have to go out of their way to get to the car wash. If there’s a car wash a block from their home, and another one three blocks away, it’s going to require something extraordinary to get those drivers to take their cars those few blocks further to get their cars washed. If you are the only car wash in town, you’ve kind of got it made, since the alternative of washing by hand at home is now passé, as I’ve mentioned earlier.

Location is key.

You still need to have a car wash that actually does car washing. Even the best of locations can be undermined by providing shoddy car washes. People will figure this out and word will spread. Other than unsuspecting drivers, you won’t get any repeat business. You need to leverage a good location and make sure that you provide sufficient quality and consistency of your car washes.

Of course, if your car wash and another one are pretty close in terms of location, you can try to differentiate your car wash.

They all pretty much need to be able to wash, clean, and wax, in terms of the services being provided. Those are the basics needed to be playing the game, the so-called table stakes. Time is an important factor for most customers, and so the faster your car wash can complete the service, the better it is perceived. But, you cannot sacrifice the basics for the speed, in the sense that even if you are a faster washer, if the end result is a car not as clean as some other slightly slower car wash, the odds are that people will figure this out and no longer consider your faster speed of much value.

Here’s the mantra of car washes, and what people expect and clamor for out of a car wash: clean, dry, and shiny.

This can be achieved by providing a hands-off automated service operation.

People drive up, usually enter a code to activate the car wash, drive forward into the car wash, remain in their cars as the car gets washed and waxed, and then proceed out of the car wash when finished. These are the tunnel systems that have become prevalent at most car washes. There are also the full-service operations, consisting of labor that will drive your car forward for you, do hand drying, and vacuum the inside of your car. Most car washes choose one or the other of the two approaches.

One means to gain some added revenue and profit involves selling merchandise at the car wash.

The full-service car washes especially do this since the human driver usually gets out of their car and has nothing much else to do while the car is being washed. Might as well see if the car wash can make some more bucks off those idle customers. This though also ups the ante on the nature of the experience for the customer. If a customer interacts with a scowling retail clerk, the customer might decide to never come back to the car wash, even though the car wash itself might be doing a wonderful job washing cars.

Speaking of labor, the automated operations have reduced the labor that used to be involved in car washing. There are some that cling to the belief that the labor-based full-service car washes are much better than the automated no-labor ones, but overall the market has shown that the “express” washes have grown like weeds and obviously have satisfied a significant segment of the market.

Car washes will try to encourage loyalty by offering various loyalty cards or clubs to customers. Purchase five car washes and get the sixth one free. Some go the subscription route, wherein you buy a year’s worth of car washes or maybe even unlimited number of annual car washes. There can also be discounts and special programs involved. Veterans get a 10% discount. Or, if your child goes to the local high school, you get a discount on your car wash.

So, to recap, we seem to really want to have our cars washed, as evidenced by the multi-billion-dollar industry of professional car washing.

Car washing is more than just an idle concept; it’s a big business and one that consumers seem to relish.

Car Washes And Autonomous Cars

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI systems for self-driving cars. One of the “edge” problems involves how AI self-driving cars can handle car washes.

When I refer to an edge problem, it means a type of problem not considered at the core of an otherwise larger problem. In the case of AI self-driving cars, being able to have the AI drive a car is at the core of the driving task. You want to make sure that the AI can properly undertake the driving while the car is on the highway, doing so while in the inner-city areas, and while in the suburbs, etc. That’s the mainstay of the driving tasks for the AI.

For my article about edge problems, see: https://aitrends.com/selfdrivingcars/edge-problems-core-true-self-driving-cars-achieving-last-mile/

For why AI self-driving cars are a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

Having the AI be able to properly navigate and undertake a car wash is admittedly quite a bit further down on the list of priorities.

Nonetheless, it is an interesting problem and one that obviously provides some value to car owners, given the rather sizable nature of the car washing industry.

Imagine if you have a brand-new shiny AI self-driving car, but it cannot make its way into and through a car wash. This would seem like a let-down and in fact suggest that the AI is rather weak if it cannot handle something as simple as contending with a car wash. I’ve previously written about how the same thing can apply to other areas of driving tasks, such as being able to handle tolls at bridges.

For my article about weaknesses of AI self-driving cars, see:  https://aitrends.com/selfdrivingcars/ten-human-driving-foibles-self-driving-car-deep-learning-counter-tactics/

For my piece about tolls, see: https://aitrends.com/selfdrivingcars/toll-roads-traversal-self-driving-cars-entry-exit-complexities/

The car wash industry would certainly want to be able to tap into doing car washes for AI self-driving cars.

There are an estimated 250+ million conventional cars today in the United States, and presumably they will ultimately be supplanted by AI self-driving cars. It won’t happen overnight. And, it is most likely that the AI self-driving cars will be new purchases, rather than somehow retrofitting the existing conventional cars. But, if somehow it is difficult or arduous to get AI self-driving cars to enter a car wash and get washed, this would not be good for the car wash industry.

This point about the AI self-driving cars being new purchases ties again to the topic of car washes in another facet.

The newer the car, the more likely that consumers take their car to the car wash.

The older the car, the less likely they take their car to the car wash.

This makes sense when you ponder it for a moment. If I have a new shiny car, I want it to look new and shiny, and be able to show it off and enjoy the newness of it. If I have an older somewhat beat-up car, scratches included and other divots, it probably wouldn’t matter much to me whether it looks new and shiny. In fact, I suppose the dirt and grime might help to hide the fact that it is an older and somewhat downtrodden car.

With the gradual sunsetting of conventional cars, people will likely not go to car washes as much.

No need to take in your conventional car that’s becoming gradually and progressively outdated.

Booming Business For Car Washing

With the advent of AI self-driving cars, since those are generally going to be new cars, people will likely want to go to car washes again. Therefore, over time, the car wash industry will see quite an impact from decreasing consumer interest in washing their conventional cars, and presumably a rising interest in getting their AI self-driving cars washed.

There are other factors that might further boost the car washing industry as a result of the advent of AI self-driving cars.

One is that it is anticipated that most AI self-driving cars will be turned into ridesharing services. This makes sense in that if you have a self-driving car that can be driving 24×7, and if you can make money by renting it out, you would likely do so. In that sense, AI self-driving cars will need to look nice, presumably, as a means of appearing attractive to the ridesharing public, and with the self-driving cars being on-the-go 24×7 there are heightened chances of them getting dirty or at least dirty looking.

Could be good times for car washes!

Those AI self-driving cars that are involved in ridesharing might be coming to car washes with a high frequency. This keeps the AI self-driving car looking in good shape. And, since the AI self-driving car is on-the-road a lot, it will likely need to get washed more frequently than today’s conventional cars. As an analogy, some salespeople that drive their cars all day long here in the Los Angeles area tend to get their cars washed several times a week, wanting to keep the car looking shiny and also to deal with the dust and grime that gets onto their constantly driven cars.

Let’s go ahead and assume therefore that there will be interest in having AI self-driving cars go to the car wash.

I think we can all agree to that notion.

You might quibble about the frequency aspects, but in any case, it seems reasonable to believe that owners of AI self-driving cars will want to get those cars washed, from time-to-time or for a lot of the time.

What’s the big deal, you might ask, it’s a car and it’s getting washed. End of story.

Not so fast!

We can dig further into this topic.

First, I’d bet that the times of day that an AI self-driving car will be going to a car wash might differ overall than today’s conventional cars.

Think about that for a moment.

Today’s conventional cars require that a human driver takes the car to the car wash. This generally means that the time chosen is a time best suited to the human driver. I might have a lunch break and use that time to take my car to the car wash. I might do so after work, or on the weekend.

In the case of the AI self-driving car, for a true Level 5 self-driving car, which is one that can drive without any human driver on-board the car, the AI self-driving car can be sent to the car wash at the bidding of the car owner. This can happen any time of the day. If I were ridesharing out my AI self-driving car, I would likely want to have it fully available during the prime time of when people need a ridesharing pick-up. It wouldn’t make sense for me to send my AI self-driving car to the car wash when it could otherwise be making me money by doing ridesharing.

So, the odds are that I’d send my AI self-driving car to the car wash at oddball times, such as say 3:00 a.m., when presumably there are few or no ridesharing opportunities occurring.

This means that car wash owners need to realize that they might see a radical shift of when cars come to their car washes. If you are a labor-based car wash, you might need to reconsider the work shifts of your labor. If you are a fully automated car wash, this change in times might not impact your labor, but it also means that your car wash is going to be in higher use at oddball times, and if it breaks down or needs maintenance, that’s going to happen at oddball times too.

For my article about the levels of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For my overall framework about AI self-driving cars, see: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

Navigating In A Car Wash Can Be Tricky

Another facet of AI self-driving cars and car washes will be the likelihood that there is no human occupant in the self-driving car when it arrives at the car wash.

This means that the car wash itself cannot rely upon a human being to aid in the process of having the car proceed into and undertake the car wash. It’s going to be done entirely with the AI system of the self-driving car.

This lends itself to technology-related solutions.

The car wash might be outfitted with Internet of Things (IoT) devices that can readily electronically communicate with the AI self-driving car. This would allow the AI and the car wash to engage in an electronic dialogue about what needs to be done. It’s almost like having an air-traffic-controller that can guide the self-driving car, such as move to the front of the tunnel, move forward onto the conveyor belt, stop now that you are on the conveyor belt, and so on.
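
To make this concrete, here is a minimal sketch in Python of what such an electronic dialogue might look like. The message names, the step sequence, and the `GuidanceMessage` structure are all illustrative assumptions on my part, not an actual car wash industry protocol.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Command(Enum):
    """Hypothetical guidance commands a 'smart' car wash might issue."""
    ADVANCE_TO_TUNNEL = auto()
    MOUNT_CONVEYOR = auto()
    STOP = auto()
    SHIFT_TO_NEUTRAL = auto()
    EXIT_TUNNEL = auto()


@dataclass
class GuidanceMessage:
    command: Command
    detail: str


def car_wash_dialogue():
    """Yield step-by-step instructions, much like an air-traffic-controller."""
    yield GuidanceMessage(Command.ADVANCE_TO_TUNNEL, "Move to the front of the tunnel.")
    yield GuidanceMessage(Command.MOUNT_CONVEYOR, "Drive forward onto the conveyor belt.")
    yield GuidanceMessage(Command.STOP, "Stop now that you are on the conveyor belt.")
    yield GuidanceMessage(Command.SHIFT_TO_NEUTRAL, "Shift to neutral for the duration of the wash.")
    yield GuidanceMessage(Command.EXIT_TUNNEL, "Wash complete; proceed out of the tunnel.")


# The on-board AI would acknowledge each step before the next one is sent.
for msg in car_wash_dialogue():
    print(f"{msg.command.name}: {msg.detail}")
```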

For those car washes that won’t modernize, the AI could try to do the same things that human drivers do today.

This often involves reading signs that describe what to do. The AI could use its sensors to try and figure out where the self-driving car needs to be placed within the washing system. This can be trickier than it seems, since if the AI places the self-driving car too far to the left or right of some obstruction, the equipment could end up striking the self-driving car. If you’ve ever driven into an automated car wash, you likely know the “dance” involved of you maneuvering the car and the car wash trying to crudely convey to you where the car needs to be (sometimes they flash lights, sometimes they blare a horn).

For my article about IoT, see: https://aitrends.com/selfdrivingcars/internet-of-things-iot-and-ai-self-driving-cars/

For my article about reading of signs by AI self-driving cars, see: https://aitrends.com/selfdrivingcars/making-ai-sense-of-road-signs/

For defensive driving tactics of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/art-defensive-driving-key-self-driving-car-success/

Dealing With Dumb Versus Smart Washes

The effort by the AI to contend with a “dumb” car wash is going to be much greater than with a modernized “smart” car wash that can electronically use IoT or the equivalent. As such, those car washes that are slow to modernize might find themselves at a disadvantage in attracting owners of AI self-driving cars, who will not want to send their cars to an outdated car wash.

This brings up another significant point about the fundamental nature of car washes, which I’ve mentioned earlier is their location.

Will the location of a car wash still matter in a world of AI self-driving cars?

You might say that it won’t be as important anymore. The AI self-driving car can be sent to wherever the owner opts to send it. This is a factor that will no longer depend on the human driver. We usually go to a car wash near our home or work place. With an AI self-driving car, the owner of the self-driving car can just tell it to go to anyplace that the owner thinks is best to have the car get washed.

It could be that the owner of an AI self-driving car will want to keep it mainly in a geographical area that has the best odds of getting ridesharing. If they are also using it for personal driving purposes, they’d obviously still want the AI self-driving car to come to their home and their workplace. In that sense, there’s some hope for car wash locations of today in that the owners might still want to have the car washed near their home or workplace. But, this is not something quite as guaranteed as it is with today’s conventional cars.

Another facet of car washes will be whether or not they are able to accommodate the physical aspects of an AI self-driving car.

The versions of AI self-driving cars that are being utilized today tend to have a LIDAR system on the top of the car, and have various sensitive cameras, radar, sonic sensors that are embedded just under the skin of the car or sometimes mounted on the exterior of the self-driving car.

For more about LIDAR, see my article: https://aitrends.com/selfdrivingcars/lidar-secret-sauce-self-driving-cars/

If you drive an AI self-driving car into a conventional car wash, the ones that have the various brushes and aren’t touchless, the question will be whether the AI self-driving car will survive the car wash. It could be that the lenses might get scratched or some sensors might be sheared off. A car wash that wants to attract AI self-driving cars will need to make sure it can accommodate any of the physical considerations associated with an AI self-driving car.

This also brings up whether the car wash will also be doing anything inside the AI self-driving car.

I would tend to think car washes would perceive the interior cleaning aspects to be a good potential money maker. Here’s why. If you are using your AI self-driving car for ridesharing, and someone drunkenly upchucks while in your AI self-driving car, as the owner you probably don’t want to deal directly with cleaning up the mess, and so instead you would likely route your AI self-driving car to the nearest car wash that can provide that kind of cleaning service.

It would seem like an owner of an AI self-driving car is likely to consider using car washes to help keep the interior of the car clean. This is good news for the car washes. It could be that you might need to route your AI self-driving car to the car wash every day, just to keep it cleaned-up after all the people that have ridden in your self-driving car throughout the day have dirtied it. As owner of the self-driving car, you could do the cleaning yourself, but I’d bet that most AI self-driving car owners would aim to have a car wash do it, if the price is right.

For my article about the affordability of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/affordability-of-ai-self-driving-cars/

I can imagine that car washes will provide a range of specialized services for AI self-driving cars.

This could be a key differentiator as to why an owner sends their self-driving car to car wash X versus car wash Y.

Some added twists will be that the car owner can presumably be monitoring the car wash while their AI self-driving car is at one. Via the cameras on the AI self-driving car, the owner could presumably on their smartphone bring up what the self-driving car sees and watch as the car wash undertakes the services requested. There are likely inward facing cameras too, and thus when the car wash has someone doing cleaning inside of the self-driving car, the owner can watch that too.

Not only could the owner watch what is happening, they presumably can interact too with whomever is at the car wash. For the inside cleaning of a car, right now it’s mainly a manual effort. The outside cleaning can be readily automated, but the interior cleaning is not so easily automated. As such, assuming that the car wash has labor that goes into the AI self-driving car to clean it, the owner can watch what is being done and likely even interact with the labor (hey, you missed a spot right there on the backseat, please wipe it again).

Another aspect could be the scheduling of having an AI self-driving car go to a car wash. I’m sure you’ve had moments where you drove to a car wash and there were several cars ahead of you. You had to either wait it out, or decide to come back. With an AI self-driving car, if it’s being used for other purposes such as ridesharing, having it sit at a car wash waiting to get washed is not a good use of its time. Therefore, a “smart” car wash would likely put in place an electronic scheduling system.

An AI self-driving car could communicate over the Internet with a car wash scheduling system, indicate that it wants to come to the car wash in twenty minutes, and make an appointment to do so. This could be done via the same on-board mechanism that the AI self-driving car uses for OTA (Over the Air) updates.
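
As a rough sketch of how such an appointment request might work, consider the toy scheduler below; the `WashScheduler` class, the 15-minute slot granularity, and the field names are my own illustrative assumptions, not any real car wash API.

```python
from datetime import datetime, timedelta


class WashScheduler:
    """Toy in-memory appointment book for a hypothetical 'smart' car wash."""

    def __init__(self, bay_count: int = 2):
        self.bay_count = bay_count
        self.bookings = {}  # slot start time -> list of booked car ids

    def request_slot(self, car_id: str, desired: datetime) -> datetime:
        """Book the desired 15-minute slot, or the next slot with a free bay."""
        slot = desired.replace(minute=(desired.minute // 15) * 15,
                               second=0, microsecond=0)
        while len(self.bookings.get(slot, [])) >= self.bay_count:
            slot += timedelta(minutes=15)
        self.bookings.setdefault(slot, []).append(car_id)
        return slot


# The self-driving car asks to arrive in twenty minutes, per the example above.
scheduler = WashScheduler()
confirmed = scheduler.request_slot("sdc-0042", datetime.now() + timedelta(minutes=20))
print(f"Appointment confirmed for {confirmed:%H:%M}")
```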

There might even be the use of blockchain for keeping track of car washes undertaken by AI self-driving cars, which could also aid in the electronic payment for the use of the car wash. All in all, there are myriad ways in which automation can make the entire life cycle, from seeking a car wash, to going there, to getting washed and paying, something that requires no particular human intervention.
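
To illustrate the record-keeping notion, here is a minimal hash-chained ledger sketch in Python; it conveys the tamper-evident flavor of a blockchain, though the record fields are assumptions on my part and a production payment system would involve far more.

```python
import hashlib
import json
import time


def add_wash_record(chain: list, car_id: str, service: str, fee: float) -> dict:
    """Append a wash record whose hash links it to the previous record."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"car_id": car_id, "service": service, "fee": fee,
              "timestamp": time.time(), "prev_hash": prev_hash}
    # Hash the record contents so later tampering is detectable.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)
    return record


ledger = []
add_wash_record(ledger, "sdc-0042", "exterior-express", 12.00)
add_wash_record(ledger, "sdc-0042", "interior-detail", 35.00)
# Altering any earlier record would break the prev_hash linkage.
print(ledger[-1]["prev_hash"] == ledger[0]["hash"])  # True
```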

For my article about OTA, see: https://aitrends.com/selfdrivingcars/air-ota-updating-ai-self-driving-cars/

For the use of blockchain, see my article: https://aitrends.com/business-applications/dun-bradstreet-eyes-blockchain-machine-learning-projects/

Conclusion

The famous song by Rose Royce about car washes relates that you might not ever get rich working at a car wash, but it’s at least better than digging a ditch.

Generally, the already reduced use of labor at car washes is likely to continue, though until there’s an automated solution for cleaning of the interior of a car (a robot?), there’s still some amount of labor required.

In any case, the advent of AI self-driving cars will not do away with the need for car washes and to the contrary would seem to bolster the need for car washes. For those out there that are thinking of investing in a car wash, it seems like a reasonably good bet, but you’ll need to be willing to modernize your car wash for it to be well-aligned with the needs of AI self-driving cars and the human owners of those self-driving cars.

See you at the car wash!

Copyright 2020 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]


Ethics In AI Awareness And AI Autonomous Cars


Ethically, developers of AI self-driving cars need to be aware of incidents happening in the real world, from small ones to accidents resulting in fatalities. (GETTY IMAGES)

By Lance Eliot, the AI Trends Insider

One of the first jobs that I took after having earned my degree in computer science involved doing work for a large manufacturer of airplanes.

I was excited about the new job and eager to showcase my programming prowess.

My friends that had graduated when I did were at various other companies, and we were all vying to see which of us would get the juiciest project. There were some serious bragging rights to be had. The more important and world-shaking the project might be, the more you could lord it over the rest of us.

My manager handed me some specs that he had put together for a program that would do data analysis.

Nowadays, you’d likely use any of a myriad of handy data analysis tools to do the work required, but in those days the data analysis tools were crude, expensive, and you might as well build your own. He didn’t quite tell me what the data was about and instead just indicated the types of analyses and statistics that my program would need to generate based on the data.

I slaved away at the code.

I got in early and left late.

I was going to show my manager that I would do whatever it took to get the thing going in the shortest amount of days that I could. I had it working pretty well and presented the program to him. He seemed pleased and told me he’d be using the program and would get back to me. After about a week, he came back and said that some changes were needed based on feedback about the program.

He also then revealed to me the nature of the data and the purpose of the effort.

It had to do with the design of airplane windshields.

You’ve probably heard stories of planes that take-off in some locales and encounter flocks of birds. The birds can potentially gum up the engines of the plane. Even more likely is that the birds might strike the windshield and fracture it or punch a hole in it. The danger to the integrity of the plane and the issues this could cause for the pilots is significant and thus worthwhile to try and design windshields to withstand such impacts.

The data that my program was analyzing consisted of two separate datasets.

First, there was data collected from real windshields that, in the course of flying on planes around the world, had been struck by birds. Second, the company had set up a wind tunnel that contained various windshield designs and was firing clay blobs at the windshields. My program analyzed the various impacts to the windshields and also compared the test windshields used in the wind tunnel against the real-world impacted ones.

I right away contacted my former college buddies and pointed out that my work was going to save lives. Via my program, there would be an opportunity to redesign windshields to best ensure that newer windshields would have the right kind of designs. Who knew that my first program out of college would have a worldwide impact? It was amazing.

I also noted that whenever any of my friends were to go flying in a plane in the future, they should look at the windshield and be thinking “Lance made that happen.”

Bragging rights for sure!

What happened next though dashed my delight to some degree.

After the windshield design team reviewed the reports produced by my program, they came back to me with some new data and some changes needed to the code. I made the changes. They looked at the new results. About two weeks later, they came back with newer data and some changes to be made to the code. No one had explained what made this data any different and nor why the code changes were needed. I assumed it was just a series of tests using the clay blobs in the wind tunnel.

Turns out that the clay blobs were not impacting the windshields in the same manner as the real-world results of birds hitting the windshields. Believe it or not, they switched to using frozen chickens instead of the clay blobs. After I had loaded that data and they reviewed the results, they determined that a frozen chicken does not have the same impact as a live bird. They then got permission to use real live chickens. That was the next set of data I received, namely, data of living chickens that had been shot out of a cannon inside a wind tunnel and that were smacking against test windshields.

When I mentioned this to my friends, some of them said that I should quit the project. It was their view that it was ethically wrong to use live chickens. I was contributing to the deaths of living animals. If I had any backbone, some of them said, I would march into my manager’s office and declare that I would not stand for such a thing. I subtly pointed out that the loss of the lives of some chickens was a seemingly small price to pay for better airplane windshields that could save human lives. Plus, I noted that most of them routinely ate chicken for lunch and dinner, and so obviously those chickens had given their lives for an even less “honorable” act.

What would you have done?

Ethical Choices At Work

While you ponder what you would have done, one salient aspect to point out is that at first I was not aware of what the project consisted of. In other words, at the beginning, I had little awareness of what my efforts were contributing toward. I was somewhat in the blind. I had assumed that it was some kind of “need to know” type of project.

You might find it of idle interest that I had worked on some top security projects prior to this effort, projects that were classified, and so I had been purposely kept in the dark about the true nature of the effort. For example, I wrote a program that calculated the path of “porpoises” trying to intersect with “whales” — my best guess was that maybe the porpoises were actually submarines and the whales were surface ships like navy destroyers or carriers (maybe that’s what it was about, or maybe something completely different!).

In the case of the live chickens and the airplane windshields, upon my becoming more informed and with the realization of what I was contributing toward, presumably the added awareness gave me a chance to reflect upon the matter.

Would my awareness cause me to stop working on the effort?

Would my awareness taint my efforts such that I might do less work on it or be less motivated to do the work?

Might I even try to somehow subvert the project, doing so under the “justified” belief that what was taking place was wrong to begin with?

If you are interested in how workers begin to deviate from the norms as they get immersed in tech projects, take a look at my article: https://aitrends.com/selfdrivingcars/normalization-of-deviance-endangers-ai-self-driving-cars/

For the dangers of company groupthink, see my article: https://aitrends.com/selfdrivingcars/groupthink-dilemmas-for-developing-ai-self-driving-cars/

For background about some key ethics of workplace and society issues, see: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

Workplace Awareness Of Ethical Matters

When referring to how workplace-related awareness can make a potential difference in worker behavior, a recent study of that phenomenon gained national interest.

The study examined the opioid drug crisis occurring in the United States.

There are many thousands of deaths each year due to opioid overdoses, and an estimated nearly 2 million Americans are addicted to opioids. According to the study, part of the reason that opioid use has vastly increased over the last two decades is the prescribing of opioids for pain relief and for similar purposes.

Apparently, medical doctors had gotten used to prescribing opioids and did so without necessarily overtly considering the downsides of becoming possibly addicted to it. If a patient can be helped now by giving them opioids, it’s an easy immediate solution for them. The patient is presumably then happy. The doctor is also happy because they’ve made the patient happy. Everyone would seem to be happy. This is not as true if you consider the longer term impacts of prescribing opioids.

The researchers wondered whether they could potentially change the behavior of the prescribing medical doctors.

Via analyzing various data, the researchers were able to identify medical doctors that had patients that had suffered opioid overdoses. Dividing that set of physicians into a control group and an experimental group, the researchers arranged for those in the experimental group to receive a letter from the county medical examiner telling the medical doctor about the death and tying this matter to the overall dangers of prescribing opioids.

The result seemed to be that the medical doctors in the experimental group subsequently dispensed fewer opioids. It was asserted that the use of the awareness letters as targeted to the medical doctors was actually more effective in altering their behavior than the mere adoption of regulatory limits related to prescribing opioids.

By increasing the awareness of these physicians, this added awareness apparently led to a change in their medical behavior.

You can quibble about various aspects of the study, but let’s go with the prevailing conclusions for now, thanks.

Ethics Awareness By AI Developers And AI Autonomous Cars

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI systems for self-driving cars. We also remain very much aware of any incidents involving AI self-driving cars and discuss those incidents with our teams, regardless of whether those incidents relate directly to any of our work per se.

In essence, we believe that it is important for every member of the team, whether an AI developer, QA specialist, hardware engineer, project manager, and so on, for them to be aware of what’s happening throughout industry about AI self-driving cars. Small incidents to big incidents, ones involving no injuries to ones involving deaths, whatever the incident might be it is considered vital to consider it.

Should the auto makers and tech firms that are also developing AI self-driving cars do likewise?

There’s no written rule that says there is any obligation of the automaker or tech firm to keep their AI developers apprised of AI self-driving car incidents. Indeed, it’s often easy to ignore incidents that happen to competing AI self-driving car efforts. Those dunces, they don’t know what they are doing, can sometimes be the attitude involved. Why look at what they did and figure out what went wrong, since they were not up-to-snuff anyway in terms of their AI self-driving car efforts. That’s a cocky kind of attitude often prevalent among AI developers (actually, prevalent among many in high-tech that think they have the right-stuff!).

So, the question arises as to whether promoting awareness of AI self-driving car incidents to AI self-driving car developers would be of value to the automakers and tech firms developing AI self-driving cars and their teams. You might say that even if you did make them aware, what difference would it make in what they are doing. Won’t they just continue doing what they are already doing?

The counter-argument is that like the prescribing medical doctors, perhaps an increased awareness would change their behavior. And, you might claim that without the increased awareness there is little or no chance of changing their behavior. As the example of the chickens and the airplane windshield suggests, if you don’t know what you are working on and its ramifications, it makes it harder to know that you should be concerned and possibly change course.

In the case of the opioid prescribing medical doctors, it was already ascertained that something was “wrong” about what the physicians were doing. In the case of the automakers and tech firms that are making AI self-driving cars, you could say that there’s nothing wrong with what they are doing. Thus, there’s no point to increasing their awareness.

That might be true, except for the aspect that most of the AI self-driving car community would admit, if pressed, that they know their AI self-driving car is going to suffer an incident someday, somehow. Even if you’ve so far been blessed to have nothing go awry, it’s going to happen that something will go awry. There’s really no avoiding it. Inevitably, inexorably, it’s going to happen.

There are bound to be software bugs in your AI self-driving car system. There are bound to be hardware exigencies that will confuse or confound your AI system. There are bound to be circumstances that will arise in a driving situation that will exceed what the AI is able to cope with, and the result will at some point produce an adverse incident. The complexity of AI self-driving car systems is relatively immense and the ability to test all possibilities prior to fielding is questioned.

For issues of irreproducibility and AI self-driving cars, see my article: https://aitrends.com/ai-insider/irreproducibility-and-ai-self-driving-cars/

For pre-mortems about AI self-driving cars, see my article: https://aitrends.com/ai-insider/pre-mortem-analysis-for-ai-self-driving-cars/

For my article on software neglect issues, see: https://aitrends.com/ai-insider/software-neglect-will-impede-ai-self-driving-cars/

For the likely freezing robot problem and AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/freezing-robot-problem-and-ai-self-driving-cars/

Furthermore, there is a perceived rush to get AI self-driving cars on our public roadways, at least by some.

The auto makers and tech firms tend to argue that the only viable means to test out AI self-driving cars is by running them on our public roadways.

Simulations, they claim, can only do so much.

Proving grounds, they say, are limited and there’s only so much you can discover.

The public roadways are the means to get us to true AI self-driving cars. The risks to the public are presumed to be worth the assumed faster pace to perfecting AI self-driving cars.

You’ve got to accept some public pain to gain a greater public good, some say.

For public trust issues about AI self-driving cars and the makers of them, see: https://aitrends.com/ai-insider/roller-coaster-public-perception-ai-self-driving-cars/

Are AI developers and other tech specialists involved in the making of AI self-driving cars keeping apprised of what is going on in terms of the public roadways trials and especially the incidents that occur from time-to-time?

On an anecdotal basis of asking those that I meet at industry conferences, many are so focused on their day-to-day job and the pressures to produce that they find little time or energy to keep up with the outside world per se. Indeed, at the conferences, many times they tell me that they have scooted over to the event for just a few hours and need to rush back to the office to continue their work efforts.

The intense pressure by their workplace and their own internal pressure to do the development work would seem to be preoccupying them.

I’ve mentioned before in my writings and speeches that there is a tendency for these developers to get burned out.

For my article about the burnout factor of AI developers, see: https://aitrends.com/selfdrivingcars/developer-burnout-and-ai-self-driving-cars/

For my article about the recent spate of accidents with AI self-driving cars, see: https://aitrends.com/ai-insider/accidents-contagion-and-ai-self-driving-cars/

Proposed Research Project Focused On AI Developers

Here’s then a proposed research project that would be interesting and informative to undertake.

Suppose that akin to the research on physicians and the awareness of opioids prescribing, we were to do a study of AI self-driving car developers and their awareness of AI self-driving car incidents. The notion would be to identify to what degree they have awareness in mind already, and whether increased awareness would aid in their efforts.

A null hypothesis could be: Developers of AI self-driving cars have little or no awareness of AI self-driving car incidents.

The definition of awareness could be operationalized by indicating that it consists of having read or seen information about one or more AI self-driving car incidents in the last N number of months.

This hypothesis is structured in a rather stark manner by indicating “no awareness,” which would presumably be easiest to reject. One would assume or hope that these developers would have some amount of awareness, even if minimal, about relatively recent incidents.

The next such hypothesis could examine the degree of awareness. For example, maybe levels such as Q, R, S, and T number of impressions about incidents in the last N months, wherein we use say Q=1, R=2-4, S=5-7, T=8+, in order to indicate ranges of awareness. One potential flaw to simply using the number of impressions would be whether they are repetitive of the same incident, or another loophole is that they read or saw something but did so in a cursory way (this could be further tested by gauging how much they remembered or knew about the incident as an indicator of whether they actually gained awareness per se or not).
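
As a simple illustration, the bucketing just described might be operationalized along these lines (the labels and cut-offs merely mirror the hypothetical Q/R/S/T ranges above):

```python
def awareness_level(impressions: int) -> str:
    """Map a count of incident impressions in the last N months to a level."""
    if impressions >= 8:
        return "T"   # 8+ impressions
    if impressions >= 5:
        return "S"   # 5-7 impressions
    if impressions >= 2:
        return "R"   # 2-4 impressions
    if impressions == 1:
        return "Q"   # exactly 1 impression
    return "none"    # zero impressions, consistent with the null hypothesis


print([awareness_level(n) for n in (0, 1, 3, 6, 12)])
# -> ['none', 'Q', 'R', 'S', 'T']
```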

The next aspect to consider is whether awareness makes a difference in behavior.

In the case of the physicians and the opioids prescribing, it was indicated that their presumed increased awareness led to fewer prescriptions of opioids being written. We don’t know for sure that the increased awareness “caused” that change in behavior, and it could be that some other factor produced the change, but in any case, the study suggests or asserts that the two aspects went hand-in-hand.

What might an AI developer do differently as a result of increased awareness about AI self-driving car incidents?

We can postulate that they might become more careful and retrospective about the AI systems they are developing. They might take longer to develop their code in the belief that they need to be more cautious to pay attention to systems safety related aspects. They might increase the amount of testing time. They might use tools for inspecting their code that they hadn’t used before or might re-double their use of such tools. They might devise new safety mechanisms for their systems that they had not otherwise done previously.

They might within their firm become an advocate for greater attention and time towards AI systems safety. They might seek to collaborate more so with the QA teams or others that are tasked with trying to find bugs and errors and do other kinds of systems testing. They might seek to bolster AI safety related practices within the company. They might seek to learn more about how to improve their AI system safety skills and how to apply them to the job. They might push back within the firm at deadlines that don’t take into account prudent AI systems safety considerations. And so on.

For my framework on AI self-driving cars, see: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For purposes of a research study, it would be necessary to somehow quantify those potential outcomes in order to readily measure whether the awareness does have an impact. The quantification could be subjectively based; the developers could be asked to rate their changes based on a list of the possible kinds of changes. This is perhaps the simplest and easiest way to determine it. A more arduous and satisfying means would be to try to arrive at true counts of other signifiers of those changes.

Similar to the physicians and opioids study, there would be a control group and an experimental or treatment group. The treatment group might be provided with information about recent AI self-driving car incidents, and then, post-awareness, a follow-up some X days or weeks later would try to discern whether their behavior has changed as a result of the treatment. It would not be necessarily axiomatic that any such changes in behavior could be entirely construed as due to the awareness increase, but it would seem like a reasonable inference. There is also the chance of a classic Hawthorne effect coming into play, and the research study would want to consider how best to handle that.
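
For instance, assuming the subjective ratings were collected as numeric scores, the control-versus-treatment comparison might be analyzed with a standard two-sample test, as in this sketch (the scores below are entirely made-up for illustration):

```python
from scipy import stats

# Hypothetical post-study "behavior change" scores (e.g., summed self-ratings).
control_scores = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2]
treatment_scores = [4, 5, 3, 6, 4, 5, 3, 4, 6, 5]

# Welch's two-sample t-test: did the awareness treatment group change more?
t_stat, p_value = stats.ttest_ind(treatment_scores, control_scores, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```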

Conclusion

AI developers for self-driving cars are dealing with systems that involve life-and-death.

In the pell-mell rush to try and get AI self-driving cars onto our roadways, we all collectively need to be mindful of the dangers that a multi-ton car can have if the AI encounters difficulties and runs into other cars, or runs into pedestrians, or otherwise might lead to human injuries or deaths.

Though AI developers certainly grasp this overall perspective, in the day-to-day throes of slinging code and building Machine Learning systems for self-driving cars it can become a somewhat lost or lessened consideration, and instead the push to get things going can overtake that awareness. We believe fervently that AI developers need to keep this awareness at the forefront of their efforts, and by purposely allowing time for it and structuring it as part of the job effort, it is our hope that it makes a difference in the reliability and ultimate safety of these AI systems.

Copyright 2020 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]


Razzle-dazzle and AI Autonomous Cars


Whether AI self-driving cars should have a clear, identifying “razzle-dazzle” color scheme, like yellow taxicabs, is a subject of discussion. (GETTY IMAGES)

By Lance Eliot, the AI Trends Insider

We generally think of camouflage as a means to blend into the surroundings.

Many animals have a colorization pattern that allows them to seemingly disappear by lying quietly at a standstill on a tree branch or hiding inside a bush. Military personnel often put on specially designed camouflage clothing and use skin paints to try to likewise be indistinguishable from their surroundings. The hope is that the enemy cannot spot them and so would be unable to accurately aim at them for purposes of attack.

This overall notion of camouflage is just one of several classes of camouflage; in this case, the attempt at concealment is referred to as the crypsis form of camouflage.

There’s another form of camouflage that we don’t often consider, namely the kind that is known as motion dazzle, also sometimes called razzle-dazzle, or perhaps more easily understood by the name disruptive camouflage. In this instance, the desire is to actually stand out and be readily seen. In fact, being seen is the part that involves the camouflage trickery, specifically that the use of colors and shapes makes the observer take notice. In addition, the use of disruptive patterns and even countershading make it difficult for the observer to readily figure out the true shape and nature of what is being camouflaged.

Background About Disruptive Camouflage

This idea of using disruptive camouflage was extensively undertaken during World War I and also somewhat during World War II.

We tend to think of navy ships as always being painted a rather dull monotone grey color.

This would seem to be a wise choice.

At sea, the navy ships would tend to blend into the background of a grayish sky and a blue sea. Presumably, whales and dolphins use a similar colorization to try to blend into their surroundings. Here in Southern California, you can wander to San Diego and see lots of United States Navy ships docked in the harbor as it is a major west coast naval center. Many visitors often say to me that they didn’t even notice the big ships, in spite of the fact that there are many dozens of them all readily seen. Must be the grey monotone.

In a controversial manner, at the onset of World War I, there were some who suggested it would be better to use a disruptive camouflage pattern for navy ships rather than a monotone grey.

Various complex patterns involving striking geometric shapes were devised and painted onto navy ships of the time period. Thousands of such motion dazzle depictions were used. The colors and shapes intersect each other and interrupt each other.

It was said that Picasso himself found these ship colorization approaches to be inspiring.

Why would anyone in their right mind do this to navy ships?

Isn’t it just asking for those ships to be readily attacked by submarines, airplanes, and other enemy ships?

Wouldn’t it be better to try to hide or conceal the ships so that the enemy would not realize it is floating along on the sea?

There indeed were acrimonious debates on this topic.

Those that favored the disruptive camouflage claimed that the navy ships were inevitably going to be spotted anyway and that trying to hide or conceal them was generally ineffective.

If you are in agreement that the concealment approach is not going to be successful, then you would certainly be open to considering other options. The razzle-dazzle approach was intended to not only make the ships seen but that when seen it would be potentially confusing to the enemy as to what they were looking at.

An enemy might not be sure if the ship is a battleship or a destroyer. It would be hard to discern the type of ship and also whether the ship was heading toward you or away from you. The speed was not easy to estimate either. Are you looking at the bow or the stern of the ship? All of this disruptive kind of visualization was intended to confound the enemy. Since it was assumed that the enemy was likely going to spot the ships anyway, might as well try to make things hard for the enemy to figure out what the ship was and where it was going.

Those navies that adopted the razzle-dazzle were also smart enough to realize that if they painted certain kinds of ships the same way, such as all battleships in the same patterns and colors, it would undermine the purpose of the disruptive notion. It would mean that once the enemy figured out how you were discoloring that category of ships, they could easily then “break the code” and be able to figure out what kind of ship it was and how to interpret the colors and patterns. As such, those using the razzle-dazzle had to come up with varying patterns and use them in a somewhat unique manner for each individual ship.

There are zoological theories that suggest the zebra, jaguar, and giraffe are examples of a razzle-dazzle form of camouflage.

There is much contention whether those animals are indeed going the razzle-dazzle route or whether there is some other reason for the patterns and colors on their skin. For example, some say that the giraffe is actually attempting to use the traditional form of camouflage by having the colors that you might see associated with trees. Given that giraffes stand tall, maybe the idea is that nature led them toward concealment when among a clump of trees.

In terms of the navy use of disruptive camouflage, numerous scientific studies have tried to determine whether the razzle-dazzle is really more effective than the monotone grey. It would seem that most studies ultimately are unable to control sufficiently the numerous factors involved and it is a muddied picture as to whether the razzle-dazzle is truly an effective defensive technique.

There are some that say it is a freakish method and all it does is make the ships look like bizarre oddities.

Razzle-Dazzle And AI Autonomous Cars

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One aspect that is going to be crucial for the ultimate success of self-driving cars will be whether pedestrians are safe when around AI self-driving cars.

Allow me to elaborate.

First, let’s clarify that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the automakers are even removing the gas pedal, the brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

Let’s focus herein on the true Level 5 self-driving car. Much of the comments apply to the less than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

One of the top worries that the auto makers and tech firms have about AI self-driving cars will be the interaction of AI self-driving cars and pedestrians.

In theory, the AI self-driving car is supposed to be good enough at the driving task to be able to detect pedestrians. Once so detected, the AI should be doing what it can to avoid hitting pedestrians. There are some pundits that keep referring to “zero fatalities” once AI self-driving cars have been adopted on a widespread basis, but I’ve said many times that this zero-fatality notion is nonsensical.

Imagine that an AI self-driving car is driving down a street at the legally posted speed. Suppose it is 45 miles per hour, which is about 66 feet per second. A pedestrian is standing behind a pole that is adjacent to the road and right at the curb. Neither an AI self-driving car nor a human-driven car could see the person standing behind that pole. The pedestrian decides to step out into the street just as the AI self-driving car gets within a few feet of the pole (or, if you like, pretend it was a human driver; the same end result is going to occur).

By what magic is the AI self-driving car not going to hit that pedestrian?

For my article about pedestrians and AI self-driving cars, see: https://aitrends.com/selfdrivingcars/avoiding-pedestrian-roadkill-self-driving-cars/

For my article about the myth of zero fatalities, see: https://aitrends.com/selfdrivingcars/self-driving-cars-zero-fatalities-zero-chance/

I ask the question because the answer is rather plain and simple: the laws of physics are going to take over and the AI self-driving car is going to ram into that pedestrian. There was insufficient time to swerve the self-driving car. There was insufficient time to brake the self-driving car. There was no means to detect the pedestrian’s presence beforehand. Wham. It happened. That’s more than zero fatalities.
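
To put rough numbers on this, here’s a minimal back-of-the-envelope sketch in Python; the reaction time and braking deceleration are illustrative assumptions, not measured figures for any particular car:

```python
# Back-of-the-envelope stopping distance at 45 mph (illustrative values only).

speed_fps = 66.0          # 45 mph is about 66 feet per second
reaction_time_s = 0.5     # assumed sensing/actuation latency (AI) or reflex (human)
decel_fps2 = 22.5         # assumed hard braking at roughly 0.7 g (0.7 * 32.2 ft/s^2)

reaction_distance = speed_fps * reaction_time_s          # distance covered before braking
braking_distance = speed_fps ** 2 / (2 * decel_fps2)     # v^2 / (2a) from basic kinematics
total = reaction_distance + braking_distance

print(f"Reaction distance: {reaction_distance:.0f} ft")  # ~33 ft
print(f"Braking distance:  {braking_distance:.0f} ft")   # ~97 ft
print(f"Total:             {total:.0f} ft")              # ~130 ft
```

With roughly 130 feet of total stopping distance under these assumptions, a pedestrian stepping out just a few feet ahead is beyond saving by either a human or an AI.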

Well, I realize that you might object and say that it is preposterous that the pedestrian was unable to be seen.

Maybe I’ve contrived the situation in a manner that could never happen.

Obscured From View

I’m not sure where you drive, but I assure you that in the downtown Los Angeles area that I drive, it happens all the time that pedestrians are obscured from view.

One time, a FedEx worker had stacked a bunch of boxes next to the curb, standing about eight feet high or so.

Two pedestrians suddenly emerged from behind those boxes and stepped directly into the street. There was no viable means to have detected them beforehand.

Some of you might say that if they were jaywalking, it’s their fault for having illegally attempted to cross the street.

I’m not particularly discussing fault right now; I’m just trying to clarify and emphasize that pedestrians can mix with cars in a deadly fashion, and that an AI self-driving car is not necessarily going to entirely remedy that equation.

If you still aren’t convinced, and still believe that the “hidden” pedestrian is a falsehood, I’ll change the scenario and say that the pedestrian was completely seen by the car driver, whether it’s a human driver or an AI self-driving car. Suppose the pedestrian is completely visible and there’s no chance of not seeing the pedestrian.

Once again, if the pedestrian suddenly steps off the curb into the street, and if a car is approaching at 66 feet per second, and if the pedestrian gets in front of the car with just a second or two before impact, the laws of physics enter the picture. There will not be enough time to swerve the car or stop the car. It’s going to hit the pedestrian.

Why would a pedestrian do this? Is it a suicide by car? No (well, I hope not). There are pedestrians that routinely step into the street and aren’t at all paying attention to the cars. One recent concern of many municipalities is that people seem to be looking at their smartphones rather than the street traffic, and end-up making dumb moves into oncoming traffic. There are some local ordinances that have now made it against the law to be looking at your cell phone while crossing the street, even while doing so in a legal crosswalk.

In short, either a human-driven car or an AI self-driving car can end-up hitting and possibly killing a pedestrian, doing so for sure in the circumstance wherein the pedestrian opts to step into the street when there is no viable means of avoiding them, either by braking or swerving the car.

You might be thinking: shouldn’t the AI self-driving car do a better job on this than a human-driven car?

I suppose you could say that an AI self-driving car might do a better job in that the AI is presumably not distracted as a human driver might get distracted. A human driver might be looking at the other side of the street and noticing a new barbershop that’s opened up, and thus fail to notice the pedestrian to their right that suddenly steps into the street.

If there is a chance to avoid striking the pedestrian, by-and-large you might say that the AI will perhaps be more likely to do so, since it lacks the distraction factor that could beset a human driver. The AI is presumably continually scanning the surroundings, on all sides, and not going to allow itself to become preoccupied with one particular thing, such as the new barbershop.

Here are the usual steps involved in the AI driving task (a simplified sketch in code follows the list):

  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action planning
  • Car controls command issuance
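
To illustrate how those stages chain together, here is a minimal sketch of a single pipeline cycle; every class, function, and value in it is a hypothetical placeholder for illustration, not any automaker’s actual system:

```python
# Minimal skeleton of the AI driving-task pipeline; all pieces here are
# simplified placeholders for illustration, not a real autonomy stack.

class Pipeline:
    def __init__(self):
        self.world_model = {}                       # virtual world model

    def collect_and_interpret(self):
        # 1. Sensor data collection and interpretation (stubbed readings).
        return {"camera": ["pedestrian"], "radar": [12.0], "lidar": [11.8]}

    def sensor_fusion(self, readings):
        # 2. Reconcile the separate sensor views into one set of detections.
        return {"pedestrian": {"range_ft": min(readings["radar"] + readings["lidar"])}}

    def update_world_model(self, fused):
        # 3. Virtual world model updating.
        self.world_model.update(fused)

    def plan(self):
        # 4. AI action planning (trivially: brake if a pedestrian is close).
        ped = self.world_model.get("pedestrian")
        return "BRAKE" if ped and ped["range_ft"] < 100 else "CRUISE"

    def cycle(self):
        fused = self.sensor_fusion(self.collect_and_interpret())
        self.update_world_model(fused)
        print("command issued:", self.plan())        # 5. car controls command issuance

Pipeline().cycle()  # one cycle; a real system repeats this many times per second
```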

The AI self-driving car should be using all of its sensors, including cameras for visual images, radar, ultrasonic sensors, and LIDAR, trying to detect where pedestrians are. It’s not sufficient though to just detect them, since you also need to try and predict what the pedestrian is going to do.

If a pedestrian is on the sidewalk and running, and they are veering toward the street, the AI is intended to predict that the pedestrian might logically end-up in the street; if that predicted path intersects with the path of the self-driving car, the AI should direct the self-driving car to avoid striking the pedestrian, once they (possibly) enter the street.
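
A toy version of that prediction step might look like the following; the linear extrapolation and the numeric thresholds are simplifying assumptions, far cruder than real trajectory-prediction models:

```python
# Toy pedestrian-path prediction: linearly extrapolate the pedestrian's motion
# and flag a conflict if they are projected to enter the street in time to
# intersect the car's path. Purely illustrative numbers and logic.

def predicts_conflict(ped_pos, ped_vel, street_edge_y, car_speed_fps, car_dist_ft,
                      horizon_s=3.0, step_s=0.1):
    x, y = ped_pos
    vx, vy = ped_vel
    t = 0.0
    while t <= horizon_s:
        x, y, t = x + vx * step_s, y + vy * step_s, t + step_s
        if y <= street_edge_y:                     # pedestrian projected into the street
            time_for_car = car_dist_ft / car_speed_fps
            return t <= time_for_car + 1.0         # within a ~1 s safety margin
    return False

# Pedestrian 6 ft from the curb line, running toward the street at 8 ft/s;
# car 150 ft away at 66 ft/s (45 mph).
if predicts_conflict((0.0, 6.0), (0.0, -8.0), 0.0, 66.0, 150.0):
    print("Plan evasive action: slow down and cover the brakes.")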

For the cognition timing aspects of the driving task, see my article: https://aitrends.com/selfdrivingcars/cognitive-timing-for-ai-self-driving-cars/

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

To learn about LIDAR, see my article: https://aitrends.com/selfdrivingcars/lidar-secret-sauce-self-driving-cars/

Imperfect Machines

Please keep in mind that the AI and the self-driving car are not considered perfect machines.

Is there a chance that a pedestrian might not be detected in spite of the array of sensors on the self-driving car?

Yes, absolutely. Can the sensors themselves at times fail or falter? Yes, absolutely. Could the AI system end-up making a wrong choice about whether there is a pedestrian nearby and whether the pedestrian might be intending to get into harm’s way? Yes, absolutely.

I say this because there are some pundits that seem to want to portray a Utopian world that involves self-driving cars that are always working and always flawless. I ask you, can you cite for me any of today’s machines that are utterly perfect and flawless? I don’t think so. The self-driving car is still a car.

It is prone to equipment wear-and-tear.

There might even be parts on the self-driving car that are subject to a recall.

The AI itself might have bugs or errors in the software. And so on.

For my article about recalls and self-driving cars, see: https://aitrends.com/selfdrivingcars/auto-recalls/

For my article about potential bugs and errors in AI, see: https://aitrends.com/ai-insider/irreproducibility-and-ai-self-driving-cars/

For the egocentric designs of AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/

Let’s then assume that the AI self-driving car is going to do what it can to avoid hitting pedestrians, but that it is not a perfect system and there are still substantive chances of hitting pedestrians, especially when a pedestrian does something erratic or rash.

There are some AI developers that say we should go after the pedestrians. In other words, if we cannot make the AI good enough to deal with the pedestrians, let’s change the behavior of pedestrians. It’s those pesky humans that are the real problem here. The AI is fine if it is just good enough, and we can presumably bend the will of humans so that the AI won’t get itself into trouble.

I’m not one who subscribes to the notion that the pedestrians are the entire problem per se, and I’m a strong advocate that we need to make the AI stronger and self-driving cars more robust to be able to contend with pedestrians.

Nonetheless, admittedly the behavior of pedestrians does enter into the matter at-hand. That being said, trying to change human behavior is not an easy thing. I dare say that the jurisdictions that have outlawed looking at your cell phone as you cross the street are finding that most pedestrians are still looking down at their cell phones. Unless you were to put police at those crosswalks and they write a zillion tickets, few pedestrians are going to heed the new law.

There are though some aspects about human behavior that we can try to contend with.

One aspect is to try to grab the attention of the pedestrian so that they are less likely to blindly step into the path of an AI self-driving car.

Getting The Attention Of Pedestrians

At my presentations about AI self-driving cars, I’ve often described a phenomenon that I refer to as the “head nod” problem. For human driven cars, a pedestrian can usually see the human that is driving a car.

There is usually a kind of “courtship” that takes place between a car driver and a pedestrian. The pedestrian might make eye contact and be subliminally saying that they are going to step out into the street and the car driver had better let them do so. The car driver might be making eye contact to say don’t you dare get in front of this car. In some cities, such as New York City or Boston, this takes place in a mere few seconds or so, and a stare down leads to someone “winning” the chicken contest.

When you have an AI self-driving car, there is no longer a visually present human driver that represents the intentions of the self-driving car. Thus, an important signaling aspect of car-to-pedestrian is now removed from the day-to-day arrangement that we all seem to have with traffic and crossing streets or entering into streets. There’s been an unwritten contract of sorts that pedestrians generally should look toward the driver and the driver should look toward pedestrians. It’s not a perfect contract by any means. Pedestrians routinely don’t look at a driver, or perhaps cannot see the driver anyway due to tinted windows or other obstructions.

Various means of being conspicuous are already integral to the everyday capabilities of cars, and the AI can use them to try to signal to pedestrians. Just like conventional cars, an AI self-driving car has its turn signals, its headlights, and can potentially use its horn too. Even the direction of the tires and the physical motions of the car are part of the signaling aspects to pedestrians.

For more about the head nod problem, see my article: https://aitrends.com/selfdrivingcars/head-nod-problem-ai-self-driving-cars/

For the conspicuity aspects of an AI self-driving car, see my article: https://aitrends.com/selfdrivingcars/conspicuity-self-driving-cars-overlooked-crucial-capability/

More recently, there are some automakers and tech firms that are making use of add-ons to an AI self-driving car to provide further signaling or messaging to pedestrians.

These are pretty much experiments right now.

No one is quite sure what kind of signaling or messaging might be best. Do you place an electronic signboard on the roof of the AI self-driving car?

Do you put special wing like attachments on the sides of the AI self-driving car that can display visual signals?

Do you include audio sounds, beyond just honking a horn, such as a tone to indicate when the AI self-driving car is making a turn, or even an automated Alexa or Siri like voice that says what the self-driving car is going to do?

There is also a move afoot to provide a V2P capability (vehicle to pedestrians).

There are already efforts toward V2V (vehicle to vehicle) communications, allowing an AI self-driving car to electronically communicate with another AI self-driving car. In a crowded downtown area, the AI self-driving car ahead of your AI self-driving car might via V2V warn your AI self-driving car that a pedestrian is about to step into the street.

And there are V2I efforts (vehicle to infrastructure) too, in which the roadway infrastructure such as street lights or even crosswalks will communicate electronically with your AI self-driving car.

The V2P is the idea that a pedestrian might have a smartphone or maybe a smartwatch (or other wearable), and the AI self-driving car could communicate with the pedestrian. This is a kind of “head nod” aspect that I mentioned earlier, except taken into the modern day with the use of electronics. The AI self-driving car might broadcast to the pedestrians on the corner that the AI is intending to make a right turn on red when it reaches the intersection. This could forewarn the pedestrians.

Presumably, a pedestrian could also make a request to an AI self-driving car. Perhaps the pedestrian needs extra time to make it through the crosswalk, and so their electronic device sends an indication to an AI self-driving car that’s coming down the street. The AI then realizes that there might be a longer wait than normal up ahead at the crosswalk and would slow down or come to a stop accordingly.
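
To make the V2P notion concrete, here is a minimal sketch of the kind of messages that might flow back and forth; the message fields and names are hypothetical, as no particular V2P standard is implied here:

```python
# Hypothetical V2P exchange: the car broadcasts an intent, and a pedestrian's
# device can send a request back. Field names and logic are illustrative only.

from dataclasses import dataclass

@dataclass
class CarIntent:
    vehicle_id: str
    maneuver: str          # e.g., "RIGHT_TURN_ON_RED"
    intersection: str
    eta_seconds: float

@dataclass
class PedestrianRequest:
    request: str           # e.g., "EXTENDED_CROSSING_TIME"
    crosswalk: str
    extra_seconds: float

def on_pedestrian_request(req: PedestrianRequest, current_plan: dict) -> dict:
    # The AI folds the request into its plan: slow earlier, wait longer.
    if req.request == "EXTENDED_CROSSING_TIME":
        current_plan["crosswalk_wait_s"] = (
            current_plan.get("crosswalk_wait_s", 0) + req.extra_seconds
        )
    return current_plan

intent = CarIntent("AV-042", "RIGHT_TURN_ON_RED", "5th & Main", 8.0)
print(f"Broadcast to nearby devices: {intent}")
plan = on_pedestrian_request(
    PedestrianRequest("EXTENDED_CROSSING_TIME", "5th & Main", 6.0),
    {"crosswalk_wait_s": 2.0},
)
print(f"Updated plan: {plan}")
```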

Making A Self-Driving Car Stand Out

I now present to you a somewhat provocative notion.

Should an AI self-driving car be readily noticed as an AI self-driving car?

Some would say that yes, it is important for pedestrians to realize that an AI self-driving car is coming along on the street. If they realize it is an AI self-driving car, perhaps the pedestrians will be more cautious than they otherwise might be. This could be especially important if we are conceding that in many respects the AI self-driving car might not be as good as a human driver in terms of dealing with pedestrians. By realizing that there is an AI self-driving car, the pedestrians are essentially forewarned.

So far, most of the AI self-driving cars tend to have a LIDAR unit on the top of the car, which is a beacon-looking device. This is there for functional purposes. It also happens to help make an AI self-driving car stand out as an AI self-driving car. Visually, when you see the cap on the top of the self-driving car, you are likely looking at an AI self-driving car. Many in the general public have already become accustomed to this feature and immediately assume that any car with the beacon or cone is most likely an AI self-driving car.

Please be aware though that not all of the AI self-driving cars will necessarily opt to use LIDAR. Also, as the LIDAR units get improved and further miniaturized, it will become less obvious that there’s a LIDAR unit on the top of some AI self-driving cars since they will be streamlined in shape and size.

Some might argue that we should purposely put a special kind of dome or cap on all AI self-driving cars, regardless of whether LIDAR is being used or not. Doing so would make it visually obvious that the car is an AI self-driving car. It might even be a regulated requirement that all AI self-driving cars would have to have one. It would be mandated as a kind of “decorative” matter (serving as a physical shape for distinctiveness, a footprint as it were), rather than for an electronic functional purpose.

And this brings us to the topic of razzle-dazzle!

Right now, most of the automakers and tech firms are taking a conventional looking car and outfitting it to be an AI self-driving car. The shape and colorization of the AI self-driving car is pretty much the same as any other car on the roadways today. There are some future concept cars that have designs of a rather new look, but those aren’t particularly aimed to be on our roadways soon.

Maybe, if we extend the idea of having a dome or beacon on the top of an AI self-driving car, we might consider doing something even more extravagant about the shape and colors of an AI self-driving car.

We already accept the notion that a cab or taxi is often yellow in color and has an indicator on the roof. We accept the notion that police cars have a certain pattern and color scheme. Would it make sense to consider that all AI self-driving cars would need to abide by some special designated “razzle-dazzle” combination of shapes and colors?

The rather obvious downside to this idea is that perhaps the public might not like the razzle-dazzle look. If you are an automaker pouring tons of money into AI self-driving cars, you probably don’t want to risk having people be unwilling to buy an AI self-driving car because of how it looks. Remember from earlier that some consider the razzle-dazzle navy ships freakish in appearance? I doubt that we want the public to perceive AI self-driving cars in a freakish manner.

For my article about marketing and selling AI self-driving cars, see: https://aitrends.com/selfdrivingcars/marketing-self-driving-cars-new-paradigms/

For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For defensive driving tactics and AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/art-defensive-driving-key-self-driving-car-success/

There might be a fine line, then, between purposely coming up with a common or standard scheme of shapes or colors that makes an AI self-driving car readily apparent to humans, especially pedestrians, and yet is not overly garish. A kind of softer disruptive camouflage, as it were.

Unlike the normal kind of disruptive camouflage that intends to deceive about speed and direction, we’d of course want the razzle-dazzle to make those factors more apparent, rather than less so. Also, the traditional disruptive camouflage for navy ships is distinct per ship so that the class of ships cannot be revealed, while in this case we would want something consistent across all instances. In that sense, please apply the analogy to disruptive camouflage in a thoughtful manner rather than a strict aspect-for-aspect manner.

Presumably, each automaker would want to be able to still provide their own look-and-feel to the AI self-driving car, allowing them to be differentiated in the marketplace.

A pedestrian might take more notice of an AI self-driving car if it had some kind of standardized markings or indication, and hopefully the pedestrian would be more cautious accordingly. This though also raises the other side of the coin, namely that pedestrians might purposely try to prank an AI self-driving car, and by knowing right away that the car coming down the street is an AI self-driving car, they might more readily be apt to play such tricks.

Conclusion

If we don’t do something to make AI self-driving cars appear distinctive, it essentially means that they will be using traditional “camouflage” in that they will blend into the surroundings consisting of other conventional cars.

You won’t be able to readily notice whether those cars nearby are human driven or AI self-driving cars. As a society, do we believe that AI self-driving cars should be required to appear distinctive, or are we fine with them blending in?

Razzle-dazzle, or just the norm.

Time will tell.

Copyright 2020 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]


Posted on

Clogs and AI Autonomous Cars


AI self-driving cars will need the capability to avoid traffic clogs at highway on ramps, or run the risk of adding to traffic jams. (GETTY IMAGES)

By Lance Eliot, the AI Trends Insider

When my children were young, we had a toy that they assembled consisting of seventy-five plastic interconnecting tunnel pieces, including having numerous tall ramps and winding paths, and when a marble was dropped into the topmost funnel it would be of great delight to all as we watched the marble roll throughout the structure.

It was advertised via a slogan that said “down the tube it goes, where the marble stops, nobody knows,” and presumably helped teach my children about physics (well, it was actually mainly just a lot of fun).

Being quite rambunctious, the kids sought out new ways to test the capabilities and limits of the toy. Putting one marble down the chute was fun. Perhaps putting two marbles would be twice the fun! They tried this and it made them squeal with delight. If two marbles are twice the fun, certainly four marbles would quadruple the fun.

They kept increasing the number of marbles and with each such increment the plastic contraption would shake and shimmy more so. How many marbles would the system withstand?

The kids ran to the kitchen and grabbed an empty lemonade pitcher. They then collected together as many marbles as they could find in the house. Placing the marbles into the pitcher, they envisioned that they could pour the marbles into the topmost funnel of the contraption, and by doing so would be able to flood the system with zillions of marbles (okay, I admit zillions is not quite the case, let’s say at least 50 to 100 marbles, or something like that number).

My daughter and son began to jointly pour the marbles into the funnel. Sure enough, the marbles would zoom along, each following the other, doing so in a nearly continuous stream.

Marble after marble, it became a blur. There were so many marbles flowing that it became difficult to watch any particular marble and instead it was a stream of them. Now we had something truly marvelous to watch. After all the marbles had been poured out of the pitcher and had made their way to the bottom of the structure, the children sat back and discussed what to try next.

They filled up the pitcher again with the marbles.

They decided that rather than pouring the marbles via the spout of the pitcher and then into the funnel, they would use the top edge of the pitcher and just let the marbles all spill over into the funnel. In this manner, they could fill-up the funnel quickly, and not need to continue to hold and pour from the pitcher.

They could then watch in glee as the marbles massively weaved their way throughout the system.

Holding the pitcher with their combined sets of hands, they struggled to tip it over and let the marbles blurb out into the funnel. It was a cavalcade of marbles. At first, the marbles indeed began to flow into the tunnels.

But, suddenly, the marbles at the top came to a halt. Did something jam the funnel? Upon inspection, the children discovered that with so many marbles sitting in the funnel, they had collided with each other and did so in a manner that none of them was able to flow out of the funnel. Even though an individual marble could still have made its way down the funnel, the myriad of them had bunched up in a manner that prevented any single marble from proceeding.

The kids had invented a clog.

Thinking Seriously About Clogs

We all know about clogs in our bathroom sinks.

Over time, the gunk of hair, toothpaste, and who-knows-what will inevitably cling to the walls of the pipe and prevent water from readily flowing down the pipes. You need to either use some kind of acid dissolving liquid or a plunger or a mechanical rooter or other approach to get the gunk to break free. Clogs, I’d dare say they are universally hated and they are a pain in the neck to deal with.

There’s another kind of clog that you likely have to deal with every day. If you drive to work on a freeway, which I do daily here in Southern California, there are inevitably clogs of one kind or another on the freeways here.

The first clog that I usually encounter involves navigating an on-ramp onto the freeway. For many of our freeways, we are using a metered ramp system.

If you’ve not seen one of these before, it is a traffic signal setup on the ramp and regulates the passage of cars from the on-ramp onto the freeway. When they were first introduced in Los Angeles, some drivers hated them, while other drivers were thankful the metered system was put in place.

The way it works is that cars come from a street onto the lower part of an on-ramp and then come up to a point where there’s a traffic signal on the on-ramp.

When the traffic signal is green, you can proceed further onto the on-ramp and then onto the freeway. When the traffic signal is red, you are supposed to stay put until the light goes green. In some cases, the green light allows for just one car to proceed, while in other cases a posted sign states that two cars can proceed for each green light. There are some pretty hefty monetary fines for violating the traffic signal and most people tend to obey it.

During the busiest traffic times of the day, the metered light is running. When the traffic thins out on the freeway, the metered light is either shown entirely as green or they turn off the meter and it is therefore assumed that you do not need to wait and can just proceed ahead. In many cases, there are actually two lanes on the on-ramp, one that is for those that can use the HOV lanes and one lane for those that aren’t able to use the HOV lanes. Typically, the HOV lane is not constrained by the meter and can proceed ahead at will.

In the case of just a single lane for the on-ramp and with the meter being used, the traffic on the on-ramp is relatively orderly.

Drivers wait their turn and sit on the on-ramp patiently waiting for the green (though some get irritated waiting for the green and I suppose saying they are waiting patiently is a bit of an overstatement). When the light goes momentarily green, the car at the front of the line of cars proceeds. And, if the green light allows for two cars to go at once, the second car quickly follows on the heels of the first car.

The reason why some people like this approach is that it turns what could otherwise be an ugly free-for-all into an orderly sequence of events.

Rather than cars all jockeying for position, it is clear cut that you wait in line, in the lane, and you take your turn. It’s like being in kindergarten again. Those that are worn out by line cutters and rude drivers see this as a systematic way to put them into their place and get them to act in a polite manner, whether they like it or not. The ones that hate the meters are those that feel it is an infringement of their right to drive as they wish and it seems like it takes forever for the on-ramp traffic to proceed.

A study done in Minnesota around the year 2000 claimed that ramp meters had a substantive positive impact on aiding freeway traffic, finding that without the meters there was a 9% drop in available freeway capacity, a 22% increase in travel times, a 7% drop in freeway speeds, and a 26% increase in accidents. I’d suggest you take that with a grain of salt, as there are various studies that both support and refute the use of meters.

You might assume this metered approach eliminates any chances of a clog. Not so.

There’s at least one loophole in this approach when there are two lanes available via the on-ramp and when one of the lanes is the HOV one that does not need to stop for the meter.

Clogs On The Freeway Onramp

Here’s what often happens.

A car at the front of the line and waiting for the meter to go green is eager to get moving, and the instant the light goes green, the car hits the gas to burst forward. Meanwhile, a car in the HOV on-ramp lane is eager to get ahead of the cars waiting for the meter, and so that driver has hit the gas to zip along past the standstill cars, doing so since they don’t need to wait for a green light. You end-up with two cars both trying to rocket forward, and yet the path just past the meter is often a merge of two lanes down into one lane (the one lane that will lead onto the freeway).

Thus, you can have two cars, each of which thinks it has the right-of-way, entering into the tight squeeze of two lanes merging down to one.

Sometimes, the metered driver is unaware that the HOV driver is coming up upon them. Sometimes the HOV driver doesn’t realize that the metered driver is moving forward, having gotten the green light. Sometimes, both drivers know that the other driver is moving ahead and they purposely challenge each other.

If one driver doesn’t know the other is there, they can accidentally ram into each other or perhaps make a wild and dangerous swerve or similar maneuver.

If both drivers know the other is there, it becomes a scary chicken match as to which one will back-down first.

In fact, believe it or not, I’ve seen two such drivers that came to a complete halt on the upper part of the on-ramp, each not being able to proceed ahead, and each not willing to allow the other to move ahead. I’ve heard stories of occasions where two such drivers got out of their cars and then went to fisticuffs right there on the freeway. Road rage!

For my article about road rage, see: https://aitrends.com/selfdrivingcars/road-rage-and-ai-self-driving-cars/

For my article about driving and traffic aspects, see: https://aitrends.com/ai-insider/driving-styles-and-ai-self-driving-cars/

Clogs In Nature

Clogs can happen in a multitude of circumstances.

A recent research study took a close look at fire ants and how they avoid creating clogs when they are developing their underground tunnel systems.

The study done by researchers from the Georgia Institute of Technology, the Department of Physics at the University of Colorado Boulder, and the Max Planck Institute for the Physics of Complex Systems, involved observing ants during a collective excavation effort.

The researchers placed the ants into transparent containers and rigged up a means to track them and analyze their movements. Using Lorenz curves, the researchers mathematically made various calculations about the work efforts.

They also created a simulation involving a cellular automata model, seeking to compare biological behaviors to robo-physical behaviors. This work also pertains to swarm intelligence.

For my article about swarm intelligence, see: https://aitrends.com/selfdrivingcars/swarm-intelligence-ai-self-driving-cars-stigmergy-boids/

One of the key findings was that the ants seemed to be willing to undertake idle time for some of the ants that were outside the tunnel in order to avoid clogs in the tunnels.

It was a kind of wait your turn approach.

There were observed instances of ants that appeared to wait outside the tunnel and did so presumably because they were able to somehow discern that if they entered into the tunnel it would clog things up.

Did the ants actually logically reach this brilliant conclusion and maybe had deeply thought through the ramifications of too many of them in the tunnel at one time?

Or, was it some more innate kind of detection and reaction?

Either way it happens, the anti-clogging method of these living creatures was fascinating to observe, and it lends additional credence to using similar kinds of strategies and tactics in artificial systems such as AI-based ones.
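
As a loose illustration of that wait-your-turn behavior, here is a minimal sketch of an ant-inspired entry policy in which an agent idles rather than entering a passage that is already near capacity; the capacity and probabilities are arbitrary assumptions:

```python
# Ant-inspired anti-clogging: an agent enters the tunnel only if occupancy is
# below a threshold; otherwise it idles outside. Numbers are arbitrary.

import random

TUNNEL_CAPACITY = 3        # assumed max agents that can move freely at once

def step(tunnel, waiting):
    # Agents inside finish their digging trip with some probability.
    tunnel[:] = [a for a in tunnel if random.random() > 0.3]
    # Waiting agents enter only while the tunnel is under capacity.
    while waiting and len(tunnel) < TUNNEL_CAPACITY:
        tunnel.append(waiting.pop(0))

tunnel, waiting = [], list(range(10))   # ten agents queued outside
for t in range(12):
    step(tunnel, waiting)
    print(f"t={t}: {len(tunnel)} in tunnel, {len(waiting)} idling outside")
```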

Anti-Clogging Techniques For Autonomous Cars

At the Cybernetic AI Self-Driving Car Institute, we are developing AI systems for self-driving cars. One aspect involves the traffic coordination of multitudes of AI self-driving cars.

There are some pundits of AI self-driving cars that seem to believe that the advent of AI self-driving cars will magically do away with any and all traffic jams.

This is a rather far-fetched assumption. It seems to be based on the notion that all AI self-driving cars will carefully orchestrate their collective movements and therefore they will overtly avoid any traffic jams or clogs.

For my article about idealism and AI self-driving cars, please see: https://aitrends.com/selfdrivingcars/idealism-and-ai-self-driving-cars/

If you live in some kind of Utopian world, I suppose you can imagine that all AI self-driving cars will politely and carefully communicate with each other in a flawless manner and somehow rise to the challenge of ensuring there aren’t any traffic jams.

I assure you this is a thorny problem and not so easily solved.

Before launching into a discussion about the all-seeing all-knowing anti-clogging, let’s also consider another factor in the advent of AI self-driving cars.

It isn’t going to happen overnight that we suddenly have all AI self-driving cars on our roads and no conventional cars. Right now, there are around 200+ million conventional cars in the United States alone. Those are not going to disappear. The emergence of AI self-driving cars will occur over many years. It will take even more years for people to give up their conventional cars and gradually switch over to true AI self-driving cars, and we don’t even know yet whether people will be willing to do so.

Allow me to explain that last point about switching over to true AI self-driving cars. There are various levels of AI self-driving cars. The topmost level, Level 5, consists of an AI self-driving car that can drive entirely by the AI and does not need any human driver. In fact, the idea is that there is no provision for a human driver in a Level 5 self-driving car (the pedals aren’t there, the steering wheel is not there, etc.).

For the levels of AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

Will people be willing to have only Level 5 self-driving cars for which no human driving is presumably allowed?

Some people like to drive.

They love to drive.

They insist that driving is essentially a human right (that’s a bit extreme, I realize). Those car drivers might cling to being able to drive and fight any effort to force them to no longer drive. It will be interesting to see how society and the government opt to deal with those last holdouts that won’t give over to having the AI solely be the driver of cars.

The reason that their driving is important takes us back to the anti-clogging topic.

The anti-clogging camp would assert that if you include human drivers onto the roadways then you are not going to achieve the full sense of anti-clogging.

Those darned human drivers will inevitably cause a clog.

A justification then of banning human driving would be that it would presumably then allow for no clogs. Which has the greater weight in our society, people being able to drive or eliminating traffic clogs (of course, there are other reasons for restricting or preventing human drivers)? You be the judge (for now).

For the moment, we’ll sidestep the question of human drivers in the mix.

Assume that we had an all and only AI self-driving car world.

Would we be able to avoid any and all clogs?

Let’s use the on-ramp circumstance as an exemplar.

You have AI self-driving cars trying to get onto the freeway.

We can assume that the principles of HOV lane use might still apply, and so we might have AI self-driving cars with no human occupants or maybe one human occupant waiting in the metered line, while AI self-driving cars with two or more human occupants are considered HOV-permitted and able to speed up the ramp without abiding by the meter.

The meter goes green and the AI self-driving car at the head of the pack starts to move ahead. The AI self-driving car in the unfettered HOV on-ramp lane comes up to the point where the two cars are going to meet-up, and a decision is needed as to which of them goes first. This is reminiscent of the human driver problem described earlier. Now we have AI self-driving cars getting caught up in the same predicament.

What happens?

There is the presumed availability of V2V (vehicle-to-vehicle) communications.

This means that the HOV on-ramp self-driving car, which we’ll refer to as car “X,” might initiate a V2V conversation with the AI self-driving car that was waiting for the green light, which we’ll refer to as car “Q,” and X might inform Q that X is barreling ahead and that Q should please stay back.

This is somewhat akin to the fire ants.

One fire ant is proceeding into the tunnel, so to speak, and the other fire ant is remaining “idle” as it waits its turn.

Here’s a question for you: why should Q abide by the instructions or edict provided by X?

In other words, why can’t Q tell X that X should slow down and let Q proceed ahead?

Why should one of them be considered the commander of the other?

Indeed, some argue that our driving is based partially on a sense of “greed” or selfishness, whereby traffic generally flows because each car is doing what it can to maximize its own advantage.

But, if you have two “drivers” and each of which demands to go first, what kind of tie breaker do you have?

For a look at greed and driving, see my article: https://aitrends.com/selfdrivingcars/selfishness-self-driving-cars-ai-greed-good/

Who Decides To Avert A Clog

You could say that in this case it should not be up to the two cars and their respective AIs to decide which goes ahead first.

Instead, it should be the infrastructure.

It is anticipated that our roadways will gradually be outfitted with high-tech sensors and other systems, and there will be the advent of V2I, vehicle-to-infrastructure communications.

Thus, in this example, perhaps the meter should be “smart enough” to realize that another AI self-driving car is coming up the on-ramp in the HOV lane, and so the meter then via V2I informs the Q to not proceed just yet (or, maybe keeps the red light a bit longer), and allows X to flow along through the ramp. Or, perhaps the V2I informs the X to slow down and allow the Q to proceed ahead.

How did the infrastructure determine which goes first?

It could be based on some algorithm that tries to ascertain which of the two is “best” suited to go first.

Or, maybe it randomly selects, if otherwise everything else about the situation would be considered a tie. As an aside, if you are interested in algorithms for solving traffic jams, you might want to explore ALINEA, one of the more studied algorithms for this purpose.
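
For the curious, the heart of ALINEA is a simple feedback update of the metering rate based on measured downstream occupancy. Here is a minimal sketch of that feedback law; the gain, setpoint, and rate bounds are illustrative assumptions rather than calibrated values for any real on-ramp:

```python
# A minimal sketch of the ALINEA ramp-metering feedback law:
#   r(k) = r(k-1) + K_R * (o_target - o_measured(k))
# where r is the metering rate in vehicles/hour and o is the downstream
# occupancy in percent. All constants below are illustrative assumptions.

K_R = 70.0                    # regulator gain (veh/h per % occupancy), assumed
O_TARGET = 20.0               # desired downstream occupancy in percent, assumed
R_MIN, R_MAX = 200.0, 1800.0  # feasible metering range of the signal, assumed

def alinea_update(rate_prev: float, occupancy_pct: float) -> float:
    rate = rate_prev + K_R * (O_TARGET - occupancy_pct)
    return max(R_MIN, min(R_MAX, rate))   # clamp to what the meter can do

rate = 900.0
for occupancy in [15.0, 22.0, 30.0, 25.0]:   # simulated occupancy readings
    rate = alinea_update(rate, occupancy)
    print(f"occupancy {occupancy:.0f}% -> metering rate {rate:.0f} veh/h")
```

The design intuition is simple: if the freeway downstream is emptier than the target, the meter releases cars faster; if it is more congested than the target, the meter slows the release.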

There are some that advocate we might consider implementing a points system.

AI self-driving cars would earn points for certain acts and potentially need to use up points for other kinds of acts.

Suppose you are a human occupant in Q, and you are in a hurry to get to work; you might have instructed your AI to go ahead and use up points to try to get ahead of other traffic. When in a dilemma such as the standoff at the on-ramp, perhaps the Q offers to provide points to X for purposes of letting Q go ahead. There might be a negotiation among them, trading points for the circumstance.

It might still not solve things, though, because suppose both Q and X are determined to go first and each is willing to give up points to do so. You can include other variants, such as having them auction points, with the highest bidder winning. Etc. In any case, this can get complicated, and some doubt that we’ll use a points system.
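
One conceivable tie-breaker is a sealed-bid points auction with a deterministic fallback; this sketch is purely speculative, in keeping with the doubts just expressed about whether a points system would ever be used:

```python
# Speculative sealed-bid tie-breaker between two cars at a merge point.
# Each car bids points; the higher bid proceeds first and pays its bid to
# the other car. An arbitrary-but-deterministic rule settles exact ties.

def resolve_merge(car_a, car_b):
    # Each car is a dict: {"id": str, "points": int, "bid": int}.
    if car_a["bid"] != car_b["bid"]:
        winner, loser = sorted((car_a, car_b), key=lambda c: c["bid"], reverse=True)
    else:
        # Exact tie: fall back to a deterministic comparison of IDs, so both
        # vehicles independently reach the same answer without more messaging.
        winner, loser = sorted((car_a, car_b), key=lambda c: c["id"])
    winner["points"] -= winner["bid"]   # winner pays its bid...
    loser["points"] += winner["bid"]    # ...to the car that yields
    return winner["id"], loser["id"]

q = {"id": "Q", "points": 120, "bid": 30}   # metered car, occupant in a hurry
x = {"id": "X", "points": 80,  "bid": 10}   # HOV-lane car
first, yields = resolve_merge(q, x)
print(f"{first} proceeds first; {yields} yields and collects the points.")
```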

A similar viewpoint is that maybe there would be electronic money exchanged.

Instead of using points as a kind of barter, we might allow for the purchase of traffic maneuvers. If you want onto the freeway fast, you can pay money to do so, electronically transferred in real-time (perhaps using blockchain).

But, this has its downsides as it might lead to our public roadways becoming dominated by those that have money over those that do not.

For my article about blockchains and AI self-driving cars, see: https://aitrends.com/business-applications/dun-bradstreet-eyes-blockchain-machine-learning-projects/

For my article about tit-for-tat and AI self-driving cars, see: https://aitrends.com/ai-insider/tit-for-tat-and-ai-self-driving-cars/

Clogs Arise In Lots Of Driving Situations

So far, I’ve focused on the on-ramp example.

This serves as a simple means to look at the clog problem.

Enlarge the scope to the freeway overall.

The number of clogs, and the ways they emerge, are many times the magnitude of the on-ramp example.

You’ve got multiple lanes.

Multiple on-ramps and off-ramps.

Hundreds or perhaps thousands of cars.

Each car is headed to its own desired location.

There are miles upon miles of freeway.

There are numerous freeway interchanges.

Maintenance and upkeep of the freeways is taking place and can mire the roadways while doing so. And so on.

The other day, I was driving on the freeway and a car became stalled in the middle of the freeway.

Up until the point of the stalled car, the traffic was flowing smoothly and pretty much at the maximum legal speed.

The traffic began to get snarled and it wasn’t at first apparent as to why.

As I got within a few cars of the stalled car, I could see the upcoming cars would come right up to the standstill car and then try to move into the lane to the left or right of the clogging car.

The cars in those lanes would sometimes allow the other cars to get into their lane, and in other cases they would not.

It’s a dog eat dog world.

Many drivers didn’t want to allow the other cars to get ahead and wanted to preserve their own movement forward unimpeded.

I tell this story about the stalled car because you need to keep in mind that even an AI self-driving car could become stalled on the freeway.

Pundits dreaming about the AI self-driving car utopia don’t seem to realize that a self-driving car is still a car.

A self-driving car is going to suffer breakdowns.

It will happen.

Guaranteed.

For my article about repairing of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/towing-and-ai-self-driving-cars/

For my article about non-stop use of AI self-driving cars, see: https://aitrends.com/ai-insider/non-stop-ai-self-driving-cars-truths-and-consequences/

Conclusion

How will the sudden stalling of an AI self-driving car on the freeway be handled in a manner that avoids any kind of clogging or traffic jam?

If the anti-clogging happens only for those AI self-driving cars near to the incident, by sharing with each other V2V, would this alleviate all clogging or would there still be some residual clogging?

And, it would seem like the residual clogging would have a cascading effect. Self-driving cars downstream of the disabled self-driving car are likely to experience some impact, even if minimal.

You might suggest that there would be a master control system that would oversee all traffic.

It would seek to prevent any traffic jams.

Therefore, rather than the envisioned more localized P2P of the V2V, we might have a “Big Brother” kind of system to optimize traffic flow and eliminate clogs. This seems like a rather tall task computationally, and one even questions whether it is possible to achieve, given the logistical vagaries involved.

There will also be some that find this notion somewhat repugnant as it might give the government excessive control and oversight.

It seems doubtful that a “perfect” world of no clogs is likely feasible.

The goal might be instead to focus on minimizing and mitigating clogs. Overall, the hope would be to limit the severity of clogs and the prevalence of clogs. This might combine both a global master control system along with localized P2P systems. It’s an interesting and challenging “edge” problem that will become more apparent as the advent of AI self-driving cars emerges.

Meanwhile, I guess we’ll all struggle with our day-to-day clogs, including that my kitchen sink has now clogged up and it looks like I’ll need to call a plumber.

Copyright 2020 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]


Posted on

Cars Careening Out-of-Control In Crash Mode: The Case Of AI Autonomous Cars


The AI of self-driving cars needs a “crash mode” to try to prevent it from going out of control as an accident unfolds. (GETTY IMAGES)

By Lance Eliot, the AI Trends Insider

Bam!

While innocently sitting at a red light, a car rammed into the rear of my car. I was not expecting it.

Things began to happen so quickly that I barely remember what actually did happen once the crash began.

Within just a few brisk seconds, my car was pushed into a car ahead of me, ripping the back and left-side of my car. The gas tank ruptured and gasoline leaked onto the ground, my airbag deployed, most of the windows fractured and bits of glass flew everywhere. Basically all heck broke loose.

This actually happened some years ago when I was a university professor. I had been driving past an elder care facility on my way to the campus. A car driven by someone quite elderly had come up behind me at the red light and he inadvertently punched the accelerator rather than the brake. His car rammed into my car, and my car rammed into the car ahead of me.

Fortunately, none of us were badly injured, but if you saw a picture of my car after the incident, you’d believe that no one in my car should or could have survived the crash.

My car was totaled.

I think back to that crash and can readily talk about it today, but at the time it was quite a shocker.

Speaking of shock, I am pretty sure that I must have temporarily gone into shock when the crash first started.

I say this because I really do not remember exactly how things went in those few life-threatening seconds. All I can remember is that I kind of “woke-up” in that I consciously realized the airbag had deployed, and that my windshield was busted, otherwise I was utterly confused about what was going on. It was as though a magic wand had transformed my setting into some other bizarre world.

As I sat there in the driver’s seat looking stunned, and as I slowly looked around to survey the scene, trying to make sense of what had just occurred, some people from other cars nearby had gotten out of their cars right away and ran to my car. With my driver’s side window nearly entirely smashed and gone, they yelled into the car and asked me if I was okay. I looked at them and wasn’t sure that I understood what they were asking me and nor why they were even talking to me.

It was at that point that I smelled the strong odor of gasoline.

In that same instant, the people standing outside my driver’s side window were yelling for me to get out of the car because of the gasoline that had poured onto the street. I realized later on that these good Samaritans were very brave and generous to have endangered themselves in order to warn me about the dangers that I faced.

Luckily the car door still worked, so I opened it, undid my seatbelt, pushed away the remains of the air bag, shifted my body and my legs to position outside the door, and stepped out of the car.

I nearly collapsed.

Turns out my legs had gone weak as the aftermath of the shock and fright involved. Several people helped walk and semi-drag me to the curb and get away from the car itself. I sat there on the curb, watching as everyone was running around trying to help, and for a moment I thought it had occurred without me being involved at all. I was just a bystander sitting at the curb after a car accident had happened.

When the police and an ambulance showed-up, I had regained my composure. I was standing up sturdily now and calmly examining the cars. At first, the police officers and the medical crew doubted that I had been inside the car and certainly doubted that I had been the driver. I had nary a scratch on me. I seemed coherent and able to talk about what had happened.

In fact, and you’ll maybe laugh at this, I was mainly worried that I would be late to teach my class at the college.

I had never been late to any of my lectures.

What would the students do, what would they think?

Of course, I realized later on, after several years of being a professor, the students probably welcomed being able to skip a lecture and would fruitfully use their time for other “academic” purposes.

The main aspect about the incident was that my mind was blurry about those key seconds between having gotten hit from behind and the realization that I was sitting in my driver’s seat and glass was around me and my airbag was in front of me.

I cannot to this day tell you exactly what happened in those precious few seconds.

I am pretty sure that my body was likely a rag doll and merely flopped around as the impact to the car occurred.

Which way was my head facing?

Well, I had been looking straight ahead at the intersection while waiting for a green light, so presumably my head was still pointed in that direction when the initial impact occurred. Where were my arms and hands? I had been lightly holding the steering wheel and so that’s where my arms and hands were, at least up until the impact. My legs and feet were under the dash and positioned at the pedals, including that my right foot was on the brake, doing so because I was at a red light and stopped, again that was just before the impact.

I wondered whether there was anything I could have done once the impact began.

Suppose I had been forewarned and told or knew that a car was going to violently ram into the back of my car. Let’s further assume that I didn’t have sufficient time to get out of the way or make any kind of evasive maneuver.

It’s an interesting problem to postulate.

Acting As A Car Crash Begins To Emerge

We usually think about the ways to avoid a car accident.

What about trying to cope with an accident that is emerging, supposing you have a brief chance to take some form of action that perhaps reduces the impact, even though you cannot fully avoid the incident overall?

In this case, if I had some premonition or clue that the accident was going to happen, maybe I could have tried to turn the wheels of the car so that it might move away from the car ahead of me once my car got rammed.

Or, maybe I might have put on the parking brake in hopes it would further keep my car from being pushed by the ramming action.

The medics at the scene told me that I was probably lucky that I did not realize that the ramming was going to occur, since most people tense up.

They said that tensing up is often worse for you when you get into a car accident. According to their medical training and experience, there is a greater chance that when being jarred harshly, jostled and tossed around, the tightened or tensed muscles of my body would try to fight against the movement, and likely lose, thus it would lead to greater physical injury to my body. Instead, by being loose and unknowing, my body was more fluid and accommodated the rapid pushing, shoving, and fierce shaking.

I’d like to put aside the idea that I might have been forewarned, and instead consider a slightly different angle to the incident.

Suppose that my mind had remained clearly alert and available during those few seconds in which the accident evolved. I mentioned earlier that I have no particular recall and those moments are blurry in my mind; let’s pretend differently.

Pretend that my mind was completely untouched and could act as though it was separate from the severe contortions happening to my physical body.

What then?

Reenactment Of Car Crash Timing

We’ll start the clock at the moment of impact.

The car behind me has just collided into the rear of my car.

This is time zero.

Over the next few seconds, the impact will work its way throughout my car.

You might want to consider this akin to those popular online videos in which things are filmed in slow motion. You know, the videos that show what it looks like in the split seconds of a bullet going through a piece of wood or a watermelon being smashed. Imagine a slow-motion version of my car incident.

We’re now assuming that my mind can undertake whatever kind of thinking might be pertinent to the matter at-hand. Of course, my mind might be thinking about that lecture I was going to give that day, or maybe what I was going to eat for dinner that night. Put those thoughts aside. In this slow-motion version, devote my mind to focusing on the car accident that is happening.

I’d also suggest that we assume that my senses are all in perfect working order too. You might argue that my senses are going to get muddled by the forceful jerking efforts of the car being rammed, which I agree seems likely.

In a moment, I’ll revisit this pretend scenario with that muddling of my senses as another variation.

Okay, my mind is fully active, focused on the car incident as the clock starts to tick, and I’ve got control over my sensory faculties, and we’ll include that I have control over my body. This means that I can take whatever kind of driving action that I want to undertake.

Is there anything that I can do to drive the car in those few seconds that might in some manner lessen the impact of the car accident?

Maybe I reflexively took my foot off the brakes when the real accident occurred; in this pretend scenario, we can assert that I keep my foot on the brakes. Perhaps my arms and hands flew off the steering wheel in the real incident.

Let’s pretend that I keep them on the steering wheel.

It’s not evident how much my added ability to control the car in this particular incident is going to be aided by my clear mind and the use of my senses and my body.

One limiting factor is the car and the circumstances of where the car was positioned.

The car was being pushed fiercely from behind. In this case, the brakes weren’t doing much in those split seconds anyway. The fact that there was a car ahead of me pretty much stopped my car from going much further ahead, due to my ramming into it, and I was pinned between two cars now. One car pushing from behind, the other car at a standstill and preventing me from readily driving forward.

The car itself is a limiting factor too, in that the brake lines might have been cut anyway upon the impact to the car.

In that case, pushing on the brake pedal might not have had any material effect. Likewise, the steering wheel might not be useful during those few seconds, if the linkages and internal steering controls were damaged or unable to relay my positioning of the steering wheel.

In my case, I’m going to throw in the towel and say that even if my mind had remained clear and available, and if my senses were continually available and working, and if my body was functioning so that I could use it to actively and purposely drive the car, there’s not much that could have gone differently to improve what happened during those seconds of impact and reaction.

If you look at different circumstances, the results might come out differently.

Remove the car that was ahead of me.

Pretend I have a straight-ahead path.

Assume too that I can see the intersection and there are no cars in it, meaning that I can use the intersection if I want to do so.

Does this change things?

In theory, depending upon the pace at which my car can accelerate, and depending upon the pace at which the car from behind me is ramming into me, there is some chance that I could have punched down on the accelerator and tried to leap ahead. It would have become a kind of race, starting when the impact began, the zero-clock point that I mentioned earlier. This could potentially have allowed me to lessen the blow from the rear of my car. I might even have accelerated fast enough to escape much of the impact, ending up on the other side of the intersection without much damage to the rear of my car.
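
As a rough sanity check on that race, here is a toy kinematics sketch; the striking car’s speed and my car’s acceleration are illustrative assumptions:

```python
# Toy model of the escape "race": the struck car accelerates from rest while
# the striking car continues at its speed. All numbers are illustrative.

STRIKER_SPEED = 30.0      # ft/s, assumed speed of the ramming car (~20 mph)
MY_ACCEL = 10.0           # ft/s^2, assumed full-throttle acceleration
STEP = 0.1                # simulation timestep in seconds

my_speed, gap = 0.0, 0.0  # contact at time zero: zero gap, I'm at rest
for i in range(40):       # simulate up to four seconds
    t = (i + 1) * STEP
    my_speed += MY_ACCEL * STEP
    gap += (my_speed - STRIKER_SPEED) * STEP   # positive terms open the gap
    if my_speed >= STRIKER_SPEED:
        print(f"Speeds match at ~{t:.1f} s; relative deficit so far {abs(gap):.0f} ft.")
        break
```

With these assumed numbers, the struck car concedes roughly 44 feet of relative motion before the speeds match, so flooring it lessens the blow only to the degree that the ramming car is itself slowing after the initial impact, which is exactly the race described above.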

I’d bet there are many car accidents wherein if the driver involved could magically have a clear and present mind, and be able to control their car, there is a chance that whatever dire results occurred could have been lessened.

On the news, I saw an instance recently of a driver that veered their car to avoid hitting something in the street and the car driver lost control of the car, which resulted in the car ramming into a parked car and a light post and a fire hydrant. It sheared off the fire hydrant and sent water shooting into the sky.

How did the driver lose control of the car?

Was it because of the mechanics of the car, or was it because the driver lost their presence of mind and was no longer in command of their mental faculties? It could be that the shock of veering caused the person to mentally go into a blur. This blurred mental state meant that the human was no longer actively driving the car. The car was out-of-control.

There was no driver actively driving the car.

Out-Of-Control Cars

I’m sure you’ve seen lots of news clips and videos of cars that became a kind of mindless projectile.

There was an incident captured on YouTube of a car that swerved to avoid hitting an animal in the street and the car smashed through a wood fence, continued onto a farm adjacent to the road, plowed a bunch of planted vegetables, and finally the car came to a stop.

Out-of-control car.

Another incident showed a car that botched a left turn, veering beyond the confines of the turn. The car swung too wide and rammed into a mailbox, then plowed through a hot dog vendor stand, and ultimately came to a stop once it hit a storefront.

There are plenty of videos of cars that missed a turn and went through a fence into someone’s swimming pool. Having a car fly off a bridge is another example of an out-of-control car.

There are situations in which an out-of-control car is due to the car having mechanical problems and there is seemingly nothing the driver can do. For example, the accelerator pedal gets stuck and refuses to budge, forcing the car to go faster and faster.

This might happen because something is lodged into the accelerator pedal like a floor mat.

It has also happened as a result of an intrinsic defect in the car design.

Assume that the driver did not cause the accelerator pedal to be jammed downward. In that instance, is the driver now merely a passenger, in that there is nothing the driver can do? I'd dare say we would all agree that the driver can still do something. They can try to steer the car to avoid hitting other cars and other objects. They can try to dislodge the pedal to curtail the rapid acceleration. They can honk the horn to try and warn other drivers and pedestrians that the car is a runaway.

And so on.

Not everyone would have the presence of mind to do those things.

If you've never had your accelerator pedal get stuck, the odds are that when it does get stuck, you'll be shocked and unsure of what to do. You might lose your mental presence and become panicked. Even though there are actions you could take, those actions might not come to mind, and even if they do, you'd have to remain calm enough to force your body to carry them out.

Have you ever been to a demolition derby or seen one on TV?

At a demolition derby, the cars all try to smash into each other. That's the purpose of the derby. Usually, the last running car gets the grand prize. I bring up demolition derbies to point out that those drivers are well-prepared to deal with their cars when the car is out-of-control.

A driver in one car might get hit from the left side by another car, meanwhile be getting hit from the right side by yet another car, and at the same time be trying to hit a car ahead of them. The cars are all being pushed and shoved. Drivers in those cars are generally able to keep their wits about them. They are trained for the situation and know what to do, though of course it is somewhat easier when the matter is expected versus unexpected (in the derby, it is expected that your car is going to be hit and go out-of-control).

One aspect of a car being out-of-control is when the car is sliding or otherwise in a motion that you as a driver did not intentionally seek to have the car do. Have you ever had your car slide on ice or snow?

That’s an example of the car being out-of-control.

Again, how you react as the driver can make a big difference. If you aren't aware of the sliding action and aren't prepared to react, or if your mind is muddled, you might not try the usual techniques recommended for dealing with a sliding car. You can potentially regain control by turning the wheels in the direction of the slide and avoiding jamming on the brakes.

Dealing With Out-Of-Control Cars

In essence, there are actions that you can take to bring the car back into control, or you can take no actions and hope for the best, or you can take misguided actions that cause the car to go into a further out-of-control result.

You need to determine the proper course of action, try to prevent the situation from getting worse, take into account what your car can and cannot do, consider any damage the car is sustaining and how it will limit what you can do, and weigh a slew of other factors.

Demolition derby drivers are able to do this.

I don't want to make them out to be super drivers per se. Their cars are usually jiggered in a manner to make things simpler for them. Usually, the cars are stripped of items that can fly around. Cables are reinforced. There aren't any passengers on-board. Gas tanks get special protections. Plus, the derby typically takes place in a confined area that has no pedestrians and no other obstacles, and it is like a playground in which all you can do is ram into other cars.

That's a far cry from dealing with a real-world crash-mode and having to figure out what to do, cope with bystanders, and cope with a myriad of other factors. Nonetheless, the derby drivers get a chance to practice dealing with the stresses of being in a car crash and to train themselves to keep a mental awareness, enabling them to continue driving a car and maintain control, as much as feasible.

AI Autonomous Cars And Cars Out-Of-Control

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One aspect that few automakers and tech firms are considering at this time is the special characteristics of driving a car while it is in crash-mode and how the AI should be skilled to do so.

Allow me to elaborate.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the automakers are even removing the gas pedal, the brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 4 and Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

Let’s focus herein on the true Level 5 self-driving car. Much of the comments apply to the less than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here are the usual steps involved in the AI driving task (a small illustrative sketch follows the list):

  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action planning
  • Car controls command issuance
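
To make these steps concrete, here is a minimal Python sketch of a single pass through the driving cycle. Every name and value in it is an invented placeholder for illustration, not any automaker's actual API.

    # Minimal, self-contained sketch of one pass through the five-step
    # driving cycle. All names and values are hypothetical placeholders.

    def collect_and_interpret():
        # 1. Sensor data collection and interpretation (stubbed readings)
        return {"camera": "car_ahead", "radar": {"range_m": 12.0}}

    def fuse(interpreted):
        # 2. Sensor fusion: reconcile the per-device interpretations
        return {"obstacle_ahead": True,
                "distance_m": interpreted["radar"]["range_m"]}

    def plan_action(world_model):
        # 4. AI action planning: trivially brake if an obstacle is near
        if world_model.get("obstacle_ahead") and world_model["distance_m"] < 20.0:
            return {"brake": 0.5, "steer_deg": 0.0}
        return {"brake": 0.0, "steer_deg": 0.0}

    def issue_commands(commands):
        # 5. Car controls command issuance (stubbed actuator call)
        print("actuate:", commands)

    world_model = {}
    world_model.update(fuse(collect_and_interpret()))  # 3. world model updating
    issue_commands(plan_action(world_model))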

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a Utopian world in which there are only AI self-driving cars on public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

Returning to the topic of out-of-control cars and dealing with them as a driver when the car is seemingly out-of-control, let’s consider how the AI of an AI self-driving car should be coping with such situations.

We'll start by debunking a popular myth, namely the claim that true Level 4 and Level 5 AI self-driving cars will never get into car accidents and that therefore the AI does not need to be able to cope with car crashes.

Wrong!

Well of course AI self-driving cars are going to get into car crashes.

It is nonsense and foolhardy to think otherwise.

First, as mentioned earlier, the roadways will have a mixture of human driven cars and AI driven cars.

This mixture is going to have car crashes, involving a human driven car that crashes into an AI self-driving car, and likely too occasions whereby an AI self-driving car crashes into a human driven car. In some instances, the AI self-driving car might be the instigator of the car crash, while in other cases it is carried into the car crash as a cascading action of the car crash.

There will also be instances of AI self-driving cars crashing into other AI self-driving cars, as I’ll describe in a moment.

Recall that in my story about how I had gotten hit from behind while sitting at a red light, I was in a conventional car, but for pretend sake let’s assume that the car was actually a Level 4 or Level 5 AI self-driving car.

Would it have been able to avoid getting hit?

No.

There was no place to escape to and the car that rammed me from behind did so with almost no warning.

The AI might have detected that the car behind it was suddenly speeding up, and therefore would likely have had a few seconds of heads-up that the crash was going to occur.

But, given that there was a car immediately ahead of my car, and there were other cars also sitting at the intersection and all around my car, the AI would have been boxed-in or essentially surrounded and would have had no opportunity to escape. The crash would have happened.

This is an instance of a human driven car hitting an AI self-driving car.

An AI self-driving car could get hit by another car such as a human driven car and could even get hit by an AI self-driving car.

Suppose the car ahead of me at the intersection was an AI self-driving car and continue assuming that my car was an AI self-driving car.

Once the car behind me has hit my car, it would have forced my AI self-driving car to ram into the car ahead, which we’re pretending was another AI self-driving car.

This is an instance of an AI self-driving car hitting another AI self-driving car.

Sometimes when I mention this possibility, there are those that will say that it is “cheating” to say that one AI self-driving car hit another one in the sense that they were both pinned into a situation whereby the impact was unavoidable.

I then say indeed they were pinned, that’s the case here, but it still doesn’t negate the fact that one AI self-driving car hit another AI self-driving car.

My point is that it can happen.

Use Of V2V For Crash Tip-off

Some say that an AI self-driving car won’t get hit because it will have V2V (vehicle-to-vehicle) electronic communications.

This means that one AI self-driving car can electronically communicate with other AI self-driving cars, perhaps doing so to forewarn that the road ahead has debris on it or maybe that the traffic is snarled.

Okay, let’s assume in my pretend scenario that my AI self-driving car has V2V and the AI self-driving car sitting ahead of me at the intersection has V2V too.

In those few seconds wherein my AI self-driving car realizes it is going to get hit, it quickly sends a V2V broadcast, which the AI self-driving car ahead of me receives and decodes. Based on the electronic message, the AI of the car ahead has to decide whether it believes what it is being told, which in and of itself is an open question and problem associated with V2V, and if the AI does believe that my car is about to hit it, the next aspect involves figuring out what to do.

The AI self-driving car ahead of me, now forewarned and with presumably a second or two maybe to react, could try to proceed into the intersection to avoid my car hitting it from behind. The AI has to ascertain whether it is worse or not to remain in-place and get hit from behind, or potentially try to gun the engine and rush into the intersection. If the intersection has other traffic in it, the idea of aiming to rush ahead is not so attractive, though likewise staying in place is not attractive either.

This highlights the kind of ethical choices that an AI system is going to need to make when driving an AI self-driving car.

It has to decide in this instance between the risks of injury, death, and damages from waiting to get hit from behind by my AI self-driving car versus the risks of injury, death, and damages if it attempts to rush into the intersection. There is also the matter of whether the available time would actually allow the AI self-driving car to rush into the intersection, depending upon its acceleration capability.
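
To make the V2V tip-off tangible, here is a hedged sketch of what the broadcast and the receiving car's decision might look like. The message fields, the trust check, and the timing threshold are all invented assumptions for illustration; they are not the actual DSRC or C-V2X message formats.

    # Hedged sketch of a hypothetical V2V imminent-impact warning.
    # Fields and thresholds are illustrative assumptions only.

    import json, time

    def make_impact_warning(sender_id, position, velocity_mps, seconds_to_impact):
        # Broadcast payload: who I am, where I am, how soon I expect impact.
        return json.dumps({
            "type": "IMPACT_WARNING",
            "sender": sender_id,
            "position": position,            # (x, y) in a shared local frame
            "velocity_mps": velocity_mps,
            "seconds_to_impact": seconds_to_impact,
            "timestamp": time.time(),
        })

    def decide_on_warning(msg_text, own_sensors_confirm, intersection_clear):
        # Receiver side: first decide whether to believe the message at all,
        # then weigh staying put versus rushing into the intersection.
        msg = json.loads(msg_text)
        if msg["type"] != "IMPACT_WARNING" or not own_sensors_confirm:
            return "ignore"                  # untrusted or uncorroborated broadcast
        if intersection_clear and msg["seconds_to_impact"] > 1.0:
            return "accelerate_into_intersection"
        return "brace_in_place"              # staying put judged less risky

    warning = make_impact_warning("car_B", (0.0, 5.0), 0.0, 1.5)
    print(decide_on_warning(warning, own_sensors_confirm=True, intersection_clear=True))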

More Examples Of AI Driverless Car Crashes

I’ll give you another example of how an AI self-driving car might hit another AI self-driving car.

Suppose two AI self-driving cars are going down a street (hey, this sounds like an AI self-driving car joke of some kind, like two people going into a bar!).

They are following each other at the proper distance, based on their speeds and car lengths, which is how humans are supposed to drive too, though I'd wager few humans allow sufficient distance between cars when driving.

A dog darts from seemingly nowhere and into the street. In this case, there was no possibility of detecting the dog prior to its entering into the street.

The AI self-driving car that’s ahead of the other AI self-driving car has insufficient distance to come to a stop and avoid hitting the dog. The choices for the AI are to either try to stop and yet know it will ram into the dog, or try to swerve to avoid the dog, but let’s assume there are parked cars and other cars coming down the street too.

This means that the AI will need to decide whether to hit and likely kill the dog or take a chance and swerve into the oncoming lane of traffic and possibly get hit head-on or ram itself into a parked car to try to avoid the dog.

For more about AI self-driving cars and accidents, see: https://www.aitrends.com/selfdrivingcars/accidents-happen-self-driving-cars/

What should the AI do?

The AI is between the proverbial rock and a hard place.

There aren’t any “good” choices to be made here.

The problem is more akin to picking the least of the worst options. Suppose the AI opts to ram into a parked car, figuring that the parked car has no humans in it and thus no humans will be put at risk, and only property damage will result. This saves the dog, prevents potentially hitting an oncoming car, and perhaps seems to be the least-of-the-worst choices.
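
One way to picture this least-of-the-worst reasoning is as expected-harm scoring over the available maneuvers. The options, probabilities, and harm weights below are invented for illustration; a real system would have to estimate them from the virtual world model, and the weights themselves embody exactly the ethical judgments being discussed.

    # Minimal sketch of least-of-the-worst scoring for the dog scenario.
    # All probabilities and weights are invented for illustration.

    options = {
        "brake_and_hit_dog":    {"p_human_harm": 0.02, "p_animal_harm": 0.95, "p_property": 0.10},
        "swerve_into_oncoming": {"p_human_harm": 0.40, "p_animal_harm": 0.05, "p_property": 0.60},
        "swerve_into_parked":   {"p_human_harm": 0.05, "p_animal_harm": 0.05, "p_property": 0.95},
    }

    # Harm weights encode the (ethically fraught) priority ordering:
    # human harm dominates, then animal harm, then property damage.
    WEIGHTS = {"p_human_harm": 100.0, "p_animal_harm": 10.0, "p_property": 1.0}

    def expected_harm(outcome):
        return sum(WEIGHTS[k] * p for k, p in outcome.items())

    least_worst = min(options, key=lambda name: expected_harm(options[name]))
    print(least_worst)  # -> "swerve_into_parked" under these assumed numbers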

The AI quickly sends out a V2V to forewarn that it is going to ram into a parked car.

The AI self-driving car coming up behind is given a somewhat sudden heads-up that this action is going to occur.

Can the AI self-driving car stop in time and avoid hitting the AI self-driving car that is going to ram into the parked car?

Maybe yes, maybe not.

We also don’t know if ramming into the parked car will cause the AI self-driving car to perhaps bounce back into the street and maybe make the situation from the perspective of the upcoming AI self-driving car even worse.

The point of these scenarios is that there will absolutely be car crashes involving AI self-driving cars.

I want to make sure that we all agree with that possibility.

Some might argue that we'll have fewer car crashes due to the advent of AI self-driving cars, and I'd be willing to say it is hopefully the case that we'll have fewer, but in no manner at all will we have zero car crashes involving AI self-driving cars.

For the false notion of zero fatalities, see my article: https://www.aitrends.com/selfdrivingcars/self-driving-cars-zero-fatalities-zero-chance/

For more on ethics and AI, see my article: https://www.aitrends.com/selfdrivingcars/ethics-review-boards-and-ai-self-driving-cars/

For the need of AI to be a defensive driver, see my article: https://www.aitrends.com/selfdrivingcars/art-defensive-driving-key-self-driving-car-success/

For my article about the human foibles involved in driving, see: https://www.aitrends.com/selfdrivingcars/ten-human-driving-foibles-self-driving-car-deep-learning-counter-tactics/

For my article about safety issues of AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/safety-and-ai-self-driving-cars-world-safety-summit-on-autonomous-tech/

Missing The Boat On Crash Avoidance

In terms of AI self-driving cars and getting involved in car crashes, most of the automakers and tech firms are focused on avoiding car crashes and not particularly considering what the AI should do once a car crash is imminent or underway.

This is troubling.

If the AI is not intentionally established to have special processes or procedures for dealing with a car accident once underway, it means that the AI self-driving car is essentially going to become out-of-control.

The AI developers are assuming that the AI will be able to handle the self-driving car as if it is just nonchalantly driving along, but once the car accident starts, all bets are off. The AI self-driving car is going to be pushed, shoved, and otherwise taken out of the "comfort zone" in which one assumes the self-driving car operates most of the time. Assumptions about being able to brake, accelerate, and steer are no longer going to be valid due to the extreme forces acting upon the self-driving car.

Some automakers and tech firms aren’t working on this at all, or they are working on it but put it on the back-burner as a so-called edge problem.

Their logic to put this on the back-burner is that they assume the AI self-driving car is highly unlikely to get into a car accident, thus, why worry about it now.

If it is going to only happen once in a blue moon, deal with it later on.

For more about edge problems, see my article: https://www.aitrends.com/selfdrivingcars/edge-problems-core-true-self-driving-cars-achieving-last-mile/

Part of the grave concern with this kind of thinking is that when an AI self-driving car does get into a car accident, it will likely do little to minimize the impacts and be unable or ill-equipped to find ways to either escape or at least "improve" upon a bad situation.

I’ve predicted many times over that when AI self-driving cars get into car crashes, society is going to become hyper-focused on why and how it happened, and the entire future of AI self-driving cars is going to be based on these instances. It becomes the classic “bad apple” that spoils the entire barrel.

I know many AI developers are frustrated that this can occur and feel that it is unfair of society and the media to react in such a manner, but, hey, that’s the way the cookie crumbles.

Generally, the public and the media are not especially forgiving about AI self-driving cars getting involved in car accidents.

Tesla's Elon Musk has bitterly complained that society over-hypes these instances and should balance them against the thousands of car accidents involving conventional cars, but he's fighting an uphill battle if he expects society to view AI self-driving cars in that kind of context.

Auto makers and tech firms need to be doing as much as they can to cope with not only avoiding car crashes but also being able to have the AI enter into a kind of “crash mode” when a car accident is either imminent or underway.

If the automakers and tech firms don't have such a provision in their AI, besides meaning that the AI will act in a somewhat willy-nilly manner during a car accident, I would predict that they are going to be faced with some hefty legal bills and potential product liability issues.

Lawyers for those humans that are immersed in a car accident are going to ask tough questions about what the AI did, why it did so, etc.

For my article about product liability and AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/product-liability-self-driving-cars-looming-cloud-ahead/

For my article about the lawsuits over AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/first-salvo-class-action-lawsuits-defective-self-driving-cars/

For the crossing of the Rubicon, see: https://www.aitrends.com/selfdrivingcars/crossing-the-rubicon-and-ai-self-driving-cars/

For the public perception of AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/roller-coaster-public-perception-ai-self-driving-cars/

Tick-Tock Goes The Clock

As suggested earlier, let's put a stopwatch on the car accident aspects and assume that at the initial point of impact we start the clock ticking.

This is almost as though we are able to slow down time and do a slow-motion analysis of a car accident.

In a manner of speaking, we might look at this from the perspective of “speeding up” rather than slowing down. A human might not be able to give much mental concentration to a car accident once the accident begins to unfold. The time allotted is very short, perhaps fractions of a second or just a few split seconds.

On the other hand, the AI might be running on some very fast computer processors and so it could potentially do a lot of computational processing in that rather short amount of time.

There are those that would argue too that the AI won't go into shock and so it will keep its wits about it, while a human is likely to lose their presence of mind. In my car accident, I still don't know exactly what happened from the moment of impact until I was suddenly aware that I was seated in my car and something untoward had just occurred. Whether I was in shock or maybe blacked out momentarily, we presumably don't need to worry about the AI suffering that same fate.

I would though wish to put a caveat on this idea that the AI won’t suffer from the shock aspects.

Those that make such a claim are leaving out the important element that the AI is running on computer processors that are on-board the self-driving car. When the self-driving car is getting rammed, there is a high chance that those processors are going to suffer too. The physical crushing and blows to the car are likely to mess with the electronics of the computer processors and computer memory on-board the self-driving car.

In a manner of speaking, you could assert that there is a chance that the AI will go into “shock” or maybe we call it “artificial shock,” involving damage being done to the AI systems and its on-board computers.

This could alter what the AI is able to do during the crash itself. What kind of fail-safe capabilities does the AI have?

For more about fail-safe AI, see my article: https://www.aitrends.com/selfdrivingcars/fail-safe-ai-and-self-driving-cars/

For cognitive timing of the AI, see my article: https://www.aitrends.com/selfdrivingcars/cognitive-timing-for-ai-self-driving-cars/

For my article about self-awareness being crucial to AI, see: https://www.aitrends.com/selfdrivingcars/self-awareness-self-driving-cars-know-thyself/

Maybe the AI becomes completely inoperative once the crash gets underway and is not able to do anything at all.

Maybe the AI is messed up and does not realize that it has become messed up, and yet still tries to drive the car, doing so in a manner that actually makes the situation worse!

Overall, the special "crash mode" of the AI needs to be able to discern what it can and cannot do, what its own status is in terms of working properly, and have a number of contingencies ready to go.

Special AI Capability For Crash Mode

There is no doubting that the “crash mode” becomes a highly complex problem.

The self-driving car is likely becoming less drivable as the car crash clock ticks, starting at time t=0. At some time, call it t+1, perhaps the brakes are no longer functioning. At some later time, t+2, it could be that the car is now in a slide as a result of the ramming and the wheels are unable to gain traction to redirect the self-driving car. And so on.
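
Here is a minimal sketch of tracking that capability loss along the crash clock. The timings and the particular failures are hypothetical; the point is that the crash-mode AI needs a running picture of what is still usable at each tick.

    # Sketch of capability loss along the crash clock, where t=0.0 is the
    # initial point of impact. The event times and failures are invented.

    crash_timeline = [
        (0.0, "impact_begins", {"brakes": True,  "steering": True,  "traction": True}),
        (0.4, "brakes_cut",    {"brakes": False, "steering": True,  "traction": True}),
        (0.9, "car_in_slide",  {"brakes": False, "steering": True,  "traction": False}),
    ]

    def capabilities_at(t):
        # Return the most recent known capability snapshot at crash-clock time t.
        current = crash_timeline[0][2]
        for event_t, _, caps in crash_timeline:
            if event_t <= t:
                current = caps
        return current

    print(capabilities_at(0.5))  # brakes already gone, steering still live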

I had earlier mentioned that I wasn’t sure in my car accident as to the capability of my limbs, such as whether I still had any ability to keep my arms and hands on the steering wheel or have my foot on the brake pedal.

The AI is going to have similar “driving controls” issues to cope with. Though the AI doesn’t have arms or legs, it does have electronic systems and various means to undertake the driving controls of the self-driving car.

Are those driving controls still available to the AI or might they have been damaged or cut as a result of the underway car crash as it evolves in those split seconds?

In essence, you have these aspects:

  • AI system as “mindset” for driving the car
  • Sensors of the self-driving car that the AI needs to sense what’s happening
  • Car controls for the AI to use to drive or control the self-driving car

The AI system itself might be degraded or faulty during the car crash and must have a provision to ascertain its own status and reliability. This then would be used to decide which actions the AI ought to take, and also which actions it ought not to take.

The sensors such as the cameras, the LIDAR, the radar, and the ultrasonics can become degraded or faulty during the car crash, meaning that whatever the sensor fusion reports might be false or incomplete, and that the updates to the virtual world model might in turn be faulty. The AI action planner needs to ascertain which aspects of the sensors and virtual world model still make sense and which are now suspect.

The car controls might no longer be accessible by the AI, due to the car crash aspects as they unfold.

Or maybe the AI can issue car control commands but the controls themselves are non-responsive, or the controls attempt to carry out the order but the physics of the car and the evolving situation preclude the car from physically executing the instructions.
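
A crash-mode AI therefore cannot simply fire-and-forget its commands; it has to issue a command and then verify the result. Below is a hedged sketch of that issue-then-verify pattern, distinguishing the three failure flavors just described. The control channel and feedback reader are stubs standing in for real actuator interfaces.

    # Sketch of issue-then-verify command handling during a crash.
    # The stub control and feedback values are hypothetical.

    class StubBrakeControl:
        def send(self, command):
            pass  # pretend the electrical link survived the impact

    def issue_and_verify(control, command, read_feedback):
        try:
            control.send(command)              # may raise if the link is severed
        except ConnectionError:
            return "control_unreachable"       # channel to the control is cut
        feedback = read_feedback()
        if feedback is None:
            return "actuator_unresponsive"     # order accepted, nothing moved
        if abs(feedback - command) > 0.2:
            return "physics_overriding"        # actuator tried, car did not comply
        return "executed"

    # Stubbed scenario: brake commanded to 0.8, car only achieves 0.1.
    print(issue_and_verify(StubBrakeControl(), 0.8, read_feedback=lambda: 0.1))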

Impacts To Human Passengers Inside The Autonomous Car

One aspect that I’ve not brought up herein involves the AI having to decide what to do about any human passengers that are in the AI self-driving car.

This is quite important and must be taken into consideration.

Here’s what I mean.

For the AI to consider what action to take during the car crash, there is the matter of how the humans within the AI self-driving car are going to be impacted too.

Which is better or worse for the passengers: having the AI attempt to accelerate out of the full impact, or instead letting the impact happen but steering the car so that the blow lands on a side away from where the humans are sitting?

The crux is that the number of human passengers, where they are seated, possibly their size and age (adult versus child), could all play into how to “best” respond to the car crash as it is underway. This takes us again into an ethics laden situation. If the AI can find a means to more likely save let’s say an adult in the self-driving car versus the child, should it take such action, or should it attempt to save the child more so than the adult?

I know that you might be saying that the AI should seek to save all humans inside of the AI self-driving car. Sorry, that’s too easy an answer.

There is a myriad of options that the AI might be able to consider.

Each of those options will involve uncertainties.

We also need to consider the humans outside the AI self-driving car, such as there might be pedestrians standing nearby that are at risk, and humans in the other nearby cars that are cascading into the car crash.
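
As a toy illustration of weighing occupants and bystanders, here is a sketch that picks which side of the car should absorb an unavoidable impact. The seating positions, vulnerability scores, and pedestrian weighting are all invented assumptions; choosing such weights is precisely the ethics-laden problem being described.

    # Sketch of choosing the impact side given hypothetical occupant
    # positions and nearby pedestrians. All numbers are invented.

    occupants = [
        {"seat": "front_left", "side": "left",  "vulnerability": 1.0},  # adult
        {"seat": "rear_right", "side": "right", "vulnerability": 2.0},  # child
    ]
    pedestrians = {"left": 0, "right": 2}   # bystanders near each side

    def side_risk(side):
        occupant_risk = sum(o["vulnerability"] for o in occupants if o["side"] == side)
        return occupant_risk + 3.0 * pedestrians[side]   # weighting is an assumption

    impact_side = min(("left", "right"), key=side_risk)
    print(impact_side)  # steer so the impact lands on the lower-risk side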

For my article about uncertainties and AI, see: https://www.aitrends.com/selfdrivingcars/probabilistic-reasoning-ai-self-driving-cars/

For pedestrians as roadkill, see my article: https://www.aitrends.com/selfdrivingcars/avoiding-pedestrian-roadkill-self-driving-cars/

For my Top 10 predictions about AI self-driving cars, see: https://www.aitrends.com/ai-insider/top-10-ai-trends-insider-predictions-about-ai-and-ai-self-driving-cars-for-2019/

For the aspects about rear-end collisions, see my article: https://www.aitrends.com/selfdrivingcars/rear-end-collisions-and-ai-self-driving-cars-plus-apple-lexus-incident/

Twists And Turns Involved

The AI has quite an arduous problem to solve during a car crash.

That's partially why it is being avoided by some AI developers: it is a really tough nut to crack.

I don’t think that this means that we should just shrug it off and wave our hands in the air.

Without any kind of crash mode capability, the AI is going to be potentially useless and merely add fuel to the fire of the self-driving car becoming a kind of unguided missile.

When I say this, there are some that try to retort that the AI might have a crash mode and try to deal with a car crash as it evolves, and yet ultimately be unable to do anything of substance anyway. It might be that the car controls are unavailable or non-functioning. It might be that the choices of what to do are so rotten that doing nothing is a better choice. Etc.

Yes, it is true that the AI might end-up not being able to lessen the car crash repercussions. Does this mean that the AI should not even try to do so? Are you willing to toss away the chance that the AI might be able to assist? I don't believe that's prudent, nor is it what we might hope a true AI self-driving car will do.

Each situation will have its own particulars that dictate what becomes feasible during the crash.

Was there any in-advance indication of what was about to occur?

Was any preparation possible prior to the actual crash?

Once the crash began, what possibilities existed of still being able to exert control over the self-driving car?

Throughout the car crash, what could be done and what was done?

There are also the post-crash aspects.

If the AI self-driving car still has some functional capability, the AI should be trying to ascertain what to do. Is the car still drivable such that the AI can pull the self-driving car off to the side of the road, and thereby avoid possibly getting hit by other traffic that might soon come upon the accident scene?

The AI must be continually monitoring the car controls status to try to discern what is usable and what is not (a small monitoring sketch follows the list):

  • No steering, limited steering, steering is stuck
  • No accelerator, limited accelerator, accelerator stuck
  • No brakes, limited brakes, brakes stuck
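
Here is a minimal sketch of such a continual status poll, using the three-way distinctions above. The probe function is a hypothetical stub; a real system would issue tiny test inputs and read actuator feedback.

    # Sketch of a control-status poll. The probe results are stubs;
    # "ok", "limited", "none", and "stuck" mirror the list above.

    CONTROLS = ("steering", "accelerator", "brakes")

    def probe(control):
        # Stubbed statuses for illustration only.
        return {"steering": "limited", "accelerator": "none", "brakes": "stuck"}[control]

    def usable_controls():
        status = {c: probe(c) for c in CONTROLS}
        return {c for c, s in status.items() if s in ("ok", "limited")}

    print(usable_controls())  # e.g. {'steering'} -> plan steering-only actions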

Use Of Machine Learning

Here’s a bit of a twist that might catch your interest.

Some say that we should be using Machine Learning (ML) or Deep Learning (DL) to cope with and aid the crafting of the special “crash mode” of the AI for a self-driving car.

The notion is that the use of deep or large-scale artificial neural networks might allow the AI to identify patterns in what to do during car crashes. By examining perhaps hundreds or thousands of car crashes, in the same way that the ML or DL studies pictures of street signs to identify what street signs consist of, maybe the AI could become versed in handling car crashes.

This seems sensible.

One question for you: where are we going to get all of the car crash data needed to do the ML or DL training? Right now, the automakers and tech firms are doing everything they can to avoid car crashes.

There isn’t a vast trove of car crash data available to do this kind of pattern matching and training with.

Sure, there are tons of car crashes daily that are occurring with conventional cars.

This though does not encompass the kind of car crash data that we need to have collected. For today’s car crashes, at best there is info about what happened before a crash and what the end result was.

Whatever happened in-between is generally neither captured as data nor analyzed.

Unfortunately, there is not a handy treasure trove of car crash data that includes what took place during the crash itself.

The closest that we can come would be to use simulations.

The simulations though need to be based on the reality of what happens during car crashes. This might seem obvious, but I point it out because it is "easy" to build a simulation based on aspects that have little to do with what really happens in the real-world. Training ML or DL on simulations that aren't realistic is not likely to be very helpful, though it at least provides a potential step forward.
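
To show the general shape of the idea, here is a sketch of generating labeled crash episodes from a simulator. The toy "physics" and the features are simplified stand-ins, not a validated crash model; the open question raised above is precisely whether any such simulator reflects real crash dynamics.

    # Sketch of generating simulated crash episodes for ML training,
    # given the shortage of real during-crash data. Toy physics only.

    import random

    def simulate_crash_episode():
        impact_speed = random.uniform(5.0, 25.0)     # m/s of the ramming car
        escape_headroom = random.uniform(0.0, 3.0)   # seconds of warning
        accelerated = random.random() < 0.5          # candidate AI action
        # Toy rule: accelerating during the warning window reduces damage.
        damage = impact_speed - (4.0 * escape_headroom if accelerated else 0.0)
        features = [impact_speed, escape_headroom, int(accelerated)]
        label = 1 if damage < 10.0 else 0            # 1 = acceptable outcome
        return features, label

    dataset = [simulate_crash_episode() for _ in range(10000)]
    # 'dataset' could now feed a supervised learner; whether the simulator
    # matches real crash dynamics is the crux of the matter.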

For more about Deep Learning, see my article: https://www.aitrends.com/selfdrivingcars/plasticity-in-deep-learning-dynamic-adaptations-for-ai-self-driving-cars/

For ensemble Machine Learning, see my article: https://www.aitrends.com/selfdrivingcars/ensemble-machine-learning-for-ai-self-driving-cars/

For my article about street signs and DL/ML, see: https://www.aitrends.com/selfdrivingcars/making-ai-sense-of-road-signs/

For my article about the importance of simulations in AI self-driving car development, see: https://www.aitrends.com/selfdrivingcars/simulations-self-driving-cars-machine-learning-without-fear/

Conclusion

We need to have the AI of a self-driving car be able to deal with car crashes.

This includes not just the pre-crash aspects and the post-crash aspects, which is usually where the attention of AI developers is aimed.

There must be a “crash mode” that is able to cope with the unwinding or evolving elements that happen during a car crash.

The crash mode could be a kind of last-resort core portion that does what it can to try to keep aware of the moment-to-moment situation and exert any car control that it can, doing so in hopes of minimizing injury, death, or damages. Similar to humans, in a manner of speaking, the AI can suffer from a type of “artificial shock” that means it will become degraded in being able to figure out what is taking place and what can be done about the emerging situation.

The complexities during a crash are enormous.

What on the self-driving car is still working and usable?

What is the status of the humans on-board? What is the situation outside the self-driving car?

How can all of these variables be coalesced into a sensible plan of action and carried out by the AI?

The odds are that whatever plan the AI derives will need to be instantly re-planned, given that the situation is rapidly changing.

Other than demolition derby drivers, I’d suggest that most drivers are unable to remain steady and have the presence of mind during a car accident to do much to mitigate the consequences. The AI has a chance to be that demolition derby driver, though let’s subtract the part about wanting to purposely hit other cars as is the goal of a derby.

The AI potentially has fast-enough processing speed to try to find ways to cope with the car crash while it is occurring and take rudimentary actions related to the self-driving car. For the sake of AI self-driving cars, and for the sake of human lives, let’s put some keen focus on having “crash mode” savvy AI.

Copyright 2020 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]


Posted on

Car Parts Thefts and AI Autonomous Cars


An AI self-driving car with its superb cameras, numerous radar and ultrasonic devices is a goldmine for car parts thieves. Credit: Getty Images

By Lance Eliot, the AI Trends Insider 

What do manhole covers, beer kegs, and catalytic converters have in common?

 This seems like one of those crafty questions asked when interviewing for a job at a high-tech firm. 

It is admittedly a tricky question. 

The answer is that they are items that at one point or another were being stolen in large numbers. 

For manhole covers, there was a case last year in Massachusetts of a man who stole seven of them and tried to sell them for scrap metal at a salvage yard, and some countries face a raft of such thefts.

A few years ago, beer kegs were being stolen to the tune of 300,000 kegs a year, sold for about $50 each as steel scrap.

Currently, the United States is seeing a surge in thefts of catalytic converters. 

You might assume that the thieves are trying to sell the catalytic converter as a used part and hoping to get unsuspecting car repair shops and car owners to buy these “hot” devices as a replacement when an older one on your car is not working or gets damaged in an accident. 

Nope. 

The thieves are seeking to grab the palladium that’s in the catalytic converters.

Palladium?

Palladium is a rare metal that is silvery and white in overall color. 

It is part of the Platinum Group Metals (PGM) and has the handy properties of a low melting point and being the least dense of the PGMs.

 From a humanity perspective, the palladium in a catalytic converter is essential as it converts around 90% of the noxious gases coming out of your car’s exhaust into much less dangerous chemicals (generally producing carbon dioxide, nitrogen, and simple water vapor). Palladium is also used in fuel cells and when combined with oxygen and hydrogen is a nifty producer of electrical power, heat, and water.

As they say, palladium doesn’t grow on trees. 

You need to find it and mine it. 

Or, as the thieves would say, you need to remove it and steal it. 

Your catalytic converter is usually in the under-body of your car and sits between the exhaust pipe and the engine of your car. In Chicago, thieves have recently gone along a block in the wee hours of the morning and, one-by-one, crawled underneath cars to remove and steal the catalytic converters. Another popular approach involves going into a parking lot such as at an airport and stealing the catalytic converters from the long-term parked cars.

Here's a bit of a shocker: palladium is now worth more than gold. No need to give someone a gold watch or a gold necklace; they should be happier to get a palladium-coated gift. It is estimated that thieves typically get around $200 to $400 when selling the catalytic converter's palladium, depending upon the condition and how many grams are contained in the device.

 Is it difficult to steal a catalytic converter?

Sadly, no. 

With just a few tools and a few minutes, you can seemingly readily take one.

 You crawl underneath the target car and do your handiwork. There are even online videos on YouTube that show how to do this! Of course, you would not target just any car and instead would want to go after certain kinds of cars, such as ones that are easier to steal from, and that have larger amounts of palladium in the catalytic converter, and are located in an area where you would guess that you won’t get caught during the theft effort.

Car Parts Theft Is Big Business

There are tons of car parts thefts annually in the United States alone (literally, tons worth!). 

Some car parts are stolen to glean the elements within them. 

Some car parts are taken to supply the huge market for used car parts. Perhaps nearly 1 million car part thefts and car thefts are undertaken each year, though it is hard to know how many happen since the crime is not always consistently reported. If you are interested in some fascinating statistics about stolen car parts and stolen cars, the National Insurance Crime Bureau (NICB) provides insightful online stats on the matter.

In the case of stealing a car part and attempting to resell it, an entire black-market supports such efforts. When you go into a car repair shop to get your car fixed, and they offer to put into your car a used part, it could very well be a stolen part that they use. The car repair shop is not necessarily in on the thievery. They might be buying the used parts from seemingly reputable sources. It turns out that the supply chain of car parts is quite muddled and there is a kind of ease with which stolen car parts can appear to be legitimate used parts.

 Some believe that the emergence of blockchain might be a means to curtail such sneaky acts. 

The notion is that all car parts would be recorded in a publicly available online log, accessed via the Internet, making the history and lineage of each car part readily available. Anyone could figure out the status of a car part by merely consulting the blockchain.
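
As a rough illustration of the idea, here is a minimal sketch of a hash-chained provenance log for a car part, the kind of record a blockchain-based parts registry might keep. The fields and events are invented for illustration, not any actual registry's schema.

    # Minimal sketch of a hash-chained car-part provenance log.
    # All fields and event names are hypothetical.

    import hashlib, json

    def add_entry(chain, part_id, event, holder):
        prev_hash = chain[-1]["hash"] if chain else "genesis"
        record = {"part_id": part_id, "event": event, "holder": holder,
                  "prev_hash": prev_hash}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        chain.append(record)

    ledger = []
    add_entry(ledger, "CAT-CONV-123", "manufactured", "OEM Plant 7")
    add_entry(ledger, "CAT-CONV-123", "installed", "Dealership 42")
    # A buyer can walk the chain; a gap or broken prev_hash flags a suspect part.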

For my article about blockchain, see: https://www.aitrends.com/selfdrivingcars/blockchain-self-driving-cars-using-p2p-distributed-ledgers-bitcoin/

 The motivation for stealing car parts is rather apparent when you think about it. 

Car parts are becoming increasingly expensive and therefore the profit to be made by selling a stolen car part is quite handsome. 

If car parts were dirt cheap, it would be difficult to make much money off stealing them. People that own cars are continually in need of getting replacement car parts, due to the wear-and-tear of an existing car part or sometimes due to the car part getting wrecked by a car accident.

I remember one car that I used to own that seemed to fall apart one-car-part at a time. 

I took my car to a small car repair shop that did quick work and was never crowded. They replaced one part that had failed due to wear-and-tear. About a month later, another car part failed. Once again, I took the car to have the part replaced. This kept happening, nearly each month, for a period of about 6 months. The car repair shop probably posted my name on their calendar in anticipation that I’d be there next month and so on.

Admittedly, it was an older car that had not been given much attention to keeping it in shape. I bought it used and knew that it was likely to someday start to come apart at the seams. Once the seams started coming undone, it was a tsunami of car parts failures and replacements. 

What a pain in the neck!

In any case, the car parts thieves are choosy about which cars they steal from and which car parts they try to steal. 

Cars that sell well and the most popular brands tend to be the prime choices for stealing car parts from, since there is a much larger market for the used parts. Another angle is cars that tend to be involved in car accidents, which then creates demand for replacement car parts.

It’s a demand and supply phenomenon. 

Which parts get stolen is generally dictated by the demand exhibited via car repair shops, in combination with the used parts marketplace and the underground marketplace.

The car part needs to have sufficient profit potential, thus the more expensive it is, the more attractive as a target to steal.

The car part needs to be relatively easy to steal. 

If it takes too long to steal it, or if the chances of getting caught are high, thieves are not going to take the risk as readily.

The car part needs to be small enough that a thief can readily cart it away and then transport it to whatever locale might be needed to dispense with it. If the car part is extremely heavy or bulky, it makes the stealing of it much harder and along with complicating a fast getaway with the car part.

The car part needs to be removable without excessive effort. 

If the thief needs a multitude of tools to try to extract the part, or if the part is welded into place, these are barriers to stealing it. Likewise, if there is a car alarm system that the car part will potentially set off when stealing the car part, that’s a no-no kind of car part to try to take.

I realize you might be thinking that it would be “better” for the car parts thieves to consider taking the entire car, rather than futzing around trying to take a particular car part itself. 

Certainly, stealing the entire car would be more efficient, since you would then have all the car parts available. You could either try to sell the entire car intact, or instead take it apart and sell the parts individually.

My Car Was Stolen

This reminds me of the occasion when my sports car was stolen here in Los Angeles.

I was a university professor at the time. 

I had parked my car in one of the on-campus parking lots. There wasn’t any faculty designated parking per se, and so I parked in the same parking structure as did the students, administrators, and the other faculty. Each time that I parked on-campus, I would drive throughout each parking structure, trying to find an available parking spot. It was one of those hunts for a parking spot kind of games, though I sometimes cut it close to class starting time and got into a bit of a sweat about finding an available parking stall.

One day, I parked my car in the mid-morning, getting parked in plenty of time to teach my lunchtime class. I was teaching classes throughout the day. I also was teaching an evening class. At about 9:30 p.m., I walked out to the parking structure to drive home. When I got to where my car was supposed to be parked, it wasn’t there. What? My mind raced as I thought that perhaps I had parked on a different floor and was just confused about where my car was.

I went to each floor of the parking structure and searched diligently for my car. After covering the entire parking structure, I concluded that my car was not there. Had I perhaps parked in a different on-campus parking structure that day? No. I knew I had not. Maybe my car had been towed for some reason; perhaps I had not displayed my faculty parking tag, or my car was sloppily parked and took up more than one parking space.

I dutifully went to the campus security office. 

Had they perchance towed my car? 

No, they said they did not do so. 

They asked me what I did at the campus. When I explained that I was a professor, the office clerk made a small smirk and called over one of the campus security officers. They said they would take me over to the parking structure in one of the campus security golf-like carts and the officer would help me find my car.

Sure enough, the campus security officer drove me to the parking structure and slowly drove along each floor and past each car. "Is that your car?" the security officer would ask. No, I said. In fact, I was getting quite upset that this slow drive through the parking structure was taking place at all. I had already looked and was unable to find my car. The campus security officer insisted we would need to continue the slow-poke search.

After exhausting the parking structure’s set of cars and not having seen mine, I thought that we could now start to discuss what to do about my apparently stolen car. I assumed that an all-out broadcast would take place to alert numerous police and highway patrol to be on the lookout for my car. Police helicopters would take to the skies to find my car. Squad cars would be peeling out of police stations, seeking to find my car. Okay, maybe I watch too many movies and TV shows, but I had in mind that my car was important and by-gosh somehow someone should be looking for it.

The campus security officer proceeded to drive us to another one of the parking structures. Yikes! I bitterly complained and said that this was an utter waste of time. He then explained that they do this because it was possible that I had merely gotten mentally mixed-up about where I had parked my car. He also explained why the clerk had smirked when I said I was a faculty member: they routinely had faculty that would claim their car was gone, yet it was sitting safely in a parking structure where they had parked it earlier in the day.

They had over time gotten used to "genius" level professors that might be incredible researchers but were not able to do everyday tasks very well. These faculty were often forgetful of mundane aspects. Sure, they might be able to explain the intricacies of some arcane area of science or technology and be steeped in grand theories. They though were at times unable to tie their own shoes or keep track of where they parked their car.

I endured an eternity as we drove to each of the numerous parking structures and slowly drove throughout each one. It was painful and I kept thinking that the thieves of my car were merely being handed more time to drive it further and further away. Did they target a faculty member’s car, doing so because they knew that the campus security would waste time trying to find it while letting the clock tick for the escape plan of the thieves?

Anyway, it was not in any of the parking structures. The campus security officer took me back to the main security office and I filled in paperwork. After doing the paperwork, they said that there wasn't anything else to be done. I was puzzled.

Were the helicopters getting underway and the police squads rallying to find my car?

Turns out that there are many cars stolen each day in Los Angeles. 

I might think my car was the only one being stolen, plus it was my treasured sports car, having a great deal of personal sentiment associated with it, yet the reality was that there were lots of cars being stolen daily. Sigh.

Furthermore, the campus security officer explained that the local gangs had an initiation challenge to join their gang, consisting of stealing a car at the nearby university. The best guess was that my car had been stolen by a gang, they would joy ride in it, and then likely take it across the border to a chop shop. At the chop shop, my car would be gutted, and the car parts then put out for resale.

This news was disheartening.

One thing they asked me, adding to my frustration at the time, was whether I had left the keys in my car.

Say what? 

No, I said, of course I didn’t do so.

I found out then that, incredibly, a great many stolen cars are taken with the keys left inside by the owner.

People Dumbly Aid Thieves

The key is often one of the fobs that has the super-duper security capabilities, which automakers spent many millions of dollars perfecting. Turns out that last year, it is estimated that around 60% of all stolen cars in the United States were taken by the thief simply using the keys left inside the car.

For those of us that are technologists, it highlights how humans can undermine the best of technology by how they behave. 

In spite of the handshake security capabilities that took years to perfect and embody in a key fob, it turns out that a lot of people merely leave the fob sitting inside the car. A car thief does not need any special skills to steal such a car.

This is reminiscent of the "hacking" of people's online accounts or their PCs or their IoT devices.

Many people use a password that is easily guessed. I'm sure you are as frustrated as I am that many of the so-called hackings of people's accounts are not due to any shrewd computer hacker, but instead to the simplistic and mindless act of trying obvious passwords.

I say this is frustrating because the news often portrays these thieves as some kind of computer geniuses, when in actuality the humans owning the accounts were lazy or ill-informed about setting better passwords.

For my article about back-door security issues, see: https://www.aitrends.com/selfdrivingcars/ai-deep-learning-backdoor-security-holes-self-driving-cars-detection-prevention/

For my article about issues with IoT devices, see: https://www.aitrends.com/selfdrivingcars/internet-of-things-iot-and-ai-self-driving-cars/

For more about stealing of cars, see my article: https://www.aitrends.com/selfdrivingcars/stealing-secrets-about-ai-self-driving-cars/ 

I tell the story of my car being stolen to point out that it was apparently more likely that my car would be turned into a treasure trove of used car parts than sold as an entire stolen car.

This makes sense. Selling a stolen car is likely to be riskier and requires that someone come up with a larger bag of cash to acquire it. Yanking off the parts and selling those is less chancy.

Unbelievably, at times the value of the parts exceeds the value of the car itself. In other words, the amount of money you can make by selling the stolen parts is more than if you tried to sell the entire car. When you toss into the equation the troubles and risks involved in selling an intact stolen car, the notion of taking the car to a chop shop makes a lot of sense. Divide up the car and sell the parts, then find a place to discard or bury or destroy whatever might be left over.

AI Autonomous Cars And Parts Theft

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. 

One question that I often get asked at industry conferences involves whether AI self-driving cars will be subject to the stolen car parts marketplace. 

I believe so.

Allow me to elaborate.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the automakers are even removing the gas pedal, the brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/ 

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/ 

For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/ 

Let’s focus herein on the true Level 5 self-driving car. Many of the comments apply to the less-than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here are the usual steps involved in the AI driving task (a simplified sketch follows the list):

  •         Sensor data collection and interpretation
  •         Sensor fusion
  •         Virtual world model updating
  •         AI action planning
  •         Car controls command issuance
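To make these stages a bit more concrete, here is a deliberately simplified and purely hypothetical Python sketch of one driving-task cycle. All of the names are my own invention for explanatory purposes and do not correspond to any actual auto maker's system; a real self-driving stack is vastly more complex.

```python
# A deliberately simplified, hypothetical sketch of the five-stage
# driving cycle listed above. All names are illustrative only.

class WorldModel:
    """The AI's internal virtual picture of the surrounding roadway."""
    def __init__(self):
        self.objects = []

    def update(self, fused_objects):
        self.objects = fused_objects


def fuse(readings):
    # Sensor fusion: merge per-sensor detections into one object list.
    # (Real fusion reconciles conflicts, timestamps, confidences, etc.)
    merged = []
    for sensor_name, detections in readings.items():
        merged.extend(detections)
    return merged


def plan_next_action(world_model):
    # AI action planning: trivially brake if anything was detected.
    return "brake" if world_model.objects else "cruise"


def driving_cycle(sensors, world_model):
    # 1. Sensor data collection and interpretation
    readings = {name: collect() for name, collect in sensors.items()}
    # 2. Sensor fusion
    fused = fuse(readings)
    # 3. Virtual world model updating
    world_model.update(fused)
    # 4. AI action planning
    action = plan_next_action(world_model)
    # 5. Car controls command issuance (steering, throttle, braking)
    print(f"Issuing car control command: {action}")


# Example: one cycle in which the camera "sees" a pedestrian.
driving_cycle({"camera": lambda: ["pedestrian"]}, WorldModel())
```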

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a Utopian world in which there are only AI self-driving cars on public roads. Currently, there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/ 

For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/ 

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

Returning to the topic of stolen car parts, let’s consider how this might apply to the advent of AI self-driving cars.

First, let’s all agree that an AI self-driving car is still a car. 

This might seem obvious, but I assure you that a lot of people seem to think that when you add the AI aspects to a car, the car somehow transforms into a magical vehicle. It’s a car.

I mention this so that it is perhaps apparent that the same aspects of car parts being stolen are going to apply to the conventional parts of a self-driving car. If there are AI self-driving cars that have catalytic converters, and if the price of palladium remains high, you can reasonably assume that car thieves might want to steal the catalytic converter from your AI self-driving car.

So, overall, yes, AI self-driving cars face the same dangers of car parts thievery as might be the case for conventional cars. 

There are some important caveats to consider.

Why Autonomous Car Parts Theft Is Special

I mentioned earlier that I anticipate the adoption of AI self-driving cars will be a gradual build-up, and we’ll continue to have conventional cars during that same time period.

This means that there might not be many AI self-driving cars on the roadway. 

Recall that an important aspect of being a targeted car for stolen parts is that the car itself is popular. 

The more cars of a particular brand there are in the marketplace, the greater the demand for its used car parts.

In the case of AI self-driving cars, the number of AI self-driving cars might be so low for the initial adoption period that it is not as worthwhile to steal a car part from an AI self-driving car as it is to steal from a conventional car that has greater popularity. The thieves tend to react based on marketplace demand. If there aren’t many AI self-driving cars out and about, there’s little incentive to steal parts from those cars.

In a similar form of logic, if AI self-driving cars are not readily found on the streets, they are a harder target to find and steal from.

The nice thing about popular cars, from a car parts thievery perspective, would be that you can find those popular cars just about anywhere, parked along the street, parked in mall parking lots, and so on.

The relative rarity of AI self-driving cars at the start, before there is a gradual shift toward AI self-driving cars and away from conventional cars, means that finding an AI self-driving car to steal parts from will be an arduous task. I’m not saying it is impossible, and I am sure that if the thieves think it worthwhile, they could hunt down the locations of AI self-driving cars.

Another factor to consider about AI self-driving cars will be their likely use on a somewhat non-stop basis. 

The notion is that an AI self-driving car is more likely to be used in a ride sharing service and therefore will be used potentially around-the-clock. If you don’t need to hire a human driver, you can keep the AI driving the car all the time. The more time the self-driving car is underway, and assuming you can fill it with paying passengers, the more money you make from owning an AI self-driving car.

This brings up the facet that trying to get to a parked and motionless AI self-driving car might be harder than it seems at first glance. A conventional car is parked and motionless for most of its time on this earth. You park your conventional car in a mall, and it waits until you come back to use it. That’s not going to be the case presumably for AI self-driving cars. A self-driving car is more likely roaming to find paying passengers, rather than being parked for hours at a time.

Thus, besides there likely being fewer AI self-driving cars on the roadway than conventional cars during the early stages of adoption, even once AI self-driving cars become prevalent they are bound to be underway most of the time. This makes stealing car parts problematic in that the car itself is a kind of moving target. Imagine trying to slide underneath an AI self-driving car that is slowly cruising the street waiting to be hailed by a passenger, and somehow extracting the catalytic converter – that’s a real magic act.

For ridesharing and AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/ridesharing-services-and-ai-self-driving-cars-notably-uber-in-or-uber-out/

For my article about the invasive curve and AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/invasive-curve-and-ai-self-driving-cars/

For the carjacking or robojacking of AI self-driving cars, see my article: https://www.aitrends.com/features/robojacking-self-driving-cars-prevention-better-ai/ 

For the non-stop use of AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/non-stop-ai-self-driving-cars-truths-and-consequences/ 

For my article about the affordability aspects of AI self-driving cars, see:  https://www.aitrends.com/selfdrivingcars/affordability-of-ai-self-driving-cars/

You might be wondering: how many of the varying-level AI self-driving cars currently being tested are having their car parts stolen?

None that I know of.

Does that undermine the notion that there will be car parts stolen from AI self-driving cars? 

No, it does not undermine the argument.

Being Tested Autonomous Cars Are Well-Protected

As already stated, when the number of cars of a particular brand or model is low, there is little interest in trying to steal their car parts. That’s certainly the case right now with the low number of AI self-driving cars of one kind or another being trial run on roadways.

An even greater factor right now is that the AI self-driving cars that are being trial run have a quite devoted crew that keeps those AI self-driving cars in tiptop working order. Unlike a conventional car, these special testing AI self-driving cars are handled delicately and devotedly by assigned car mechanics and engineers.

These AI self-driving cars are kept parked in secure locations, and they are maintained to a meticulous standard. The auto makers and tech firms do not want their existing tryouts of AI self-driving cars to be undermined by parts that fail or wear out. These are spoiled cars, receiving by-hand daily care.

A car parts thief is unlikely to find any of these AI self-driving cars, and even if they did, the cars would be in a secure location with a crew doting on the self-driving car whenever it is parked.

When an AI self-driving car is in motion, the cameras are capturing and analyzing image and video streams of the roadway around the self-driving car, and the other sensors, such as the radar, the ultrasonic units, and the LIDAR, are doing likewise. Any car parts thief attempting to steal from an underway AI self-driving car would likely get caught on camera, making the theft rather foolish, though I suppose they could try to wear a mask and disguise themselves.

I’ve already pointed out that since the AI self-driving car will be in motion most of the time, a masked thief still has little chance of stealing any of the car parts. Admittedly, once the AI self-driving car is parked and motionless, the sensors are often no longer powered and in use, which means a thief could possibly go undetected when stealing parts from the car. This will be something important to consider once AI self-driving cars become prevalent.

It is anticipated that once there are a lot of AI self-driving cars in the marketplace, it might not be the case that they will all be cruising around all the time. It is suggested that there might be special areas that the AI self-driving cars go to wait for being summoned. These staging areas would resemble parking lots. The idea is that rather than an AI self-driving car driving around endlessly, which might not be very efficient, they would sit in temporary staging areas and await a request to be used.

I mention this because few of the AI self-driving car pundits have realized that we might reach a kind of saturation point in the prevalence of AI self-driving cars. It is admittedly a long way off in the future.

Overall, the notion is that if there is a saturation level in a given locale, it might not be prudent to have an AI self-driving car driving around and around, waiting to be put to use. The endless driving is going to cause wear-and-tear on the self-driving car, plus it will be using up whatever fuel is used by the AI self-driving car. 

As such, we might reach a point where it makes sense to have an oversupply of AI self-driving cars be staged in a temporary area until summoned.

Where Thieves Will Go

I suppose the staging area could be a place to try to steal car parts from those waiting AI self-driving cars. 

I’ll guess that by the time we have such a prevalence of AI self-driving cars, those staging areas would be well-protected and well-monitored. Seems less likely an easy target for car parts thievery.

One aspect that will potentially make AI self-driving cars an attractive target is the specialized components included on the self-driving car for AI driving purposes.

An AI self-driving car has lots of superb cameras, numerous radar devices, possibly LIDAR, ultrasonic devices, and so on. This is exciting technology, but it is also a potential goldmine for car parts theft. I can imagine the car thieves already salivating at this.

These sensory devices are an ideal target for car parts theft. They tend to be expensive, and as I mentioned earlier, the value of a car part is a crucial attraction for car parts thievery. There are many of them on each AI self-driving car, making the car target-rich. They also tend to be small, meaning that a thief can readily remove and transport them.

The question is whether these sensors can be readily removed from an AI self-driving car or not.

On the one hand, if the sensors are well-embedded and bolted or meshed into the car, it is going to be harder for car parts thieves to steal them. Unlike the ease of taking a catalytic converter, these sensors might be arduous to extract from the self-driving car. As mentioned earlier, the longer it takes to steal a car part and the more difficult the removal, the less worthwhile the part is to steal. A rough illustration of this trade-off appears below.
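As a back-of-the-envelope illustration of those factors, consider the toy scoring heuristic below, written in Python. To be clear, this is my own invented sketch for explanatory purposes, not a formula used by thieves or by the industry, and the numbers are entirely made up.

```python
# Toy heuristic (invented for illustration): a part's attractiveness to
# thieves rises with resale value and falls with removal time and difficulty.

def theft_attractiveness(resale_value, minutes_to_remove, difficulty):
    """Higher score = more tempting target for car parts thieves.

    difficulty: 1 (easy, e.g., a bolt-on add-on sensor) up to
                5 (hard, e.g., a unit deeply embedded in the car body).
    """
    return resale_value / (minutes_to_remove * difficulty)

# Hypothetical numbers purely for illustration:
print(theft_attractiveness(resale_value=300, minutes_to_remove=3, difficulty=1))    # easy catalytic converter
print(theft_attractiveness(resale_value=5000, minutes_to_remove=60, difficulty=5))  # deeply embedded LIDAR
```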

Some are suggesting that we’ll have add-on kits that can convert a conventional car into an AI self-driving car. If that’s the case, it is a potential gift from heaven for the car parts thieves. Presumably, an add-on kit means that you simply augment the car with these added sensors, making them more vulnerable to being taken off the car too. Easy on, easy off. It’s the easy-off aspect that makes the car thieves happy.

This would also make the selling of the stolen car parts easier too. Imagine a thriving market of add-on kits for AI components of an AI self-driving car. A huge marketplace might develop. It will be hungry for these specialized car parts. The underground will work double-time to try and fulfill the demand.

For my article about add-on kits for AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/kits-and-ai-self-driving-cars/

For why AI self-driving cars won’t be an economic commodity, see my article: https://www.aitrends.com/selfdrivingcars/economic-commodity-debate-the-case-of-ai-self-driving-cars/

For my article about reverse engineering of AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/reverse-engineering-and-ai-self-driving-cars/

For safety aspects, see my article: https://www.aitrends.com/selfdrivingcars/safety-and-ai-self-driving-cars-world-safety-summit-on-autonomous-tech/

I’ve already stated in my writings and presentations that I doubt the add-on approach is going to be viable for AI self-driving cars. By this claim, I am suggesting that there won’t be a mass consumer add-on market. There could be a car dealer marketplace for doing this kind of after-market effort, though I doubt that is likely either.

Does the potential lack of an AI self-driving car parts add-on market imply that the sensors on an AI self-driving car will necessarily be difficult to remove?

No, not necessarily.

Here’s why.

Auto makers trying to maintain AI self-driving cars have got to consider that the life of the sensors is limited and that the sensors will break down. When the sensors need to be replaced, if they are hidden and embedded within the body of the self-driving car, it will be harder and costlier to replace them.

I realize that the auto makers might be so focused on just producing a viable AI self-driving car that they don’t care right now about the maintenance side of things. Plus, if you like conspiracy theories, you might say that it is perhaps better for the auto maker to make it arduous and costly to replace the sensors, thus guaranteeing a lucrative maintenance fee after selling the AI self-driving car.

In any case, it isn’t yet clear whether it will be made easy or hard to remove the sensors from an AI self-driving car. This might differ by car maker and car model.

The same question can be asked about the computer processors in the AI self-driving car. AI self-driving cars will be chock-full of likely expensive computer processors and computer memory. These expensive and small sized components would be another potential target for car parts theft. Will these processors be so deeply embedded that it precludes much chance of car part thievery, or will they be readily accessed for maintenance purposes and therefore more prone to being stolen?  We’ll need to wait and see.

For my article about the grand convergence of tech on AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

For sensors such as LIDAR, see my article: https://www.aitrends.com/selfdrivingcars/lidar-secret-sauce-self-driving-cars/

For conspiracy theories about AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/conspiracy-theories-about-ai-self-driving-cars/

For my article about the Top 10 predictions and trends of AI self-driving cars, see: https://www.aitrends.com/ai-insider/top-10-ai-trends-insider-predictions-about-ai-and-ai-self-driving-cars-for-2019 

Anti-Theft Tech Could Help Reduce Thefts

One additional aspect to keep in mind will be advances in anti-theft technologies.

It could be that once we have any prevalence of AI self-driving cars, there will be new advances in car anti-theft systems, and those devices will be included in AI self-driving cars.

If so, it might make stealing car parts a near impossibility.

Imagine a scenario in which a thief attempts to carry out a car parts theft on an AI self-driving car. 

The AI might detect the effort. It could then honk the horn or take some other action to bring attention to the car. It might also use its Over The Air (OTA) capabilities, usually used for pushing updates and patches into the AI system remotely via the cloud, to contact the authorities electronically and report a car parts robbery in progress.

Another futuristic possibility is the use of the V2V (vehicle-to-vehicle) electronic communications that will be included into AI self-driving cars.

Normally, the V2V will be used for an AI self-driving car to share roadway info with another nearby AI self-driving car. Perhaps an AI self-driving car has detected debris in the road. It might relay this finding to other nearby AI self-driving cars. They can then each try to avoid hitting the debris, being able to proactively anticipate the debris due to the V2V warning provided.

Suppose one AI self-driving car notices that a thief is trying to steal the parts from another AI self-driving car. The observing AI self-driving car might try to honk its horn or make a scene, or it might via OTA contact the authorities, or it might try to wake up the AI self-driving car that is the victim, doing so by sending a V2V urgent message. Assuming that the victim AI self-driving car was “asleep” and parked, the V2V message could awaken it, and then the “victim” self-driving car could sound an alert or possibly even try to drive away; a sketch of such an exchange follows.
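Here is a hypothetical Python sketch of what such a V2V theft-alert exchange might look like. The message fields and handler are my own assumptions, since actual V2V message sets are still being standardized and would differ in detail.

```python
import json
import time

# Hypothetical V2V theft-alert from an observing self-driving car to a
# parked ("asleep") victim car. All field names are illustrative only.
alert = {
    "msg_type": "URGENT_THEFT_ALERT",
    "sender_id": "observer-car-17",
    "target_id": "victim-car-42",
    "observed": "person removing sensor hardware",
    "timestamp": time.time(),
}

def handle_v2v_message(car_state, raw_msg):
    """Victim car's handler: wake from low-power parked mode and react."""
    msg = json.loads(raw_msg)
    if msg["msg_type"] == "URGENT_THEFT_ALERT":
        car_state["awake"] = True          # power the sensors back up
        car_state["actions"] = [
            "sound_horn",
            "notify_authorities_via_OTA",  # report the robbery in progress
            "attempt_drive_away_if_safe",
        ]
    return car_state

parked_car = {"awake": False, "actions": []}
print(handle_v2v_message(parked_car, json.dumps(alert)))
```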

For conspicuity and AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/conspicuity-self-driving-cars-overlooked-crucial-capability/

For my article about the OTA aspects, see: https://www.aitrends.com/selfdrivingcars/air-ota-updating-ai-self-driving-cars/

For cryptojacking of AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/cryptojacking-and-ai-self-driving-cars/

For swarm intelligence of AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/swarm-intelligence-ai-self-driving-cars-stigmergy-boids/ 

For my article about sleeping AI and self-driving cars, see: https://www.aitrends.com/selfdrivingcars/sleeping-ai-mechanism-self-driving-cars/

Conclusion

Will the sensors and other AI physical components be easy to steal or hard to steal? 

They are going to be attractive due to their high cost and small size. Auto makers and tech firms are likely not considering right now whether those parts can be stolen or not. Instead, right now it’s mainly about getting them to work and producing a true AI self-driving car.

The marketplace for those devices will be slim until there is a prevalence of AI self-driving cars. 

Therefore, this is a low-chance risk for now. 

We’ll have to wait and see what the future holds.

I can just imagine in the future coming out to my vaunted AI self-driving car, which I would proudly opt to park in my driveway, doing so as a showcase for the rest of the neighborhood, and suddenly realizing that it has been stripped of its parts. 

Many of the conventional car parts might be taken, along with the AI specialized car parts. 

Darn it, struck by car thieves a second time in my life. 

Will it never end? 

I’d hope that helicopters were dispatched immediately, along with police drones, and squad cars, all searching for my stolen AI self-driving car parts. 

Get those dastardly heathens! 

Copyright 2019 Dr. Lance Eliot 

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

Source: AI Trends
Continue reading Car Parts Thefts and AI Autonomous Cars

Posted on

Superhuman AI Misnomer Misgivings Including About Autonomous Cars


The superhuman AI moniker might lead the driving public to believe that the AI of the self-driving car is more capable than it really is. (Credit: Getty Images)

By Lance Eliot, the AI Trends Insider 

I have a beef with the now seemingly in-vogue use of the “superhuman AI” phrase that keeps popping up in the media. 

When I was asked about “superhuman AI” at a recent Machine Learning and AI industry conference, I admit that I wound myself up into a bit of a tizzy and launched into a modest diatribe. 

Now that I’ve calmed down, I thought you might like to know what my angst is about the so-called superhuman AI moniker and why it is important to give the matter of its use some serious consideration.

Have you noticed the phrase? 

It can be subtle and at times easy to miss. 

I’d guess that if you look around, you’ll see that the superhuman AI phrase might have been in any of a number of recent articles you have read about AI breakthroughs, or might have gotten mentioned during a radio broadcast or on a podcast that you listen to while in your car. If so, I realize you might have not given it any thought at all.

In that sense, you could argue that the superhuman AI phrase is not consequential and there’s no reason to get upset about its use. It is perhaps a kind of filler phrase that sounds good and hopefully most people know it likely lacks any substantive true meaning. Just more noise and nothing noteworthy.

On the other hand, there is a potential danger that this superhuman AI phrase is indeed being taken quite seriously by those who are not well versed in AI, and thus it can tend to over-inflate what AI can actually achieve.

There are some in AI that seem to be pleased to inflate expectations about AI, but fortunately there are moderates that rightfully worry that these subtle attempts at overstating AI are going to get everyone into trouble.

The trouble can be that we all begin to believe that AI can do things it cannot, and then allow ourselves to become vulnerable to automated systems that bear no real resemblance to this mythical and made-up notion of today’s AI.

 For my article about AI as a potential Frankenstein, see: https://aitrends.com/selfdrivingcars/frankenstein-and-ai-self-driving-cars/

 For the potential coming singularity of AI, see my article: https://aitrends.com/selfdrivingcars/singularity-and-ai-self-driving-cars/

 For idealism about AI, see my article: https://aitrends.com/selfdrivingcars/idealism-and-ai-self-driving-cars/

 For the Turing test and AI, see my article: https://aitrends.com/selfdrivingcars/turing-test-ai-self-driving-cars/

Superhuman AI Phrase Is Usually Hyped Or Otherwise Misused

I’ll make a bold statement herein and claim that by-and-large when someone reports that they have developed an AI system that is characterized by superhuman AI, they are generally being misleading. 

It could be marketing hype. 

It could be personal bravado. 

It could be that they are purposely, or possibly even inadvertently, deploying hyperbole.

It could be that the person is naive. 

It could be that they are unknowingly overstating things.

It could be that they don’t care whether their statements are accurate or not.

It could be that they are saying it because everyone else is saying it.

And so on.

They might also believe that for their definition of superhuman AI, they are reasonably making use of the phrase.

 This does bring up a bit of a conundrum about the superhuman AI phrase.

There is no across-the-board, all-agreed, standardized and codified definition whereby everyone has said that yes, this is what superhuman AI consists of.

Nobody has laid out precise and demonstrable metrics that we should use to decide concretely whether there is or is not superhuman AI involved. 

Therefore, with no specific rules to be followed, you can use the phrase however you might wish. There isn’t a kind of word-usage cop standing by the roadside that has an improper-AI-phrases radar gun that can detect the true and proper use of the superhuman AI phrase. It is instead a wild west.

You can even fiendishly use the phrase to suggest or imply with a wink-wink that there is really a lot of super-duper AI in your system, but at the same time claim when pressed that you were intending to use it in a less-than over-the-top manner. The looseness of the existing commonplace unstated definition allows for making what you want of the handy and catchy phrase and gives you face-saving wiggle room to do so.

What Does Superhuman AI Mean?

Let’s consider what the phrase potentially means.

The first word, superhuman, we all generally would agree means to accomplish something of an extraordinary nature beyond what a normal human might be able to do. 

The other day a man lifted a car because his child had gotten pinned underneath it. Normally, it is doubtful that he could lift a car by himself. He somehow momentarily gained a kind of superhuman strength, perhaps due to adrenaline coursing through his body, and was able to lift that car.

You could say that he was superhuman. 

But, does this mean that in all respects he is superhuman? 

Can he lift a building like Superman? No.

Could he even lift a car again? Unlikely, unless his child once again got stuck underneath one. 

For all reasonable notions of superhuman, I think we could say that he momentarily displayed an extraordinary strength that a normal person might not typically have.

He wasn’t therefore a now-permanent superhuman that forever would have this super strength. He did not arrive here from the planet Krypton. Instead, he momentarily appeared to engage in an activity that most of us would probably be willing to say seems relatively superhuman-like.

Suppose though that we went out and found a really strong weightlifter and asked that person to lift the same car as the man who had been saving his child. If the strong weightlifter could do so, is that weightlifter also someone we would immediately applaud as being superhuman?

I’d suggest we would not.

This brings up the aspect that when we refer to someone as superhuman, we probably need to have some basis for comparison.

If the basis for comparison is solely confined to what the particular person could normally do, this would seem to greatly dilute the idea of being superhuman.

I tried to open a screw-top can the other day and could not do so. I tried and tried. A few days later, I tried again and managed to squirm and grunt and got that darned lid to turn and come off. 

I was now superhuman!

I don’t think that it seems fair or reasonable to say that I was superhuman in that case. Big deal, I opened a screw-top can that was somewhat jammed up. A lot of humans could do the same. Just because I was able to exceed my prior effort, it doesn’t seem to warrant handing me a trophy as being superhuman. 

I think you would likely agree.

It was the famous Friedrich Nietzsche who, many suggest, first helped to bolster the notion of someone potentially being superhuman.

You might become superhuman by perhaps being genetically bred in a manner that gives you greater strength or greater intelligence than other humans. Or, maybe you have a cybernetic implant inserted into your body that gives you superhuman strength, or you take special drugs that make your mind more powerful and superhuman, similar to what is often portrayed in many science fiction movies.

There is a slippery slope between talking about superhuman and contemplating the aspects of Superman or Superwoman. 

The word “super” is used in the case of superhuman and can allude to having super powers, so you then start to think of Superman or Superwoman. Even though they don’t exist and are only fictional characters, the superhuman word nonetheless gets a glow from those now ubiquitous fake characters. It is easy therefore to mentally conflate “superhuman” with Superman or Superwoman, which is part of the reason that superhuman is a lousy word and distorts our sense of what is real and what is not.

In spite of the troubles associated with the meaning of the word superhuman, we’ll go ahead and add to the mess by appending the “AI” moniker to the superhuman word.

 Superhuman AI. 

Now what do we mean?

 If I develop a tic-tac-toe game that uses AI techniques and even executes on so-called AI hardware, can I claim that my tic-tac-toe game is superhuman AI?

 I wouldn’t think so. 

But, I assure you, there are some that would happily say it is. 

It’s the best darned tic-tac-toe game player ever devised. 

It exceeds anyone else in being able to play tic-tac-toe. 

It will never ever lose a tic-tac-toe game. 

Must be AI. 

Probably a breakthrough. 

Must be superhuman AI.

The superhuman AI phrase especially seems to come up when referring to automated systems that play games. Automated systems have been developed for chess that are able to beat human grand masters and reach new heights of human chess-play scoring.

Some say that’s superhuman AI.

Top playing automated systems for Scrabble reached a zenith around 2006. 

Superhuman AI. 

More recently, in 2016, an automated system was a winner at Go, considered by many to have been a nearly unreachable goal due to the nature of the rules of Go and the kinds of strategies used. Superhuman AI. You’ve likely seen the plentiful ads about IBM Watson winning at Jeopardy. Superhuman AI.

I want to outright congratulate those that were able to get automated systems to play at such a vaunted human level in those games. They used every computer science and AI technique and novelty to get to that accomplishment. Hear, hear!

 Were those all superhuman AI examples?

 Some say that games are great as a means to perfect many AI and computer science techniques and approaches, but they are quite narrow in their domains.

There are games with perfect information, meaning that you are informed of all events that occur throughout the playing of the game, knowing the starting position and each of the moves as they arise. Chess is an example of having “perfect” information since you know how the game starts and you know each move that occurs along the way. Players don’t somehow hide their moves. Also, there isn’t any “chance” involved in the game since there are no dice or other randomizing devices determining the moves. It is straight ahead. Imperfect information games are those not within the definition of a perfect information game.

 Does playing games well or even extraordinarily with an automated system mean that it is superhuman AI?

Suppose that we don’t use any special AI techniques at all, and merely leverage having a much vaster amount of online storage than a human could likely hold in their mind, and we do a great job at searching an extremely large space of pre-calculated best moves (pre-calculated under human-led direction for the automated system).

 Is it fair to toss the “AI” into the superhuman wording?

 Superhuman AI Outside Of Games

Let’s consider domains beyond those of playing games.

I had helped an assembly plant put a robotic arm into their assembly line, doing so to speed-up the line and reduce the labor needed to produce their product. 

Most would agree that the field of robotics generally fits within the overarching definition of AI.

I’d like to brag about the robotic arm work, but admittedly the mechanical arm was merely trained by having a human move the arm back-and-forth until it caught onto the movements needed. I also added various safety-related code to make sure the robotic arm wouldn’t go astray. It was a nice project and would also reduce the various repetitive motion injuries that the human workers had been experiencing while doing the same task. I also trained a former assembly worker in how to take care of the robotic arm and make changes and updates to the code as needed.

Was the robotic arm that I helped customize and get working an example of a superhuman AI accomplishment?

 Yes, you could apparently make such a claim.

It used robotics, which as mentioned generally fits within the AI rubric. It was superhuman because it can easily beat any human at the assembly line task. Whereas before a human could do the overall task about six times per hour, the robotic arm could lift that pace to about 10 times per hour. The robotic arm could work 24×7, with no need to rest or relax or take breaks. For all practical purposes, you could assert that it was superhuman in comparison to what a human could do. You could say it was superhuman beyond any other human, since no human unaided by machinery could possibly work like that.

 Personally, it would give me heartburn to go around and say that this robotic arm was superhuman AI. But, that’s just me.

 You might say that physical things don’t count for the superhuman AI moniker. 

In the case of the robotic arm, it wasn’t “thinking” in any superhuman kind of way. Therefore, maybe we should only use superhuman AI when the matter at-hand involves thinking, akin to winning at chess or Scrabble.

Does the superhuman AI have to be the best in comparison to all humans? In other words, if we construct an AI system that plays chess, and it beats the topmost human chess players, we might then assert or infer that it can beat all humans in that domain and therefore it is rightfully superhuman.

 If it cannot beat all humans in the domain, what then?

Someone develops a Machine Learning capability that is able to read MRIs and find cancerous regions, doing so more consistently than the average medical doctor and at times better than the best medical specialists in that domain. Let’s assume we cannot say it is always better than all humans in that respect. It is only sometimes better.

 Superhuman AI?

 Coming Up With Categories Of Human AI Achievement

Some suggest that we ought to have a graduated series of categories that lead-up to being referred to as superhuman. 

We might do this (a small illustrative sketch follows the list):

  •         Superhuman AI = better than all humans in the domain, as far as we can infer
  •         High-human AI = better than most humans in the domain, high as in heightened
  •         Par-human AI = similar to most humans in the domain or “on par” with humans
  •         Sub-human AI = less than or worse than most humans in the domain, sub-par
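To make the graduated categories concrete, here is a small hypothetical Python sketch that assigns one of the labels based on an estimated fraction of humans in the domain that the AI outperforms. The thresholds are my own invented illustration, not a codified standard.

```python
def human_ai_category(fraction_outperformed):
    """Toy classifier for the graduated categories listed above.

    fraction_outperformed: estimated fraction (0.0 to 1.0) of humans in
    the domain that the AI beats. Thresholds are illustrative only.
    """
    if fraction_outperformed >= 0.999:  # better than all, as far as we can infer
        return "Superhuman AI"
    if fraction_outperformed >= 0.75:   # better than most
        return "High-human AI"
    if fraction_outperformed >= 0.40:   # roughly on par with most
        return "Par-human AI"
    return "Sub-human AI"

# A chess program that, we infer, beats essentially all humans at chess:
print(human_ai_category(0.9999))  # -> "Superhuman AI" (within that one domain!)
```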

Notice that I’ve qualified the superhuman in two important respects, one by the aspect of saying it pertains to a particular domain, and the other that we can only infer that the AI in that case is better than all humans in the domain.

The latter assumption is due to the aspect that we cannot really say whether or not the AI might be better than all humans (unless we could have all humans line up and show that none are better than the AI, all 7.5 billion people on the planet).

 Let’s take chess. 

Just because you can beat the most recent top-rated chess masters does not mean you can beat all humans in chess. There might be someone that is a chess wizard that nobody even knows exists. Or, maybe next week a chess wizard appears seemingly out of nowhere that can play chess better than any other human on Earth. 

Thus, we’ll have to approximate based on whatever kind of circumstance is involved, such as with chess, and we’ll say that the AI is better than “all” humans while knowing that we are making a bit of a leap of faith on that aspect.

Some react to the superhuman AI by believing that the superhuman AI can do anything in any field of endeavor. 

That’s why I’ve put into the aforementioned indications that it is the AI within the domain of choice, such as chess or Go.

This is also why the superhuman AI moniker is on that slippery slope. 

If you tell someone that you have superhuman AI that can play chess, they might think this implies the AI can play any kind of game at that same topmost level. Since we all know that chess is hard, presumably the AI can just switch over and be superhuman at, say, Monopoly (which most people would say is a lot less arduous than chess, so it should be “easy” for a superhuman AI chess-playing system to win at Monopoly).

Of course, today’s so-called superhuman AI instances are all within a narrow domain and do not have the ability to just switch over on their own and be tops at other domains.

Worse still, some people that hear about something that is superhuman AI will often assume that it must also have common sense reasoning and even Artificial General Intelligence (AGI). If it can play chess so well, it likely can solve world hunger or clean up our environment too. Nope, sorry about that, but those aren’t in the cards right now.

 For my article about the limits of today’s AI when it comes to common sense reasoning, see: https://aitrends.com/selfdrivingcars/common-sense-reasoning-and-ai-self-driving-cars/

 For issues about AI boundaries, see my article: https://aitrends.com/ai-insider/ai-boundaries-and-self-driving-cars-the-driving-controls-debate/

 For reasons to consider starting over on AI, see my article: https://aitrends.com/selfdrivingcars/starting-over-on-ai-and-self-driving-cars/

 For conspiracy theories about AI, see my article: https://aitrends.com/selfdrivingcars/conspiracy-theories-about-ai-self-driving-cars/

 Superhuman AI And Autonomous Cars

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. 

There are some within the self-driving driverless car industry that are either using the phrase “superhuman AI” or are letting others use that phrase for them.

Is the superhuman AI moniker applicable to aspects of today’s AI self-driving cars?

 For the fake news about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/ai-fake-news-about-self-driving-cars/

 For the sizzle reels that mislead about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/sizzle-reel-trickery-ai-self-driving-car-hype/

 Allow me to elaborate.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the automakers are even removing the gas pedal, the brake pedal, and the steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human, nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

 For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

 For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

 For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

 For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

 For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

Let’s focus herein on the true Level 5 self-driving car. Many of the comments apply to the less-than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here are the usual steps involved in the AI driving task:

  •         Sensor data collection and interpretation
  •         Sensor fusion
  •         Virtual world model updating
  •         AI action planning
  •         Car controls command issuance

 Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a Utopian world in which there are only AI self-driving cars on public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

 Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

 For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

 See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

 For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

 For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

 Returning to the superhuman AI discussion, let’s consider how this catchy phrase is being used by some in today’s AI self-driving car industry.

For the sensors side of things, it seems like anytime an improvement is made in being able to analyze an image and detect whether there is, say, a pedestrian in a picture, someone will claim the AI or ML based capability is superhuman AI. This seems to suggest that if the shape of a pedestrian can be picked out of a hazy image, the system is somehow better than a human’s ability to do image analysis. Rarely is there much substantive support provided for such a contention.

Furthermore, given the relatively brittle nature of most of today’s image processing capabilities, even if the new routine can do a better job at that one particular aspect, one has to ask whether it is fair and reasonable to then ascribe superhuman AI status to it. We all know that a human could do a much broader kind of image analysis and likely “best” the image processing software in an overall image detection effort.

 We also know that the image processing software has no “understanding” whatsoever about the image that it has detected. It has found a shape and associated it with something within the system tagged as a pedestrian. 

Does it “know” that a pedestrian is a human being that breathes and walks and lives and thinks? 

No. 

Does it “know” that a pedestrian might suddenly run or jump or shout at the self-driving car? 

No.

Yet, is it truly superhuman AI?

 Seems like a stretch.

Trouble’s Afoot With Superhuman AI Proclamations

For those AI developers that would argue it is superhuman AI, I’ll simply repeat my earlier qualm: for an audience unaware of AI’s limitations and constraints as they exist today, your willingness to toss around the superhuman AI moniker is going to get someone in trouble. The public will falsely believe that the AI of the self-driving car is more sophisticated and more capable than it really is. 

Regulators are going to falsely believe that the AI of the self-driving car is more robust and safer than it really is. And so on, down the line for all of the stakeholders involved in AI self-driving cars.

I’d be willing to bet that this wanton use of “superhuman AI” will ultimately come to the spotlight when there are product liability lawsuits lobbed against the auto makers and tech firms that brandished such wording. 

“Didn’t the ‘superhuman AI’ claim mislead consumers into believing that their AI self-driving car could do things that it really could not?” will be a question asked during the case. “By what manner did you arrive at being able to proclaim that your AI self-driving car had this kind of superhuman AI?” will be another question. And so on. 

For my article about the safety concerns of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/safety-and-ai-self-driving-cars-world-safety-summit-on-autonomous-tech/

 For my article about how the levels of AI self-driving car might be misleading, see: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/

 For the responsibility aspects of AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/responsibility-and-ai-self-driving-cars/

 For the dangers associated with being an egocentric AI developer, see my article: https://aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/

 For the marketing of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/marketing-self-driving-cars-new-paradigms/

 Conclusion

Some argue that being perhaps over-the-top is the only way to make sure that funding and energy continues to pour into the AI field.

Presumably, a bit of hyperbole is worth the “cost” as it provides an overwhelming goodness when considered as a trade-off to otherwise losing steam and momentum in the quest to reach true AI. 

If we went around and told people that we are in the midst of AI systems that are par-human, or subpar-human, it might be a shock that would undermine investments and faith in pushing ahead with AI. 

Superhuman AI seems like a modest enough phrase that it can be used without having an abundance of guilt or misgivings, at least for some. 

You’ll have to make that decision on your own and live with it.

Fortunately, I don’t think you need to be superhuman to make the right decision about this.

Copyright 2019 Dr. Lance Eliot 

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

Source: AI Trends
Continue reading Superhuman AI Misnomer Misgivings Including About Autonomous Cars

Posted on

Versioning and AI Autonomous Cars


Hand turning a process knob, a concept of software or app development. (Composite of a photograph and a 3D background.)

By Lance Eliot, the AI Trends Insider

Quick, tell me which version of Microsoft Windows you are running on your PC.

 Is it Windows 10?

Or, maybe it is Windows 8, or Windows 7, or Vista, or XP, or 2000, or 98, or NT, or 95, etc. (that’s a walk down memory lane). 

There have been numerous versions of Microsoft Windows that have been released over the years since its inception. You might have resisted upgrading at some point and still be on an older version of Windows, meanwhile the person seated next to you might be on the most recent version. 

Within versions, there are also releases. For Windows 10, you might be on release 1507 (code named Threshold 1), or 1511 (Threshold 2), or 1607 (Redstone 1), and so on.

Does it make a difference as to which version and which release of that version that you are running? 

Absolutely it makes a difference. Each version and each release has its own set of features and functionality. 

There are things you probably like about any one specific version and its releases, and things that you likely dislike. In spite of your own likes or dislikes, you pretty much get the whole kit and caboodle with a particular version/release and so you have to live with its good parts and its bad parts. Take it or leave it, that’s the motto.

There are also bound to be bugs or errors in whichever version/release you are using. 

Some of those bugs or errors are known and published as being known. Some of those bugs and errors have patches or fixes that you can put in place to overcome the bug or error. There might be some bugs that aren’t yet known, and so they lurk within the system, waiting until possibly a bad moment to arise. Sometimes a version/release gets so riddled with bugs and errors, and has so many irritating “features” that you pine away waiting for the next version, in hopes that maybe you can dump the old one and adopt the new one.

Once again, though, the new one is ultimately going to have drawbacks in capabilities, along with both known and unknown bugs. You aren’t going to wake-up and suddenly find that there’s a version that does exactly what you want, in the way you want it, and that is totally bug free. 

Not going to happen.

AI Autonomous Cars And Versioning

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are working on AI software that will be able to ascertain the behaviors of AI self-driving cars based on their versioning.

Allow me a moment to explain.

 When AI self-driving cars begin to truly populate our traffic mix, they will not be homogeneous. 

By this I mean that not all AI self-driving cars will be the same. 

Some people mistakenly think that all AI self-driving cars will be identical. 

This false belief is predicated on the notion that every self-driving car will have the same AI components and software. You instead need to think about these AI systems in the same manner as you think about Microsoft Windows, namely there will be lots of different versions, many of which will be active at any given point in time.

Suppose you are on the freeway and there are a lot of cars in traffic. 

Let’s assume that some of those cars are conventional cars and are being driven by humans. 

There are also some cars that are AI self-driving cars, of which, some are true self-driving cars at the Level 4 and Level 5, while others are semi-autonomous at Level 3. 

A true self-driving car is one that can be driven entirely by the AI without human intervention and does not need any human driver for the undertaking of the driving task.

Let’s assume we have an AI self-driving car that is Brand Q Model A and Version 8, and another AI self-driving car that is Brand Q Model B Version 1, and there’s also a Brand Y Model Z Version 3, and a Brand Y Model Z Version 2.  

You can pretend that Brand Q is perhaps a Ford self-driving car, and Brand Y is, say, a Toyota self-driving car.

The auto makers are going to have various brands of their self-driving cars and various models, just as they do today for conventional cars. 

You might buy an AI self-driving car that is the latest version, while a friend of yours had bought one a few years earlier and has an earlier model. 

Even within the models of the AI self-driving cars, the AI software is going to be in different versions.

It’s akin to having a room full of PC’s that are running Microsoft Windows.

Some are running version 10 release 1507, others version 10 release 1511. Other PCs in the room are running, say, version 8. And so on. 

The mix of AI self-driving cars in traffic will be just like this. 

Also, some of those PC’s have the latest processors and other components, while some of those PC’s have older processors and lack components that a more modern PC has.  

Some AI self-driving cars will have a radar and cameras that are of a particular type and model, while other AI self-driving cars will have different brands of radars and different brands of cameras. And so on.

I hope you are now past the idea of homogeneous AI self-driving cars. If so, you might be wondering why it matters that our roads are going to have heterogeneous AI self-driving cars.

It matters for the same reason that it matters which version of Windows you are running on your PC: the PC does different things, has differing functions and features, and contains different kinds of bugs and errors, some known and some unknown. 

The same is true for AI self-driving cars. 

The Brand Q Model A self-driving car running Version 8 might be known for taking turns very well, but it’s not so good at handling roundabouts. 

The Brand Q Model B is a superior AI self-driving car to the Brand Q Model A in that it has newly added LIDAR (a light detection and ranging sensor that uses laser light to gauge distances), which the Model A lacks. As such, the Model B can do a better job of detecting pedestrians and spotting other cars farther ahead than the Model A can.

I suppose you could think of this as having different models of conventional cars and having different kinds of drivers driving those cars. 

There are some human drivers that drive in a speedy fashion, and other human drivers that go slowly. There are some human drivers that seem to be able to see far off in the distance, and others that can barely see a few car lengths ahead. The capabilities of the AI self-driving cars will similarly vary.

Some AI pundits might immediately object and say that with OTA (Over The Air) capabilities, we will be able to push new versions remotely to the AI self-driving cars. 

I agree that’s going to be the case. 

But, it does not change the fact that the different auto makers will have different AI self-driving cars, and that besides having different physical car models with differing sensors on them, the AI systems running those models will differ.

That being said, if I am driving a Brand Y Model Z Version 3, and you are driving a Brand Y Model Z Version 2, it is perhaps the case that the Version 2 car has fallen behind on its OTA and is pending a remote update that will bring it to Version 3. 

The OTA updates are not all going to happen at the same time to all the AI self-driving cars of a particular brand and model. Perhaps I had done my OTA update to my beloved Brand Y Model Z in the morning before heading to work, but meanwhile your Brand Y Model Z was continually driving around and has not yet been put into a resting state to get its OTA (some OTAs will require the AI self-driving car to be in a resting state).

For my article about OTA updating, see: https://aitrends.com/selfdrivingcars/air-ota-updating-ai-self-driving-cars/

Predicting Behavior And Dealing With Versions

It would be prudent and some say essential that the AI of a self-driving car be able to predict the behavior of other cars, both human driven cars and self-driving cars. 

See my article: https://aitrends.com/selfdrivingcars/art-defensive-driving-key-self-driving-car-success/

Predicting the behavior of an AI self-driving car can be based on three methods: (1) what the car looks like, (2) what the car does (its behaviors), and (3) V2V (vehicle-to-vehicle) communications. 

Let’s consider each of these three methods.

First, when you see a Ford Mustang driving down the street, you instantly recognize the car due to its distinctive shape and styling. As such, for the AI self-driving cars, the auto makers are each taking their own approach to what their AI self-driving car will look like. Via shape and style alone, you can quickly gauge what auto maker made the car, what brand and model it is.

We do this via the sensors of our AI self-driving car: it uses the cameras to detect other cars and does an image analysis to identify what the other car is. Thus, the AI will be able to readily gauge what kinds of capabilities another AI self-driving car has, by typifying it via visual matching and then consulting a database of what the capabilities and limitations of that self-driving car are. This then allows the AI to predict what kinds of actions or moves that other AI self-driving car might make while on the road.
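
As a rough sketch of that lookup step (the brand names, fields, and values here are invented purely for illustration, not drawn from any actual system), the idea is to map a visually identified make and model to a stored capability profile, falling back to a conservative default for unknown cars:

```python
# Hypothetical capability database keyed by visually identified make/model.
# Contents are invented for illustration only.
CAPABILITY_DB = {
    ("BrandQ", "ModelA"): {"lidar": False, "pedestrian_range_m": 40,
                           "known_quirks": ["weak at roundabouts"]},
    ("BrandQ", "ModelB"): {"lidar": True, "pedestrian_range_m": 80,
                           "known_quirks": []},
}

def predict_capabilities(detected_make: str, detected_model: str) -> dict:
    """Return the capability profile for a car identified by the cameras,
    or a conservative default when the car is not in the database."""
    default = {"lidar": False, "pedestrian_range_m": 30,
               "known_quirks": ["unknown vehicle, assume worst case"]}
    return CAPABILITY_DB.get((detected_make, detected_model), default)

print(predict_capabilities("BrandQ", "ModelA"))
```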

If the particular version of the AI self-driving car is not evident by visual inspection alone, another approach would be to observe its behavior. Suppose it is known that the Brand Y Model Z Version 2 often makes long stops at a stop sign, and it is also known for weaving far around a bicyclist, often going into another lane to try to get as far from the bicyclist as it can. Meanwhile, Version 3 has improvements that make the AI do a full stop at a stop sign without lingering needlessly, and it also does a better job of judging the distance to a bicyclist, so it doesn't have to move over into another lane when things get tight.

 The AI of our self-driving car observes the behaviors of other AI self-driving cars. 

And, based on the database of the limits and capabilities by model/brand/release, it is able to guess which release the other self-driving car is likely on. 

As such, the AI then also can better predict what the other AI self-driving car will do in various situations and circumstances.

Some AI pundits would say that there’s no need to go to this much trouble about figuring out what the other AI self-driving car is and what it will do. 

They say that you should just ask it. 

With V2V, AI self-driving cars will be able to communicate directly with other AI self-driving cars. In that case, there’s presumably no guessing needed. Instead, the AI of one car just asks the AI of the other car what kind of AI self-driving car it is. Furthermore, when any roadway action occurs, such as if there is a bicyclist up ahead, the AI can ask the other car what it is going to do once it gets near to the bicyclist.

Yes, there’s no doubt that having V2V will allow for this kind of communication and coordination. 

Getting to that point though is somewhat unknown as yet. 

The industry is still working on the protocols for V2V. Also, the question arises as to how much V2V volume we want and will allow. Imagine if all AI self-driving cars are continually bombarding each other with requests about what they are doing. The computational effort of responding to all those requests is going to chew up both processing time and bandwidth.

Those that are strong proponents of V2V are also likely to assume that V2V will always be available. It might not be. A particular self-driving car might not yet have V2V. Or, maybe a particular self-driving car has disabled some aspects of V2V. Or, maybe the communication link between two cars is not working well, and so in spite of the two wanting to do V2V, they are stymied in doing so. The hope that V2V will be universal, always on, never disrupted, and working in all situations is a bit of a false hope for now; it is more realistic to assume that V2V will be intermittently available.

 If the V2V is intermittently available, we’d then have the other two approaches, the looks of the self-driving car and the behavior of the self-driving car. Thus, all three approaches can come together to try to predict what another AI self-driving car is doing and possibly going to do.
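
To make that fallback concrete, here is a minimal sketch (every function name and value is invented for illustration) of how the three methods might be chained, preferring a direct V2V answer and degrading gracefully to behavioral inference and then to the visual match alone:

```python
# Hypothetical fallback chain: V2V answer -> behavioral inference -> visual match.
# Each argument is the result of that method, or None if it was unavailable.
def identify_other_car(v2v_answer, behavior_guess, visual_match):
    if v2v_answer is not None:       # the other car told us directly
        return v2v_answer
    if behavior_guess is not None:   # inferred its version from its behavior
        return behavior_guess
    return visual_match              # last resort: looks alone

# Example: V2V is down, but observed behavior suggests Version 2.
print(identify_other_car(
    None,
    {"brand": "Y", "model": "Z", "version": 2},
    {"brand": "Y", "model": "Z", "version": None},
))
```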

 One other aspect about the heterogeneous nature of the AI self-driving cars will be how features will come and go, potentially. 

You might remember that Windows 7 had lots of nifty gadgets that allowed you to use a calculator or check the weather. Those gadgets were dropped in Windows 8. In a Darwinian process, some features are kept and others are dropped.

The same will be true for AI self-driving cars.

Auto makers will try to differentiate their AI self-driving car over a competitor by claiming that their self-driving car does things that the other ones do not do. 

Prefer a really smooth ride in your AI self-driving car? The Brand X self-driving car has AI that is able to reduce reactions to bumps and potholes, and no other AI self-driving car has that same feature. It will be interesting to see how features come and go. Would competitors perceive this smooth-ride AI capability as essential, and therefore include it in a future version of their own AI self-driving cars? Perhaps.

See my article: https://aitrends.com/selfdrivingcars/marketing-self-driving-cars-new-paradigms/

There’s also the bugs and errors aspects of AI self-driving cars. 

Suppose the Brand Y Model Z is known for being buggy. To date, several bugs have been found. Fortunately, those bugs were fixed and the fixes pushed into the self-driving car via OTA. But often where there are a few, there are more. It could be that the Brand Y Model Z has numerous other bugs or errors that just haven't been discovered. Or maybe they are somewhat known, such as the Brand Y Model Z occasionally opting to slow down and speed up, though no one has yet figured out why it does this.

Other AI self-driving cars can be on the watch for and wary of the behavior of the other AI self-driving cars.

 They can do this by trying to gauge what the nature of the other AI self-driving car is. 

It’s the same thing that we humans do. 

I am sure you’ve watched other cars around you and mentally thought that there’s a car that might make a sudden turn, there’s a car that will probably get in your way soon, and so on. True AI self-driving cars cannot just be driving along and acting as though there are other cars and getting caught off-guard by what those cars do. 

Instead, by paying attention to versioning of AI self-driving cars, every AI self-driving car can be preparing for the actions of their fellow AI self-driving cars.

Versioning is not yet something of notable interest since there are so few self-driving cars on our roadways. 

It’s one of those issues that we won’t be thinking about until there are lots of autonomous car, but we might as well get started thinking and preparing for that day.

Copyright 2019 Dr. Lance Eliot 

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot.]


Posted on

Perpetual Computing and AI Autonomous Cars


Perpetual computing is an upcoming area that could be enabled by energy harvesting, such as by nanogenerators in automobile tires to power sensors.

By Lance Eliot, the AI Trends Insider

The bookstore manager looked at me and said that the computer program that I had developed to analyze the books database was going to run “perpetually” and he was quite steamed about how long it was taking to execute.

Well, hold on, let’s start this story at the beginning so you’ll have some context about what was happening.

Back in my college days, I was a gun-for-hire, willing to whip together off-the-cuff computer programs for anyone who needed a quick-and-dirty programming task done, earning a few extra bucks for those large pepperoni pizzas and kegs of beer that I kept ordering with my classmates.

The college bookstore manager had asked me to craft a program that would generate some reports for him.

Without taking much time to analyze the situation (that’s when I was young and headstrong), I wrote a brute force algorithm that would sort the voluminous data and produce the reports.

On a Monday morning, I launched the program and let it fly.

In that era, the amount of data involved was considered rather large since it was data for all 30,000+ students and included their classes, the books required for their classes, etc.

When the bookstore manager asked me how long it would take for the program to run, I hedged and said it would take about a day.

In my mind, I was thinking it was around four to eight hours to run, and so I said “a day” meaning a workday, though a day might of course also mean a 24-hour period.

The next morning, I got an angry call from the store manager.

He told me that the program was still running and it had been “a day” since I started it.

I was thankful that I hadn’t said it would be four to eight hours since I would have really been off-target. I assured him that the program was going to finish soon and not to get concerned. Having done the program with little attention to any kind of debugging or testing, I hadn’t even included aspects that would readily allow me to check on the progress of the code.

It was pretty much a wait-and-see situation.

I went to my college classes for the day and assumed that since the bookstore manager had not tried to contact me again that the program had successfully completed and presumed that he had his desired reports.

Problem solved.

No need to put any added energy or thought towards that sketchy program.

Sure, it had used one of the most inefficient sorting algorithms known to mankind, but hey, it was running uninterruptedly, and how long could it possibly take to get the job done?
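
For the curious, the pain is easy to reproduce today. A purely illustrative timing sketch (not my original program, which is long gone) shows how a quadratic "brute force" sort falls dramatically behind a decent library sort as the data grows:

```python
# Illustrative only: a quadratic bubble sort versus Python's built-in
# O(n log n) sort. Quadrupling n roughly multiplies the bubble time by 16.
import random
import time

def bubble_sort(a):
    a = a[:]
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

for n in (1_000, 4_000):
    data = [random.random() for _ in range(n)]
    t0 = time.perf_counter()
    bubble_sort(data)
    t1 = time.perf_counter()
    sorted(data)
    t2 = time.perf_counter()
    print(f"n={n}: bubble {t1 - t0:.2f}s vs built-in {t2 - t1:.4f}s")
```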

By Tuesday evening, I was chowing down on more pizza and pleased with having presumably gotten the bookstore manager what he had wanted.

Wednesday morning was a real wake-up call, literally.

I got an urgent call from the bookstore manager at sunrise, which I admit was not my normal waking time in college, and he was yammering away about how my program was running and running, eating up the main computer system for the store, and still there was no sign of any reports being produced.

Yikes!

It had now been running non-stop for 48 hours and had not yet completed.

With great embarrassment and chagrin, I promised to rush over to the computer center and dig into what was going on with the program.

Looking like I had been on a drunken spree the night before (I had not!), I scurried to campus and sprinted into the computer center to have an under-the-hood look at the execution of my program.

The good news was that it was working as intended.

I was sure that it would ultimately produce the reports.

The bad news was that I had not considered the run-time speed, nor how the data was structured, nor the nature of the disk drives being used, nor the amount of RAM, etc.

It was a handy lesson about what can happen when you do sloppy programming.

I vowed to not let this kind of mess happen again and that I would be more “software engineering minded” henceforth.

In case you are wondering what eventually happened, believe it or not the darned thing kept running and running, and the manager asserted that I had written a program that would run perpetually (which brings us full circle to the "start" of my story).

From his viewpoint, he hadn’t yet seen any results and so it was all invisible run-time and no actual visible reports.

Finally, on Sunday, the program completed and produced the needed reports.

It had taken nearly seven days, which, the store manager pointed out, was enough time for the entire earth to have been created (depending perhaps upon your beliefs).

Anyway, this story highlights the notion of having a computer that might run perpetually.

Not by accident or happenstance, but by purposeful design.

It is commonly referred to as perpetual computing.

Perpetual Computing Arising

Perpetual computing. It’s a new and upcoming area that we’ll all be thinking about in the next several years.

Imagine a computer that could run perpetually.

In fact, when you consider the matter, what is it that usually would stop a computer from running perpetually?

Other than the notion that it might break down from wear-and-tear or exhaustion, the other factor would most likely be electrical power. A computer that runs all of the time needs electrical power all of the time. Electrical power is typically a scarce and costly resource.

I’m betting that if you have a desktop computer, it is plugged into an electrical socket and thus you rarely consider how much electrical power it needs (when it is plugged in, your computer is considered “tethered” since it is physically connected with the electrical socket).

If you have a laptop, I’d wager that you do pay attention to electrical power and have found yourself scrambling to find a place to plug-in your laptop before it runs out of power. For your smartphone, you certainly have experienced the same kind of anxiety about watching how much power is left and clamoring to find a way to recharge the battery that is in the cell phone.

Consider the world once the Internet of Things (IoT) has really taken off.

There are going to be tons and tons of small IoT devices that will be attached to walls, attached to doors, attached to appliances in your home, and all over the place. Some analysts claim that by the year 2020 there will be around 200 billion IoT devices and by the year 2030 a total of perhaps 1 trillion IoT devices. This already vast trillion number could increase to 10 trillion by the year 2040.

For my article about IoT, see: https://aitrends.com/ai-insider/internet-of-things-iot-and-ai-self-driving-cars/

For my article about electrical power consumption, see: https://aitrends.com/selfdrivingcars/power-consumption-vital-for-ai-self-driving-cars/

Let’s assume that most of those IoT devices are powered by a battery.

Have you ever been annoyed at having to change the batteries in your home smoke alarm?

You usually only have a few of those devices in your home. Pretend that you have dozens, maybe hundreds of small-scale IoT devices in your home, all of which are powered by tiny batteries. How often will you need to be changing those batteries? It could almost become a full-time job of each day walking around your house and changing batteries. Maybe we’ll christen a new job for homes and businesses that provides employment for people that will change the batteries in your IoT devices.

There must be a better way to attend to the power needs of all of these ubiquitous IoT devices.

By the way, having a vast number of IoT devices is often referred to as ubiquitous computing, meaning that it is computer related devices that are all around us and everywhere.

Another way to describe this trend is to call it pervasive computing.

Pervasive in this context means the same thing as ubiquitous.

Don’t confuse pervasive computing with perpetual computing, though.

Pervasive just means there are a lot of computing devices, while perpetual computing means that some computing devices will be able to run perpetually without stopping.

Though these always-on tiny devices will hopefully be beneficial, it is important to also consider the privacy concerns that they raise, along with the security related apprehensions.

Harvesting Energy To Satisfy Perpetual Computing

How can we provide electrical power to these ubiquitous untethered computer devices and do so without the hassle and logistical nightmare of having to walk around and change their batteries?

We might instead undertake energy harvesting.

If possible, an untethered computing device might try to scavenge energy from its surroundings.

One obvious means is the use of solar energy.

If the computing device is outfitted with a mini-solar panel, this might provide sufficient energy to keep the device going perpetually. You always need to consider the amount of effort required to get the energy, making sure that the energy harvesting is "profitable" (if it takes more energy to snatch energy than you gain, you end up with a net negative that does you little good).
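
As a back-of-the-envelope illustration of that "profitability" test (all the numbers below are made up), the harvested power has to cover both the device's own draw and the overhead of running the harvester itself, or the device will slowly deplete whatever reserve it has:

```python
# Toy energy budget: positive net power means the device can run indefinitely.
def net_power_mw(harvested_mw: float, device_draw_mw: float,
                 harvester_overhead_mw: float) -> float:
    """Harvested power minus the device's draw and the harvester's own cost."""
    return harvested_mw - device_draw_mw - harvester_overhead_mw

budget = net_power_mw(harvested_mw=5.0, device_draw_mw=3.0,
                      harvester_overhead_mw=0.5)
print(f"Net power: {budget:+.1f} mW ->",
      "sustainable" if budget >= 0 else "will deplete its reserve")
```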

There are some promising research efforts that provide a multitude of other ways to harvest energy from the environment in which the computing device resides.

You might be able to use thermal gradients and the differences in air temperature to provide power to a computing device.

You might be able to use magnetic fields to power a computing device.

The WiFi that you are using in your home or office for electronic communications can become a power source for computing devices that rake in the RF waves and turn them into electrical power.

It is anticipated that via miniaturization, we’ll see that IoT devices keep getting smaller and smaller in size, and are able to rely entirely on energy harvesting via nearby vibrations, sound waves, chemical reactions, light waves, motion elements, and the like.

Smart Dust Coming Your Way

These tiny and always-on IoT devices will be so small and so prevalent that some say we will refer to them as “smart dust.”

Another consideration involves how much storage capacity the computing device has for the storage of the energy collected.

Does the computing device have enough energy storage capacity to survive during times when there is insufficient energy to be harvested nearby?

If the computing device has essentially no energy storage capacity, it means that it "lives" off the energy harvesting and needs to harvest continually, hoping that there is energy there to be harvested.

The ambient energy sources might be unpredictable. Here in sunny Southern California, you would assume that any kind of solar powered device would always have plenty of sunshine to draw power from. Unfortunately, I’ve gone on hikes in the woods with some of my hiking gear dependent upon solar power and they’ve gotten depleted during a hike, regrettably due to not enough sun energy striking the solar panels to keep the units powered. I’m sure it’s worse in climates that don’t have the kind of always-on sunshine like we do.

When you first deploy any kind of IoT device, it usually comes pre-charged up.

What you don’t necessarily know is how long that initial charge will last.

There is an initial energy allotment when the device is first deployed and depending on how the computing device functions, it might last a long time on that initial supply or it might run out quickly.

You’ve maybe found out from time-to-time that when you buy a child’s toy that comes with a battery included, sometimes the toy maker will include a super-cheap battery that holds almost no charge at all. This keeps down their costs in terms of what is included into the toy and allows that it will at least work the moment you get home. Pretty soon though, after taking the toy out of the box and having your child play with it, the next thing you know it has run out of power and you need to replace the cheapo batteries with more robust ones.

Some of the IoT makers might do the same thing. They might include a low-end super-cheap battery so that the IoT computing device works for a short while and then runs out. If the computing device is one that makes use of perpetual computing, it would switch right away into a mode of harvesting energy and not need to dip into the initial charge, or it might even be able to replenish the initial charge on its own while harvesting.

If we don’t find ways to achieve perpetual computing, you might eventually end up with IoT devices all around your home and work that are just sitting there doing nothing at all, because they’ve run out of power and it is too troublesome to try to replace their batteries. That’s likely a sad waste of those devices, and it also creates clutter.

Some are especially worried that there will develop a mindset of simply throwing away IoT devices that run out of power. Consider the millions and billions of IoT devices that might get discarded, overwhelming our reclamation capabilities and likely polluting our waters and earth. If an IoT device were able to harvest power, presumably people would be more likely to hang onto it and make use of it.

At conferences, I often discuss perpetual computing and some people seem to think that perpetual computing equates with having perpetual motion machines.

Nope, that’s a misnomer.

I don’t think anyone of a reasonable mind would consider a computing device that can run "perpetually" by harvesting power from its environment to be the same as a perpetual motion machine. A perpetual motion machine is one that, once set into motion, will continue in motion forever, without any additional energy being added. In the case of perpetual computing, we are straight out saying that the device will be adding energy, doing so in at times clever ways from its environment; it is not the free ride that a perpetual motion machine promises.

We also need to be practical and consider that eventually these computing devices are going to wear out.

The word “perpetual” needs to be taken with a grain of salt. Assuming that the perpetual computing device can really always glean sufficient energy from its surroundings, one way or another that device is ultimately going to falter or fail due to some kind of mechanical breakdown. The device might last many years, but it won’t last until the end of time (well, unless you are predicting the end of time is coming sooner than I hope it will!).

AI Autonomous Cars And Perpetual Computing

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. It will be interesting to see how perpetual computing comes to play regarding the advent of AI self-driving cars.

Allow me to elaborate.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the automakers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car. The Level 4 is akin to a Level 5 but with self-imposed scope constraints.

For self-driving cars less than a Level 4, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

Let’s focus herein on the Level 4 and Level 5.

Many of the comments apply to the less than Level 4 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here are the usual steps involved in the AI driving task (a minimal stub sketch of this loop follows the list):

  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action planning
  • Car controls command issuance
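
As a rough sketch only (each stage here is a trivial stub, not how any actual automaker implements it), the loop looks something like this:

```python
# Toy version of the driving cycle listed above; every stage is a placeholder.
def fuse(readings: dict) -> dict:
    """Sensor fusion stub: merge per-sensor readings into one estimate."""
    return {"obstacles": sum(r.get("obstacles", 0) for r in readings.values())}

def plan_action(world: dict) -> str:
    """AI action-planning stub: brake if anything is in the way."""
    return "brake" if world["obstacles"] > 0 else "cruise"

def driving_cycle(readings: dict) -> str:
    world = fuse(readings)       # sensor data interpretation + sensor fusion
    # (virtual world model updating would persist `world` across cycles)
    action = plan_action(world)  # AI action planning
    return action                # car controls command issuance

print(driving_cycle({"camera": {"obstacles": 1}, "radar": {"obstacles": 0}}))
# -> brake
```

A real system runs these stages concurrently, many times per second, with the virtual world model persisting and being updated across cycles rather than rebuilt from scratch.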

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 4 or Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

Returning to the topic of perpetual computing, let’s consider how the advent of these new innovations in energy production might impact AI self-driving cars.

Harvesting Energy To Power The AI Of Autonomous Cars

First, it certainly would be tremendous if somehow the self-driving car itself could harvest energy from its surroundings, thus no longer being “tethered” to having to go to a gasoline station for a refill and not needing to be connected to a charger for an EV (Electrical Vehicle).

One means of providing energy consists of solar panels on a self-driving car.

Right now, the energy derived would be insufficient to fully run the self-driving car. You also need to take into account the size of the solar panels and their weight, which then impacts the car design and shape. As per my earlier comments, even if this could be perfected, you would still face the unpredictable nature of the available solar energy, and in some parts of the world this approach would be of little use for most of the year.

I am not counting out the solar route; I am just saying that until there are more breakthroughs in size, shape, and energy harvesting capability, it is unlikely to do much for self-driving cars other than act as a mild add-on providing some limited amount of energy generation.

Another means to gain energy would be via regenerative braking.

Your car brakes can be used to convert kinetic energy into electrical power. In essence, you are recovering energy that would otherwise be tossed away by the brakes as heat. Instead, you take the friction and put it to a more useful purpose, namely helping to power the self-driving car.

Similar to the issue about the solar panels, right now the use of regenerative braking can only supply a rather small amount of electrical power. It is not going to be enough to run the self-driving car. In any case, it is something to be watched and will ultimately likely be a handy contributor to the power needs of the self-driving car.
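
For a sense of the magnitudes involved, the kinetic energy of the car bounds what regenerative braking can possibly recover. A worked example, using assumed (purely illustrative) numbers for mass, speed, and recovery efficiency:

```python
# Back-of-the-envelope regenerative braking: KE = 1/2 * m * v^2 bounds the
# recoverable energy; real systems recapture only a fraction of it.
def regen_energy_kwh(mass_kg: float, speed_ms: float,
                     recovery_fraction: float = 0.6) -> float:
    kinetic_j = 0.5 * mass_kg * speed_ms ** 2
    return kinetic_j * recovery_fraction / 3.6e6   # joules -> kWh

# A 1,500 kg car braking from about 60 mph (27 m/s):
print(f"{regen_energy_kwh(1500, 27):.3f} kWh recovered per full stop")
# -> roughly 0.09 kWh
```

For scale, a typical EV consumes roughly a quarter of a kWh per mile, so even a hard stop from highway speed recovers well under half a mile of driving, which is consistent with regenerative braking being a supplement rather than a primary source.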

Akin to the conversion of kinetic energy by the brakes, you can also make tires embedded with nanogenerators and have those specialized tires generate electrical power from roadway friction. Right now, your tires create friction as they contact the roadway surface, but your car just tosses away that potential energy. Harvesting it will provide some energy to the self-driving car, though again a rather minor amount, insufficient to truly power up the self-driving car.

There are lots of other ideas out there about this matter.

Maybe we would have along our roadways various magnetic generator boxes that the self-driving car could grab energy from as it whooshes past the boxes on the highway. Perhaps the self-driving car could make use of temperature gradients to try to harvest energy. And so on.

For now, I’ll say that you cannot hold your breath that any of these approaches will in the near-term arise sufficiently to be able to fully power an AI self-driving car.

They will each be handy supplemental sources of energy, but not “the” source.

Consider The Sensors As Perpetual Computing Devices

We ought to then return to the notion that perpetual computing will likely consist of very small IoT devices that have a built-in energy harvester.

The energy harvester has to be tiny too, since otherwise it would bulk up the IoT device. The weight of the energy harvester element also has to be relatively low, since otherwise it would make the IoT device hefty and heavy.

This could be handy for the sensors of the AI self-driving car.

Right now, we are assuming that the sensory devices on an AI self-driving car will all be powered by the AI self-driving car per se.

Suppose though that some of the sensors could provide their own power?

This would then cause less of a drain on the self-driving car and reduce its need to generate the tremendous amount of power required to run all of the sensory devices (of which there will be many included onto and into a self-driving car).

You might then more readily be able to add more sensors to the AI self-driving car too.

Knowing that the sensors can harvest their own energy relieves the self-driving car of having to power them. Of course, the downside is the chance that a sensor is not able to harvest energy when needed and goes blank, such as when the self-driving car is doing 80 miles per hour on the freeway and the sensor is supposed to be providing key readings to the on-board AI system but runs out of power.

One approach would be to tie the perpetual computing devices into the electrical power of the AI self-driving car, and yet only have those devices draw power from the self-driving car when they otherwise are not able to grab sufficient energy from their surroundings on their own. Indeed, you could have a two-way flow, involving the perpetual computing devices not only drawing energy from the self-driving car when needed, but also possibly pouring energy into the AI self-driving car if the device is able to grab more energy than itself needs to function.
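
A toy sketch of that two-way policy (the class, names, and numbers are invented, purely to illustrate the bookkeeping): the sensor runs off its harvested reserve, draws from the car's electrical bus only when depleted, and donates any surplus beyond its storage capacity back to the car.

```python
# Hypothetical power-management policy for a self-harvesting sensor.
class HarvestingSensor:
    def __init__(self, reserve_j: float, capacity_j: float):
        self.reserve_j = reserve_j      # current stored energy (joules)
        self.capacity_j = capacity_j    # maximum storable energy (joules)

    def tick(self, harvested_j: float, draw_j: float) -> float:
        """Apply one time step; return energy taken from (+) or
        donated to (-) the car's electrical bus."""
        self.reserve_j += harvested_j - draw_j
        if self.reserve_j < 0:                   # depleted: fall back to the car
            from_car = -self.reserve_j
            self.reserve_j = 0.0
            return from_car
        if self.reserve_j > self.capacity_j:     # overflowing: donate the surplus
            surplus = self.reserve_j - self.capacity_j
            self.reserve_j = self.capacity_j
            return -surplus
        return 0.0

s = HarvestingSensor(reserve_j=1.0, capacity_j=2.0)
print(s.tick(harvested_j=0.1, draw_j=0.5))   # 0.0 -> self-sufficient this step
print(s.tick(harvested_j=0.0, draw_j=1.0))   # 0.4 -> draws 0.4 J from the car
```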

Another aspect of self-driving cars will be the number and variety of IoT devices included in the self-driving car by the automaker, along with the numerous IoT devices that passengers will bring with them into an AI self-driving car. These IoT devices might need to tap into the electrical reserves of the self-driving car to be able to run. On the other hand, if they are able to be perpetual computing devices, they might be able to harvest energy on their own and not bother using up the power of the self-driving car (plus, possibly even be able to contribute their “excess” energy to the self-driving car).

For ridesharing and AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/ridesharing-services-and-ai-self-driving-cars-notably-uber-in-or-uber-out/

For the non-stop use of AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/non-stop-ai-self-driving-cars-truths-and-consequences/

For start-ups making innovative devices for AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/how-to-best-pitch-your-startup/

V2X Electronic Communications Importance

Some have speculated that perhaps via V2V (vehicle-to-vehicle communications) there will be an opportunity for self-driving cars to not only share electronic communications but also share energy.

While your AI self-driving car is on the highway, it might be immersed in heavy traffic, with other self-driving cars nearby sharing roadway traffic info via V2V. At the same time, it could be that V2V allows your self-driving car to grab some of the excess energy of those V2V transmissions, and that energy can be plowed back into the electrical power reserves of the self-driving car.

This could likewise potentially be the case with V2I (vehicle-to-infrastructure communications). V2I consists of the roadway infrastructure sending electronic communications to your AI self-driving car. An upcoming bridge might electronically warn your self-driving car that the bridge is blocked and not usable at this time. A street up ahead might forewarn your self-driving car that there is a big pothole in the road and it should be avoided. In the process of making those V2I communications, energy might be harvested from the excess of those communications.

Conclusion

Perpetual computing can be in the small and in the large.

Currently, the focus is primarily on the small, mainly the IoT devices that we are going to be using in the billions and someday trillions. It would be a boon to society if those IoT devices could harvest their own energy and work around the clock, as needed, without our having to plug them in (tether them) or replace their batteries.

Say, excuse me for a moment as I have to go change the batteries in my outdoor portable lights – which I hope soon to be able to never say again, namely, I’d like to eliminate the phrase “go change the batteries.”

Let’s aim for that.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]


Posted on

Emergency-Only AI and Autonomous Cars


AI self-driving cars need to be able to respond to emergency situations that come up on the road. An accident scene can have a ripple effect as the car swerves. (GETTY IMAGES)

By Lance Eliot, the AI Trends Insider

I had just fallen asleep in my hotel room when the fire alarm rang out.

It was a few minutes past 2 a.m. and the hotel was relatively full and mainly had been quiet at this hour as most of the hotel guests had earlier retired for the evening. The sharp twang of the fire alarm pierced throughout the hallways and walls of the hotel. I could hear the sounds of people moving around in their hotel rooms as they quickly got up to see what was going on.

Just last month I had been staying in a different hotel in a different city and the fire alarm had gone off, but it turned out to be a false alarm. The hotel was a new one that had only been open a few weeks and apparently they were still trying to iron out the bugs of the hotel systems. Somehow, the fire alarm system had gone off, right around midnight, and after a few minutes the hotel staff told everyone not to worry since it was a false alarm.

Of course, some more discerning hotel guests remained worried since they didn’t necessarily believe the staff that the fire alarm was a false one.

Maybe the staff was wrong, and if so, the consequences could be deadly.

Ultimately, there was no apparent sign of a fire, no smoke, no flames, and gradually even the most skeptical of guests went back to sleep.

I could not believe that I was once again hearing a fire alarm.

In my many years of staying at hotels while traveling for work and sometimes (rarely) for vacations, I had only experienced a few occasions of a fire alarm ringing out. Now, I had two in a short span of time. The earlier one had been a dud, a false alarm. I figured that perhaps this most recent fire alarm was also a false alarm.

But, should I base this assumption on the mere fact that I had a few weeks earlier experienced a false alarm?

The logic was not especially iron tight since these were two completely different hotels and had nothing particularly in common, other than that I had stayed at both of them.

False Alarm Or Genuine Fire Emergency

The good thing about my recent experience of a false alarm was that it had reminded me of the precautions you should undertake when staying at a hotel.

As such, I had made sure that my normal routine while staying at hotels incorporated the appropriate fire-related safeguards. One is to have your shoes close to where you can find them when you are awakened at night by a fire or a fire alarm, allowing you to quickly put them on when escaping from the room. Without shoes, you might try to escape the room or run down the hallway where there could be broken items like glass or other shards that would inhibit your escape or harm you as you tried to get out.

I also kept key personal items such as my wallet and smartphone near the bed and had my pants and jacket ready in case they were needed. I knew the path to the doorway of my hotel room and kept it clear of obstructions before I went to sleep for the night. I had made sure to scrutinize the hallway and knew the nearest exits and where the stairs were. Some people also prefer to stay on the lower floors of a hotel, in case firefighters need to get them out, which can be done more readily either by foot or via a fire truck ladder.

I don’t want you to think I was obsessed with being ready for an emergency. The precautions I’ve just mentioned are all easily done without any real extra or extraordinary effort involved. When I first check into my hotel room, I glance around the hallway as I am walking to the room, spotting where the exits and the stairs are. When I go to sleep at night, I make sure the hotel room door is locked and then as I walk back to the bed I then also make sure the path is unobstructed. These are facets that can be done by habit and seamlessly fit in with the effort involved in staying at a hotel.

So, what did I do about the blaring fire alarm on this most recent hotel stay?

I decided that it was worthwhile to assume it was a real fire and not a false alarm, betting on my safety rather than making the slovenly assumption that I could remain lying in bed and wait to find out whether the alarm was true or false.

I rapidly got into my jeans and coat, put on my shoes, grabbed my wallet and smartphone from the bed stand, and went straight to the door that led into the hallway.

I touched the door to see if it was hot, another kind of precaution in case the fire is right on the other side of the door (you don't want to open a door leading into a fire, since the air of your room will simply feed the flames and you'll be charred to a crisp).

Feeling no heat on the door, I slowly opened it to peek into the hallway.

Believe it or not, there was smoke in the hallway.

Thank goodness that I had opted to believe the fire alarm. I stepped into the hallway cautiously. The smoke appeared to be coming from the west end and not from the east end. I figured this implied that wherever the fire was, it might be more so on the west side rather than the east side of the hotel. I began to walk in the easterly direction.

What seemed peculiar was that there was no one else also making their way through the hallway.

I was pretty sure that there were people in the other rooms as I had heard them coming to their rooms earlier that evening (often making a bit of noise after likely visiting the hotel bar and having a few drinks there).

Were these other people still asleep?

How could they not hear the incessant clanging of the fire alarm?

The sound was blaring and loud enough to wake the dead.

I decided to bang on the doors of the rooms that I was walking past.

I would rap a door with my fist and then yell out “Fire!” to let them know that there was indeed something really happening. My guess was that others had heard the fire alarm but chosen to wait and see what might happen. With the hallway starting to fill with smoke, this seemed sufficient proof to me that a fire was somewhere. The smoke would eventually seep into the rooms. For now, the smoke was mainly floating near the ceiling of the hallway. It wasn’t thick enough yet to have filled down to the floor and try to permeate into the rooms at the door seams.

The good news was that no one ended up getting hurt and the fire was confined to the laundry room of the hotel.

The fire department showed up and put out the flames. They brought in large fans too to help clear out the smoke from the hotel. The staff did an adequate job of helping the hotel guests and moved many of them to another wing of the hotel to get away from the residual smoky smell. It was one of the few times that I'd ever been in a hotel that had a fire and for which I was directly impacted by the fire.

The hotel had smoke alarms in each of the hotel rooms, along with smoke alarms in other areas of the hotel. This is nowadays standard fare for most hotels, and personal residences too are supposed to have fire alarm devices set up in appropriate areas. These silent guardians are there to be your watchdogs. When smoke begins to fill the air, the fire alarm detects the smoke and then starts to beep or clang to alert you.

Some of today’s fire alarms speak at you. Rather than simply making a beep sound, these newer fire alarms emit the words “Fire!” or “Get out!” or other kinds of sayings. It is thought that people might be more responsive to hearing what sounds like a human voice telling them what to do. Hearing a beeping sound might not create as strong a response.

You’ve likely at times wrestled with the fire alarm in your home.

Perhaps the fire alarm battery became low and the fire alarm started a low beeping sound to let you know. This often happens on a timed basis wherein the low-battery beep sounds at first every, say, five minutes. If you don't change the battery, the beeping interval gets shorter. The low-battery beep might then happen every minute, and then every 30 seconds, and so on.

In the hotels that I stay at, they usually also have a fire alarm pull. These are devices typically mounted in the hallways that allow you to grab and pull to alert that a fire is taking place. I’d bet that perhaps when you were in school, someone one time pulled the fire alarm to avoid taking a test. The prankster that pulled the fire alarm is putting everyone at risk, since people can get injured when trying to rush out as a result of hearing a fire alarm, plus it might dull their reaction times the next time there is an actual true fire alarm alert.

Some hotels have a sprinkler system that will spray water to douse a fire.

The sprinkler activation might be tied into the fire alarms so that the moment a fire alarm goes off the sprinklers are then activated. This is not usually so closely linked though because of the chances that a false fire alarm might activate the sprinklers. Once those sprinklers start going, it’s going to be more damaging to the hotel property and you’d obviously want the sprinklers to only go when you are pretty certain that a fire is really occurring. As such, there is often a separate mechanism that has to be operated to get the fire sprinklers to engage.

Emergency Systems That Save Lives

This discussion about fire alarms and fire protection illuminates some important elements about systems that are designed to help save human lives.

In particular:

  • A passive system like the fire alarm pull won’t automatically go off and instead the human needs to overtly activate it
  • For a passive system, the human needs to be aware of where and how to activate it, else the passive system otherwise does little good to help save the human
  • An active system like the smoke alarm is constantly detecting the environment and ready to go off as soon as the conditions occur that will activate the alarm
  • Some system elements are intended to simply alert the human and it is then up to the human to take some form of action
  • Some system elements such as a fire sprinkler are intended to automatically engage to save human lives and the humans being saved do not need to directly activate the life-saving effort
  • These emergency-only systems are intended to be used only when absolutely necessary and otherwise are silent, being somewhat out-of-sight and out-of-mind of most humans
  • Such systems are not error-free in that they can at times falsely activate even when there isn’t any pending emergency involved
  • Humans can undermine these emergency-only systems by not abiding by them or taking other actions that reduce the effectiveness of the system
  • Humans will at times distrust an emergency-only system and believe that the system is falsely reporting an emergency and therefore not take prescribed action

I’m invoking the use case of fire alarms as a means to convey the nature of emergency-only systems.

There are lots of emergency-only related systems that we might come in contact with in all walks of life. The fire alarm is perhaps the easiest to describe and use as an illustrative aspect to derive the underpinnings of what they do and how humans act and react to them.

Autonomous Cars And Emergency-Only AI

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One approach that some automakers and tech firms are taking toward the AI systems for self-driving cars involves designing and implementing those AI systems for emergency-only purposes.

Allow me to elaborate.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the automakers are even removing the gas pedal, the brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car. Level 4 is akin to Level 5 but with constraints self-imposed as to the scope of the AI driving capabilities.

For self-driving cars less than a Level 4, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

Let’s focus mainly herein on the true Level 4 and Level 5, but also begin by studying the ramifications related to Level 3.

Here are the usual steps involved in the AI driving task:

  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action planning
  • Car controls command issuance

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

Returning to the topic of emergency-only systems, there are various approaches that automakers and tech firms are taking toward the design and development of AI for self-driving cars and one such approach involves an emergency-only AI paradigm.

As already mentioned, currently the most vaunted and desired approach consists of having the AI always be driving the car with no human driving involved at all, which is the intent of a true Level 4 and Level 5. This is much harder to pull off than it might seem.

I've repeatedly described the true Level 5 as a kind of moonshot.

It’s going to take a lot longer to get there than most people seem to think it will.

At the less than Level 4, there is a co-sharing of the driving task. We can step back for a moment and ask an intriguing question about the co-sharing of the driving task, namely, what should the split be between when the AI does the driving and when the human does the driving?

The Level 2 split of human versus AI driving is that the human tends to do the bulk of the driving and the AI tends to do relatively little of the driving task.

For the Level 3, the split tends toward having the AI do more so of the driving and the human do less of it.

Suppose we somewhat turned this split on its head, so to speak.

We might design the AI to be an emergency-only kind of mechanism.

Rather than the AI driving the self-driving car to progressively increasing degrees at Level 2 and Level 3, we might instead opt to have the human be the mainstay driver.

The AI would be used nearly solely for emergency-only purposes.

Emergency-Only AI Driving For Level 3

Let’s say I am driving in a Level 3 self-driving car. I would normally be expecting the AI to be the primary driver and I am there in case the AI needs me to take over.

I’ve written and spoken many times about the dangers of this co-sharing arrangement. As a human, I might become complacent and not be ready to take over the driving task when the moment arises for me to do so. Maybe I was playing a video game on my smartphone, maybe I was reading a book that’s in my lap, and other kinds of distractions might occur.

For the concerns about Level 3 AI self-driving cars, see my article: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/

For driving controls aspects of self-driving cars, see my article: https://aitrends.com/ai-insider/ai-boundaries-and-self-driving-cars-the-driving-controls-debate/

For safety as a crucial aspect of self-driving cars, see my article: https://aitrends.com/ai-insider/safety-and-ai-self-driving-cars-world-safety-summit-on-autonomous-tech/

For my article about the moonshot of self-driving cars achievement, see: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

Instead of having the AI do most of the driving while in a Level 3, suppose we instead said that the human is the primary driver.

The AI is relegated to being an emergency-only driver.

Here’s how that might work.

I’m driving my Level 3 car and the AI is quietly observing what is going on. The AI is using all of its sensors to continuously detect and interpret the roadway situation. The sensor fusion is occurring. The virtual world model is being updated. The AI action planning is taking place. The only thing not happening is the issuance of the car controls commands.
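
In rough sketch form (all stages stubbed out, and the threshold an invented placeholder), the "silent observer" loop might look like this: the full pipeline runs every cycle, but commands are only issued when an emergency is flagged.

```python
# Toy shadow-mode loop: the AI runs its pipeline but stays silent unless
# an emergency is detected. Threshold and stages are illustrative stubs.
def shadow_cycle(readings: dict, issue_commands) -> None:
    world = {"threat_level": readings.get("threat_level", 0.0)}  # fused model stub
    plan = "evade" if world["threat_level"] > 0.9 else "none"    # planning stub
    if plan != "none":
        issue_commands(plan)   # only in an emergency do commands get issued
    # otherwise: stay silent, the human keeps driving

shadow_cycle({"threat_level": 0.95}, issue_commands=print)  # prints "evade"
shadow_cycle({"threat_level": 0.10}, issue_commands=print)  # stays silent
```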

In a sense, the AI is for all practical purposes “driving” the car without actually taking over the driving controls. This might be likened to when I was teaching my children how to drive a car. They would sit in the driver’s seat. I had no ready means to access the driver controls. Nonetheless, in my head, I was acting as though I was driving the car. I did this to be able to comprehend what my teenage novice driver children were doing and so that I could also come to their aid when needed.

Okay, so the Level 3 car is being driven by the human and all of a sudden another car veers into the lane and threatens to crash into the Level 3 car. We now have a circumstance wherein the human driver of the Level 3 car should presumably take evasive action.

Does the human notice that the other car is veering dangerously?

Will the human take quick enough action to avoid the crash?

Suppose that the AI was able to ascertain that the veering car is going to crash with the Level 3 car.

Similar to a fire protection system such as at the hotels, the AI can potentially alert the human driver to take action (akin to a fire alarm that belts out an alarm bell).

Or, the AI might take more overt action and momentarily take over the driving controls to maneuver the car away from the danger (this would be somewhat equivalent to the fire sprinklers getting invoked in a hotel).
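
One simple way to picture the two tiers of response is as two thresholds on an estimated collision risk: a lower one that merely warns the human driver (the alarm bell) and a higher one at which the AI takes the controls (the sprinklers). The threshold values below are arbitrary illustrations, not calibrated figures.

```python
# Two-tier emergency response, mirroring the fire alarm vs. sprinkler analogy.
ALERT_THRESHOLD = 0.5      # warn the human driver
TAKEOVER_THRESHOLD = 0.9   # AI momentarily takes the controls

def respond(collision_risk: float) -> str:
    if collision_risk >= TAKEOVER_THRESHOLD:
        return "AI takes over: steer/brake away from the danger"
    if collision_risk >= ALERT_THRESHOLD:
        return "Alert the human driver"
    return "Keep monitoring silently"

for risk in (0.2, 0.6, 0.95):
    print(f"risk={risk}: {respond(risk)}")
```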

If the AI were devised to work in an emergency-only mode, some would assert that it relieves the pressure on the AI developers to devise an all-encompassing AI system that can handle any and all kinds of driving situations.

Instead, the AI developers could focus on the emergency-only kinds of situations.

This also would presumably shift attention toward the AI being a kind of hero, stepping into the driving when things are dire and saving the day.

Hey, someone might say, the other day the AI of my self-driving car kept me from hitting a dog that ran unexpectedly into the street.

Another person might say that the AI saved them from ramming into a car that had come to a sudden halt on the freeway just ahead of their car (and they sheepishly admit they had turned to look at a roadside billboard and by the time they turned their head back the halted car ahead was a surprise).

We are all already somewhat familiar with automated driving assistance systems that can do something similar.

Many cars today have a simplistic detection device such that if your car is going to hit something ahead of it, the brakes are automatically applied. These tend to be extremely simplistic in how they work. It is almost a knee-jerk reaction kind of system. There's not much "smarts" involved. You might liken these low-level automated systems to the autonomic nervous system of a human: they react instinctively and without much direct thinking involved (when my hand is near a hot stove, presumably my instincts kick in and I withdraw my hand, without a lot of contemplative effort).

These behind-the-scenes automated driving assistance systems would be quietly replaced with a more sophisticated AI-based system that is more robust and paying attention to the overall driving task.

The paradigm is that the emergency-only AI is akin to having a second human driver in the car, a secondary driver who is there only for emergency driving purposes.

The rest of the time, the human is the primary driver of the car.

As mentioned, this might suggest that the AI does not need to be full-bodied, does not need to be able to drive the car all of the time, and can instead focus on being able to drive when emergency situations arise. Some would assert that this is a bit of a paradox.

If the AI is not versed enough to be able to drive at any time, how will it be able to discern when an emergency is arising that requires the AI to step into the driving task?

In other words, some would say that until you have a fully capable driving AI, you would be risking things unduly by having the AI be used only in emergencies.

Unless you opt to say that the AI is used solely in emergencies, you are otherwise suggesting that the AI is able to monitor the driving task throughout and is ready at any moment to do the driving. But if that’s the case, why not let the AI do the driving as the primary driver anyway?

Defining Emergency Driving Situations

This also brings up the notion of defining the nature of an emergency driving situation.

The obvious example of an emergency would be the case of a dog that has darted into the street directly in front of the car, where the speed, direction, and timing of the car are such that it will mathematically intersect with the dog unless some kind of driving action is immediately taken to avoid striking the animal. But this takes us back to the kind of simpleton automated driving assistance systems that are not especially imbued with AI anyway.

If we’re going to consider using AI for emergency-only situations, presumably the kinds of emergency situations will range from rather obvious ones that a knee-jerk reactive driving system could handle, all the way up to much more subtle and harder-to-predict emergencies.

If the AI is going to be continuously monitoring the driving situation, we’d want it to act like a true secondary driver and be capable of a more sophisticated kind of emergency-situation detection.

You are on a mountain road that curves back-and-forth.

The slow lane has large rambling trucks in it. Your car is in the fast lane, adjacent to the slow lane. The AI has been observing the slow lane and detected a truck up ahead that has periodically swerved into the fast lane when on a curve. The path of the car is such that in about 10 seconds the car will be passing the truck while on a curve. At this moment there is no apparent danger. But it can be predicted with sufficient probability that in 10 seconds the truck will swerve into the car’s lane just as the car tries to pass the truck on the curve.

Notice that in this example there is not a simple act-react cycle involved.

Most automated driving assist systems would react only once the car is actually passing the truck, and only if, as the passing occurred, the truck veered into the path of the car. Instead, in my example, the AI has anticipated a potential future emergency and will opt to take action beforehand, either to prevent the danger or at least to be better prepared to cope with it when (if) it occurs.
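
For illustration, here is a toy sketch of what that anticipatory logic might look like for the mountain-road scenario. The straight-line motion projection, the numbers, and every name are simplifying assumptions of mine, not a real planner:

```python
# Toy sketch of anticipatory (rather than reflex) hazard handling for the
# truck-on-a-curve example. All values and names are illustrative.

def seconds_until_pass(ego_pos_m, ego_speed_mps, truck_pos_m, truck_speed_mps):
    """Estimate when the ego car draws alongside the truck (linear motion)."""
    gap_m = truck_pos_m - ego_pos_m
    closing_mps = ego_speed_mps - truck_speed_mps
    return gap_m / closing_mps if closing_mps > 0 else float("inf")

def anticipate_truck_hazard(ego, truck, curves, swerve_prob_on_curve,
                            risk_threshold=0.5):
    """Decide *ahead of time* whether the upcoming pass looks risky."""
    t_pass = seconds_until_pass(ego["pos"], ego["speed"],
                                truck["pos"], truck["speed"])
    # curves: list of (start_s, end_s) windows, in seconds from now,
    # during which the road is curving.
    on_curve = any(start <= t_pass <= end for start, end in curves)
    risk = swerve_prob_on_curve if on_curve else 0.0
    if risk >= risk_threshold:
        return "slow down and delay the pass until beyond the curve"
    return "proceed, but keep monitoring the truck"

ego = {"pos": 0.0, "speed": 30.0}      # meters, meters/second
truck = {"pos": 100.0, "speed": 20.0}  # will be passed in about 10 seconds
curves = [(8.0, 14.0)]                 # a curve spans seconds 8-14 from now
print(anticipate_truck_hazard(ego, truck, curves, swerve_prob_on_curve=0.7))
```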

The emergency-only AI would presumably be boosted beyond the nature of a traditional automated driving assist system, and would likely be augmented by the use of Machine Learning (ML).

How did the AI even realize that observing the trucks in the slow lane was worthwhile to do?

An AI driving system that has learned over time would have the “realization” that trucks often tend to swerve out of their lanes while on curving roads.

This then becomes part-and-parcel of the “awareness” that the AI will have when looking for potential emergency driving situations.
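
As a purely illustrative sketch of how that awareness could be accumulated, consider a simple smoothed frequency estimate of the probability that a truck swerves while on a curve, built up from logged observations over many drives (a production ML system would use far richer features and models):

```python
# Illustrative sketch: learn P(swerve | on a curve) from logged observations
# using Laplace-smoothed counts. Not a production learning pipeline.

from collections import Counter

class SwervePatternLearner:
    def __init__(self):
        self.counts = Counter()  # keyed by (on_curve, swerved)

    def observe(self, on_curve: bool, swerved: bool):
        self.counts[(on_curve, swerved)] += 1

    def swerve_probability(self, on_curve: bool) -> float:
        swerves = self.counts[(on_curve, True)]
        total = swerves + self.counts[(on_curve, False)]
        return (swerves + 1) / (total + 2)  # Laplace smoothing

learner = SwervePatternLearner()
for _ in range(30):                      # fabricated training observations
    learner.observe(on_curve=True, swerved=True)
for _ in range(70):
    learner.observe(on_curve=True, swerved=False)
print(round(learner.swerve_probability(on_curve=True), 2))  # about 0.30
```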

For my article about Machine Learning core aspects, see: https://aitrends.com/selfdrivingcars/machine-learning-benchmarks-and-ai-self-driving-cars/

For ensemble Machine Learning, see my article: https://aitrends.com/selfdrivingcars/ensemble-machine-learning-for-ai-self-driving-cars/

For federated Machine Learning, see my article: https://aitrends.com/selfdrivingcars/federated-machine-learning-for-ai-self-driving-cars/

For the importance of explanation-based Machine Learning, see my article: https://aitrends.com/selfdrivingcars/explanation-ai-machine-learning-for-ai-self-driving-cars/

True Autonomous Cars And Emergency-Only AI

Let’s now revisit my earlier comments about the nature of emergency-only systems and my illustrative examples of the fire alarm and fire protection systems.

I present to you those earlier points and then recast them into the context of AI self-driving cars:

  • A passive system like the fire alarm pull won’t automatically go off and instead the human needs to overtly activate it

Would a driving emergency-only AI system be set up for only a passive mode, meaning that the human driver would need to invoke the AI system? We might have a button that the human could press to invoke the AI emergency capability, or the human might have a “safe word” that they utter to ask the AI to step into the picture.

Downsides with this include that the human might not realize they need, or even could use, the AI emergency option. Or the human might realize it but invoke the AI emergency mode only once it is too late for the AI to do anything to avert the incident.

We would also need a means of letting the human know that the AI has “accepted” the request and entered the AI emergency mode; otherwise, the human might be unsure whether the AI got the signal and whether the AI is actually stepping into the driving.

There is also the matter of returning the driving back to the human once the emergency action by the AI has been undertaken. How would the AI be able to “know” that the human is prepared to resume driving the car? Would it ask the human driver, or just assume that if the human is still at the driver controls, it is okay for the AI to disengage?
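
One way to picture this passive, pull-the-alarm style of invocation is as a small state machine covering the acknowledgment step and the hand-back handshake just described. The states, signals, and readiness check here are all hypothetical:

```python
# Hypothetical sketch of a driver-invoked (passive) emergency mode as a
# small state machine, including acknowledgment and hand-back.

from enum import Enum, auto

class Mode(Enum):
    HUMAN_DRIVING = auto()
    AI_ENGAGED = auto()

class PassiveEmergencyAI:
    def __init__(self):
        self.mode = Mode.HUMAN_DRIVING

    def driver_invokes(self):
        # Triggered by a button press or a spoken "safe word".
        self.mode = Mode.AI_ENGAGED
        print("chime + dashboard banner: EMERGENCY MODE ENGAGED")  # acknowledgment

    def emergency_resolved(self, hands_on_wheel: bool, driver_confirms: bool):
        # Don't just assume presence at the controls means readiness;
        # require an explicit confirmation before disengaging.
        if hands_on_wheel and driver_confirms:
            self.mode = Mode.HUMAN_DRIVING
            print("chime: control returned to driver")
        else:
            print("holding AI control; driver not confirmed ready")

ai = PassiveEmergencyAI()
ai.driver_invokes()
ai.emergency_resolved(hands_on_wheel=True, driver_confirms=False)
ai.emergency_resolved(hands_on_wheel=True, driver_confirms=True)
```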

  • For a passive system, the human needs to be aware of where and how to activate it, else the passive system otherwise does little good to help save the human

As mentioned, a human driver might forget that the AI is standing ready to take over. Plus, when an emergency arises, the human might be so startled and mentally consumed that they lack the presence of mind to turn over the driving to the AI.

  • An active system like the smoke alarm is constantly detecting the environment and ready to go off as soon as the conditions occur that will activate the alarm

With this approach, the AI is ready to step into the driving task and will do so whenever it deems necessary. This can be handy since the human driver might not realize an emergency is arising, or might realize it but not invoke the AI to help, or might perhaps be incapacitated in some manner, wanting to invoke the AI but unable to do so.

The downside here is that the AI might shock or startle the human driver by summarily taking over the driving and catching the human driver off-guard. If so, the human driver might try to take some dramatic action that counters the actions of the AI.

We might also end up with the human driver becoming on-edge, worried that at any moment the AI is going to take over. This might cause the human driver to grow suspicious of the AI.

It could be that the AI only alerts the human driver and lets them decide what to do. Or, it could be that the AI grabs control of the car.

  • Some system elements are intended to simply alert the human and it is then up to the human to take some form of action

In this case, if the AI is acting as an alert, the question arises as to how best to communicate the alert. If the AI merely rings a bell or turns on a red light, the human driver won’t especially know what the declared emergency is about. Thus, the human driver might react to the “wrong” emergency, in terms of what the human perceives versus what the AI detected.

If the AI tries to explain the nature of the emergency, this can use up precious time. When an emergency is arising, the odds are that there is little available time to try to explain what to do.

I am reminded that at one point my teenage novice driver children were about to potentially hit a bicyclist and I was tongue-tied trying to explain the situation. I could just say “swerve to your right!” but this offered no explanation for why to do so. If I tried to say “there is a bicyclist to your left, watch out!” this provided some explanation and the desired action would be up to the driver. If I had said “there is a bicyclist to your left, swerve to your right!” it could be that the time taken to say the first part, depicting the situation, used up the available time to actually make the swerving action that would save the bike rider. Etc.
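
That dilemma amounts to a time-budget problem: given the estimated seconds until impact, pick the most explanatory alert that still leaves the driver time to react. Here is a toy sketch; the reaction-time figure and the utterance durations are invented for illustration:

```python
# Toy sketch of choosing an alert under a time budget. All numbers invented.

HUMAN_REACTION_S = 1.2  # assumed time to perceive the alert and act on it

ALERTS = [  # (seconds the utterance takes, message), shortest first
    (0.5, "Swerve right!"),
    (1.5, "Bicyclist on your left, swerve right!"),
    (3.0, "There is a bicyclist to your left; please swerve to the right."),
]

def choose_alert(time_to_impact_s: float) -> str:
    affordable = [msg for cost_s, msg in ALERTS
                  if cost_s + HUMAN_REACTION_S <= time_to_impact_s]
    # Prefer the most explanatory message the budget allows; if nothing
    # fits, the AI arguably should act itself rather than speak.
    return affordable[-1] if affordable else "(no time to alert -- act)"

print(choose_alert(5.0))  # the full explanation fits
print(choose_alert(2.0))  # only the terse command fits
print(choose_alert(1.0))  # no alert can help in time
```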

  • Some system elements such as a fire sprinkler are intended to automatically engage to save human lives and the humans being saved do not need to directly activate the life-saving effort

This approach involves the AI taking over the driving controls, which, as mentioned, has both pluses and minuses.

  • These emergency-only systems are intended to be used only when absolutely necessary and otherwise are silent, being somewhat out-of-sight and out-of-mind of most humans

Emergency-only AI driving systems are intended only for use when an emergency driving situation arises. This raises the question, though, of what is considered an emergency versus not an emergency.

Also, suppose a human believes an emergency is arising but the AI has not detected it, or maybe the AI detected it but determined that a genuine emergency is not actually brewing. This brings up the usual hand-off issues that arise when doing any kind of co-sharing of the driving task.

  • Such systems are not error-free in that they can at times falsely activate even when there isn’t any pending emergency involved

Some AI developers seem to think that their AI driving system is going to work perfectly and do so all the time. This makes little sense. There is a good likelihood that the AI will have hidden bugs. There is a likelihood that the AI as devised will potentially make a wrong move. There is a chance that the AI hardware might glitch. And so on.

If an emergency-only AI system engages on a false positive, it will likely undermine the human driver’s confidence that the AI is worthy of being engaged at all. A false negative is just as worrisome: if the AI does not take action when needed, the human would rightly assert that they relied upon the AI to deal with the emergency, and it failed in its duty to perform.
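
The trade-off can be made concrete with a toy calculation over some fabricated logged events: wherever the engagement threshold gets set, the system trades missed emergencies (false negatives) against false engagements (false positives):

```python
# Toy illustration (fabricated data) of the false-positive / false-negative
# trade-off in choosing when the emergency-only AI engages.

def outcome_rates(threshold, scored_events):
    """scored_events: list of (risk_score, was_real_emergency) pairs."""
    fp = sum(1 for s, real in scored_events if s >= threshold and not real)
    fn = sum(1 for s, real in scored_events if s < threshold and real)
    return fp, fn

events = [(0.95, True), (0.80, True), (0.70, False), (0.60, True),
          (0.55, False), (0.40, False), (0.30, False), (0.20, False)]

for threshold in (0.9, 0.65, 0.5, 0.25):
    fp, fn = outcome_rates(threshold, events)
    print(f"threshold={threshold}: {fp} false engagements, {fn} missed emergencies")
# A strict threshold misses real emergencies; a loose one engages falsely.
# Neither failure mode is free, and both erode the human's trust.
```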

  • Humans can undermine these emergency-only systems by not abiding by them or taking other actions that reduce the effectiveness of the system

With the co-sharing of the driving task, there is an inherent concern that you have two drivers each trying to drive the car as they see fit.

Imagine if, when my children were learning to drive, I had possessed a second set of driving controls. The odds are that I would have kept my foot on the brake nearly all of the time and kept a steady grip on the steering wheel. This, though, would have undermined their driving effort and created confusion as to which of us was really driving the car. The same can be said of the emergency-only AI driving versus the human driving.

  • Humans will at times distrust an emergency-only system and believe that the system is falsely reporting an emergency and therefore not take prescribed action

Would we lock out the driving controls for the human whenever the AI takes over control due to an emergency perceived by the AI’s detection? This would prevent the human driver from fighting with the AI over what driving action to take. But the human driver is likely to have qualms about this. Suppose the AI has taken over when there wasn’t a genuine emergency.

We might assume or hope that the AI, in the case of acting on a false alarm (a false positive), would not get the car into harm’s way. That, though, is not necessarily the case.

Suppose the AI perceived that the car was potentially going to hit a bicyclist, and so the AI swerved the car to avoid the bike rider. By swerving, the AI unnerved the driver in the next lane, who reacted by slamming on the brakes. The car behind then slammed into the braking car. All of this was precipitated by the AI opting to avoid hitting the bicyclist.

Imagine though that the bicyclist took a quick turn away from the car and thus there really wasn’t an emergency per se.
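
To illustrate the lockout question, here is a hypothetical sketch of one possible arbitration policy: driver inputs are ignored while the AI is engaged, unless the driver fights the AI long enough to win back control. Whether such an override should ever be honored is, of course, exactly the debate:

```python
# Hypothetical control arbitration with a lockout plus a sustained-override
# escape hatch. The 2-second hold time is an invented illustration.

OVERRIDE_HOLD_S = 2.0  # how long the driver must fight the AI to override

class ControlArbiter:
    def __init__(self):
        self.ai_engaged = False
        self.counter_input_s = 0.0

    def arbitrate(self, ai_cmd, driver_cmd, dt_s):
        if not self.ai_engaged:
            return driver_cmd                  # normal: the human is primary
        if driver_cmd is not None:
            self.counter_input_s += dt_s       # driver is fighting the AI
            if self.counter_input_s >= OVERRIDE_HOLD_S:
                self.ai_engaged = False        # concede control to the human
                return driver_cmd
        else:
            self.counter_input_s = 0.0
        return ai_cmd                          # the lockout holds: AI drives

arbiter = ControlArbiter()
arbiter.ai_engaged = True
print(arbiter.arbitrate({"steer": -0.3}, {"steer": 0.4}, dt_s=0.1))  # AI wins
```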

For my article about ghosts in AI self-driving car systems, see: https://aitrends.com/selfdrivingcars/ghosts-in-ai-self-driving-cars/

For the debugging of AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/debugging-of-ai-self-driving-cars/

For the egocentric views of some AI developers, see my article: https://aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/

For the burnout of AI developers, see my article: https://aitrends.com/selfdrivingcars/developer-burnout-and-ai-self-driving-cars/

Conclusion

There are going to be AI systems that are devised to work only on an emergency basis.

Astute ones will be designed to silently detect what is going on and be ready to step into a task when needed.

We’ll need, though, to make sure that humans know when and how the AI is going to take action. Those humans, too, will be imperfect: they might forget the AI is there, or might even end up fighting with the AI if they believe the AI is wrong to take action or otherwise have qualms about it.

We usually think of an emergency as a situation involving the need for an urgent intervention in order to avoid or mitigate the chances of injury to life, health, or property. There is a lot of judgment that often comes into play when declaring that a situation is an emergency. When an automated AI system tries to help out, clarity will be needed as to what constitutes an emergency and what does not.

The principle of primum non nocere, often associated with the Hippocratic Oath, means first do no harm.

An emergency-only AI system for a self-driving car is going to have a duty to abide by that principle, which I assure you is going to be a heavy burden to bear.

The emergency-only AI approach is not as easy a path as some might assume at first glance; indeed, for some it might even be considered insufficient, while for others it is a step forward toward the goal of a fully autonomous AI self-driving car.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]
