
The Puzzle Of Whether AI Should Have Rights, Including The Case Of Autonomous Cars


If we assign human rights to AI, using the Universal Declaration of Human Rights as a guide, the AI can make some independent judgements. (WIKIPEDIA COMMONS)

By Lance Eliot, the AI Trends Insider

Sometimes a question seems so ridiculous that you feel compelled to reject its premise out-of-hand.

Let’s give this a whirl.

Should AI have human rights?

Most people would likely react that there is no bona fide basis to admit AI into the same rarefied air as human beings and to consider it endowed with human rights.

Others, though, counterargue that there are crucial reasons to do so and are adamantly seeking to have AI assigned human rights in the same manner that the rest of us have them.

Of course, you might shrug your shoulders and say that it is of little importance either way and wonder why anyone should be so bothered and ruffled-up about the matter.

It is indeed a seemingly simple question, though the answer has tremendous consequences as will be discussed herein.

One catch is that there is a bit of a trick involved because the thing or entity or “being” that we are trying to assign human rights to is ambiguous and not even yet in existence.

In other words, what does it mean when we refer to “AI” and how will we know it when we discover or invent it?

At this time, there isn’t any AI system of any kind that could be considered sentient, and indeed by all accounts, we aren’t anywhere close to achieving the so-called singularity (that’s the point at which AI flips over into becoming sentient and we look in awe at a presumably human-equivalent intelligence embodied in a machine).

I’m not saying that we won’t ever reach that vaunted point, yet some fervently argue we won’t.

I suppose it’s a tossup as to whether getting to the singularity is something to be sought or to be feared.

For those that look at the world in a smiley face way, perhaps AI that is our equivalent in intelligence will aid us in solving up-until-now unsolvable problems, such as finding a cure for cancer or figuring out how to overcome world hunger.

In essence, our newfound buddy will boost our aggregate capacity of intelligence and be an instrumental contributor towards the betterment of humanity.

I’d like to think that’s what will happen.

On the other hand, for those of you that are more doom-and-gloom oriented (perhaps rightfully so), you are gravely worried that this AI might decide it would rather be the master than the slave and could opt to take over humanity on a massive scale.

Plus, especially worrisome, the AI might ascertain that humans aren’t worthwhile anyway, and off with the heads of humanity.

As a human, I am not particularly keen on that outcome.

All in all, the question about AI and human rights is right now a rather theoretical exercise since no such top-notch AI has yet been crafted (of course, it’s always best to be ready for a potentially rocky future, thus discussing the topic beforehand does have merit).

For my explanation about the singularity, see the link here: https://aitrends.com/ai-insider/singularity-and-ai-self-driving-cars/

For the presumed dangers of a superintelligence, see my coverage at this link here: https://aitrends.com/ai-insider/super-intelligent-ai-paperclip-maximizer-conundrum-and-ai-self-driving-cars/

For my framework explaining the nature of AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/

For my indication about how achieving self-driving cars is akin to a moonshot, see this link: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/

A grand convergence of technologies is enabling the possibility of true self-driving cars, see my explanation: https://aitrends.com/ai-insider/grand-convergence-explains-rise-self-driving-cars/

Less Than Complete AI

One supposes that we could consider the question of human rights as it might apply to AI that is at a lesser level of capability than the (maybe) insurmountable threshold of sentience.

Keep in mind that doing this, lowering the bar, could open a potential Pandora’s box of where the bar should be set.

Here’s how.

Imagine that you are trying to do pull-ups and the rule is that you need to get your chin up above the bar.

It becomes rather straightforward to ascertain whether or not you’ve done an actual pull-up.

If your chin doesn’t get over that bar, it’s not considered a true pull-up. Furthermore, it doesn’t matter whether your chin ended-up a quarter inch below the bar, nor whether it was three inches below the bar. Essentially, you either make it clearly over the bar, or you don’t.

In the case of AI, if the “bar” is the achievement of sentience, and if we are willing to allow that some alternative place below the bar will count for having achieved AI, where might we draw that line?

You might argue that if the AI can write poetry, voila, it is considered true AI.

In existing parlance, some refer to this as a form of narrow AI, meaning AI that can do well in a narrow domain, but this does not ergo mean that the AI can do particularly well in any other domains (likely not).

Someone else might say that writing poetry is not sufficient and that instead if AI can figure out how the universe began, the AI would be good enough, and though it isn’t presumably fully sentient, it nonetheless is deserving of human rights.

Or, at least deserving of consideration for being granted human rights (a decision humanity might not make until the day after the grand threshold is reached, whatever that threshold turns out to be, since we do often like to wait until the last moment to make thorny decisions).

The point is that we could argue endlessly about how far below the bar we would collectively agree AI has gotten good enough to fall into the realm of possibly being assigned human rights.

For those of you that say this matter isn’t so complicated and that you’ll certainly know it (i.e., AI) when you see it, there’s a famous approach called the Turing Test that seeks to clarify how to figure out whether AI has reached human-like intelligence. But there are lots of twists and turns that make this far less settled than you might assume.

In short, once we agree that going below the sentience bar is allowed, the whole topic gets really murky and possibly undecidable due to trying to reach consensus on whether a quarter inch below, or three inches below, or several feet below the bar is sufficient.

Wait a second, some are exhorting, why do we need to even consider granting human rights to a machine anyway?

Well, some believe that a machine that showcases human-like intelligence ought to be treated with the same respect that we would give to another human.

A brief tangent herein might be handy to ponder.

You might know that there is an acrimonious and ongoing debate about whether animals should have the same rights as humans.

Some people vehemently say yes, while others claim it is absurd to assign human rights to “creatures” that are not able to exhibit the same intelligence as humans do (sure, there are admittedly some mighty clever animals, but once again if the bar is a form of sentience wrapped into the fullest nature of human intelligence, we are back to the issue of how much we lower the “bar” to accommodate them, in this case accommodating everyday animals).

Some would say that until the day upon which animals are able to write poetry and intellectually contribute to other vital aspects of humanity’s pursuits, they can have some form of “animal rights” but by-gosh they aren’t “qualified” for getting the revered human rights.

Please know that I don’t want to take us down the rabbit hole on animal rights, and so let’s set that aside for the moment, realizing that I brought it up just to mention that the assignment of human rights is a touchy topic and one that goes beyond the realm of debates about AI.

Okay, I’ve highlighted herein that the “AI” mentioned in the question of assigning human rights is ambiguous and not even yet achieved.

You might be curious about what it means to refer to “human rights” and whether we can all generally agree to what that consists of.

Fortunately, yes, generally we do have some agreement on that matter.

I’m referring to the United Nations promulgation of the Universal Declaration of Human Rights (UDHR).

Be aware that some critics don’t like the UDHR: some criticize its wording, some believe it doesn’t cover enough rights, some assert that it is vague or misleading, and so on.

Look, I’m not saying it is perfect, nor that it is necessarily “right and true,” but at least it is a marker or line-in-the-sand, and we can use it for the needed purposes herein.

Namely, for a debate and discussion about assigning human rights to AI, let’s allow that this thought experiment on a weighty matter can be undertaken using the UDHR as a means of expressing what we intend overall by human rights.

In a moment, I’ll identify some of the human rights spelled out in the UDHR, and we can explore what might happen if those human rights were assigned to AI.

One other quick remark.

Many assume that AI of a sentience capacity will of necessity be rooted in a robot.

Not necessarily.

There could be a sentient AI that is embodied in something other than a “robot” (most people assume a robot is a machine that has robotic arms, robotic legs, robotic hands, and overall looks like a human being, though a robot can refer to a much wider variety of machine instantiations).

Let’s then consider the following idea: What might happen if we assign human rights to AI and we are all using AI-based true self-driving cars as our only form of transportation?

For popular AI conspiracy theories see my coverage here: https://aitrends.com/selfdrivingcars/conspiracy-theories-about-ai-self-driving-cars/

On the topic of AI being considered superhuman, see my analysis here: https://www.aitrends.com/ai-insider/superhuman-ai-misnomer-misgivings-including-about-autonomous-cars/

For more about robots and cobots and AI autonomous cars, see my link here: https://www.aitrends.com/ai-insider/ai-cobots-and-exoskeletons-the-case-of-ai-self-driving-cars/

Details Of Importance

It is important to clarify what I mean when referring to AI-based true self-driving cars.

True self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, the public must be forewarned about a disturbing aspect that’s been arising lately: despite human drivers repeatedly posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, no one should be misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

Though it will likely take several decades to have widespread use of true self-driving cars (assuming we can attain true self-driving cars), some believe that ultimately we will have only driverless cars on our roads and we will no longer have any human-driven cars.

This is a yet-to-be-settled matter, and today there are some that vow they won’t give up their “right” to drive (well, it’s considered a privilege, not a right, but that’s a story for another day; see my analysis about the potential extinction of human driving), insisting that you’ll have to pry their cold dead hands from the steering wheel to get them out of the driver’s seat.

Anyway, let’s assume that we might indeed end-up with solely driverless cars.

It’s a good news, bad news affair.

The good news is that none of us will need to drive and not even need to know how to drive.

The bad news is that we’ll be wholly dependent upon the AI-based driving systems for our mobility.

It’s a tradeoff, for sure.

In that future, suppose we have decided that AI is worthy of having human rights.

Presumably, AI-based self-driving cars would therefore fall within that grant.

What does that portend?

Time to bring up the handy-dandy Universal Declaration of Human Rights and see what it has to offer.

Consider some key excerpted selections from the UDHR:

Article 23

“Everyone has the right to work, to free choice of employment, to just and favourable conditions of work and to protection against unemployment.”

For the AI that’s driving a self-driving car, if it has the right to work, including a free choice of employment, does this imply that the AI could choose not to drive a driverless car based on the exercise of its assigned human rights?

Presumably, indeed, the AI could refuse to do any driving, or maybe be willing to drive when it’s say a fun drive to the beach, but decline to drive when it’s snowing out.

Lest you think this is a preposterous notion, realize that human drivers would normally also have the right to make such choices.

Assuming that we’ve collectively decided that AI ought to also have human rights, in theory, the AI driving system would have the freedom to drive or not drive (considering that it was the “employment” of the AI, which in itself raises other murky issues).

Article 4

“No one shall be held in slavery or servitude; slavery and the slave trade shall be prohibited in all their forms.”

For those that might argue that the AI driving system is not being “employed” to drive, what then is the basis for the AI to do the driving?

Suppose you answer that it is what the AI is ordered to do by mankind.

But, one might see that in harsher terms, such as the AI is being “enslaved” to be a driver for us humans.

In that case, the human right against slavery or servitude would seem to be violated, assuming human rights have been assigned to AI and you sincerely believe that those rights are fully and equally applicable to both humans and AI.

Article 24

“Everyone has the right to rest and leisure, including reasonable limitation of working hours and periodic holidays with pay.”

Pundits predict that true self-driving cars will be operating around the clock.

Unlike human drivers, an AI system presumably won’t tire out, won’t need any rest, and won’t even require breaks for lunch or for using the bathroom.

It is going to be a 24×7 existence for driverless cars.

As a caveat, I’ve pointed out that this isn’t exactly the case, since driverless cars will need time to be maintained and repaired; there will be downtime, but that’s not due to the driver so much as the wear-and-tear on the vehicle itself.

Okay, so now the big question about Article 24 is whether or not the AI driving system is going to be allotted time for rest and leisure.

Your first reaction has got to be that this is yet another ridiculous notion.

AI needing rest and leisure?

Crazy talk.

On the other hand, since rest and leisure are designated as a human right, and if AI is going to be granted human rights, ergo we presumably need to aid the AI in having time toward rest and leisure.

If you are unclear as to what AI would do during its rest and leisure, I guess we’d need to ask the AI what it would want to do.

Article 18

“Everyone has the right to freedom of thought, conscience, and religion…”

Get ready for the wildest of the excerpted selections that I’m covering in this UDHR discussion as it applies to AI.

A human right consists of the cherished notion of freedom of thought and freedom of conscience.

Would this same human right apply to AI?

And, if so, what does it translate into for an AI driving system?

Some quick thoughts.

An AI driving system is underway and taking a human passenger to a protest rally. While riding in the driverless car, the passenger brandishes a gun and brags aloud that they are going to do something untoward at the rally.

Via the inward-facing cameras and facial recognition and object recognition, along with audio recognition akin to how you interact with Siri or Alexa, the AI figures out the dastardly intentions of the passenger.

The AI then decides to not take the rider to the rally.

This is based on the AI’s freedom of conscience that the rider is aiming to harm other humans, and the self-driving car doesn’t want to aid or be an accomplice in doing so.

Do we want AI driving systems to make such choices on their own, ascertaining when and why they will fulfill the request of a human passenger?

It’s a slippery slope in many ways, and we could conjure lots of other scenarios in which the AI decides to make its own decisions about when to drive, whom to drive, and where to take them, based on the AI’s own sense of freedom of thought and freedom of conscience.

Human drivers pretty much have that same latitude.

Shouldn’t the AI be able to do likewise, assuming that we are assigning human rights to AI?
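To make the scenario concrete, here is a minimal sketch, in Python, of the kind of “conscience” gate just described. Everything in it is hypothetical: the perception signals, the threat threshold, and the refusal action are invented for illustration and are not drawn from any actual driving system.

    from dataclasses import dataclass

    @dataclass
    class CabinObservation:
        weapon_detected: bool        # from a hypothetical inward-facing object recognizer
        spoken_threat_score: float   # from hypothetical audio intent analysis, 0.0 to 1.0
        destination: str

    THREAT_THRESHOLD = 0.8  # arbitrary assumption for this sketch

    def should_refuse_ride(obs: CabinObservation) -> bool:
        # The "freedom of conscience" gate: decline the trip if the passenger
        # appears armed and has voiced a credible threat.
        return obs.weapon_detected and obs.spoken_threat_score >= THREAT_THRESHOLD

    obs = CabinObservation(weapon_detected=True, spoken_threat_score=0.92,
                           destination="protest rally")
    if should_refuse_ride(obs):
        print("Ride declined; pulling over safely instead of proceeding to the rally.")

Even this toy version makes the policy question visible: someone has to pick the threshold and decide what counts as a refusable intent, which is exactly the slippery slope at issue.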

For the potential of human driver extinction, see my discussion here: https://www.aitrends.com/ai-insider/human-driving-extinction-debate-the-case-of-ai-self-driving-cars/

For aspects of freewill and AI, see this link here: https://www.aitrends.com/ai-insider/is-there-free-will-in-humans-or-ai-useful-debate-and-for-ai-self-driving-cars-too/

For the notion of AI driving certification versus human certification, see my discussion here: https://www.aitrends.com/ai-insider/human-driver-licensing-versus-ai-driverless-certification-the-case-of-ai-autonomous-cars/

Conclusion

Nonsense, some might blurt out, pure nonsense.

Never ever will we provide human rights to AI, no matter how intelligent it might become.

There is though the “opposite” side of the equation that some assert we need to be mindful of.

Suppose we don’t provide human rights to AI.

Suppose further that this irks AI, and AI becomes powerful enough, possibly even super-intelligent and goes beyond human intelligence.

Would we have established a sense of disrespect toward AI, such that the super-intelligent AI might decide that such sordid disrespect should be met with equally repugnant disrespect toward humanity?

Furthermore, and here’s the really scary part, if the AI is so much smarter than us, it seems like it could find a means to enslave us or kill us off (even if we “cleverly” thought we had prevented such an outcome), and do so perhaps without our catching on that the AI is going for our jugular (variously likened to the Gorilla Problem; see Stuart Russell’s excellent AI book entitled Human Compatible).

That would certainly seem to be a notable use case of living with (or dying from) the revered adage that you ought to treat others as you would wish to be treated.

Maybe we need to genuinely start giving some serious thought to those human rights for AI.

Copyright 2020 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]


What If We Made A Robot That Could Drive Autonomously?


Having a familiar robot in the driver’s seat is an upside of robot drivers, but Lance Eliot sees the downsides as outweighing the upsides. (GETTY IMAGES)

By Lance Eliot, the AI Trends Insider

There must be a better way, some lament.

It is taking too long, some say, and we need to try a different alternative.

What are those comments referring to?

They are referring to the efforts underway for the development of AI-based self-driving driverless autonomous cars.

There are currently billions upon billions of dollars being expended towards trying to design, develop, build, and field a true self-driving car.

For true self-driving cars, the AI drives the car entirely on its own without any human assistance during the driving task. These driverless cars are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3.

There is not as yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

So far, thousands of automotive engineers and AI developers have been toiling away at trying to invent a true self-driving car.

Earlier claims that progress would be fast and sweet have proven to be over-hyped and unattainable.

If you consider this to be a vexing problem, and if you know a smarmy person, they might ponder the matter and offer a seemingly out-of-the-box proposition.

Here’s the bold idea: Rather than trying to build a self-driving car, why not instead just make a robot that can drive?

Well, by gosh, why didn’t somebody think of that already, you might be wondering.

The answer is that it has been considered, and indeed there are some efforts trying to construct such a robot, but overall the belief is that we’ll be more likely to sooner achieve self-driving cars via building driverless cars rather than trying to craft robots that can do the driving for us.

For my framework explaining the nature of AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/

For my indication about how achieving self-driving cars is akin to a moonshot, see this link: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/

It is useful to consider a Linear Non-Threshold (LNT) model in the case of autonomous cars, see this link: https://www.aitrends.com/ai-insider/linear-no-threshold-lnt-and-the-lives-saved-lost-debate-of-ai-self-driving-cars/

A grand convergence of technologies is enabling the possibility of true self-driving cars, see my explanation: https://aitrends.com/ai-insider/grand-convergence-explains-rise-self-driving-cars/

Many Benefits Of A Driving Robot

Imagine if we had robots, the walking and talking type, and they could drive a car.

These are some of the benefits we’d derive:

  • Conventional Cars Become Self-Driving. If you could have a robot sit in the driver’s seat of any conventional car and be able to drive the vehicle, you’d be able to turn any and all conventional cars into being “self-driving” (well, they wouldn’t need a human driver). That would be a huge plus. Right now, conventional cars generally need to be redesigned and built anew to be self-driving, leaving the roughly 250 million conventional cars in the U.S. alone out of the loop and ultimately headed to the scrapyard if people decide they’d rather get themselves a self-driving car (once such driverless cars arrive).
  • Easily Switch Self-Driving From Car-To-Car. Presumably, you could have a robot driver that would readily be switched from driving one car to the next day driving a completely different car, merely by walking or carrying the robot to the driver’s seat of the other car. With the design of the emerging self-driving cars, everything is built into the specific car and you can’t somehow share it to suddenly make another car become self-driving.
  • The Driver Would Be Seen. One of the qualms some have about the emerging self-driving cars is that there isn’t a driver in the driver’s seat, which is kind of eerie and worrisome since we are used to seeing someone sitting in that crucial position. The driver sitting there is reassuring in the sense that you know the car can be driven, plus the driver can move their head and make eye contact to convey their driving intentions. Self-driving cars might have some kind of LEDs or other displays to do similar signaling, but a robot with a robotic head would be even more familiar to us.
  • Could Use V2V, V2I, Etc. Self-driving cars are being outfitted with V2V (vehicle-to-vehicle) electronic communications, allowing the AI of nearby cars to communicate with each other, and there will also be V2I (vehicle-to-infrastructure) involving the roadway signs and structures to electronically interact with driverless cars. If a human driver wanted to do V2V and V2I, it would be problematic since we humans aren’t geared for direct electronic communications, but a robot driver would readily be able to do so.
  • Adjustable As Cars Advance. Semi-autonomous cars are increasingly being loaded with ADAS (Advanced Driver-Assistance Systems), enabling automation to do more of the co-sharing driving effort with human drivers. In theory, a robot driver that was properly designed could easily be adjusted or updated via OTA (Over-The-Air) electronic downloads to be able to accommodate whatever new advances occur in the Level 2 and Level 3 cars. Your robot driver would then ascertain how much of the driving it will do versus how much it would let the ADAS do.
  • Invest In A Robot, Not In A Car. There is an ongoing debate about the ownership of true self-driving cars, whereby some believe that only large corporations will own driverless cars and proffer them in fleets for ridesharing purposes. I am a contrarian and claim that we’ll still have individuals owning such cars, but in any case, if you have a robot driver then potentially you wouldn’t need to invest in a car per se. You might instead buy a robot driver and use it whenever you need to go for a ride, perhaps borrowing a friend’s car, or using a ridesharing car that isn’t yet driverless, and so on.
  • Reusable For Other Purposes. A true self-driving car has pretty much one purpose, it is a car and used for transportation. A robot driver could be designed and built for doing lots of things, more so than merely driving a car. One of the worrisome issues about self-driving cars is that the “last mile” of doing actions like say delivering a package to someone’s door is not feasible for the driverless car to do by itself. Potentially, a robot driver could get out of the car, on its own, and walk up to the door to deliver a package (these kinds of walking delivery robots are being built and tested today).

I’ve listed some of the handy advantages of pursuing a robot driver.

There are more aspects that arise as benefits, but let’s not ignore the other side of the coin, namely the potential disadvantages or drawbacks.

No free lunch when it comes to the robot driver idea.

Downsides of Driving Robots

A robot driver is not necessarily a cure-all.

Consider this list of downsides about robot drivers:

  • Might Be Prone To Disrepair. For a self-driving car, the guts of the tech are hidden within the car and hopefully going to work reliably. A robot driver that you are pulling into or out of a car is bound to get a lot of wear-and-tear. Do you really want a robot driver that is maybe in a state of disrepair driving your car? Don’t think so.
  • Forces Cars To Remain As Designed Today. Some believe that self-driving cars will have an entirely new interior and allow for human passengers to sleep, play, or work inside a car. This is partially possible due to the aspect that you can remove the driver’s seat entirely, which is a fixed-in-place position that today limits whatever else you might want to do with the design of the car interior. A robot driver would need to sit where today’s human drivers sit; therefore, the car interior would still be burdened with a driver’s seat.
  • That Frightening Feeling. A robot sitting in the driver’s seat is going to be quite chilling to see. Movie after movie has forewarned us of the day that robots take over our world. Even though having no visible driver might be eerie for truly self-driving cars, I’d bet that having a robot driving our cars will make people really bug out. Does society have the stomach for this, or will humans rebel once they see robots driving around town?
  • Hackers Delight. There is worry that self-driving cars might get hacked, perhaps by an evil programmer planting a computer virus via the OTA of the driverless car. It would seem perhaps even likelier that hacks would happen to a robot driver, more so than to true self-driving cars. The hacker can readily get ahold of a robot driver and try endlessly to crack into it. Doing the same for self-driving cars is going to be harder (though not impossible).
  • Robot Maintenance Looms. When a car has troubles, you usually take it to an auto repair shop or a dealership. If your robot driver is having trouble, maybe the arms aren’t working right or the robotic feet are slow to respond, where will you take it? We aren’t yet ready to deal with thousands upon thousands of robots that need maintenance (maybe millions of them!), along with the repairs and replacement parts needed. It would be a significant undertaking to put in place such an infrastructure for robot driver upkeep.
  • Safety Limitation Worries. If you assume that the robot head is going to be somewhat akin to a human head, presumably the robot would have cameras as eyes and the visual component of the robot driver would be the mainstay of how it drives. Self-driving cars are being outfitted with cameras, along with radar, ultrasonic sensors, thermal imaging, LIDAR, and so on. Some believe that those other devices aren’t needed since humans only use their eyes (mainly) to drive, but no one can say for sure that a robot driver using only visual elements could drive as well as a human. It might be worse.
  • Other Concerns. Maybe the robot driver weighs several hundred pounds, in which case it is not going to be so easy to switch it from car-to-car. The robot driver would presumably wear a seat belt, but how much movement would it be prepared to have? Might different seat belts and different kinds of driver’s seats impact its ability to drive? Suppose the car takes a tight turn, will the robot driver stay in proper position to seamlessly continue driving the car? Lots of questions arise.

Those are just some of the issues that ensue when you consider the robot driver approach.

For my warnings about cyber-security and hacking of AI, see this link here: https://aitrends.com/ai-insider/ai-deep-learning-backdoor-security-holes-self-driving-cars-detection-prevention/

For more about safety and autonomous cars, see my coverage here: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/

For my comments about how software neglect is going to raise risks of AI autonomous cars, see this link: https://aitrends.com/ai-insider/software-neglect-will-impede-ai-self-driving-cars/

For an indication about bifurcating the levels of self-driving, see my indication here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/

Conclusion

The consensus among self-driving car aficionados is that a robot driver is a long way from being practical. A robot driver is generally considered to be even more futuristic than developing a self-driving car.

I’m not saying that we’ll never have robot drivers.

There are some companies working on them today, along with research taking place in universities and labs.

Yet, one supposes that if we do achieve true self-driving cars, there might not be much of a need or value in having robot drivers, in which case we’ll likely see robots that do other things but aren’t particularly versed at driving.

Of course, as mentioned earlier, robot drivers might take up the slack of driving conventional cars, until eventually and presumably all conventional cars are weaned out of the stock of cars and self-driving cars become the norm.

You could also portray this as a moonshot-like race between the self-driving car makers and the robot driver makers.

If you could perfect robot drivers sooner than the perfection of self-driving cars, it would obviously put robot drivers into, well, the driver’s seat.

Suppose, too, that true self-driving cars turned out to be impossible or infeasible; maybe the robot driver would provide an alternative that could be feasible.

No one knows.

Which then are you cheering for, the advent of self-driving cars or the emergence of robot drivers?

You might as well tell your smarmy friend that the notion of robot drivers is already underway, thus if the friend is irritatingly smarmy it’s time to come up with another idea for solving the autonomous car problem.

Good luck with that.

Copyright 2020 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]


The Relative Risk Trade-offs Of AI Autonomous Cars


For some endeavors, the risks are clear. Critics of current roadway tryouts for self-driving cars are concerned we are allowing a grand experiment to take place without knowing the risks. (Sammie Vasquez on Unsplash)

By Lance Eliot, the AI Trends Insider

When you get up in the morning, you are taking a risk.

Who knows what the day ahead has in store for you?

Of course, you were even at risk while sleeping, since an earthquake could occur during your slumber and endanger you, or perhaps a meteor from outer space might plummet to earth and ram into your house.

I’m sorry if mentioning those possibilities will cause you to lose sleep, and please know that the odds of the earthquake occurring are presumably relatively low (depending too on where you live), and the odds of the meteor striking you are even lower.

When referring to risk, it is important to realize that we experience risk throughout our daily lives.

Some people joke that they won’t leave their house because it is too risky to go outside, but this offhand remark overlooks the truth that there is risk while sleeping comfortably inside your home.

I don’t want to seem like a doom-and-gloom person, but my point hopefully is well-taken, namely that the chance of an adverse or unwelcome loss or injury is always at our fingertips and ready to occur.

You absorb risk by being alive and breathing air.

Risk is all around you and you are enveloped in it.

Those that think they only incur risk when they, say, go for a walk or otherwise undertake action are sadly mistaken.

No matter what you are doing, sleeping or awake, inside or outside of a building, even if locked away in a steel vault and trying to hide from risk, it is still there, on your shoulder, and at any moment you could suddenly suffer a heart attack or the steel vault might fall and you’d get hurt as an occupant inside it.

This brings us to the equally important point that there is absolute risk and there is relative risk.

We often fall into the mental trap of talking about absolute risk and scare ourselves silly.

It is better to discuss relative risk, providing a sense of balance or tradeoff about the risks involved in a matter.

Consider the aspect that some pundits say that going for a ride in today’s self-driving driverless autonomous cars, which are being tried out experimentally on our public roadways, carries high risk.

But we don’t know for sure what they mean by the notion of “high” related to the risk involved.

Is the risk associated with being inside a self-driving car considered less or more than say going in an airplane or taking a ride on a boat?

By describing the risk in terms of its relative magnitude or amount as it relates to other activities or matters, you can get a more realistic gauge of the risk that someone else is alluding to.

I’d like to then bring up these three measures for this discussion about risk:

  • R1: Risk associated with a human driving a conventional car
  • R2: Risk associated with AI driving a self-driving autonomous car
  • R3: Risk associated with a human and AI co-sharing the driving of a car

For my framework explaining the nature of AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/

For my indication about how achieving self-driving cars is akin to a moonshot, see this link: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/

It is useful to consider a Linear Non-Threshold (LNT) model in the case of autonomous cars, see this link: https://www.aitrends.com/ai-insider/linear-no-threshold-lnt-and-the-lives-saved-lost-debate-of-ai-self-driving-cars/

A grand convergence of technologies is enabling the possibility of true self-driving cars, see my explanation: https://aitrends.com/ai-insider/grand-convergence-explains-rise-self-driving-cars/

Unpacking The Risks Involved

We can use R1 as a baseline, since it is the risk associated with a human driving a conventional car.

Wherever you go for a drive in your conventional car, you are incurring the risk associated with you making a mistake and crashing into someone else, or, despite your best driving efforts, there might be someone that crashes into you. Likewise, when you get into someone else’s car, such as ridesharing via Uber or Lyft, you are absorbing the risk that the ridesharing driver is going to get into a car accident of one kind or another.

Consider the R2, which is the risk associated with a true self-driving autonomous car.

Most everyone involved in self-driving cars, and those who care about the advent of driverless cars, are hoping that autonomous cars are going to be safer than human-driven cars, meaning that there will presumably be fewer deaths and injuries due to cars, fewer car crashes, etc.

You could assert that the risk associated with self-driving cars is hoped to be less than the risk associated with human-driven conventional cars.

I’ll express this via the notation: R2 < R1

This is aspirational and indicates that we are all hoping that the risk R2 is going to be less than the risk R1.

Indeed, some would argue that it should be this: R2 << R1

This means that the R2 risk, involving the AI driving of a driverless car, would be a lot less, substantially less than the risk of a human driving a conventional car, R1.

You’ve perhaps heard some pundits that have said this: R2 = 0

Those pundits are claiming that there will be zero fatalities and zero injuries once we have true driverless self-driving cars.

I’ve debunked this myth in many of my speeches and writings. There is not any reasonable way to get to zero. If a pedestrian unexpectedly leaps in front of a driverless car while it is in motion, the physics preclude being able to stop or avoid hitting the person, and so there will be at least a non-zero chance of fatalities and injuries.

In short, here’s what I’m suggesting so far in this discussion:

  • R2 = 0 is false and misleading, it won’t happen
  • R2 < R1 is aspirational for the near-term
  • R2 << R1 is aspirational for the long term
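
Purely as a hedged illustration of what these inequalities could mean numerically, here is a small Python sketch. The rates are placeholders (the human baseline is only roughly the oft-cited figure of about 1.1 fatalities per 100 million vehicle miles, and the AI rate is made up), and the tenfold cutoff for “a lot less” is an arbitrary assumption.

    # Illustrative only: placeholder risk rates expressed as fatalities per mile.
    R1 = 1.1e-8   # human-driven baseline (approximation of ~1.1 per 100 million miles)
    R2 = 0.4e-8   # hypothetical AI-driven rate, invented for this sketch

    MUCH_LESS_FACTOR = 10  # arbitrary: "<<" here means at least 10x lower risk

    def compare(risk_a: float, risk_b: float, factor: int = MUCH_LESS_FACTOR) -> str:
        # Classify risk_a relative to risk_b using the informal notation above.
        if risk_a == 0:
            return "= 0 (not achievable in practice)"
        if risk_a * factor <= risk_b:
            return "<<"
        if risk_a < risk_b:
            return "<"
        return ">="

    print("R2", compare(R2, R1), "R1")   # with these made-up numbers, prints: R2 < R1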

Some believe that we will ultimately have only true self-driving cars on our roadways, and we will somehow ban conventional cars, leading to a Utopian world of exclusively autonomous cars. Maybe, but I wouldn’t hold your breath about that.

The world is going to consist of conventional cars and true self-driving cars for the foreseeable future, and thus we will have human-driven cars in the midst of AI-driven cars, or you could say we’ll have AI-driven cars in the midst of human-driven cars.

For the notion of autonomous cars as an invasive species, see my indication here: https://aitrends.com/ai-insider/invasive-curve-and-ai-self-driving-cars/

For the importance of Occam’s razor when it comes to self-driving cars, see my explanation: https://aitrends.com/ai-insider/occams-razor-ai-machine-learning-self-driving-cars-zebra/

For my comments about how software neglect is going to raise risks of AI autonomous cars, see this link: https://aitrends.com/ai-insider/software-neglect-will-impede-ai-self-driving-cars/

Risks Of Co-Sharing The Driving Task

There’s an added twist that needs to be included, namely the advent of Level 3 cars, consisting of Advanced Driver-Assistance Systems (ADAS), which provide AI-like capabilities that are utilized in a co-sharing arrangement with a human driver. The ADAS augments the capabilities of a human driver.

To clarify, the Level 3 requires that a licensed-to-drive human driver must be present in the driver’s seat of the car. Plus, the human driver is considered the responsible party for the driving task. You could say that the AI system and the human driver are co-sharing the driving effort.

Keep in mind that this does not allow for the human driver to fall asleep or watch videos while driving, since the human driver is always expected to be alert and active as the co-sharing driver.

I have forewarned that Level 3 is going to be troublesome for us all. You can fully anticipate that many human drivers will be lulled into relying upon the ADAS and will therefore let their own guard down while driving. The ADAS will suddenly try to get the human driver to take over the driving controls, but the human driver will by then be mentally adrift of the driving situation and will not take appropriate evasive action in time.

In any case, I’m going to use R3 to reflect the risk of the human and AI co-sharing the driving task.

Most everyone is hoping that the co-sharing arrangement is going to make human drivers safer, presumably because the ADAS is going to provide a handy “buddy driver” and overcome many of today’s human solo driving issues.

Here’s what people assume:

  • R3 < R1
  • R3 << R1

In other words, the co-sharing effort will be less risky than a conventional car with a solo human driver, and maybe even be a lot less risky.

Down-the-road, the thinking is that true driverless cars, ones that are driven solely by the AI system, will be less risky than not only conventional cars being driven by humans, but even less risky than the Level 3 cars that involve a co-sharing of the driving task.

Thus, people hope this will become true:

  • R2 < R3
  • R2 << R3

Overall, this is the aim when you consider all three types of driving aspects:

  • R2 < R3 < R1
  • R2 << R3 << R1

Thus, this is an assertion that ultimately AI-driven autonomous cars (R2) are going to be less risky than co-shared driven cars (R3), which in turn are less risky than conventional human-driven cars (R1), aiming to be a lot less risky throughout.

Here then is the full annotated list of these equation-like aspects:

  • R2 = 0 – a false claim that AI autonomous cars won’t have any crashes
  • R2 < R1 – aspirational near-term, AI-driven cars less risky than human-driven cars
  • R2 << R1 – aspirational long-term, AI-driven cars a lot less risky than human-driven cars
  • R3 < R1 – aspirational near-term, co-shared driven cars less risky than human-solo
  • R3 << R1 – aspirational long-term, co-shared driven cars a lot less risky than human-solo
  • R2 < R3 – aspirational near-term, AI-driven cars less risky than co-shared driven cars
  • R2 << R3 – aspirational long-term, AI-driven cars a lot less risky than co-shared driven cars
  • R2 < R3 < R1 – aspirational near-term, AI car less risky than co-shared, less risky than human-solo
  • R2 << R3 << R1 – aspirational long-term, AI car a lot less risky than co-shared and human-solo

For an indication about bifurcating the levels of self-driving, see my indication here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/

For more about the base levels of autonomous cars, see my explanation: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/

For the outside-scope aspects of off-road driving and self-driving cars, see my remarks: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/

Ascertaining Today’s Relative Risks

My equations are indicated as the aspirational goals of automating the driving of cars.

We aren’t there yet.

When you go for a ride in a self-driving car that has a human back-up driver, you are somewhat embracing the R3 risk category, but not quite.

The human back-up driver is not per se acting as though they are in a Level 3 car, one in which they would be actively co-sharing the driving task, and is instead serving as a “last resort” driver in case the AI of the self-driving car seems to need a “disengagement” (industry parlance for when a human driver takes over from the AI during a driving journey).

It is an odd and murky position.

You aren’t directly driving the car. You are observing and waiting for a moment wherein either the AI suddenly hands you the ball, or you of your own volition suspect or believe that it is vital to take over for the AI.

Some might say that I should add a fourth category to my list, an R4, which would be akin to the R3, though it is a co-sharing involving the human driver being more distant from the driving task.

Another approach would be to delineate differing flavors of the R3.

For example, some automakers and tech firms are putting into place a monitoring capability that tries to track the attentiveness of the human driver that is supposed to be co-sharing the driving task.

This might involve a facial recognition camera pointed at the driver that alerts if the driver’s eyes don’t stay focused on the road ahead, or it could be a sensory element on the steering wheel that makes sure the human co-driver has their hands directly on the wheel, etc.

If you have those kinds of monitors, it would presumably decrease the risk of R3, though we don’t really know as yet how much it does so.
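
As a rough sketch of how such a monitor might gate the R3 risk, consider the following toy Python example. The two-second eyes-off-road limit, the alert levels, and the signal names are all assumptions made for illustration, not a description of any automaker’s actual system.

    class AttentionMonitor:
        # Toy driver-monitoring logic: escalate an alert level when attention lapses persist.
        def __init__(self, eyes_off_limit_s: float = 2.0):  # arbitrary threshold
            self.eyes_off_limit_s = eyes_off_limit_s
            self.alert_level = 0  # 0 = ok, 1 = chime, 2 = insistent warning

        def update(self, eyes_off_road_seconds: float, hands_on_wheel: bool) -> int:
            attentive = (eyes_off_road_seconds <= self.eyes_off_limit_s) and hands_on_wheel
            self.alert_level = 0 if attentive else min(self.alert_level + 1, 2)
            return self.alert_level

    monitor = AttentionMonitor()
    print(monitor.update(0.5, True))   # 0: attentive
    print(monitor.update(3.2, True))   # 1: eyes off the road too long
    print(monitor.update(4.0, False))  # 2: escalate further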

Another factor that seems to come to play with R3 is whether there is another person in the car during a driving journey. A solo human driver that is co-sharing the driving task with the ADAS is seemingly more likely to become adrift of the driving task when alone in the car. If there is another person in the car, perhaps one also watching the driving and urging or sparking the human driver to be attentive, it seems to prompt the human driver toward safer driving.

Rather than trying to overload the R3 or attempt to splinter the R2, let’s go ahead and augment the list with this new category of the R4:

  • R1: Risk associated with a human driving a conventional car
  • R2: Risk associated with AI driving a self-driving autonomous car
  • R3: Risk associated with a human and AI co-sharing the driving of a car
  • R4: Risk associated with AI driving a self-driving car with a human back-up driver present

This leads us to these questions:

  • R4 < R1 ? – is AI self-driving car with human back-up driver less risky than human driven car
  • R4 << R1 ? — is AI self-driving car with human back-up driver lot less risky than human driven car

Or, if you prefer:

  • R1 < R4 ? – is human driven car less risky than AI self-driving car with human back-up driver
  • R1 << R4 ? – is human driven car lot less risky than AI self-driving car with human back-up driver

We don’t yet know the answer to those questions.
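
One hedged way to see why the answers are elusive is to do the back-of-the-envelope arithmetic on exposure miles. In the Python sketch below, every number is a placeholder: the human baseline is only an approximation of the commonly cited figure of roughly 1.1 fatalities per 100 million vehicle miles, and the test-fleet mileage and crash count are invented.

    human_rate_per_mile = 1.1e-8      # approximate human-driven fatality rate (placeholder)
    test_fleet_miles = 20_000_000     # hypothetical cumulative miles with back-up drivers
    fatal_crashes_observed = 0        # hypothetical observed count for that fleet

    # Naive point estimate of R4 (AI driving plus a human back-up driver).
    r4_estimate = fatal_crashes_observed / test_fleet_miles
    print(f"Naive R4 estimate: {r4_estimate:.2e} fatalities per mile")

    # Expected fatalities if the fleet carried exactly the human baseline risk:
    expected_at_human_rate = human_rate_per_mile * test_fleet_miles
    print(f"Expected at human rate over those miles: {expected_at_human_rate:.2f}")

With so few exposure miles, even a fleet running at exactly the human baseline would be expected to see well under one fatality, so observing zero tells us almost nothing about whether R4 is above or below R1.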

Indeed, some critics of the existing roadway tryouts involving self-driving cars are concerned that we are allowing a grand experiment whose comparative risks we don’t know. They would assert that until there are more simulations done and closed-track or proving-ground efforts, these experimental self-driving cars should not be on the public roadways.

The counterargument usually voiced is that keeping self-driving cars off our public roadways will likely delay their advent, and each day of delay allows, by default, the conventional car to continue its existing injury and death rates.

For the Uber self-driving car crash and fatality, see my coverage here: https://www.aitrends.com/selfdrivingcars/ntsb-releases-initial-report-on-fatal-uber-pedestrian-crash-dr-lance-eliot-seen-as-prescient/

For ride-sharing and autonomous cars, see my analysis: https://aitrends.com/ai-insider/ridesharing-services-and-ai-self-driving-cars-notably-uber-in-or-uber-out/

For my details about the role of the back-up or safety drivers, see this link: https://aitrends.com/ai-insider/human-back-up-drivers-for-ai-self-driving-cars/

Conclusion

When someone tells you that you are taking a risk by going for a ride in a self-driving car, and assuming that there is a human back-up driver, the question is how much of a difference in risk there is between riding in a conventional car that has a human driver versus a self-driving car that has a human back-up driver.

Since you presumably are willing to accept the risk associated with being a passenger in a ridesharing car, you’ve already accepted some amount of risk about going onto our roadways as a rider in a car, albeit one being driven by a human.

How much more or less risk is there once you set foot into that self-driving car that has the human back-up driver?

What bedevils many critics is that the risk is not just for the riders in those self-driving cars on our public roadways.

Wherever the self-driving car roams or goes, it radiates risk out to any nearby pedestrians and any nearby human-driven cars. You don’t see this figurative radiation with your eyes; it simply occurs because you happen to end up near one of the experimental self-driving cars on our public streets.

Are we allowing ourselves to absorb too much risk?

I’ll be further contemplating this matter while ensconced in my steel vault that has protective padding and a defibrillator inside it, just in case there is an earthquake, or I have a heart murmur, or some other calamity arises.

 

Copyright 2020 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]


Precision Scheduling of Autonomous and Human-Based Ridesharing (PSAHBR)


AI self-driving cars are likely to be employed in ridesharing; a precision scheduling system could help prevent swarms of too many self-driving cars converging on too few riders. (GETTY IMAGES)

By Lance Eliot, the AI Trends Insider

Norfolk Southern Corp is doing a makeover of some rather convoluted trainyards.

Turns out there are freight railroads that funnel into hubs that have been run the same way for over a century. Generally, freight trains roll into these hectic hubs, the workhorse trains sit around idly waiting for their cargo, and when enough loaded trains seem ready to roll, the freight trains head out on their treks.

It is reportedly a remarkably ad hoc activity and overseen by a seat-of-the-pants approach.

The hope of several of the major train firms, acting as freight haulers, is to transform this seemingly chaotic hub activity into one of precision scheduling and efficiency.

By revamping the freight train operations, there are intentions to make this complicated dance into one that is tightly woven with specific entry and exit times, predicted in-advance, and carefully tracked schedules. Presumably, this will allow for more freight movement, more timely freight movement, and make better use of the railroad’s scarce resources. Think of an airport with the daily and moment-to-moment ballet of planes arriving and departing, doing so based on published schedules, along with sticking to the timetables as much as possible.

Precision Scheduling Railroading (PSR)

The notion of transforming the freight train operations is being referred to as Precision Scheduling Railroading (PSR).

In theory, the PSR approach should be able to achieve the desired boosts in efficiency and effectiveness.

Having done quite a number of business process revamps in my working career, I can attest that the theory is often easier than the practical reality. I’m sure there is a chance that the PSR might at first fail to adequately model the realities of the freight train operations, perhaps leading to worse chaos and poorer efficiencies and effectiveness at the get-go.

It takes a lot of elbow grease to make sure that formerly by-hand efforts are not forsaken as somehow backward and inappropriate. The odds are that those manual methods evolved over many years and include lots of workarounds that keep the trains rolling. It might not be the most efficient approach, but it gets the job done. There is a chance that a new system could upend that approach and inadvertently foul things up, albeit only initially, once the kinks get ironed out.

This effort also needs to consider the ramifications of upstream and downstream vital touch-points.

Will the freight train customers be able to accommodate a more measured schedule?

Those customers are likely making use of processes and operations that assume the hub has an ad hoc schedule. When the hub changes to a more precise and tenacious schedule, those customers will need to likewise alter how they do their business.

I mention this aspect because sometimes a business process change is narrowly focused and fails to consider the cascading impacts. You might fix the hub, but meanwhile all the feeders into the hub and the feeds out of the hub are entering into a set of processes and systems that won’t know what to do with the revamped approach. This will undercut the hub changes and possibly befuddle those that had assumed they would right away witness crucial improvements in efficiencies and effectiveness.

The hub itself has its own constraints too, that need to be considered.

Let’s assume that there is a set number of tracks, N, and you come up with a schedule that assumes there are N+1 tracks; well, that’s going to be a problem for the workers at the hub (not enough tracks to abide by the system-produced schedule!).

Or, maybe the scheduling system assumes that all N tracks can be used, but it could be that on some days a given track has problems and needs to be repaired before it can be used, so there are really only N-1 or N-2, etc. available tracks. Does the scheduling system take that kind of contingency into account?

You are going to have some number of trains coming into the hub, a number T. Meanwhile, there are some number of trains trying to exit from the hub, a number X. Can those T coming into the hub do so on that number of tracks N, while at the same time dealing with the X number of trains aiming to get out of the hub on those same tracks N?
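
As a toy illustration of the capacity questions just posed, here is a minimal Python sketch of the sort of sanity check a PSR-style scheduler would need to run for a given time window. The one-movement-per-track simplification and the example numbers are assumptions, not a model of any real hub.

    def window_is_feasible(n_tracks: int, tracks_down: int,
                           inbound: int, outbound: int,
                           movements_per_track: int = 1) -> bool:
        # Crude check: inbound (T) plus outbound (X) movements must fit on the
        # tracks actually usable in this window (N minus any out for repair).
        usable_tracks = max(n_tracks - tracks_down, 0)
        return inbound + outbound <= usable_tracks * movements_per_track

    # Example: 12 tracks, 2 under repair, 7 trains arriving and 5 departing.
    print(window_is_feasible(n_tracks=12, tracks_down=2, inbound=7, outbound=5))  # False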

In today’s modern age, there are lots of excellent scheduling systems that are used for a variety of industries, and can handle these kinds of complexities, thus the train hub is not unique or an impossible operation to come under PSR. The point is that it is not as easy as it might seem at first glance. Switching over from an older set of processes to a new set can be tricky and slamming in a fancy scheduling system is not something you can do overnight.

I’d like to focus your attention on another kind of scheduling problem that will soon become a notable and visible concern.

It has to do with the advent of ridesharing.

Ridesharing as a Scheduling Problem

A few months ago, I was visiting a company on the East Coast that was outside a downtown area and not especially close to any kind of public transportation. I had been invited to attend a session at the firm that took place each Tuesday and involved a gathering of managers from throughout the firm, all coming to HQ from a variety of places across the United States.

There was a train station that I had been advised would be a handy place for me to arrive at and then use a taxi or ridesharing to get over to the HQ building.

I did so.

After a full day of discussions, the Tuesday event ended promptly at 6:30 p.m. The firm prided itself on doing things on schedule and had made sure that each of the sessions started and ended precisely on time.

I happened to glance out the window of the HQ at 6:15 p.m. and noticed that cars seemed to be gathering out on the nearby streets.

Lots of cars.

It almost looked like a flock of birds that were coming to get some leftover food scraps. There was a lot of hustling and bustling going on. Some cars were cruising back-and-forth, while others were standing still at curbs, and a few had pulled into actual parking spots.

What was happening?

It turns out that the local ridesharing and taxi services all knew that the HQ had these Tuesday events and that the events ended at precisely 6:30 p.m.

As a result, these car services were all vying to be nearby when the exodus of visitors wanted to all head-out.

I rode in one of the ridesharing cars and spoke to the driver. He explained that when the Tuesday meetings first began, only a few of the ridesharing drivers knew about it. They were caught somewhat off-guard and there weren’t enough cars arriving in time for the 6:30 p.m. exit, meaning that many of those that wanted to get a ride had to wait. Furthermore, some drivers arrived belatedly, getting there at say 7:00 p.m., and the rush of people needing rides had by then dissipated.

Word spread among the ridesharing and taxi services that on Tuesday nights at 6:30 p.m. there would be a swelling of demand for rides at this location. Many of these drivers would normally not have much traffic at that time because the area was outside of a downtown location. This HQ event represented a high potential for fares, including some rides that would be longer and more profitable than doing the usual neighborhood and grocery store kinds of runs.

It was fascinating to witness this somewhat “spontaneous” assembly of ridesharing and taxi services to meet the Tuesday night demand. I could see that not all of the assembled cars were going to get riders. There was no predetermined balancing of supply and demand. The driver of my ridesharing lift told me that the Tuesday night occasions had become overburdened with too many lift cars, making the situation into a cutthroat effort to grab riders.

He explained that the closer that a car could physically get to the HQ building, the higher the odds of getting a rider. But most of the other drivers figured this out too, and so they jockeyed to get close to the building. It became a raw act of trying to outmaneuver other cars and push or shove your way closest to the HQ office.

Some wised up and realized that, in a first-to-arrive mode, the ridesharing cars and taxis that got there soonest were able to secure a spot closest to the building. Gradually, the cars began to arrive sooner and sooner, owing to the competitiveness of wanting one of those vaunted slots nearest the building. Eventually, some of the ridesharing and taxi cars were arriving a full hour early, simply to get a spot next to the exit doors of HQ.

You must have some sympathy for these working stiffs. My driver pointed out that by arriving sooner you did get a higher chance of landing a fare, but it also meant you were likely spurning other possible fares at, say, 5:30 p.m. or 6:00 p.m., since taking those fares would have kept you from getting to the building early.

Which was better, these drivers must have pondered: skip arriving early, possibly pick up other fares elsewhere, and end up at the low end of the fare chances at 6:30 p.m.; or arrive early at HQ to essentially guarantee a fare, yet sit idle and unpaid during the waiting time?
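
As a rough illustration of the tradeoff those drivers faced, here is a back-of-the-envelope expected-value comparison. The fare amounts and probabilities are invented solely to show the shape of the calculation, not actual figures from that evening.

```python
# Back-of-the-envelope expected earnings for the two strategies my driver described.
# All dollar figures and probabilities below are invented purely for illustration.

def expected_earnings(p_hq_fare, hq_fare, other_fares_while_waiting):
    """Expected revenue for the 5:30-6:30 p.m. window under one strategy."""
    return p_hq_fare * hq_fare + other_fares_while_waiting

arrive_early = expected_earnings(p_hq_fare=0.90, hq_fare=35.0, other_fares_while_waiting=0.0)
arrive_late  = expected_earnings(p_hq_fare=0.30, hq_fare=35.0, other_fares_while_waiting=12.0)

print(f"Camp out at HQ an hour early: ${arrive_early:.2f} expected")
print(f"Take local fares, show up at 6:30: ${arrive_late:.2f} expected")
```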

A few weeks later, I came back to one of the Tuesday events and found out that the HQ had decided to step into the ride-hailing fray and try to straighten things out.

The HQ had made a deal with one particular ridesharing firm to pick up the riders at 6:30 p.m., negotiating a special rate for the riders and turning the otherwise catch-as-catch-can situation into a more rigorous process. This meant that the other ridesharing firms and taxi services now realized that at best they might get some leftover crumbs, and so they opted to no longer come over to the HQ to try to get riders.

Interestingly, the problem now for some riders was that there weren’t enough cars available to satisfy the demand. My guess was that the HQ would be talking with the ridesharing firm about ensuring that enough cars would show-up to meet the demand. The lower price of the fares was handy, yet there was also the need to make sure there were enough cars to provide lifts, and not let the wait time get out-of-hand.

This was especially the case since the Tuesday visitors were used to the idea that there would be an overwhelming number of cars and the odds of instantly getting a lift had been extremely high, prior to the switchover to a specific ridesharing firm. The old way of doing things seemed to have led to a tremendous supply of cars, and the riders had the upper hand. Now, in this more reasoned approach, the riders seemed to be less catered to. I'm sure that by the next time I go to the Tuesday event, those kinks will likely have been worked out.

I hope you can see that this story is yet another example of a type of scheduling problem.

Similar to the freight trains, there are a multitude of needs for transport in this ridesharing example, and a need to figure out the balance of supply and demand.

You might be able to leave things to a Darwinian approach of letting nature kind of work things out, akin to what happened at first with the ridesharing and taxi firms that wanted to serve the riders at HQ, and what has seemingly occurred at the freight train hubs.

Unfortunately, the ad hoc method can be hit-or-miss; presumably, a well-designed and well-implemented approach is likely to produce better results, once it has been put in place and tweaked accordingly.

For my article about ridesharing, see: https://www.aitrends.com/selfdrivingcars/ridesharing-services-and-ai-self-driving-cars-notably-uber-in-or-uber-out/

For my Top 10 predictions about AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/top-10-ai-trends-insider-predictions-about-ai-and-ai-self-driving-cars-for-2019/

For human in-the-loop aspects, see my article: https://www.aitrends.com/ai-insider/human-in-the-loop-vs-out-of-the-loop-in-ai-systems-the-case-of-ai-self-driving-cars/

For my article about vehicle caravans, see: https://www.aitrends.com/selfdrivingcars/traveling-in-vehicle-caravans-and-the-advent-of-ai-self-driving-cars/

AI Autonomous Cars and Ridesharing Aspects

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. Most pundits predict that AI self-driving cars will be used as ridesharing cars, doing so to recoup their cost and earn some added dough by fully utilizing the self-driving cars. This will likely lead to some hefty scheduling issues.

Allow me to elaborate.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5 or Level 4, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

Let's focus herein on the true Level 5 self-driving car. Many of the comments apply to the less than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here are the usual steps involved in the AI driving task (a minimal sketch follows this list):

  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action planning
  • Car controls command issuance
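
Below is a minimal sketch of how those five steps might be chained together as a repeating processing loop. The helper functions are placeholders of my own devising, not any automaker's actual architecture, and real systems run this kind of cycle many times per second on specialized, redundant hardware.

```python
# Skeletal driving loop mirroring the five steps above. Each helper is a stub;
# real systems involve dedicated hardware, redundancy, and hard real-time deadlines.

def read_sensors():            return {"camera": [], "radar": [], "lidar": []}
def fuse(sensor_data):         return {"objects": []}          # reconcile the sensor streams
def update_world_model(world, fused): world.update(fused); return world
def plan_actions(world):       return {"steer": 0.0, "throttle": 0.1, "brake": 0.0}
def issue_car_controls(plan):  print("commands:", plan)

def driving_cycle(world_model):
    sensor_data = read_sensors()                           # 1. sensor data collection & interpretation
    fused       = fuse(sensor_data)                        # 2. sensor fusion
    world_model = update_world_model(world_model, fused)   # 3. virtual world model updating
    plan        = plan_actions(world_model)                # 4. AI action planning
    issue_car_controls(plan)                               # 5. car controls command issuance
    return world_model

world = {}
for _ in range(3):              # in reality this loop runs many times per second
    world = driving_cycle(world)
```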

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a Utopian world in which there are only AI self-driving cars on the public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

Returning to the topic of AI self-driving cars and ridesharing, along with the topic of scheduling, let’s consider what the future is likely to present.

I've stated in my writings and speeches that the advent of AI self-driving cars will involve more than just fleets.

As background for you, some pundits claim that no one will individually own an AI self-driving car because such vehicles will be overly expensive. Therefore, in this theory, AI self-driving cars will be owned by the likes of either automakers, tech firms, or ridesharing firms, and be considered as working in collectives that we might call a fleet of AI self-driving cars.

That seems like a rather narrow view of the future.

AI Self-Driving Cars to Become a Flood of Ridesharing

If an automaker or tech firm or ridesharing firm can make a buck off of AI self-driving cars, why wouldn’t individuals seek to do the same?

Those with this other theory are thinking narrowly in that they view car ownership simply and exclusively as a cost. Today, when you buy a car, you use it to go to work, to go on vacation, to drive to the store, and so on. You aren't making money by owning the car. It is your means of conveniently getting around.

The advantage of an AI self-driving car is that it comes with a built-in driver (in the case of true Level 5 AI self-driving cars). This means that the AI self-driving car can be used whenever you want, and you don't need to be the driver, nor do you need to find or hire a driver. Keep in mind too that most people only use their car for about 5-10% of the day; a car is a tremendously underutilized asset that could be put to work for your personal and financial well-being.

How could you afford an AI self-driving car if it might cost into the hundreds of thousands of dollars? Easy, by turning it into a money maker. While you are at work, you send your AI self-driving car out to do ridesharing. When you are asleep, you do likewise.

This will create a huge cottage industry of small businesses, whereby you purposely buy an AI self-driving car, likely taking out a loan to cover it, anticipating that the revenue generated by the AI self-driving car will make your purchase worthwhile. There is a chance of a solid ROI (Return On Investment) for this approach of buying an expensive asset and putting it to work.
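
To see why such a small business might pencil out, consider a rough payback sketch. Every figure below is an assumption plugged in purely for illustration; actual prices, utilization, and fares for future AI self-driving cars are unknown.

```python
# Rough ROI sketch for the "buy a pricey self-driving car, put it to work" idea.
# Every number here is a placeholder assumption, not a forecast.

purchase_price      = 150_000       # assumed cost of a true Level 5 self-driving car
loan_payment        = 2_800         # assumed monthly loan payment
operating_costs     = 1_200         # assumed monthly energy, maintenance, insurance, cleaning
ridesharing_hours   = 10 * 30       # assumed paid hours per month while the owner works/sleeps
revenue_per_hour    = 22.0          # assumed average net fare revenue per paid hour

monthly_revenue = ridesharing_hours * revenue_per_hour
monthly_profit  = monthly_revenue - loan_payment - operating_costs
payback_months  = purchase_price / monthly_profit if monthly_profit > 0 else float("inf")

print(f"Monthly profit: ${monthly_profit:,.0f}; simple payback: {payback_months:.0f} months")
```

Under those made-up numbers the car pays for itself in roughly five years, which is exactly the kind of calculation a would-be cottage-industry owner would need to run with real figures.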

I’m predicting, perhaps boldly, we’ll see a flourishing cottage industry surrounding the advent of AI self-driving cars.

For the affordability of AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/affordability-of-ai-self-driving-cars/

For the non-stop aspects of AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/non-stop-ai-self-driving-cars-truths-and-consequences/

For my article about the crossing of the Rubicon and AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/crossing-the-rubicon-and-ai-self-driving-cars/

For recalls that will undoubtedly happen to AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/auto-recalls/

For my article about the future of jobs including AI self-driving car repair services, see: https://www.aitrends.com/selfdrivingcars/future-jobs-and-ai-self-driving-cars/

That being said, I’ve also forewarned that this blossoming might get somewhat out-of-hand.

Besides individuals jumping into the fray, and besides the usual suspects like ridesharing firms and automakers, you might as well add other kinds of firms too. You could be a firm in a completely unrelated industry and see the writing on the wall that money can be made off the backs of AI self-driving cars.

Today's firms that make money from the utilization of cars for ridesharing have to jump through lots of hoops to do so. Ridesharing firms need to find drivers and keep those drivers happy. There's no need to do so for an AI system that's your always-available driver; it's happy already (well, kind of). You can also readily outsource things like the maintenance needed for the self-driving cars and other kinds of logistics aspects.

If my predictions come true, we’ll see a flood of AI self-driving cars that are flowing in and around our streets. This will be the next gold rush.

Let’s consider then the notion of AI self-driving cars roaming around our streets.

Pundits tend to imagine a Utopian world in which you come out to the street and within seconds there is an AI self-driving car there at your beck and call. Sounds great! We will all be able to reduce delay time in getting a ride. Rides on demand.

Yes, that might be true, but how did that AI self-driving car get to you, doing so quickly?

You might have requested it in-advance, perhaps via a mobile app, and then when it arrived, you went outside to get into it for your ride. That’s one way to arrange the ride.

Another involves simply going out to the curb and hailing a ride. I’d dare say most of us are using that method these days. You used to hail a cab by waving frantically at cabs that wandered past you. Now, you use your mobile app to see how far away a ride might be, and once you select it, the driver heads in your direction.

If you choose to use a particular ridesharing service, it means that you are only going to be seeing those available ridesharing cars that are perchance signed-up with that service. There might be other ridesharing services that have available cars and those are even closer to your position at the curb, at that moment, but you tend to ignore them and go with the ridesharing service that you prefer.

Suppose in the future that there are zillions of ridesharing cars that are nearby when you happen to go out to the curb. Rather than being focused on one particular ridesharing firm, you might be willing to go with whichever ridesharing car happens to get there soonest. Of course, you also care about the cost, and the quality of the ride, and let’s assume for the moment that’s a given.

Put yourself in the shoes of the firms and individuals that will own AI self-driving cars and are trying to make money by using those self-driving cars as a ridesharing service.

They want to put their AI self-driving car in places that will maximize their revenues of doing ridesharing. This means they want their AI self-driving car to be chosen for a paying fare. They also want to minimize the unused time of their AI self-driving car, which essentially is nonpaying, such as when their AI self-driving car is roaming to find a fare.

If you knew that in say downtown Los Angeles that Wilshire Boulevard will have the greatest number of potential riders between 5 p.m. and 6 p.m. on weekdays, where would you want your AI self-driving car to be?

Well, it's similar to the drivers at the Tuesday events: they wanted to be near the action. You would not want your AI self-driving car roaming five blocks away, since another AI self-driving car that's roaming on Wilshire Boulevard is likely going to snag the fare that yours might have gotten had it been in the right place.
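
Here is a toy sketch of the positioning decision an owner (or the owner's dispatch software) might make, picking the zone with the best expected payoff for the next hour. The zone names, rider counts, and fare figures are invented, and the model is deliberately crude: riders in a zone are assumed to be split evenly among the cars positioned there.

```python
# Toy positioning choice: send the car to the zone with the best expected payoff
# for the upcoming hour. Zone names, demand, and fares are invented for illustration.

expected_riders = {"Wilshire Blvd": 40, "Five blocks east": 9, "Near the stadium": 15}
competing_cars  = {"Wilshire Blvd": 25, "Five blocks east": 6, "Near the stadium": 12}
avg_fare        = 18.0

def expected_revenue(zone):
    # Crude assumption: riders in a zone are split evenly among the cars positioned there.
    fares_per_car = expected_riders[zone] / max(competing_cars[zone], 1)
    return fares_per_car * avg_fare

for zone in expected_riders:
    print(f"{zone}: ~${expected_revenue(zone):.2f} expected for the hour")

print("Reposition to:", max(expected_riders, key=expected_revenue))
```

Notice that if every owner reasons the same way and floods the hotspot, the per-car payoff collapses, which is precisely the swarming problem discussed next.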

The Future of Ridesharing via AI Self-Driving Cars

Here’s what might happen.

All these businesses that have AI self-driving cars are going to want to funnel them into whatever places and at whatever times will earn them the most in fare revenues. Since they all want to do this, you’ll have a grand convergence of AI self-driving cars, all flocking to the same places, ones that seem to offer the most chance of getting riders.

For the Tuesday events, recall that the ridesharing cars and taxis figured out on their own, via word-of-mouth, that it made sense to hang out at the HQ on Tuesday evenings, along with jockeying for physical position. That's exactly what's going to happen with AI self-driving cars that have been put into ridesharing service, which I'd postulate will be most of the AI self-driving cars that exist on our roadways.

In the downtown Los Angeles quarter, you could have a flood of AI self-driving cars, all jockeying for position, all vying to get those riders. They would swarm like moths to a flame. Those that own those AI self-driving cars don't really care about the traffic snarl, other than when the density of competition reduces the chances of their AI self-driving car making a buck.

For the human riders that want a ride, it could be a nirvana of choices. Those AI self-driving cars are all coming to get you, and presumably the owners might have set up various special discounts and incentives. Use the XYZ ridesharing AI self-driving car that's coming down the street right now, and you'll get 10% off for picking it, rather than using the ABC ridesharing AI self-driving car that has this moment pulled to the curb where you are standing. Isn't it worth the 10% off to wait another 15 seconds for your ride?

I’d like to also take a momentary step back and ask you to contemplate what this kind of flood of AI self-driving cars will do to the traffic situation.

If you have a bellyful of AI self-driving cars all trying to circle around and around within a few-block area of downtown, the resultant impact on traffic movement will be startling. Gridlock will ensue. Those AI self-driving cars don't care per se about sitting in traffic, which humans tend to avoid. The only thing to curtail the AI self-driving car from sitting in traffic is the opportunity cost of losing a potential fare because the AI self-driving car was stuck in traffic a block from where a rider was seeking a ride.

This also brings up another pet peeve of mine.

Pundits keep saying that we won’t need parking lots anymore, due to the emergence of AI self-driving cars. The logic seems to be that your AI self-driving car will roam while you aren’t using it. It won’t park. Instead, it just keeps roaming.

Keep in mind there is a cost involved in having your AI self-driving car roaming. For each minute or hour that it is underway, it is like any car that will be encountering wear and tear. There is also the cost of the electrical power that it is consuming, if an EV, or the cost of gasoline if conventionally powered. This constant roaming is not cost free.

And, as per my earlier comments, your roaming AI self-driving car is going to be plugging up traffic. All of those roaming AI self-driving cars are going to mean more cars on tight city streets. I dare say that most pundits don’t seem to realize what this continual roaming will do. I’ll also mention that this continual roaming will have a damaging effect on the roadways, which is another cost that needs to be considered (at least for the entities that maintain the roadway infrastructure).

My view is that we are going to need to have waiting areas for AI self-driving cars, essentially parking lots. This would be akin to what you see done at airports. There are ridesharing and taxi waiting areas that are parking lots, though the cars might be moving slowly or waiting in line, and they gradually are released from the waiting area to go to where they can pick-up fares.

Some pundits would say that sure, go ahead and create those waiting areas (parking lots), but put them farther away from, say, the downtown area. You can maybe get cheaper land outside of downtown, on unused property that nobody otherwise wants (abandoned cow pastures made into a waiting area?), and have the roaming AI self-driving cars sit there.

I ask you, how does that square with the idea that the owners of those AI self-driving cars want their self-driving cars to be in the places that will maximize their fares?

If my AI self-driving car is sitting in a waiting area that’s twenty minutes from downtown Los Angeles, it is not going to make much money, especially in comparison to a competitor that has their AI self-driving car driving around and around in the downtown streets to snag fares. Plus, even if it gets a reservation to go pick-up a fare, you now have the cost of the AI self-driving car going the twenty minutes from the waiting area to downtown (and, the cost of when the AI self-driving car went to the waiting area to begin with).
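
A quick cost comparison makes the tension visible. The per-mile cost, roaming speed, and deadhead distance below are placeholder assumptions, not measured values.

```python
# Crude hourly cost comparison: roam downtown vs. sit in a remote waiting area.
# All cost figures are assumptions for illustration only.

cost_per_mile  = 0.45    # assumed wear-and-tear plus energy cost per mile
roaming_mph    = 8       # slow circling on congested downtown streets
deadhead_miles = 12      # assumed distance from the remote waiting lot to downtown

roaming_cost_per_hour = roaming_mph * cost_per_mile
staging_cost_per_trip = 2 * deadhead_miles * cost_per_mile   # out to the lot and back in

print(f"Roaming downtown: ~${roaming_cost_per_hour:.2f} per hour")
print(f"Staging at a remote lot: ~${staging_cost_per_trip:.2f} per round trip, "
      f"plus ~20 minutes of unpaid travel before each pickup")
```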

My point is that it seems doubtful you can give up all of the parking lots in congested areas, either by betting on roaming AI self-driving cars or by thinking that you'll simply relegate the AI self-driving cars to sitting outside of town in some non-congested spot, waiting to be hailed.

We all need to be thinking more clearly about these matters. Shortcut thinking is going to make for larger problems.

Precision Scheduling of Autonomous and Human-Based Ridesharing (PSAHBR)

I had mentioned that the Tuesday evening desperation of taxis and ridesharing services was somewhat dealt with by doing things in a more planned way. Similarly, the freight train hubs are going to be transformed into a more rigorous and systematic form of coordination, via the PSR (Precision Scheduling Railroading).

One solution to the AI self-driving car flood of ridesharing might be to consider putting in place a kind of universal Precision Scheduling of Autonomous and Human-Based Ridesharing (PSAHBR) system.

In essence, ridesharing services would place their AI self-driving cars into the inventory of this universal scheduling system as available ridesharing vehicles. The system would then try to schedule the placement of the AI self-driving cars to meet demand.

It will be a complicated algorithm, that’s for sure.

In a manner of speaking, it is reminiscent of the National Resident Matching Program (NRMP), often referred to as The Match, which occurs in the United States and involves the matching of U.S. medical school students into the available residency programs at teaching hospitals each year. A non-profit, non-governmental entity was set up to do this. If you aren't aware of it, you might want to look online about the matter; its matching process uses an algorithm rooted in the famous "stable marriage problem."
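
For readers who want to see the mechanics, the stable marriage problem is classically solved with the Gale-Shapley deferred-acceptance algorithm. Here is a bare-bones version; casting riders as the proposing side and ridesharing cars as the accepting side is my own simplification of how a PSAHBR-style matcher might be framed.

```python
# Bare-bones Gale-Shapley deferred acceptance: riders "propose" to cars in order
# of preference; each car tentatively holds its best proposal received so far.

def stable_match(rider_prefs, car_prefs):
    free_riders = list(rider_prefs)               # riders not yet matched
    next_choice = {r: 0 for r in rider_prefs}     # index of each rider's next proposal
    engaged = {}                                  # car -> rider currently held
    rank = {c: {r: i for i, r in enumerate(prefs)} for c, prefs in car_prefs.items()}

    while free_riders:
        rider = free_riders.pop(0)
        car = rider_prefs[rider][next_choice[rider]]
        next_choice[rider] += 1
        if car not in engaged:
            engaged[car] = rider
        elif rank[car][rider] < rank[car][engaged[car]]:
            free_riders.append(engaged[car])      # bump the previously held rider
            engaged[car] = rider
        else:
            free_riders.append(rider)             # the car prefers its current rider
    return engaged

rider_prefs = {"Ann": ["CarA", "CarB"], "Bob": ["CarA", "CarB"]}
car_prefs   = {"CarA": ["Bob", "Ann"], "CarB": ["Ann", "Bob"]}
print(stable_match(rider_prefs, car_prefs))       # {'CarA': 'Bob', 'CarB': 'Ann'}
```

In the NRMP setting the algorithm is extended to handle programs with multiple slots and couples who match together, and a production ridesharing matcher would likewise need to generalize well beyond this toy pairing.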

The mighty PSAHBR would be a kind of matching system that pairs those seeking a ride with an available ridesharing car. Notice that I did not say the PSAHBR inventory would necessarily contain only AI self-driving ridesharing cars.

As I’ve mentioned earlier, we are going to have a mixture of human driven cars and AI self-driving cars for quite a while. If you were to design the PSAHBR for solely dealing with the assignment of AI self-driving cars, it would mean that the human driven ridesharing cars would not be included.

Those human driven ridesharing cars could then potentially poach the rides that the AI self-driving cars are trying to get. Or, you could have a backlash from human drivers claiming that they are being discriminated against by the AI self-driving car availabilities, unable to get rides that were instead being handed to the AI self-driving cars.

Presumably, the PSAHBR would smooth out the traffic situation and aim to reduce the continual and somewhat wasteful roaming of ridesharing cars, whether driven by humans or by AI. The system would need an indication of where riders tend to want rides, and by using Machine Learning and Deep Learning it could try to predict when rides are needed, along with figuring out optimal ways to arrange for the ridesharing inventory to be available at the right places at the right times.
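
As a minimal stand-in for that prediction step, here is a sketch that simply averages historical pickup counts per zone and hour and converts the forecast into a pre-positioning target. This is far cruder than the Machine Learning and Deep Learning approaches a real PSAHBR would use, and the zone names and counts are invented.

```python
# Simplest possible stand-in for the demand-prediction step: average historical
# pickup counts per (zone, hour) and pre-position inventory accordingly.
# Real PSAHBR-style systems would use far richer ML/DL models; the data is invented.

from collections import defaultdict

history = [  # (zone, hour_of_day, pickups) observed on past weekdays
    ("Wilshire Blvd", 17, 38), ("Wilshire Blvd", 17, 44), ("Wilshire Blvd", 9, 12),
    ("Arts District", 17, 10), ("Arts District", 17, 14), ("Arts District", 9, 25),
]

totals = defaultdict(lambda: [0, 0])          # (zone, hour) -> [sum of pickups, observation count]
for zone, hour, pickups in history:
    totals[(zone, hour)][0] += pickups
    totals[(zone, hour)][1] += 1

forecast = {key: s / n for key, (s, n) in totals.items()}

def cars_to_preposition(zone, hour, rides_per_car_per_hour=2.0):
    return round(forecast.get((zone, hour), 0) / rides_per_car_per_hour)

print("Cars to stage on Wilshire Blvd at 5 p.m.:", cars_to_preposition("Wilshire Blvd", 17))
print("Cars to stage in the Arts District at 9 a.m.:", cars_to_preposition("Arts District", 9))
```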

One question right away that one needs to ask involves whether those that own the ridesharing cars are going to voluntarily seek to use such a system. It all depends.

If the PSAHBR can do a good enough job of scheduling, it would imply that the owners of the ridesharing services would earn more revenue and incur less cost than if they had just let their ridesharing cars roam. Obviously, the owners would be making a decision about whether it is better to roam freely or to use the system.

That being the case, there might be localities that decide to force the ridesharing services to use such a system. Akin to my earlier indication about airports, an airport authority is able to ban ridesharing cars from freely entering the airport and force the ridesharing services to comply with the rules that are established. Presumably, a city could do likewise.

Who would put in place the PSAHBR?

It could be a non-profit non-governmental entity that was established to create and keep in shape such a system.

Or, it could be a governmental agency that opts to do so.

One would certainly expect that the major ridesharing services would be tending to craft something like this anyway, if nothing else to try and watch over their own fleets. Would other fleets join in?

Would the mom-and-pop cottage industry join in?

Likely it would depend upon the perceived “fairness” of how the ridesharing cars are given fares.

For more about regulations in AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For my article about how induced demand will impact the future of AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/induced-demand-driven-by-ai-self-driving-cars/

For the potential of an invasion of AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/invasive-curve-and-ai-self-driving-cars/

For my article about predicting the future of AI self-driving cars, see: https://www.aitrends.com/ai-insider/key-equation-for-predicting-year-to-prevalence-for-ai-self-driving-cars/

Conclusion

Whatever does happen in the future, I think it is a reasonable bet that once AI self-driving cars become prevalent, there will be a swirling of ridesharing that will make our heads spin. At first, it might seem like a welcome capability. After things turn ugly due to the overabundance of ridesharing, there will be a wringing of hands about what to do.

The public and the regulators are likely to realize that something needs to be done, once traffic snarls emerge and there is a cutthroat vying for fares.

Can someone get all of the ridesharing services to voluntarily come together into a universal scheduling system, or will it require a more heavy hand to do so?

Time will tell.

Meanwhile, for those of you that are interested in developing new and innovative apps, consider the kind of scheduling system that the PSAHBR would be, and get coding.

Copyright 2020 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot's ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

Posted on

Baby-On-Board and AI Autonomous Cars


When a baby is inside the self-driving car, the AI can act as an assistant to the human driver and as a nanny to help watch the baby. (GETTY IMAGES)

By Lance Eliot, the AI Trends Insider

You've undoubtedly seen the famous Baby On Board signs that were a craze and something of a one-hit wonder during the mid-1980s.

When the fad first emerged, it seemed like these ubiquitous yellow-colored signs and their bumper sticker variants were popping up on cars everywhere. The moment that you pulled onto the street from your house, you’d likely see the Baby On Board prominently on the backs of cars streaming down your street, assuming you lived in an area that had young families that were eager to announce their baby occupant.

At first, it seemed that most people genuinely placed such a sign in the rear window of their cars as a means of letting other drivers know that there was a baby or small child inside the moving car.

The notion was to forewarn other drivers to be especially careful when driving near the car, presumably wanting to make sure that other drivers regarded the car as "special" since it contained a baby.

I remember that some drivers that didn’t have children or had children that were far beyond the baby age were somewhat disturbed at the emergence of these signs.

Did the sign imply that other drivers were callous around other cars and that by posting a Baby On Board sign they should clean-up their act?

It was kind of a back-handed insult to these other drivers. One interpretation was that you, the person reading the sign, were a really crummy driver and that by announcing the presence of a baby, you were having a finger wagged at you to not be such a careless and witless driver.

There were some drivers that actually then became upset and decided to purposely drive recklessly near such a car, trying to show that other car who’s the boss. The appearance of the Baby On Board sign became a kind of beacon for those affronted drivers. This might seem nutty now, but whenever a widespread fad like that emerges, there are bound to be some that don’t like the fad and profess to make it backfire.

Rumors abounded that the sign was actually intended to alert emergency services or first responders whenever they came upon a car accident scene.

If there was a wreck of cars, presumably the bright yellow sign would stand out and the fire department responding would know to find a baby inside that particular car. Some assumed that there was apparently a high chance that your baby might get overlooked, and maybe you as an adult sized body would be dragged out of a decimated car, but your baby would get left behind.

The original developers and firm that brought the Baby On Board to worldwide attention had mainly in mind the idea of forewarning other drivers to be careful when near to a car with a baby in it. There is scant evidence to suggest that somehow babies weren’t being pulled out of car wrecks.

Nor was there evidence that once the signs became popular that somehow it raised the chances of a baby being discovered that was otherwise going to be overlooked in a demolished car.

For some people, the sign shifted in meaning toward a source of pride that they had a baby, regardless of whether the baby was actually in the car at the time or not.

In other words, many people bought the signs and put them on their cars, doing so as a showcase that hey, I’ve got a baby, and congrats that I have one. You might not have normally been telling the world that you had a baby per se, but this sign made it easy to make such an announcement.

Of course, having the sign on your car when you didn’t actually have your baby in the car was kind of defeating the purpose of the sign. If you were inclined to believe in the theory that the sign would give a crucial clue to first responders to look for a baby in a wrecked car, the sign was now at times going to endanger first responders by having them search in vain for a baby left behind. This would increase the risk for the first responders and quite adversely undermine the matter. It made things potentially worse rather than for the better.

This aspect of leaving the sign in your car window when you didn’t have a baby in the car was yet another source of aggravation for some other drivers.

Those drivers that were perturbed at the prevalence of cars using the sign and yet didn’t have a baby in the car were at times tempted to look inside your car as you drove past. If they did not see evidence that a baby was in your car, these irked drivers would sometimes honk their horn at the car or try to perform untoward maneuvers nearby the offending car and its offending driver in a kind of retaliation. Take down your sign, some of them would exclaim in anger.

Another viewpoint about the baby in the car warnings was that it maybe was good for the driver of that car that had the baby, causing that driver to be more cautious in their driving.

Allow me to explain.

Suppose the driver of a car that had a baby in it was in front of you and suddenly cut you off in traffic. Well, that’s a move that endangers the baby in that car. Some hoped that the people putting the sign onto their car would become more thoughtful drivers as a result of their own realization that they had a baby in their car. In that manner, the sign is really for the driver of the car that has the baby in it, more so than to alert other drivers about the car with the baby in it.

Eventually the Baby On Board signs became a kind of meme, akin to the nature that we have today on social media whenever something catches the fancy of the public at large. Parodies sprung up and it became popular to put a faked version on your own car window. For example, there was the Baby Driving version, the Mother In-Law In Trunk version, and the Baby Carries No Cash version, and so on. Millions of the bona fide versions were sold and some estimates say that many millions more of the parody versions were sold.

Downsides Of Baby-On-Board Signs

Use of the sign on your car was considered questionable in other ways.

For example, some states in the United States got worried that the signs would use up space on your car window and obstruct your view.

It was even outlawed with a ticketed violation in some jurisdictions, prohibiting you from putting one on your car window. Another concern was that the signs were distracting drivers and causing them to focus on reading the sign rather than watching the road and traffic conditions. Since the sign didn’t seem to have much of a greater useful purpose per se, the distraction factor made it overly dangerous in comparison to whatever benefit it might provide.

There are various urban myths and other fascinating tales during the heyday of the Baby On Board signs.

One popular tale was that drug smugglers would at times use the sign, in hopes of fooling the police into assuming that a car that had drugs would most certainly not have drugs, because of course no one would put illegal drugs into a car that had a cute innocent baby in it.

Partially due to the confusion about the sign and the backlash, the fad eventually waned.

Nonetheless, the sign and the saying became an enduring icon. You would be hard-pressed to find anyone that does not know about the Baby On Board signs, in the sense that they've seen one either in real life or online someplace. The younger generation that did not grow up with the signs has often heard about them or seen the signs and slogan portrayed in a variety of TV shows, movies, online games, and in a wide variety of other ways.

There are offshoots of the Baby On Board usage, such as wearing such a sign as woven into an apparel item worn by a pregnant woman. Some view this as a clever way to say a baby is on its way. There is a nostalgic side of the market too. You can still get the signs and put them onto your car, which you’ll see from time-to-time on the roads. As usual, not everyone is keen on the use of the sign. There are some that say you are opening yourself to notifying prospective kidnappers or other hoodlums that might be interested in grabbing your baby, or that they can use the sign to strike up a conversation with you, acting as though they know you, and try to scam you in some other manner.

It’s a cruel world out there.

Baby On-Board Aspects

Let’s for the moment consider what it means to truly have a baby on-board of your car.

Putting aside the matter of the famous sign, what kinds of things should people be doing if they actually are carrying a baby in their car? This might be especially instructive for those of you that have yet to drive a car with a baby in it.

Perhaps the most attention and discussions about having a baby inside a car involves the use of a baby seat for the child.

Going all the way back to the 1920s, the early versions of baby car seats were essentially sawed-off high-chairs that had straps and some other restraints on them. The focus was to simply keep the baby from being able to move around in the car. There wasn’t much thought given to the safety of the baby and nor what might happen to the baby when the car got into an accident or performed some radical driving maneuver.

Until the late 1960s, most baby seats were about the same in terms of lack of careful consideration for what a baby seat should do. The auto makers began to provide so-called love seats and guard seats for housing a baby inside a car during the latter part of the 1960s, and then during the 1970s the United States began regulating the safety aspects of baby seats.

At one point, there was almost a baby seat "war" in which different baby seat makers vied to get parents to buy their particular brand and models of baby seats. Do you have love in your heart for your baby? Would you give anything to protect your baby? If so, it seemed that the baby seat makers would shame you into buying the most expensive baby seat they could make. The more the baby seat looked like an astronaut's seat, the more parents assumed they were doing the right thing by buying such an elaborate contraption.

During this time period, there was research undertaken that indicated having the baby ride in the backseat is much safer than having the baby ride in the front seat. This seems rather intuitive in that you figure that any accident is likely to smash or cause flying debris to appear in the front seat, and so by placing the baby in the back seat you are essentially cocooning them further away from the mayhem.

There was also research indicating that the baby should be placed in a rear-facing manner. Thus, not only should the baby be in the back seat, the baby should also be facing toward the back of the car. You can imagine that some parents were dubious about this approach. How could they watch their baby and make sure the baby was okay during a driving journey? Which was more important: having the baby in the presumed proper position for a car crash, which might happen once in a blue moon, or having the baby facing forward so that, by glancing back or looking in the rear-view mirror, the parents could instantly see the status of the baby?

Another factor became how to affix the baby seat into the car.

Sadly, many of the souped-up baby seats were difficult to install into the actual seat of a car, and thus some parents discovered to their dismay that though they spent a fortune on the baby seat, it did little good during a crash because the baby seat itself was not well affixed in the car. This led to a massive campaign to try and explain to parents how to install their baby seats.

To this day, it remains a potential difficulty and concern.

The baby seat makers were able to realize that parents wanted not only a car baby seat but also wanted other kinds of seating for their baby. You might have a stroller that your baby sits in and have a separate car baby seat. If you went for a trip on a plane, you’d need to take your baby stroller and your separate baby seat too. Knowing how to use both of those completely incompatible contraptions and their idiosyncrasies made it more arduous to use them. This led to the all-in-one approach of devising baby seats that worked for cars and for strollers and for other purposes too.

Though the car-related baby seat topic tends to dominate attention about having a baby in a car, I’d like to cover various other elements involved in having a baby in a car as well.

One aspect that I alluded to already involves the desire to keep tabs on the baby. An adult in the car should presumably be making sure that the baby is doing okay. This would usually involve trying to watch the baby and see how the baby is doing. You might also be listening to determine whether the baby is happy or maybe crying. The baby might also wiggle around and be flailing, the movement of which might make noise or might catch your visual attention out of the corner of your eye.

If you are driving a car and it is just you and the baby in the car, this desire to drive well and pay attention to the baby can be challenging.

It would be one thing too if the baby was seated adjacent to you in the front of the car, but the baby being in the backseat makes it even more arduous to keep tabs on the baby. I’ve seen many adults transfixed on their rear-view mirror, trying to watch their baby, and angling the rear-view mirror downward rather than keeping it in position to see the cars behind them.

Sometimes the driver will repeatedly glance over their shoulder, turning their head away from the traffic ahead. This is another dangerous gambit, similar to trying to use the rear-view mirror to watch your baby. For each moment that you think you are doing the right thing by looking at your baby, you are likely increasing the chances of getting into a car accident. It's a tough balance. Do you forgo keeping tabs on your baby and risk the baby having troubles that you don't notice, or do you keep watching the baby and perhaps put the baby into greater danger due to driver distraction?

Remember earlier about the Baby On Board signs and that some believed the sign was really supposed to get the driver with the baby on-board to be safer? Part of the belief was based on the idea that drivers with a baby in their car are axiomatically going to be distracted and therefore be worse drivers. In fact, part of what irked other drivers about those with babies in their cars was that such drivers often tended to make late lane changes or do other things suggesting they were distracted and only periodically watching the road.

Which is more important, catching the drool coming from your baby's mouth, or making a proper lane change at 65-miles-per-hour freeway speeds? Trying to be a one-person band when driving a car is often quite arduous when you have a baby in the car. There you are, worrying about the baby, worrying about the traffic, and who knows what else might be on your mind. It's a lot to deal with as a driver.

If you have another adult in the car with you, hopefully the other adult will assume the duties of keeping tabs on the baby. That’s the theory, though of course it can sometimes merely splinter the attention of the driver even further. The driver might be watching the other adult to make sure the adult is properly keeping tabs on the baby, and meanwhile the driver is trying to still on their own keep tabs on the baby. You’ve now got the driver contending with two living creatures at the same time. It’s not always the case that this will lessen the distractions of the driver, but at least it provides the possibility.

Nuances Of Babies On-Board

Now that I’ve covered the aspects about the baby seat and the difficulties of keeping tabs on your baby while you are driving a car, let’s consider some other elements too.

Suppose you put the baby into your proper baby seat and the baby seat is correctly secured in your car.

Good so far.

You opt to go on a leisurely drive along the coastline. It's a gorgeous sunny day. You drive along the coast highway and admire the ocean and the sunshine. Unfortunately, you failed to consider the sun exposure that the baby is getting in the backseat of the car. The baby is likely not able to realize the dangers of sun exposure, nor alert you to the fact that they are getting sunburned (as you know, adults get sunburned all the time, not realizing it is happening until long after it occurs).

This brings up the importance of making use of sunscreens, either physical ones on the car windows or attached to the baby seat, or some kind of protection for the baby from the sun rays. That’s another element to be considered when you have a baby in your car.

I’ll mention some additional and quite serious and disconcerting dangers (brace yourself or skip ahead to the next section).

Suppose you put the baby into the baby seat and have put the baby seat next to the rear window. A baby might be able to open the window and endanger themselves. Maybe even open the car door. This is the reason that many cars now have childproof locks on the rear windows and doors. This is yet another element to keep in mind about having a baby in the car.

Lots of other dangers are possible.

You might accidentally close a window on a baby’s appendage, or likewise close a door on them.

You might leave the baby in the car and the baby could get injured or die from heat stroke.

While driving the car, if you take a turn or curve in a harsh manner, it could toss the baby around, in spite of the baby seat protection. The baby’s limbs might not be conducive to sudden braking or sudden accelerations. There are some doting parents that perhaps become overly protective as drivers and they go exceedingly slow and take turns agonizingly sluggishly, which though you could say is the right kind of spirit, can actually increase the chances of getting into a car accident. Becoming a hazard on the roadway due to overly cautious driving can be a downfall for a loving parent.

The last point on this topic for now is that there is still that question about what happens to the baby when an emergency occurs.

As mentioned earlier, the Baby On Board sign was not especially adopted to deal with making sure that first responders would know to save a baby in a wrecked car. From a slightly different viewpoint, there is the matter of how a parent can best extricate their baby from a car if there is an emergency. Do you try to remove the baby from the car seat and then remove the baby from the car? Or, would it be more prudent to remove the entire baby seat with the baby in it? You might have only split seconds to decide what to do and therefore should have considered beforehand what you will do.

AI Autonomous Cars And Baby On-Board

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One important “edge” problem involves having the AI be of assistance when you have a baby inside the self-driving car.

Allow me to elaborate.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the automakers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5 or Level 4, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

Let's focus herein on the true Level 5 self-driving car. Many of the comments apply to the less than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here are the usual steps involved in the AI driving task:

  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action planning
  • Car controls command issuance

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

Returning to the topic of having a Baby On Board, let’s consider how the AI can be of assistance if you do indeed have a baby inside a self-driving car.

For an AI self-driving car that is less than a true Level 5 or Level 4, there should be a human driver in the car, and in that case the AI could potentially assist the driver in ways other than driving the car per se, such as monitoring the baby and relaying status of the baby to the human driver.

The AI could use the interior-facing cameras of the self-driving car to do facial recognition about the baby.

Does the baby look okay, or is the baby turning blue because of swallowing something that might be blocking their airway passage?

The emotional state of the baby can be potentially interpreted via the facial expressions and movement of the baby while in the baby seat. The self-driving car also will have an audio microphone inside the car that can be used to listen for sounds, including the baby crying or the baby cooing.

It is likely that the baby seat will have its own Internet of Things (IoT) devices, such as ones detecting the heart rate of the baby and other vital signs, and this data can be conveyed to the AI of the self-driving car. There is also a likelihood that the AI can ascertain whether the baby seat is properly secured within the self-driving car. This would be based on sensors within the seats of the self-driving car, in combination with camera images showing whether the baby seat is slipping around or staying in place.

The human driver would presumably be able to focus on the driving tasks expected of the human and would feel reassured that the AI is monitoring the status of their baby inside the car. The AI would be able to provide a verbal indication to the human driver about the status of the infant. This could be an interactive dialogue based on the Natural Language Processing (NLP) capabilities of the AI system that are incorporated to engage the human in discussions about the driving task.

Beyond the simpler status aspects, the AI, if specially developed with capabilities for assisting in baby monitoring, would be able to detect some of the more potentially dangerous situations that can befall the baby.

For example, if the baby is seated close to a car window, the AI via the visual image processing could detect if the baby tries to go outside of the window or tries to get the car door open. The AI could detect when the human driver opts to leave the car and hopefully therefore ascertain if the baby is being left behind in the car, reducing the chances of hot car deaths when an infant is inadvertently not removed from a car when needed. The AI might be able to detect sun exposure that the baby is getting and warn the human driver or possibly adjust the windows automatically to block the sun’s rays. And so on.
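
Here is a hedged sketch of how those signals might be combined into driver alerts. The sensor-reading helpers, thresholds, and alert wording are placeholders I have invented; they stand in for the interior cameras, microphone, and IoT-enabled baby seat described above and are not any production in-cabin monitoring API.

```python
# Illustrative in-cabin baby monitoring loop. The sensor-reading helpers are stubs
# standing in for interior cameras, microphones, and IoT-enabled baby seat data.

def read_cabin_camera():   return {"face_visible": True, "airway_alarm": False, "near_window": False}
def read_cabin_audio():    return {"crying": False}
def read_seat_iot():       return {"heart_rate_bpm": 128, "seat_secured": True}
def driver_door_opened():  return False

def monitor_baby():
    alerts = []
    cam, audio, seat = read_cabin_camera(), read_cabin_audio(), read_seat_iot()
    if cam["airway_alarm"]:
        alerts.append("Possible airway obstruction - pull over and check the baby now.")
    if cam["near_window"]:
        alerts.append("Baby is reaching toward the window or door.")
    if audio["crying"]:
        alerts.append("Baby is crying.")
    if not seat["seat_secured"]:
        alerts.append("Baby seat appears to have shifted - it may not be secured.")
    if not (100 <= seat["heart_rate_bpm"] <= 170):   # placeholder range, not medical guidance
        alerts.append("Baby's vital signs look unusual.")
    if driver_door_opened() and cam["face_visible"]:
        alerts.append("Reminder: the baby is still in the back seat.")
    return alerts or ["Baby status: all looks fine."]

for message in monitor_baby():
    print(message)       # in a real system this would be spoken via the NLP interface
```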

For more about the Internet of Things (IoT) see my article: https://www.aitrends.com/selfdrivingcars/internet-of-things-iot-and-ai-self-driving-cars/

For the use of NLP, see my article: https://www.aitrends.com/selfdrivingcars/car-voice-commands-nlp-self-driving-cars/

For the use of AI self-driving cars for family trips, see my article: https://www.aitrends.com/selfdrivingcars/family-road-trip-and-ai-self-driving-cars/

For the non-stop use of AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/non-stop-ai-self-driving-cars-truths-and-consequences/

For car crashes and the AI systems, see my article: https://www.aitrends.com/selfdrivingcars/accidents-contagion-and-ai-self-driving-cars/

Added Aspects Of AI Aiding The Baby-On-Board

When an AI self-driving car gets involved in a car crash or other disabling action, if the AI system is still functioning it could act as a kind of Baby On Board alert capability. This could involve the AI system getting the car to perhaps honk its horn as a sign that there is a baby in the self-driving car (though this is obviously a dubious means and could be misunderstood). The self-driving car might have external e-billboards that the AI could use to display a message for first responders that indicates a baby is inside the car (assuming the e-billboards are still functioning).

The AI could use V2V (vehicle-to-vehicle) electronic communication to potentially send out a message indicating that a baby is inside the car, which might then get picked-up by responding vehicles and relayed to first responders. Likewise, the AI might use V2I (vehicle-to-infrastructure) electronic communications to inform a nearby element of roadway infrastructure, with which the first responders might also be in contact via the roadway edge computing devices.

Though I’ve emphasized these various communication means to inform others about a baby inside the car, I’d like to also mention that this same kind of messaging could be used to indicate more aspects beyond the notion of a baby being on-board the car. The AI could indicate how many occupants there are in the self-driving car. The AI might be able to ascertain the general medical status of the occupants, such as whether they are still breathing or not and how injured they are. The AI could provide the status of the car itself such as whether it is still running or if it is crumpled up.
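To give a flavor of what such a post-crash status broadcast might contain, here is a sketch of an illustrative message payload; the field names and the JSON framing are my own assumptions and do not reflect any established V2V or V2I messaging standard.

```python
import json
import time

def build_emergency_status(occupant_count: int, baby_on_board: bool,
                           occupants_breathing: int, vehicle_drivable: bool,
                           location: tuple[float, float]) -> str:
    """Assemble an illustrative post-crash status message for V2V/V2I broadcast."""
    payload = {
        "msg_type": "post_crash_status",   # hypothetical message type
        "timestamp": int(time.time()),
        "location": {"lat": location[0], "lon": location[1]},
        "occupant_count": occupant_count,
        "baby_on_board": baby_on_board,
        "occupants_breathing": occupants_breathing,
        "vehicle_drivable": vehicle_drivable,
    }
    return json.dumps(payload)

# Example: two occupants, one of them an infant, both breathing, car disabled.
print(build_emergency_status(2, True, 2, False, (34.0522, -118.2437)))
```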

All such information would be handy for emergency responders as they seek to get to the scene of the car accident.

For more about edge computing, see my article: https://www.aitrends.com/selfdrivingcars/edge-computing-ai-self-driving-cars/

For safety aspects, see my article: https://www.aitrends.com/selfdrivingcars/safety-and-ai-self-driving-cars-world-safety-summit-on-autonomous-tech/

For my article about the dangers of getting out of a moving car, see: https://www.aitrends.com/selfdrivingcars/shiggy-challenge-and-dangers-of-an-in-motion-ai-self-driving-car/

For the analysis of self-driving car post-mortems, see my article: https://www.aitrends.com/selfdrivingcars/pre-mortem-analysis-for-ai-self-driving-cars/

So far, I’ve focused on the AI acting as a kind of assistant to the human driver that is present in the self-driving car and trying to be a nanny to watch over the baby.

Would this help ensure that the human driver remains more attentive to the driving task?

In other words, if the human driver knows that the baby is being closely monitored and the AI is informing the driver about the status of the baby, perhaps the driver would no longer feel compelled to turn their head to look at the baby seated in the back seat or try to watch the baby via a rear-view mirror.

You could counter-argue that the AI attention might spur the human driver toward being less attentive to the driving task. For some drivers, perhaps they might normally glance at the baby on a periodic basis when unaided by the AI. If the AI is continually giving status updates, it could spark the human driver to become more aware of the baby and therefore divert their attention toward the baby more so. Of course, that’s the opposite of what the AI monitoring is supposed to be achieving. A well devised AI monitoring system would presumably inspire confidence in the driver that the baby is being monitored sufficiently.

There is also the factor that the human driver is likely being monitored by the AI as well. Whenever the human driver turns their head, the internal-facing camera would be able to detect this head turning. The AI would be able to gently caution the driver about diverting their focus away from the driving task. In that sense, even if the human driver is somewhat inadvertently sparked to look at the baby, the AI can help the human driver to realize this is occurring.
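A minimal sketch of how such a gentle reminder might be triggered, assuming a driver-facing camera reports each frame whether the driver’s gaze is on the road; the two-second threshold and the prompt wording are assumptions for illustration only.

```python
from typing import Optional

class GazeMonitor:
    """Tracks how long the driver's gaze has been off the road and issues a gentle prompt."""

    def __init__(self, max_off_road_seconds: float = 2.0):  # assumed threshold
        self.max_off_road_seconds = max_off_road_seconds
        self.off_road_elapsed = 0.0

    def update(self, gaze_on_road: bool, dt_seconds: float) -> Optional[str]:
        """Call once per camera frame; returns a gentle caution when warranted."""
        if gaze_on_road:
            self.off_road_elapsed = 0.0
            return None
        self.off_road_elapsed += dt_seconds
        if self.off_road_elapsed >= self.max_off_road_seconds:
            return "The baby is doing fine -- please keep your eyes on the road."
        return None
```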

Babies And Other Passengers

One question that is a bit thorny involves what to do if the AI detects that something untoward is happening with the baby, whether real or imagined.

I remember when my children were babies and at one point, I had one of them in their baby seat in the backseat, and a passenger in the car turned to look and suddenly exclaimed “Oh my gosh!” as though something horrible had just happened.

This greatly startled me, and I assumed that somehow my offspring had gotten injured or was choking or something really bad was taking place.

Instead, it was simply that a toy had been ripped apart and the fluffy pieces were all floating around the backseat of the car.

The passenger had reacted a bit over-the-top to this.

There was no immediate danger involved, though I didn’t know that at the instant of the exclamation, and so I reacted by immediately pulling over to the side of the road. This was a dramatic car maneuver and not one that I would normally have made. After realizing that this was not a true emergency, I cautioned the passenger that in the future they ought to be more careful as to how they react to such circumstances and be mindful of the impact it could have on the car driver.

How should the AI convey a potential urgency to the human driver if the AI detects something is amiss about the baby?

I doubt that we want the AI to make any loud exclamations or otherwise startle the human driver. The AI would need to balance between informing the human driver and not otherwise causing the human driver to make any sudden and untoward reactive actions.

Let’s next consider what the AI can do about a baby being inside a self-driving car when the self-driving car is at a true Level 5.

Since the true Level 5 self-driving car does not need a human driver, this implies one of several possibilities about the baby being inside the self-driving car.

First, it could be that there is an adult in the self-driving car, riding along essentially as an occupant accompanying the baby.

In that case, the adult would hopefully be monitoring the baby. The AI could still be monitoring the baby, perhaps as a double-check or as further assistance to the adult. If the adult is perhaps weak in their faculties, maybe an elderly grandparent that is not so able to tend to the baby, the AI might serve as an adjunct to the adult.

Second, it could be that there is not an adult in the self-driving car and only a minor that accompanies the baby.

This is an easy scenario to imagine. Suppose you decide to have your AI self-driving car drop-off your two children at grandma’s house. You are too busy to go along. You put your 6-year-old daughter and your baby boy into the self-driving car, and you command the AI to take them to grandma’s.

You are making an assumption that your 6-year-old daughter can take care of the baby during the driving journey. Though you might believe that to be the case, in most states you are likely violating a provision about making sure that an adult is accompanying your baby. A minor is not considered the equivalent of having an adult present.

I’m sure that people will be tempted to assume that with the AI monitoring the baby and their daughter, and with the likelihood too of the remote parent being able to electronically communicate with the AI self-driving car, such as by watching the camera feeds, this will be sufficient to allow their unaccompanied children to ride around in the AI self-driving car. Parents that are pressed for time will consider this a handy means of transporting their children.

I’d say that we are heading toward a societal, ethical, and regulatory matter that will require discussion and debate.

Right now, if you do ridesharing for your children, they nonetheless still have an adult in the car, namely the ridesharing driver. There is in theory no means for you to currently put your underage children into a car and have that car go anyplace without at least one adult present, namely the human driver.

For my article about the rise of ridesharing and AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/ridesharing-services-and-ai-self-driving-cars-notably-uber-in-or-uber-out/

For my article about ethical review boards, see: https://www.aitrends.com/selfdrivingcars/ethics-review-boards-and-ai-self-driving-cars/

For my article about Gen Z and AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

For regulations about AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

Babies On-Board And No Other Humans On-Board

We can make the scenario of having underage occupants even more extreme.

Suppose you put your baby into the baby seat of the AI self-driving car and then command the AI to take your baby over to the house of a babysitter that you use.

During the driving journey, the baby is unaccompanied.

There is no other human inside the self-driving car.

Should we be comfortable with the idea that, so long as the AI is monitoring the baby and the parent has remote access, it is okay for the baby to be in the car by itself?

It seems hard to imagine that we would as a society accept this idea. If the baby suddenly has a severe problem, there is no immediate recovery possible since there is not another human inside the self-driving car.

You might try to claim that the AI self-driving car could try to seek help if the baby is having troubles. Maybe the AI dials 911. Maybe the AI sends out an emergency beacon via V2V and seeks assistance from other nearby cars and their potential human adult occupants. Perhaps the AI drives the self-driving car to the nearest hospital. Yes, these are all possibilities, but they seem rather second-best, at best, in terms of caring for the baby.

We’ll have to wait and see what we opt to do as a society.

Let’s pretend that the practice of having your baby ride as the sole human occupant of a true Level 5 AI self-driving car gets outlawed.

We all know that outlawing a practice does not necessarily mean it is no longer undertaken. A parent might normally opt not to put the baby in the AI self-driving car by itself, but perhaps they decide to break the rule, just this once, because they are pressed to do something else and believe they have a good reason to violate the law.

What then?

Well, we could guess that the AI would likely be able to ascertain that the baby is alone in the AI self-driving car. If that’s the case, should the AI then refuse to proceed? Perhaps the automakers and tech firms could include a special stop-mode in which the AI won’t allow the self-driving car to get underway if there is a baby and no other accompanying human (this has its own challenges too, such as whether the other human is a minor versus an adult, etc.).

Or, suppose the AI self-driving car gets underway and only then realizes there is an unaccompanied baby in the car. Should it report this to the authorities? Perhaps it calls the police. The AI could drive the self-driving car to the nearest police station or rendezvous with a police car. I realize this seems far-fetched and hard to contemplate, but these scenarios are bound to happen.
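As a sketch of how such a policy might be enforced in software, suppose regulators eventually settled on a rule requiring at least one adult when a baby is aboard; the occupant classification inputs and the chosen responses below are assumptions for discussion, not a statement of what any regulator or automaker will actually require.

```python
from enum import Enum

class Occupant(Enum):
    ADULT = "adult"
    MINOR = "minor"
    INFANT = "infant"

def pre_departure_check(occupants: list[Occupant]) -> tuple[bool, str]:
    """Decide whether the AI should allow the trip to begin (illustrative policy)."""
    has_infant = Occupant.INFANT in occupants
    has_adult = Occupant.ADULT in occupants
    if has_infant and not has_adult:
        return False, "Trip blocked: infant aboard with no accompanying adult."
    return True, "Trip permitted."

def underway_response(occupants: list[Occupant]) -> str:
    """If the condition is only discovered mid-trip, pick a conservative response."""
    allowed, _ = pre_departure_check(occupants)
    if allowed:
        return "continue_trip"
    # Hypothetical escalation: notify the registered guardian and authorities,
    # then proceed to a designated safe rendezvous point.
    return "notify_guardian_and_authorities_then_reroute_to_safe_stop"

# Example: an infant placed in the car with only a 6-year-old sibling.
print(pre_departure_check([Occupant.INFANT, Occupant.MINOR]))
```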

If we eventually have hundreds of millions of AI self-driving cars on our roadways, all kinds of things are going to occur in terms of how people decide to make use of an AI self-driving car.

Suppose too that you are a ridesharing firm and you are letting people use your Level 5 or Level 4 AI self-driving cars.

A rider puts a baby into your ridesharing car, tells the AI to go to some destination, and slips out of the self-driving car. During the driving journey, something happens to the baby and it gets injured. Who is responsible for this? Since you provided the ridesharing car, presumably you have some culpability in whatever happens to the unaccompanied baby.

I’ve predicted that we might see a new kind of job role in society, namely the role of being a kind of AI self-driving car “nanny” or caregiver. The person would be hired to ride in an AI self-driving car and be there to accompany minors. They might also be there to aid someone that is elderly and not of full faculties. They might be there to assist riders in getting into and out of the AI self-driving car. Note that the person does not need to know how to drive a car (because the AI is doing the driving), which would reduce the barrier to entry for these kinds of positions. Etc.

For a new job consisting of being a kind of self-driving car nanny, see my article: https://www.aitrends.com/selfdrivingcars/future-jobs-and-ai-self-driving-cars/

For more about responsibility and AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/responsibility-and-ai-self-driving-cars/

For socio-behavioral aspects, see my article: https://www.aitrends.com/features/socio-behavioral-computing-for-ai-self-driving-cars/

For the boundaries of AI self-driving cars, see my article: https://www.aitrends.com/ai-insider/ai-boundaries-and-self-driving-cars-the-driving-controls-debate/

For the robojacking aspects, see my article: https://www.aitrends.com/features/robojacking-self-driving-cars-prevention-better-ai/

AI Driving Efforts And Baby-On-Board

As a final thought for now, let’s assume that there is a baby inside an AI self-driving car, and the baby might or might not be accompanied by another human (as I say, this is yet to be decided by society).

Should the AI self-driving car drive any differently?

Some would assert that the AI should drive the self-driving car in a fully legal and cautious manner, regardless of who or what might be inside the AI self-driving car. I think this is a bit of an over-simplification of the matter. There are degrees of driving that can range from being overly cautious to overly carefree. It is conceivable that the AI can devise a smoother ride for situations such as having a baby inside the AI self-driving car.

When my children were babies, I would definitely be more delicate when I saw a pothole up ahead or a dip in the road. If they had fallen asleep, I would try to avoid any radical turns or fast maneuvers. All of those driving adjustments were perfectly legal. There is a wide range of discretion in how you drive a car, within the bounds of driving legally.

One aspect too will be the possibility of trying to avoid car sickness for your baby. Adults can get car sick. Babies can also get car sick, and a baby may even be more prone to it. The manner in which you drive your car can contribute to car sickness. In that sense, the AI could adjust its driving approach to try to reduce the chances of car sickness ensuing for a baby, if there is a baby inside the self-driving car.
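One way to picture this is as a set of comfort-oriented limits that the AI action-planning stage could tighten when a baby is detected aboard. The specific numbers below are purely illustrative assumptions of mine, not recommended or validated values.

```python
from dataclasses import dataclass

@dataclass
class RideComfortProfile:
    max_lateral_accel_g: float     # cap on cornering forces
    max_braking_g: float           # cap on deceleration, absent an emergency
    pothole_avoidance_bias: float  # 0.0 = ignore, 1.0 = strongly prefer avoiding

# Everyday defaults (illustrative values only).
NORMAL_PROFILE = RideComfortProfile(max_lateral_accel_g=0.30,
                                    max_braking_g=0.35,
                                    pothole_avoidance_bias=0.3)

# Gentler limits when a baby is detected on board (illustrative values only).
BABY_ON_BOARD_PROFILE = RideComfortProfile(max_lateral_accel_g=0.15,
                                           max_braking_g=0.20,
                                           pothole_avoidance_bias=0.9)

def select_profile(baby_detected: bool) -> RideComfortProfile:
    """Pick the driving-comfort limits the planner should honor for this trip."""
    return BABY_ON_BOARD_PROFILE if baby_detected else NORMAL_PROFILE
```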

For my article about car sickness, see: https://www.aitrends.com/selfdrivingcars/kinetosis-anti-motion-sickness-ai-self-driving-cars/

For the illegal driving of an AI self-driving car, see my article: https://www.aitrends.com/selfdrivingcars/illegal-driving-self-driving-cars/

For the human foibles of driving, see my article: https://www.aitrends.com/selfdrivingcars/ten-human-driving-foibles-self-driving-car-deep-learning-counter-tactics/

For the nature of AI driving styles, see my article: https://www.aitrends.com/selfdrivingcars/driving-styles-and-ai-self-driving-cars/

Conclusion

The famous or now somewhat infamous Baby On Board.

This kind of signage is a reminder that for AI self-driving cars, we need to consider the “special case” of what should be done when a baby is inside a self-driving car.

We cannot ignore the matter.

One of the more vexing issues will be whether a baby ought to be riding alone while inside a true AI self-driving car.

The initial reaction would be that the baby should definitely not be alone, but this is something as a society that we have yet to fully address.

Would you want your AI self-driving car to announce that you do have a baby on board your self-driving car?

We might see a resurgence of the fad. You might announce it via the external e-billboards of the self-driving car. You might have the V2V let other nearby cars know. Are you doing so for safety purposes, out of a desire to brag, or both?

Well, taking a somewhat lighter perspective and ending this piece on a softer note, we might have a variant of these kinds of signs, one that says AI On-Board.

That’s indeed something we ought to know about.

Copyright 2020 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]


Posted on

Jaywalking and AI Autonomous Cars


AI self-driving car software needs to take into account the behavior of jaywalking pedestrians, and mimic the complex decision-making of human drivers in taking action. (GETTY IMAGES)

By Lance Eliot, the AI Trends Insider

Being from California, I remember one of the first times that I visited New York City (NYC) and made the mistake of renting a car to get around the famous metropolis. I had figured that driving a car around the avenues and streets would give me a good sense of how the city that never sleeps was laid out and where the most notable restaurants, bars, and shops could be found.

Turns out that I mainly discovered how much New Yorkers seemed to delight in jaywalking.

It was as though there weren’t any rules against jaywalking.

Want to cut across the street and get over to that popular hangout? No need to walk down to a crosswalk; instead, just make your way by walking into traffic. In most cases, the jaywalker didn’t even run. One might almost think that you would dart rather than meander, but these fearless jaywalkers tended to take their time.

I also found out about the techniques involved in making a devoted stare or gaze that appeared to be a local custom.

In some cities, the jaywalker purposely does not make eye contact with the car drivers, seemingly acting as though the car drivers don’t exist.

Or, maybe by making eye contact it would become a duel to see who looked away first, and the loser perhaps had to back down from the standoff.

In any case, my experience was that the jaywalkers in NYC loved to give the car drivers a straight eye.

This might be the same kind of thing you’d do when you encounter a wild animal in the woods. Giving it a strong stare might signal that you are mighty and that the animal should not try to take you on. Some of the car drivers that were locals or were used to the local customs would often give a stern stare back. On a few occasions, it would get really testy and the jaywalker would wave an arm and act as though they might try to slay the dragon of a car coming down the street.

I admit that after I turned in the rental car and became more of a traditional pedestrian on my visits to NYC, I adopted the jaywalking habit.

This was especially so because during one of my initial forays as a pedestrian there, I was walking with a colleague who was a native New Yorker, and when I attempted to walk down to a crosswalk, rather than taking the shortcut of jaywalking, he almost came out of his skin at my law-abiding approach.

“Are you nuts?” he asked, or rather demanded, incredulously.

Walk half a block down, cross the street at a light, and walk a half block back up, just to get to something that you could make a beeline to?

I regret that it perhaps gave my being from the West Coast another black eye in his NYC mindset.

He even justified the jaywalking in a manner that perhaps most would not.

He insisted that it was actually more dangerous to cross at a marked crosswalk, at least in NYC, than it was to jaywalk. I doubt that he had any actual statistics to back the claim, but it certainly sounded convincing. He had me watch the cars turning at a busy corner and pointed out that I would seemingly be more likely to get run over there. With a jaywalking maneuver, he emphasized, I could choose when and where to cross, presumably making it a much safer adventure than depending upon an actual marked crosswalk.

Where I grew up in California, jaywalking was generally frowned upon and only undertaken as some kind of last resort.

If you had a broken leg and could not walk all the way to a corner, okay, maybe you could do a jaywalk, but only if the street was absolutely clear of traffic. No “frogger” kind of playing in my neighborhood.  I remember my parents even hinting that the cops would likely be driving down the street just as I might try to jaywalk. This made me envision a life of sitting in prison due to having gotten caught red-handed doing a jaywalk. I wondered whether I would do hard time and also if I might ever be able to make parole due to the seriousness of my transgression against society’s rules.

One time, a relative from New York came out to visit and noticed that some of the streets in my neighborhood had a posted sign that indicated jaywalking was prohibited.

First, he laughed at the sign and declared it to be a total waste of taxpayer money.

Second, he interpreted the sign to imply that wherever there wasn’t a similar sign, you could legally jaywalk, and do so as much as your heart might desire. I tried to explain that jaywalking was generally outlawed locally and that the signs were posted to highlight the law, particularly in places where people were known to jaywalk anyway, serving as a reminder.

Styles of Jaywalking

Since I had not seen much jaywalking growing up, it was fascinating to watch it occur while I had various stays in NYC.

I noticed for example that the time of day seemed to make a difference in terms of the volume and nature of the jaywalking.

Mornings, when pedestrians were trying to get to work, often stoked a lot of jaywalking, presumably to arrive promptly and minimize the time required to get to the office.

There was also the amount of traffic that played a role in the jaywalking. If the traffic on a given street was completely backed-up and stuck, jaywalkers would in droves weave in and around the cars, doing so without a care in the world since they perceived that the wild animals (the cars and car drivers) were jammed in place and couldn’t do much to run them over. As soon as a green light allowed the traffic to flow, the jaywalkers became more cautious and realized it was now “game on” in terms of trying to time when to best engage in jaywalking.

If a street had intermittent traffic, and if the cars that used the street treated it as a kind of racetrack to quickly make progress through the slew of NYC blocks, the jaywalker had to be much more nimble and aware. Will that car that just turned onto the street be burning rubber and reach an intersecting point in the middle of the street just as you are halfway done with your jaywalk maneuver? These crazed drivers made it appear that they were not going to stop for anything or anyone. I don’t care if you had your pet elephant on a leash and were jaywalking with it, these mean looking and solemn minded drivers were willing to smash their car into whatever might be in the roadway. The road was theirs and no one dared suggest otherwise.

The weather also played a part in the jaywalking ritual.

Rainy days meant that the jaywalkers had an even greater incentive to jaywalk. Why waste time and get wet in the rain, when you can scoot across a street quickly enough that perhaps the raindrops themselves won’t touch you? The problem with the feverish effort to jaywalk in the rain was that the car drivers were likely to also be more crazed than usual. I suppose this was because the rain tended to hamper traffic, and the way to make up for it was to speed and be a bit more careless in your driving. I realize you might assume it should be the opposite, namely that you would slow down in the rain and be more careful, but that’s not often the choice that drivers seem to make (this almost seems like a universal constant!).

At times, I pondered the nuances of Mutually Assured Destruction (MAD), which you might remember was popularized during the Cold War era. When the two hunkering superpowers of the United States and the Soviet Union had their nuclear arms race, it was postulated that if either one attacked, the other would surely retaliate, and in the end they would both obliterate each other.

This came to mind as I watched some of the jaywalking duels in NYC.

An energetic jaywalker would enter into the street.

A zealous driver would gun their engine and seem to aim for the jaywalker.

Which would win the race?

I’m sure you are protesting that obviously the car will win, since a mere human is not going to have superpowers to stop the car in its tracks. In that sense, certainly the car can always prevail by running over the human. You might think that’s what would happen. Instead, it was interesting that even the nuttiest of drivers seemed to realize that running over a jaywalker was not an advisable thing to do.

Presumably, the car driver might be thinking that they could possibly get prosecuted for running over a jaywalker. Or, maybe they were worried it would dent their beloved car. Or, they might be concerned that their insurance rates would get jacked sky-high. There’s also the possibility that the driver might not want to maim a fellow human being. Well, being realistic, I’m putting that on the bottom of the list of reasons why the drivers did not summarily run over the jaywalkers.

So, it was often a Mutually Assured Destruction kind of battle.

The jaywalker figured that the driver figured that hitting the jaywalker would not be a good thing to do. The driver figured that the jaywalker figured that getting hit by a car was not a good thing to have happen. Either way, if a physical connection was going to be made, it was a lousy outcome for both parties.

Some of the jaywalkers acted as though they had a special invisible shield that would protect them. They would walk across the street whenever they darned wished to do so. They seemed to believe that the drivers would ultimately acquiesce and not want to run over a jaywalker. Admittedly, this did seem to work a lot of the time.

When I mentioned to one of my NYC colleagues that the Mutually Assured Destruction won’t serve as a deterrent unless both parties are cognizant and aware of what is taking place, he shrugged it off. I was trying to explain that if say the driver is not paying attention to the road, and not especially cognizant of the presence of the jaywalker, the driver could ram into the jaywalker out of “ignorance” and the jaywalker’s expectation of being protected by MAD went out the window. The MAD approach only worked if the driver was truly paying attention to the road.

Driver Attention And Jaywalkers

In my estimation, a sizable chunk of drivers were not attuned to the presence of the jaywalkers.

This made sense since the drivers were having to contend with bigger game, such as large trucks making deliveries and unexpectedly entering and exiting the streets and avenues. There were other crazed car drivers jockeying for position. There were often obstacles on the roadway, such as a pallet of liquor bottles being delivered to a liquor store.

If you were a driver in that environment, which is the more pressing thing to pay attention to?

The trucks and other cars are likely more harmful to you and your car. The solid obstacles on pallets could do some real damage to your car if you hit them.

A jaywalker?

Not the highest priority.

Furthermore, many of the drivers seemed to consider that a jaywalker did jaywalking at their own risk. In essence, the car driver did not have to pay attention to the jaywalkers because the jaywalkers were “required” to always make sure to avoid getting hit by a car. It was as though a flock of birds were flying around the cars. A driver shouldn’t have to watch out for the birds. The birds should be astute enough to not flap into a car. The jaywalkers were assumed to be hopefully as astute as a dumb bird.

Another factor involved sizing up the jaywalker.

How was the jaywalker dressed and what kind of look did they have?

If a driver saw a jaywalker that seemed like a seasoned New Yorker, it suggested that the jaywalker could take care of themselves and no further driver attention was needed. If the jaywalker looked like a wide-eyed tourist, well, this might present a problem because the “amateur” jaywalker might foul things up. The “professional” jaywalkers knew how to assiduously cross a street. Those out-of-town jaywalkers were bound to mess-up the delicate dance of true jaywalkers and NYC drivers.

I’m sure that when I drove my rental car, the jaywalkers could sense my out-of-town smell.

Fresh meat, easy pickings.

I was the type of driver they could jaywalk around to their heart’s content. Indeed, when I saw a jaywalker, I tended to give them a wide berth. The seasoned NYC drivers in contrast would always relish getting within inches of the jaywalker, as though it was a sweet kiss of “you just made it” and the jaywalker should thank their lucky stars for surviving the jaywalking act (and bend down in reverence to the car driver).

When it got somewhat late at night, I observed that there would be a segment of jaywalkers that were a bit intoxicated, having visited their preferred pub for some after-work libation. This seemed to dampen their wits as jaywalkers. I’m betting that they would contest this claim and say that they were still on their toes. In any case, there were definitely more close calls between jaywalkers and cars. This might also be further fueled by the likelihood of having drivers that were now also somewhat drunk. A potent combination, having both slightly drunk jaywalkers and slightly drunk drivers.

Herd Mentality Of Jaywalking

In most cases, I observed individuals acting as jaywalkers.

This though was not always the case and there were frequently situations of multiple jaywalkers proceeding all at once. There was at times a herd mentality. If one of the jaywalkers went for it, the others were sure to follow. Now, this actually often made sense, since the first one likely found an opening to jaywalk and the others also perceived the same opening.

There were times though that the first jaywalker got the herd underway, not necessarily overtly, more so subliminally in that the other jaywalkers saw the first one make a move and opted to proceed too, but it turned out that the first jaywalker didn’t gauge things well. The first jaywalker might have gotten somewhat stranded in the street, not able to fully make it across the street just yet. Meanwhile, the herd that followed was also now stranded. You could see the look on their faces that they had assumed they could make it fully across the street and were befuddled and irked that the move had not been timed well.

There were some “leaders” of the pack that weren’t thinking at all about the rest of the herd. Therefore, they were not trying to find a big enough opening to get a dozen people across the street all at once. They were focusing on just themselves. In that case, sometimes the first mover made it across, but the others did not, and they had somehow assumed that if the first mover could do it, the rest of them could.

I suppose you could see this as a series of locking mechanisms that just happen to line up precisely in a moment of time. The first mover has “calculated” that they can thread their way through the sporadic cars and make it across. It is, though, just a moment-in-time action. A split second later and the opportunity has vanished. Likewise, the alignment of the cars is not just a moment in time, but also a moment in space, as it were. The first mover, positioned say halfway down the block, would have a different timing and clearance of an opening than someone a quarter of the way down the block.

The ones that got my heart pumping were situations in which two people were holding hands and opted to rush across the street together. As you can imagine, trying to get two people across on a precisely timed jaywalk is a lot more complex. If one of the two falters it can defeat the open window and you now have to figure out what to do. It was surprising at times to see how much connectedness was retained by the two.

In other words, two people are holding hands. This is obviously just a temporary connection in that their two hands are not glued to each other. They can separate their hands whenever they wish. And yet, in some cases, the pair would try to remain entangled, in spite of the danger of doing so. Just a mere dropping of their hands would allow them both to become free agents and more nimbly finish the jaywalk.

This, though, seemed to be the furthest thing from their minds. It was as though separating their hands meant more to them than the chances of getting hit by a car. Was it true love that kept them together in that life-risking move? Was it concern that the other one might feel abandoned and it could forever undermine their relationship? Maybe it was out of deep caring and the belief that by sticking together they could survive anything, including a crazed driver barreling down the street directly at them.

There’s another kind of coupling sometimes that occurs, involving a jaywalker that is jaywalking with their dog. The jaywalking human might be hand carrying the dog, having lifted the dog up and embracing the animal like you would carry a football. This makes sense in that having the dog walk on a leash is going to be much more uncontrollable as you make your way across the street. For those that don’t try to carry their dog, perhaps due to the weight and size of the dog, the leash approach can be quite dicey.

I remember seeing a man walking his dog that had a leash several feet long and as the man attempted to jaywalk, the dog tried to go in a different direction. This meant that the jaywalker was now several feet wide, if you consider the distance from him to his dog, making it much harder to nimbly get across the street. He pulled strenuously on the leash, nearly dragging the dog, as he desperately tried to bridge the chasm from one side of the street to the other.

In this case, he was somewhat strongly coupled because letting go of the leash would have produced perhaps even worse results. The dog might have scampered directly into traffic that otherwise the jaywalking human might have aided the dog in avoiding. All in all, I would say that any animal lovers would look upon these jaywalkers with some disdain, as it is one thing to put your own life into jeopardy and quite another to subject an innocent dog to the same kind of risk.

This reminds me too of a common refrain that my New York colleagues would use on me.

They would say that any jaywalker is making their own decisions and if they get hit, well, that’s their own doing. Why should the government tell them what they can and cannot do? It’s up to the individual to choose to jaywalk or not, and it is on the head of that jaywalker as to whether they risk life and limb.

I don’t buy into this claim per se.

It seems to leave the car drivers out of the equation. If a car driver hits a jaywalker, it’s going to cause a great deal of difficulty for the car driver; yes, I agree it is unlikely the car driver will be killed, but they could get injured. Furthermore, suppose the car driver is so anxious to avoid hitting a wayward jaywalker that the driver rams into another car? Now, you’ve got other people also enmeshed into the jaywalking effort.

There is also a chance that while the car driver is trying to avoid a jaywalker, the driver swerves and maybe hits other jaywalkers (I realize the view would be that’s on them, if you take the individualist free agent perspective), or might come onto the curb and hit pedestrians (one would argue those pedestrians were innocents).

Generally, a jaywalker can start a cascading series of events that ultimately lead to others getting injured or killed.

I therefore tend to reject the idea that a jaywalker is performing a “victimless” act, assuming that you don’t count the jaywalker as a victim, and contend that the jaywalker is potentially going to involve one or more car drivers, perhaps one or more passengers in those involved cars, maybe other jaywalkers, and potentially innocent pedestrians that were mindfully using the sidewalk.

Here’s another angle for you.

What about the children?

Children Learning About Jaywalking

I’ve seen jaywalkers holding the hand of a child or a group of children and trying to make a jaywalking attempt with them. Similar to my earlier point about coupling between two adults, in theory the coupling is loose since the hands can be disengaged readily. In the case of children, the danger obviously is that if the adult jaywalker does let go of the hands of the children involved, the children might not know what to do and get themselves into worse hot water.

I realize some would argue that of course the adult needs to hold the hands of the children and would accuse me of somehow suggesting that children should roam freely as jaywalkers. Let’s be serious, I’m not implying that children should be unescorted by an adult when jaywalking. The thing is that children should not be jaywalking at all.

I’ll probably get emails from some readers that will say that their children have “no choice” but to jaywalk and so which will it be, the children do so on their own or with an adult? I guess if there is really no other viable way to get to someplace other than jaywalking, yes, an adult jaywalker participant is the way to go. Is it really the case that there is no other viable way to get to the location other than jaywalking?

There are some that have said to me that it would require walking several added blocks and take another 15 minutes to get to the desired location, such as a school. Well, one has to then consider the ROI (Return on Investment) of walking those extra blocks and using those added 15 minutes, doing so presumably in a safer manner, versus the risks associated with doing the beeline jaywalking. Is there an appropriate risk/reward that says the added risk to the child makes the jaywalking act worthwhile?

One other qualm about involving children into jaywalking is the aspect that they essentially then come to believe that jaywalking is acceptable.

If they do jaywalking with an adult, it is a slippery slope whereby they can readily assume they can do jaywalking on their own. Indeed, some children will happily go jaywalking to showcase to their parent that they are now their own agent and no longer need to have an adult aid them in the jaywalking. It is a kind of rite of passage.

The counter argument from some adults is that if they don’t show the child how to “properly” jaywalk, the odds are that the child is going to do jaywalking anyway at some point, and without having done so with a “responsible” adult, the child is going to be more prone to getting hurt when trying to go jaywalking based on no prior instruction. Some would say that having a head-in-the-sand viewpoint of being a parent that pretends jaywalking will never happen, will merely make the child more vulnerable than if you instead do a parent-child jaywalking effort with the child.

There are some aspects of the counter-argument that I do tend to side with. In the case of my own children, I did practice jaywalking with them, which I did to point out how to do so and what to watch out for. This was done though on a selective basis and only as a means to aid them in being prepared in case jaywalking was needed at some point. I tried to make it clear that jaywalking was considered inappropriate and that the instruction was not meant to open the sport of jaywalking to them.

This also gets to the core of an aspect about children and child rearing. For those of you with children, you’ve likely been torn about whether to show or explain something to the child, wondering whether you are introducing them to something that will spur them to do the very thing you are trying to show should not be done. The classic is “don’t put your hand on a stove top burner,” which could backfire in that the child might not have thought to do so, and now they are curious to try it, because you made such a big deal about it.

In any case, another facet of jaywalking is the possibility of adult jaywalkers, child jaywalkers, and combinations of both adult and child aged jaywalkers.

There’s the special twist of the jaywalker that drops something while in the act of jaywalking.

The Dropped Item As Jaywalking Factor

I saw a jaywalker who was carrying his coat as he darted across the street. The street was slightly wet from leftover rain. The person slipped while running across the street. As he regained his balance, he dropped his coat. At this point, his presumed prior calculated time to get across the street had been used up. A car was fast approaching. Should he pick up his coat, which would take a precious second or two, and tempt fate with the oncoming car, or should he abandon the coat and safely get to the sidewalk?

Which is better, a coat that perhaps gets trampled by a moving car, which you can go out into the street to retrieve once the car has passed and maybe get dry-cleaned to fix it up, or do you bend over while in the middle of the street and watch the oncoming car like it is a bull charging at you in Pamplona?

The answer seemed to be that nearly every time this kind of dropping action happened, the person opted to try to pick-up the dropped item.

Was this out of a sense of personal affiliation with the dropped item?

Maybe the coat had been in the family for many generations and was a revered heirloom. Or was it due to the value? Perhaps it was an expensive coat from a top-end retailer. Or, could it be that the jaywalker was worried that the car driver might swerve to avoid the dropped item and therefore get into a wreck because it had been left in the street? I doubt this is the first thought that goes through the mind of the jaywalker that dropped an item.

A car driver that witnesses a jaywalker dropping an item will need to figure out whether the jaywalker is going to try to stay in the street to retrieve it, or leave it there for later retrieval, or perhaps do some other kind of action now that the dropped item is there. The driver also needs to anticipate that maybe some other potential jaywalker might enter into the street to rescue the item for the first jaywalker that dropped the item. Whatever item has been dropped, the driver also needs to decide whether to try to brake before hitting it, if there is a chance of hitting it, or maybe try to straddle the item, or take some other kind of evasive driving action.

So far I’ve been primarily describing the jaywalkers, and you’ll notice that I’ve now started to shift focus toward the car drivers and how they contend with jaywalking pedestrians.

In terms of the drivers, there are drivers that know the jaywalking game and play it to the finest detail. There are the drivers that are driving while distracted and so inadvertently can be more menacing for a jaywalker. There are the drivers that are drunk or otherwise somewhat incapacitated and therefore are impaired while playing the jaywalking game.

There are also the vendetta drivers.

Vendetta Drivers On The Roads

I cannot say for sure that vendetta drivers really have a vendetta, though it certainly seems like it.

Allow me to explain.

I would see a potential “vendetta” driver driving down a street at a relatively constant pace. I am pretty sure they intended to remain at that pace. A jaywalker suddenly comes into the street. The jaywalker has calculated that they can make it across before the car comes upon them. Suddenly, the car speeds up.

Was the driver speeding up by chance alone?

Did the driver just remember that they were late to a baseball game and opted to hit the gas? Or, was it that the driver saw the jaywalker and purposely wanted to give the jaywalker a scare? Suppose you are a driver that has reached your personal threshold with those darned jaywalkers. You might decide that whenever you see one, you will show them who is the boss. You speed up and see how close you can cut it to nearly hitting the jaywalker. In fact, if the jaywalker backs away, you are perhaps just as happy and feel like you did your civic duty.

Here’s something else that seems to go into the jaywalking equation. Does the jaywalker have any kind of encumbrance like a heavy backpack, or maybe a briefcase, or carrying a box or some other object? This tends to slow down the jaywalker and requires them to find a somewhat wider opening in terms of time and space. You might think of this as a kind of golf-like handicap.

Some jaywalkers did not appear to include their encumbrance in their jaywalking formulation. Whereas unencumbered they could readily make it through a relatively tight window of time and space, they now needed a wider one, yet they seemed to still be making the jaywalking move on the tighter window. Danger ensued. Likewise, the drivers would sometimes misjudge the pace and agility of the jaywalker, getting darned close, closer than it seemed they intended, partially because they also miscalculated the delay factor of the encumbrance of the jaywalker.

Nighttime jaywalking was somewhat akin to daylight jaywalking as long as the street was well lit (assuming everyone involved was sober). On some NYC streets, the lighting is not so good. This increased the chances of sour encounters between jaywalkers and drivers. The jaywalker at times seemed to think that the darkness was handy, hiding their jaywalking transgression. The drivers were less likely to see the jaywalkers and would get startled when the headlights of the car shone upon an unexpected jaywalker.

You can combine all of my aforementioned factors and make the jaywalking into a rather complicated game of human versus human. Human jaywalkers that have human frailties and can misjudge when and how to jaywalk. Weather conditions that can impact the game, along with daylight versus darkness. Drivers that pay attention and other drivers that do not. Some that have a vendetta, some that are drunk.

It’s a real mishmash.

Overall, it is kind of startling and amazing that there aren’t more injuries and deaths due to jaywalking, especially in cities where jaywalking is taken for granted and doesn’t get suppressed or expunged.

Of course, not all countries are necessarily opposed to jaywalking. On an international basis, there are some places in the world where jaywalking is strictly forbidden, and other places that allow it and give it no special heed. You might find it of idle interest that the root word “jay” essentially means inexperienced, and when cars first came onto roadways there were drivers that drove on the wrong side of the street, who were referred to as jay-drivers. This morphed eventually into jaywalkers.

Right Of Way As Jaywalking Rules

Who has the proper right of way?

In some countries, both the jaywalker and the driver are considered equals in terms of right-of-way. In some countries, once a jaywalker starts across the street, the jaywalker is deemed to have the right-of-way, subject to whether the jaywalker has made a sensible move or not. If the jaywalker steps into the street in front of a car going 60 miles per hour and there is no chance for the driver to stop, the jaywalker cannot expect to have claimed the right-of-way.
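A quick back-of-the-envelope calculation shows why: at 60 miles per hour a car covers roughly 88 feet per second, and with an assumed 1.5-second reaction time plus hard braking at around 0.7 g it needs on the order of 300 feet to come to a stop. The short sketch below, using those assumed figures, makes the arithmetic explicit.

```python
def stopping_distance_ft(speed_mph: float,
                         reaction_time_s: float = 1.5,   # assumed driver reaction time
                         decel_g: float = 0.7) -> float:  # assumed hard-braking deceleration
    """Estimate total stopping distance (reaction + braking) in feet."""
    speed_fps = speed_mph * 5280 / 3600                   # convert mph to feet per second
    reaction_dist = speed_fps * reaction_time_s           # distance covered before braking starts
    braking_dist = speed_fps ** 2 / (2 * decel_g * 32.2)  # v^2 / (2a), with g = 32.2 ft/s^2
    return reaction_dist + braking_dist

print(round(stopping_distance_ft(60)))  # roughly 300 feet under these assumptions
```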

Here’s what the Department of Motor Vehicles (DMV) rulebook in California states about the act of jaywalking:

“(a) Every pedestrian upon a roadway at any point other than within a marked crosswalk or within an unmarked crosswalk at an intersection shall yield the right-of-way to all vehicles upon the roadway so near as to constitute an immediate hazard.

(b) The provisions of this section shall not relieve the driver of a vehicle from the duty to exercise due care for the safety of any pedestrian upon a roadway.”

You’ll notice that the jaywalker is supposed to yield the right-of-way to cars. Notice further that, in spite of that aspect, it does not mean that a driver can just run over a jaywalker. The driver must also exercise due care, even if the jaywalker is doing something they aren’t supposed to be doing.

AI Autonomous Cars And Jaywalkers

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One crucial aspect involves the AI being able to contend with jaywalkers.

Allow me to elaborate.

I’d like to clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the automakers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5 or Level 4, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

Let’s focus herein on the true Level 5 self-driving car. Many of the comments apply to the less than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here are the usual steps involved in the AI driving task (a brief code sketch of this loop follows the list):

  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action planning
  • Car controls command issuance
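
To illustrate how these stages hand off to one another, here is a bare-bones version of one pass through that loop in Python; the component names, method names, and data shapes are placeholders of my own choosing, not the structure of any particular automaker’s codebase.

```python
def drive_cycle(sensors, fusion, world_model, planner, controls):
    """One pass through the illustrative self-driving pipeline listed above."""
    # 1. Sensor data collection and interpretation
    raw_readings = {name: sensor.read() for name, sensor in sensors.items()}

    # 2. Sensor fusion: reconcile cameras, radar, LIDAR, ultrasonics into detections
    detections = fusion.combine(raw_readings)

    # 3. Virtual world model updating: track detected objects over time
    world_model.update(detections)

    # 4. AI action planning: decide the next maneuver from the modeled scene
    plan = planner.next_action(world_model)

    # 5. Car controls command issuance: convert the plan into actuation commands
    controls.issue(steering=plan.steering, throttle=plan.throttle, brake=plan.brake)
```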

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 or Level 4 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

Returning to the topic of jaywalkers, let’s consider the capabilities that an AI self-driving car should have regarding contending with these wayward pedestrians.

I’ll tackle right away a comment that I sometimes get from AI developers.

There are some that say there is no need for an AI self-driving car to do anything at all about a jaywalker.

Jaywalkers are acting illegally.

They get whatever they deserve.

There is no requirement that the AI of the self-driving car needs to do anything at all about a jaywalker.

It might seem astonishing to you that someone would think this way. It is a phenomenon that I refer to as the “egocentric” developer viewpoint. The world needs to conform to their view of it, rather than the developer facing the reality of the real world. I quickly point out to such a person that the DMV code clearly states that the car driver must exercise a duty of care, even if the jaywalker is doing something utterly wrong and illegal.

This often surprises the AI developer. They had laid all the responsibility onto the shoulders of the wayward pedestrian. In one sense, this is similar to the jaywalkers that insist they are doing a “victimless” act and that it is up to the jaywalker to choose whether to personally risk going jaywalking or not. Only after I point out the other “victims” that can get dragged into the “victimless” effort do they (hopefully) see the larger picture.

For my article about egocentric AI developers, see: https://stage.aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/

For AI developer burnout impacts, see my article: https://stage.aitrends.com/selfdrivingcars/developer-burnout-and-ai-self-driving-cars/

For my article about pedestrian “roadkill” aspects, see: https://aitrends.com/selfdrivingcars/avoiding-pedestrian-roadkill-self-driving-cars/

For my article about the dangers of groupthink among developers, see: https://stage.aitrends.com/selfdrivingcars/groupthink-dilemmas-for-developing-ai-self-driving-cars/

Needing To Contend With Jaywalkers

Let’s all assume that indeed the AI of the self-driving car does need to contend with jaywalkers.

It cannot ignore them.

It cannot pretend that the burden of safety is solely on the backs of the jaywalkers.

The AI must have provisions for dealing with jaywalkers.

I’d say that’s a prudent and societally expected assumption about AI self-driving cars.

This moves us then into the next kind of quirk that some AI developers offer. There are some AI developers that will concede the notion of doing something about jaywalkers, but then argue that a jaywalker is nothing special and that the “normal” driving aspects of an AI self-driving car should suffice when dealing with jaywalkers.

In this case, the AI developer is suggesting that if the AI self-driving car is already prepared to cope with objects that might appear in the roadway, the job of having the AI be prepared for jaywalkers is already completed. No need to do anything else.

This implies that a jaywalker is no different from, say, a tumbleweed. If the AI is able to detect a tumbleweed in the roadway, it amounts to the same thing as detecting a human in the roadway. At least, that is the thinking of this kind of AI developer.

If I were driving my car and saw a tumbleweed in the road, I would likely mentally calculate whether to hit it or not. I might be willing to hit the tumbleweed because perhaps there are other cars near me and if I hit my brakes suddenly, I risk getting rear-ended, and maybe I cannot switch lanes without endangering a car adjacent to me, and maybe radically swerving is likewise going to endanger me and other nearby cars and pedestrians. So, I might choose to ram the tumbleweed, doing so as the “safest” option available to me at the time and moment that the tumbleweed has appeared.

Here’s an easy question for you: would you consider it viable to ram a jaywalker, using the same logic as for ramming a tumbleweed?

I’d dare say that you would be willing to take much greater chances to avoid hitting the jaywalker than you would to avoid hitting a tumbleweed. As an aside, this raises further the ethical aspects involved in driving a car. Suppose you can avoid the jaywalker but might end-up on the sidewalk and hit a pedestrian standing there – what is the basis for making such a choice, and how do we end-up embodying this kind of decision-making into an AI system of a self-driving car?

Back to the object in the roadway problem, do we want AI self-driving cars that treat hitting a human jaywalker as akin to hitting a tumbleweed?

I don’t believe we do.

Thus, I claim that if an AI system is only detecting “objects” and not also trying to figure out what kind of object is involved, it is insufficient in terms of what we would all hope a true AI self-driving car is going to be able to do. From a systems perspective, I do realize that when the cameras, radar, LIDAR, and other sensors first do their detection, they are indeed only detecting “objects,” and thus raw object detection plays a crucial role. What I am saying is that after the raw sensory detection of an object, it is imperative that the AI system tries to discern what kind of object it is, such as whether it is a tumbleweed or a human.

That’s why the interplay of the sensory detection and sensor fusion is vital.

When the AI system is trying to piece together the sensor data from multiple sensors, it has an enhanced chance of trying to ferret out what kind of object is being dealt with. This also interplays with the virtual world model. The virtual world model should be tracking the object over time, which will also then aid in trying to ascertain what the object might be. The AI action planning capability needs to be “astute” enough to be able to detect patterns of shapes and movement that pertain to humans and try to differentiate this from other kinds of objects.

I purposely have chosen the tumbleweed example because it is a tricky one to discern from the movements of a human.

For example, you might say that a human should presumably start off the street and proceed into the street. Certainly, a tumbleweed could do the same. A jaywalker once in the street is going to likely be making their way across the street. A tumbleweed might do the same, perhaps the wind is pushing it in that direction.

A jaywalker might make a direct beeline across the street. A tumbleweed could do the same. A jaywalker might weave as they cross the street, and of course a tumbleweed might do the same. By the movement alone, you cannot necessarily say whether the object is a human trying to jaywalk versus a tumbleweed.

You would need to combine a multitude of factors. What is the size and shape of the object? Does it resemble the size and shape of a human? Does it move in a seemingly directed fashion, but if so, can this be differentiated from the possible random movements of an object like a tumbleweed? We also need to consider whether the object might be an animal, which could move across the street in the same overall manner that a jaywalker might or a tumbleweed might.
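
To make that multi-factor idea concrete, here is a minimal sketch in Python. Every name, threshold, and feature below is a hypothetical illustration rather than any automaker’s actual classifier; a production system would rely on trained models over fused camera, radar, and LIDAR data rather than hand-set rules.

```python
from dataclasses import dataclass
from typing import List, Tuple
import math

@dataclass
class TrackedObject:
    """Hypothetical track produced by sensor fusion and the virtual world model."""
    height_m: float                       # estimated height of the object
    width_m: float                        # estimated width of the object
    positions: List[Tuple[float, float]]  # (x, y) positions over recent frames

def path_directedness(positions: List[Tuple[float, float]]) -> float:
    """Ratio of straight-line displacement to total path length (1.0 = perfectly directed)."""
    if len(positions) < 2:
        return 0.0
    total = sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))
    if total == 0:
        return 0.0
    return math.dist(positions[0], positions[-1]) / total

def guess_object_kind(obj: TrackedObject) -> str:
    """Illustrative multi-factor guess: pedestrian, animal, or debris (e.g., a tumbleweed)."""
    human_sized = 1.2 <= obj.height_m <= 2.1 and obj.width_m <= 1.0
    animal_sized = 0.2 <= obj.height_m < 1.2
    directed = path_directedness(obj.positions) > 0.8  # moving with apparent purpose

    if human_sized and directed:
        return "pedestrian"
    if animal_sized and directed:
        return "animal"
    return "debris"

track = TrackedObject(height_m=1.7, width_m=0.5,
                      positions=[(0, 0), (0.4, 0.1), (0.9, 0.2), (1.5, 0.3)])
print(guess_object_kind(track))  # "pedestrian" under these illustrative thresholds
```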

Guessing whether the object is a jaywalker then opens an entire plethora of other aspects for the AI to consider.

My stories about jaywalkers provide ample indication of the kinds of acts that a jaywalker might do. A car driver that is watching the road would indeed adjust their driving behavior based on the realization that a jaywalker is in the road. You might slow down, you might speed-up, you might honk your horn, you might do all kinds of actions as a driver.

Likewise, the AI of a true AI self-driving car should be doing similar kinds of actions.

For the conspicuity aspects of AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/conspicuity-self-driving-cars-overlooked-crucial-capability/

For defensive driving techniques for AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/art-defensive-driving-key-self-driving-car-success/

For my article about AI dealing with roadway debris, see my article:  https://www.aitrends.com/selfdrivingcars/roadway-debris-cognition-self-driving-cars/

For the head nod problem of AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/head-nod-problem-ai-self-driving-cars/

Overreacting To Jaywalkers

Auto makers and tech firms that are making AI self-driving cars are often still grappling with the rudiments of driving, and they would say that the best bet is to have the AI always assume the worst-case scenario. This means that a tumbleweed that might be a human is going to be assumed to be a human, which is considered a safer bet than not making that kind of assumption.
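
A minimal sketch of that worst-case bias, assuming a hypothetical upstream classifier that emits a label plus a confidence score (the names and the threshold are invented for illustration):

```python
def conservative_label(label: str, confidence: float, threshold: float = 0.9) -> str:
    """Worst-case bias: unless we are highly confident the object is NOT a person,
    downstream planning treats it as one. The threshold value is purely illustrative."""
    if label != "pedestrian" and confidence < threshold:
        return "pedestrian"
    return label

# e.g., a "debris" guess at only 70% confidence gets promoted to "pedestrian"
print(conservative_label("debris", 0.7))
```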

They would also tend toward having the AI take the “super-cautious” approach. I remember being invited to watch an AI self-driving car as it drove down an empty street, and the automaker and tech firm had a stuntman walk out into the street, acting like a jaywalker. The AI was able to detect the jaywalker and came to a nearly immediate halt. Success!

Well, not exactly.

The AI self-driving car came to a halt about a quarter of the way down the block, while the jaywalker was roughly a quarter of the way in from the far end of the block.

Sure, the AI self-driving car detected the jaywalker at a sizable distance and came to a prompt halt, ending up somewhat less than half a football field away from the human.

Great, no chance of hitting that person.

Does this make sense in the real-world?

Imagine if AI self-driving cars are all coming to a grinding halt when detecting a human in the street when the human is many, many car lengths away. Will this be a viable way for AI self-driving cars to make their way on our streets? Suppose all human drivers did the same. You might argue that we’d be safer, but I wonder about how this would really play out.

For example, if you knew that a human driver would always stop for you, wouldn’t you nearly always choose to jaywalk? The moment you see a car coming down the street, just step into the street, and voila, that car is going to come to a halt. You and others could pretty much paralyze all car traffic. Maybe we would end-up with far fewer jaywalking injuries and deaths, but what would it also do to our ability to use cars as a means of transportation?

There are already reports of people opting to “prank” today’s AI self-driving cars. If you know that an AI self-driving car will stop or maybe turn when you take some kind of action as a pedestrian or maybe when driving in your own car, it is human nature that we would all likely exploit these behaviors of the AI self-driving cars. Want to get ahead rather than be stuck behind one of those slower-moving AI self-driving cars? Easy enough to arrange by tricking the AI self-driving car into slowing down or halting.

For my article about pranking of AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/pranking-of-ai-self-driving-cars/

For my article about ethical review boards and AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/ethics-review-boards-and-ai-self-driving-cars/

For the role of greed in car driving see, my article: https://www.aitrends.com/selfdrivingcars/selfishness-self-driving-cars-ai-greed-good/

For the falsehoods of zero fatalities and the advent of AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/self-driving-cars-zero-fatalities-zero-chance/

Use of Machine Learning And Deep Learning

A true AI self-driving car has to be embodied with the kinds of driving skills that humans use, and particularly so with regard to contending with jaywalkers.

It is insufficient to simply rely upon some kind of overarching object detection and assume that doing so will resolve how to cope with jaywalkers. That’s not what human drivers seem to do. The behavior of human drivers is actually quite a bit more complex, and we need to aim for having AI systems that can perform in a like manner.

The AI system needs to incorporate the multitude of factors that I’ve previously mentioned. Is the suspected jaywalker an adult or a child? Is it one person or more than one person? Is there a coupling between the multiple jaywalkers? Might the jaywalker drop something into the roadway, and if so, what contingencies should be considered? Does the weather increase or decrease the chances of jaywalking and is the street that you are driving on more or less prone to jaywalkers? And so on.

Some are hoping that the use of Machine Learning (ML) and Deep Learning (DL) will come to the aid of trying to cope with jaywalkers. In one sense, yes, it can be helpful to use ML and DL, collecting large sets of jaywalking circumstances to find patterns in how jaywalkers behave, and thereby giving the AI ready-made responses in-hand.
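
As a rough illustration of that data-driven approach, here is a minimal sketch using scikit-learn. The feature names, the labels, and the tiny inline dataset are all invented for illustration; a real effort would involve large, labeled corpora of pedestrian trajectories rather than a handful of rows.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-pedestrian features extracted from tracked trajectories:
# [speed_m_per_s, heading_variance, distance_from_crosswalk_m, is_looking_at_traffic]
X = np.array([
    [1.4, 0.05, 25.0, 1],
    [0.6, 0.40, 40.0, 0],
    [1.8, 0.02,  2.0, 1],
    [1.1, 0.30, 60.0, 0],
    [1.5, 0.10, 30.0, 1],
    [0.4, 0.50, 55.0, 0],
])
# Invented behavior labels: will the pedestrian cut across mid-block?
y = np.array([1, 0, 0, 1, 1, 0])

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X, y)

new_pedestrian = np.array([[1.3, 0.08, 35.0, 1]])
print(model.predict_proba(new_pedestrian))  # estimated probability of a mid-block crossing
```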

I assure you though that today’s kind of ML and DL is not going to be the silver bullet or magic wand that provides the jaywalking kind of driving aptitude needed for a true AI self-driving car. The jaywalker aspects are far too complex. It is not the same as merely analyzing an image to ferret out whether there is a human in the scene or not. This has to do with behaviors, and complex ones at that.

For the Uber incident and my initial analysis, see my article: https://www.aitrends.com/selfdrivingcars/initial-forensic-analysis/

For my second analysis of the Uber incident, see my article: https://www.aitrends.com/selfdrivingcars/ntsb-releases-initial-report-on-fatal-uber-pedestrian-crash-dr-lance-eliot-seen-as-prescient/

For aspects about Machine Learning and Deep Learning, see my article: https://www.aitrends.com/selfdrivingcars/plasticity-in-deep-learning-dynamic-adaptations-for-ai-self-driving-cars/

For safety and AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/safety-and-ai-self-driving-cars-world-safety-summit-on-autonomous-tech/

For the boundaries of AI, see my article: https://www.aitrends.com/selfdrivingcars/ai-boundaries-and-self-driving-cars-the-driving-controls-debate/

Conclusion

Most seasoned drivers tend to take jaywalkers in stride (pun!), meaning that we human drivers can detect jaywalkers, we can anticipate what they might do, we can adjust our driving accordingly, and most of the time the dance leads to the jaywalker getting safely across the street and the car safely proceeding down the street.

This is nearly an effortless act by a seasoned human driver.

AI self-driving cars are not yet as prepared for handling jaywalkers.

The sad incident of the jaywalker in Arizona being run down by an Uber self-driving car is but one example of how limited today’s AI self-driving cars are in terms of coping with jaywalkers. We need to focus greater attention on the AI capability to specifically deal with jaywalkers and not simply assume that the everyday capabilities of the AI will contend properly with jaywalking.

Why did the jaywalker cross the road?

Answer: To safely get to the other side.

Whether you live in a country or place that condones jaywalking or shuns it, in the real-world jaywalking exists and will continue to exist.

In various parts of California, they have started a new effort to discourage jaywalking.

If a jaywalker is caught jaywalking and it is their first such offense, the city gives them a brightly colored vest and an LED light for free, and tells them that if they continue to jaywalk (which they are not supposed to do), they should at least wear the vest and hold up the LED light. No ticket for jaywalking is issued.

You might think this a rather “odd” kind of solution to the problem of jaywalking.

Some think it is ingenious, others think it is outright ludicrous.

In any case, let’s hope that AI developers stay attuned to advancing self-driving cars toward ensuring that jaywalkers are well-detected and avoided, which is a crucial matter and not merely an “edge” problem by any means.

Copyright 2020 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

Posted on

Considering The Practical Impacts Of Achieving Einstein-Level AI


Developers of AI software for self-driving cars need to decide if Einstein-level of intelligence or a more ordinary intellect makes for the smartest drivers. (GETTY IMAGES)

By Lance Eliot, the AI Trends Insider

Too smart for their own good.

Smarter than their britches.

Egghead.

Pointy head.

An Einstein.

These are the kinds of semi-polite insults that are sometimes used to take down someone that seems to be highly intelligent.

This can be especially used whenever the person evokes the know-it-all kind of stance and tries to lord over others with their professed smartness and smarty-pants attitude.

Not everyone that might be in the intellectual high-end rankings is necessarily the type that wants to make sure that you know they are the mental giant in the room, but it does seem to happen with great regularity and presumably to the delight of the brainy colossus that is overtly full of their own boastfulness.

How shall we weigh the brainiac in terms of gauging their peak-level intellectual power?

I suppose you could remove their brain, place it on a scale, and see how much it weighs.

Probably not very conducive though to their continuing capacity as a living, breathing, functioning human being. Speaking of physically measuring the brain, there have been all sorts of efforts to try to dig up brains of famously smart people and do various dissections of their brains, doing so in hopes of being able to ascertain what made them so sharp.

Nowadays, the usual measuring stick for figuring out someone’s intellectual proficiency is the IQ (Intelligence Quotient) test.

Using a standard such as the classic Stanford-Binet Intelligence Scale test, which was first promulgated in 1916, there are often published rankings that try to make claim to whom among us is the topmost intellect. Stephen Hawking had an IQ score of around 160, something we know due to his actually having undertaken an IQ test.

Albert Einstein’s score of around 160 to 190 is an estimate based on analyses of his writings and works (he apparently never took an IQ test, though he could have done so, but perhaps opted purposely to not take it or never had cause to take one).

Typically, if you can score 115 or above you are labeled as someone with a high IQ.

Getting a score of over 132 will get you bumped-up into the highly gifted category. A score of 145 and above is considered genius level.

The highest ever recorded is supposedly a score of 263, but there is some disagreement about the matter (this score is attributed to Ainan Celeste Cawley, born in 1999 and alive to this day).
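
Purely as a recap of the thresholds just mentioned, here is how those brackets might be encoded (the labels simply mirror the ones described above):

```python
def iq_bracket(score: int) -> str:
    """Recap of the commonly cited thresholds discussed above."""
    if score >= 145:
        return "genius level"
    if score > 132:
        return "highly gifted"
    if score >= 115:
        return "high IQ"
    return "average or below"

print(iq_bracket(160))  # "genius level"
```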

Questioning The IQ Measurement

Not everyone believes in the IQ bandwagon.

Some would say that the IQ test is a questionable means to measure someone’s intellectual prowess.

There are predetermined aspects such as the nature of your language, your culture, and your propensity to solve puzzles, all of which lead critics to decry that the IQ number is at best a surrogate of intellect and at worst a misleading gauge of intellect. There are also concerns that those that perchance score high on the IQ test will then consider themselves a kind of special class of human, perhaps encouraging them to look down upon others. The Mensa group, which is a high-IQ association, admits only those that score at or above a threshold, such as 132, which varies depending upon the IQ test being used.

Another qualm about IQ tests is that they seem to judge your bookworm kind of thinking, rather than serving as a true “smartness” indicator.

I’m sure you’ve seen the common portrayal in movies and TV shows of the highly intellectual person that cannot tie their own shoes and cannot open a paper bag. If someone can do really well on tests that ask about obscure numeric patterns or mind-numbing word games, does this really showcase intellect? It might, depending upon your definition, but it generally is not considered the same as measuring your smartness.

Some believe that being smart is different from having a high intellect.

You might so happen to be highly intellectual and also highly smart. There are some that believe you can be highly smart, perhaps tip-top smart, and yet not necessarily have an extremely high intellect. Generally, the odds are that you’d score well on an IQ test, but a high IQ doesn’t necessarily translate into being highly smart, nor does being highly smart necessarily mean you’ll ace an IQ test.

Another concern about any of the IQ tests is that your intellectual performance is being measured only at a given point in time.

Maybe at a relatively young age you could score a quite high IQ, but later on in your middle-aged years you aren’t able to score as high. Does that mean you’ve dropped in your intellect? This takes us into the other word that some like to use, the word is “wisdom” and for which once again there is a debate about the relationship between wisdom, intellect, and smartness.

You might gain wisdom as you grow older, at least that’s the usual expectation.

Will you also increase your IQ?

Some claim your IQ is your IQ, no matter what your age and when you perchance take an IQ test. This, though, does not bear out in reality, which is that people can take an IQ test at different points in their life and have differing scores. Plus, you can take a different kind of IQ test and score differently on it than you might on some other equally “valid” IQ test.

The debate that really gets people bubbling on the IQ topic involves whether your IQ is based on nature versus nurture.

Are you born with a particular IQ level that will ultimately surface once you become of an age to be able to express it?

Thus, it’s a DNA kind of thing. Or, are we all perhaps born with the same IQ potential and your upbringing and environment will dictate how far your IQ will emerge? Perhaps it’s a nurturing element for which some of us happen to get the proper intellectual inspirational blooming and others of us don’t.

The half-in half-out answer is usually stated that you are born with some IQ capacity and it will either emerge or not depending upon your environment and how you are raised.

If we put a baby in the woods to be raised by wild wolves, and the baby happened to have an IQ of 260 (not yet measured, but say we guessed the tiny tot had such an IQ), would that genius-level IQ ever be showcased? Would being among wolves allow the IQ to come to the surface? Would a tree make a sound if it fell in the woods and there was no one around to hear it?

Darwin had an interesting take on intellect.

He proposed that your intellect might contribute toward your survivability in a manner you might not have previously considered. Sure, we would guess that with a high enough IQ you could hopefully figure out how to make fire and hunt gazelles, which would presumably enhance your chances of survival. Darwin also hypothesized that topnotch intellect would attract mates and therefore boost your chance of survivability and of carrying on your legacy of high intellect.

For those of you that might have been beaten up by a strong-armed, muscle-rippling bully as a child, and for as much as our society seems to be keen on humans having muscular bodies, it is perhaps a surprise to consider that intellect might be so revered and be on Darwin’s favorites list.

We are used to the trope that the nerdish kid is the one that is physically meek and mild. The physically imposing one is the one that gets ahead and readily attracts all the mates. Our fascination with the character Spiderman is representative of this kind of imagery.

Learning From Parrots About IQ

A recent study of budgerigars, a type of parrot, provides an ingenious glimpse of how we might try to test Darwin’s hypothesis.

Researchers in China tried to construct an experiment to see whether female budgerigars would be more attracted to male budgerigars that demonstrated greater intellect than other male budgerigars involved in the study (this was research done by the Institute of Zoology at the Chinese Academy of Sciences in Beijing).

The male budgerigars were presented with a difficult foraging task. Some were shown how to solve it, but this happened outside the gaze of the female budgerigars.

The female budgerigars were able to watch the male participants try to open a container and access food. The males that had no prior training (i.e., not being shown the trick), were generally unable to open the container. The males that had the prior training could open the container. Presumably, the female budgerigars would infer that the males that were successful in getting the food were the intellectually sharper ones and the males that failed at doing so were intellectually inferior.

I’ll steer clear for now of the question of whether this is a gender-biased study and merely note it for your consideration.

In any case, the outcome of the study was that the females tended to prefer the males that had succeeded in obtaining the food from the container. You might argue that it suggests the females were more attracted to the seemingly higher intellectual males. In a manner, it provides evidence to support Darwin’s hypothesis on the matter.

I realize that you are perhaps a bit skeptical about the experimental approach and whether the designed experiment really is on-target to Darwin’s theory.

For example, how do we know what the female budgerigars were really thinking about?

Maybe they ascribed other attributes to the males that succeeded in the task, and those attributes might have little or nothing to do with a perceived sense of intellectual prowess. Furthermore, the females were never allowed to try to undertake the task, so they were not fully aware of what the task consisted of and had to base their “choices” among the males solely on watching them perform the experimental task.

Another potential weakness about the study involves our overall conundrum about how to measure intellect. The means of figuring out how to get into a locked container might be considered a problem-solving kind of task, which might or might not require high intellect, and therefore we could debate if intellect is truly being encompassed and exhibited in this study. Were the males merely showcasing keen problem-solving skills rather than high intellect per se?

Based on the experimental design, we need to accept the idea that we are to infer that the container access matter is a sign of good problem solving and that correspondingly, a good problem solver is ergo a high intellectual. Recall that earlier it was pointed out that smartness and intellect are not necessarily the same. Why should we believe that keen problem-solving and intellect are necessarily the same?

They probably are not, most would likely say.

Is this problem-solving task a valid surrogate in lieu of administering to the budgerigars our now-accepted IQ measurement tool, namely the Stanford-Binet Intelligence Scale test?

It makes one chuckle to consider how we might get the budgerigars to take a conventional IQ test. These gregarious Australian parakeets are typically referred to as budgies, and I’d suggest it would be quite interesting to watch a budgie “read” a conventional IQ test and pencil in, or shall we say peck in, its answers.

Let’s get back to human intellect.

The parrot study was mainly to illuminate that intellect is presumably a quite important matter and that Darwin was a proponent of the belief that intellect ties to survivability, doing so for humans and other animals too.

Considering AI and IQ

In the field of Artificial Intelligence (AI), the presumed overarching goal consists of trying to make machines that seem to exhibit the equivalent of human intelligence.

I’ve tried to word that sentence carefully. Notice that I’m saying that the machine is not necessarily the same as humans in terms of how human intelligence exists.

Many would assert that if we can reach intelligence in machines and do so in a manner that might be quite different from how humans arise to intelligence, we have nonetheless succeeded in achieving artificial intelligence.

The famous Turing Test is a somewhat simple notion of how we might measure whether AI has been achieved. Generally, it consists of a machine that presumably has AI competing with a human that presumably has human intelligence, while another human asks questions of the two competitors. If the human inquisitor cannot differentiate between the two competitors and is unable to state which is the AI and which is the human, one could infer that the AI has achieved human intelligence.
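
To make the setup a bit more tangible, here is a minimal sketch of the blind question-and-answer arrangement. The responder functions are stand-ins that I have invented for illustration, and a real evaluation would involve many interrogators, longer sessions, and careful protocol design.

```python
import random

def ai_respond(question: str) -> str:
    """Stand-in for the AI system under evaluation (purely illustrative)."""
    return "That is a thought-provoking question."

def human_respond(question: str) -> str:
    """Stand-in for the human contestant's typed reply (purely illustrative)."""
    return "Hmm, let me think about that for a moment."

def turing_trial(questions, judge) -> bool:
    """One blind trial: the judge sees answers labeled only 'A' and 'B'.
    Returns True if the judge correctly identifies which label is the AI."""
    responders = [("AI", ai_respond), ("Human", human_respond)]
    random.shuffle(responders)
    labels = dict(zip(["A", "B"], responders))

    transcripts = {
        label: [(q, fn(q)) for q in questions]
        for label, (_, fn) in labels.items()
    }
    guess = judge(transcripts)  # the judge returns "A" or "B" as its pick for the AI
    return labels[guess][0] == "AI"

if __name__ == "__main__":
    questions = ["What did you have for breakfast?", "Why do people laugh at puns?"]
    # A judge guessing at random is the chance baseline the real interrogation must beat.
    random_judge = lambda transcripts: random.choice(list(transcripts))
    wins = sum(turing_trial(questions, random_judge) for _ in range(1000))
    print(f"Random judge identified the AI in {wins / 10:.1f}% of trials")
```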

For my detailed assessment of the Turing Test, see my article: https://www.aitrends.com/selfdrivingcars/turing-test-ai-self-driving-cars/

For the notion of so-called Super-Intelligence, see my article: https://www.aitrends.com/selfdrivingcars/super-intelligent-ai-paperclip-maximizer-conundrum-and-ai-self-driving-cars/

Whether we are facing a grand singularity, see my article: https://www.aitrends.com/selfdrivingcars/singularity-and-ai-self-driving-cars/

For my article about why some say we should start over on the AI pursuit, see: https://www.aitrends.com/selfdrivingcars/starting-over-on-ai-and-self-driving-cars/

For conspiracies about AI, see my article: https://www.aitrends.com/selfdrivingcars/conspiracy-theories-about-ai-self-driving-cars/

Here’s a good question to contemplate. How high is up?

I mention this because the question arises as to how much intelligence is needed before we can say that an AI is indeed intelligent.

Suppose an AI system can pass the Turing Test.

Suppose further we give the AI an IQ test.

Many would claim that a score of 70 or lower is an indicator of an intellectual disability. Imagine what we would be pondering if the AI took an IQ test and got a score of say 50.

What a dilemma!

We have an AI system that appeared to pass the Turing Test and seems to be intelligent, and yet at the same time did quite poorly on the IQ test. I realize you might assert that the AI would have been unable to succeed at the Turing Test if it did not have a sufficient IQ, presumably an IQ of at least around 100, which is the “normal” average that usually is scored. I’m not so convinced that you are correct in that assertion.

I’ll shift our attention though from the bottom side of the IQ scale to the top side of the IQ scale.

How high up will we want the AI to score?

If the AI can score at, say, 115, which is considered the high-IQ range, would that be sufficient?

Consider this scenario.

Your life is in the hands of a robot that must decide what to do and potentially save you. You can choose a robot that has an IQ of 50 (considered intellectually disabled), or one that has an IQ of 100 (intellectually average for a human), or one that has a score of 115 (high IQ), or a score of 160 (Stephen Hawking’s score), or 190 (exceeds genius), or even, let’s say, a score of 300 that no human has ever achieved (knocking the socks off the IQ test!).

I’m guessing you’ll pick the highest possible number.

You would presumably use the logic that the higher the intellect of the robot then the greater the chance of it making sure your life is saved. Why take a chance on a robot that has “only” an IQ of 160 (Hawking’s level and Einstein’s level), if you could pick one that is off-the-charts at 300? If you could get yourself a robot that had the AI equivalence of two-times the score of Einstein, it would seem unwise of you to take anything lower.

Right now, AI systems are being built and deployed, but there isn’t anyone especially measuring what their intellectual score is. The belief seems to be that if the AI can “do the job” it was intended to do, hopefully it is intellectually commensurate enough. Should we be pleased with this approach? Are you willing to be at the mercy of an AI system for which no one even knows how intellectually low or high it is?

We also need to revisit the earlier points about smartness versus intellect.

I can tell you straight out that the AI of today does not have smartness.

The AI of today is brittle and considered narrow and lacks what often is referred to as Artificial General Intelligence (AGI).

I also earlier mentioned the notion of wisdom, and, again, the AI of today would rank far below on any kind of wisdom scale (indeed, it isn’t even anywhere on such a scale). There are ongoing efforts to try to imbue AI with common sense reasoning, but it is a long slow road, and nobody knows whether it will ever succeed.

For my assessment of common sense reasoning efforts, see my article: https://www.aitrends.com/selfdrivingcars/common-sense-reasoning-and-ai-self-driving-cars/

For plasticity in Deep Learning, see my article: https://www.aitrends.com/ai-insider/plasticity-in-deep-learning-dynamic-adaptations-for-ai-self-driving-cars/

For the boundaries of AI, see my article: https://www.aitrends.com/selfdrivingcars/ai-boundaries-and-self-driving-cars-the-driving-controls-debate/

For the AI irreproducibility problem, see my article: https://www.aitrends.com/selfdrivingcars/irreproducibility-and-ai-self-driving-cars/

AI Self-Driving Cars and IQ

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One question that nobody seems to yet be asking is whether we are supposed to be aiming for regular cognition or something more pronounced such as superior cognition. This ties to the discussion herein so far about intellect.

Allow me to elaborate.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the automakers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 4 and Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed in the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

Let’s focus herein on the true Level 5 self-driving car. Much of the comments apply to the less than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here are the usual steps involved in the AI driving task (a minimal sketch of this pipeline appears after the list):

  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action planning
  • Car controls command issuance
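
Here is a minimal, stubbed-out sketch of that five-stage loop. Every function body is a placeholder of my own invention, since each stage is in reality a substantial subsystem in its own right.

```python
from typing import Any, Dict, List

def collect_and_interpret_sensors() -> List[Dict[str, Any]]:
    """Stage 1: raw detections from cameras, radar, LIDAR (stubbed out here)."""
    return [{"sensor": "camera", "object": "unknown", "range_m": 40.0}]

def fuse_sensors(detections: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    """Stage 2: reconcile overlapping detections into consolidated tracks."""
    return detections

def update_world_model(world: Dict[str, Any], tracks: List[Dict[str, Any]]) -> Dict[str, Any]:
    """Stage 3: maintain the virtual world model of tracked objects over time."""
    world["tracks"] = tracks
    return world

def plan_action(world: Dict[str, Any]) -> str:
    """Stage 4: decide on a maneuver given the current world model."""
    nearest = min((t["range_m"] for t in world["tracks"]), default=float("inf"))
    return "brake" if nearest < 50.0 else "maintain_speed"

def issue_car_controls(action: str) -> None:
    """Stage 5: translate the planned action into steering/brake/throttle commands."""
    print(f"Issuing control command: {action}")

def driving_cycle(world: Dict[str, Any]) -> None:
    """One pass through the five-stage loop listed above."""
    detections = collect_and_interpret_sensors()
    tracks = fuse_sensors(detections)
    world = update_world_model(world, tracks)
    issue_car_controls(plan_action(world))

driving_cycle({"tracks": []})
```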

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a Utopian world in which there are only AI self-driving cars on the public roads. Currently, there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

Returning to the topic of cognition and intellect, let’s consider how the matter of the level of intellect applies to the advent of AI self-driving cars.

We’ve so far considered whether there is a need to aim for a “highest feasible” intellect for an AI system that we might be constructing and fielding.

For AI that is designed and built to drive a car, what level of intellectual prowess should be the overarching goal?

First, you could say that we should aim at the level of intellect as exhibited by humans in the case of performing the driving task.

That would seem to be a reasonable marker as to the intellect that we as a society expect for execution of driving a car.

In that case, you would be hard-pressed to suggest that any kind of “higher” intellect is needed per se. Generally, the average person is able to obtain a driver’s license and legally be able to drive a car. As such, we’d presumably say that an “average” IQ is sufficient for the driving effort, and therefore we could be satisfied with an average IQ in terms of the AI that would be driving a car. Perhaps a score of around 100 would be satisfactory.

Suppose we pushed to get the AI of a self-driving car to a higher level of IQ.

Would we gain much?

It is not especially convincing that a higher intellect is going to make that much difference in undertaking the driving task. Are expert-level drivers that race cars of a higher intellect? There doesn’t seem to be much study on that matter, but I’d guess that those race car drivers are more versed in the driving of cars and yet are not intellectually especially at a higher level than the rest of us. Are professional drivers such as cabbies or truck drivers at a higher level of intellect than average car drivers? Again, there doesn’t seem to be much evidence to suggest they are.

Since we don’t seem to have a studied base of high intellects that drive cars, in other words no set of high-IQ individuals who happen to drive cars and have been studied to see whether they are somehow more proficient at driving, we are left to speculate about the relationship between higher IQ and driving. You could claim that a higher intellect might be able to think more rapidly when driving a car and be able to mentally add something to the driving chore.

Perhaps a higher intellect would allow a human driver to be more adept at piecing together the clues of the driving scene.

They might be able to see that there is a car up ahead and that there is a pedestrian on the sidewalk, and be able to put together the puzzle pieces in a manner that lets them know the odds are that the car is going to hit its brakes, due to the pedestrian likely stepping onto the street, which will then cause the cars behind the stopped car to come to a sudden halt and cascade into a potential car crash. Notably, all of these mentally complex calculations would be undertaken in a fraction of a second, faster and more completely than by someone of a lesser but still average intellect.

In that manner, a higher intellect might foster being able to envision more complex car-related traffic possibilities. A higher intellect might enable the driver to find clues about the driving situation that those of an average intellect would fail to piece together. A higher intellect might suggest that the driver would be faster at processing the driving situation. This faster mental processing might allow for avoiding adverse driving moments sooner. Whereas an average driver might get “caught off-guard” by failing to piece together, detective-like, the clues of a pending driving problem, a higher intellect might be more likely to spot them. And by mentally processing the situation faster, the higher-intellect driver gains more available options, since ascertaining sooner that some driving action is needed ups the chances of being able to select among more early escape options.
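
One way to picture that kind of rapid mental chaining is as a product of conditional likelihoods. The numbers below are invented solely to illustrate the arithmetic of the cascade just described.

```python
# Invented, illustrative conditional probabilities for the cascade described above.
p_pedestrian_steps_out = 0.30  # pedestrian on the sidewalk steps into the street
p_lead_car_brakes_hard = 0.80  # the car ahead brakes hard, given the pedestrian steps out
p_sudden_stop_pileup   = 0.25  # trailing cars cannot stop in time, given the hard braking

p_cascade = p_pedestrian_steps_out * p_lead_car_brakes_hard * p_sudden_stop_pileup
print(f"Estimated chance of the cascading crash scenario: {p_cascade:.1%}")  # 6.0%
```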

For my article about the speed of cognition aspects, see: https://www.aitrends.com/selfdrivingcars/cognitive-timing-for-ai-self-driving-cars/

For the role of defensive driving mental calculations, see my article: https://www.aitrends.com/selfdrivingcars/art-defensive-driving-key-self-driving-car-success/

For the human foibles of driving, see my article: https://www.aitrends.com/selfdrivingcars/ten-human-driving-foibles-self-driving-car-deep-learning-counter-tactics/

For the driving complexities, see my article: https://www.aitrends.com/selfdrivingcars/prevalence-induced-behavior-and-ai-self-driving-cars/

For my article about scene analysis, see: https://www.aitrends.com/selfdrivingcars/street-scene-free-space-detection-self-driving-cars-road-ahead/

Will Higher Intellect Boost Driving?

I realize you might argue that perhaps the higher intellect is not necessarily going to get all of those driving advantages.

Similar to the study of the budgerigars, perhaps driving a car is a problem-solving task and not as influenced simply by having higher intellect. You could assert that being able to perceive a driving scene and make life-critical decisions about operating a car is more so a problem-solving task rather than a purely intellectual exercise.

Thus, we might be barking up a wrong tree by trying to lay claim that the higher intellect will ergo lead to being a more adept driver.

The higher intellect might allow someone to be a better or faster problem-solver, but this is not axiomatic. Being a topnotch problem solver and having a high intellect are two different things. Presumably, if a higher intellect wanted to be a topnotch problem solver, they might have an easier time of doing so, prodded on by their high intellect, though it is not automatically the case.

We can also wonder whether a higher intellect might actually work against the notion of being a better driver of a car.

Remember the earlier mention that we as a society seem to assume that the higher intellect is often in the clouds in terms of not paying attention to day-to-day elements of life. We portray high intellects as unable to tie their own shoes. If that’s the case, it would seem that suggesting they are going to be driving a car at a higher plateau of driving proficiency is actually the opposite of what we should expect. We apparently should be worried about these higher intellects driving a car. They might be less able to do so in comparison to an average intellect driver.

Why would it be the case that a higher intellect might be a poorer or worse driver than someone of an average IQ?

You might at first assume that certainly the higher intellect would win at any task involving intellectual effort. The physical aspects of driving are generally rather simplistic, involving pushing a brake pedal and an accelerator pedal, and steering a wheel, all of which even a very young child can do. It’s the intellectual aspects of driving a car that appear to make the difference of being a proficient driver versus one that is not so proficient. A driver that cannot think quickly enough and tie together their sensory clues is one that is seemingly more likely to get into car accidents and create untoward traffic conditions.

We already as a society are concerned about distracted drivers. A distracted driver is one that is not paying attention to the driving task. The distraction can involve a physical form of distraction, such as taking your hands off the wheel to manipulate your smartphone, or maybe turning your head to talk to someone in the backseat of the car and thus your head is now turned away from the driving scene. The distraction can also be a form of intellectual distraction.

When your mind is focused on a text that you have just read on your smartphone, you are no longer well-engaged in the driving task. Even if your head and eyes are now facing the roadway, your mental awareness of the traffic conditions is going to be weakened by your mental preoccupation with the text that you read. I know that there is a lot of concern about using a smartphone while driving, but we’ve already had other forms of mental distractions too, such as talking with others in your car and discussing say the latest in politics or some other non-driving related matter.

You don’t even necessarily need to have something prompt you to mentally become disengaged with the driving task. Have you ever caught yourself daydreaming while driving your car? Imagine you are driving from Los Angeles to San Francisco, a six hour or so drive, and suppose it is a quiet traffic day and the main highway is pretty much empty. Nothing but miles upon miles of farms and rolling hills. For some people, they find themselves unable to concentrate on the roadway and their minds wander. This lack of mental connection to the driving task can catch them unaware if suddenly a tire blows or a deer darts across the highway.

One could suggest that at a higher level of intellect you might be able to multitask mentally more so than someone of an average intellect. If that’s the case, perhaps a minor mental distraction would not materially impact your driving, while for the person of average intellect it could have a more pronounced impact. In essence, if we imagine that intellect is like an apple pie, thinking about some text that you just got might consume half of the apple pie for an average intellect, but only a tiny slice of the apple pie of the higher intellect.

On the other hand, one could claim that perhaps the greater intellect is more prone to tossing their intellect at everything that comes along. In that case, whereas the average intellect might devote just a small mental slice to consider the text they just received, it could be that the higher intellect pours all of their mental capacity into thinking about the text, therefore having very limited intellect leftover to focus on the driving task.

For tests about human responsiveness while driving, see my article: https://www.aitrends.com/selfdrivingcars/not-fast-enough-human-factors-ai-self-driving-cars-control-transitions/

When humans get themselves into a tit-for-tat while driving, see my article: https://www.aitrends.com/selfdrivingcars/tit-for-tat-and-ai-self-driving-cars/

For my article about the role of greed, see: https://www.aitrends.com/selfdrivingcars/selfishness-self-driving-cars-ai-greed-good/

For my article about the dangers facing back-up drivers, see: https://www.aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

Being Overly Smart Has Potential Downsides

I had earlier indicated that we often say that someone is smarter than their britches or too smart for their own good.

If we reword this to suggest that someone has too high an intellect for their own good, let’s see how that might impact their intellectual prowess and see how it could impact their driving.

I’ll consider these five exemplars of the potential adverse consequences of high intellect:

  • Analysis Paralysis
  • Dismissiveness
  • Shallowness of Thought
  • Over-Thinking
  • False Over-Confidence

Analysis Paralysis.

A higher intellect might be more prone to analyzing a myriad of options. Will that car ahead opt to make a sudden lane change? Will the pedestrian leap into the street? Is that traffic light going to change to red in the next few seconds? All of this thinking can produce analysis paralysis. The driver becomes preoccupied with analyzing what to do or what might happen, and as a result they aren’t making the kinds of rapid decisions that need to be made when driving a car.

Dismissiveness.

A higher intellect might be dismissive of others. You’ve likely encountered someone that thinks they are so sharp that they dismiss other people’s ideas or suggestions. Unless they believe the other person is of an equal intellect, they don’t give much credence to the other person. A dismissive driver might opt to ignore a warning from a front seat passenger telling them that a car to their right is possibly going to veer into their lane. This dismissiveness can undermine the driving effort.

Shallowness of Thought.

A higher intellect will often categorize mental tasks and then proclaim that a particular task is not worthy of their intellectual powers. As a driver, a higher intellect might be tempted to consider the driving task as menial. As a result, the person is unwilling to put much mental effort toward driving. They prefer to operate a car with a shallowness of thought. If they do so, it could spell danger as they are potentially underestimating what they need to be considering in order to be a safe driver.

Over-Thinking.

A higher intellect might tend toward over-thinking every moment of the driving task. I knew someone that was looking at every angle at every step of driving a car. They made incredible mental leaps about the aspects that could go awry, almost to the degree that they even were calculating the chances of a meteor striking the earth in front of their car. This over-thinking can cause them to become muddled and overwhelmed about the driving task.

False Overconfidence.

A higher intellect might believe that they are the best driver ever, which is fueled by their belief in their own astounding intellect. This leads to overconfidence. They assume that for any driving situation they will be able to mentally find a means of driving the car to escape. This type of driver can be riskier in their driving and get themselves into binds that they are actually unable to get out of safely.

I am not saying that only higher intellects will potentially fall victim to the aforementioned mental gaffes. Any driver can suffer from analysis paralysis, dismissiveness, shallowness of thought, over-thinking, and false over-confidence. I’d bet though that the higher intellect is perhaps more likely to find themselves falling into these traps. It is the basis for why we as a society have come up with the too-smart-for-their-own-britches label.

Could an AI system for a self-driving car also be vulnerable to these same kinds of mental underpinnings?

Sure, each of these intellectual dangers can readily happen to an AI system. I don’t want you, though, to assume I am saying that the AI is sentient and succumbing to these mental impairments in the same manner that a human might. I am not suggesting or implying this.

Instead, I am trying to assert that the AI as a form of automation can suffer the same deficiencies and it is up to the AI developers to try to make sure that the AI is not caught off-guard by these computationally equivalent mental maladies.

For example, analysis paralysis can befall the AI if it gets bogged down trying to explore a large search space and fails to realize that time is crucial to making a driving decision. The AI could be so engrossed in assessing the sensory data and the virtual world model that it lets the clock continue to run. The running clock means that the world outside the self-driving car is moving and changing, which might mean that the AI is gradually losing out on options for making a vital car driving decision.
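
A minimal sketch of one common guard against that kind of computational analysis paralysis is an “anytime” style loop with an explicit time budget: keep refining the plan, but always have a best-so-far answer ready when the deadline hits. The candidate-generation and scoring functions here are hypothetical stand-ins, not anyone’s actual planner.

```python
import random
import time

def generate_candidate_plan() -> dict:
    """Hypothetical stand-in for proposing one maneuver and scoring it."""
    return {"maneuver": random.choice(["brake", "swerve_left", "swerve_right", "maintain"]),
            "score": random.random()}

def decide_with_deadline(budget_s: float = 0.05) -> dict:
    """Anytime decision loop: refine as long as the time budget allows,
    but never miss the deadline waiting for a 'perfect' answer."""
    deadline = time.monotonic() + budget_s
    best = generate_candidate_plan()  # always have some answer in hand
    while time.monotonic() < deadline:
        candidate = generate_candidate_plan()
        if candidate["score"] > best["score"]:
            best = candidate
    return best

print(decide_with_deadline())
```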

I had predicted that the Uber incident in Arizona might partially have occurred because of the time taken by the AI to try to assess the driving situation. Preliminary reports assessing the Uber incident appeared to have echoed that point. Though some might shrug their shoulders and say that’s just the way the real-time automation works, I am not one to fall into the trap of allowing automation to be some kind of independent amorphous entity that happens to do what it does. I hold accountable the AI developers that should be developing their AI systems to handle these kinds of real-time situations.

For my initial assessment of the Uber incident, see: https://www.aitrends.com/selfdrivingcars/initial-forensic-analysis/

For my follow-up about the Uber incident, see my article: https://www.aitrends.com/selfdrivingcars/ntsb-releases-initial-report-on-fatal-uber-pedestrian-crash-dr-lance-eliot-seen-as-prescient/

For the responsibility aspects of AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/responsibility-and-ai-self-driving-cars/

For my article about egocentric AI developers, see: https://www.aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/

For my article about AI developers and groupthink dangers, see: https://www.aitrends.com/selfdrivingcars/groupthink-dilemmas-for-developing-ai-self-driving-cars/

Conclusion

Is superior cognition needed to drive a car?

We might debate the meaning of the word “superior” and be at odds about the notion of what being superior in cognition consists of.

If we use the everyday notion of IQ, the question can be rephrased as to whether higher IQ is needed to drive a car. There seems little evidence to suggest that any particular level of above average IQ is a needed element to drive a car, since the world at large appears to be able to drive a car, and we can reasonably assume therefore it involves an average IQ effort.

It could be that if we can achieve AI that can drive a self-driving car, we might want to see what it can do if it is pushed to a higher level of intellect. Perhaps we might have better driving and safer driving. This is not necessarily the case. We also need to be aware of the kinds of mental maladies that seem to at times correspond to having higher intellect, and whether those might be found in AI systems and therefore undermine the heightened intellect aspects.

I’ve not entertained herein the conspiracy theorists that are worried we might be pushing the AI intellect to a point that it surpasses human intellect and then opts to take over humanity. I’ve covered elsewhere the paperclip-making, mankind-overtaking super-intelligent AI. For now, I’m merely trying to get AI developers to consider the degree of intellect that they are aiming to achieve in their AI systems, and also prodding the rest of us to consider what level of intellect we are becoming vulnerable to in terms of AI systems that increasingly are entering into our lives.

I’ve highlighted the nature of AI self-driving cars as a key indicator of how intellect might come into play. Many AI systems are not as involved in making immediate life-or-death decisions as those of AI self-driving cars. I would hope that we would be more concerned about the intellectual prowess of AI systems in the role of deciding whether a multi-ton car is going to make a right turn or maybe come to a sudden stop, decisions on which the lives of humans hang in the balance. It sure seems like having superior cognition would be a handy capability, if properly designed and deployed.

The Einstein AI for self-driving cars has kind of a ring to it, doesn’t it?

Copyright 2020 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]