By Lance Eliot, the AI Trends Insider
We generally think of camouflage as a means to blend into the surroundings.
Many animals have a colorization pattern that allows them to seemingly disappear by lying quietly at a standstill on a tree branch or hiding inside a bush. Military personnel often put on specially designed camouflage clothing and use skin paints to try to likewise be indistinguishable from their surroundings. The hope is that the enemy cannot spot them and so would be unable to accurately aim at them for purposes of attack.
This overall notion of camouflage is just one of several classes of camouflage; in this case, the attempt at concealment is referred to as the crypsis form of camouflage.
There’s another form of camouflage that we don’t often consider, namely the kind that is known as motion dazzle, also sometimes called razzle-dazzle, or perhaps more easily understood by the name disruptive camouflage. In this instance, the desire is to actually stand out and be readily seen. In fact, being seen is the part that involves the camouflage trickery, specifically that the use of colors and shapes makes the observer take notice. In addition, the use of disruptive patterns and even countershading makes it difficult for the observer to readily figure out the true shape and nature of what is being camouflaged.
Background About Disruptive Camouflage
This idea of using disruptive camouflage was extensively undertaken during World War I and also somewhat during World War II.
We tend to think of navy ships as always being painted a rather dull monotone grey color.
This would seem to be a wise choice.
At sea, the navy ships would tend to blend into the background of a grayish sky and a blue sea. Presumably, whales and dolphins use a similar colorization to blend into their surroundings. Here in Southern California, you can wander to San Diego and see lots of United States Navy ships docked in the harbor, as it is a major west coast naval center. Many visitors often say to me that they didn’t even notice the big ships, in spite of the fact that there are many dozens of them in plain sight. Must be the grey monotone.
Controversially, at the onset of World War I, some suggested it would be better to use a disruptive camouflage pattern for navy ships rather than a monotone grey.
Various complex patterns involving striking geometric shapes were devised and painted onto navy ships of the time period. Thousands of such motion dazzle depictions were used. The colors and shapes intersect and interrupt each other.
It was said that Picasso himself found these ship colorization approaches to be inspiring.
Why would anyone in their right mind do this to navy ships?
Isn’t it just asking for those ships to be readily attacked by submarines, airplanes, and other enemy ships?
Wouldn’t it be better to try to hide or conceal the ships so that the enemy would not realize it is floating along on the sea?
There indeed were acrimonious debates on this topic.
Those that favored the disruptive camouflage claimed that the navy ships were inevitably going to be spotted anyway and that trying to hide or conceal them was generally ineffective.
If you are in agreement that the concealment approach is not going to be successful, then you would certainly be open to considering other options. The razzle-dazzle approach was intended to not only make the ships seen but that when seen it would be potentially confusing to the enemy as to what they were looking at.
An enemy might not be sure if the ship is a battleship or a destroyer. It would be hard to discern the type of ship and also whether the ship was heading toward you or away from you. The speed was not easy to estimate either. Are you looking at the bow or the stern of the ship? All of this disruptive kind of visualization was intended to confound the enemy. Since it was assumed that the enemy was likely going to spot the ships anyway, might as well try to make things hard for the enemy to figure out what the ship was and where it was going.
Those navies that adopted the razzle-dazzle were also smart enough to realize that if they painted certain kinds of ships the same way, such as all battleships in the same patterns and colors, it would undermine the purpose of the disruptive notion. It would mean that once the enemy figured out how you were discoloring that category of ships, they could easily then “break the code” and be able to figure out what kind of ship it was and how to interpret the colors and patterns. As such, those using the razzle-dazzle had to come up with varying patterns and use them in a somewhat unique manner for each individual ship.
There are zoological theories that suggest the zebra, jaguar, and giraffe are examples of a razzle-dazzle form of camouflage.
There is much contention over whether those animals are indeed going the razzle-dazzle route or whether there is some other reason for the patterns and colors on their skin. For example, some say that the giraffe is actually attempting to use the traditional form of camouflage by having the colors that you might see associated with trees. Given that giraffes stand tall, maybe the idea is that nature led them toward concealment when among a clump of trees.
In terms of the navy use of disruptive camouflage, numerous scientific studies have tried to determine whether the razzle-dazzle is really more effective than the monotone grey. It would seem that most studies ultimately are unable to control sufficiently the numerous factors involved and it is a muddied picture as to whether the razzle-dazzle is truly an effective defensive technique.
There are some that say it is a freakish method and all it does is make the ships look like bizarre oddities.
Razzle-Dazzle And AI Autonomous Cars
What does this have to do with AI self-driving driverless autonomous cars?
At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One aspect that is going to be crucial for the ultimate success of self-driving cars will be whether pedestrians are safe when around AI self-driving cars.
Allow me to elaborate.
First, let’s clarify that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the automakers are even removing the gas pedal, the brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.
For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.
For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/
For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/
For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/
For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/
Let’s focus herein on the true Level 5 self-driving car. Many of the comments apply to the less than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.
One of the top worries that the auto makers and tech firms have about AI self-driving cars will be the interaction of AI self-driving cars and pedestrians.
In theory, the AI self-driving car is supposed to be good enough at the driving task to be able to detect pedestrians. Once so detected, the AI should be doing what it can to avoid hitting pedestrians. There are some pundits that keep referring to “zero fatalities” once AI self-driving cars have been adopted on a widespread basis, but I’ve said many times that this zero-fatality notion is nonsensical.
Imagine that an AI self-driving car is driving down a street at the legally posted speed. Suppose it is 45 miles per hour, which is about 66 feet per second. A pedestrian is standing behind a pole that is adjacent to the road and right at the curb. Neither an AI self-driving car nor a human-driven car could see the person standing behind that pole. The pedestrian decides to step out into the street just as the AI self-driving car gets within a few feet of the pole (or, if you like, pretend it was a human driver – the same end result is going to occur).
By what magic is the AI self-driving car not going to hit that pedestrian?
For my article about pedestrians and AI self-driving cars, see: https://aitrends.com/selfdrivingcars/avoiding-pedestrian-roadkill-self-driving-cars/
For my article about the myth of zero fatalities, see: https://aitrends.com/selfdrivingcars/self-driving-cars-zero-fatalities-zero-chance/
I ask the question because the answer is rather plain and simple: the laws of physics are going to take over and the AI self-driving car is going to ram into that pedestrian. There was insufficient time to swerve the self-driving car. There was insufficient time to brake the self-driving car. There was no means to detect the pedestrian’s presence beforehand. Wham. It happened. That’s more than zero fatalities.
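The arithmetic of the scenario is easy to verify. The snippet below works through the 45 mph figure; the reaction time and braking deceleration are rough assumptions of my own for illustration, not measured figures for any particular car:

```python
# Illustrative stopping-distance arithmetic for the 45 mph scenario above.
# Reaction time and deceleration are assumed values, not measured figures.
speed_fps = 45 * 5280 / 3600      # 45 mph expressed in feet per second -> 66.0
reaction_time_s = 0.5             # assumed sensing/decision latency, even for an AI
decel_fps2 = 25.0                 # assumed hard-braking deceleration (~0.78 g)

reaction_dist = speed_fps * reaction_time_s          # distance covered before braking
braking_dist = speed_fps ** 2 / (2 * decel_fps2)     # v^2 / (2a) braking distance
total_stop_dist = reaction_dist + braking_dist

print(round(speed_fps, 1))        # 66.0
print(round(total_stop_dist, 1))  # roughly 120 feet, far more than "a few feet"
```

Even under these generous assumptions, the car needs on the order of a hundred feet to stop, which is why a pedestrian stepping out a few feet ahead cannot be avoided.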
Well, I realize that you might object and say that it is preposterous that the pedestrian was unable to be seen.
Maybe I’ve contrived the situation in a manner that could never happen.
Obscured From View
I’m not sure where you drive, but I assure you that in the downtown Los Angeles area that I drive, it happens all the time that pedestrians are obscured from view.
One time, a FedEx worker had stacked a bunch of boxes next to the curb, about eight feet high.
Two pedestrians suddenly emerged from behind those boxes and stepped directly into the street. There was no viable means to have detected them beforehand.
Some of you might say that if they were jaywalking, it’s their fault for having illegally attempted to cross the street.
I’m not particularly discussing fault right now; I’m just trying to clarify and emphasize that pedestrians can mix with cars in a deadly fashion, and that an AI self-driving car is not necessarily going to entirely remedy that equation.
If you still aren’t convinced, and still believe that the “hidden” pedestrian is a falsehood, I’ll change the scenario and say that the pedestrian was completely seen by the driver, whether a human driver or an AI self-driving car. Suppose the pedestrian is completely visible and there’s no chance of not seeing the pedestrian.
Once again, if the pedestrian suddenly steps off the curb into the street, and if a car is approaching at 66 feet per second, and if the pedestrian does get in front of the car with just a second or two before impact, the laws of physics enter into the picture. There will not be enough time to swerve or stop the car. It’s going to hit the pedestrian.
Why would a pedestrian do this? Is it a suicide by car? No (well, I hope not). There are pedestrians that routinely step into the street and aren’t at all paying attention to the cars. One recent concern of many municipalities is that people seem to be looking at their smartphones rather than the street traffic, and end up making dumb moves into oncoming traffic. Some local ordinances have now made it against the law to be looking at your cell phone while crossing the street, even while doing so in a legal crosswalk.
In short, either a human-driven car or an AI self-driving car can end up hitting and possibly killing a pedestrian, doing so for sure in the circumstance wherein the pedestrian opts to enter the street when there is no viable means of avoiding the pedestrian, either by braking or swerving the car.
You might be thinking: shouldn’t the AI self-driving car do a better job at this than a human-driven car?
I suppose you could say that an AI self-driving car might do a better job in that the AI is presumably not distracted as a human driver might get distracted. A human driver might be looking at the other side of the street and noticing a new barbershop that’s opened up, and thus fail to notice the pedestrian to their right that suddenly steps into the street.
If there is a chance to avoid striking the pedestrian, by-and-large you might say that the AI will perhaps be more likely to do so, since it lacks the distraction factor that could beset a human driver. The AI is presumably continually scanning the surroundings, on all sides, and not going to allow itself to become preoccupied with one particular thing, such as the new barbershop.
Here’s the usual steps involved in the AI driving task:
- Sensor data collection and interpretation
- Sensor fusion
- Virtual world model updating
- AI action planning
- Car controls command issuance
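The steps above can be sketched as a simple sense-plan-act loop. Everything here is a hypothetical placeholder of my own devising, not any automaker’s actual API; it merely shows how the five stages chain together:

```python
# Minimal sketch of the driving-task loop described above.
# All names are invented placeholders, not a vendor's actual software.
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    obstacles: list = field(default_factory=list)  # virtual world model contents

def collect_sensor_data():
    # Stand-in for camera/radar/sonic/LIDAR readings.
    return [{"sensor": "camera", "detections": ["pedestrian"]},
            {"sensor": "lidar", "detections": ["pedestrian"]}]

def fuse(readings):
    # Naive fusion: union of detections across sensors.
    merged = set()
    for r in readings:
        merged.update(r["detections"])
    return merged

def update_world_model(model, detections):
    model.obstacles = sorted(detections)

def plan_action(model):
    # Toy planner: brake whenever a pedestrian is in the world model.
    return "brake" if "pedestrian" in model.obstacles else "cruise"

def issue_commands(action):
    return {"brake": 1.0 if action == "brake" else 0.0}

model = WorldModel()
readings = collect_sensor_data()        # 1. sensor data collection and interpretation
detections = fuse(readings)             # 2. sensor fusion
update_world_model(model, detections)   # 3. virtual world model updating
action = plan_action(model)             # 4. AI action planning
commands = issue_commands(action)       # 5. car controls command issuance
print(commands)                         # {'brake': 1.0}
```

In a real system each stage is vastly more sophisticated, of course, but the looped sequencing of the five stages is the essential shape.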
The AI self-driving car should be using all of its sensors, including cameras for visual images, radar, sonic, and LIDAR, trying to detect where pedestrians are. It’s not sufficient though to just detect them, since you also need to try and predict what the pedestrian is going to do.
If a pedestrian is on the sidewalk and running, and veering toward the street, the AI is intended to make a prediction that the pedestrian might logically end up in the street; if that would intersect with the path of the self-driving car, the AI should direct the self-driving car to avoid striking the pedestrian, once they (possibly) enter the street.
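One simple way to frame that prediction is to extrapolate the pedestrian’s current motion and ask whether they cross the curb line before the car passes. The sketch below uses deliberately simplistic constant-velocity kinematics and invented coordinates; real prediction models are far richer:

```python
# Toy prediction: linearly extrapolate a running pedestrian's position and
# check whether they reach the roadway within the car's arrival window.
# Constant-velocity extrapolation is an illustrative simplification.

def predict_position(pos, velocity, t):
    """Extrapolate an (x, y) position after t seconds at constant velocity."""
    return (pos[0] + velocity[0] * t, pos[1] + velocity[1] * t)

def may_conflict(ped_pos, ped_vel, curb_y, car_arrival_s):
    """True if the pedestrian is projected past the curb line (y >= curb_y)
    by the time the car arrives, i.e., a possible path intersection."""
    projected = predict_position(ped_pos, ped_vel, car_arrival_s)
    return projected[1] >= curb_y

# A pedestrian running along the sidewalk while veering toward the street
# (y grows toward the curb line at y = 0); the car arrives in 2 seconds.
print(may_conflict((0.0, -3.0), (4.0, 2.0), 0.0, 2.0))  # True: plan to yield
```

The point is that the planner acts on the projected position, not merely the current one, which is what lets the AI slow down before the pedestrian has actually stepped off the curb.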
For the cognition timing aspects of the driving task, see my article: https://aitrends.com/selfdrivingcars/cognitive-timing-for-ai-self-driving-cars/
See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/
To learn about LIDAR, see my article: https://aitrends.com/selfdrivingcars/lidar-secret-sauce-self-driving-cars/
Please keep in mind that the AI and the self-driving car are not considered perfect machines.
Is there a chance that a pedestrian might not be detected in spite of the array of sensors on the self-driving car?
Yes, absolutely. Can the sensors themselves at times fail or falter? Yes, absolutely. Could the AI system end-up making a wrong choice about whether there is a pedestrian nearby and whether the pedestrian might be intending to get into harm’s way? Yes, absolutely.
I say this because there are some pundits that seem to want to portray a Utopian world that involves self-driving cars that are always working and always flawless. I ask you, can you cite for me any of today’s machines that are utterly perfect and flawless? I don’t think so. The self-driving car is still a car.
It is prone to equipment wear-and-tear.
There might even be parts on the self-driving car that are subject to a recall.
The AI itself might have bugs or errors in the software. And so on.
For my article about recalls and self-driving cars, see: https://aitrends.com/selfdrivingcars/auto-recalls/
For my article about potential bugs and errors in AI, see: https://aitrends.com/ai-insider/irreproducibility-and-ai-self-driving-cars/
For the egocentric designs of AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/
Let’s then assume that the AI self-driving car is going to do what it can to avoid hitting pedestrians, but that it is not a perfect system and there are still substantive chances of hitting pedestrians, especially when a pedestrian does something erratic or rash.
There are some AI developers that say we should go after the pedestrians. In other words, if we cannot make the AI good enough to deal with the pedestrians, let’s change the behavior of pedestrians. It’s those pesky humans that are the real problem here. The AI is fine if it is just good enough, and we can presumably bend the will of humans so that the AI won’t get itself into trouble.
I’m not one that subscribes to this notion that the pedestrians are the entire problem per se, and I’m a strong advocate that we need to make the AI stronger and self-driving cars more robust to be able to contend with pedestrians.
Nonetheless, admittedly the behavior of pedestrians does enter into the matter at-hand. That being said, trying to change human behavior is not an easy thing. I dare say that the jurisdictions that have outlawed looking at your cell phone as you cross the street are finding that most pedestrians are still looking down at their cell phones. Unless you were to put police at those crosswalks and they write a zillion tickets, few pedestrians are going to heed the new law.
There are though some aspects about human behavior that we can try to contend with.
One aspect is to try to grab the attention of the pedestrian so that they are less likely to blindly step into the path of an AI self-driving car.
Getting The Attention Of Pedestrians
At my presentations about AI self-driving cars, I’ve often described a phenomenon that I refer to as the “head nod” problem. For human driven cars, a pedestrian can usually see the human that is driving a car.
There is usually a kind of “courtship” that takes place between a car driver and a pedestrian. The pedestrian might make eye contact and be subliminally saying that they are going to step out into the street and the car driver better let them do so. The car driver might be making eye contact to say that don’t dare get in front of this car. In some cities, this takes place in a mere few seconds or so, such as in New York City or Boston, and a stare down leads to someone “winning” the chicken contest.
When you have an AI self-driving car, there is no longer a visually present human driver that represents the intentions of the self-driving car. Thus, an important signaling aspect of car-to-pedestrian is now removed from the day-to-day arrangement that we all seem to have with traffic and crossing streets or entering into streets. There’s been an unwritten contract of sorts that pedestrians generally should look toward the driver and the driver should look toward pedestrians. It’s not a perfect contract by any means. Pedestrians routinely don’t look at a driver, or perhaps cannot see the driver anyway due to tinted windows or other obscurities.
Various means of conspicuity, already integral to the everyday capabilities of cars, can serve as a way for the AI to try to signal to pedestrians. Just like conventional cars, an AI self-driving car has its turn signals, its headlights, and can potentially use its horn too. Even the direction of the tires and the physical maneuvering motions of the car are all part of the signaling aspects to pedestrians.
For more about the head nod problem, see my article: https://aitrends.com/selfdrivingcars/head-nod-problem-ai-self-driving-cars/
For the conspicuity aspects of an AI self-driving car, see my article: https://aitrends.com/selfdrivingcars/conspicuity-self-driving-cars-overlooked-crucial-capability/
More recently, there are some automakers and tech firms that are making use of add-ons to an AI self-driving car to provide further signaling or messaging to pedestrians.
These are pretty much experiments right now.
No one is quite sure what kind of signaling or messaging might be best. Do you place an electronic signboard on the roof of the AI self-driving car?
Do you put special wing like attachments on the sides of the AI self-driving car that can display visual signals?
Do you include audio sounds, beyond just honking a horn, such as a tone to indicate when the AI self-driving car is making a turn, or even an automated Alexa or Siri like voice that says what the self-driving car is going to do?
There is also a move afoot to provide a V2P capability (vehicle to pedestrians).
There are already efforts toward V2V (vehicle to vehicle) communications, allowing an AI self-driving car to electronically communicate with another AI self-driving car. In a crowded downtown area, the AI self-driving car ahead of your AI self-driving car might via V2V warn your AI self-driving car that a pedestrian is about to step into the street.
And there are V2I efforts (vehicle to infrastructure) too, in which the roadway infrastructure such as street lights or even crosswalks will communicate electronically with your AI self-driving car.
The V2P idea is that a pedestrian might have a smartphone or maybe a smartwatch (or other wearable), and the AI self-driving car could communicate with the pedestrian. This is a kind of “head nod” aspect that I mentioned earlier, except taken into the modern day with the use of electronics. The AI self-driving car might broadcast to the pedestrians on the corner that the AI is intending to make a right turn on red when it reaches the intersection. This could forewarn the pedestrians.
Presumably, a pedestrian could also make a request to an AI self-driving car. Perhaps the pedestrian needs extra time to make it through the crosswalk, and so their electronic device sends an indication to an AI self-driving car that’s coming down the street. The AI then realizes that there might be a longer wait than normal up ahead at the crosswalk and would slow down or come to a stop accordingly.
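To make the two-way exchange concrete, here is a toy sketch of what such messaging could look like. The message fields and handler logic are entirely invented for illustration and do not follow any actual V2X standard:

```python
# Illustrative V2P-style exchange: vehicle broadcasts intent, pedestrian
# device sends a request, vehicle adjusts its plan. Field names are invented.
import json

def make_intent_broadcast(vehicle_id, maneuver, location):
    # Vehicle -> pedestrians: announce an upcoming maneuver (e.g., right on red).
    return json.dumps({"type": "vehicle_intent", "vehicle": vehicle_id,
                       "maneuver": maneuver, "location": location})

def make_pedestrian_request(ped_id, need):
    # Pedestrian device -> vehicle: e.g., request extra crossing time.
    return json.dumps({"type": "pedestrian_request", "pedestrian": ped_id,
                       "need": need})

def vehicle_handle(message_json):
    # Vehicle-side handler: slow and yield when extra crossing time is requested.
    msg = json.loads(message_json)
    if msg["type"] == "pedestrian_request" and msg["need"] == "extra_crossing_time":
        return "slow_and_yield"
    return "proceed"

broadcast = make_intent_broadcast("AV-17", "right_turn_on_red", "corner intersection")
request = make_pedestrian_request("ped-42", "extra_crossing_time")
print(vehicle_handle(request))  # slow_and_yield
```

Real V2P efforts would of course layer in authentication, radio protocols, and standardized message formats, but the shape of the interaction is the same: intent flows outward from the car, and requests flow back from the pedestrian’s device.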
Making A Self-Driving Car Stand Out
I now present to you a somewhat provocative notion.
Should an AI self-driving car be readily noticed as an AI self-driving car?
Some would say that yes, it is important for pedestrians to realize that an AI self-driving car is coming along on the street. If they realize it is an AI self-driving car, perhaps the pedestrians will be more cautious than they otherwise might be. This could be especially important if we are conceding that in many respects the AI self-driving car might not be as good as a human driver in terms of dealing with pedestrians. By realizing that there is an AI self-driving car, the pedestrians are essentially forewarned.
So far, most of the AI self-driving cars tend to have a LIDAR unit on the top of the car, which is a beacon-looking device. This is there for functional purposes. It also happens to help make an AI self-driving car stand out as an AI self-driving car. Visually, when you see the cap on the top of the self-driving car, you are likely looking at an AI self-driving car. Many in the general public have already become accustomed to this feature and immediately assume that any car with the beacon or cone is most likely an AI self-driving car.
Please be aware though that not all of the AI self-driving cars will necessarily opt to use LIDAR. Also, as the LIDAR units get improved and further miniaturized, it will become less obvious that there’s a LIDAR unit on the top of some AI self-driving cars since they will be streamlined in shape and size.
Some might argue that we should purposely put a special kind of dome or cap on all AI self-driving cars, regardless of whether LIDAR is being used or not. Doing so would make it a visually obvious aspect that the car is an AI self-driving car. It might even be a regulated requirement that all AI self-driving cars would have to have one. It is mandated as a kind of “decorative” matter (serving as a physical shape for distinctiveness, a footprint as it were), rather than for an electronic functional purpose.
And this brings us to the topic of razzle-dazzle!
Right now, most of the automakers and tech firms are taking a conventional looking car and outfitting it to be an AI self-driving car. The shape and colorization of the AI self-driving car is pretty much the same as any other car on the roadways today. There are some future concept cars that have designs of a rather new look, but those aren’t particularly aimed to be on our roadways soon.
Maybe, if we extend the idea of having a dome or beacon on the top of an AI self-driving car, we might consider doing something even more extravagant about the shape and colors of an AI self-driving car.
We already accept the notion that a cab or taxi is often yellow in color and has an indicator on the roof. We accept the notion that police cars have a certain pattern and color scheme. Would it make sense to consider that all AI self-driving cars would need to abide by some special designated “razzle-dazzle” combination of shapes and colors?
The rather obvious downside to this idea is that the public might not like the razzle-dazzle look. If you are an automaker pouring tons of money into AI self-driving cars, you probably don’t want to risk people being unwilling to buy an AI self-driving car because of how it looks. Remember that some earlier described the razzle-dazzle navy ships as freakish in appearance? I doubt that we want the public to perceive AI self-driving cars in a freakish manner.
For my article about marketing and selling AI self-driving cars, see: https://aitrends.com/selfdrivingcars/marketing-self-driving-cars-new-paradigms/
For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/
For defensive driving tactics and AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/art-defensive-driving-key-self-driving-car-success/
There might be a fine line then between purposely coming up with a common or standard scheme of shapes or colors that could make an AI self-driving car become readily apparent to humans, and especially pedestrians, and yet also not be overly garish. A kind of softer disruptive camouflage, as it were.
Unlike the normal kind of disruptive camouflage that intends to deceive about speed and direction, we’d of course want the razzle-dazzle to make those factors actually more apparent, rather than less so. Also, the traditional disruptive camouflage for navy ships is distinct per ship so that the class of ships cannot be revealed, while in this case we would want something that was consistent across all instances. In that sense, please use this analogy to the disruptive camouflage concept in a thoughtful manner and not a stricter aspect-for-aspect manner.
Presumably, each automaker would want to be able to still provide their own look-and-feel to the AI self-driving car, allowing them to be differentiated in the marketplace.
A pedestrian might take more notice of an AI self-driving car if it had some kind of standardized markings or indication, and hopefully the pedestrian would be more cautious accordingly. This though also raises the other side of the coin, namely that pedestrians might purposely try to prank an AI self-driving car, and by knowing right away that the car coming down the street is an AI self-driving car, they might more readily be apt to play such tricks.
If we don’t do something to make AI self-driving cars appear distinctive, it essentially means that they will be using traditional “camouflage” in that they will blend into the surroundings consisting of other conventional cars.
You won’t be able to readily notice whether those cars nearby are human driven or AI self-driving cars. As a society, do we believe that AI self-driving cars should be required to appear distinctive, or are we fine with them blending in?
Razzle-dazzle, or just the norm.
Time will tell.
Copyright 2020 Dr. Lance Eliot
This content is originally posted on AI Trends.
[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]