The FSD option has been available as an optional add-on to complement Tesla’s Autopilot driver assistance technology, even though the features themselves hadn’t been available to Tesla owners before the launch of the beta this month. Even now, it’s only in a limited beta, but this is the closest Musk and Tesla have come to actually launching something under the FSD moniker, after having teased a fully autonomous mode in production Teslas for years.
Despite its name, FSD isn’t what most in the industry would define as full, Level 4 or Level 5, autonomy per the standards defined by SAE International and accepted by most working on self-driving. Musk has described it as giving vehicles the ability “to be autonomous but requiring supervision and intervention at times,” whereas Levels 4 and 5 (often considered “true self-driving”) under SAE standards require no driver intervention.
Still, the technology does appear impressive in some ways according to early user feedback, though letting the general public test any kind of self-driving software unsupervised does seem an incredibly risky move. Musk has said that we should see a wide rollout of the FSD tech beyond the beta before year’s end, so he clearly has confidence in its performance.
The price increase might be another sign of his and the company’s confidence. Musk has always maintained that users were getting a discount by handing money over early to Tesla in order to help it develop technology that would come later, so in many ways it makes sense that the price increase comes now. This also obviously helps Tesla boost margins, though it’s already riding high on earnings that beat both revenue and profit expectations from analysts.
The U.S. government rolled out a new online tool Wednesday designed to give the public insight into where and who is testing automated vehicle technology throughout the country.
The official name of the online tool — Automated Vehicle Transparency and Engagement for Safe Testing Initiative tracking tool — is a jargony mess of a word salad. Fortunately, its mechanics are straightforward. The online tool gives users the ability to find information about on-road testing of automated vehicles in 17 cities throughout the United States. The public can find out information about a company’s on-road testing and safety performance, the number of vehicles in its fleet as well as AV-related legislation or policy in specific states.
The AV tracking tool is part of the Automated Vehicle Transparency and Engagement for Safe Testing Initiative, called AV TEST for short, that was announced in June. The National Highway Traffic Safety Administration is overseeing the AV TEST Initiative.
The online tool is hardly comprehensive, but it’s a start, and continues to expand. The tool currently shows data in 17 cities, including Austin, Columbus (Ohio), Dallas, Denver, Jacksonville, Orlando, Phoenix, Pittsburgh, Salt Lake City, San Francisco and Washington, D.C. The data might include testing activity as well as dates, frequency, vehicle counts and routes, NHTSA said.
The information on the interactive web page is based on information that companies have volunteered. In other words, companies testing automated vehicle technology are not required by the federal government to provide data.
However, a growing number of AV founders and engineers understand that public education and acceptance will be necessary if they ever hope to commercially deploy their technology. Ten companies and nine states have already signed on as participants in the voluntary web pilot. The participating companies, to date, are Beep, Cruise, EasyMile, FCA, LM Industries, Navya, Nuro, Toyota, Waymo and Uber Advanced Technologies Group. The online tool also contains voluntarily submitted safety reports from Aurora, Ike, Kodiak, Lyft, TuSimple and Zoox.
NHTSA has limited the number of companies submitting data during the pilot phase, Dr. Joseph M. Kolly, the agency’s chief safety scientist, said during a briefing earlier Wednesday.
“The more information the public has about the on-road testing of automated driving systems, the more they will understand the development of this promising technology,” NHTSA Deputy Administrator James Owens said in a statement. “Automated driving systems are not yet available for sale to the public, and the AV TEST Initiative will help improve public understanding of the technology’s potential and limitations as it continues to develop.”
Hi, and welcome back to The Station, a weekly newsletter dedicated to all the ways people and packages travel from Point A to Point B. I’m your host Kirsten Korosec, senior transportation reporter at TechCrunch. If this is your first time, hello; I’m glad you’re with us.
I have started to publish a version of the newsletter on TechCrunch. That’s what you’re reading now. For the whole newsletter, which comes out every weekend, you can subscribe by heading over here, and clicking “The Station.” It’s free!
Last week, I asked readers to share how …
Four years ago, mathematician Vlad Voroninski saw an opportunity to remove some of the bottlenecks in the development of autonomous vehicle technology thanks to breakthroughs in deep learning.
Now, Helm.ai, the startup he co-founded in 2016 with Tudor Achim, is coming out of stealth with an announcement that it has …
Got your sights set on attending TC Sessions: Mobility 2020 on May 14 in San Jose? Spend the day with 1,000 or more like-minded founders, makers and leaders across the startup ecosystem. It’s a day-long deep dive dedicated to current and evolving mobility and transportation tech. Think autonomous vehicles, micromobility, AI-based mobility applications, battery tech and so much more.
In addition to taking in all the great speakers (more added every week), presentations, workshops and demos, you’ll want to meet people and build the relationships that foster startup success. Get ready for a radical network experience with CrunchMatch. TechCrunch’s free business-matching platform makes finding and connecting with the right people easier than ever. It’s both curated and automated, a potent combination that makes networking simple and productive. Hey needle, kiss that haystack goodbye.
Here’s how it works.
When CrunchMatch launches, we’ll email all registered attendees. Create a profile, identify your role and list your specific criteria, goals and interests: whomever you want to meet, whether investors, founders or engineers specializing in autonomous cars or ride-hailing apps. The CrunchMatch algorithm then kicks into gear, suggesting matches and, subject to your approval, proposing meeting times and sending meeting requests.
CrunchMatch benefits everyone — founders looking for developers, investors in search of hot prospects, founders looking for marketing help — the list is endless, and the tool is free.
You have one programming-packed day to soak up everything this conference offers. Start strategizing now to make the most of your valuable time. CrunchMatch will help you cut through the crowd and network efficiently so that you have time to learn about the latest tech innovations and still connect with people who can help you reach the next level.
TC Sessions: Mobility 2020 takes place on May 14 in San Jose, Calif. Join, meet and learn from the industry’s mightiest minds, makers, innovators and investors. And let CrunchMatch make your time there much easier and more productive. Buy your early-bird ticket, and we’ll see you in San Jose!
Is your company interested in sponsoring or exhibiting at TC Sessions: Mobility 2020? Contact our sponsorship sales team by filling out this form.
Or, suppose instead I said to you that you should “Drive Safely: It’s the Law” – how would you react?
Perhaps I might say “Drive Safely or Get a Ticket.”
I could be even more succinct and simply say: Drive Safely.
These are all ways to generally say the same thing.
Yet, how you react to them can differ quite a bit.
Why would you react differently to these messages that all seem to be saying the same thing?
Because how the message is phrased will create a different kind of social context that your underlying social norms will react to.
If I simply say “Drive Safely”, it’s a rather perfunctory form of wording the message.
It’s quick, consisting of only two words. You would likely barely notice the message, and you might think that of course it’s important to drive safely. You might ignore the message as seemingly obvious, or you might notice it and think it’s a handy reminder but, in the grand scheme of things, not really necessary, at least not for you (maybe it was intended for riskier drivers, you assume).
Consider next the version that says “Thank You for Driving Safely.”
This message is somewhat longer, at five words, and takes more effort to read. As you parse the message, the opening element is that you are being thanked for something. We all like being thanked. What is it that you are being thanked for, you might wonder. You then get to the end of the message and realize you are being thanked for driving safely.
Most people would then maybe get a small smile on their face and think that this was a mildly clever way to urge people to drive safely. By thanking people, it gets them to consider that they need to do something to get the thanks, and the thing they need to do is drive safely. In essence, the message tries to create a reciprocity with the person – you are getting a thank you handed to you, and you in return are supposed to do something, namely you are supposed to drive safely.
Suppose you opt not to drive safely?
You’ve broken the convention of having been given something, the thanks, when it really was undeserved. In theory, you won’t want to break such a convention and therefore will be motivated to drive safely. I’d say that none of us will go out of our way to drive safely merely because we need to repay the thank-you. On the other hand, maybe it will be enough of a social nudge to put you in the mindset of continuing to drive safely. It’s not enough to force you into driving safely, but it might keep you going along as a safe driver.
What about the version that says “Drive Safely: It’s the Law” and your reaction to it?
In this version, you are being reminded to drive safely and then you are being forewarned that it is something you are supposed to do. You are told that the law requires you to drive safely. It’s not really a choice per se, and instead it is the law. If you don’t drive safely, you are a lawbreaker. You might get into legal trouble.
The version that says “Drive Safely or Get a Ticket” is similar to the version warning you about the law, and steps things up a further notch.
If I tell you that something isn’t lawful, you need to make a mental leap that if you break the law there are potentially adverse consequences. In the case of the version telling you straight out that you’ll get a ticket, there’s no ambiguity about the aspect that not only must you drive safely but indeed there is a distinct penalty for not doing so.
None of us likes getting a ticket.
We’ve all had to deal with traffic tickets and the trauma of getting points dinged on our driving records, possibly having our car insurance rates hiked, and maybe needing to go to traffic school and suffer through boring hours of re-learning about driving. Yuk, nobody wants that. This version that mentions the ticket provides a specific adverse consequence if you don’t comply with driving safely.
The word-for-word wording of the drive safely message is actually quite significant as to how the message will be received by others and whether they will be prompted to do anything because of the message.
I realize that some of you might say that it doesn’t matter which of those wordings is used.
Aren’t we being rather tedious in parsing each such word?
Seems like a lot of focus on something that otherwise doesn’t need any attention. Well, you’d actually be somewhat mistaken in assuming that those variants of wording make no difference. Numerous psychology and cognition studies show that the wording of a message can make a dramatic difference in whether people notice the message and whether they take it to heart.
I’ll concentrate herein on one such element that makes those messages so different in terms of impact, namely due to the use of reciprocity.
Importance Of Reciprocity
Reciprocity is a social norm.
Cultural anthropologists suggest that it is a social norm that cuts across all cultures and all of time.
In essence, we seem to have always believed in and accepted reciprocity in our dealings with others, whether we explicitly knew it or not.
I tell you that I’m going to help you with putting up a painting on your wall. You now feel as though you owe me something in return. It might be that you would pay me for helping you. Or, it could be something else such as you might do something for me, such as you offer to help me cook a meal. We’re then balanced. I helped you with the painting, you helped me with the meal. In this case, we traded with each other, me giving you one type of service, and you providing in return to me some kind of service.
Of course, the trades could have been something other than a service.
I help you put up the painting (I’m providing a service to you), and you then hand me a six-pack of beer. In that case, I did a service for you, and you gave me a product in return (the beers). Maybe instead things started out that you gave me a six-pack of beer (product) and I then offered to help put up your painting (a service). Or, it could be that you hand me the six-pack of beers (product), and I hand you a pair of shoes (product).
In each case, one thing is given to the other person, and the other person provides something in return. We seem to just know that this is the way the world works.
Is it in our DNA?
Is it something that we learn as children? Is it both?
There are arguments to be made about how it has come to be.
Regardless of how it came to be, it exists and actually is a rather strong characteristic of our behavior.
Let’s further unpack the nature of reciprocity.
I had mentioned that you gave me a six-pack of beer and I then handed you a pair of shoes. Is that a fair trade? Maybe those shoes are old, worn out, and have holes in them. You might not need them, and even if you needed them you might not want that particular pair of shoes. Seems like an uneven trade. You are likely to feel cheated and regret the trade. You might harbor a belief that I was not fair in my dealings with you. You might expect that I will give you something else of greater value to make up for the lousy shoes.
On the other hand, maybe I’m not a beer drinker, and so your having given me beers seemed like an odd item to give to me. I might have thought that I’d give you an odd item in return. Perhaps in my mind the trade was even. Meanwhile, in your mind, the trade was uneven.
There’s another angle, too, as to whether the trade is intended as a positive one or a negative one. We are both giving each other things of value, presumably in a positive way. But it could instead be a trade of negative actions. I hit you in the head with my fist, and so you then kick me in the shin. Negative actions as reciprocity. It’s the old eye-for-an-eye notion.
Time is a factor in reciprocity too. I will help you put up your painting. Perhaps the meal you are going to help me cook is not going to take place until several months from today. That’s going to be satisfactory in that we both at least know that there is a reciprocal arrangement underway.
If I help you with the painting, and there’s no discussion about what you’ll do for me, I’d walk away thinking that you owe me. You might also be thinking the same. Or, you could create an imbalance by not realizing you owe me, or maybe you are thinking that last year you helped me put oil into my car and so that’s what makes us even now on this most current trade.
Difficulties Of Getting Reciprocity Right
Reciprocity can be dicey.
There are ample ways that the whole thing can get discombobulated.
I do something for you, you don’t do anything in return.
I do something for you of value N, and you provide in return something of perceived value Y that is substantively less than N. Or I do something for you, and you pledge to do something for me a year from now; meanwhile, I may feel cheated because I didn’t get more immediate value, and if you forget a year from now to make up the trade, I might forever be upset. And so on.
I am assuming that you’ve encountered many of these kinds of reciprocity circumstances in your lifetime. You might not have realized at the time they were reciprocity situations. We often fall into them and aren’t overtly aware of it.
One of the favorite examples of reciprocity in our daily lives involves the seemingly simple act of a waiter or waitress getting a tip after having served a meal. Studies show that if the server brings out the check and includes a mint on the tray holding the check, this has a tendency to increase the amount of the tip. The people who have eaten the meal and are getting ready to pay feel as though they owe some kind of reciprocity due to the mint being there on the tray. Research indicates that the tip tends to go up by a modest amount as a result of the act of providing the mint.
A savvy waiter or waitress can further exploit this reciprocity effect. If they look you in the eye and say that the mint was brought out just for you and your guests, this boosts the tip even more. The rule of reciprocity comes into play since the value of what is being given has gone up: it was at first just any old mint and now it is a special mint just for your party, and thus your trade in kind is going to increase to roughly match the increase in value of the offering. The timing involved is crucial too, in that if the mint was given earlier in the meal, it would not have as great an impact as coming just at the time that the payment is going to be made.
As mentioned, reciprocity doesn’t work on everyone in the same way.
The mint trick might not work on you, supposing you hate mints or like them but perceive them as having little value. Or, if the waiter or waitress has irked you the entire meal, it is unlikely that the mint at the end is going to dig them out of a hole. In fact, sometimes when someone tries the reciprocity trick, it can backfire on them. Upon seeing the mint and the server smiling at you, if you are already ticked off about the meal and the service, it could actually cause you to go ballistic and decide to leave no tip, or maybe ask for the manager and complain.
Here’s a recap then about the reciprocity notion:
Reciprocity is a social norm of tremendous power that seems to universally exist
We often fall into a reciprocity situation and don’t know it
Usually a positive action needs to be traded for another in kind
Usually a negative action needs to be traded for another in kind
An imbalance in the perceived trades can mar the arrangement
Trades can be services or products or combinations thereof
Time can be a factor as to immediate, short-term, or long-term
AI Autonomous Cars And Social Reciprocity
What does this have to do with AI self-driving cars?
At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One crucial aspect of the AI will be the interaction with the human occupants of the self-driving car, and as such, the AI should be crafted to leverage reciprocity.
One of the areas of open research and discussion involves the nature of the interaction between the AI of a self-driving car and the human occupants using it. Some AI developers with a narrow view seem to think that the interaction consists of nothing more than the human occupants saying to drive them to the store or to home, and that’s it.
This is a naive view.
The human occupants are going to want the AI to be much more able to carry on a conversation.
Let’s consider an example of social reciprocity involving a passenger and driver dialogue.
You get into the AI self-driving car and decide you want to go to work.
Almost as though you are giving an instruction to a GPS, you merely indicate the address of work, and the self-driving car then proceeds to drive you there. Suppose that during the journey, you want to get some coffee, doing so before you get to work. You would want to convey this interim destination to the AI.
Before the days of Alexa and Siri, we might have all accepted the idea that we might need to enter such instructions into an on-board computer tablet, but I think we are now expecting to be able to have a more natural audio interaction with the AI system. If you’ve gone to the trouble to buy an expensive and shiny AI self-driving car, you certainly would hope that some effort went into making the system interactive with the human occupants.
A barebones “conversational” AI might be this:
You: Take me to work.
AI self-driving car: Okay
What does the word “Okay” mean in this situation? Does it mean that the AI is acknowledging that you said you want to go to work, without necessarily committing to do so immediately? Or maybe it means that the AI is going to immediately start the self-driving car and drive you to work. Or perhaps the AI misunderstood what you said, thought you said “I hate work,” and is merely saying “Okay” as a placeholder word to seem responsive.
Let’s try this again:
You: Take me to work.
AI self-driving car: Starting to your work now.
This is a little better, since the AI has echoed back to you that you want to go to work, and furthermore it is stating when this action will occur, specifically right away. This seems somewhat flat though. If you were conversing with a human chauffeur, they would probably embellish what they say.
Here’s another try:
You: Take me to work.
AI self-driving car: Good morning, I will head to your work at 1234 Lauren Street. Hope you will have a pleasant day at work!
This provides more specific feedback, including the work address, which helps to confirm that you really intended to go to work. It might be that you said the word “work” out of habit and meant to say “home”; hearing the address might cause you to realize that you had said the wrong word. The “good morning” and “have a pleasant day” parts are added embellishments that presumably give a softer feeling to the messaging and make things seem less robotic.
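The echo-back confirmation pattern described above can be sketched in a few lines. This is purely an illustrative sketch: the saved places, addresses and response phrasing are assumptions invented here, not any production system’s API, and a real assistant would use a speech pipeline and destination database rather than simple substring matching.

```python
# Hypothetical sketch of echo-back destination confirmation.
# SAVED_PLACES and all phrasing are illustrative assumptions.

SAVED_PLACES = {
    "work": "1234 Lauren Street",   # address taken from the example above
    "home": "77 Oak Avenue",        # assumed example address
}

def confirm_destination(utterance: str) -> str:
    """Echo the resolved address back so the rider can catch a
    mis-spoken destination before the car starts driving."""
    lowered = utterance.lower()
    for place, address in SAVED_PLACES.items():
        if place in lowered:
            return (f"Good morning, I will head to your {place} at {address}. "
                    f"Hope you have a pleasant day!")
    return "Sorry, I didn't catch the destination. Where would you like to go?"

print(confirm_destination("Take me to work."))
```

The point of the design is the repeated address: by voicing the specific place it resolved, the system gives the rider a chance to correct a slip of the tongue before the drive begins.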
One criticism of having the AI utter “good morning” and “have a pleasant day” is that it implies perhaps that the AI actually means those things.
When I speak those words to you, you assume that I as a human have measured out those words and that I presumably know what it means to have a good morning, and so with my knowledge about the nature of mornings, I am genuinely hoping that you have a good one. If you see the words “good morning” written on a poster, you don’t consider that the poster knows anything about the meaning of those words. When the AI system speaks those words, you are likely to be “fooled” into thinking that the AI system “understands” the nature of mornings and is basing those words on a sense of the world.
But, the AI of today is more akin to the poster, it is merely showcasing those words and does not yet (at least) comprehend the true meaning of those words.
Do we want the AI to seem to be more aware than it really is?
That’s an important question. If the human occupants believe that the AI has some form of human awareness and knowledge, they might get themselves into a pickle by trying to converse with it accordingly. Suppose a human starts to suffer a heart attack and, believing that the AI has human-like understanding, says “help, I’m having a heart attack.” If you said this to another adult, that person would likely realize you are in trouble; they might call 911, they might try to aid you, and so on.
The AI of today would not know what you’ve said per se. You might have been misled into thinking that it would. If you believed that the AI was not so capable as a human, you might instead have uttered “take me to the nearest hospital” which then hopefully is similar to “take me to work” in that the system can parse the words and realize that it is a driving instruction. The AI would presumably then alter the driving path and instead drive the self-driving car to a nearby hospital.
This argument also comes up when AI developers put other human-like speaking mannerisms into their systems. Humans, for example, will often say “you know,” pause while speaking, or say “uh.” To date, most AI natural language processing systems have generated spoken words that are streamlined and omit those human mannerisms. Lately, some AI developers have been including them. This gives the AI’s spoken words a closer-to-human feel.
Here’s an example:
You: Take me to work.
AI self-driving car: Uh, Okay, sounds good… will do so.
The added elements to the response are going to suggest to the human listener that the AI is composing speech in a manner similar to a human. This anthropomorphizes the AI system, which as I’ve mentioned before can be good if you believe that’s a desirable way to bond with the human occupants, but it can also be bad in that it can mislead as to what the AI system can and cannot really do and understand.
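Disfluency injection of this sort is often done as a post-processing step on the generated reply. The sketch below is an assumption-laden illustration, not any vendor’s implementation: the filler words, the probability, and the function name are all invented here for demonstration.

```python
import random

# Hypothetical sketch: inject human-like disfluencies into a reply.
# FILLERS and the default probability are illustrative assumptions.

FILLERS = ["Uh, ", "Hmm, ", "You know, "]

def add_disfluency(reply: str, probability: float = 0.5, rng=None) -> str:
    """Occasionally prepend a filler word so the generated speech
    sounds less streamlined and more human-like."""
    rng = rng or random.Random()
    if rng.random() < probability:
        return rng.choice(FILLERS) + reply
    return reply

print(add_disfluency("Okay, sounds good... will do so.", probability=1.0))
```

Keeping the probability below 1.0 matters in practice: a filler on every single utterance would sound as mechanical as no fillers at all.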
The joint interaction in a self-driving car between the human occupants and the AI is potentially going to involve life-and-death matters.
This is an important point to keep in mind. If you are simply developing an online search engine that verbally interacts with humans or a system that can aid them in ordering toothbrushes and towels, whether the human believes that the AI is actually comprehending them is not particularly life threatening. In the case of a self-driving car, the human might need to offer insight to the AI about the driving task, or the AI might need to get input from the human about the driving task, any of which if improperly undertaken could have dire consequences.
Should the AI system of the self-driving car carry on conversations that also invoke the use of reciprocity?
If you are of a mind that the AI should not over-inflate what it can actually comprehend, you’d probably say that it should not employ reciprocity either.
If you are seeking to have the AI appear as sentient as it can, the use of reciprocity would be a core element to achieve that sense, since it is so common among humans and part-and-parcel of our daily lives.
Here’s an example:
You: Take me to work.
AI self-driving car: Happy to take you to work. Please make sure to keep your seat belts on.
You likely can detect the subtle reciprocity involved in the dialogue. The AI implies that it is doing something for the human occupant, taking them to work, and the punch-up is that the AI is “happy” to do so. This creates a human-like aspect as though the AI was doing this as a service for you. The trade is that in return, you are being asked to keep your seat belts on.
If we strip away the reciprocity, we’d get something like this instead:
You: Take me to work.
AI self-driving car: Okay. Keep your seat belts on.
This version is similar to the earlier example about the drive safely message. We’ve now got a plainer and straightforward kind of instruction or maybe even an edict, which was the same with the “drive safely” message. The “happy to take you to work” was more akin to the “thank you for driving safely” message that created a kind of quid-pro-quo element to the dialogue.
If we make the messaging more along the negative side, it might be something like this:
You: Take me to work.
AI self-driving car: Okay. Keep your seat belts on or I’ll stop the car and you won’t get to work on time.
Whoa! This sounds like some kind of fierce AI that is threatening you.
There are AI developers that would argue that this message is actually better than the others because it makes abundantly clear the adverse consequence if the human does not wear their seat belts.
Yes, it’s true that it does spell out the consequences, but it also perhaps sets up a “relationship” with the human occupant that’s going to be an angry one. It sets the tone in a manner that might cause the human to consider in what manner they want to respond back to the AI (angrily!).
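The three seat-belt framings discussed above could be thought of as selectable response styles. The sketch below is purely illustrative: the style names, the template strings (taken from the dialogue examples above) and the fallback behavior are assumptions for demonstration, not a real dialogue system.

```python
# Hypothetical sketch: the same seat-belt prompt under the three
# framings discussed above. Style names are illustrative assumptions.

STYLES = {
    "plain": "Okay. Keep your seat belts on.",
    "reciprocity": ("Happy to take you to work. "
                    "Please make sure to keep your seat belts on."),
    "warning": ("Okay. Keep your seat belts on or I'll stop the car "
                "and you won't get to work on time."),
}

def seatbelt_prompt(style: str) -> str:
    """Pick a framing; an unknown style falls back to the plain edict."""
    return STYLES.get(style, STYLES["plain"])

print(seatbelt_prompt("reciprocity"))
```

Separating the framing from the content like this would let designers test which tone actually gets occupants to buckle up, without touching the underlying driving logic.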
If the AI system is intended to interact with the human occupants in a near-natural way, the role of reciprocity needs to be considered.
It is a common means of human to human interaction. Likewise, the AI self-driving car will be undertaking the driving task and some kind of give-and-take with the human occupants is likely to occur.
We believe that as AI Natural Language Processing (NLP) capabilities get better, incorporating reciprocity will further enhance the seeming natural part of natural language processing.
It is prudent, though, to be cautious not to overstep what can actually be achieved, and the life-and-death consequences of human and AI interaction in a self-driving car context need to be kept in mind.
Copyright 2020 Dr. Lance Eliot
This content is originally posted on AI Trends.
[Ed. Note: For reader’s interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]
What started out as a temporary pilot project to test a robotaxi service in Las Vegas has turned into a multi-year partnership between self-driving software company Aptiv and Lyft, one that has now reached a milestone suggesting the operation is ramping up.
The companies announced Tuesday that they’ve given 100,000 paid rides in Aptiv’s self-driving vehicles via the Lyft app.
“To our knowledge this is the largest open-to-the-public commercial pilot,” Aptiv Autonomous Mobility President Karl Iagnemma said in a recent interview. “To me this partnership is a great example of the next-generation ecosystem at work.”
The milestone has a few important caveats. Aptiv’s self-driving vehicles, which began with BMW 5 Series sedans, have a human safety driver behind the wheel to take over if needed. The human driver operates the vehicle manually in parking lots and hotel lobby areas.
The program, even with those human safety drivers behind the wheel, has proven invaluable to the companies, according to Iagnemma and Jody Kelman, who leads the self-driving platform team at Lyft.
“We’ve got something here,” Kelman said. “This is really a blueprint for what future mobility partnerships can look like.”
Companies in this so-called “race” to commercially deploy on-demand ride-hailing services using self-driving vehicles must master more than the technical bits. Fleet management, real-time routing, and designing an approachable user interface are just a few critical components needed to operate a profitable robotaxi service.
The program has taught Aptiv how to “get and keep a fleet of autonomous vehicles on the road and keep them highly utilized,” Iagnemma said, later adding that this project positions Lyft and Aptiv to be major winners in this space. The companies also learned how to work with various regulatory bodies, in this case, with the city of Las Vegas, Clark County and the region’s transit authority.
Lyft and Aptiv first launched the pilot in January 2018 as a one-week experiment and then announced plans to extend the program. The program surpassed 5,000 self-driving rides by August and jumped to more than 25,000 paid autonomous rides by December 2018, all while maintaining an average passenger rating of 4.95 out of five stars, Aptiv said at the time.
Aptiv’s investment in Las Vegas expanded as those ridership numbers grew. In December 2018, the company opened a 130,000-square-foot technical center in the city to house its fleet of autonomous vehicles as well as an engineering team dedicated to research and development of software and hardware systems, validation and mapping.
Self-driving cars are on the road more often than the average person might think. Most people picture the cars of movies like KITT and Herbie, but “self-driving” comes down to much simpler technology and doesn’t require humanlike artificial intelligence to operate.
Self-driving cars are connected vehicles that rely on software, hardware and machine learning to navigate their surroundings using data collected in real time. This combination powers common autonomous features like adaptive cruise control and lane assist.
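A feature like adaptive cruise control illustrates how real-time sensor data feeds a simple control decision. The sketch below is a toy illustration of the idea, with made-up constants and function names, not any automaker's actual control code:

```python
# Toy sketch of adaptive cruise control: hold a set speed unless the
# gap to the car ahead shrinks below a safe threshold. All values
# (speeds in m/s, gaps in meters) and the 40 m threshold are illustrative.
def acc_target_speed(set_speed: float, gap_m: float, lead_speed: float,
                     safe_gap_m: float = 40.0) -> float:
    """Pick a target speed: cruise at set_speed while the gap is safe."""
    if gap_m >= safe_gap_m:
        return set_speed  # road ahead is clear enough, hold the set speed
    # Too close: cap speed at the lead vehicle's, slowing further
    # as the gap shrinks
    return min(set_speed, lead_speed * (gap_m / safe_gap_m))
```

A real system would close this loop continuously over fused radar and camera data, but the core logic is this kind of feedback rule rather than open-ended machine intelligence.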
The lower levels of autonomy cover features like cruise control and lane assist, while the higher levels require less control from the human driver and rely more on the car’s technology and its ability to interpret its surroundings.
Waymo, originally Google’s Self-Driving Car Project, is an example of a system that operates at Level 4 autonomy. Waymo’s vehicles are currently tested in major U.S. cities without human drivers. At this level of autonomy, Waymo’s vehicles are responsible for monitoring the road and surroundings, steering, accelerating, decelerating and providing fallback for self-driving failures.
The six levels of autonomy are as follows:
Level 0: No Automation: Cars with no driving automation at all, like the Ford Model T. Simple conveniences like automatic windows don’t count as driving automation.
Level 1: Driver Assistance: Features in this category are commonly found in cars and include things like a side-mirror indicator light that alerts drivers to cars in the next lane.
Level 2: Partial Automation: Tesla Autopilot and Nissan ProPilot Assist are examples that fall under this category. These features control aspects like steering, acceleration and deceleration while the driver still monitors the road and surrounding environment.
Level 3: Conditional Automation: At this level, self-driving cars can conditionally monitor their surroundings and function autonomously. Human drivers are still necessary to take over in complex driving scenarios like obstacles in the road or hazardous weather. Uber’s self-driving car falls under this category.
Level 4: High Automation: This is the highest level of autonomy attained so far. These cars are responsible for all functions, including fallback when self-driving features fail, though there are still rare cases they aren’t able to recognize. As mentioned, Waymo is an example that falls under this level.
Level 5: Full Automation: These cars achieve full automation and never need human intervention to operate. Level 5 cars are able to assess all road and weather conditions and any obstacles they face.
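The key distinction across the levels is who is responsible for steering and, crucially, for fallback when the automation fails. That split can be sketched in code — a hypothetical illustration of the taxonomy described above, with illustrative names, not any manufacturer's classification logic:

```python
# Hypothetical mapping of SAE automation levels to who steers and who
# handles fallback when automation fails. Field names are illustrative.
SAE_LEVELS = {
    0: {"name": "No Automation", "steering": "human", "fallback": "human"},
    1: {"name": "Driver Assistance", "steering": "shared", "fallback": "human"},
    2: {"name": "Partial Automation", "steering": "system", "fallback": "human"},
    3: {"name": "Conditional Automation", "steering": "system", "fallback": "human"},
    4: {"name": "High Automation", "steering": "system", "fallback": "system"},
    5: {"name": "Full Automation", "steering": "system", "fallback": "system"},
}

def human_driver_required(level: int) -> bool:
    """A human must be ready to intervene at Level 3 and below."""
    return SAE_LEVELS[level]["fallback"] == "human"
```

The jump from Level 3 to Level 4 — the system taking over fallback — is why Levels 4 and 5 are often called “true self-driving.”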
Safety Hazards for Self-Driving Cars
Unfortunately, the advancement of autonomous vehicles opens the doors to new vulnerabilities and security issues.
Predicting agent behavior: It’s difficult to account for all of the moving parts on the road, including how other cars behave and human error, such as a driver who accidentally leaves a turn signal on while driving straight.
Understanding perception complexity: Autonomous vehicles have a difficult time monitoring the surrounding area when objects are blocked from view. Objects seen in a reflection are just one example of this.
Cybersecurity threats: Self-driving cars aren’t immune to hacking. Vulnerabilities will typically be found in the code, since code is written by humans. Few people understand neural networks well enough to exploit such vulnerabilities, but it’s possible.
Continued development and deployment: Changes to the code raise the question of whether updated software needs to be tested over a certain number of miles to validate its performance.
This animated guide to self-driving cars from The Simple Dollar breaks down everything you need to know about the current state of self-driving cars. Take a look at the visualizations below to understand the levels of autonomous driving, the software and hardware involved, and the issues facing self-driving cars.
Aurora has been given permission by California regulators to transport passengers in its self-driving vehicles, TechCrunch has learned.
The California Public Utilities Commission granted Aurora a permit, which was posted on its website Wednesday, to participate in the state’s Autonomous Vehicle Passenger Service pilot. Aurora confirmed the approval.
“This permit lets us give rides powered by the Aurora Driver and shows that we’re committed to being good partners to California and the Commission,” an Aurora spokesperson said when asked about the permit.
The company didn’t provide more details about when it might start letting passengers in its vehicles. And based on the company’s focus, it’s likely this won’t be a broad robotaxi service.
Aurora has never planned to operate a robotaxi service. Instead, it has focused on building the self-driving stack and working with partners to integrate it into vehicle platforms. The “Aurora Driver,” as the company calls it, has been integrated into six vehicle platforms from several manufacturers, ranging from sedans, SUVs and minivans to commercial vans and Class 8 trucks. These integrations are not commercially available.
Aurora, which has operations in Pittsburgh, Palo Alto and San Francisco, has a fleet of about a dozen self-driving vehicles that are used for testing on public roads. The company started testing its self-driving system in Chrysler Pacifica minivans and has said it will continue to grow this fleet over the next year.
Aurora attracted attention early on because of the pedigree of its three founders — Sterling Anderson, Drew Bagnell and Chris Urmson — who had led self-driving vehicle programs at Google, Tesla and Uber. In February 2019, the company raised more than $530 million in a Series B round led by Sequoia Capital, with “significant investment” from Amazon and T. Rowe Price Associates. The monster round pushed Aurora’s valuation to more than $2.5 billion. It has raised more than $620 million to date.
The approval from CPUC is different from the permits issued by the California Department of Motor Vehicles to test self-driving vehicles in the state. Today, 65 companies have a permit to test self-driving vehicles on the state’s public roads.
Only four other companies, AutoX, Pony.ai, Waymo and Zoox, have CPUC permits. Zoox was the first company to receive a permit in December 2018.
The CPUC permit gives Aurora permission to use its self-driving vehicles to transport people. The permit comes with a few caveats. Companies issued the permits cannot charge for rides — a rule that AV developers are lobbying to change — and the vehicles must have safety drivers behind the wheel. Companies must also hold a testing permit from the DMV.
Aurora’s permit, which lasts until January 2023, requires the company to provide reports to CPUC with information on total passenger miles traveled and safety protocols.
GM has improved its hands-free driving assistance system Super Cruise, adding a feature that will automatically change lanes for drivers of certain Cadillac models, including the upcoming 2021 Escalade.
This enhanced version of Super Cruise, which will include better steering and speed control, puts it back in competition with Tesla’s Autopilot driver assistance system (specifically the Navigate on Autopilot feature), which is considered the most capable on the market today.
The improved version will be introduced starting with the 2021 Cadillac CT4 and CT5 sedans followed by the new 2021 Cadillac Escalade. The vehicles are expected to become available in the second half of 2020.
Super Cruise uses a combination of lidar map data, high-precision GPS, cameras and radar sensors, as well as a driver attention system, which monitors the person behind the wheel to ensure they’re paying attention. Unlike Tesla’s Autopilot driver assistance system, users of Super Cruise do not need to have their hands on the wheel. However, their eyes must remain directed straight ahead.
The automatic lane change feature in Super Cruise will still require the driver to keep their eyes on the road. When the system is engaged, the driver can engage the turn signal to indicate a desire to change lanes. Once the system has determined that the lane is open, the vehicle will merge. Meanwhile, the gauge cluster will display messages to the driver such as “looking for an opening” or “changing lanes.”
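The lane change sequence just described — driver signals, system searches for an opening, vehicle merges, dashboard reports each stage — can be sketched as a simple state machine. This is an illustration of the flow as reported, with invented state and function names, not GM's actual software:

```python
# Illustrative state machine for the described automatic lane change
# flow; states and dashboard messages mirror the article, not GM's code.
def lane_change_step(state: str, turn_signal_on: bool,
                     lane_clear: bool) -> tuple[str, str]:
    """Return (next_state, dashboard_message) for one decision step."""
    if state == "cruising" and turn_signal_on:
        # Driver has indicated a desire to change lanes
        return "searching", "looking for an opening"
    if state == "searching":
        if lane_clear:
            # System has determined the lane is open; begin the merge
            return "merging", "changing lanes"
        return "searching", "looking for an opening"
    if state == "merging":
        return "cruising", "lane change complete"
    return state, ""
```

Throughout every state, the driver attention system keeps running: the maneuver proceeds only while the driver’s eyes remain on the road.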
GM’s new digital vehicle platform, which provides more electrical bandwidth and data processing power, enabled engineers to add to Super Cruise’s capabilities. The company also improved its rear-facing sensors and software to be able to better track vehicles approaching from the rear, Super Cruise chief engineer Mario Maiorana said.
The new version of Super Cruise will change lanes for the driver on highways where the feature is allowed. The user interface and hands-free driving dynamics have also been improved, according to Maiorana.
Super Cruise, which launched in 2017, was limited to just one model — the full-size CT6 sedan — and restricted to divided highways. That began to change last year when GM announced plans to expand where Super Cruise would be available. A software update added thousands of miles of compatible divided highways in the United States and Canada. Super Cruise is now available on more than 200,000 miles of highways.
The automaker has also started to make the system available in more models. GM is making Super Cruise an option on all Cadillac models this year and has said the system will start hitting its other brands, such as Chevrolet, GMC and Buick, after 2020.