33 votes

Driverless cars may already be safer than human drivers

24 comments

  1. [11]
    Eji1700
    Link

    I do not care what companies who stand to gain billions self-report when it comes to safety, especially as presented in the article, where context is key. Waymo determining that no human could have avoided killing that dog reads like "the police investigated themselves and found they did no wrong". Some % of the time it's probably true, but there's no fucking way I'm going to accept that as a safety standard.

    Worse, there's once again a major misdirection with the stats: there's zero control for the fact that all stats on human-driven miles per crash include driving these cars cannot do, in conditions they cannot drive in, and in locations they cannot drive in.

    If they were comparing against the number of human crashes in the same areas, that would at least be closer, but this once again looks to be the usual "look how much better it is" that doesn't control properly for variables. It's not real data or science, just propaganda and opinion.
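
    To make the confounding concrete, here's a toy calculation (every number below is invented for illustration, not real crash data):

    ```python
    # Invented numbers showing how an uncontrolled mileage mix can flatter
    # a fleet that only ever drives the easy miles.
    human_miles   = {"easy urban": 2e9, "freeway/night/bad weather": 8e9}
    human_crashes = {"easy urban": 4_000, "freeway/night/bad weather": 40_000}

    av_miles   = {"easy urban": 6e6}   # the AV fleet drives only this stratum
    av_crashes = {"easy urban": 15}

    def rate_per_million(crashes, miles):
        return 1e6 * sum(crashes.values()) / sum(miles.values())

    print(rate_per_million(human_crashes, human_miles))  # 4.4 (all conditions)
    print(rate_per_million(av_crashes, av_miles))        # 2.5 (easy miles only)

    # Apples-to-apples: humans restricted to the same easy-urban stratum
    print(1e6 * human_crashes["easy urban"] / human_miles["easy urban"])  # 2.0
    ```

    The fleet "beats" the uncontrolled human average while being worse than humans on the only roads it actually drives. That's the comparison these articles keep making.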

    78 votes
    1. [2]
      skybrian
      Link Parent

      I think it's still too early to make firm predictions and yes, we don't know what will happen under other conditions. As the article points out, it would be great if other states besides California required them to publish their data.

      It seems reasonable to analyze the data we have, though, and ignoring it altogether means we're left with little other than vibes and plausibility arguments.

      Some counterpoints:

      • San Francisco isn't an easy place to drive. There are easier places, including the Phoenix suburbs where Waymo started.

      • It's probably a good thing that, unlike Tesla and many Tesla drivers, these companies are cautious? That's a way to continue to have a good driving record. They have reputations to protect.

      • Gazing into the crystal ball, here's a plausibility argument: I don't expect them to suddenly stop caring about their reputations, which suggests that the accident rate probably won't get worse? If someone wanted to bet that Waymo's accident rate will double, I'd take the other side of that bet.

      27 votes
      1. Eji1700
        Link Parent

        > I think it's still too early to make firm predictions and yes, we don't know what will happen under other conditions. As the article points out, it would be great if other states besides California required them to publish their data.

        > It seems reasonable to analyze the data we have, though, and ignoring it altogether means we're left with little other than vibes and plausibility arguments.

        This isn't how you do it. This is how you misdirect and mislead. This is on par with "smoking doesn't cause cancer" science. There are well-established methodologies and ways to compare in cases where you've only got a subsection of the whole. This isn't it. Further, the fact that so many articles keep flooding out doing exactly this is itself cause for concern.

        > Some counterpoints:

        > San Francisco isn't an easy place to drive. There are easier places, including the Phoenix suburbs where Waymo started.

        For a computer...yeah, it kinda is. I'd argue that in many cases suburbs are going to be harder, not easier, than San Francisco, since there's more chance of unexpected and unplanned-for events. Either way, neither is "meaningfully" difficult as a serious example, since they're such a small % of the total driving humans actually do. Again, things like weather and parking lots are a massive part of the stats these cars are compared against, and they don't drive in them.

        > It's probably a good thing that, unlike Tesla and many Tesla drivers, these companies are cautious? That's a way to continue to have a good driving record. They have reputations to protect.

        How are they cautious? If you want to see what proper caution looks like, look at the FAA and everything around commercial airliners. You don't put out a "well, we're working out the kinks, so we'll cut corners" product because you need more investment runway; when you do, people end up dead (like the 737 MAX).

        > Gazing into the crystal ball, here's a plausibility argument: I don't expect them to suddenly stop caring about their reputations, which suggests that the accident rate probably won't get worse? If someone wanted to bet that Waymo's accident rate will double, I'd take the other side of that bet.

        Based on what? Because the evidence is that every up-and-coming tech company cares plenty about its reputation right up until it has the market captured. The evidence is so heavily against you on this that it's hard to overstate. Uber and Tesla are easy recent examples in new tech, and Kia/Hyundai/VW have made more noticed waves recently by putting consumers at risk or blatantly lying. Google "did no evil" until that made no more sense, and Apple was the scrappy underdog until it became the biggest company in the world.

        The amount of money automated vehicles stand to make means that even if the current company heads actually want to play it safe (something I wouldn't grant, given the decisions already made), there's almost no doubt you're going to have major money interests pushing for the cheapest labor/talent/timeframes, with all the dangers that entails.

        Worse, it's in a regulatory grey area. Too many rules are written in blood, and this has all the history to look at and say "we can do it right", but instead we've got opinion pieces trying to trick us into doing it worse. The sad part is we already are.

        12 votes
    2. [7]
      first-must-burn
      Link Parent

      You nailed it. If they want to compare to human drivers, they need to do an apples-to-apples comparison -- safer than which drivers, on which roads, in what conditions.

      This lecture by Phil Koopman from CMU is (IMO) a much more useful analysis of what the safety targets should look like. If you want to see the breakdown of the human safety numbers, it starts about 14 minutes in.

      15 votes
      1. [5]
        skybrian
        Link Parent

        The headline and article set up a very broad, vague comparison between human drivers and driverless cars, and yes, better comparisons could be done.

        But on the other hand, transportation isn't a sport, so there's no need to play fair. There are a variety of ways to get around, specialized to different situations, and often they can be combined on the same trip. It's fine to cherry-pick and use driverless cars only for situations where they work well.

        The main thing I took away from the article is that, despite some accidents that made the news, they still seem quite safe as currently used.

        5 votes
        1. [4]
          first-must-burn
          Link Parent

          My problem with the article is that it's constructed to support a very common AV industry talking point, which is that these cars must be deployed as fast as possible to reduce the number of car crashes, and there must be less regulatory restriction to do that. The reality is 1) the technology is not ready for full deployment, and 2) if your goal is to reduce crashes, there are a myriad of other ways to do that, but they don't involve investing billions of dollars in AVs. If we decide collectively as a society that this is how we want to solve the problem, that is fine, but this article is definitely selling it.

          For example, the article starts out with:

          > But we actually do know a fair amount about the safety of driverless taxis. Waymo and Cruise have driven a combined total of 8 million driverless miles, including more than 4 million in San Francisco since the start of 2023. And because California law requires self-driving companies to report every significant crash, we know a lot about how they’ve performed.

          But then, under the heading "We don’t have great data on the safety of human drivers":

          > It’s important to emphasize that there’s a lot of uncertainty about [the number of crashes human drivers experience].

          So do you have the data, or don't you? You can't say, "we know a lot about whether AVs are better than human drivers" if you don't know how good human drivers are.

          And if you want to compare AVs to humans, you shouldn't look at the statistics for all human drivers, but at all human drivers on the same streets in the same conditions. The surface-street driving is all low-speed driving, which means the severity of the crashes is inherently reduced.

          At the end is the real point of the article:

          > The big question for policymakers is whether to allow Waymo and Cruise to continue and even expand their services. This should be an easy call with respect to Waymo, which seems to be safer than a human driver already. The faster Waymo scales up, the more crashes can be prevented.

          This is simply not true, but it's very convenient for the AV industry.

          One example where the AVs are not ready: in addition to the collision with the fire truck, there have been other problems with AVs in emergency situations. With a human driver, a police officer or firefighter can simply tell the driver where to go. The AV companies will say, "we have procedures for how an emergency responder can direct the car," but figuring that out is not the emergency responder's job, and shouldn't be.

          AI safety is much more than crashes. It's also measuring and correcting bias, and a myriad of other things, especially as they scale these vehicles to new environments. For anyone who's interested, the video I linked in my comment above actually goes into this in much greater detail, including examining the ethical frameworks you might use to make a fuller statement of "safe enough".

          18 votes
          1. [3]
            skybrian
            Link Parent

            > So do you have the data, or don't you? You can't say, "we know a lot about whether AVs are better than human drivers" if you don't know how good human drivers are.

            The article set up that framework, but that doesn’t mean we have to use it. We could judge the article by what question they asked and which side they took, but another way to read it is to ask, “okay, what research did they actually do?” I thought they wrote a pretty clear summary of the data, so I got something out of it.

            I don’t think reporters should hide their opinions under a veneer of pseudo-objectivity, but if they do a good job, you can ignore the conclusions and get some useful things out of the article anyway. (A writer can also encourage this by de-emphasizing their conclusions.)

            There’s enough data to draw some conclusions but not others. For example, we can rule out some scenarios where driverless cars are very dangerous.

            There won’t be a natural threshold where the statistics suddenly change and we know the answers. The companies have celebrated arbitrary thresholds like reaching a million miles driven. Two million is better than one million and ten million will be better than that, but you can’t get there without doing the driving, and to decide what pace to expand at, you need to use the data you have.
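
            For intuition on what those mileage milestones actually buy, here's a minimal statistical sketch. It uses the standard "rule of three" bound for zero-event data; the mileage points are mine, and the ~100 million miles per fatal crash baseline is the article's:

            ```python
            import math

            # With zero events observed in m miles, the 95% upper confidence
            # bound on the true event rate is about 3 / m per mile.
            def rate_upper_bound(miles, conf=0.95):
                return -math.log(1.0 - conf) / miles  # ~2.996 / miles at 95%

            for miles in (1e6, 2e6, 10e6, 100e6, 300e6):
                worst = 1.0 / rate_upper_bound(miles)
                print(f"{miles:>13,.0f} clean miles -> could still be 1 per {worst:,.0f} miles")

            # Humans average roughly one fatal crash per 100M miles, so even
            # 10M spotless driverless miles can't statistically beat that;
            # you'd need on the order of 300M clean miles.
            ```

            So each step up the mileage ladder tightens what we can rule out, but only gradually, which is why you can't shortcut the driving.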

            I don’t have an opinion on how fast the driverless car companies should expand. In Arizona, it appears that regulations aren’t what’s holding them back? Waymo certainly took their time on expanding, and presumably they had good reasons. I don’t see a reason to second-guess them in either direction on pace.

            Cruise got their wings clipped by the California DMV and I don’t see that as being wrong either. They’ll still improve but it will take longer.

            > The AV companies will say, "we have procedures for how an emergency responder can direct the car," but figuring that out is not the emergency responder's job, and shouldn't be.

            I find this framing too one-sided. We shouldn’t expect unreasonable things from emergency responders, but as the world changes, it seems like they will sometimes need to learn new skills? Ideally this will be a cooperative learning process. You not only need procedures, you need to practice them and make sure they work.

            That’s another concern that feeds into setting the pace of rollout. It’s a learning process for everyone. A city needs to get comfortable having driverless cars around, and they will have more experience with them in a year than they have now.

            3 votes
            1. [2]
              first-must-burn
              Link Parent

              > I find this framing too one-sided. We shouldn’t expect unreasonable things from emergency responders, but as the world changes, it seems like they will sometimes need to learn new skills? Ideally this will be a cooperative learning process. You not only need procedures, you need to practice them and make sure they work.

              Sure, emergency procedures need to evolve, but the onus to do that (and fund it) should be on the AV companies. They should also have to put those procedures in place before deployment. By not insisting on that, we let our tax dollars (through underfunded city services) subsidize the AV companies' efforts.

              If EMS or firefighters are busy dealing with a rogue AV, or can't get to the scene because of one, then we let the people who may die or sustain worse injuries as a result of that delay bear the risks, which is unethical because they are probably not the ones benefiting from the AV deployment.

              4 votes
              1. skybrian
                Link Parent

                I agree in principle, but I don't know enough about what they're doing to find fault.

                2 votes
      2. nrktkt
        Link Parent

        The question of which drivers and in what state is going to be critical for making the eventual leap to vehicles that cannot be piloted by their owners.

        We'd need to see accident data broken down to see how many accidents are caused by sober, decently rested people without a history of accidents. I believe an AV could be a reasonably safe freeway driver, and I'd be glad to let it drive me if I'm tired or would otherwise be driving less than ideally. But I'll want to see way more rigor in the data before I would hand over control all the time.

        1 vote
    3. papasquat
      Link Parent

      “Professional baseball players lose more baseball games than I do, therefore I’m better at baseball than all of them”

      16 votes
  2. [4]
    spit-evil-olive-tips
    Link
    • Exemplary

    ctrl-F "rain" - nothing

    "snow" - nothing

    hmm...

    "weather", "wet", "ice", "traction", "visibility"? still nothing.

    uhhh..."sleet"? "hail"? "graupel"? nope.

    well, that's disappointing.

    he's comparing crash statistics between driverless cars and human-driven cars. and as he says, most of the driverless cars have been in either San Francisco or Phoenix.

    and he doesn't say this explicitly, but the numbers for human-driven cars seem to be from nationwide statistics.

    so this comparison is very apples-to-oranges. that isn't necessarily bad, sometimes there are just inherent limitations in data sets like this.

    but I would expect him to at least mention that as the biggest limitation of the comparison. the driverless cars that have been beta-tested on public roads have been doing so in very favorable weather conditions.

    the closest we get to this is talk about city vs freeway driving:

    > Moreover, Smith said, “these companies are not driving a representative sample of miles.”

    > Both Waymo and Cruise have their driverless cars avoid freeways, which tend to have fewer crashes per mile of driving. Both companies are active in San Francisco, which has more chaotic streets than most US cities.

    the most charitable explanation here is that the author is an AI journalist, not an automotive journalist, so the question of weather just didn't occur to him. but, that's a pretty major blind spot. how do you write a 3,000-word piece about car crashes and not mention adverse weather once?

    did he talk to any experts who are skeptics of driverless cars for this piece? if so, I don't see them quoted anywhere. in a traditional newspaper, you'd probably see that, but this is just some guy's Substack. "both sides" journalism gets a lot of criticism but I think this is one area where it can be useful.


    besides the unacknowledged problems with the statistics themselves, I think there's a fundamental problem with the comparison he's making, above the level of statistics.

    if a human-driven car gets into a crash, we have a legal system with lots of experience in evaluating and assigning blame. in the worst cases, we have the ability for criminal charges (including vehicular homicide or manslaughter for example), and in all other cases we have civil liability.

    with a human driver, you have one single person making the decisions. they can give a statement to the police at the scene. if there's a civil lawsuit or criminal charges filed, they can be subpoenaed and required to testify under oath. and if there's a crime, they're the one getting charged with it.

    with driverless cars, both criminal and civil liability is obscured, because there isn't a human you can point to as the ultimate decision-maker.

    as a human driver, if I get into an accident and then drive away, that's hit & run, which is a crime. here in Washington state for example, it's a felony if the hit & run results in injury or death.

    if a driverless car gets into an accident that includes an injury and then due to a bug in its software doesn't realize it and drives off, that's also a hit & run. does the CEO of Waymo or Cruise get charged with felony hit & run? do any of the software engineers? seems very unlikely. what happens, then? do we just cross our fingers and hope that these driverless cars never have a bug that causes them to commit hit & runs?

    or, say the driverless car hits a pedestrian, recognizes it, and stops. OK, what then? does it just...sit there? a human driver can get out of the car, check on the person, call 911 if they need medical attention, etc. what does Cruise or Waymo do? is that process publicly documented anywhere? that's something that I think would be relevant for the public to know when considering this question of whether these companies should be allowed to beta-test their products on public roads.

    I think it's instructive to look at some other industries for comparison:

    lots of aircraft have autopilot. but, aircraft always have a qualified human pilot in charge of the aircraft. the ultimate responsibility lies with that human pilot. if the autopilot malfunctions or does something unsafe, it's the responsibility of the pilot to disengage the autopilot and fly manually.

    or with bridges and other civil engineering structures, you have a licensed Professional Engineer who signs off on the design and takes responsibility for it. engineers who sign off on faulty designs can (and do) lose their license.

    in both of those cases, you have a similar scenario where it's tempting to let responsibility be diffused and assigned collectively. in the case of planes, you could say Boeing or Airbus as well as Acme Airlines is responsible for the autopilot. in the case of bridges, you could say that Acme Civil Engineering is responsible. but we reject that, and say that at the end of the day, there is one single identifiable person who individually carries the ultimate responsibility.


    another thing I dislike is his use of weasel words in the headline - driverless cars may be safer - but from the text it's clear the argument he's making is that they're probably safer. definitely probably safer.

    > Cruise’s record is not as impressive as Waymo’s, but there’s still reason to think its technology is on par with—and perhaps better than—a human driver.

    ...

    > With all that said, it seems like Waymo cars get into serious crashes at a significantly lower rate than human-driven cars.

    ...

    > This should be an easy call with respect to Waymo, which seems to be safer than a human driver already.

    and this isn't just a "let's compare some statistics" post. he is explicitly making a "and therefore, governments should take a hands-off approach to regulating these companies" point as well.

    like, there's a section heading literally titled "Don’t slow down progress". at the end of the day, this is a fairly generic argument for laissez-faire capitalism, applied to self-driving car companies, and propped up with some shoddy statistics.

    > And so it’s important for policymakers to allow this experiment to continue. Because at scale, safer-than-human driving technology would save a lot of lives.

    ...

    > Ultimately the only way for Cruise to improve its technology is by testing it on public roads. And we’ll all benefit from the widespread availability of self-driving cars that are dramatically safer than human drivers.

    we've gone from "may be safer than human drivers" in the headline to "dramatically safer than human drivers" in the conclusion.

    perhaps a better headline would be "driverless cars are probably safer than human drivers, as long as you ignore adverse weather entirely. and that means companies should be given free rein to continue doing beta-testing on public roads, with minimal or no interference from government regulators"

    20 votes
    1. sparksbet
      Link Parent

      > the most charitable explanation here is that the author is an AI journalist, not an automotive journalist, so the question of weather just didn't occur to him.

      Eh, I'm not inclined to be so charitable. An AI journalist should be even more skeptical of how well a system performs outside of the conditions it was trained in, as this is probably the biggest failure case for AI systems like this. And unless he has personally never lived outside California, I'm baffled that poor weather conditions haven't occurred to him. Can these cars avoid hydroplaning? How do they perform when there's poor visibility from snow? These are probably the situations I want to avoid most as a fallible human driver, because they're fucking dangerous, and they're probably pretty damn rare occurrences for Waymo and Cruise (especially because I bet these companies pause operations during inclement weather for exactly these safety reasons).

      4 votes
    2. [2]
      skybrian
      Link Parent

      The article makes a universal statement and I agree that it's too broad and unjustified. It might be possible to fix it up, though, because whenever there's an average, there's a choice of denominators. It would be reasonable to compare human drivers in Phoenix to driverless cars in Phoenix, and human drivers in San Francisco to driverless cars in San Francisco.

      Little bad weather, sure. (Though SF has rain and fog sometimes.) That can wait until they test in a different city. They might be doing some of that (like in New York City, though I don't know if they drive in winter), but that testing isn't without safety drivers yet. And yes, it's a good point that they're not taking passengers on freeways yet, though maybe they've tested there.

      I don't think being able to blame the pilot has anything to do with airline safety. These are systems. Every crash gets investigated, and sure, sometimes it's "pilot error" or someone else's screwup, like in maintenance. But the thing that makes flying one of the safest forms of travel (despite all appearances) is that the government, along with aircraft manufacturers and airlines, is trying to drive accidents to zero, with systematic fixes. If there's a design flaw, they often ground that model of aircraft. The result is that accidents are very rare and very weird these days.

      I think that carries over pretty well to driverless cars? I don't see any blame-shifting or complicated liability issues; it's pretty clear that Waymo and Cruise are responsible for everything their cars do. This is a lot simpler than other forms of transportation. They record everything and learn from everything and they're trying to drive incidents down to zero so they can safely expand service.

      I think the best case for not having more government regulation, yet, is that they seem to be pretty cautious and doing the right things on their own, because they're responsible for the whole system. There are incidents, but they're pretty rare and fairly random, and it's not clear what a better way to systematically improve things would be. What could a government agency do to improve on it? (More transparency, sure.)

      I don't know if that will last, though. Perhaps someday the operators will get more complacent and problems won't get fixed, and the government will need to step in.

      2 votes
      1. sparksbet
        Link Parent

        > I think the best case for not having more government regulation, yet, is that they seem to be pretty cautious and doing the right things on their own, because they're responsible for the whole system.

        Relying on the good behavior of private companies rather than regulation for safety is a recipe for disaster. If they're doing the right things on their own, how would legislating that they're required to do the right things stifle innovation? This also relies on all private companies in this space being equally cautious, and we know for a fact that some (cough cough, Tesla) are much more reckless when it comes to safety concerns. I'd much rather legally mandate that they be safe than just trust every company that ever enters this space to play safe without any legal requirement to do so.

        4 votes
  3. skybrian
    (edited)
    Link

    From the blog post:

    > For this story, I read through every crash report Waymo and Cruise filed in California this year, as well as reports each company filed about the performance of their driverless vehicles (with no safety drivers) prior to 2023. In total, the two companies reported 102 crashes involving driverless vehicles. That may sound like a lot, but they happened over roughly 6 million miles of driving. That works out to one crash for every 60,000 miles, which is about five years of driving for a typical human motorist.

    > These were overwhelmingly low-speed collisions that did not pose a serious safety risk. A large majority appeared to be the fault of the other driver. This was particularly true for Waymo, whose biggest driving errors included side-swiping an abandoned shopping cart and clipping a parked car’s bumper while pulling over to the curb.

    > Cruise’s record is not as impressive as Waymo’s, but there’s still reason to think its technology is on par with—and perhaps better than—a human driver.

    > Human beings drive close to 100 million miles between fatal crashes, so it’s going to take hundreds of millions of driverless miles for 100 percent certainty on this question. But the evidence for better-than-human performance is starting to pile up, especially for Waymo. And so it’s important for policymakers to allow this experiment to continue. Because at scale, safer-than-human driving technology would save a lot of lives.
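
    Those figures check out on the back of an envelope (assuming a typical US motorist drives about 13,500 miles a year, which is my number, not the post's):

    ```python
    # Rough check of the quoted figures; 13,500 miles/year for a typical
    # US driver is an assumption, not from the blog post.
    crashes, driverless_miles = 102, 6_000_000
    miles_per_crash = driverless_miles / crashes
    print(round(miles_per_crash))                    # 58824 -> "one per ~60,000 miles"
    print(miles_per_crash / 13_500)                  # ~4.4  -> "about five years"

    # And why fatality comparisons are premature:
    human_miles_per_fatal = 100_000_000              # figure quoted above
    print(human_miles_per_fatal / driverless_miles)  # ~17x the miles driven so far
    ```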

    Edit: I guess this should have been under ~transport.

    8 votes
  4. [8]
    petrichor
    Link

    Curious. Who is liable for deaths caused by driverless cars?

    5 votes
    1. [3]
      unkz
      Link Parent

      We’re going to find out soon.

      https://www.reuters.com/business/autos-transportation/tesla-braces-its-first-trial-involving-autopilot-fatality-2023-08-28/

      > SAN FRANCISCO, Aug 28 (Reuters) - Tesla Inc (TSLA.O) is set to defend itself for the first time at trial against allegations that failure of its Autopilot driver assistant feature led to death, in what will likely be a major test of Chief Executive Elon Musk's assertions about the technology.

      7 votes
      1. [2]
        first-must-burn
        Link Parent

        It's worth noting that Tesla's Autopilot is sold as a system that requires human supervision at all times, however murky the marketing around it is in implying that it's more. So Tesla's Autopilot is quite a bit different from the Cruise and Waymo AVs.

        I'm sure the outcome of the Tesla case will be important, but unlikely to set much precedent for L4 AVs.

        4 votes
        1. Eji1700
          Link Parent

          > I'm sure the outcome of the Tesla case will be important, but unlikely to set much precedent for L4 AVs.

          I disagree. The main point is going to be what happens when the car fucks up. Can Tesla prove the human wasn't monitoring their system? If it gave 10 seconds of warning, ok sure, but what happens when it doesn't, or only gives half a second?

          There's a massive gulf between the binary states of "paying attention" and "sleeping at the wheel", and that's where all this litigation is going to pan out. If we treated car autopilot-style systems like planes, none of this stuff would be on the road or approved, but there's so much money behind it that I suspect they're going to jam it through anyways.

          3 votes
    2. [4]
      skybrian
      Link Parent

      Since airlines have to pay claims for plane crashes, my guess is that it's the taxi service. (For those that are basically taxi services.)

      2 votes
      1. [3]
        vektor
        Link Parent

        And I doubt people will buy AVs en masse. The appeal of wider adoption of AVs, IMO, is that it frees me up from the economic torture of either having to own a car, or not having access to a car. Sometimes there really is no better tool for a trip than a car, but that doesn't mean I want one; if I have one, it becomes the economical choice for many more trips, which is how we end up where we are now. With AVs, I can take a "taxi" to Ikea at a reasonable price, haul my stuff home easily, and continue to use the tram to work tomorrow. If I were to buy a car for my Ikea trips (and a bunch of other stuff, of course), the gas price and convenience of my commute could compete with the tram fare.

        So from that perspective, liability will probably stay with the taxi company. Whether they eventually split up into manufacturers and operators is not really a big deal to the public, as I expect even then operators to care about the safety record of the manufacturer and to what degree that exposes them to liability.

        6 votes
        1. [2]
          petrichor
          Link Parent

          > The appeal of wider adoption of AVs, IMO, is that it frees me up from the economic torture of either having to own a car, or not having access to a car.

          How's that differ from borrowing a car?

          1 vote
          1. vektor
            (edited)
            Link Parent

            It isn't. Not really. There's a bunch of tools that can do the job right now, just worse. Regular taxis are relatively expensive; borrowing/renting/carsharing all give you wheels, but you must still do the driving yourself. Depending on how much of that you do, I think the roads will be a lot safer if a machine does it. There are also convenience aspects: all the models that don't come with a driver usually involve picking the car up and dropping it back off at a more or less central location. With a taxi, the ride starts and ends wherever I want, which is good.