52 votes

Injured person dies after Cruise cars block first responders, according to reports from the San Francisco Fire Department

31 comments

  1. [7]
    Eji1700
    Link

    This is just so unacceptable.

    The fact these are on the road with 0 emergency response method worked out before they were pushed out isn't some "oops yeah that's an edge case". A single session of "what do normal drivers need to do" should bring up "yield and respond to emergency services", and such features and abilities should have been demonstrated long before they were allowed on the road. I would not be surprised if it's on some roadmap an engineer drew up and mentioned "uh hey, we haven't solved this yet" before going live.

    Edit-

    Thinking about it a bit more, why is Cruise not getting fined/hauled into court for these things? And I don't mean civil. If I parked my car in a way that blocked emergency services I'd be in front of a judge real quick, and yet this company does it and is consequence free? Just "oops working out the bugs"

    55 votes
    1. [6]
      Sodliddesu
      Link Parent

      yet this company does it and is consequence free? Just "oops working out the bugs"

      The oft repeated "I'll believe corporations are people when Texas executes one of them" line applies here.

      The question of "WHO" blocked the vehicle arises. If there's a human in the car, it's you. So they haul you into court. Who's at fault with the driverless car? Obviously, the short answer is the CEO but we can't just prosecute a job creator like that. So who else do we apply fault to? The programmer who wasn't told to program that function? How's it his fault? The manager who approved the software? He's not a programmer and the required specs were met... And so on. That's why 'the company' isn't in trouble. Because no one is responsible because money.

      So they say "if a company does something illegal, take it up in civil court" and leave it to the victim's family.

      43 votes
      1. [5]
        markh
        Link Parent
        Why not?

        Obviously, the short answer is the CEO but we can't just prosecute a job creator like that.

        Why not?

        9 votes
        1. [4]
          Sodliddesu
          Link Parent

          That was sarcasm, obviously, we should be doing that.

          That said, there always could be gross negligence on the side of the programming teams - misrepresenting the capabilities of the cars or such... But the company shouldn't get to hold an internal investigation to find that out.

          13 votes
          1. [3]
            qob
            Link Parent

            there always could be gross negligence on the side of the programming teams

            It's up to the CEO to provide evidence in such a case.

            Sometimes the guilty aren't really guilty, and it's not always trivial to find out who is. That's why there are expensive and maddeningly complex justice systems in most countries. Not prosecuting anyone for anything because the accused might actually not be guilty is just stupid.

            3 votes
            1. [2]
              devilized
              Link Parent

              No, that's not how prosecution of crimes works. You are innocent until proven guilty. If we lived in a world where employees were personally held legally responsible for charges brought against their employer, it still wouldn't be up to the employee (CEO or otherwise) to provide the prosecutors with evidence that some software developers were responsible. It's up to the prosecutors to gather the evidence themselves and convince a judge/jury that the employee was knowingly and willingly negligent in this issue, while acting under their authority. And short of an internal email where the CEO wrote "I understand that these vehicles are not safe, launch them anyway", that's just not going to happen. Should Sundar Pichai go to jail because Google Maps told someone to take a turn that drove them into a lake?

              This whole extremist idea of "CEO is bad because they make money, send to jail" is ludicrous. CEOs, especially ones of large companies, are just not that deeply involved in the nitty gritty details of product development. Similarly, first line employees and middle managers are not responsible for deciding the strategic direction of the company.

              7 votes
              1. qob
                Link Parent

                You're right. I don't know what I was thinking. Probably not much.

                1 vote
  2. [4]
    first-must-burn
    Link

    There was some pretty lively (for Tildes) discussion a few days ago about an article entitled Driverless cars may already be safer than human drivers. The article had a discussion of why AV crash data might suggest that AVs are already safe enough. The commentary there identified the lazy analysis (e.g. comparing nationwide human driving statistics to fair weather AV driving statistics).
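
    To make that apples-to-oranges point concrete, here is a rough sketch of the difference between a naive comparison and one normalized by exposure and conditions. Every number below is invented purely for illustration:

        # Hypothetical sketch: why stratifying by driving conditions matters.
        # All incident counts and mileages below are invented.

        def rate_per_million_miles(incidents, miles):
            return incidents / miles * 1_000_000

        # (incidents, miles driven), broken out by conditions
        human = {"fair": (300, 50_000_000), "adverse": (700, 20_000_000)}
        av = {"fair": (20, 5_000_000)}  # AVs barely operate in bad weather

        # Naive comparison: lump all human driving together.
        naive_human = rate_per_million_miles(
            sum(i for i, _ in human.values()),
            sum(m for _, m in human.values()))
        naive_av = rate_per_million_miles(*av["fair"])
        print(naive_human, naive_av)  # ~14.3 vs 4.0 -- AVs look far better

        # Fair comparison: same conditions only.
        print(rate_per_million_miles(*human["fair"]),
              rate_per_million_miles(*av["fair"]))  # 6.0 vs 4.0 -- gap shrinks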

    But this incident blocking emergency vehicles illustrates why tracking crashes is simply not enough. We have seen other news items about AVs interfering with emergency vehicles, but how many near misses go unreported and unaddressed by the AV developers until you end up with an (alleged) fatality?

    The cars simply aren't ready to be completely unsupervised, but the industry is rushing to remove safety drivers. They are not operating at a scale where having the cars be attended would be financially onerous. As far as I can tell, this is driven by pressure from investors to see progress.

    47 votes
    1. [3]
      ParatiisinSahakielet
      Link Parent

      (e.g. comparing nationwide human driving statistics to fair weather AV driving statistics)

      I've been thinking about the weather issue... I live way up in the north and during the winter the grip conditions vary A LOT. Like, you can enter a corner with a decent amount of grip but in the middle of the corner the grip might just disappear. I wonder what the AI driver might do in that sort of situation. Slamming on the brakes is often the worst thing you can do in that situation; are the AI drivers taught to drift?

      To get a drivers license in my country you need to go through a skid pad training thing where there are different scenarios: a moose avoidance test, braking test, braking in a corner test etc. Would be interesting to see an AI driver do the same skid pad course and see how it gets on.

      I have a few examples from my own experience. Once I was going from one highway to another but the connecting ramp had a small bridge portion, and the bridge area was basically just black ice (because it freezes faster than an area that has ground under it). I entered the bend with the normal amount of grip; as soon as I lifted off the accelerator I got lift-off oversteer and had to use all of my Forza Motorsport skills to keep the car from hitting the barriers. I was lightly on the brakes and counter-steering one way, then another. I was very close to completely spinning at least twice.

      Another one was a similar situation: I was about to join a highway on a two lane ramp. In front of me was an old van with a trailer going very slowly. I decided (a bit stupidly) to overtake it on the ramp. When I got next to it I started to understeer, so I lifted off and turned a bit harder, which of course made the car oversteer. I managed to hold the drift with counter-steering and a little bit of accelerator, and I was lucky enough to keep it in my lane and nothing bad happened.

      I've been thinking about what an AI driver would have done in those situations. In the first example the car's stability control system was sort of helping and sort of making things worse for me; I feel like I reacted faster than the system, which made things worse in a few spots because my correction coupled with the stability control's corrections equaled an overcorrection. I remember yelling at the car "STOP DOING THAT" while trying to regain control. In the second example I didn't have any electronics helping me (that particular car only had ABS) and I managed to keep everything smooth with no big dramas. From that experience I would personally like to drive a car with minimal electronic "helps", but I do realise not everyone spends their free time driving racing simulators.

      10 votes
      1. [2]
        first-must-burn
        Link Parent

        The notion of a driving test is not that useful for AVs. (Not that the parent was necessarily promoting that they are, I am just offering information about a common misconception.)

        The parent post said:

        To get a drivers license in my country you need to go through a skid pad training thing where there are different scenarios: a moose avoidance test, braking test, braking in a corner test etc.

        As you described, in a driving test, they give you one (or a few) of these tests to verify that you've learned the general strategy and been exposed to those conditions ahead of a real event on the road. The expectation is that human drivers can generalize those skills to apply them to lots of different real-world scenarios.

        This is why there is an age component to licensing -- you have to be mature enough to have developed some of those generalization skills. And we do see that human drivers who are younger do have more accidents. But as a society, we've benchmarked an age (though this age varies by locality) where most drivers are mature enough to be trusted behind the wheel.

        By contrast, the ML parts of an AV rely heavily on lots of training examples. There's no measure of "socially acceptable levels of maturity in generalizing driving strategies" for AVs. So you really have to do a lot of testing to try and get it right.

        ML has brittleness and inscrutability problems. Brittleness means that the algorithm might be right 95% of the time, but in that last 5%, the output won't just be degraded but radically wrong. Inscrutability means you don't know why the algorithm makes a certain decision - it is just a bunch of weights in a network that humans can't interpret. So you don't know where that 5% of weirdness is.

        The verification strategies for ML generally revolve around having the "right" set of training data. The challenge is to fully represent the environment without over- or under-representing certain parts of it. But the data set is so huge that humans have trouble looking at all of it. So it is a big challenge that I think we still haven't fully gotten to the bottom of.
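
        As a toy illustration of that last point, even a trivial coverage audit over scenario labels can surface over- or under-represented conditions. The labels, counts, and threshold here are all invented, and real training sets are far too large and high-dimensional for anything this simple to be sufficient:

            # Toy sketch of a training-data coverage audit.
            # Scenario labels, counts, and the 1% threshold are invented.
            from collections import Counter

            scenarios = (["clear_day"] * 9000 + ["rain"] * 800
                         + ["emergency_vehicle"] * 40 + ["black_ice"] * 3)

            counts = Counter(scenarios)
            total = sum(counts.values())

            MIN_SHARE = 0.01  # arbitrary: flag anything under 1% of the data
            for label, n in counts.most_common():
                share = n / total
                flag = "  <-- under-represented" if share < MIN_SHARE else ""
                print(f"{label:20s} {n:6d} ({share:6.2%}){flag}")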

        2 votes
        1. ParatiisinSahakielet
          Link Parent

          The notion of a driving test is not that useful for AVs. (Not that the parent was necessarily promoting that they are, I am just offering information about a common misconception.)

          Yeah, I didn't mean that at all. I was just curious how an AI driver would fare in conditions that change suddenly, for example coming across black ice in the middle of a corner.

          3 votes
  3. [2]
    flymaster
    Link

    Two autonomous Cruise vehicles and an empty San Francisco police vehicle were blocking the only exits from the scene, according to one of the reports, forcing the ambulance to wait while first responders attempted to manually move the Cruise vehicles or locate an officer who could move the police car.

    Buried the hell out of that lede.

    16 votes
    1. 3rdcupcoffee
      Link Parent

      While I agree the cop car in the way is a problem, presumably the police officer is there to do their job. There's a reason it's on site. Is it a bad parking job? Probably. However, the way I am understanding the article, it wouldn't have been a problem if the 'autonomous' vehicles weren't there.

      Compare that to the other vehicles in the way. Are they helping with the emergency or directing traffic or serving some function? Obviously not. There’s zero justification for that, and apparently this is a frequent problem.

      20 votes
  4. Gaywallet
    Link

    Found a short powerpoint presentation from the various SF municipal organizations while looking into more detail; it outlines and explains some of the issues with autonomous vehicles that still really need to get ironed out. Human drivers are in general quite good at getting out of the way of emergency vehicles, and even when drivers do not speak English they typically respond to gestures quite readily and can manage to maneuver out of the way. It seems like (unsurprisingly) most of the AV operators are hard to get in contact with, and the systems don't seem to have ways for the operators to be notified when it's urgent to get their attention.

    15 votes
  5. [2]
    chocobean
    Link

    Another article recently linked on Tildes also mentions the 55 tracked incidents in six months.

    55 incidents in 6 months is nearly one every three days. What if there's a fire and multiple people die?

    Why can't autonomous cars be programmed to "slow down and pull over when an emergency signal is received, and temporarily give up control to emergency personnel"? Basically, behave the way a human is legally required to, and then behave like a parked car that can be driven out of the way.

    The only reason they don't have a similar thing programmed in is profits.
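
    Even a crude sketch of the behavior being asked for fits in a few lines. To be clear, this is a hypothetical state machine for illustration, not how Cruise or Waymo actually structure their software; the hard parts are everything around it (reliably detecting the siren, verifying who gets the override, not endangering a passenger, as the reply below notes):

        # Hypothetical sketch of "yield to emergency services" behavior;
        # not any AV company's actual control logic.
        from enum import Enum, auto

        class Mode(Enum):
            NORMAL = auto()
            PULLING_OVER = auto()
            YIELDED = auto()  # stopped, controls unlocked for responders

        def step(mode, siren_detected, responder_override, scene_clear):
            if mode is Mode.NORMAL and siren_detected:
                return Mode.PULLING_OVER  # slow down, find a safe stop
            if mode is Mode.PULLING_OVER and responder_override:
                return Mode.YIELDED       # hand temporary manual control over
            if mode is Mode.YIELDED and scene_clear:
                return Mode.NORMAL        # resume autonomous operation
            return mode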

    11 votes
    1. Grumble4681
      Link Parent

      The only reason they don't have a similar thing programmed in is profits.

      While that may play a part, saying it's the only reason seems short-sighted. They are experiencing bad PR, and presumably it also costs them money when first responders are left to deal with the issue, so it's not even all that clear-cut how much they gain or avoid losing by taking this approach. It's questionable what kind of profit is involved, or that it's enough to justify being intentionally problematic towards first responders.

      https://waymo.com/firstresponders/

      Waymo also says their vehicles pull over when they detect emergency vehicles with signals on, and that first responders can be given manual control of the vehicles in an emergency, but that handoff is not automated. I don't know if Cruise has the same capabilities.

      One thing to note is that autonomous taxi vehicles are highly likely to have passengers in them. I suspect a great deal of caution and care must be taken to design the car and the process in such a way that it does not put the passenger in danger to relinquish control of the car. Just automatically giving over manual control to the first person to walk up to the vehicle when an emergency vehicle is around might not be the safest option.

      7 votes
  6. [13]
    Rudism
    Link

    The information I feel is always missing from articles like this is how do driverless vehicles compare to human drivers on the same metric? I'm sure emergency vehicles are occasionally impeded by non-autonomous vehicles (I've seen it happen); is the rate at which driverless vehicles do this significantly worse? What happened here is clearly tragic, but without that metric as a starting point a lot of this kind of comes off like fear mongering or outrage bait.

    7 votes
    1. [12]
      spit-evil-olive-tips
      Link Parent

      how do driverless vehicles compare to human drivers on the same metric?

      this is basically a form of whataboutism, and it seems to be increasingly common in every discussion I see about AI-driven cars.

      no, the journalist writing this story about an incident with AI-driven cars is not obligated to give a complete run-down comparing it to similar events happening with human-driven cars. they're writing a 500-word story about one specific event, not a treatise on the general problem of ambulances being blocked by cars. and that does not make it "outrage bait".

      as I mentioned in more detail in another comment, these attempts to do statistical comparisons miss an important point. when a human driver blocks an ambulance, you have individual responsibility - the driver can be ticketed or charged with a crime depending on the severity of the event. AI-driven cars do not have any mechanism for individual accountability. that holds true whether the rate of events is the same between humans and AI drivers, or if the AI drivers have twice as many events, or half as many.

      14 votes
      1. [4]
        Grumble4681
        Link Parent

        What does responsibility matter if the comparative rates were such that autonomous cars weren't any more of a nuisance?

        Responsibility matters when you're trying to influence something. Society does not benefit from someone being ticketed, they benefit from people who want to avoid being ticketed and make attempts to avoid the offending behavior. Now you might say they benefit from the ticket revenue, but it seems that revenue is rather small and probably barely covers the processes and systems in place to handle the ticketing.

        https://www.urban.org/policy-centers/cross-center-initiatives/state-and-local-finance-initiative/state-and-local-backgrounders/fines-and-forfeitures

        So responsibility is primarily beneficial for curbing unwanted behaviors.

        If an autonomous car made by a company has hardly any responsibilities for the consequences of the mistakes of the vehicle, and comparatively produces less unwanted behavior than human drivers, then one can make the argument that it's better than human drivers in spite of the fact that they have less reason to be more competent than human drivers.

        Of course there's another side to that, I'm not pretending that there isn't, nor am I making the case that the ends justify the means, but you claimed it was a form of whataboutism and made it seem irrelevant what the comparative value of incidents were between autonomous vehicles and human driven vehicles and I seriously disagree with that.

        Of course, other than curbing behaviors, responsibility does serve a few other purposes, like making people feel as though there's some form of justice, or making someone provide compensation for damages they caused. It's also potentially valuable in giving some people a form of ethical guidelines or moral sensibilities (someone should probably not feel happy about injuring someone because they were driving while looking at their phone), but even that ties in with trying to encourage good behavior and remove bad behaviors on some level. Additionally, if you ever get to a point where the only drivers are autonomous cars and there are no more human drivers, a lack of responsibility can no longer be justified by comparing the damage caused by the vehicles to non-existent human drivers; no one would look at it that way anymore, and it would be dumb because it removes the incentive to improve.

        Moving past that, pretty much everything businesses do has responsibility attached to some extent, certainly not to the extent that individuals have responsibility, and theoretically there would be monetary fines that would serve the same purpose as monetary fines do for individuals, with more significant fines for more significant unwanted behavior. However, we know that in the US that seems to be a difficult thing to achieve in many cases. Some other countries do this better than ours, so it's not as though it's some pie in the sky idea.

        With regards to your other comment: sure, within the context of them currently being better, it's farcical to make such a broad claim given the limited circumstances they're used in. I didn't even give that thread any consideration, because it's not even a remote consideration to think these vehicles could be better nationwide or worldwide at the current moment. I'm not supporting the notion that they're better now, but I disagree that it's somehow invalid to compare them to human drivers in the right circumstances, so long as the comparisons are on equal grounds, not apples-to-oranges comparisons that benefit autonomous cars by only counting driving in perfect weather or such.

        no, the journalist writing this story about an incident with AI-driven cars is not obligated to give a complete run-down comparing it to similar events happening with human-driven cars. they're writing a 500-word story about one specific event, not a treatise on the general problem of ambulances being blocked by cars. and that does not make it "outrage bait".

        I'll agree with you that not every story needs to be an overarching examination of the individual piece of news that they're reporting on. If they had already done such an investigation or examination then they probably would have linked it, if they haven't made one, given the new frontier here it would be plenty useful and relevant. But this day and age, a lot of news organizations don't have the resources for that type of reporting anymore.

        4 votes
        1. [3]
          sparksbet
          Link Parent

          If an autonomous car made by a company has hardly any responsibilities for the consequences of the mistakes of the vehicle, and comparatively produces less unwanted behavior than human drivers,

          There is extremely insufficient evidence that this is the case, and this incident is a point against the idea that this is the case, since this failure mode is not something you experience with a car manned by a human driver.

          When a car contains a human driver, the emergency services can yell and gesture at them to move. Humans will generally do this, even if they don't speak the same language as the emergency responders, because we're pretty damn good at communicating concepts like "get the fuck out of the way" even nonverbally. This is not the case for autonomous vehicles, whose behavior was equivalent to an empty car's in regards to the emergency responders' ability to remove them. Whatever mechanisms Cruise has to handle emergency vehicles are insufficient if getting them to move is slower and more unwieldy than yelling and gesturing at a human driver.

          And the idea that it's somehow fine for a technology to lead to people dying who otherwise wouldn't, because it might be more common for human drivers to lead to deaths? I find that kinda fucked up as a concept. Certainly this "it doesn't matter who's held liable when this results in people dying" attitude doesn't incentivize the companies developing this technology to prioritize safety, and unfounded assumptions that situations like this are definitely rare are not particularly comforting to those whose lives are being put at risk.

          9 votes
          1. [2]
            Grumble4681
            (edited)
            Link Parent

            There is extremely insufficient evidence that this is the case, and this incident is a point against the idea that this is the case, since this failure mode is not something you experience with a car manned by a human driver.

            You're drawing conclusions based on their effectiveness now, and I made it pretty clear that my stance is that they're not likely more effective now, but that it would be part of their progression. Tracking that progression is how we can tell if they're improving and at what rate; they don't have to be better right now for a comparison to human drivers to give a useful idea of what progress they are making.

            All new things have risks. Many of the things you benefit from today, that society benefits from today, could have been the cause of someone's death when they first came about. This might be one of the times where we don't get to offload the consequences of risk to developing or least developed countries, so we don't just get to pretend or ignore the consequences altogether. I suspect this whole comment section, your comments and mine, would not exist if Google and GM could develop this tech at the same rate in an African country, for example. I think it's important to keep this in mind, because it's basically a form of NIMBYism, where we don't attempt to consider the impact of what happens when we push something away from the things we personally care about.

            And the idea that it's somehow fine for a technology to lead to people dying who otherwise wouldn't, because it might be more common for human drivers to lead to deaths? I find that kinda fucked up as a concept.

            So it's better if more deaths happen, then? I think it's interesting that you phrased the first part as a certainty, that it leads to people dying who otherwise wouldn't, then phrased the second part as an uncertainty, that it "might be more common for human drivers to lead to deaths". There's no might about it: in this hypothetical (which you seemingly acknowledged as such, since you said "the idea"), for the tech to become more widespread, it would have to be more common for human drivers to cause injuries.

            Certainly this "it doesn't matter who's held liable when this results in people dying" attitude doesn't incentivize the companies developing this technology to prioritize safety

            I addressed this in my previous comment. Their incentive to improve is to be better than human drivers. Once they're consistently better than human drivers, then of course the paradigm under which they're viewed and judged would shift: they would be compared against previous versions of themselves, and standards would arise from that. Furthermore, I went with the standard of conversation you set about responsibility, and I took the angle that would least benefit companies, to make the point that it would matter very little if they had no responsibility if they could make a product that produced outcomes better than human drivers, because responsibility primarily matters in affecting behavior, and their behavior would already be better than humans'. Then I gradually expanded on that to account for how companies do or can have responsibilities imposed on them to affect their behavior, even if it's not the same kind of individual responsibility that humans have.

            unfounded assumptions that situations like this are definitely rare are not particularly comforting to those whose lives are being put at risk.

            Who said situations like this were rare? In this particular instance, and accounting for reports of previous ones, it seems pretty evident that Cruise has been impacting traffic and emergency response way more negatively than Waymo has; when talking about the present concerns, it merits looking at Cruise and why they're having problems that Waymo isn't. But the fact that Waymo hasn't been making news for all of these problems makes it unreasonable to broadly paint all autonomous vehicles with the same brush, which you are doing.

            4 votes
            1. sparksbet
              Link Parent

              I don't think I can continue this discussion in good faith given the contents of this comment without getting way too angry, so I'm going to bow out of this conversation.

      2. [7]
        Rudism
        Link Parent

        when a human driver blocks an ambulance, you have individual responsibility - the driver can be ticketed or charged with a crime depending on the severity of the event.

        Correct me if I'm wrong, but it sounds like you're saying the important aspect of evaluating whether an alternative to human-driven vehicles is a good thing is not whether it is statistically safer and causes less harm, but instead is whether or not you can hold a specific human accountable for the harm caused? That is worlds away from where I stand--I would much rather live in a world that is statistically safer for me and my loved ones than a world where we prioritize the ability to lay blame on individuals when something goes wrong.

        Despite your claims to the contrary, over one third of the linked article is dedicated to painting driverless vehicles as problematic, including a quote calling them "death traps." I'm not even saying that's untrue, just that without couching that within the greater context of human drivers it has all the trappings of sensationalism and not honest reporting.

        4 votes
        1. [5]
          devilized
          Link Parent

          Correct me if I'm wrong, but it sounds like you're saying the important aspect of evaluating whether an alternative to human-driven vehicles is a good thing is not whether it is statistically safer and causes less harm, but instead is whether or not you can hold a specific human accountable for the harm caused?

          This is one of the things I'm reading from this as well, in comments all over this thread. There's even a suggestion that the CEO of the company should be held criminally liable for these vehicles. I'm curious about this viewpoint, and wonder if this (need for justice) is more prevalent among Americans than the rest of the world? Is this why the US has one of the highest incarceration rates, and why our prison system is about punishment instead of reform?

          People are asking "why can't it be programmed to do this or that", as if updates for these vehicles don't exist. It's like saying that the Wright brothers should have never even bothered inventing flight because early planes crashed more frequently. Or that the Model T should have never been allowed onto the road because it didn't have air bags and its windshield wasn't made of shatter-proof glass. Technological advances have to start somewhere, and hopefully they improve over time. It's unfortunate, but every safety regulation we have today is written in blood.

          4 votes
          1. [4]
            spit-evil-olive-tips
            Link Parent

            There's even a suggestion that the CEO of the company should be held criminally liable for these vehicles.

            if not the CEO, then who? where should the buck stop, if anywhere?

            if a bridge collapses, the engineer who signed off on the design can lose their license. criminal negligence charges are also possible.

            it sounds like you're against the idea of individualized responsibility in the case of software engineering negligence with AI-driven cars. do you also think we should repeal the laws allowing for individualized responsibility in cases of civil engineering negligence? what makes the two different, if anything?

            I'm curious about this viewpoint, and wonder if this (need for justice) is more prevalent among Americans than the rest of the world? Is this why the US has one of the highest incarceration rates, and why our prison system is about punishment instead of reform?

            putting CEOs in jail is not the reason our prisons are overcrowded.

            It's like saying that the Wright brothers should have never even bothered inventing flight because early planes crashed more frequently.

            this is a really weird comparison.

            the Wright brothers lived in Ohio, but they chose Kitty Hawk in North Carolina due to its favorable winds, and also because it was an isolated, unpopulated area where they could do their testing safely and without endangering other people.

            if they had instead done test flights over a populated city or town, and had crashed and caused property damage or loss of life, should they have been held liable for that? or should they get a free pass when it comes to accountability because they're doing innovation?

            the comparison to aviation safety is weird for another reason.

            the entire problem with Cruise and Waymo, as I see it, is that they're doing this beta-testing on public streets. they can beta-test whatever they want on their own private test tracks. when they introduce vehicles to public streets, the AI should be held to a minimum quality standard (and I think a single person, whether it's the CEO or CTO or some VP of engineering, should be required to certify that it meets those standards, and face consequences if they lie or misrepresent the AI's capability). as we see here, by not yielding to an ambulance it is clearly not meeting that bare minimum standard.

            (the predictable objection from Cruise and Waymo would be that they can't test "real world" scenarios on their private test tracks, and need public streets, which I think is bullshit. they could easily build fake cities and fill them with test employees who are paid to be unpredictable human drivers/pedestrians/bicyclists/etc. the only issue is that it would be more expensive than they want to spend. so they test in public as a cost-cutting measure.)

            suppose I had an AI-piloted aircraft, and I went to the FAA asking if I could beta-test it on passenger flights from LAX to JFK. I would get laughed out of the room.

            my hypothetical AI-piloted aircraft would be expected to undergo tons of testing away from populated areas, with one or probably two backup human pilots, and with zero passengers on board, before ever being approved for commercial passenger service.

            and if my AI aircraft caused an accident or even a near miss, the excuse of "well, one day it'll be safer than human pilots, and if you squint at the statistics maybe it is already" would absolutely not be acceptable.

            5 votes
            1. [3]
              devilized
              Link Parent

              if not the CEO, then who? where should the buck stop, if anywhere?

              Why does the "buck have to stop" with anyone? This is what I'm talking about in my comment, and you took my examples too literally. Most of the comments in this thread are all about "accountability" and "justice", instead of talking about what can be done to learn from our mistakes and move on from them.

              putting CEOs in jail is not the reason our prisons are overcrowded.

              You're right, it's not. The reason is because we (Americans) have this sick penchant for "justice", which usually involves locking someone away as punishment. And no matter how long you're actually sentenced for, that punishment is life-long. This is exactly what I'm against, when this idea of criminally charging individuals for corporate missteps comes up.

              if a bridge collapses, the engineer who signed off on the design can lose their license.

              For careers that are licensed, that's a different story. Losing your license to autonomously practice licensed duties is quite different from receiving criminal charges.

              criminal negligence charges are also possible.

              And that engineer you cited was found not guilty. Proving individual criminal negligence beyond a reasonable doubt, in a scenario where there are many potential parties to blame, is rarely feasible. That case took 5 years to resolve, which is why prosecutors tend not to go after stuff like this.

              it sounds like you're against the idea of individualized responsibility in the case of software engineering negligence with AI-driven cars

              I'm against the idea of bringing criminal charges against individuals who are collaboratively and legally performing work for a company. It's one thing if they're breaking laws, but that's not the case here. This is a case where a product/service, which was operating legally, did not do what it was supposed to do. It needs to be fixed. People don't need to go to jail.

              suppose I had an AI-piloted aircraft, and I went to the FAA asking if I could beta-test it on passenger flights from LAX to JFK. I would get laughed out of the room.

              Perhaps, but that's not what happened here. California's DMV literally has a program in place where you can test your autonomous vehicles on public roads. So Cruise literally had permission to do what they were doing.

              3 votes
              1. [2]
                spit-evil-olive-tips
                Link Parent
                "our mistakes" is interesting phrasing. it's an example of the exact sort of diffusion of accountability I'm talking about. there's a similar trope of someone saying "mistakes were made" - in the...

                Most of the comments in this thread are all about "accountability" and "justice", instead of talking about what can be done to learn from our mistakes and move on from them.

                "our mistakes" is interesting phrasing. it's an example of the exact sort of diffusion of accountability I'm talking about.

                there's a similar trope of someone saying "mistakes were made" - in the passive voice, removing the actor from the mistake-making.

                who made the mistakes? I don't work for Cruise, or even live in California, so they're certainly not my mistakes. presumably they're not yours either. how is it possible to learn from mistakes if we don't even know who committed them?

                do we just cross our fingers and hope that Cruise learns from "our mistakes"? what if they don't? if 6 months from now another Cruise car blocks an ambulance, what should be done?


                we (Americans) have this sick penchant for "justice", which usually involves locking someone away as punishment. And no matter how long you're actually sentenced for, that punishment is life-long. This is exactly what I'm against, when this idea of criminally charging individuals for corporate missteps comes up.

                "corporate missteps" is another interesting choice of phrase.

                Don Blankenship was the CEO of a coal-mining company. in 2010, an explosion at a mine his company owned killed 29 people.

                he was sentenced to a year in prison for his "corporate missteps". it sounds like you think those criminal charges and prison sentence weren't justified?

                or, if you're fine with him going to prison, where is the dividing line in your mind between "corporate missteps" that warrant jail time, and ones that don't?


                6 months before the explosion:

                At a 2009 Labor Day rally in West Virginia, Blankenship said that federal and state mining regulators are ineffective at improving mine safety, and that the mining companies themselves are better suited to the task and should have less oversight, saying, "Washington and state politicians have no idea how to improve miners' safety."

                it seems to me that this controversy about Cruise is really just an iteration of the same argument as Blankenship was making. it's not really about AI, or cars. instead, it's about the proper role of government in regulating corporations, and AI-driven cars is just the current example.

                the laissez-faire approach, which it seems like you're advocating, is that companies will tend to do the right thing on their own, so there's really no need for the government to punish them if they do the wrong thing. like I said in another comment, the "invisible hand" of the market will take care of it, because if Cruise has a bad safety record presumably people will stop using their taxis and switch to a competitor.

                I think laissez-faire regulation has proven itself to not work, over and over and over again. there needs to be government regulations, and there needs to be actual punishment for breaking those regulations. it obviously shouldn't be criminal charges in every single case, but the threat of criminal charges in the most serious cases needs to be there as an important deterrent.


                California's DMV literally has a program in place where you can test your autonomous vehicles on public roads. So Cruise literally had permission to do what they were doing.

                from the page you linked, if you click on "Autonomous Vehicle Deployment Program", there's a link to Adopted Regulatory Text (PDF). from page 23:

                The manufacturer shall certify that the autonomous technology is designed to detect and respond to roadway situations in compliance with all provisions of the California Vehicle Code and local regulation applicable to the performance of the dynamic driving task in the vehicle's operational design domain, except when necessary to enhance the safety of the vehicle's occupants and/or other road users.

                section 21806 of the California Vehicle Code is the specific provision requiring drivers to yield to emergency vehicles.

                so yes, there's a law in California. but no, Cruise was not following it.

                they certified to California's DMV that their cars were able to comply with all provisions of the California Vehicle Code. and we have clear evidence that isn't true, based on two different Cruise cars both failing to yield to the ambulance.

                Cruise broke the law. should there be any repercussions, and if so, what should they be?

                as I said in another comment, blocking an ambulance is a $490 fine for human drivers. two of Cruise's cars failed to yield to the ambulance. should Cruise pay a $980 fine for this "corporate misstep" and then move on?

                2 votes
                1. devilized
                  Link Parent

                  "our mistakes" is interesting phrasing. it's an example of the exact sort of diffusion of accountability I'm talking about.

                  Again, you're taking my comments too literally. Obviously you and I had nothing to do with Cruise's situation. I'm talking, in general, about society's obsession with sending people to jail as punishment.

                  he was sentenced to a year in prison for his "corporate missteps". it sounds like you think those criminal charges and prison sentence weren't justified?

                  No, I don't think that jail time was justified. How did that help victims? This is, again, using jail for punishment instead of rehabilitation (or for those who really can't be rehabilitated, keeping them away from the rest of functioning society). A more appropriate punishment would've been more along the lines of revoking his company's licenses, issuing fines (they did this), paying restitution to the victims (even if it means liquidating company assets), and maybe even preventing that guy from obtaining future licenses to operate in that industry again. Jail didn't do anyone any favors in this situation. The current problem with fines, that has people reaching for other solutions (like jail), is that the fines are too lenient. They need to be more severe. The fine needs to be several times the amount of the money that was or could've been made by breaking the law.

                  the laissez-faire approach, which it seems like you're advocating, is that companies will tend to do the right thing on their own, so there's really no need for the government to punish them if they do the wrong thing.

                  Cruise broke the law. should there be any repercussions, and if so, what should they be?

                  I'm not saying there should be no punishment. I'm saying that the company should be punished, not the employees. Maybe the repercussions are fines, and/or maybe they lose their licenses to operate until changes are made, much like you or I might lose our drivers license and then take a driving class to restore it. I also agree that fines against humans should differ from fines against companies. My whole point is that neither the CEO nor the software developers should be going to jail over this.

                  3 votes
        2. spit-evil-olive-tips
          Link Parent

          Correct me if I'm wrong, but it sounds like you're saying the important aspect

          I'm saying accountability is an important aspect, not the important aspect.

          the fine for blocking an ambulance in California is $490, plus one point on their driving record which increases insurance costs. that's a fairly significant deterrent to most human drivers. (perhaps not to extremely rich human drivers, for that you'd want something like income-dependent fines)

          Cruise has raised $15 billion in venture capital funding. a $500 fine is absolutely meaningless to them (and there isn't any notion of Cruise having "points" on their driving record).

          if you look at the engineer-hours necessary to write the "always move out of the way for an emergency vehicle" code (and test it), and multiply that by the salaries involved, it comes out to much more than $500. if you look only at Cruise's bottom line, it makes sense to not prioritize writing that code in order to focus on other features that will actually make the company money.
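
          as a back-of-the-envelope sketch (every figure here is a guess for illustration, not Cruise's actual costs):

              # back-of-the-envelope comparison; all figures are invented guesses
              engineers = 4
              weeks = 6                  # design, implement, and test the behavior
              hours_per_week = 40
              loaded_hourly_cost = 150   # salary plus overhead, rough guess

              dev_cost = engineers * weeks * hours_per_week * loaded_hourly_cost
              fine = 490                 # California fine for blocking an ambulance

              print(f"dev cost ~${dev_cost:,} vs. a ${fine} fine")  # ~$144,000 vs. $490
              print(f"fines needed to match: {dev_cost // fine}")   # ~293 incidents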

          I would much rather live in a world that is statistically safer for me and my loved ones than a world where we prioritize the ability to lay blame on individuals when something goes wrong.

          this is a huge false dichotomy.

          safety doesn't just happen. what you're calling "blame", and what I'm calling responsibility and accountability, is an important part of achieving it.

          drinking and driving is unsafe. if I do it, I can have my license suspended or even go to jail for repeat offenses. that's true even if I'm a "safe" drunk driver and never get into a crash. we recognize that driving drunk is inherently unsafe and punish people who do it, as a deterrent to discourage that behavior.

          releasing buggy AI for cars is also unsafe. if Cruise does it, are there any consequences?

          is there any deterrent to Cruise and Waymo releasing unsafe AI drivers, beyond "the invisible hand of the market will sort it out"? that is, if they release unsafe AI, and it causes collisions, that'll result in bad press, and people not wanting to take their AI-driven taxis, and a loss in revenue?

          it seems like the world you want is one where we say "well, Cruise and Waymo and other AI drivers have a better safety record than human drivers, so there's no point in having any consequences if they act irresponsibly"

          for a concrete hypothetical, suppose human drivers have an accident rate of X, and AI drivers from Acme AI Corp have an accident rate of X/3 - three times as safe, one-third as many accidents. yay, that's great.

          then, Acme AI gets sloppy with their software development, releases some buggy AI, and their accident rate doubles. it's now 2/3rds of X. but, they're still safer than human drivers.

          should Acme AI be held accountable for those additional accidents (and presumably, deaths)? if so, how?

          over one third of the linked article is dedicated to painting driverless vehicles as problematic, including a quote calling them "death traps."

          I think you're referring to this, the last three paragraphs from the article?

          Just days earlier, the California Public Utilities Commission voted to expand driverless ride-hailing services in San Francisco. In public comments sent to the commission ahead of the vote on Aug. 10, scores of residents asked commissioners to limit Cruise and Waymo’s expansion, describing the robotaxis as “death traps” and a menace to disabled people and children.

          During an Aug. 7 meeting to discuss safety concerns around autonomous vehicles, San Francisco Fire Department Chief Jeanine Nicholson told the commission that her department had already recorded about 55 reports of driverless cars driving dangerously close to first responders, obstructing travel or blocking stations.

          “And you might say well, 55, that’s not a lot. Well, if it’s your family, it’s a lot,” Nicholson said. “And for me, it's not just your family, it’s everybody's family. I'm responsible for everybody in this city. And so if we don't get to one person, that's one person too many that we didn't get to.”

          how is that sensationalism, or dishonest reporting? this is following the standard "inverted pyramid" style of news writing. the relevant facts about this incident itself are at the top of the article, and then farther down they give more context about the ongoing debate over AI-driven cars in San Francisco. they also link to a previous article that goes into more detail about the safety concerns raised in that public comment period.

          3 votes
  7. [2]
    UntouchedWagons
    Link

    Are emergency vehicles not allowed to push vehicles out of the way? I remember seeing a video of a firetruck pushing a police car out of the way.

    1 vote
    1. nukeman
      Link Parent

      Based on the report linked in the article, the ambulance would’ve had to push the AVs into oncoming traffic from the cross-street. The incident took place on Harrison Street, a one-way street heading westbound and crossing Seventh Street.

      4 votes