Injured person reportedly dies after Cruise cars block first responders, according to reports from the San Francisco Fire Department
Link information
- Authors
- Ariana Bindman
- Published
- Sep 1 2023
- Word count
- 473 words
This is just so unacceptable.
The fact these are on the road with no emergency response method worked out before they were pushed out isn't some "oops, yeah, that's an edge case". A single session of "what do normal drivers need to do" should bring up "yield and respond to emergency services", and such features and abilities should have been demonstrated long before they were allowed on the road. I would not be surprised if it's on some roadmap an engineer drew up, and that someone mentioned "uh hey, we haven't solved this yet" before going live.
Edit-
Thinking about it a bit more, why is Cruise not getting fined/hauled into court for these things? And I don't mean civil. If I parked my car in a way that blocked emergency services I'd be in front of a judge real quick, and yet this company does it and is consequence-free? Just "oops, working out the bugs".
The oft repeated "I'll believe corporations are people when Texas executes one of them" line applies here.
The question of "WHO" blocked the vehicle arises. If there's a human in the car, it's you. So they haul you into court. Who's at fault with the driverless car? Obviously, the short answer is the CEO but we can't just prosecute a job creator like that. So who else do we apply fault to? The programmer who wasn't told to program that function? How's it his fault? The manager who approved the software? He's not a programmer and the required specs were met... And so on. That's why 'the company' isn't in trouble. Because no one is responsible because money.
So they say "if a company does something illegal, take it up in civil court." and leave it to the victim's family.
Why not?
That was sarcasm, obviously, we should be doing that.
That said, there always could be gross negligence on the side of the programming teams - misrepresenting the capabilities of the cars or such... But the company shouldn't get to hold an internal investigation to find that out.
It's up to the CEO to provide evidence in such a case.
Sometimes the guilty aren't really guilty, and it's not always trivial to find out who is. That's why there are expensive and maddeningly complex justice systems in most countries. Not prosecuting anyone for anything because the accused might actually not be guilty is just stupid.
No, that's not how prosecution of crimes works. You are innocent until proven guilty. If we lived in a world where employees were personally held legally responsible for charges brought against their employer, it would not be up to the employee (CEO or otherwise) to provide the prosecutors with evidence that some software developers were responsible. It's up to the prosecutors to gather the evidence themselves and convince a judge/jury that the employee was knowingly and willingly negligent in this issue while acting under their authority. And short of an internal email where the CEO wrote "I understand that these vehicles are not safe, launch them anyway", that's just not going to happen. Should Sundar Pichai go to jail because Google Maps told someone to take a turn that drove them into a lake?
This whole extremist idea of "CEO is bad because they make money, send to jail" is ludicrous. CEOs, especially ones of large companies, are just not that deeply involved in the nitty gritty details of product development. Similarly, first line employees and middle managers are not responsible for deciding the strategic direction of the company.
You're right. I don't know what I was thinking. Probably not much.
There was some pretty lively (for Tildes) discussion a few days ago about an article entitled Driverless cars may already be safer than human drivers. The article discussed why AV crash data might suggest that AVs are already safe enough. The commentary there identified the lazy analysis (e.g. comparing nationwide human driving statistics to fair-weather AV driving statistics).
But this incident blocking emergency vehicles illustrates why tracking crashes is simply not enough. We have seen other news items about AVs interfering with emergency vehicles, but how many near misses go unreported and unaddressed by the AV developers until you end up with an (alleged) fatality?
The cars simply aren't ready to be completely unsupervised, but the industry is rushing to remove safety drivers. They are not operating at a scale where having the cars be attended would be financially onerous. As far as I can tell, this is driven by pressure from investors to see progress.
I've been thinking about the weather issue. I live way up in the north, and during the winter the grip conditions vary A LOT. You can enter a corner with a decent amount of grip, but in the middle of the corner the grip might just disappear. I wonder what the AI driver might do in that sort of situation. Slamming on the brakes is often the worst thing you can do there; are the AI drivers taught to drift?
To get a driver's license in my country you need to go through a skid pad training course with different scenarios: a moose avoidance test, a braking test, braking in a corner, etc. It would be interesting to see an AI driver do the same skid pad course and see how it gets on.
I have a few examples from my own experience. Once I was going from one highway to another, but the connecting ramp had a small bridge portion, and the bridge area was basically just black ice (because it freezes faster than an area that has ground under it). I entered the bend with the normal amount of grip, but as soon as I lifted off the accelerator I got lift-off oversteer and had to use all of my Forza Motorsport skills to keep the car from hitting the barriers: lightly on the brakes, counter-steering one way, then the other. I was very close to completely spinning at least twice.
Another one was in a similar situation: I was about to join a highway on a two-lane ramp. In front of me was an old van with a trailer going very slowly. I decided (a bit stupidly) to overtake it on the ramp. When I got next to it I started to understeer, so I lifted off and turned a bit harder, which of course made the car oversteer. I managed to hold the drift with counter-steering and a little bit of accelerator, and I was lucky enough to keep it in my lane, so nothing bad happened.
I've been thinking about what an AI driver would have done in those situations. In the first example the car's stability control system was sort of helping and sort of making things worse for me; I feel like I reacted faster than the system, which made things worse in a few spots because my correction coupled with the stability control's corrections equaled an overcorrection. I remember yelling at the car "STOP DOING THAT" while trying to regain control. In the second example I didn't have any electronics helping me (that particular car only had ABS) and I managed to keep everything smooth with no big dramas. From that experience I would personally like to drive a car with minimal electronic "helps", but I do realise not everyone spends their free time driving racing simulators.
The notion of a driving test is not that useful for AVs. (Not that the parent was necessarily promoting that they are, I am just offering information about a common misconception.)
The parent post said:
As you described, in a driving test, they give you one (or a few) of these tests to verify that you've learned the general strategy and been exposed to those conditions ahead of a real event on the road. The expectation is that human drivers can generalize those skills to apply them to lots of different real-world scenarios.
This is why there is an age component to licensing -- you have to be mature enough to have developed some of those generalization skills. And we do see that younger human drivers have more accidents. But as a society, we've benchmarked an age (though this age varies by locality) where most drivers are mature enough to be trusted behind the wheel.
By contrast, the ML parts of an AV rely heavily on lots of training examples. There's no measure of "socially acceptable levels of maturity in generalizing driving strategies" for AVs. So you really have to do a lot of testing to try and get it right.
ML has brittleness and inscrutability problems. Brittleness means that the algorithm might be right 95% of the time, but in that last 5%, the output won't just be degraded but radically wrong. Inscrutability means you don't know why the algorithm makes a certain decision - it is just a bunch of weights in a network that humans can't interpret. So you don't know where that 5% of weirdness is.
The verification strategies for ML generally revolve around having the "right" set of training data. The challenge is to fully represent the environment without over- or under-representing certain parts of it. But the data set is so huge that humans have trouble looking at all of it. So it is a big challenge that I think we still haven't fully gotten to the bottom of.
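To make the brittleness point concrete, here's a minimal, made-up sketch (Python, not any AV company's actual tooling) of how an aggregate accuracy number can hide a rare scenario that the model handles very badly. All the scenario names and counts below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical per-scenario evaluation results: (scenario_tag, model_was_correct).
# The counts are invented purely to illustrate the point.
results = (
    [("clear_day", True)] * 900 + [("clear_day", False)] * 20
    + [("night_rain", True)] * 60 + [("night_rain", False)] * 5
    + [("emergency_scene", True)] * 3 + [("emergency_scene", False)] * 12
)

by_scenario = defaultdict(lambda: [0, 0])  # tag -> [correct, total]
for tag, correct in results:
    by_scenario[tag][0] += int(correct)
    by_scenario[tag][1] += 1

overall = sum(c for c, _ in by_scenario.values()) / len(results)
print(f"overall: {overall:.1%}")  # ~96% -- looks healthy in aggregate
for tag, (correct, total) in sorted(by_scenario.items()):
    print(f"{tag:>16}: {correct / total:.1%} over {total} cases")
# "emergency_scene" is both rare in the data and badly handled (20% correct),
# exactly the kind of gap that an aggregate metric hides.
```

The same slicing idea applies to the training set itself: without some way of tagging and counting scenarios, it's hard to even tell which parts of the environment are under-represented.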
Yeah, I didn't mean that at all. I was just curious how an AI driver would fare in conditions that change suddenly, for example coming across black ice in the middle of a corner.
Buried the hell out of that lede.
While I agree the cop car in the way is a problem, presumably the police officer is there to do their job; there's a reason it's on site. Is it a bad parking job? Probably. However, the way I understand the article, it wouldn't have been a problem if the 'autonomous' vehicles weren't there.
Compare that to the other vehicles in the way. Are they helping with the emergency or directing traffic or serving some function? Obviously not. There’s zero justification for that, and apparently this is a frequent problem.
While looking into this in more detail, I found a short PowerPoint presentation from the various SF municipal organizations which outlines the details and explains some of the issues with autonomous vehicles that still really need to get ironed out. Human drivers are in general quite good at getting out of the way of emergency vehicles, and even when drivers do not speak English they typically respond to gestures quite readily and can manage to maneuver out of the way. It seems like (unsurprisingly) most of the AV operators are hard to get in contact with, and the systems don't seem to have ways for the operators to be notified when it's urgent to get their attention.
Another article recently linked on Tildes mentions the 55 tracked incidents in six months as well.
55 incidents in 6 months is nearly one every three days. What if there's a fire and multiple people die?
Why can't autonomous cars be programmed to "slow down and pull over when an emergency signal is received, and temporarily give up control to emergency personnel"? Basically, behave like a human is legally required to, and then behave like a parked car that can be driven out of the way.
The only reason they don't have a similar thing programmed in is profits.
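As a rough illustration of the behavior being proposed here, a toy sketch in Python follows. The class and method names (`EmergencyYieldController`, `on_responder_override`, etc.) are invented for this example and aren't anything Cruise or Waymo actually exposes; it's just the "yield, stop, then hand over control like a parked car" idea expressed as a small state machine.

```python
from enum import Enum, auto


class Mode(Enum):
    NORMAL = auto()
    PULLING_OVER = auto()        # emergency signal detected, finding the curb
    YIELDED = auto()             # stopped at the curb, hazards on
    RESPONDER_CONTROL = auto()   # authorized first responder may move the car


class EmergencyYieldController:
    """Hypothetical sketch of the yield-and-hand-over behavior."""

    def __init__(self):
        self.mode = Mode.NORMAL

    def on_emergency_signal(self):
        # Siren/lights detected, or a dispatch broadcast received.
        if self.mode == Mode.NORMAL:
            self.mode = Mode.PULLING_OVER

    def on_stopped_at_curb(self):
        if self.mode == Mode.PULLING_OVER:
            self.mode = Mode.YIELDED

    def on_responder_override(self, credential_ok: bool):
        # Hand the vehicle over only to an authenticated first responder.
        if self.mode == Mode.YIELDED and credential_ok:
            self.mode = Mode.RESPONDER_CONTROL

    def on_scene_cleared(self):
        self.mode = Mode.NORMAL


ctrl = EmergencyYieldController()
ctrl.on_emergency_signal()
ctrl.on_stopped_at_curb()
ctrl.on_responder_override(credential_ok=True)
print(ctrl.mode)  # Mode.RESPONDER_CONTROL
```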
While that may play a part, to say it's the only thing seems short-sighted. They are experiencing bad PR, and presumably it also costs them money to not just let first responders deal with the issue, so it's not even all that clear-cut how much they might be gaining or not losing by taking this approach. It's questionable what kind of profit is involved, or that it's enough to be intentionally problematic towards first responders.
https://waymo.com/firstresponders/
Waymo does say their vehicles pull over when they detect emergency vehicles with signals on, and that they have the ability to give manual control of the vehicles over to first responders in an emergency, though it is not automated. I don't know if Cruise has the same capabilities.
One thing to note is that autonomous taxi vehicles are highly likely to have passengers in them. I suspect a great deal of caution and care must be taken to design the car and the process in such a way that it does not put the passenger in danger to relinquish control of the car. Just automatically giving over manual control to the first person to walk up to the vehicle when an emergency vehicle is around might not be the safest option.
The information I feel is always missing from articles like this is how do driverless vehicles compare to human drivers on the same metric? I'm sure emergency vehicles are occasionally impeded by non-autonomous vehicles (I've seen it happen); is the rate at which driverless vehicles do this significantly worse? What happened here is clearly tragic, but without that metric as a starting point a lot of this kind of comes off like fear mongering or outrage bait.
What does responsibility matter if the comparative rates were such that autonomous cars weren't any more of a nuisance?
Responsibility matters when you're trying to influence something. Society does not benefit from someone being ticketed, they benefit from people who want to avoid being ticketed and make attempts to avoid the offending behavior. Now you might say they benefit from the ticket revenue, but it seems that revenue is rather small and probably barely covers the processes and systems in place to handle the ticketing.
https://www.urban.org/policy-centers/cross-center-initiatives/state-and-local-finance-initiative/state-and-local-backgrounders/fines-and-forfeitures
So responsibility is primarily beneficial for curbing unwanted behaviors.
If an autonomous car made by a company has hardly any responsibilities for the consequences of the mistakes of the vehicle, and comparatively produces less unwanted behavior than human drivers, then one can make the argument that it's better than human drivers in spite of the fact that they have less reason to be more competent than human drivers.
Of course there's another side to that, I'm not pretending that there isn't, nor am I making the case that the ends justify the means, but you claimed it was a form of whataboutism and made it seem irrelevant what the comparative value of incidents were between autonomous vehicles and human driven vehicles and I seriously disagree with that.
Of course, other than curbing behaviors, responsibility does serve a few other purposes: making people feel as though there's some form of justice, or attempting to make someone provide compensation for damages they caused. It's also potentially valuable in trying to give people a form of ethical guidelines or moral sensibilities; someone should probably not feel happy about injuring another person because they were driving while looking at their phone. But even that ties in with trying to encourage good behavior and remove bad behaviors on some level. Additionally, if you ever get to a point where the only drivers are autonomous cars and no more human drivers, a lack of responsibility can no longer be justified by comparing the damage caused by the vehicles to non-existent human drivers; no one would look at it that way anymore, and it would be dumb because it removes the incentive to improve.
Moving past that, businesses have responsibility to some extent for pretty much everything they do, certainly not to the extent that individuals have responsibility, and theoretically there would be monetary fines that serve the same purpose as monetary fines for individuals, with more significant fines for more significant unwanted behavior. However, we know that in the US this seems to be a difficult thing to achieve in many cases. Some other countries do it better than ours, so it's not as though it's some pie-in-the-sky idea.
With regards to your other comment: sure, within the context of them currently being better, it's farcical to make such a broad claim given the limited circumstances they're used in. I didn't even give that thread any consideration, because it's not even a remote possibility that these vehicles could be better nationwide or worldwide at the current moment. I'm not supporting the notion that they're better now, but I disagree that it's somehow invalid to compare them to human drivers in the right circumstances, as long as the comparisons are on equal grounds and not apples-to-oranges comparisons that favor autonomous cars driving only in perfect weather or such.
I'll agree with you that not every story needs to be an overarching examination of the individual piece of news it's reporting on. If they had already done such an investigation or examination, they probably would have linked it; if they haven't made one, given the new frontier here, it would be plenty useful and relevant. But in this day and age, a lot of news organizations don't have the resources for that type of reporting anymore.
There is extremely insufficient evidence that this is the case, and this incident is a point against the idea that this is the case, since this failure mode is not something you experience with a car manned by a human driver.
When a car contains a human driver, the emergency services can yell and gesture at them to move. Humans will generally do this, even if they don't speak the same language as the emergency responders, because we're pretty damn good at communicating concepts like "get the fuck out of the way" even nonverbally. This is not the case for autonomous vehicles, whose behavior was equivalent to an empty car in regards to the emergency responders' ability to remove them. Whatever mechanisms Cruise has to handle emergency vehicles are insufficient if getting them to move is slower and more unwieldy than yelling and gesturing at a human driver.
And the idea that it's somehow fine for a technology to lead to people dying who otherwise wouldn't, just because it might be more common for human drivers to lead to deaths? I find that kinda fucked up as a concept. Certainly this "it doesn't matter who's held liable when this results in people dying" attitude doesn't incentivize the companies developing this technology to prioritize safety, and unfounded assumptions that situations like this are definitely rare are not particularly comforting to those whose lives are being put at risk.
You're making conclusions based on their effectiveness now, and I made it pretty clear that my stance is that they're not likely more effective now, but it would be part of their progression. Tracking that progression is how we can tell if they're improving and at what rate, it doesn't have to be better right now to compare them to human drivers, but it would give a better idea of what progress they are making.
All new things have risks. Many of the things you benefit from today, that society benefits from today, could have been the cause of someone's death when they first came about. This might be one of the times where we don't get to offload the consequences of risk onto developing or least-developed countries, so we don't get to pretend or ignore the consequences altogether. I suspect this whole comment section, your comments and mine, would not exist if Google and GM could develop this tech at the same rate in an African country, for example. I think it's important to keep this in mind, because it's basically a form of NIMBYism, where we don't attempt to consider the impact of what happens when we push something away from the things we personally care about.
So it's better if more deaths happen then? I think it's interesting that you phrased the first part as a certainty, that it leads to people dying who otherwise wouldn't, then phrased the second part as an uncertainty, that it "might be more common for human drivers to lead to deaths". There's no might about it: in this hypothetical (which you seemingly acknowledged by saying "the idea"), for the tech to have become more widespread, it would have to be more common for human drivers to be causing injuries.
I addressed this in my previous comment. Their incentive to improve is to be better than human drivers. Once they're consistently better than human drivers, then of course the paradigm under which they're viewed and judged would shift; they would be compared against previous versions of themselves, and standards would arise from that. Furthermore, I went with the standard of conversation you set about responsibility, and I took the angle that would least benefit companies to make the point that it would matter very little if they had no responsibility if they could make a product that produced outcomes better than human drivers, because responsibility primarily matters in affecting behavior, and their behavior would already be better than humans'. Then I gradually expanded on that to account for how companies do or can have responsibilities imposed on them to affect their behavior, even if it's not the same kind of individual responsibility that humans have.
Who said situations like this were rare? In this particular instance, and accounting for reports of previous ones, it seems pretty evident that Cruise has been negatively impacting traffic and emergency response way more than Waymo has. When talking about the present concerns, it merits looking at Cruise and why they're having problems that Waymo isn't. And the fact that Waymo hasn't been making news for all of these problems makes it unreasonable to broadly paint all autonomous vehicles with the same brush, which you are doing.
I don't think I can continue this discussion in good faith given the contents of this comment without getting way too angry, so I'm going to bow out of this conversation.
Correct me if I'm wrong, but it sounds like you're saying the important aspect of evaluating whether an alternative to human-driven vehicles is a good thing is not whether it is statistically safer and causes less harm, but instead is whether or not you can hold a specific human accountable for the harm caused? That is worlds away from where I stand--I would much rather live in a world that is statistically safer for me and my loved ones than a world where we prioritize the ability to lay blame on individuals when something goes wrong.
Despite your claims to the contrary, over one third of the linked article is dedicated to painting driverless vehicles as problematic, including a quote calling them "death traps." I'm not even saying that's untrue, just that without couching that within the greater context of human drivers it has all the trappings of sensationalism and not honest reporting.
This is one of the things I'm reading from this as well, in comments all over this thread. There's even a suggestion that the CEO of the company should be held criminally liable for these vehicles. I'm curious about this viewpoint, and wonder if this (need for justice) is more prevalent among Americans than the rest of the world? Is this why the US has one of the highest incarceration rates, and why our prison system is about punishment instead of reform?
People are asking "why can't it be programmed to do this or that", as if updates for these vehicles don't exist. It's like saying that the Wright brothers should have never even bothered inventing flight because early planes crashed more frequently, or that the Model T should have never been allowed onto the road because it didn't have air bags and its windshield wasn't made of shatter-proof glass. Technological advances have to start somewhere, and hopefully they improve over time. It's unfortunate, but every safety regulation we have today is written in blood.
Why does the "buck have to stop" with anyone? This is what I'm talking about in my comment, and you took my examples too literally. Most of the comments in this thread are all about "accountability" and "justice", instead of talking about what can be done to learn from our mistakes and move on from them.
You're right, it's not. The reason is because we (Americans) have this sick penchant for "justice", which usually involves locking someone away as punishment. And no matter how long you're actually sentenced for, that punishment is life-long. This is exactly what I'm against, when this idea of criminally charging individuals for corporate missteps comes up.
For careers that are licensed, that's a different story. Losing your license to autonomously practice licensed duties is quite different from receiving criminal charges.
And that engineer you cited was found not guilty. Proving individual criminal negligence beyond a reasonable doubt, in a scenario where there are many potential parties to blame, is rarely feasible. That case took 5 years to resolve, which is why prosecutors tend not to go after stuff like this.
I'm against the idea of bringing criminal charges against individuals who are collaboratively and legally performing work for a company. It's one thing if they're breaking laws, but that's not the case here. This is a case where a product/service, which was operating legally, did not do what it was supposed to do. It needs to be fixed. People don't need to go to jail.
Perhaps, but that's not what happened here. California's DMV literally has a program in place where you can test your autonomous vehicles on public roads. So Cruise literally had permission to do what they were doing.
Again, you're taking my comments too literally. Obviously you and I had nothing to do with Cruise's situation. I'm talking, in general, about society's obsession with sending people to jail as punishment.
No, I don't think that jail time was justified. How did that help victims? This is, again, using jail for punishment instead of rehabilitation (or for those who really can't be rehabilitated, keeping them away from the rest of functioning society). A more appropriate punishment would've been more along the lines of revoking his company's licenses, issuing fines (they did this), paying restitution to the victims (even if it means liquidating company assets), and maybe even preventing that guy from obtaining future licenses to operate in that industry again. Jail didn't do anyone any favors in this situation. The current problem with fines, that has people reaching for other solutions (like jail), is that the fines are too lenient. They need to be more severe. The fine needs to be several times the amount of the money that was or could've been made by breaking the law.
I'm not saying there should be no punishment. I'm saying that the company should be punished, not the employees. Maybe the repercussions are fines, and/or maybe they lose their license to operate until changes are made, much like you or I might lose our driver's license and then take a driving class to restore it. I also agree that fines against humans should differ from fines against companies. My whole point is that neither the CEO nor the software developers should be going to jail over this.
Are emergency vehicles not allowed to push vehicles out of the way? I remember seeing a video of a firetruck pushing a police car out of the way.
Based on the report linked in the article, the ambulance would’ve had to push the AVs into oncoming traffic from the cross-street. The incident took place on Harrison Street, a one-way street heading westbound and crossing Seventh Street.