42 votes

Armed with traffic cones, protesters are immobilizing driverless cars in San Francisco

22 comments

  1. [22]
    boxer_dogs_dance
    Link

    So I have a couple takeaways from this. One is people disagree about new technology and protesters are going to protest, especially in the SF Bay area. There is local tension between residents and silicon valley companies and their leadership.

    Two, people designing autonomous technology are going to have to plan for obstructive and even destructive human behavior and make their baby products resilient to being messed with.

    There was this sad story a while back where residents of Philly ended a robot's long cross-country journey for the lulz.
    https://www.usatoday.com/story/news/nation-now/2015/08/03/hitchhiking-robot-destroyed-philadelphia-ending-cross-country-trek/31051589/

    14 votes
    1. [21]
      chocobean
      Link Parent

      I feel like this is completely different from hitchhiker bot though:

      The cars have run red lights, rear-ended a bus and blocked crosswalks and bike paths. In one incident, dozens of confused cars congregated in a residential cul-de-sac, clogging the street. In another, a Waymo ran over and killed a dog.

      Article also mentioned a human death and also disruption to emergency services.

      Hitchhiker bot doesn't get in the way of anyone unwilling to participate. SF residents are turned into lab rats, or unwilling road pylons, with their safety at risk.

      The cone stunt is meant to demonstrate that no, these are not the same as human-driven vehicles, because humans aren't stopped by a cone, and thus they need to be governed differently.

      Elon Musk intentionally programmed his cars to roll through stop signs instead of stopping. That's unacceptable. What else are these companies doing, or neglecting to do? Rolling through yellow safety tape and running reds are extremely dangerous signs that they're not ready for human use.

      31 votes
      1. [3]
        boxer_dogs_dance
        Link Parent

        Yesterday I saw the same article on reddit, and the most popular opinion by far was that the sacrifice of SF residents, and of other testing grounds like Phoenix and Austin, is simply needed for the greater good. Without training data from the real world with real obstacles, the machine won't improve enough to be safe in the real world. I strongly disagree with them, but they don't care.

        I thought the incident where the car drove into freshly poured concrete was funny, but undoubtedly not for the construction site and its management.

        I wrote perhaps too casually. Yes, what happened to hitchhikerbot was not the same as a protest action in motive or message. However, in both cases the machines are vulnerable to interference by humans. And people will absolutely meddle with them for a variety of reasons. I have seen predictions that long haul truck drivers will lose their jobs to automation of driving. My first thought is that maybe those companies will end up hiring the same people back as security guards if the load is at all valuable. I would expect organized crime to be all over that opportunity and reinvent highway robbery if the routes are automated and there is no human security present.

        10 votes
        1. [2]
          Eji1700
          Link Parent

          Yesterday I saw the same article on reddit, and the most popular opinion by far was that the sacrifice of SF residents, and of other testing grounds like Phoenix and Austin, is simply needed for the greater good. Without training data from the real world with real obstacles, the machine won't improve enough to be safe in the real world.

          While I know you're not advancing this view, I cannot overstate how terribly naive this is.

          We have planes regulated to hell because we know that companies will still cut corners and kill people (see the recent MAX issue). The idea being that multiple regulators and engineers have to sign off, and a manufacturer has to be extremely negligent, before people end up dying.

          The car industry is already notorious for companies cutting corners and costing lives, and while it didn't kill anyone, the VW/Harley-Davidson/etc. emissions scandals show what these companies think of regulations that get in the way of profit.

          The handling of driverless cars, and the "testing" they've been put through, is just outright negligent. I will not be surprised when some fatal bug is shown to have been causing accidents for X days/weeks/months and they just ignored/backlogged/whatevered it.

          To give people a simple framework for this, ask yourself:

          If a fatal flaw on the level of the MAX issue were found, do you think they'd "ground" their entire fleet? Because right now I'm not sure any agency could enforce that, and such a flaw is 1000x harder to even detect.

          6 votes
          1. boxer_dogs_dance
            Link Parent

            I agree. We especially need zero tolerance for incidents where the cars interfere with emergency response vehicles or personnel. I wrote a more detailed reply about this elsewhere in the thread.

            4 votes
      2. [17]
        karim
        Link Parent

        The cars have run red lights, rear-ended a bus and blocked crosswalks and bike paths. In one incident, dozens of confused cars congregated in a residential cul-de-sac, clogging the street. In another, a Waymo ran over and killed a dog.

        These seem like exactly the things humans do. Humans constantly run over pets and animals, sometimes in dense residential areas.

        The cone stunt shows me that self-driving cars do follow the rules, and won't try to drive off a cliff just because they wanted to ignore some cones.

        IMO traffic markers should never, ever be ignored. The cone stunt is a social issue, not a technical one.

        8 votes
        1. [6]
          sparksbet
          Link Parent

          These seem like the exact things humans do. Humans constantly run-over pets and animals, sometimes in dense residential areas.

          The principal argument from people pushing for self-driving cars to be deployed as fast as possible is that they're safer than human drivers, though. That's constantly the line whenever reasonable safety concerns about training/testing these vehicles on public roads are brought up. Yes, these are mistakes humans also make all the time, but they're also refutations of this oft-repeated lie that self-driving cars are necessarily safer than human drivers. The fact that manufacturers like Tesla are instructing the National Highway Traffic Safety Administration to omit whether self-driving technology was a factor in accidents/crashes does not inspire confidence for me here either.

          And, of course, there's an absolutely HUGE issue with liability when it comes to autonomous vehicles that hasn't really been fully settled. As @dangeresque points out below, companies like Tesla get away with shitty safety because they're not held liable for the damage and deaths caused by their self-driving cars.

          9 votes
          1. [5]
            Greg
            Link Parent

            They’re not a refutation just by being more than zero, only if the numbers are similar to or greater than those for human drivers.

            Problem is that getting truly comparable numbers could be somewhat tough even in a pure academic study, given the different types of failure and the various second and third order effects beyond that. Add in all the money and vested interests, plus the emotive angles on the other side too, and I have no idea how we’re going to get reliable data to make that call.

            5 votes
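
            (As a rough illustration of what such a comparison involves: the sketch below normalizes incident counts by vehicle-miles and puts crude error bars on the ratio. Every number in it is invented purely for illustration, and the log-normal approximation is just one simple textbook way to do this, not how any regulator or study actually handles it.)

            ```python
            # All numbers here are made up for illustration; they are not real AV or
            # human crash statistics.
            import math

            def per_million_miles(incidents: int, miles: float) -> float:
                """Exposure-normalized rate: incidents per one million vehicle-miles."""
                return incidents / (miles / 1_000_000)

            def rate_ratio_with_ci(k_av: int, miles_av: float,
                                   k_human: int, miles_human: float, z: float = 1.96):
                """AV/human incident-rate ratio with an approximate 95% CI,
                assuming independent Poisson counts (log-normal approximation)."""
                ratio = (k_av / miles_av) / (k_human / miles_human)
                se_log = math.sqrt(1 / k_av + 1 / k_human)  # std. error of log(ratio)
                return ratio, ratio * math.exp(-z * se_log), ratio * math.exp(z * se_log)

            # Hypothetical: 12 reportable incidents over 4M AV miles vs. 300 comparable
            # incidents over 80M human-driven miles in the same city.
            ratio, lo, hi = rate_ratio_with_ci(12, 4e6, 300, 80e6)
            print(f"AV:    {per_million_miles(12, 4e6):.2f} incidents per 1M miles")
            print(f"Human: {per_million_miles(300, 80e6):.2f} incidents per 1M miles")
            print(f"Ratio: {ratio:.2f} (95% CI {lo:.2f} to {hi:.2f})")  # interval straddles 1.0
            ```

            (Even with these generous made-up numbers the interval straddles 1.0, and that's before asking whether the incident definitions, routes and conditions are comparable at all - which is the "truly comparable numbers" problem.)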
            1. [2]
              dangeresque
              Link Parent

              If manufacturers accepted liability for any casualties their self-driving tech causes, then I frankly wouldn't care about normalizing the data or see the need to debate about its safety. Manufacturers wouldn't need to lie about the safety of their cars in order to gain acceptance, because if they weren't safe they'd simply go broke.

              3 votes
              1. Greg
                Link Parent

                Absolutely - although it’s probably fair to say that we wouldn’t need to worry about the data in that situation because the actuaries would be waaaaay ahead of us on it there!

                2 votes
            2. [2]
              sparksbet
              Link Parent

              The manufacturers deliberately preventing collection of data about when autonomous driving is involved in accidents certainly isn't helping with the "getting reliable data" side though.

              1 vote
              1. Greg
                Link Parent

                Oh for sure, that’s largely what I meant about money and vested interests - they’re taking a not-necessarily-simple question and cranking it right up to near impossible.

                1 vote
        2. [9]
          vektor
          Link Parent

          I'm also worried about the waters being muddied by actors like Tesla. Their relationship with road safety seems... tenuous at best. Meanwhile, there are much more responsible actors: Waymo, with their walled-garden approach, seems a much better example. Yet they get "coned" all the same.

          I'm in favor of self-driving cars for reasons I have previously written about [mainly that they free people from the pressure to own a car, thus making it economically more possible to choose the most appropriate transportation for each trip, including public transit]. So from that perspective, I really don't want people to lump the responsible actors in with the irresponsible ones. If Waymo has a shitty safety record: fuck it, cone them. But I'm not nearly convinced of that from the article. And it certainly doesn't help that the list curated by the campaign mixes Cruise with Waymo (with Cruise dominating the list) and mixes up all kinds of incidents: human at fault, AV being unsafe, AV delaying proceedings but remaining safe, ...

          And during all this we're nowhere close to discussing a human baseline. A lot of these incidents seem like ones humans would be very likely to cause too, but because they're AVs, we (1) do not tolerate that their mistake profile looks different (i.e. they're conservative and thus more likely to stop in weird places rather than (foolishly) drive onwards) and (2) expect perfection. Given how little broader conversation there is about shitty driving practices by humans, I feel this could just as well be a massive case of confirmation bias.

          As for SF being the proving ground for all kinds of products like this, that part I can sympathize with.

          6 votes
          1. [3]
            dangeresque
            Link Parent

            The subject I notice is missing from most of these conversations is liability. Tesla gets away with having a shitty safety record because their self-driving tech is beta tested by suckers who pay for the privilege and accept all responsibility for anything their car does because they're "technically" at the wheel. If you force manufacturers to provide unlimited liability insurance for their self-driving vehicles, I think you'll find they quickly become more responsible.

            8 votes
            1. [2]
              vektor
              Link Parent

              Fully agreed. And IMO the line is quite simple: If the person is necessarily alert and active during the process, then they are in control and principally liable. So a sensor pack that wires into your brakes so your car reacts a split second before you could to a child running into the road? Manufacturer is not liable if it fails. It's a nice extra safety feature that hopefully works, but traffic can and will work if it isn't there. Don't tell your customers to rely on it though. (Of course, the manufacturer could still owe the customer if they sold a faulty product. But that's between car manufacturer and customer, not between the parties of the accident.)

              The alternative is systems where the human is expected to intervene when necessary, but is not necessarily alert and active. I'm sorry, Elon, but I have about zero faith in humans being alert enough to intervene effectively in such situations. Even for professional human safety drivers in AV development, this is a stretch. Sure, you can plonk someone in there in the hopes that it'll help, but ultimately if they can't prevent something, they're not to blame IMO. This will, in rare cases, prevent companies like Waymo or Cruise from shifting blame to employees, but mostly it will make Tesla liable for all the shit they cause. If it's basically just a collision avoidance system + GPS navigation + lane keeping, either you treat it as such and require the human to stay in manual control, or you treat it as FSD and accept the liability.

              2 votes
              1. dangeresque
                Link Parent

                imo I can't wait for level 4 to become an actual thing in the market and for level 2 and 3 autonomy to be outright banned. That would simplify the line: You are either driving or you are not driving. There is no in between. There is no "oh this one active safety feature malfunctioned and yanked the wheel to the left and I couldn't overpower it to keep the car straight". There is no "WARNING YOU NEED TO SUDDENLY LEAP OUT OF YOUR HALF-NAP TO LOOK AROUND AND TAKE CONTROL BECAUSE IDIOT COMPUTER CAN'T FIGURE OUT WHAT'S GOING ON" 2 seconds before you plow into a wall and then let the manufacturer claim that you were in control when the car plowed into that wall. All active automated control should be disabled while the driver is driving, and the safety features limited to warnings and alerts.

                3 votes
          2. [5]
            boxer_dogs_dance
            Link Parent

            Thanks for the reply. From my perspective we need zero tolerance for incidents involving emergency response vehicles. If a car can't recognize and avoid an emergency vehicle running with lights and sirens, it needs to be taken off the road until it can. I'm more forgiving regarding emergency response scenes marked with yellow tape, but again the burden needs to be on the companies. If their cars need more signage or signals to avoid accident response scenes, the company needs to provide emergency response organizations with the appropriate signage or electronic signal equipment that will alert their cars, before the cars return to the road. Any such equipment needs to be provided to the emergency responders at the autonomous vehicle companies' expense.

            3 votes
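
            (To make the "electronic signal equipment" idea concrete: a purely hypothetical sketch of what an emergency crew's broadcast to nearby AVs might carry. No such standard is known to exist; every field name here is invented.)

            ```python
            # Hypothetical emergency-scene beacon -- an invented format, not a real standard.
            from dataclasses import dataclass, asdict
            import json
            import time

            @dataclass
            class EmergencySceneBeacon:
                scene_id: str           # identifier assigned by the responding agency
                lat: float              # scene center (WGS84)
                lon: float
                radius_m: float         # keep-out radius an AV should route around
                scene_type: str         # e.g. "fire", "collision", "downed_wires"
                expires_epoch_s: float  # beacon is stale after this time

                def to_json(self) -> str:
                    return json.dumps(asdict(self))

            # A crew on scene could broadcast something like this over a local radio or
            # cellular channel; an AV receiving it would treat the geofence like a closed road.
            beacon = EmergencySceneBeacon(
                scene_id="sffd-example-001",
                lat=37.7749, lon=-122.4194,
                radius_m=150.0,
                scene_type="downed_wires",
                expires_epoch_s=time.time() + 2 * 3600,  # valid for two hours
            )
            print(beacon.to_json())
            ```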
            1. [4]
              vektor
              Link Parent

              I'd agree under one condition: as long as they're as good as humans. Humans also suck. I can't stand that we're letting perfect be the enemy of good enough. Humans will regularly do stupid or unhelpful things when dealing with stressful situations - like a fire truck riding their bumper. As long as their overall disruption to emergency response is about that of an average human driver, I'm good with allowing them on the road.

              Some of the cases I've seen reported in that list don't look great, but are within what I'd expect from human drivers. In one case, a shittily (is that a word?) parked conventional car was basically blocking the road. There was a gap just wide enough for a car, with mere centimeters to spare. An AV was creeping through there, with an emergency response vehicle waiting behind. I'd say there's a good chance a human would've fared worse here. Never mind that fault lies with the driver of the parked car blocking the road. Those cases don't count. In another, an autonomous car drove over inactive fire hoses. Not ok. But there are tons of cases of humans doing such things, in this case literally at the same event.

              Long term, I'm entirely on board with expecting AVs to make literally perfect decisions, and thus fining such behavior. But short term, anything that's as good as a human is good enough. Even if, as harsh as it sounds, people get killed by these vehicles; as long as that's fewer deaths than with the vehicles they replaced, that's a net save.

              Granted, those two examples are potentially sorta cherry picked from the list. There might well be bad misbehavior by AVs in there that I didn't see. If those occur more often than with human drivers, off the road with 'em. But what I did see was a collection of fairly mundane mediocre driving, in a biased source. I'm not even anecdotally convinced by it that these cars are worse than humans, much less statistically.

              Another thing I'd like to see is perhaps a manual override. A way for e.g. a police officer to move the car, even if that car is being silly right now. I'm not sure to what degree that exists, but given that there's not a driver in there that you can order around, we probably need that. But I can also already see how people like the ones in the OP are going to misuse that, if that is technically possible. I'm also not sure to what degree Ops can intervene and interact. Can the police officer request that someone at base takes over manual control of the vehicle, and talk to them about what to do? I dunno. Seems like a thing that should be there, but also seems like a thing that these companies probably have already put in.

              3 votes
              1. dangeresque
                Link Parent

                But short term, anything that's as good as a human is good enough. Even if, as harsh as it sounds, people get killed by these vehicles; as long as that's fewer deaths than with the vehicles they replaced, that's a net save.

                You and I have discussed this elsewhere in the thread, but I think it bears repeating that the key issue here is liability. I think I am truly okay with less-than-perfect as long as the manufacturer bears the liability to the exact same degree that the driver would in any non-autonomous situation. But if the owner bears the liability when the manufacturer's program goes awry, then the manufacturer is only incentivized to make people think their cars are safe rather than to actually make them safe.

                3 votes
              2. [2]
                boxer_dogs_dance
                Link Parent

                I suspect our opinions overlap quite a bit, although they might not match perfectly. I was thinking of this incident: https://www.reuters.com/business/autos-transportation/gms-cruise-robotaxi-collides-with-fire-truck-san-francisco-2023-08-19/

                And more are detailed here:
                https://missionlocal.org/2023/05/waymo-cruise-fire-department-police-san-francisco/

                Some of these incidents really do seem plucked from a Warner Bros. cartoon. Firefighters working in the wake of a March 21 windstorm report two Cruise vehicles rolling through warning tape and straight into the downed Muni wires the fire department was on-scene to deal with.

                Then, like Wile E. Coyote running into the man-sized slingshot, the cars kept rolling until the tension of the wires entangled in their roof apparatus tightened to the point where they ceased driving.

                1 vote
                1. vektor
                  Link Parent

                  The car "did identify the risk of a collision and initiated a braking maneuver, reducing its speed, but was ultimately unable to avoid the collision," the company, which is investigating the incident, said in a statement on Friday.

                  That's unfortunately not very informative if you don't believe them to be forthcoming with unpleasant truths, but depending on the outcome of the investigation, this could be either "unlucky, a human driver wouldn't have prevented this either" or "yeah, nah, a human driver could have seen and heard that truck miles away and stopped in time". These investigations tend to take time. Fortunately, there are sensors all over the car, and I'd fully expect them to have the last few minutes of sensor readings if a crash occurred. So in all likelihood, this question can be answered with the clarity I presumed above: we can replay this with humans in the loop and see if they succeed.

                  Oh, and also: this shows regulators are in the loop. I have much more faith in regulators to (1) have an accurate picture of the situation and relative risks involved and (2) act appropriately and fairly upon that data. Notice how many of these articles mention Waymo and Cruise equivalently, or even put Waymo in the spotlight, when the incident list curated by the coners is 80% Cruise? I doubt the coners distinguish between the two companies when coning, and that's the thing I'm worried about. Yet the regulators do distinguish: Cruise has been asked to halve its active fleet. Meanwhile, a Tesla always has a person inside, thus can't really be coned, yet they're probably worse for safety.

        3. chocobean
          Link Parent

          And I don't want robots driving like humans, at all. I'm asking companies that are making crazy money to do better than humans -- or, okay, sure, then they pay for lives and injuries and damages.

          It's like the stupid Airbnb thing: hotels can cancel on people and have spy cameras and such, sure, but when that happens there's recourse. Airbnb makes customer protection all go away.

          To me, autonomous cars running down pets and impeding emergency vehicles is not a tech issue, it's a social issue: who should pay for it? Right now tech giants are shrugging it off onto citizens.

          5 votes