26 votes

Anthropic drops flagship safety pledge

10 comments

  1. [2]
    unkz
    Link
    Shades of Google dropping “don’t be evil” (yeah, yeah, now it’s do the right thing in a different document).

    Anthropic, the wildly successful AI company that has cast itself as the most safety-conscious of the top research labs, is dropping the central pledge of its flagship safety policy, company officials tell TIME.

    In 2023, Anthropic committed to never train an AI system unless it could guarantee in advance that the company’s safety measures were adequate. For years, its leaders touted that promise—the central pillar of their Responsible Scaling Policy (RSP)—as evidence that they are a responsible company that would withstand market incentives to rush to develop a potentially dangerous technology.

    13 votes
    1. Aerrol
      Link Parent
      Cool, everything is accelerated in this era, even the speed at which we lose 'don't be evil'. I hate this timeline.

      9 votes
  2. [5]
    moocow1452
    Link
    This wouldn't have anything to do with Anthropic now working on AI models for the Pentagon that by necessity cannot be non-harmful, and therefore infringe on some element of safety for somebody, no?

    9 votes
    1. PelagiusSeptim
      Link Parent
      Since they were just given an ultimatum by the Pentagon, I can't see how this wouldn't be connected.

      8 votes
    2. [3]
      unkz
      Link Parent
      I don’t think it’s that kind of safety they are talking about. This is more like superintelligence risk.

      1. [2]
        moocow1452
        Link Parent
        I don't think it matters that much. A gun is an inherently dangerous object, a military is similarly so, so if your AI is working on plans with a military, somebody's risk is already negotiable.

        More to your point about superintelligence, XKCD made a comic where the author is more worried about what certain people would be empowered to do with an autonomous fleet of kill drones that will follow their orders than if the drones decided to wipe out humanity or maximize paperclips without orders. There's a much richer history of malice on one side of that equation than the other.

        4 votes
        1. unkz
          Link Parent
          Yeah, I just don’t think they ever had a policy of not training bots with the raw intelligence to commit atrocities with human assistance.

          1 vote
  3. [3]
    TonesTones
    Link
    Pete Hegseth recently threatened to cut Anthropic from current and future DOD contracts unless they drop some of their safety measures. This is likely part of their response to that pressure.

    Defense officials warned they could designate Anthropic a supply chain risk or use the Defense Production Act to essentially give the military more authority to use its products even if it doesn’t approve of how they are used.

    Money talks.

    7 votes
    1. unkz
      Link Parent
      Good catch, that probably is a major factor.

      1 vote
    2. Eric_the_Cerise
      Link Parent
      The point I'm surprised ... no, not surprised, I guess, just--somehow--even more disappointed over...

      Snowden was less than 15 years ago.

      Today, the Pentagon is threatening to blacklist Anthropic, explicitly, for not giving them full use of their AI, for A) fully autonomous, AI-powered targeting & strike capabilities, and B) unrestricted, fully autonomous, AI-powered mass surveillance of US citizens.

      This is not a whistleblower thing, not a reporter "scoop", nothing.

      The Pentagon is flat-out stating that it will destroy an AI company if it can't use the AI for mass spying on the US public.

      (Oh yeah ... and killing people w/o human oversight)