39 votes

US prescription market hamstrung for nine days (so far) by ransomware attack

16 comments

  1. patience_limited
    Link

    Reporting from the front lines of those affected... it's not an utter catastrophe...yet. Prescriptions can be filled, but at out-of-pocket prices since the insurance payment verification can't go through.

    United Healthcare generally covers 90-day refills on recurring prescriptions, which means many people who depend on long-term medication have some leeway. Most generic medications for chronic conditions aren't unsustainably costly if paid for out-of-pocket for a short time.

    One coworker said, "{drug} cost me $6, same as with insurance, but it took an hour of waiting to find that out". I was very fortunate that I'd refilled just before the ransomware attack, because otherwise I'd be paying more like $600/month. One of my coworkers has a child with complicated medical needs, to the extent that a random common cold could easily turn into thousands of dollars in prescription costs without coverage.

    It's not ideal to be on a first-name basis with your pharmacist, but I'm a regular and a curious soul. So I asked him about the impacts he was dealing with, and effects on customers. He said there were backend workarounds to see that people didn't go without critical medications, including essentially giving people credit against future insurance payments. But the increased time and effort were obviously wearing on him and the other pharmacy staff. I also discovered that my prescriptions, including the most expensive recurring medication at around $450/mo., had been previously authorized by UHC at a fixed rate. The insurance payment didn't need to be reverified during the outage. Hopefully, this is also the case for many others.

    All that being said, I don't think the U.S. is doing a particularly good job of guarding against systemic vulnerabilities in critical systems. It appalls me that in 2024, it's still possible for a single malicious link or attachment in e-mail to compromise an entire company, that password-spraying attacks are still feasible, and that privileged supply chains aren't certified to a level commensurate with the threat.

    I've never had a 100% cybersecurity role, but I've spent enough time in the healthcare trenches to see which systems are most critical and which most vulnerable, to design some separation between the two, and to protect backups and disaster recovery systems. And yet, in the last three months, I've worked with two major health systems and several smaller facilities that have had to rebuild practically from scratch.

    The ConnectWise ScreenConnect flaw used in the Optum Change Healthcare attacks is just one of many attacks on managed services vendors. The state of the software supply chain remains dire. In my experience, even the largest health systems and insurers continue to treat their information technology departments as undesirable cost centers. There simply isn't enough staff, funding, or in-house sophistication to build secure infrastructure and keep all the necessary servers and software packages up-to-date, let alone manage SaaS and vendor vulnerabilities. There's little or no industry-wide coordination to monitor and manage significant threats. [In some ways, the primitive nature of electronic medical records exchange among disparate systems is protective. I dread the day when some threat actor decides Epic, or its myriad modules, is a good target.]

    26 votes
  2. [9]
    vord
    Link

    The real threat is companies not understanding how to take proper backups.

    An untested backup is not a backup.

    16 votes
    1. [8]
      Eji1700
      Link Parent

      They often get the backups compromised as well

      4 votes
      1. [6]
        vord
        (edited )
        Link Parent

        That's what I'm saying. If you're testing your backups with a regularity that matches your acceptable data loss, a compromised backup can't go unnoticed. You'll know you were compromised the second one of your restores fails.

        A quick backup primer:

        If backing up a database, do a full export to new files, not just a copy of your datafiles.

        Copy the files to a separate backup system and compare a shasum to validate integrity. Take a filesystem snapshot to ensure it remains readable in its current state.

        Validate that the backup works. That means bringing a database live and doing an integrity check by selecting rows and such. If it's files, ensure they can be opened without throwing errors.

        Then you copy those files offsite from the read-only snapshot, and repeat the shasum check.

        Edit: Incremental backups are fine, but full backups should be taken periodically as well.
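
        To make that concrete, here's a minimal sketch of those steps in Python, assuming a Postgres database; the paths, database names, and the some_table used in the restore check are placeholders, and the snapshot and offsite steps are left out:

        import hashlib
        import shutil
        import subprocess
        from pathlib import Path

        DUMP = Path("/var/backups/db-export.sql")         # full export to a new file, not the raw datafiles
        BACKUP_COPY = Path("/mnt/backup/db-export.sql")   # mount on the separate backup system

        def sha256(path: Path) -> str:
            h = hashlib.sha256()
            with path.open("rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            return h.hexdigest()

        # 1. Full export to a new file.
        with DUMP.open("wb") as out:
            subprocess.run(["pg_dump", "production"], stdout=out, check=True)

        # 2. Copy to the backup system and validate integrity with a checksum.
        shutil.copy2(DUMP, BACKUP_COPY)
        assert sha256(DUMP) == sha256(BACKUP_COPY), "checksum mismatch after copy"

        # 3. Validate the backup actually restores: load it into a scratch database
        #    and run a trivial query against it.
        subprocess.run(["createdb", "restore_test"], check=True)
        with BACKUP_COPY.open("rb") as dump:
            subprocess.run(["psql", "restore_test"], stdin=dump, check=True)
        subprocess.run(["psql", "restore_test", "-c", "SELECT count(*) FROM some_table;"], check=True)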

        7 votes
        1. [5]
          mordae
          Link Parent

          No, the issue is that the backup target server is reachable from the backed-up network and uses the same (in M$ environments, domain) accounts/passwords/keys.

          1 vote
          1. [4]
            vord
            (edited )
            Link Parent

            That's bad backup design, and an easily solved problem. Along with whole-system setup, it's the kind of failure that wouldn't be tolerated from a junior sysadmin.

            Your live server should only have enough permissions to read and write data to a mount on your backup server. The backup server should never be able to log into your live server.

            The cause, of course, is generally just broad underfunding, but there are also plenty of people doing this kind of work who shouldn't be.

            6 votes
            1. [3]
              mordae
              (edited )
              Link Parent

              You have it backwards!

              Your line server should not be able to connect to the backup server at all. It should be the backup server that pulls the data from the line server.

              You must assume that the line server has been penetrated and the attacker is trying to infect the rest of the network from there.

              In that sense, the backup server is literally the last bastion for defending your data.

              Having a restricted mount will still result in the whole mount being encrypted. And it means exposing the backup server to active attacks.
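
              A minimal sketch of what the pull side might look like, run on the backup server itself (the hostname, account, key path, and dump command are placeholders); the point is that the line server holds no credentials for this host:

              import subprocess
              from datetime import datetime, timezone
              from pathlib import Path

              LINE_HOST = "backup-reader@line-server.example.com"  # unprivileged, read-only account on the line server
              KEY = "/root/.ssh/pull_backup_ed25519"               # private key lives only on the backup server
              DEST = Path("/srv/backups") / datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ.sql")

              # The backup server initiates the connection and streams the export back;
              # nothing on the line server can open a connection in the other direction.
              with DEST.open("wb") as out:
                  subprocess.run(
                      ["ssh", "-i", KEY, LINE_HOST, "pg_dump production"],
                      stdout=out,
                      check=True,
                  )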

              1. [2]
                vord
                (edited )
                Link Parent

                Having a restricted mount will still result in the whole mount being encrypted.

                Not sure how that's the case, though I didn't detail everything out. The system managing the disk mounts is essentially just the first-tier backup...it's intended as a transient state to the 'full' backup system and is considered untrusted. Essentially each mount is a dedicated sub-volume on a COW filesystem. Each line system gets an account which can only access that sub-volume via an SSHFS mount. When the line system disconnects, a snapshot of the sub-volume is taken, the snapshot gets mounted as read-only in a separate mount for the downstream validation and backup systems to pull and confirm integrity. It's only after integrity is validated that the backup is considered 'safe'.
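
                For illustration, the snapshot-on-disconnect step on that first-tier system might look roughly like this, assuming a btrfs filesystem; the sub-volume paths and naming are made up, and the SSHFS mount is configured on the line system and isn't shown:

                import subprocess
                from datetime import datetime, timezone
                from pathlib import Path

                SUBVOL = Path("/backups/line-system-01")   # writable sub-volume the line system uploads into via SSHFS
                SNAP_DIR = Path("/backups/.snapshots")     # read-only snapshots for the downstream validation/backup tier

                def snapshot_after_disconnect() -> Path:
                    """Snapshot the upload sub-volume once the line system disconnects."""
                    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
                    snap = SNAP_DIR / f"line-system-01-{stamp}"
                    # -r makes the snapshot read-only, so downstream systems can only read from it.
                    subprocess.run(
                        ["btrfs", "subvolume", "snapshot", "-r", str(SUBVOL), str(snap)],
                        check=True,
                    )
                    return snap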

                1. mordae
                  (edited )
                  Link Parent

                  Yeah, you could do that. But you still expose at least the SSH endpoint to the attacker. One bad zero-day exploit from a single server in the fleet and you can kiss your backups goodbye. And you end up with moving parts to synchronize. What if the backup ends up taking too long, for example?

                  A popular way through, without an exploit, is agent forwarding. Some admins are not exactly cautious and sometimes use ssh -A to a server in the fleet. If their key is authorized to connect elsewhere (and it usually is), an infected host can make use of this and log into other servers. This might include your backup server.

                  1 vote
      2. ignorabimus
        Link Parent

        Which surprises me, because in a lot of organisations people just write the data to physical tapes and put them in a warehouse somewhere, which I would think goes a long way toward preventing this.

        3 votes
  3. skybrian
    Link

    From the article:

    Nine days after a Russian-speaking ransomware syndicate took down the biggest US health care payment processor, pharmacies, health care providers, and patients were still scrambling to fill prescriptions for medicines, many of which are lifesaving.

    On Thursday, UnitedHealth Group accused a notorious ransomware gang known both as AlphV and Black Cat of hacking its subsidiary Optum. Optum provides a nationwide network called Change Healthcare, which allows health care providers to manage customer payments and insurance claims. With no easy way for pharmacies to calculate what costs were covered by insurance companies, many had to turn to alternative services or offline methods.

    ...

    In December, the FBI and its equivalent in partner countries announced they had seized much of the AlphV infrastructure in a move that was intended to disrupt the group. AlphV promptly asserted it had unseized its site, leading to a tug-of-war between law enforcement and the group. The crippling of Change Healthcare is a clear sign that AlphV continues to pose a threat to critical parts of the US infrastructure.

    11 votes
  4. [5]
    boxer_dogs_dance
    Link

    This is a good signal that we need to harden our infrastructure and build in redundancies, but I don't think our leaders will do that...

    This is embarrassing.

    11 votes
    1. [4]
      SaltSong
      Link Parent

      Of course they won't. For one thing, it costs money.

      But more importantly, it costs money that will not produce a visible result. It's tricky to sell a certain kind of person on the idea of spending a big pile of money with the end result being "nothing happened." They fail to understand that for a huge number of situations, "nothing happens" is the good result. We should make them play the BSG board game.

      And even if it was gonna cause a visible "win," most of our leaders are short-sighted, and most of our systems are designed to force them to be that way. Elections for senators are every six years. Presidents every four years. Representatives, two years. And "captains of industry" don't often look further ahead than the quarterly earnings.

      Only 33 people in our whole nation's leadership are expected to look as far as six years into the future. No wonder we can't get anything done.

      16 votes
      1. [3]
        Autoxidation
        Link Parent

        For the full year 2023, UnitedHealth Group brought in $371.6 billion in revenue and $22.4 billion in profit.

        These companies can afford the costs necessary to protect their data; they just choose not to do so.

        12 votes
        1. [2]
          DeaconBlue
          Link Parent

          I don't think that there was any debate about whether they can afford to do it. They are profit driven, not ethics driven.

          Of course, there is absolutely no reason for them to invest money into infrastructure. They are big enough and important enough to the population that they get to lean on the government to help fix big problems when they arise.

          9 votes
          1. sparksbet
            Link Parent

            Yeah, there needs to be some actual deterrent that affects the bottom line of the company. Be that an enormous fine when something like this happens, or explicit legislation around security that's actually enforced, with consequences these companies care about enough to actually try to avoid shit like this.

            5 votes