I had the same thought.
To be fair, Y2K didn't really happen aside from a few cash registers in England. The orgs who had the COBOL machines quietly took care of the issue ahead of time. Something to be said for having well-known software for a long time - fewer surprises.
I wonder if this and the prior outages from AWS will contribute to another wave of orgs going back to having their own shops, perhaps in a cyclical fashion. I've been hearing grumbles that services like AWS are not cheaper in every single case.
Quietly is not the word I would use... it was very much a panic-room situation for those last few years.
Thankfully 2038 is (still) far enough out, and recognition came early enough, that most of the remediation steps have been done in the background.
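For anyone who hasn't run into it: the 2038 problem is just Y2K for signed 32-bit Unix timestamps. A minimal sketch of the arithmetic in Python:

```python
import datetime
import struct

# A signed 32-bit time_t can count at most 2**31 - 1 seconds
# past the epoch (1970-01-01T00:00:00Z).
max_ts = 2**31 - 1

# The last moment a 32-bit timestamp can represent:
print(datetime.datetime.fromtimestamp(max_ts, tz=datetime.timezone.utc))
# -> 2038-01-19 03:14:07+00:00

# One second later no longer fits in a signed 32-bit field:
try:
    struct.pack("<i", max_ts + 1)  # "<i" = little-endian signed 32-bit int
except struct.error as err:
    print("overflow:", err)
```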
I would use the word quietly.
The various companies fixed their problems without making noise about it or otherwise advertising it.
Aside from articles about the issue in general, you didn't hear about it.
Makes sense; a lot of those companies were financial institutions, not the tech sector. The latter loves bluster. The former probably didn't want to advertise to their clients that their vital systems were flawed.
I wonder if this and the prior outages from AWS will contribute to another wave of orgs going back to having their own shops, perhaps in a cyclical fashion.
I keep seeing this sentiment on the internet, but it's the opposite, no? This disproportionately affected on-prem setups. All the bad outages were on-prem servers and edge devices like kiosks and employee laptops.
Most cloud setups do not use CrowdStrike directly, as you rely on the cloud provider for that kind of infrastructure. Very few cloud instances are Windows servers. If any were, it's an easy fix: you can simply detach the drive virtually, remove the offending driver, and reattach it (sketched below).
If anything, this promotes MORE people moving to AWS, to stop having on-prem devices that you have to pay your own IT people to manage. It actually shows a major downside of having your own boxes.
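For the curious, here's a rough sketch of that detach/fix/reattach dance using boto3. All instance and volume IDs are placeholders, and the actual file deletion happens by hand (or script) on the rescue box, not through this API:

```python
import boto3

ec2 = boto3.client("ec2")

BROKEN = "i-0123456789abcdef0"    # hypothetical crash-looping Windows instance
RESCUE = "i-0fedcba9876543210"    # hypothetical healthy helper instance
VOLUME = "vol-0123456789abcdef0"  # hypothetical boot volume of the broken box

# 1. Stop the broken instance so its root volume can be detached.
ec2.stop_instances(InstanceIds=[BROKEN])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[BROKEN])
ec2.detach_volume(VolumeId=VOLUME, InstanceId=BROKEN)
ec2.get_waiter("volume_available").wait(VolumeIds=[VOLUME])

# 2. Attach it to the rescue instance as a secondary disk...
ec2.attach_volume(VolumeId=VOLUME, InstanceId=RESCUE, Device="/dev/sdf")

# ...log in there and delete the offending driver file from the volume
# (it lived under Windows\System32\drivers\CrowdStrike\), then:

# 3. Give the repaired volume back to the original instance and boot it.
ec2.detach_volume(VolumeId=VOLUME, InstanceId=RESCUE)
ec2.get_waiter("volume_available").wait(VolumeIds=[VOLUME])
ec2.attach_volume(VolumeId=VOLUME, InstanceId=BROKEN, Device="/dev/sda1")
ec2.start_instances(InstanceIds=[BROKEN])
```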
Presumably sung to the tune of "It's Beginning to Look a Lot Like Christmas"..? It has been fascinating to see the ramifications and responses to such a large scale outage. I do wonder what would...
Presumably sung to the tune of "It's Beginning to Look a Lot Like Christmas"?
It has been fascinating to see the ramifications of and responses to such a large-scale outage. I do wonder what would happen in even larger events with less clear resolutions, like a solar flare or a massive cyberattack.
Ah yes, null pointer strikes again. Maybe this will mean increased interest in Rust and eBPF?