Goodbye, floppies - San Francisco pays Hitachi $212 million to remove 5.25-inch disks from its light rail service
- Author: Rob Thubron
- Published: Oct 25 2024
- Word count: 428 words
If, like me, you're wondering how removing some floppy drives somehow costs $212M, from the linked Ars Technica article:
It's not just the drives, it's the whole system. Cables, computers, etc. It also funds an expansion of existing train control systems. More info can be found on the SFMTA website.
Leading with the total project cost while attributing it to one minor line item of the project is such a classic misinformation tactic, intentional or otherwise.
It sets the narrative that the project is a misuse of public funds.
Yeah, for some reason tech journalism has a real boner for floppy disks (no joke intended but I'm leaving it in). They replaced a system that was reliable enough to last for decades with a brand new one which means new hardware and software all around - but the old one had a floppy drive, so that's all people need to know, obviously.
It's things like this that make me not follow tech news generally. If there's anything newsworthy I generally get told about it.
Can anyone else verify they were using 5.25” disks? I’d heard they were using floppies, but I always assumed they were 3.5” disks. It’s actually kind of hilarious if they made an investment this big into 5.25” disks in 1998. It was already seen as a legacy technology at that point, and from what I can tell the drives were no longer being mass-produced by 1995. So I’m reeeeally curious what could have led them to make a 25-year investment in an already dead/dying format.
In the early '90s -- to be fair, I was young and not really in the business world -- I feel like technology was not deprecating quite as rapidly, nor as completely.
I think it's viewed as fairly normal now that formats get replaced, that when they are replaced they are replaced quite fully, and that the pace of that replacement is fairly rapid. I'm just not sure we were there in the early '90s. VHS and Beta had their big battle, but it felt like that took about a decade to play out. Maybe the CD was on the horizon, but I don't think it was really mainstream, and I'm pretty sure I was rocking audio cassettes from basically the mid-'80s to the early 2000s with no concept that the format was already done.
If it ain't broke, don't fix it.
I always get frustrated when stories come out about some old piece of infrastructure or military equipment still using floppies or some other old computer technology and people act like that is somehow a bad thing. The fact that a system built decades ago is still perfectly functional today is a good thing, it means we are getting our money's worth.
Yeah. Especially with transportation: car computer chips intentionally lag behind the cutting edge, which helps keep error rates and premature chip failures low.
At the same time, eventually you'll have to upgrade, whether because the demands on the infrastructure change in a way that asks more from the IT elements than they can support, because technical support becomes impossible once no one else uses your tech anymore, or because, as in SF's case, the technology is starting to degrade in a way that will cause catastrophic failure.
And the bigger the jump between what you have now, and what's modern today, the more disruptive and expensive the switch will be.
Eh. Better a radical overhaul every three decades with extended support to cover the time gap than continuous patch jobs that will eventually require overhauls anyway. All this work Hitachi's doing would have to be done regardless, but this way it's being done in a single planned project that includes technology that's matured.
An analogy would be that replacing your drivers' Nokias with a newer model every year since 2005 just isn't as efficient as spending five or even ten years on the same one, then skipping straight to Androids.
If a system really is perfectly functional then I’d agree, but I’ve rarely seen it play out that way, unfortunately.
In my experience it’s either a case of a huge amount being spent to keep the legacy system running (the military paying whatever Microsoft asked for extended Windows XP support, banks paying IBM and anyone who knows COBOL to keep their mainframe systems alive, etc.), or a ticking time bomb that’s going to absolutely decimate the budget when it fails because nobody knows in full how to repair or replace it.
Often it’s both, with the expensive patches on top of patches on top of patches being used to buy another year because nobody knows what to do when the whole house of cards comes down.
There’s also the opportunity cost of not even being able to consider related projects and upgrades that might make the system as a whole safer/cheaper/more efficient because doing so would conflict with the old tech.
I’m definitely not saying it can’t be done - I’m sure there are some bits of totally self contained factory or lab equipment out there that’ll keep ticking until they physically wear out, long before we collectively use up all the floppy disks stored in that one guy’s warehouse - but more often than not it’s been a sign of much bigger problems when I’ve seen it.
I enjoy being one of those edge cases. I use IT systems older than myself to run pieces of equipment that still function as well as when they were made and would cost more than our yearly budget (including staff) to replace.
I'm not talking about vast interconnected systems that require interoperability with other systems; those necessitate more regular upkeep. Banks need to talk to each other, after all.
But a computer that controls a draw bridge or a light rail system? If it's not connected to the internet, entirely self-contained, and already works exactly as it needs to, it becomes much less of a liability. The fact that one of these would use a floppy disk wouldn't concern me nearly as much as an ATM running a 25 year old operating system full of known vulnerabilities.
If the lines started being in service in 1998, then construction took years before that, and the design took years before that. The 5.25" floppies might have been designed into it in the 80s.
If it was designed in the mid 80s and just took a while to be realized, that would make sense.
The most common type of floppy, the 1.44 MB 3.5”, was first released in 1986 (a year before the first CD-ROM, coincidentally) and was widely accepted to have surpassed the 5.25” standard by the late '80s. So it may also be possible they saw it as a cost-saving measure, since the 5.25” drives (presumably) became cheaper after 3.5” became the new hotness.
Or they didn't want to risk investing lots of money in new technology. 5.25" floppies were tried and tested at the time, and there was probably lots of data on media lifetime and data integrity.