I use Oracle Fusion at work (currently work as a purchase ledger clerk) and I'm one of the temps tasked with "training" their Intelligent Document Recognition (IDR) system to recognize documents because my workplace's method of processing invoices has otherwise been very manual.
Oracle's solution is by far the worst OCR system I've ever used for processing PDF invoices and it's not even close. I genuinely cannot tell if it's because Oracle are a bunch of patent trolls with far more lawyers than engineers, or because they're embracing AI.
The irony is that IDR is neither intelligent nor does it recognize documents. After processing hundreds of docs from a specific supplier and marking the specific features this system needs to look out for, it still completely miscategorises invoices. And the thing about using Oracle Fusion as an Enterprise Resource Planning (ERP) solution is that you have to code transactions to very specific accounts with a long string of numbers.
When processing invoices manually you can just code the net invoice value to one line (assuming they're goods of the same type), post that lengthy accounting string once, put in the tax calculation and be done with it. When you process invoices with IDR, it replicates every line of an invoice, or tries its best to - I've seen plenty of errors.
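For readers unfamiliar with the workflow, the manual coding described above can be sketched roughly like this (the account string, tax rate, and line data are invented for illustration, not real Fusion codes):

```python
# Illustrative sketch: collapse same-type invoice lines into one coded
# distribution, the way a clerk keys the invoice manually. The account
# string and 20% tax rate are made-up examples.

def collapse_invoice(lines, account_string, tax_rate):
    """Sum the net values of same-type lines into a single distribution,
    then post the accounting string once and compute tax on the total."""
    net_total = round(sum(line["net"] for line in lines), 2)
    tax = round(net_total * tax_rate, 2)
    return {
        "distribution": account_string,  # posted once, not per line
        "net": net_total,
        "tax": tax,
        "gross": round(net_total + tax, 2),
    }

lines = [{"desc": "Widget A", "net": 100.00},
         {"desc": "Widget B", "net": 50.00}]
summary = collapse_invoice(lines, "01-520-5240-0000-000", 0.20)
print(summary["gross"])  # 180.0
```

IDR, by contrast, tries to reproduce every source line, which is what multiplies the opportunities for error.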
Oracle Fusion only lets you view 5 invoice lines at a time; when you try to scroll down it lags badly, and very simple actions can glitch the system and force you to start over. You can click a button to get a detailed full-screen view of each line, which would make things easier if it didn't glitch out and fail to enable that view until you've already validated the invoice.
This crock of shit is part of what made Larry Ellison the second-richest man in the world. Let that sink in...
Point is... I'm done with AI slop.
The fact you have to train it means Oracle IDR is using old school machine learning AI.
Eventually Oracle will use the newer GenAI technology, where you won't even have to train it in order for it to miscategorise invoices.
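The "old school machine learning" being described is, roughly, a supervised classifier trained on labelled OCR'd text. A deliberately tiny bag-of-words sketch of that family of approach (labels and text invented), which also hints at why a small training set generalises poorly:

```python
from collections import Counter

def train(samples):
    """samples: list of (text, label). Returns per-label token counts."""
    model = {}
    for text, label in samples:
        model.setdefault(label, Counter()).update(text.lower().split())
    return model

def classify(model, text):
    """Pick the label whose token profile overlaps the text most.
    With few samples this latches onto surface tokens, which is how
    miscategorisation happens."""
    tokens = text.lower().split()
    return max(model, key=lambda label: sum(model[label][t] for t in tokens))

model = train([("invoice for steel brackets", "hardware"),
               ("carriage and delivery charge", "carriage")])
print(classify(model, "delivery of steel brackets"))  # hardware
```

Real systems use stronger models than this toy, but the train-then-classify shape is the same.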
This is not true and is not a meaningful distinction. The type of training the person describes is presumably fine-tuning an existing machine learning model, which can definitely be done with major GenAI models (and you overestimate how different the underlying architecture is from the stuff that immediately preceded GenAI). One of the big improvements in the recent models used for GenAI is that they tend to require a lot less fine-tuning for competence at many tasks, but they absolutely can be fine-tuned like this. iirc OpenAI explicitly offers fine-tuning options for GPT-4.
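For context on that last point: OpenAI's fine-tuning endpoint consumes JSONL training examples in a chat-messages format, which could in principle carry invoice-categorisation examples like the invented ones below. The commented API calls at the end are an untested sketch of the upload step, not verified against a live account:

```python
import json

# Hedged sketch: build JSONL chat-format training data of the kind
# OpenAI's fine-tuning API expects. Invoice lines and category labels
# are invented placeholders.
examples = [
    ("ACME Ltd, 10x steel brackets, net 240.00", "hardware"),
    ("ACME Ltd, courier charge, net 15.00", "carriage"),
]

records = []
for text, label in examples:
    records.append(json.dumps({"messages": [
        {"role": "system", "content": "Categorise the invoice line."},
        {"role": "user", "content": text},
        {"role": "assistant", "content": label},
    ]}))
jsonl = "\n".join(records)  # would be written to e.g. train.jsonl

# Uploading and starting the job would look roughly like (untested):
#   f = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
#   client.fine_tuning.jobs.create(training_file=f.id, model="gpt-4o-mini-2024-07-18")
```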
I'm familiar with fine tuning OpenAI and also with Oracle Fusion.
I am 80% sure what I said was correct.
Fine tuning a corporate LLM is an entirely different shit show, as is fine tuning prompts.
Your confidence is misplaced. It's perfectly possible for a "corporate LLM" to be using the exact same underlying technology as ChatGPT -- in fact, very often companies like Oracle are directly contracting companies like OpenAI and their competitors and directly using their products, rather than training their own machine learning models. Not every company doing this will have fine-tuning on a per-customer basis like this, but it's absolutely possible and there's nothing when it comes to the architecture of newer LLMs that prevents this compared to older language models. If it's not a possibility, it's because the company offering the service doesn't think it's worth the cost to implement everything else that's needed for it (for instance, an interface for the customer and other such things)
Also, "fine tuning prompts" is a phrase that makes no sense under the definition of fine-tuning we're discussing here. Prompt engineering and stuff is a thing, but it's more or less orthogonal to the process that's referred to as "fine-tuning" when discussing machine learning models.
Your guesses about Oracle tech are absolutely wrong in this specific instance. We are not talking about cutting edge technology here. OP is largely describing old school ML on top of OCR. We are talking pre-LLM and even pre-BERT. Which is definitely a thing in the Oracle-verse. Including the inevitably shitty result with only 100 samples.
I am actually having a hard time figuring out what your experience level is, as it doesn't exactly match anything I am deeply familiar with. It kind of sounds like you are familiar with Microsoft as of two years ago?
When customers complained about Copilot when it first came out, Microsoft pushed fine-tuning, which took a huge amount of organizational effort, when no amount of fine-tuning was ever going to fix Copilot. But as a result, most companies at the time were curious about the possibility of fine-tuning once across all lines of business (LOBs) and then leveraging the same model across multiple vendors as bring-your-own-model (BYOM). That was a complete failure. (None of this matches what OP described, btw.)
Lately, most tech companies are hyping newer and largely unproven technologies, like dynamic prompts, custom workflows, or augmenting with memory/graphs.
Me? I am old school. I am a firm believer in using the latest LLMs with a few fixed examples passed as context to the prompt.
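That "few fixed examples in the prompt" approach is few-shot prompting. A minimal sketch (invoice lines and labels invented):

```python
# Sketch of few-shot prompting: no fine-tuning, just a handful of fixed
# labelled examples prepended to every request. Examples are invented.
FEW_SHOT = [
    ("ACME Ltd, 10x steel brackets, net 240.00", "hardware"),
    ("ACME Ltd, courier charge, net 15.00", "carriage"),
]

def build_prompt(invoice_line):
    """Assemble the prompt; the model completes the final Category."""
    parts = ["Categorise each invoice line.\n"]
    for text, label in FEW_SHOT:
        parts.append(f"Line: {text}\nCategory: {label}\n")
    parts.append(f"Line: {invoice_line}\nCategory:")
    return "\n".join(parts)

prompt = build_prompt("ACME Ltd, pallet of fixings, net 80.00")
# `prompt` would be sent as the user message to whichever chat
# completion API is in use.
```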
I am not and have not made any guesses about what tech Oracle is actually using. My statements have been about LLMs and GenAI more generally, because your initial comment seemed to claim that fine-tuning is only ever a thing for older types of language models and is not a thing for newer models. Your next comment then tried to contrast "corporate AI" with modern LLMs as though there is a fundamental difference there. My comments have been addressing these claims about the technology more generally, since these claims seemed to imply things about it in general, not just in the context of Oracle specifically. I do not know or care about what Oracle or even Microsoft are specifically using, as that's almost entirely unrelated to my point, which is that it's perfectly possible for a company like that to switch to an LLM and still offer fine-tuning (whether it's actually effective in a given context or not).
Ahhhh.
No. I never said it was not technically possible.
I only (facetiously) said the user would not need to in the near future.
I later added I was familiar with fine tuning on OpenAI (meaning I have fine tuned models myself.)
My third comment touched on why there are better technological choices than LLM fine-tuning for classifying third-party invoices according to second-party accounting schemes.
I will say, on a completely unrelated note, I once accused an author of misquoting a paper I thought I had carefully read. I can't have read it too carefully, as the author I emailed pointed out that he had authored both papers. I still don't understand how his quote was supported by a plain reading of his original paper. But that is OK. It clearly said that to him, he was clearly an expert, and I don't think I was his intended audience.
This crock of shit is part of what made Larry Ellison the second-richest man in the world. Let that sink in...
Well, to be fair, he's the second richest man in the world mostly due to a lot of overpriced and bloated Oracle shitware way prior to any AI products out today.
Why does your company buy terrible OCR software when there are better products out there?
The people that use the software are not part of the team that make the decision on software purchasing.
I'm not the person you responded to, but that is always the answer to that question.
100% this. They hear AI as a buzzword and are like "fuck yeah."
This seems like more of a "corporate stupidity" issue than an AI issue. The people designing the system are out of touch with how it's used. It's hardly new to AI; enterprise software purchasing can be pretty dysfunctional.
The big shots meet up and ask each other what their "AI strategy" is for 2025. If they don't have something that sounds legit, they'd be embarrassed. "Oh, we just signed a $10,000,000 deal with Oracle" and eyebrows raise; competing CEOs worry they're not keeping up.
It's weird since as far as I can tell, everyone in the tech industry thinks Oracle is overpriced and evil. Isn't Postgres what people go with nowadays?
Maybe I'm overly biased from reading Hacker News.
The nerds aren't the ones getting wined and dined by Oracle's sales team.
I need to get their sales team to think I make the purchasing decisions for a big company. I don’t think it would be strictly illegal…
Since my job title now includes “lead”, I’ve been getting all sorts of unsolicited marketing call invites. Of course, I don’t make the decisions on any of the software I’m being propositioned with
I've definitely gotten some nice meals over the years by tagging along with some of my coworkers who were being wined and dined. That's not to say I had no input into decisions...but it was relatively tiny, all things considered. Really, I just wanted to be wined and dined. And co-workers were like "Hell yeah, come on, it's going to be great!"
That does seem to match the thesis of this article. It's not the tool per se, but how stupidly and wildly it is being wielded. We're taking a specialty screwdriver and treating it like a Swiss Army knife. Cutting? AI. Drilling? AI. Wine cork? AI. My failing love life? AI.
Doesn't Oracle also strongarm companies pretty heavily and require them to use basically all of their suite of software if they want to use some of it?
https://youtu.be/KW80Yjib7RA
I'm going to guess it's because their explosive quarterly earnings jumped their stock 40+% a few months back, so it's clear that they do not need to compete on quality, nor even care about productive engineers.
For decades now, we have been told that artificial intelligence systems will soon replace human workers. Sixty years ago, for example, Herbert Simon, who received a Nobel Prize in economics and a Turing Award in computing, predicted that “machines will be capable, within 20 years, of doing any work a man can do.” More recently, we have Daniel Susskind’s 2020 award-winning book with the title that says it all: A World Without Work.
The average person doesn't understand the extremely long history of artificial intelligence systems being overhyped. For another infamous quote:
In 2016, professor Hinton stated that “people should stop training radiologists now” and “it is just completely obvious that within 5 years deep learning will do better than radiologists” https://pmc.ncbi.nlm.nih.gov/articles/PMC7720669/#r1
The algorithms are still cool and helpful and improving, but they have not replaced humans, merely made them more productive.
AI is a convenient excuse for dishonest executives.
We’ve seen this act before. When companies are financially stressed, a relatively easy solution is to lay off workers and ask those who are not laid off to work harder and be thankful that they still have jobs. AI is just a convenient excuse for this cost-cutting.
Last week, when Amazon slashed 14,000 corporate jobs and hinted that more cuts could be coming, a top executive noted the current generation of AI is “enabling companies to innovate much faster than ever before.” Shortly thereafter, another Amazon rep anonymously admitted to NBC News that “AI is not the reason behind the vast majority of reductions.” On an investor call, Amazon CEO Andy Jassy admitted that the layoffs were “not even really AI driven.”
We have been following the slow growth in revenues for generative AI over the last few years, and the revenues are neither big enough to support the number of layoffs attributed to AI, nor to justify the capital expenditures on AI cloud infrastructure. Those expenditures may be approaching $1 trillion for 2025, while AI revenue—which would be used to pay for the use of AI infrastructure to run the software—will not exceed $30 billion this year. Are we to believe that such a small amount of revenue is driving economy-wide layoffs?
I've been yelling about this for years. Some businesses have been massively disrupted by "macroeconomic factors" such that they need to cut costs and lay off employees. That doesn't apply to most tech companies, which are spending billions on infrastructure with no signs of it paying off.
Tech companies laid people off in 2023-2024 (including friends of mine), but their headcount is growing again, based on charts for Alphabet, Apple, and Meta.
They aren't hiring like in previous booms. Still, Google has over double the headcount they had when I left. I wonder what they all do?
It's down to economics, more than anything. We're basically diving into stagflation right now, and companies are avoiding expansion.
The 2023-2025 layoffs are primarily from:
The end of ZIRP. Credit costs a lot more now, and it's caused businesses to change how they do things, which has manifested mostly as avoiding new things and extracting more from what they already do (i.e. enshittification).
A trap laid in a budgetary bill back in the first Trump term. They balanced out tax slashes with a change in accounting requirements for software development: typically a company would claim the cost of developer salaries in the same year as a loss, but under the new requirement they would be required to amortize the expense over a rolling five year window. So for those years (I believe it was changed back this year), paying developers was heavily disincentivized, since you'd be paying someone but only being able to claim 1/5 of the cost as an actual expenditure.
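To make the incentive concrete, here is the year-one arithmetic using the commenter's simplified straight-line 1/5 figure (the actual rules were fiddlier, but the direction is the same; the salary number is invented):

```python
# Worked example of the deduction change described above, using the
# commenter's simplified 1/5 straight-line figure. Numbers are invented.
salary = 1_000_000  # developer payroll for the year

# Before the change: the full salary is deductible in the year it is paid.
deduction_before = salary

# After the change: the same cost is amortized over a five-year window,
# so only 1/5 is deductible in year one.
deduction_year_one = salary / 5

# Extra income that becomes taxable in year one as a result:
extra_taxable = deduction_before - deduction_year_one
print(extra_taxable)  # 800000.0
```

So on paper, every developer suddenly looked far more expensive in the year you paid them.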
I think I heard this was rolled back in his BBB. Is that true?
I believe so, yes. I recall being annoyed that one thing I really wanted to happen was bolted onto the rest of that.
I wonder what they all do?
well it's not what your colleagues at Google do. Many are being laid off and the ones that remain want to lay low.
it's what the outsourced engineers do. That's why the headcount is growing. And I guess the answer is a mix of "training AI" and "maintaining existing systems". We're not really in innovation mode in any non-AI sector, so cheaping out enough to make sure existing revenue doesn't malfunction is enough.
Contractors aren't included in headcount. That's in addition to employees. But employees aren't all in the US either; Google has offices all over the world.
Outsourcing isn't the same as contracting. They aren't mutually exclusive, but I was talking about people hired into Google proper (but also not in the US).
But employees aren't all in the US either; Google has offices all over the world.
Yes, and my argument is that Google is reducing headcount in the US and expanding in places with lower costs of living. That's how you can say there's a job crisis in the US while still seeing Google's hiring numbers rising over time.
We've been through this already with manufacturing in the 90's.
Is more of Google's headcount outside the US now than there used to be? That's an interesting hypothesis, but how do we prove it? They don't publish this information.
I did the simple thing and asked ChatGPT but it didn't find anything good.
Yes, that's the tricky part. They only need to say their hiring numbers overall. Not the breakdown by region.
All these come more off of inferences, between the job numbers in the US being down for all sectors except hospitality and healthcare (from what we still have of the Labor Statistics), but job numbers from earnings reports in tech seemingly still going up. It's not hard proof, but it's definitely smoke that I'd hope someone else could investigate more deeply into.
The Bureau of Labor Statistics? https://www.bls.gov/charts/employment-situation/civilian-unemployment-rate.htm
It's no secret (at least, until October tried to hide the numbers) that unemployment has been inching up in the US for the past few quarters. It's what's causing the Fed to start slashing rates again.
Digging into the monthly reports, you see pretty much all industries are down as well. Here's August's jobs report, before Trump fired the previous statistician reporting the numbers. "Professional and business services", which tech falls under, is down 17,000 in a single month. "Computer systems design and related services" specifically was down 3.3k that month (heck, it's down even more in September, surprisingly, which was an overall more positive jobs report).
I'd be really curious to see what the stats are for domestic growth versus international growth. The company I work for (a very small fish in the big pond compared to FAANG) grew to 500 domestic employees through Covid, then slashed that number to less than 150 in the last few years. Now we've been hiring like crazy again... in Eastern Europe, Bangalore, and Tijuana....
Well that's hardly their only citation:
Yet we remain skeptical of the claim that AI is responsible for these layoffs. A recent MIT Media Lab study found that 95% of generative AI pilot business projects were failing. Another survey by Atlassian concluded that 96% of businesses “have not seen dramatic improvements in organizational efficiency, innovation, or work quality.” Still another study found that 40% of the business people surveyed have received “AI slop” at work in the last month and that it takes nearly two hours, on average, to fix each instance of slop. In addition, they “no longer trust their AI-enabled peers, find them less creative, and find them less intelligent or capable.”
Having been on the receiving end of AI slop, it harms organizational productivity. Same as subtle bugs created by vibe coding.
I use Oracle Fusion at work (currently work as a purchase ledger clerk) and I'm one of the temps tasked with "training" their Intelligent Document Recognition (IDR) system to recognize documents because my workplace's method of processing invoices has otherwise been very manual.
Oracle's solution is by far the worst OCR system I've ever used for processing PDF invoices and it's not even close. I genuinely cannot tell if it's because Oracle are a bunch of patent trolls with far more lawyers than engineers, or because they're embracing AI.
The irony is that IDR is neither intelligent nor does it recognize documents. After processing hundreds of docs from a specific supplier and marking the specific features this system needs to look out for, it still completely miscategorises invoices. And the thing about using Oracle Fusion as an Enterprise Resource Planning (ERP) solution is that you have to code transactions to very specific accounts with a long string of numbers.
When processing invoices manually you can just code the net invoice value to one line (assuming they're goods of the same type), post that lengthy accounting string once, put in the tax calculation and be done with it. When you process invoices with IDR, it replicates every line of an invoice, or tries its best to - I've seen plenty of errors.
Oracle Fusion only allows you to view 5 invoice lines at a time, and when you try to scroll down, it lags a lot, and it takes very simple stuff to glitch the system and end up having to start over again. You could click a button to get the detailed full screen view of each line which would make things easier if it didn't glitch out and fail to enable this view until you've already validated the invoice.
This crock of shit is part of what made Larry Ellison the second-richest man in the world. Let that sink in...
Point is... I'm done with AI slop.
The fact you have to train it means Oracle IDR is using old school machine learning AI.
Eventually Oracle will use the newer Gen AI technology, where you won't even have to train it in order for it to incorrectly miscategorise invoices.
This is not true and is not a meaningful distinction. The type of training the person describes is presumably fine-tuning an existing machine learning model, which can definitely be done with major GenAI models (and you overestimate how different the underlying architecture is from the stuff that immediately preceded GenAI). One of the big improvements in the recent models used for GenAI is that they tend to require a lot less fine-tuning for competence at many tasks, but they absolutely can be fine-tuned like this. iirc OpenAI explicitly offers fine-tuning options for GPT-4.
I'm familiar with fine tuning OpenAI and also with Oracle Fusion.
I am 80% sure what I said was correct.
Fine tuning a corporate LLM is an entirely different shit show, as is fine tuning prompts.
Your confidence is misplaced. It's perfectly possible for a "corporate LLM" to be using the exact same underlying technology as ChatGPT -- in fact, very often companies like Oracle are directly contracting companies like OpenAI and their competitors and directly using their products, rather than training their own machine learning models. Not every company doing this will have fine-tuning on a per-customer basis like this, but it's absolutely possible and there's nothing when it comes to the architecture of newer LLMs that prevents this compared to older language models. If it's not a possibility, it's because the company offering the service doesn't think it's worth the cost to implement everything else that's needed for it (for instance, an interface for the customer and other such things)
Also, "fine tuning prompts" is a phrase that makes no sense under the definition of fine-tuning we're discussing here. Prompt engineering and stuff is a thing, but it's more or less orthogonal to the process that's referred to as "fine-tuning" when discussing machine learning models.
Your guesses about Oracle tech are absolutely wrong in this specific instance. We are not talking about cutting edge technology here. OP is largely describing old school ML on top of OCR. We are talking pre-LLM and even pre-BERT. Which is definitely a thing in the Oracle-verse. Including the inevitably shitty result with only 100 samples.
I am actually having a hard time figuring out what your experience level is, as it doesn't exactly match anything I am deeply familiar with. It kind of sounds like you are familiar with Microsoft as of two years ago?
When customers complained about copilot when it first came out, Microsoft pushed fine tuning, which took a huge amount of organizational effort, when no amount of fine tuning was ever going to fix copilot. But as a result, most companies at the time were curious about the possibility of fine tuning once across all LOBs and then leveraging the same model across multiple vendors as BYOM. That was a complete failure. (None of this matches what OP described btw.)
Lately, most tech companies are hyping newer and largely unproven technologies, like dynamic prompts, custom workflows or augmenting with memory/ graphs.
Me? I am old school. I am a firm believer in using the latest LLMs with a few fixed examples passed as context to the prompt.
I am not and have not made any guesses about what tech Oracle is actually using. My statements have been about LLMs and GenAI more generally, because your initial comment seemed to claim that fine-tuning is only ever a thing for older types of language models and is not a thing for newer models. Your next comment then tried to contrast "corporate AI" with modern LLMs as though there is a fundamental difference there. My comments have been addressing these claims about the technology more generally, since these claims seemed to imply things about it in general, not just in the context of Oracle specifically. I do not know or care about what Oracle or even Microsoft are specifically using, as that's almost entirely unrelated to my point, which is that it's perfectly possible for a company like that to switch to an LLM and still offer fine-tuning (whether it's actually effective in a given context or not).
Ahhhh.
No. I never said it was not technically possible.
I only (facetiously) said the user would not need too in the near future.
I later added I was familiar with fine tuning on OpenAI (meaning I have fine tuned models myself.)
My third comment touched on why there are better technological choices to classify third party invoices according to second party accounting schemes than LLM fine tuning.
I will say, on a completely unrelated note, I once accused an author of misquoting a paper I thought I had carefully read. I can't have read it too carefully, as the author I emailed pointed out that he had authored both papers. I still don't understand how his quote was supported by a plain reading of his original paper. But that is OK. It clearly said that to him, he was clearly an expert, and I don't think I was his intended audience.
Well, to be fair, he's the second richest man in the world mostly due to a lot of overpriced and bloated Oracle shitware way prior to any AI products out today.
Why does your company buy terrible OCR software when there are better products out there?
The people that use the software are not part of the team that make the decision on software purchasing.
I'm not the person you responded to, but that is always the answer to that question.
100% this. They hear AI as a buzzword and are like "fuck yeah."
This seems like more of a "corporate stupidity" issue than an AI issue. The people designing the system are out of touch with how it's used. It's hardly new to AI; enterprise software purchasing can be pretty dysfunctional.
The big shots meet up and ask each other what their "AI strategy" is for 2025. If they don't have something that sounds legit then they'd be embarrassed. "Oh we just signed a $10,000,000 deal with Oracle" eyebrows raise and competing CEO's worry they're not keeping up.
It's weird since as far as I can tell, everyone in the tech industry thinks Oracle is overpriced and evil. Isn't Postgres what people go with nowadays?
Maybe I'm overly biased from reading Hacker News.
The nerds aren't the ones getting wined and dined by Oracle's sales team.
I need to get their sales team to think I make the purchasing decisions for a big company. I don’t think it would be strictly illegal…
Since my job title now includes “lead”, I’ve been getting all sorts of unsolicited marketing call invites. Of course, I don’t make the decisions on any of the software I’m being propositioned with
I've definitely gotten some nice meals over the years by tagging along with some of my coworkers who were being wined and dined. That's not to say I had no input into decisions...but it was relatively tiny, all things considered. Really, I just wanted to be wined and dined. And co-workers were like "Hell yeah, come on, it's going to be great!"
That does seem to match the thesis of this article. It's not the tool per se, but how stupidly and wildly it is being weilded. we're taking a specialty screwdriver and treating it like an army swiss knife. Cutting? AI. Drilling? AI. Wine cork? AI. My failing love life? AI.
Doesn't Oracle also strongarm companies pretty heavily and require them to use basically all of their suite of software if they want to use some of it?
https://youtu.be/KW80Yjib7RA
I'm going to guess because they're explosive quarterly earnings jumped their stock 40+% a few months back, so it's clear that they do not need to compete on quality. Nor even care about productive engineers.
The average person doesn't understand the extremely long history of artificial intelligence systems being overhyped. For another infamous quote:
AI is a convenient excuse for dishonest executives.
I've been yelling about this for years. Some businesses have been massively disrupted by "macroeconomic factors" such that they need to cut costs and lay off employees. That doesn't apply to most tech companies, which are spending billions on infrastructure with no signs of it paying off.
Tech companies laid people off in 2023-2024 (including friends of mine), but their headcount is growing again, based on charts for Alphabet, Apple, and Meta.
They aren't hiring like in previous booms. Still, Google has over double the headcount they had when I left. I wonder what they all do?
It's down to economics, more than anything. We're basically diving into stagflation right now, and companies are avoiding expansion.
The 2023-2025 layoffs are primarily from:
The end of ZIRP. Credit costs a lot more now, and it's caused businesses to change how they do things. Which has manifested more toward avoiding doing things and extracting more from what they already do (i.e. enshittification)
A trap laid in a budgetary bill back in the first Trump term. They balanced out tax cuts with a change in accounting requirements for software development: typically a company would claim the cost of developer salaries as a loss in the same year, but under the new requirement they had to amortize the expense over a rolling five-year window. So for those years (I believe it was changed back this year), paying developers was heavily disincentivized: you'd be paying someone in full but only able to claim 1/5 of the cost as an actual expenditure.
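A toy calculation of the change described above, using the comment's simplified "1/5 per year" framing (the real Section 174 rules have extra details like a mid-year convention; the salary figure is made up for illustration):

```python
# Simplified sketch of the amortization change described above.
# Assumes straight five-year amortization, per the comment's framing.

def deductible_per_year(salary, years=5):
    """Yearly deduction when the expense must be spread over `years`."""
    return [salary // years] * years

salary = 500_000  # hypothetical annual developer cost

# Old rules: deduct the full cost in the year it was paid.
old_year1 = salary

# New rules: only 1/5 of the cost is deductible in year one...
new_year1 = deductible_per_year(salary)[0]

# ...so the rest sits on the books as taxable profit that year,
# even though the cash is already spent.
print(old_year1 - new_year1)  # prints 400000
```

The full cost is eventually deducted either way, but the delay means a company's taxable income spikes in the year it hires, which is the disincentive being described.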
I think I heard this was rolled back in his BBB. Is that true?
I believe so, yes. I recall being annoyed that one thing I really wanted to happen was bolted onto the rest of that.
Well, it's not what your colleagues at Google do. Many are being laid off, and the ones that remain want to lay low.
It's what the outsourced engineers do. That's why the headcount is growing. And I guess the answer is a mix of "training AI" and "maintaining existing systems". We're not really in innovation mode in any non-AI sector, so cheaping out just enough to keep existing revenue from malfunctioning is enough.
Contractors aren't included in headcount. That's in addition to employees. But employees aren't all in the US either; Google has offices all over the world.
Outsourcing isn't the same as contracting. They aren't mutually exclusive, but I was talking about people hired into Google proper (but also not in the US).
Yes, and my argument is that Google is reducing headcount in the US and expanding in places with lower costs of living. That's how you can say there's a job crisis in the US while still seeing Google's hiring numbers actually rising over time.
We've been through this already with manufacturing in the '90s.
Is more of Google's headcount outside the US now than there used to be? That's an interesting hypothesis, but how do we prove it? They don't publish this information.
I did the simple thing and asked ChatGPT but it didn't find anything good.
Yes, that's the tricky part. They only need to report their overall hiring numbers, not the breakdown by region.
All this comes more from inference: job numbers in the US are down for all sectors except hospitality and healthcare (from what we still have of the labor statistics), while job numbers from tech earnings reports are seemingly still going up. It's not hard proof, but it's definitely smoke that I'd hope someone could investigate more deeply.
What numbers are you using?
The Bureau of Labor Statistics?
https://www.bls.gov/charts/employment-situation/civilian-unemployment-rate.htm
It's no secret (at least until October, when they tried to hide the numbers) that unemployment has been inching up in the US for the past few quarters. It's what's causing the Fed to start slashing rates again.
Digging into the monthly reports, you see pretty much all industries are down as well. Take August's jobs report, from before Trump fired the previous statistician reporting the numbers: "Professional and business services", which tech falls under, was down 17,000 in a single month. "Computer systems design and related services" specifically was down 3.3k that month (heck, it's down even more in September, surprisingly, which was an overall more positive jobs report).
I'd be really curious to see what the stats are for domestic growth versus international growth. The company I work for (a very small fish in the big pond compared to FAANG) grew to 500 domestic employees through Covid, then slashed that number to less than 150 in the last few years. Now we've been hiring like crazy again... in Eastern Europe, Bangalore, and Tijuana....
https://archive.is/PouEw (still isn't full)
That one study from MIT gets repeated a lot! I'm not sure people vibing about it understand its limitations.
Well that's hardly their only citation:
Having been on the receiving end of AI slop, it harms organizational productivity. Same as subtle bugs created by vibe coding.