I am currently reading my way through Ted Chiang's guest column in the New Yorker, about why the predicted AI/Tech Singularity will probably never happen (https://www.newyorker.com/culture/annals-of-inquiry/why-computers-wont-make-themselves-smarter). ETA: I just noticed the article is almost 5 years old; it's still relevant, but worth noting.
Good read. I'm still working through it, but so far I find I disagree with his explicit arguments, even though he brushes up very closely against my own reasoning for why "it" might never happen. Regardless, it is thought-provoking.
But I had a passing thought while reading.
For people who actually use LLMs like Claude Code to help write software, or who pay close attention to LLMs' coding capabilities: has anyone actually started experimenting with asking Claude Code, or other LLMs designed for programming, to look at their own source code and help improve it?
In other words, are we (the humans) already starting to use LLMs to improve their own code faster than we could on our own?
Wouldn't this be the actual start of the predicted "intelligence explosion"?
Edit to add: To clarify, I am not (necessarily) suggesting that LLMs -- this particular round of AI -- will actually advance to become some kind of true supra-human AGI ... I am only suggesting that they may be the first real tool we've built (beyond Moore's Law itself) that might legitimately speed up the rate at which we approach the Singularity (whatever that ends up meaning).