To explain his steam engine to other humans, James Watt defined "1 horsepower" as "33,000 foot-pounds per minute", which approximates the rate at which a typical horse works a typical mill.
...
Horse numbers are convenient at horse-scale, but cumbersome in calculations for telegraphy and rocketry, so scientists/engineers literally removed horses from the equation. Humans now measure power in "Watts" -- named after the human who named the measurement after horses. 1 horsepower equals ~746 watts.
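For anyone who wants to check that arithmetic, here's a minimal sketch in Python; the foot-pound-to-joule factor is a standard conversion constant I'm supplying, not something from the original text:

```python
# Sanity check of Watt's definition, using the standard foot-pound-to-joule factor.
FOOT_POUND_IN_JOULES = 1.3558179  # 1 foot-pound of work, in joules
SECONDS_PER_MINUTE = 60

def horsepower_in_watts(foot_pounds_per_minute: float = 33_000) -> float:
    """Convert a work rate in foot-pounds per minute to watts (joules per second)."""
    return foot_pounds_per_minute * FOOT_POUND_IN_JOULES / SECONDS_PER_MINUTE

print(round(horsepower_in_watts(), 1))  # 745.7 -- the "~746 watts" figure
```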
One SpaceX Starship exceeds 100 million horsepower, but 100 million horses probably can't pull a sleigh into orbit. Horse-force is not thrust, and Earth's ~60 million total horses are not enough.
...
But humans still have no damn clue what "intelligence" is. We can't measure it, can't capture it, can't store it, and rarely use it.
...
If I were to get a heart transplant tomorrow, should I prefer 5 medical students over 1 expert? Should I prefer 2 Harvard grads over 3 UCR grads? A human child or 10,000 crows?
Such comparisons sound like nonsense; we lack equations to convert absurdity into understanding. We want to convert cognition into mundane magic. We need crowpower.
Scientific revolutions are punctuated by paradigm shifts. These shifts often occur when thought-experiments crash into new mathematical tooling: Schrödinger's cat, Newton's cannonball, Hilbert's hotel, Bell's spaceship, Maxwell's demon, Mermin's device, Zeno's race, Heisenberg's microscope, Galileo's ship, Savery's horse, Turing's machine, etc. [...] No two supersimulations are alike -- only a bat can be a bat, and only you can be you.
There are a lot of delightful references to a ton of ideas in this piece. But I appreciate that the overall thesis doesn't require understanding all of them.
Two thoughts after reading:
We want more intelligent people to emerge from our population. And to do that, especially when we don't understand how geniuses emerge, we need a society that nurtures all people. We'll get a bunch more super clever scammers, super socially savvy marketing people, and super artistic divas, but also a lot of good lawyers, linguists, artists, doctors, maths people and computer people. We may not have had Sir Andrew Wiles if his family wasn't generationally intellectual and wealthy. How many more will we get if more children can spend time tinkering with maths problems rather than figuring out their next meal?
Raw calculation is one dimension. Memory is another. Recently we have the start of a third layer, where we began to link words likely to be seen together, and to break very long sequences of words down into what's likely a much shorter version. Maybe the next layer is back to booleans: is this true or false? Given an LLM-generated response, how does a machine do another layer of "is this true, or likely-sounding bullshit"? We're starting small and working up: previously, ChatGPT couldn't tell me 10 jokes without repeating itself, and it confidently told me a famous peninsula is an island, because so many texts talk about the ocean and its most famous island when this peninsula is discussed. The newer models are starting to have a bit of a true/false filter.
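Just to make the idea concrete, here's a rough sketch of what such a second-pass filter could look like; ask_llm() is a hypothetical stand-in for whatever model API you're using, not a real library call:

```python
# Hypothetical sketch of a second-pass "true or false?" filter over a model's answer.
# ask_llm() is a stand-in for a real model client; no specific API is implied.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model client here")

def looks_true(claim: str) -> bool:
    """Ask the model to judge a single claim as TRUE or FALSE."""
    verdict = ask_llm("Answer with exactly TRUE or FALSE: is this statement "
                      f"factually correct?\n{claim}")
    return verdict.strip().upper().startswith("TRUE")

def filtered_answer(question: str) -> str:
    """Draft an answer, then keep only the sentences that pass the check."""
    draft = ask_llm(question)
    kept = [s for s in draft.split(". ") if looks_true(s)]
    return ". ".join(kept) if kept else "No claims survived the check."
```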
From the article:
...
...
...
I am required to post the relevant xkcd. Feel free to mark as noise if this detracts from such a nice article.
Great article! Thanks for sharing