An interesting story, but not a new one that required AI:
https://www.youtube.com/watch?v=VTBZ0VwIgs8
I suggest reading the top comments and noting that he never seemed to make the second video, so that's good.
That being said, yeeaaah, there are going to be all sorts of really awkward scenarios once people realize that AIs, handled correctly, can give out chemistry or physics information that can very quickly become extremely dangerous.
Also I feel like the most interesting part of the article is somewhat hidden here:
HudZah enjoys reading the old-fashioned way, but he now finds that he gets more out of the experience by reading alongside an AI. He puts PDFs of books into Claude or ChatGPT and then queries the books as he moves through the text. He uses Granola to listen in on meetings so that he can query an AI after the chats as well. His friend built Globe Explorer, which can instantly break down, say, the history of rockets, as if you had a professional researcher at your disposal. And, of course, HudZah has all manner of AI tools for coding and interacting with his computer via voice.
It's not that I don’t use these things. I do. It’s more that I was watching HudZah navigate his laptop with an AI fluency that felt alarming to me. He was using his computer in a much, much different way than I’d seen someone use their computer before, and it made me feel old and alarmed by the number of new tools at our disposal and how HudZah intuitively knew how to tame them.
This is something I've been thinking about a lot, especially when it comes to minor scripting tasks. Why am I clicking through several applications to run them, then manually moving the files when they're done? Yeah, I could write a script, but at this point, how hard is it to just tell an AI "when files in this folder finish downloading, move them to x/y/z depending on a/b/c criteria"?
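For what it's worth, the script itself is tiny. Here's a rough sketch of the kind of thing you'd get back for that prompt, assuming made-up folder paths, an extension-based stand-in for the a/b/c criteria, and a plain polling loop (a file-watcher library would do the same job):

```python
# Rough sketch: poll a downloads folder and sort finished files by extension.
# Paths and the extension -> destination mapping are made-up placeholders.
import shutil
import time
from pathlib import Path

DOWNLOADS = Path.home() / "Downloads"
DESTINATIONS = {  # the "a/b/c criteria", here just by file extension
    ".pdf": Path.home() / "Documents" / "papers",
    ".csv": Path.home() / "Documents" / "data",
    ".jpg": Path.home() / "Pictures" / "inbox",
}

def is_finished(path: Path, wait: float = 2.0) -> bool:
    """Treat a file as 'finished downloading' if its size stops changing."""
    size = path.stat().st_size
    time.sleep(wait)
    return path.exists() and path.stat().st_size == size

def sort_downloads() -> None:
    for item in DOWNLOADS.iterdir():
        dest = DESTINATIONS.get(item.suffix.lower())
        if item.is_file() and dest and is_finished(item):
            dest.mkdir(parents=True, exist_ok=True)
            shutil.move(str(item), str(dest / item.name))

if __name__ == "__main__":
    while True:  # run forever; cron or a watcher library also works
        sort_downloads()
        time.sleep(30)
```

The barrier was never really the code, it was sitting down to write and wire it up, and that's exactly the friction that describing the task to an AI removes.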
I feel like that's the ideal way to utilize AI. He's using it to assist and enhance his current workflow, not as an omniscient oracle that substitutes for the source material. He's not using ChatGPT to replace reading the book, he's using it to reinforce his reading. He doesn't avoid the meeting altogether; he listens in and has an AI to query afterwards.
Of course, many won't have that discipline, and the marketing buzz around AI is very much about replacing talent, not enhancing it. That ethical line of how and when to use it was smashed from the beginning. If what HudZah is doing were what we promoted, I'd be all in for AI. But instead we're talking more about replacing artists, stealing content from news organizations, and using it wholesale for medical and legal advice (without oversight from a doctor or lawyer).
It's destroying humanity in real time, and even if it's reeled in by the courts it may already be too late.
Agreed. This is a new Gutenberg moment. I observe that new technology does two things:
For undisciplined, unintentional people, it dumbs them down. They're using computers as a junk food vending machine for the mind.
For smart, disciplined, intentional people, it augments them. They're using computers truly as bicycles for the mind.
Humanity has had several milestones in how it handles knowledge or information:
Invention of writing. This allowed us to store and persist knowledge and then transmit it to other people. Previously, oral transmission let humans pass on very little knowledge, and it would become corrupted and distorted. Human progress before writing was achingly slow because knowledge was constantly lost and destroyed between generations. Still, manuscripts had to be manually transcribed, making knowledge extremely expensive and reserved for a few elites...
Invention of the printing press. This allowed us to reproduce knowledge at scale. It enabled mass education, mass literacy, and mass transmission of every kind of knowledge: scientific, engineering, cultural, etc. The 1400s–1900s period saw more progress than all of previous human history combined.
Invention of the computer and the internet. This allowed us to transmit, retrieve, link, and manipulate knowledge at the species scale and near-instantly. No more spending hours searching through libraries for a specific knowledge artifact when you can retrieve it in minutes or seconds. Related knowledge artifacts can be manually or algorithmically connected together.
Invention of AI. This is allowing us to extend our internal organic brain with an external quasi-brain that can process knowledge in parallel. We have a fundamental problem where our internal organic brain has limited storage and bandwidth. Even though knowledge is now persistent, we still have to spend 20+ years moving knowledge from persistent knowledge stores into our brain in order to do anything complex, and then we still have to spend all day retrieving, storing, and manipulating knowledge manually.
I can envision a distant future where a single human, augmented with an orchestra of external artificial brains with them as the conductor, commanding legions of robots, will be able to singlehandedly execute megaprojects.
Breaking Bad, the AI version - that's what I'm talking about.
What I'm taking away from this article is that it's not only possible but eminently realisable to build a fusion reactor in your garage because you want to. I love the future
Sort of, but it's a misleading way to put it, because it's not what most people think of as a fusion reactor.
There's a chance ... that I've been to this house. I was definitely at a house party full of very smart UWaterloo students who had come down from Canada to rent a big house in the Lower Haight to build their own passion projects.
If he's got voice input going for most of what his machine does, count me jealous. I want my Star Trek computer and he's closer to it than me lol
On that note, using AI as a book reading assistant is an interesting and often very productive experience. The more I've done it, the more it feels like being able to both read the book and talk to the book. At least with what I can run locally, sometimes the material is just sort of beyond it, but a lot of the time the model functions like a perpetually available conversational partner who won't get weird about my odd questions or annoyed with how often I ask them. When I read something non-fiction I try to think of it like a conversation with the work: I'll think over stuff it says and have a sort of internal back and forth. The model lets me bring that into the real world - I can speculate and get feedback, argue and be argued with, and come out with deeper memories of the content I was consuming. It's not really about whether the model gets stuff right all the time; the value comes from being able to talk about the material with a more or less competent partner. Makes total sense to me that HudZah could develop a comprehensive sort of workflow, since he's got the time and exists in an environment where other folks are engaging with the same or similar tools.
As a practical example, I have an archive of a bunch of field manuals and guide material around a big variety of subjects. I was reading one, a work by a guy named David Werner called "Where There is No Doctor", which is the nuts and bolts of doing healthcare in a remote village sort of context. While I was reading, I could ask the models things like "what did the author say before about why [thing] shouldn't be done?", it would provide an answer, and I'd verify every once in a while to understand the degree to which those answers were accurate/acceptable/whatever. I could ask a followup question, like "does this come up again later? Are there further details about why the author recommends this?" and on the whole the model performed great with that (again per my amateur sort of testing/comparing sources). When I would talk to folks about the book, those moments meant having a better command of the material - I remembered not just the book's content but also the experience of having discussed it already.
Honestly it was a profound sort of experience and I've greatly enjoyed continuing with it. With subjects I've got more expertise in, it's often helpful to have the model attempt a summary and then pick apart the flaws in that summary. You can go as hard in the paint as you like when it gets stuff wrong and argue things out to their conclusions without having to worry about offense or insecurity. You can ask the same questions over and over, remind yourself of things, and basically drop all your emotional considerations and learn like hell, so to speak. The article's framing, that HudZah is an "AI native" racing toward a profoundly different way of using the machine, feels pretty on point to me.
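To make that concrete, here's a minimal sketch of the "talk to the book" loop, assuming a locally hosted, OpenAI-compatible endpoint (Ollama's default port is used here), the pypdf package for text extraction, and placeholder file and model names; the details of my setup differ, but this is the shape of it:

```python
# Rough sketch of the "talk to the book" loop.
# Assumes a local OpenAI-compatible endpoint (e.g. Ollama on localhost:11434)
# and the pypdf package; the model name and PDF path are placeholders.
from openai import OpenAI
from pypdf import PdfReader

client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")

# Pull the text out of the PDF once; a fancier setup would chunk and index it.
reader = PdfReader("where_there_is_no_doctor.pdf")
book_text = "\n".join(page.extract_text() or "" for page in reader.pages)

messages = [{
    "role": "system",
    "content": "You are helping me read this book. Answer only from its text "
               "and say so when the answer isn't in it.\n\n" + book_text[:50_000],
}]

while True:
    question = input("you> ")
    messages.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="llama3.1", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print("book>", answer)
```

Truncating the text to fit the context window is the crude part; a more serious setup would chunk the book and retrieve only the relevant sections for each question.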