teaearlgraycold's recent activity
-
Comment on Survey reveals almost 50% of California teachers may quit teaching soon in ~life
Seems like the best option is still to take the phones and just accept some annual cost to holding them.
-
Comment on I don’t know if my software engineering job will still exist in ten years in ~comp
It’s more that if AI can do every part of my job then no person working from a computer is safe. It’s like planning for nuclear annihilation. As an individual, you don’t.
-
Comment on I don’t know if my software engineering job will still exist in ten years in ~comp
How I would put it is that the "prompt" I'm given when working professionally is insufficient for use by an LLM. I wasn't asked to make a WYSIWYG editor or define a DSL. But I quickly realized that without those functions it would be too hard to make something shippable. And if it was just a WYSIWYG then we'd be leaving the benefits of LLMs on the table. A little of both lets the user lean on each where it's most applicable.
LLMs help software engineers get more done while maintaining a higher quality bar. When I'm done writing code and want to take a break I can easily spend a few more minutes adding test cases I would have otherwise done without. I can get CSS hacks instantly that would otherwise have taken a lot of fiddling and StackOverflow to figure out. I can find implementation logic from a library in seconds to answer questions about how something will behave. The world needed so much more software than engineers had time to build. Now that we're faster why would we need less software?
-
Comment on I don’t know if my software engineering job will still exist in ten years in ~comp
At least in startup land I get extremely open-ended tasks like "I think our users want us to add a website-builder feature onto our AI phone receptionist app. Maybe do something like Claude Artifacts?". And then you need to talk to users, see their current sites, ask them questions, and re-evaluate the assumptions in the task itself.
Given the above task I ended up doing something a bit different:
- Create a JSON DSL that builds a React webpage using a custom component library
- Use tool-calling LLMs to create the page. Constrain to the JSON schema built at runtime from the component library
- One tool to create a page from scratch
- One tool to identify segments to edit from a user's change request
- One tool to implement changes
- Implement a basic WYSIWYG page editor with drag-and-drop components, click-to-edit text, etc. Lots of little things are needed to get it good enough to ship.
- Make sure the sites support server-side-rendering for SEO
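For the curious, the "constrain to the JSON schema built at runtime" step can be sketched roughly like this. The component names and prop shapes are made up for illustration, and a real implementation would hand the generated schema to a proper JSON Schema validator and to the LLM's tool definition rather than use this minimal checker:

```python
# Hypothetical sketch: derive a JSON Schema at runtime from a component
# library, so a tool-calling LLM can only emit components that exist.
COMPONENT_LIBRARY = {
    "Hero": {"title": "string", "subtitle": "string"},
    "Button": {"label": "string", "href": "string"},
}

def build_page_schema(library):
    """A page is an array of nodes, each one of the known components."""
    return {
        "type": "array",
        "items": {
            "oneOf": [
                {
                    "type": "object",
                    "properties": {
                        "component": {"const": name},
                        "props": {
                            "type": "object",
                            "properties": {p: {"type": t} for p, t in props.items()},
                        },
                    },
                    "required": ["component", "props"],
                }
                for name, props in library.items()
            ]
        },
    }

def validate_page(page, library):
    """Minimal stand-in for a full JSON Schema validator."""
    return all(
        node.get("component") in library
        and set(node.get("props", {})) <= set(library[node["component"]])
        for node in page
    )

page = [{"component": "Hero", "props": {"title": "Acme Plumbing"}}]
print(validate_page(page, COMPONENT_LIBRARY))  # True
```

The same schema doubles as the tool-call parameter definition, which is what keeps the model from inventing components the renderer doesn't support.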
It's never just "Make the button blue and change the text". If that's your whole job then you're cooked right now. I have met people who consider themselves developers but could probably be (perhaps already are?) replaced by today's LLMs. But a proper developer has so much to do that isn't just translating well-defined requirements into code, or whatever other tasks LLMs are good at.
-
Comment on Is it worthwhile to run local LLMs for coding today? in ~comp
I just tried running Qwen3.5 27B @ 4b quantization on an M3 with 24GB. It loads. But it runs at 2.2 tok/s (slow). It can work with the Pi coding agent so I gave that a shot. After a few minutes it was 50% through processing the 16,000 token agent prompt, at which point Pi killed the request to the LLM because it had taken too long. I guess for simple questions it might take 15-20 minutes to give an answer. You’ll definitely need one of Apple’s Max chips with 64-128GB of memory to do even half decent agentic tasks. The M5 series’ reported 4x prompt processing speeds sounds pretty appealing now.
Edit: I switched to Qwen3.5 35B-A3B @ 3b quantization. I can now actually get it to work with Oh My Pi. It's slow but it does work. It runs 7-12x faster than the 27B monolithic model from what I've seen. It's cool to see an agent running locally on a relatively low-end machine, tool calling and giving me a correct answer to a simple question.
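For a sense of scale, here's the rough arithmetic behind those timings. The prefill rate and answer length are assumptions (the prefill number is loosely inferred from "50% of 16,000 tokens in a few minutes"), not measurements:

```python
# Back-of-the-envelope: why prompt processing dominates on slow hardware.
prompt_tokens = 16_000
prompt_speed = 40.0   # tok/s prefill rate, assumed
gen_tokens = 500      # assumed answer length
gen_speed = 2.2       # tok/s, as measured above

prefill_s = prompt_tokens / prompt_speed   # 400 s just to read the prompt
decode_s = gen_tokens / gen_speed          # ~227 s to write the answer
print(f"total ≈ {(prefill_s + decode_s) / 60:.1f} minutes")  # ≈ 10.5 minutes
```

Under those assumptions the total lands right in the 15-20 minute ballpark once the prompt grows past the initial agent preamble, which is why a 4x prefill speedup matters more here than raw decode speed.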
-
Comment on Is it worthwhile to run local LLMs for coding today? in ~comp
4bit is generally the optimal tradeoff from my testing. Yes you lose some quality, but you'll lose a lot more by going with a smaller model with more precision per weight.
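The tradeoff is easy to see from the weight-memory arithmetic alone (KV cache and runtime overhead are ignored here, so real usage is somewhat higher):

```python
# Approximate memory footprint of model weights at different quantizations.
def weight_gb(params_billion, bits):
    """GB needed just for the weights of a dense model."""
    return params_billion * 1e9 * bits / 8 / 1e9

for bits in (16, 8, 4):
    print(f"27B @ {bits}-bit ≈ {weight_gb(27, bits):.1f} GB")
# 16-bit: 54.0 GB, 8-bit: 27.0 GB, 4-bit: 13.5 GB
```

So on a 24GB machine a 27B model only fits at 4-bit; at the same memory budget a higher-precision model would have to be roughly half the parameter count, and parameter count tends to matter more than per-weight precision.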
-
Comment on Can coding agents relicense open source through a “clean room” implementation of code? in ~comp
I don’t think you can call this a clean room implementation. The original code was almost certainly fed into the LLMs as training data.
-
Comment on Is it worthwhile to run local LLMs for coding today? in ~comp
Not Apple’s. I’d recommend people buy Apple hardware now before their fixed pre-inflation contracts run out. For general consumers it’s hard to justify alternatives in the <$1,000 range.
Edit: I just noticed they’ve increased the starting prices on their laptops by $100-$200 this generation. You can still buy an M4 series laptop so I recommend that to any readers looking to save a bit of money.
-
Comment on Proton Mail helped US FBI unmask anonymous ‘Stop Cop City’ protester in ~tech
You can use bitcoin as well. But I really doubt bill serials would do much.
-
Comment on Proton Mail helped US FBI unmask anonymous ‘Stop Cop City’ protester in ~tech
Mullvad accepts envelopes of cash with just your account number on them.
-
Comment on Almost a third of Gen Z men agree a wife should obey her husband in ~life.men
Well the question is:
“A wife should always obey her husband.”
I think it would be better to have the question be even more direct. Something like “A wife’s role is to be subordinate to her husband”. I want to see the % of people that stand by that statement. A simple patriarchy test.
-
Comment on Is it worthwhile to run local LLMs for coding today? in ~comp
teaearlgraycold Link ParentThe M3 Ultra has pretty good bandwidth (820 GB/s) but limited compute compared to high end GPUs.people reported some success on Mac Studio's unified memory but due to the slower memory bandwidth it will be slower than proper NVIDIA setup
The M3 Ultra has pretty good bandwidth (820 GB/s) but limited compute compared to high end GPUs.
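A quick way to see the bandwidth side of this: at batch size 1, decoding has to stream every active weight once per generated token, so memory bandwidth sets a hard ceiling on tok/s. The 13.5 GB figure below assumes a 27B model at 4-bit:

```python
# Memory-bandwidth ceiling on decode speed: each generated token reads
# every active weight once, so tok/s <= bandwidth / weight bytes.
bandwidth_gbs = 820   # M3 Ultra, as noted above
weights_gb = 13.5     # assumed: a 27B dense model at 4-bit

ceiling = bandwidth_gbs / weights_gb
print(f"decode ceiling ≈ {ceiling:.0f} tok/s")  # ≈ 61 tok/s
```

That ceiling is respectable; the catch is prefill, which is compute-bound, and that's where the Mac's weaker GPU compute shows up against a proper NVIDIA setup.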
-
Comment on Is it worthwhile to run local LLMs for coding today? in ~comp
I hardly use local LLMs for coding, but I am pretty sure you'll want a 128GB MacBook Pro if you're looking to run anything remotely comparable to hosted models. Even then, a 256GB or 512GB Mac Studio is more of the right choice to run the best open weight models.
But as you don't seem to be a professional software engineer I don't think I can anticipate your needs. If you just need an LLM that can help write some small scripts and navigate the command line then I can see something useful fitting into 32GB. I've gotten some use out of GPT-OSS-20B on my 24GB MacBook Air at times when I didn't have internet access. But it was really just a fancy natural language CSS documentation lookup tool at that time. Not anything remotely comparable to modern "agentic" coding tools. The context window is much too small for that.
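On the context-window point, a rough KV-cache estimate shows why long agentic contexts get expensive on small machines. The layer/head/dimension numbers below are illustrative, not the actual GPT-OSS-20B configuration:

```python
# Rough KV-cache size: 2 tensors (K and V) per layer, each
# kv_heads * head_dim values per token, at bytes_per bytes each.
def kv_cache_gb(layers, kv_heads, head_dim, context_len, bytes_per=2):
    return 2 * layers * kv_heads * head_dim * context_len * bytes_per / 1e9

print(f"{kv_cache_gb(24, 8, 128, 32_768):.1f} GB for a 32k context")  # ≈ 3.2 GB
```

A few extra GB on top of the weights is nothing on a 128GB machine but a real squeeze in 24-32GB, which is why small-memory setups end up capped to short contexts.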
If you don't need the AI to be local then the free tiers for cloud hosted models will be your best option.
-
Comment on Lenovo’s new ThinkPads score 10/10 for repairability in ~tech
But think of the respect you could earn by quitting your job over the IT department’s choice of OS.
-
Comment on Apple announces Macbook Neo, a new budget Mac in ~tech
Somehow I can keep it to under 5 tabs most of the time. Maybe 12 max in rare circumstances.
I also don’t generally keep notes. Not sure if I’m missing out or just have good memory.
-
Comment on Apple announces Macbook Neo, a new budget Mac in ~tech
Apple does a good job with memory compression, so it’s not super easy to compare between operating systems.
People that use Docker are losing a few GBs to that alone. And then there’s local AI tasks - although you’d need a large amount of memory for that to be more than a toy.
-
Comment on Apple announces Macbook Neo, a new budget Mac in ~tech
To me this makes a lot of sense as a glorified iPad with an attached keyboard. Same RAM config as the M1-M3 iPad Airs, and I didn't see anyone complaining about having only 8GB there.
-
Comment on Apple announces Macbook Neo, a new budget Mac in ~tech
And no notch on that model.
Don't go to court. Even readily complying a few times per year will be worth it.