kacey's recent activity
-
Comment on How a White Nationalist Club Thought It Found Safe Haven in a Montreal Gym in ~society
-
How a White Nationalist Club Thought It Found Safe Haven in a Montreal Gym
8 votes -
Comment on AI was eroding trust in my classroom — so I got rid of typed papers and bought my students notebooks instead in ~life
kacey Link Parent
No worries. I'll bow out; I was concerned that people in this thread were reading a study, thinking it reflected reality, and attempting to act in a research-driven manner. If we're all kinda winging it, that's fine. Rigor isn't necessary anywhere; it's just annoying to see people wave scientific papers around to prove their point when vibes were always a perfectly reasonable way to ground your decisions.
I won't dig into your perspective; I disagree with it, but on different grounds than yours (i.e. it seems the field can't agree on anything, so claims can't be made either way), so it's not terribly productive to discuss it further.
Regarding your last point about less contentious interventions -- I don't think the first two are less contentious; otherwise we would simply do those, no? Yes, I would love to have fewer students so that I can work with them more directly, and paying teachers more would hopefully increase retention of good teachers and encourage more people to join the profession, but we pretty clearly don't want to do that as a society right now, so I need to think about other ways of improving my pedagogy given the system I'm in.
Yep, fair enough. I wasn't trying to suggest that you do anything different. Mostly, I'm suggesting that Business Insider stop posting articles which shape a narrative that laptops -- not systemic issues in the school system -- are the problem w.r.t. academic achievement.
(life issues are draining my brain at the moment, and I apologize for any bluntness in the above comment. I'd normally give it a few more editing passes, but I have several issues to attend to. Good luck with your class; I'm glad you're trying to do what's best for your students)
-
Comment on AI was eroding trust in my classroom — so I got rid of typed papers and bought my students notebooks instead in ~life
kacey Link Parent
Yeah ... I don't know how much I trust that meta-analysis.
[Studies] had to meet six screening criteria for inclusion in the meta-analyses. Published and unpublished studies meeting these criteria were included. First, the studies had to compare how recording typed (e.g., laptop, desktop computer, tablet) versus handwritten lecture notes affected note-taking and/or achievement outcomes. Acceptable achievement outcomes were scores on immediate or delayed lecture-related post-tests, course exam or quiz grades, and final course grades. Second, participants needed to be college students. Third, the instructional materials must have been either a live classroom lecture or a pre-recorded video lecture in a laboratory setting. Studies including remote learners were not included. Fourth, only experimental and quasi-experimental studies were included. No survey-based or observational studies were included. Fifth, lectures needed to pertain to college-level topics; studies whose instruction focused on list learning or other non-meaningful learning outcomes (e.g., Aragón-Mendizábal et al., 2016) were excluded. Sixth, studies needed to have measured a direct effect between hand-written and typed note-taking and provided adequate information and data about the measured effects, such as test statistics that compare handwritten versus typed note-taking or descriptive statistics that included information such as sample sizes, mean scores, and standard deviations for both note-medium groups.
For context: I'm used to seeing meta-analyses use a robust framework for rejecting studies that are poorly designed or clearly biased. The fact that they also included non-peer-reviewed studies is kind of maddening, too. For example, why were remote courses removed from the analysis? They were fine with pre-recorded lectures in a lab, so why not at home? The authors even note that inclusion bias could be screwing up several other meta-analyses which have shown no result instead of demonstrating a superior medium (see Findings from Previous Meta-analyses).
I don't have the brain space atm to read through this paper (How to interpret discrepancies in empirical results from educational intervention studies; pre-print here), but at a skim it confirms my biases at least: that educational intervention studies are extremely difficult to design, that they get blown out of proportion by people with an agenda, and that we basically know nothing and should stop taking away tools from students.
(edit) Sorry, just clarifying my point, because I'm pretty scatterbrained atm. People have been complaining about "kids these days" since time immemorial. Equally, seemingly everyone has an opinion about how kids should be raised, or what the morally correct way of doing things is. I think it's all bupkis and pseudoscience, and that you'd find a dramatically larger effect size on student performance with other, less contentious interventions: pay teachers more, decrease class sizes, change incentives towards learning instead of rote memorization/cheating.
(and for context, I can handwrite ridiculously fast anyways, and preferred switching between longhand and typed notes in university, based on the class. Calculus, discrete math, etc. got the wacom tablet, whereas psychology, linguistics, etc. got the keyboard. Depriving students of tools only makes them less capable humans overall)
-
Comment on ‘I took two bites and had to spit it out’: US candy makers are phasing out real cocoa in chocolate in some products in ~food
kacey Link Parent
ChoViva uses palm oil [...]
Ach, wasn't aware of that. It'd be really nice if fully hydrogenated vegetable oils could overcome the stigma and gain broader acceptance, for precisely this reason :/ although people hate canola because it's a seed oil, so perhaps my expectations are too high.
-
Comment on ‘I took two bites and had to spit it out’: US candy makers are phasing out real cocoa in chocolate in some products in ~food
kacey Link
So the article's tone is dismissive, but my hot take is that cocoa alternatives are actually great! I've been following one that was mentioned in the article -- ChoViva (née Qoa and Nocoa) -- for quite a while, and their process is pretty sensible: the chocolatey flavour we associate with cocoa comes primarily from its fermentation and roasting (see, for example, Nile Red discovering this first-hand), so fermenting other foodstuffs was a clear path forwards to a more sustainable snack. Cocoa pods are, after all, exclusively grown in highly exploited countries as part of an extractive industry which cares little for pesky things like environmental devastation and human rights, so finding alternatives -- especially considering our growing and insatiable desire -- is, imo, an objective good. Not everything needs to be a perfect, pure chocolate experience -- that's why we have milk chocolate, white chocolate, "chocolatey" treats, Hershey's butyric-acid-filled chocolate, etc.
Couple of quick side notes:
- Here's one of the first taste tests where I read about ChoViva. It'd be cool to see how the flavour has changed since the oat-based days!
- IIRC, one of the founders was a research scientist working on much the same problem but for coffee -- the flavours for that are also primarily built up during fermentation and roasting, and they applied many of the same techniques to their chocolate product.
(tangent note: hah, I missed this because I have a filter excluding The Guardian from my feed. Explains my initial reaction of taking two glances at the article and spitting it out)
-
Comment on Pace of global warming has doubled since 2015 in ~enviro
kacey Link
Hausfather also tells Carbon Brief that the “most likely driver” of the acceleration in warming is “reductions in planet-cooling sulfur dioxide emissions that have been masking some of the historical warming that we are now rapidly experiencing”.
I wonder whether this (and other results) will increase the palatability of solar radiation modification geoengineering approaches, as we determine with increasing confidence that previous warming signals were being hidden by -- now reduced -- sulfur particulate emissions. I recall some intense debates with friends a decade+ ago about whether geoengineering approaches will/should end up being deployed to blunt the worst effects of climate change and buy time for our ineffectual leaders. If this winds up happening, I'll be none too happy about having won that argument.
-
Comment on Is it worthwhile to run local LLMs for coding today? in ~comp
kacey Link Parent
Do you think making fun of OP is going to change their opinions? Or that bullying them publicly will make people more sympathetic to your cause? I didn't see anyone glazing Sam Altman in this thread.
-
Comment on Is it worthwhile to run local LLMs for coding today? in ~comp
kacey Link Parent
Why do you feel it's appropriate to mock someone else's perspective?
-
Comment on Electricity use of AI coding agents in ~enviro
kacey (edited) Link Parent
Oof, OK. I think my comment was poorly written, and that's causing some confusion. Thankfully, Claude can help explain it 😅 I'm sorry for writing a hackle-raising message; I occasionally have very subtle views on things, and they often don't survive translation into text.
kacey:
(posted my original message, minus the context quote)
Could you help me understand this comment? It's a bit wordy and confusing.
- Is this making a claim that a lot of people don't find LLMs to be useful?
- Is this making a claim that LLMs aren't useful in a wide variety of contexts?
- Is the author implying that they're avoiding AI entirely, or stating something like "they're not really useful, people are just hallucinating"?
Claude:
This is a fairly dense comment (K: oof) — let me break it down in relation to your three questions.
On whether the author is claiming people don't find LLMs useful
Not quite. The author is making a more precise point: that the value people get from LLMs doesn't justify the cost, specifically the energy cost. It's a cost-benefit argument, not a pure "nobody likes it" claim. The phrase "insufficiently valuable vs. the costs" is the key — some value is acknowledged, but it's framed as not enough to justify the power draw.
On whether they're claiming LLMs aren't useful across many contexts
This is where the METR/MIT reference comes in, and it's the trickiest part of the comment. The author is referencing research suggesting that LLMs don't integrate smoothly into real-world work as currently practiced — not that they're useless in principle, but that existing workflows, business processes, and organizational structures aren't set up to actually benefit from them. The "which goes both ways" aside means this cuts against both AI boosters ("it'll transform everything!") and pure skeptics ("it's useless") — the reality being more nuanced: it could be useful, but only after widespread reforms across industries, which is a very high bar (K: well actchually, I was claiming that LLM tools and business both need to evolve, though not necessarily to the same degree).
On whether the author is avoiding AI or calling it all hype
Neither, really. They're not making a personal lifestyle statement, and they're not calling LLMs a hallucination or scam. Their position is more structural: the deployment of AI at current scale is outpacing both the energy infrastructure needed to support it and the organizational reform needed to make it actually productive. The ROI isn't there yet, and the costs (energy, grid strain, rising household electricity bills) are already being felt by ordinary people now.
The core argument, simplified
"Even setting aside capitalism-specific critiques, AI data centers are growing too fast for the power grid to keep up, the energy costs are real and already hurting people, and the productivity gains to justify those costs require societal changes that haven't happened yet."
For context, I'm currently nudging an LLM in the other window to work on an ML project of mine, and I'm banking the next several years of my life (and a great big chunk of my savings) on a long-shot bet that LLM tooling is useful in some obscure manufacturing applications.
-
Comment on Hacker used Anthropic's Claude chatbot to attack multiple government agencies in Mexico in ~comp
kacey Link Parent
Hrm. Perhaps? I see that as a distinct product from what occurred here (but it does seem promising in its own right!).
Without further details, I'd imagine that the Mexican agency "hacks" were the old standard stuff: ancient PHP servers running default passwords, easily spearphished employees with misconfigured email servers, critical services that don't rely on mutual authentication, etc. Finding critical CVEs, like with Codex Security, is pretty cool! But more important is ensuring that your customers don't freely publish internal credentials on their homepage, allowing for hundreds of millions of SSNs to be leaked. For example :3
-
Comment on Hacker used Anthropic's Claude chatbot to attack multiple government agencies in Mexico in ~comp
kacey Link
So ... not trying to sound like an AI apologist, but does this suggest that LLMs would be fantastic as a cheap red team to throw at nearly everything for hardening? I'm aware that software vulnerability detection with LLMs has been a dud (see the curl bug bounty conversation), but maybe network security is a better fit? They're probably more capable of launching spearphishing and social engineering attacks than existing tools, at least.
-
Comment on Is it worthwhile to run local LLMs for coding today? in ~comp
kacey Link Parent
It's also really got me wondering if the current AI boom really is a bubble.
Not the OP, but IMO -- probably still a bubble, if only because there's still a gap between revenue and investment. Competitors appear capable of keeping pace with Anthropic/OpenAI at a steady ~6-12 month gap in capability, and they're doing so for pennies on the dollar. If the large American AI firms can't demonstrate a way to keep their advantages proprietary, then a lot of the R&D investment going into these systems will end up being written off: why would consumers/companies pay 10x for Anthropic/OpenAI when another service is available for a fraction of the price?
But yeah, agreed that my expectations were blown out of the water while working with some frontier models. Even if they stay just as they are now, this will be massively disruptive to nearly all work done in front of a computer.
-
Comment on Is it worthwhile to run local LLMs for coding today? in ~comp
kacey Link Parent
Uh ... individuals buying computers aren't driving up the price of components; it's OpenAI buying 40% of the world's RAM manufacturing capacity and the like. I'm sure Akir has reasons for wanting to code locally, and anyways, isn't their desire to use their computer just as valuable as "the rest of us"?
-
Comment on Is it worthwhile to run local LLMs for coding today? in ~comp
kacey Link Parent
Sorry, I didn't mean to offend. Apple products were -- IIRC -- one of the few computing products that haven't seen a price jump from the recent spike in the cost of everything. For people who really need the extra capacity, selecting the beefiest spec you can makes more sense now than at any point between now and ~2029, since prices are only going to go up.
Genuinely, if Akir has the spare cash and has a reason to spec up for the next two years, now will probably be the most cost effective time to do so. Who knows what happens next. If you disagree, please feel free to make an argument ...? I can quote random blogspam if it'd help make mine, but I'm hopeful that we can discuss this point instead of screaming at each other.
-
Comment on Is it worthwhile to run local LLMs for coding today? in ~comp
kacey Link Parent
Btw, may I ask if you've given Qwen3.5 27B a shot? Some -- admittedly kinda bad -- benchmarks figure that it's about as good as Anthropic's cheap model (Haiku 3.5).
-
Comment on Is it worthwhile to run local LLMs for coding today? in ~comp
kacey (edited) Link
I wouldn't, tbh? Thoughts, as a case-wise analysis:
- Either the AI bubble pops, and all the hardware in those data centres (plus all the purchase orders and contracts) go up in flames, which will probably have ... an effect on computer hardware prices,
- or the AI bubble succeeds, and we have fully autonomous AGI-class intelligences running amok in the cloud. The team responsible for getting the current state-of-the-art local model (Qwen3.5-397B-A17B) running was fired from Alibaba in the last week, so it's not terribly certain that local models will keep improving as fast as they have over the last year. Which means you'll probably want to use cloud models anyways.
(edit) Ah, two addendums:
- If you want to have better hardware over the next, say, two years for some other reason (e.g. gaming), now's a decent time to spec up. IMO.
- In case it helps, as a point of data: I've been running a 4-bit quant of Qwen3.5 35B A3B on a PC (9950X w/ 64 GB RAM and an RTX 2060), which has been inferring at ~20-30 tokens/second (depending on batch size; it trends closer to 30 than not). It still requires handholding, but it's mostly capable of handling a decently technical workload (atm it's implementing an ML project I've been mulling for a while, and doing OK enough). It'd be fine for simple web apps or quick one-off scripts.
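(If you're wondering whether a given quant fits on your own box, the back-of-envelope math is simple -- a rough sketch below, with my own assumed numbers, not anything from a vendor spec sheet. Real quant formats carry some extra per-block metadata, and you need headroom for the KV cache and context on top.)

```python
# Rough memory estimate for running a quantized model locally.
# All figures are back-of-envelope assumptions, not vendor specs.

def quant_weight_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GB at a given quantization level."""
    return n_params * (bits_per_weight / 8) / 1e9

# A hypothetical 35B-parameter model at 4 bits per weight:
weights_gb = quant_weight_gb(35e9, 4)
print(f"~{weights_gb:.1f} GB for weights alone")  # ~17.5 GB
```

So a 4-bit 35B model wants ~17-18 GB just for weights, which is why it fits comfortably in 64 GB of system RAM but not on a typical consumer GPU without offloading most layers to the CPU.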
-
Comment on Electricity use of AI coding agents in ~enviro
kacey (edited) Link Parent
Sorry, I don't think I explained myself well, so you're asking a question about a point I didn't bring up (i.e. "carbon emissions aren't caused by individuals' actions", which is false imo).
My intent was to claim that individuals cannot effectively leverage their market participation in a way that incentivizes corporations and governments to become less impactful to the climate. I put forward that this is for (at least) two reasons:
- Consumers can often only choose between options that all emit an equally large amount. My grocery store only stocks one type of tomato; I can't choose between the less and more impactful fruit.
- Market-driven optimization seems likely to push actors into local optima, whereas we need a large push to leap out of that and into states that could realistically alter our current climate trajectory. An individual commuter can choose to buy a Prius or a Tesla, but it takes a provincial or federal initiative to buy a subway.
-
Comment on Electricity use of AI coding agents in ~enviro
kacey Link Parent
Fair enough! Apologies; I don't have very comprehensive knowledge of Star Trek, and only casually watched it as a kid.
To be clearer with my original point: using a fictional TV comparison (as you noted) doesn't seem compelling. The fictional population of the fictional Star Trek universe behaves in network-approved ways that a team of writers agreed on, so their fictional opinions on AI-generated content bear only slightly on the conversations we're having about decidedly non-fictional life.
Perhaps to be even clearer: if Gene Roddenberry rose from the dead, proclaimed that an AI stealing art is actually cool and normal, then promptly collapsed back into a heap, I would give an equivalently fractional slice of a damn about his opinion.
(edit) And just to underline this: I agree with you! These conversations about AI-generated content often end up roping in the notion of intellectual property rights, portraying them as an obvious social good. That's a heck of an argument, and should hopefully be treated as orthogonal to the current issues we're dealing with.
-
Comment on Electricity use of AI coding agents in ~enviro
kacey Link Parent
To use a fictional comparison, Star Trek’s holodeck works in a very similar way. The holodeck computer in Star Trek also generates things from records of existing people, harvested data, artwork, published stories, historical data, and reference material, but nobody treats that as some inherent moral crime or art theft, because it exists in a post-scarcity world.
(I agree with your point, but imo the Star Trek comparison isn't terribly compelling ... no one has concerns about the holodeck in fiction because none of the writers wanted to deal with that quagmire. See also the ethics -- and lack of objection to -- simulating people from your day-to-day life 🤢)
Both this and the follow-up cover the acts of literal Hitler-following Nazis who are operating "Active Clubs" ("decentralized cells of white supremacist and neo-Nazi groups") in Canada.
This doesn't get nearly as much air time as its American equivalents, but given that there are several other Canadians here, it might be wise to understand what's going on in our backyard too.