post_below's recent activity
-
Comment on Opinion piece: I am a 15-year-old girl. Let me show you the vile misogyny that confronts me on social media every day. in ~life.women
-
Comment on My personal AI assistant project in ~tech
My apologies. As I mentioned in a different reply, there are a lot of open source OpenClaw alternatives now that look more like security bad dreams than security nightmares. One upside of OpenClaw, though: now that OpenAI is sponsoring it, there's a fair chance you'll be able to keep using a subscription with it, as opposed to paying API prices. The other big model companies have locked that down in the last couple of weeks.
If you have the time though, the exercise of rolling your own gives you great insights into model behavior.
-
Comment on My personal AI assistant project in ~tech
- Do you mean how messages are passed between my system and Signal? A bridge runs on the PC side and passes messages back and forth. It also keeps the agent in line. The session is never handed off, at least not the way I'd define it. Everything except the remote Signal app runs on my PC, sandboxed where appropriate.
Maybe you're asking about the shopping cart session? If so I just login from my phone.
- I took a look at exe.dev when it launched, it didn't do anything I found useful. I can run all the VMs I want for free on my PC or on a colocated web server. But really I find that with my existing setup and scaffolding I rarely want a VM.
However, I've read enough posts like yours about exe.dev to realize that it's much more useful for some people than I imagined it would be when it launched. Last I heard you could use Opus for free in it, but I'm guessing that with the recent Anthropic OAuth developments that's no longer the case?
-
Comment on Lithium plume in our atmosphere traced back to returning SpaceX rocket in ~space
> How much atmospheric pollution will we be seeing once SpaceX has 30,000+ satellites in LEO, and 6,000 of them are intentionally being deorbited to burn up in our atmosphere every single year?
Damn, 6,000/year, that's a sobering number. Then factor in Starlink competitors and whatever other LEO applications companies come up with.
-
Comment on My personal AI assistant project in ~tech
Yeah the group chat collaborative bit ended up being another part that was cooler than I expected.
I really like your abstraction angle, it implies a less dystopian future than I've been imagining.
-
Comment on My personal AI assistant project in ~tech
If you ever manage to find the time, I'll be curious to hear what you come up with.
Alternatively the world of "claws" (OpenClaw alternatives) is growing fast, some of them even think about security. Maybe one of those would do what you want.
-
Comment on The Claude C Compiler: what it reveals about the future of software in ~tech
Agreed on all counts. The "full automation" thing is a fever dream that anyone who's used agents professionally knows is impossible. Maybe someday but definitely not now.
-
My personal AI assistant project
Let me start off by saying that I'm exhausted by AI hype. Being interested in LLM agent technology (AI agent hereafter for brevity) means skimming over a lot of hype for one or two useful, semi-reality-based bits of information. Maybe the part I find most frustrating is how effective the hype is. I don't know if there's ever been a hype cycle like this. Probably a big part of the reason is that the internet has already proven, within living memory for most people, that technological revolutions really can change everything. Or mess everything up. Either way, they generate a lot of economic activity.
So this post is not that. I'm not going to tell you that AI agents are the second coming of Christ. I'm not selling anything.
Fairly early into learning about AI agents I wanted a way to connect to the agent remotely without hosting it somewhere or exposing ports to the internet. I settled on Tailscale and a remote terminal and moved on; I rarely used it. Somehow the tiny friction of "turn on Tailscale, open terminal app, connect, run agent" was enough to make it not feel worth it.
I know I'm far from the only person who had the same "I want it remote" thought. The best evidence: OpenClaw. It's just one of those things that everyone naturally converges on.
If you're not familiar with OpenClaw, the TLDR is: a former founder with more money than he'll ever need vibecodes a bridge between instant messenger apps and LLM APIs. Nothing about it is technically challenging or requires solving any particularly hard problems. It almost immediately becomes the fastest growing GitHub repo of all time and is currently at number 14 by star count. It blew up the (tech) internet like very few things ever have. Within months he was hired by OpenAI.
OpenClaw now does more than just connect messaging and agents, but I believe that one piece is the killer feature. My Tailscale terminal solution, combined with a scheduled task or a cron job and some context files, could already do all of the things that OpenClaw can do, and countless people had already implemented similar solutions. But I think it was the tiny bit of friction OpenClaw removed that was responsible for a lot of its popularity.
I thought that was interesting but I have no interest in the security nightmare that is OpenClaw, or the "sentience" vibe for that matter, so I built my own tool.
Essentially it's just a light secondary harness combined with a bridge between Signal and Claude Code. It does some other things too, things I wished existing harnesses did, some memory and guidelines, automated prompts and reminders to wake the agent up and have it do stuff, some context to give the agent some level of persistence, make it less LLMy, less annoying. None of that is particularly interesting though.
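To make the shape of that concrete, here's a minimal sketch of the bridge loop. This is not my actual code: the transport and agent calls are injected stand-ins for things like signal-cli on one side and a headless agent invocation on the other.

```python
# Skeleton of a Signal <-> agent bridge: poll for inbound messages,
# hand each one to the agent, send the reply back. The transport and
# agent are passed in as functions, which keeps the loop itself trivial.

def bridge_step(receive, run_agent, send):
    """One iteration: drain inbound messages, answer each one."""
    replies = []
    for msg in receive():       # e.g. wrap `signal-cli receive` here
        reply = run_agent(msg)  # e.g. a non-interactive agent call
        send(reply)             # e.g. `signal-cli send`
        replies.append(reply)
    return replies

# Toy run with a fake transport, just to show the flow:
inbox = [["what's on the list?"]]
sent = []
out = bridge_step(lambda: inbox.pop(0),
                  lambda m: f"agent saw: {m}",
                  sent.append)
```

In practice the interesting parts are the guardrails around `run_agent` (the "keeps the agent in line" bit), not the loop.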
Once I got it working (MVP took less than a day) and started playing with it, the OpenClaw phenomenon made a lot more sense. Somehow having the agent in a chat interface, with almost zero friction (just open the chat and send something) was cooler than it had any reason to be.
I can't explain it any better than that at the moment. Not only was it kinda fun, it lent itself to a whole range of "what ifs". What if it could do X? What if I wrote a tool that gave it Y capability? I've been experiencing that for some time, but somehow agent in your pocket has a different feeling.
Here's an example of a "what if". What if it could do our grocery shopping? I definitely want that. I already had a custom browser tool that I built for agent coding assistance so I was most of the way there. It was just a matter of teaching the agent to login and navigate a website, something they're already trained to do. Some hand holding, a few helper scripts, and an evening's worth of hours later and I had it working. The agent can respond to a shopping request by building a shopping list based on our most recent orders, presenting it to us for approval/edits in a Signal group chat, doing searches for any additional product requests and adding the finalized order to the cart. It could also checkout the order and schedule the delivery time but I'm doing the last 2 clicks manually for the time being. It's an idiot savant, it seems like a bad idea to give it access to my credit card. Maybe eventually.
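For a sense of the list-building step, here's a hypothetical rendering of that logic in Python. In reality it lives in the agent's prompt, and the frequency heuristic below is mine, purely for illustration:

```python
from collections import Counter

def propose_list(recent_orders, extra_requests=()):
    """Build a draft shopping list: items that appeared in a majority of
    recent orders (the "staples"), plus anything explicitly asked for.
    Illustrative only; the real heuristics are in the agent's prompt."""
    counts = Counter(item for order in recent_orders for item in order)
    threshold = len(recent_orders) / 2
    staples = [item for item, n in counts.items() if n > threshold]
    return staples + [r for r in extra_requests if r not in staples]

orders = [["milk", "eggs", "bread"],
          ["milk", "bread"],
          ["milk", "coffee"]]
draft = propose_list(orders, extra_requests=["coffee", "limes"])
# draft is then presented in the group chat for approval/edits
```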
The fact that I can handle shopping with a couple of Signal messages feels effortless in a way that handling it by connecting to my PC remotely via a Tailscale terminal wouldn't have. Especially when I can include people in the loop who have no interest in tailscaling anywhere. Everyone can use messaging apps.
I imagine before long solutions like this will be built in, either in the grocery websites and apps, or into the frontier harnesses themselves. There will probably be agents everywhere, for better or worse. Probably I'll wish that the agents would all fuck off. In the meantime it's exciting how easy it is to get these tools to do useful things.
-
Comment on The Claude C Compiler: what it reveals about the future of software in ~tech
I appreciate that people are calling stuff like this out on most AI articles/blogs posted in Tildes.
It's almost always a post from someone at an AI company, usually the CEO, and they almost always focus on, or exaggerate, the positive while glossing over the negative. Although this article was more honest than most.
The author is right that as a proof of concept it was impressive. Putting aside the code quality, imagine reading about this just 4 years ago... A swarm of AI agents, orchestrated by other AI agents, writing a (technically) working compiler? Maybe the craziest thing is how quickly it's becoming normal.
And also, I don't think enough time was spent on the fact that it's not useful software, that it's not even close to being useful software. The only path to it someday being useful software is humans in the loop, or theoretical future model advances that do not follow linearly from the current progress.
-
Comment on Attention economics, software engineering, and AI in ~tech
That's true, in a large enough organization, where people are more interchangeable, abstractions rule.
That's a big part of why LLM adoption has been so rapid: the disposability of people, championed by corporate culture. Removing human brains from more and more of the loop isn't even a new idea in that context.
> My commentary is only that unqualified statements like "It's absolutely a straight upgrade" or "it's undeniably better in every other way" rub me the wrong way.
I get that, though I'd say in this case "it's absolutely a straight upgrade" is a person's experience, where they're well qualified to make absolute statements.
As far as my absolute goes, I'll happily add an "outside of corporate environments" qualifier. Also, if we were talking about a lot more than 300 lines of code it would be more nuanced.
-
Comment on Attention economics, software engineering, and AI in ~tech
My philosophy has always been: just write the 300 lines rather than pull in the whole mess that's trying to solve a bunch of use cases you don't need. I don't mean copy them from the library wholesale; that usually takes longer anyway since you have to disentangle what you need from the web of interrelations. At most you're inspired by a novel strategy the library is using.
Thinking about backwards compatibility, edge cases and performance optimizations is just part of writing code. You know the scope, you know what it needs to do, you're thinking about performance while you're writing it. You may not get everything right the first time, but you can iterate, and when you're done you'll have added a few hundred tightly scoped lines of code to your project rather than 30,000. It takes longer but it's undeniably better in every other way. It will run faster, it will be bug free (once you've fixed the bugs), and it will be easier to maintain and extend. We're talking about 300 lines of code after all; it's not difficult to reason about. In most languages it takes minutes, as opposed to hours, to read and understand 300 lines of code.
teaearlygrey's LLM shortcut is likely doing something similar. I could be wrong, I've never actually asked an LLM to extract functionality from a library, but based on my experience if you said "figure out how this works and then make a plan to implement the functionality in my application over here" what you're likely to get in the end is a somewhat less elegant version of what would happen if you wrote the code yourself. Or maybe you'd get a copyright violation, it depends on the model and your prompt. You can iterate on that too.
But the point is that drawing inspiration from existing code, and sometimes even copy/pasting (pour one out for stackexchange), has always been a part of software development. One person solves a problem and 1000 people copy or import their solution. You can make a case that it sucks that LLMs can accelerate the process, but they didn't invent it.
-
Comment on Something big is happening in ~tech
That's really cool, let me be the first (Tildonite) to validate your experience! AI can be super fun while being genuinely useful.
The newer generations of coding agents are perfect for what you're describing, and you're obviously using them well to get the results you're getting.
Just one caveat: If you decide to publish some of your apps, don't collect user data. And be cautious about your own PII, secrets and financial info. Without being able to (fully) read the code you can never know if you're being responsible with that data and it's pretty much guaranteed at this point that you wouldn't be.
Outside of that, go wild.
-
Comment on Something big is happening in ~tech
(edited) - Exemplary
Since the author is talking about coding as the killer use case that proves all the future use cases are coming... I want to add a sanity check from the perspective of someone with decades of software engineering experience and as much experience with modern LLM agents as anyone has at this point.
But first I want to acknowledge that he's right about a lot of what he's saying. These tools are more powerful than most people realize at this point. They absolutely are going to change everything on a scale not seen since the widespread consumer internet. And it's going to happen faster than the internet did. It's going to happen too fast.
That said, here's how you know you're reading hype: he never mentions that these tools are also drooling idiots. Maybe he really doesn't know. It's hard to imagine how that could be true, but I want to allow for the possibility that he really believes everything he's saying.
What I mean by that is that this author, and so many others before him, seem to be skipping over big chunks of the current reality and leaping forward into what might happen in the future. The truth is that, for coding, AI agents are miraculous. He's right about that. And also, they absolutely cannot autonomously create complex production level code to professional human standards. They just can't.
However, they appear to. The SOTA is in this odd place where agents can write large, fully functioning, applications that meet most of the specs and pass all of the tests. Which is mind blowing, groundbreaking, science fiction level stuff. While at the same time under the hood there are security flaws, bad patterns, wildly varied conventions and style, performance problems, redundancy, insane verbosity and so on. And the only thing that can fix those issues (or stop them from happening in the first place) is a human in the loop.
So on the surface it looks like a miracle, but underneath it's a mountain of tech debt and vulnerabilities just waiting for the right moment to fuck up everyone's day.
I feel like I should establish that what I'm talking about generalizes, as opposed to being the result of my not understanding how to use the tools. I've been using them extensively for quite a while now (in AI years). I have scaffolding and custom built tools and extensive initial context and skills and commands and hooks and custom sub agents and all the things. Each of them iterated and pruned and updated for the latest generation countless times in an attempt to make the agents more reliable and less idiotic.
And it works: some of the scaffolding I came up with in 2025 is now built into the latest SOTA harnesses. I don't say that to paint myself as some sort of visionary; this is all new territory that everyone is figuring out together, and a lot of people have organically converged around various obviously effective strategies that the frontier labs then adopted.
My point is only that I'm holding them right. I can get coding agents to do all sorts of exciting and useful things and I believe I have a solid, realistic understanding of what they're capable of and what their limits are. With humans in the loop they redefine software engineering. Without humans in the loop they are just very very impressive tech demos.
That could all change, they could get to the point where there don't need to be humans involved. If that happens then everything the author is saying is true and he's maybe not even stating it strongly enough. But it hasn't happened yet. The people who are saying it has are either deluding themselves or exaggerating for cynical reasons. I expect the fallout of that delusion to be difficult to miss in the software industry in the coming months and years.
I can prove it to you. (You can skip to the TLDR at the end if you don't care about using coding agents.)
This assumes you have a subscription with one of the SOTA companies that covers the necessary tokens (for coding you want Claude Code with Opus 4.6 or Codex 5.3 high).
First you'll want a decent AGENTS.md or CLAUDE.md for initial context. You can find decent starter context online if you don't want to spend too much time. Pick something reasonably lean, you don't want to use up too much context out of the gate. We can skip all of the more in-depth stuff for now.
Next, give the agent a general spec for a non-trivial application that has a lot of user facing surface area. The more varied the surface area the better. It should ideally be a big enough application that the agent can't one shot it in a single context window. With current context limits that isn't too hard to do (unless you're paying a premium for an extra large context window). It should attempt to solve a problem that's not completely overdone (no glorified to do list apps).
Next have the agent work your prompt up into a detailed implementation plan and have it write that plan to an .md file. If you have the time ask it to run a Q&A session with you to refine the plan.
Then instruct it to implement the plan while keeping track of its progress. This is a key step because you'll need to feed the plan and current progress into a new session when your agent runs out of context, or you can have the agent hand off to a new version of itself automatically, or let it do context compaction and soldier on in the same session. Or if you have a really big subscription you can have an orchestrator agent run a bunch of sub agents automatically until the plan is finished. There are various ways to do it, each with pros and cons. Make sure the plan it writes includes a detailed testing phase so that it can iterate on any issues until it has something that works. You'll want to have some sort of browser (or device) automation wired up so it can test the UI/UX along with the backend. That's easy to do these days; the providers have solutions already built, or you can ask the agent to do it for you.
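To make the feed-it-into-a-new-session route concrete: the handoff is basically composing a resume prompt from the plan file and the progress log. A minimal sketch; the wording is illustrative, any phrasing that pins the agent to the plan works:

```python
def resume_prompt(plan_md, progress_md):
    """Compose the context handed to a fresh session when the previous
    one runs out of context: the full plan, the progress log, and an
    explicit instruction to continue from the first unfinished step."""
    return (
        "You are resuming a partially completed implementation.\n\n"
        "## Plan\n" + plan_md + "\n\n"
        "## Progress so far\n" + progress_md + "\n\n"
        "Continue from the first unfinished step. Update the progress "
        "log after each step you complete."
    )

# Example handoff built from two (hypothetical) files:
prompt = resume_prompt("1. schema\n2. API\n3. UI",
                       "1. schema: done")
```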
Then, assuming you've given it sufficient permissions so that it doesn't need to check in with you, go do something else for a while. Sleeping is a great option.
When you wake there's a fair chance (but not whatsoever guaranteed) that you'll be waking up to a working application that looks quite a lot like what you asked for. If it's your first attempt you're welcome to take all the time you need to wait for the world to stop spinning.
If the app isn't working yet, you should be able to prompt the agent into getting it there fairly easily, though it depends on how hard the set of problems you're trying to solve is.
Once it's working it will be hard to deny that you just experienced some version of the future.
But now the next step is to ask another model to audit the codebase. For example, if you built it with Opus, ask Codex to take a look. It shouldn't cost more than about $5 in tokens for a thorough audit, a lot less if the codebase isn't too big. At the same time, start a fresh session with your main model and ask it to do an audit too. Have both agents write their findings to a file when they're done.
I guarantee the list of issues they find will be extensive and that it will reframe your perspective on the miracle you just experienced. But you're not quite done: instruct your main agent to fix all of the issues and then repeat the audit process. Prepare for another long (but shorter) list of issues. Keep repeating until the agents stop finding issues. Note that the audit prompt is important; it needs to be thorough. You can download pre-made skills for that if you're not a coder. Multiple specialized auditors with different disciplines work best (security, logic, maintainability, etc.).
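The audit cycle amounts to a loop that only exits when every auditor comes back empty. A sketch with the agent invocations stubbed out as plain functions (all names are hypothetical, not any real harness's API):

```python
def audit_until_clean(run_audits, fix, max_rounds=5):
    """Cross-model audit loop: run every auditor, stop when all come
    back empty, otherwise hand the combined findings to the fixer and
    repeat. `run_audits` and `fix` stand in for real agent calls."""
    history = []
    for _ in range(max_rounds):
        findings = [issue for auditor in run_audits for issue in auditor()]
        history.append(findings)
        if not findings:
            break
        fix(findings)
    return history

# Toy run: a fake auditor whose findings shrink each round.
queue = [["sql injection in /search", "N+1 queries"],
         ["missing auth check"],
         []]
rounds = audit_until_clean(run_audits=[lambda: queue[0]],
                           fix=lambda issues: queue.pop(0))
```

The `max_rounds` cap matters: in practice the loop sometimes oscillates instead of converging, and you want it to stop rather than burn tokens forever.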
Once the agents are satisfied that the codebase is perfect, take a look at the codebase yourself. Or if you can't fluently read code, bribe someone who can. If you are really doing a best effort code review, I absolutely guarantee you will find more issues, some of them shocking.
And that's doing the bare minimum to wrangle the agents. My overlong post could be 8 times as long with instructions on how to make the agents suck less, and still, at the end of the process, you would be finding serious issues.
That's the (real) current state of the art in autonomous coding agents, and no amount of prompt engineering can navigate around it.
A human in the loop, on the other hand, makes for a very different outcome; that is, until you get overconfident and let the agent write too much code without thorough review. Then, again, issues are guaranteed.
TLDR
All of this to say: It's still safe to ignore the hype from people like the post author. The AI apocalypse could come at any time, but it's not on the horizon yet based on the current state of the tech.
And also, listen to the more level headed people who are saying this is a paradigm shift, because they are not lying.
-
Comment on How The New York Times uses a custom AI tool to track the “manosphere” in ~life.men
> if it's not painful and heated and time consuming to hear what the other side is saying
That's a good point, finding out what the demographic is talking about without the mental and emotional load of wading through the toxicity could be useful outside of journalism too.
> Tracking what's controversial even within an echo chamber could be so important for a party trying to come up with something that would resonate
I've been getting a handful of updates in various fields that are AI summarized digests of what people are saying in online spaces (Reddit, Twitter, Discord, etc.) that I'd never otherwise spend enough time on to have a sense of the pulse. Complete with source links in case my faith in the human race is feeling strong enough to read the actual comments section. I could see a larger scale version of that being really useful politically.
-
Comment on How The New York Times uses a custom AI tool to track the “manosphere” in ~life.men
This is a good example of an AI use case for a task that likely wouldn't be done otherwise (listening to loads of manosphere podcasts and summarizing them every week forever). There's a job no one at the NYT wants.
It replaces a lot of human hours, but they're hours that likely wouldn't have been spent in the first place because it's not worth the resources (or pain). No doubt they have summaries set up for other "spaces" too.
Really smart use of LLM tools. This isn't the first time I've been surprised by the NYT's tech savvy.
-
Comment on AI doomers: What uses of generative AI are you actually excited about? in ~tech
> feels both achievable with current tech and also pretty awesome
I'm not sure about alarms but I know someone with hearing issues and I looked into it recently, there are at least a half dozen companies making live caption glasses for hearing impaired people. I agree, it's a fantastic use case.
A few others: research, in so many fields but especially in genomics. Medicine, not to replace doctors but to improve the process (screening, data management, diagnostic aids), modeling (weather, climate, geology, etc.), gaming (the most popular speculation is MUCH better NPCs). Also world building in general, for games of course but the possibilities for "world models" in a variety of areas are near unlimited. If you can train on the "world" of a domain (say, an ecosystem), the resulting inference could be remarkably useful. I expect there will be a lot of annoying hype around world models this year.
Regarding spiraling, I get it. I don't want to take this thread into the downsides direction but they're as unlimited as the upsides.
In spite of that, it's going to happen no matter what we think about it. We can hopefully support the creation of regulation and guardrails but one way or another the technology is going to keep exploding in all directions.
If some day we look back and collectively decide that AI was a huge mistake, not one of us will be able to realistically say "I should have done more to stop it" because there's just not currently anything an individual can do relative to the unprecedented amount of capital involved. Even large groups of individuals don't stand a chance.
I don't say that from a defeatist perspective. At any point in history there are large scale developments and circumstances beyond an individual's control. With just the one life each, I think we should appreciate the upsides where we can and focus our energy in the places we actually can make an impact. There have always been too many problems for one person to solve.
-
Comment on Building a C compiler with a team of parallel Claudes in ~tech
To be fair to them, they didn't claim it was glorious in the blog post. It mentions that, with all optimizations enabled, it performs worse than GCC with all optimizations turned off. It also talks about the code quality being subpar.
The frustrating thing is that's not likely how the media and bloggers will talk about it. It will be another round of AI doom "it's coming for your job". It will fuel the hot takes that AI can now truly just write software. It will help suck in a new round of vibe coders. Except this year they want to be called "vibe engineers".
What it really is, like Cursor's far worse and more expensive example before it, is a somewhat interesting proof of concept. A few short years ago the possibility of agents creating any non-trivial application autonomously was absurd.
I hate the hype too. But if the well wasn't poisoned by hype, and the airwaves saturated with AI discussion, we'd all be at least a little bemused by this.
-
Comment on Any software engineers considering a career switch due to AI? in ~comp
> I really love building stuff and solving problems so maybe I go back to school and switch to some other flavor of engineering
Option 2 is to build your own thing, which you can get started on any time, even keeping your current job and income. It's not for everyone but you'd get to decide exactly how much hands on building and problem solving you'd get to do. The trick IMO is finding a problem you really care about solving, rather than solving a problem just to make money.
-
Comment on Passing question about LLMs and the Tech Singularity in ~tech
Ah, yes in that context we're nowhere near an explosion. Or at least the existing technology doesn't put us near one; who knows if there will be breakthroughs in the near future.
Yes, LLMs are already helping move the technology along faster than humans alone could. I don't think there's any doubt of that. The only question is whether the path leads to the vicinity of AGI, which I think is safe to answer yes. It doesn't matter if LLMs themselves will have anything to do with AGI; they will definitely accelerate many aspects of technological advancement, and some of them will contribute to eventual AGI.
-
Comment on The AI industry doesn’t take “no” for an answer in ~tech
I'd replace humans with mammals, and it's a well known part of the process of evolution. Calories are historically expensive, and both movement and cognition use a lot of calories, so organisms evolve to be as lazy as they can get away with while still surviving effectively.
She's describing the world I remember as a teenager decades ago. And doing a very articulate job of it at 15! The social media feeds are new but the behavior isn't.
The internet does make it worse though. The way a lot of men and boys talk online is similar to how they used to talk when it was a group of just boys, but rarely in mixed company. It was mostly posturing, most of them would never actually treat a girl badly, they were too terrified of them. The internet made it more public.
The biggest problem is that most teenaged boys, and 20 something men too, are idiots. Not fundamentally idiots, just lacking in brain development and empathy and healthy coping mechanisms for their fears and insecurities.
Note that I'm not saying that teenaged girls aren't idiots too, or that generalizing boys is fair or accurate. But it does make it easier to talk about without an encyclopedia of disclaimers.
What I'm getting at is that the combination of popular cultural ideas about how men should identify and behave, along with the way many men are socialized, and the inherent uncertainty and discomfort of growing up and figuring out who you are, compounded by a high level of testosterone... it's not a great combination for creating well adjusted behavior.
It seems to me that cultural norms for men have changed a lot, for the better, but the process is slow. A lot of outdated ideas are still holding on.
I think we can help that process along by having some empathy for boys and young men. I don't mean that we should condone their misogyny. They need to understand that it's not ok. But the thing is, many of them don't have any context for understanding why it isn't ok. It's really really easy to say stuff online, you don't have to mean it, sometimes saying things you don't mean is the point, and teenage boys have poor impulse control and a strong urge to test limits. Teenaged girls too. Sometimes it's being offensive for the sake of being offensive.
When I read about this girl's experience, I want to make it better for her. She has no context yet to understand the complexities of sexuality and it's deeply wrong that the world is projecting sexuality onto her. When I was 15, though, I didn't have the bandwidth or the context to understand her experience. Almost no boy I knew when I was 15 did. Very few 15 year old boys I've known since have. Ideally culture, or our parents, would have helped us understand better, but it didn't.
One of the side effects of labeling your fears and insecurities as weak and trying to cover them with strength, which many cultural groups expect of men, is that you're effectively having less compassion and empathy for yourself and so you end up with less empathy for others. Self defense mechanisms insulate you from both yourself and the world, that's their purpose.
Some men are objectively broken, some boys too, but the bulk are just trying to figure shit out in an imperfect chemical reality and with imperfect information. Of the latter, most will respond better, and learn more, from a compassionate approach.
I'm not saying anyone in this thread is suggesting a different approach, I just think it's a useful reminder. A simplified version is useful for teenagers to understand. Some day we might live in a world where a lot of teenagers of all genders aren't guaranteed to be assholes some of the time, but we're not there yet. Right now we can expect it. It happens because they're fighting their own demons, not because of anything about the targets of the assholery.
About social media bans: I'm all for it. Generally I'm against censorship, but I'm comfortable with some amount where developing brains are concerned, people will literally say anything on the internet and the volume is just way too much. If we can reduce social media use in teens it can only be good for society.