deepdeeppuddle's recent activity

  1. Feature request: an option to deactivate or delete your account

    Before posting this, I checked my user settings page, the site’s documentation, and GitLab. I also did some site:tildes.net Google searches and used the on-site search for the ~tildes group.

    I saw that on GitLab there is a feature request for account deletion (but not deactivation) that was marked “Accepted” about 5-6 years ago (August 2019).

    I also saw some posts here in the ~tildes group, including one from about 6-7 years ago (June 2018) with a comment that said an option for both account deletion and account "dissociation" was planned. Both of these features sound great.

    In addition to account deletion and account dissociation, I want to also request an option for account deactivation.

    I don’t want to ask for the Moon here, but I envision account deactivation as removing all your posts and comments from the site (as well as your profile), with the option of restoring them if you reactivate your account. (I don’t know how annoying or how much effort this would be to code. I’m just imagining what I would find ideal from a user perspective.)

    Another wonderful bonus would be the option to set a timer limiting your ability to reactivate your account, e.g. don’t let me reactivate my account for 6 months.
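
    To make this concrete, here’s a minimal sketch of what the deactivate/reactivate logic might look like (in Python, since that’s what Tildes is written in). Everything here is hypothetical: the class, fields, and method names are made up for illustration and are not Tildes’ actual code or schema.

        from datetime import datetime, timedelta, timezone

        class Account:
            """Hypothetical account model: deactivation with an optional reactivation timer."""

            def __init__(self, username):
                self.username = username
                self.is_active = True
                self.reactivate_after = None  # earliest moment reactivation is allowed

            def deactivate(self, lockout=None):
                # Hide (rather than delete) the profile, posts, and comments,
                # so everything can be restored on reactivation.
                self.is_active = False
                if lockout is not None:
                    self.reactivate_after = datetime.now(timezone.utc) + lockout

            def reactivate(self):
                now = datetime.now(timezone.utc)
                if self.reactivate_after is not None and now < self.reactivate_after:
                    raise PermissionError(
                        f"Reactivation locked until {self.reactivate_after:%Y-%m-%d}"
                    )
                self.is_active = True
                self.reactivate_after = None

        # Usage: deactivate with the six-month lockout described above.
        account = Account("deepdeeppuddle")
        account.deactivate(lockout=timedelta(days=183))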

    In the past, I’ve done this on another site through an elaborate system where I:

    1. Set up two-factor authentication.
    2. Saved the two-factor recovery code on an encrypted pastebin in a password-protected paste.
    3. Saved the password for the paste in my password manager.
    4. Used FutureMe.org to send an email with a link to the paste to myself X amount of time in the future.
    5. Deleted the link to the paste from my browser history.
    6. Deleted the entry for the site from my two-factor authentication app so the recovery code is the only way to get in.

    This works, but it’s an elaborate process, and if something goes wrong with FutureMe.org or the pastebin site, you could lose your ability to ever reactivate your account. You have to be willing to take that risk!
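
    (As a side note, the pastebin step could be done locally instead: encrypt the recovery code yourself and let the future email carry only the password. Here’s a rough sketch of that variation, assuming the third-party cryptography package for Python; the seal/unseal functions are my own made-up names, not any site’s API.)

        # Rough sketch: seal the 2FA recovery code with a password so only the
        # ciphertext needs to be stored. Assumes `pip install cryptography`.
        import base64
        import os

        from cryptography.fernet import Fernet
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

        def _derive_key(password: bytes, salt: bytes) -> bytes:
            # Stretch the password into a 32-byte urlsafe-base64 Fernet key.
            kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                             salt=salt, iterations=600_000)
            return base64.urlsafe_b64encode(kdf.derive(password))

        def seal(recovery_code: bytes, password: bytes) -> tuple[bytes, bytes]:
            salt = os.urandom(16)
            token = Fernet(_derive_key(password, salt)).encrypt(recovery_code)
            return salt, token  # safe to store anywhere; useless without the password

        def unseal(salt: bytes, token: bytes, password: bytes) -> bytes:
            # Run this when the password finally arrives from your future self.
            return Fernet(_derive_key(password, salt)).decrypt(token)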

    I might end up implementing this wacky system again for Tildes. For a site like Tildes, if I permanently lost access to my account (due to FutureMe.org shutting down or suffering data loss, for example) and wanted to rejoin the site at some point in the future, I guess the worst consequence would be losing my username. (Also, getting an invite again might be a hassle, I don’t know.) That might be unfortunate depending on how much you like your username, but it’s not as bad as on a site with follows and followers, where you would lose all of those.

    18 votes
  2. Comment on I'm tired of dismissive anti-AI bias in ~tech

    I disagree with your analysis of the situation.

    I would like to be granted the autonomy to maintain my own boundaries. That’s why I think just having a block feature is a good solution for most social media sites/apps.

    If it’s going to turn out that trying to maintain my boundaries isn’t possible, or isn’t possible without some onerous level of stress and effort (e.g. spending hours justifying myself to strangers who have no investment in my well-being), then unfortunately the best option for me is to just quit the site.

    I already took a break from the site for about a month after having a bad experience. I am not one of those people who relish conflict or want to get into it with strangers online. Very much the opposite.

    I just want to have the ability to stop interacting with someone if I have a bad experience with them. I see no utility in getting into protracted conflict with strangers online. It’s hard enough to resolve conflict with people you know and love, let alone with strangers where you have no established trust, rapport, affinity, or common ground. Why would that be a good use of my very limited resources?

  3. Comment on I'm tired of dismissive anti-AI bias in ~tech

    That was not my intended implication.

    I really would love just to have a block button and not get into some huge public argument.

  4. Comment on I'm tired of dismissive anti-AI bias in ~tech

    I agree that if someone has been respectful to you then it’s not polite to ask them not to engage with you. If someone has been disrespectful to you, then I guess your only two options are to ask them to stop engaging with you or quit the site. I’m trying the first option before I quit as a last resort.

    I think it’s a good idea for social sites/apps to have a block feature, since it lets people maintain their own boundaries without quitting the site/app altogether.

  5. Comment on I'm tired of dismissive anti-AI bias in ~tech

    I'm not sure I understand your intended meaning. If consumers don't consume AI art, there is no market for it.

    Also, the comment you replied to was replying to Diff's comment, and in that comment, Diff wasn't talking about large corporations making popular movies. They were (I thought) talking about individual customers who have a direct, one-to-one relationship with an artist or a small business producing art at small scale. So, that was about individual consumer choice. That was about "the masses" directly purchasing products.

    I would appreciate it if you didn't reply to my comments in the future.

  6. Comment on I'm tired of dismissive anti-AI bias in ~tech

    That's interesting information. Thank you for telling me about that.

    I think the topic warrants further study.

  7. Comment on I'm tired of dismissive anti-AI bias in ~tech

    I can't imagine that analogy doing anything to persuade someone who isn't already convinced. All it does is polarize the discourse. We've spent decades making fun of anti-piracy PSAs on TV. Now copyright infringement is akin to Nazi crimes against humanity? Please step back from this hyperbole.

    I'm not saying you're wrong and I'm not trying to invalidate your wife's hard feelings about having her work used in this way. But we really can't go around comparing things cavalierly to Nazi atrocities.

    7 votes
  8. Comment on I'm tired of dismissive anti-AI bias in ~tech

    I guess your argument is that the masses have poor taste and will accept low-quality art? Is there data that supports the idea that this is happening at scale, i.e., there is some statistically measurable displacement of paid human artistic labour by AI art generation?

    1 vote
  9. Comment on I'm tired of dismissive anti-AI bias in ~tech

    This is an interesting perspective. Thank you.

    I briefly looked through about the first half of the examples in the "AI Art Turing Test". A lot of the pieces are abstract, weird, fantastical, and have non-Euclidean geometry or don't attempt to show perspective in a realistic way. That makes it particularly hard to judge.

    I also saw a few examples, particularly the cartoony images of porcelain women, that I find ugly and low-quality, but I don't doubt they could have been made by humans. Sometimes I wonder if part of the reason diffusion models like DALL-E and Midjourney output art that looks bad is that they're trained on a lot of bad art from DeviantArt or Tumblr or wherever. It makes sense that most of the drawings on the Internet would be made by people whose skill is closer to beginner-level than expert-level, just like how most fanfiction is closer to a "14-year-old who has never written anything before" level of quality than an "experienced writer who could realistically get a publishing deal" level of quality.

    I also think of this post about LLMs generating short fiction. The author's view is that LLMs are good at generating short stories that look like good writing upon a cursory inspection, but if you scratch the surface, you start to notice how bad they really are.

    I worry about the same thing happening with the "AI Art Turing Test". Realistically, how long am I going to spend looking at fifty images? Maybe ten seconds or less per image, which is not long enough for my eyes to even take in all the detail. Passable at a glance is not the same thing as good.

    If a great piece of art is something you can stand in front of at a museum for an hour and continually appreciate more detail in, then a bad piece of AI art is something that looks impressive for the first 30 seconds you spend looking at it before you notice some messed up, ugly detail.

  10. Comment on I'm tired of dismissive anti-AI bias in ~tech

    I see a lot of problems with the current, popular AI discourse. I wrote about where I find fault in the discourse about AI capabilities here. But there's more I take issue with.

    This comment will mostly focus on the common ethical arguments against AI. I could also talk about AI hype (e.g., how, despite huge business investment and apparent enthusiasm for AI, it doesn't seem to be increasing productivity or profitability), but it seems like most Tildes users already believe that AI is overhyped.

    1. The anti-AI art narrative seems to contain a contradiction

    The discourse about AI-generated art is confusing. The detractors of AI-generated art make two claims that seem incompatible (or at least in tension with each other):

    1. AI-generated art is terrible in quality, and obviously so to anyone who looks at it.
    2. AI-generated art is displacing human-generated art in the market and costing human artists revenue.

    I agree with (1). As for (2), I want to see data that supports this claim. I've looked for it and I haven't been able to find much data.

    What nags at me most is that (1) and (2) seem to be incompatible. If AI-generated art is so terrible, why do consumers putatively prefer it? And if consumers don't prefer it, how could it be displacing human labour in creating art? How can these two claims, which are often made by the same people, be reconciled?

    What seems most likely to me is that AI art sucks, that because it sucks there is only a marginal market for it, and that there's very little displacement of human artists' labour.

    2. Talking about how much electricity AI uses seems like it's just a proxy for talking about how useful AI is

    I'm skeptical about environmentalist arguments against AI. I'm skeptical because I've tried to find hard data on how much electricity AI consumes and I can't find strong support for the idea that an individual consumer using an LLM uses a lot of electricity when compared to things like using a computer, playing a video game, keeping some LED lightbulbs turned on, running a dishwasher, etc.
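
    To give a sense of the comparison I have in mind, here's some back-of-the-envelope arithmetic. The per-query figure is a rough, commonly cited ballpark that I'm treating as an assumption, not a measurement; real numbers vary a lot by model, hardware, and usage.

        # Back-of-the-envelope energy comparison (ballpark assumptions, not data).
        llm_query_wh = 3.0          # one LLM chat query; an often-quoted rough estimate
        dishwasher_cycle_wh = 1500  # one dishwasher cycle, ~1.5 kWh
        gaming_pc_hour_wh = 300     # one hour on a gaming PC drawing ~300 W
        led_bulb_day_wh = 10 * 24   # a 10 W LED bulb left on for a full day

        for name, wh in [("dishwasher cycle", dishwasher_cycle_wh),
                         ("hour of PC gaming", gaming_pc_hour_wh),
                         ("LED bulb left on all day", led_bulb_day_wh)]:
            print(f"One {name} is roughly {wh / llm_query_wh:.0f} LLM queries")
        # => roughly 500, 100, and 80 queries respectively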

    The predictable rejoinder is "those other things have some utility, while AI doesn't". If that's what this debate comes down to, then the environmentalist stuff is just a proxy argument for the argument about whether AI is useful or not. If you thought AI were useful, you probably wouldn't object to it using a modest amount of electricity on a per consumer basis. If you don't think it's useful, even if it consumed zero electricity, you would still have other reasons to oppose it. So, it seems like nobody's opinion about AI actually depends on the energy usage of AI.

    I also dislike how much discourse about energy in general is focused on promoting energy conservation rather than promoting increased production of sustainable energy, when the latter is far more important for mitigating climate change and also benefits people economically (whereas energy conservation, if anything, harms people economically).

    3. AI and copyright

    A lot of people assert that AI models "steal" training data or that training on copyrighted text or images amounts to "plagiarism" or "copyright infringement". Two things that bother me about this sort of assertion:

    1. It's not obvious what constitutes "theft" in the context of training AI models. This is an unprecedented situation and I don't see people trying to justify why their non-obvious interpretation of "theft" is correct. Humans are allowed to consume as much text and as many images as they can in order to produce new text and images. If we treated AI models like humans in this respect, then this would not be theft. I don't think it's obvious we should treat AI models like humans in this respect. I don't know exactly what we should do. Why does it seem like people are not engaging with the complexity and ambiguity of this issue? Why does it seem like people are asserting that it's theft without a supporting argument, as if it should be obvious, when it's really not obvious whether it's theft or not?

    2. The people who are angry about AI allegedly infringing copyright seem mostly indifferent to or supportive of media piracy. I don't understand why there is so much zeal against AI, which is the more ambiguous case with regard to copyright, and none against piracy, which is such a clear-cut instance of copyright infringement. Being anti-AI and pro-piracy (or neutral on piracy) aren't necessarily inconsistent positions, but I haven't seen many attempts to reconcile these positions.

    Is this a symptom of people feeling uncomfortable with ambiguity and uncertainty and attempting to resolve the discomfort by rushing to angry, confident opinions?

    4. General properties of the discourse that I don't like

    Some of the general things that bother me about the AI discourse are:

    1. Strong factual claims, e.g., about AI displacing artist labour and AI using a lot of energy, without clear supporting data.

    2. Apparent tensions or contradictions that aren't resolved; obvious questions or objections that go unanswered.

    3. Opinions so strongly held against AI that it is sometimes said or implied that no reasonable disagreement with an anti-AI stance could possibly exist and that people who use or defend AI are clearly doing something severely unethical and maybe should even be ostracized on this basis. Wow.

    4. I take seriously the possibility that generative AI isn't actually that important or impactful (at least for now and in terms of what's foreseeable over the next few years), and that it's not really worth this much attention. This is a boring, possibly engagement-nullifying opinion, which might make it memetically disadvantaged on the Internet. But maybe also some people would find this idea refreshing!

    The polarization isn't just on one side. In a way, both sides might be overrating how impactful AI is, with anti-AI people seeing the impact as highly net negative and the pro-AI people seeing the impact as highly net positive. I don't see AI as a credible threat to artists, the environment, or copyright law and I also don't see AI as a driver of economic productivity or firm/industry profitability. I think LLMs' actually good use cases are pretty limited and I definitely don't see generative AI as "revolutionary" or worth the amount of hype it has been receiving in the tech industry or in other industries where businesses have been eager to integrate AI.

    10 votes
  11. Comment on I'm tired of dismissive anti-AI bias in ~tech

    I wrote a post on Tildes a week ago with the intention of cutting through some of the polarized, it's-either-black-or-white discourse on the capabilities of LLMs:

    The ARC-AGI-2 benchmark could help reframe the conversation about AI performance in a more constructive way

    I encourage people to read that post and comment on it. There is now limited evidence that at least one of the newest frontier AI models — namely, OpenAI's o3 — is capable, to a limited extent, of something we could reasonably call reasoning. This challenges common narrative framings of AI that attempt to downplay its capabilities or potential. It also challenges the idea, common in some circles, that AI models have already possessed impressive reasoning ability since 2023, since the reasoning ability detected in o3 is so small and so recent.

    4 votes
  12. Comment on Bluesky’s quest to build nontoxic social media in ~tech

    I don’t have a dog in this fight because I’m not interested in using microblogging services in general, regardless of whether they’re fully decentralized, fully centralized, or something in between.

    I will say that I find your mocking tone frustrating. I am now shut down from hearing your opinion on things because your approach is so hardline and combative.

  13. Comment on A slow guide to confronting doom in ~health.mental

    Thanks for explaining that. This is consistent with what I’ve heard from other people who do coding.

    Basically, it streamlines the process of looking things up on Stack Overflow (or forums or documentation or wherever) and copying and pasting the code you find.

    The LLM isn’t being creative or solving novel problems (except maybe to a minimal degree), but using existing knowledge to bring speed, ease, and convenience to the coder.

    2 votes
  14. Comment on Bluesky’s quest to build nontoxic social media in ~tech

    I don’t know anything about designing protocols for decentralized social networks, but why is the AT Protocol that Bluesky is based on able to allow post migration but Mastodon/ActivityPub is not?
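
    My loose understanding, which may well be wrong, is that it comes down to how the two protocols address content. A simplified illustration, with made-up identifiers:

        # Made-up identifiers, just to show the structural difference.

        # AT Protocol: a post is a signed record in the user's portable data
        # repository, addressed by identity (a DID) + collection + record key.
        # The DID is independent of any server, so the whole repository can be
        # re-hosted and old post URIs keep resolving.
        atproto_post = "at://did:plc:abc123example/app.bsky.feed.post/3kexample"

        # ActivityPub: a post's canonical ID is an HTTPS URL on the origin
        # server, so identity and content are tied to that domain. Mastodon can
        # move followers to a new account, but the old posts' IDs still point
        # at the old server.
        activitypub_post = "https://mastodon.example/users/alice/statuses/110000000000000001"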

  15. Comment on Where do you all get your news from? How do you work to avoid echo chambers and propaganda? in ~life

    I don’t automatically buy your assertions about botnets. I would need to see more evidence before I grant that this is happening.

    There are certainly instances where evidence of manipulation has come out. I’m specifically thinking of this story about a smear campaign against Blake Lively, reported by The New York Times in December. It’s wild and shocking.

    I think a lot of the ways discourse changes over time could be shaped by psychological and social psychological dynamics. For example, who feels compelled to speak up, and when, on political candidate selection. Or people who are somewhat on the fence or somewhat open-minded looking on the bright side when someone other than their top pick is chosen as the candidate. Or the way that enthusiasm for a candidate can be contagious and build momentum over time. And so on.

    For example, I have no reason to think the enthusiasm that built for Kamala Harris after she took over the presidential campaign from Joe Biden was anything other than primarily organic. Of course the campaign was trying to generate enthusiasm, but they only wish they could generate enthusiasm like that for whatever candidate they want. It’s a more complex, subtle thing than people just being manipulated by the media or political elite, or by propaganda campaigns.

    A lot of my exposure to different political ideas has happened through my own research and poking around papers, books, and podcasts, rather than just following a mainstream news source (or any news source). For example, when I was in university, and I was much more enamoured with the ideas of socialism and anti-capitalism than I am today, I looked up and read papers about socialist economics because I had heard enough general theorizing and wanted to see some concrete proposals for how a socialist economy would be run.

    This was one of the biggest factors in my turn away from socialism. Not reading proponents of capitalism make convincing arguments. But reading socialist economics papers and finding them lacking. Feeling like the proponents of socialism had a real lack of good ideas.

    Also, reading Thomas Piketty’s book Capital in the Twenty-First Century, which was such a contrast to works like Karl Marx’s Capital, which I had read in school. It turns out 150 years of progress in economics really makes a difference. Piketty’s Capital in the Twenty-First Century reads like a work of social science, whereas Marx’s Capital has sections where he digs into the price of corn or whatever, but also has long sections where he waxes on about inscrutable Hegelian philosophy or discusses 19th-century misconceptions about ancient human history and anthropology.

    This made me open up further to the idea that 21st-century economics could take seriously problems like wealth/income inequality and examine them rigorously, “scientifically”, based on things like 100 years of French tax records or whatever.

    The main (sole?) policy Piketty advocates in that book is a wealth tax. He advocates starting with a small wealth tax, which will allow economists to have more data about wealth, upon which further research and policy can be based. This made its way to Elizabeth Warren’s presidential campaign platform. One of Piketty’s graduate students was on her policy team.

    In Piketty’s subsequent book, Capital and Ideology, he advocates for some radical political and economic reforms, but he approaches the topic with a level of intellectual humility and acknowledgment of uncertainty that I find refreshing. I don’t know if he’s right and he doesn’t know if he’s right, but it’s an interesting jumping-off point.

    So, that’s a brief story about a major political “conversion” I had, which maybe, hopefully, can tell you a little something about being exposed to new ideas.

    Nowadays, I really like The Ezra Klein Show. I don’t listen to most of the political episodes because politics is stressful and I need to take it in low doses.

    A pretty cool thing about Ezra is he’s willing to entertain ideas that differ significantly from what he already thinks. This includes ideas from people across the political aisle, but also ideas that aren’t currently part of mainstream partisan political discourse at all. For example, I remember him saying at one point that he thinks about the stories he might be missing as a reporter that would seem important in retrospect when looking back on the current era but that aren’t on his radar (or most reporters’ radar). The example he gave was CRISPR.

    On his podcast, he also discusses topics like psychology, psychedelics, loneliness, polyamory, and other things that have importance for the world but aren’t really part of the news or mainstream political debates. I find that refreshing and it also gives me a chance to engage with the podcast and not be stressed out by news or politics.

    3 votes
  16. Comment on A slow guide to confronting doom in ~health.mental

    How do you use AI in your work? How does it help you accomplish more in less time?

    I haven’t seen much evidence that AI has been having an effect on the macroeconomy, on (un)employment, or on the productivity of individual companies. I am open to seeing statistics that show an impact, though.

    2 votes
  17. Comment on Bluesky’s quest to build nontoxic social media in ~tech

    Thank you for sharing this. It’s interesting but, as you said, not very active.

    1 vote
  18. Comment on A slow guide to confronting doom in ~health.mental

    “Trade wars and democratic backsliding seem too mundane to be significant LessWrong concerns.”

    You are right.

    Here’s a comment from the Effective Altruism Forum, which has a lot of overlap with the LessWrong forum. There is overlap in terms of the user base, posts (people cross-post to both, and there’s even a feature built into both forums to make this easier), and discussion topics (particularly AGI). The forums also share the same code base.

    This comment is about Daniela Amodei, the President of the AI company Anthropic. The context is a discussion about whether it’s appropriate to look up information on the personal website she created for her wedding and publicly discuss it.

    …I will just say that by the "level of influence" metric, Daniela shoots it out of the park compared to Donald Trump. I think it is entirely uncontroversial and perhaps an understatement to claim the world as a whole and EA [effective altruism] in particular has a right to know & discuss pretty much every fact about the personal, professional, social, and philosophical lives of the group of people who, by their own admission, are literally creating God. And are likely to be elevated to a permanent place of power & control over the universe for all of eternity.

    Such a position should not be a pleasurable job with no repercussions on the level of privacy or degree of public scrutiny on your personal life. If you are among this group, and this level of scrutiny disturbs you, perhaps you shouldn't be trying to "reshape the lightcone without public consent" or knowledge.

    Note that 4 people have voted “agree” (that’s what the check mark symbol means).

    This helps put into perspective what people in this community are worrying about right now.

    4 votes
  19. Comment on A slow guide to confronting doom in ~health.mental

    I really dislike LessWrong for reasons I explained at length in a series of comments on a post from February. If you’re curious, you can find my comments by starting here and then looking at the replies down the chain.

    For those who don’t know, LessWrong is an online forum that has users from around the world, but is also closely connected to an IRL community of people in the San Francisco Bay Area who self-identify as “rationalists”. Rationalists have one fixation above all else: artificial general intelligence (AGI) and, more specifically, the fear that it will kill all humans sometime in the near future. That’s the “doom” that this LessWrong post is about.

    On the topic of AGI, I wrote a post here, in which I expressed frustration at the polarized discourse on AGI and discussed how the conversation could potentially be refined by focusing on better benchmarks for AI performance.

    I’ll say a little more on the topic of AGI.

    I think there are a number of bad reasons to reject the idea of AGI, such as:

    • dualism, the idea that the mind is non-physical or supernatural

    • mysterianism, the idea that the mind can never be understood by science

    • overly simple or dismissive misunderstandings of deep learning and deep reinforcement learning

    • the belief that AI research will run out of funding

    That said, I also think there are a number of bad reasons to believe that AGI will be created soon:

    • being overly impressed with ChatGPT and insufficiently critical of its failures to produce intelligent behaviour

    • a belief that the intelligence of AI systems will rapidly, exponentially increase without plateauing, despite serious empirical and theoretical problems with this idea (such as economic data failing to support that this has been happening so far)

    • a reliance on poor benchmarks that don’t really measure intelligence

    • knee-jerk dismissal of well-qualified critics like Yann LeCun and François Chollet

    • over-reliance on the opinions of other people about AGI, without enough examination of why they hold those opinions (e.g. how much is it circular? How much of those other people’s opinions is based on other people’s opinions?)

    It is difficult to find nuanced discussion of AGI online lately because most of the discussion I see is either people taking hardline anti-AI positions (e.g. it’s all just a scam) or people with an extreme, eschatological belief in near-term AGI.

    I highly doubt that we will see AGI within ten years. Within a hundred years? Possibly. But there’s a lot of irreducible uncertainty and there’s no way we can really know right now.

    13 votes
  20. Comment on Bluesky’s quest to build nontoxic social media in ~tech

    I don’t know if you meant to reply to me or you meant to reply to skybrian and replied to me by accident. In the comment I wrote that you’re replying to, I said:

    I think microblogging is just fundamentally a bad idea. It doesn't matter if it's Twitter, Bluesky, Mastodon, or Threads, it's all fundamentally the same idea for a social network and it all suffers from the same problems.

    If I had to guess, I would guess that most people’s lives would be improved on net if they stopped using microblogging platforms. I would also guess that the world would be improved on net if microblogging platforms stopped existing. But I don’t know for sure, and I don’t need to know for sure, since that decision isn’t up to me.

    When I was describing in my comment above what I want to see in an online platform, I was indeed describing a fundamentally different type of online platform than a microblogging platform. (I think we should try to move past microblogging as an idea. Or, at least, I personally don’t want to use microblogging platforms anymore.)

    1 vote