deepdeeppuddle's recent activity

  1. Comment on I'm tired of dismissive anti-AI bias in ~tech

    deepdeeppuddle
    (edited)

    Thanks. I've tried that extension and I found the tagging username feature to be better than nothing, but ultimately not what I was looking for.

    I really don't like the idea that if you want to block someone, it's your problem. Blocking someone doesn't necessarily imply a strong moral condemnation or a belief that they should be punished. Like you described in your example, sometimes some people just express opinions that bother you, and you might just not want to engage with those opinions. Or you might just find that someone is too rude for you to want to interact with them further, while not being so rude that it warrants moderator action against them. There are many, many more examples like that.

    The line that the admin/moderator/community is going to draw for what's acceptable behaviour or what's an acceptable opinion is going to be different from my line. Perhaps more importantly, the line I draw for who and what I want to engage with personally is going to be very different from the line I would draw if I were acting in the capacity of moderator. There are many instances in my experience moderating other communities where I have extended leagues more patience and tolerance for people when I was acting as a moderator than I would ever extend if I were interacting with those people in a personal capacity — and I think that's as it should be. As a moderator, you have a much greater obligation to be impartial, lenient, measured, patient, and so on than you have as an ordinary person having personal interactions.

    To say that personal boundaries = community rules, and then to rebuke anyone who tries to have personal boundaries that are stricter than (or just different from) the community rules, is, I think, unfair, unreasonable, unrealistic, and unkind. That feels really unhealthy and unwise to me. I don't want things that are personal to me, and that I feel should be up to me, to have to go through approval by a committee process, and a community that requires that seems conducive to many bad outcomes, including bullying and emotional abuse. I really don't like this way of doing things. I also really don't like how mean a few users have been to me in the comments on this post.

    This will be the last comment I make on Tildes for at least the next 6 months. I am going to deactivate (or pseudo-deactivate) my account using the steps I described here.

    I wish you well and wish everyone else on Tildes well too. And I really mean that and I'm not saying it passive-aggressively. I am unhappy with my experience here and have some grievances (as I just described at length), but I believe in peace above all else and I believe (as much for the well-being of my soul as for the benefit of the world) in wanting good things for people even if you've had conflict with them, even if you've cut off ties, even if you have untenable disagreements, even if they've hurt you or wronged you. So, I genuinely wish everyone on Tildes well.

  2. Comment on I'm tired of dismissive anti-AI bias in ~tech

    deepdeeppuddle

    This is super rude and unkind. I don't know why you thought this would be constructive — maybe you didn't think it would be constructive and said it anyway, I don't know.

    I will be deactivating my Tildes account shortly.

  3. Comment on I'm tired of dismissive anti-AI bias in ~tech

    deepdeeppuddle

    My personal boundaries are not the consensus boundaries of the community — and that's as it should be. Just because I've decided I don't want to interact with someone doesn't mean they should be banned from the site. By analogy, there are people in my local community who I've decided I don't want to talk to, but I would never advocate for them to be excluded from IRL or online community spaces. There is a difference between "I don't personally want to interact with this person for reasons that are important to me" and "I think this person should be ostracized from this community". If you're saying that the only good reasons to not want to interact with someone are because they've done something that warrants censure or ostracism from the community at large, I don't know what to tell you, you're just wrong — and you probably don't even actually believe that.

    I will be deactivating my Tildes account shortly.

  4. Comment on Feature request: an option to deactivate or delete your account in ~tildes

    deepdeeppuddle

    Thank you for adding that to GitLab. I appreciate that.

    1 vote
  5. Comment on Feature request: an option to deactivate or delete your account in ~tildes

    deepdeeppuddle

    Thank you everyone for your input on this post.

    I'm going to implement the elaborate self-imposed deactivation method I described in this post for Tildes.

    I reduced the risk of permanently losing my account by using multiple "send an email to yourself in the future" services in addition to FutureMe.org and multiple encrypted pastebins (with password-protected pastes) for the 2FA recovery codes. Something would have to go wrong with multiple sites for this method not to work. I think this is a good system! I would like more people to know about this idea when they're feeling conflicted about using a certain Internet platform and are contemplating deleting their account.
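
The "multiple independent services" idea can be made concrete with a quick back-of-the-envelope calculation. The failure probabilities below are made-up placeholders for illustration, not estimates for any real service, and the model optimistically assumes the services fail independently:

```python
# Rough model: the recovery codes are lost only if EVERY independent
# copy fails (email service shuts down, paste expires, etc.).
# Assumes failures are independent, which is optimistic.

def p_all_fail(per_service_failure: float, n_services: int) -> float:
    """Probability that all n independent copies are lost."""
    return per_service_failure ** n_services

# Placeholder number: a 5% chance any single service fails
# within 6 months (illustrative, not a real estimate).
p = 0.05
for n in range(1, 4):
    print(f"{n} copies -> {p_all_fail(p, n):.6f} chance of total loss")
```

With these placeholder numbers, going from one copy to three drops the chance of total loss from 5% to about 0.01%, which is the intuition behind spreading the codes across several sites.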

    I have set things up to lock myself out of my Tildes account for the next 6 months.

    I will not be deleting any of my comments or posts for now. I will not be deleting or "dissociating" my account for now. I see both sides of the argument and feel ambivalent about this topic. I see the importance of user autonomy/control over our own data. I also see the importance of digital archiving and I think there should be limits to how much someone can control copies of their words once they're published, e.g., I don't think an author should be allowed to "recall" a book from book stores and libraries once it's published. (You could draw analogies between a Reddit/Tildes/forum/Twitter/Bluesky user and different things... I'm not sure which analogy should set the precedent, if any. Free and universally accessible digital self-publishing is a new technology to which our traditions don't fully apply.)

    I don't have a strong reason to want to delete my comments or posts off the site. The only reason I would have to do that is I'm upset that I've had a bad overall experience with the site and I want to take my ball and go home. A stronger reason to keep everything up is if I want to reference it later for some reason (it sometimes happens).

    I may or may not use Tildes again when my recovery keys get emailed to me in 6 months. That is a decision for my future self to think about.

    In the time that I spend off Tildes (at least 6 months, maybe longer, maybe forever), I will try to figure out what I get out of sites like Tildes and try to figure out alternatives that give me that but don't give me the stuff I don't like. By "alternative", I don't necessarily mean a social media site or online community. That could be one alternative. But maybe the problems I've been experiencing are inherent to and inevitable in any social media site or any large online community. I don't know yet.

    I wish everyone on Tildes well. May you be safe, may you have peace, may you find joy and hope, may you be free.

    6 votes
  6. Feature request: an option to deactivate or delete your account


    Before posting this, I checked my user settings page, the site’s documentation, and GitLab. I also did some site:tildes.net Google searches and used the on-site search for the ~tildes group.

    I saw that on GitLab there is a feature request for account deletion, but not deactivation, that was marked “Accepted” about 5-6 years ago (August 2019).

    I also saw some posts here in the ~tildes group, including one from about 6-7 years ago (June 2018) with a comment that said an option for both account deletion and account "dissociation" was planned. Both of these features sound great.

    In addition to account deletion and account dissociation, I want to also request an option for account deactivation.

    I don’t want to ask for the Moon here, but I envision account deactivation as having the option to remove all your posts and comments from the site (as well as your profile), with the option of restoring them if you reactivate your account. (I don’t know how annoying or how much effort this would be to code. I’m just imagining what I would find ideal from a user perspective.)

    Another wonderful bonus would be the option to set a timer limiting your ability to reactivate your account, e.g. don’t let me reactivate my account for 6 months.

    In the past, I’ve done this on another site through an elaborate system where I:

    1. Set up two-factor authentication.
    2. Saved the two-factor recovery code on an encrypted pastebin in a password-protected paste.
    3. Saved the password for the paste in my password manager.
    4. Used FutureMe.org to send an email with a link to the paste to myself X amount of time in the future.
    5. Deleted the link to the paste from my browser history.
    6. Deleted the entry for the site from my two-factor authentication app so the recovery code is the only way to get in.

    This works, but it’s an elaborate process, and if something goes wrong with FutureMe.org or the pastebin site, you could lose your ability to ever reactivate your account. You have to be willing to take that risk!
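
For anyone curious what step 2 could look like without trusting a pastebin's password feature, here is a toy, stdlib-only sketch of encrypting a recovery code under a passphrase before uploading it anywhere. This is my own illustration, not vetted cryptography and not anything Tildes or FutureMe.org provides; in practice a real tool (GPG, age, or the paste site's built-in encryption) is the better choice:

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Expand key+nonce into a pseudorandom byte stream (toy CTR mode)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(passphrase: str, plaintext: bytes) -> bytes:
    """Encrypt and authenticate; returns salt + nonce + ciphertext + tag."""
    salt = os.urandom(16)
    nonce = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, salt + nonce + ct, hashlib.sha256).digest()
    return salt + nonce + ct + tag

def decrypt(passphrase: str, blob: bytes) -> bytes:
    """Verify the tag, then decrypt; raises on a wrong passphrase."""
    salt, nonce, ct, tag = blob[:16], blob[16:32], blob[32:-32], blob[-32:]
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)
    expected = hmac.new(key, salt + nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("wrong passphrase or corrupted blob")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))
```

The idea is just that the paste you upload is ciphertext, so the paste service never needs to be trusted with the recovery code itself; only the passphrase (kept in your password manager, per step 3) can recover it.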

    I might end up implementing this wacky system again for Tildes. For a site like Tildes, if I permanently lost access to my account (due to FutureMe.org shutting down or suffering data loss, for example) and wanted to re-join the site at some point in the future, I guess the worst consequence would be losing my username. (Also, getting an invite again might be a hassle, I don’t know.) That might be unfortunate depending on how much you like your username, but it’s not as bad as a site with follows and followers where you would lose all of those.

    24 votes
  7. Comment on I'm tired of dismissive anti-AI bias in ~tech

    deepdeeppuddle
    (edited)

    I disagree with your analysis of the situation.

    I would like to be granted the autonomy to maintain my own boundaries. That’s why I think just having a block feature is a good solution for most social media sites/apps.

    If it’s going to turn out that trying to maintain my boundaries isn’t possible, or isn’t possible without some onerous level of stress and effort (e.g. spending hours justifying myself to strangers who have no investment in my well-being), then unfortunately the best option for me is to just quit the site.

    I already took a break from the site for about a month after having a bad experience. I am not one of these people who relishes conflict or who wants to get into it with strangers online. Very much the opposite.

    I just want to have the ability to stop interacting with someone if I have a bad experience with them. I see no utility in getting into protracted conflict with strangers online. It’s hard enough to resolve conflict with people you know and love, let alone with strangers where you have no established trust, rapport, affinity, or common ground. Why would that be a good use of my very limited resources?

  8. Comment on I'm tired of dismissive anti-AI bias in ~tech

    deepdeeppuddle
    (edited)

    That was not my intended implication.

    I really would love just to have a block button and not get into some huge public argument.

  9. Comment on I'm tired of dismissive anti-AI bias in ~tech

    deepdeeppuddle
    (edited)

    I agree that if someone has been respectful to you then it’s not polite to ask them not to engage with you. If someone has been disrespectful to you, then I guess your only two options are to ask them to stop engaging with you or quit the site. I’m trying the first option before I quit as a last resort.

    I think it’s a good idea for social sites/apps to have a block feature, since it lets people maintain their own boundaries without quitting the site/app altogether.

  10. Comment on I'm tired of dismissive anti-AI bias in ~tech

    deepdeeppuddle

    I'm not sure I understand your intended meaning. If consumers don't consume AI art, there is no market for it.

    Also, the comment you replied to was replying to Diff's comment, and in that comment, Diff wasn't talking about large corporations making popular movies. They were (I thought) talking about individual customers who have a direct, one-to-one relationship with an artist or a small business producing art at small scale. So, that was about individual consumer choice. That was about "the masses" directly purchasing products.

    I would appreciate it if you didn't reply to my comments in the future.

  11. Comment on I'm tired of dismissive anti-AI bias in ~tech

    deepdeeppuddle

    That's interesting information. Thank you for telling me about that.

    I think the topic warrants further study.

  12. Comment on I'm tired of dismissive anti-AI bias in ~tech

    deepdeeppuddle

    I can't imagine that analogy doing anything to persuade someone who isn't already convinced. All it does is polarize the discourse. We've spent decades making fun of anti-piracy PSAs on TV. Now copyright infringement is akin to Nazi crimes against humanity? Please step back from this hyperbole.

    I'm not saying you're wrong and I'm not trying to invalidate your wife's hard feelings about having her work used in this way. But we really can't go around comparing things cavalierly to Nazi atrocities.

    8 votes
  13. Comment on I'm tired of dismissive anti-AI bias in ~tech

    deepdeeppuddle

    I guess your argument is that the masses have poor taste and will accept low-quality art? Is there data that supports the idea that this is happening at scale, i.e., there is some statistically measurable displacement of paid human artistic labour by AI art generation?

    1 vote
  14. Comment on I'm tired of dismissive anti-AI bias in ~tech

    deepdeeppuddle

    This is an interesting perspective. Thank you.

    I briefly looked through about the first half of the examples in the "AI Art Turing Test". A lot of the pieces are abstract, weird, fantastical, and have non-Euclidean geometry or don't attempt to show perspective in a realistic way. That makes it particularly hard to judge.

    I also saw a few examples, particularly the cartoony images of porcelain women, that I find ugly and low-quality, but I don't doubt they could have been made by humans. Sometimes I wonder if part of the reason diffusion models like DALL-E and Midjourney output art that looks bad is that they're trained on a lot of art from DeviantArt or Tumblr or wherever that is bad. It makes sense that most of the drawings on the Internet would be made by people who have closer to beginner-level skill than expert-level skill, just like how most fanfiction is closer to a "14-year-old who has never written anything before" level of quality than an "experienced writer who could realistically get a publishing deal" level of quality.

    I also think of this post about LLMs generating short fiction. The author's view is that LLMs are good at generating short stories that look like good writing upon a cursory inspection, but if you scratch the surface, you start to notice how bad the writing really is.

    I worry about the same thing happening with the "AI Art Turing Test". Realistically, how long am I going to spend looking at fifty images? Maybe like ten seconds or less, which is not long enough for my eyes to even take in all the detail in the image? Passable at a glance is not the same thing as good.

    If a great piece of art is something you can stand in front of at a museum for an hour and continually appreciate more detail in, then a bad piece of AI art is something that looks impressive for the first 30 seconds you spend looking at it before you notice some messed up, ugly detail.

  15. Comment on I'm tired of dismissive anti-AI bias in ~tech

    deepdeeppuddle
    (edited)

    I see a lot of problems with the current, popular AI discourse. I wrote about where I find fault in the discourse about AI capabilities here. But there's more I take issue with.

    This comment will mostly focus on the common ethical arguments against AI. I could also talk about AI hype (e.g., how despite huge business investment and apparent enthusiasm for AI, it doesn't seem to be increasing productivity or profitability), but it seems like most Tildes users already believe that AI is overhyped.

    1. The anti-AI art narrative seems to contain a contradiction

    The discourse about AI-generated art is confusing. The detractors of AI-generated art make two claims that seem incompatible (or at least in tension with each other):

    1. AI-generated art is terrible in quality, and obviously so to anyone who looks at it.
    2. AI-generated art is displacing human-generated art in the market and costing human artists revenue.

    I agree with (1). As for (2), I want to see data that supports this claim. I've looked for it and I haven't been able to find much data.

    What nags at me most is that (1) and (2) seem to be incompatible. If AI-generated art is so terrible, why do consumers putatively prefer it? And if consumers don't prefer it, how could it be displacing human labour in creating art? How can these two claims, which are often made by the same people, be reconciled?

    What seems to me most likely to be true is that AI art sucks and because it sucks, there is a marginal market for it, and there's very little displacement of human artists' labour.

    2. Talking about how much electricity AI uses seems like it's just a proxy for talking about how useful AI is

    I'm skeptical about environmentalist arguments against AI. I'm skeptical because I've tried to find hard data on how much electricity AI consumes and I can't find strong support for the idea that an individual consumer using an LLM uses a lot of electricity when compared to things like using a computer, playing a video game, keeping some LED lightbulbs turned on, running a dishwasher, etc.

    The predictable rejoinder is "those other things have some utility, while AI doesn't". If that's what this debate comes down to, then the environmentalist stuff is just a proxy argument for the argument about whether AI is useful or not. If you thought AI were useful, you probably wouldn't object to it using a modest amount of electricity on a per consumer basis. If you don't think it's useful, even if it consumed zero electricity, you would still have other reasons to oppose it. So, it seems like nobody's opinion about AI actually depends on the energy usage of AI.
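
The per-use comparison is easy to sanity-check with order-of-magnitude arithmetic. All the figures below are rough assumptions on my part, not measurements; published per-query estimates for LLMs vary a lot between sources (from roughly 0.3 Wh to a few Wh):

```python
# Back-of-the-envelope per-use energy comparison.
# All figures are rough assumptions, not measurements.
wh_per_use = {
    "LLM chat query (rough estimate)": 3.0,   # Wh
    "10 W LED bulb, 8 hours": 80.0,           # Wh
    "gaming PC, 1 hour at 300 W": 300.0,      # Wh
    "dishwasher cycle (~1.5 kWh)": 1500.0,    # Wh
}

baseline = wh_per_use["LLM chat query (rough estimate)"]
for activity, wh in wh_per_use.items():
    print(f"{activity}: {wh:7.1f} Wh ({wh / baseline:6.1f}x one query)")
```

Under these assumed numbers, one dishwasher cycle is on the order of hundreds of LLM queries, which is the kind of comparison I mean when I say the per-consumer electricity use looks modest.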

    I also dislike how much discourse about energy in general is focused on promoting energy conservation rather than promoting increased production of sustainable energy when the latter is far more important for mitigating climate change and also benefits people economically (whereas energy conservation, if anything, harms people economically).

    3. AI and copyright

    A lot of people assert that AI models "steal" training data or that training on copyrighted text or images amounts to "plagiarism" or "copyright infringement". Two things that bother me about this sort of assertion:

    1. It's not obvious what constitutes "theft" in the context of training AI models. This is an unprecedented situation and I don't see people trying to justify why their non-obvious interpretation of "theft" is correct. Humans are allowed to consume as much text and as many images as they can in order to produce new text and images. If we treated AI models like humans in this respect, then this would not be theft. I don't think it's obvious we should treat AI models like humans in this respect. I don't know exactly what we should do. Why does it seem like people are not engaging with the complexity and ambiguity of this issue? Why does it seem like people are asserting that it's theft without a supporting argument, as if it should be obvious, when it's really not obvious whether it's theft or not?

    2. The people who are angry about AI allegedly infringing copyright seem mostly indifferent to or supportive of media piracy. I don't understand why the zeal against AI exists, especially when AI is a more ambiguous case with regard to copyright, while there isn't any zeal against piracy, which is such a clear-cut instance of copyright infringement. Being anti-AI and pro-piracy (or neutral on piracy) aren't necessarily inconsistent positions, but I haven't seen many attempts to reconcile these positions.

    Is this a symptom of people feeling uncomfortable with ambiguity and uncertainty and attempting to resolve the discomfort by rushing to angry, confident opinions?

    4. General properties of the discourse that I don't like

    Some of the general things that bother me about the AI discourse are:

    1. Strong factual claims, e.g., about AI displacing artist labour and AI using a lot of energy, without clear supporting data.

    2. Apparent tensions or contradictions that aren't resolved; obvious questions or objections that go unanswered.

    3. Opinions so strongly held against AI that it is sometimes said or implied that no reasonable disagreement with an anti-AI stance could possibly exist and that people who use or defend AI are clearly doing something severely unethical and maybe should even be ostracized on this basis. Wow.

    4. I take seriously the possibility that generative AI isn't actually that important or impactful (at least for now and in terms of what's foreseeable over the next few years), and that it's not really worth this much attention. This is a boring, possibly engagement-nullifying opinion, which might make it memetically disadvantaged on the Internet. But maybe also some people would find this idea refreshing!

    The polarization isn't just on one side. In a way, both sides might be overrating how impactful AI is, with anti-AI people seeing the impact as highly net negative and the pro-AI people seeing the impact as highly net positive. I don't see AI as a credible threat to artists, the environment, or copyright law and I also don't see AI as a driver of economic productivity or firm/industry profitability. I think LLMs' actually good use cases are pretty limited and I definitely don't see generative AI as "revolutionary" or worth the amount of hype it has been receiving in the tech industry or in other industries where businesses have been eager to integrate AI.

    10 votes
  16. Comment on I'm tired of dismissive anti-AI bias in ~tech

    deepdeeppuddle

    I wrote a post on Tildes a week ago with the intention of cutting through some of the polarized, it's-either-black-or-white discourse on the capabilities of LLMs:

    The ARC-AGI-2 benchmark could help reframe the conversation about AI performance in a more constructive way

    I encourage people to read that post and comment on it. There is now limited evidence that at least one of the newest frontier AI models — namely, OpenAI's o3 — is capable, to a limited extent, of something we could reasonably call reasoning. This challenges common narrative framings of AI that attempt to downplay its capabilities or potential. It also challenges the idea, common in some circles, that AI models have already possessed impressive reasoning ability since 2023, since the reasoning ability detected in o3 is so small and so recent.

    4 votes
  17. Comment on Bluesky’s quest to build nontoxic social media in ~tech

    deepdeeppuddle

    I don’t have a dog in this fight because I’m not interested in using microblogging services in general, regardless of whether they’re fully decentralized, fully centralized, or something in between.

    I will say that I find your mocking tone frustrating. I am now shut down from hearing your opinion on things because your approach is so hardline and combative.

    2 votes
  18. Comment on A slow guide to confronting doom in ~health.mental

    deepdeeppuddle

    Thanks for explaining that. This is consistent with what I’ve heard from other people who do coding.

    Basically, it streamlines the process of looking things up on Stack Overflow (or forums or documentation or wherever) and copying and pasting code from Stack Overflow.

    The LLM isn’t being creative or solving novel problems (except maybe to a minimal degree), but using existing knowledge to bring speed, ease, and convenience to the coder.

    2 votes
  19. Comment on Bluesky’s quest to build nontoxic social media in ~tech

    deepdeeppuddle

    I don’t know anything about designing protocols for decentralized social networks, but why is the AT Protocol that Bluesky is based on able to allow post migration but Mastodon/ActivityPub is not?

  20. Comment on Where do you all get your news from? How do you work to avoid echo chambers and propaganda? in ~life

    deepdeeppuddle
    (edited)

    I don’t automatically buy your assertions about botnets. I would need to see more evidence before I grant that this is happening.

    There are certainly instances where evidence of manipulation has come out. I’m specifically thinking of this story about a smear campaign against Blake Lively, reported by The New York Times in December. It’s wild and shocking.

    I think a lot of the ways discourse changes over time could be shaped by psychological and social-psychological dynamics. For example, who feels compelled to speak up, and when, on political candidate selection. Or people who are somewhat on the fence or somewhat open-minded looking on the bright side when someone other than their top pick for candidate is chosen. Or the way that enthusiasm for a candidate can be contagious and build momentum over time. And so on.

    For example, I have no reason to think the enthusiasm that built for Kamala Harris after she took over the presidential campaign from Joe Biden was anything other than primarily organic. Of course the campaign was trying to generate enthusiasm, but they only wish they could generate enthusiasm like that for whatever candidate they want. It’s a more complex, subtle thing than people just being manipulated by the media or political elite, or by propaganda campaigns.

    A lot of my exposure to different political ideas has happened through my own research and poking around papers, books, and podcasts, rather than just following a mainstream news source (or any news source). For example, when I was in university, and I was much more enamoured with the ideas of socialism and anti-capitalism than I am today, I looked up and read papers about socialist economics because I had heard enough general theorizing and wanted to see some concrete proposals for how a socialist economy would be run.

    This was one of the biggest factors in my turn away from socialism. Not reading proponents of capitalism make convincing arguments. But reading socialist economics papers and finding them lacking. Feeling like the proponents of socialism had a real lack of good ideas.

    Also, reading Thomas Piketty’s book Capital in the Twenty-First Century, which was such a contrast to works like Karl Marx’s Capital, which I had read in school. It turns out 150 years of progress in economics really makes a difference. Piketty’s Capital in the Twenty-First Century reads like a work of social science, whereas Marx’s Capital has sections where he digs into the price of corn or whatever, but also long sections where he waxes about inscrutable Hegelian philosophy or discusses 19th-century misconceptions about ancient human history and anthropology.

    This made me open up further to the idea that 21st-century economics could take seriously problems like wealth/income inequality and examine them rigorously, “scientifically”, based on things like 100 years of French tax records or whatever.

    The main (sole?) policy Piketty advocates in that book is a wealth tax. He advocates starting with a small wealth tax, which will allow economists to have more data about wealth, upon which further research and policy can be based. This made its way to Elizabeth Warren’s presidential campaign platform. One of Piketty’s graduate students was on her policy team.

    In Piketty’s subsequent book, Capital and Ideology, he advocates for some radical political and economic reforms, but he approaches the topic with a level of intellectual humility and acknowledging uncertainty that I find refreshing. I don’t know if he’s right and he doesn’t know if he’s right, but it’s an interesting jumping off point.

    So, that’s a brief story about a major political “conversion” I had, which maybe, hopefully, can tell you a little something about being exposed to new ideas.

    Nowadays, I really like The Ezra Klein Show. I don’t listen to most of the political episodes because politics is stressful and I need to take it in low doses.

    A pretty cool thing about Ezra is he’s willing to entertain ideas that differ significantly from what he already thinks. This includes ideas from people across the political aisle, but also ideas that aren’t currently part of mainstream partisan political discourse at all. For example, I remember him saying at one point that he thinks about the stories he might be missing as a reporter, that would seem important in retrospect when looking back on the current era but that aren’t on his radar (or most reporters’ radar). The example he gave was CRISPR.

    On his podcast, he also discusses topics like psychology, psychedelics, loneliness, polyamory, and other things that have importance for the world but aren’t really part of the news or mainstream political debates. I find that refreshing and it also gives me a chance to engage with the podcast and not be stressed out by news or politics.

    3 votes