Lia's recent activity

  1. Comment on Youtube channel recommendations 2026 in ~tech

    Lia
    Link Parent
    Good catch! I stand corrected. :)

    1 vote
  2. Comment on Youtube channel recommendations 2026 in ~tech

    Lia
    Link
    (My recommendations are not about tech or science, but these creators are all scientifically informed and/or technically advanced, regardless of the subject matter.)

    Baumgartner Restoration shows us the meticulous and often surprising process of art restoration, shot and edited so beautifully that it feels soothing and nourishing - sort of like staring into a campfire. He describes what he's doing as he goes, also going into some technical details that make the work seem even more fascinating. There's a lot of repetition in the videos because many projects make use of the same techniques, but I never seem to get tired of watching. Some of the most interesting cases are when he fixes a painting that has previously been unprofessionally restored.

    Heidi Priebe makes amazing self-help videos. The idea is to help people get more of the things they want out of life (and less of what they don't want!). Her material is well researched and she gives clear definitions for the terms she uses, as well as some research background when necessary. Unlike most self-help content out there, many of her videos offer practical and actionable advice. A lot of her older content is based on attachment theory, but the framework doesn't really matter when the results are universally applicable. For example, I consider myself securely attached but I still benefit from her material a great deal. I particularly enjoy her logical style of expression - this could be a downside for some people because it makes for more elaborate sentences, but for logically oriented people, the clarity she can reach on complex topics is outstanding. Examples: How to stop wasting your life in the drama triangle, How to deal with getting projected onto

    understitch makes extremely well-researched and well-presented videos about the history of different fashion brands, focused on the business and marketing angles. Highly, highly recommended to anyone even half interested in the topic.

    Florian Gadsby runs a one man pottery studio somewhere in the UK, showing and explaining his process from start to finish. I enjoy his stylistic rigour and attention to detail, and I love just watching skillful hands do their thing.

    Lab Muffin Beauty Science is run by a chemist whose work centers on the science of beauty products and on debunking the false marketing claims this industry is particularly saturated with, thanks to how easy it is to circumvent regulation. Example: Products I refuse to use as a chemist

    7 votes
  3. Comment on Cory Doctorow | AI companies will fail. We can salvage something from the wreckage. in ~tech

    Lia
    Link Parent
    I pinged you in order to not write a similar reply twice. I think it would have been rude to disagree with you "behind your back", and also somewhat rude to write another direct reply to you, because that can create the impression that I'm expecting a reply.

    You do realise that you yourself mischaracterised what raze was saying, without trying to check what they actually meant - even after another person (myself) clearly interpreted it differently? This type of thing happens a lot in conversations between humans. I don't know a better way to solve it than just believing everyone is engaging in good faith and constructively correcting the interpretations where needed. And it would be great if everyone could be in the habit of checking themselves before getting salty over a potentially misinformed interpretation.

  4. Comment on Cory Doctorow | AI companies will fail. We can salvage something from the wreckage. in ~tech

    Lia
    Link Parent
    I was using narcissism in its academic sense

    For the record, this is and was clear to me. I do not agree with @sparksbet's comment that only the popular, misleading definition is okay to use. It's better to give people tools to process reality constructively (this term being one such tool) than to give ragebait more breeding ground.

    1 vote
  5. Comment on We are witnessing the self-immolation of a superpower in ~society

    Lia
    (edited )
    Link Parent
    a good number of the planet's most exceptionally educated and productive people

    Just because a person, a business or an industry is exceptionally well positioned to extort money by abusing vendor lock-ins, for example, does not automatically make their functions value-adding. Not even if that's what is taught to economists these days.

    It seems that we're taking it a step further now: someone who doesn't even have enough paying customers to cover expenses, but who is able to attract venture capital by being extra manipulative and future-faking, seems to be considered a high-level value creator by many, even though their pursuits are effectively making life and societies worse for the majority of people.

    Something needs to happen to stop this delusional detachment from reality.

    15 votes
  6. Comment on Cory Doctorow | AI companies will fail. We can salvage something from the wreckage. in ~tech

    Lia
    Link Parent
    I appreciate your granular approach.

    To attempt a recap: I'm saying that it's important to be discerning about the interlocutor's process, even when they are a human being, and that the trouble caused by LLMs can teach us to become better at this. You're saying that some LLMs, at least in some cases, achieve results that are practically speaking indiscernible from the human process, even if we know their process is different, and that duck testing is enough to discern how to approach each conversation/situation.

    I'm tempted to draw a conclusion that we are saying more or less the same thing: (duck) testing is important and the LLMs that don't pass the test will teach us to become better at it.

    Something is still bothering me when it comes to the definition of knowledge but I can't put my finger on it immediately, so I'll get back to you after I've had a chance to give it further thought.

    1 vote
  7. Comment on Cory Doctorow | AI companies will fail. We can salvage something from the wreckage. in ~tech

    Lia
    Link Parent
    I guess we are both feeling misunderstood by the other, so sure, we can drop it rather than get deeper into the weeds. I'm a tad worried that I may have unintentionally offended you due to the definitions mismatch and the fact that I'm sticking to mine. Just know that it isn't personal!

    Thanks for the comments.

    3 votes
  8. Comment on Cory Doctorow | AI companies will fail. We can salvage something from the wreckage. in ~tech

    Lia
    Link Parent
    Look here, I completely agree with you that folks in online cesspools are using this term erroneously and in ways that cause harm. The solution isn't to just dismiss the use of the term entirely - instead, I would recommend learning the real meaning and using it appropriately.

    Here's an insightful book about it, should you ever feel like learning more. The Amazon description names another title for some reason, but the description still fits the one I linked to (emphasis mine):

    Harvard Medical School psychologist and Huffington Post blogger Craig Malkin addresses the "narcissism epidemic," by illuminating the spectrum of narcissism, identifying ways to control the trait, and explaining how too little of it may be a bad thing."What is narcissism?" is one of the fastest rising searches on Google, and articles on the topic routinely go viral. Yet, the word "narcissist" seems to mean something different every time it's uttered. People hurl the word as insult at anyone who offends them. It's become so ubiquitous, in fact, that it's lost any clear meaning. The only certainty these days is that it's bad to be a narcissist—really bad—inspiring the same kind of roiling queasiness we feel when we hear the words sexist or racist. That's especially troubling news for millennials, the people born after 1980, who've been branded the "most narcissistic generation ever."

    In Rethinking Narcissism readers will learn that there's far more to narcissism than its reductive invective would imply. The truth is that we all fall on a spectrum somewhere between utter selflessness on the one side, and arrogance and grandiosity on the other. A healthy middle exhibits a strong sense of self. On the far end lies sociopathy. Malkin deconstructs healthy from unhealthy narcissism and offers clear, step-by-step guidance on how to promote healthy narcissism in our partners, our children, and ourselves.

    Rest assured, I do not believe (and I don't think others here do either) that "any and all ignorance equates to narcissism", or other similar BS.

    1 vote
  9. Comment on Cory Doctorow | AI companies will fail. We can salvage something from the wreckage. in ~tech

    Lia
    Link Parent
    Thanks for clarifying.

    Did I understand your point correctly: because we don't distinguish the different levels/ways of knowing when it comes to humans, it doesn't make sense to do so regarding LLMs either? I agree in principle, it's just that I would like to see more granularity applied in both cases - not less.

    Without writing a novel about it, the person I mentioned above (who had ingested exceptional amounts of information) ended up being extremely challenging to form a relationship with because seeing him as someone who knows, rather than someone who possesses information, led to a seriously skewed dynamic on many levels.

    Of course there are cases where it really won't matter how some response got formulated, but there are many cases where it does matter, and some where it matters a great deal. It's important to have adequate terminology that draws distinction between knowledge that has been internalised (by applying it in real life situations) and other types of storing, managing and processing information. And I see no reason to not apply it to LLMs.

    2 votes
  10. Comment on Cory Doctorow | AI companies will fail. We can salvage something from the wreckage. in ~tech

    Lia
    (edited )
    Link Parent
    We’re just saying that our definitions of the verb “to know” don’t inherently carry that meaning without further clarification.

    Here's where I'm going to actually sound nitpicky, but so be it: the verb that I commented on was "to knowledge" - a wonderfully creative concoction meant to underline the procedural nature of the activity.

    I have no problem with people saying things like "This LLM knew the correct answer to my question about socks" and I wasn't criticising that sort of colloquial use of the term. I do have a problem with this: "If it knowledges like a knowledgeable person, it knowledges." This implies that the process under the hood doesn't matter, only the outcome does, and I just can't agree on that.

    We aren’t dismissing the value of making these distinctions.

    I just don't see them applied very often (if at all), even when a conversation would benefit from it. Here I did so myself and received fairly clear pushback. Speaking of duck tests, if it doesn't walk or quack like a duck, why should I assume it's a duck anyway?

    2 votes
  11. Comment on Cory Doctorow | AI companies will fail. We can salvage something from the wreckage. in ~tech

    Lia
    Link Parent
    You are exactly right that my definitions for certain terms differ from the mainstream around here - I wouldn't call them "extremely specific" though just because they aren't what you happen to be used to.

    I've encountered this issue before around the term "art" and I think it's more a reflection of the general difference between European and American ways to conceptualise reality. For example, the idea that memorisation does not automatically produce knowledge even when accurately and appropriately retrieved was part of my secondary education that everyone in my country receives. It's not some novel fringe idea where I come from, and I've found the distinction frequently useful in everyday life, which is why I'm applying it now. From my perspective it feels a tad brow-raising that some clearly intelligent and thoughtful people are baffled about it to this extent, and that it gets so lightheartedly deemed "nitpicking", in other words a useless way to think.

    Aside from the strong American representation, possibly Tildes also has more tech-oriented people than humanists?

    4 votes
  12. Comment on Cory Doctorow | AI companies will fail. We can salvage something from the wreckage. in ~tech

    Lia
    Link Parent
    Information retention and retrieval is not the same as knowledge.

    I know someone who has a photographic memory, knows seven languages and has read thousands of volumes of literature from all over the globe, ancient and modern and everything in between. When he quotes something from one of these books, he can often even remember the page number the passage is on.

    Does this person know the things that he can quote from memory? It appears that you believe the answer to be a straightforward yes, as long as his memory doesn't fail him. Am I correct?

    3 votes
  13. Comment on Cory Doctorow | AI companies will fail. We can salvage something from the wreckage. in ~tech

    Lia
    Link Parent
    I don't agree that whether or not one's interlocutor is actually participating in the conversation to the same degree and in the same way as you entails that you need to put in less effort.

    That's not what I said though. I am saying that the interlocutor's process matters, you were saying that only their output matters. In your reply you seem to agree with me that the process matters as well, given your statement that with an LLM, sometimes more effort is needed.

    I also just don't think that your comparison to a human who simply isn't listening or engaging with your conversation is apt.

    I wasn't making a direct comparison with an LLM (obviously?) - I simply attempted to give one example of a case where my level of engagement will be different depending on the interlocutor's process. Numerous other examples could be given that don't represent the same process as that of an LLM but still illustrate why the process matters.

    Societal collapse seems extremely hyperbolic even in the worst case scenario here.

    It might be wise to wait until you get what I'm saying (here and also on the other chain regarding narcissism) before locking down that statement, but up to you.

    4 votes
  14. Comment on Cory Doctorow | AI companies will fail. We can salvage something from the wreckage. in ~tech

    Lia
    Link Parent
    Well, narcissism is a human trait, present in all of us. We need a healthy level of narcissism in order to function. As such, it is helpful or not in a similar way to saying that someone's athleticism drives them to spend a lot of money and a lot of time on an indoor bike. As in: so what? It isn't an issue until it gets in the way of your life/health/relationships/financial stability/ability to function and be happy.

    Some of us have too much narcissism, others too little, and both can be problematic. When it comes to the folks who for example take an LLM's word over their spouse's whenever the latter disagrees, and who spend a lot of time "listening" to life advice from an LLM, unbothered by the vacuity and sycophancy (or perhaps unable to recognise it, or in extreme cases: strongly preferring it), this behaviour is indeed best explained by their overly dominant narcissism trait. I don't think that beating around the bush about such normal, widely recognised and researched concepts is helpful. Spreading information will help most people who are interested to learn; obfuscating the truth will help no one.

    Narcissistically wounded people are vulnerable to particular types of manipulation. The same applies to a lesser degree to the much broader group of people who, although not wounded, have simply not reached full emotional maturity. Teenagers are an example of a transient phase of inflated narcissism. Sometimes people can get stuck in their emotional development and exhibit similar traits in adulthood. None of these people meet the criteria of a personality disorder but they have some challenges with narcissism nonetheless.

    Corporations are fully aware of this and deliberately exploit it in marketing without care for the individual and societal repercussions. AI companies have a much broader attack surface (given how many of us share our deepest secrets) and set of tools to drive engagement, and it should go without saying that they will exploit those to the very limit of their ability.

    Like I said before, I actually find this a good thing because I'm hopeful that the inevitable pain that results from this will help people grow into more responsible, mindful consumers and do less of the "I was just using a service that was offered to me". (I hope that the parallels with "I was just following orders" are clear enough.) No one here is saying they are evil, just that this type of irresponsibility is a notable societal issue in our time, and that everyone would be better off if more people reached maturity sooner rather than later. It's a process that takes time and effort: actively processing your emotions, your place in society, your relationships and so on. Not having adequately completed it yet does not make a person deficient, but it doesn't get them a 'get out of jail free' card either. Going through this is everyone's own responsibility, as are the consequences of not doing so.

    To circle back to the topic at hand: some people are aware enough that they can use an LLM to help them along on that journey. This necessitates the understanding that the only person the user is having a conversation with is themself. Without such understanding, the process will most likely leave users worse off.

    3 votes
  15. Comment on Cory Doctorow | AI companies will fail. We can salvage something from the wreckage. in ~tech

    Lia
    Link Parent
    They were probably referring to the narcissism of the users who accept such solipsistic "conversations" as good enough.

    4 votes
  16. Comment on Cory Doctorow | AI companies will fail. We can salvage something from the wreckage. in ~tech

    Lia
    Link Parent
    I'm not particularly convinced the difference matters in a practical way.

    One situation where it matters is when gauging the amount of effort we should be putting in.

    If I'm having a conversation about something important with a life partner, whom I know well enough to tell they are actually processing my input and giving it a chance to affect their world view, values, habits... (= whatever tangible wiring in their neural network) - and that there is similarly a chance for me to be affected by someone whom I deem admirable and trustworthy - then I'm willing to give the conversation my best effort and a long time. Several days, if need be. If it's someone similarly engaged whom I care less about, like a friend, I'm still willing to put in a lot of effort, but not as much as I will for a partner.

    On the other end of that continuum, if it's someone who clearly isn't listening attentively enough to be able to actually process anything, I'll be done with the convo as soon as I notice.

    We know that some people will happily engage with an LLM for days on end, but believing this to be fundamentally no different from engaging with a mindful human with their own personality and thoughts is... dangerous. And we're starting to see what such engagement does to these people's minds, which should also indicate it isn't something to overlook.

    As well, being able to tell whether you're the only person/entity putting yourself and your personality on the line (while the other side is simply faking their way through in an attempt to drive further engagement from you) will protect you against getting scammed. Or against getting into dysfunctional relationships. And so on.

    It's an incredibly important life skill, one that can even lead to societal collapse if eroded too far.

    6 votes
  17. Comment on What private companies are you happy doing business with? in ~talk

    Lia
    Link Parent
    Thank you too for giving me a reason to avoid some cumbersome work :)

    My pet peeve regarding Obsidian is that it's developing so fast (especially accounting for third-party plugins and how they work together) that the documentation is always lagging behind, and many things that can be done with it just aren't mentioned anywhere. I'm happy if my ramblings could offer a glimpse of how one person uses it and give you some ideas. :) Good luck with everything - definitely make a post later if you need help with something or want further ideas.

  18. Comment on Cory Doctorow | AI companies will fail. We can salvage something from the wreckage. in ~tech

    Lia
    Link Parent
    it would exclude huge swaths of conversations between humans

    I do agree about this, but not necessarily the broader point that we shouldn't aim to tell real conversations apart from auto-piloting (machine assisted/generated or not). And I believe this to be one of the benefits of ubiquitous LLMs: more people will learn to tell the difference and why it matters.

    4 votes
  19. Comment on Cory Doctorow | AI companies will fail. We can salvage something from the wreckage. in ~tech

    Lia
    Link Parent
    The fact that you can now have a full conversation with a computer is strange and incredible

    Is it actually a conversation, though? Or just a facsimile where I contribute an input and the system produces a response that fairly convincingly emulates some sort of relatedness to my input?

    10 votes
  20. Comment on What private companies are you happy doing business with? in ~talk

    Lia
    Link Parent
    I forgot to mention two other ways that helped fix the tasks vault lag issue:

    I used to have a lot of my daily routines come up as tasks on the daily notes, where I would click them done. It works well for me but that's about 10 items repeating every day. Now that I have that vault on the newest Obsidian version, I chose to put most of them into the notes' frontmatter instead because they're easily viewable with the new Bases core plugin. I can still click them done and they aren't creating a lot of bulk for the tasks query engine to wade through.

    My meds are still tasks though, because I take a different strength depending on the day and I have two off days every week. I need the task to explicitly appear in my daily note so that it stands out rather than just being part of the frontmatter.

    Another thing about the daily notes: the log section receives the log entries I discussed earlier, but also any tasks that were completed that day. So there's a query for done tasks on each note, plus another query for that day's undone tasks. The daily note is created from a Templater template. One recent change: after the template creates the undone-tasks query on today's note, it now removes that query from the next-newest note (usually yesterday's). I used (free) ChatGPT to get this done and it worked after a few iterations. I'm assuming this also makes my vault less laggy, since the same query no longer has to be processed on every past day's note.
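    The "remove yesterday's leftover query" step could be sketched roughly like this. This is my own illustration of the idea, not the actual Templater template; the function name, marker strings and sample note are all made up, and a real template would read and write the note through Templater's API rather than operate on a plain string:

```javascript
// Hypothetical sketch: strip the undone-tasks query block out of the
// previous daily note's text, so only today's note carries the query.
const FENCE = "`".repeat(3); // the ``` code-fence marker, spelled out

function removeUndoneTasksQuery(noteText) {
  // Match a fenced "tasks" block that contains a "not done" filter line.
  const pattern = new RegExp(
    FENCE + "tasks\\n[^`]*?not done[^`]*?" + FENCE + "\\n?"
  );
  return noteText.replace(pattern, "");
}

// Illustrative previous-day note containing a leftover undone-tasks query.
const yesterdayNote = [
  "## Log",
  "- morning entry",
  FENCE + "tasks",
  "not done",
  "due on yesterday",
  FENCE,
  "## Notes",
  "",
].join("\n");

console.log(removeUndoneTasksQuery(yesterdayNote));
```

    The log entries and the rest of the note survive untouched; only the query block (and its trailing newline) disappears, which matches the behaviour described above.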

    1 vote