Sinnerman's recent activity

  1. Comment on Learning and studying things without taking notes in ~life

    Sinnerman
    Link

    A couple techniques you may be interested in:

    1. Speed reading. I'm only familiar with Stanley Frank's book "The Evelyn Wood Seven-Day Speed Reading and Learning Program" but there are others. I'm skeptical of some of the book's claims about being able to read entire pages at a glance (the Wikipedia page has more about this) but some of its simpler speed-up techniques are worthwhile.

    2. Frank's book has a section on "recall patterns" (chapter 6), which are ways of organizing what you read to help you remember it. I've heard of the terms "cognitive maps" and "mind maps", which may be related. This also leads us to...

    3. Mnemonics, the use of memorization techniques. People have been doing this for centuries, and you're probably familiar with some simple mnemonics already. But there are a variety of general-use techniques and even memorization championships. The book I'm most familiar with is "How to Develop a Brilliant Memory Week by Week" by Dominic O'Brien.

    Disclaimer: I'm not an expert on any of these things!

    2 votes
  2. Comment on Anyone here into yoyos, juggling, kendama, or other skill toys? in ~hobbies

    Sinnerman
    Link

    Hey, I learned to juggle a couple years back.

    FYI the annual IJA Festival (International Jugglers' Association Festival) just started; it runs July 17–23 and this year it's in South Bend, IN.
    https://festival.juggle.org/

    3 votes
  3. Comment on What have you made using an AI tool? in ~creative

    Sinnerman
    Link Parent

    Unfortunately, I was using Bing Chat (GPT 4), which I do not recommend for this.

    I did the same kind of game yesterday, playing Casaubon in Middlemarch. I began by repenting in prayer and altering my will. Then I thought to myself about what scholars I could communicate with to improve my arguments, and GPT helpfully offered several scholars of the area, including Mary Ann Evans (George Eliot) who wrote Middlemarch. Well I couldn't resist that, so Casaubon and Dorothea had tea with Ms Evans and Mr Lewes.

    One thing that immediately bothered me was that Bing Chat's GPT was so unerringly optimistic and obsequious. In GPT's story, Ms Evans and Mr Lewes were fawning and loved Casaubon's writing (which they wouldn't have). There was no drama or conflict. So I had Casaubon stay in London (with Dorothea) and look for a job, and sure enough GPT made sure he found one. I had GPT come up with a rival, but he was pretty easily defeated. I tried to come up with a dangerous scholarly trip, but GPT just says "You face some difficulties and dangers along the way, such as rough weather, scarce resources, hostile locals, or rival explorers. You overcome them with courage and perseverance, with reason and evidence, with grace and eloquence."

    So I had Casaubon and Dorothea visit an opium den, thinking that surely there'd be some conflict or drama there. GPT says "You explore the room with interest and curiosity, hoping to find some hidden treasures or secrets. You talk to some of the people who are smoking opium, asking them some questions or making some comments that are respectful and constructive. You try to learn more about their lives and their reasons for smoking opium. You also smoke some opium yourself, as part of your research for your article. You smoke it with caution and moderation, hoping to experience its effects without becoming addicted or harmed."

    So I told GPT that Casaubon and Dorothea visit the opium den again, because they've had some strange urges after their first visit. That's when it got weird. The first paragraph was plain GPT positive blather, but then it got dark, talking about Dorothea asking Casaubon what was wrong with him, "you don't know whether to laugh or scream". There was another paragraph after that, but I didn't get a chance to read it because GPT blanked it all out and just said "I can't answer that right now."

    So that whole experience really highlighted how Bing Chat in particular (using RLHF or other means) was trained to constrain its output to the sort of thing that's appropriate in a professional setting. And sure, that's OK: you don't want to be using a GPT in the office and have it suddenly start spewing offensive stuff in front of your boss. But for the purposes of exploring literary writing, it's a bad match.

    Maybe I'll try that Claude, thanks for the tip.

    2 votes
  4. Comment on What have you made using an AI tool? in ~creative

    Sinnerman
    Link Parent

    "Its weighing different properties based on how frequently they are seen based on the context of what surrounds them."

    I think it's a bit more complex than that, but we're starting to reach the boundaries of what I'm familiar with...

    "In the realm of copyright, the LLM should be subjected to every copyright of everything it consumed in its training data."

    I'm not a lawyer, so I'm not sure... is this settled legislation or case law, either in the US or EU or elsewhere? It'd be interesting to see what the arguments for and against are/were. If not, I think this might be the kind of thing that (in the US) goes to the Supreme Court, though I suspect legislation will probably come first. I'm not sure what that means in the interim.

    Alternately, are you making a moral statement? I kind of get the "gut feeling" of the argument, but for a more thorough case, I think a lot of it comes down to the technical details of licensing and copyright law. And my impression is that copyright law is pretty outdated in the US.

    TBH I kind of agree with you, I don't want some corporation taking stuff I post online and using it for marketing purposes etc. But I'm not 100% sure, and I think it's important to understand the details involved.

  5. Comment on What have you made using an AI tool? in ~creative

    Sinnerman
    Link Parent

    So... first of all, when you say "The only ethical AI would be one that was only trained on public domain data," I think that's an interesting proposition worth exploring. So if I make some factual corrections, it's not an attack or anything; I just want to explore this proposition better.

    First, you say: "It's just randomly spitting out whatever is highest probability." This is usually not true. Most GPT-based systems have a setting like "temperature" which controls how willing the model is to pick lower-probability tokens rather than always the single highest-probability one. I once accidentally set a temperature value extremely high, and the output was barely intelligible. But I believe that even at lower temperatures there's some element of randomness involved, so it's not always the same output. (This will vary by implementation, of course.)
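
    Just to illustrate (a minimal sketch in Python; the function name and toy logits are made up for illustration, not any particular system's API):

    import math
    import random

    def sample_with_temperature(logits, temperature=1.0):
        # Divide raw scores by the temperature: values < 1 sharpen the
        # distribution toward the top token, values > 1 flatten it.
        scaled = [score / temperature for score in logits]
        # Softmax (subtracting the max for numerical stability).
        peak = max(scaled)
        exps = [math.exp(s - peak) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        # Sample one token index according to the resulting probabilities.
        return random.choices(range(len(probs)), weights=probs, k=1)[0]

    # Toy scores for a 4-token vocabulary:
    logits = [2.0, 1.0, 0.5, -1.0]
    print(sample_with_temperature(logits, temperature=0.2))  # almost always 0
    print(sample_with_temperature(logits, temperature=5.0))  # much more random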

    Second, you say that if you created an AI and trained it on two sentences, it would most likely produce those two sentences. Aside from point 1 above, there's the question of whether such a system would actually be called an AI. It would certainly be strange to train an LLM on only those two sentences; after all, the first "L" stands for "large". The kind of model you'd plausibly train on just two sentences is a character n-gram generator. I happen to have a character n-gram generator handy and put those two sentences into it. For character bigrams I got, as expected, nonsense:

    dullesese bo wo wohnnndoy
    Jork wohnnnnnnnd Ale woy Appppplesesese ma annndle workese nnnduleese woy Apllese ma wo Jork Appplak bohnnnnnd wo ma 
    Appppppppplannndulle nnnd Appll mannnnnnde workese bohndhndulllanndulll Jorkese wo
    

    For character n-grams, what you're really hoping for is portmanteaux; "workese" looks cool. Anyway, character 5-grams is a bit better:

    make Johnny a dull boy
    Johnny a dull boy
    Johnny a dull boy
    Johnny Appleseed boy
    Johnny a dull boy
    

    And just for kicks, this is what word (as opposed to character) bigrams look like:

    Johnny a dull boy
    Johnny a dull
    Boy johnny a dull boy johnny
    Appleseed a dull boy johnny
    A dull boy. 
    

    So the point is: it's important to be specific about the types of algorithms used. LLMs aren't trained on only two sentences, and while there are simpler models that generate novel words, even they don't generally spit out exactly the same words. (A minimal sketch of such a generator is below.)
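
    For the curious, here's what a character n-gram generator like the one described above can look like (a generic reconstruction in Python, not the actual code used; the training sentences are guesses based on the sample output):

    import random
    from collections import defaultdict

    def train_char_ngrams(text, n):
        # Map each (n-1)-character context to the list of characters
        # observed to follow it in the training text.
        model = defaultdict(list)
        for i in range(len(text) - n + 1):
            model[text[i:i + n - 1]].append(text[i + n - 1])
        return model

    def generate(model, n, length=80):
        # Start from a random context, then repeatedly sample one of the
        # characters seen after the current (n-1)-character context.
        out = random.choice(list(model.keys()))
        for _ in range(length):
            followers = model.get(out[-(n - 1):])
            if not followers:
                break
            out += random.choice(followers)
        return out

    # Stand-in corpus (the actual two training sentences aren't given above):
    corpus = ("All work and no play makes Johnny a dull boy. "
              "Johnny Appleseed planted apple trees.")
    print(generate(train_char_ngrams(corpus, n=2), n=2))  # mostly nonsense
    print(generate(train_char_ngrams(corpus, n=5), n=5))  # near-verbatim phrases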

    Third, you say: "It would be literally only the words King wrote (and published under copyright). It wouldn't have any other language at its disposal." This is not how GPTs are generally trained. Usually what happens is: an LLM is trained on a large amount of data, and the neural network learns high-level features. Then the system is fine-tuned on a smaller set of data more relevant to whatever specialized application you're trying to build. The idea is that the larger LLM learns features, and through transfer learning those features also get applied to the specialized application during fine-tuning. (If it's a chat GPT, there's also additional training to enable the GPT to participate in a dialogue.) Even Stephen King has not produced enough text to train an LLM by himself, so his text would probably be used for fine-tuning. But even then, the larger LLM would be providing the structures that guide the final output (i.e. it's not just King's writings producing the final output).
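
    To make the pretrain-then-fine-tune pattern concrete, here's a hedged sketch using the Hugging Face transformers library ("gpt2" stands in for whatever pretrained base model you like, and "king_corpus.txt" is a hypothetical file name, not a real dataset):

    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    # 1. Start from a model pretrained on a large general corpus; this is
    #    where the broad high-level features of language come from.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # 2. Fine-tune on a much smaller specialized corpus (here, a
    #    hypothetical file of one author's text).
    dataset = load_dataset("text", data_files={"train": "king_corpus.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = dataset["train"].map(tokenize, batched=True)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="finetuned", num_train_epochs=1),
        train_dataset=tokenized,
        # For causal LM training the inputs double as the labels:
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    # The fine-tuned model keeps the general model's learned structure and
    # blends in the new corpus's style; output isn't drawn from it alone.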

    Finally, you say: "Which will likely copy characters, settings, themes, and tropes from his existing stories and spit them out in a slightly different order." This is analogous to William S. Burroughs's "cut-up" method, which is admittedly problematic from a copyright perspective. Burroughs used to just take pages of text, cut them in half, and rearrange them. Similarly, ancient Romans and Greeks used to cut up lines of poetry and rearrange them; these were called "centos". However, as I describe above, neither modern GPT methods nor older n-gram methods work in this simple way.
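
    For contrast with the statistical methods above, the cut-up method itself is mechanical enough to fit in a few lines (a toy sketch; the input sentence is just a placeholder):

    import random

    def cut_up(text, pieces=4):
        # Burroughs-style cut-up: slice the text into chunks, shuffle
        # them, and paste the chunks back together.
        words = text.split()
        size = max(1, len(words) // pieces)
        chunks = [words[i:i + size] for i in range(0, len(words), size)]
        random.shuffle(chunks)
        return " ".join(word for chunk in chunks for word in chunk)

    print(cut_up("the quick brown fox jumps over the lazy dog "
                 "while the rain falls softly on the rooftops"))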

    Anyway, I'm not an expert in this, I'm just trying to share what little I know. If I got something wrong, feel free to tell me, thx!

    5 votes
  6. Comment on What have you made using an AI tool? in ~creative

    Sinnerman
    Link

    I recently asked a GPT to generate a text adventure game for me. You know, the kind where it says "You wake up. You're in a room. You can look around, leave through the door... etc" and you type your command and it tells you what happens.

    But in this case I had the GPT make the adventure based on George Eliot's "Middlemarch." I kind of wanted to be Casaubon but it made me Dorothea after I was married to Casaubon, so I went with it. So I woke up, puttered around the house in my unhappy marriage. GPT was like: "you remember the past days when you used to wear colorful clothes" and I was like "wtf GPT, Dorothea never wore colorful clothes" and GPT was like "I'm taking creative liberties, and by the way Dorothea wore colorful clothes by the end of the book." and I was like: whatever.

    Anyway, I go talk to Casaubon and he's sneering at me. So I ask GPT for info about Casaubon's character. I'd re-read Middlemarch during my PhD studies (in CS!) and during that reading I'd been very alert to the themes of intellectual yearning and failure (Dorothea, Lydgate, Mr Brooke, Ladislaw, but especially Casaubon). GPT found some points (and cited someone's writing!) about Casaubon that were kind of obvious but it was still interesting to see them stated that way (because every time I read Middlemarch I never had anyone to discuss it with). GPT and I also had a nice little discussion on what kinds of things Casaubon could have done to be more successful (which might have been encouraging for me to hear when I was doing my PhD!)

    Back in the game, I'm doing some note-taking task for Casaubon and I don't quite understand, and Casaubon's disrespecting me. So I'm like, "GPT, what are some ways to deal with troublesome people?" And GPT gives me a bunch of strategies which sound like they came from a self-help book. So I pick some of those strategies, and Casaubon starts to chill out a bit. Casaubon ends up finding a task that I can actually do, and because of the strategies that GPT suggested, Casaubon and I (Dorothea) start getting along better. After I finish the task, we go to dinner and I have the GPT end it there, and GPT provided a hopeful-sounding ending paragraph.

    The main thing I wish I'd done is have GPT generate the text in Eliot's style -- I love her sentences. One thing I noticed is that the GPT wasn't as "insightful" as Eliot. In Middlemarch, every page has several great insights into people's ways of being, in a way that seems universal yet is tightly dependent on the narrative situation at hand. That sort of insight might currently be possible for GPTs with clever prompting. Alternately, it's possible that newer networks will have that insight by default. That would be one way, though difficult to measure empirically, to subjectively track improvements in GPT "intelligence".

    5 votes
  7. Comment on What have you made using an AI tool? in ~creative

    Sinnerman
    Link Parent

    "I fed every Stephen King novel to an AI and now it can spit out large chunks of Stephen King's writing to anyone who asks"

    But GPTs don't actually spit out Stephen King's writing. They learn high-level features of that writing, and then generate new text that is consistent with those features.

    I'm not sure you can successfully argue that a writer's high-level features should be copyrighted. It'll be interesting to see how the arguments play out in the cases of the writers who are suing AI companies.

    6 votes
  8. Comment on Lemmy.world has been hacked and is currently down [UPDATE: It's back up] in ~tech

    Sinnerman
    Link Parent

    You're right that obvious bots haven't been posting stuff on Lemmy/Kbin. However, Lemmy devs acknowledge that bot account creation is a problem:
    https://github.com/LemmyNet/lemmy/issues/2355

    People who track user count (i.e. with "friendly" bots) can see suspicious patterns in account creation:
    https://botsin.space/@threadcount/110581723322900741

    Over the past month especially, lemmy/kbin admins have used a variety of ways of dealing with bot account creation, e.g. captchas, emails, and human review:
    https://lemmy.world/post/293545

    So it's almost certain that there are bot accounts on fediverse servers, which will skew "registered user" numbers to favor servers with open signups like lemmy.world and disfavor servers with human-review signups like Beehaw. Daily Active Users is a better metric until the bots start posting:
    https://lemmy.fediverse.observer/dailystats
    (but fediverse observer apparently doesn't break it down by server, which was the question above.)

    Killer username, btw.

    2 votes
  9. Comment on Lemmy.world has been hacked and is currently down [UPDATE: It's back up] in ~tech

    Sinnerman
    Link Parent

    According to fediverse observer, lemmy.world is the largest Lemmy instance by number of users, and there are several other Lemmy instances that are larger than Beehaw. However, there's some question about how many of those users are "bot" signups.
    https://fediverse.observer/list

    2 votes
  10. Comment on I would really appreciate some help. Looking for moving, thoughtful or observant or interesting songs that aren't metal or rap/hiphop in ~music

    Sinnerman
    (edited )
    Link

    If you want some (e: acoustic) guitar-based rock, hopeful but wistful with interesting lyrics, try this:
    Amy MacDonald - Mr Rock and Roll

    1 vote
  11. Comment on What are you looking for out of artificial intelligence? in ~tech

    Sinnerman
    Link Parent

    "I don't think the limit is in raw compute, but rather on the software and algorithm side."

    Not OP, but I've been interested in this. You're basically right about the limitations being algorithmic, but it's a bit more fundamental than that. There are basic theoretical questions related to "meaning" that remain unanswered.

    Specifically, consider a word written on a piece of paper: where does it get its meaning from? Clearly, it's not inherent, because it's nothing but ink. There are various philosophical ideas related to the nature of a symbol and the types of interactions and commitments that need to occur for those symbols to be used in a "meaningful" way. (And if you think about it, a lot of it becomes intuitive: how did YOU learn the meaning of those words on paper? In what ways do you act to coordinate that meaning with others? If you want to create a new meaning, what do you have to do?)

    But, crucially, although there are plenty of philosophical theories, there are no complete formalisms that are concrete enough to implement algorithmically. There's been some valuable work on parts of the question related to symbol grounding, embodied meaning, human-agent semantic coordination, etc. But there's still a lot of work to be done to create systems that truly use language "meaningfully".

    But tbh I'm not very good at predictions. Maybe I'm wrong and all it takes is a neural network that's trained on enough data and that just has a certain type of architecture. I like neural networks as much as the next guy, so I'll be happy enough if that's the case. Thanks for the tip about the LocalLLM Discord server, I'll check it out!

    1 vote
  12. Comment on What are some overlooked comic books or graphic novels that you think deserve more attention? in ~comics

    Sinnerman
    Link

    There are several comics called "Exquisite Corpse." The one I'm talking about was published by Dark Horse in 1990 and consisted of three issues (Green, Red, and Yellow) which were released simultaneously and could be read in any order. The story was about a character's sexual traumas and the violent acts that character then committed, all graphically depicted in a stylized "ugly" black and white, so it was somewhat disturbing but also thought-provoking. This sort of comic never had a chance of becoming "mainstream", but it doesn't deserve to fall into obscurity forever.

    3 votes
  13. Comment on Music Discovery Thread: Share the Top 3 songs you’re currently obsessed with! in ~music

    Sinnerman
    Link Parent

    The Hilton, Ghostface feat. Raekwon

    Great song, hadn't heard it before, reminds me a bit of Biggie, in terms of the storytelling aspect and the background synth-violins towards the end to add drama.

  14. Comment on Music Discovery Thread: Share the Top 3 songs you’re currently obsessed with! in ~music

    Sinnerman
    Link Parent

    Yes, it's a crime that Käärijä didn't win, but on the bright side he's now one of those unforgettable performers, like Go_A and Verka Serduchka, who define Eurovision even though they didn't win.

    This year, I thought that Pasha Parfeni and Luke Black had very high-quality songs and stage shows that weren't rated that highly by either the juries or the popular vote; I'm surprised they didn't do better.

    2 votes
  15. Comment on Music Discovery Thread: Share the Top 3 songs you’re currently obsessed with! in ~music

    Sinnerman
    Link Parent
    Boris Brejcha is amazing. Whenever I'm working on something and I get stuck, I put on his Cercle set at Grand Palais and I usually figure things out. (The song "Gravity" is a little under halfway...

    Boris Brejcha is amazing. Whenever I'm working on something and I get stuck, I put on his Cercle set at Grand Palais and I usually figure things out. (The song "Gravity" is a little under halfway through.) Not only is that Cercle set a great mix, but the cinematography's solid, and at the end there's a cute little bit where they go into a giant sphere, interview him, and award him an art print engraving -- to which he looks suitably bemused and impressed.