tesseractcat's recent activity

  1. Comment on Kindle modding wiki in ~books

    tesseractcat
    Link

    If you don't already have an ereader, consider getting a Kobo device instead of a Kindle. Kobo ereaders are much more modding-friendly, so there's no need to jailbreak or worry about an update bricking your device.

    15 votes
  2. Comment on Looking for the PC equivalent of Garage Band in ~creative

    tesseractcat
    Link

    Unfortunately there aren't many good DAWs targeted at novices. If you do want to dive into a DAW, however, I recommend Reaper (the WinRAR of DAWs, i.e. it has an indefinite evaluation period). It's not super easy to use, but it's also not terribly hard to learn, especially if you're not doing anything complex. If you do go this route, I also recommend the synth/plugin Dexed.

    17 votes
  3. Comment on Have you made a video game? Can I play it? in ~games

    tesseractcat
    Link

    I've made a bunch of prototypes but never got all that close to releasing anything real. I'm pretty happy with what I've put up on my itch page, though: https://tesseractcat.itch.io/. I'd say I'm most proud of my concept for a Flipnote-style heist game, and of a somewhat unique 3D modelling/drawing program.

    My New Year's resolution was to put a game on Steam, but I don't think that's going to end up happening -.-

    3 votes
  4. Comment on "The Bullshit Machines" - A free humanities course on LLMs for college freshmen from UW professors in ~humanities

    tesseractcat
    Link Parent

    I think you're right to be skeptical, and I didn't mean to present Andrej Karpathy himself as an unbiased figure. However, I believe that if you watch the video, you'll find it much less biased than the linked article.

    2 votes
  5. Comment on "The Bullshit Machines" - A free humanities course on LLMs for college freshmen from UW professors in ~humanities

    tesseractcat
    Link

    This course makes a number of claims about LLMs, mostly substantiated by repeating the claim that "they just predict the next token", which hides a lot of complexity and controversy behind the word "just". The course also takes a clear stance that LLMs cannot truly reason, understand, or have consciousness, when all of these points are hotly debated in the field.

    If you're looking for an introduction to LLMs without such a heavy bias I would recommend this great video by Andrej Karpathy: https://www.youtube.com/watch?v=7xTGNNLPyMI.

    8 votes
  6. Comment on What is China’s DeepSeek and why is it freaking out the AI world? in ~tech

    tesseractcat
    Link Parent

    Fair, although to defend my point a bit, the hype is still because they replicated OpenAI's o1-style reasoning. DeepSeek V3 had been around for a month or so with little hype. Although maybe that's just because it took a while to accrue publicity, so who knows /shrug.

    It would be really interesting to compare this with how much money it's costing OpenAI to train their models; unfortunately, to the best of my knowledge, they're not open with that information.

  7. Comment on What is China’s DeepSeek and why is it freaking out the AI world? in ~tech

    tesseractcat
    Link Parent

    Maybe. From my perspective at least, it seems like OpenAI spends the money to make the expensive breakthroughs, and everyone else copies them 1-2 years later. After all, the hype around DeepSeek is entirely because they replicated the new reasoning work that OpenAI was doing.

    I would be surprised if AGI (not my favorite terminology) weren't some derivative of LLMs, although probably a more multi-modal model (audio/video/robotics/language/etc.), with RL used heavily in training.

    2 votes
  8. Comment on What is China’s DeepSeek and why is it freaking out the AI world? in ~tech

    tesseractcat
    Link Parent

    Why do you think models will hit the point of "good enough" in the not-so-far future? OpenAI and co. explicitly state that their goal is AGI/ASI, which seems like an ambitious goal. Also, o1/o3/DeepSeek R1 are all starting to use RL techniques, which are very compute-intensive.

    4 votes
  9. Comment on What is China’s DeepSeek and why is it freaking out the AI world? in ~tech

    tesseractcat
    Link Parent

    This is only true if people decide that LLMs at the current level are good enough. Otherwise, more efficient training/inference will just result in bigger models with more capabilities (assuming scaling laws hold).

    8 votes
  10. Comment on I hate 2FA in ~tech

    tesseractcat
    Link

    Recently my Google Authenticator randomly stopped generating a few different codes (notably the one for my Google account), and I'm not the only one: https://support.google.com/accounts/thread/295605525/corrupted-google-authenticator-codes. I always write down the backup codes, but 2FA just feels so dangerous to me, like there's no recourse if my phone dies or I lose my written-down codes (fortunately this is unlikely). Anyway, I don't really have a point, but I wish I at least had the option to decide what sort of risk I'd like to take: getting hacked vs. losing my codes.

    Be careful if you're using Google Authenticator.

    4 votes
  11. Comment on OpenAI is a bad business in ~tech

    tesseractcat
    Link

    I think this article is interesting, but a bit myopic, considering that it basically misses the biggest part of OpenAI's strategy. They're hemorrhaging money because they're betting that they can train a smarter model. I think it's uncontroversial that earnings potential increases as the model gets smarter (very few people would pay for a GPT-2-level model, for instance), so the only question is whether they can train a smarter model in time, and what people will pay for it at that point.

    One objection is that even if they do manage to train a smarter model, at that point there will be no differentiator from other companies, which will also train equivalently smart models. I think this is partly true, but I wouldn't be surprised if OpenAI can exploit their lead and then maintain it through network effects.

    11 votes
  12. Comment on Why AI isn't going to make art in ~arts

    tesseractcat
    Link

    Gwern made a comment on this article that I thought was pretty insightful:

    He's also, broadly speaking, wrong. The blandness he refers to has little to do with 'taking the average of the choices'. Base model output is not bland! It is often wacky, bizarre, hilarious, or unexpected. See the samples people have been generating with the new Llama base model, or just again, go back to my old GPT-3 samples - samples Chiang should've seen since he's been pontificating regularly about LLMs this whole time. (In considerable part because there is usually no such thing as 'taking the average of the choices' in text: what is the 'average' of two likely predictions like 'a' or 'b'? There is no such thing as a letter 50% of the way in between 'a' and 'b', the sampling process has to pick one or the other, and then it's committed.)

    The blandness he's referring to is the effect of the preference-learning/instruction-tuning made infamous by ChatGPT, which has no simple analogy but has little to do with "the choices that other writers have made, as represented by text found on the Internet" (in fact, this is the opposite of correct, as making the outputs unlike 'text found on the Internet' is most of the point). The point of the tuning is to collapse the LLM, which starts off as an extraordinary mimic of every style, onto a narrow range of styles & vocabs - yielding the ChatGPTese we know all too well now.
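    Gwern's point about sampling can be sketched in a few lines of Python. This is a toy illustration with made-up probabilities, not anything from an actual LLM: the sampler must commit to exactly one token per step; there is no "average" of 'a' and 'b' to emit.

```python
import random

# Toy next-token distribution: the model assigns a probability to each
# candidate token. These numbers are hypothetical.
probs = {"a": 0.5, "b": 0.5}

def sample_token(probs, rng):
    # Categorical sampling: draw a uniform number and walk the cumulative
    # distribution until it is exceeded. Exactly one token is returned --
    # the process commits to a discrete choice, never a blend.
    r = rng.random()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # fallback for floating-point rounding at the boundary

token = sample_token(probs, random.Random(0))  # either 'a' or 'b', never both
```

Across many draws both tokens appear in proportion to their probabilities, but any single completion contains one concrete choice, which is why base-model output can be wacky rather than averaged-out.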

    I find myself a bit frustrated when people associate AI capabilities with the corporate sanitized outputs of ChatGPT/Dall-E/Bing Image/etc. Although it's not that surprising, considering that's the only way the average person will interact with AI.

    If you're interested in playing around with more authentic AI, I'd suggest a non-finetuned version of Llama, or Stable Diffusion (although the newer versions of Stable Diffusion are a bit sanitized as well; 1.5 is probably the best for wackier outputs). Prompting will be harder, but you won't get outputs that sound/look like AI.

    16 votes
  13. Comment on Grokking KOReader in ~books

    tesseractcat
    Link Parent

    For me the main thing is the ability to use regular folders to sort files. For the life of me I don't understand why none of the mainstream e-readers let me just use folders to sort books (they all want you to manually add everything to collections).

    Other than that it's nice to be able to launch an SSH server to remotely transfer files. And the reading stats are pretty thorough. Also, although it's kind of niche, when reading manga with a bunch of volumes, KOReader will give you an option to open the next volume once you finish the current one, rather than needing to go back to the file explorer and manually find the next one. Oh, and KOReader will give you an option to discard the embedded style in a book if you don't like it (which I often have to do since certain books have weird indentation/paragraph styles).

    Basically it's just a bunch of little QOL things that add up.

    2 votes
  14. Comment on Looking for a decent, but cheap-ass tablet in ~tech

    tesseractcat
    Link Parent

    I disagree; iPads tend to retain their resale value a lot better than Android tablets. In this case, that means a used Android tablet is a much better value than a used iPad. The (older) Samsung flagship tablets in particular are quite cheap used now and are great for the described use case.

    1 vote
  15. Comment on ‘Furiosa’ fires up Cannes with six-minute standing ovation for Anya Taylor-Joy and teary Chris Hemsworth in ~movies

    tesseractcat
    Link Parent

    It's probably better to watch Fury Road first, wait a bit, and then watch Furiosa. If you were committed to a marathon, it would be cool to watch Furiosa -> Fury Road, though I don't think that would be the best first viewing.

    2 votes
  16. Comment on Tiny Glade's castle-doodling demo is packed with delightful little reactive surprises in ~games

    tesseractcat
    Link

    I'm happy to see Tiny Glade joining The Gnorp Apologue in the emerging category of games written in Rust. From what I've read, it uses Bevy's ECS combined with a custom renderer.

    2 votes
  17. Comment on Noam Chomsky: The false promise of ChatGPT in ~tech

    tesseractcat
    Link Parent

    That's true. It's kind of an apples-to-oranges comparison. LLMs are an attempt to bootstrap intelligence off of humanity without needing to go through the whole billion-year evolution gauntlet. So they definitely have more initial structure imposed on them than the first microorganisms (which might truly be called blank slates). Given the complexity of the human body, though, I suspect that the amount of data imposed on an untrained LLM is much less than that of a baby.

    3 votes
  18. Comment on Noam Chomsky: The false promise of ChatGPT in ~tech

    tesseractcat
    Link Parent

    Exactly. Babies have so much behavior and knowledge from birth that it's surprising people still compare them to untrained neural nets.

    13 votes
  19. Comment on Noam Chomsky: The false promise of ChatGPT in ~tech

    tesseractcat
    Link Parent

    Whether or not the human mind is processing a lot of information, there's a deeper issue with this line of argument. Since human babies are born helpless, it's easy to think of them as a blank slate, but they're not. Humans have been "trained" by millions of years of evolution, the equivalent of probably exabytes' worth of "data". It's difficult to come up with a perfect analogy, but I think it's more accurate to consider a baby the equivalent of a trained model, and to consider any learning that happens after birth like fine-tuning, or perhaps in-context learning (or a mixture of both).

    17 votes
  20. Comment on Jet Lag | Season 9 trailer : Hide and Seek across Switzerland in ~hobbies

    tesseractcat
    Link Parent
    Spoiler

    Overall, while I've been enjoying it, I think it's one of the more boring seasons. My main issue is that the curses haven't been used enough: basically all of them have been pretty trivial, and the hiders only ever roll once or twice. I think they should have made curses cheaper but less powerful overall, with a few rare good ones; in addition, the seeking abilities should have been a bit more powerful, considering how long it's taking to find the hiders on average.

    Also it seems a bit unfair that they don't all get an equal number of opportunities to hide.

    5 votes