• Showing only topics in ~tech with the tag "language models.large"
    1. Is trying to become an author insane in times of LLMs?

      A simple question. I know LLMs are currently not a replacement for authors. Will that remain true in 5 to 10 years?

      EDIT: No. I never expected to earn a living either mostly or exclusively by selling books. There are, however, many "side gigs" in my country that can greatly benefit from being published by a real company. Ultimately, though, I'm not in it primarily for the money. But I wonder what the future holds for fiction as a whole.

      21 votes
    2. Part of me wishes it wasn't true but: AI coding is legit

      I stay current on tech for both personal and professional reasons but I also really hate hype. As a result I've been skeptical of AI claims throughout the historic hype cycle we're currently in. Note that I'm using AI here as shorthand for frontier LLMs.

      So I'm sort of a late adopter when it comes to LLMs. At each new generation of models I've spent enough time playing with them to feel like I understand where the technology is and can speak about its viability for different applications. But I haven't really incorporated it into my own work/life in any serious way.

      That changed recently when I decided to lean all the way into agent-assisted coding for a project after getting some impressive boilerplate out of one of the leading models (I don't remember which one). That AI can do a competent job on basic coding tasks like writing boilerplate code is nothing new, and that wasn't the part that impressed me. What impressed me was the process, especially the degree to which it modified its behavior in practical ways based on feedback. In previous tests it was a lot harder to get the model to go against patterns that featured heavily in the training data, and then get it to stay true to the new patterns for the rest of the session. That's not true anymore.

      Long story short, add me to the long list of people whose minds have been blown by coding agents. You can find plenty of articles and posts about what that process looks like so I won't rehash all the details. I'll only say that the comparisons to having your own dedicated junior or intern who is at once highly educated and dumb are apt. Maybe an even better comparison would be to having a team of tireless, emotionless, junior developers willing to respond to your requests at warp speed 24/7 for the price of 1/100th of one developer. You need the team comparison to capture the speed.

      You've probably read, or experienced, that AI is good at basic tasks, boilerplate, writing tests, finding bugs and so on. And that it gets progressively worse as things get more complicated and the LoCs start to stack up. That's all true but one part that has changed, in more recent models, is the definition of "basic".

      The bit that's difficult to articulate, and I think leads to the "having a nearly free assistant" comparisons, is what it feels like to have AI as a coding companion. I'm not going to try to capture it here, I'll just say it's remarkable.

      The usual caveats apply: if you rely on agents to do extensive coding, or handle complex problems, you'll end up regretting it unless you go over every line with a magnifying glass. They will cheerfully introduce subtle bugs that are hard to catch and harder to fix when you finally do stumble across them. And that's assuming they can do the thing you're asking them to do at all. Beyond the basics they still abjectly fail a lot of the time. They'll write humorously bad code, they'll break unrelated code for no apparent reason, they'll freak out and get stuck in loops (that one surprised me in 2025). We're still a long way from agents that can actually write software on their own, despite the hype.

      But wow, it's liberating to have an assistant that can do hundreds of basic tasks you'd rather not be distracted by, answer questions accurately and knowledgeably, scan and report clearly about code, find bugs you might have missed, and otherwise soften the edges of countless engineering pain points. And brainstorming! A pseudo-intelligent partner with an incomprehensibly wide knowledge base and unparalleled pattern matching abilities is guaranteed to surface things you wouldn't have considered.

      AI coding agents are no joke.

      I still agree with the perspectives of many skeptics. Execs and middle managers are still out of their minds when they convince themselves that they can fire 90% of their teams and just have a few seniors do all the work with AI. I will read gleefully about the failures of that strategy over the coming months and years. The fallout from their short-sightedness and the cost to their organizations won't make up for the human cost of their decisions, but at least there will be consequences.

      When it comes to AI in general I have all the mixed feelings. As an artist, I feel the weight of what AI is doing, and will do, to creative work. As a human I'm concerned about AI becoming another tool to funnel ever more wealth to the top. I'm concerned about it ruining the livelihoods of huge swaths of people living in places where there aren't systems that can handle the load of taking care of them. Or aren't even really designed to try. There are a lot of legitimate dystopian outcomes to be worried about.

      Despite all that, actually using the technology is pretty exciting, which is the ultimate point of this post: What's your experience? Are you using agents for coding in practical ways? What works and what doesn't? What's your setup? What does it feel like? What do you love/hate about it?

      50 votes
    3. Duck Duck Go search AI curiously cited Tildes

      I was trying to find out why Lidarr wasn't matching my copy of The Cure's Greatest Hits. Found out I've got some bootleg Russian release that's catalogued on Discogs (I eventually found the MusicBrainz release and updated my profile to include bootlegs). So I searched "Lidarr use specific discogs release" and the Duck Duck Go search assist spat out some text about Lidarr not using Discogs and cited this Tildes post.

      It's curious because that post is three years old and doesn't talk about Discogs integration in Lidarr; there's just one mention of Discogs in the post and some folks talking about Lidarr in the comments (it did cite a relevant GitHub issue about it, though). The AI response mentioned that some users track new releases with Lidarr with downloads disabled; while that is covered in the post, it seems fairly tangential to my query.

      I'm curious why it decided to check or cite a Tildes post. No Tildes posts came up in the first couple of pages of search results. I use Tildes from the same location, though on my phone, whereas this query was on my desktop, and I have done a couple of DDG queries using "site:tildes.net" on my phone.

      Has anyone else seen a search assist cite an unexpected site? Not unexpected as in irrelevant (that's all too common), but small and specific sources.

      29 votes
    4. How has AI positively impacted your life?

      I've been trying to get a more rounded understanding of the impacts that "AI" has had since ChatGPT went viral back in 2022.

      I've found it easy to gather a list of negative impacts, but have struggled to point to many positives.

      I was curious if there were folks who have used any of these AI tools and would be willing to share any positive impacts those tools have had in their lives. I'm particularly interested in the text, audio, image, and video generation tools that have appeared since ChatGPT went viral, but please share anything else that you think fits.

      50 votes
    5. Is it possible to easily finetune an LLM for free?

      So, Google's AI Studio used to have an option to finetune Gemini Flash for free by simply uploading a CSV file, but it seems they have removed that option, so I'm looking for something similar. I know models can be finetuned on Colab, but the problem with that is it's way too complicated for me; I want something simpler. I think I know enough Python to be able to prepare a dataset, so that shouldn't be a problem.
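      For reference, here is a minimal sketch of what the Colab route roughly looks like with the Hugging Face transformers, peft, and datasets libraries. The model name, the train.csv filename, and the "prompt"/"response" column names are illustrative assumptions rather than recommendations, and free-tier GPUs will generally only fit small models trained through a LoRA adapter.

      ```python
      # Minimal LoRA finetuning sketch, assuming a CSV with "prompt" and "response"
      # columns and the transformers, peft, and datasets packages installed.
      from datasets import load_dataset
      from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
      from peft import LoraConfig, get_peft_model

      model_name = "Qwen/Qwen2.5-0.5B"  # assumption: any small causal LM that fits free-tier VRAM
      tokenizer = AutoTokenizer.from_pretrained(model_name)
      model = AutoModelForCausalLM.from_pretrained(model_name)

      # Train only a small LoRA adapter instead of the full model to stay within memory limits.
      model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

      # Turn each CSV row into a single tokenized training example.
      data = load_dataset("csv", data_files="train.csv")["train"]

      def tokenize(row):
          text = f"{row['prompt']}\n{row['response']}{tokenizer.eos_token}"
          ids = tokenizer(text, truncation=True, max_length=512)
          ids["labels"] = ids["input_ids"].copy()  # causal LM objective: predict the same sequence
          return ids

      data = data.map(tokenize, remove_columns=data.column_names)

      trainer = Trainer(
          model=model,
          args=TrainingArguments(output_dir="finetuned", per_device_train_batch_size=1,
                                 gradient_accumulation_steps=8, num_train_epochs=1),
          train_dataset=data,
      )
      trainer.train()
      model.save_pretrained("finetuned-adapter")  # saves just the small adapter weights
      ```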

      21 votes
    6. Question - how would you best explain how an LLM functions to someone who has never taken a statistics class?

      My understanding of how large language models work is rooted in my knowledge of statistics. However, a significant number of people have never been to college, and statistics is a required course only for some degree programs.

      How should ChatGPT etc. be explained to the public at large to avoid the worst problems that are emerging from widespread use?
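      One statistics-free framing that sometimes helps: an LLM just keeps guessing a plausible next word and appending it. The toy sketch below (my own illustrative assumption, not how real models are built) does that with simple word-following counts; real models learn those tendencies over word fragments with a neural network, but the generate-one-word-at-a-time loop is the same.

      ```python
      # Toy illustration: count which word tends to follow which, then keep
      # picking a likely next word. This is NOT a real LLM, just the core loop.
      import random
      from collections import defaultdict

      corpus = "the cat sat on the mat and the cat slept on the mat".split()

      # Tally what follows each word.
      followers = defaultdict(list)
      for current_word, next_word in zip(corpus, corpus[1:]):
          followers[current_word].append(next_word)

      # Generate text by repeatedly choosing a word that has followed the current one before.
      word = "the"
      output = [word]
      for _ in range(8):
          word = random.choice(followers[word])  # more frequent followers get picked more often
          output.append(word)
      print(" ".join(output))
      ```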

      37 votes
    7. Is pop culture a form of "model collapse?"

      Disclaimer: I do not like LLMs. I am not going to fight you on if you say LLMs are shit.

      One of the things I find interesting about conversations on LLMs is when I have a critique about them and someone says, "Well, it's no different than people." People are only as good as their training data, people misremember / misspeak / make mistakes all the time, people will listen to you and affirm you as you think terrible things. My thought is that not being reliably consistent is a verifiable issue for automation. Still, I think it's excellent food for thought.

      I was looking for new music venues the other day. I happened upon several, and as I looked at their menus and layouts, it occurred to me that I had eaten there before. Not there, but in my city, and in others. The Stylish-Expensive-Small-Plates-Record-Bar was an international phenomenon. And more than that, I couldn't shake the feeling that it was a perversion of the original, alluring concept: to be in a somewhat secretive record bar in Tokyo where you'll be glared into the ground if you speak over the music.

      It's not a bad idea. And what's wrong with evoking a good idea, especially if the similarity is unintentional? Isn't it helpful to be able to signal to people that you're like-that-thing instead of having to explain how you're different? Still, the idea of going just made me assume it'd be not simply like something I had experienced before, but played out and "fake." We're not in Tokyo, and people do talk over the music. And even if they didn't, there's silverware and such clanging. It makes me wonder if this permutation is a lossy estimation of the original concept, just chewed up, spat out, slurped, regurgitated, and expensively funded.

      other forms of conceptual perversion:

      • Matters of Body Image - is it a sort of collapse when we go from wanting 'conventional beauty' to frankensteining features onto ourselves? Think fox eye surgeries, buccal fat removal, etc. Rather than wanting to be conventionally attractive, we aim for the related concept of looking like people who are famous.
      • (still thinking)
      15 votes
    8. LLMs and privacy

      Hello to everyone who's reading this post :)

      LLMs are increasingly useful these days (after careful review of their generated answers, of course), but I'm concerned about sharing my data, especially very personal questions and my thought process, with these large tech giants, who seem rather sketchy in terms of their privacy policies.

      What are some ways I can keep my data private but still harness this amazing LLM technology? Also, what are some legitimate and active forums for discussion on this topic? I have looked at Reddit but haven't found it genuinely useful or trustworthy so far.

      I am excited to hear your thoughts on this!
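
      One approach that comes up often for the privacy question above is running a model locally, so prompts never leave your machine. The sketch below assumes Ollama is installed and a model has already been pulled (e.g. via "ollama pull llama3"); the model name and prompt are illustrative, not recommendations.

      ```python
      # Minimal sketch: query a locally hosted model through Ollama's default
      # HTTP endpoint on localhost, so no prompt text is sent to a cloud service.
      import requests

      def ask_local_model(prompt: str) -> str:
          resp = requests.post(
              "http://localhost:11434/api/generate",  # Ollama's default local API
              json={"model": "llama3", "prompt": prompt, "stream": False},
              timeout=300,
          )
          resp.raise_for_status()
          return resp.json()["response"]

      print(ask_local_model("What questions should I ask before sharing personal data with a chatbot?"))
      ```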

      33 votes
    9. Which translation tools are LLM free? Will they remain LLM free?

      Looking at the submission rules for Clarkesworld Magazine, I found the following:

      Statement on the Use of “AI” writing tools such as ChatGPT

      We will not consider any submissions translated, written, developed, or assisted by these tools. Attempting to submit these works may result in being banned from submitting works in the future.

      EDIT: I assume that Clarkesworld means a popular, non-technical understanding of AI, i.e. post-ChatGPT LLMs specifically, and not a broader definition of AI that is more academic or pertinent to the computer science field.

      I imagine that other magazines and websites have similar rules. As someone who does not write directly in English, I find that concerning. I have never translated without assistance in my life. In the past I used both Google Translate and Google Translator Toolkit (the latter of which no longer exists).

      Of course, no machine translation is perfect; it was only a first pass that I would change, adapt, and fix extensively and intensely. In the past I have used the built-in translation feature in Google Docs. However, now that Gemini is integrated into Google Docs, I suspect that it uses AI for translation. So I asked Gemini, and it said that it does. I am not sure Gemini is correct, but if it doesn't use AI now, it probably will in the future.

      That poses a problem for me, since, in the event that I wish to submit a story to English-speaking magazines or websites, I will have to find a tool that is guaranteed to be dumb. I am sure they exist, but for how long? Will I be forced to translate my stories like a caveman? Is anyone concerned with keeping non-AI translation tools available, relevant, and updated? How can I even be sure that a translation tool does not use AI?

      28 votes