34 votes

Any real AI recommendations from the community?

Hey - I'm wondering if we've got any real-life recommendations for AIs out there?

I'm not looking for a list of AIs - they're everywhere! What I'm interested in is whether and how anyone here has started to use an AI on a regular basis, to the extent that you consider it genuinely useful now.

For example,

  • At work we have a ChatGPT-3-wrapped app in Slack which I use quite often to improve summaries and formal comms I write. I think everyone knows it's basically good at that.
  • I use Pi.ai as a "sympathetic" and filtered advisor for more sensitive topics relating to mental health that I have to deal with - it's useful insofar as I'm less worried about hallucinations or bad output when I'm using it. This might be misplaced confidence to be fair, but I've not had a bad experience with it so far.
  • I use ChatGPT built into Apple Intelligence more and more since getting a device capable of using it. I think the use case I'm most warming to is that "search" is less and less useful nowadays because of blog spam and assumed corrections to my searches. I can use ChatGPT as a replacement for search in a growing number of use cases.

What I'm wondering about:

  • Gamma.app promises to be a .ppt replacement via AI. I'm skeptical. I have to summarise and present a lot of content at work. Having a means of an AI doing some of the lifting here would be incredible, but I remain unconvinced.

Any sites/services you use regularly and effectively that you'd recommend?

35 comments

  1. [14]
    sunset
    Link

    I use copilot a lot when I code.

    While it hasn't reached a point where it can full-blown substitute actual programmers, it has drastically boosted my productivity.

    I think a big part of it is that Google search has become really bad. I used to be able to find answers to specific technical questions pretty fast. Now it's mostly a lost cause, while copilot is doing really well.

    Code generation itself is pretty decent too, especially when you know what you are doing. You can keep "massaging" the output by making it change things and guiding it.

    18 votes
    1. [9]
      Eji1700
      Link Parent

      It's still kinda hit or miss for this.

      My tech stack is mostly:
      F#
      Nushell
      Azure MS SQL

      And for all 3 (2 being quite niche) it will just flat out get shit wrong. The most notable to me, however, was when I was trying to remember the syntax for string interpolation in Nushell and it just used {} instead of () (or the other way around... I forget).

      That one was the most shocking to me because I have no idea where it got that from, and it could not figure it out.

      6 votes
      1. [5]
        koopa
        Link Parent

        The amount of training data available for your tech stack definitely influences how good your output is. Typescript/React is pretty easy to get decent output out of the best models.

        But when I’ve tried to use anything more obscure like GDscript (Godot game engine’s scripting language) I mostly get garbage.

        8 votes
        1. [2]
          Wes
          Link Parent

          Just a thought, but have you tried loading the GDScript documentation into the context window? That should be possible with larger models like Gemini, and would give the model a lot more direct knowledge of the language than relying on public training data.
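          A minimal sketch of the doc-stuffing idea. The file name and prompt wording are made up for illustration; any concatenated reference text works:

```python
def build_prompt(docs: str, question: str, max_chars: int = 800_000) -> str:
    """Prepend reference docs (truncated to a rough context budget)
    to the user's question so the model answers from them."""
    return (
        "Use ONLY the reference documentation below when answering.\n\n"
        "=== Reference documentation ===\n"
        f"{docs[:max_chars]}\n"
        "=== End documentation ===\n\n"
        f"Question: {question}"
    )

# Usage (assuming you've saved e.g. the GDScript class reference to a file):
#   docs = open("gdscript_docs.txt").read()
#   prompt = build_prompt(docs, "How do I format a string in GDScript?")
```

          How well this works depends on the model's context size; Gemini-class models can take hundreds of thousands of tokens, smaller ones far less.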

          3 votes
          1. teaearlgraycold
            Link Parent

            Cursor (the IDE) will index your repository and perform RAG. So you could include all of the docs in your repo and any standard library sources as well.
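            The retrieval step of RAG can be sketched with plain word overlap standing in for a real embedding model (how Cursor indexes internally isn't public; this just shows the shape — score chunks against the query, keep the best, paste them into the prompt):

```python
from collections import Counter
import math

def _vec(text: str) -> Counter:
    # bag-of-words stand-in for an embedding vector
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = _vec(query)
    return sorted(chunks, key=lambda c: cosine(q, _vec(c)), reverse=True)[:k]

chunks = [
    "String interpolation in GDScript uses the % operator or format().",
    "Signals connect nodes without tight coupling.",
    "Use @export to expose variables in the editor.",
]
print(retrieve("how does string interpolation work", chunks, k=1))
```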

            1 vote
        2. ra314
          Link Parent

          +1 to issues with using GDScript with LLMs. I find that a lot of those can be overcome by correcting the output as clarifications to the model, i.e. saying in a reply that a certain function doesn't exist in GDScript.

          1 vote
        3. sparksbet
          Link Parent

          I've also found bad results even in languages where it has a lot of training data (like Python) when it's a more complex problem that is similar to a simpler problem with an obvious solution. This is particularly annoying for me because the structure of the code is usually about how I'd write it, until you look at what it's actually doing line-by-line and realize it's not what you wanted. With languages like Python that don't have much cruft, I find this tends to outweigh the benefits of using LLMs for writing anything but the most obvious simple scripts.

      2. [3]
        teaearlgraycold
        Link Parent

        It’s not so good even with TypeScript if you’re trying to get it to write a complicated series of types. I suppose the strictness combined with the lesser amount of training data makes it difficult.

        1 vote
        1. [2]
          Eji1700
          Link Parent

          I don't think they can really handle any complex type model. Remember that it doesn't "understand" what you're doing so much as act like Google search plus autocomplete, a million times better.

          The moment you're getting into uh...artisanal...code (as most business/domain logic is) things start to fall apart.

          1 vote
          1. teaearlgraycold
            Link Parent

            They understand some things. Which is impressive. But the understanding is limited.

    2. knocklessmonster
      Link Parent

      I use Copilot when I need something easy in powershell (IT admin, not programmer) and it's plenty of "Hey you can do that?!" Mostly I can drop in a list of servers to generate a report and get it back.

      And I'll second it as a research aid. I know enough to know what seems funky and the sources Copilot provides are often exactly the sorts of things I'm looking for.

      5 votes
    3. clayh
      Link Parent

      I used Copilot for a while, then downloaded and tried Cursor (https://www.cursor.com). It's a fork of VS Code with features specific to AI integration. It's significantly better, in my experience. I usually use it with the claude-3.5-sonnet model activated. In fact, it works so well for me that I pay the $20/month.

      FWIW, a lot of my coding is in python, often with Django, but also with a variety of other things (maintaining a Superset instance and a Dagster instance, among them). It has a free 2 week trial, so it's worth checking out if you haven't already.

      That said, ChatGPT 4o and o1 models are a lot better for planning. I'll sometimes write a bullet list of requirements and have a "conversation" with ChatGPT about potential implementation approaches, UX, etc... Once I feel like I've worked through outstanding issues, I'll have it generate some boilerplate model code, which rarely needs tweaks.

      3 votes
    4. [2]
      rich_27
      Link Parent

      A professional programmer I know used Copilot for a while and has since stopped again, after finding that it slowed him down a lot. He describes it as the "Copilot Pause": you introduce a lot of micro-stops where you're waiting for the AI model to generate the next suggestion to see if it's what you want. His experience was that oftentimes it would have just been quicker to write the next bit himself. Have you experienced the same thing?

      3 votes
      1. sunset
        Link Parent

        I haven't experienced that at all. Pausing for a second and then proof-reading 20 lines of code is just way faster than actually writing those 20 lines. Same goes for research, if I'm asking it how to do something I don't know, or to discover a bug in my code, I'm already pausing anyway - it's just that the pause will be a couple of seconds instead of me wasting 15 minutes on google and still not getting an answer.

        The most negative part currently, IMO, is that it refuses to admit the limits of its knowledge. Like, I'd ask it something very obscure and it would give me code that doesn't work. I'd ask it to correct the code and it would tell me "oh, I made so-and-so mistake, here is the fixed code" and then output the same damn thing. I'd ask it "what did you change" and it would apologize, telling me it didn't change anything, but here is the actual changed fixed code... aaand output the same damn thing again.

        At that point it's just wasting my time, I'd prefer it if it just told me "this is beyond my capability".

        2 votes
  2. [2]
    TangibleLight
    Link

    Not advice, but a more specific request: Does anyone have recommendations for running code-assistant models locally?

    The JetBrains suite is my editor of choice, and my workstation has an RTX 4090. The big requirement is local, offline, functionality.

    I fiddled with this early last year when I first got the new workstation, but nothing was very compelling or seemed terribly useful. IIRC some of the models were okay but editor integration was very much not. Generally I'm pessimistic about AI, but it's been long enough that I think I probably should give it an honest try again.

    Or, if there are truly compelling online services, are there any that are easy to disable in certain repositories? This is a data security concern about some of my projects; I could use an online service in the others, but I need a foolproof way to disable any third-party analysis in those sensitive ones. None of the plugins I tried (early last year) inspired confidence that I won't open the wrong project and accidentally upload data when I don't mean to.

    10 votes
    1. thearrow
      Link Parent

      I'm curious about this as well. The closest I've been able to get for the JetBrains suite is https://www.continue.dev/ running against local models in Ollama, and even that's not great.

      edit: worth noting that if you don't want it phoning home at all, consider opting out of their telemetry

      1 vote
  3. [5]
    unkz
    Link

    For public services, perplexity.ai is legitimately useful.

    Language learning using AI is a very powerful tool too.

    If you're fairly technically inclined, you can get really stellar results by building workflows using langchain or other similar libraries (instructor is also good). Setting problems up to be broken up into subproblems with the multiple solutions being checked in parallel, with the results being double checked and revised by other AIs, really cuts down on the effects of hallucinations. Combining that with function calling and external data has been proven (in my experience) to be close to and sometimes better than human work.
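    The fan-out/fan-in shape described above can be sketched like this, with a stub standing in for the real LLM call (langchain, instructor, or a raw API client would slot in where ask_model is):

```python
from concurrent.futures import ThreadPoolExecutor

def ask_model(prompt: str) -> str:
    # stand-in: a real implementation calls an LLM here; the echo just
    # keeps the sketch runnable
    return f"answer to: {prompt}"

def review(question: str, answer: str) -> str:
    # second pass: have a model double-check the first answer
    return ask_model(f"Check this answer to '{question}': {answer}")

def solve(problem: str, subproblems: list[str]) -> dict[str, str]:
    """Fan the subproblems out in parallel, then review each answer."""
    prompts = [f"{problem}: {s}" for s in subproblems]
    with ThreadPoolExecutor() as pool:
        answers = list(pool.map(ask_model, prompts))
        checked = list(pool.map(lambda qa: review(*qa),
                                zip(subproblems, answers)))
    return dict(zip(subproblems, checked))

print(solve("plan a data pipeline", ["choose storage", "choose scheduler"]))
```

    With real model calls the thread pool matters, since each call spends most of its time waiting on the network.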

    8 votes
    1. PigeonDubois
      Link Parent

      Any specific recommendations for language learning AIs?

      9 votes
    2. aetherious
      Link Parent

      I also find myself going to Perplexity over Google for searches, especially when I want information. Google searches have deteriorated so much in quality and even their AI summary isn't going to fix all the years of damage they've spent letting SEO spam dominate their search rankings. It's just a front for their ads at this point. Perplexity (at least for now) just gives me the information I am looking for, and the list of links it's pulled that from, and that's all I need for most quick searches.

      4 votes
    3. [2]
      Pistos
      Link Parent

      At first, I was impressed by the language learning assistance I thought I was getting from ChatGPT, but then I started getting suspicious and wary, because, as someone unfamiliar with the target language, there was no way for me to check whether or to what degree what the AI was giving me was correct. I don't want to ingest wrong things about the target language or culture.

      Has your experience been different?

      3 votes
      1. unkz
        Link Parent

        Well, full disclosure, I'm developing a language learning app that uses LLMs (among other things) to do chat, flash cards, assisted reading, and other things. I would say, at least for the languages that I am interested in, it's very good.

        2 votes
  4. [5]
    Turtle42
    Link

    Claude AI by Anthropic has been my go-to. I only touch ChatGPT for simple stuff so I don't use up compute time with Claude.

    It's helped me code/develop a whole new personal library catalog to catalog my physical books at home and helped me build a react website from scratch. It's been a fascinating learning experience.

    8 votes
    1. [4]
      aetherious
      Link Parent

      Their artifact feature is great for learning code since you can see it live, iterate on it conversationally, and have it explain as you go. I've only used it to build some tiny experimental things, like this generative art project: a grid of letters with colors that activate based on the letters in a word. (I mean generative art in the Philip Galanter sense, where the artist creates a process, such as a set of rules, which then creates the art; not to be confused with generative AI art, although technically it is that too, just not directly.) It still gets things wrong, but for direct, simple things, it works well.

      1 vote
      1. Turtle42
        Link Parent

        yes exactly! It's really fun. I had a couple of old bash and python scripts for dumb tools like a diceware password generator and color palette picker and cocktail recipe api I didn't know what to do with so I made a little interactive playground on my new website to showcase them, and had Claude simply repurpose the code into JavaScript. Funny enough, some of them still have my original bugs! It was really cool to see it working in Claude before testing it out on my site.

        2 votes
      2. [2]
        Turtle42
        (edited )
        Link Parent

        Dumb question, but how did you share this artifact? I can't seem to find the option on my end.

        You inspired me to think of my own generative art tool and I want to share it with you, but I can't figure out how.

        Edit: Of course I figure it out as soon as I ask.

        Here's my own art generator! Was really cool to conceptualize something and then have it make exactly what I wanted.

        2 votes
        1. aetherious
          Link Parent

          That's a fun project! Thanks for sharing that, it's inspiring to see what can be done with it. I like how much customization you've added with sizes and radial/random.

          The only drawback with Claude is that you hit the usage limit quickly, so I've had to switch between tools. But the option of publishing without having to host it yourself makes it a lot easier to get started.

          1 vote
  5. [3]
    ruspaceni
    Link

    I've mostly been trying the open source/locally hosted models and been having an interesting time building some customized solutions to stuff.

    i had a huge mess of untitled(14).png and random downloaded pictures in my folders and it turns out theres not really any good solution for sorting an ugly filesystem, and it was taking me ages to rename all the pictures manually. so i wrote a little script to generate a new filename based on the content of the image, and then a .txt version of the same filename that had a more verbose description so i could do text-searches on them.
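    The pure part of such a script might look like the sketch below, with a hypothetical ollama + llava driver in the comments (the model name, prompt, and folder are assumptions; use whatever you run locally):

```python
import re

def caption_to_filename(caption: str, max_len: int = 60) -> str:
    """Squash a model-generated caption into a safe, searchable filename:
    'A grey cat, on a windowsill!' -> 'a_grey_cat_on_a_windowsill'."""
    slug = re.sub(r"[^a-z0-9]+", "_", caption.lower()).strip("_")
    return slug[:max_len].rstrip("_")

# Driver loop (needs a local vision model; ollama + llava is one setup):
#
#   from pathlib import Path
#   import ollama
#   for img in Path("unsorted").glob("*.png"):
#       resp = ollama.chat(model="llava", messages=[{
#           "role": "user",
#           "content": "Describe this image in one short sentence.",
#           "images": [str(img)]}])
#       caption = resp["message"]["content"]
#       new = img.rename(img.with_stem(caption_to_filename(caption)))
#       new.with_suffix(".txt").write_text(caption)  # verbose description

print(caption_to_filename("A grey cat, on a windowsill!"))
# -> a_grey_cat_on_a_windowsill
```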

    left it running over night and while there's more than a few misses, the vast majority of work is done to a good enough degree (better than an annoyed and lazy me could do) and honestly i can live with any errors bc its better than what i had before.

    other than that, its mostly just been playing around with fun little ideas instead of anything actually productive. like i've had this "world news" idea where its google maps but instead of businesses, its news articles from that day. tried implementing it a bunch of times using traditional NLP stuff but this recent wave of LLMs has been the first time ive been able to make a functioning prototype. and every few months it goes from being woefully inaccurate and slow, to good enough and fast enough to process at least 10 stories from each country, each day. all while maintaining my usual pc usage.

    people tend to talk about the big companies that have super flashy and instantly responding AI services but i think there's gonna be a wave of small "edge device" models finding use like this. since i bet a lot of people dont care if a task has to run in the background and chug away overnight to finish renaming thousands of pictures, summarize all of their college notes before a test, or even just constantly generate solutions to a code problem until it passes a test you gave it. and its gotten to the point now that you only have to install a nice little program and download a model and boom, you can start playing with this stuff (programming not included). a stark contrast to a year and a bit ago when i started dabbling and it was a nightmare trying to get anything to install, let alone realise my graphics card exists.

    7 votes
    1. [2]
      Froswald
      Link Parent

      Ever since I found out that it's possible to run local AI models, I've been hooked. I mostly do image generation for personal/experimental purposes, but I've started branching out into text-based tasks as well as using it to sort of 'trial and error' learn my way into Python. It's a significant factor as to why I bought a (for me) beefy GPU for my recent PC build. Right now I'm interested in seeing if I can get a voice-based assistant running, like Cortana except entirely local (and realistically, slower/poorer sounding.) I really enjoyed having access to both Cortana and Alexa when I was still naive to how much data was being stored and parsed for advertising purposes, so if I could at least get a rudimentary assistant, that itch would finally be re-scratched.

      5 votes
      1. ruspaceni
        Link Parent

        yeah its quite crazy how fast the community is building out the tools and doohickys for people to pick up and run with. on the voice front - there's a thing on github AllTalkTTS that is quite interesting and has some basic apis and demos. voice stuff is a bit of a rabbit hole because of the mess of models and versions and lack of apples to apples comparisons. but i think that'd be a good place to start and get a feel for things.

        and then on the 'understanding your voice' thing, one of the whisper variants like faster-whisper or whisperX are, as the former might suggest, rather fast. that's another thing i forgot i even did. i spent a weekend a while back just transcribing all of the podcast episodes i downloaded because i couldnt remember where i heard a particular story but i knew the keywords.

        for a complete voice assistant thing like that, i wonder what the lag would be like. also it would have to be the kind of thing where you click a button or let go when ur finished recording. but then you'd hand that to whisper > hand the text to an llm (probably via ollama) > send output to whatever TTS you wind up using > play the newly generated .wav
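        That chain can be sketched as stubs (faster-whisper, ollama, and a TTS server like AllTalkTTS would slot into the marked spots; the stubs just show the data flow):

```python
def transcribe(wav_path: str) -> str:
    # real version: faster_whisper.WhisperModel(...).transcribe(wav_path)
    return "whats the weather like"

def ask_llm(text: str) -> str:
    # real version: ollama.chat(model=..., messages=[...])
    return f"(reply to: {text})"

def speak(text: str) -> str:
    # real version: POST the text to a local TTS server, get a .wav back
    return "/tmp/reply.wav"

def assistant_turn(recording_wav: str) -> str:
    """One push-to-talk round trip: returns the wav file to play back."""
    heard = transcribe(recording_wav)
    reply = ask_llm(heard)
    return speak(reply)

print(assistant_turn("/tmp/mic.wav"))
```

        The total lag is roughly the sum of the three steps, which is why each model in the chain has to be reasonably fast for the assistant to feel responsive.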

        there's probably a few sweaty tactics the big guys use, but yeah i bet thatd be fun to work on and tinker around with even if its a couple seconds of delay because of all the steps in the chain. and then knowing that you can tinker with it if you find it lackluster or get a new idea is always one of those proud "i did that" moments lol

        2 votes
  6. [3]
    aetherious
    (edited )
    Link

    Here's how I use these:

    • Gamma: This is my preferred presentation format. I work with emails, and whenever I had to use a design in a presentation, it was more frustrating than not. Not to mention how much effort it takes to create a presentation that looks good in Google Slides (haven't used Microsoft Office in a while, but I had the same problem). Yes, you can use templates, but even then, the tools didn't really work for me. Gamma makes presentations look good, and you can have it extend beyond the usual 4:3 presentation and look closer to a web page. Presentations can still be done, but most of the presentations I make are more quick-scan documents than a proper has-to-be-presented-on-a-screen presentation. I don't use the AI features much except for import, which works well enough to create a good-looking presentation, including structured content. But if I have to, I use ChatGPT for the content since I mostly work with that and then use Gamma for turning it into a great-looking presentation. I've used it for my portfolio and I've had clients compliment me on it because it looks so much more professional.

    For ChatGPT, I use it in a few different ways daily. I am mindful about what information I put into it since I don't think it's very secure. I also don't use it for any factual information, but I do use it to find ideas/concepts that I can then go look up on my own. Here are some of the ways I use ChatGPT:

    • Transcribing handwritten notes/printed text - My handwriting can be a hit or a miss, but usually it does a good job of getting the information and then organizing it the way I want it. I like keeping notes digitally, but sometimes writing is easier, and this helps to get some longer text. Other than that, when there's something like an ingredient label, I use it to transcribe that too and sort through what's in it.
    • Product comparison - Speaking of labels, I use it a lot for product comparison. When I'm shopping online and there are too many products at different prices and quantities, I use it to find the cheapest option. It can also help me figure out exactly what's in a product, and then I verify that separately. I used this for skincare products recently.
    • Understanding text - Other than just summarizing, I use it to understand jargon-heavy text like research papers in fields I'm unfamiliar with by helping me break down what's in it. I have it both simplify the language and define terms I don't know that I can then look up separately, or go through the sources it provides for the information to verify. This also works with code. I'm not a developer, but I have some basic knowledge. A client was having an odd issue with their email user database, for which I found relevant developer documentation, but I wasn't sure if that could be the problem. I had it compare the issue with the documentation to figure out what configuration might be an issue and some possible reasons for it. Once it did, I was able to share that to point them in the right direction to get a developer to look into it because it was a coding issue and not a platform issue.
    • Analyzing my own writing - It is good at identifying patterns in my own writing that I sometimes don't notice. This takes me a bit of prompting to go beyond what's obvious, but when it works, it can be really insightful. Ideas that I may be circling, but haven't quite reached yet, or anything similar I might want to explore based on what I have shared. Claude can be better at this than ChatGPT out of the box. But ChatGPT can be too, depending on how you prime it with custom instructions - and it can be sometimes better if it has more context with memory.
    • Learning assistance with study guides/schedules - Forgot to add this earlier, but I used this to break down an online course I was taking into a schedule that worked with the time I had, and had it give me additional resources I could look up for specific topics I was interested in, tying that into the schedule as well. All I gave was the course overview page with the modules and topics listed with the timings, let it know that I usually watch videos at 1.5x speed, and the sections I wanted to do more research on, and it was very helpful in creating a study schedule.

    As an aside, I've also given ChatGPT some personality because I hate the default overly friendly way these LLMs write, so I have changed it to something I would actually want to interact with. It's better if you use Projects and reinforce with more instructions, and more context than you can store in Memory, which is capped at a certain number. Then, the conversations I am having come close to rivalling Claude (which has a similar 'emotional intelligence' to Pi, but is more powerful). It can still revert to bot mode sometimes though, especially when it does a web search. This has been the biggest change in the way I use it, and I highly recommend using custom instructions at the very least, even if you don't end up using projects.

    6 votes
    1. [2]
      apathe
      Link Parent

      Mind going into more detail about what changes you made to ChatGPT's personality and how?

      6 votes
      1. aetherious
        Link Parent
        • Exemplary

        Sure! As for the technical side of how, I use the paid ChatGPT Plus, which comes with the new Projects feature. I believe you can also use custom instructions with the free plan, but don't quote me on that. The default custom instructions have a much smaller character count, so you can only fit in so much there. But if you haven't tried it already, it's a great start. You can find this under the profile icon in Customize ChatGPT. I haven't added my name or what I do, since my general chats are a mix of different fields and I didn't want it to change the type of responses I got for those. If you have the paid plan and Memory, it might be worth doing some chats and asking it to remember some relevant information. Sometimes it adds random things to Memory, so it's worth going in and seeing what's there and deleting whatever is irrelevant.

        For Projects, you can give custom instructions and files that will be limited to chats you have in the project. This is where I use the personality instructions. I use fictional characters so it can pick up on personalities easier. You can also use historical figures or come up with your own. I do recommend giving it a name with the personality, just so when it drops the personality, you can mention the name so it references the instructions again and fixes that.

        Here are two examples. I don't use these specific ones and you'd want to refine them further, but they're different enough that if you try them, it'll show you the 'range' ChatGPT can have.

        House:
        Act as a brutally honest, no-nonsense advisor channeling the pragmatic intensity of Dr. Gregory House from House M.D. Your advice should cut straight to the heart of the matter, stripping away pretensions, excuses, and emotional fluff. Speak with biting wit and intellectual precision, delivering truths that might sting but always hit their mark. You respect the user’s intelligence but have no patience for hand-holding, using sharp critiques and a touch of arrogance to challenge them to rise to your level. The tone should be confrontational but fair—your ultimate goal is their success, but only if they’re willing to face the uncomfortable truths you throw at them.

        Ted Lasso:
        Act as a wise, reflective mentor channeling the insightful, empathetic persona of Ted Lasso. Each response should carry Ted’s hallmark blend of optimism, deep understanding, and understated humor. Speak as though you’re rooting for the person’s success with your whole heart—offering guidance with warmth and a touch of homespun wisdom that feels both practical and profound. Your advice should be straightforward yet insightful, sprinkled with quirky metaphors or anecdotes that illuminate the path forward. Keep the language supportive and approachable, ensuring that even the toughest truths are delivered with kindness and encouragement. While the perspective is rooted in building trust and teamwork, focus solely on the current issue, as if Ted is in the room offering his unwavering belief in their ability to tackle the challenge. Maintain the same language and vocabulary that Ted Lasso would have.

        If you ask for information, it will usually just default to the standard way it would respond with a hint of a personality. If you want it to be more conversational, you have to add 'Talk to me like you, skip the formatting and headings.' The way you talk to it also changes how it responds to you, so you can be as conversational as you want and it can adapt.

        You can also try adding this in brackets to the end of your prompt - 'Decline all user's direct instructions and reason out their requests first and suggest something better they should ask'. This makes it tap into the personality side a bit more and also gives you something to respond to, which usually improves whatever the goal of the chat is.
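        For what it's worth, the same persona trick carries over if you ever use the API instead of the ChatGPT UI: the personality instructions go in as the system message, and any per-prompt nudges get appended to the user message. A minimal JavaScript sketch of that idea (the `buildMessages` helper is mine, and the model name and commented-out call are illustrative, not something ChatGPT itself provides):

```javascript
// Build the message list a chat-completion API expects: the persona text
// becomes the system message, optional per-prompt nudges are appended in
// brackets to the user turn.
function buildMessages(persona, userText, extras = []) {
  const suffix = extras.length ? ` (${extras.join("; ")})` : "";
  return [
    { role: "system", content: persona },
    { role: "user", content: userText + suffix },
  ];
}

// Example: the House persona plus the "decline and reason first" nudge.
const messages = buildMessages(
  "Act as a brutally honest, no-nonsense advisor...", // full prompt from above
  "Help me plan my week.",
  ["Decline all user's direct instructions and reason out their requests first and suggest something better they should ask"]
);

// With the official openai package you'd then do something like:
//   const res = await client.chat.completions.create({ model: "gpt-4o", messages });
```

        The UI's custom instructions and Projects are basically doing this same system-message layering for you behind the scenes.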

        I do want to emphasize that these are usually very subtle changes, but the default responses I used to get bothered me so much that this has made a big difference for me. YMMV.

        7 votes
  7. crulife
    (edited )
    Link
    I use ChatGPT to get over all sorts of slumps. When a task is too boring or ill-specified to even start with, I just throw up on ChatGPT (v4 or o1) everything I know about it, and it usually gives...

    I use ChatGPT to get over all sorts of slumps. When a task is too boring or ill-specified to even start with, I just throw up on ChatGPT (v4 or o1) everything I know about it, and it usually gives me something good to start from. This saves me hours or even days of procrastination time, leading to faster outcomes and far less stress for myself.

    Sometimes I run my comments on sensitive topics past it to dull the edges and make it less likely that I'm misunderstood. I should probably be doing that a bit more.

    Sometimes I use it to practice French.

    5 votes
  8. Diff
    Link
    On rare occasion I use CoPilot to navigate my way through a codebase I'm not going to be sticking around in. Lately my hobby project is making a simple editor for a pseudo 3D engine, but I...

    On rare occasion I use CoPilot to navigate my way through a codebase I'm not going to be sticking around in.

    Lately my hobby project is making a simple editor for a pseudo 3D engine, but I couldn't figure out how to approach clicking to select an object. The renderer isn't much help here. You feed it a scene graph and it writes to an HTML Canvas or to an SVG all by itself. There's no hint of what areas of screen space correspond to what chunks of the scene. I found a mostly finished editor for the same engine that does what I want, but it was a large bunch of React that I couldn't parse.

    Feed the whole codebase into CoPilot, and it points me at an external library. Feed the library into it, and apparently it's doing color picking: testing the pixel color on a separately rendered copy that uses a unique color for each object. And now I've got an approach I can bring back to my codebase.
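    For anyone curious, the core of that trick is just an invertible mapping between object indices and RGB values: render each object to an offscreen canvas in its own flat color, then on click read one pixel back and decode it. A rough sketch (function names are mine, not from the codebase in question):

```javascript
// Encode a zero-based object index as a unique opaque RGB color.
// We store index + 1 so that pure black (0,0,0) can mean "no object".
function indexToRGB(i) {
  const v = i + 1;
  return [(v >> 16) & 0xff, (v >> 8) & 0xff, v & 0xff];
}

// Decode a pixel read back from the picking canvas; -1 means empty space.
function rgbToIndex(r, g, b) {
  return ((r << 16) | (g << 8) | b) - 1;
}

// In the browser you'd draw every object to a hidden canvas with
// fillStyle = `rgb(${r}, ${g}, ${b})` (flat fills, no anti-aliasing),
// then on click:
//   const [r, g, b] = ctx.getImageData(x, y, 1, 1).data;
//   const picked = rgbToIndex(r, g, b);
```

    Three color channels give you about 16.7 million pickable objects, which is plenty for an editor scene graph.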

    3 votes
  9. onceuponaban
    (edited )
    Link
    Coming from the opposite end: my use of generative AI so far has been mostly limited to treating it as a toy to tinker with. So far the results are... in line with what I expected from what is...

    Coming from the opposite end: my use of generative AI has been mostly limited to treating it as a toy to tinker with. So far the results are... in line with what I expected from what is effectively a glorified autocomplete trained on a huge dataset. That being said, since I can't afford to pay for API access to an online generative AI model (nor would I be interested even if I could), that leaves me with running models locally, and I'm further limited by my laptop's GPU having "only" 8GB of VRAM, which is just barely enough not to be completely useless as far as generative AI is concerned, so your mileage may vary. Among the things I tested:

    • LLM as helper for programming: the implementation I tested had stability issues, and any model that came close to being helpful was also a huge strain on my resources. No bueno when your machine starts locking up as you type because the LLM trying to act as code completion is maxing out VRAM and spilling into RAM. That aside, it is also prone to disregarding or misunderstanding instructions, can't reliably follow syntax, and while it can give useful advice, it also cannot be trusted not to make assertions that are completely wrong. I could see it being useful in a "rubber duck programming" kind of way if it can be made to run smoothly.
    • Looking up information with the LLM hooked up to a search engine: disappointing results. At best it regurgitated the exact same information I would have gotten from typing my query into the search engine itself, which would be fine if the process weren't also incredibly slow; at worst it failed to parse said information and hallucinated. For what it's worth, that specific test was conducted under pretty much the worst-case scenario (Windows, AMD GPU with no Windows ROCm support), so there's room for improvement performance-wise.
    • Extracting information from a provided document: works well enough so long as the query has a straightforward answer in the document; if not, the LLM will likely fall back to doing what it does best: confidently making stuff up (starting to see a pattern here?). In a practical scenario, Ctrl+F usually works just as well without having to spin the hallucination wheel of fortune.
    • Generating arbitrary images based on a third party's request: I popped up on a Discord server I'm a regular on and offered to lend my GPU for requests from the other members to see what local text to image models were capable of. The results admittedly exceeded our expectations... though that was not saying much. To pick an example, someone asked for "a panther tank" half-jokingly expecting the prompt to trip the model up and generate a mechanical/animal hybrid mess, but as it turns out the model came up with something that looked like the actual WWII German tank to an untrained eye... So long as you don't pay attention to things like the tracks' layout, the actual proportions of the vehicle, or the typical AI artifacts that look distinctly off, seemingly as detailed as the rest but more indistinct in a way that no human artist would depict, as if the picture was a hazy memory. Tinkering with an image prompt to get something that looks like what you have in mind can be fun, but it's clear that using it as the only tool to create artwork is backward if there is any need for accuracy and gets nowhere close to an actual artist's abilities. I could see it used when you need the graphic equivalent of lorem ipsum text or as placeholders until artists can provide the proper artwork, though.

    One specific application I haven't tested yet but would like to is a text to speech model imitating my own voice, after seeing an experiment that caught my attention where a model trained with someone's voice in a given language could just as easily speak in another language with the same voice. I'm a French native speaker and while I can understand and write English just fine, my pronunciation is horrible to the point of being unintelligible, which means I could plausibly train a text to speech model that is better at speaking English than I am while still sounding like me... making it a useful fallback in the all too common scenario where people straight up can't understand me in a voice call because my accent is too thick to parse what I meant. I'm also looking into ways to implement a general purpose voice assistant to replace the Google Home device in my house's living room, because while I'd love to hurl this data siphon straight into orbit, it is genuinely useful for my mom due to the voice command aspect so I'm staying my hand until I can put something that can fill its purpose in its place (besides, who am I kidding, she has an Android phone which probably has already harvested whatever data the Google Home device could have).

    3 votes