  • Showing only topics in ~tech with the tag "language models.large".
    1. Is it wrong to use AI to fact check and combat the spread of misinformation?

      I’ve been wondering about this lately.

      Recently, I made a post about Ukraine on another social media site, and someone jumped in with the usual "Ukraine isn't a democracy" right-wing talking point. I wrote out a long, thoughtful reply, only to get the predictable one-liner propaganda responses back. You probably know the type, just regurgitated stuff with no real engagement.

      After that, I didn’t really feel like spending my time and energy writing out detailed replies to every canned response. But I also didn’t want to just let it sit there and have people who might be reading the exchange assume there’s no pushback or correction.

      So instead, I tried leveraging AI to help me write a fact-checking reply. Not for the person I was arguing with, really, but more as an FYI for anyone else following along. I made sure it stayed factual and based in reality, avoided name-calling, and kept the tone above the usual mudslinging. And of course, I double-checked what it wrote to make sure it matched my understanding and wasn’t just spitting out garbage or hallucinations.

      It got me thinking: there's a lot of fear about AI being used to create and spread misinformation. But do you think there's also an opportunity to use it as a tool to counter misinformation, without burning ourselves out in the process?

      Curious how others see it.

      16 votes
    2. Have you altered the way you write to avoid being perceived as AI?

      I recently had an unpleasant experience. Something I wrote fully and without AI generation of any kind was perceived as, and accused of, having been produced by AI. Because I wanted to get everything right in that circumstance, I wrote in my "cold and precise" mode, which admittedly can sound robotic. However, my writing was pointed, perhaps even a little hostile, with a clear point of view -- not the kind of text AI generally produces.

      After the experience, I started to think of ways to write less like an AI -- which, paradoxically, means forcing my very organic self into adopting "human-like" language I don't necessarily care for. That made me think that AI is probably changing the way a lot of people write, perhaps in subtle ways. Have you noticed this happening with you or those around you?

      30 votes
    3. What trustworthy resources are you using for AI/LLMs/ML education?

      Every company is trying to shoehorn AI into every product, and many online materials give off a general snake-oil vibe, making it increasingly difficult to parse what's trustworthy. So far, my primary sources have been GitHub, Medium, and some YouTube.

      My goal is to better understand the underlying technology so that I can manipulate it better, train models, and use it most effectively. This goes beyond just experimenting with prompts and trying to overcome guardrails. It includes running models locally, such as with Ollama on my M1 Max, which I'm not opposed to.
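
      For the local route, Ollama exposes a small HTTP API on localhost once `ollama serve` is running; a minimal sketch of a one-shot completion (the model name and default host are placeholder assumptions, not recommendations):

```python
import json
import urllib.request

def build_generate_request(model: str, prompt: str) -> dict:
    # Payload shape for Ollama's /api/generate endpoint; stream=False
    # asks for one complete JSON reply instead of a token stream.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "llama3",
             host: str = "http://localhost:11434") -> str:
    # One-shot completion against a locally running Ollama server.
    data = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{host}/api/generate", data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

      Calling `generate("Explain attention in one sentence.")` requires the server to be up and the model pulled first (`ollama pull llama3`).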

      8 votes
    4. How are AI and LLMs used in your company (if at all)?

      I'm working on an AI chat portal for teams -- think Perplexity, but trained on a company's knowledge base (prosgpt dot com for the curious) -- and I wanted to talk to some people who are successfully using LLMs in their teams or jobs to improve productivity.

      Are you using free or paid LLMs? Which ones?

      What kind of tasks do you get an LLM to do for you?

      What is the workflow for accomplishing those tasks?

      Cheers,
      nmn

      12 votes
    5. Experiences using a local voice assistant with LLM with HomeAssistant?

      Has anyone out there hooked HomeAssistant up to a local LLM? I'm very tempted:

      • Alexa integrations fail often. HomeAssistant integrations tend to be rock solid.
      • Alexa is rule/pattern matching based. LLMs can understand natural language fairly well. The "magical incantations" required by Alexa are awkward.
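
      The natural-language upside can be sketched as asking a local LLM to emit a structured service call that HomeAssistant can then execute; the system prompt, entity names, and JSON shape here are illustrative assumptions (HomeAssistant's own Assist pipeline handles this properly):

```python
import json

# Illustrative system prompt: constrain the LLM to emit only a
# machine-parseable Home Assistant service call.
SYSTEM_PROMPT = (
    "You control a smart home. Reply with ONLY a JSON object like "
    '{"service": "light.turn_off", "entity_id": "light.kitchen"}.'
)

def parse_service_call(llm_reply: str) -> tuple[str, str]:
    # Turn the LLM's JSON reply into a (service, entity_id) pair
    # that could be handed to Home Assistant's service-call API.
    call = json.loads(llm_reply)
    return call["service"], call["entity_id"]
```

      So "turn off the kitchen lights" would, if the model cooperates, come back as `{"service": "light.turn_off", "entity_id": "light.kitchen"}` and parse into a pair ready to dispatch. Constraining output to JSON and validating it is the fragile part; that's exactly what the dedicated integrations linked below take care of.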

      Other than the software, the device side seems challenging. There are $50 fully-baked POP devices, but I'm less sure on the DIY front.

      Also, I desperately want my house to speak to me in the voice of the NCC-1701D computer. I've read enough now to know this should be achievable with a modicum of effort via OSS voice cloning tools or by training a new model (much the same thing, except "voice cloning" often seems to refer to doing it without training a whole new model).

      Thoughts? Experiences?

      I've seen several pages that have led me to conclude this is tenable:

      https://github.com/myshell-ai/OpenVoice

      https://github.com/domesticatedviking/TextyMcSpeechy

      https://github.com/mezbaul-h/june

      https://www.home-assistant.io/voice_control/voice_remote_local_assistant/

      https://heywillow.io/hardware/#esp32-s3-box-lite

      14 votes