Tracing the thoughts of a large language model
10 votes -
Review: Cræft, by Alexander Langlands
4 votes -
Please stop externalizing your costs directly into my face
120 votes -
FOSS infrastructure is under attack by AI companies
39 votes -
LLM crawlers continue to DDoS SourceHut
11 votes -
Mayo Clinic's secret weapon against AI hallucinations: Reverse RAG in action
8 votes -
Factorio Learning Environment – a benchmark that tests agents in long-term planning, program synthesis, and resource optimization
13 votes -
Is it wrong to use AI to fact check and combat the spread of misinformation?
I’ve been wondering about this lately.
Recently, I made a post about Ukraine on another social media site, and someone jumped in with the usual "Ukraine isn't a democracy" right-wing talking point. I wrote out a long, thoughtful reply, only to get the predictable one-liner propaganda responses back. You probably know the type, just regurgitated stuff with no real engagement.
After that, I didn’t really feel like spending my time and energy writing out detailed replies to every canned response. But I also didn’t want to just let it sit there and have people who might be reading the exchange assume there’s no pushback or correction.
So instead, I tried leveraging AI to help me write a fact-checking reply. Not for the person I was arguing with, really, but more as an FYI for anyone else following along. I made sure it stayed factual and based in reality, avoided name-calling, and kept the tone above the usual mudslinging. And of course, I double-checked what it wrote to make sure it matched my understanding and wasn’t just spitting out garbage or hallucinations.
It got me thinking: there's a lot of fear about AI being used to create and spread misinformation. But do you think there's also an opportunity to use it as a tool to counter misinformation, without burning ourselves out in the process?
Curious how others see it.
16 votes -
Students and their ChatGPT
27 votes -
Algorithmic Complacency: Algorithms are breaking how we think
82 votes -
Have you altered the way you write to avoid being perceived as AI?
I recently had an unpleasant experience. Something I wrote fully and without AI generation of any kind was perceived as, and accused of, having been produced by AI. Because I wanted to get everything right in that circumstance, I wrote in my "cold and precise" mode, which admittedly can sound robotic. However, my writing was pointed, perhaps even a little hostile, with a clear point of view. Not the kind of text AI generally produces.
After the experience, I started to think of ways to write less like an AI -- which, paradoxically, means forcing my very organic self into adopting "human-like" language I don't necessarily care for. That made me think that AI is probably changing the way a lot of people write, perhaps in subtle ways. Have you noticed this happening with you or those around you?
30 votes -
DeepSeek’s safety guardrails failed every test researchers threw at its AI chatbot
16 votes -
Building games with LLMs to help my kid learn math
9 votes -
What trustworthy resources are you using for AI/LLMs/ML education?
Every company is trying to shoehorn AI into every product, and many online materials give off a general snake-oil vibe, making it increasingly difficult to parse what's worthwhile. So far, my primary sources have been GitHub, Medium, and some YouTube.
My goal is to better understand the underlying technology so that I can manipulate it, train models, and use it more effectively. This goes beyond just experimenting with prompts and trying to overcome guardrails; it includes running models locally (e.g. Ollama on my M1 Max), which I'm not opposed to.
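For anyone in the same boat, here is a minimal sketch of what "running models locally" can look like, assuming an Ollama server on its default port (`localhost:11434`) and a model already pulled with `ollama pull`; the model name and prompt are placeholders:

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot completions
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False asks for a single JSON response instead of a
    stream of chunked partial responses.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the completion."""
    body = json.dumps(build_generate_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # The non-streaming response carries the full text in "response"
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # "llama3" is a placeholder; substitute whatever model you have pulled.
    print(generate("llama3", "In one sentence, what is a transformer?"))
```

This HTTP API is the same surface that most local-LLM front ends sit on top of, so poking it directly is a reasonable first step before reaching for heavier frameworks.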
8 votes -
Are LLMs making Stack Overflow irrelevant?
23 votes -
Nepenthes: a tarpit intended to catch AI web crawlers
33 votes -
Researchers explain that it is easy to redirect LLM-equipped robots, including military and security robots, in dangerous ways
15 votes -
Project Zero: Using large language models to catch vulnerabilities in real-world code
7 votes -
Gender, race, and intersectional bias in resume screening via language model
14 votes -
Anthropic announces New Claude 3.5 Sonnet, Claude 3.5 Haiku and the Computer Use API
19 votes -
How harmful are AI’s biases on diverse student populations?
9 votes -
GSM-Symbolic: Understanding the limitations of mathematical reasoning in large language models
15 votes -
OpenAI is a bad business
43 votes -
How to set up a local LLM ("AI") on Windows
12 votes -
Covert racism in AI: How language models are reinforcing outdated stereotypes
20 votes -
Prison inmates in Finland are being employed as data labellers to improve accuracy of AI models
22 votes -
OpenAI: Introducing o1
14 votes -
AI is here. What now?
18 votes -
AI accuses journalist of escaping psych ward, abusing children and widows
29 votes -
AI makes racist judgement calls when asked to evaluate speakers of African American vernacular English
23 votes -
The LLMentalist effect: how chat-based large language models replicate the mechanisms of a psychic's con
29 votes -
Solving a couple of hard problems with an LLM
13 votes -
How are AI and LLMs used in your company (if at all)?
I'm working on an AI chat portal for teams (think Perplexity, but trained on a company's knowledgebase; prosgpt dot com for the curious), and I wanted to talk to some people who are successfully using LLMs in their teams or jobs to improve productivity.
Are you using free or paid LLMs? Which ones?
What kind of tasks do you get an LLM to do for you?
What is the workflow for accomplishing those tasks?
Cheers,
nmn
12 votes -
Vibe Check - Let AI find you the best things
30 votes -
Experiences using a local voice assistant with LLM with HomeAssistant?
Has anyone out there hooked HomeAssistant up to a local LLM? I'm very tempted: Alexa integrations fail often. HomeAssistant integrations tend to be rock solid. Alexa is rule/pattern matching...
Has anyone out there hooked HomeAssistant up to a local LLM? I'm very tempted:
- Alexa integrations fail often. HomeAssistant integrations tend to be rock solid.
- Alexa is rule/pattern matching based. LLMs can understand natural language fairly well. The "magical incantations" required by Alexa are awkward.
Other than the software, the device side seems challenging. There are $50 fully-baked POP devices. I'm less sure on the DIY front.
Also, I desperately want my house to speak to me in the voice of the NCC-1701D computer. I've read enough now to believe this should be achievable with a modicum of effort via OSS voice-cloning tools or by training a new model (which seem to amount to the same thing, except "voice cloning" often refers to doing it without training a whole new model).
Thoughts? Experiences?
I've seen several pages that have led me to conclude this is tenable:
https://github.com/myshell-ai/OpenVoice
https://github.com/domesticatedviking/TextyMcSpeechy
https://github.com/mezbaul-h/june
https://www.home-assistant.io/voice_control/voice_remote_local_assistant/
14 votes -
Detecting hallucinations in large language models using semantic entropy
17 votes -
CodeAid: A classroom deployment of an LLM-based programming assistant
6 votes -
Hallucination-free RAG: Making LLMs safe for healthcare
12 votes -
Stack Overflow and OpenAI partner to strengthen the world’s most popular large language models
21 votes -
Meet Llama 3
17 votes -
Why large language models like ChatGPT treat Black- and White-sounding names differently
10 votes -
Have we reached peak AI?
24 votes -
AI models found to show language bias by recommending Black defendants be 'sentenced to death'
28 votes -
You can now train a 70b language model at home (if you have a dual-3090 or better)
11 votes -
Research paper compares LLM responses based on politeness of requests and finds quality difference
28 votes -
Generative AI - We aren’t ready
27 votes -
How Quora died - The site used to be a thriving community that worked to answer our most specific questions. But users are fleeing.
37 votes -
Popular AI chatbots found to give error-ridden legal answers
19 votes -
Stuff we figured out about AI in 2023
27 votes