Review: Cræft, by Alexander Langlands
4 votes -
Please stop externalizing your costs directly into my face
120 votes -
FOSS infrastructure is under attack by AI companies
39 votes -
LLM crawlers continue to DDoS SourceHut
11 votes -
Mayo Clinic's secret weapon against AI hallucinations: Reverse RAG in action
8 votes -
Factorio Learning Environment – a benchmark that tests agents in long-term planning, program synthesis, and resource optimization
13 votes -
Is it wrong to use AI to fact check and combat the spread of misinformation?
I’ve been wondering about this lately.
Recently, I made a post about Ukraine on another social media site, and someone jumped in with the usual "Ukraine isn't a democracy" right-wing talking point. I wrote out a long, thoughtful reply, only to get the predictable one-liner propaganda responses back. You probably know the type: just regurgitated stuff with no real engagement.
After that, I didn’t really feel like spending my time and energy writing out detailed replies to every canned response. But I also didn’t want to just let it sit there and have people who might be reading the exchange assume there’s no pushback or correction.
So instead, I tried leveraging AI to help me write a fact-checking reply. Not for the person I was arguing with, really, but more as an FYI for anyone else following along. I made sure it stayed factual and based in reality, avoided name-calling, and kept the tone above the usual mudslinging. And of course, I double-checked what it wrote to make sure it matched my understanding and wasn’t just spitting out garbage or hallucinations.
It got me thinking: there's a lot of fear about AI being used to create and spread misinformation. But do you think there's also an opportunity to use it as a tool to counter misinformation, without burning ourselves out in the process?
Curious how others see it.
16 votes -
Students and their ChatGPT
27 votes -
Algorithmic Complacency: Algorithms are breaking how we think
82 votes -
Have you altered the way you write to avoid being perceived as AI?
I recently had an unpleasant experience. Something I wrote entirely myself, without AI generation of any kind, was perceived as, and I was accused of, having been produced by AI. Because I wanted to get everything right in that circumstance, I wrote in my "cold and precise" mode, which admittedly can sound robotic. However, my writing was pointed, perhaps even a little hostile, with a clear point of view -- not the kind of text AI generally produces. After the experience, I started to think of ways to write less like an AI -- which, paradoxically, means forcing my very organic self into adopting "human-like" language I don't necessarily care for. That made me think that AI is probably changing the way a lot of people write, perhaps in subtle ways. Have you noticed this happening with you or those around you?
30 votes -
Overwhelmed with the realm of data exploration (datalakes, AI, plus some C-level pressure)
Hi all,
I have been handed the gargantuan task of understanding, and eventually implementing, what is effectively turning our database into an all-knowing human.
What they want at the base level is to be able to open up a chat bot or similar and ask "where can I put an ice cream shop in <x region of our portfolio>?" And the result should be able to reason against things like demographics in the area, how many competing ice cream shops are in the area, etc.
They also want it to be able to read into trends in things like rents, business types, etc., among many other "we have the data, we just don't know how to use it" questions.
You may be sitting there saying "hire a data analyst," and I agree with you, but the AI bug has bitten the C-level and they are convinced our competition has advanced systems that can give this insight into their data at the snap of a finger.
I don't know if this is true, but regardless, here I am, knee deep in the shit, trying to find some kind of solution. My boss thinks we can throw everything into a datalake, connect it to ChatGPT, and it will just work, but I have my reservations.
We have one large database that is "relational" (it has keys that other tables reference, but rarely proper foreign keys; it's a corporate accounting package specifically for commercial real estate, not our design, and 30 years old at this point), plus a couple of smaller databases for things like brokerage and some other unrelated things.
I'm currently of the opinion that a datalake won't do much for us. Maybe I'm wrong, but I think cultivating several views that combine our various tables in a sensible way, with sensible naming, will give AI a somewhat decent chance at being successful.
My first entry point was OneLake + Power BI + Copilot, but that isn't what they're looking for, and it's ridiculously expensive. I then looked at Power BI "Q&A", which was closer but still not there. You can do charts and sums and totals, etc., but you can't ask it introspective questions; it just falls on its face. I don't think it was designed for the type of things my company wants.
I have since pivoted to retrieval-augmented generation (RAG) with Azure OpenAI, and I feel like I'm on the right path, but I can't get it to work. I'm falling face first through Azure, and the tutorials that exist are out of date even though they're three months old. It's really frustrating to try to navigate Azure and Fabric and Foundry with no prior understanding. Every time I try something I have to create six resource group items, grant permissions left, right, and center, set up blob stores, etc., and in the end it just... doesn't work.
I think I'm headed in the right direction. I think I need to make some well-formatted views/data warehouses, then transform those into vector embeddings that Azure's OpenAI Foundry can retrieve and reason over, in addition to the base LLM (4o or o1-mini).
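To sanity-check myself, here's the minimal shape I think that pipeline takes, sketched with the openai Python SDK. Everything specific in it is a placeholder (the view name, connection string, and deployment names are made up, not anything we actually run):

```python
import numpy as np
import pyodbc  # or however you already talk to the accounting database
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="PLACEHOLDER",
    api_version="2024-02-01",
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
)

# 1. Pull rows from one of the curated views with sensible names.
#    vw_property_summary is a made-up stand-in for such a view.
conn = pyodbc.connect("DSN=accounting")  # placeholder connection
rows = conn.execute(
    "SELECT property_name, city, rent_psf, tenant_type FROM vw_property_summary"
).fetchall()

# 2. Flatten each row into a text chunk and embed the lot.
chunks = [
    f"{r.property_name} in {r.city}: rent {r.rent_psf}/sqft, tenant type {r.tenant_type}"
    for r in rows
]
emb = client.embeddings.create(model="text-embedding-3-small", input=chunks)
vectors = np.array([e.embedding for e in emb.data])

# 3. At question time, embed the question and grab the top-k nearest chunks
#    by cosine similarity (a real setup would use a vector index instead).
question = "Where could an ice cream shop work in the Denver portfolio?"
q = np.array(
    client.embeddings.create(model="text-embedding-3-small", input=[question])
    .data[0]
    .embedding
)
scores = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q))
context = "\n".join(chunks[i] for i in np.argsort(scores)[-5:])

# 4. Hand the retrieved rows to the chat model as grounding context.
answer = client.chat.completions.create(
    model="gpt-4o",  # in Azure this is the *deployment* name, not the family name
    messages=[
        {"role": "system", "content": "Answer using only the provided portfolio data."},
        {"role": "user", "content": f"Data:\n{context}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)
```

If that's roughly the right universe, the Azure-specific part reduces to swapping the numpy search for a proper Azure AI Search index; the embedding and chat calls stay the same.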
I tried to do a proof of concept with an exported set of data I had in a big Excel sheet, but uploading files as part of your dataset is painful: they get truncated, and even when they don't, the vectorizing doesn't seem to work unless it's a PDF, image, etc.
I need to understand whether I'm in the right universe and I need to figure out how to get this implemented without spending 10 grand a month on powerbi and datalakes that don't even work the way they want.
Anyone got any advice/condolences for me? I've been beating my head against this for days, and I'm just overwhelmed by all the buzzwords, overpromises, and terrible "demos" where someone makes a pie chart out of 15 records from the Contoso database and calls it revolutionary introspective conversational AI.
I'm just tired 😩
20 votes -
DeepSeek’s safety guardrails failed every test researchers threw at its AI chatbot
16 votes -
Building games with LLMs to help my kid learn math
9 votes -
What trustworthy resources are you using for AI/LLMs/ML education?
Every company is trying to shoehorn AI into every product, and many online materials give off a general snake-oil vibe, making the field increasingly difficult to parse. So far, my primary sources have been GitHub, Medium, and some YouTube.
My goal is to better understand the underlying technology so that I can manipulate it better, train models, and use it most effectively. This goes beyond just experimenting with prompts and trying to overcome guardrails; it includes running models locally (e.g., Ollama on my M1 Max), which I'm not opposed to.
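For concreteness, this is the sort of local loop I mean: once Ollama is serving a model, you can hit its REST API directly and skip the product packaging entirely. A minimal sketch, assuming a pulled model named llama3:

```python
import requests

def ask_local(prompt: str, model: str = "llama3") -> str:
    """Query a locally running Ollama server (default port 11434)."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(ask_local("Explain, in one paragraph, what an embedding is."))
```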
8 votes -
Are LLMs making Stack Overflow irrelevant?
23 votes -
Nepenthes: a tarpit intended to catch AI web crawlers
33 votes -
Researchers explain that it is easy to redirect LLM-equipped robots, including military and security robots, in dangerous ways
15 votes -
Project Zero: Using large language models to catch vulnerabilities in real-world code
7 votes -
Gender, race, and intersectional bias in resume screening via language model
14 votes -
Anthropic announces New Claude 3.5 Sonnet, Claude 3.5 Haiku and the Computer Use API
19 votes -
How harmful are AI’s biases on diverse student populations?
9 votes -
GSM-Symbolic: Understanding the limitations of mathematical reasoning in large language models
15 votes -
OpenAI is a bad business
43 votes -
How to set up a local LLM ("AI") on Windows
12 votes -
Covert racism in AI: How language models are reinforcing outdated stereotypes
20 votes -
Prison inmates in Finland are being employed as data labellers to improve accuracy of AI models
22 votes -
OpenAI: Introducing o1
14 votes -
AI is here. What now?
18 votes -
AI accuses journalist of escaping psych ward, abusing children and widows
29 votes -
AI makes racist judgement calls when asked to evaluate speakers of African American vernacular English
23 votes -
The LLMentalist effect: how chat-based large language models replicate the mechanisms of a psychic's con
29 votes -
Solving a couple of hard problems with an LLM
13 votes -
How are AI and LLMs used in your company (if at all)?
I'm working on an AI chat portal for teams (think Perplexity, but trained on a company's knowledgebase; prosgpt dot com for the curious), and I wanted to talk to some people who are successfully using LLMs in their teams or jobs to improve productivity.
Are you using free or paid LLMs? Which ones?
What kind of tasks do you get an LLM to do for you?
What is the workflow for accomplishing those tasks?
Cheers,
nmn
12 votes -
Vibe Check - Let AI find you the best things
30 votes -
Experiences using a local voice assistant with LLM with HomeAssistant?
Has anyone out there hooked HomeAssistant up to a local LLM? I'm very tempted:
- Alexa integrations fail often. HomeAssistant integrations tend to be rock solid.
- Alexa is rule/pattern matching based. LLMs can understand natural language fairly well. The "magical incantations" required by Alexa are awkward.
Other than the software, the device side seems challenging. There are $50 fully-baked POP devices. I'm less sure on the DIY front.
Also, I desperately want my house to speak to me in the voice of the NCC-1701-D computer. I've read enough now to know this should be achievable with a modicum of effort via OSS voice-cloning tools, or by training a new model (same difference, except "voice cloning" often seems to refer to doing this without training a whole new model?).
Thoughts? Experiences?
I've seen several pages that have led me to conclude this is tenable:
https://github.com/myshell-ai/OpenVoice
https://github.com/domesticatedviking/TextyMcSpeechy
https://github.com/mezbaul-h/june
https://www.home-assistant.io/voice_control/voice_remote_local_assistant/
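For what it's worth, the seam I'm imagining on the HomeAssistant side is just its conversation API, with an LLM-backed conversation agent (e.g. the Ollama integration) configured behind it. A rough sketch; the host and token below are placeholders:

```python
import requests

HA_URL = "http://homeassistant.local:8123"  # placeholder host
TOKEN = "LONG_LIVED_ACCESS_TOKEN"           # created under your HA user profile

def send_utterance(text: str) -> str:
    """Forward a transcribed utterance to HomeAssistant's conversation API.

    With an LLM-backed conversation agent configured, HA routes the text
    to the local model and returns its spoken reply.
    """
    resp = requests.post(
        f"{HA_URL}/api/conversation/process",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"text": text, "language": "en"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["response"]["speech"]["plain"]["speech"]

print(send_utterance("turn off the kitchen lights"))
```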
14 votes -
Detecting hallucinations in large language models using semantic entropy
17 votes -
CodeAid: A classroom deployment of an LLM-based programming assistant
6 votes -
Hallucination-free RAG: Making LLMs safe for healthcare
12 votes -
Stack Overflow and OpenAI partner to strengthen the world’s most popular large language models
21 votes -
Meet Llama 3
17 votes -
Why large language models like ChatGPT treat Black- and White-sounding names differently
10 votes -
Have we reached peak AI?
24 votes -
AI models found to show language bias by recommending Black defendants be 'sentenced to death'
28 votes -
You can now train a 70b language model at home (if you have a dual-3090 or better)
11 votes -
Research paper compares LLM responses based on politeness of requests and finds quality difference
28 votes -
Generative AI - We aren’t ready
27 votes -
How Quora died - The site used to be a thriving community that worked to answer our most specific questions. But users are fleeing.
37 votes -
Popular AI chatbots found to give error-ridden legal answers
19 votes -
Stuff we figured out about AI in 2023
27 votes