4 votes -
FOSS funding vanishes from EU's 2025 Horizon program plans. Elimination of most Next Generation Internet funding 'incomprehensible,' says OW2 CEO Pierre-Yves Gibello.
28 votes -
OpenAI improving model safety behavior with Rule-Based Rewards
6 votes -
Solving a couple of hard problems with an LLM
13 votes -
How Apple just stole "AI" from everyone else
12 votes -
Now available: AI indulgences
12 votes -
Academic authors 'shocked' after Taylor & Francis sells access to their research to Microsoft AI
42 votes -
It may soon be legal to jailbreak AI to expose how it works
29 votes -
Tony Blair wants us to stop worrying and embrace AI
8 votes -
/r/nixos enables automated moderation with Watchdog
16 votes -
Scarlett Johansson says OpenAI’s Sam Altman would make a good Marvel villain after voice dispute
33 votes -
What are your thoughts on Perplexity.ai for search?
If you haven't used it yet, it's more like a cited-source summary tool. I actually really like it for questions such as "Who is X and why are they important?" I'm interested in people's thoughts on it.
15 votes -
How are AI and LLMs used in your company (if at all)?
I'm working on an AI chat portal for teams, think Perplexity but trained on a company's knowledgebase (prosgpt dot com for the curious), and I wanted to talk to some people who are successfully using LLMs in their teams or jobs to improve productivity.
Are you using free or paid LLMs? Which ones?
What kind of tasks do you get an LLM to do for you?
What is the workflow for accomplishing those tasks?
Cheers,
nmn
12 votes -
Exclusive: OpenAI working on new reasoning technology under code name ‘Strawberry’
17 votes -
Library asks users to verify that books actually exist before making a loan request because AI invents book titles
43 votes -
How to raise your artificial intelligence
7 votes -
We need to control AI agents now
19 votes -
Meet Mercy and Anita – the African workers driving the AI revolution, for just over a dollar an hour
18 votes -
Superintelligence—ten years later
8 votes -
ChatGPT is bullshit
61 votes -
Mitigating Skeleton Key, a new type of generative AI jailbreak technique
15 votes -
Microsoft CEO of AI claims online content is 'freeware' [and can be used to train LLMs in the absence of specific directives from the author against this]
43 votes -
AI-powered scams and what you can do about them
7 votes -
Why so many bitcoin mining companies are pivoting to AI
14 votes -
AI work assistants need a lot of handholding (gifted link)
7 votes -
Vibe Check - Let AI find you the best things
30 votes -
Anthropic's CEO on being an underdog
9 votes -
AI the product vs AI the feature
17 votes -
Experiences using a local voice assistant with LLM with HomeAssistant?
Has anyone out there hooked HomeAssistant up to a local LLM? I'm very tempted:
- Alexa integrations fail often. HomeAssistant integrations tend to be rock solid.
- Alexa is rule/pattern matching based. LLMs can understand natural language fairly well. The "magical incantations" required by Alexa are awkward.
Other than the software, the device side seems challenging. There are $50 fully-baked POP devices. I'm less sure on the DIY front.
Also, I desperately want my house to speak to me in the voice of the NCC-1701D computer. I've read enough now to know this should be achievable with a modicum of effort via OSS voice-cloning tools or by training a new model ("voice cloning" usually just refers to achieving the same result without training a whole new model).
Thoughts? Experiences?
I've seen several pages that have led me to conclude this is tenable:
https://github.com/myshell-ai/OpenVoice
https://github.com/domesticatedviking/TextyMcSpeechy
https://github.com/mezbaul-h/june
https://www.home-assistant.io/voice_control/voice_remote_local_assistant/
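For anyone curious what the plumbing might look like, here is a minimal sketch of routing a natural-language command through a local LLM and into Home Assistant's REST API. It assumes an Ollama-style local server on its default port; the model name, URLs, entity IDs, and the JSON "contract" in the prompt are all placeholders I've made up for illustration, not a tested integration (Home Assistant's built-in Assist pipeline, linked above, does this properly).

```python
# Hypothetical sketch: turn a spoken command into a Home Assistant
# service call via a local LLM. Endpoints/entity IDs are assumptions.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"       # assumed local Ollama server
HA_URL = "http://homeassistant.local:8123/api/services"  # assumed HA instance

SYSTEM_PROMPT = (
    "You control a smart home. Reply with JSON only: "
    '{"domain": "...", "service": "...", "entity_id": "..."}'
)

def build_llm_payload(command: str, model: str = "llama3") -> dict:
    """Package a natural-language command for the LLM."""
    return {
        "model": model,
        "prompt": f"{SYSTEM_PROMPT}\nCommand: {command}",
        "format": "json",   # Ollama can constrain output to valid JSON
        "stream": False,
    }

def parse_llm_reply(reply_text: str) -> tuple[str, str]:
    """Extract (service path, entity_id) from the model's JSON reply."""
    intent = json.loads(reply_text)
    return f"{intent['domain']}/{intent['service']}", intent["entity_id"]

def call_service(token: str, path: str, entity_id: str) -> None:
    """Fire the service call against Home Assistant's REST API."""
    req = urllib.request.Request(
        f"{HA_URL}/{path}",
        data=json.dumps({"entity_id": entity_id}).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    urllib.request.urlopen(req)
```

So "turn on the kitchen light" would (ideally) come back from the model as `{"domain": "light", "service": "turn_on", "entity_id": "light.kitchen"}`, which maps onto HA's `POST /api/services/light/turn_on`. The fuzzy part is, of course, getting the model to emit that JSON reliably.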
14 votes -
Detecting hallucinations in large language models using semantic entropy
17 votes -
AI took their jobs. Now they get paid to make it sound human.
26 votes -
Butterflies: An AI social network
11 votes -
New AI project aims to mimic the human neocortex: The Thousand Brains Project offers a fundamentally different approach to AI
19 votes -
Meta hit with Norwegian complaint over its plans to use images and posts of users on Facebook and Instagram to train artificial intelligence models
27 votes -
Microsoft admits that maybe surveilling everything you do on your computer isn’t a brilliant idea
27 votes -
OpenAI adds former US NSA chief Paul Nakasone to the board
21 votes -
DuckDuckGo AI Chat: anonymous access to popular AI chatbots
46 votes -
Apple Intelligence
31 votes -
Generative AI is not going to build your engineering team for you
15 votes -
Elon Musk threatens to ban iPhones and MacBooks at his companies after Apple announces OpenAI partnership
40 votes -
Hackers target AI users with malicious stable diffusion tool on Github to protest 'art theft'
17 votes -
AI will become mathematicians’ ‘co-pilot’
5 votes -
Discussing AI music - examples and some thoughts
I'm not sure if this would be better for ~music, ~tech, or what, but after messing around with Udio for a bit, I made some stuff I liked and wanted to get folks' thoughts. Imo, it's incredible to be able to get music from a text prompt - it means I, as someone who is mostly ignorant to music production, can have my musical idea and actually render that out as music for someone to hear. I can think "damn that would be cool" and then in kind of a fuzzy way, make it happen then and there. Whether it's good, I don't know. That's not up to me, really, but it is the kind of sound I wanted to happen, so I'm left conflicted on how to feel about it. Figured it would be worthwhile to show folks some of it, and see what they think.
I do enjoy synth and metal, so there's a lot of that in these. Feel free to be as critical as you like. If I can apply your criticism I will try to do it, and if you want to see how that works out, I'll share.
- Cosmoterrestrial
- A Floyd, Pinkly
- Empire's Demise, Foretold
- Metal for Ghosts Bedsheet Edition (the very end of this one is hilariously appropriate)
- Multi-3DS Drifting
And here's a link to my profile, if you would like to browse. It will update too when I put more up.
They're all instrumental. Lyrical music is less appealing to me in general and Udio's voices do sound kinda weird to me more often than not. The way I made the tracks, I would start with a clip combining some genres/moods, and then add to either end of the clip until I had a complete song. Along the way, I could introduce new elements/transitions by using more text/tweaking various settings and flipping "manual mode" on and off. The results were fuzzy; I didn't always get what I wanted, but I could keep trying until I did, or until I got something that sounded "better". I wrote all the titles after the song was finished. The album art is from a text prompt.
I'm not sure what I think, to be honest. On the one hand, a lot of the creative decision-making wasn't mine. On the other, the song would not be what it is without me making decisions about how it came about and what feelings/moods/genres were focused upon/utilized. I think the best I can say is "use the tool and see whether it's enough to count". To me it feels almost 50/50, like I've "collaborated with my computer" rather than "made music". Does it matter? If the sound is the intended sound, the sound I hoped to make and wanted to share, is that enough to say it is "my music"? Is this perhaps just what it looks like to be a beginner in a different paradigm?
When I used Suno, I had a much more rigid opinion. What it produced, I called "computer spit". Because, all I could actually control was telling it to continue, changing the prompt, and giving it structure/genre tags that felt like a coin flip in terms of effectiveness. I had a really hard time trying to get it to keep/recall melody, and my attempts to guide it along felt more like gambling than deliberate decisions. It also couldn't keep enough in context to make the overall song consistent with respect to instrumentation. It's different with Udio, both because you have a lot of additional tools, and because it feels like those tools work more consistently at making the model do what you want. I still call the results "computer spit" where I've shown them off, but I'm unsure now whether the production has enough of myself in it to be something more. Perhaps not on the same level as something someone produced by playing an instrument, or choosing samples/arranging things in software, but also not quite the same as the computer just rolling along, with me going "thumbs up" or "thumbs down". Maybe these distinctions don't actually matter, but I'd be curious if anyone has thoughts along these lines.
I'm intentionally trying to avoid a discussion about the morality of the thing or what political/social ramifications it has, not because I don't care about that but because I'm in the middle of trying to understand the tool and what its results mean. Would you consider what I've posted here work I could claim as my own, or do you think the computer has enough of a role to say it's not? Is my role in the production large enough? Or perhaps you have a stronger position, that nothing the computer can possibly do in this way counts as original music. Does any of this change that position for you? I ask because I've gone through a lot of opinions myself as I've been following things, and one interesting bit is that I have not gotten any copyright notices when I've uploaded the music to Youtube (I did get notices with Suno's music). As far as I can tell, with what is available to me, this is all original.
And of course, the most important one: Did you like it? Is there something you think would make them better? Do they all suffer from something I'm not seeing/hearing? I'm neither an expert technician nor a music producer, so perhaps my ignorant ears are leading me astray. Either way, I've had a ton of fun doing this, and the results to my ear are fun to listen to while I'm doing stuff. I wouldn't call any of it the best music I've ever heard, but I can also think of a lot that is worse. I think what I wonder the most is whether it comes off bland/plain. Most of the folks I show things to are a bit too caught up in being astounded/disturbed to really give me much feedback, so perhaps putting the request in this form will work out a bit better - y'all have time to think on it.
As always, your time and attention are greatly appreciated.
Edit: I should clarify. I am not attempting to be a musician. Hence calling it "computer spit" with anything public, and the lack of any effort to pitch it as something I did only on my own. Rather, I recognize the limit of my own understanding, and felt I'd hit a point where my ignorance of production meant I could not judge the results as well as I'd like. That means it's time to engage some folks because folks out there are likely to know what I do not and see things I can't. From that angle, a lot of the discussion is very interesting, and I'll be responding to those in a bit. But there's no need to argue for doing the work - I recognize that. I'm trying to see past my own horizons with a medium I don't put the work into. I'm a consumer of music, not a creator, so getting some perspective from folks more acquainted with creating and with the technology is really what I'm after in sharing the experience.
Edit again: Thank you all for a very interesting discussion. I had a spare evening/morning and this was a good use of it. For the sake of tying a bow on the whole thing, I'll share my takeaways as succinctly as I can manage.
It seems, at present, and at best, the role these tools can play is of a sort of personal noise generator. The overall impression I have been left with is that the output is not of sufficient interest, quality, complexity, etc., to really be regarded the same as human-produced music. And for other reasons, it may be that the fuzziness of it all is a permanent feature, and thus a permanent constraint on how far toward "authentic" the results can ever get. I was trying to avoid a discussion about my own creativity, the value of doing work, societal ramifications, etc., so I'll work on how to present things better. For what it's worth, this has all been part of what I do creatively - my area of study was philosophy, and the goal of that to my mind has always been "achieving clarity". So I am attempting to achieve clarity with things as they develop, as a hobby sort of interest while I'm busy doing completely different stuff and to better protect my own mind against dumb marketing and hype. So once again, I appreciate you all taking the time, and I wish you all well in all the things you do.
24 votes -
AI: The decade ahead
27 votes -
The latest AI use cases appear to be built specifically for managers and executives, and literally nobody else
30 votes -
OpenAI insiders warn of a ‘reckless’ race for dominance
15 votes -
Rabbit R1 is a scam
16 votes -
Hands-on: Frame by Brilliant Labs is a disappointing stepping stone to something better
14 votes -
Ex-OpenAI board member reveals what led to Sam Altman's brief ousting
35 votes -
How are you dealing with AI generated results in your searches?
I'm finding it more and more frustrating having to parse the things I'm actually looking for from what feels like a deluge of rubbish. Are there any strategies, extensions, add-ons, etc that people are using to filter results?
46 votes