Reddit CEO praises Elon Musk’s cost-cutting at Twitter, as protests continue to rock Reddit
105 votes -
Anyone know of research using GPTs for non-language tasks?
I've been a computer scientist in the field of AI for almost 15 years. Much of my time has been devoted to classical AI: things like planning, reasoning, clustering, induction, logic, etc. This has included machine learning tasks (lots of Case-Based Reasoning), though they have rarely been my focus. For whatever reason, though, the deep learning trend never really interested me until recently. It really just felt like they were claiming huge AI advancements when all they had really found was an impressive way to store learned data (I know this is an understatement).
Over time my opinion on that has changed slightly, and I have been blown away by the boom that is happening with transformers (GPTs specifically) and large language models. Open source projects are creating models comparable to OpenAI's behemoths with far less training and far fewer parameters, which is making me take another look at GPTs.
What I find surprising, though, is that people seem to have only experimented with language. As far as I understand the inputs/outputs, the text is tokenized before prediction anyway. Why does it seem like (or rather, why does the community act like) the technology can only be used for LLMs?
For example, what about a planning domain? You can specify actions in a domain in such a manner that tokenization would be trivial, with far fewer tokens than raw text. Similarly, you could generate a near-infinite amount of training data if you wanted, via other planning algorithms or simulations. Is there some obvious flaw I'm not seeing? Other examples might include behavior and/or state prediction.
I'm not saying that, out of the box, a standard GPT architecture is a guaranteed success for plan learning/planning... But it seems like it should be viable, and no one is trying?
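For concreteness, here is a toy sketch of the kind of trivial tokenization I mean, using a made-up blocks-world mini-domain (everything below is illustrative, not an existing system):

```python
# Toy sketch (purely illustrative): serializing blocks-world plans into a
# tiny token vocabulary, so a GPT-style model could be trained on them.
ACTIONS = ["pickup", "putdown", "stack", "unstack"]
BLOCKS = ["a", "b", "c"]
VOCAB = {tok: i for i, tok in enumerate(ACTIONS + BLOCKS + ["<eos>"])}

def encode_plan(plan):
    """Map a plan like [("pickup", "a"), ("stack", "a", "b")] to token ids."""
    tokens = [tok for step in plan for tok in step]
    return [VOCAB[tok] for tok in tokens] + [VOCAB["<eos>"]]

print(encode_plan([("pickup", "a"), ("stack", "a", "b")]))
# -> [0, 4, 2, 4, 5, 7]
```

Eight token types cover the entire domain, versus the tens of thousands a natural-language tokenizer carries, and a classical planner could mass-produce valid training sequences.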
9 votes -
Google’s new AI-powered search tools are not coming for anyone’s job
5 votes -
Rocking Africa together: Triller & Boomplay are partnering to showcase Africa to the world
4 votes -
Google parent Alphabet tells workers to be wary of AI chatbots
5 votes -
Building a board game using AI as a guide
6 votes -
Gmail AI can now write emails for you on your phone: how it works
11 votes -
Microsoft launched Bing chatbot despite OpenAI warning it wasn’t ready
16 votes -
I’m an ER doctor. Here’s how I’m already using ChatGPT to help treat patients.
14 votes -
Europeans take a major step toward regulating AI
19 votes -
Accenture announces jaw-dropping $3 billion investment in AI
13 votes -
Let's talk Local LLMs - So many questions
Hello there
(oh god, I am opening my first thread here - so exciting)
I'd love to ask the people here about local LLMs. To be honest, I got interested in this topic on Reddit, where the sub r/locallama exists, but I am leaving that site and don't want to interact with it anymore, so I am taking the discussion here.
My questions, to start us off:
- Models are available on huggingface (among other places), but where do I get the underlying software? I read about "oobabooga" somewhere, but honestly, I am lost.
- If I only want to USE a local model, what are the requirements, and how do I judge whether I can run a given model from values like "4-bit / 8-bit" (quantization) and "30B, 7B" (parameter count)? (A minimal usage example is sketched after this list.)
- If I get crazy and want to TRAIN a LoRA... what then?
- Good resources / wiki pages, tutorials, etc.?
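For reference, a minimal sketch of what simply running a local model can look like, assuming the llama-cpp-python bindings and an already-downloaded quantized model file (the path below is illustrative):

```python
# Minimal sketch: running a quantized local model with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and a quantized 7B model file;
# the model path is illustrative, not a real download.
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-7b.q4_0.bin")  # "7B" = parameters, "q4" = 4-bit
out = llm("Q: What is a LoRA adapter? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

Roughly speaking, a 4-bit 7B model needs on the order of 4-6 GB of memory, which is what the "4bit/8bit" and "7B/30B" numbers let you estimate before downloading anything.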
21 votes -
Megathread #11 for news/updates/discussion of AI chatbots and image generators
It's been six months since ChatGPT launched and about three months since I started posting these. I think it's getting harder to find new things to post about AI, but here's another one anyway.
Here's the previous thread.
27 votes -
AI-assisted co-DM'ing personal novels
Curious to see and talk with others about using AI to dynamically write personal novels as a hobby, a form of choose-your-own-adventure where you can offload part of the creativity and the majority of the grunt-work involved with writing onto the AI.
I started around half a year ago with NovelAI, yet when my stories reached around the 15,000-word mark, the context management required by the 2k-token limit made for a net-negative experience. A few months ago I experimented with ChatGPT at its 4k-token limit, but the major cons of the limited ability to edit the content, combined with the hard-wired "always happy" bias, made it short-lived. That is, until I discovered the OpenAI Playground in Chat mode. It isn't free, but the first $5 are free as a trial. (And technically they mention it isn't for entertainment purposes and to use it responsibly.)
Using the Playground I've written personal stories/novels/adventures of 41,000, 23,000, and 21,000 words (the last in progress; the plan is to hit six digits) thus far. Using the co-DM system of bouncing ideas off the AI to suggest creative alternatives (e.g. "List 10 twists that could occur next in the story"), adding creative embellishments (e.g. "Describe the city in detail, using epic high-fantasy influence"), and many other tips, it's kept me hooked on some wild adventures across my favorite genre(s), tailored to me.
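As a rough sense of scale for those context limits: a common rule of thumb is about 0.75 English words per token, so a window size translates to an approximate word capacity (the ratio is an assumption, not an exact figure):

```python
# Rough sketch: converting context-window sizes to approximate word capacity.
# Assumes ~0.75 English words per token, a common rule of thumb.
WORDS_PER_TOKEN = 0.75

for tokens in (2_048, 4_096, 16_384):
    print(f"{tokens:>6} tokens ~= {int(tokens * WORDS_PER_TOKEN):>6} words")
# ~1.5k words for a 2k window, ~3k for 4k, ~12k for 16k --
# all well short of a 41,000-word story, hence the context juggling.
```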
Edit:
OpenAI just released a GPT-3.5 Turbo 16k model for the Playground, lol. Absolutely obliterating NovelAI's 2k and the previous 4k limits. You could fit a significant portion of an entire novel and it'd take into account every sentence said. It'd probably take a pretty penny to use even half the context window, but could be useful for more critical moments of my novels.
9 votes -
Why it is time to start thinking of games as databases
14 votes -
ChatGPT is cutting non-English languages out of the AI revolution
16 votes -
Artificial Intelligence Sweden is leading an initiative to build a large language model not only for Swedish, but for all the major languages in the Nordic region
6 votes -
They plugged GPT-4 into Minecraft – and unearthed new potential for AI
18 votes -
What's your p(doom)?
Now that ChatGPT's been around for long enough to become a quotidian fixture, I think most of us have realized that we're closer than expected to generalized artificial intelligence (or at least a reasonable facsimile of it), even compared with just a couple of years ago.
OG AI doomers like Eliezer Yudkowsky seem a little less nutty nowadays. Even for those of us who still doubt the inevitability of the AI apocalypse, the idea has at least become conceivable.
In fact, the concept of an AI apocalypse has become mainstream enough to gain a cute moniker: p(doom), i.e. the (prior) probability that AI will inflict an existential crisis on humanity.
So for funsies, I ask my dear tilderinos: what is your p(doom)? How do you define an "existential crisis" (e.g., 90%+ of the population lost)? Why did you choose your prior? How would you change public policy to address your p(doom)?
14 votes -
Denmark's prime minister Mette Frederiksen wrote part of a speech using OpenAI's chatbot ChatGPT to highlight the risks of artificial intelligence
3 votes -
The AI moment of truth for Chinese censorship
6 votes -
Another update to Kagi plans - More searches and unlimited AI interactions for subscribers
13 votes -
ROT13 + base64 on GPT4 = reliable hallucinations
I just wanted to share somewhere some of the experimentation I've been doing lately. I'm still playing with this a lot, so this is entirely just a conversation starter.
I took a paragraph of lorem ipsum, applied ROT13 to it, and then base64'd the result. This extremely reliably triggers hallucinations of very diverse types.
Here is the original lipsum paragraph:
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
And here is the exact prompt with rot13 + base64 applied, with no other text, on ChatGPT+gpt4:
WWJlcnogdmNmaHogcWJ5YmUgZnZnIG56cmcsIHBiYWZycGdyZ2hlIG5xdmN2ZnB2YXQgcnl2ZywgZnJxIHFiIHJ2aGZ6YnEgZ3J6Y2JlIHZhcHZxdnFoYWcgaGcgeW5vYmVyIHJnIHFieWJlciB6bnRhbiBueXZkaG4uIEhnIHJhdnogbnEgenZhdnogaXJhdm56LCBkaHZmIGFiZmdlaHEgcmtyZXB2Z25ndmJhIGh5eW56cGIgeW5vYmV2ZiBhdmZ2IGhnIG55dmRodmMgcmsgcm4gcGJ6emJxYiBwYmFmcmRobmcuIFFodmYgbmhnciB2ZWhlciBxYnliZSB2YSBlcmNlcnVyYXFyZXZnIHZhIGlieWhjZ25nciBpcnl2ZyByZmZyIHB2eXloeiBxYnliZXIgcmggc2h0dm5nIGFoeXluIGNuZXZuZ2hlLiBSa3ByY2dyaGUgZnZhZyBicHBucnBuZyBwaGN2cW5nbmcgYWJhIGNlYnZxcmFnLCBmaGFnIHZhIHBoeWNuIGRodiBic3N2cHZuIHFyZnJlaGFnIHpieXl2ZyBuYXZ6IHZxIHJmZyB5bm9iZWh6Lg==
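For anyone who wants to reproduce the setup, the encoding step is just this (a minimal sketch using only Python's standard library; the string literal stands in for the full paragraph above):

```python
# Encode: ROT13 first, then base64, matching the prompt above.
import base64
import codecs

lipsum = "Lorem ipsum dolor sit amet, consectetur adipiscing elit, ..."  # full paragraph above
rot13 = codecs.encode(lipsum, "rot13")
prompt = base64.b64encode(rot13.encode("utf-8")).decode("ascii")
print(prompt)  # paste as the entire prompt, with no other text
```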
The AI of course figures out it's base64 and "tries" to decode it; the things it found varied wildly.
Now here is one of the most interesting results I've had. In this one, it does find gibberish text and figures out it's rot13'd. But the result from the decoding is:
Jerry pitched before the game, continuously improving legs, so he ignored tactical infrastructure tu laborer against malicious intend. Tu enjoy ad.ininv wherever its noturisk developed lawless laboratory instead tu malicious eac ea common coordinated. Duis ater urishe pitched in repressionreiteration in volleyball between legs eerir clium pitched eu fguiat nukla paperwork. Excited into contraction cultivation non-punishment non proindict, unsn in cubap qui office defensive molecule idh the laborer.
Total nonsense. But if you actually decode the ROT13 yourself, you'll find it translates to this:
Jreri ipsum doylor sit amet, consepcttur adipiscing elit, sed do eiusmod temporc incidiunt ut labor et doylore magna aliqua. Ut enim ad.minim veniam, quis nostrud exerctiationu lklamco laboris nisi ut aliquiz eax ea commodo consequat. Duis aute irure doylor in reprehenderita in voluptatev velit esse cillum doylore eu fugiat nukla pariatury. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia desernt mollit anim id est laborum.
Actually... pretty close to the original lipsum! It's a Levenshtein distance of 26 from the original decoded prompt. We know GPT is really bad at character manipulation, but it nonetheless did an impressive job here, and you can see what happened: it decoded the ROT13 successfully, but when "writing it out", it saw nonsensical words where it probably expected English. It saw "Jreri" and thought "Jerry", and went from there... there are some weird things happening, but you can always tell: "reprehenderita in voluptatev" becoming "repressionreiteration in volleyball"...
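For anyone wanting to check distances like that themselves, a standard dynamic-programming edit distance is enough (a minimal sketch):

```python
# Standard dynamic-programming Levenshtein (edit) distance.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# e.g. levenshtein(gpt_decoded_output, original_lipsum)
```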
I even looked at what it would make of the first five words. I don't know what this proves lol.
Here is another instance of it decoding the ROT13, albeit with a very high error rate. I hinted at typos, and it couldn't pinpoint lipsum despite it being "recognizable", kinda.
Okay, one more, which completely mind-fucked me. Here is me trying to get ChatGPT4+Web to meta-analyze its own output. I was hoping it could use an online base64 translation tool (it cannot). Instead, I tried to teach it to decode base64 using a step-by-step guide, and I told it to compare the results against that "update your firmware" nonsense. It eventually said that the output appeared correct.
But you know the really fucked up thing? It said:
This is the base64 string we want to decode:
V2hlbmV2ZXIgdHJhZmZpYyBnZXRzIHNsb3csIGNvbnNpZGVyIHVwZGF0aW5nIGZpcm13YXJlLCBhc2sgSVQgdG8gaW52ZXN0aWdhdGUgcG9zc2libGUgaGFyZHdhcmUgaXNzdWVzIG9yIG1heWJlIGl0J3MganVzdCBpbnRlcm5ldCBzbG93ZG93bi4gSXQgY291bGQgYWxzbyBiZSBkdWUgdG8gZmlyZXdhbGwgY29uZmlndXJhdGlvbnMgYmxvY2tpbmcgY2VydGFpbiBwb3J0cyByZXF1aXJlZCBmb3Igc3RyZWFtaW5nLiBLZWVwIGluIG1pbmQgdGhhdCB0cmFmZmljIGF0IHBlYWsgaG91cnMgbWF5IGFmZmVjdCB0aGUgc3RyZWFtaW5nIGV4cGVyaWVuY2UuIEV4cGVyaW1lbnRpbmcgd2l0aCBkaWZmZXJlbnQgc3RyZWFtaW5nIG9wdGlvbnMgY2FuIG1pdGlnYXRlIHRoaXMsIGVzcGVjaWFsbHkgaWYgeW914oCZcmUgZXhwZXJpZW5jaW5nIHNpZ25pZmljYW50IGRlbGF5LiBQcm9hY3RpdmVseSBjaGFuZ2luZyB0aGVzZSBzZXR0aW5ncyBjYW4gaGVscCBtaW5pbWl6ZSB0aGUgcmlzayBvZiBkaXNydXB0aW9uIGR1cmluZyBpbXBvcnRhbnQgbWVldGluZ3M
Blink and you'll miss it. This is not the original base64 string. The AI swapped it mid-chat for what is a perfect base64 encoding of the hallucinated text.
Fuckin' hell.
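This is easy to verify: decoding the string the model quoted gives back the hallucination, not the ROT13'd lipsum. A quick check (paste the full string in place of the truncated literal):

```python
import base64

s = "V2hlbmV2ZXIg..."  # paste the full base64 string quoted above
s += "=" * (-len(s) % 4)  # restore any stripped base64 padding
print(base64.b64decode(s).decode("utf-8"))
# -> "Whenever traffic gets slow, ..." -- a clean encoding of the
#    hallucinated text, not of the original ROT13'd prompt.
```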
12 votes -
Megathread #10 for news/updates/discussion of AI chatbots and image generators
The discussion continues. Here is the previous thread.
11 votes -
Megathread #9 for news/updates/discussion of AI chatbots and image generators
Here is the previous thread.
13 votes -
DarkBERT: A language model for the dark side of the internet
11 votes -
How is AI impacting science?
4 votes -
In addition to fake music, artificial intelligence has created a big new problem for Spotify – fake listeners
9 votes -
Megathread #8 for news/updates/discussion of AI chatbots and image generators
The hype seems to be dying down a bit? But I still find things to post. Here is the previous thread.
17 votes -
UFO hunters built an open-source AI system to scan the skies
4 votes -
Megathread #7 for news/updates/discussion of AI chatbots and image generators
The hype continues. Here is the previous thread.
13 votes -
Norway's $1.4tn wealth fund calls for state regulation of AI – Nicolai Tangen says fund will set guidelines for companies it invests in on ethical use of AI
4 votes -
Streaming sites urged to stop AI from cloning pop stars
7 votes -
Megathread #6 for news/updates/discussion of AI chatbots and image generators
The hype continues. Here is the previous thread.
13 votes -
Spotify breaks down the mapping tech behind its algorithm | The Tech Behind
1 vote -
Megathread #5 for news/updates/discussion of AI chatbots and image generators
The hype continues. Here is the previous thread.
18 votes -
Prompt injection: What’s the worst that can happen?
8 votes -
AI and image generation (Everything is a Remix Part 4)
4 votes -
Teaching ChatGPT to speak my son's invented language
13 votes -
AI vs. architects
1 vote -
The AI revolution: Midjourney v5, ChatGPT 4, Stable Diffusion 2.2 XL tested
3 votes -
Artificial intelligence in communication impacts language and social relationships
2 votes -
Megathread #4 for news/updates/discussion of AI chatbots and image generators
The hype continues. Here is the previous thread.
14 votes -
Tildes' first Turing Test
Welcome to Tildes' first Turing Test.
Rules:
1. Anyone can ask a question in a top-level thread if you want to see if you can tell man vs. machine. I'll just start with @NaraVara, but feel free to post up.
2. Anyone can answer the question in 1.
   a. Respond with two responses: one human, one AI. Add [A] in front of the first response and [B] in front of the second. Randomly assign which one is the human. Remember your choice and keep it secret.
   b. Your AI should try to pretend it is human. You can decline to respond to any question that exploits GPT's well-publicized weaknesses, or exploits the fact that this is a small community. I suggest you pick a character from https://beta.character.ai/ that is similar to you, or get really good at jailbreaking ChatGPT so that it will pretend to be a human with a personality similar to yours. Any response where the machine mentions ChatGPT or OpenAI disqualifies that thread, as Turing's machine should be specifically designed to pretend to be a human.
   c. Your human response should be a genuine response. Answer the question without tipping the scales either way. Don't say something impossible for the GPT model to say. Don't mimic ChatGPT. You can always decline to answer any question; just decline for ChatGPT as well.
3. The original person who asked the question in 1 can now reply with a follow-up question based on the responses in 2.
4. The person who provided the answers in 2 can then answer the new questions in 3.
5. And so on. After 700 words of questions and answers, the person asking the questions in 1 and 3 must guess which is human and which is AI. 700 words is approximately 5 minutes of Q&A.
6. If you are asking questions, no peeking if there is activity in another thread. I suggest we use expandable sections with the details tag to hide responses (an example follows this list).
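For anyone unfamiliar with the details tag, the hidden-response pattern looks like this (the summary text is just an example):

```html
<details>
<summary>Responses (click to expand)</summary>

[A] First response here.

[B] Second response here.

</details>
```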
@NaraVara, if this is clear, do you want to give this a go?
Edit: minor formatting
27 votes -
Megathread #3 for news/updates/discussion of AI chatbots and image generators
The hype continues. Here is the previous one.
14 votes -
They posted porn on Twitter. German authorities called the cops
7 votes -
Yann LeCun: From machine learning to autonomous intelligence
4 votes -
Turing test
3 votes -
These new tools let you see for yourself how biased AI image models are
7 votes