As much as I would like to hate machine learning, replacing hungover magic is spot on.
I thought the different threads were somewhat sandboxed until now. Have I been creating new conversations when I want to change topic slightly for no reason at all?
Conversations are entirely siloed. The only long-term context is the custom system prompt (if any) and whatever it stores in the Memory feature (either automatically or when you ask it to remember something).
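Roughly, you can picture what the model gets handed at the start of each conversation like the sketch below. This is a made-up illustration, not OpenAI's actual code (every name in it is an assumption); the point is just that only the custom instructions and the stored Memory notes carry over, never your other chats.

    # Made-up sketch of per-conversation context assembly; none of this is
    # OpenAI's real implementation. It just illustrates that only the custom
    # instructions and the stored Memory notes persist across chats.
    def build_context(custom_instructions: str,
                      memory_notes: list[str],
                      current_chat: list[dict]) -> list[dict]:
        system = custom_instructions
        if memory_notes:
            system += "\n\nThings to remember about the user:\n"
            system += "\n".join(f"- {note}" for note in memory_notes)
        # Only messages from *this* conversation are included;
        # other conversations are never visible to the model.
        return [{"role": "system", "content": system}] + current_chat

    messages = build_context(
        "You are a helpful assistant.",
        ["Prefers Python examples", "Often asks about backronyms"],
        [{"role": "user", "content": "Suggest a weekend project."}],
    )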
so yeah that's what I thought but I tried this prompt for funsies and it specifically told me about things I tend to ask it, with the specific programming languages I tend to ask about, and also "backronyms" - it has to be able to look into other chats I've had to say that (or at least the titles)
If you have the "memory" option enabled it will have some of that stuff. This setting.
interesting, i have that enabled but also i often lie to it when asking questions for friends, maybe i should start saying 'my friend is wondering' instead of 'i am wondering'...i don't want it to think that i have a mac for example. i figured answers are better if i request it as if it's for myself but maybe they've corrected for that by now, i'll test out a bit next time it happens
You can pretty easily remove memories that you don't want in there - and you can always preface a new convo with "nothing in this chat should be updated to your memory"!
Weird take. The author's "bio" is self-admittedly thin because they generally keep the Memory feature turned off, but anyone with it on will accumulate a lot more material to work with. I regularly delete extraneous notes and yet I've got 28 stored right now, including a list of my favorite movies, ongoing technical projects I'm working on, and preferences I have for various workflows. Not exactly psychiatrist's notes, but more than enough to make an educated guess for a broad question like this.
(Also, I asked it whether it has access to chat history outside of the Memory feature and it truthfully said it doesn't. If people assume that it does and that this "horoscope" is more tailored to them than just the Memory-stored stuff, that's kind of on them.)
It's true that LLMs work off a clean slate unless using something like RAG to fill the context window. However, it's worth pointing out that this isn't actually the kind of question an LLM can answer accurately. They don't understand their own nature, and can't answer questions about how they work. They can only really guess in this case.
Of course, there may be enough information online that it could still infer the right answer from recent training data. There's enough research published on the technology now that it could probably answer some questions about how LLMs work in general. But it can't answer questions about itself in the same way a person can.
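To make the "clean slate" part concrete: the underlying API is stateless, and the model only "remembers" whatever you resend in the messages list on each call. A quick sketch with the openai Python SDK (the model name and prompts are placeholders):

    # The API itself keeps no state between calls: if earlier messages aren't
    # resent, the model has simply never seen them.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "My name is Dana."}],
    )

    # A second, independent request: nothing from the first call is included,
    # so the model cannot know the name; it can only guess.
    second = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "What's my name?"}],
    )
    print(second.choices[0].message.content)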
I’ve found that ChatGPT rarely records anything in memory - and what it does record isn’t usually very helpful in the future. But maybe I’m using it much less than you.
The entire memory option more or less feels like a gimmick to me, aimed more at wowing the non-tech public and further anthropomorphizing LLMs.
In order for it to be useful at tech-related tasks, or honestly anything of any substance, it would need to store a lot more information, and store it differently. My assumption is that the memory is RAG-based, meaning, among other things, that its understanding of whatever is in there is somewhat superficial and context-limited.
A good implementation might treat every single chat message to and from GPT across all of your chats as memory items and RAG with that. But I’m guessing they’re trying to cut back on costs and that would be too expensive.
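Conceptually it would look something like the toy sketch below: brute-force cosine similarity over embeddings of every past message, with the top hits pasted into the prompt. The embedding model, the pretend history, and the helper names are all my own assumptions (and a real system would use a proper vector index), but it also shows why the point above about superficial understanding holds: only a few retrieved snippets ever reach the model, never the whole history.

    # Toy sketch of "RAG over every past message": embed each message once,
    # then pull the top-k most similar ones into the prompt for a new question.
    # Not how ChatGPT's Memory actually works; names and data are made up.
    import numpy as np
    from openai import OpenAI

    client = OpenAI()

    def embed(texts: list[str]) -> np.ndarray:
        resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
        return np.array([d.embedding for d in resp.data])

    # Pretend this is every message from every past chat, flattened.
    history = [
        "user: how do I profile a slow Django view?",
        "assistant: start with cProfile or django-debug-toolbar...",
        "user: come up with a backronym for the word FAST",
    ]
    history_vecs = embed(history)

    def top_k(query: str, k: int = 2) -> list[str]:
        q = embed([query])[0]
        sims = history_vecs @ q / (
            np.linalg.norm(history_vecs, axis=1) * np.linalg.norm(q)
        )
        return [history[i] for i in np.argsort(sims)[::-1][:k]]

    # Only these few retrieved snippets get prepended to the prompt,
    # which is why the model's grasp of your history stays shallow and partial.
    print(top_k("what programming topics do I usually ask about?"))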
i think it got a lot better at this when i started paying for it, fwiw
Oh I’ve been paying since day one
I asked it whether it has access to chat history outside of the Memory feature and it truthfully said it doesn't.

This is good to know, though I generally shy away from asking ChatGPT to describe its own capabilities because it’s shown time and again that it has no idea what those actually are. But based on the screenshots and explanations in the OP it does seem correct in this case.
I’ve been deliberately leaving the memory feature off because I assumed it just persisted my full chat history into the current context. If it worked that way it would be more harmful than good for the way I use it (and probably all use cases except for naive “companion” chatting). Now that I have a better understanding of what the feature actually does, I’m more inclined to experiment with it. OpenAI’s messaging about it could’ve been a lot clearer.
Edit: Huh, looks like Wes had a similar reply to my own, first. Proving I am probably just a bot, after all.
Also note that it will always explicitly tell you when it remembers something with a little "Memory Updated" tag above the answer. My only complaint is that there's no immediate way to undo that, so you'd have to remember to clear it out afterwards in the settings if it's something irrelevant.
Reminds me of a Telltale game. "ChatGPT will remember that."
I use it all the time (for programming questions) and it only recorded one thing I asked about once.
I find it tends to get triggered by "I" statements about preferences or things you're working on or interested in.
I've been using ChatGPT regularly for a couple of months now, so there are multiple Memories stored in it from across different chats (~50 or so). I tried the same prompt, and instead of a generalized answer, I got a very specific one with examples from the Memory. I can't share the specific examples it referred to, but I'm sharing a part of the response:

What you might not realize is how much of a systems thinker you are—you seem to enjoy weaving different ideas together into cohesive frameworks, whether you're working on storytelling concepts, content strategy, or essays. This systemic way of thinking could be a strength you haven't fully recognized or capitalized on yet. It’s a rare and valuable skill that can help you excel in both creative and analytical domains.

You naturally draw from diverse influences—literary theory, aesthetics, structure, and psychology. Instead of treating each of these as separate silos, you bring them together to create something integrated and multidimensional. That's systems thinking in action: understanding how different ideas or disciplines can interplay to create more cohesive, meaningful outcomes. This kind of thinking is highly adaptable and can be applied across creative, business, or even personal problem-solving contexts.
Now, the default ChatGPT model does lean towards flattery and compliments, so I do advocate for not taking everything it says at face value. But like with any other prompt you'll try, the more context it has, the less likely it is to bullshit and the more likely it is to actually come up with something useful.
I don't disagree with the author on the technical functionality of how ChatGPT works. Yes, ChatGPT won't remember everything you say in every conversation, but it's also incorrect to say that it can't come up with something based on what it knows via Memory. It can be a useful resource for exploration, not just a pseudo-horoscope (or, as I think of it, one of those personality tests). You can also ask it why it came up with a particular response and then decide whether the reasoning is valid or if it's reading too much into something. That's what I was thinking of adding to the original prompt from the article, but I didn't have to, since the first response already came with specific examples.
I hadn't come across systems thinking before, so I even asked it for resources, and it pointed me towards books, courses, and blogs (all real ones, I did double-check). I'm not going to claim to be the systems thinker ChatGPT already thinks I am, since I just learned about the term today, but at first glance it's a concept that interests me and I will read more about it.