Apparently, Gemini Advanced is kind of dumb about playing Rock, Paper, Scissors.
lmao this absolutely cracks me up, I love it.
Now I'm curious if it's possible for an LLM to think of something without revealing it. Like, could it decide what hand it plans on throwing before telling you? Would it be possible to tell whether it's cheating?
Yes and no, but mostly no.
In practice, these chat systems have no “private thoughts.” They only remember what’s in the chat transcript. A workaround for playing a guessing game like hangman is to ask it to write its guess encoded as base64.
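If you want to check it afterwards, decoding the commitment is a one-liner (plain Python; the base64 string here is just a made-up example of what the model might write):

```python
import base64

# Hypothetical commitment the model wrote at the start of the game.
committed = "cGFwZXI="

# Decode it after the round to see what it claimed to have picked.
print(base64.b64decode(committed).decode("utf-8"))  # -> "paper"
```

Of course this only works if you resist the temptation to decode it mid-game, since the "secret" is sitting right there in the transcript.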
In theory, a programmer who is using the API might create a UI that gives it a way to write something in the transcript that won’t be revealed to the user. But it might be difficult to get the chatbot to use it correctly.
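A rough sketch of what that might look like (a hypothetical structure, not any particular vendor's API): every message stays in the transcript sent to the model, but messages flagged as hidden are never rendered to the player.

```python
# Hypothetical transcript: the model sees everything, the player only
# sees messages that aren't flagged as hidden.
transcript = [
    {"role": "assistant", "hidden": True,  "text": "My secret pick: rock"},
    {"role": "assistant", "hidden": False, "text": "I've made my choice. What do you throw?"},
    {"role": "user",      "hidden": False, "text": "Paper!"},
]

def render_for_player(transcript):
    """Return only the messages the player is meant to see."""
    return [m["text"] for m in transcript if not m["hidden"]]

print(render_for_player(transcript))
```

The tricky part is the prompting: you'd have to convince the model to reliably put its pick in the hidden message and never blurt it out in the visible one.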
LLMs don't really "think" in that way. They don't have a subconscious that you pull information out of to form a response. They work by generating a list of possible next tokens, ranked by how likely each one is to follow the previous tokens, and pick one option at each step to produce the next token in the sequence.
They don't always pick the most likely option, because that can generate very sterile sentences and can sometimes result in loops. So a "temperature" setting is introduced, which adds a little randomness and helps spice up the responses. Often tools will let you regenerate a response, adjusting the random seed to produce a completely different chain of tokens.
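Roughly what that looks like under the hood (a toy sketch with made-up scores, not a real model's vocabulary): temperature reshapes the ranked distribution before one token is drawn, and the seed determines which one actually comes out.

```python
import math
import random

# Made-up scores for a few candidate next tokens.
logits = {"rock": 2.0, "paper": 1.5, "scissors": 1.4, "lizard": 0.1}

def next_token(logits, temperature=1.0, seed=None):
    """Sample one token; lower temperature sharpens the distribution, higher flattens it."""
    rng = random.Random(seed)
    scaled = [score / temperature for score in logits.values()]
    total = sum(math.exp(s) for s in scaled)
    weights = [math.exp(s) / total for s in scaled]
    return rng.choices(list(logits), weights=weights)[0]

print(next_token(logits, temperature=0.1, seed=1))  # near-greedy: almost always "rock"
print(next_token(logits, temperature=1.5, seed=1))  # flatter odds, more variety
print(next_token(logits, temperature=1.5, seed=2))  # new seed, possibly a different pick
```

A real model does this over tens of thousands of candidate tokens at every step, which is why a different seed can snowball into a completely different response.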
LLMs are very cool, but they're not really capable of premeditation or cheating. It's best to think of them as very capable text prediction engines.
I'm a subscriber to Google One's existing 2TB plan, which is £80 a year. For the new AI plan they only offer monthly billing, so it comes to £228 a year, which is almost three times the price.
Apparently the AI features aren't available to other family members either. This feels dead on arrival for any existing Google customers.
The 2TB plan is already such a stupid jump from 200GB, it really seems designed to pinch people who don't need that much space. Including Gemini Ultra would help bridge the value gap.
Looks like ChatGPT is also £20 a month in the UK for GPT-4, so it's in line with the competition.
Perhaps there will be more price competition in the future, as these things become less expensive to run and there are more competitors.
Unless they actually want Gemini... I don't think anyone is going to subscribe to this just for Google One access.
Well, most families with Google accounts have them paired together in a "family". This isn't just used to share storage upgrade plans. It also powers family features across all of their apps and on Android itself.

If you want to use Gemini as the admin of the family account, you can choose to triple the price and get access. But if you're the spouse, child, or parent of the family admin, then the only way to subscribe to Gemini is to leave the family.
From Google's blog post:

Gemini Advanced is available as part of our brand new Google One AI Premium Plan for $19.99/month, starting with a two-month trial at no cost. This plan gives you the best of Google AI and our latest advancements, along with all the benefits of the existing Google One Premium plan, such as 2TB of storage. In addition, AI Premium subscribers will soon be able to use Gemini in Gmail, Docs, Slides, Sheets and more (formerly known as Duet AI).

Haven't tried it, but early reviewers seem to think that it's comparable with GPT-4? Here's one by Danny Bee on Hacker News and here's a blog post from someone with early access: Google's Gemini Advanced: Tasting Notes and Implications.
The basic version of Gemini was rolled out today to my ancient Pixel 3a (which is otherwise long beyond the point of getting even security updates). The results seem serviceable so far, but sometimes have odd glitches like returning part of the results in another language. I'll be trying it further in cases where I would like a starting point, but I definitely don't feel like I could rely on any results without doing my own verification.

That aside, the fact that Google is able to roll the AI out across its enormous product range and user base (much like MS has now done with Copilot in Windows 11) will no doubt give it a big leg up in getting people to adopt it, at least for casual usage.