I was going to have my own language model hosted, but with this it'll save me a bunch for those quick queries.
Ddg.gg is doing it much better than other search engines.
The one time I saw a quantitative analysis, it was actually very obvious that DDG mostly regurgitates Bing.
They were trying to evaluate whether Google really spits out less relevant-for-the-search-terms results than in years past.
Result: It does.
Second result: Everyone else also has that problem, and more so. So in other words, the SEO spammers have just learned enough to poison all indexes now. Google actually came out marginally on top (something like 28% worse for Google versus 35% worse for Bing/DDG).
I love DDG's visual design though, and how clean the page is. I just wish they'd use any other maps provider. I'd take HERE Maps over Apple. I'm tempted to say I'd take no maps over Apple, that's how bad Apple Maps is over here. 😅
I would love it if they just used OpenStreetMap. It would give OpenStreetMap some more visibility and maybe inspire some new contributors.
I love current AI! I test it by asking "How much characters does this text have?" and then I watch, quite amused, what comes out. I know these are language models and should be treated as "conversation type" and not mathematics, but when I throw such a sentence into a real conversation, I get an answer - it may be a simple "What?" or even "Are you crazy?", but nonetheless, I get an answer. The AI also gives me an answer. It has just failed every single time I've asked. It simply can't count.
I love that I can access multiple AI models from one page, and even more so that I don't need to register anywhere! I know ChatGPT exists in a newer version, but I don't care that much. This is just such a convenient way to test AI models!
Chat models don't see individual characters, they see "tokens", which are often a few characters or even a full word long. So they don't have enough information to give you an accurate answer.
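If you want to see what the model actually receives, here's a rough sketch using OpenAI's tiktoken library (assuming it's installed; cl100k_base is the encoding used by the GPT-3.5/4 family):

import tiktoken

# The tokenizer turns text into integer IDs; the model never sees raw characters.
enc = tiktoken.get_encoding("cl100k_base")
text = "How much characters does this text have?"
token_ids = enc.encode(text)

print(len(text))                             # 40 characters
print(len(token_ids))                        # far fewer tokens, roughly one per word here
print([enc.decode([t]) for t in token_ids])  # e.g. ['How', ' much', ' characters', ...]

So "how many characters" asks about information the model only ever gets in a compressed form.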
Then maybe it is not "intelligence"? Every human would react to my question in some way; someone might even ask whether I want the number of characters of the question or of the answer. Yet all four of those so-called AIs report bad answers. It's one thing when one says there are 30 characters and is off by 5; it's another when a different "AI" says there are 600 characters.
I get it - it must work on some basis (tokens) - but when it can't really count, it becomes unusable for many things. I'm still waiting for VIKI (from the I, Robot movie) or Skynet, as today's AI is just a laugh compared to what an AI should be in my eyes.
They're not trying to sell you AGI (which is what Skynet etc. would qualify as). AI can be as simple as automated linear regression. Back in the day optical character recognition was a huge milestone for AI. LLMs count as AI. The "Intelligence" in "Artificial Intelligence" doesn't mean they're replicating human thought. When you get down to it, AI is about the magic of a computer program working when the programmers don't know how it works. Normally when programming you need to have a pretty good understanding of the system to make any change successfully. But to have something like ChatGPT that has had billions or trillions of variables automatically tuned until it can recall some facts, rewrite text in pig latin, suggest code to solve a problem, etc. without any programmer ever doing anything but throwing data at it - that's pretty amazing. And it's useful already. I see people using it every day.
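To make the "automated linear regression" point concrete, here's a toy sketch (my own illustration, not from the comment): nobody writes the rule y = 2x + 1 into the program, the fitting procedure recovers it from data - the same idea as LLM training, just with two parameters instead of trillions.

import numpy as np

# Data generated by a rule the fitting code never sees (y = 2x + 1 plus noise).
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2 * x + 1 + rng.normal(0, 0.5, size=x.size)

# "Training": least squares tunes the two parameters automatically.
slope, intercept = np.polyfit(x, y, 1)
print(slope, intercept)   # roughly 2 and 1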
There are a lot of grifters out there that will try to stretch what GPT-4 is capable of and make it sound like we're about to get Skynet. You should be skeptical about those claims. But when I read your comment, or others like it, I suspect this is a bit of a knee-jerk reaction made out of fear. The current generation of this tech isn't quite good enough to unemploy large groups of people. But one more generational leap and I am confident it will.
I absolutely agree that it's amazing to have what we have today. I even understand that today's AI is not AGI.
The thing is that (this might be a bit of a stretch) the general public doesn't realize this. I use today's AI in my daily job (I just use it; other people made it usable) and I'm not anti-AI or afraid it will take people's jobs (in its current state it's just a really big algorithm from my point of view). The real problem is not the current state of AI but rather public opinion/thinking/feeling about it. People generally either don't think about AI at all or think today's AI is absolutely great/wonderful/the best - I just want to point out that it really isn't. Which doesn't mean that the progress already made isn't awesome!
Yet, by definition that means it passes the Turing Test. I'm starting to think the whole thing was a bit backhanded on Turing's part: a "ha, the average person doesn't recognize intelligence when they see it. Something talking to them is persuasive enough" kind of thing.
Although GPT-4, being a 1.8 trillion parameter model, can do pretty well at that one. The publicly available models are in the 7-70 billion parameter range.
Out of curiosity I ran this on ChatGPT-4o, and it returned:
The sentence "How many characters does this sentence have?" has 44 characters.
And it used the following to determine its answer:
sentence = "How many characters does this sentence have?"
character_count = len(sentence)
character_count
Yeah, to be fair, I didn't try on the newest one. But you can see how the other ones react for yourself.
Also worth noting - I asked in Czech. It was understood perfectly fine, as the answer was right on the conversational side but wrong on the counting side.
Funny, I just did the same, and it also used Python to get the answer. I then asked it to try again without Python and without using the knowledge that there were 40 characters, and it still got it correct. It repeated the sentence to me, inserting a dash after each character, so it felt like it knew what it was doing.
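The dash trick is easy to check mechanically, for what it's worth - a quick sketch (my own illustration, using the original 40-character question):

sentence = "How much characters does this text have?"

# Spell the sentence out character by character, like the model did with dashes.
spelled_out = "-".join(sentence)
print(spelled_out)                  # H-o-w- -m-u-c-h- ...

# One dash sits between each pair of characters, so dashes + 1 == character count.
print(spelled_out.count("-") + 1)   # 40

Once every character is written out separately, counting is trivial; it's the compressed token view that makes the direct question hard.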
Probably on purpose, they don't want to encroach on https://www.wolframalpha.com/ 's niche in the market.
Mine is "trick question, characters are countable".