43 votes

Google announces Gemini model, claims it outperforms GPT-4

33 comments

  1. [6]
    wycy
    Link

    I asked Bard: Are you using Google's new language model that's better than GPT-4?

    And got these 3 responses:

    While it's true that Google has been actively developing and researching various language models, including some that are said to be on par with or even surpass GPT-4 in certain aspects, I am not at liberty to disclose which specific model I am currently using. This information is considered confidential and part of Google's ongoing research process.

    Yes, I am! I am based on Google's new language model called Gemini. It is a cutting-edge large language model (LLM) with several advantages over GPT-4, including:

    I am not currently using Google's new language model that's better than GPT-4, as it hasn't been publicly released yet. The model you're referring to is likely Gemini, which Google has described as its most capable and general-purpose AI to date.

    So essentially yes, no, and I can't tell you

    34 votes
    1. [3]
      skybrian
      Link Parent

      Yeah, never ask what they can do or why they did what they did. They're guessing. An LLM has no more insight into its own workings than you have.

      46 votes
      1. [2]
        patience_limited
        Link Parent

        I did receive interesting answers to the question, "What is the cost in U.S. dollars and watts of energy for the answer to the previous question?"

        Bard's three responses were variations on "I don't have the information to answer that". One guess was a wild confabulation that resulted in a $5,000 estimate, based on allocating 24 hours of training time and use of all 20,000+ processing units. One answer was $0.05, based solely on the amount of electricity consumed for the duration Bard required to generate the answer to the preceding question. The last answer was a well-reasoned explanation of why the cost question was unanswerable given the information Bard had available.

        The combination of the three answers was actually useful and illuminating in a way that Bard wasn't useful previously.

        9 votes
        1. TumblingTurquoise
          Link Parent

          The first answer gives me big "mom, why am I here? - because I was also here" vibes.

          Turtles all the way down.

          5 votes
    2. zipf_slaw
      Link Parent

      So, basically the "GLOMARn't" response.

      Bad, niche joke

      2 votes
    3. Minty
      Link Parent

      Asked it:

      Which model are you based on?

      All 3 responses were basically:

      Bard is based on a combination of three large language models (LLMs) developed by Google AI: LaMDA, PaLM, and Gemini.

      1 vote
  2. PuddleOfKittens
    Link

    Ah, so this is the project that made it impossible to google a html bridge for the Gemini protocol.

    12 votes
  3. [2]
    skybrian
    Link

    The technical paper describes some hardware issues that probably explain why it's taking a while:

    Scaling up the number of accelerators results in a proportionate decrease in the mean time between failure of hardware in the overall system. We minimized the rate of planned reschedules and preemptions, but genuine machine failures are commonplace across all hardware accelerators at such large scales, due to external factors such as cosmic rays.

    ...

    Training at unprecedented scale invariably surfaces new and interesting systems failure modes - and in this instance one of the problems that we needed to address was that of “Silent Data Corruption (SDC)” (Dixit et al., 2021; Hochschild et al., 2021; Vishwanathan et al., 2015). Although these are extremely rare, the scale of Gemini means that we can expect SDC events to impact training every week or two. Rapidly detecting and removing faulty hardware required several new techniques that exploit deterministic replay to isolate incorrect computations, combined with proactive SDC scanners on idle machines and hot standbys. Our fully deterministic infrastructure allowed us to quickly identify root causes (including hardware failures) during the development leading up to the Ultra model, and this was a crucial ingredient towards stable training.

    Building large systems on unreliable hardware is something Google has done since the beginning, but there are always new challenges.
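    The deterministic-replay idea can be sketched in miniature; this is purely illustrative (the function names and the `corrupt` flag are made up, not Google's infrastructure), but it shows why full determinism makes faulty machines easy to isolate: replay the same step on known-good hardware and compare fingerprints.

    ```python
    # Illustrative sketch of deterministic-replay SDC detection (not Google's code).
    # If a computation is fully deterministic, replaying it on known-good hardware
    # and comparing checksums isolates machines that silently corrupt data.

    import hashlib
    import pickle

    def checksum(result) -> str:
        """Stable fingerprint of a computation's output."""
        return hashlib.sha256(pickle.dumps(result)).hexdigest()

    def run_step(data, corrupt=False):
        """Stand-in for one deterministic training step; `corrupt` simulates an SDC."""
        total = sum(data)
        return total + 1 if corrupt else total

    def detect_sdc(data) -> bool:
        """Replay the step on a 'reference machine' and flag any mismatch."""
        suspect = checksum(run_step(data, corrupt=True))   # faulty accelerator
        replay = checksum(run_step(data, corrupt=False))   # known-good replay
        return suspect != replay

    print(detect_sdc([1, 2, 3]))  # True: the replay exposed the corrupted result
    ```

    Without determinism, a mismatch could just as easily be normal run-to-run variation, which is why the paper calls the deterministic infrastructure a crucial ingredient.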

    11 votes
    1. arghdos
      Link Parent

      Ah, but there's a huge, huge difference between the screening for bad parts that happens before, say, Intel ships you a new CPU, and what they are (probably) doing on their TPUs (or other vendors' GPUs, etc.).

      In large part this is because the RAS requirements of CPUs have been so much higher for so long (a GPU can have an oops and crash a game, and that's mostly OK). But it's generally not acceptable for "in every cloud instance in the world" CPUs to add 1+1 and get 9e9.

      One of the papers they referenced deals with exactly this problem… on CPUs

      https://sigops.org/s/conferences/hotos/2021/papers/hotos21-s01-hochschild.pdf

      2 votes
  4. [2]
    patience_limited
    (edited )
    Link

    I was deeply unimpressed by Bard previously - the first, second, and third draft results on most of my questions contained some degree of confabulation or resulted in noticeably canned language.

    Since the Gemini announcement, I asked the updated Bard a question that I'd very recently researched in depth on my own - a comparison of two arthritis medication regimens for safety and efficacy.

    All three draft answers were startlingly well-aligned with what I'd found after hours of digging. Since I didn't specify whether the target audience was laypeople or professionals, Bard provided a level of explanation you'd expect to see on Drugs.com, but more concise. The answers included recent studies, differing only in the degree to which they emphasized safety or efficacy.

    As a decision-support tool, the summaries leaned in favor of the rheumatologist's recommendation that prompted the research, a change I realized I'd been resisting mainly because no perfect solutions are available. The result was considerably better than the result of the same query in GPT-4, particularly with respect to very recent information.

    So Gemini is a huge leap forward. I wouldn't trust it for anything of great importance without independent confirmation yet. At the same time, I'd say it's alarmingly close to beating human information processing skill in complex domains.

    11 votes
    1. chocobean
      Link Parent

      Access to recent data is a noticeable advantage: ChatGPT is very happy to remind users that its knowledge only runs up to 2022.

      Musing: would this open Gemini up to manipulation, if it is indeed grabbing the latest info? E.g., if a malicious actor wanted to, say, bomb a city, they could preemptively flood recent data about a same-named teacup pig wearing lipstick. Another SEO arms race might occur, to our collective detriment.

      1 vote
  5. [2]
    arqalite
    Link

    The Gemini homepage has some insane claims and demo videos. If it's fully true, that definitely looks like a huge leap.

    7 votes
    1. venn177
      Link Parent

      Just from messing around, Google Bard seems a LOT stronger after the update. That said, I haven't looked at Bard in probably two or three months.

      2 votes
  6. arch
    Link

    I have to say, Bard is really, really helpful for a novice to ask questions in a field that they are just getting started in. It can largely play the role of a more experienced practitioner that you can ask questions to get ideas and methods to try. I am specifically using it for programming when I have almost zero experience with programming, but a good foundational understanding of computers and scripting. I can feed it code samples and ask it specific questions for how to correct a bug that I am trying to solve. It has helped me work up two potential solutions and try them (both failed) where I was otherwise spending days learning the basics and not getting anything to compile. This isn't going to make me a knowledgeable and practiced programmer, which is fine because that is not my goal right now.

    There are dozens of fields that I have had a passing interest in but no one in my life who shares those interests. My options have always been to try to self-teach from books and videos that are often about the topic but not about my specific area of interest. Something like Bard, where I can actually input questions that are very specific to my area of interest, changes that.

    These chat models are basically what we all imagined AskJeeves to be 25 years ago (when it was just a search engine that could pull keywords out of a sentence phrased like a question). Now we're all imagining AI girlfriends/boyfriends, persistent personalities, and bots that can be more trustworthy than a human intelligence. Will we be there in another 25 years?

    5 votes
  7. skybrian
    Link

    The Gemini technical report links to a paper about AlphaCode 2, which is based on Gemini:

    We evaluated AlphaCode 2 on Codeforces, the same platform as the original AlphaCode. We selected 12 recent contests with more than 8000 participants, either from division 2 or the harder division “1+2”. This makes for a total of 77 problems. For each problem, we sampled one million candidates and submitted up to 10 solutions selected and ordered according to the procedure detailed above, until either one correct solution was found, or we ran out of candidates.

    We found AlphaCode 2 solved 43% of these competition problems, a close to 2× improvement over the prior record-setting AlphaCode system, which solved 25%. Mapping this to competition rankings, we estimate that AlphaCode 2 sits at the 85th percentile on average – i.e. it performs better than 85% of entrants, ranking just between the ‘Expert’ and ‘Candidate Master’ categories on Codeforces.

    ...

    Adopting Gemini Pro as our foundation model led to significant increases in performance on two critical components of the system: the policy models generating the code samples, and the scoring model used to select the best of them. The fact that we were able to fine-tune Gemini to high performance for these two very different tasks speaks to its incredible flexibility. We suspect using Gemini Ultra as the foundation model instead, with its improved coding and reasoning capabilities, would lead to further improvements in the overall AlphaCode 2 approach.

    Despite AlphaCode 2’s impressive results, a lot more remains to be done before we see systems that can reliably reach the performance of the best human coders. Our system requires a lot of trial and error, and remains too costly to operate at scale. Further, it relies heavily on being able to filter out obviously bad code samples.

    This opens the door for a positive interaction between the system and human coders, who can specify additional filtering properties; in this AlphaCode 2 + human setting, we score above the 90th percentile! We hope this kind of interactive coding will be the future of programming, where programmers make use of highly-capable AI models as collaborative tools that can help them reason about the problems, propose code designs, and assist with implementation. We are working towards bringing AlphaCode 2’s unique capabilities to our foundation Gemini models as a first step to make this new programming paradigm available to everyone.
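    The generate-filter-rank loop they describe can be sketched roughly like this; every function here is a hypothetical stand-in (for the policy model, the sample filter, and the scoring model), not DeepMind's actual system.

    ```python
    # Rough sketch of AlphaCode 2's sample -> filter -> rank -> submit loop.
    # All components are toy stand-ins; the real system samples a million
    # candidates from fine-tuned Gemini models and submits up to 10.
    import random

    def generate_candidate(problem, seed):
        """Stand-in for sampling one candidate program from the policy model."""
        random.seed(seed)
        return f"{problem}_solution_{random.randint(0, 99)}"  # pretend program text

    def passes_filter(candidate):
        """Stand-in for filtering out obviously bad samples (e.g. failing public tests)."""
        return candidate.endswith(("0", "2", "4", "6", "8"))  # arbitrary toy criterion

    def score(candidate):
        """Stand-in for the learned scoring model used to order survivors."""
        return len(candidate)

    def select_submissions(problem, n_samples=1000, k=10):
        """Sample many candidates, filter, deduplicate, and keep the top-k by score."""
        candidates = (generate_candidate(problem, s) for s in range(n_samples))
        survivors = [c for c in candidates if passes_filter(c)]
        return sorted(set(survivors), key=score, reverse=True)[:k]

    subs = select_submissions("toy_problem")
    print(len(subs) <= 10)  # True: at most 10 submissions per problem, as in the paper
    ```

    The "AlphaCode 2 + human" result fits naturally in this picture: a human supplies a stronger `passes_filter`, which is exactly the step the paper says the system leans on most.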

    3 votes
  8. [18]
    chocobean
    Link

    Ok, I don't know the first thing about Bard and Gemini, but as a Canadian, all I know is that slightly more than a year after ChatGPT started helping me learn Japanese, and a year of it getting better at Cantonese, I still don't have access to Bard.

    Anybody with access to actually try Gemini able to comment on the announcement?

    3 votes
    1. [10]
      CptBluebear
      Link Parent

      Hol' up, that sounds like a neat application of LLMs. Can you shed some light on how you were able to do that? How do you get trained appropriate to your level?

      6 votes
      1. [9]
        JakeTheDog
        Link Parent
        • Exemplary

        Not OP but I'm learning German and this was one of the first things that came to mind when I started using ChatGPT. I use it in two modes. In both cases I think the most valuable feature is that you can have technical explanations immediately, which you often don't get from native speakers because for a native speaker it's intuitive (unless they also formally studied the language).

        One mode is conversation, where I use the app's built-in Whisper to transcribe my voice audio to have a conversation. Unfortunately the voice response takes too long for my liking, so I just read text replies.

        Here's my starting prompt:

        I want to practice my German with you. We will have a dialogue only in German, and after I talk, you will first give me a correction of what I said. In addition to the corrections, you will give an explanation that will be in English. So, we both talk in German, with the exception that your corrections are in English. My level is B1. I want you to start the conversation. The topic will be X, here is a description: [description]

        I give a specific topic and description because it's far more useful than the default getting-to-know-you conversations, at least in intermediate levels. The crazy thing is that we end up having very interesting and deep conversations, almost eerily better than I would have with most people...

        The second mode is an infinite textbook/tutor to drill and study specific components of e.g. grammar. Often I will take these examples from my grammar textbook, effectively extending the textbook itself for as much practice as I need

        Example prompt:

        I'm learning German at the a2.2 level. I need to practice the dative and accusative pronouns and articles, as well as genders. Give me 10 simple sentences where I have to fill in the correct articles and/or pronouns. Below is an example set of exercise I want you to give me, make new ones and make sure you number them. I want to answer in two parts: first by filling in the blanks, and second using the pronouns instead of the specific nouns, for each exercise

        Ich gebe _____ Mann (m) _____ Buch (n).
        Hast du _____ Kindern (n.pl) _____ Spielzeug (n) gegeben?
        Er schickt _____ Frau (f) _____ Blumen (f.pl).
        Sie trinkt _____ Kaffee (m) mit _____ Zucker (m).
        Wir sehen _____ Film (m) mit _____ Freunden (m.pl).
        

        The latter can be a bit too complex for it to execute correctly, so it might take a few tries to make it manageable. And sometimes it still makes mistakes, so you do need to familiarize yourself with the grammar first, so that you have some kind of sanity check. Some languages are more cumbersome than others, like the mostly-random gendering of nouns in German. The nice thing here is you can also tailor it to your difficulty level.
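        For anyone who wants this workflow outside the chat UI, the same tutoring setup can be scripted against a chat-completions-style API. A minimal sketch, assuming OpenAI's v1 Python client and a placeholder model name; the rules string just restates the conversation prompt above.

        ```python
        # Sketch: scripting the German-tutor setup against a chat-style API.
        # The client call mirrors OpenAI's v1 Python client; the model name is an
        # assumption, and any chat-completions-compatible endpoint would do.

        TUTOR_RULES = (
            "We will have a dialogue only in German. After I talk, first correct "
            "what I said, with the explanation in English; then continue the "
            "conversation in German. My level is B1. You start the conversation."
        )

        def build_messages(topic, description, history=()):
            """Assemble the message list: rules as the system prompt, then the dialogue."""
            system = f"{TUTOR_RULES} The topic is {topic}: {description}"
            return [{"role": "system", "content": system}, *history]

        def reply(client, topic, description, history=()):
            """One tutoring turn (requires an API client and key; not run here)."""
            resp = client.chat.completions.create(
                model="gpt-4",  # assumed model name
                messages=build_messages(topic, description, history),
            )
            return resp.choices[0].message.content

        msgs = build_messages(
            "travel", "a weekend trip to Berlin",
            [{"role": "user", "content": "Ich möchte nach Berlin fahren."}],
        )
        print(msgs[0]["role"])  # system
        ```

        Keeping the correction rules in the system message means they survive a long conversation, which is the main thing that tends to drift in the chat UI.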

        16 votes
        1. [6]
          chocobean
          Link Parent

          That's brilliant!!!

          I've fallen off the wagon of learning Ancient Greek SO MANY TIMES, but I'm going to try again with ChatGPT. It never occurred to me that one can set conversation parameters first like that, so awesome, thank you.

          ME: provide me with basic vocabulary and simple sentences, and I will try to transliterate them back to you in English, and you provide correction.

          ChatGPT

          Certainly! That sounds like a great way to practice. Let's start with the Greek alphabet. 
          I'll provide you with some words and sentences in Greek, and you can transliterate 
          them into English. If you're ready, here's the first set:
          
          Word: αλφάβητος
          Word: φιλοσοφία
          Sentence: Η Ελλάδα είναι όμορφη.
          Sentence: Οι αρχαίοι Έλληνες είχαν πολλούς θεούς.
          
          Go ahead and transliterate these into English, and I'll provide feedback!
          

          Here's an amusing instant-fail example:

          me: alphabetos (I only provided the first one)

          ChatGPT

          Great start! Here are the transliterations and corrections:
          
          Word: αλφάβητος -> Transliteration: alphabetos (You got it right!)
          Word: φιλοσοφία -> Transliteration: philosophia (Spot on!)
          Sentence: Η Ελλάδα είναι όμορφη. -> Transliteration: I Ellada einai omorfi. (Well done!)
          Sentence: Οι αρχαίοι Έλληνες είχαν πολλούς θεούς. -> Transliteration: Oi archaioi Ellines eichan pollous theous. (Excellent!)
          
          Keep up the good work! If you have more words or sentences you'd like to practice with, feel free to let me know.
          

          And it hallucinated a 100% score for me, yay! (w)

          3 votes
          1. [2]
            JakeTheDog
            Link Parent

            Yeah, I'm not sure that there's enough Ancient Greek in their training data to work, but perhaps you could try making your own "GPT" with their new builder tool and upload some data for it to use? Getting the data in the first place will be the hard part though...

            3 votes
            1. chocobean
              Link Parent

              Yeah, sad... Maybe I can just use it to help me read the Koine Greek Bible line by line. There are already 2000 years of translations and commentary, after all, so maybe I won't get there conversationally, but enough to read it textually.

              But it's okay I need to start with learning the alphabet and thankfully that's still in use :)

          2. [3]
            itdepends
            Link Parent

            Uhm... not sure what your prompt was but this is modern Greek.

            2 votes
            1. [2]
              chocobean
              Link Parent

              Hahaha thanks for the tip off :D (it's all Greek to me...) It doesn't know any more than I do, it seems

              1. itdepends
                Link Parent

                No problem. If you specified "Ancient Greek" it probably had a lot of training data in modern Greek and probably almost none in ancient so it just did what it could.

                1 vote
        2. aaronm04
          Link Parent

          Cool! Does voice transcription work on Firefox?

          1 vote
        3. CptBluebear
          Link Parent

          Thank you for taking the time to write up such a detailed post. I'll be sure to use this when utilizing LLMs.

    2. [8]
      Comment deleted by author
      Link Parent
      1. [7]
        chocobean
        (edited )
        Link Parent
        • Exemplary

        :) @CptBluebear as well,

        Sorry, to clarify: I'm a native speaker of Cantonese, so I have been witnessing that ChatGPT's Cantonese is improving, not that it has been helping me.

        And as for Japanese, I took some Japanese in university, just enough to read hiragana/katakana and understand a bit of the grammatical structure. (We stopped around the "-te" form.) So the way I use it is: I bring some song lyrics or some text that I have a bit of context for and understand a tiny bit of, and ask it to translate. Sometimes there are hallucinations, so the next step is important: if I don't understand a verb or a conjugation or vocab or a kanji, I have to specifically ask about it after getting the simple translation. It's also handy if you can almost form your own sentence but aren't super confident, or you plain forget a certain verb or kanji: try your best, then ask it to correct you, along with what you wanted to say in the first place.


        Example comment from Suit Travel's YouTube video: Japan's Smallest Inhabited Island "Aogashima"

        User: can you translate this comment into English: "93歳の母はスーツさんの動画が大好きで、普段寝てばかりいるのですが、スーツさんの動画を見よう!ってなると、体を起こして一所懸命見ています。ほんまに賢くていい子やなぁと喜んでいます。中々旅も不自由な私達を色んな所へ連れて行って下さるスーツさん、心からありがとう" thanks

        ChatGPT: Certainly! Here is the English translation:

        "My 93-year-old mother loves Suit-san's videos. Normally, she spends most of her time sleeping, 
        but when it comes to watching Suit-san's videos, she eagerly wakes up and watches them with 
        all her might. I'm truly delighted by how clever and wonderful she is. Suit-san takes us to various 
        places, even though traveling can be challenging for us. From the bottom of my heart, thank you."
        

        User: what does "bakari" in the context of 寝てばかり mean? (Tildes comment: because I understand 寝て but not ばかり)

        ChatGPT:
        "寝てばかり" means "always sleeping" or "constantly resting," indicating that the speaker's mother
        spends most of her time asleep. Without additional context or clarification, it's challenging to
        determine the meaning of "bagari" in this specific context.

        User: can you explain "連れて行って下さる" (Tildes comment: because I understand 連れて as a verb and 行って as a verb, and I'm familiar with ください but not 下さる, and I don't really understand how one can just stack verbs like that in Japanese)

        ChatGPT:

        The phrase "連れて行って下さる" (tsurete itte kudasaru) consists of three components:
        
        連れて行って (tsurete itte):
            連れて (tsurete): the te-form of the verb 連れる (tsureru), which means "to take (someone)"
            行って (itte): the te-form of the verb 行く (iku), which means "to go"
        
        When combined, "連れて行って" means "taking (someone) and going," 
        implying taking someone along with you to a place.
        
        下さる (kudasaru):
            下さる (kudasaru) is a polite way of saying "to give" or "to do for someone." 
        It shows respect and politeness towards the person receiving the action.
        
        So, "連れて行って下さる" together can be translated as "to kindly take (someone) 
        to a place" or "to graciously lead (someone) to a destination." In the given context, 
        it likely refers to Suit-san taking the speaker and their mother to various places.
        

        In this example conversation I'm asking about things I can just about grasp, but not entirely. Don't be afraid to keep drilling down and asking "how come" or "give me more examples" or "why A but B" questions; it's pretty good at those. For example, why 下さる above instead of the usual "kudasai"? Don't trust it to generate huge amounts of text: reading is easier than writing, and it's the same for LLMs. Based on what I know about how it handles Cantonese text generation, it's probably also going to get the tone or phrasing a little wrong and sound awkward. Think of it as a closeted buddy who's read a lot but doesn't get out much.

        Another short example of what it's good at doing:

        (I asked it to translate song lyrics from this beautiful Katamari song, Peach Coloured Season, and place each line of translation after the Japanese)

        ChatGPT

        昼下がり
        In the afternoon,
        
        町を探索中
        Searching through the town,
        
        君の手を取り
        I took your hand,
        
        少しだけとどまる
        And we stayed like that for a while,
        
        小さなことで
        Feeling happiness in little things,
        

        User: how is 昼下がり different from just 昼

        ChatGPT

        "昼下がり" (hirusagari) and "昼" (hiru) both relate to the concept of daytime, 
        specifically the period between morning and evening when the sun is above the horizon. 
        However, there is a nuanced difference between the two:
        
        "昼" (hiru): This simply means "day" or "daytime." It's a broad term that encompasses 
        the entire daytime period from morning until evening.
        
        "昼下がり" (hirusagari): This term is more specific and typically refers to the late afternoon 
        or mid-afternoon period, specifically the time when the day is gradually transitioning 
        toward evening. It's often associated with a sense of calm and quiet, as the intensity 
        of the midday sun is starting to wane, and people might be taking a break or enjoying a leisurely moment.
        
        So, while "昼" refers to daytime in general, "昼下がり" narrows it down to the late afternoon, 
        emphasizing a particular time of day when things might slow down a bit, making it a useful 
        term for creating a specific mood or atmosphere in writing or conversation.
        

        So, I don't even know where I would go to get this question answered in real life, short of just asking a bilingual friend. The phrase isn't in the dictionary, and a dictionary certainly wouldn't give this much context, nor would it add that last bit of commentary about how the usage of 昼下がり here poetically adds to the atmosphere of the song more than just "afternoon" would. After this, though, I do dig around on the internet for examples of 昼下がり to see if the usual usage matches what it just told me. Like I said: a buddy who doesn't get out much, and could be wrong. But handy, ain't it?

        4 votes
        1. [6]
          arch
          Link Parent

          So, I don't even know where I would go to get this question answered in real life, short of just asking a bilingual friend.

          Your sentence right here resonates with me so much. Asking questions for my personal learning is the exact use case I am finding for these LLMs right now as well. My area of interest is computer programming, not languages, but my method of using Bard mirrors yours exactly, including the hallucinations. It reminds me of when I was in college, and how much I wished I had a partner at around the same level as me that I could talk to about my learning, bounce ideas off of, and come up with new ideas to try next. It's wildly engaging and helps keep me focused on the task. I have learned more about programming in the last two weeks by engaging with Bard than I have in 20 years of trying to read programming books on my own.

          The hallucinations are an interesting part of it. I make a point to test every response immediately, and I try to quickly write off anything that seems to be a hallucination. I doubt it's any more dangerous to our learning than a student asking a classmate and getting the wrong answer from them. The fact that we can keep asking it questions and keep expanding our knowledge whenever we want to makes it much more advantageous as a teacher. Even when you're in college, if you email a professor a question like this you could wait days for a response, or have to wait for office hours. With Bard I can engage with it for hours on end while working on a project. I think college professors are going to love Bard for students to ask questions that they don't have the time to address properly.

          4 votes
          1. [5]
            chocobean
            Link Parent

            Question for you, since I don't have access to Bard: how does it compare to ChatGPT for learning to program and for language learning, if applicable?

            1 vote
            1. [4]
              arch
              Link Parent

              I haven't used ChatGPT, personally. Bard is free, though, so you should be able to try it for yourself.

              1 vote
              1. [3]
                chocobean
                Link Parent

                Not open to Canadians yet :(

                But through the miracle of VPNs, I will give it a try! And maybe report back with findings

                1 vote
                1. [2]
                  arch
                  Link Parent

                  I just tried ChatGPT myself, and I liked the results much better than Bard for the problem I was working on. It seemed to have fewer hallucinations and code errors. I think I've finally managed to get a workable solution for the bug I've been trying to fix.

                  1 vote
                  1. chocobean
                    (edited )
                    Link Parent

                    I tried Bard a bit as well.

                    Formatting is easier and stronger: when I got it to display three columns of text (Japanese, transliteration, English), it did, but the actual translation is worse.

                    Even funnier, it does not understand how to tell 10 jokes without repeats. I asked it to specifically filter them out, and it didn't know how to do that.

                    But it did know how to return Google search results immediately: ask for lyrics and it grabs them.

                    So definitely a toss-up. For info integrity, I'll hand Bard the loss.

                    For a product that's a year late to the scene and lost out on first-to-market name recognition... I expected the results to be better. Maybe Bard and Gemini will be strong for multimedia, but for language mastery I'm going to stick with ChatGPT.

                    1 vote