34 votes

Google warns its own employees: Do not use code generated by Bard

31 comments

  1. [24]
    Habituallytired
    Link

    My favorite part of that article is how Google doesn’t want a federal regulatory agency for AI. That means we need to establish one immediately and start implementing some strict controls for it.

    I feel vindicated advising my tech friends not to use AI to make code when I had no idea how either thing works. I was strictly thinking about it like don’t let another company see your private code/proprietary information, but it seems like there are other issues that these programs can introduce as well.

    38 votes
    1. [18]
      negidius
      (edited)
      Link Parent

      I strongly disagree with AI regulations. Google would probably benefit, as they could afford the costs of compliance or avoidance, but smaller competitors could be shut down. Regulations create monopolies, and it would be incredibly dangerous and harmful for Google and Microsoft/OpenAI to have a duopoly over AI.

      Another danger is that regulations could prevent research from being conducted in the open, meaning large corporations and governments who can circumvent the regulations would be the only ones who could conduct effective research, and could develop AI technology unhindered while everyone else is constrained. This could, for example, lead to a situation where a government or corporation could create deepfake footage that is not just beyond everyone else's detection capabilities, but beyond what everyone else believes to be possible.

      AI could be one of the most beneficial technologies ever developed if it remains widely available, but it could become the single most harmful and dangerous technology if access is restricted to only the most powerful.

      10 votes
      1. [13]
        mat
        Link Parent

        Yes the evidence so far very much suggests that letting tech firms (or really anyone) swan about doing whatever they like to make money without any regulation has worked out really well.

        I don't really need a /s tag for that, do I?

        Personally I don't really get all the fear about AI, nor do I believe it's going to save us all (the hype is precisely the same as every other new tech of the past 30 years that was going to change everything and didn't) but the reason we have regulation is that you simply cannot trust businesses, of Google size or otherwise, to do the right thing for humans. Look at literally every unregulated sector for proof. Every possible corner cut for profit. Every possible avenue exploited as much as possible. The entire reason regulatory agencies exist is that without them the air is full of smoke, the rivers full of chemicals, cars have no airbags and so on and so on. AI is no different to all that has gone before in that regard.

        35 votes
        1. [10]
          negidius
          Link Parent

          I don't trust Google to do the right thing. That's one of the reasons I oppose regulations. I don't want Google to have a monopoly. I want Stability AI and other smaller entities to offer alternatives, so I don't have to rely on large corporations like Google and Microsoft.

          You cannot trust governments (or corporations) to do the right thing. You can open almost any page in most history books and newspapers and read about the atrocities committed by governments in pursuit of power or profit. All the worst atrocities in human history fall in this category.

          One of the main reasons regulatory agencies exist is because large corporations want to remain large, so they lobby governments to create barriers to entry so that they can continue to exploit captive consumers without having to worry about competition.

          4 votes
          1. [3]
            mat
            Link Parent

            Why would having regulation create a monopoly?

            The car industry is heavily regulated for obvious safety reasons, yet plenty of car manufacturers exist. Lots of industries have regulations and manage just fine. Businesses just have to operate by the rules, that's all.

            I don't agree with this peculiarly American view of government as being a distinct thing from society. Government is the executive arm of society, not a discrete thing that acts in its own interests. I get that for Americans, who live under a particularly corrupt administration, it might not seem that way - but one's government is exactly as good as the society that elects it. Many governments around the world are more people-focussed and perfectly capable of acting in the interests of the electorate rather than a few corporate interests. The EU, for example. Pretty good. Not perfect by any means. But certainly better than just letting business run free, which is almost always a terrible idea. Government is better than business most of the time, so if I have to choose which is in charge of my safety, I'm choosing the state every single time.

            25 votes
            1. [2]
              negidius
              Link Parent

              Regulations create monopolies by increasing risks and barriers to entry. Car regulations certainly do that to some extent, but most car manufacturers were established when the industry was in its infancy. This means that there is a somewhat level playing field, and the corporations are more focused on using regulations to give themselves marginal advantages over their competitors, rather than closing the market to new competitors. The marginal cost to a car manufacturer of gaining another competitor is not high enough to justify focusing on enacting barriers to entry. Instead, they focus on regulations that give them an advantage or disadvantage compared to their competitors. An example of an effect of this kind of lobbying could be regulators flip-flopping between promoting and penalizing diesel cars despite no change in the evidence regarding environmental and health harms.

              To see governments as being distinct from society is not a uniquely American view. I'm not American, and I recognise that governments are just groups of people. Just like corporations, gangs, churches, and all other varieties of groups, the people in governments respond to incentives and act in their self-interest most of the time. It makes no more sense to say that a government is "the executive arm of society" than to say the same about a street gang. Your view becomes especially strange when we consider governments who murder their citizens. Are those people being killed by their society rather than the people who are actually killing them? If the government are "we", are we committing suicide when it kills us?

              I think it's simply factually incorrect to say that "government is better than business most of the time". Consider all the worst atrocities in our history. One thing they have in common is that they were committed by governments. As I said in another comment, this isn't to say corporate executives are necessarily more moral than their counterparts in government. It could well be that they would commit the same atrocities if they had the same capacity for violence. The point is that they don't have the same independent capacity for violence, and it makes sense to trust the psychopath with a baseball bat over the psychopath with a machine gun, even if the latter usually does what the former wants anyway.

              4 votes
              1. sparksbet
                Link Parent

                The principal barrier to AI research on the scale of LLMs is the cost of training such huge models, and tech giants already have a monopoly in that regard. I think you're severely overestimating how much further regulation would affect the barrier to entry to the field. Tech companies below the scale of FAANG are typically fine-tuning underlying models that were created at these huge companies, and the alternative to that is typically using smaller/more traditional models where the amount of improvement from using one of these big models isn't worth the increased cost of fine-tuning.
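
                To make that concrete, here's a rough sketch of the fine-tuning pattern (assuming the Hugging Face transformers and datasets libraries; the model and dataset names are illustrative stand-ins, not anyone's actual stack):

                ```python
                # Rough sketch of the pattern above: reuse a base model someone
                # else paid to pretrain, and only pay for the fine-tuning step.
                # Model and dataset names are illustrative stand-ins.
                from datasets import load_dataset
                from transformers import (AutoModelForSequenceClassification,
                                          AutoTokenizer, Trainer, TrainingArguments)

                base = "distilbert-base-uncased"  # pretrained at a big company's expense
                tokenizer = AutoTokenizer.from_pretrained(base)
                model = AutoModelForSequenceClassification.from_pretrained(
                    base, num_labels=2)  # reuse pretrained weights, add a small new head

                # A small "in-house" dataset (a public corpus here as a stand-in).
                data = load_dataset("imdb", split="train[:2000]")
                data = data.map(
                    lambda x: tokenizer(x["text"], truncation=True, padding="max_length"),
                    batched=True)

                trainer = Trainer(
                    model=model,
                    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                                           per_device_train_batch_size=8),
                    train_dataset=data,
                )
                trainer.train()  # only this step's compute is paid by the smaller company
                ```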

                As with any regulation, the devil would be in the details. You could write legislation that only affects the big FAANG-sized companies actively training the huge models without touching the smaller companies using their models. But you could also do the reverse and make legislation that affects smaller companies much more heavily than tech giants. It's wrong to assume either outcome is inevitable from more AI regulation, though, without something more specific in front of us. The idea that regulation inevitably benefits only tech giants just isn't true, especially not in the current industry landscape.

                I work as a data scientist at one of those smaller companies, so this is my day-to-day, and I generally think more regulation would probably be a good thing as long as it's properly thought through and worded well. The wild west where pretty much anyone can slap "AI" on something and call it a day just isn't tenable or safe imo.

                2 votes
          2. [6]
            Pioneer
            Link Parent

            I don't trust Google to do the right thing. That's one of the reasons I oppose regulations. I don't want Google to have a monopoly. I want Stability AI and other smaller entities to offer alternatives, so I don't have to rely on large corporations like Google and Microsoft.

            They should be able to.

            But they really need to be able to explain how their models work, where the data is sourced from and be able to break down the components.

            You cannot trust governments (or corporations) to do the right thing.

            Being perfectly honest, I'd rather trust a government than a capitalistic entity. But the regulations would need to be exceptionally well written. Like most regulations, they need to be held to the 'spirit' and not the letter, because reading them to the letter is how they become constantly worked around.

            13 votes
            1. [5]
              negidius
              Link Parent

              Requiring explainability would mean banning a whole class of AI technology. Such a ban would give a huge advantage to entities that can evade or ignore the ban more easily than others.

              Governments are capitalistic entities too. They hold capital and are driven by the pursuit of profit just as much as any corporation. They exist in a symbiotic relationship with large corporations and are an integral part of the wider capitalistic system. Governments and corporations generally have the same goals, but as governments possess far greater capacity for violence, they can impose their will more directly and don't need to rely on ostensible consent or the support of other entities to the extent that corporations generally do.

              I don't understand how one can reasonably trust the U.S. government more than Google. If you were to make a list of the 100 most immoral acts of each (even limited to when they both existed), the first item on Google's list would be nothing compared to the last on the government's list. Two questions I think illustrate this point: What is the worst thing Google has done that wasn't on behalf of a government? What is the worst thing the U.S. government has done that wasn't on behalf of a corporation? A bonus question could be: How many people died as a result of each action? This isn't to say Google executives are necessarily more moral than their counterparts in the U.S. government. It could well be that they would commit the same atrocities if they had the same capacity for violence. The point is that they don't have the same capacity for violence, and it makes sense to trust the psychopath with a baseball bat over the psychopath with a machine gun.

              The only difference between interpreting regulations based on the 'spirit' and the 'letter' is that it moves some of the quid pro quo from the moment of enactment to the moment of enforcement. Some of the money that would have been spent lobbying legislators will instead be spent lobbying regulators and judges. The outcome for those who cannot afford to lobby is even worse, because even if they find a loophole in the law that would enable them to enter the market, the law can simply be ignored by a judge who really enjoyed their vacation courtesy of the other party.

              1 vote
              1. [4]
                Pioneer
                Link Parent

                Requiring explainability would mean banning a whole class of AI technology. Such a ban would give a huge advantage to entities that can evade or ignore the ban more easily than others.

                It really wouldn't. I work in Data Engineering and this is the type of conversation I really do despise having. "AI" and ML aren't difficult to explain to folks who don't work in this sector. We deliberately obfuscate and confuse people using poor language choices to ensure we sound amazing.

                An LLM, or "Unsupervised System" can still absolutely be observed to see what it's doing.

                Governments are capitalistic entities too. They hold capital and are driven by the pursuit of profit just as much as any corporation. They exist in a symbiotic relationship with large corporations and are an integral part of the wider capitalistic system. Governments and corporations generally have the same goals, but as governments possess far greater capacity for violence, they can impose their will more directly and don't need to rely on ostensible consent or the support of other entities to the extent that corporations generally do.

                I was really confused about this as a European until I read the next part...

                I don't understand how one can reasonably trust the U.S. government more than Google.

                I suspect you're American. This is a HUGE problem on the internet where American voices drown out just about anyone else with what their life experience is. I trust my Government more than I trust Google; I can't trust the US Government because it's not my government.

                I'm not saying mine is perfect (Those don't exist), but I would trust a government more than any corporation in the current era.

                The only difference between interpreting regulations based on the 'spirit' and the 'letter' is that it moves some of the quid pro quo from the moment of enactment to the moment of enforcement. Some of the money that would have been spent lobbying legislators will instead be spent lobbying regulators and judges. The outcome for those who cannot afford to lobby is even worse, because even if they find a loophole in the law that would enable them to enter the market, the law can simply be ignored by a judge who really enjoyed their vacation courtesy of the other party.

                Again, this is a very American-centric attitude towards the global need for AI/ML/LLM regulations.

                You've been sold, bought and paid for. The rest of the world isn't anywhere near as bad.

                It can be hard to engage with you simply because our life experiences are hugely different.

                Sorry bud. I'll always trust government more than a corporation.

                15 votes
                1. [2]
                  negidius
                  (edited)
                  Link Parent

                  An LLM, or "Unsupervised System" can still absolutely be observed to see what it's doing.

                  That depends on what you mean by "what it's doing". You can see that a particular token has a particular probability of succeeding the previous token, but as far as I understand, you cannot explain why. You cannot, for instance, tell what piece of training data caused the model to return a particular output. If an LLM tells me that pistachio ice cream tastes better than pecan ice cream, as far as I understand it, it's not currently possible to automatically generate an explanation for why it said that rather than the opposite. Am I wrong about that?
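
                  For concreteness, here's a minimal sketch of what is observable (assuming the Hugging Face transformers library and the public gpt2 checkpoint; the prompt is just an example):

                  ```python
                  # Minimal sketch: the next-token distribution is fully observable,
                  # but nothing here explains *why* the weights produce these numbers.
                  # Assumes Hugging Face transformers and the public "gpt2" checkpoint.
                  import torch
                  from transformers import AutoModelForCausalLM, AutoTokenizer

                  tokenizer = AutoTokenizer.from_pretrained("gpt2")
                  model = AutoModelForCausalLM.from_pretrained("gpt2")

                  inputs = tokenizer("Pistachio ice cream tastes", return_tensors="pt")
                  with torch.no_grad():
                      logits = model(**inputs).logits  # (batch, seq_len, vocab_size)

                  probs = torch.softmax(logits[0, -1], dim=-1)  # distribution over next token
                  top = torch.topk(probs, 5)
                  for p, idx in zip(top.values.tolist(), top.indices.tolist()):
                      print(f"{tokenizer.decode([idx])!r}: p={p:.3f}")
                  # Which training examples shaped these probabilities is not recoverable
                  # from the model itself - that's the explainability gap.
                  ```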

                  I suspect you're American.

                  I'm not. I was born and raised in Europe, and have never lived in America. The comment I originally responded to advocates for regulation specifically by the U.S. federal government, so it makes sense to compare Google with them specifically. Another reason to use America is that most English speakers consume American news, and understand references such as the one about the vacation. If I referred to something from elsewhere in the world, you would probably not have known what I was talking about.

                  No matter what government you want to compare with, I can't imagine the result would be different. Think about the worst thing Google has done that wasn't on behalf of a government. Can you honestly tell me that your government hasn't done many things much worse since Google was founded?

                  2 votes
                  1. Pioneer
                    Link Parent

                    That depends on what you mean by "what it's doing". You can see that a particular token has a particular probability of succeeding the previous token, but as far as I understand, you cannot explain why. You cannot, for instance, tell what piece of training data caused the model to return a particular output. If an LLM tells me that pistachio ice cream tastes better than pecan ice cream, as far as I understand it, it's not currently possible to automatically generate an explanation for why it said that rather than the opposite. Am I wrong about that?

                    But probability is just that: probability. It's indicative of marketing gumph and statistical illiteracy that we listen to quacks who describe these products as something bordering on magic. You can understand these systems with a modicum of understanding of the source data.

                    Which is where the need for data regulations coincides with this.

                    I'm not. I was born and raised in Europe, and have never lived in America. The comment I originally responded to advocates for regulation specifically by the U.S. federal government, so it makes sense to compare Google with them specifically. Another reason to use America is that most English speakers consume American news, and understand references such as the one about the vacation. If I referred to something from elsewhere in the world, you would probably not have known what I was talking about.

                    Apologies, I'd written a comment elsewhere about the EU/UK delivering on regulations towards AI and thought this had responded to that. It's also very common to hear Americans decry their government and extol the virtues of big business... I personally avoid American media as much as is possible. It can get depressing to say the least.

                    No matter what government you want to compare with, I can't imagine the result would be different. Think about the worst thing Google has done that wasn't on behalf of a government. Can you honestly tell me that your government hasn't done many things much worse since Google was founded?

                    Of course I can. Hell, many European nations went to war or enforced austerity in the 2010s to much condemnation.

                    This is different. ML/AI poses a drastic challenge to all of us, one I would say regulation needs to be in place for, going beyond what we have right now. We need data protection and usage regulation that is simple, but effective. "If your data is used in a model, you must be informed as to how and why" - that's not difficult to do. Most firms will have a Customer Dimension with a traceability format for this.
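
                    As a hypothetical sketch of what such a traceable dimension row could look like (all column names invented for illustration, not a real schema):

                    ```python
                    # Hypothetical customer dimension row carrying model-usage traceability.
                    # All names are invented for illustration, not a real schema.
                    import pandas as pd

                    customer_dim = pd.DataFrame([{
                        "customer_key": 1042,
                        "consented_to_model_use": True,       # did this person opt in?
                        "models_using_record": ["churn_v3"],  # which models consumed the row
                        "source_system": "crm",               # lineage: where the record came from
                        "lineage_updated": "2023-06-20",      # when traceability was last refreshed
                    }])
                    print(customer_dim.to_string(index=False))
                    ```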

                    3 votes
                2. NaraVara
                  Link Parent

                  You've been sold, bought and paid for. The rest of the world isn't anywhere near as bad.

                  We really haven't. There's just a reflexively cynical streak in American culture (often stoked by conservative media outlets with financial interests in minimizing state capacity and exacerbated by the "center of the universe" problem where Americans act as if our government is the only entity with agency on the planet and everyone else is just dumbly following our lead) that wildly exaggerates both the extent and inevitability of corruption or incompetence.

                  1 vote
        2. bioemerl
          Link Parent

          AI isn't at the stage where regulation even starts to be a good idea. It's a rapidly moving target whose effects are totally unknown. Even if you pass a law, you'd have to immediately pass three more next week when the state of the art is unrecognizable compared to what it is now.

          AI is a tool. To regulate it should be to regulate the powers of these tech companies generally. All targeting AI will do is entrench the existing participants while throwing cold water on progress in the field and denying access to these systems to the common people who want to run them at home.

          Hell.

          Regulate it so that they have to let you run them at home and leave it at that. Neuter the tech companies' power. Don't neuter AI.

          2 votes
        3. NaraVara
          (edited)
          Link Parent

          Personally I don't really get all the fear about AI, nor do I believe it's going to save us all

          It'll primarily be a problem because it's error-prone, fact-agnostic, and much worse at most things than a real person. But it's much cheaper to deploy at scale than even a minimum-wage real person. So I think people aren't worried about AI when they voice concerns about AI.

          They're more worried that it provides an opportunity for companies to cheap out on how they provide essential services and, functionally, offload more of the work of engaging in commerce from the producer/middleman to the consumer. That seems to be where most technological advancements have gone the past decade or so: cut costs by making the customer do more of the work with the promise that it brings greater "convenience" or "availability" or "savings". And we do realize those benefits, usually on the small scale. But on the larger scale I think it creates a "death by a thousand cuts" situation where we have to juggle so much crap all the time. We used to just call a travel agent and have vacations planned for us, for example. It made everything about the vacation cost, like, 20% more, but it also meant a lot less logistical hassle to plan the vacation.

          With a lot of AI applications the same thing happens, except instead of us saving 20% to be inconvenienced the service providers save the money and we still get the hassle.

          2 votes
      2. [3]
        Habituallytired
        Link Parent

        Regulations should not have to be written in blood to exist. For once we should be proactive about regulations, instead of waiting for AI deepfakes to cause people to end their lives, or to be murdered, or any of the other things that we know people will do to one another with AI technology.

        Google (and other big players) should be locked out of the meetings that set the regulations, beyond providing information about how it all works on their end. Ideally this is how all regulations would be written, but we all know about lobbying. I'm sure California is already quietly looking into their own state regulations, and as CA and NY go, so goes the nation in terms of technology, usually.

        6 votes
        1. [2]
          V17
          Link Parent

          To me this sounds like a terrible idea because it imo almost absurdly overestimates the ability of lawmakers or various (even independent) interest groups to create functional regulations. Even regulations that are "written in blood" often have really bad side effects because they're not well thought out (it's hard and most people don't care enough) or because the world is simply too complex and unpredictable. It's just that with a bit of luck the side effects are less bad than the thing being regulated.

          An example that's short and simple enough: some Scandinavian country (I think it was Sweden, but it's been a few years since I read about it) had slight issues with children dying after finding and overdosing on their parents' or grandparents' medicine. There weren't many cases, probably not more than in other countries, but a similar number of kids died every year. So they invented a solution: some medicine bottles had to have kid-proof safety lids. This regulation was later repealed, because the result was that kids' overdose deaths shot up immediately, after being roughly the same for years - grandparents had trouble opening the pill bottles as well, so they started leaving them completely open instead.

          Trying to create regulations pre-emptively in such a rapidly evolving area - where most lawmakers have absolutely zero idea what's happening (and therefore have a big chance of being influenced in a bad direction), where even the experts have no consensus on what should happen, and where the west has an interest in staying at the top and not letting China overtake - cannot ever lead to a good outcome.

          2 votes
          1. NaraVara
            Link Parent

            Even regulations that are "written in blood" often have really bad side effects because they're not well thought out (it's hard and most people don't care enough)

            I'd dispute the "even" here. The fact that someone had to die before they took action is probably why they're not well thought out. The decision to take action was forced on them, so they crap something out hastily rather than being proactive and understanding the systemic effects.

            Lawmakers have little idea about most things, that's just a fact of living in a democracy. The actual burden of knowing the relevant facts rests on staffing their research offices. But in the US, at least, decades of trying to shrink the government have massively slashed the size and capacity of these advisory committees.

            2 votes
      3. cokedragon
        Link Parent

        Regulation doesn't create monopolies. Not regulating against monopolies creates monopolies.

        2 votes
    2. [5]
      Pioneer
      Link Parent

      Yeah. The EU are luckily throwing down and the UK seem to be at least making the right rumblings.

      There needs to be a serious sit down about tech and data in governments globally at the moment. If you cannot explain the black box then you absolutely have no place productionising some of these products.

      I work in Data and if I went to a senior stakeholder with the "Oh, do not look behind the curtain" attitude then they'd trounce me out of the boardroom.

      6 votes
      1. Habituallytired
        Link Parent

        The problem in the US is that these people are more interested in hating on each other than actually doing their jobs - if they're not too senile to even begin to comprehend the magical beep box in their hands.

        1 vote
      2. [3]
        Minori
        Link Parent

        I have some issue with banning or restricting a product just because we don't fully understand it. For example, we still don't know how antidepressants work, but we know they help people. I think we should only regulate products that have been shown to have clear negative effects.

        1. solemn_fable
          Link Parent

          I’m not sure if that was your intent, but medicine is still highly regulated even if we don’t know how all those drugs work, exactly. Regulation doesn’t mean banning, it means establishing and enforcing rules and guidelines that are relevant to ensure safety and predictability.

          3 votes
        2. Pioneer
          Link Parent

          Medicine is highly regulated, man. AI is just a hodgepodge of "trust me bros" at the top who think it's okay to spout off about how they're worried they're going to end the world.

          Regulation doesn't mean outright banning. But if you're building and developing ML systems? You need to know what's going on with them. This probably falls under a wider data regulation about the usage of data.

          1 vote
  2. [7]
    Jedi
    Link

    I really like GitHub Copilot.

    I’m not using it to generate solutions, but it’s kinda like a smarter code-completion. It understands the context of the code well enough (sometimes eerily well) and saves time writing what I was going to write anyway.

    I’d be very interested in seeing open-source alternatives pop up that could be run locally.

    As for Bard or ChatGPT, it’s not really worth it in my experience. They usually fail to generate working code, and even if it leads you on the right path, it’d probably just be quicker to think out the problem you’re trying to solve.

    16 votes
    1. [3]
      codesplice
      Link Parent

      but it’s kinda like a smarter code-completion.

      This is exactly my experience with Copilot - it's like a smarter evolution and extension of the linters I was already using. And I've been really impressed with how helpful it is both with coding as well as with generating documentation.

      9 votes
      1. [2]
        Maxi
        Link Parent

        I'm gonna be frank and say it often suggests more coherent and comprehensive error messages than I would've written myself...

        3 votes
        1. teaearlgraycold
          Link Parent

          I’ve also had it tell me what was causing a bug before! As I was adding debug statements, the message it wrote was “You forgot to copy the X into the Y”. And that was indeed the exact issue.

    2. [3]
      Twist
      Link Parent

      If you understand programming and you are not working on critical infrastructure, I don't see why you shouldn't use AI as a starting point for your project. Time is valuable.

      6 votes
      1. [2]
        Pipas
        Link Parent

        This is my opinion as well. If the AI generates code based on my provided code that is 90% correct, I have the knowledge to supply the other 10%.

        I definitely don't use it for everything, but I found that for some tasks it turns the work from writing code into editing and tweaking, which saves a lot of time.

        And this doesn't even take into account when the AI suggests something new I didn't know, or a solution I wouldn't have thought of that, after reading it, makes more sense than what my approach would have been.

        It's not replacing me, but as a tool that I can wield to be faster at delivering, I believe it's incredibly useful.

        2 votes
        1. Habituallytired
          Link Parent

          Based on this conversation, I could see Google saying "don't use it" internally because using it would train Bard to write better code for its competitors. I still wouldn't trust putting my own code (or any of my own or my company's proprietary information) into it.

          It reminds me of the chatbots that would respond to you with the last thing someone else wrote to it back in the AIM days.