45 votes

Google’s greenhouse gas emissions jump 48% in five years

29 comments

  1. [4]
    Ganymede
    Can't wait to see how quickly they renege on their carbon zero commitments now that their newest way of screwing people over takes so much energy.

    32 votes
    1. [3]
      SirNut
      What way is that?

      3 votes
      1. [2]
        whispersilk
        AI. Training the LLMs that power Gemini and friends requires a staggering amount of computation and thus energy, and I think running the things once they're trained is only cheap by comparison.

        33 votes
        1. teaearlgraycold
          Yeah. Compared to performing a database lookup, getting an LLM to answer a question takes millions to billions of times more energy.
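
          A rough sanity check of that ratio, treating both per-operation figures as assumptions (a simple indexed lookup somewhere between a microjoule and a millijoule; ~2 kJ per LLM response, per an estimate further down the thread):

          ```python
          # Back-of-envelope check of the "millions to billions" claim.
          # Both energy figures are assumptions, not measurements.
          llm_response_j = 2e3  # ~2 kJ per LLM response (estimate cited downthread)

          for db_lookup_j in (1e-3, 1e-6):  # assumed range for an indexed lookup
              print(f"ratio at {db_lookup_j} J/lookup: {llm_response_j / db_lookup_j:.0e}")
          # -> 2e+06 at 1 mJ per lookup, 2e+09 at 1 µJ per lookup
          ```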

          22 votes
  2. [23]
    OBLIVIATER
    Does AI actually make any money yet? Other than stock prices going to the moon because of potential revenue, what actual revenue is being generated by these insane energy usage bumps?

    16 votes
    1. [21]
      skybrian
      They are charging for it. OpenAI charges for API access and monthly subscriptions to use the UI. Other services are similar. Google requires a paid subscription to use their best models. Free usage is fairly restricted because it’s too expensive to give away.

      I have no idea about revenue or profitability, though.

      13 votes
      1. [20]
        OBLIVIATER
        Ok, then I guess my next question would be... what are people using them for to be productive? Outside of AI art and weird memes, I haven't seen much of a practical use case for AI so far, other than for helping programmers, which seems to be an actually useful tool.

        4 votes
        1. [8]
          ebonGavia
          I mean it's useful for summarizing text, familiarizing yourself with new APIs, generating boilerplate code, etc. It's simultaneously useful (but unreliable) and way over-hyped. It's actually super annoying to talk about.

          I have a friend-coworker who thinks it's AGI. Infuriating. But it's not nothing, either; it is useful. Unlike the next crypto rugpull.

          20 votes
          1. [7]
            blivet
            I doubt that its minimal utility is worth the insane amounts of energy it requires and the enormous environmental damage it causes.

            8 votes
            1. vord
              (edited)
              I think that's the biggest overlooked aspect WRT climate change and the advancement of technology. Is the benefit of a new technology so massive that it's worth deploying?

              It's probably worth improving HVAC efficiency as best we can. Despite that, it's definitely not worth wholesale replacing HVAC systems every 10-20 years: even a 20% efficiency improvement might only save a few hundred kWh annually, and the energy and labor required to deploy the new system negate a lot of that benefit. The money, presuming the HVAC system is still functional, is almost certainly better spent just deploying a small solar/wind setup on the property instead. We should spend more effort requiring resilience, such that we can be reasonably certain systems will keep working for 20 or 30 years or more with minimal maintenance.
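
              A quick back-of-envelope, where every input is an assumption, just to show the shape of the trade-off:

              ```python
              # All figures assumed for illustration, not measured.
              annual_hvac_kwh = 3000   # assumed yearly electricity use of the old system
              efficiency_gain = 0.20   # the 20% improvement mentioned above
              embodied_kwh = 4000      # assumed energy to manufacture/install a new unit

              saved_per_year = annual_hvac_kwh * efficiency_gain  # 600 kWh/yr
              payback_years = embodied_kwh / saved_per_year       # ~6.7 years to break even
              print(saved_per_year, round(payback_years, 1))      # 600.0 6.7
              ```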

              And we're at a point now where we really need to be thinking about that within the scope of computing. The web should be 100% usable with 2004-era computers; there isn't a single thing we do today that was impossible back then... it's just that the perpetual bloat has rendered many of these computers useless. Which really sucks, because often these older machines 'trickle down' to less-wealthy countries, and it puts them at a further disadvantage when interacting with the West.

              So to bring this back to the original topic: mass deployment of AI applications needs to demonstrate benefits in line with their additional computing costs. And what that really means is that there needs to be more transparency in the process, top to bottom, such that a determined journalist could reasonably figure out the TCO of running things on AI platforms. Right now, AI applications are being heavily subsidized by multi-billion-dollar companies that can run them at a massive loss, covering it with high premiums on cheaper tech, until such time as they've captured the market. And that further disguises the problem.

              13 votes
            2. EgoEimi
              (edited)
              The environmental footprint is quite minimal. As someone mentioned, an LLM response has a footprint of 216 mg CO2. A single banana has a footprint of 80 g CO2, or about 370 LLM responses.

              My LLM use is roughly equivalent to eating one extra banana a week, at most.

              A single hamburger has a footprint of 9.73 kg CO2, or ~45,000 LLM responses. If someone gave up just a single hamburger, they'd be good on LLMs for the next few years.
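
              The ratios check out from the cited figures (all numbers from this thread, not my own measurements):

              ```python
              # Reproducing the comparisons above.
              llm_g = 0.216    # 216 mg CO2 per LLM response (estimate downthread)
              banana_g = 80    # g CO2 per banana
              burger_g = 9730  # g CO2 per hamburger

              print(round(banana_g / llm_g))  # 370 responses per banana
              print(round(burger_g / llm_g))  # 45046, i.e. ~45,000 per hamburger
              ```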

              7 votes
            3. [4]
              tibpoe
              If it answers a question I have in 10 seconds versus it taking me 10 minutes to figure out on my own... what's the carbon impact of that?

              My carbon footprint is about 10 metric tons/year. Does answering a question using an LLM emit less than 185 g of CO2e?

              4 votes
              1. [3]
                tibpoe
                Well, I was curious, so I looked it up (fig 9). At 2e3 J per LLM call and the USA's national energy-mix average, that's 216 mg CO2 per LLM response. This doesn't include the up-front costs, but I'd expect them to be almost nothing when amortized over a hardware and model lifetime of billions of generated responses.
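
                Spelled out (the grid-intensity number is my assumption for the US average; the parent's 185 g and my ~190 g differ only by rounding):

                ```python
                # The arithmetic behind the 216 mg figure, plus the parent's budget check.
                joules_per_response = 2e3                   # from fig 9
                kwh = joules_per_response / 3.6e6           # ~5.6e-4 kWh per response
                us_grid_g_per_kwh = 389                     # assumed US average, gCO2/kWh
                g_per_response = kwh * us_grid_g_per_kwh    # ~0.216 g = 216 mg

                # 10 t/yr spread across 10-minute intervals:
                budget_g = 10e6 / (365 * 24 * 6)            # ~190 g per 10 minutes
                print(round(g_per_response, 3), round(budget_g))  # 0.216 190
                ```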

                5 votes
                1. [2]
                  vord
                  Also bear in mind that a lot of a person's carbon footprint is just baked into living. Unless you spend that free time making choices to reduce your footprint, it's just making the problem worse.

                  Does that 10 minutes saved translate to you walking instead of driving because you could leave work earlier? Or does it translate to you having 10 more minutes to endlessly scroll on your phone while padding out the clock before leaving the office?

                  For the former, that's a potentially tangible benefit. For the latter, it's an active loss because you're just spending additional energy wasting time instead. And I'm betting 99% of the time, it's the latter.

                  12 votes
                  1. winther
                    That will ultimately come down to whether companies are willing to reduce working hours due to the extra productivity or just add more stuff for the employees to do. I am not holding my breath on that one...

                    7 votes
        2. [6]
          redwall_hp
          (edited)
          Other than for helping programmers, which seems to be an actually useful tool

          That is majorly overhyped too. Copilot is far less useful than IntelliJ's built-in (non-LLM) autocomplete, and I've never known ChatGPT to be helpful with anything beyond generating well-trodden things that you'd know if you read the framework docs. It's also good at making up APIs that don't exist in the tools you're using. It appeals to the sort of people who blindly copy and paste things off of Stack Overflow, but tends to lose its usefulness when you work on more complicated things. Kind of like how Stack Overflow stops being helpful midway through a bachelor's degree program, and most of the responses refer to the same pseudocode example in the algorithms textbook you already have.

          It's also good at helping people cheat at interviews, which has been a problem lately.

          Writing code is a minuscule fraction of time spent engineering software, and probably the most enjoyable part, so even if it was effective, the productivity gain doesn't come close to justifying the massive economic and ecological resource suck.

          8 votes
          1. [3]
            vord
            (edited)
            Another good example of a massively overhyped thing is automatic comment generation. Sure, a lack of comments sucks, and most programmers hate writing useful comments, so it seems like a great target for automation. However, every example I've thrown at komment.ai does one of three things:

            • Makes an existing comment less concise, padding it out with a bunch of bullshit, converting a 1-line comment into a 5-line paragraph without adding information.
            • Dumps the entirety of an underlying library's comment, which any remotely functional IDE would just display on hover, with hyperlinking.
            • Dumps the definition of the words used for variable names into comments above the variable name, often applying the first tactic recursively.

            It's not really capable of generating anything that an autocomplete template (i.e. a blank Javadoc template) couldn't handle. Sure, the LLM might be able to auto-populate the handful of types and guess from the variable name what it's for... but that saves maybe 10 seconds.

            Actually good comments describe why a given section of code was written, not literally what the code is doing. "This function is for sanitizing user input for consistency before doing a database lookup" vs "This function converts characters to lowercase." The latter is only useful to people who don't know how to read code... which is of extremely limited usefulness. Good code is largely self-documenting... if it's not clear what the code is doing via variable naming and common conventions, those are things that should be corrected, not commented (see the sketch at the end of this comment).

            Otherwise, the only use case is if you need to pad out your commit and LoC metrics without any thought.
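
            To illustrate the contrast, a minimal sketch (the function and names are invented for the example):

            ```python
            def normalize_username(name: str) -> str:
                # Bad (what): "Converts characters to lowercase and strips whitespace."
                # Useful only to someone who can't read the line below.
                #
                # Good (why): usernames are compared case-insensitively in the
                # database lookup that follows, so sanitize for consistency first.
                return name.strip().lower()
            ```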

            9 votes
            1. [2]
              winther
              Yes, good code comments would be the absolute last thing an LLM could provide. The useful comments I see and write are usually there to explain something that might seem counterintuitive at first, but that we need to do like that due to legacy business logic XYZ(tm) and so forth. I don't see an LLM being helpful there anytime soon.

              7 votes
              1. teaearlgraycold
                GPT-4 is actually not too bad at detecting comments that are out of sync with the accompanying code, which is a common issue. When people edit blocks of code, they frequently don't check whether they've invalidated a statement in a related comment.

                I prototyped this but never built a product to expand on the idea.
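
                For the curious, a minimal sketch of the idea, not my actual prototype (assumes the official openai Python client; the model name is illustrative):

                ```python
                # Ask a model to flag comments that drifted out of sync with the code.
                from openai import OpenAI

                client = OpenAI()  # reads OPENAI_API_KEY from the environment

                def find_stale_comments(code: str) -> str:
                    resp = client.chat.completions.create(
                        model="gpt-4",  # illustrative choice
                        messages=[
                            {"role": "system",
                             "content": "You review code. List any comments that are out "
                                        "of sync with the code they describe, and explain why."},
                            {"role": "user", "content": code},
                        ],
                    )
                    return resp.choices[0].message.content
                ```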

                1 vote
          2. supergauntlet
            Writing code is a minuscule fraction of time spent engineering software, and probably the most enjoyable part, so even if it was effective, the productivity gain doesn't come close to justifying the massive economic and ecological resource suck.

            It's no accident that the MBA dipshits want to replace anything even resembling fun with a slop generator built off massive statistical models that steal from everything. The goal is not just to replace humans wherever it can, but also to make human lives more miserable where it can't.

            8 votes
          3. ewintr
            That is majorly overhyped too. Copilot is far less useful than IntelliJ's built-in (non-LLM) autocomplete, and I've never known ChatGPT to be helpful with anything beyond generating well-trodden things that you'd know if you read the framework docs.

            I agree with your general message that things are overhyped, but it is worth pointing out that IntelliJ's built-in autocomplete does use machine learning, which can also be resource intensive.

            Also, if ChatGPT can read the framework docs for me, then that can be incredibly useful. As a developer, there are so many languages, frameworks and tools that I need to work with, but I am only well versed in some of them. LLMs, especially those combined with search, can save a lot of time just by giving some good pointers to start with.

        3. Protected
          In my experience so far, more than 50% of the time, LLM-based chatbots do not produce suitably useful results (I keep trying). But they certainly do produce plausible enough results that I might be misled into thinking they were good if I was less knowledgeable. A minority of times, the results are indeed useful.

          I don't keep a long-term ChatGPT history, but recently I've used it to try to answer queries that are vague enough that I can't distill them into concrete search queries (with disastrous results). I also use it to bounce around ideas for my writing, and to generate lists of things like names that follow certain parameters and such. Usually I read through the list, ask for refinements a couple of times, decide the whole thing is garbage, discard it, and come up with my own answer.

          Dictionary-style queries about meanings of words and expressions, and idiom equivalence and such (especially in languages I don't understand well) yield fairly decent results.

          Image interpretation results are bad if the image set has any amount of complexity, since the LLM will mix together image components and hallucinate more or less consistently.

          Diffusion-style image generators can yield good results if your goal is to create a visual template to help you when describing a concept in writing. But it requires a lot of effort in prompt and parameter massaging to get the right result in the first place, and the AI is often extremely stubborn about including certain things and not including others.

          6 votes
        4. skybrian
          Yeah, I don't know. I mostly use it for asking programming questions.

          3 votes
        5. Englerdy
          Other than programming (which GPT-4o is remarkably good at), I've found ChatGPT's ability to do web queries really helpful. I can tell it to do a search and then answer x, y, or z questions, or summarize its findings, including links. It's not bulletproof, but it's really helpful for quickly pulling together info about something I'm curious about.

          And that's where I think LLMs are really hitting their stride: as an additional tool that's good at working with language. Pulling in outside information, either from a search or from documents, dramatically improves its answers in specific knowledge areas.

          So summarization, doing a shitty first draft of something, editing a few paragraphs into bullet points, reorganizing something I've already written for clarity, commenting or explaining code, writing starter code/scripts/macros, and answering questions I can use as a starting point for my own searches (because sometimes knowing what jargon to use is the hardest step) have all been useful to me. I've recently discovered a big community for hosting smaller LLMs on your own PC, which looks really interesting for privacy as well, if you can sacrifice some of the generalized capability of the big-company models.
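
          If you want to try that locally, here's a minimal sketch, assuming an Ollama server is already running on its default port (the model name is just an example):

          ```python
          # Query a locally hosted model via Ollama's HTTP API.
          import json
          import urllib.request

          req = urllib.request.Request(
              "http://localhost:11434/api/generate",
              data=json.dumps({
                  "model": "llama3",  # any model you've pulled locally
                  "prompt": "Summarize the trade-offs of running LLMs locally.",
                  "stream": False,
              }).encode(),
              headers={"Content-Type": "application/json"},
          )
          with urllib.request.urlopen(req) as resp:
              print(json.loads(resp.read())["response"])
          ```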

          3 votes
        6. CptBluebear
          Look at it as if it's a feature, not a product.

          You see it in smaller things as an added feature. Image recognition, translation, summaries, and the like.

          1 vote
        7. jujubunicorn
          As a writer it's incredibly useful for organizing ideas, outlining the story, jotting down random tidbits about characters and having it remember everything.

    2. skybrian
      Here are some reasonable guesses about revenue and spending on AI.

      They say that NVidia’s revenue was $90 billion in Q1 2024 and guess that total spending on data centers + GPUs was double that. Meanwhile, OpenAI seems to be getting revenue of about $3.4 billion per year, or let’s guess less than $1 billion for the same quarter, and they seem to think other companies are earning less.

      So, that’s a whole lot of money spent on data centers, and earning revenue on it has barely begun.
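
      Putting those guesses side by side (all figures are the linked article’s estimates, in billions of USD per quarter):

      ```python
      # Rough quarterly comparison of AI infrastructure spend vs. revenue.
      nvidia_q1 = 90                     # guessed NVidia revenue, Q1 2024
      total_spend_q1 = 2 * nvidia_q1     # guessed data centers + GPUs
      openai_q1 = 3.4 / 4                # ~$0.85B, "less than $1 billion"
      print(round(total_spend_q1 / openai_q1))  # spend outpaces revenue ~212x
      ```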

      6 votes
  3. myrrh
    ...this got me to wonder whatever became of google's mysterious datacenter barges?..nothing, apparently: the project fizzled before fruition and its unfinished barges were sold off for scrap...

    ...still, the idea of radically-sustainable datacenter development is sound, and precisely the sort of project deep-pocketed disintermediation initiatives are best-poised to pursue...i wonder whether the google of 2004 could even recognise the google of 2024?..

    1 vote