From the abstract:
Remarkably, the choice of DNN, datacenter, and processor can reduce the carbon footprint up to ~100-1000X. These large factors also make retroactive estimates of energy cost difficult. To avoid miscalculations, we believe ML papers requiring large computational resources should make energy consumption and CO2e explicit when practical. We are working to be more transparent about energy use and CO2e in our future research. To help reduce the carbon footprint of ML, we believe energy usage and CO2e should be a key metric in evaluating models, and we are collaborating with MLPerf developers to include energy usage during training and inference in this industry standard benchmark.
I would summarize this as “the carbon emissions of machine learning training depend drastically on the details of how you do it.” For example, they say that using Google’s custom TPUs instead of a particular graphics processor gives roughly a 5x improvement in performance per watt, and choosing a data center in a different region can change emissions by a factor of 5-10x.
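To make the compounding concrete, here is a minimal sketch (in Python, with illustrative numbers that are not from the paper) of the kind of back-of-the-envelope estimate they describe: energy is roughly hours × chips × power per chip × datacenter PUE, and emissions are that energy times the local grid’s carbon intensity, so improvements in each factor multiply.

```python
# Sketch of a training-emissions estimate: energy = hours * chips * watts/chip * PUE,
# emissions = energy * grid carbon intensity. All numbers below are illustrative
# placeholders, not figures from the paper.

def training_co2e_kg(hours, num_chips, watts_per_chip, pue, kg_co2e_per_kwh):
    """Estimate training emissions in kg CO2e."""
    energy_kwh = hours * num_chips * (watts_per_chip / 1000.0) * pue
    return energy_kwh * kg_co2e_per_kwh

# Same workload on two hypothetical setups: an efficient accelerator in a
# low-carbon region vs. a less efficient chip in a coal-heavy region.
efficient = training_co2e_kg(hours=100, num_chips=64, watts_per_chip=200,
                             pue=1.1, kg_co2e_per_kwh=0.08)
inefficient = training_co2e_kg(hours=100, num_chips=64, watts_per_chip=400,
                               pue=1.6, kg_co2e_per_kwh=0.7)
print(f"ratio: {inefficient / efficient:.0f}x")  # the per-factor gaps multiply
```

The point of the sketch is just that the 100-1000X figure in the abstract comes from multiplying several independent choices, not from any single one of them.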
In section 4.8 they compare these numbers with the carbon emissions of other activities. There is a bit of snark about Bitcoin:
Stated alternatively, ~70M people have Bitcoin wallets yet Google consumes 1/10th of Bitcoin’s energy to provide services for billions of people, and all of Google’s energy use is offset.
Also, machine learning model training is a trivial fraction of that:
Even if we assume all four of Google’s large NLP models in Table 4 were trained in 2019, the total represents less than 0.005%. The training of those four large NLP models is not a significant fraction of Google’s energy consumption.
Although not mentioned, this paper might be considered Google’s response to a previous controversy where a researcher got fired over trying to publish a paper about this.
A friend of mine once had a startup which would move compute tasks around the world to get the best environmental performance possible for the time of day and type of job - so they'd prefer underused DCs half the world away which ran on renewables over local ones which didn't, that kind of thing. They were monitoring capacity, optimising distribution and even calculating transit costs in terms of CO2 and so on, some really clever stuff.
It was a pretty interesting project which was successful enough to be a full time job for quite a few people - not sure where it ended up, but she was headhunted pretty hard by Google a number of years ago.
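For flavor, here is a minimal sketch of that kind of carbon-aware placement, assuming you already have each datacenter’s current grid carbon intensity, its spare capacity, and an estimate of the CO2e cost of moving the job’s data there. Everything here is hypothetical, not the startup’s actual system.

```python
# Hypothetical carbon-aware job placement: pick the datacenter with the lowest
# total CO2e, counting both the compute (grid intensity at that site) and the
# cost of shipping the job's data there.

from dataclasses import dataclass

@dataclass
class Datacenter:
    name: str
    kg_co2e_per_kwh: float   # current grid carbon intensity
    spare_kwh: float         # capacity available right now
    transfer_kg_co2e: float  # estimated CO2e to move the job's data there

def place_job(job_kwh: float, sites: list[Datacenter]) -> Datacenter:
    """Choose the feasible site with the lowest total emissions."""
    feasible = [s for s in sites if s.spare_kwh >= job_kwh]
    if not feasible:
        raise RuntimeError("no datacenter has spare capacity for this job")
    return min(feasible,
               key=lambda s: job_kwh * s.kg_co2e_per_kwh + s.transfer_kg_co2e)

sites = [
    Datacenter("local-coal", kg_co2e_per_kwh=0.70, spare_kwh=500, transfer_kg_co2e=0.0),
    Datacenter("remote-hydro", kg_co2e_per_kwh=0.03, spare_kwh=800, transfer_kg_co2e=5.0),
]
print(place_job(job_kwh=300, sites=sites).name)  # -> remote-hydro
```

Even with a nonzero transfer cost, the remote renewable-powered site wins easily here, which is presumably why scheduling across regions (and times of day) can pay off so much.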
Although not mentioned, this paper might be considered Google’s response to a previous controversy where a researcher got fired over trying to publish a paper about this.
Thanks for including this little bit; this is absolutely something I would not have known or been able to figure out on my own. Not that I think Google is a particularly ethical or moral company, but it's good to know where the incentives lie and why something like this might be published at this very moment.
I think of this as “an activist researcher tried to publish a mediocre paper about carbon emissions while at Google. Insisting on publishing it got people at Google upset enough to fire her and publish a better paper.” They also want to add built-in measurements so machine learning researchers can publish carbon emissions numbers for their own research.
So, in a way, you might say this is an example of activism working, because it provoked them into doing something about it. The result is good for preventing climate change and not good for the researcher who got fired, but if they believe in the cause enough, maybe they would consider it a worthy sacrifice? How often can activists say they accomplished something similar?
But was that sacrifice necessary? Maybe, maybe not. Google is officially strongly in favor of green energy and reducing their climate impact. A less confrontational researcher who was good at internal scientific politics might have been able to publish a similar paper through collaboration. I’m inclined to believe that a less confrontational approach would have worked, but I’m biased and there’s no way to know that from the outside. Counterfactuals are tricky.