Thanks, Vijay! This is absolutely correct. To those who are concerned that I'm not engaging in normal scientific discourse and that I'm confusing healthy critique with toxicity, I want to provide some more context. 🧵 1/n

This thread by Jeff is just the most recent in a series in which he has jumped on Twitter at seemingly every opportunity to make sensationalized claims about the (in)correctness of the estimates in our paper. 2/n
I say sensationalized, because while his estimates are technically more correct than mine, it's primarily because they were made with access to proprietary Google information, e.g. the carbon intensity of datacenters where training for the Evolved Transformer was run. 3/n
The estimates in our paper were completely reasonable given the information we had available when the paper was written in 2019, while I was a graduate student at UMass. I'm very happy that our work was able to spark interest in this issue, including Jeff's follow-up paper! 4/n
In fact, I cite and share his paper (Patterson et al. 2021) pretty frequently (though I disagree with the unsupported claim in its title), because they report some really interesting measurements of the energy that powers AI at Google! 5/n
For example, they report that ML consistently made up 10-15% of the energy used at Google from 2019 to 2021, equating to 1.5-2.3 TWh in 2020. The estimated LLM carbon emissions in Patterson et al. agree with those in our FAccT paper (which cites them). 6/n
Our work inspired their work and others, which is in turn inspiring ours, as indicated by our citation graph. This is totally normal engagement in the scientific discourse! That's not the behavior I was referring to as toxic in my tweet. 7/n
What I find inappropriate is significantly more senior/powerful researchers (Jeff Dean & Dave Patterson) leveraging their power to try to manipulate more junior researchers (me) in order to maintain their and/or their company's reputation. 8/n
It's inappropriate for Jeff to continue to bring up the inaccuracies of the estimate in our paper when it is only tangentially relevant, as in this thread. Doing so ignores our power dynamics and likely has a bigger negative impact on me than positive impact on scientific discourse. 9/n
I also think it redirects the narrative around the potential negative environmental impacts of LLMs and ML toward an old, irrelevant estimate, when there are much newer estimates and methodologies, and ever-larger models being integrated into our lives quickly and at scale. 10/n
There is a lot more to this story that isn't public, but my interactions with Jeff and Dave soon after joining Google part-time in 2020 were quite unnerving and contributed to my decision to leave Google. I'm not ready to share the details of that story yet publicly. 11/11


I'd like everyone to read this thread, and then imagine what the corporate research environment is like when the (white men) who get to be research VPs do this.