Polite ChatGPT Requests May Be Harming the Environment, Study Finds

Think politeness costs nothing? That's far from true once you count the environmental bill.

In a study led by Maximilian Dauner, a PhD candidate at Hochschule München University of Applied Sciences, the environmental cost of interacting with large language models (LLMs) like ChatGPT has been brought to light. Dauner, himself a frequent user of AI models, advocates using them concisely to minimize their carbon footprint.

The study reveals that the carbon footprint of LLMs comes primarily from three main phases: model training, inference (usage), and infrastructure operation. Training a large AI language model can emit as much CO₂ as the lifetime emissions of about five average cars. Each user query requires computational resources, contributing to ongoing emissions, and organizations using LLMs at scale could generate thousands of tonnes of CO₂ annually – equivalent to thousands of transatlantic flights.

The study also found that models that use internal reasoning to 'think' through answers can produce up to 50 times more emissions than those that respond concisely. Longer responses, often prompted by users' polite or chatty phrasing, lead to more emissions, and asking for bullet points instead of full paragraphs can help reduce them.

To address this issue, users can adopt practices such as requesting concise answers, avoiding unnecessary repetition or overly polite phrasing, and limiting redundant or frivolous queries. For companies and developers, optimizing model efficiency, investing in renewable energy-powered data centers, monitoring and reporting AI-related emissions transparently, and implementing smarter prompt engineering and caching strategies can help reduce the environmental impact of AI usage.
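As a rough illustration of what concise prompting and caching might look like in practice, the sketch below uses the OpenAI Python client to cap response length, ask for bullet points, and avoid re-sending identical prompts. The model name, token limit, and prompt wording are illustrative assumptions, not recommendations from the study.

    # Minimal sketch: request concise answers and cache repeated prompts.
    # Assumes the official `openai` Python client (v1+); model name, token cap,
    # and wording are illustrative, not taken from the study.
    from functools import lru_cache
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    @lru_cache(maxsize=1024)           # skip recomputation for identical prompts
    def concise_answer(question: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",       # hypothetical choice of a smaller model
            max_tokens=150,            # cap output length to limit compute
            messages=[
                {"role": "system",
                 "content": "Answer in short bullet points. No preamble or pleasantries."},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    print(concise_answer("What are the main sources of CO2 emissions from LLMs?"))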

Maximilian Dauner suggests that companies behind LLMs could reduce emissions by developing systems to select the smallest model needed to answer each question accurately. This could cut unnecessary emissions at scale. The study further reveals that questions requiring deeper reasoning, such as philosophy or abstract algebra, produce far more emissions than straightforward topics like history.
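Dauner's suggestion of routing each question to the smallest adequate model could, in principle, be approximated with a simple dispatcher. The keyword-and-length heuristic and model names below are purely hypothetical; the study does not describe a concrete routing mechanism.

    # Hypothetical model router: send simple questions to a small model and
    # reasoning-heavy ones to a larger model. The heuristic and model names
    # are illustrative assumptions, not part of Dauner's study.
    REASONING_HINTS = ("prove", "derive", "philosophy", "algebra", "explain in depth")

    def pick_model(question: str) -> str:
        q = question.lower()
        needs_reasoning = len(q.split()) > 40 or any(hint in q for hint in REASONING_HINTS)
        return "large-reasoning-model" if needs_reasoning else "small-concise-model"

    print(pick_model("When did World War II end?"))
    # -> small-concise-model
    print(pick_model("Prove that every finite abelian group is a product of cyclic groups."))
    # -> large-reasoning-model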

Casual generation of images, jokes, or essays can also come at a climate cost: whether a model generates 10,000 words of useful text or 10,000 words of nonsense, the CO₂ emissions are the same. Models like Microsoft Copilot or ChatGPT have roughly 10 times as many parameters as the models Dauner tested, and models with more parameters, while generally more accurate, also produced more emissions.
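Because emissions track the number of tokens generated rather than their usefulness, a back-of-envelope estimate scales linearly with output length. The constants below (energy per generated token, grid carbon intensity) are illustrative placeholders, not measurements reported in the study.

    # Back-of-envelope CO2 estimate: emissions scale with tokens generated,
    # whether the text is useful or not. All constants are illustrative
    # placeholders, not values from the study.
    ENERGY_PER_TOKEN_WH = 0.003      # assumed inference energy per generated token
    GRID_G_CO2_PER_KWH = 400         # assumed grid carbon intensity (gCO2/kWh)

    def estimated_g_co2(tokens_generated: int) -> float:
        kwh = tokens_generated * ENERGY_PER_TOKEN_WH / 1000
        return kwh * GRID_G_CO2_PER_KWH

    # ~13,000 tokens (about 10,000 words) cost the same whether useful or filler:
    print(f"{estimated_g_co2(13_000):.2f} gCO2")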

In summary, while LLMs like ChatGPT have a tangible carbon footprint driven by training and inference energy use, conscious interaction practices by users and efficiency improvements by companies can help substantially lower the environmental impact of AI usage.

  1. Maximilian Dauner's research at Hochschule München University of Applied Sciences has shown that the environmental cost of interacting with large language models (LLMs) like ChatGPT comes primarily from model training, inference (usage), and infrastructure operation, with training alone emitting as much CO₂ as five average cars over their lifetime.
  2. The study also found that models that engage in internal reasoning to 'think' through answers can produce up to 50 times more emissions than those that respond concisely, with longer responses leading to more emissions.
  3. To reduce the environmental impact of AI usage, users can request concise answers, avoid unnecessary repetition and overly polite phrasing, and limit redundant or frivolous queries.
  4. Companies and developers can optimize model efficiency, invest in renewable energy-powered data centers, monitor and report AI-related emissions transparently, and implement smarter prompt engineering and caching strategies to help reduce the environmental impact of AI usage.
  5. Maximilian Dauner proposes that companies behind LLMs could reduce emissions by developing systems to select the smallest model needed to answer each question accurately, which could cut unnecessary emissions at scale.
  6. The study further reveals that questions requiring deeper reasoning, such as philosophy or abstract algebra, produce far more emissions than straightforward topics like history, and that casual generation of images, jokes, or essays can come at a climate cost, with no difference in CO₂ emissions between 10,000 words of useful text and 10,000 words of nonsense.
