Discover Artificial Intelligence (Jul 2024)

Green AI: exploring carbon footprints, mitigation strategies, and trade offs in large language model training

  • Vivian Liu,
  • Yiqiao Yin

DOI
https://doi.org/10.1007/s44163-024-00149-w
Journal volume & issue
Vol. 4, no. 1
pp. 1–12

Abstract

Prominent works in Natural Language Processing (NLP) have long sought to build more capable models by refining training approaches, altering model architectures, and developing richer datasets. However, this rapid progress comes with increased greenhouse gas emissions, raising concerns about the environmental damage caused by training large language models (LLMs). A comprehensive understanding of the costs associated with artificial intelligence, particularly the environmental ones, is the foundation for ensuring safe AI models. Because investigations into the CO2 emissions of AI models remain an emerging area of research, we evaluate the CO2 emissions of well-known LLMs, which have an especially high carbon footprint due to their large number of parameters. We argue for responsible and sustainable LLM training by suggesting measures for reducing carbon emissions. Furthermore, we examine how the choice of hardware affects CO2 emissions by contrasting the emissions produced during model training on two widely used GPUs. Based on our results, we present the benefits and drawbacks of our proposed solutions and argue that it is possible to train more environmentally safe AI models without sacrificing their robustness and performance.
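The hardware comparison described above presupposes a way to measure CO2 emissions during a training run. As an illustration only (the abstract does not name the authors' tooling), the open-source codecarbon package can wrap a training loop and estimate emissions from measured power draw and the local grid's carbon intensity. Everything in the sketch besides the codecarbon calls, including the model, data, and project name, is a hypothetical placeholder.

```python
# Minimal sketch: estimating training emissions with codecarbon.
# The model, synthetic data, and hyperparameters are hypothetical
# placeholders, not the setup used in the paper.
import torch
import torch.nn as nn
from codecarbon import EmissionsTracker

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

tracker = EmissionsTracker(project_name="llm-carbon-demo")  # hypothetical name
tracker.start()
try:
    for step in range(1000):  # stand-in for a real training loop
        x = torch.randn(32, 512)        # synthetic batch
        y = torch.randint(0, 2, (32,))  # synthetic labels
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
finally:
    emissions_kg = tracker.stop()  # estimated kg CO2-eq for this run

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2-eq")
```

Because the tracker samples GPU, CPU, and RAM power for whatever hardware the process runs on, executing the same script on two different GPUs yields the kind of per-hardware emissions comparison the abstract describes.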

Keywords