Can Large-Language Models Help us Better Understand and Teach the Development of Energy-Efficient Software?

Ryan Hasler, Konstantin Läufer, George K. Thiruvathukal, Huiyun Peng, Kyle Robinson, Kirsten Davis, Yung-Hsiang Lu, James C. Davis

Research output: Contribution to journal › Article

Abstract

Computing systems are consuming an increasing and unsustainable fraction of society's energy footprint, notably in data centers. Meanwhile, energy-efficient software engineering techniques are often absent from undergraduate curricula. We propose to develop a learning module for energy-efficient software, suitable for incorporation into an undergraduate software engineering class. There is one major problem with such an endeavor: undergraduate curricula have limited space for mastering energy-related systems programming aspects. To address this problem, we propose to leverage the domain expertise afforded by large language models (LLMs). In our preliminary studies, we observe that LLMs can generate energy-efficient variations of basic linear algebra codes tailored to both ARM64 and AMD64 architectures, as well as unit tests and energy measurement harnesses. On toy examples suitable for classroom use, this approach reduces energy expenditure by 30-90%. These initial experiences give rise to our vision of LLM-based meta-compilers as a tool for students to transform high-level algorithms into efficient, hardware-specific implementations. Complementing this tooling, we will incorporate systems thinking concepts into the learning module so that students can reason both locally and globally about the effects of energy optimizations.
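As a concrete illustration of the kind of artifact the abstract describes, the sketch below (ours, not drawn from the paper) pairs two matrix-multiply variants, a textbook i-j-k loop and the cache-friendlier i-k-j reordering an LLM might plausibly propose, with a minimal energy-measurement harness that reads Linux's Intel RAPL counter through the powercap sysfs interface. The kernel choices, the problem size, and the RAPL path are assumptions for illustration only; reading the counter usually requires elevated permissions, and a real harness would handle counter wraparound and repeat runs.

/*
 * Minimal classroom-style sketch: measure the package energy consumed
 * by two matrix-multiply variants. Assumes Linux with Intel RAPL
 * exposed via the powercap sysfs interface; path and variants are
 * illustrative, not the authors' artifacts.
 */
#include <stdio.h>

#define N 512
#define RAPL_PATH "/sys/class/powercap/intel-rapl:0/energy_uj"

static double A[N][N], B[N][N], C[N][N];

/* Baseline: textbook i-j-k loop order (strided access to B). */
static void matmul_naive(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            double sum = 0.0;
            for (int k = 0; k < N; k++)
                sum += A[i][k] * B[k][j];
            C[i][j] = sum;
        }
}

/* Variant: i-k-j reordering; rows of B are traversed contiguously,
 * so fewer cache misses, which typically also means less energy. */
static void matmul_ikj(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            C[i][j] = 0.0;
    for (int i = 0; i < N; i++)
        for (int k = 0; k < N; k++) {
            double a = A[i][k];
            for (int j = 0; j < N; j++)
                C[i][j] += a * B[k][j];
        }
}

/* Read the cumulative package energy counter, in microjoules. */
static long long read_energy_uj(void) {
    FILE *f = fopen(RAPL_PATH, "r");
    long long uj = -1;
    if (f) { fscanf(f, "%lld", &uj); fclose(f); }
    return uj;
}

static void measure(const char *name, void (*kernel)(void)) {
    long long before = read_energy_uj();
    kernel();
    long long after = read_energy_uj();
    /* Counter wraps eventually; a robust harness would handle that. */
    printf("%s: %lld uJ\n", name, after - before);
}

int main(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            A[i][j] = (double)(i + j);
            B[i][j] = (double)(i - j);
        }
    measure("naive (i-j-k)", matmul_naive);
    measure("tuned (i-k-j)", matmul_ikj);
    return 0;
}

Running both variants under one harness lets students compare joules as directly as they would compare seconds, which is the measurement loop that the proposed learning module and the LLM-generated variants would build on.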

Original language: American English
Journal: arXiv
DOIs:
State: Published - Oct 30, 2024
