These simple changes could make AI research much more energy efficient

Ever since the first paper studying the environmental impact of this technology was published three years ago, a movement has grown among researchers to self-report the energy consumption and emissions of their work. Having accurate numbers is an important step toward making changes, but actually collecting those numbers can be challenging.

“You can’t improve what you can’t measure,” says Jesse Dodge, a research scientist at the Allen Institute for AI in Seattle. “The first step for us, if we want to make progress in reducing emissions, is to get a good measurement.”

To that end, the Allen Institute recently partnered with Microsoft, the AI company Hugging Face, and three universities to release a tool that measures the electricity consumption of any machine learning program running on Azure, Microsoft's cloud service. It lets Azure users building new models view the total electricity consumed by graphics processing units (GPUs), the computer chips specialized for parallel computing, at every stage of their project, from selecting a model to training and deploying it. Microsoft is the first major cloud provider to give users access to information about the energy impact of their machine learning programs.

While tools already exist that measure the energy consumption and emissions of machine learning algorithms running on local servers, those tools don't work when researchers use cloud services offered by companies like Microsoft, Amazon, and Google. Those services don't give users direct insight into the GPU, CPU, and memory resources their activities consume, and existing tools such as Carbontracker, Experiment Impact Tracker, EnergyVis, and CodeCarbon need those values to make accurate estimates.
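For a sense of what this self-reporting looks like on a local machine, here is a minimal sketch using CodeCarbon's Python API, one of the tools named above. It assumes the package is installed and that the library can read the host's hardware power counters, which is exactly the information cloud instances typically hide; the training function is a placeholder.

```python
from codecarbon import EmissionsTracker

def train_model():
    # Placeholder standing in for a real training loop.
    return sum(i * i for i in range(10_000_000))

# CodeCarbon polls local hardware counters (e.g., NVIDIA GPUs via NVML,
# Intel CPUs via RAPL) to estimate energy use, then converts that energy
# to CO2 using regional carbon-intensity data. On a cloud VM those
# counters are often unavailable, which is the gap the Azure tool fills.
tracker = EmissionsTracker(project_name="demo")
tracker.start()
train_model()
emissions_kg = tracker.stop()  # estimated emissions in kg CO2-eq
print(f"Estimated emissions: {emissions_kg:.6f} kg CO2-eq")
```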

The new Azure tool, which debuted in October, currently reports energy consumption, not emissions. So Dodge and other researchers figured out how to map energy use to emissions, and they presented an accompanying paper on that work at FAccT, a major computer science conference, in late June. The researchers used a service called WattTime to estimate emissions based on the zip codes of cloud servers running 11 machine learning models.
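The underlying arithmetic is straightforward: emissions equal energy consumed multiplied by the carbon intensity of the local grid at the moment the energy is drawn, which varies by region and by hour. The sketch below illustrates that mapping; `get_carbon_intensity` is a hypothetical stand-in for a lookup against a service like WattTime, not its actual API, and the numbers are illustrative.

```python
from datetime import datetime, timedelta

def get_carbon_intensity(region: str, when: datetime) -> float:
    """Hypothetical lookup of grid carbon intensity in gCO2 per kWh.

    A real implementation would query a service such as WattTime for the
    grid region serving the data center at the given time.
    """
    # Toy values: this fictional grid is cleaner overnight.
    return 250.0 if 0 <= when.hour < 6 else 420.0

def emissions_grams(hourly_kwh: list[float], region: str,
                    start: datetime) -> float:
    """Sum each hour's energy draw weighted by that hour's carbon intensity."""
    total = 0.0
    for hour, kwh in enumerate(hourly_kwh):
        when = start + timedelta(hours=hour)
        total += kwh * get_carbon_intensity(region, when)
    return total

# Example: a 4-hour training job drawing roughly 1.2 kWh per hour,
# started at 10 p.m. to catch the cleaner overnight grid.
job = [1.2, 1.2, 1.2, 1.2]
print(emissions_grams(job, "US-WEST", datetime(2022, 6, 1, 22)))
```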

They found that emissions can be significantly reduced if researchers use servers in specific geographic locations and at certain times of day. Emissions from training small machine learning models can be reduced by up to 80% if training starts at times when more renewable electricity is available on the grid, while emissions from large models can be reduced by more than 20% if training pauses when renewable electricity is scarce and resumes when it is abundant.
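A minimal sketch of that pause-and-resume strategy might look like the loop below. It assumes training is checkpointable in small steps and reuses the same hypothetical carbon-intensity lookup as the earlier sketch; the threshold and polling interval are illustrative choices, not values from the paper.

```python
import time
from datetime import datetime

CARBON_THRESHOLD = 300.0    # gCO2/kWh; pause training above this (illustrative)
CHECK_INTERVAL_S = 15 * 60  # re-check the grid every 15 minutes

def get_carbon_intensity(region: str, when: datetime) -> float:
    # Same hypothetical WattTime-style lookup as in the earlier sketch.
    return 250.0 if 0 <= when.hour < 6 else 420.0

def train_one_step(state: dict) -> bool:
    """Placeholder for one checkpointable unit of training work."""
    state["step"] += 1
    return state["step"] >= state["total_steps"]  # True when finished

def carbon_aware_training(state: dict, region: str = "US-WEST") -> None:
    # Run training only while the grid is relatively clean; otherwise
    # sleep and poll until renewable supply pushes intensity back down.
    done = False
    while not done:
        if get_carbon_intensity(region, datetime.now()) > CARBON_THRESHOLD:
            time.sleep(CHECK_INTERVAL_S)  # pause: grid is dirty right now
            continue
        done = train_one_step(state)  # resume from the last checkpoint

carbon_aware_training({"step": 0, "total_steps": 100})
```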
