Pathways to Sustainable Clouds

By Nicola Peill-Moelter, Ph.D., Director of Sustainability Innovation at VMware

SOURCE: VMware

What if brick-and-mortar companies had predicted the rise of online shopping? Or the taxi lobby had predicted ride sharing? What if cybersecurity firms had predicted ransomware? What would they have done differently to be better positioned for the market shifts and disruption that were to come?

Unlike those examples, however, we know a paradigm shift to a low-carbon economy is coming, based on decades of research and analysis by climate scientists, economists, and policymakers. Climate change poses an existential threat to our planet, and our collective economies must step up and meet the moment. According to the latest Intergovernmental Panel on Climate Change report (the Sixth Assessment), global emissions must be roughly halved by 2030 and brought to net zero by 2050 to avoid the worst impacts. It's a tall order, one that will require sacrifice, collaboration, and strategic planning. Every day, we see more evidence of customers, competitors, governments, investors, and consumers committing to action. As we move toward this goal together, carbon will be a first-class metric, and inefficiency and waste will no longer be acceptable byproducts. What role can and must VMware play in helping our customers accelerate this transition?

In Part I of this blog series, I laid the foundation for sustainable computing: what it is and why it's the next frontier for innovation. In this post, I delve into strategies for achieving sustainable computing by reducing workload energy and increasing carbon efficiency. In the final post of this three-part series, I will share some of the innovative sustainability projects we're working on at VMware.

Ultimately, sustainable computing is about minimizing the energy and carbon associated with running workloads. While datacenter efficiency is important, the industry figured out long ago how to support 1 watt of IT load with less than 0.1 watt of datacenter overhead (a power usage effectiveness, or PUE, near 1.1). Workloads are where the new opportunities for innovation lie. Workload energy efficiency means minimizing the on-premises and public-cloud infrastructure required to run workloads while still meeting their business requirements. Workload carbon efficiency means managing when and where workloads run so they can leverage low-carbon electricity. Let's dive a little deeper into each of these concepts.

Strategy for achieving workload energy efficiency

Workload energy efficiency means minimizing the energy required to run workloads on the IT infrastructure housed in datacenters. There are four components to achieving it:

  1. Making energy visible
  2. Maximizing productive host utilization
  3. Operating energy-efficient IT hardware
  4. Designing compute-efficient applications

Making energy visible

Management thought leader Peter Drucker famously said that we can't manage what we don't measure. So it makes sense that to achieve workload energy efficiency, we need energy metrics to inform how we manage workloads at the container, host, and datacenter levels. For example, if you want to improve your home's energy efficiency, you first need to know how much energy each appliance uses. Next, you determine how to improve that consumption, perhaps by swapping an older appliance for a more efficient one, or by changing how you use it, like turning off lights when they are not needed or running the dishwasher only when it's full. For a host (server), energy consumption reflects how heavily workloads use its compute resources, such as CPU, memory, and disk. Just as in our homes, making container and host energy visible enables benchmarking we can act on, and that visibility informs the management and optimization strategies we explore in more detail below.
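
To make this concrete, here is a minimal sketch in Python of how host- and VM-level energy might be estimated. It assumes a simple linear power model (idle-to-peak interpolation over CPU utilization); the wattage constants, VM names, and utilization figures are hypothetical, and real hosts would report measured power through platform sensors rather than a model.

```python
# A minimal sketch of making energy visible, assuming a linear power
# model. All constants below are hypothetical, not measurements.

IDLE_WATTS = 110.0   # assumed power draw of an idle host
PEAK_WATTS = 320.0   # assumed power draw at full load

def host_power_watts(cpu_utilization: float) -> float:
    """Estimate host power from overall CPU utilization (0.0 to 1.0)."""
    return IDLE_WATTS + (PEAK_WATTS - IDLE_WATTS) * cpu_utilization

def attribute_power(vm_cpu_use: dict[str, float],
                    cpu_utilization: float) -> dict[str, float]:
    """Split the estimated host power across VMs in proportion to CPU use."""
    total = sum(vm_cpu_use.values()) or 1.0
    watts = host_power_watts(cpu_utilization)
    return {vm: watts * use / total for vm, use in vm_cpu_use.items()}

# Example: a host at 40% CPU utilization running three VMs
print(attribute_power({"web": 0.20, "db": 0.15, "batch": 0.05}, 0.40))
```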

Maximizing host utilization

Before virtualization, the best practice was to run one application per physical server. As a result, servers typically ran at only 5-15% utilization. This gross underutilization translated into massive energy waste, with both financial and environmental costs. Virtualization enables consolidation, allowing many workloads to share a server at much higher utilization, and that consolidation has drastically reduced global datacenter electricity consumption. However, because many servers today still run at only 20-25% utilization, there is significant room for improvement. Key opportunities for innovation include:

  1. Enabling “cloud-sharing” that puts spare capacity to productive use by transient and non-time-sensitive workloads.
  2. Recouping stranded capacity from oversized virtual machines, containers, and servers that no longer do useful work (sometimes called “zombies”).
  3. Leveraging hybrid cloud bursting to the public cloud for on-demand peak and backup capacity, enabling customers to shrink their on-premises infrastructure and run what remains at higher utilization.

Innovations like these would deliver both productivity and sustainability gains while still meeting performance and availability requirements.
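
As a hedged illustration of the second opportunity, the sketch below flags likely zombie and oversized VMs from their CPU utilization history. The 2% and 25% thresholds and the observation window are illustrative assumptions, not recommendations; a real rightsizing tool would also weigh memory, storage, and network use.

```python
# A sketch of recouping stranded capacity: flag likely "zombie" VMs
# (persistently idle) and oversized VMs from CPU utilization history.
# Thresholds are illustrative assumptions.

def classify_vm(cpu_history: list[float], vcpus: int) -> str:
    """Classify a VM from recent CPU utilization samples (0.0 to 1.0)."""
    peak = max(cpu_history)
    if peak < 0.02:                  # essentially idle for the whole window
        return "zombie-candidate"
    if vcpus > 1 and peak < 0.25:    # never comes close to its allocation
        return "oversized-candidate"
    return "ok"

# Example: 30 daily-average samples per VM
print(classify_vm([0.01] * 30, vcpus=4))              # -> zombie-candidate
print(classify_vm([0.10, 0.15, 0.12] * 10, vcpus=8))  # -> oversized-candidate
```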

Operating energy-efficient hardware

Thanks to technology innovations like solid-state drives and the advances in chip manufacturing behind Moore's Law and Dennard scaling, each generation of computers is more computationally powerful at roughly the same energy consumption. Just think of your power-hungry 400-watt desktop computer and monitor from the '90s (assuming you are old enough!) compared to your sub-50-watt laptop today. Although Moore's Law and Dennard scaling are waning, new innovations such as die stacking, magnetic RAM, and denser NAND flash are on the horizon to keep improving hardware efficiency. Newer hardware can support significantly more workloads for the same energy cost, so hardware refresh cycles of three to four years can significantly reduce the total energy consumed by the hardware and the datacenter, freeing up space and power for new workloads.

There is some complexity in weighing the operational carbon benefit of upgrading to new IT hardware against the carbon impact of manufacturing that new hardware. For example, a 2020 study showed that the carbon emissions of Facebook's Prineville datacenter operations (IT plus datacenter infrastructure) were almost twice the emissions from manufacturing that equipment (its "embodied carbon"). When renewable energy was included in the analysis, however, the outcome flipped: manufacturing carbon was four times the operational carbon. Nonetheless, it's important to drive energy efficiency wherever possible to reduce our overall demand for electricity until it's 100% renewable everywhere.
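
A back-of-the-envelope sketch shows how this trade-off can be reasoned about. Every number below is a hypothetical placeholder, not a measurement; the point is that a new server's embodied carbon must be "paid back" by its operational savings, and the payback period depends heavily on how clean the local grid is.

```python
# Hypothetical refresh trade-off: years for operational carbon savings
# to offset the embodied carbon of a replacement server.

EMBODIED_KG_CO2 = 1300.0    # assumed embodied carbon of one new server
OLD_AVG_WATTS = 350.0       # assumed average draw of the server replaced
NEW_AVG_WATTS = 250.0       # assumed average draw of the replacement
GRID_KG_PER_KWH = 0.4       # assumed grid carbon intensity

savings_kw = (OLD_AVG_WATTS - NEW_AVG_WATTS) / 1000.0
kg_saved_per_year = savings_kw * 24 * 365 * GRID_KG_PER_KWH
print(f"payback: {EMBODIED_KG_CO2 / kg_saved_per_year:.1f} years")
# With these numbers: 0.1 kW * 8760 h * 0.4 kg/kWh ~ 350 kg/yr,
# so roughly 3.7 years to offset the embodied carbon. On a mostly
# renewable grid, the payback stretches out and reuse looks better.
```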

Another consideration for upgrading hardware is the environmental impact of disposing of the old hardware. Unfortunately, there’s no good answer today, other than the responsible recycling and disposal of these hazardous materials. This presents us with the opportunity to create new ways to extend the life of IT equipment. Examples might include making hardware components more modular and upgradeable via server disaggregation and composable architectures. As an aside, my dream is for computers to be fully compostable!

Designing compute-efficient applications

Compute-efficient applications are the focus of the emerging practice of sustainable software engineering, in which applications are designed, architected, coded, and tested to minimize their use of CPU, memory, network, and storage. Mobile-phone applications are a good example: phones have limited battery power, so the best-designed apps are built to minimize battery consumption. The Green Software Foundation has a working group researching and developing tools, code, libraries, and training for building compute-efficient applications. It also has a working group developing a Software Carbon Intensity specification to help users and developers make informed choices about their tools, approaches, and architectures.
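
As a rough illustration of what the Software Carbon Intensity specification measures, here is a minimal sketch following the formula in the foundation's published draft: energy consumed times grid carbon intensity, plus amortized embodied carbon, per functional unit. The input values are hypothetical.

```python
# A minimal sketch of a Software Carbon Intensity calculation,
# SCI = ((E * I) + M) / R, per the Green Software Foundation's draft
# spec: E = energy (kWh), I = grid intensity (gCO2/kWh), M = embodied
# carbon (g), R = functional units. Inputs below are hypothetical.

def sci(energy_kwh: float, intensity_g_per_kwh: float,
        embodied_g: float, functional_units: float) -> float:
    """Carbon per functional unit, in grams of CO2-equivalent."""
    return (energy_kwh * intensity_g_per_kwh + embodied_g) / functional_units

# Example: 1.2 kWh and 40 g of amortized embodied carbon for 10,000 requests
print(f"{sci(1.2, 400.0, 40.0, 10_000):.3f} gCO2e per request")  # -> 0.052
```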

Strategy for achieving workload carbon efficiency

Up to this point, I’ve focused on the factors involved in making workloads more energy efficient, which also confers carbon-efficiency benefits. Now, let’s look at making workloads more carbon-efficient by leveraging less carbon-intensive electricity. There are three components to workload carbon efficiency:

  1. Renewable-energy-powered datacenters
  2. Workload placement and scheduling
  3. Carbon-aware workloads

Renewable-energy-powered datacenters

The most obvious component of carbon-efficient workloads is renewable-energy-powered datacenters. Even after we realize energy-efficiency and productivity gains, reaching zero-carbon operations requires powering the remaining workloads with renewable energy. Today, the renewable share of the electricity mix varies regionally, from 100% in Iceland to about 20% in the U.S. to zero in Bahrain. But many companies with large datacenter operations aren't waiting for electric utilities to convert from fossil-fuel generation to renewables. Instead, they are procuring renewable energy themselves and setting goals to achieve 100% renewable-powered operations by the end of the decade. In the U.S. in recent years, corporate procurement of renewable energy has exceeded that of electric utilities. This private demand, coupled with the ever-increasing cost-competitiveness of renewable energy, is accelerating grid deployments globally; electricity is now one of the fastest-decarbonizing sectors. Recognizing the environmental impact of their industry, many public cloud providers are committing to 100% renewable-powered operations by 2030. In fact, earlier this year at VMware, we launched an initiative called "Zero Carbon Committed" to connect customers looking for low-carbon public cloud providers to meet their supply-chain sustainability goals with VMware cloud providers committed to zero-carbon clouds by 2030. To date, 23 providers are Zero Carbon Committed.

Workload placement and scheduling

A less obvious component of workload carbon efficiency is placement and scheduling: when and where workloads run. A key characteristic of the electricity that powers datacenter workloads is its carbon intensity, the weighted average of the carbon emitted in generating that electricity across all generators on the grid. Emissions vary from near zero for wind, solar, hydro, and nuclear plants to very high for coal and natural gas plants (e.g., 500 kg CO2/MWh). Because the mix of generators contributing electrons, and how much each generates, varies from moment to moment, a grid's carbon intensity varies over time. Integrating carbon intensity as an optimization factor into workload management can therefore significantly reduce system carbon emissions.
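
To make the weighted-average definition concrete, here is a small sketch using a made-up generation mix; the emissions factors are illustrative ballparks, and real figures come from grid operators and grid-data services.

```python
# Weighted-average grid carbon intensity from a (hypothetical)
# generation mix at one moment in time.

def grid_intensity(mix_mwh: dict[str, float],
                   kg_per_mwh: dict[str, float]) -> float:
    """Weighted-average carbon intensity (kg CO2/MWh) of the current mix."""
    total = sum(mix_mwh.values())
    return sum(mwh * kg_per_mwh[src] for src, mwh in mix_mwh.items()) / total

mix = {"wind": 300.0, "solar": 100.0, "gas": 500.0, "coal": 100.0}   # MWh
factors = {"wind": 0.0, "solar": 0.0, "gas": 450.0, "coal": 1000.0}  # illustrative
print(f"{grid_intensity(mix, factors):.0f} kg CO2/MWh")  # -> 325
```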

For workloads that are not latency-sensitive or geographically restricted, the management system can determine when and where to run them based on when and where electricity is cleanest, for example by delaying a workload or running it in an alternate datacenter. This idea isn't far-fetched: low-carbon sources already supply more than a third of global electricity generation, and their share keeps growing. In aggregate, workload placement and scheduling could help reduce demand for carbon-intensive electricity. Longer term, managing datacenter workload demand could also improve the economics and stability of the electricity grid by helping balance electricity demand and supply.
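
Here is a minimal sketch of the placement half of this idea, assuming the management system can query the current grid carbon intensity for each candidate region. The region names and values are made up; in practice they would come from a grid-data service.

```python
# Carbon-aware placement sketch: among the datacenters a workload is
# allowed to run in, pick the one whose grid is currently cleanest.
# Regions and intensities (gCO2/kWh) are hypothetical.

def pick_region(allowed: list[str], intensity: dict[str, float]) -> str:
    """Choose the allowed region with the lowest current carbon intensity."""
    return min(allowed, key=lambda region: intensity[region])

current_intensity = {"us-east": 410.0, "us-west": 220.0, "eu-north": 45.0}
print(pick_region(["us-east", "us-west"], current_intensity))  # -> us-west
```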

Carbon-aware workloads

Carbon-aware workloads are what make carbon-optimized placement and scheduling possible. Quality-of-service requirements, such as latency sensitivity, geographic restrictions, and mission criticality, can be communicated back to the management system, which can then identify and prioritize the workloads with the flexibility to shift their scheduling or placement. The Green Software Foundation has a working group developing an SDK to enable carbon-aware applications.
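
As a sketch of how such requirements might be expressed, the hypothetical metadata below lets a scheduler separate flexible workloads from fixed ones. The field names and workloads are assumptions for illustration, not any actual SDK.

```python
# Hypothetical workload metadata a scheduler could use to decide which
# workloads can shift in time (deadline) or space (multiple regions).

from dataclasses import dataclass, field

@dataclass
class Workload:
    name: str
    latency_sensitive: bool = False
    allowed_regions: list[str] = field(default_factory=list)  # empty = anywhere
    deadline_hours: float = 0.0   # 0 = must run immediately

    @property
    def is_flexible(self) -> bool:
        """Flexible workloads can be moved in space or delayed in time."""
        return not self.latency_sensitive and (
            len(self.allowed_regions) != 1 or self.deadline_hours > 0)

jobs = [Workload("checkout-api", latency_sensitive=True),
        Workload("nightly-report", deadline_hours=12.0)]
print([job.name for job in jobs if job.is_flexible])  # -> ['nightly-report']
```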

As we can see, there are pathways to zero-carbon clouds that can help accelerate the coming transition to a low-carbon economy. Innovations that maximize the productive use of cloud infrastructure will bring significant economic and environmental benefits. And managing workloads to use the cleanest energy can help stabilize the grid and provide lower-cost electricity. Some of these innovations can leverage existing capabilities. Others will require the maturation and adoption of emerging capabilities, such as hybrid cloud bursting to provide on-demand capacity for peak loads. VMware is perfectly positioned as a multi-cloud solutions provider to lead the way. Companies, investors, and governments are counting on it. Stay tuned for the final installment of this blog series, where we’ll look more closely at some of the sustainability innovation happening at VMware.
