Check out this timely piece on how to reduce energy consumption inside and outside the data center from my colleague, Brian Lavallée. Brian is a Director of Technology & Solutions Marketing at Ciena.
Our never-ending hunger for more data and bandwidth has resulted in an unintended consequence – a substantial increase in global energy consumption. Increasingly, data centers have come under intense scrutiny from environmental groups because of their significant contribution to carbon emissions.
According to the Natural Resources Defense Council (NRDC), data centers in the United States used a total of 91 billion kilowatt-hours (kWh) of electricity in 2013 and are projected to use 139 billion kWh by 2020. Globally, data centers currently consume up to 3 percent of all electricity production while producing 200 million metric tons of carbon dioxide.
The migration to the cloud is driving the need for more data center capacity, which in turn is increasing energy consumption. And while many large data center, cloud, and telecom service providers are mindful of this growing problem, thousands of other business and government data centers, along with smaller corporate and multitenant operations, are not, and could do more to reduce their carbon footprint.
NRDC projections show a 53 percent increase in data center energy use over a seven-year period, but this trajectory is not inevitable. The trend can be reversed if organizations take appropriate action both inside and outside the data center.
Inside the Data Center
Inside the data center, network operators can adopt multiple strategies and tactics to reduce energy consumption. Here are just a few:
- Take inventory: Begin with an inventory of all IT assets to assess and understand current power usage patterns; determine the power cost per transaction and the number of transactions per kWh. The goal is to identify inefficiencies in existing power and cooling patterns, and the areas where changes will have the greatest impact on costs and power usage. Once these are established, prioritize which systems need to be upgraded, reconfigured, or removed.
- Consider DCIM tools: Embrace non-disruptive power-management tools to perform trend analyses. Data center infrastructure management (DCIM) tools can help companies make their data centers energy-efficient by providing a holistic view of the facility, spanning data center design, asset discovery, systems management, capacity planning, and energy management.
- Go green: Adopt the latest green innovations. For example, free cooling uses outside air or water to cool data center facilities versus using powered refrigeration or air-conditioning units. Free cooling requires more than simply keeping the data center’s windows open – data centers must have filters to catch dust particles that can harm server equipment. The filtered air from outside must then be treated to meet specific humidity levels; high humidity can lead to metal rust, while low humidity can create static electricity problems. Smart temperature controls, solar panels and wind energy can also help data center managers meet environmental requirements, while also cutting electricity costs.
- Address server inefficiencies: Servers are under constant strain to keep email, sensitive information, and bandwidth-rich files available, yet according to the NRDC, up to 30 percent of servers are running when they are not required. Known as “zombie” servers, many of these go completely unnoticed by data center managers. With millions of servers running at only 10 to 15 percent capacity, or at zero in the case of zombie servers, network operators can cut power usage significantly by addressing server inefficiencies. Virtualization is a strong, long-standing industry trend that helps minimize zombie servers by consolidating multiple workloads onto a single compute platform.
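The “take inventory” metrics above (power cost per transaction, transactions per kWh) amount to simple arithmetic once power and transaction data have been collected. A minimal sketch, with invented figures rather than real measurements:

```python
# Hypothetical sketch: deriving per-transaction energy metrics from an
# IT asset inventory. Power draws, transaction counts, and the tariff
# below are illustrative assumptions, not measured data.

def transactions_per_kwh(transactions: int, kwh_used: float) -> float:
    """Transactions completed per kilowatt-hour consumed."""
    return transactions / kwh_used

def cost_per_transaction(kwh_used: float, rate_per_kwh: float,
                         transactions: int) -> float:
    """Electricity cost attributable to a single transaction."""
    return (kwh_used * rate_per_kwh) / transactions

# Example inventory entry: a server farm that consumed 12,000 kWh
# while serving 3 million transactions, billed at $0.10/kWh.
tpk = transactions_per_kwh(3_000_000, 12_000)        # 250 transactions/kWh
cpt = cost_per_transaction(12_000, 0.10, 3_000_000)  # $0.0004/transaction
print(f"{tpk:.0f} transactions/kWh, ${cpt:.4f}/transaction")
```

Tracking these two numbers over time is what makes the inventory actionable: a falling transactions-per-kWh figure points at the assets to upgrade, reconfigure, or remove first.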
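The free-cooling constraints described above (outside air usable only within certain temperature and humidity bounds) can be expressed as a simple admission check. The numeric bounds below are illustrative assumptions, not ASHRAE or vendor figures:

```python
# A minimal sketch of a free-cooling admission check. Temperature and
# humidity limits are invented for illustration; a real facility would
# follow ASHRAE guidance and its own equipment specifications.

def free_cooling_ok(outside_temp_c: float, relative_humidity: float) -> bool:
    """Return True when outside air can be used directly (after
    filtering) instead of powered refrigeration or air conditioning."""
    temp_ok = 5.0 <= outside_temp_c <= 24.0          # cool enough, no frost risk
    humidity_ok = 20.0 <= relative_humidity <= 80.0  # low -> static, high -> corrosion
    return temp_ok and humidity_ok

print(free_cooling_ok(18.0, 45.0))  # True: mild air, mid-range humidity
print(free_cooling_ok(30.0, 45.0))  # False: too warm for free cooling
```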
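Flagging zombie and underutilized servers is largely a matter of comparing utilization samples against a threshold. A hedged sketch, where the threshold, per-server wattage, and server names are all assumptions:

```python
# Illustrative zombie-server hunt: flag servers whose average utilization
# sits below a threshold and estimate the power recoverable by retiring
# or consolidating them. Thresholds and wattages are assumptions.

IDLE_THRESHOLD = 0.05   # below 5% average utilization -> zombie candidate
SERVER_WATTS = 300      # assumed average draw per server

def find_zombies(utilization: dict) -> list:
    """Return names of servers below the idle threshold."""
    return [name for name, u in utilization.items() if u < IDLE_THRESHOLD]

# Hypothetical fleet with average utilization per server.
fleet = {"web-01": 0.42, "web-02": 0.11, "batch-07": 0.01, "legacy-03": 0.00}
zombies = find_zombies(fleet)
saved_kwh_per_year = len(zombies) * SERVER_WATTS * 24 * 365 / 1000
print(zombies, f"~{saved_kwh_per_year:.0f} kWh/year recoverable")
```

This is exactly the consolidation case for virtualization: the workloads that survive the cull can share fewer, better-utilized physical hosts.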
Outside the Data Center
There are also several factors outside the data center that have a direct influence on its energy footprint. Keep these in mind:
- Upgrade metro networks: The drive for efficiency is now expanding to the metro network, which serves as the highway between data centers and the end users who connect to them. As bandwidth-hungry applications proliferate, existing network architectures, which were never intended to aggregate such high-capacity connections, are struggling to handle the 10 Gigabit Ethernet (GbE) and 100GbE connections coming into the data center.
With unpredictable bandwidth demands, network operators need to ensure their networks can rapidly deliver high-capacity services, efficiently aggregate users, and provide express connections to data centers. Upgrading to a flexible and agile network architecture that responds well to today’s highly dynamic cloud-connected world will reduce power usage through optimized usage of network assets.
- Lay a foundation for network convergence: The strain created by the shift to cloud-based business models is felt most acutely in the metro network, where end users and content data centers are predominantly situated. Recognize that the network is a strategic, business-critical asset that will ultimately shape the financial viability of corporations across many market segments. Look for a service provider that is laying a foundation for network convergence to simplify metro network designs, lower operating costs, and increase agility. Ethernet-based metro networks are increasingly the technology of choice for simplifying overall network designs and delivering significant reductions in energy usage, and with them significant savings in operating expenses.
- Embrace synergies between metro and data center technologies: Data centers are all about packet-based connectivity with an emphasis on programmability, density, scalability, low cost, and low energy consumption. Metro networks are all about coherent optics, scalability, resilience, and operations, administration and management (OAM) to maintain the ongoing health of the network that can span hundreds of kilometers.
The goal is data center interconnect that aggregates 10GbE and 100GbE services onto coherent 100G DWDM wavelengths, enabling robust and scalable connectivity between data centers over metro Ethernet networks, while providing the Packet OAM capabilities needed to guarantee strict Service Level Agreements.
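The aggregation pressure described above can be made concrete with a back-of-envelope calculation: how many 100GbE uplinks a metro aggregation point needs for a given population of 10GbE clients. The oversubscription ratio below is an assumption, not a recommendation:

```python
# Back-of-envelope metro aggregation sizing. The 4:1 oversubscription
# ratio is an illustrative assumption; real designs depend on measured
# traffic profiles and service commitments.

import math

def uplinks_needed(client_ports_10g: int, oversubscription: float = 4.0) -> int:
    """100GbE uplinks required to aggregate N 10GbE client ports
    at the stated oversubscription ratio."""
    offered_gbps = client_ports_10g * 10
    return math.ceil(offered_gbps / (100 * oversubscription))

print(uplinks_needed(96))        # 96 x 10GbE at 4:1 -> 3 uplinks
print(uplinks_needed(96, 1.0))   # line rate, no oversubscription -> 10
```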
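Mapping mixed 10GbE and 100GbE services onto coherent 100G DWDM wavelengths is, at its simplest, a bin-packing problem. A first-fit-decreasing sketch with an invented service mix; real planning tools also weigh protection, latency, and OAM constraints:

```python
# Illustrative first-fit-decreasing packing of Ethernet services onto
# 100G DWDM wavelengths between data centers. The service mix is
# invented; this ignores protection, latency, and OAM constraints.

WAVELENGTH_GBPS = 100

def pack_services(service_gbps: list) -> list:
    """Each inner list is one 100G wavelength's worth of services."""
    waves = []
    for svc in sorted(service_gbps, reverse=True):
        for wave in waves:
            if sum(wave) + svc <= WAVELENGTH_GBPS:
                wave.append(svc)
                break
        else:
            waves.append([svc])
    return waves

# Five 10GbE services plus one 100GbE service fit on two wavelengths.
plan = pack_services([10, 10, 100, 10, 10, 10])
print(len(plan), plan)
```

Fewer lit wavelengths means fewer transponders drawing power, which is where the interconnect design feeds directly back into the energy story.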
Once you have made adjustments inside and outside the data center to be more energy efficient, conduct ongoing reviews of IT requirements and services to ensure continuous alignment with business goals. A quarterly review of the services delivered over the previous term, combined with ongoing discussion of future requirements, is a wise practice to ensure you are meeting current and future objectives.
This article originally appeared on the Ciena Insights blog.