Regulatory bodies are putting increasing pressure on companies to address their environmental impact. Organizations that succeed in reporting and reducing their carbon footprint will gain a competitive advantage.
Carbon emissions are in the spotlight. In most countries around the world, regulatory bodies and new legislation are putting increasing pressure on companies to address their environmental impact. Large companies are not only asked to report their direct and indirect carbon emissions under Scopes 1 and 2 as defined by the GHG Protocol; they are also strongly encouraged to report their Scope 3 emissions (upstream and downstream emissions). Complying with this new standard can be burdensome. However, organizations that succeed in reporting all three scopes and anticipating future regulations by implementing sustainable IT initiatives will gain a competitive advantage. Each initiative counts. Let's review six of them:
Hardware Sentry reports the energy usage, cost, and carbon emissions of data centers, regardless of their location, in comprehensive Grafana dashboards
Most of a data center's carbon emissions come from its energy consumption. To estimate your carbon footprint, expressed as equivalent CO₂, you need to consider:
- How much energy is consumed to operate your data center (Total Energy Consumption). It is typically the electrical energy drawn from the utility grid.
- How much energy is used to run the IT equipment (IT energy usage). This information is often collected by installing PDUs and using DCIM tools. But deploying, manually configuring, and maintaining smart PDUs represent a significant cost. Few companies have been able to report their IT energy usage globally.
- How much CO₂ is produced per kWh of electricity (CO₂ Emission Density). This information, expressed in kg/kWh, varies depending on the country and the region where the data center is located. This data is publicly available on the electricityMap and the Emissions & Generation Resource Integrated Database (eGRID) web pages.
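Putting these inputs together, the footprint estimate is a simple multiplication. Here is a minimal Python sketch; the 1 GWh annual draw and the two grid densities below are purely illustrative figures, not measurements from any real data center:

```python
def carbon_footprint_kg(total_energy_kwh: float, co2_density_kg_per_kwh: float) -> float:
    """Equivalent CO2 (kg) emitted for a given energy draw and grid carbon density."""
    return total_energy_kwh * co2_density_kg_per_kwh

# Illustrative only: a 1 GWh annual draw at two hypothetical grid densities.
low_carbon = carbon_footprint_kg(1_000_000, 0.05)  # ~50 tonnes CO2e on a low-carbon grid
coal_heavy = carbon_footprint_kg(1_000_000, 0.70)  # ~700 tonnes CO2e on a coal-heavy grid
print(low_carbon, coal_heavy)
```

The same energy usage produces an order of magnitude more CO₂ depending on the grid, which is why the emission density of the data center's region matters as much as the consumption itself.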
Observing the carbon footprint of data centers may not seem easy at first, but solutions such as the Hardware Sentry OpenTelemetry Collector exist.
Hardware Sentry OpenTelemetry Collector is an IT infrastructure monitoring solution specialized in detecting hardware failures in servers, network switches, and storage systems. By reading internal sensors and leveraging a unique algorithm, it makes the carbon emissions of the IT infrastructure observable from the server to the application.
Switching to a more sustainable supplier or sourcing renewable energy can be a step toward reducing your Scope 2 emissions
Opting for a greener energy provider is one easy way to reduce the environmental impact of your data center: relying on low-carbon electricity will naturally lower your carbon footprint without requiring any reduction in your overall energy usage. However, the installation of additional “green” electricity sources (windmills, solar panels, etc.) is not driven by market demand but is usually controlled by local government policies. Energy prices are governed by the law of supply and demand, and higher demand for clean energy will simply drive its price up. Without political pressure to invest in green energy, this option might not be the most efficient in the long term.
Data center efficiency is measured by the Power Usage Effectiveness (PUE), which is calculated by dividing the total electrical power consumed by a data center by the power used solely by the IT equipment in that facility. This ratio typically ranges between 1.1 (very efficient) and 2.5 (inefficient). Because data centers emit a lot of heat, the PUE depends heavily on the energy spent cooling the server rooms. 100% of the energy consumed by a processor is dissipated as heat. If you consider the data center as a perfectly isolated system, then according to the laws of thermodynamics, 100 W of energy will be necessary to cool down a processor dissipating 100 W of heat and maintain a stable temperature. That is why the theoretical PUE is 2 when all other factors (e.g., batteries, lights, power distribution and conversion) are excluded.
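The PUE formula itself is a one-liner; the sketch below uses hypothetical power figures to illustrate both a fairly efficient facility and the theoretical cooling-only case described above:

```python
def pue(total_facility_power_kw: float, it_equipment_power_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT equipment power."""
    return total_facility_power_kw / it_equipment_power_kw

print(pue(1500, 1000))  # 1500 kW total for 1000 kW of IT load -> 1.5
print(pue(200, 100))    # theoretical case: 100 kW of cooling per 100 kW of IT heat -> 2.0
```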
Evidently, data centers are not perfectly isolated systems, and the energy required to maintain their temperature also depends on the outside temperature. One can intuitively assess that maintaining a temperature of 18°C in a facility full of servers continuously generating heat, in a region where the average outside temperature is 25°C, is going to require a lot of energy. Maintaining an internal ambient temperature of 19°C instead of 18°C will require less energy. Logically, to become more energy efficient, a data center must be operated at the highest possible temperature - while staying within a safe range to avoid electronic failures!
The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) recommends an ambient temperature of 18-27°C / 64-80°F for server rooms. Although the temperature can reach 27°C / 80°F (see Google's example), data center operators often overprovision cooling capacity to prevent unplanned outages due to overheating. Determining the right server room temperature is a daunting task: it usually requires a detailed audit covering every aspect that affects the temperature (surface area, cooling systems, air flows, humidity levels, etc.). Other solutions exist, though, to increase the temperature of the server room easily and safely: Hardware Sentry OpenTelemetry Collector
Example of a Grafana dashboard displaying all temperature sensors as a 'real' heat map
Hardware Sentry OpenTelemetry Collector helps increase the temperature of the server room without compromising its safety. Each temperature sensor is individually monitored, and its value is compared to the alert thresholds defined by the manufacturer. Calculating the heating margin of each monitored system is then simple: it is the number of degrees Celsius the temperature could theoretically be increased before triggering a warning alert.
Safely increase the temperature of your data centers with Hardware Sentry and know how close servers are to their thermal limit.
Aggregating all these metrics makes it possible to determine the ambient temperature of a server room and its heating margin, i.e., the theoretical number of degrees the ambient temperature could be increased while keeping all systems within a safe temperature range. In our example, even the warmest server room (Washington) could run warmer. Increasing the data center temperature by just 1°C reduces the energy required for cooling by 5%, which typically represents 2% energy savings on the overall data center. Increasing the temperature by 5°C will therefore save 10% of the overall data center energy usage and reduce its carbon footprint by 10%.
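The heating-margin logic can be sketched in a few lines of Python. The sensor names, readings, and warning thresholds below are made up for illustration, and the "1°C ≈ 5% of cooling energy ≈ 2% of overall energy" rule of thumb is the one quoted above:

```python
# Hypothetical sensor readings: (current value in °C, manufacturer warning threshold in °C).
sensors = {
    "cpu1":    (62.0, 85.0),
    "ambient": (21.0, 30.0),
    "disk0":   (38.0, 50.0),
}

def heating_margin(readings):
    """Degrees the room could theoretically warm before the tightest sensor alerts."""
    return min(threshold - value for value, threshold in readings.values())

margin = heating_margin(sensors)      # limited here by the ambient sensor: 9.0 °C
cooling_savings_pct = margin * 5      # ~5% of cooling energy saved per +1°C
overall_savings_pct = margin * 2      # ~2% of overall data center energy saved per +1°C
print(margin, overall_savings_pct)
```

The margin is driven by the single tightest sensor, which is why every sensor must be monitored individually: one hot component caps how far the whole room can safely be warmed.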
The analysis of carbon footprint information for servers shows that 91% of a server's carbon footprint occurs in the use phase, i.e., the emissions associated with the electricity the server consumes over its lifetime (3-4 years). Given how much the use phase contributes, opting for more energy-efficient servers can significantly reduce the overall footprint of your data center. If new servers can handle the workload while requiring 10% less power, their carbon footprint will logically be lower than that of less efficient servers running for 3 more years. An ENERGY STAR certified server, for instance, uses 30% less energy than conventional models and saves $60 to $120 in electricity annually.
Optimize asset utilization by reclaiming unused capacity
Data center surveys reveal that up to 30% of servers are only lightly used while consuming electricity 24/7. A server that is powered on, running only the operating system and handling zero workload, still consumes a significant amount of energy. Despite recent progress in optimizing power consumption at idle, constant energy is still needed for the disks, the network, the fans, and the processor.
To avoid this significant energy waste, data center operators should consolidate and remove unneeded hardware, notably through virtualization. According to the Uptime Institute, removing a single server can save $500 in energy, $500 in operating system licenses, and $1,500 in hardware maintenance costs annually. Server virtualization also cuts the energy bill by reducing the number of physical servers required: a university that virtualized only 35 physical servers saved more than $280,000 over 3 years. Dedicated capacity optimization tools can help administrators identify under-utilized systems and reorganize their infrastructure to handle the same workload with fewer physical systems.
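Using only the per-server figures from the Uptime Institute cited above, the annual savings from decommissioning idle servers is straightforward arithmetic (a sketch that ignores virtualization licensing and migration costs):

```python
# Annual savings per decommissioned server (USD), per the Uptime Institute figures above.
ENERGY, OS_LICENSES, HW_MAINTENANCE = 500, 500, 1500

def annual_savings_usd(servers_removed: int) -> int:
    """Yearly savings from removing unneeded physical servers."""
    return servers_removed * (ENERGY + OS_LICENSES + HW_MAINTENANCE)

print(annual_savings_usd(10))  # removing 10 idle servers -> 25000 USD/year
```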
Public cloud providers can offer higher efficiency than on-premises infrastructures
The top cloud providers (AWS, Google Cloud, Microsoft Azure) are ahead of the game on sustainability. They not only offer higher efficiency than on-premises infrastructures, they also provide tools to measure the carbon footprint of cloud usage, reducing the burden on I&O leaders who must report their environmental impact. For example, AWS's infrastructure is 3.6 times more energy efficient than the median U.S. enterprise data center, the average PUE across all Google data centers was 1.10 in 2021, and the Microsoft Azure cloud platform was estimated in 2018 to be up to 93% more energy efficient and up to 98% more carbon efficient than on-premises solutions. By moving more workloads to net-zero public cloud services, an organization can reduce its Scope 2 and Scope 3 emissions.
None of the major initiatives we have reviewed in this document will, on its own, get your organization to a carbon-neutral nirvana. Organizations should instead consider implementing a combination of IT solutions and best practices.
Hardware Sentry OpenTelemetry Collector, developed by Sentry Software, can help you understand where to start and how to immediately optimize your energy consumption and go greener while saving on electricity costs. Observability is a key step toward sustainable best practices. The Sentry Software solution closely monitors each piece of IT equipment in your data center: it reports power usage, cost, and carbon emissions, and provides temperature metrics to help you optimize the temperature of your server rooms without compromising IT service continuity. A clear awareness of your environmental impact will help you implement a comprehensive sustainable IT strategy with well-defined goals and target timelines.
Changes will not happen without leaders convinced of the importance of implementing them. Organizations need to make environmental issues and energy efficiency a priority and allow I&O leaders and data center operators to lead the way with the best IT practices!
From a KM for PATROL to a distribution of the OpenTelemetry Collector, discover the evolution of Hardware Sentry, a recognized IT infrastructure monitoring solution.
A quick walk down Memory Lane!
Hardware Sentry® has been detecting server hardware failures for BMC Software's customers since 2004 - time flies! At the time, the product was known as Hardware Sentry KM for PATROL, a module for BMC's monitoring agent: the PATROL Agent. For almost 20 years, Hardware Sentry has remained in this form throughout the various evolutions of BMC's monitoring ecosystem: ProactiveNet, Performance Management, and TrueSight.
Aiming for the cloud
Recently, BMC introduced BMC Helix Operations Management, a new SaaS platform based on open standards. BMC Helix is backward compatible and can consume the data collected by a PATROL Agent. However, the core components of BMC Helix (VictoriaMetrics and Grafana Dashboards) require a specific data format, which cannot be provided by a KM for PATROL without breaking the existing consoles and the views in TrueSight. We therefore decided to move away from the original PATROL design and embrace the new capabilities offered by the new Helix platform.
Considering that VictoriaMetrics is 100% compatible with Prometheus and that BMC Helix exposes a REST API endpoint for pushing metrics via the Prometheus Remote Write protocol, we designed the first alpha version of our “next-gen” monitoring product as a Prometheus exporter. Users just had to configure VictoriaMetrics' vmagent to scrape the exporter and push the metrics to BMC Helix. Apart from a few timing hiccups, it worked fine.
Embracing the OpenTelemetry revolution
During our research into simplifying the configuration and the architecture, solving those timing hiccups, removing the vmagent, and adding filtering and processing capabilities, we discovered OpenTelemetry (OTel) and its Collector. The OpenTelemetry Collector is a vendor-agnostic proxy that can receive telemetry data in multiple formats, filter and transform it, and export it to one or more back-ends. It is composed of three types of components:
- Receivers to get data into the collector.
- Processors to define how the received data is processed. OpenTelemetry processors help address availability and performance issues by reducing the number of outgoing connections to transmit data and by preventing memory errors.
- Exporters to define where to send the received and processed data. OpenTelemetry exporters provide powerful options for reporting telemetry data, allowing metrics from an instrumented application to be exported to third-party systems (Prometheus, BMC Helix, Datadog, Splunk, Dynatrace, etc.).
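Wiring the three component types together is done in the collector's YAML configuration. A minimal, illustrative pipeline might look like the following sketch; the receiver, processor, and exporter names are standard collector components, and the endpoint URL is a placeholder:

```yaml
receivers:
  otlp:                  # accepts telemetry pushed over the OTLP protocol
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  memory_limiter:        # guards the collector against out-of-memory errors
    check_interval: 1s
    limit_mib: 512
  batch:                 # groups data to reduce the number of outgoing connections

exporters:
  prometheusremotewrite:
    endpoint: https://example.com/api/v1/write   # placeholder back-end URL

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [prometheusremotewrite]
```

A single configuration file like this replaces the exporter-plus-vmagent setup: the collector receives, processes, and pushes the metrics itself, and adding a second exporter to the `exporters` list sends the same data to another back-end in parallel.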
Thanks to the OpenTelemetry Collector, we have full control over our data and can send it to multiple destinations in parallel through configuration alone. The tools, APIs, and SDKs are available for most programming languages (at Sentry Software, we opted for Java and Go), which provides high interoperability across different languages and environments. Because OpenTelemetry is backed by a large community, extensions are regularly added to the collector to provide more capabilities (health check, service discovery, data forwarding, etc.), delivering more value to our end users. And, icing on the cake, OpenTelemetry is adopted and natively supported by observability leaders such as BMC Software, Datadog, Grafana, Prometheus, New Relic, and Splunk.
Now that Hardware Sentry is based on OpenTelemetry and can help IT administrators report and optimize the carbon footprint of their servers, it becomes obvious that the solution should benefit the majority, not just BMC Helix and Prometheus users!
Meeting the challenge of carbon neutrality
Our developers were really excited by this new project. In a few months, they created a custom distribution of the OpenTelemetry Collector: the Hardware Sentry OpenTelemetry Collector. This first version focused on the ability to push metrics to Prometheus and BMC Helix. We also developed dashboards for Grafana and BMC Helix which leverage the collected data to report hardware issues in the monitored systems, as well as their electricity usage and CO₂ emissions.
Our team is now working on integrations with other major observability platforms that natively support OpenTelemetry, starting with Datadog and Splunk. As with Grafana and BMC Helix, we will provide dashboards and predefined alert rules to help users get the solution working out of the box, with minimal configuration.
Hardware Sentry KM was vendor-agnostic for the monitored systems (HP, Cisco, Dell EMC, Huawei, etc.). Now Hardware Sentry OpenTelemetry Collector is vendor-agnostic for the observability platform as well!
Achieving carbon neutrality while saying goodbye to vendor lock-in and embracing vendor-portability is now possible with Hardware Sentry OpenTelemetry Collector. Are you ready to dive in?
- Learn more about OpenTelemetry
- Download Hardware Sentry OpenTelemetry Collector [FREE]
- Read the Hardware Sentry OpenTelemetry Collector Documentation
If you have any questions about Hardware Sentry OpenTelemetry Collector, reach out to our experts.