[DKV] Tools to Improve Data Center Efficiency: A Survey and Reflections for the Small and Mid-Sized Business Data Center

 yi321yi 2019-05-30

Tooling to Improve Data Center Efficiency: A Survey and Considerations for the SMB Data Center


Original authors:

Shannon McCumber, Computer Science Department, University of Wisconsin-Parkside, USA

Susan J Lincke, Computer Science Department, University of Wisconsin-Parkside, USA

The original paper was published by IEEE in 2014.

INTRODUCTION

This is a survey of recent research on the opportunity to implement tools that increase data center efficiency and reduce the energy required to run data centers. This is important since data centers account for approximately 2% of total U.S. electricity consumption, a figure that has increased 65% since 2000 [25, 26]. Our concern is that tooling be available for smaller data centers, in addition to larger ones. These small centers need a lower cost solution that is easy for a regular IT person to understand.

Ganesh et al [2] propose an integrated approach to power management for the data center using two key concepts: ‘power proportional’ and ‘green data center’. These approaches focus on reducing the power consumption of disks and servers (power proportional) and of the support infrastructure such as cooling equipment, power backups and distribution units (green data center). Our paper focuses on these two areas by including a section for each area of improvement. In addition, we review comprehensive research tools, which implement a variety of techniques to green data centers.

We begin by reviewing general purpose data center efficiency metrics mentioned in these case studies and research. Power usage effectiveness (PUE) is a measure of how efficiently a data center uses its power. PUE is the ratio of total facility energy to the energy consumed by computing equipment:

PUE = TotalEnergyToFacility / ComputePwr      (1)
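
As a concrete illustration of equation (1), the minimal sketch below computes PUE from two metered readings. The function name, parameters and sample figures are illustrative, not taken from the paper.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy over IT equipment energy."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Illustrative meter readings (kWh over the same period):
print(pue(1_090_000, 1_000_000))  # 1.09, comparable to Google's best reported value
```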

PUE is the most common green metric used to evaluate the efficiency of a data center and in the ideal case this value approaches 1.0 [10]. Google has data centers that have measured PUE values as low as 1.09, and they stress its importance in optimizing data center efficiency [13]. A similar metric is the coefficient of performance (COP), which is the ratio of the heating or cooling provided over the electrical energy consumed.

Energy Reuse Effectiveness (ERE) is a green data center metric, which represents the amount of energy (or wasted heat) that can be reused by, for example, heat-activated cooling. ERE is measured as:

ERE = (TotalElec – EnergyReused) / ElecUsedByITequip      (2)
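
A comparable sketch for equation (2); again the names and sample figures are assumptions made for illustration.

```python
def ere(total_facility_kwh: float, energy_reused_kwh: float, it_equipment_kwh: float) -> float:
    """Energy Reuse Effectiveness per equation (2):
    (total facility energy - energy reused elsewhere) / IT equipment energy."""
    return (total_facility_kwh - energy_reused_kwh) / it_equipment_kwh

# Illustrative: a facility that exports 150,000 kWh of waste heat for reuse.
print(ere(1_500_000, 150_000, 1_000_000))  # 1.35; equals PUE when no energy is reused
```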

SERVER UTILIZATION AND POWER MANAGEMENT

The U.S. Department of Energy [15] suggests that most servers in the data center run at 20% or less utilization while drawing full power. Consolidating these underutilized resources is their ‘number one’ method to implement additional efficiency measures within existing data centers [14].

Reducing the number of devices in the data center contributes to the overall goal of data center efficiency. Rack servers consume the most IT energy in the average data center [15]. Virtualization allows multiple independent systems to run on a single physical server and can drastically reduce the number of servers in a data center. In a dedicated data center, one way to reduce the number of servers is to consolidate applications. Retiring legacy, redundant and underutilized applications will free up resources. In one example, application consolidation resulted in a 60% reduction in energy costs [14]. Additional energy savings can be realized by consolidating redundancies in the IT system and sharing resources such as: power supplies, CPUs, disk drives and memory [15].
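
To make the consolidation argument concrete, this hypothetical sketch estimates how many hosts remain if the existing load is repacked onto virtualized servers run at a higher target utilization; the utilization figures are illustrative, not from the cited studies.

```python
import math

def consolidated_server_count(current_servers: int,
                              avg_utilization: float,
                              target_utilization: float = 0.6) -> int:
    """Rough estimate of the hosts needed after packing the existing load
    onto virtualized servers run at a chosen target utilization."""
    total_load = current_servers * avg_utilization      # in "fully busy server" units
    return max(1, math.ceil(total_load / target_utilization))

# 100 hosts at the ~20% utilization cited by the DOE, repacked to run at 60%:
print(consolidated_server_count(100, 0.20, 0.60))  # about 34 hosts
```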

Reducing the direct consumption of power by the data center is one of the most effective and least expensive approaches to Green IT [14]. Typical data center servers are built to manage peak loads while sitting idle the majority of the time. Idle servers consume about 60% of peak power [28]. A power proportional approach aims to focus the processing load onto a subset of servers to allow the remaining idle resources to be powered down. Ganesh et al [2] suggest powering down “containerized data centers” which would include not only the servers but also the power distribution, backup, networking and cooling equipment. Another power management approach uses Energy Star or high-performance processors, which use performance states to control power consumption by varying the voltage of a component depending on the circumstances. Performance states that undervolt reduce the voltage used in a component, thus saving energy. This technique was first introduced for CPUs, but can also be used to control the active cache size and the number and/or operating rates of memory and I/O interconnects [22].
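
The effect of packing load onto fewer machines can be sketched with a simple linear power model in which an idle server draws roughly 60% of peak power, as cited above. The wattages and loads below are placeholders, not measurements from the referenced work.

```python
def server_power(utilization: float, idle_w: float = 120.0, peak_w: float = 200.0) -> float:
    """Linear power model: an idle server still draws ~60% of peak power."""
    return idle_w + (peak_w - idle_w) * utilization

def cluster_power(total_load: float, active_servers: int) -> float:
    """Power drawn when a load (in 'fully busy server' units) is spread evenly
    over the active servers; powered-down servers are assumed to draw nothing."""
    per_server = total_load / active_servers
    return active_servers * server_power(per_server)

# A load equal to 10 fully busy servers:
print(cluster_power(10, 50))  # spread thinly over 50 servers -> 6800 W
print(cluster_power(10, 15))  # packed onto 15 servers, 35 powered down -> 2600 W
```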

AIR FLOW

Since air conditioning is a major factor in data center inefficiency, optimizing air temperature and flow is important. Computational Fluid Dynamics (CFD) is a tool widely used during this effort. Blogs indicate that commercial tools price from $7500 (CoolSim) to $30,000 (Future Facilities 6Sigma).

Model Air Temperature

CFD tools are used to predict the flow field in a region of interest and the distribution of temperatures. CFD uses numerical methods and simulation to analyze air or fluid flow. A CFD model of the air flow in the current data center configuration is used as a baseline to compare against the results of modifications made. Servers, switches, UPSs, power distribution units (PDUs) and other components used in the data center each have a model. The flow fields that result from this simulation can be used to pinpoint hotspots, analyze new data center configurations and understand the distribution of workload to cooling equipment [23]. Google [13] used such data to design a solution using low cost materials such as meat locker curtains and sheet metal to contain hot and cold aisles.

A lower cost alternative is to simply measure rack inlet and outlet temperatures. The Supply Heat Index (SHI) is a metric which describes this convective heat transfer across a rack [3]. SHI is calculated using the inlet and outlet rack temperatures (InletT, OutletT), and the supply temperature (SupplyT), which is the temperature delivered by the air conditioning unit:

SHI = (InletT – SupplyT) / (OutletT – SupplyT)      (3)
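
The SHI calculation is simple enough to script directly from three temperature probes; the readings below are illustrative.

```python
def supply_heat_index(inlet_t: float, outlet_t: float, supply_t: float) -> float:
    """Supply Heat Index per equation (3): the temperature rise between the
    air-conditioning supply and the rack inlet, relative to the total rise
    from supply to rack outlet."""
    return (inlet_t - supply_t) / (outlet_t - supply_t)

# Illustrative readings in degrees C: supply 18, rack inlet 22, rack outlet 35.
print(supply_heat_index(inlet_t=22.0, outlet_t=35.0, supply_t=18.0))  # ~0.24
```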

Sharma et al [3] conducted a set of experiments that measured temperature distribution across a production data center facility using SHI calculations. The results of these calculations showed that the SHI can successfully be used to understand heat transfer and fluid flow and reduce the overall energy consumption of data centers [3].

Breen et al [6] developed a model which showed that increasing the air temperature supplied to each rack inlet resulted in potential gains in COP. The data center COP improves approximately 8% for every 5 degrees C increase. However, the guidance for colder temperatures and low relative humidity has been relaxed since these factors have less impact on server performance than once thought [16].
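
Taken at face value, the 8%-per-5-degrees figure can be turned into a quick estimate; whether the gain compounds or adds linearly over larger temperature changes is an assumption made here for illustration, not something the cited model states.

```python
def cop_gain(delta_t_c: float, gain_per_5c: float = 0.08) -> float:
    """Rough estimate of COP improvement, compounding ~8% per 5 degrees C."""
    return (1 + gain_per_5c) ** (delta_t_c / 5.0) - 1

print(f"{cop_gain(10):.1%}")  # raising the supply temperature by 10 C -> ~16.6%
```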

Hamann et al [4] used real time sensors to determine thermal zones within the data center. Air flow measurements were used to track air from different areas of the data center to the air conditioning units (ACU).

Marshall and Bemis [23] propose that, in addition to design and monitoring, CFD will in the future be used for real-time analysis of data center energy efficiency. To extend this concept, a physics-based modeling approach has shown that it is possible to create the desired air flow patterns that are believed to help optimize the energy efficiency within the data center [4].

Wider Temperature/Humidity Tolerances

Environmental control within the data center focuses on monitoring and controlling air temperature and humidity. Raising air temperature reduces energy use, but could lower reliability, availability, and equipment life expectancy. As a result of a set of experiments (including those described earlier), ASHRAE has widened the recommended range of temperatures within the data center to 18-27 degrees C for all four classes, with an allowable range of 5-45 degrees C for their class 4 equipment. The maximum recommended relative humidity (RH) is 60%, with an allowable maximum of 80% RH [16].
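
A small checker against the figures quoted above is easy to embed in a monitoring script. Only the temperature and humidity limits cited in the text are used; the full ASHRAE envelopes also involve dew point and per-class limits that are omitted in this sketch.

```python
def classify_environment(temp_c: float, rh_percent: float) -> str:
    """Compare a reading against the ASHRAE figures quoted in the text:
    recommended 18-27 C / <=60% RH; widest allowable (class 4) 5-45 C / <=80% RH."""
    if 18 <= temp_c <= 27 and rh_percent <= 60:
        return "within recommended range"
    if 5 <= temp_c <= 45 and rh_percent <= 80:
        return "outside recommended, within allowable range"
    return "outside allowable range"

print(classify_environment(24, 45))  # within recommended range
print(classify_environment(33, 70))  # outside recommended, within allowable range
```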

Breen et al [6] developed a model that described the “heat flow from the rack level to the cooling tower for an air cooled data center with chillers.” This model was then used to study the effects of varying the temperature supplied to the rack as well as the temperature rise across the rack. One case study extended the rack air inlet temperature (beyond ASHRAE recommendations) over the range 5-35 degrees C while holding the temperature rise across the rack constant. The second case held the rack air inlet temperature constant while varying the temperature rise across the rack from 5-35 degrees C. These studies [6] show that increasing either the rack air inlet temperature or the temperature rise across the rack yields improved energy efficiency. The higher temperatures were acceptable, but the benefit gained by increasing the temperature rise across the racks outperformed the benefit of increased temperature at the air inlet.

FREE AIR COOLING

Free Air Cooling employs air side economizers, which use outside air at a favorable temperature in lieu of chilling units to cool the air delivered to the servers. Hot air drawn away from the servers is then expelled outside instead of being recycled and chilled.

Facebook [19], Google [13] and Intel [17] data centers have all implemented this technology. Google uses free air cooling in all of its data centers; Facebook implemented free air cooling as part of a continued effort to achieve LEED gold certification; and Intel found that using free air cooling 91% of the time resulted in a 74% reduction in power consumed by the data center. Klein et al [24] agree that significant energy savings can be realized by implementing free air cooling. The use of sustainable energy sources to power the data center further drives green efficiencies and reduces the overall demand on non-renewable energy sources [18].

During their quest for Leadership in Energy & Environmental Design (LEED) Gold Certification, Facebook created the Open Compute project. “The Open Compute Project is a set of technologies that reduces energy consumption and cost, increases reliability and choice in the marketplace, and simplifies operations and maintenance.” [31] Facebook implemented evaporative cooling, which draws in air from the outside, filters it, and cools it by spraying a mist of purified water into the air, which cools it as it evaporates. The cooled air is then passed through another series of filters to ensure that the air is at the correct temperature and humidity before being delivered to the data center. This filtered outside air system may be turned off if the outside air is too hot, used directly if it is in the allowable temperature range, or diluted with data center air if the outside air is too cool. Initially, the air was cooled to 80.6 degrees F based on ASHRAE recommendations, and Facebook was preparing to make a change to 85 degrees F based on the results of their implementation at the time the video was made. In addition, the Facebook data center uses LED lighting, rain water for flushing the toilets, and other such measures to further reduce the overall energy consumption of its data centers [19].
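
The decision logic described above (bypass outside air when it is too hot, use it directly when it is in range, mix it with return air when it is too cool) can be sketched as follows; the thresholds are placeholders rather than Facebook's actual setpoints.

```python
def outside_air_mode(outside_temp_c: float, low_c: float = 18.0, high_c: float = 29.0) -> str:
    """Choose how to handle filtered outside air based on its temperature."""
    if outside_temp_c > high_c:
        return "bypass outside air; use evaporative/mechanical cooling"
    if outside_temp_c >= low_c:
        return "deliver filtered outside air directly"
    return "mix outside air with warm return air from the data center"

for t in (35.0, 22.0, 5.0):
    print(t, "->", outside_air_mode(t))
```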

Intel performed a proof of concept experiment, with an even wider range of temperatures than ASHRAE recommends, on 900 production servers in a high density/high utilization environment. Half of the servers were maintained as usual with chilling units as a control, while the other half used free air cooling. The half using free air cooling experienced a wide variation of temperatures (64 – 92 degrees F) and humidity levels (4-90% relative humidity), and the entire data center had a layer of dust settle over the equipment. In addition to a 74% reduction in energy consumption, the free air cooling side experienced no significant increase in server failure rates despite the widely variable environment that the servers were housed in [17].

Using CFD analysis of free air cooling, a case study by Gebrehiwat et al [27] shows that when strong server fans are in use, it may not be necessary to use blowers in the power and cooling modules. Conversely, Facebook has demonstrated the ability to reduce the use of the server fans by pressurizing the data center and increasing the wind power via large energy efficient fans, which distribute the cooler air [19]. Either way shows promise, and together they appear to suggest that using both at full scale may be an unnecessary energy drain.

COMPREHENSIVE TOOLS

These tools do not neatly categorize into one of the previous topics, since they optimize multiple areas. 

CoolEmAll [1] enables a comprehensive analysis of data center efficiency by providing a Simulation, Visualization, and Decision support toolkit (SVD Toolkit) integrating models of: application/server workload scheduling and power requirements, equipment characteristics, and cooling. CoolEmAll will enable data center designers and operators to analyze, simulate (via CFD), and optimize the efficiency of existing and planned data centers. The project will define data center building blocks that specify hardware, geometrical models, and projected energy efficiency metrics. It will also deliver a toolkit that can be used to analyze and optimize data centers that are built with these building blocks. CoolEmAll distinguishes its tool by optimizing low and variable loads in addition to peak loads. Similar to the Open Compute project, this one will also design pre-configured and ready-to-use components for data centers.

The EoD Designer [8] software also acts as both a planning and optimization tool for data centers, allowing the user to graphically design different data center configurations. It is coupled with a mathematical model to compute the total energy consumed by the data center. This mathematical model is based on six components of the data center: voltage transformer, uninterruptable power supply, IT equipment, computer room air handler, chiller and cooling tower. The mathematical model is integrated into the software tool, which can simulate different data center configurations and lead to a minimal total energy consumption within the data center. The tool ships with several default data center components, but also allows the operator to define custom components. Detailed reports can be generated and display data based on monthly or yearly time intervals.
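
In the spirit of EoD Designer's component-wise model, the sketch below sums the energy contributions of the six component types named above; the figures are placeholders, not values produced by the tool.

```python
# Placeholder annual energy figures (kWh) for the six modeled component types.
components_kwh = {
    "voltage_transformer": 20_000,
    "uninterruptable_power_supply": 45_000,
    "it_equipment": 600_000,
    "computer_room_air_handler": 110_000,
    "chiller": 150_000,
    "cooling_tower": 35_000,
}

total_kwh = sum(components_kwh.values())
print(f"total facility energy: {total_kwh} kWh")                        # 960,000 kWh
print(f"implied PUE: {total_kwh / components_kwh['it_equipment']:.2f}")  # 1.60
```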

GDCSim [9] identifies a full set of features that would represent a holistic data center simulator including: automated processing, online analysis, iterative design, thermal analysis, workload management and cyber-physical interdependency. GDCSim proposes a simulation tool that simulates the physical behavior of the data center under different management techniques. The tool will have three components: CFD simulator, resource management, and a simulator. In operation, the three components work together in an iterative fashion to design the data center and determine the ideal resource management plan. This functionality was demonstrated in two case studies, which continuously monitored the Energy Reuse Effectiveness (ERE) during the design iterations. Both case studies demonstrated that the tool had a positive influence on the ERE measured in the data center [9].

Goiri et al [5] developed a financial analysis tool to model power consumption for a virtualized data center. This tool optimizes resource management via scheduling, cost analysis and fault tolerance implementations on two experimental virtualized data centers. The scheduling algorithm they developed assesses each decision based on maximizing the data center provider’s profit. This algorithm uses the cost associated with power consumption as a variable in maximizing the provider’s profit, which correlates to reducing power consumption. The scheduling policy determines the host allocation for each virtualized machine, ensuring fault tolerance and determining an on/off cycle for each node. The researchers then compared their scheduling policy against commonly used ones and showed a 15% benefit to the provider’s profits when using their policy.
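
As a rough illustration of power-aware placement (not the profit-maximizing policy of Goiri et al), the sketch below packs virtual machines onto already-running hosts first, powering on a new host only when a VM will not fit, which keeps the number of energized nodes small.

```python
from typing import List

def place_vms(vm_cpu_demands: List[float], host_capacity: float = 1.0) -> List[List[float]]:
    """First-fit-decreasing placement: prefer hosts that are already powered on."""
    hosts: List[List[float]] = []            # each host holds a list of VM demands
    for demand in sorted(vm_cpu_demands, reverse=True):
        for host in hosts:
            if sum(host) + demand <= host_capacity:
                host.append(demand)          # fits on an already powered-on host
                break
        else:
            hosts.append([demand])           # power on a new host
    return hosts

placement = place_vms([0.5, 0.2, 0.7, 0.1, 0.4, 0.3])
print(f"{len(placement)} hosts powered on:", placement)
```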

ANALYSIS

There appears to be a gap in tools for businesses with small scale data centers [21]. These data centers have neither the expertise nor the money to pay for expensive CFD tools, nor do they have the resources for a complete redesign or for implementing costly, large scale equipment. IT personnel often lack the time and the industrial engineering expertise required to perform the thermal analysis aspects in particular.

One option is to consolidate small data centers into larger ones. This may present better options for optimization, such as through using hosted data centers and cloud computing. Massive data centers can optimize for best practices, such as those used by Facebook, Google and Intel.

However, not all businesses are prepared to move to this type of data center solution. For these businesses, a gap exists which is not solved by many of the above-mentioned tools, since CFD tools are expensive, complex to run, and protected by patents. Low cost improvements in airflow can include monitoring the temperature throughout the data center by placing thermometers in key locations and using sheet metal or meat locker curtains to contain or control warm air flow. The analysis could calculate the potential savings of in-rack air conditioning or free air cooling, and indicate permissible temperature ranges for various equipment types.

A database tool for smaller scale data centers could include efficiency metrics to compare against best in class, as well as modeling of virtual machines. In addition, small to mid-size data centers should have the ability to inventory, analyze and rationalize their application portfolio. Consolidation could occur when redundant applications and resources are found. Infrastructure upgrades and application consolidation could be analyzed for energy efficiency to justify infrastructure changes. A third feature would compare the energy consumption of existing infrastructure equipment versus newer Energy Star equipment, and the resulting costs.
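
The third feature amounts to a simple cost comparison; a hypothetical sketch, with placeholder wattages and electricity price:

```python
def annual_energy_cost(avg_power_w: float, price_per_kwh: float = 0.12) -> float:
    """Annual electricity cost of a device drawing avg_power_w continuously."""
    return avg_power_w / 1000 * 24 * 365 * price_per_kwh

existing = annual_energy_cost(450)     # existing server, ~450 W average draw
replacement = annual_energy_cost(300)  # newer Energy Star model, ~300 W
print(f"annual savings per server: ${existing - replacement:.2f}")  # ~$157.68
```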

CONCLUSION

The demand for data center storage is expected to continue to rise and thus the topic of data center efficiency will continue to be an important one. Google has set a high standard for how to approach data center efficiency. We have explored methods of efficiency and considered how they could be adapted for smaller data centers.

Translator:

何海

Elite member of the DKV (Deep Knowledge Volunteer) program

Engineer, China Aerodynamics Research and Development Center

Editor:

梁鸿雁

Director of the Secretariat, 中能测(北京)科技发展有限公司

