A New Approach to Edge Data Center Cooling

Cooling Edge Data Centers: It Requires More Thought

BY VOICES OF THE INDUSTRY - NOVEMBER 18, 2020

Massive color-coded pipes move water to and from the cooling equipment plant at the RagingWire Data Center campus in Sacramento, Calif.

Jon Benson, Director of Technology and Solutions; Jack Kolar, VP of MC/MUS Sales; and Abhishek Banerjee, Senior Project Application Engineer, from TAS Energy highlight a variety of different approaches to cooling for edge data centers, from direct expansion cooling systems to chilled water solutions, free cooling, and more.

The Setting

For many years, data center industry visionaries predicted that 2020 would bring us the ultimate data world: 5G everywhere, 8K video streaming, billions of interconnected devices in a massive Internet of Things, self-driving vehicles, and more.

The promise of this ultimate data world drove these same visionaries to predict the need for tens of thousands of small edge data centers installed at the edge of the internet, mainly at the foot of every cell tower and on many street corners.

This predicted demand triggered a rush to bring edge data center products into the market, which drove the decision-making process to focus on form factor and speed to market.

This focus led to questions like: "How do we fit the most equipment into the smallest space possible?" and "How do we get this product to market quickly?"

In contrast, the decision-making process used for very large colocation and hyperscale data centers considers many elements down to the fine details. These elements include providing flexibility for a wide array of power density requirements, planning a resilient power distribution solution, devising an efficient and effective cooling solution, constructing a robust and safe building, delivering the operation and maintenance of the data center systems, and more.

In edge data centers, these elements are often relegated to the simplest solution or the smallest form factor. The element often given the least consideration is the cooling solution.

There may be five or ten or more different cooling solutions considered for a colocation or hyperscale data center, where only one is considered for an edge data center. Cooling is often an afterthought, which is why in most cases "attachment" type solutions are sought after, as opposed to "integrated" type solutions.

DX, The "Common" Solution

The most common cooling solution for edge data centers today is the "direct expansion" (DX) cooling system. This cooling solution comes in form factors ranging from a high-end CRAC unit down to a simple home air conditioning unit. The solution has several advantages: a lower capital cost, it works in almost any climate, it uses zero water, and it can be serviced and maintained by almost any provider found online. It also has several disadvantages: it has the lowest operating efficiency (PUE = 1.25-1.50), it provides limited flexibility for future load growth (a 10 kW unit will always be a 10 kW unit), and it occupies a lot of space.
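
PUE (power usage effectiveness) is total facility energy divided by IT energy, so everything above 1.0 is overhead, most of it cooling in a small edge site. The short Python sketch below is an illustration, not from the article; the 10 kW site size is an assumption. It translates the quoted DX band of PUE 1.25-1.50 into overhead energy:

```python
# Minimal sketch: what the quoted DX PUE band means in overhead energy.
# PUE = total facility energy / IT energy, so (PUE - 1) * IT load is the
# power spent on cooling and other overhead.

def overhead_kw(it_load_kw: float, pue: float) -> float:
    """Power consumed by cooling and other overhead at a given PUE."""
    return it_load_kw * (pue - 1.0)

IT_LOAD_KW = 10.0        # hypothetical small edge site
HOURS_PER_YEAR = 8760

for label, pue in [("DX, best case", 1.25), ("DX, worst case", 1.50)]:
    oh = overhead_kw(IT_LOAD_KW, pue)
    print(f"{label}: PUE {pue:.2f} -> {oh:.1f} kW overhead, "
          f"{oh * HOURS_PER_YEAR:,.0f} kWh/year")
```

At the worst-case end of the band, overhead energy is double the best case: 43,800 kWh per year versus 21,900 kWh for the same 10 kW of ITE.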

As the industry continues to redefine edge data centers, now spanning a spectrum from 10 kW to 10,000 kW, the efficiency, effectiveness, resiliency, and flexibility of the cooling solution take on greater importance.

This increased importance raises new questions:

• Will the edge data center be staffed or "lights out"?

• Which cooling solutions can be supported with qualified service and maintenance at the location(s)?

• What level of flexibility for average and concentrated power density is required?

• What capability for power load growth is considered? What is the expected increase over a period? Is that time defined?

• What operational efficiency is required?

This elevated importance means more in-depth thought regarding edge data center cooling solutions, and these questions will point us in the right direction.
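
One way to make these questions actionable is to capture them as a requirements record and use it to screen candidate solutions. The sketch below is hypothetical: the field names, thresholds, and shortlist logic are illustrative assumptions keyed to the density and PUE figures quoted elsewhere in this article, not a method proposed by the authors.

```python
# Hypothetical requirements record for the screening questions above.
# Thresholds mirror the density/PUE bands quoted in the article; the
# field names and the screen itself are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class EdgeSiteRequirements:
    staffed: bool                  # staffed or "lights out"?
    local_service_available: bool  # qualified service/maintenance nearby?
    avg_density_kw: float          # average per-cabinet power density
    peak_density_kw: float         # concentrated (peak) cabinet density
    growth_factor: float           # expected load growth over the horizon
    growth_horizon_years: float    # is that time defined?
    target_pue: float              # required operational efficiency

def shortlist(req: EdgeSiteRequirements) -> list[str]:
    """Rough screen using the density and PUE bands quoted in the article."""
    options = []
    if req.avg_density_kw <= 10 and req.target_pue >= 1.25:
        options.append("direct expansion (DX)")
    if req.avg_density_kw > 10 or req.peak_density_kw > 10 or req.target_pue < 1.25:
        options.append("distributed refrigerant / chilled water")
    if req.peak_density_kw >= 40 or req.target_pue <= 1.05:
        options.append("direct-to-chip / immersion liquid cooling")
    return options

site = EdgeSiteRequirements(staffed=False, local_service_available=True,
                            avg_density_kw=12.0, peak_density_kw=35.0,
                            growth_factor=1.5, growth_horizon_years=3.0,
                            target_pue=1.15)
print(shortlist(site))  # -> ['distributed refrigerant / chilled water']
```

A real evaluation would also fold in the staffing, serviceability, and growth fields, which are carried in the record here but not yet used by the screen.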

Chilled Water Distributed Solution

When the average cabinet power density increases above 10 kW, or there is a cabinet with a concentrated power density, the edge data center may be better served by a distributed refrigerant or a distributed chilled water cooling solution. These distributed solutions provide far more flexibility in meeting larger and more diverse power loading requirements in an edge data center. They also provide the option of utilizing water to dramatically improve efficiency (PUE = 1.10-1.20).

While the distributed systems are slightly more capital cost-intensive, they do provide far higher asset resiliency; this resiliency is borne out in the ability to easily adapt to changing needs without replacing an entire cooling system. In a distributed system, a specific 10 kW unit can be easily augmented to become a 15 kW or 20 kW unit without complete replacement of the unit.
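
To put the two quoted PUE bands side by side, here is a minimal arithmetic sketch; the 50 kW site size and the mid-band PUE values are assumptions for illustration:

```python
# Annual overhead energy at the mid-points of the two PUE bands quoted
# in the article: DX at 1.25-1.50 vs. distributed chilled water at 1.10-1.20.

IT_LOAD_KW = 50.0        # assumed edge site size
HOURS_PER_YEAR = 8760

def annual_overhead_kwh(pue: float) -> float:
    """Yearly energy spent above the IT load itself at a given PUE."""
    return IT_LOAD_KW * (pue - 1.0) * HOURS_PER_YEAR

dx = annual_overhead_kwh(1.375)   # midpoint of 1.25-1.50
chw = annual_overhead_kwh(1.15)   # midpoint of 1.10-1.20
print(f"DX overhead:            {dx:,.0f} kWh/year")
print(f"Chilled water overhead: {chw:,.0f} kWh/year")
print(f"Difference:             {dx - chw:,.0f} kWh/year ({(dx - chw) / dx:.0%})")
```

At these mid-band values, the distributed chilled water option cuts overhead energy by roughly 60%, about 98,550 kWh per year for a 50 kW site.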

The use of these distributed cooling solutions provides a far better means of incrementally growing the cooling system to meet scaling capacity, from initial deployment of the ITE to full deployment and beyond, and of leveraging these smaller increments to provide additional resiliency in the cooling system.

"Free" Cooling?

A cooling solution that has become more popular in larger data centers is the use of "free" cooling. This solution utilizes once-through ambient air to directly cool the ITE. It is often paired with direct water-based evaporative cooling when the ambient air is a bit too warm. For a purpose-built edge data center located in a cool or arid climate, this may be a beneficial solution. For a fleet of smaller edge data centers, this cooling solution will be far less practical, due to the wide array of climates involved and the service available in many localities.

It is important to understand that "free" cooling is a lower-energy option, not a zero-energy option. Additionally, "free" cooling introduces new maintenance and air filtration issues to consider: mainly, whether the location is near plants or trees that release pollen, flowers, or fibers into the air, or in a place where dust or pollution will be in the air. These will factor into the "efficiency".
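
Whether a climate suits "free" cooling can be gauged before detailed design by counting the hours in a typical year when outside air is cool enough to use directly. The sketch below is a rough illustration under our own assumptions (the 24 °C threshold and the synthetic temperature data); a real study would use local hourly weather records:

```python
import random

def free_cooling_hours(hourly_temps_c: list[float],
                       threshold_c: float = 24.0) -> int:
    """Count hours where outside air alone could cool the ITE.
    The threshold is an illustrative supply-air limit, not a standard."""
    return sum(1 for t in hourly_temps_c if t <= threshold_c)

# Synthetic stand-in for one year of hourly dry-bulb temperatures.
random.seed(0)
temps = [random.gauss(15.0, 8.0) for _ in range(8760)]

hours = free_cooling_hours(temps)
print(f"{hours} of 8760 hours ({hours / 8760:.0%}) usable for free cooling")
```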

High-Density Liquid Cooling

There are other considerations for the cooling solution: the ultra-high-density computing solutions that are here now and promise to become more prevalent in future computing applications, in 2020 and beyond. There will be a need for edge data centers capable of cooling a "cabinet" housing 40 kW, 80 kW, or 200 kW of ITE; such cabinets already exist.

These ultra-high-density computing solutions will require advanced cooling technologies: technologies such as "direct-to-chip" cooling, where refrigerant, water, or a non-conductive liquid is piped directly into the server chassis, or full immersion systems with servers loaded into a bath of non-conductive liquid. These are already deployed in edge-style data centers and have the additional benefit of delivering the ultimate in efficiency (PUE = 1.05 or lower).

The global pandemic of 2020 brought a rapid shift to "at home" everything (work, school, shopping, movies/video/music, and even virtual happy hours). This "at home" life reset will demand more data content closer to home, and that will provide growth for edge data centers. There was an overnight shift in data load from the commercial systems installed in offices to the residential systems at home. To accommodate these sudden changes, it is imperative to pay attention not just to providing lower latency, but also to the higher reliability of these systems. The cooling solutions for edge data centers deserve the thoughtful consideration this very important cog requires to make our new "at home" everything life seamless.

Jon Benson is the Director of Technology and Solutions, Jack Kolar is VP of MC/MUS Sales, and Abhishek Banerjee is the Senior Project Application Engineer at TAS.

DeepKnowledge

Translation: 许磊, HVAC Architect; Proofreading: 田瑞杰, Senior HVAC Engineer and Architect

Source: DCF, "Cooling Edge Data Centers: It Requires More Thought"
