Disclaimer: this article is adapted (with some abridgement) from 克终's CSDN blog and is shared for learning and exchange only; it will be removed upon request in case of infringement.
Preface
If you have clicked into this article, you have almost certainly come across terms like "high reliability" and "high availability" before, yet discussions of them tend to be either vague or just a pile of acronyms (MTBF, MTTR, ...). So how should we quantitatively describe a system's reliability and availability, and what do these fancy-sounding terms actually mean? Perhaps, after reading this article, you too will be able to throw these terms around confidently with your colleagues.

Hardware Failures
Industry commonly uses the "bathtub curve" to describe hardware failures, as shown in the figure below. Specifically, the hardware lifecycle is generally divided into three periods:
1) The first part is a decreasing failure rate, known as early failures.
2) The second part is a constant failure rate, known as random failures.
3) The third part is an increasing failure rate, known as wear-out failures.
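To make the three phases a little more concrete, here is a minimal sketch (mine, not from the original article) of a toy piecewise hazard-rate function; the breakpoints and rates are made-up illustrative numbers, not measurements.

```python
# Toy bathtub-curve hazard rate; all numbers are made up for illustration only.
def hazard_rate(t_years: float) -> float:
    """Return an illustrative failure rate (failures/year) at device age t_years."""
    if t_years < 1.0:
        # Early-failure ("infant mortality") period: decreasing failure rate.
        return 0.10 - 0.07 * t_years
    elif t_years < 5.0:
        # Useful-life period: roughly constant (random) failure rate.
        return 0.03
    else:
        # Wear-out period: increasing failure rate.
        return 0.03 + 0.04 * (t_years - 5.0)

if __name__ == "__main__":
    for t in (0.1, 0.5, 2, 4, 6, 8):
        print(f"age {t:>4} years -> lambda ~ {hazard_rate(t):.3f} failures/year")
```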
Software Failures
Software failures can be measured by the number of defects per thousand lines of code (Defects/KLOC), known as defect density:
Defect Density = Number of Defects / KLOC

MTBF
The full name is Mean Time Between Failures. As the name suggests, it is the average working time between two adjacent failures, and it is a measure of a product's reliability.

Failure Rate
Failure rate is the frequency with which an engineered system or component fails, expressed, for example, in failures per hour. It is often denoted by the Greek letter λ (lambda) and is important in reliability engineering.
The failure rate of a system usually depends on time, with the rate varying over the lifecycle of the system. For example, an automobile's failure rate in its fifth year of service may be many times greater than its failure rate during its first year of service. One does not expect to replace an exhaust pipe, overhaul the brakes, or have major transmission problems in a new vehicle.
In practice, the mean time between failures (MTBF, 1/λ) is often reported instead of the failure rate. This is valid and useful if the failure rate may be assumed constant – often used for complex units/systems, electronics – and is a general agreement in some reliability standards (military and aerospace). It does in this case only relate to the flat region of the bathtub curve, also called the "useful life period". Because of this, it is incorrect to extrapolate MTBF to give an estimate of the service lifetime of a component, which will typically be much less than suggested by the MTBF due to the much higher failure rates in the "end-of-life wear-out" part of the bathtub curve.
To make this easier to grasp, an example: if 100 hard disks in operation experience 2 failures within a year, the failure rate is 0.02 failures per year (per disk).
The relationship between MTBF and failure rate described above is worth dwelling on. In real life, hardware vendors are indeed far keener to print MTBF on their products (my guess is that an MTBF of hundreds of thousands or even millions of hours is simply more eye-catching). The failure rate changes over a product's lifecycle, so the following relationship holds only on the flat bottom of the aforementioned bathtub curve (colloquially, during the product's "prime of life"):
MTBF = 1 / λ (i.e., MTBF = 1 / Failure Rate)

MTTR
The full name is Mean Time To Repair. As the name suggests, it is the average repair time needed to bring a product from a failed state back to a working state. In engineering, MTTR is a measure of a product's maintainability; it is common in maintenance contracts, where it serves as a basis for service charges.
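To tie the MTBF and failure-rate definitions together, here is a small sketch (written for this note, not taken from the article) that reproduces the 100-disk example above and the MTBF = 1/λ relationship; it assumes a constant failure rate, i.e., the flat region of the bathtub curve.

```python
# Failure rate and MTBF for the 100-disk example, assuming the constant-failure-rate
# (useful-life) region of the bathtub curve.
HOURS_PER_YEAR = 24 * 365  # 8760

def failure_rate_per_year(num_devices: int, failures: int, years: float) -> float:
    """Failures per device-year observed over the fleet."""
    return failures / (num_devices * years)

# The article's example: 100 running disks, 2 failures within 1 year.
lam = failure_rate_per_year(num_devices=100, failures=2, years=1.0)
print(f"failure rate lambda ~ {lam} failures/year per disk")   # 0.02

# In the flat region of the bathtub curve, MTBF = 1 / lambda.
mtbf_years = 1.0 / lam
print(f"MTBF ~ {mtbf_years:.0f} years ({mtbf_years * HOURS_PER_YEAR:,.0f} hours)")
```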
Availability
GB/T 3187-97 defines availability as: the ability of a product, given that the required external resources are guaranteed, to be in a state to perform a specified function under specified conditions at a specified instant or over a specified time interval. It is a comprehensive reflection of a product's reliability, maintainability, and maintenance support.
Availability = MTBF / (MTBF + MTTR)
This formula is easy to understand, so I will not explain it further. People usually express system availability as a number of nines, for example 99.9% (3-nines availability) or 99.999% (5-nines availability).

Downtime
As the name suggests, downtime is the time a machine is out of service due to failures. The reason for bringing up downtime here is that measuring system availability by the amount of downtime per year is more intuitive and easier to understand.
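As a quick illustration (the MTBF/MTTR values below are hypothetical, not from the article), the following sketch computes availability from MTBF and MTTR and converts an "N nines" availability figure into annual downtime.

```python
# Availability from MTBF/MTTR, and the annual downtime implied by "N nines".
MINUTES_PER_YEAR = 365 * 24 * 60

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    return mtbf_hours / (mtbf_hours + mttr_hours)

def annual_downtime_minutes(avail: float) -> float:
    return (1.0 - avail) * MINUTES_PER_YEAR

# Hypothetical numbers: an MTBF of 1000 h with an MTTR of 1 h is roughly "three nines".
a = availability(mtbf_hours=1000.0, mttr_hours=1.0)
print(f"availability ~ {a:.4%}")

for nines in (0.999, 0.9999, 0.99999):
    print(f"{nines:.3%} available -> ~ {annual_downtime_minutes(nines):,.1f} min downtime/year")
```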
Figure 4: Correspondence between availability and downtime

Generally speaking, the vendor-rated MTBF of a server's main components is above a million hours. For example, the motherboard, CPU, and hard disk are each rated at about 1,000,000 hours, and memory at about 4,000,000 hours per DIMM (so four DIMMs come to roughly 1,000,000 hours). Because the failure rates of components in series add up, these four contributions of roughly 1/1,000,000 failures per hour give the server as a whole an MTBF of about 250,000 hours, i.e., about 30 years, which corresponds to an annual failure rate of about 3%. In other words, out of 100 servers, a few are bound to fail every year.

The theoretical calculation above looks reasonable enough. But viewed from another angle, something feels off: if the MTBF is about 30 years, does that mean we can expect a server to stay in service for 30 years? Let us first see how a Seagate engineer explains it:

It is common to see MTBF ratings between 300,000 to 1,200,000 hours for hard disk drive mechanisms, which might lead one to conclude that the specification promises between 30 and 120 years of continuous operation. This is not the case! The specification is based on a large (statistically significant) number of drives running continuously at a test site, with data extrapolated according to various known statistical models to yield the results. Based on the observed error rate over a few weeks or months, the MTBF is estimated and is not representative of how long your individual drive, or any individual product, is likely to last. Nor is the MTBF a warranty - it is representative of the relative reliability of a family of products. A higher MTBF merely suggests a generally more reliable and robust family of mechanisms (depending upon the consistency of the statistical models used). Historically, the field MTBF, which includes all returns regardless of cause, is typically 50-60% of projected MTBF.

Having read this, and recalling the earlier discussion of failure rate, I wonder whether you have worked out the trick. Put plainly, what these vendors actually measure is the failure rate during the product's healthy "prime of life"; applying the reciprocal relationship with MTBF then yields figures of a million hours or more. In the real world, however, the failure rate of these products climbs rapidly in "middle and old age", so such MTBF figures cannot reflect the product's actual lifetime. The passage also shows that Seagate is aware of the shortcomings of MTBF and has therefore switched to AFR (Annualized Failure Rate), commonly known as the annual failure rate.

In fact, as early as 2007, Google and CMU each published a paper at FAST '07 discussing hard disk failures in detail. Google collected data from more than 100,000 consumer-grade HDDs inside the company (SATA and PATA, 5400 and 7200 RPM, from 7 different vendors, across 9 different models, with capacities ranging from 80 GB to 400 GB) and arrived at the following figures:

Google found that disks had an annualized failure rate (AFR) of 3% for the first three months, dropping to 2% for the first year. In the second year the AFR climbed to 8% and stayed in the 6% to 9% range for years 3-5.

Quite a few developers take it for granted that hardware is very reliable, but after the discussion above it should be clear that hardware is really not as reliable as we imagine. If you only look after two or three servers, you may go a year or two without a single hardware failure; but if you maintain thousands of servers, then congratulations, hardware failures will come knocking on your door almost every week. The reliability of software on a single machine is bounded by its hardware, so talking about "7*24 non-stop operation" for a single machine is itself nonsense.

Along with this come some deeply rooted misconceptions, memory fragmentation being one of them. Newcomers are often taught as soon as they join the company that backend code should avoid dynamic memory allocation such as malloc/free, because it easily leads to memory fragmentation, and consequently that STL containers should be avoided as well. If you are lucky enough to already be on a 64-bit system and still manage to run into memory fragmentation, all I can say is good luck to you; if you are still on a 32-bit system, there is no need to worry too much either: today's glibc is long past what it used to be, its allocation algorithms keep being optimized, and even the kernel itself uses dynamic memory allocation. And even if, against all odds, you really do run into it, that brings us to the next topic I want to discuss.

The "restartability" discussed here may not match what many people have in mind. Quite a few developers at my company are keen on using shared memory: when upgrading an application or when the program cores, they rely on a so-called "instant restart" to restore service, and some go even further and intercept signals such as segmentation faults. In reality, no matter how instant the restart is, some users will inevitably be affected; moreover, if the unexpected restart was caused by a program bug, who can guarantee that the data in shared memory is still in a correct state? On top of that, in the face of events such as a machine-room relocation, an air-conditioning failure, or a power outage, the shared-memory-plus-instant-restart approach can only stand by helplessly. Therefore, as argued above, a graceful, seamless restart built on disaster-recovery backups plus traffic rerouting is the better design. Generally speaking, a "restartable" process has the following characteristics:
- It does not use IPC whose lifetime exceeds that of the process (shared memory, cross-process mutexes, etc.).
First remove the node from the service list and wait for its traffic to drop to zero, then start the restart procedure (updating files and so on). Once the service is confirmed to have started correctly, add the node back to the service list and gradually ramp traffic onto it to verify correctness (removing it again promptly if anything abnormal shows up). Service nodes are handled in batches, one after another, which achieves a genuinely seamless restart.
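The procedure just described can be sketched as the control loop below. Everything here is hypothetical scaffolding: remove_from_service_list, wait_for_traffic_drain, and the other helpers are placeholders for whatever your service registry and deployment tooling actually provide, not a real API.

```python
# Sketch of the batched "drain, restart, verify, re-add" procedure described above.
# The helpers are stand-ins for your real service registry / deployment tooling.
import time
from typing import Iterable, List

def remove_from_service_list(node: str) -> None:
    print(f"[registry] removing {node} from the service list")

def wait_for_traffic_drain(node: str, poll_seconds: float = 1.0) -> None:
    print(f"[traffic] waiting for in-flight requests on {node} to drop to zero")
    time.sleep(poll_seconds)  # placeholder for polling real connection counters

def restart_node(node: str) -> None:
    print(f"[deploy] updating files and restarting the service on {node}")

def health_check(node: str) -> bool:
    print(f"[check] verifying the service on {node} started correctly")
    return True

def add_to_service_list(node: str, weight: float) -> None:
    print(f"[registry] adding {node} back with traffic weight {weight:.0%}")

def verify_under_canary_traffic(node: str) -> bool:
    print(f"[check] watching error rates on {node} while traffic ramps up")
    return True

def chunks(items: List[str], size: int) -> Iterable[List[str]]:
    for i in range(0, len(items), size):
        yield items[i:i + size]

def rolling_restart(nodes: List[str], batch_size: int = 1) -> None:
    """Restart nodes batch by batch so the service as a whole never goes down."""
    for batch in chunks(nodes, batch_size):
        for node in batch:
            remove_from_service_list(node)
            wait_for_traffic_drain(node)
            restart_node(node)
            if not health_check(node):
                raise RuntimeError(f"{node} failed to come back; aborting the rollout")
            add_to_service_list(node, weight=0.05)   # small canary share first
            if not verify_under_canary_traffic(node):
                remove_from_service_list(node)       # pull it out at the first sign of trouble
                raise RuntimeError(f"{node} misbehaving under canary traffic")
            add_to_service_list(node, weight=1.0)    # back to full traffic

if __name__ == "__main__":
    rolling_restart(["node-1", "node-2", "node-3"], batch_size=1)
```

The key point is the ordering: traffic is drained before the restart and ramped back gradually afterwards, so a bad build is caught on a small slice of traffic rather than on the whole node.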
The service's callers support failover, automatically switching to a standby node; alternatively, a facility such as a name service automatically removes faulty nodes from rotation, with recovery handled by manual intervention.

Of course, some of the views above are by no means universally applicable. When actually designing a system, you should still adapt to the circumstances and choose the approach that best fits the environment at hand.