
A DingJia Technology case study: repairing an ASM disk group failure with kfed

Foreword

On March 30, the Oracle RAC database of an organization in Guangzhou failed after new disks were added, leaving a disk group unable to mount. The organization first asked Oracle for help, but the cost was very high, so they tried their luck with DingJia Technology instead. As one of China's leading data disaster-recovery and backup vendors, Guangzhou DingJia Computer Technology Co., Ltd. (DingJia Technology) works closely with customers across many industries on the strength of its technical expertise and mature service system. DingJia's engineers resolved the fault in the shortest possible time, at no cost to the customer, and restored the organization's business to normal; the customer gave high praise and recognition to DingJia's professional skill and service quality. The following shares how kfed was used to repair the ASM disk group failure in this case.

1. Symptoms

After learning about the situation, DingJia immediately sent an engineer on site to investigate. Querying the ASM v$asm_disk view showed every disk in a normal state. Running "alter diskgroup dgdata mount;" reported success, yet a subsequent query of v$asm_diskgroup still showed the disk group as dismounted. The alert log showed the disk group reporting errors and being dismounted immediately after it was mounted. The alert log is as follows:

Sat Mar 30 10:51:59 2013
NOTE: erasing incomplete header on grp 1 disk VOL19
NOTE: cache opening disk 0 of grp 1: VOL10 label:VOL10
NOTE: F1X0 found on disk 0 fcn 0.4276074
NOTE: cache opening disk 1 of grp 1: VOL11 label:VOL11
NOTE: cache opening disk 2 of grp 1: VOL12 label:VOL12
NOTE: cache opening disk 3 of grp 1: VOL13 label:VOL13
NOTE: cache opening disk 4 of grp 1: VOL14 label:VOL14
NOTE: cache opening disk 5 of grp 1: VOL3 label:VOL3
NOTE: cache opening disk 6 of grp 1: VOL4 label:VOL4
NOTE: cache opening disk 7 of grp 1: VOL5 label:VOL5
NOTE: cache opening disk 8 of grp 1: VOL6 label:VOL6
NOTE: cache opening disk 9 of grp 1: VOL7 label:VOL7
NOTE: cache opening disk 10 of grp 1: VOL8 label:VOL8
NOTE: cache opening disk 11 of grp 1: VOL9 label:VOL9
NOTE: cache opening disk 12 of grp 1: VOL1 label:VOL1
NOTE: cache opening disk 13 of grp 1: VOL2 label:VOL2
NOTE: cache opening disk 14 of grp 1: VOL15 label:VOL15
NOTE: cache opening disk 15 of grp 1: VOL16 label:VOL16
NOTE: cache opening disk 16 of grp 1: VOL17 label:VOL17
NOTE: cache opening disk 17 of grp 1: VOL18 label:VOL18
NOTE: cache mounting (first) group 1/0x36E8615F (DGDATA)
* allocate domain 1, invalid = TRUE
kjbdomatt send to node 1
Sat Mar 30 10:51:59 2013
NOTE: attached to recovery domain 1
Sat Mar 30 10:51:59 2013
NOTE: starting recovery of thread=1 ckpt=75.5792 group=1
NOTE: advancing ckpt for thread=1 ckpt=75.5792
NOTE: cache recovered group 1 to fcn 0.5174872
Sat Mar 30 10:51:59 2013
NOTE: opening chunk 1 at fcn 0.5174872 ABA
NOTE: seq=76 blk=5793
Sat Mar 30 10:51:59 2013
NOTE: cache mounting group 1/0x36E8615F (DGDATA) succeeded
WARNING: offlining disk 16.3915944441 (VOL17) with mask 0x3
NOTE: PST update: grp = 1, dsk = 16, mode = 0x6
Sat Mar 30 10:51:59 2013
ERROR: too many offline disks in PST (grp 1)
NOTE: cache closing disk 16 of grp 1: VOL17 label:VOL17
NOTE: cache closing disk 16 of grp 1: VOL17 label:VOL17
Sat Mar 30 10:51:59 2013
SUCCESS: diskgroup DGDATA was mounted
Sat Mar 30 10:51:59 2013
ERROR: PST-initiated MANDATORY DISMOUNT of group DGDATA
NOTE: cache dismounting group 1/0x36E8615F (DGDATA)
Sat Mar 30 10:51:59 2013
NOTE: halting all I/Os to diskgroup DGDATA
Sat Mar 30 10:51:59 2013
kjbdomdet send to node 1
detach from dom 1, sending detach message to node 1
Sat Mar 30 10:51:59 2013
Dirty detach reconfiguration started (old inc 2, new inc 2)
List of nodes:
 0 1
 Global Resource Directory partially frozen for dirty detach
* dirty detach - domain 1 invalid = TRUE
 10 GCS resources traversed, 0 cancelled
 4014 GCS resources on freelist, 6138 on array, 6138 allocated
Dirty Detach Reconfiguration complete
Sat Mar 30 10:51:59 2013
freeing rdom 1
Sat Mar 30 10:51:59 2013
WARNING: dirty detached from domain 1
Sat Mar 30 10:51:59 2013
SUCCESS: diskgroup DGDATA was dismounted
 Received detach msg from node 1 for dom 2

With a working knowledge of ASM, it was fairly clear that one disk had failed and been taken offline, and that the disk group was then dismounted because it was missing a disk.
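
For reference, the checks described above can be repeated against the ASM instance with a session along these lines (a minimal sketch; only the disk group name dgdata comes from this case, the rest is generic):

-- connect to the ASM instance, e.g. sqlplus / as sysdba with ORACLE_SID=+ASM1
select group_number, name, path, mount_status, header_status, state from v$asm_disk;
alter diskgroup dgdata mount;
-- the mount reports success, yet the group immediately shows up as dismounted again:
select name, state from v$asm_diskgroup;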

The biggest problem at this point was that the disk group could not be mounted, so no operation could be performed on it at all; there was not even a way to drop the disk suspected of being faulty. Talking to the customer's administrator revealed the operations that led to the failure: three new disks were first added to the disk group in one statement, which returned an error; the new disks were then added individually, which also returned errors; after that, the disk group was found to be dismounted.

After searching, analyzing and comparing the available material, some workarounds for similar situations were found on Metalink: zero the header of the faulty disk with dd, or forcibly add the faulty disk to a new disk group so that the original disk group no longer recognizes it, after which the group can be mounted. To avoid further damage, the customer and I agreed that no changes would be made until we had a plan we were confident in.

2. Fault analysis

The first step was to go through the logs to find the operation that originally triggered the problem and the related messages.

On node 1, the first attempt to add VOL17, VOL18 and VOL19 together showed no obvious error, but there was a warning, "WARNING: offlining disk 18.3915945713 (VOL19) with mask 0x3", suggesting that something may have gone wrong with VOL19 as it was added. The log is as follows:

Fri Mar 29 18:31:37 2013
SQL> alter diskgroup DGDATA add disk 'ORCL:VOL17','ORCL:VOL18','ORCL:VOL19'
Fri Mar 29 18:31:37 2013
NOTE: reconfiguration of group 1/0x44e8663d (DGDATA), full=1
Fri Mar 29 18:31:38 2013
NOTE: initializing header on grp 1 disk VOL17
NOTE: initializing header on grp 1 disk VOL18
NOTE: initializing header on grp 1 disk VOL19
NOTE: cache opening disk 16 of grp 1: VOL17 label:VOL17
NOTE: cache opening disk 17 of grp 1: VOL18 label:VOL18
NOTE: cache opening disk 18 of grp 1: VOL19 label:VOL19
NOTE: PST update: grp = 1
NOTE: requesting all-instance disk validation for group=1
Fri Mar 29 18:31:38 2013
NOTE: disk validation pending for group 1/0x44e8663d (DGDATA)
SUCCESS: validated disks for 1/0x44e8663d (DGDATA)
Fri Mar 29 18:31:40 2013
NOTE: requesting all-instance membership refresh for group=1
Fri Mar 29 18:31:40 2013
NOTE: membership refresh pending for group 1/0x44e8663d (DGDATA)
SUCCESS: refreshed membership for 1/0x44e8663d (DGDATA)
Fri Mar 29 18:31:43 2013
WARNING: offlining disk 18.3915945713 (VOL19) with mask 0x3
NOTE: PST update: grp = 1, dsk = 18, mode = 0x6
NOTE: PST update: grp = 1, dsk = 18, mode = 0x4
NOTE: cache closing disk 18 of grp 1: VOL19
NOTE: PST update: grp = 1
NOTE: requesting all-instance membership refresh for group=1
Fri Mar 29 18:31:49 2013
NOTE: membership refresh pending for group 1/0x44e8663d (DGDATA)
NOTE: cache closing disk 18 of grp 1: VOL19
SUCCESS: refreshed membership for 1/0x44e8663d (DGDATA)
 Received dirty detach msg from node 1 for dom 1
Fri Mar 29 18:31:51 2013
Dirty detach reconfiguration started (old inc 4, new inc 4)
List of nodes:
 0 1
 Global Resource Directory partially frozen for dirty detach
* dirty detach - domain 1 invalid = TRUE
 2817 GCS resources traversed, 0 cancelled
 1981 GCS resources on freelist, 7162 on array, 6138 allocated
 1719 GCS shadows traversed, 0 replayed
Dirty Detach Reconfiguration complete
Fri Mar 29 18:31:51 2013
NOTE: PST enabling heartbeating (grp 1)
Fri Mar 29 18:31:51 2013
NOTE: SMON starting instance recovery for group 1 (mounted)
NOTE: F1X0 found on disk 0 fcn 0.4276074
NOTE: starting recovery of thread=1 ckpt=39.5722 group=1
NOTE: advancing ckpt for thread=1 ckpt=39.5722
NOTE: smon did instance recovery for domain 1
Fri Mar 29 18:31:53 2013
NOTE: recovering COD for group 1/0x44e8663d (DGDATA)
SUCCESS: completed COD recovery for group 1/0x44e8663d (DGDATA)
Fri Mar 29 18:32:18 2013

At the same time, node 2 reported "ERROR: group 1/0x44e86390 (DGDATA): could not validate disk 18"; VOL19 (i.e. disk 18) was then taken offline and the disk group was dismounted. Some of these messages match the later log entries from when the disk group could no longer be mounted. The log is as follows:

Fri Mar 29 18:31:37 2013
NOTE: reconfiguration of group 1/0x44e86390 (DGDATA), full=1
NOTE: disk validation pending for group 1/0x44e86390 (DGDATA)
ERROR: group 1/0x44e86390 (DGDATA): could not validate disk 18
SUCCESS: validated disks for 1/0x44e86390 (DGDATA)
NOTE: membership refresh pending for group 1/0x44e86390 (DGDATA)
NOTE: PST update: grp = 1, dsk = 18, mode = 0x4
Fri Mar 29 18:31:43 2013
ERROR: too many offline disks in PST (grp 1)
Fri Mar 29 18:31:43 2013
SUCCESS: refreshed membership for 1/0x44e86390 (DGDATA)
ERROR: ORA-15040 thrown in RBAL for group number 1
Fri Mar 29 18:31:43 2013
Errors in file /opt/app/oracle/admin/+ASM/bdump/+asm2_rbal_14019.trc:
ORA-15040: diskgroup is incomplete
ORA-15066: offlining disk "" may result in a data loss
ORA-15042: ASM disk "18" is missing
NOTE: cache closing disk 18 of grp 1:
NOTE: membership refresh pending for group 1/0x44e86390 (DGDATA)
NOTE: cache closing disk 18 of grp 1:
NOTE: cache opening disk 16 of grp 1: VOL17 label:VOL17
NOTE: cache opening disk 17 of grp 1: VOL18 label:VOL18
SUCCESS: refreshed membership for 1/0x44e86390 (DGDATA)
Fri Mar 29 18:31:50 2013
ERROR: PST-initiated MANDATORY DISMOUNT of group DGDATA
NOTE: cache dismounting group 1/0x44E86390 (DGDATA)
Fri Mar 29 18:31:51 2013
NOTE: halting all I/Os to diskgroup DGDATA
Fri Mar 29 18:31:51 2013
kjbdomdet send to node 0
detach from dom 1, sending detach message to node 0
Fri Mar 29 18:31:51 2013
Dirty detach reconfiguration started (old inc 4, new inc 4)
List of nodes:
 0 1
 Global Resource Directory partially frozen for dirty detach
* dirty detach - domain 1 invalid = TRUE
 2214 GCS resources traversed, 0 cancelled
 5528 GCS resources on freelist, 7162 on array, 6138 allocated
Dirty Detach Reconfiguration complete
Fri Mar 29 18:31:51 2013
WARNING: dirty detached from domain 1
Fri Mar 29 18:31:51 2013
SUCCESS: diskgroup DGDATA was dismounted

From this it seemed very likely that VOL19 had a permission problem on node 2 when the disks were added: typically, the oracle user does not have access rights on the underlying device. To test this, I ran RAC on my own virtual machines and reproduced the mistake: on node 1 the oracle user was given access to the new disk, on node 2 it was not, and then the disk was added. The resulting log was almost identical, with one difference: in my simulated environment the log reported "ORA-15075: disk(s) are not visible cluster-wide", whereas the customer's log did not contain this error, so it was still not certain that this was the same problem.
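
A quick way to compare the two nodes during this kind of test is to check the ASMLib device ownership on each of them (a sketch only; it assumes ASMLib is in use, as in this case, and that the device in question is VOL19):

# run on each node and compare the results
ls -l /dev/oracleasm/disks/VOL19      # should be owned by the oracle user (e.g. oracle:dba)
oracleasm querydisk VOL19             # should report a valid ASM disk
oracleasm scandisks                   # rescan on the node that was missing the permissions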

Later, the customer's operation records did in fact show an ORA-15075 error, confirming that the first add-disk attempt failed because of a permission problem. Repeated tests around this mistake showed that, in the simulated environment, the mistake on its own does not leave the disk group unmountable: whichever node has the correct oracle-user permissions on the disks can still mount the disk group successfully.

Continuing to replay the actual operations, re-issuing the add-disk command after the failure did not cause any further damage either; Oracle simply reported "ORA-15029: disk '…' is already mounted by this instance" as expected. The customer's records show that on the second attempt to add VOL17 and VOL18, Oracle correctly reported ORA-15029, which means VOL17 and VOL18 had already been added to the disk group successfully.

The records show that a later operation behaved abnormally, however: another attempt to add VOL17 returned "ORA-15033: disk 'ORCL:VOL17' belongs to diskgroup "DGDATA"". This is an odd error. Based on the earlier tests, it means "VOL17 belongs to another disk group and cannot be added to the specified disk group unless the FORCE option is used". In other words, on the second attempt VOL17 was still recognized as a member of DGDATA, but on the third attempt it was not. The log also shows something abnormal at this point:

Fri Mar 29 18:35:41 2013
SQL> alter diskgroup DGDATA add disk 'ORCL:VOL17'
Fri Mar 29 18:35:41 2013
NOTE: reconfiguration of group 1/0x44e8663d (DGDATA), full=1
Fri Mar 29 18:35:41 2013
WARNING: ignoring disk ORCL:VOL18 in deep discovery
WARNING: ignoring disk ORCL:VOL19 in deep discovery
NOTE: requesting all-instance membership refresh for group=1
Fri Mar 29 18:35:41 2013
NOTE: membership refresh pending for group 1/0x44e8663d (DGDATA)
SUCCESS: validated disks for 1/0x44e8663d (DGDATA)
NOTE: PST update: grp = 1, dsk = 16, mode = 0x4
Fri Mar 29 18:35:45 2013
ERROR: too many offline disks in PST (grp 1)
Fri Mar 29 18:35:45 2013
SUCCESS: refreshed membership for 1/0x44e8663d (DGDATA)
ERROR: ORA-15040 thrown in RBAL for group number 1
Fri Mar 29 18:35:45 2013
Errors in file /opt/app/oracle/admin/+ASM/bdump/+asm1_rbal_13974.trc:
ORA-15040: diskgroup is incomplete
ORA-15066: offlining disk "" may result in a data loss
ORA-15042: ASM disk "16" is missing
Fri Mar 29 18:35:45 2013
ERROR: PST-initiated MANDATORY DISMOUNT of group DGDATA
NOTE: cache dismounting group 1/0x44E8663D (DGDATA)
Fri Mar 29 18:35:45 2013
NOTE: halting all I/Os to diskgroup DGDATA
Fri Mar 29 18:35:45 2013
kjbdomdet send to node 1
detach from dom 1, sending detach message to node 1
Fri Mar 29 18:35:45 2013
Dirty detach reconfiguration started (old inc 4, new inc 4)
List of nodes:
 0 1
 Global Resource Directory partially frozen for dirty detach
* dirty detach - domain 1 invalid = TRUE
 1291 GCS resources traversed, 0 cancelled
 2347 GCS resources on freelist, 7162 on array, 6138 allocated
Dirty Detach Reconfiguration complete
Fri Mar 29 18:35:45 2013
freeing rdom 1
Fri Mar 29 18:35:45 2013
WARNING: dirty detached from domain 1
Fri Mar 29 18:35:46 2013
SUCCESS: diskgroup DGDATA was dismounted

At this point the disk group on node 1 was dismounted as well, so it is reasonable to conclude that this anomaly is what caused the later failure.

Since repeatedly adding disks in the simulated environment never reproduced the failure, the only conclusion at this point was that the fault was most likely an Oracle bug: that particular add-disk operation probably interfered with Oracle's rebalance of the new disks, after which Oracle marked the disk offline and the disk group was dismounted. After discussing the test results with the customer's administrator, I learned that the oracle user's access permissions on node 2 had since been fixed in their environment, but the fault remained. I then went on to test the dd approach and the approach of forcibly adding the faulty disk to a new disk group.

A series of tests followed. Since disks do not actually fail in the test environment, the disk group had to be taken offline manually, "repaired", and then mounted again. Both overwriting the header of the "faulty disk" with dd and adding the "faulty disk" to a new disk group and then dropping it left the original disk group unmountable. In the test environment, however, every failed mount reported "ORA-15042: ASM disk "…" is missing", unlike the real environment, where the mount reported success. Comparing with other people's articles on the dd and force-add techniques revealed one big difference: the published repairs all involved disk groups using normal redundancy. In that case the disk group holds redundant data, so a single failed disk does not get the group dismounted, and many operations remain possible on that basis. The customer's failed system, however, used external redundancy, so as far as Oracle is concerned the data has no redundancy, which is also one of the reasons the disk group could not be mounted.

Based on all this, two possible solutions came to mind. The first was to check whether Oracle has a command to force-mount a disk group; perhaps such a command is provided for exactly this kind of repair. The second came from the fact that kfed can modify disk header information: take the header of a healthy disk, adjust it, and restore it onto the faulty disk, so that the faulty disk can be recognized correctly again. The first option was soon ruled out: the documentation shows that only 11g offers an option to force-mount a disk group, and crucially it can only be used with normal-redundancy disk groups. The second option was tested on the simulated environment that had been broken the day before, and to my surprise it worked! The procedure was sent to the customer's administrator so he could verify it in his own test environment.

The kfed header-repair method works like this: take a healthy disk, dump its header with kfed, compare it with the header dumped from the faulty disk, merge the two into a corrected header for the faulty disk, and write it back. For example, if the healthy disk is /dev/rdsk/c1t0d0s3 and the faulty disk is /dev/rdsk/c1t1d0s1:

kfed read /dev/rdsk/c1t0d0s3 text=header0
kfed read /dev/rdsk/c1t1d0s1 text=header1
vimdiff header0 header1
(... edit a "corrected" header for the faulty disk and save it as header1fix ...)
kfed merge /dev/rdsk/c1t1d0s1 text=header1fix

If the faulty disk's header contains no usable information, add the disk to a new disk group and then drop it, so that its header then carries the new disk group's information.
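
If that step is needed, it can look roughly like this (a sketch; DGTMP is a throwaway disk group name invented here, and FORCE is required because the disk header still claims membership of the original disk group):

create diskgroup DGTMP external redundancy disk 'ORCL:VOL19' force;
drop diskgroup DGTMP;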

The key fields that need to be repaired are listed below (an illustrative excerpt follows the list):

kfdhdb.dsknum: the disk's number within the disk group, starting from 0; for example, disk 18 in the Oracle log should correspond to the value 17 here

kfdhdb.grpname: the disk group name; if the disk was dropped from a new disk group, change this back to the original disk group's name

kfdhdb.grpstmp.hi: the disk group timestamp; copy it from a healthy disk's header

kfdhdb.grpstmp.lo: same as above
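
In the merged header1fix file, the edited entries end up looking something like the following (illustrative only; the values shown are placeholders and the decoded portions after the offsets are abbreviated):

kfdhdb.dsknum:                       17 ; 0x024: 0x0011
kfdhdb.grpname:                  DGDATA ; 0x048: length=6
kfdhdb.grpstmp.hi:             32981723 ; 0x0e4: (copied from the healthy header)
kfdhdb.grpstmp.lo:           2457465856 ; 0x0e8: (copied from the healthy header)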

Feedback from the customer's administrator later showed, however, that the headers of VOL17, VOL18 and VOL19 on the failed system were all correct, which meant this method would not help.

Cloning the failed environment

It was then proposed to copy all of the failed system's disks with dd and build a test environment on top of the copies, so the investigation could continue on a clone of the failed system. The copied data arrived on Wednesday; it was attached to the test environment over iSCSI, oracleasm scandisks was run, and testing on the cloned environment began.
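
The clone-and-attach step can be sketched roughly as follows (the image file names, iSCSI portal address and target IQN are all invented for illustration; the real environment will differ):

# on the source side: copy each ASM LUN into an image file with dd (one per disk)
dd if=/dev/mapper/vol17_lun of=/backup/vol17.img bs=1M
# on the test server: log in to the iSCSI target that exports the copies, then let ASMLib find them
iscsiadm -m discovery -t sendtargets -p 192.168.1.50
iscsiadm -m node -T iqn.2013-04.test:asmclone --login
oracleasm scandisks
oracleasm listdisks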

3. Resolving the problem

Checking the headers of VOL17, VOL18 and VOL19 with kfed confirmed that they were indeed all intact. All of the methods tried earlier were repeated on this cloned environment, and indeed none of them worked. Another approach was needed.

Following the article at http://blog.csdn.net/tianlesoftware/article/details/6740716, the KFBTYP_LISTHEAD block was located in the disk group first, and from it the KFBTYP_DISKDIR block. The DISKDIR block contains an entry for every disk, and VOL17's state differed from all the other disks:

kfddde[0].entry.incarn:               4 ; 0x024: A=0 NUMM=0x1

All the other disks (including VOL18 and VOL19) showed:

kfddde[0].entry.incarn:               1 ; 0x024: A=1 NUMM=0x0

The analysis at the time suggested that VOL17 could probably be returned to normal by modifying its state. That was not attempted right away, however; instead the same line of thinking led to the PST: rather than changing VOL17's state, it would be better to find the PST and remove VOL17 from it.
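
For reference, the kind of kfed block walk used above, and again below to locate the PST, looks roughly like this (a sketch; which AU and block hold each structure varies, so the numbers here are only examples):

# dump a block and check kfbh.type to see what it holds (e.g. KFBTYP_LISTHEAD)
kfed read /dev/oracleasm/disks/VOL10 aun=2 blkn=0 | grep kfbh.type
# dump a candidate directory block in full once its type matches what you are looking for
kfed read /dev/oracleasm/disks/VOL10 aun=2 blkn=1 text=diskdir.txt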

A note on the PST: Partner Status Table. Maintains info on disk-to-diskgroup membership.

According to http://blog.csdn.net/tianlesoftware/article/details/6743677, the PST should live at AU=1 of one of the disks. Checking every disk in the disk group showed that only VOL10 contained the PST, but the block at AU=1 held nothing useful; its type was KFBTYP_PST_META. Using the same approach as when locating the DISKDIR, the block at AU=1, BLK=1 was checked next and was still KFBTYP_PST_META; AU=1, BLK=2 turned out to be KFBTYP_PST_DTA. Its contents were very regular:

kfdpDtaE[0].status:           117440512 ; 0x000: V=1 R=1 W=1
kfdpDtaE[0].index:                    0 ; 0x004: CURR=0x0CURR=0x0 FORM=0x0 FORM=0x0
kfdpDtaE[0].partner[0]:               0 ; 0x008: 0x0000
kfdpDtaE[0].partner[1]:               0 ; 0x00a: 0x0000
kfdpDtaE[0].partner[2]:               0 ; 0x00c: 0x0000
......
kfdpDtaE[0].partner[19]:              0 ; 0x02e: 0x0000
kfdpDtaE[1].status:           117440512 ; 0x030: V=1 R=1 W=1
kfdpDtaE[1].index:                    0 ; 0x034: CURR=0x0CURR=0x0 FORM=0x0 FORM=0x0
kfdpDtaE[1].partner[0]:               0 ; 0x038: 0x0000
kfdpDtaE[1].partner[1]:               0 ; 0x03a: 0x0000
kfdpDtaE[1].partner[2]:               0 ; 0x03c: 0x0000
......
kfdpDtaE[1].partner[19]:              0 ; 0x05e: 0x0000
kfdpDtaE[2].status:            83886080 ; 0x060: V=1 R=1 W=1
......

From kfdpDtaE[18].status onward the values become 0. Mapping the entries to disks one by one: entries 0 through 15 correspond to the original 16 disks, entries 16 and 17 to the newly added VOL17 and VOL18, while VOL19 never appeared in the table because of the permission problem. The decision was made to try editing this table and remove VOL17 and VOL18 from the disk group:

dd if=/dev/oracleasm/disks/VOL10 of=vol10.save bs=1048576 count=10
kfed read /dev/oracleasm/disks/VOL10 aun=1 blkn=2 text=pst.data
vi pst.data
(... set kfdpDtaE[16].status and kfdpDtaE[17].status to 0 and save as pst.update ...)
kfed merge /dev/oracleasm/disks/VOL10 aun=1 blkn=2 text=pst.update

Mounting the disk group was attempted again; as before it reported success, so the log had to be checked:

Thu Apr  4 14:15:08 2013
SQL> alter diskgroup dgdata mount
Thu Apr  4 14:15:08 2013
NOTE: cache registered group DGDATA number=2 incarn=0x0c76f699
Thu Apr  4 14:15:08 2013
NOTE: Hbeat: instance first (grp 2)
Thu Apr  4 14:15:13 2013
NOTE: start heartbeating (grp 2)
Thu Apr  4 14:15:13 2013
NOTE: erasing incomplete header on grp 2 disk VOL17
NOTE: erasing incomplete header on grp 2 disk VOL18
NOTE: erasing incomplete header on grp 2 disk VOL19
NOTE: cache opening disk 0 of grp 2: VOL10 label:VOL10
NOTE: F1X0 found on disk 0 fcn 0.4276074
NOTE: cache opening disk 1 of grp 2: VOL11 label:VOL11
NOTE: cache opening disk 2 of grp 2: VOL12 label:VOL12
NOTE: cache opening disk 3 of grp 2: VOL13 label:VOL13
NOTE: cache opening disk 4 of grp 2: VOL14 label:VOL14
NOTE: cache opening disk 5 of grp 2: VOL3 label:VOL3
NOTE: cache opening disk 6 of grp 2: VOL4 label:VOL4
NOTE: cache opening disk 7 of grp 2: VOL5 label:VOL5
NOTE: cache opening disk 8 of grp 2: VOL6 label:VOL6
NOTE: cache opening disk 9 of grp 2: VOL7 label:VOL7
NOTE: cache opening disk 10 of grp 2: VOL8 label:VOL8
NOTE: cache opening disk 11 of grp 2: VOL9 label:VOL9
NOTE: cache opening disk 12 of grp 2: VOL1 label:VOL1
NOTE: cache opening disk 13 of grp 2: VOL2 label:VOL2
NOTE: cache opening disk 14 of grp 2: VOL15 label:VOL15
NOTE: cache opening disk 15 of grp 2: VOL16 label:VOL16
NOTE: cache mounting (first) group 2/0x0C76F699 (DGDATA)
NOTE: starting recovery of thread=1 ckpt=94.5829 group=2
NOTE: advancing ckpt for thread=1 ckpt=94.5830
NOTE: cache recovered group 2 to fcn 0.5174912
Thu Apr  4 14:15:13 2013
NOTE: opening chunk 1 at fcn 0.5174912 ABA
NOTE: seq=95 blk=5831
Thu Apr  4 14:15:13 2013
NOTE: cache mounting group 2/0x0C76F699 (DGDATA) succeeded
SUCCESS: diskgroup DGDATA was mounted
Thu Apr  4 14:15:13 2013
NOTE: recovering COD for group 2/0xc76f699 (DGDATA)
SUCCESS: completed COD recovery for group 2/0xc76f699 (DGDATA)

A change! VOL17 and VOL18 had their headers cleaned up just like VOL19, and the disk group no longer reported that VOL17 had to be taken offline, so it was no longer dismounted. The disk group state and disk states were then checked and everything looked normal. After adjusting the pfile, the database instance was started and opened successfully. The three new disks were force-added again, which succeeded, and everything ran stably.
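
The final checks and the re-add can be sketched as follows (FORCE is needed because the three disks still carry DGDATA information in their headers; everything else is generic):

select name, state from v$asm_diskgroup;   -- DGDATA should now show MOUNTED
select name, mount_status, header_status, state from v$asm_disk;
alter diskgroup DGDATA add disk 'ORCL:VOL17' force, 'ORCL:VOL18' force, 'ORCL:VOL19' force;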

Repairing the fault

After the database was opened in the cloned environment, Data Pump was used to export part of the business data. The next day, some of the application developers came on site to verify data consistency. Finally, the same procedure was applied to the production system; the production database opened normally and the business resumed.
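
The export itself was an ordinary Data Pump run, roughly of this shape (the schema name, directory object and file names are placeholders):

expdp system/password schemas=APPUSER directory=DP_DIR dumpfile=appuser.dmp logfile=appuser_exp.log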

Summary

This ASM disk group failure underlines the importance of data disaster-recovery and backup: to guard against data loss caused by operator error or system faults, backups must be taken ahead of time.
