Oracle Recovery Case: a 50 TB ASM Crash




A customer's 50 TB ASM diskgroup failed and, after a joint rescue effort, was brought back to normal. This is a brief record of the case; in the end it turned out to be much simpler than I expected. Below is a detailed account of the failure.

-- db alert log entries

Mon Jul 6 15:17:43 2015
Errors in file /oraclehome/admin/xxxx/udump/xxxx_ora_262606.trc:
ORA-27063: number of bytes read/written is incorrect
IBM AIX RISC System/6000 Error: 6: No such device or address
Additional information: -1
Additional information: 1048576
ORA-19502: write error on file "+DG_xxxx/xxxx/datafile/xxxx_load_1.679.847025081", blockno 3136352 (blocksize=32768)
ORA-27063: number of bytes read/written is incorrect
............
Errors in file /oraclehome/admin/xxxx/bdump/xxxx_lgwr_185246.trc:
ORA-00340: IO error processing online log 9 of thread 1
ORA-00345: redo log write error block 394389 count 2
ORA-00312: online log 9 thread 1: '+DG_xxxx/xxxx/onlinelog/group_9.433.789664647'
ORA-15078: ASM diskgroup was forcibly dismounted
Mon Jul 6 15:18:46 2015
LGWR: terminating instance due to error 340
Instance terminated by LGWR, pid = 185246

From the database alert log, an I/O error prevented the LGWR process from writing redo, and LGWR eventually terminated the database instance. Clearly, the question to analyze is why LGWR could not write the log. The ASM alert log shows the following:

Thu Jul 3 08:24:26 2014
NOTE: ASMB process exiting due to lack of ASM file activity
Mon Jul 6 15:18:44 2015
Errors in file /oraclehome/product/10.2.0/admin/+ASM/udump/+asm_ora_353008.trc:
ORA-27091: unable to queue I/O
ORA-27072: File I/O error
IBM AIX RISC System/6000 Error: 6: No such device or address
Additional information: 7
Additional information: -939091968
Additional information: -1
Mon Jul 6 15:18:46 2015
NOTE: cache initiating offline of disk 32 group 1
WARNING: offlining disk 32.1115675731 (DG_xxxx_0032) with mask 0x3
NOTE: PST update: grp = 1, dsk = 32, mode = 0x6
Mon Jul 6 15:18:46 2015
ERROR: too many offline disks in PST (grp 1)
Mon Jul 6 15:18:46 2015
ERROR: PST-initiated MANDATORY DISMOUNT of group DG_xxxx

The above makes it clear that disk 32 of the ASM diskgroup hit an I/O problem, which forced the diskgroup offline and ultimately crashed the database instance. The customer then tried to mount the diskgroup manually and hit errors like the following:

SQL> alter diskgroup datadg mount
Mon Jul 6 15:33:50 2015
Errors in file /oraclehome/product/10.2.0/admin/+ASM/bdump/+asm_dbw0_275092.trc:
ORA-15066: offlining disk "DG_xxxx_0032" may result in a data loss
ORA-15066: offlining disk "DG_xxxx_0032" may result in a data loss
ORA-15066: offlining disk "DG_xxxx_0032" may result in a data loss
ORA-15066: offlining disk "DG_xxxx_0032" may result in a data loss
............
ORA-15066: offlining disk "DG_xxxx_0032" may result in a data loss
Mon Jul 6 15:33:51 2015
Errors in file /oraclehome/product/10.2.0/admin/+ASM/bdump/+asm_b000_360654.trc:
ORA-00600: internal error code, arguments: [kfcDismount02], [], [], [], [], [], [], []
Mon Jul 6 15:33:52 2015
NOTE: cache dismounting group 1/0xDDDF2CC7 (DG_xxxx)
NOTE: dbwr not being msg'd to dismount
Mon Jul 6 15:33:52 2015

This ORA-600 [kfcDismount02] error is very likely a known bug. After the corresponding patch was installed, another mount attempt still failed, but the error had changed:

SQL> alter diskgroup dg_xxxx mount
Tue Jul 7 05:49:29 2015
NOTE: cache registered group DG_xxxx number=1 incarn=0x72661a1f
Tue Jul 7 05:49:31 2015
NOTE: Hbeat: instance first (grp 1)
Tue Jul 7 05:49:36 2015
NOTE: start heartbeating (grp 1)
NOTE: cache opening disk 0 of grp 1: DG_xxxx_0000 path:/dev/rhdiskpower41
NOTE: cache opening disk 1 of grp 1: DG_xxxx_0001 path:/dev/rhdiskpower42
NOTE: cache opening disk 2 of grp 1: DG_xxxx_0002 path:/dev/rhdiskpower43
NOTE: cache opening disk 3 of grp 1: DG_xxxx_0003 path:/dev/rhdiskpower44
NOTE: cache opening disk 4 of grp 1: DG_xxxx_0004 path:/dev/rhdiskpower45
............
NOTE: cache opening disk 33 of grp 1: DG_xxxx_0033 path:/dev/rhdiskpower15
NOTE: cache opening disk 34 of grp 1: DG_xxxx_0034 path:/dev/rhdiskpower14
NOTE: cache opening disk 35 of grp 1: DG_xxxx_0035 path:/dev/rhdiskpower13
NOTE: cache mounting (first) group 1/0x72661A1F (DG_xxxx)
NOTE: starting recovery of thread=1 ckpt=6295.7329 group=1
NOTE: crash recovery signalled OER-15131
ERROR: ORA-15131 signalled during mount of diskgroup DG_xxxx
NOTE: cache dismounting group 1/0x72661A1F (DG_xxxx)
ERROR: diskgroup DG_xxxx was not mounted
Tue Jul 7 05:50:10 2015
Shutting down instance: further logons disabled

As you can see, ASM has to perform crash recovery when mounting the diskgroup, and the checkpoint position is 6295.7329. The trace shows where that checkpoint read points:

NOTE: starting recovery of thread=1 ckpt=6295.7329 group=1
CE: (0x7000000107c9640) group=1 (DG_xxxx) obj=625 blk=256 (indirect) hashFlags=0x0100 lid=0x0002 lruFlags=0x0000
bastCount=1 redundancy=0x11 fileExtent=0 AUindex=0 blockIndex=0
copy #0: disk=32 au=1638611
BH: (0x70000001079c360) bnum=143 type=rcv reading state=rcvRead chgSt=not modifying flags=0x00000000
pinmode=excl lockmode=null bf=0x70000001048e000
kfbh_kfcbh.fcn_kfbh = 0.0 lowAba=0.0 highAba=0.0
last kfcbInitSlot return code=null cpkt lnk is null
*** 2015-07-07 05:26:12.382
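The key line in this trace is `copy #0: disk=32 au=1638611`. As a quick sanity check of where that AU sits on the disk (assuming the 10g default allocation-unit size of 1 MiB, as the article states), the arithmetic can be sketched as:

```python
AU_SIZE = 1 * 1024 * 1024  # 10g default ASM allocation-unit size: 1 MiB
au_number = 1638611        # AU referenced by the crash-recovery checkpoint (disk 32)

# Each AU is a fixed-size slice of the disk, so the byte offset is a simple product
offset_bytes = au_number * AU_SIZE
offset_tib = offset_bytes / 1024**4

print(f"AU {au_number} starts at byte {offset_bytes} (~{offset_tib:.2f} TiB into the disk)")
```

This lands at roughly 1.56 TiB, matching the "about 1.6 T" estimate below.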

So Oracle needs to read AU number 1638611 on disk 32. In 10g the default AU size is 1 MB, which puts this position about 1.6 TB into the disk. The checkpoint position itself is easy to locate; it can be read directly with kfed, as follows:

[xxxx:/oraclehome]$ kfed read /xxx/rhdiskpower13 aun=3 blkn=0|more
kfbh.endian:          0 ; 0x000: 0x00
kfbh.hard:          130 ; 0x001: 0x82
kfbh.type:            7 ; 0x002: KFBTYP_ACDC
kfbh.datfmt:          1 ; 0x003: 0x01
kfbh.block.blk:       0 ; 0x004: T=0 NUMB=0x0
kfbh.block.obj:       3 ; 0x008: TYPE=0x0 NUMB=0x3
kfbh.check:  1350450563 ; 0x00c: 0x507e3d83
kfbh.fcn.base:        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:        0 ; 0x014: 0x00000000
kfbh.spare1:          0 ; 0x018: 0x00000000
kfbh.spare2:          0 ; 0x01c: 0x00000000
kfracdc.eyec[0]:     65 ; 0x000: 0x41
kfracdc.eyec[1]:     67 ; 0x001: 0x43
kfracdc.eyec[2]:     68 ; 0x002: 0x44
kfracdc.eyec[3]:     67 ; 0x003: 0x43
kfracdc.thread:       1 ; 0x004: 0x00000001
kfracdc.lastAba.seq: 4294967295 ; 0x008: 0xffffffff
kfracdc.lastAba.blk: 4294967295 ; 0x00c: 0xffffffff
kfracdc.blk0:         1 ; 0x010: 0x00000001
kfracdc.blks:     10751 ; 0x014: 0x000029ff
kfracdc.ckpt.seq:  6295 ; 0x018: 0x00001897  -- the ckpt seq value
kfracdc.ckpt.blk:  7329 ; 0x01c: 0x00001ca1  -- the ckpt blk value
kfracdc.fcn.base: 297751371 ; 0x020: 0x11bf534b
kfracdc.fcn.wrap:     0 ; 0x024: 0x00000000
kfracdc.bufBlks:     64 ; 0x028: 0x00000040
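The checkpoint printed in the mount trace (ckpt=6295.7329) is just the seq.blk pair stored in the kfracdc.ckpt fields above; a quick check of the hex values confirms they are the same numbers:

```python
# kfracdc.ckpt fields from the kfed dump of the ACD control block
ckpt_seq = 0x00001897  # kfracdc.ckpt.seq
ckpt_blk = 0x00001ca1  # kfracdc.ckpt.blk

# The mount trace reports the checkpoint as "seq.blk"
print(f"ckpt = {ckpt_seq}.{ckpt_blk}")  # prints "ckpt = 6295.7329"
```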

After various further attempts, the customer still got the following error when mounting the diskgroup:

Tue Jul 7 18:03:03 2015
Errors in file /oraclehome/product/10.2.0/admin/+ASM/udump/+asm_ora_438636.trc:
ORA-00600: internal error code, arguments: [kfcChkAio01], [], [], [], [], [], [], []
ORA-15196: invalid ASM block header [kfc.c:5553] [blk_kfbl] [625] [2147483904] [2147483905 != 2147483904]
NOTE: crash recovery signalled OER-600
ERROR: ORA-600 signalled during mount of diskgroup DG_xxxx
NOTE: cache dismounting group 1/0xE70AB6D0 (DG_xxxx)
ERROR: diskgroup DG_xxxx was not mounted
Tue Jul 7 18:05:38 2015

This matches the Oracle MOS note "Understanding and Fixing Errors ORA-600 [kfcChkAio01] and ORA-15196" (Doc ID 757529.1) exactly, so in the end we advised the customer to follow that document. The fix is actually very simple: the note supplies a shell script in which only the block number needs to be changed. Once that was done, the diskgroup mounted successfully:

Tue Jul 7 18:05:38 2015
SQL> alter diskgroup dg_xxxx mount
Tue Jul 7 18:05:38 2015
NOTE: cache registered group DG_xxxx number=1 incarn=0xce0ab6d3
Tue Jul 7 18:05:38 2015
NOTE: Hbeat: instance first (grp 1)
Tue Jul 7 18:05:43 2015
NOTE: start heartbeating (grp 1)
............
NOTE: cache mounting (first) group 1/0xCE0AB6D3 (DG_xxxx)
NOTE: starting recovery of thread=1 ckpt=6295.7329 group=1
NOTE: advancing ckpt for thread=1 ckpt=6295.8649
NOTE: cache recovered group 1 to fcn 0.297779775
Tue Jul 7 18:05:43 2015
NOTE: opening chunk 1 at fcn 0.297779775 ABA
NOTE: seq=6296 blk=8650
Tue Jul 7 18:05:43 2015
NOTE: cache mounting group 1/0xCE0AB6D3 (DG_xxxx) succeeded
SUCCESS: diskgroup DG_xxxx was mounted
Tue Jul 7 18:05:45 2015
NOTE: recovering COD for group 1/0xce0ab6d3 (DG_xxxx)
SUCCESS: completed COD recovery for group 1/0xce0ab6d3 (DG_xxxx)

Finally, let's step back and explain why this happened at all. The root cause was a sloppy disk-add operation the customer had performed earlier:

Tue May 20 15:43:26 2014
SQL> alter diskgroup dg_xxxx add disk '/xxx/rhdiskpower24' size 1677721M,
.......
'/xxx/rhdiskpower16' size 1677721M,
'/xxx/rhdiskpower15' size 1677721M,
'/xxx/rhdiskpower14' size 1677721M,
'/xxx/rhdiskpower13' size 1677721M
rebalance power 8
Wed May 21 08:45:13 2014
Starting background process ASMB
ASMB started with pid=13, OS id=267028
Wed May 21 08:45:14 2014
NOTE: ASMB process exiting due to lack of ASM file activity
Wed May 21 12:24:34 2014
NOTE: stopping process ARB5
NOTE: stopping process ARB2
NOTE: stopping process ARB6
NOTE: stopping process ARB1
NOTE: stopping process ARB3
NOTE: stopping process ARB7
NOTE: stopping process ARB4
NOTE: stopping process ARB0
Wed May 21 12:24:38 2014
SUCCESS: rebalance completed for group 1/0x595ad46e (DG_xxxx)
Wed May 21 12:24:38 2014
SUCCESS: rebalance completed for group 1/0x595ad46e (DG_xxxx)

The disk that failed earlier was disk 32, added with a declared size of 1677721M; on inspection we found its actual physical size is only 1638400M.

In other words, when adding the disk the customer specified a size larger than the device really is, and Oracle took the figure at face value. Admittedly, this also shows that ASM's size validation in Oracle 10g was not strict enough.

So the problem is clear: the failing AU number, 1638611, is greater than 1638400, a position that does not exist on the physical disk, and that is why ASM crashed.

Note: the customer's ASM diskgroup used 36 disks of about 1.6 TB each, roughly 53 TB in total, and nearly all of it was in use. Fortunately a simple fix was possible; otherwise the difficulty and workload of the recovery would have been enormous. Undeniably, Enmotech (云和恩墨) remains the strongest database recovery company in China, bar none!
