Sunday Mar 31, 2013

[11g New Feature] Cardinality Feedback

Cardinality Feedback is a SQL performance optimization feature introduced in version 11.2. It targets cases where statistics are stale, histograms are missing, or cardinality estimates are inaccurate even when histograms exist. Cardinality estimates feed directly into later cost calculations such as join costing, so a bad estimate can lead the CBO to choose a poor execution plan. That is the motivation behind introducing Cardinality Feedback.
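As a minimal sketch of how the feature surfaces (hypothetical table t and column col1; an 11.2 test database assumed), feedback can be observed through the execution plan note, and disabled via the hidden parameter _optimizer_use_feedback (undocumented; use with care):

```sql
-- Run the statement once; the optimizer compares actual vs. estimated rows.
SELECT /*+ gather_plan_statistics */ COUNT(*) FROM t WHERE col1 = 1;

-- Run it a second time, then inspect the cursor's plan:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
-- When feedback has kicked in, the Note section reports:
--   "cardinality feedback used for this statement"

-- Disable the feature at session level (hidden parameter, 11.2):
ALTER SESSION SET "_optimizer_use_feedback" = FALSE;
```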

Thursday Mar 28, 2013

Exadata offload incremental backup

A user asked the following question on the official OTN Chinese forum:

"Does Exadata's offloading during RMAN backup require BCT (Block Change Tracking) to be enabled on the database?
In an Oracle database, BCT uses a file to flag which blocks within a group of data blocks have been modified.
How is Exadata's offloading implemented during an RMAN backup?"

Maclean answered:

"Exadata's offload incremental backup optimization works at the granularity of individual data blocks, whereas block change tracking uses a bitmap to track a group of blocks at a time, so the offloaded incremental backup is both finer-grained and smarter.

With Exadata, changes are tracked at the individual Oracle block level rather than at the level of a large group of blocks. This results in less I/O bandwidth being consumed for backups and faster-running backups."
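For comparison purposes, BCT is enabled with standard SQL (the target location below is illustrative, not from the test system):

```sql
-- Enable Block Change Tracking; the tracking file can live in ASM or on disk.
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '+DATA_DM01';

-- Verify the current state:
SELECT status, filename FROM v$block_change_tracking;
```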

Testing offload incremental backup on Exadata: with block change tracking disabled, an incremental level 1 backup of a 200GB datafile took only 7 seconds:

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name DBM

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    16384    SYSTEM               ***     +DATA_DM01/dbm/datafile/system.256.790880171
2    16384    SYSAUX               ***     +DATA_DM01/dbm/datafile/sysaux.257.790880183
3    16384    UNDOTBS1             ***     +DATA_DM01/dbm/datafile/undotbs1.258.790880195
4    16384    UNDOTBS2             ***     +DATA_DM01/dbm/datafile/undotbs2.260.790880213
5    1024     USERS                ***     +DATA_DM01/dbm/datafile/users.261.790880225
6    204800   TESTEXA              ***     +DATA_DM01/dbm/datafile/testexa.264.792624955

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    32767    TEMP                 33554431    +DATA_DM01/dbm/tempfile/temp.259.790880685

RMAN> 

RMAN> backup as compressed backupset incremental level 1 tablespace testexa;

Starting backup at 05-SEP-12
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=784 instance=dbm1 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=850 instance=dbm1 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=914 instance=dbm1 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=981 instance=dbm1 device type=DISK
allocated channel: ORA_DISK_5
channel ORA_DISK_5: SID=1042 instance=dbm1 device type=DISK
allocated channel: ORA_DISK_6
channel ORA_DISK_6: SID=1109 instance=dbm1 device type=DISK
allocated channel: ORA_DISK_7
channel ORA_DISK_7: SID=1174 instance=dbm1 device type=DISK
allocated channel: ORA_DISK_8
channel ORA_DISK_8: SID=1238 instance=dbm1 device type=DISK
allocated channel: ORA_DISK_9
channel ORA_DISK_9: SID=1302 instance=dbm1 device type=DISK
allocated channel: ORA_DISK_10
channel ORA_DISK_10: SID=1369 instance=dbm1 device type=DISK
allocated channel: ORA_DISK_11
channel ORA_DISK_11: SID=1435 instance=dbm1 device type=DISK
allocated channel: ORA_DISK_12
channel ORA_DISK_12: SID=3 instance=dbm1 device type=DISK
channel ORA_DISK_1: starting compressed incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00006 name=+DATA_DM01/dbm/datafile/testexa.264.792624955
channel ORA_DISK_1: starting piece 1 at 05-SEP-12
channel ORA_DISK_1: finished piece 1 at 05-SEP-12
piece handle=+RECO_DM01/dbm/backupset/2012_09_05/nnndn1_tag20120905t100651_0.499.793188411 tag=TAG20120905T100651 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
Finished backup at 05-SEP-12

Exadata X2-2 quarter-rack parallel backup test

[root@dm01db01 ~]# imageinfo

Kernel version: 2.6.18-274.18.1.0.1.el5 #1 SMP Thu Feb 9 19:07:16 EST 2012 x86_64
Image version: 11.2.3.1.1.120607
Image activated: 2012-08-14 19:16:01 -0400
Image status: success
System partition on device: /dev/mapper/VGExaDb-LVDbSys1

rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Mon Sep 3 10:13:11 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

RMAN> show all;

RMAN configuration parameters for database with db_unique_name DBM are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 12 BACKUP TYPE TO BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/11.2.0/dbhome_1/dbs/snapcf_dbm1.f'; # default

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name DBM

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    16384    SYSTEM               ***     +DATA_DM01/dbm/datafile/system.256.790880171
2    16384    SYSAUX               ***     +DATA_DM01/dbm/datafile/sysaux.257.790880183
3    16384    UNDOTBS1             ***     +DATA_DM01/dbm/datafile/undotbs1.258.790880195
4    16384    UNDOTBS2             ***     +DATA_DM01/dbm/datafile/undotbs2.260.790880213
5    1024     USERS                ***     +DATA_DM01/dbm/datafile/users.261.790880225
6    204800   TESTEXA              ***     +DATA_DM01/dbm/datafile/testexa.264.792624955

The TESTEXA tablespace has been filled with data.
First, a serial backup test:
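For reference, the 200GB TESTEXA tablespace could be populated along these lines (hypothetical table name; not the exact script used in this test):

```sql
-- Seed a table in the test tablespace, then grow it by repeated appends
-- until the tablespace is (nearly) full.
CREATE TABLE test_large TABLESPACE testexa AS SELECT * FROM dba_objects;

INSERT /*+ APPEND */ INTO test_large SELECT * FROM test_large;
COMMIT;
-- repeat the INSERT/COMMIT pair as needed
```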

RMAN> run
2> {
3> allocate channel c1 type disk;
4> backup as compressed backupset incremental level 0 tablespace testexa channel c1;
5> }

allocated channel: c1
channel c1: SID=589 instance=dbm1 device type=DISK

Starting backup at 03-SEP-12
channel c1: starting compressed incremental level 0 datafile backup set
channel c1: specifying datafile(s) in backup set
input datafile file number=00006 name=+DATA_DM01/dbm/datafile/testexa.264.792624955
channel c1: starting piece 1 at 03-SEP-12
channel c1: finished piece 1 at 03-SEP-12
piece handle=+RECO_DM01/dbm/backupset/2012_09_03/nnndn0_tag20120903t101448_0.295.793016089 tag=TAG20120903T101448 comment=NONE
channel c1: backup set complete, elapsed time: 00:08:25
Finished backup at 03-SEP-12
released channel: c1

Backing up 200GB of data took 8 minutes 25 seconds.

Now a backup with parallelism 12 and SECTION SIZE 1024MB:

RMAN> backup as compressed backupset section size 1024M incremental level 0 tablespace testexa;

piece handle=+RECO_DM01/dbm/backupset/2012_09_03/nnndn0_tag20120903t103006_0.493.793017075 tag=TAG20120903T103006 comment=NONE
channel ORA_DISK_5: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_7: finished piece 196 at 03-SEP-12
piece handle=+RECO_DM01/dbm/backupset/2012_09_03/nnndn0_tag20120903t103006_0.494.793017075 tag=TAG20120903T103006 comment=NONE
channel ORA_DISK_7: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_8: finished piece 197 at 03-SEP-12
piece handle=+RECO_DM01/dbm/backupset/2012_09_03/nnndn0_tag20120903t103006_0.495.793017075 tag=TAG20120903T103006 comment=NONE
channel ORA_DISK_8: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_2: finished piece 199 at 03-SEP-12
piece handle=+RECO_DM01/dbm/backupset/2012_09_03/nnndn0_tag20120903t103006_0.497.793017075 tag=TAG20120903T103006 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_3: finished piece 200 at 03-SEP-12
piece handle=+RECO_DM01/dbm/backupset/2012_09_03/nnndn0_tag20120903t103006_0.498.793017075 tag=TAG20120903T103006 comment=NONE
channel ORA_DISK_3: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_9: finished piece 198 at 03-SEP-12
piece handle=+RECO_DM01/dbm/backupset/2012_09_03/nnndn0_tag20120903t103006_0.496.793017075 tag=TAG20120903T103006 comment=NONE
channel ORA_DISK_9: backup set complete, elapsed time: 00:00:02
channel ORA_DISK_1: finished piece 25 at 03-SEP-12
piece handle=+RECO_DM01/dbm/backupset/2012_09_03/nnndn0_tag20120903t103006_0.323.793017059 tag=TAG20120903T103006 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:01:11
Finished backup at 03-SEP-12

Elapsed time: about 90 seconds, roughly a 5-6x speedup over the 8m25s serial run.

Testing a single Exadata cell failure

To test a single cell failure, simulate the loss of one of the three cell servers in an X2-2 quarter rack:

On the cell server:

CellCLI> alter cell shutdown services all

Stopping the RS, CELLSRV, and MS services...
The SHUTDOWN of services was successful.
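Before and after the shutdown, the cell and grid disk state can be checked with standard CellCLI commands (a sketch; attribute names as documented for 11.2 CellCLI):

```
CellCLI> list cell attributes cellsrvStatus, msStatus, rsStatus
CellCLI> list griddisk attributes name, status, asmmodestatus, asmdeactivationoutcome
```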

ASM alert log:

Sun Sep 02 09:08:48 2012
Errors in file /u01/app/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_gmon_30756.trc:
ORA-27603: Cell storage I/O error, I/O failed on disk o/192.168.64.131/DATA_DM01_CD_00_dm01cel01 at offset 8384512 for data length 4096
ORA-27626: Exadata error: 12 (Network error)
ORA-27300: OS system dependent operation:rpc update timed out failed with status: 0
ORA-27301: OS failure message: Error 0
ORA-27302: failure occurred at: skgxp_path
WARNING: Write Failed. group:1 disk:0 AU:1 offset:4190208 size:4096
WARNING: Hbeat write to PST disk 0.3940753198 (DATA_DM01_CD_00_DM01CEL01) in group 1 failed. 
WARNING: Write Failed. group:2 disk:0 AU:1 offset:4190208 size:4096
WARNING: Hbeat write to PST disk 0.3940753233 (DBFS_DG_CD_02_DM01CEL01) in group 2 failed. 
WARNING: Write Failed. group:3 disk:0 AU:1 offset:4190208 size:4096
WARNING: Hbeat write to PST disk 0.3940753265 (RECO_DM01_CD_00_DM01CEL01) in group 3 failed. 
Sun Sep 02 09:08:48 2012
NOTE: process _b000_+asm1 (21302) initiating offline of disk 0.3940753198 (DATA_DM01_CD_00_DM01CEL01) with mask 0x7e in group 1
NOTE: checking PST: grp = 1
Sun Sep 02 09:08:48 2012
NOTE: process _b001_+asm1 (21304) initiating offline of disk 0.3940753233 (DBFS_DG_CD_02_DM01CEL01) with mask 0x7e in group 2
NOTE: checking PST: grp = 2
GMON checking disk modes for group 1 at 10 for pid 30, osid 21302
Sun Sep 02 09:08:48 2012
NOTE: process _b002_+asm1 (21306) initiating offline of disk 0.3940753265 (RECO_DM01_CD_00_DM01CEL01) with mask 0x7e in group 3
NOTE: checking PST: grp = 3
WARNING: Read Failed. group:1 disk:11 AU:1 offset:4096 size:4096
WARNING: Read Failed. group:1 disk:11 AU:1 offset:0 size:4096
WARNING: Read Failed. group:1 disk:10 AU:1 offset:4096 size:4096
WARNING: Read Failed. group:1 disk:10 AU:1 offset:0 size:4096
WARNING: Read Failed. group:1 disk:9 AU:1 offset:4096 size:4096
WARNING: Read Failed. group:1 disk:9 AU:1 offset:0 size:4096
WARNING: Read Failed. group:1 disk:8 AU:1 offset:4096 size:4096
WARNING: Read Failed. group:1 disk:8 AU:1 offset:0 size:4096
WARNING: Read Failed. group:1 disk:7 AU:1 offset:4096 size:4096
WARNING: Read Failed. group:1 disk:7 AU:1 offset:0 size:4096
WARNING: Read Failed. group:1 disk:6 AU:1 offset:4096 size:4096
WARNING: Read Failed. group:1 disk:6 AU:1 offset:0 size:4096
WARNING: Read Failed. group:1 disk:5 AU:1 offset:4096 size:4096
WARNING: Read Failed. group:1 disk:5 AU:1 offset:0 size:4096
WARNING: Read Failed. group:1 disk:4 AU:1 offset:4096 size:4096
WARNING: Read Failed. group:1 disk:4 AU:1 offset:0 size:4096
WARNING: Read Failed. group:1 disk:3 AU:1 offset:4096 size:4096
WARNING: Read Failed. group:1 disk:3 AU:1 offset:0 size:4096
WARNING: Read Failed. group:1 disk:2 AU:1 offset:4096 size:4096
WARNING: Read Failed. group:1 disk:2 AU:1 offset:0 size:4096
WARNING: Read Failed. group:1 disk:1 AU:1 offset:4096 size:4096
WARNING: Read Failed. group:1 disk:1 AU:1 offset:0 size:4096
WARNING: Write Failed. group:1 disk:1 AU:1 offset:4096 size:4096
WARNING: Write Failed. group:1 disk:2 AU:1 offset:4096 size:4096
WARNING: Write Failed. group:1 disk:3 AU:1 offset:4096 size:4096
WARNING: Write Failed. group:1 disk:4 AU:1 offset:4096 size:4096
WARNING: Write Failed. group:1 disk:5 AU:1 offset:4096 size:4096
WARNING: Write Failed. group:1 disk:6 AU:1 offset:4096 size:4096
WARNING: Write Failed. group:1 disk:7 AU:1 offset:4096 size:4096
WARNING: Write Failed. group:1 disk:8 AU:1 offset:4096 size:4096
WARNING: Write Failed. group:1 disk:9 AU:1 offset:4096 size:4096
WARNING: Write Failed. group:1 disk:10 AU:1 offset:4096 size:4096
WARNING: Write Failed. group:1 disk:11 AU:1 offset:4096 size:4096
WARNING: GMON has insufficient disks to maintain consensus. minimum required is 3
NOTE: group DATA_DM01: updated PST location: disk 0012 (PST copy 0)
NOTE: group DATA_DM01: updated PST location: disk 0024 (PST copy 1)
GMON checking disk modes for group 2 at 11 for pid 35, osid 21304
NOTE: checking PST for grp 1 done.
WARNING: Read Failed. group:2 disk:9 AU:1 offset:4096 size:4096
WARNING: Read Failed. group:2 disk:9 AU:1 offset:0 size:4096
WARNING: Read Failed. group:2 disk:8 AU:1 offset:4096 size:4096
WARNING: Read Failed. group:2 disk:8 AU:1 offset:0 size:4096
WARNING: Read Failed. group:2 disk:7 AU:1 offset:4096 size:4096
WARNING: Read Failed. group:2 disk:7 AU:1 offset:0 size:4096
WARNING: Read Failed. group:2 disk:6 AU:1 offset:4096 size:4096
WARNING: Read Failed. group:2 disk:6 AU:1 offset:0 size:4096
WARNING: Read Failed. group:2 disk:5 AU:1 offset:4096 size:4096
WARNING: Read Failed. group:2 disk:5 AU:1 offset:0 size:4096
WARNING: Read Failed. group:2 disk:4 AU:1 offset:4096 size:4096
WARNING: Read Failed. group:2 disk:4 AU:1 offset:0 size:4096
WARNING: Read Failed. group:2 disk:3 AU:1 offset:4096 size:4096
WARNING: Read Failed. group:2 disk:3 AU:1 offset:0 size:4096
WARNING: Read Failed. group:2 disk:2 AU:1 offset:4096 size:4096
WARNING: Read Failed. group:2 disk:2 AU:1 offset:0 size:4096
WARNING: Read Failed. group:2 disk:1 AU:1 offset:4096 size:4096
WARNING: Read Failed. group:2 disk:1 AU:1 offset:0 size:4096
WARNING: Write Failed. group:2 disk:1 AU:1 offset:4096 size:4096
WARNING: Write Failed. group:2 disk:2 AU:1 offset:4096 size:4096
WARNING: Write Failed. group:2 disk:3 AU:1 offset:4096 size:4096
WARNING: Write Failed. group:2 disk:4 AU:1 offset:4096 size:4096
WARNING: Write Failed. group:2 disk:5 AU:1 offset:4096 size:4096
WARNING: Write Failed. group:2 disk:6 AU:1 offset:4096 size:4096
WARNING: Write Failed. group:2 disk:7 AU:1 offset:4096 size:4096
WARNING: Write Failed. group:2 disk:8 AU:1 offset:4096 size:4096
WARNING: Write Failed. group:2 disk:9 AU:1 offset:4096 size:4096
WARNING: GMON has insufficient disks to maintain consensus. minimum required is 3
NOTE: group DBFS_DG: updated PST location: disk 0010 (PST copy 0)
NOTE: group DBFS_DG: updated PST location: disk 0020 (PST copy 1)
WARNING: Disk 0 (DATA_DM01_CD_00_DM01CEL01) in group 1 mode 0x7f is now being offlined
NOTE: checking PST for grp 2 done.
GMON checking disk modes for group 3 at 12 for pid 36, osid 21306
WARNING: Disk 0 (DATA_DM01_CD_00_DM01CEL01) in group 1 in mode 0x7f is now being taken offline on ASM inst 1
WARNING: Read Failed. group:3 disk:11 AU:1 offset:4096 size:4096
WARNING: Read Failed. group:3 disk:11 AU:1 offset:0 size:4096
WARNING: Read Failed. group:3 disk:10 AU:1 offset:4096 size:4096
WARNING: Read Failed. group:3 disk:10 AU:1 offset:0 size:4096
WARNING: Read Failed. group:3 disk:9 AU:1 offset:4096 size:4096
WARNING: Read Failed. group:3 disk:9 AU:1 offset:0 size:4096
WARNING: Read Failed. group:3 disk:8 AU:1 offset:4096 size:4096
WARNING: Read Failed. group:3 disk:8 AU:1 offset:0 size:4096
WARNING: Read Failed. group:3 disk:7 AU:1 offset:4096 size:4096
WARNING: Read Failed. group:3 disk:7 AU:1 offset:0 size:4096
WARNING: Read Failed. group:3 disk:6 AU:1 offset:4096 size:4096
WARNING: Read Failed. group:3 disk:6 AU:1 offset:0 size:4096
WARNING: Read Failed. group:3 disk:5 AU:1 offset:4096 size:4096
WARNING: Read Failed. group:3 disk:5 AU:1 offset:0 size:4096
WARNING: Read Failed. group:3 disk:4 AU:1 offset:4096 size:4096
WARNING: Read Failed. group:3 disk:4 AU:1 offset:0 size:4096
WARNING: Read Failed. group:3 disk:3 AU:1 offset:4096 size:4096
WARNING: Read Failed. group:3 disk:3 AU:1 offset:0 size:4096
WARNING: Read Failed. group:3 disk:2 AU:1 offset:4096 size:4096
WARNING: Read Failed. group:3 disk:2 AU:1 offset:0 size:4096
WARNING: Read Failed. group:3 disk:1 AU:1 offset:4096 size:4096
WARNING: Read Failed. group:3 disk:1 AU:1 offset:0 size:4096
WARNING: Write Failed. group:3 disk:1 AU:1 offset:4096 size:4096
WARNING: Write Failed. group:3 disk:2 AU:1 offset:4096 size:4096
WARNING: Write Failed. group:3 disk:3 AU:1 offset:4096 size:4096
WARNING: Write Failed. group:3 disk:4 AU:1 offset:4096 size:4096
WARNING: Write Failed. group:3 disk:5 AU:1 offset:4096 size:4096
WARNING: Write Failed. group:3 disk:6 AU:1 offset:4096 size:4096
WARNING: Write Failed. group:3 disk:7 AU:1 offset:4096 size:4096
WARNING: Write Failed. group:3 disk:8 AU:1 offset:4096 size:4096
WARNING: Write Failed. group:3 disk:9 AU:1 offset:4096 size:4096
WARNING: Write Failed. group:3 disk:10 AU:1 offset:4096 size:4096
WARNING: Write Failed. group:3 disk:11 AU:1 offset:4096 size:4096
NOTE: group RECO_DM01: updated PST location: disk 0012 (PST copy 0)
NOTE: group RECO_DM01: updated PST location: disk 0024 (PST copy 1)
NOTE: checking PST for grp 3 done.
WARNING: Disk 1 (DATA_DM01_CD_01_DM01CEL01) in group 1 mode 0x7f is now being offlined
WARNING: Disk 1 (DATA_DM01_CD_01_DM01CEL01) in group 1 in mode 0x7f is now being taken offline on ASM inst 1
WARNING: Disk 0 (DBFS_DG_CD_02_DM01CEL01) in group 2 mode 0x7f is now being offlined
WARNING: Disk 0 (DBFS_DG_CD_02_DM01CEL01) in group 2 in mode 0x7f is now being taken offline on ASM inst 1
WARNING: Disk 0 (RECO_DM01_CD_00_DM01CEL01) in group 3 mode 0x7f is now being offlined
WARNING: Disk 0 (RECO_DM01_CD_00_DM01CEL01) in group 3 in mode 0x7f is now being taken offline on ASM inst 1
WARNING: Disk 2 (DATA_DM01_CD_02_DM01CEL01) in group 1 mode 0x7f is now being offlined
WARNING: Disk 2 (DATA_DM01_CD_02_DM01CEL01) in group 1 in mode 0x7f is now being taken offline on ASM inst 1
WARNING: Disk 1 (DBFS_DG_CD_03_DM01CEL01) in group 2 mode 0x7f is now being offlined
WARNING: Disk 1 (DBFS_DG_CD_03_DM01CEL01) in group 2 in mode 0x7f is now being taken offline on ASM inst 1
WARNING: Disk 1 (RECO_DM01_CD_01_DM01CEL01) in group 3 mode 0x7f is now being offlined
WARNING: Disk 1 (RECO_DM01_CD_01_DM01CEL01) in group 3 in mode 0x7f is now being taken offline on ASM inst 1
WARNING: Disk 3 (DATA_DM01_CD_03_DM01CEL01) in group 1 mode 0x7f is now being offlined
WARNING: Disk 3 (DATA_DM01_CD_03_DM01CEL01) in group 1 in mode 0x7f is now being taken offline on ASM inst 1
WARNING: Disk 2 (DBFS_DG_CD_04_DM01CEL01) in group 2 mode 0x7f is now being offlined
WARNING: Disk 2 (DBFS_DG_CD_04_DM01CEL01) in group 2 in mode 0x7f is now being taken offline on ASM inst 1
WARNING: Disk 2 (RECO_DM01_CD_02_DM01CEL01) in group 3 mode 0x7f is now being offlined
WARNING: Disk 2 (RECO_DM01_CD_02_DM01CEL01) in group 3 in mode 0x7f is now being taken offline on ASM inst 1
WARNING: Disk 4 (DATA_DM01_CD_04_DM01CEL01) in group 1 mode 0x7f is now being offlined
WARNING: Disk 4 (DATA_DM01_CD_04_DM01CEL01) in group 1 in mode 0x7f is now being taken offline on ASM inst 1
WARNING: Disk 3 (DBFS_DG_CD_05_DM01CEL01) in group 2 mode 0x7f is now being offlined
WARNING: Disk 3 (DBFS_DG_CD_05_DM01CEL01) in group 2 in mode 0x7f is now being taken offline on ASM inst 1
WARNING: Disk 3 (RECO_DM01_CD_03_DM01CEL01) in group 3 mode 0x7f is now being offlined
WARNING: Disk 3 (RECO_DM01_CD_03_DM01CEL01) in group 3 in mode 0x7f is now being taken offline on ASM inst 1
WARNING: Disk 5 (DATA_DM01_CD_05_DM01CEL01) in group 1 mode 0x7f is now being offlined
WARNING: Disk 5 (DATA_DM01_CD_05_DM01CEL01) in group 1 in mode 0x7f is now being taken offline on ASM inst 1
WARNING: Disk 4 (DBFS_DG_CD_06_DM01CEL01) in group 2 mode 0x7f is now being offlined
WARNING: Disk 4 (DBFS_DG_CD_06_DM01CEL01) in group 2 in mode 0x7f is now being taken offline on ASM inst 1
WARNING: Disk 4 (RECO_DM01_CD_04_DM01CEL01) in group 3 mode 0x7f is now being offlined
WARNING: Disk 4 (RECO_DM01_CD_04_DM01CEL01) in group 3 in mode 0x7f is now being taken offline on ASM inst 1
WARNING: Disk 6 (DATA_DM01_CD_06_DM01CEL01) in group 1 mode 0x7f is now being offlined
WARNING: Disk 6 (DATA_DM01_CD_06_DM01CEL01) in group 1 in mode 0x7f is now being taken offline on ASM inst 1
WARNING: Disk 5 (DBFS_DG_CD_07_DM01CEL01) in group 2 mode 0x7f is now being offlined
WARNING: Disk 5 (DBFS_DG_CD_07_DM01CEL01) in group 2 in mode 0x7f is now being taken offline on ASM inst 1
WARNING: Disk 5 (RECO_DM01_CD_05_DM01CEL01) in group 3 mode 0x7f is now being offlined
WARNING: Disk 5 (RECO_DM01_CD_05_DM01CEL01) in group 3 in mode 0x7f is now being taken offline on ASM inst 1
WARNING: Disk 7 (DATA_DM01_CD_07_DM01CEL01) in group 1 mode 0x7f is now being offlined
WARNING: Disk 7 (DATA_DM01_CD_07_DM01CEL01) in group 1 in mode 0x7f is now being taken offline on ASM inst 1
WARNING: Disk 6 (DBFS_DG_CD_08_DM01CEL01) in group 2 mode 0x7f is now being offlined
WARNING: Disk 6 (DBFS_DG_CD_08_DM01CEL01) in group 2 in mode 0x7f is now being taken offline on ASM inst 1
WARNING: Disk 6 (RECO_DM01_CD_06_DM01CEL01) in group 3 mode 0x7f is now being offlined
WARNING: Disk 6 (RECO_DM01_CD_06_DM01CEL01) in group 3 in mode 0x7f is now being taken offline on ASM inst 1
WARNING: Disk 8 (DATA_DM01_CD_08_DM01CEL01) in group 1 mode 0x7f is now being offlined
WARNING: Disk 8 (DATA_DM01_CD_08_DM01CEL01) in group 1 in mode 0x7f is now being taken offline on ASM inst 1
WARNING: Disk 7 (DBFS_DG_CD_09_DM01CEL01) in group 2 mode 0x7f is now being offlined
WARNING: Disk 7 (DBFS_DG_CD_09_DM01CEL01) in group 2 in mode 0x7f is now being taken offline on ASM inst 1
WARNING: Disk 7 (RECO_DM01_CD_07_DM01CEL01) in group 3 mode 0x7f is now being offlined
WARNING: Disk 7 (RECO_DM01_CD_07_DM01CEL01) in group 3 in mode 0x7f is now being taken offline on ASM inst 1
WARNING: Disk 9 (DATA_DM01_CD_09_DM01CEL01) in group 1 mode 0x7f is now being offlined
WARNING: Disk 9 (DATA_DM01_CD_09_DM01CEL01) in group 1 in mode 0x7f is now being taken offline on ASM inst 1
WARNING: Disk 8 (DBFS_DG_CD_10_DM01CEL01) in group 2 mode 0x7f is now being offlined
WARNING: Disk 8 (DBFS_DG_CD_10_DM01CEL01) in group 2 in mode 0x7f is now being taken offline on ASM inst 1
WARNING: Disk 8 (RECO_DM01_CD_08_DM01CEL01) in group 3 mode 0x7f is now being offlined
WARNING: Disk 8 (RECO_DM01_CD_08_DM01CEL01) in group 3 in mode 0x7f is now being taken offline on ASM inst 1
WARNING: Disk 10 (DATA_DM01_CD_10_DM01CEL01) in group 1 mode 0x7f is now being offlined
WARNING: Disk 10 (DATA_DM01_CD_10_DM01CEL01) in group 1 in mode 0x7f is now being taken offline on ASM inst 1
WARNING: Disk 9 (DBFS_DG_CD_11_DM01CEL01) in group 2 mode 0x7f is now being offlined
WARNING: Disk 9 (DBFS_DG_CD_11_DM01CEL01) in group 2 in mode 0x7f is now being taken offline on ASM inst 1
NOTE: initiating PST update: grp = 2, dsk = 0/0xeae31f51, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 2, dsk = 1/0xeae31f4a, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 2, dsk = 2/0xeae31f4c, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 2, dsk = 3/0xeae31f4f, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 2, dsk = 4/0xeae31f49, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 2, dsk = 5/0xeae31f4e, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 2, dsk = 6/0xeae31f50, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 2, dsk = 7/0xeae31f48, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 2, dsk = 8/0xeae31f4b, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 2, dsk = 9/0xeae31f4d, mask = 0x6a, op = clear
WARNING: Disk 9 (RECO_DM01_CD_09_DM01CEL01) in group 3 mode 0x7f is now being offlined
WARNING: Disk 9 (RECO_DM01_CD_09_DM01CEL01) in group 3 in mode 0x7f is now being taken offline on ASM inst 1
WARNING: Disk 11 (DATA_DM01_CD_11_DM01CEL01) in group 1 mode 0x7f is now being offlined
WARNING: Disk 11 (DATA_DM01_CD_11_DM01CEL01) in group 1 in mode 0x7f is now being taken offline on ASM inst 1
NOTE: initiating PST update: grp = 1, dsk = 0/0xeae31f2e, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 1, dsk = 1/0xeae31f29, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 1, dsk = 2/0xeae31f2f, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 1, dsk = 3/0xeae31f28, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 1, dsk = 4/0xeae31f33, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 1, dsk = 5/0xeae31f2d, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 1, dsk = 6/0xeae31f2c, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 1, dsk = 7/0xeae31f30, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 1, dsk = 8/0xeae31f31, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 1, dsk = 9/0xeae31f2b, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 1, dsk = 10/0xeae31f2a, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 1, dsk = 11/0xeae31f32, mask = 0x6a, op = clear
WARNING: Disk 10 (RECO_DM01_CD_10_DM01CEL01) in group 3 mode 0x7f is now being offlined
WARNING: Disk 10 (RECO_DM01_CD_10_DM01CEL01) in group 3 in mode 0x7f is now being taken offline on ASM inst 1
WARNING: Disk 11 (RECO_DM01_CD_11_DM01CEL01) in group 3 mode 0x7f is now being offlined
WARNING: Disk 11 (RECO_DM01_CD_11_DM01CEL01) in group 3 in mode 0x7f is now being taken offline on ASM inst 1
NOTE: initiating PST update: grp = 3, dsk = 0/0xeae31f71, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 3, dsk = 1/0xeae31f6b, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 3, dsk = 2/0xeae31f6c, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 3, dsk = 3/0xeae31f74, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 3, dsk = 4/0xeae31f6d, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 3, dsk = 5/0xeae31f73, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 3, dsk = 6/0xeae31f70, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 3, dsk = 7/0xeae31f72, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 3, dsk = 8/0xeae31f6e, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 3, dsk = 9/0xeae31f6a, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 3, dsk = 10/0xeae31f6f, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 3, dsk = 11/0xeae31f75, mask = 0x6a, op = clear
GMON updating disk modes for group 1 at 13 for pid 30, osid 21302
NOTE: group DATA_DM01: updated PST location: disk 0012 (PST copy 0)
NOTE: group DATA_DM01: updated PST location: disk 0024 (PST copy 1)
NOTE: group DATA_DM01: updated PST location: disk 0012 (PST copy 0)
NOTE: group DATA_DM01: updated PST location: disk 0024 (PST copy 1)
GMON updating disk modes for group 2 at 14 for pid 35, osid 21304
NOTE: group DBFS_DG: updated PST location: disk 0010 (PST copy 0)
NOTE: group DBFS_DG: updated PST location: disk 0020 (PST copy 1)
NOTE: group DBFS_DG: updated PST location: disk 0010 (PST copy 0)
NOTE: group DBFS_DG: updated PST location: disk 0020 (PST copy 1)
GMON updating disk modes for group 3 at 15 for pid 36, osid 21306
NOTE: PST update grp = 1 completed successfully 
NOTE: initiating PST update: grp = 1, dsk = 0/0xeae31f2e, mask = 0x7e, op = clear
NOTE: initiating PST update: grp = 1, dsk = 1/0xeae31f29, mask = 0x7e, op = clear
NOTE: initiating PST update: grp = 1, dsk = 2/0xeae31f2f, mask = 0x7e, op = clear
NOTE: initiating PST update: grp = 1, dsk = 3/0xeae31f28, mask = 0x7e, op = clear
NOTE: initiating PST update: grp = 1, dsk = 4/0xeae31f33, mask = 0x7e, op = clear
NOTE: initiating PST update: grp = 1, dsk = 5/0xeae31f2d, mask = 0x7e, op = clear
NOTE: initiating PST update: grp = 1, dsk = 6/0xeae31f2c, mask = 0x7e, op = clear
NOTE: initiating PST update: grp = 1, dsk = 7/0xeae31f30, mask = 0x7e, op = clear
NOTE: initiating PST update: grp = 1, dsk = 8/0xeae31f31, mask = 0x7e, op = clear
NOTE: initiating PST update: grp = 1, dsk = 9/0xeae31f2b, mask = 0x7e, op = clear
NOTE: initiating PST update: grp = 1, dsk = 10/0xeae31f2a, mask = 0x7e, op = clear
NOTE: initiating PST update: grp = 1, dsk = 11/0xeae31f32, mask = 0x7e, op = clear
NOTE: group RECO_DM01: updated PST location: disk 0012 (PST copy 0)
NOTE: group RECO_DM01: updated PST location: disk 0024 (PST copy 1)
NOTE: group RECO_DM01: updated PST location: disk 0012 (PST copy 0)
NOTE: group RECO_DM01: updated PST location: disk 0024 (PST copy 1)
NOTE: PST update grp = 2 completed successfully 
NOTE: initiating PST update: grp = 2, dsk = 0/0xeae31f51, mask = 0x7e, op = clear
NOTE: initiating PST update: grp = 2, dsk = 1/0xeae31f4a, mask = 0x7e, op = clear
NOTE: initiating PST update: grp = 2, dsk = 2/0xeae31f4c, mask = 0x7e, op = clear
NOTE: initiating PST update: grp = 2, dsk = 3/0xeae31f4f, mask = 0x7e, op = clear
NOTE: initiating PST update: grp = 2, dsk = 4/0xeae31f49, mask = 0x7e, op = clear
NOTE: initiating PST update: grp = 2, dsk = 5/0xeae31f4e, mask = 0x7e, op = clear
NOTE: initiating PST update: grp = 2, dsk = 6/0xeae31f50, mask = 0x7e, op = clear
NOTE: initiating PST update: grp = 2, dsk = 7/0xeae31f48, mask = 0x7e, op = clear
NOTE: initiating PST update: grp = 2, dsk = 8/0xeae31f4b, mask = 0x7e, op = clear
NOTE: initiating PST update: grp = 2, dsk = 9/0xeae31f4d, mask = 0x7e, op = clear
NOTE: PST update grp = 3 completed successfully 
NOTE: initiating PST update: grp = 3, dsk = 0/0xeae31f71, mask = 0x7e, op = clear
NOTE: initiating PST update: grp = 3, dsk = 1/0xeae31f6b, mask = 0x7e, op = clear
NOTE: initiating PST update: grp = 3, dsk = 2/0xeae31f6c, mask = 0x7e, op = clear
NOTE: initiating PST update: grp = 3, dsk = 3/0xeae31f74, mask = 0x7e, op = clear
NOTE: initiating PST update: grp = 3, dsk = 4/0xeae31f6d, mask = 0x7e, op = clear
GMON updating disk modes for group 1 at 16 for pid 30, osid 21302
NOTE: initiating PST update: grp = 3, dsk = 5/0xeae31f73, mask = 0x7e, op = clear
NOTE: initiating PST update: grp = 3, dsk = 6/0xeae31f70, mask = 0x7e, op = clear
NOTE: initiating PST update: grp = 3, dsk = 7/0xeae31f72, mask = 0x7e, op = clear
NOTE: initiating PST update: grp = 3, dsk = 8/0xeae31f6e, mask = 0x7e, op = clear
NOTE: initiating PST update: grp = 3, dsk = 9/0xeae31f6a, mask = 0x7e, op = clear
NOTE: initiating PST update: grp = 3, dsk = 10/0xeae31f6f, mask = 0x7e, op = clear
NOTE: initiating PST update: grp = 3, dsk = 11/0xeae31f75, mask = 0x7e, op = clear
NOTE: group DATA_DM01: updated PST location: disk 0012 (PST copy 0)
NOTE: group DATA_DM01: updated PST location: disk 0024 (PST copy 1)
NOTE: group DATA_DM01: updated PST location: disk 0012 (PST copy 0)
NOTE: group DATA_DM01: updated PST location: disk 0024 (PST copy 1)
NOTE: cache closing disk 0 of grp 1: DATA_DM01_CD_00_DM01CEL01
NOTE: cache closing disk 1 of grp 1: DATA_DM01_CD_01_DM01CEL01
NOTE: cache closing disk 2 of grp 1: DATA_DM01_CD_02_DM01CEL01
NOTE: cache closing disk 3 of grp 1: DATA_DM01_CD_03_DM01CEL01
NOTE: cache closing disk 4 of grp 1: DATA_DM01_CD_04_DM01CEL01
NOTE: cache closing disk 5 of grp 1: DATA_DM01_CD_05_DM01CEL01
NOTE: cache closing disk 6 of grp 1: DATA_DM01_CD_06_DM01CEL01
NOTE: cache closing disk 7 of grp 1: DATA_DM01_CD_07_DM01CEL01
NOTE: cache closing disk 8 of grp 1: DATA_DM01_CD_08_DM01CEL01
NOTE: cache closing disk 9 of grp 1: DATA_DM01_CD_09_DM01CEL01
NOTE: cache closing disk 10 of grp 1: DATA_DM01_CD_10_DM01CEL01
NOTE: cache closing disk 11 of grp 1: DATA_DM01_CD_11_DM01CEL01
GMON updating disk modes for group 2 at 17 for pid 35, osid 21304
NOTE: group DBFS_DG: updated PST location: disk 0010 (PST copy 0)
NOTE: group DBFS_DG: updated PST location: disk 0020 (PST copy 1)
NOTE: group DBFS_DG: updated PST location: disk 0010 (PST copy 0)
NOTE: group DBFS_DG: updated PST location: disk 0020 (PST copy 1)
NOTE: cache closing disk 0 of grp 2: DBFS_DG_CD_02_DM01CEL01
NOTE: cache closing disk 1 of grp 2: DBFS_DG_CD_03_DM01CEL01
NOTE: cache closing disk 2 of grp 2: DBFS_DG_CD_04_DM01CEL01
NOTE: cache closing disk 3 of grp 2: DBFS_DG_CD_05_DM01CEL01
NOTE: cache closing disk 4 of grp 2: DBFS_DG_CD_06_DM01CEL01
NOTE: cache closing disk 5 of grp 2: DBFS_DG_CD_07_DM01CEL01
NOTE: cache closing disk 6 of grp 2: DBFS_DG_CD_08_DM01CEL01
NOTE: cache closing disk 7 of grp 2: DBFS_DG_CD_09_DM01CEL01
NOTE: cache closing disk 8 of grp 2: DBFS_DG_CD_10_DM01CEL01
NOTE: cache closing disk 9 of grp 2: DBFS_DG_CD_11_DM01CEL01
NOTE: PST update grp = 1 completed successfully 
GMON updating disk modes for group 3 at 18 for pid 36, osid 21306
NOTE: group RECO_DM01: updated PST location: disk 0012 (PST copy 0)
NOTE: group RECO_DM01: updated PST location: disk 0024 (PST copy 1)
NOTE: group RECO_DM01: updated PST location: disk 0012 (PST copy 0)
NOTE: group RECO_DM01: updated PST location: disk 0024 (PST copy 1)
NOTE: cache closing disk 0 of grp 3: RECO_DM01_CD_00_DM01CEL01
NOTE: cache closing disk 1 of grp 3: RECO_DM01_CD_01_DM01CEL01
NOTE: cache closing disk 2 of grp 3: RECO_DM01_CD_02_DM01CEL01
NOTE: cache closing disk 3 of grp 3: RECO_DM01_CD_03_DM01CEL01
NOTE: cache closing disk 4 of grp 3: RECO_DM01_CD_04_DM01CEL01
NOTE: cache closing disk 5 of grp 3: RECO_DM01_CD_05_DM01CEL01
NOTE: cache closing disk 6 of grp 3: RECO_DM01_CD_06_DM01CEL01
NOTE: cache closing disk 7 of grp 3: RECO_DM01_CD_07_DM01CEL01
NOTE: cache closing disk 8 of grp 3: RECO_DM01_CD_08_DM01CEL01
NOTE: cache closing disk 9 of grp 3: RECO_DM01_CD_09_DM01CEL01
NOTE: cache closing disk 10 of grp 3: RECO_DM01_CD_10_DM01CEL01
NOTE: cache closing disk 11 of grp 3: RECO_DM01_CD_11_DM01CEL01
NOTE: PST update grp = 2 completed successfully 
NOTE: PST update grp = 3 completed successfully 
Sun Sep 02 09:08:49 2012
NOTE: Attempting voting file refresh on diskgroup DBFS_DG
NOTE: Voting file relocation is required in diskgroup DBFS_DG
Sun Sep 02 09:08:51 2012
NOTE: successfully read ACD block gn=3 blk=11264 via retry read
Errors in file /u01/app/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_lgwr_30748.trc:
ORA-15062: ASM disk is globally closed
NOTE: Attempting voting file refresh on diskgroup DBFS_DG
NOTE: Voting file relocation is required in diskgroup DBFS_DG
NOTE: Attempting voting file relocation on diskgroup DBFS_DG
NOTE: successfully read ACD block gn=3 blk=11264 via retry read
Errors in file /u01/app/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_lgwr_30748.trc:
ORA-15062: ASM disk is globally closed
Sun Sep 02 09:10:06 2012
NOTE: successfully read ACD block gn=3 blk=11264 via retry read
Errors in file /u01/app/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_lgwr_30748.trc:
ORA-15062: ASM disk is globally closed
NOTE: successfully read ACD block gn=3 blk=11264 via retry read
Errors in file /u01/app/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_lgwr_30748.trc:
ORA-15062: ASM disk is globally closed
Sun Sep 02 09:10:39 2012
NOTE: successfully read ACD block gn=1 blk=11264 via retry read
Errors in file /u01/app/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_lgwr_30748.trc:
ORA-15062: ASM disk is globally closed
NOTE: successfully read ACD block gn=1 blk=11264 via retry read
Errors in file /u01/app/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_lgwr_30748.trc:
ORA-15062: ASM disk is globally closed
NOTE: successfully read ACD block gn=1 blk=11264 via retry read
Errors in file /u01/app/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_lgwr_30748.trc:
ORA-15062: ASM disk is globally closed

*** 2012-09-02 09:08:48.881
Box name 0 - 192.168.64.132
OSS OS Pid - 12253
Reconnect: Attempts: 1 Last TS: 5693385570
Dumping SKGXP connection state: Band 0: port ID - 0x1267e4a8, connection - 0x126d98d0
Dumping SKGXP connection state: Band 1: port ID - 0x12714de8, connection - 0x126aab90
Dumping SKGXP connection state: Band 2: port ID - 0x126a20d8, connection - 0x126865a0
Dumping SKGXP connection state: Band 3: port ID - 0x126662b8, connection - 0x1269c680
Dumping SKGXP connection state: Band 4: port ID - 0x1271b048, connection - 0x126b2130
Dumping SKGXP connection state: Band 5: port ID - 0x12715188, connection - 0x126d2330
Dumping SKGXP connection state: Band 6: port ID - 0x126a26d8, connection - 0x12691610
Dumping SKGXP connection state: Band 7: port ID - 0x126a2728, connection - 0x126a35f0
Dumping SKGXP connection state: Band 8: port ID - 0x126a2778, connection - 0x126b96d0
Reconnecting to box 0x126c70a0 ...
Storage box 0x126c70a0 Inc: 1 with the source id 3379307313
Box name 0 - 192.168.64.133
OSS OS Pid - 12308
Reconnect: Attempts: 1 Last TS: 5693385570
Dumping SKGXP connection state: Band 0: port ID - 0x126a2128, connection - 0x126a70c0
Dumping SKGXP connection state: Band 1: port ID - 0x126a2598, connection - 0x126950e0
Dumping SKGXP connection state: Band 2: port ID - 0x12666308, connection - 0x126e0e70
Dumping SKGXP connection state: Band 3: port ID - 0x1271b0a8, connection - 0x126d5e00
Dumping SKGXP connection state: Band 4: port ID - 0x126666c8, connection - 0x1270ed60
Dumping SKGXP connection state: Band 5: port ID - 0x12714ed8, connection - 0x1268a070
Dumping SKGXP connection state: Band 6: port ID - 0x1271afe8, connection - 0x12682ad0
Dumping SKGXP connection state: Band 7: port ID - 0x126661c8, connection - 0x126c0c70
Dumping SKGXP connection state: Band 8: port ID - 0x12666218, connection - 0x126c72c0
Reissuing requests for the box 0x126c70a0
Reissuing requests for the box 0x126a33d0

*** 2012-09-02 09:08:51.185
NOTE: successfully read ACD block gn=3 blk=11264 via retry read
ORA-15062: ASM disk is globally closed

RDBMS ALERT.LOG

Sun Sep 02 05:59:59 2012
Setting Resource Manager plan DEFAULT_MAINTENANCE_PLAN via parameter
Sun Sep 02 05:59:59 2012
Starting background process VKRM
Sun Sep 02 05:59:59 2012
VKRM started with pid=83, OS id=5062 
Sun Sep 02 09:08:48 2012
NOTE: disk 0 (DATA_DM01_CD_00_DM01CEL01) in group 1 (DATA_DM01) is offline for reads
NOTE: disk 1 (DATA_DM01_CD_01_DM01CEL01) in group 1 (DATA_DM01) is offline for reads
NOTE: disk 2 (DATA_DM01_CD_02_DM01CEL01) in group 1 (DATA_DM01) is offline for reads
NOTE: disk 3 (DATA_DM01_CD_03_DM01CEL01) in group 1 (DATA_DM01) is offline for reads
NOTE: disk 4 (DATA_DM01_CD_04_DM01CEL01) in group 1 (DATA_DM01) is offline for reads
NOTE: disk 5 (DATA_DM01_CD_05_DM01CEL01) in group 1 (DATA_DM01) is offline for reads
NOTE: disk 6 (DATA_DM01_CD_06_DM01CEL01) in group 1 (DATA_DM01) is offline for reads
NOTE: disk 7 (DATA_DM01_CD_07_DM01CEL01) in group 1 (DATA_DM01) is offline for reads
NOTE: disk 8 (DATA_DM01_CD_08_DM01CEL01) in group 1 (DATA_DM01) is offline for reads
NOTE: disk 9 (DATA_DM01_CD_09_DM01CEL01) in group 1 (DATA_DM01) is offline for reads
NOTE: disk 10 (DATA_DM01_CD_10_DM01CEL01) in group 1 (DATA_DM01) is offline for reads
NOTE: disk 11 (DATA_DM01_CD_11_DM01CEL01) in group 1 (DATA_DM01) is offline for reads
NOTE: disk 0 (RECO_DM01_CD_00_DM01CEL01) in group 3 (RECO_DM01) is offline for reads
NOTE: disk 1 (RECO_DM01_CD_01_DM01CEL01) in group 3 (RECO_DM01) is offline for reads
NOTE: disk 2 (RECO_DM01_CD_02_DM01CEL01) in group 3 (RECO_DM01) is offline for reads
NOTE: disk 3 (RECO_DM01_CD_03_DM01CEL01) in group 3 (RECO_DM01) is offline for reads
NOTE: disk 4 (RECO_DM01_CD_04_DM01CEL01) in group 3 (RECO_DM01) is offline for reads
NOTE: disk 5 (RECO_DM01_CD_05_DM01CEL01) in group 3 (RECO_DM01) is offline for reads
NOTE: disk 6 (RECO_DM01_CD_06_DM01CEL01) in group 3 (RECO_DM01) is offline for reads
NOTE: disk 7 (RECO_DM01_CD_07_DM01CEL01) in group 3 (RECO_DM01) is offline for reads
NOTE: disk 8 (RECO_DM01_CD_08_DM01CEL01) in group 3 (RECO_DM01) is offline for reads
NOTE: disk 9 (RECO_DM01_CD_09_DM01CEL01) in group 3 (RECO_DM01) is offline for reads
NOTE: disk 10 (RECO_DM01_CD_10_DM01CEL01) in group 3 (RECO_DM01) is offline for reads
NOTE: disk 11 (RECO_DM01_CD_11_DM01CEL01) in group 3 (RECO_DM01) is offline for reads
NOTE: disk 0 (DATA_DM01_CD_00_DM01CEL01) in group 1 (DATA_DM01) is offline for writes
NOTE: disk 1 (DATA_DM01_CD_01_DM01CEL01) in group 1 (DATA_DM01) is offline for writes
NOTE: disk 2 (DATA_DM01_CD_02_DM01CEL01) in group 1 (DATA_DM01) is offline for writes
NOTE: disk 3 (DATA_DM01_CD_03_DM01CEL01) in group 1 (DATA_DM01) is offline for writes
NOTE: disk 4 (DATA_DM01_CD_04_DM01CEL01) in group 1 (DATA_DM01) is offline for writes
NOTE: disk 5 (DATA_DM01_CD_05_DM01CEL01) in group 1 (DATA_DM01) is offline for writes
NOTE: disk 6 (DATA_DM01_CD_06_DM01CEL01) in group 1 (DATA_DM01) is offline for writes
NOTE: disk 7 (DATA_DM01_CD_07_DM01CEL01) in group 1 (DATA_DM01) is offline for writes
NOTE: disk 8 (DATA_DM01_CD_08_DM01CEL01) in group 1 (DATA_DM01) is offline for writes
NOTE: disk 9 (DATA_DM01_CD_09_DM01CEL01) in group 1 (DATA_DM01) is offline for writes
NOTE: disk 10 (DATA_DM01_CD_10_DM01CEL01) in group 1 (DATA_DM01) is offline for writes
NOTE: disk 11 (DATA_DM01_CD_11_DM01CEL01) in group 1 (DATA_DM01) is offline for writes
NOTE: disk 0 (RECO_DM01_CD_00_DM01CEL01) in group 3 (RECO_DM01) is offline for writes
NOTE: disk 1 (RECO_DM01_CD_01_DM01CEL01) in group 3 (RECO_DM01) is offline for writes
NOTE: disk 2 (RECO_DM01_CD_02_DM01CEL01) in group 3 (RECO_DM01) is offline for writes
NOTE: disk 3 (RECO_DM01_CD_03_DM01CEL01) in group 3 (RECO_DM01) is offline for writes
NOTE: disk 4 (RECO_DM01_CD_04_DM01CEL01) in group 3 (RECO_DM01) is offline for writes
NOTE: disk 5 (RECO_DM01_CD_05_DM01CEL01) in group 3 (RECO_DM01) is offline for writes
NOTE: disk 6 (RECO_DM01_CD_06_DM01CEL01) in group 3 (RECO_DM01) is offline for writes
NOTE: disk 7 (RECO_DM01_CD_07_DM01CEL01) in group 3 (RECO_DM01) is offline for writes
NOTE: disk 8 (RECO_DM01_CD_08_DM01CEL01) in group 3 (RECO_DM01) is offline for writes
NOTE: disk 9 (RECO_DM01_CD_09_DM01CEL01) in group 3 (RECO_DM01) is offline for writes
NOTE: disk 10 (RECO_DM01_CD_10_DM01CEL01) in group 3 (RECO_DM01) is offline for writes
NOTE: disk 11 (RECO_DM01_CD_11_DM01CEL01) in group 3 (RECO_DM01) is offline for writes
Sun Sep 02 09:08:48 2012
Errors in file /u01/app/oracle/diag/rdbms/dbm/dbm1/trace/dbm1_ckpt_31547.trc:
ORA-27603: Cell storage I/O error, I/O failed on disk o/192.168.64.131/RECO_DM01_CD_03_dm01cel01 at offset 16826368 for data length 16384
ORA-27626: Exadata error: 12 (Network error)
ORA-27300: OS system dependent operation:rpc update timed out failed with status: 0
ORA-27301: OS failure message: Error 0
ORA-27302: failure occurred at: skgxp_path
WARNING: Write Failed. group:3 disk:3 AU:4 offset:49152 size:16384
*** 2012-09-02 09:08:48.865
ORA-27626: Exadata error: 12 (Network error)
ORA-27300: OS system dependent operation:rpc update timed out failed with status: 0
ORA-27301: OS failure message: Error 0
ORA-27302: failure occurred at: skgxp_path
WARNING: Write Failed. group:3 disk:3 AU:4 offset:49152 size:16384
path:o/192.168.64.131/RECO_DM01_CD_03_dm01cel01
         incarnation:0xeae31f74 asynchronous result:'I/O error'
         subsys:OSS iop:0x2adfeda7d130 bufp:0x2adfed997e00 osderr:0xc osderr1:0x0
         Exadata error:'Network error'
WARNING: (post-reap) disk offline and rejecting I/O
  dsk: 3, au: 4, fn: 256, ext: 0


SQL> select status from v$datafile; 

STATUS
-------
SYSTEM
ONLINE
ONLINE
ONLINE
ONLINE
ONLINE

6 rows selected.

With an entire cell lost, all of the database's data files remain ONLINE and fully operable.
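The same conclusion can be cross-checked from the ASM side while the cell is down. The following is a minimal sketch (not output captured during this test) using the standard V$ASM_DISK view; REPAIR_TIMER shows the seconds remaining before ASM permanently drops an offlined disk:

```sql
-- Sketch: list any ASM disks that are not fully online during the cell outage.
-- MODE_STATUS and REPAIR_TIMER are standard V$ASM_DISK columns.
SELECT group_number, name, mode_status, repair_timer
FROM   v$asm_disk
WHERE  mode_status <> 'ONLINE'
ORDER  BY group_number, name;
```

With ASM redundancy (NORMAL or HIGH), the disks from the failed cell show as OFFLINE here while the surviving mirror copies continue to serve I/O, which is why the data files above stay ONLINE.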

Restart the cell and observe the recovery:

CellCLI> alter cell startup services all

Starting the RS, CELLSRV, and MS services...
Getting the state of RS services... 
 running
Starting CELLSRV services...
The STARTUP of CELLSRV services was successful.
Starting MS services...
The STARTUP of MS services was successful.
CellCLI> list griddisk
         DATA_DM01_CD_00_dm01cel01       active
         DATA_DM01_CD_01_dm01cel01       active
         DATA_DM01_CD_02_dm01cel01       active
         DATA_DM01_CD_03_dm01cel01       active
         DATA_DM01_CD_04_dm01cel01       active
         DATA_DM01_CD_05_dm01cel01       active
         DATA_DM01_CD_06_dm01cel01       active
         DATA_DM01_CD_07_dm01cel01       active
         DATA_DM01_CD_08_dm01cel01       active
         DATA_DM01_CD_09_dm01cel01       active
         DATA_DM01_CD_10_dm01cel01       active
         DATA_DM01_CD_11_dm01cel01       active
         DBFS_DG_CD_02_dm01cel01         active
         DBFS_DG_CD_03_dm01cel01         active
         DBFS_DG_CD_04_dm01cel01         active
         DBFS_DG_CD_05_dm01cel01         active
         DBFS_DG_CD_06_dm01cel01         active
         DBFS_DG_CD_07_dm01cel01         active
         DBFS_DG_CD_08_dm01cel01         active
         DBFS_DG_CD_09_dm01cel01         active
         DBFS_DG_CD_10_dm01cel01         active
         DBFS_DG_CD_11_dm01cel01         active
         RECO_DM01_CD_00_dm01cel01       active
         RECO_DM01_CD_01_dm01cel01       active
         RECO_DM01_CD_02_dm01cel01       active
         RECO_DM01_CD_03_dm01cel01       active
         RECO_DM01_CD_04_dm01cel01       active
         RECO_DM01_CD_05_dm01cel01       active
         RECO_DM01_CD_06_dm01cel01       active
         RECO_DM01_CD_07_dm01cel01       active
         RECO_DM01_CD_08_dm01cel01       active
         RECO_DM01_CD_09_dm01cel01       active
         RECO_DM01_CD_10_dm01cel01       active
         RECO_DM01_CD_11_dm01cel01       active
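Beyond LIST GRIDDISK, CellCLI can also report each grid disk's status as seen from ASM. A sketch using the standard grid disk attributes (output not captured during this test); ASMMODESTATUS should return to ONLINE once ASM finishes re-onlining the disks:

```
CellCLI> LIST GRIDDISK ATTRIBUTES name, asmmodestatus, asmdeactivationoutcome
```

These are the same attributes Oracle recommends checking before and after rolling cell maintenance, so they are a convenient way to confirm the recovery has fully completed on the ASM side.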



ASM ALERT.LOG

Sun Sep 02 09:15:18 2012
WARNING: Disk 0 (DATA_DM01_CD_00_DM01CEL01) in group 1 mode 0x1 is now being offlined
WARNING: Disk 1 (DATA_DM01_CD_01_DM01CEL01) in group 1 mode 0x1 is now being offlined
WARNING: Disk 2 (DATA_DM01_CD_02_DM01CEL01) in group 1 mode 0x1 is now being offlined
WARNING: Disk 3 (DATA_DM01_CD_03_DM01CEL01) in group 1 mode 0x1 is now being offlined
WARNING: Disk 4 (DATA_DM01_CD_04_DM01CEL01) in group 1 mode 0x1 is now being offlined
WARNING: Disk 5 (DATA_DM01_CD_05_DM01CEL01) in group 1 mode 0x1 is now being offlined
WARNING: Disk 6 (DATA_DM01_CD_06_DM01CEL01) in group 1 mode 0x1 is now being offlined
WARNING: Disk 7 (DATA_DM01_CD_07_DM01CEL01) in group 1 mode 0x1 is now being offlined
WARNING: Disk 8 (DATA_DM01_CD_08_DM01CEL01) in group 1 mode 0x1 is now being offlined
WARNING: Disk 9 (DATA_DM01_CD_09_DM01CEL01) in group 1 mode 0x1 is now being offlined
WARNING: Disk 10 (DATA_DM01_CD_10_DM01CEL01) in group 1 mode 0x1 is now being offlined
WARNING: Disk 11 (DATA_DM01_CD_11_DM01CEL01) in group 1 mode 0x1 is now being offlined
Sun Sep 02 09:15:19 2012
Starting background process XDWK
Sun Sep 02 09:15:19 2012
XDWK started with pid=35, OS id=26059 
Sun Sep 02 09:15:19 2012
NOTE: disk validation pending for group 1/0x34e3ee5a (DATA_DM01)
NOTE: Found o/192.168.64.131/DATA_DM01_CD_03_dm01cel01 for disk DATA_DM01_CD_03_DM01CEL01
WARNING: ignoring disk  in deep discovery
NOTE: Found o/192.168.64.131/DATA_DM01_CD_01_dm01cel01 for disk DATA_DM01_CD_01_DM01CEL01
WARNING: ignoring disk  in deep discovery
NOTE: Found o/192.168.64.131/DATA_DM01_CD_10_dm01cel01 for disk DATA_DM01_CD_10_DM01CEL01
WARNING: ignoring disk  in deep discovery
NOTE: Found o/192.168.64.131/DATA_DM01_CD_09_dm01cel01 for disk DATA_DM01_CD_09_DM01CEL01
WARNING: ignoring disk  in deep discovery
NOTE: Found o/192.168.64.131/DATA_DM01_CD_06_dm01cel01 for disk DATA_DM01_CD_06_DM01CEL01
WARNING: ignoring disk  in deep discovery
NOTE: Found o/192.168.64.131/DATA_DM01_CD_05_dm01cel01 for disk DATA_DM01_CD_05_DM01CEL01
WARNING: ignoring disk  in deep discovery
NOTE: Found o/192.168.64.131/DATA_DM01_CD_00_dm01cel01 for disk DATA_DM01_CD_00_DM01CEL01
WARNING: ignoring disk  in deep discovery
NOTE: Found o/192.168.64.131/DATA_DM01_CD_02_dm01cel01 for disk DATA_DM01_CD_02_DM01CEL01
WARNING: ignoring disk  in deep discovery
NOTE: Found o/192.168.64.131/DATA_DM01_CD_07_dm01cel01 for disk DATA_DM01_CD_07_DM01CEL01
WARNING: ignoring disk  in deep discovery
NOTE: Found o/192.168.64.131/DATA_DM01_CD_08_dm01cel01 for disk DATA_DM01_CD_08_DM01CEL01
WARNING: ignoring disk  in deep discovery
NOTE: Found o/192.168.64.131/DATA_DM01_CD_11_dm01cel01 for disk DATA_DM01_CD_11_DM01CEL01
WARNING: ignoring disk  in deep discovery
NOTE: Found o/192.168.64.131/DATA_DM01_CD_04_dm01cel01 for disk DATA_DM01_CD_04_DM01CEL01
WARNING: ignoring disk  in deep discovery
SUCCESS: validated disks for 1/0x34e3ee5a (DATA_DM01)
NOTE: membership refresh pending for group 1/0x34e3ee5a (DATA_DM01)
Sun Sep 02 09:15:24 2012
NOTE: successfully read ACD block gn=1 blk=11264 via retry read
Errors in file /u01/app/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_lgwr_30748.trc:
ORA-15062: ASM disk is globally closed
GMON querying group 1 at 19 for pid 19, osid 30754
NOTE: cache opening disk 0 of grp 1: DATA_DM01_CD_00_DM01CEL01 path:o/192.168.64.131/DATA_DM01_CD_00_dm01cel01
NOTE: cache opening disk 1 of grp 1: DATA_DM01_CD_01_DM01CEL01 path:o/192.168.64.131/DATA_DM01_CD_01_dm01cel01
NOTE: cache opening disk 2 of grp 1: DATA_DM01_CD_02_DM01CEL01 path:o/192.168.64.131/DATA_DM01_CD_02_dm01cel01
NOTE: cache opening disk 3 of grp 1: DATA_DM01_CD_03_DM01CEL01 path:o/192.168.64.131/DATA_DM01_CD_03_dm01cel01
NOTE: cache opening disk 4 of grp 1: DATA_DM01_CD_04_DM01CEL01 path:o/192.168.64.131/DATA_DM01_CD_04_dm01cel01
NOTE: cache opening disk 5 of grp 1: DATA_DM01_CD_05_DM01CEL01 path:o/192.168.64.131/DATA_DM01_CD_05_dm01cel01
NOTE: cache opening disk 6 of grp 1: DATA_DM01_CD_06_DM01CEL01 path:o/192.168.64.131/DATA_DM01_CD_06_dm01cel01
NOTE: cache opening disk 7 of grp 1: DATA_DM01_CD_07_DM01CEL01 path:o/192.168.64.131/DATA_DM01_CD_07_dm01cel01
NOTE: cache opening disk 8 of grp 1: DATA_DM01_CD_08_DM01CEL01 path:o/192.168.64.131/DATA_DM01_CD_08_dm01cel01
NOTE: cache opening disk 9 of grp 1: DATA_DM01_CD_09_DM01CEL01 path:o/192.168.64.131/DATA_DM01_CD_09_dm01cel01
NOTE: cache opening disk 10 of grp 1: DATA_DM01_CD_10_DM01CEL01 path:o/192.168.64.131/DATA_DM01_CD_10_dm01cel01
NOTE: cache opening disk 11 of grp 1: DATA_DM01_CD_11_DM01CEL01 path:o/192.168.64.131/DATA_DM01_CD_11_dm01cel01
SUCCESS: refreshed membership for 1/0x34e3ee5a (DATA_DM01)
WARNING: Disk 0 (DBFS_DG_CD_02_DM01CEL01) in group 2 mode 0x1 is now being offlined
WARNING: Disk 1 (DBFS_DG_CD_03_DM01CEL01) in group 2 mode 0x1 is now being offlined
WARNING: Disk 2 (DBFS_DG_CD_04_DM01CEL01) in group 2 mode 0x1 is now being offlined
WARNING: Disk 3 (DBFS_DG_CD_05_DM01CEL01) in group 2 mode 0x1 is now being offlined
WARNING: Disk 4 (DBFS_DG_CD_06_DM01CEL01) in group 2 mode 0x1 is now being offlined
WARNING: Disk 5 (DBFS_DG_CD_07_DM01CEL01) in group 2 mode 0x1 is now being offlined
WARNING: Disk 6 (DBFS_DG_CD_08_DM01CEL01) in group 2 mode 0x1 is now being offlined
WARNING: Disk 7 (DBFS_DG_CD_09_DM01CEL01) in group 2 mode 0x1 is now being offlined
WARNING: Disk 8 (DBFS_DG_CD_10_DM01CEL01) in group 2 mode 0x1 is now being offlined
WARNING: Disk 9 (DBFS_DG_CD_11_DM01CEL01) in group 2 mode 0x1 is now being offlined
NOTE: Voting File refresh pending for group 1/0x34e3ee5a (DATA_DM01)
NOTE: Attempting voting file refresh on diskgroup DATA_DM01
NOTE: disk validation pending for group 2/0x34f3ee5b (DBFS_DG)
Sun Sep 02 09:15:31 2012
NOTE: Attempting voting file refresh on diskgroup DBFS_DG
NOTE: Voting file relocation is required in diskgroup DBFS_DG
NOTE: Attempting voting file relocation on diskgroup DBFS_DG
NOTE: Found o/192.168.64.131/DBFS_DG_CD_09_dm01cel01 for disk DBFS_DG_CD_09_DM01CEL01
WARNING: ignoring disk  in deep discovery
NOTE: Found o/192.168.64.131/DBFS_DG_CD_06_dm01cel01 for disk DBFS_DG_CD_06_DM01CEL01
WARNING: ignoring disk  in deep discovery
NOTE: Found o/192.168.64.131/DBFS_DG_CD_03_dm01cel01 for disk DBFS_DG_CD_03_DM01CEL01
WARNING: ignoring disk  in deep discovery
NOTE: Found o/192.168.64.131/DBFS_DG_CD_10_dm01cel01 for disk DBFS_DG_CD_10_DM01CEL01
WARNING: ignoring disk  in deep discovery
NOTE: Found o/192.168.64.131/DBFS_DG_CD_04_dm01cel01 for disk DBFS_DG_CD_04_DM01CEL01
WARNING: ignoring disk  in deep discovery
NOTE: Found o/192.168.64.131/DBFS_DG_CD_11_dm01cel01 for disk DBFS_DG_CD_11_DM01CEL01
WARNING: ignoring disk  in deep discovery
NOTE: Found o/192.168.64.131/DBFS_DG_CD_07_dm01cel01 for disk DBFS_DG_CD_07_DM01CEL01
WARNING: ignoring disk  in deep discovery
NOTE: Found o/192.168.64.131/DBFS_DG_CD_05_dm01cel01 for disk DBFS_DG_CD_05_DM01CEL01
WARNING: ignoring disk  in deep discovery
NOTE: Found o/192.168.64.131/DBFS_DG_CD_08_dm01cel01 for disk DBFS_DG_CD_08_DM01CEL01
WARNING: ignoring disk  in deep discovery
NOTE: Found o/192.168.64.131/DBFS_DG_CD_02_dm01cel01 for disk DBFS_DG_CD_02_DM01CEL01
WARNING: ignoring disk  in deep discovery
SUCCESS: validated disks for 2/0x34f3ee5b (DBFS_DG)
NOTE: membership refresh pending for group 2/0x34f3ee5b (DBFS_DG)
NOTE: successfully read ACD block gn=2 blk=11264 via retry read
Errors in file /u01/app/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_lgwr_30748.trc:
ORA-15062: ASM disk is globally closed
NOTE: Attempting voting file refresh on diskgroup DBFS_DG
NOTE: Voting file relocation is required in diskgroup DBFS_DG
NOTE: Attempting voting file relocation on diskgroup DBFS_DG
Sun Sep 02 09:15:35 2012
GMON querying group 2 at 20 for pid 19, osid 30754
NOTE: cache opening disk 0 of grp 2: DBFS_DG_CD_02_DM01CEL01 path:o/192.168.64.131/DBFS_DG_CD_02_dm01cel01
NOTE: cache opening disk 1 of grp 2: DBFS_DG_CD_03_DM01CEL01 path:o/192.168.64.131/DBFS_DG_CD_03_dm01cel01
NOTE: cache opening disk 2 of grp 2: DBFS_DG_CD_04_DM01CEL01 path:o/192.168.64.131/DBFS_DG_CD_04_dm01cel01
NOTE: cache opening disk 3 of grp 2: DBFS_DG_CD_05_DM01CEL01 path:o/192.168.64.131/DBFS_DG_CD_05_dm01cel01
NOTE: cache opening disk 4 of grp 2: DBFS_DG_CD_06_DM01CEL01 path:o/192.168.64.131/DBFS_DG_CD_06_dm01cel01
NOTE: cache opening disk 5 of grp 2: DBFS_DG_CD_07_DM01CEL01 path:o/192.168.64.131/DBFS_DG_CD_07_dm01cel01
NOTE: cache opening disk 6 of grp 2: DBFS_DG_CD_08_DM01CEL01 path:o/192.168.64.131/DBFS_DG_CD_08_dm01cel01
NOTE: cache opening disk 7 of grp 2: DBFS_DG_CD_09_DM01CEL01 path:o/192.168.64.131/DBFS_DG_CD_09_dm01cel01
NOTE: cache opening disk 8 of grp 2: DBFS_DG_CD_10_DM01CEL01 path:o/192.168.64.131/DBFS_DG_CD_10_dm01cel01
NOTE: cache opening disk 9 of grp 2: DBFS_DG_CD_11_DM01CEL01 path:o/192.168.64.131/DBFS_DG_CD_11_dm01cel01
SUCCESS: refreshed membership for 2/0x34f3ee5b (DBFS_DG)
WARNING: Disk 0 (RECO_DM01_CD_00_DM01CEL01) in group 3 mode 0x1 is now being offlined
WARNING: Disk 1 (RECO_DM01_CD_01_DM01CEL01) in group 3 mode 0x1 is now being offlined
WARNING: Disk 2 (RECO_DM01_CD_02_DM01CEL01) in group 3 mode 0x1 is now being offlined
WARNING: Disk 3 (RECO_DM01_CD_03_DM01CEL01) in group 3 mode 0x1 is now being offlined
WARNING: Disk 4 (RECO_DM01_CD_04_DM01CEL01) in group 3 mode 0x1 is now being offlined
WARNING: Disk 5 (RECO_DM01_CD_05_DM01CEL01) in group 3 mode 0x1 is now being offlined
WARNING: Disk 6 (RECO_DM01_CD_06_DM01CEL01) in group 3 mode 0x1 is now being offlined
WARNING: Disk 7 (RECO_DM01_CD_07_DM01CEL01) in group 3 mode 0x1 is now being offlined
WARNING: Disk 8 (RECO_DM01_CD_08_DM01CEL01) in group 3 mode 0x1 is now being offlined
WARNING: Disk 9 (RECO_DM01_CD_09_DM01CEL01) in group 3 mode 0x1 is now being offlined
WARNING: Disk 10 (RECO_DM01_CD_10_DM01CEL01) in group 3 mode 0x1 is now being offlined
WARNING: Disk 11 (RECO_DM01_CD_11_DM01CEL01) in group 3 mode 0x1 is now being offlined
NOTE: Voting File refresh pending for group 2/0x34f3ee5b (DBFS_DG)
NOTE: Attempting voting file refresh on diskgroup DBFS_DG
NOTE: Voting file relocation is required in diskgroup DBFS_DG
NOTE: Attempting voting file relocation on diskgroup DBFS_DG
NOTE: voting file allocation on grp 2 disk DBFS_DG_CD_02_DM01CEL01
Sun Sep 02 09:16:09 2012
NOTE: disk validation pending for group 3/0x34f3ee5c (RECO_DM01)
NOTE: Found o/192.168.64.131/RECO_DM01_CD_09_dm01cel01 for disk RECO_DM01_CD_09_DM01CEL01
WARNING: ignoring disk  in deep discovery
NOTE: Found o/192.168.64.131/RECO_DM01_CD_01_dm01cel01 for disk RECO_DM01_CD_01_DM01CEL01
WARNING: ignoring disk  in deep discovery
NOTE: Found o/192.168.64.131/RECO_DM01_CD_02_dm01cel01 for disk RECO_DM01_CD_02_DM01CEL01
WARNING: ignoring disk  in deep discovery
NOTE: Found o/192.168.64.131/RECO_DM01_CD_04_dm01cel01 for disk RECO_DM01_CD_04_DM01CEL01
WARNING: ignoring disk  in deep discovery
NOTE: Found o/192.168.64.131/RECO_DM01_CD_08_dm01cel01 for disk RECO_DM01_CD_08_DM01CEL01
WARNING: ignoring disk  in deep discovery
NOTE: Found o/192.168.64.131/RECO_DM01_CD_10_dm01cel01 for disk RECO_DM01_CD_10_DM01CEL01
WARNING: ignoring disk  in deep discovery
NOTE: Found o/192.168.64.131/RECO_DM01_CD_06_dm01cel01 for disk RECO_DM01_CD_06_DM01CEL01
WARNING: ignoring disk  in deep discovery
NOTE: Found o/192.168.64.131/RECO_DM01_CD_00_dm01cel01 for disk RECO_DM01_CD_00_DM01CEL01
WARNING: ignoring disk  in deep discovery
NOTE: Found o/192.168.64.131/RECO_DM01_CD_07_dm01cel01 for disk RECO_DM01_CD_07_DM01CEL01
WARNING: ignoring disk  in deep discovery
NOTE: Found o/192.168.64.131/RECO_DM01_CD_05_dm01cel01 for disk RECO_DM01_CD_05_DM01CEL01
WARNING: ignoring disk  in deep discovery
NOTE: Found o/192.168.64.131/RECO_DM01_CD_03_dm01cel01 for disk RECO_DM01_CD_03_DM01CEL01
WARNING: ignoring disk  in deep discovery
NOTE: Found o/192.168.64.131/RECO_DM01_CD_11_dm01cel01 for disk RECO_DM01_CD_11_DM01CEL01
WARNING: ignoring disk  in deep discovery
SUCCESS: validated disks for 3/0x34f3ee5c (RECO_DM01)
Sun Sep 02 09:16:12 2012
NOTE: group RECO_DM01: updated PST location: disk 0012 (PST copy 0)
NOTE: group RECO_DM01: updated PST location: disk 0024 (PST copy 1)
NOTE: group RECO_DM01: updated PST location: disk 0000 (PST copy 2)
NOTE: membership refresh pending for group 3/0x34f3ee5c (RECO_DM01)
WARNING: GMON found an alien heartbeat (grp 3)
Sun Sep 02 09:16:15 2012
NOTE: successfully read ACD block gn=3 blk=11264 via retry read
Errors in file /u01/app/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_lgwr_30748.trc:
ORA-15062: ASM disk is globally closed
GMON querying group 3 at 21 for pid 19, osid 30754
NOTE: group RECO_DM01: updated PST location: disk 0012 (PST copy 0)
NOTE: group RECO_DM01: updated PST location: disk 0024 (PST copy 1)
NOTE: group RECO_DM01: updated PST location: disk 0000 (PST copy 2)
NOTE: cache opening disk 0 of grp 3: RECO_DM01_CD_00_DM01CEL01 path:o/192.168.64.131/RECO_DM01_CD_00_dm01cel01
NOTE: cache opening disk 1 of grp 3: RECO_DM01_CD_01_DM01CEL01 path:o/192.168.64.131/RECO_DM01_CD_01_dm01cel01
NOTE: cache opening disk 2 of grp 3: RECO_DM01_CD_02_DM01CEL01 path:o/192.168.64.131/RECO_DM01_CD_02_dm01cel01
NOTE: cache opening disk 3 of grp 3: RECO_DM01_CD_03_DM01CEL01 path:o/192.168.64.131/RECO_DM01_CD_03_dm01cel01
NOTE: cache opening disk 4 of grp 3: RECO_DM01_CD_04_DM01CEL01 path:o/192.168.64.131/RECO_DM01_CD_04_dm01cel01
NOTE: cache opening disk 5 of grp 3: RECO_DM01_CD_05_DM01CEL01 path:o/192.168.64.131/RECO_DM01_CD_05_dm01cel01
NOTE: cache opening disk 6 of grp 3: RECO_DM01_CD_06_DM01CEL01 path:o/192.168.64.131/RECO_DM01_CD_06_dm01cel01
NOTE: cache opening disk 7 of grp 3: RECO_DM01_CD_07_DM01CEL01 path:o/192.168.64.131/RECO_DM01_CD_07_dm01cel01
NOTE: cache opening disk 8 of grp 3: RECO_DM01_CD_08_DM01CEL01 path:o/192.168.64.131/RECO_DM01_CD_08_dm01cel01
NOTE: cache opening disk 9 of grp 3: RECO_DM01_CD_09_DM01CEL01 path:o/192.168.64.131/RECO_DM01_CD_09_dm01cel01
NOTE: cache opening disk 10 of grp 3: RECO_DM01_CD_10_DM01CEL01 path:o/192.168.64.131/RECO_DM01_CD_10_dm01cel01
NOTE: cache opening disk 11 of grp 3: RECO_DM01_CD_11_DM01CEL01 path:o/192.168.64.131/RECO_DM01_CD_11_dm01cel01
SUCCESS: refreshed membership for 3/0x34f3ee5c (RECO_DM01)
NOTE: Voting File refresh pending for group 3/0x34f3ee5c (RECO_DM01)
NOTE: Attempting voting file refresh on diskgroup RECO_DM01


RDBMS ALERT.LOG


Sun Sep 02 09:15:23 2012
NOTE: Found o/192.168.64.131/DATA_DM01_CD_00_dm01cel01 for disk DATA_DM01_CD_00_DM01CEL01
SUCCESS: disk DATA_DM01_CD_00_DM01CEL01 (0.3940753198) replaced in diskgroup DATA_DM01
NOTE: Found o/192.168.64.131/DATA_DM01_CD_01_dm01cel01 for disk DATA_DM01_CD_01_DM01CEL01
SUCCESS: disk DATA_DM01_CD_01_DM01CEL01 (1.3940753193) replaced in diskgroup DATA_DM01
NOTE: Found o/192.168.64.131/DATA_DM01_CD_02_dm01cel01 for disk DATA_DM01_CD_02_DM01CEL01
SUCCESS: disk DATA_DM01_CD_02_DM01CEL01 (2.3940753199) replaced in diskgroup DATA_DM01
NOTE: Found o/192.168.64.131/DATA_DM01_CD_03_dm01cel01 for disk DATA_DM01_CD_03_DM01CEL01
SUCCESS: disk DATA_DM01_CD_03_DM01CEL01 (3.3940753192) replaced in diskgroup DATA_DM01
NOTE: Found o/192.168.64.131/DATA_DM01_CD_04_dm01cel01 for disk DATA_DM01_CD_04_DM01CEL01
SUCCESS: disk DATA_DM01_CD_04_DM01CEL01 (4.3940753203) replaced in diskgroup DATA_DM01
NOTE: Found o/192.168.64.131/DATA_DM01_CD_05_dm01cel01 for disk DATA_DM01_CD_05_DM01CEL01
SUCCESS: disk DATA_DM01_CD_05_DM01CEL01 (5.3940753197) replaced in diskgroup DATA_DM01
NOTE: Found o/192.168.64.131/DATA_DM01_CD_06_dm01cel01 for disk DATA_DM01_CD_06_DM01CEL01
SUCCESS: disk DATA_DM01_CD_06_DM01CEL01 (6.3940753196) replaced in diskgroup DATA_DM01
NOTE: Found o/192.168.64.131/DATA_DM01_CD_07_dm01cel01 for disk DATA_DM01_CD_07_DM01CEL01
SUCCESS: disk DATA_DM01_CD_07_DM01CEL01 (7.3940753200) replaced in diskgroup DATA_DM01
NOTE: Found o/192.168.64.131/DATA_DM01_CD_08_dm01cel01 for disk DATA_DM01_CD_08_DM01CEL01
SUCCESS: disk DATA_DM01_CD_08_DM01CEL01 (8.3940753201) replaced in diskgroup DATA_DM01
NOTE: Found o/192.168.64.131/DATA_DM01_CD_09_dm01cel01 for disk DATA_DM01_CD_09_DM01CEL01
SUCCESS: disk DATA_DM01_CD_09_DM01CEL01 (9.3940753195) replaced in diskgroup DATA_DM01
NOTE: Found o/192.168.64.131/DATA_DM01_CD_10_dm01cel01 for disk DATA_DM01_CD_10_DM01CEL01
SUCCESS: disk DATA_DM01_CD_10_DM01CEL01 (10.3940753194) replaced in diskgroup DATA_DM01
NOTE: Found o/192.168.64.131/DATA_DM01_CD_11_dm01cel01 for disk DATA_DM01_CD_11_DM01CEL01
SUCCESS: disk DATA_DM01_CD_11_DM01CEL01 (11.3940753202) replaced in diskgroup DATA_DM01
NOTE: disk 0 (DATA_DM01_CD_00_DM01CEL01) in group 1 (DATA_DM01) is online for writes
NOTE: disk 1 (DATA_DM01_CD_01_DM01CEL01) in group 1 (DATA_DM01) is online for writes
NOTE: disk 2 (DATA_DM01_CD_02_DM01CEL01) in group 1 (DATA_DM01) is online for writes
NOTE: disk 3 (DATA_DM01_CD_03_DM01CEL01) in group 1 (DATA_DM01) is online for writes
NOTE: disk 4 (DATA_DM01_CD_04_DM01CEL01) in group 1 (DATA_DM01) is online for writes
NOTE: disk 5 (DATA_DM01_CD_05_DM01CEL01) in group 1 (DATA_DM01) is online for writes
NOTE: disk 6 (DATA_DM01_CD_06_DM01CEL01) in group 1 (DATA_DM01) is online for writes
NOTE: disk 7 (DATA_DM01_CD_07_DM01CEL01) in group 1 (DATA_DM01) is online for writes
NOTE: disk 8 (DATA_DM01_CD_08_DM01CEL01) in group 1 (DATA_DM01) is online for writes
NOTE: disk 9 (DATA_DM01_CD_09_DM01CEL01) in group 1 (DATA_DM01) is online for writes
NOTE: disk 10 (DATA_DM01_CD_10_DM01CEL01) in group 1 (DATA_DM01) is online for writes
NOTE: disk 11 (DATA_DM01_CD_11_DM01CEL01) in group 1 (DATA_DM01) is online for writes
NOTE: disk 0 (DATA_DM01_CD_00_DM01CEL01) in group 1 (DATA_DM01) is online for reads
NOTE: disk 1 (DATA_DM01_CD_01_DM01CEL01) in group 1 (DATA_DM01) is online for reads
NOTE: disk 2 (DATA_DM01_CD_02_DM01CEL01) in group 1 (DATA_DM01) is online for reads
NOTE: disk 3 (DATA_DM01_CD_03_DM01CEL01) in group 1 (DATA_DM01) is online for reads
NOTE: disk 4 (DATA_DM01_CD_04_DM01CEL01) in group 1 (DATA_DM01) is online for reads
NOTE: disk 5 (DATA_DM01_CD_05_DM01CEL01) in group 1 (DATA_DM01) is online for reads
NOTE: disk 6 (DATA_DM01_CD_06_DM01CEL01) in group 1 (DATA_DM01) is online for reads
NOTE: disk 7 (DATA_DM01_CD_07_DM01CEL01) in group 1 (DATA_DM01) is online for reads
NOTE: disk 8 (DATA_DM01_CD_08_DM01CEL01) in group 1 (DATA_DM01) is online for reads
NOTE: disk 9 (DATA_DM01_CD_09_DM01CEL01) in group 1 (DATA_DM01) is online for reads
NOTE: disk 10 (DATA_DM01_CD_10_DM01CEL01) in group 1 (DATA_DM01) is online for reads
NOTE: disk 11 (DATA_DM01_CD_11_DM01CEL01) in group 1 (DATA_DM01) is online for reads
Sun Sep 02 09:16:12 2012
NOTE: Found o/192.168.64.131/RECO_DM01_CD_00_dm01cel01 for disk RECO_DM01_CD_00_DM01CEL01
SUCCESS: disk RECO_DM01_CD_00_DM01CEL01 (0.3940753265) replaced in diskgroup RECO_DM01
NOTE: Found o/192.168.64.131/RECO_DM01_CD_01_dm01cel01 for disk RECO_DM01_CD_01_DM01CEL01
SUCCESS: disk RECO_DM01_CD_01_DM01CEL01 (1.3940753259) replaced in diskgroup RECO_DM01
NOTE: Found o/192.168.64.131/RECO_DM01_CD_02_dm01cel01 for disk RECO_DM01_CD_02_DM01CEL01
SUCCESS: disk RECO_DM01_CD_02_DM01CEL01 (2.3940753260) replaced in diskgroup RECO_DM01
NOTE: Found o/192.168.64.131/RECO_DM01_CD_03_dm01cel01 for disk RECO_DM01_CD_03_DM01CEL01
SUCCESS: disk RECO_DM01_CD_03_DM01CEL01 (3.3940753268) replaced in diskgroup RECO_DM01
NOTE: Found o/192.168.64.131/RECO_DM01_CD_04_dm01cel01 for disk RECO_DM01_CD_04_DM01CEL01
SUCCESS: disk RECO_DM01_CD_04_DM01CEL01 (4.3940753261) replaced in diskgroup RECO_DM01
NOTE: Found o/192.168.64.131/RECO_DM01_CD_05_dm01cel01 for disk RECO_DM01_CD_05_DM01CEL01
SUCCESS: disk RECO_DM01_CD_05_DM01CEL01 (5.3940753267) replaced in diskgroup RECO_DM01
NOTE: Found o/192.168.64.131/RECO_DM01_CD_06_dm01cel01 for disk RECO_DM01_CD_06_DM01CEL01
SUCCESS: disk RECO_DM01_CD_06_DM01CEL01 (6.3940753264) replaced in diskgroup RECO_DM01
NOTE: Found o/192.168.64.131/RECO_DM01_CD_07_dm01cel01 for disk RECO_DM01_CD_07_DM01CEL01
SUCCESS: disk RECO_DM01_CD_07_DM01CEL01 (7.3940753266) replaced in diskgroup RECO_DM01
NOTE: Found o/192.168.64.131/RECO_DM01_CD_08_dm01cel01 for disk RECO_DM01_CD_08_DM01CEL01
SUCCESS: disk RECO_DM01_CD_08_DM01CEL01 (8.3940753262) replaced in diskgroup RECO_DM01
NOTE: Found o/192.168.64.131/RECO_DM01_CD_09_dm01cel01 for disk RECO_DM01_CD_09_DM01CEL01
SUCCESS: disk RECO_DM01_CD_09_DM01CEL01 (9.3940753258) replaced in diskgroup RECO_DM01
NOTE: Found o/192.168.64.131/RECO_DM01_CD_10_dm01cel01 for disk RECO_DM01_CD_10_DM01CEL01
SUCCESS: disk RECO_DM01_CD_10_DM01CEL01 (10.3940753263) replaced in diskgroup RECO_DM01
NOTE: Found o/192.168.64.131/RECO_DM01_CD_11_dm01cel01 for disk RECO_DM01_CD_11_DM01CEL01
SUCCESS: disk RECO_DM01_CD_11_DM01CEL01 (11.3940753269) replaced in diskgroup RECO_DM01
NOTE: disk 0 (RECO_DM01_CD_00_DM01CEL01) in group 3 (RECO_DM01) is online for writes
NOTE: disk 1 (RECO_DM01_CD_01_DM01CEL01) in group 3 (RECO_DM01) is online for writes
NOTE: disk 2 (RECO_DM01_CD_02_DM01CEL01) in group 3 (RECO_DM01) is online for writes
NOTE: disk 3 (RECO_DM01_CD_03_DM01CEL01) in group 3 (RECO_DM01) is online for writes
NOTE: disk 4 (RECO_DM01_CD_04_DM01CEL01) in group 3 (RECO_DM01) is online for writes
NOTE: disk 5 (RECO_DM01_CD_05_DM01CEL01) in group 3 (RECO_DM01) is online for writes
NOTE: disk 6 (RECO_DM01_CD_06_DM01CEL01) in group 3 (RECO_DM01) is online for writes
NOTE: disk 7 (RECO_DM01_CD_07_DM01CEL01) in group 3 (RECO_DM01) is online for writes
NOTE: disk 8 (RECO_DM01_CD_08_DM01CEL01) in group 3 (RECO_DM01) is online for writes
NOTE: disk 9 (RECO_DM01_CD_09_DM01CEL01) in group 3 (RECO_DM01) is online for writes
NOTE: disk 10 (RECO_DM01_CD_10_DM01CEL01) in group 3 (RECO_DM01) is online for writes
NOTE: disk 11 (RECO_DM01_CD_11_DM01CEL01) in group 3 (RECO_DM01) is online for writes
NOTE: disk 0 (RECO_DM01_CD_00_DM01CEL01) in group 3 (RECO_DM01) is online for reads
NOTE: disk 1 (RECO_DM01_CD_01_DM01CEL01) in group 3 (RECO_DM01) is online for reads
NOTE: disk 2 (RECO_DM01_CD_02_DM01CEL01) in group 3 (RECO_DM01) is online for reads
NOTE: disk 3 (RECO_DM01_CD_03_DM01CEL01) in group 3 (RECO_DM01) is online for reads
NOTE: disk 4 (RECO_DM01_CD_04_DM01CEL01) in group 3 (RECO_DM01) is online for reads
NOTE: disk 5 (RECO_DM01_CD_05_DM01CEL01) in group 3 (RECO_DM01) is online for reads
NOTE: disk 6 (RECO_DM01_CD_06_DM01CEL01) in group 3 (RECO_DM01) is online for reads
NOTE: disk 7 (RECO_DM01_CD_07_DM01CEL01) in group 3 (RECO_DM01) is online for reads
NOTE: disk 8 (RECO_DM01_CD_08_DM01CEL01) in group 3 (RECO_DM01) is online for reads
NOTE: disk 9 (RECO_DM01_CD_09_DM01CEL01) in group 3 (RECO_DM01) is online for reads
NOTE: disk 10 (RECO_DM01_CD_10_DM01CEL01) in group 3 (RECO_DM01) is online for reads
NOTE: disk 11 (RECO_DM01_CD_11_DM01CEL01) in group 3 (RECO_DM01) is online for reads


[root@dm01cel01 trace]# pwd
/opt/oracle/cell11.2.3.1.1_LINUX.X64_120607/log/diag/asm/cell/dm01cel01/trace

The cell alert log during the restart reads as follows:

Sun Sep 02 09:07:00 2012
[RS] Process /opt/oracle/cell11.2.3.1.1_LINUX.X64_120607/cellsrv/bin/cellrsmmt (pid: 10473) received clean shutdown signal from pid: 9334, uid: 0
Sun Sep 02 09:07:04 2012
[RS] Stopped Service MS with pid 10474
Sun Sep 02 09:07:04 2012
[RS] Process /opt/oracle/cell11.2.3.1.1_LINUX.X64_120607/cellsrv/bin/cellrsomt (pid: 12307) received clean shutdown signal from pid: 9334, uid: 0
Sun Sep 02 09:07:04 2012
Clean shutdown signal delivered to OSS<12308>
[RS] Stopped Service CELLSRV with pid 12308
Sun Sep 02 09:07:07 2012
[RS] Process /opt/oracle/cell11.2.3.1.1_LINUX.X64_120607/cellsrv/bin/cellrsbmt (pid: 9340) received clean shutdown signal from pid: 9334, uid: 0
[RS] Stopped Service RS_BACKUP
[RS] Stopped Service RS_MAIN
Sun Sep 02 09:07:08 2012


[RS] Process /opt/oracle/cell11.2.3.1.1_LINUX.X64_120607/cellsrv/bin/cellrsbkm (pid: 9342) received clean shutdown signal from pid: 9334, uid: 0
Sun Sep 02 09:07:09 2012
[RS] Process /opt/oracle/cell11.2.3.1.1_LINUX.X64_120607/cellsrv/bin/cellrssmt (pid: 9346) received clean shutdown signal from pid: 9334, uid: 0
Sun Sep 02 09:07:10 2012
[RS] Process /opt/oracle/cell11.2.3.1.1_LINUX.X64_120607/cellsrv/bin/cellrssrm (pid: 9334) received clean shutdown signal from pid: 9334, uid: 0
Sun Sep 02 09:13:40 2012
RS version=11.2.3.1.1,label=OSS_11.2.3.1.1_LINUX.X64_120607,Fri_Jun__8_12:49:44_PDT_2012
[RS] Started Service RS_MAIN with pid 18806
Sun Sep 02 09:13:40 2012
[RS] Started monitoring process /opt/oracle/cell11.2.3.1.1_LINUX.X64_120607/cellsrv/bin/cellrsbmt with pid 18812
Sun Sep 02 09:13:40 2012
[RS] Started monitoring process /opt/oracle/cell11.2.3.1.1_LINUX.X64_120607/cellsrv/bin/cellrsmmt with pid 18813
Sun Sep 02 09:13:40 2012
[RS] Started monitoring process /opt/oracle/cell11.2.3.1.1_LINUX.X64_120607/cellsrv/bin/cellrsomt with pid 18815
Sun Sep 02 09:13:40 2012
RSBK version=11.2.3.1.1,label=OSS_11.2.3.1.1_LINUX.X64_120607,Fri_Jun__8_12:49:44_PDT_2012
[RS] Started Service RS_BACKUP with pid 18814
Sun Sep 02 09:13:40 2012
[RS] Started monitoring process /opt/oracle/cell11.2.3.1.1_LINUX.X64_120607/cellsrv/bin/cellrssmt with pid 18844
Sun Sep 02 09:13:40 2012
Successfully setting event parameter -
Sun Sep 02 09:13:40 2012
Successfully setting event parameter -
CELLSRV process id=18817
CELLSRV cell host name=dm01cel01.acs.oracle.com
CELLSRV version=11.2.3.1.1,label=OSS_11.2.3.1.1_LINUX.X64_120607,Fri_Jun__8_12:49:44_PDT_2012
OS Hugepage status:
   Total/free hugepages available=4001/155; hugepage size=2048KB
MS_ALERT HUGEPAGE CLEAR
Cache Allocation: Num 1MB hugepage buffers: 8000 Num 1MB non-hugepage buffers: 0
Cache Allocation: BufferSize: 512. Num buffers: 5000. Start Address: 2AACA2E00000
Cache Allocation: BufferSize: 2048. Num buffers: 5000. Start Address: 2AACA3072000
Cache Allocation: BufferSize: 4096. Num buffers: 5000. Start Address: 2AACA3A37000
Cache Allocation: BufferSize: 8192. Num buffers: 10000. Start Address: 2AACA4DC0000
Cache Allocation: BufferSize: 16384. Num buffers: 5000. Start Address: 2AACA9BE1000
Cache Allocation: BufferSize: 32768. Num buffers: 5000. Start Address: 2AACAEA02000
Cache Allocation: BufferSize: 65536. Num buffers: 5000. Start Address: 2AACB8643000
Cache Allocation: BufferSize: 10485760. Num buffers: 23. Start Address: 2AACCBEC4000
CELL communication is configured to use 1 interface(s):
    192.168.64.131
IPC version: Oracle RDS/IP (generic)
IPC Vendor 1 Protocol 3
  Version 4.1
CellDisk v0.6 name=CD_00_dm01cel01 status=NORMAL guid=d8227f04-4b2a-4a2b-b320-a73fe3211671 found on dev=/dev/sda3
  GridDisk name=RECO_DM01_CD_00_dm01cel01 guid=27bc5e71-1662-419f-84ac-ac7936d051f9 (3428420588)
  GridDisk name=DATA_DM01_CD_00_dm01cel01 guid=1215af58-90d9-4a74-a51b-3ddc6d54aede (3447961868)
CellDisk v0.6 name=FD_05_dm01cel01 status=NORMAL guid=a3718f03-8112-4643-932e-0837ea9d445f found on dev=/dev/sdaa
CellDisk v0.6 name=FD_06_dm01cel01 status=NORMAL guid=309f226c-8255-4c71-9991-ad7eab8e934b found on dev=/dev/sdab
CellDisk v0.6 name=FD_07_dm01cel01 status=NORMAL guid=af4b7aa9-486a-4458-a4e7-0ecfb17536ab found on dev=/dev/sdac
CellDisk v0.6 name=FD_13_dm01cel01 status=NORMAL guid=7bad74e5-d125-4702-882e-a11db0154588 found on dev=/dev/sdw
CellDisk v0.6 name=FD_14_dm01cel01 status=NORMAL guid=7588df10-a2e2-497d-967f-0e1ef5d07d74 found on dev=/dev/sdx
CellDisk v0.6 name=FD_15_dm01cel01 status=NORMAL guid=d70c7598-82e8-496e-ae8a-8f7c888374b8 found on dev=/dev/sdy
CellDisk v0.6 name=FD_04_dm01cel01 status=NORMAL guid=95662b6a-2eeb-40e4-81ca-d880b364fbac found on dev=/dev/sdz
FlashCache: allowing client IOs
Smart Flash Logging disabled due to lack of active Flash Log Stores
Cellsrv Incarnation is set: 2

CELLSRV Server startup complete
Sun Sep  2 09:13:43 2012
[RS] Started Service CELLSRV with pid 18817
[RS] Started Service MS with pid 18816
Sun Sep 02 09:13:47 2012
Heartbeat with diskmon started on dm01db02.acs.oracle.com
Heartbeat with diskmon started on dm01db01.acs.oracle.com
Sun Sep 02 09:13:47 2012
I/O Resource Manager enabled
Sun Sep 02 09:13:48 2012
Caching enabled on FlashCache dm01cel01_FLASHCACHE (1818411676), size=22GB, cdisk=FD_13_dm01cel01
Caching enabled on FlashCache dm01cel01_FLASHCACHE (3896713996), size=22GB, cdisk=FD_05_dm01cel01
Caching enabled on FlashCache dm01cel01_FLASHCACHE (4258309268), size=22GB, cdisk=FD_11_dm01cel01
Caching enabled on FlashCache dm01cel01_FLASHCACHE (2295236356), size=22GB, cdisk=FD_09_dm01cel01
Caching enabled on FlashCache dm01cel01_FLASHCACHE (2066597948), size=22GB, cdisk=FD_06_dm01cel01
Caching enabled on FlashCache dm01cel01_FLASHCACHE (3538503772), size=22GB, cdisk=FD_15_dm01cel01
Caching enabled on FlashCache dm01cel01_FLASHCACHE (1832946924), size=22GB, cdisk=FD_10_dm01cel01
Caching enabled on FlashCache dm01cel01_FLASHCACHE (542355212), size=22GB, cdisk=FD_00_dm01cel01
Caching enabled on FlashCache dm01cel01_FLASHCACHE (1736192156), size=22GB, cdisk=FD_04_dm01cel01
Caching enabled on FlashCache dm01cel01_FLASHCACHE (2921083820), size=22GB, cdisk=FD_14_dm01cel01
Caching enabled on FlashCache dm01cel01_FLASHCACHE (3957806660), size=22GB, cdisk=FD_12_dm01cel01
Caching enabled on FlashCache dm01cel01_FLASHCACHE (767858284), size=22GB, cdisk=FD_01_dm01cel01
Sun Sep 02 09:14:00 2012
Info: Assigning flash CD FD_00_dm01cel01 to group#1
Sun Sep 02 09:14:00 2012
Info: Assigning flash CD FD_01_dm01cel01 to group#1
Sun Sep 02 09:14:00 2012
Info: Assigning flash CD FD_02_dm01cel01 to group#1
Sun Sep 02 09:14:00 2012
Info: Assigning flash CD FD_03_dm01cel01 to group#1
Sun Sep 02 09:14:00 2012
Info: Assigning flash CD FD_04_dm01cel01 to group#2
Sun Sep 02 09:14:00 2012
Info: Assigning flash CD FD_05_dm01cel01 to group#2
Sun Sep 02 09:14:00 2012
Info: Assigning flash CD FD_06_dm01cel01 to group#2
Sun Sep 02 09:14:00 2012
Info: Assigning flash CD FD_07_dm01cel01 to group#2
Sun Sep 02 09:14:00 2012
Info: Assigning flash CD FD_08_dm01cel01 to group#4
Sun Sep 02 09:14:00 2012

How to list storage alert information on Exadata cells

Storage alert information on an Exadata cell can be obtained by running the following commands from the CellCLI command line:

The LIST ALERTDEFINITION command displays the definitions of all alerts that can be generated on the cell server. The example below lists the alert name, metric name, and description.
The metric name identifies the metric on which the alert is based; ADRAlert and HardwareAlert are not based on any metric and therefore have no metric name.

The LIST ALERTHISTORY command displays the alert history on a cell server. The example below lists only alerts with severity critical, further filtered to those that have not yet been reviewed by an administrator (empty examinedBy attribute).

The CREATE THRESHOLD command defines a threshold that generates a metric alert when the specified condition is met.
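As an illustration, a threshold on one of the metrics listed below could be created roughly as follows (a sketch only — the metric object name "interactive" and the limit values here are hypothetical, chosen just to show the syntax):

CellCLI> CREATE THRESHOLD ct_io_wt_lg_rq.interactive warning=1000, critical=2000, comparison='>', occurrences=2, observation=5

With a definition like this, the cell raises a warning or critical metric alert when the CT_IO_WT_LG_RQ value for the "interactive" IORM category crosses the corresponding limit for the configured number of occurrences within the observation window.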

CellCLI> LIST ALERTDEFINITION ATTRIBUTES name,metricname,description
         ADRAlert                                                                        "Incident Alert"
         HardwareAlert                                                                   "Hardware Alert"
         StatefulAlert_CD_IO_BY_R_LG                     CD_IO_BY_R_LG                   "Threshold Alert"
         StatefulAlert_CD_IO_BY_R_LG_SEC                 CD_IO_BY_R_LG_SEC               "Threshold Alert"
         StatefulAlert_CD_IO_BY_R_SM                     CD_IO_BY_R_SM                   "Threshold Alert"
         StatefulAlert_CD_IO_BY_R_SM_SEC                 CD_IO_BY_R_SM_SEC               "Threshold Alert"
         StatefulAlert_CD_IO_BY_W_LG                     CD_IO_BY_W_LG                   "Threshold Alert"
         StatefulAlert_CD_IO_BY_W_LG_SEC                 CD_IO_BY_W_LG_SEC               "Threshold Alert"
         StatefulAlert_CD_IO_BY_W_SM                     CD_IO_BY_W_SM                   "Threshold Alert"
         StatefulAlert_CD_IO_BY_W_SM_SEC                 CD_IO_BY_W_SM_SEC               "Threshold Alert"
         StatefulAlert_CD_IO_ERRS                        CD_IO_ERRS                      "Threshold Alert"
         StatefulAlert_CD_IO_ERRS_MIN                    CD_IO_ERRS_MIN                  "Threshold Alert"
         StatefulAlert_CD_IO_LOAD                        CD_IO_LOAD                      "Threshold Alert"
         StatefulAlert_CD_IO_RQ_R_LG                     CD_IO_RQ_R_LG                   "Threshold Alert"
         StatefulAlert_CD_IO_RQ_R_LG_SEC                 CD_IO_RQ_R_LG_SEC               "Threshold Alert"
         StatefulAlert_CD_IO_RQ_R_SM                     CD_IO_RQ_R_SM                   "Threshold Alert"
         StatefulAlert_CD_IO_RQ_R_SM_SEC                 CD_IO_RQ_R_SM_SEC               "Threshold Alert"
         StatefulAlert_CD_IO_RQ_W_LG                     CD_IO_RQ_W_LG                   "Threshold Alert"
         StatefulAlert_CD_IO_RQ_W_LG_SEC                 CD_IO_RQ_W_LG_SEC               "Threshold Alert"
         StatefulAlert_CD_IO_RQ_W_SM                     CD_IO_RQ_W_SM                   "Threshold Alert"
         StatefulAlert_CD_IO_RQ_W_SM_SEC                 CD_IO_RQ_W_SM_SEC               "Threshold Alert"
         StatefulAlert_CD_IO_ST_RQ                       CD_IO_ST_RQ                     "Threshold Alert"
         StatefulAlert_CD_IO_TM_R_LG                     CD_IO_TM_R_LG                   "Threshold Alert"
         StatefulAlert_CD_IO_TM_R_LG_RQ                  CD_IO_TM_R_LG_RQ                "Threshold Alert"
         StatefulAlert_CD_IO_TM_R_SM                     CD_IO_TM_R_SM                   "Threshold Alert"
         StatefulAlert_CD_IO_TM_R_SM_RQ                  CD_IO_TM_R_SM_RQ                "Threshold Alert"
         StatefulAlert_CD_IO_TM_W_LG                     CD_IO_TM_W_LG                   "Threshold Alert"
         StatefulAlert_CD_IO_TM_W_LG_RQ                  CD_IO_TM_W_LG_RQ                "Threshold Alert"
         StatefulAlert_CD_IO_TM_W_SM                     CD_IO_TM_W_SM                   "Threshold Alert"
         StatefulAlert_CD_IO_TM_W_SM_RQ                  CD_IO_TM_W_SM_RQ                "Threshold Alert"
         StatefulAlert_CG_FC_IO_BY_SEC                   CG_FC_IO_BY_SEC                 "Threshold Alert"
         StatefulAlert_CG_FC_IO_RQ                       CG_FC_IO_RQ                     "Threshold Alert"
         StatefulAlert_CG_FC_IO_RQ_SEC                   CG_FC_IO_RQ_SEC                 "Threshold Alert"
         StatefulAlert_CG_FD_IO_BY_SEC                   CG_FD_IO_BY_SEC                 "Threshold Alert"
         StatefulAlert_CG_FD_IO_LOAD                     CG_FD_IO_LOAD                   "Threshold Alert"
         StatefulAlert_CG_FD_IO_RQ_LG                    CG_FD_IO_RQ_LG                  "Threshold Alert"
         StatefulAlert_CG_FD_IO_RQ_LG_SEC                CG_FD_IO_RQ_LG_SEC              "Threshold Alert"
         StatefulAlert_CG_FD_IO_RQ_SM                    CG_FD_IO_RQ_SM                  "Threshold Alert"
         StatefulAlert_CG_FD_IO_RQ_SM_SEC                CG_FD_IO_RQ_SM_SEC              "Threshold Alert"
         StatefulAlert_CG_IO_BY_SEC                      CG_IO_BY_SEC                    "Threshold Alert"
         StatefulAlert_CG_IO_LOAD                        CG_IO_LOAD                      "Threshold Alert"
         StatefulAlert_CG_IO_RQ_LG                       CG_IO_RQ_LG                     "Threshold Alert"
         StatefulAlert_CG_IO_RQ_LG_SEC                   CG_IO_RQ_LG_SEC                 "Threshold Alert"
         StatefulAlert_CG_IO_RQ_SM                       CG_IO_RQ_SM                     "Threshold Alert"
         StatefulAlert_CG_IO_RQ_SM_SEC                   CG_IO_RQ_SM_SEC                 "Threshold Alert"
         StatefulAlert_CG_IO_UTIL_LG                     CG_IO_UTIL_LG                   "Threshold Alert"
         StatefulAlert_CG_IO_UTIL_SM                     CG_IO_UTIL_SM                   "Threshold Alert"
         StatefulAlert_CG_IO_WT_LG                       CG_IO_WT_LG                     "Threshold Alert"
         StatefulAlert_CG_IO_WT_LG_RQ                    CG_IO_WT_LG_RQ                  "Threshold Alert"
         StatefulAlert_CG_IO_WT_SM                       CG_IO_WT_SM                     "Threshold Alert"
         StatefulAlert_CG_IO_WT_SM_RQ                    CG_IO_WT_SM_RQ                  "Threshold Alert"
         StatefulAlert_CL_BBU_CHARGE                     CL_BBU_CHARGE                   "Threshold Alert"
         StatefulAlert_CL_BBU_TEMP                       CL_BBU_TEMP                     "Threshold Alert"
         StatefulAlert_CL_CPUT                           CL_CPUT                         "Threshold Alert"
         StatefulAlert_CL_CPUT_CS                        CL_CPUT_CS                      "Threshold Alert"
         StatefulAlert_CL_CPUT_MS                        CL_CPUT_MS                      "Threshold Alert"
         StatefulAlert_CL_FANS                           CL_FANS                         "Threshold Alert"
         StatefulAlert_CL_FSUT                           CL_FSUT                         "Threshold Alert"
         StatefulAlert_CL_MEMUT                          CL_MEMUT                        "Threshold Alert"
         StatefulAlert_CL_MEMUT_CS                       CL_MEMUT_CS                     "Threshold Alert"
         StatefulAlert_CL_MEMUT_MS                       CL_MEMUT_MS                     "Threshold Alert"
         StatefulAlert_CL_RUNQ                           CL_RUNQ                         "Threshold Alert"
         StatefulAlert_CL_SWAP_IN_BY_SEC                 CL_SWAP_IN_BY_SEC               "Threshold Alert"
         StatefulAlert_CL_SWAP_OUT_BY_SEC                CL_SWAP_OUT_BY_SEC              "Threshold Alert"
         StatefulAlert_CL_SWAP_USAGE                     CL_SWAP_USAGE                   "Threshold Alert"
         StatefulAlert_CL_TEMP                           CL_TEMP                         "Threshold Alert"
         StatefulAlert_CL_VIRTMEM_CS                     CL_VIRTMEM_CS                   "Threshold Alert"
         StatefulAlert_CL_VIRTMEM_MS                     CL_VIRTMEM_MS                   "Threshold Alert"
         StatefulAlert_CT_FC_IO_BY_SEC                   CT_FC_IO_BY_SEC                 "Threshold Alert"
         StatefulAlert_CT_FC_IO_RQ                       CT_FC_IO_RQ                     "Threshold Alert"
         StatefulAlert_CT_FC_IO_RQ_SEC                   CT_FC_IO_RQ_SEC                 "Threshold Alert"
         StatefulAlert_CT_FD_IO_BY_SEC                   CT_FD_IO_BY_SEC                 "Threshold Alert"
         StatefulAlert_CT_FD_IO_LOAD                     CT_FD_IO_LOAD                   "Threshold Alert"
         StatefulAlert_CT_FD_IO_RQ_LG                    CT_FD_IO_RQ_LG                  "Threshold Alert"
         StatefulAlert_CT_FD_IO_RQ_LG_SEC                CT_FD_IO_RQ_LG_SEC              "Threshold Alert"
         StatefulAlert_CT_FD_IO_RQ_SM                    CT_FD_IO_RQ_SM                  "Threshold Alert"
         StatefulAlert_CT_FD_IO_RQ_SM_SEC                CT_FD_IO_RQ_SM_SEC              "Threshold Alert"
         StatefulAlert_CT_IO_BY_SEC                      CT_IO_BY_SEC                    "Threshold Alert"
         StatefulAlert_CT_IO_LOAD                        CT_IO_LOAD                      "Threshold Alert"
         StatefulAlert_CT_IO_RQ_LG                       CT_IO_RQ_LG                     "Threshold Alert"
         StatefulAlert_CT_IO_RQ_LG_SEC                   CT_IO_RQ_LG_SEC                 "Threshold Alert"
         StatefulAlert_CT_IO_RQ_SM                       CT_IO_RQ_SM                     "Threshold Alert"
         StatefulAlert_CT_IO_RQ_SM_SEC                   CT_IO_RQ_SM_SEC                 "Threshold Alert"
         StatefulAlert_CT_IO_UTIL_LG                     CT_IO_UTIL_LG                   "Threshold Alert"
         StatefulAlert_CT_IO_UTIL_SM                     CT_IO_UTIL_SM                   "Threshold Alert"
         StatefulAlert_CT_IO_WT_LG                       CT_IO_WT_LG                     "Threshold Alert"
         StatefulAlert_CT_IO_WT_LG_RQ                    CT_IO_WT_LG_RQ                  "Threshold Alert"
         StatefulAlert_CT_IO_WT_SM                       CT_IO_WT_SM                     "Threshold Alert"
         StatefulAlert_CT_IO_WT_SM_RQ                    CT_IO_WT_SM_RQ                  "Threshold Alert"
         StatefulAlert_DB_FC_IO_BY_SEC                   DB_FC_IO_BY_SEC                 "Threshold Alert"
         StatefulAlert_DB_FC_IO_RQ                       DB_FC_IO_RQ                     "Threshold Alert"
         StatefulAlert_DB_FC_IO_RQ_SEC                   DB_FC_IO_RQ_SEC                 "Threshold Alert"
         StatefulAlert_DB_FD_IO_BY_SEC                   DB_FD_IO_BY_SEC                 "Threshold Alert"
         StatefulAlert_DB_FD_IO_LOAD                     DB_FD_IO_LOAD                   "Threshold Alert"
         StatefulAlert_DB_FD_IO_RQ_LG                    DB_FD_IO_RQ_LG                  "Threshold Alert"
         StatefulAlert_DB_FD_IO_RQ_LG_SEC                DB_FD_IO_RQ_LG_SEC              "Threshold Alert"
         StatefulAlert_DB_FD_IO_RQ_SM                    DB_FD_IO_RQ_SM                  "Threshold Alert"
         StatefulAlert_DB_FD_IO_RQ_SM_SEC                DB_FD_IO_RQ_SM_SEC              "Threshold Alert"
         StatefulAlert_DB_FL_IO_BY                       DB_FL_IO_BY                     "Threshold Alert"
         StatefulAlert_DB_FL_IO_BY_SEC                   DB_FL_IO_BY_SEC                 "Threshold Alert"
         StatefulAlert_DB_FL_IO_RQ                       DB_FL_IO_RQ                     "Threshold Alert"
         StatefulAlert_DB_FL_IO_RQ_SEC                   DB_FL_IO_RQ_SEC                 "Threshold Alert"
         StatefulAlert_DB_IO_BY_SEC                      DB_IO_BY_SEC                    "Threshold Alert"
         StatefulAlert_DB_IO_LOAD                        DB_IO_LOAD                      "Threshold Alert"
         StatefulAlert_DB_IO_RQ_LG                       DB_IO_RQ_LG                     "Threshold Alert"
         StatefulAlert_DB_IO_RQ_LG_SEC                   DB_IO_RQ_LG_SEC                 "Threshold Alert"
         StatefulAlert_DB_IO_RQ_SM                       DB_IO_RQ_SM                     "Threshold Alert"
         StatefulAlert_DB_IO_RQ_SM_SEC                   DB_IO_RQ_SM_SEC                 "Threshold Alert"
         StatefulAlert_DB_IO_UTIL_LG                     DB_IO_UTIL_LG                   "Threshold Alert"
         StatefulAlert_DB_IO_UTIL_SM                     DB_IO_UTIL_SM                   "Threshold Alert"
         StatefulAlert_DB_IO_WT_LG                       DB_IO_WT_LG                     "Threshold Alert"
         StatefulAlert_DB_IO_WT_LG_RQ                    DB_IO_WT_LG_RQ                  "Threshold Alert"
         StatefulAlert_DB_IO_WT_SM                       DB_IO_WT_SM                     "Threshold Alert"
         StatefulAlert_DB_IO_WT_SM_RQ                    DB_IO_WT_SM_RQ                  "Threshold Alert"
         StatefulAlert_FC_BYKEEP_OVERWR                  FC_BYKEEP_OVERWR                "Threshold Alert"
         StatefulAlert_FC_BYKEEP_OVERWR_SEC              FC_BYKEEP_OVERWR_SEC            "Threshold Alert"
         StatefulAlert_FC_BYKEEP_USED                    FC_BYKEEP_USED                  "Threshold Alert"
         StatefulAlert_FC_BY_USED                        FC_BY_USED                      "Threshold Alert"
         StatefulAlert_FC_IO_BYKEEP_R                    FC_IO_BYKEEP_R                  "Threshold Alert"
         StatefulAlert_FC_IO_BYKEEP_R_SEC                FC_IO_BYKEEP_R_SEC              "Threshold Alert"
         StatefulAlert_FC_IO_BYKEEP_W                    FC_IO_BYKEEP_W                  "Threshold Alert"
         StatefulAlert_FC_IO_BYKEEP_W_SEC                FC_IO_BYKEEP_W_SEC              "Threshold Alert"
         StatefulAlert_FC_IO_BY_R                        FC_IO_BY_R                      "Threshold Alert"
         StatefulAlert_FC_IO_BY_R_MISS                   FC_IO_BY_R_MISS                 "Threshold Alert"
         StatefulAlert_FC_IO_BY_R_MISS_SEC               FC_IO_BY_R_MISS_SEC             "Threshold Alert"
         StatefulAlert_FC_IO_BY_R_SEC                    FC_IO_BY_R_SEC                  "Threshold Alert"
         StatefulAlert_FC_IO_BY_R_SKIP                   FC_IO_BY_R_SKIP                 "Threshold Alert"
         StatefulAlert_FC_IO_BY_R_SKIP_SEC               FC_IO_BY_R_SKIP_SEC             "Threshold Alert"
         StatefulAlert_FC_IO_BY_W                        FC_IO_BY_W                      "Threshold Alert"
         StatefulAlert_FC_IO_BY_W_SEC                    FC_IO_BY_W_SEC                  "Threshold Alert"
         StatefulAlert_FC_IO_ERRS                        FC_IO_ERRS                      "Threshold Alert"
         StatefulAlert_FC_IO_RQKEEP_R                    FC_IO_RQKEEP_R                  "Threshold Alert"
         StatefulAlert_FC_IO_RQKEEP_R_MISS               FC_IO_RQKEEP_R_MISS             "Threshold Alert"
         StatefulAlert_FC_IO_RQKEEP_R_MISS_SEC           FC_IO_RQKEEP_R_MISS_SEC         "Threshold Alert"
         StatefulAlert_FC_IO_RQKEEP_R_SEC                FC_IO_RQKEEP_R_SEC              "Threshold Alert"
         StatefulAlert_FC_IO_RQKEEP_R_SKIP               FC_IO_RQKEEP_R_SKIP             "Threshold Alert"
         StatefulAlert_FC_IO_RQKEEP_R_SKIP_SEC           FC_IO_RQKEEP_R_SKIP_SEC         "Threshold Alert"
         StatefulAlert_FC_IO_RQKEEP_W                    FC_IO_RQKEEP_W                  "Threshold Alert"
         StatefulAlert_FC_IO_RQKEEP_W_SEC                FC_IO_RQKEEP_W_SEC              "Threshold Alert"
         StatefulAlert_FC_IO_RQ_R                        FC_IO_RQ_R                      "Threshold Alert"
         StatefulAlert_FC_IO_RQ_R_MISS                   FC_IO_RQ_R_MISS                 "Threshold Alert"
         StatefulAlert_FC_IO_RQ_R_MISS_SEC               FC_IO_RQ_R_MISS_SEC             "Threshold Alert"
         StatefulAlert_FC_IO_RQ_R_SEC                    FC_IO_RQ_R_SEC                  "Threshold Alert"
         StatefulAlert_FC_IO_RQ_R_SKIP                   FC_IO_RQ_R_SKIP                 "Threshold Alert"
         StatefulAlert_FC_IO_RQ_R_SKIP_SEC               FC_IO_RQ_R_SKIP_SEC             "Threshold Alert"
         StatefulAlert_FC_IO_RQ_W                        FC_IO_RQ_W                      "Threshold Alert"
         StatefulAlert_FC_IO_RQ_W_SEC                    FC_IO_RQ_W_SEC                  "Threshold Alert"
         StatefulAlert_FL_ACTUAL_OUTLIERS                FL_ACTUAL_OUTLIERS              "Threshold Alert"
         StatefulAlert_FL_BY_KEEP                        FL_BY_KEEP                      "Threshold Alert"
         StatefulAlert_FL_DISK_FIRST                     FL_DISK_FIRST                   "Threshold Alert"
         StatefulAlert_FL_DISK_IO_ERRS                   FL_DISK_IO_ERRS                 "Threshold Alert"
         StatefulAlert_FL_EFFICIENCY_PERCENTAGE          FL_EFFICIENCY_PERCENTAGE        "Threshold Alert"
         StatefulAlert_FL_EFFICIENCY_PERCENTAGE_HOUR     FL_EFFICIENCY_PERCENTAGE_HOUR   "Threshold Alert"
         StatefulAlert_FL_FLASH_FIRST                    FL_FLASH_FIRST                  "Threshold Alert"
         StatefulAlert_FL_FLASH_IO_ERRS                  FL_FLASH_IO_ERRS                "Threshold Alert"
         StatefulAlert_FL_FLASH_ONLY_OUTLIERS            FL_FLASH_ONLY_OUTLIERS          "Threshold Alert"
         StatefulAlert_FL_IO_DB_BY_W                     FL_IO_DB_BY_W                   "Threshold Alert"
         StatefulAlert_FL_IO_DB_BY_W_SEC                 FL_IO_DB_BY_W_SEC               "Threshold Alert"
         StatefulAlert_FL_IO_FL_BY_W                     FL_IO_FL_BY_W                   "Threshold Alert"
         StatefulAlert_FL_IO_FL_BY_W_SEC                 FL_IO_FL_BY_W_SEC               "Threshold Alert"
         StatefulAlert_FL_IO_W                           FL_IO_W                         "Threshold Alert"
         StatefulAlert_FL_IO_W_SKIP_BUSY                 FL_IO_W_SKIP_BUSY               "Threshold Alert"
         StatefulAlert_FL_IO_W_SKIP_BUSY_MIN             FL_IO_W_SKIP_BUSY_MIN           "Threshold Alert"
         StatefulAlert_FL_IO_W_SKIP_LARGE                FL_IO_W_SKIP_LARGE              "Threshold Alert"
         StatefulAlert_FL_PREVENTED_OUTLIERS             FL_PREVENTED_OUTLIERS           "Threshold Alert"
         StatefulAlert_GD_IO_BY_R_LG                     GD_IO_BY_R_LG                   "Threshold Alert"
         StatefulAlert_GD_IO_BY_R_LG_SEC                 GD_IO_BY_R_LG_SEC               "Threshold Alert"
         StatefulAlert_GD_IO_BY_R_SM                     GD_IO_BY_R_SM                   "Threshold Alert"
         StatefulAlert_GD_IO_BY_R_SM_SEC                 GD_IO_BY_R_SM_SEC               "Threshold Alert"
         StatefulAlert_GD_IO_BY_W_LG                     GD_IO_BY_W_LG                   "Threshold Alert"
         StatefulAlert_GD_IO_BY_W_LG_SEC                 GD_IO_BY_W_LG_SEC               "Threshold Alert"
         StatefulAlert_GD_IO_BY_W_SM                     GD_IO_BY_W_SM                   "Threshold Alert"
         StatefulAlert_GD_IO_BY_W_SM_SEC                 GD_IO_BY_W_SM_SEC               "Threshold Alert"
         StatefulAlert_GD_IO_ERRS                        GD_IO_ERRS                      "Threshold Alert"
         StatefulAlert_GD_IO_ERRS_MIN                    GD_IO_ERRS_MIN                  "Threshold Alert"
         StatefulAlert_GD_IO_RQ_R_LG                     GD_IO_RQ_R_LG                   "Threshold Alert"
         StatefulAlert_GD_IO_RQ_R_LG_SEC                 GD_IO_RQ_R_LG_SEC               "Threshold Alert"
         StatefulAlert_GD_IO_RQ_R_SM                     GD_IO_RQ_R_SM                   "Threshold Alert"
         StatefulAlert_GD_IO_RQ_R_SM_SEC                 GD_IO_RQ_R_SM_SEC               "Threshold Alert"
         StatefulAlert_GD_IO_RQ_W_LG                     GD_IO_RQ_W_LG                   "Threshold Alert"
         StatefulAlert_GD_IO_RQ_W_LG_SEC                 GD_IO_RQ_W_LG_SEC               "Threshold Alert"
         StatefulAlert_GD_IO_RQ_W_SM                     GD_IO_RQ_W_SM                   "Threshold Alert"
         StatefulAlert_GD_IO_RQ_W_SM_SEC                 GD_IO_RQ_W_SM_SEC               "Threshold Alert"
         StatefulAlert_IORM_MODE                         IORM_MODE                       "Threshold Alert"
         StatefulAlert_N_HCA_MB_RCV_SEC                  N_HCA_MB_RCV_SEC                "Threshold Alert"
         StatefulAlert_N_HCA_MB_TRANS_SEC                N_HCA_MB_TRANS_SEC              "Threshold Alert"
         StatefulAlert_N_MB_DROP                         N_MB_DROP                       "Threshold Alert"
         StatefulAlert_N_MB_DROP_SEC                     N_MB_DROP_SEC                   "Threshold Alert"
         StatefulAlert_N_MB_RDMA_DROP                    N_MB_RDMA_DROP                  "Threshold Alert"
         StatefulAlert_N_MB_RDMA_DROP_SEC                N_MB_RDMA_DROP_SEC              "Threshold Alert"
         StatefulAlert_N_MB_RECEIVED                     N_MB_RECEIVED                   "Threshold Alert"
         StatefulAlert_N_MB_RECEIVED_SEC                 N_MB_RECEIVED_SEC               "Threshold Alert"
         StatefulAlert_N_MB_RESENT                       N_MB_RESENT                     "Threshold Alert"
         StatefulAlert_N_MB_RESENT_SEC                   N_MB_RESENT_SEC                 "Threshold Alert"
         StatefulAlert_N_MB_SENT                         N_MB_SENT                       "Threshold Alert"
         StatefulAlert_N_MB_SENT_SEC                     N_MB_SENT_SEC                   "Threshold Alert"
         StatefulAlert_N_NIC_KB_RCV_SEC                  N_NIC_KB_RCV_SEC                "Threshold Alert"
         StatefulAlert_N_NIC_KB_TRANS_SEC                N_NIC_KB_TRANS_SEC              "Threshold Alert"
         StatefulAlert_N_NIC_NW                          N_NIC_NW                        "Threshold Alert"
         StatefulAlert_N_RDMA_RETRY_TM                   N_RDMA_RETRY_TM                 "Threshold Alert"
         Stateful_HardwareAlert                                                          "Hardware Stateful Alert"
         Stateful_SoftwareAlert                                                          "Software Stateful Alert"


CellCLI> list alerthistory where  severity='critical' and examinedBy='' detail
         name:                   1_1
         alertMessage:           "Cell configuration check discovered the following problems:   Check Exadata configuration via ipconf utility Verifying of Exadata configuration file /opt/oracle.cellos/cell.conf Error. Exadata configuration file not found /opt/oracle.cellos/cell.conf [INFO] The ipconf check may generate a failure for temporary inability to reach NTP or DNS server. You may ignore this alert, if the NTP or DNS servers are valid and available. [INFO] You may ignore this alert, if the NTP or DNS servers are valid and available. [INFO] As root user run /usr/local/bin/ipconf -verify -semantic to verify consistent network configurations."
         alertSequenceID:        1
         alertShortName:         Software
         alertType:              Stateful
         beginTime:              2011-08-09T15:16:39-04:00
         endTime:                2011-08-09T15:37:04-04:00
         examinedBy:             
         metricObjectName:       checkconfig
         notificationState:      0
         sequenceBeginTime:      2011-08-09T15:16:39-04:00
         severity:               critical
         alertAction:            "Correct the configuration problems. Then run cellcli command:   ALTER CELL VALIDATE CONFIGURATION   Verify that the new configuration is correct."

         name:                   2
         alertMessage:           "RS-7445 [Required IP parameters missing] [Check cellinit.ora] [] [] [] [] [] [] [] [] [] []"
         alertSequenceID:        2
         alertShortName:         ADR
         alertType:              Stateless
         beginTime:              2011-08-09T15:32:47-04:00
         endTime:                
         examinedBy:             
         notificationState:      0
         sequenceBeginTime:      2011-08-09T15:32:47-04:00
         severity:               critical
         alertAction:            "Errors in file /opt/oracle/cell11.2.2.4.2_LINUX.X64_111221/log/diag/asm/cell/dm01cel01/trace/rstrc_11798_4.trc  (incident=1).   Please create an incident package for incident 1 using ADRCI and upload the incident package to Oracle Support.  This can be done as shown below.  From a shell session on cell localhost, enter the following commands:   $ cd /opt/oracle/cell11.2.2.4.2_LINUX.X64_111221/log  $ adrci  adrci> set home diag/asm/cell/dm01cel01  adrci> ips pack incident 1 in /tmp   <<>>  Add this zip file as an attachment to an email message and send the message to Oracle Support."
		 
CellCLI> CREATE THRESHOLD db_io_rq_sm_sec.db123 comparison='>', critical=120
Threshold db_io_rq_sm_sec.db123 successfully created
		 
		 

How to reset the contents of the flash cache on an Exadata cell

How do you reset the contents of the flash cache on an Exadata cell?

This can be done with the following commands:

cellcli
CellCLI: Release 11.2.3.1.1 - Production on Sun Sep 02 07:29:08 EDT 2012

Copyright (c) 2007, 2011, Oracle.  All rights reserved.
Cell Efficiency Ratio: 527

CellCLI> LIST FLASHCACHECONTENT where objectnumber=17425 detail
         cachedKeepSize:         8755838976
         cachedSize:             8757706752
         dbID:                   2080757153
         dbUniqueName:           DBM
         hitCount:               12940
         hoursToExpiration:      21
         missCount:              78488
         objectNumber:           17425
         tableSpaceNumber:       7

You only need to set the immediate cellsrv.cellsrv_flashcache(Reset,0,0,0) event; the event syntax is very similar to that of an ordinary RDBMS:

CellCLI> alter cell events = "immediate cellsrv.cellsrv_flashcache(Reset,0,0,0)"
Cell dm01cel01 successfully altered

CellCLI> LIST FLASHCACHECONTENT where objectnumber=17425 detail

(no rows returned: the event has flushed the cached content for this object)

Some other useful commands:

Dump the flash cache statistics to a trace file:

CellCLI> alter cell events = "immediate cellsrv.cellsrv_flashcache(dumpStats,0,0,12345)"

Reset the statistics:

CellCLI> alter cell events = "immediate cellsrv.cellsrv_flashcache(resetStats,0,0,0)"

Testing CELL_FLASH_CACHE KEEP performance on Exadata

Testing Smart Flash Cache performance on Exadata with CELL_FLASH_CACHE KEEP

imageinfo

Kernel version: 2.6.18-274.18.1.0.1.el5 #1 SMP Thu Feb 9 19:07:16 EST 2012 x86_64
Cell version: OSS_11.2.3.1.1_LINUX.X64_120607
Cell rpm version: cell-11.2.3.1.1_LINUX.X64_120607-1

Active image version: 11.2.3.1.1.120607
Active image activated: 2012-08-13 18:00:09 -0400
Active image status: success
Active system partition on device: /dev/md6
Active software partition on device: /dev/md8

In partition rollback: Impossible

Cell boot usb partition: /dev/sdm1
Cell boot usb version: 11.2.3.1.1.120607

Inactive image version: 11.2.2.4.2.111221
Inactive image activated: 2012-08-09 15:36:25 -0400
Inactive image status: success
Inactive system partition on device: /dev/md5
Inactive software partition on device: /dev/md7

Boot area has rollback archive for the version: 11.2.2.4.2.111221
Rollback to the inactive partitions: Possible

CellCLI> list flashcache detail 
         name:                   dm01cel01_FLASHCACHE
         cellDisk:               FD_15_dm01cel01,FD_11_dm01cel01,FD_09_dm01cel01,FD_14_dm01cel01,FD_00_dm01cel01,FD_12_dm01cel01,FD_03_dm01cel01,FD_01_dm01cel01,FD_13_dm01cel01,FD_07_dm01cel01,FD_04_dm01cel01,FD_08_dm01cel01,FD_05_dm01cel01,FD_10_dm01cel01,FD_02_dm01cel01,FD_06_dm01cel01
         creationTime:           2012-08-13T17:58:02-04:00
         degradedCelldisks:      
         effectiveCacheSize:     365.25G
         id:                     f7118853-fd8d-4df4-917e-738c093530a7
         size:                   365.25G
         status:                 normal




CellCLI> LIST METRICCURRENT WHERE objecttype='FLASHCACHE';
         FC_BYKEEP_OVERWR                FLASHCACHE      0.000 MB
         FC_BYKEEP_OVERWR_SEC            FLASHCACHE      0.000 MB/sec
         FC_BYKEEP_USED                  FLASHCACHE      8,350 MB
         FC_BY_USED                      FLASHCACHE      8,518 MB
         FC_IO_BYKEEP_R                  FLASHCACHE      8,328 MB
         FC_IO_BYKEEP_R_SEC              FLASHCACHE      0.000 MB/sec
         FC_IO_BYKEEP_W                  FLASHCACHE      8,201 MB
         FC_IO_BYKEEP_W_SEC              FLASHCACHE      0.000 MB/sec
         FC_IO_BY_R                      FLASHCACHE      8,700 MB
         FC_IO_BY_R_MISS                 FLASHCACHE      8,704 MB
         FC_IO_BY_R_MISS_SEC             FLASHCACHE      0.000 MB/sec
         FC_IO_BY_R_SEC                  FLASHCACHE      0.000 MB/sec
         FC_IO_BY_R_SKIP                 FLASHCACHE      69,824 MB
         FC_IO_BY_R_SKIP_SEC             FLASHCACHE      0.001 MB/sec
         FC_IO_BY_W                      FLASHCACHE      9,783 MB
         FC_IO_BY_W_SEC                  FLASHCACHE      0.000 MB/sec
         FC_IO_ERRS                      FLASHCACHE      0
         FC_IO_RQKEEP_R                  FLASHCACHE      8,340 IO requests
         FC_IO_RQKEEP_R_MISS             FLASHCACHE      8,340 IO requests
         FC_IO_RQKEEP_R_MISS_SEC         FLASHCACHE      0.0 IO/sec
         FC_IO_RQKEEP_R_SEC              FLASHCACHE      0.0 IO/sec
         FC_IO_RQKEEP_R_SKIP             FLASHCACHE      15 IO requests
         FC_IO_RQKEEP_R_SKIP_SEC         FLASHCACHE      0.0 IO/sec
         FC_IO_RQKEEP_W                  FLASHCACHE      8,343 IO requests
         FC_IO_RQKEEP_W_SEC              FLASHCACHE      0.0 IO/sec
         FC_IO_RQ_R                      FLASHCACHE      38,219 IO requests
         FC_IO_RQ_R_MISS                 FLASHCACHE      19,694 IO requests
         FC_IO_RQ_R_MISS_SEC             FLASHCACHE      0.0 IO/sec
         FC_IO_RQ_R_SEC                  FLASHCACHE      0.0 IO/sec
         FC_IO_RQ_R_SKIP                 FLASHCACHE      246,344 IO requests
         FC_IO_RQ_R_SKIP_SEC             FLASHCACHE      0.1 IO/sec
         FC_IO_RQ_W                      FLASHCACHE      137,932 IO requests
         FC_IO_RQ_W_SEC                  FLASHCACHE      0.0 IO/sec
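A cell-wide read hit ratio can be derived from these metrics. A minimal sketch in Python (assuming FC_IO_RQ_R counts read requests satisfied from flash and FC_IO_RQ_R_MISS counts reads that fell through to disk; FC_IO_RQ_R_SKIP requests bypassed the cache entirely and are excluded; the helper itself is illustrative):

```python
# Cell-wide flash cache read hit ratio from CellCLI FLASHCACHE metrics.
# Assumption: FC_IO_RQ_R = read requests satisfied from the flash cache,
# FC_IO_RQ_R_MISS = read requests that had to go to disk.
def fc_hit_ratio(fc_io_rq_r: int, fc_io_rq_r_miss: int) -> float:
    total = fc_io_rq_r + fc_io_rq_r_miss
    return fc_io_rq_r / total if total else 0.0

# Values taken from the LIST METRICCURRENT output above
print(round(fc_hit_ratio(38_219, 19_694) * 100, 1))  # hit ratio in percent
```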


List the metric definition:

CellCLI> LIST METRICDEFINITION FC_BY_USED DETAIL
         name:                   FC_BY_USED
         description:            "Number of megabytes used on FlashCache"
         metricType:             Instantaneous
         objectType:             FLASHCACHE
         unit:                   MB


SQL> alter table larget storage (cell_flash_cache keep);

Table altered.

SQL> 
SQL> select a.name,b.value 
  2      from v$sysstat a , v$mystat b
  3    where
a.statistic#=b.statistic#
and (a.name in ('physical read total bytes','physical write total bytes',
'cell IO uncompressed bytes') or a.name like 'cell phy%'
or a.name like '%flash cache read hits');   4    5    6    7  

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
physical read total bytes                                            114688
physical write total bytes                                                0
cell physical IO interconnect bytes                                  114688
cell physical IO bytes pushed back due to excessive CPU on cell           0
cell physical IO bytes saved during optimized file creation               0
cell physical IO bytes saved during optimized RMAN file restore           0
cell physical IO bytes eligible for predicate offload                     0
cell physical IO bytes saved by storage index                             0
cell physical IO interconnect bytes returned by smart scan                0
cell IO uncompressed bytes                                                0
cell flash cache read hits                                                0

11 rows selected.

SQL> alter system flush buffer_cache;

System altered.

SQL> select count(*) from larget;

  COUNT(*)
----------
 242778112

SQL> set timing on;
SQL> select a.name,b.value 
  2      from v$sysstat a , v$mystat b
  3    where
a.statistic#=b.statistic#
and (a.name in ('physical read total bytes','physical write total bytes',
'cell IO uncompressed bytes') or a.name like 'cell phy%'
or a.name like '%flash cache read hits');   4    5    6    7  

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
physical read total bytes                                        2.6262E+10
physical write total bytes                                                0
cell physical IO interconnect bytes                              3018270928
cell physical IO bytes pushed back due to excessive CPU on cell           0
cell physical IO bytes saved during optimized file creation               0
cell physical IO bytes saved during optimized RMAN file restore           0
cell physical IO bytes eligible for predicate offload            2.6262E+10
cell physical IO bytes saved by storage index                             0
cell physical IO interconnect bytes returned by smart scan       3018090704
cell IO uncompressed bytes                                       2.6284E+10
cell flash cache read hits                                               55

11 rows selected.

Elapsed: 00:00:00.01
SQL> select count(*) from larget;

  COUNT(*)
----------
 242778112

Elapsed: 00:00:06.83
SQL> select a.name,b.value 
  2      from v$sysstat a , v$mystat b
  3    where
a.statistic#=b.statistic#
and (a.name in ('physical read total bytes','physical write total bytes',
'cell IO uncompressed bytes') or a.name like 'cell phy%'
or a.name like '%flash cache read hits');   4    5    6    7  

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
physical read total bytes                                        5.2525E+10
physical write total bytes                                                0
cell physical IO interconnect bytes                              6036394312
cell physical IO bytes pushed back due to excessive CPU on cell           0
cell physical IO bytes saved during optimized file creation               0
cell physical IO bytes saved during optimized RMAN file restore           0
cell physical IO bytes eligible for predicate offload            5.2524E+10
cell physical IO bytes saved by storage index                             0
cell physical IO interconnect bytes returned by smart scan       6036214088
cell IO uncompressed bytes                                       5.2570E+10
cell flash cache read hits                                            27999

11 rows selected.

Elapsed: 00:00:00.00

Cell server I/O calibration

CellCLI> calibrate force;
Calibration will take a few minutes...
Aggregate random read throughput across all hard disk LUNs: 1936 MBPS
Aggregate random read throughput across all flash disk LUNs: 4148.56 MBPS
Aggregate random read IOs per second (IOPS) across all hard disk LUNs: 4906
Aggregate random read IOs per second (IOPS) across all flash disk LUNs: 142303
Controller read throughput: 1939.98 MBPS
Calibrating hard disks (read only) ...
LUN 0_0  on drive [28:0     ] random read throughput: 168.39 MBPS, and 419 IOPS
LUN 0_1  on drive [28:1     ] random read throughput: 165.32 MBPS, and 412 IOPS
LUN 0_10 on drive [28:10    ] random read throughput: 170.72 MBPS, and 421 IOPS
LUN 0_11 on drive [28:11    ] random read throughput: 169.51 MBPS, and 412 IOPS
LUN 0_2  on drive [28:2     ] random read throughput: 171.15 MBPS, and 421 IOPS
LUN 0_3  on drive [28:3     ] random read throughput: 170.58 MBPS, and 413 IOPS
LUN 0_4  on drive [28:4     ] random read throughput: 166.37 MBPS, and 413 IOPS
LUN 0_5  on drive [28:5     ] random read throughput: 167.69 MBPS, and 424 IOPS
LUN 0_6  on drive [28:6     ] random read throughput: 171.89 MBPS, and 427 IOPS
LUN 0_7  on drive [28:7     ] random read throughput: 167.78 MBPS, and 425 IOPS
LUN 0_8  on drive [28:8     ] random read throughput: 170.74 MBPS, and 423 IOPS
LUN 0_9  on drive [28:9     ] random read throughput: 168.56 MBPS, and 420 IOPS
Calibrating flash disks (read only, note that writes will be significantly slower) ...
LUN 1_0  on drive [FLASH_1_0] random read throughput: 272.06 MBPS, and 19867 IOPS
LUN 1_1  on drive [FLASH_1_1] random read throughput: 272.06 MBPS, and 19892 IOPS
LUN 1_2  on drive [FLASH_1_2] random read throughput: 271.68 MBPS, and 19869 IOPS
LUN 1_3  on drive [FLASH_1_3] random read throughput: 272.40 MBPS, and 19875 IOPS
LUN 2_0  on drive [FLASH_2_0] random read throughput: 272.54 MBPS, and 20650 IOPS
LUN 2_1  on drive [FLASH_2_1] random read throughput: 272.67 MBPS, and 20683 IOPS
LUN 2_2  on drive [FLASH_2_2] random read throughput: 271.98 MBPS, and 20693 IOPS
LUN 2_3  on drive [FLASH_2_3] random read throughput: 272.48 MBPS, and 20683 IOPS
LUN 4_0  on drive [FLASH_4_0] random read throughput: 271.85 MBPS, and 19932 IOPS
LUN 4_1  on drive [FLASH_4_1] random read throughput: 272.22 MBPS, and 19924 IOPS
LUN 4_2  on drive [FLASH_4_2] random read throughput: 272.38 MBPS, and 19908 IOPS
LUN 4_3  on drive [FLASH_4_3] random read throughput: 271.73 MBPS, and 19901 IOPS
LUN 5_0  on drive [FLASH_5_0] random read throughput: 271.61 MBPS, and 19906 IOPS
LUN 5_1  on drive [FLASH_5_1] random read throughput: 271.39 MBPS, and 19897 IOPS
LUN 5_2  on drive [FLASH_5_2] random read throughput: 270.85 MBPS, and 19901 IOPS
LUN 5_3  on drive [FLASH_5_3] random read throughput: 270.99 MBPS, and 19884 IOPS
CALIBRATE results are within an acceptable range.
Calibration has finished.
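As a sanity check, the aggregate hard-disk throughput reported by CALIBRATE should be in the same ballpark as the sum of the per-LUN figures (each LUN is measured on its own, so the sum normally comes out slightly above the concurrent aggregate). A quick check with the numbers above; the 10% tolerance is an arbitrary assumption:

```python
# Compare the per-LUN random read throughputs with the aggregate figure
# reported by CALIBRATE (all numbers copied from the output above).
hd_lun_mbps = [168.39, 165.32, 170.72, 169.51, 171.15, 170.58,
               166.37, 167.69, 171.89, 167.78, 170.74, 168.56]
aggregate_mbps = 1936

total = sum(hd_lun_mbps)
print(round(total, 2))  # sum of the individual per-LUN runs
deviation = abs(total - aggregate_mbps) / aggregate_mbps
assert deviation < 0.10, "per-LUN sum should roughly match the aggregate"
```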




SQL> Select data_object_id from dba_objects where  object_name='LARGET';

DATA_OBJECT_ID
--------------
         17425

SELECT statistic_name, value   
FROM V$SEGMENT_STATISTICS 
     WHERE dataobj#= 17425 AND ts#=7 AND
     statistic_name='optimized physical reads';

STATISTIC_NAME                                                        VALUE
---------------------------------------------------------------- ----------
optimized physical reads                                              43687



CellCLI> LIST FLASHCACHECONTENT where objectnumber=17425 detail
         cachedKeepSize:         8755838976
         cachedSize:             8757706752
         dbID:                   2080757153
         dbUniqueName:           DBM
         hitCount:               12940
         hoursToExpiration:      23
         missCount:              78488
         objectNumber:           17425
         tableSpaceNumber:       7
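The per-object fields above also give a hit ratio for this one segment, hitCount / (hitCount + missCount). A minimal sketch (field names are from the LIST FLASHCACHECONTENT output; the helper itself is illustrative):

```python
# Per-object flash cache hit ratio from LIST FLASHCACHECONTENT fields.
def object_hit_ratio(hit_count: int, miss_count: int) -> float:
    total = hit_count + miss_count
    return hit_count / total if total else 0.0

# hitCount/missCount for objectNumber 17425 from the listing above
print(round(object_hit_ratio(12_940, 78_488) * 100, 1))  # percent
```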

V$SYSSTAT cumulatively records the number of I/O requests that benefited from the flash cache, aggregated across all cell storage servers; the relevant statistic is named 'cell flash cache read hits'. Similar statistics are available in V$SESSTAT and V$MYSTAT.

Another statistic, 'physical read requests optimized', reflects the number of disk I/Os that benefited from Exadata storage indexes together with the cell flash cache.

The 11g AWR report contains new sections that show which database objects and SQL statements have high or low Smart Flash Cache hit ratios:
Segments by UnOptimized Reads
Segments by Optimized Reads
SQL ordered by Physical Reads (UnOptimized)

In the AWR report, read I/O requests that benefit from Smart Flash Cache are called "Optimized Reads", while requests served only from regular SAS disks are called "UnOptimized Reads".

Segments by UnOptimized Reads

  • Total UnOptimized Read Requests: 66,587
  • Captured Segments account for 86.9% of Total
Owner  Tablespace Name  Object Name              Subobject Name  Obj. Type        UnOptimized Reads  %Total
SYS    SYSTEM           AUD$                                     TABLE            38,376             57.63
PIN    PIN02            PURCHASED_PRODUCT_T                      TABLE            5,149              7.73
PIN    PINX02           I_PURCHASED_PRODUCT__ID                  INDEX            3,617              5.43
PIN    PIN00            IDX_TRANS_LOG_MSISDN                     INDEX            2,471              3.71
PIN    PIN02            BILLLOG_T                P_R_02292012    TABLE PARTITION  1,227              1.84

Segments by Optimized Reads

  • Total Optimized Read Requests: 207,547
  • Captured Segments account for 88.9% of Total
Owner  Tablespace Name  Object Name              Subobject Name  Obj. Type  Optimized Reads  %Total
SYS    SYSTEM           AUD$                                     TABLE      92,198           44.42
PIN    PIN02            PURCHASED_PRODUCT_T                      TABLE      23,142           11.15
PIN    PINX02           I_PURCHASED_PRODUCT__ID                  INDEX      10,781           5.19
PIN    PIN00            IDX_TRANS_LOG_MSISDN                     INDEX      9,354            4.51
PIN    PIN02            SERVICE_T                                TABLE      7,818            3.77
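From the two AWR sections you can compute what share of all captured read requests were optimized overall. A small sketch using the totals quoted above:

```python
# Share of read requests that were "Optimized" (served by Smart Flash
# Cache and/or storage index) out of all captured read requests.
def optimized_share(optimized: int, unoptimized: int) -> float:
    total = optimized + unoptimized
    return optimized / total if total else 0.0

# Totals from the AWR sections above
print(round(optimized_share(207_547, 66_587) * 100, 1))  # percent
```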

How to repair and recompile the Data Pump utilities expdp/impdp

The Data Pump utilities expdp/impdp are server-side export/import tools introduced in 10g. Although they are external binaries, expdp/impdp actually rely on internal PL/SQL packages (mainly DBMS_DATAPUMP). In many cases we need to repair or reload the Data Pump components; the method is as follows.

For version 10.1:

1. Catdp.sql orders the installation of all Data Pump components, including
   the Metadata API, which was previously installed separately.
   By default, catproc.sql invokes this script.

SQL >@ $ORACLE_HOME/rdbms/admin/catdp.sql

2. Dbmspump.sql will create the DBMS procedures for Data Pump

SQL >@ $ORACLE_HOME/rdbms/admin/dbmspump.sql

For version 10.2:

1. Catdph.sql will Re-Install DataPump types and views

SQL >@ $ORACLE_HOME/rdbms/admin/catdph.sql 

Note: If XDB is installed, then it is required to run "catmetx.sql" script also.

Use this code to verify if XDB is installed:

SQL> select substr(comp_name,1,30) comp_name, substr(comp_id,1,10)
     comp_id,substr(version,1,12) version,status from dba_registry;

Sample output if XDB is installed:

Oracle XML Database            XDB        -version-   VALID

2. prvtdtde.plb will Re-Install tde_library packages

SQL >@ $ORACLE_HOME/rdbms/admin/prvtdtde.plb 

3. Catdpb.sql will Re-Install DataPump packages

SQL >@ $ORACLE_HOME/rdbms/admin/catdpb.sql 

4. Dbmspump.sql will Re-Install DBMS DataPump objects

SQL >@ $ORACLE_HOME/rdbms/admin/dbmspump.sql 

5. To recompile  invalid objects, if any

SQL >@ $ORACLE_HOME/rdbms/admin/utlrp.sql

For version 11g:

1. Catproc.sql 

SQL >@ $ORACLE_HOME/rdbms/admin/catproc.sql 

2. To recompile invalid objects, if any

SQL >@ $ORACLE_HOME/rdbms/admin/utlrp.sql

Example usage on 10.2.0.5:

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi
PL/SQL Release 10.2.0.5.0 - Production
CORE    10.2.0.5.0      Production
TNS for Linux: Version 10.2.0.5.0 - Production
NLSRTL Version 10.2.0.5.0 - Production

select substr(comp_name,1,30) comp_name, substr(comp_id,1,10)
     comp_id,substr(version,1,12) version,status from dba_registry;

Confirm that XDB is not installed:

@$ORACLE_HOME/rdbms/admin/catdph.sql 

....

Package created.

Grant succeeded.

SQL> @ $ORACLE_HOME/rdbms/admin/prvtdtde.plb 

Library created.

No errors.

Package created.

Synonym created.

Package created.

@ $ORACLE_HOME/rdbms/admin/catdpb.sql 

@ $ORACLE_HOME/rdbms/admin/dbmspump.sql 

@ $ORACLE_HOME/rdbms/admin/utlrp.sql

You can also relink the expdp and impdp binaries:

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk iexpdp iimpdp

- Linking Datapump Export utility (expdp)
rm -f /s01/oracle/product/10.2.0.5/db_1/rdbms/lib/expdp
gcc -o /s01/oracle/product/10.2.0.5/db_1/rdbms/lib/expdp -L/s01/oracle/product/10.2.0.5/db_1/rdbms/lib/ -L/s01/oracle/product/10.2.0.5/db_1/lib/ -L/s01/oracle/product/10.2.0.5/db_1/lib/stubs/   /s01/oracle/product/10.2.0.5/db_1/rdbms/lib/s0udexp.o  /s01/oracle/product/10.2.0.5/db_1/rdbms/lib/defopt.o -ldbtools10 -lclntsh  `cat /s01/oracle/product/10.2.0.5/db_1/lib/ldflags`    -lnsslb10 -lncrypt10 -lnsgr10 -lnzjs10 -ln10 -lnnz10 -lnl10 -lnro10 `cat /s01/oracle/product/10.2.0.5/db_1/lib/ldflags`    -lnsslb10 -lncrypt10 -lnsgr10 -lnzjs10 -ln10 -lnnz10 -lnl10 -lclient10 -lnnetd10  -lvsn10 -lcommon10 -lgeneric10 -lmm -lsnls10 -lnls10  -lcore10 -lsnls10 -lnls10 -lcore10 -lsnls10 -lnls10 -lxml10 -lcore10 -lunls10 -lsnls10 -lnls10 -lcore10 -lnls10 `cat /s01/oracle/product/10.2.0.5/db_1/lib/ldflags`    -lnsslb10 -lncrypt10 -lnsgr10 -lnzjs10 -ln10 -lnnz10 -lnl10 -lnro10 `cat /s01/oracle/product/10.2.0.5/db_1/lib/ldflags`    -lnsslb10 -lncrypt10 -lnsgr10 -lnzjs10 -ln10 -lnnz10 -lnl10 -lclient10 -lnnetd10  -lvsn10 -lcommon10 -lgeneric10   -lsnls10 -lnls10  -lcore10 -lsnls10 -lnls10 -lcore10 -lsnls10 -lnls10 -lxml10 -lcore10 -lunls10 -lsnls10 -lnls10 -lcore10 -lnls10 -lclient10 -lnnetd10  -lvsn10 -lcommon10 -lgeneric10 -lsnls10 -lnls10  -lcore10 -lsnls10 -lnls10 -lcore10 -lsnls10 -lnls10 -lxml10 -lcore10 -lunls10 -lsnls10 -lnls10 -lcore10 -lnls10   `cat /s01/oracle/product/10.2.0.5/db_1/lib/sysliblist` -Wl,-rpath,/s01/oracle/product/10.2.0.5/db_1/lib -lm    `cat /s01/oracle/product/10.2.0.5/db_1/lib/sysliblist` -ldl -lm   -L/s01/oracle/product/10.2.0.5/db_1/lib
mv -f /s01/oracle/product/10.2.0.5/db_1/bin/expdp /s01/oracle/product/10.2.0.5/db_1/bin/expdpO
mv /s01/oracle/product/10.2.0.5/db_1/rdbms/lib/expdp /s01/oracle/product/10.2.0.5/db_1/bin/expdp
chmod 751 /s01/oracle/product/10.2.0.5/db_1/bin/expdp

 - Linking Datapump Import utility (impdp)
rm -f /s01/oracle/product/10.2.0.5/db_1/rdbms/lib/impdp
gcc -o /s01/oracle/product/10.2.0.5/db_1/rdbms/lib/impdp -L/s01/oracle/product/10.2.0.5/db_1/rdbms/lib/ -L/s01/oracle/product/10.2.0.5/db_1/lib/ -L/s01/oracle/product/10.2.0.5/db_1/lib/stubs/   /s01/oracle/product/10.2.0.5/db_1/rdbms/lib/s0udimp.o  /s01/oracle/product/10.2.0.5/db_1/rdbms/lib/defopt.o -ldbtools10 -lclntsh  `cat /s01/oracle/product/10.2.0.5/db_1/lib/ldflags`    -lnsslb10 -lncrypt10 -lnsgr10 -lnzjs10 -ln10 -lnnz10 -lnl10 -lnro10 `cat /s01/oracle/product/10.2.0.5/db_1/lib/ldflags`    -lnsslb10 -lncrypt10 -lnsgr10 -lnzjs10 -ln10 -lnnz10 -lnl10 -lclient10 -lnnetd10  -lvsn10 -lcommon10 -lgeneric10 -lmm -lsnls10 -lnls10  -lcore10 -lsnls10 -lnls10 -lcore10 -lsnls10 -lnls10 -lxml10 -lcore10 -lunls10 -lsnls10 -lnls10 -lcore10 -lnls10 `cat /s01/oracle/product/10.2.0.5/db_1/lib/ldflags`    -lnsslb10 -lncrypt10 -lnsgr10 -lnzjs10 -ln10 -lnnz10 -lnl10 -lnro10 `cat /s01/oracle/product/10.2.0.5/db_1/lib/ldflags`    -lnsslb10 -lncrypt10 -lnsgr10 -lnzjs10 -ln10 -lnnz10 -lnl10 -lclient10 -lnnetd10  -lvsn10 -lcommon10 -lgeneric10   -lsnls10 -lnls10  -lcore10 -lsnls10 -lnls10 -lcore10 -lsnls10 -lnls10 -lxml10 -lcore10 -lunls10 -lsnls10 -lnls10 -lcore10 -lnls10 -lclient10 -lnnetd10  -lvsn10 -lcommon10 -lgeneric10 -lsnls10 -lnls10  -lcore10 -lsnls10 -lnls10 -lcore10 -lsnls10 -lnls10 -lxml10 -lcore10 -lunls10 -lsnls10 -lnls10 -lcore10 -lnls10   `cat /s01/oracle/product/10.2.0.5/db_1/lib/sysliblist` -Wl,-rpath,/s01/oracle/product/10.2.0.5/db_1/lib -lm    `cat /s01/oracle/product/10.2.0.5/db_1/lib/sysliblist` -ldl -lm   -L/s01/oracle/product/10.2.0.5/db_1/lib
mv -f /s01/oracle/product/10.2.0.5/db_1/bin/impdp  /s01/oracle/product/10.2.0.5/db_1/bin/impdpO
mv /s01/oracle/product/10.2.0.5/db_1/rdbms/lib/impdp /s01/oracle/product/10.2.0.5/db_1/bin/impdp
chmod 751 /s01/oracle/product/10.2.0.5/db_1/bin/impdp

[oracle@vrh8 lib]$ ls -l $ORACLE_HOME/bin/*pdp
-rwxr-x--x 1 oracle oinstall 228377 Aug 26 09:15 /s01/oracle/product/10.2.0.5/db_1/bin/expdp
-rwxr-x--x 1 oracle oinstall 233704 Aug 26 09:15 /s01/oracle/product/10.2.0.5/db_1/bin/impdp

How to disable the automatic statistics collection job in 11g

How do you disable the automatic statistics collection job in 11g?

Because in 11g the auto stats gathering job is integrated into the Auto Task framework, disabling it differs from the 10g method:

SQL> select client_name,status from DBA_AUTOTASK_CLIENT;

CLIENT_NAME                                                      STATUS
---------------------------------------------------------------- --------
auto optimizer stats collection                                  ENABLED
auto space advisor                                               ENABLED
sql tuning advisor                                               ENABLED

begin
DBMS_AUTO_TASK_ADMIN.DISABLE(client_name => 'auto optimizer stats collection',
operation => NULL,
window_name => NULL);
end;
/
PL/SQL procedure successfully completed.

SQL>  select client_name,status from DBA_AUTOTASK_CLIENT;

CLIENT_NAME                                                      STATUS
---------------------------------------------------------------- --------
auto optimizer stats collection                                  DISABLED
auto space advisor                                               ENABLED
sql tuning advisor                                               ENABLED

Exadata: Smart Scan (Part 1)

Smart Scan is one of the key features of Exadata, and it depends primarily on the Exadata Storage Server Software:

[oracle@database ~]$ sqlplus  maclean/maclean

SQL*Plus: Release 11.2.0.2.0 Production on Sat Aug 18 22:46:39 2012
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

select a.name,b.value/1024/1024 MB
from v$sysstat a , v$mystat b
where
a.statistic#=b.statistic#
and (a.name in ('physical read total bytes','physical write total bytes',
'cell IO uncompressed bytes') or a.name like 'cell phy%');

NAME                                                                     MB
---------------------------------------------------------------- ----------
physical read total bytes                                          .3984375
physical write total bytes                                                0
cell physical IO interconnect bytes                                .3984375
cell physical IO bytes saved during optimized file creation               0
cell physical IO bytes saved during optimized RMAN file restore           0
cell physical IO bytes eligible for predicate offload                     0
cell physical IO bytes saved by storage index                             0
cell physical IO interconnect bytes returned by smart scan                0
cell IO uncompressed bytes                                                0

SQL> set linesize 200 pagesize 2000

SQL> set autotrace on;

SQL>  select  /*+ OPT_PARAM('cell_offload_processing' 'false') */ count(*) from sales
  2   where time_id between '01-JAN-98' and '31-DEC-98'
  3   and amount_sold>=101;

  COUNT(*)
----------
     30661

Execution Plan
----------------------------------------------------------
Plan hash value: 8150843

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    22 |  1238   (2)| 00:00:15 |
|   1 |  SORT AGGREGATE             |       |     1 |    22 |            |          |
|*  2 |   FILTER                    |       |       |       |            |          |
|*  3 |    TABLE ACCESS STORAGE FULL| SALES | 14685 |   315K|  1238   (2)| 00:00:15 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(TO_DATE('01-JAN-98')<=TO_DATE('31-DEC-98'))
   3 - filter("AMOUNT_SOLD">=101 AND "TIME_ID">='01-JAN-98' AND
              "TIME_ID"<='31-DEC-98')

Note
-----
   - dynamic sampling used for this statement (level=2)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       4437  consistent gets
       4433  physical reads
          0  redo size
        424  bytes sent via SQL*Net to client
        419  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL> select a.name,b.value/1024/1024 MB
  2  from v$sysstat a , v$mystat b
  3  where
  4  a.statistic#=b.statistic#
  5  and (a.name in ('physical read total bytes','physical write total bytes',
  6  'cell IO uncompressed bytes') or a.name like 'cell phy%');

NAME                                                                     MB
---------------------------------------------------------------- ----------
physical read total bytes                                         35.484375
physical write total bytes                                                0
cell physical IO interconnect bytes                               35.484375
cell physical IO bytes saved during optimized file creation               0
cell physical IO bytes saved during optimized RMAN file restore           0
cell physical IO bytes eligible for predicate offload                     0
cell physical IO bytes saved by storage index                             0
cell physical IO interconnect bytes returned by smart scan                0
cell IO uncompressed bytes                                                0

9 rows selected.

Above, we disabled Smart Scan with the core Exadata parameter cell_offload_processing=false (via the OPT_PARAM hint). Now let's look at the statistics with Smart Scan enabled:

select count(*) from sales
 where time_id between '01-JAN-98' and '31-DEC-98'
 and amount_sold>=101;


PARSING IN CURSOR #8100532 len=100 dep=0 uid=93 oct=3 lid=93 tim=1345357700828975 hv=1616885803 ad='3f07a6a0' sqlid='7z3cz8ph5zf1b'
select  count(*) from sales
 where time_id between '01-JAN-98' and '31-DEC-98'
 and amount_sold>=101
END OF STMT
PARSE #8100532:c=149978,e=1430146,p=271,cr=310,cu=0,mis=1,r=0,dep=0,og=1,plh=8150843,tim=1345357700828975
EXEC #8100532:c=0,e=31,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=8150843,tim=1345357700829041
WAIT #8100532: nam='SQL*Net message to client' ela= 2 driver id=1650815232 #bytes=1 p3=0 obj#=79496 tim=1345357700829074
WAIT #8100532: nam='cell smart table scan' ela= 502 cellhash#=1375519866 p2=0 p3=0 obj#=79496 tim=1345357700830804
WAIT #8100532: nam='cell smart table scan' ela= 20243 cellhash#=1375519866 p2=0 p3=0 obj#=79496 tim=1345357700851709
WAIT #8100532: nam='cell smart table scan' ela= 32442 cellhash#=1375519866 p2=0 p3=0 obj#=79496 tim=1345357700884378
WAIT #8100532: nam='cell smart table scan' ela= 6315 cellhash#=1375519866 p2=0 p3=0 obj#=79496 tim=1345357700891113
WAIT #8100532: nam='cell smart table scan' ela= 17460 cellhash#=1375519866 p2=0 p3=0 obj#=79496 tim=1345357700909251



SQL> select a.name,b.value/1024/1024 MB
  2  from v$sysstat a , v$mystat b
  3  where
  4  a.statistic#=b.statistic#
  5  and (a.name in ('physical read total bytes','physical write total bytes',
  6  'cell IO uncompressed bytes') or a.name like 'cell phy%');

NAME                                                                     MB
---------------------------------------------------------------- ----------
physical read total bytes                                         53.484375
physical write total bytes                                                0
cell physical IO interconnect bytes                                 .484375
cell physical IO bytes saved during optimized file creation               0
cell physical IO bytes saved during optimized RMAN file restore           0
cell physical IO bytes eligible for predicate offload             53.484375
cell physical IO bytes saved by storage index                             0
cell physical IO interconnect bytes returned by smart scan          .484375
cell IO uncompressed bytes                                        53.484375

9 rows selected.
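The two counters that matter here are the bytes eligible for predicate offload (what was scanned on the cells) and the interconnect bytes returned by smart scan (what actually crossed the wire). A minimal sketch of the usual "offload efficiency" arithmetic, using the values printed above (the formula is a common rule of thumb, not something defined in the post):

```python
# Values taken from the v$mystat output above (in MB).
eligible_mb = 53.484375   # cell physical IO bytes eligible for predicate offload
returned_mb = 0.484375    # cell physical IO interconnect bytes returned by smart scan

# Fraction of scanned data that never had to travel over the interconnect.
efficiency = 1 - returned_mb / eligible_mb
print(f"offload efficiency: {efficiency:.1%}")
```

With these numbers, smart scan filtered out roughly 99% of the scanned data on the storage cells before it reached the database servers.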

How does an incremental checkpoint update the controlfile?

A reader asked on T.askmaclean.com how incremental checkpoints update the controlfile:

Know more about checkpoint

Checkpoints come in many flavors: full, file, thread, parallel query, object, incremental, and logfile switch.

Each kind of checkpoint has its own characteristics. For example, an incremental checkpoint has CKPT update the controlfile every 3 seconds but does not touch the datafile headers, whereas a FULL CHECKPOINT must complete immediately (synchronously) and updates both the controlfile and the datafile headers.

"An incremental checkpoint has CKPT update the controlfile every 3 seconds."
>> My question: how can I view the SCN being written to the controlfile? Is there any way to query it other than dumping the controlfile?

The following demonstration should answer the question:

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi
PL/SQL Release 10.2.0.5.0 - Production
CORE    10.2.0.5.0      Production
TNS for Linux: Version 10.2.0.5.0 - Production
NLSRTL Version 10.2.0.5.0 - Production

SQL> oradebug setmypid;
Statement processed.

SQL> oradebug dump controlf 4;
Statement processed.

SQL> oradebug tracefile_name ;
/s01/admin/G10R25/udump/g10r25_ora_4660.trc

Open another session, wait 6 seconds, and take a second controlfile dump:

SQL>  exec dbms_lock.sleep(6);
oradebug setmypid;
oradebug dump controlf 4;
oradebug tracefile_name ;

PL/SQL procedure successfully completed.

SQL> Statement processed.
SQL> Statement processed.
SQL> /s01/admin/G10R25/udump/g10r25_ora_4663.trc

Compare the two controlfile dump files taken 6 seconds apart:

[oracle@vrh8 udump]$ diff /s01/admin/G10R25/udump/g10r25_ora_4660.trc /s01/admin/G10R25/udump/g10r25_ora_4663.trc
1c1
< /s01/admin/G10R25/udump/g10r25_ora_4660.trc
---
> /s01/admin/G10R25/udump/g10r25_ora_4663.trc
13c13
< Unix process pid: 4660, image: oracle@vrh8.oracle.com (TNS V1-V3)
---
> Unix process pid: 4663, image: oracle@vrh8.oracle.com (TNS V1-V3)
15,18c15,19
< *** ACTION NAME:() 2012-07-22 07:59:08.215
< *** MODULE NAME:(sqlplus@vrh8.oracle.com (TNS V1-V3)) 2012-07-22 07:59:08.215
< *** SERVICE NAME:(SYS$USERS) 2012-07-22 07:59:08.215
< *** SESSION ID:(159.7) 2012-07-22 07:59:08.215
---
> *** 2012-07-22 07:59:31.779
> *** ACTION NAME:() 2012-07-22 07:59:31.779
> *** MODULE NAME:(sqlplus@vrh8.oracle.com (TNS V1-V3)) 2012-07-22 07:59:31.779
> *** SERVICE NAME:(SYS$USERS) 2012-07-22 07:59:31.779
> *** SESSION ID:(159.9) 2012-07-22 07:59:31.779
96,98c97,99
< THREAD #1 - status:0x2 flags:0x0 dirty:56
< low cache rba:(0x1a.3.0) on disk rba:(0x1a.121.0)
< on disk scn: 0x0000.013fe7a8 07/22/2012 07:59:02
---
> THREAD #1 - status:0x2 flags:0x0 dirty:57
> low cache rba:(0x1a.3.0) on disk rba:(0x1a.148.0)
> on disk scn: 0x0000.013fe7c2 07/22/2012 07:59:27
100,101c101,102
< heartbeat: 789262462 mount id: 2675014163
< Flashback log tail log# 15 thread# 1 seq 229 block 274 byte 0
---
> heartbeat: 789262470 mount id: 2675014163
> Flashback log tail log# 15 thread# 1 seq 229 block 275 byte 0
2490c2491
<   V$RMAN_STATUS: recid=140734752341296, stamp=140734752341288
---
>   V$RMAN_STATUS: recid=140733792718464, stamp=140733792718456
2501c2502
<   V$RMAN_STATUS: recid=140734752341296, stamp=140734752341288
---
>   V$RMAN_STATUS: recid=140733792718464, stamp=140733792718456
2511c2512
<   V$RMAN_STATUS: recid=140734752341296, stamp=140734752341288
---
>   V$RMAN_STATUS: recid=140733792718464, stamp=140733792718456
2521c2522
<   V$RMAN_STATUS: recid=140734752341296, stamp=140734752341288
---
>   V$RMAN_STATUS: recid=140733792718464, stamp=140733792718456
2531c2532
<   V$RMAN_STATUS: recid=140734752341296, stamp=140734752341288
---
>   V$RMAN_STATUS: recid=140733792718464, stamp=140733792718456

Apart from some differing V$RMAN_STATUS records, the main difference lies in the CHECKPOINT PROGRESS RECORDS: CKPT heartbeats the controlfile every 3 seconds, updating the CHECKPOINT PROGRESS RECORDS.

First controlfile dump:

***************************************************************************
CHECKPOINT PROGRESS RECORDS
***************************************************************************
(size = 8180, compat size = 8180, section max = 11, section in-use = 0,
last-recid= 0, old-recno = 0, last-recno = 0)
(extent = 1, blkno = 2, numrecs = 11)
THREAD #1 - status:0x2 flags:0x0 dirty:56
low cache rba:(0x1a.3.0) on disk rba:(0x1a.121.0)
on disk scn: 0x0000.013fe7a8 07/22/2012 07:59:02
resetlogs scn: 0x0000.01394f1a 07/19/2012 07:27:21
heartbeat: 789262462 mount id: 2675014163
Flashback log tail log# 15 thread# 1 seq 229 block 274 byte 0
THREAD #2 - status:0x0 flags:0x0 dirty:0
low cache rba:(0x0.0.0) on disk rba:(0x0.0.0)
on disk scn: 0x0000.00000000 01/01/1988 00:00:00
resetlogs scn: 0x0000.00000000 01/01/1988 00:00:00
heartbeat: 0 mount id: 0
Flashback log tail log# 0 thread# 0 seq 0 block 0 byte 0

Second controlfile dump:

***************************************************************************
CHECKPOINT PROGRESS RECORDS
***************************************************************************
(size = 8180, compat size = 8180, section max = 11, section in-use = 0,
last-recid= 0, old-recno = 0, last-recno = 0)
(extent = 1, blkno = 2, numrecs = 11)
THREAD #1 - status:0x2 flags:0x0 dirty:57
low cache rba:(0x1a.3.0) on disk rba:(0x1a.148.0)
on disk scn: 0x0000.013fe7c2 07/22/2012 07:59:27
resetlogs scn: 0x0000.01394f1a 07/19/2012 07:27:21
heartbeat: 789262470 mount id: 2675014163
Flashback log tail log# 15 thread# 1 seq 229 block 275 byte 0
THREAD #2 - status:0x0 flags:0x0 dirty:0
low cache rba:(0x0.0.0) on disk rba:(0x0.0.0)
on disk scn: 0x0000.00000000 01/01/1988 00:00:00
resetlogs scn: 0x0000.00000000 01/01/1988 00:00:00

The differences are in:
on disk rba
on disk scn
heartbeat
Flashback log tail log#

In other words, CKPT heartbeats the controlfile every 3 seconds, updating the on disk rba, on disk scn, and heartbeat (plus the Flashback log tail, if flashback logging is enabled), but it does not update the database's CURRENT SCN.

If you want to watch the on disk scn that CKPT updates every 3 seconds, you can query the internal view X$KCCCP ([K]ernel [C]ache [C]ontrolfile management [c]heckpoint [p]rogress), which exposes the Checkpoint Progress Records:

SQL> desc x$kcccp;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ADDR                                               RAW(8)
 INDX                                               NUMBER
 INST_ID                                            NUMBER
 CPTNO                                              NUMBER
 CPSTA                                              NUMBER
 CPFLG                                              NUMBER
 CPDRT                                              NUMBER
 CPRDB                                              NUMBER
 CPLRBA_SEQ                                         NUMBER
 CPLRBA_BNO                                         NUMBER
 CPLRBA_BOF                                         NUMBER
 CPODR_SEQ                                          NUMBER
 CPODR_BNO                                          NUMBER
 CPODR_BOF                                          NUMBER
 CPODS                                              VARCHAR2(16)
 CPODT                                              VARCHAR2(20)
 CPODT_I                                            NUMBER
 CPHBT                                              NUMBER
 CPRLS                                              VARCHAR2(16)
 CPRLC                                              NUMBER
 CPMID                                              NUMBER
 CPSDR_SEQ                                          NUMBER
 CPSDR_BNO                                          NUMBER
 CPSDR_ADB                                          NUMBER

Here cpods is the "on disk scn", cpodr_seq/cpodr_bno/cpodr_bof together form the "on disk rba", and CPHBT is the heartbeat number:

SQL> select cpods "on disk scn",
  2         to_char(cpodr_seq, 'xxxxxx') || ',' ||
  3         to_char(cpodr_bno, 'xxxxxxxxx') || ',' ||
  4         to_char(cpodr_bof, 'xxxxxxxxx') "on disk rba",
  5         CPHBT "heartbeat number"
  6    from x$kcccp;

on disk scn      on disk rba                    heartbeat number
---------------- ------------------------------ ----------------
20968609              1a,      240a,         0         789263152
0                      0,         0,         0                 0
0                      0,         0,         0                 0
0                      0,         0,         0                 0
0                      0,         0,         0                 0
0                      0,         0,         0                 0
0                      0,         0,         0                 0
0                      0,         0,         0                 0

8 rows selected.

SQL> 
SQL> 
SQL> exec dbms_lock.sleep(3);

PL/SQL procedure successfully completed.

SQL> select cpods "on disk scn",
  2         to_char(cpodr_seq, 'xxxxxx') || ',' ||
  3         to_char(cpodr_bno, 'xxxxxxxxx') || ',' ||
  4         to_char(cpodr_bof, 'xxxxxxxxx') "on disk rba",
  5         CPHBT "heartbeat number"
  6    from x$kcccp;

on disk scn      on disk rba                    heartbeat number
---------------- ------------------------------ ----------------
20968613              1a,      2410,         0         789263154
0                      0,         0,         0                 0
0                      0,         0,         0                 0
0                      0,         0,         0                 0
0                      0,         0,         0                 0
0                      0,         0,         0                 0
0                      0,         0,         0                 0
0                      0,         0,         0                 0

8 rows selected.

SQL> 
SQL> exec dbms_lock.sleep(3);

PL/SQL procedure successfully completed.

SQL> select cpods "on disk scn",
  2         to_char(cpodr_seq, 'xxxxxx') || ',' ||
  3         to_char(cpodr_bno, 'xxxxxxxxx') || ',' ||
  4         to_char(cpodr_bof, 'xxxxxxxxx') "on disk rba",
  5         CPHBT "heartbeat number"
  6    from x$kcccp;

on disk scn      on disk rba                    heartbeat number
---------------- ------------------------------ ----------------
20968623              1a,      241e,         0         789263156
0                      0,         0,         0                 0
0                      0,         0,         0                 0
0                      0,         0,         0                 0
0                      0,         0,         0                 0
0                      0,         0,         0                 0
0                      0,         0,         0                 0
0                      0,         0,         0                 0

8 rows selected.
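The query above formats the RBA components in hex; an RBA is a (redo log sequence#, block#, byte offset) triple. The small decoder below is a hypothetical helper, not part of the original post, shown only to make the hex output readable:

```python
def decode_rba(seq_hex, bno_hex, bof_hex):
    """Decode hex-formatted on-disk RBA columns (as printed by the x$kcccp
    query above) into decimal (log sequence#, block#, byte offset)."""
    return int(seq_hex, 16), int(bno_hex, 16), int(bof_hex, 16)

# The first sampled on-disk rba from the output above: 1a, 240a, 0
print(decode_rba("1a", "240a", "0"))
```

So the first sample's on-disk RBA sits in redo log sequence 26, block 9226, byte 0; between samples only the block number and the SCN advance, which matches the 3-second heartbeat behavior.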

Diagnosing ORA-08103 errors

The best starting point for diagnosing ORA-08103 is an ERROR STACK TRACE generated when the error is raised: the trace records the OBJ and OBJD of the object that triggered the 8103, which helps us locate the possibly corrupted object.[Read More]

How Exadata Hybrid Columnar Compression handles INSERT and UPDATE

Hybrid Columnar Compression (HCC) is one of the core features of the Exadata database machine. Unlike the generic Advanced Compression option, HCC is available only on the Exadata platform. With HCC, data is stored compressed in CUs (compression units), each spanning multiple database blocks. This design reflects the fact that a single block is a poor unit for column-value compression: only when a CU spans several blocks can the columnar compression algorithms achieve good ratios.[Read More]
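The intuition behind CUs can be demonstrated with a generic compressor. The sketch below is illustrative only and has nothing to do with Oracle's actual HCC algorithms: it merely shows that serializing the same rows column-by-column (as a CU effectively does) gives a compressor long runs of similar values to work with, while row-by-row storage interleaves them.

```python
import zlib

# Synthetic rows: a unique id, a highly repetitive column, a pseudo-random column.
rows = [(i, "CALIFORNIA", i * 7919 % 9973) for i in range(2000)]

# Row-major layout: one record per line, like ordinary row storage.
row_major = "\n".join("|".join(map(str, r)) for r in rows).encode()
# Column-major layout: all values of each column together, like a CU.
col_major = "\n".join(",".join(str(r[c]) for r in rows) for c in range(3)).encode()

row_size = len(zlib.compress(row_major, 9))
col_size = len(zlib.compress(col_major, 9))
print(row_size, col_size)
```

On data like this the column-major layout compresses noticeably better, because the 2000 copies of "CALIFORNIA" collapse to almost nothing when they are adjacent.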

Creating a new DiskGroup on Exadata

The steps to create a new ASM diskgroup on Exadata are roughly as follows:

1. Use dcli -g /home/oracle/cell_group -l root cellcli -e list griddisk to find the active griddisks:

[root@dm01db01 ~]# dcli -g /home/oracle/cell_group -l root cellcli -e list griddisk
dm01cel01: DATA_DM01_CD_00_dm01cel01     active
dm01cel01: DATA_DM01_CD_01_dm01cel01     active
dm01cel01: DATA_DM01_CD_02_dm01cel01     active
dm01cel01: DATA_DM01_CD_03_dm01cel01     active
dm01cel01: DATA_DM01_CD_04_dm01cel01     active
dm01cel01: DATA_DM01_CD_05_dm01cel01     active
dm01cel01: DATA_DM01_CD_06_dm01cel01     active
dm01cel01: DATA_DM01_CD_07_dm01cel01     active
dm01cel01: DATA_DM01_CD_08_dm01cel01     active
dm01cel01: DATA_DM01_CD_09_dm01cel01     active
dm01cel01: DATA_DM01_CD_10_dm01cel01     active
dm01cel01: DATA_DM01_CD_11_dm01cel01     active
dm01cel01: DBFS_DG_CD_02_dm01cel01       active
dm01cel01: DBFS_DG_CD_03_dm01cel01       active
dm01cel01: DBFS_DG_CD_04_dm01cel01       active
dm01cel01: DBFS_DG_CD_05_dm01cel01       active
dm01cel01: DBFS_DG_CD_06_dm01cel01       active
dm01cel01: DBFS_DG_CD_07_dm01cel01       active
dm01cel01: DBFS_DG_CD_08_dm01cel01       active
dm01cel01: DBFS_DG_CD_09_dm01cel01       active
dm01cel01: DBFS_DG_CD_10_dm01cel01       active
dm01cel01: DBFS_DG_CD_11_dm01cel01       active
dm01cel01: RECO_DM01_CD_00_dm01cel01     active
dm01cel01: RECO_DM01_CD_01_dm01cel01     active
dm01cel01: RECO_DM01_CD_02_dm01cel01     active
dm01cel01: RECO_DM01_CD_03_dm01cel01     active
dm01cel01: RECO_DM01_CD_04_dm01cel01     active
dm01cel01: RECO_DM01_CD_05_dm01cel01     active
dm01cel01: RECO_DM01_CD_06_dm01cel01     active
dm01cel01: RECO_DM01_CD_07_dm01cel01     active
dm01cel01: RECO_DM01_CD_08_dm01cel01     active
dm01cel01: RECO_DM01_CD_09_dm01cel01     active
dm01cel01: RECO_DM01_CD_10_dm01cel01     active
dm01cel01: RECO_DM01_CD_11_dm01cel01     active
dm01cel02: DATA_DM01_CD_00_dm01cel02     active
dm01cel02: DATA_DM01_CD_01_dm01cel02     active
dm01cel02: DATA_DM01_CD_02_dm01cel02     active
dm01cel02: DATA_DM01_CD_03_dm01cel02     active
dm01cel02: DATA_DM01_CD_04_dm01cel02     active
dm01cel02: DATA_DM01_CD_05_dm01cel02     active
dm01cel02: DATA_DM01_CD_06_dm01cel02     active
dm01cel02: DATA_DM01_CD_07_dm01cel02     active
dm01cel02: DATA_DM01_CD_08_dm01cel02     active
dm01cel02: DATA_DM01_CD_09_dm01cel02     active
dm01cel02: DATA_DM01_CD_10_dm01cel02     active
dm01cel02: DATA_DM01_CD_11_dm01cel02     active
dm01cel02: DBFS_DG_CD_02_dm01cel02       active
dm01cel02: DBFS_DG_CD_03_dm01cel02       active
dm01cel02: DBFS_DG_CD_04_dm01cel02       active
dm01cel02: DBFS_DG_CD_05_dm01cel02       active
dm01cel02: DBFS_DG_CD_06_dm01cel02       active
dm01cel02: DBFS_DG_CD_07_dm01cel02       active
dm01cel02: DBFS_DG_CD_08_dm01cel02       active
dm01cel02: DBFS_DG_CD_09_dm01cel02       active
dm01cel02: DBFS_DG_CD_10_dm01cel02       active
dm01cel02: DBFS_DG_CD_11_dm01cel02       active
dm01cel02: RECO_DM01_CD_00_dm01cel02     active
dm01cel02: RECO_DM01_CD_01_dm01cel02     active
dm01cel02: RECO_DM01_CD_02_dm01cel02     active
dm01cel02: RECO_DM01_CD_03_dm01cel02     active
dm01cel02: RECO_DM01_CD_04_dm01cel02     active
dm01cel02: RECO_DM01_CD_05_dm01cel02     active
dm01cel02: RECO_DM01_CD_06_dm01cel02     active
dm01cel02: RECO_DM01_CD_07_dm01cel02     active
dm01cel02: RECO_DM01_CD_08_dm01cel02     active
dm01cel02: RECO_DM01_CD_09_dm01cel02     active
dm01cel02: RECO_DM01_CD_10_dm01cel02     active
dm01cel02: RECO_DM01_CD_11_dm01cel02     active
dm01cel03: DATA_DM01_CD_00_dm01cel03     active
dm01cel03: DATA_DM01_CD_01_dm01cel03     active
dm01cel03: DATA_DM01_CD_02_dm01cel03     active
dm01cel03: DATA_DM01_CD_03_dm01cel03     active
dm01cel03: DATA_DM01_CD_04_dm01cel03     active
dm01cel03: DATA_DM01_CD_05_dm01cel03     active
dm01cel03: DATA_DM01_CD_06_dm01cel03     active
dm01cel03: DATA_DM01_CD_07_dm01cel03     active
dm01cel03: DATA_DM01_CD_08_dm01cel03     active
dm01cel03: DATA_DM01_CD_09_dm01cel03     active
dm01cel03: DATA_DM01_CD_10_dm01cel03     active
dm01cel03: DATA_DM01_CD_11_dm01cel03     active
dm01cel03: DBFS_DG_CD_02_dm01cel03       active
dm01cel03: DBFS_DG_CD_03_dm01cel03       active
dm01cel03: DBFS_DG_CD_04_dm01cel03       active
dm01cel03: DBFS_DG_CD_05_dm01cel03       active
dm01cel03: DBFS_DG_CD_06_dm01cel03       active
dm01cel03: DBFS_DG_CD_07_dm01cel03       active
dm01cel03: DBFS_DG_CD_08_dm01cel03       active
dm01cel03: DBFS_DG_CD_09_dm01cel03       active
dm01cel03: DBFS_DG_CD_10_dm01cel03       active
dm01cel03: DBFS_DG_CD_11_dm01cel03       active
dm01cel03: RECO_DM01_CD_00_dm01cel03     active
dm01cel03: RECO_DM01_CD_01_dm01cel03     active
dm01cel03: RECO_DM01_CD_02_dm01cel03     active
dm01cel03: RECO_DM01_CD_03_dm01cel03     active
dm01cel03: RECO_DM01_CD_04_dm01cel03     active
dm01cel03: RECO_DM01_CD_05_dm01cel03     active
dm01cel03: RECO_DM01_CD_06_dm01cel03     active
dm01cel03: RECO_DM01_CD_07_dm01cel03     active
dm01cel03: RECO_DM01_CD_08_dm01cel03     active
dm01cel03: RECO_DM01_CD_09_dm01cel03     active
dm01cel03: RECO_DM01_CD_10_dm01cel03     active
dm01cel03: RECO_DM01_CD_11_dm01cel03     active

Note: if no suitable griddisks exist, you can rebuild them with 'cellcli -e drop griddisk' and 'cellcli -e create griddisk', but do not casually drop the griddisks whose names start with DBFS_DG.

2. Log in to the ASM instance and create the disk group.

If you do not know the cell IPs, you can find them in the following configuration file:

[root@dm01db02 ~]# cat /etc/oracle/cell/network-config/cellip.ora 
cell="192.168.64.131"
cell="192.168.64.132"
cell="192.168.64.133"

SQL> create diskgroup DATA_MAC normal  redundancy 
  2  DISK
  3  'o/192.168.64.131/RECO_DM01_CD_*_dm01cel01'
  4  ,'o/192.168.64.132/RECO_DM01_CD_*_dm01cel02'
  5  ,'o/192.168.64.133/RECO_DM01_CD_*_dm01cel03'
  6  attribute
  7  'AU_SIZE'='4M',
  8  'CELL.SMART_SCAN_CAPABLE'='TRUE',
  9  'compatible.rdbms'='11.2.0.2',
 10  'compatible.asm'='11.2.0.2'
 11  /

3. Mount the newly created diskgroup:

ALTER DISKGROUP DATA_MAC mount ;

4. Alternatively, use crsctl start/stop resource ora.DATA_MAC.dg to control the resource.
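The DISK clauses in the CREATE DISKGROUP statement follow a fixed pattern: 'o/<cell ip>/<griddisk name pattern>'. A small hypothetical helper (not part of the post) can generate them from the cellip.ora contents shown above, which cuts down on typos when there are many cells:

```python
import re

# Contents copied from /etc/oracle/cell/network-config/cellip.ora above.
CELLIP_ORA = '''cell="192.168.64.131"
cell="192.168.64.132"
cell="192.168.64.133"'''

def disk_paths(cellip_text, pattern):
    """Build 'o/<cell ip>/<pattern>' disk strings for CREATE DISKGROUP."""
    ips = re.findall(r'cell="([^"]+)"', cellip_text)
    return ["o/%s/%s" % (ip, pattern) for ip in ips]

for p in disk_paths(CELLIP_ORA, "RECO_DM01_CD_*"):
    print("'%s'" % p)
```

Note that in the statement above each cell's pattern also carries the cell-name suffix (e.g. RECO_DM01_CD_*_dm01cel01); the generic pattern here is an assumption for brevity.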

Lifting the fog of the Oracle optimizer: exploring the secrets of Histograms

Intended audience: anyone interested in performance tuning and the CBO optimizer, or eager to improve their SQL tuning skills.

Current official version of the presentation materials for download:

[Maclean Liu tech talk] Lifting the fog of the Oracle CBO optimizer: exploring the secrets of Histograms_0321.pdf (1.1 MB, downloaded 749 times)

[Read More]
About

The Maclean Liu
Advanced Customer Services
