Tuesday Mar 19, 2013

In-Depth Understanding of Oracle Universal Installer (OUI), Text Version

This document covers some of the finer points of Oracle Universal Installer (OUI). The slide (Slideshare) version is available at http://www.askmaclean.com/archives/%E6%B7%B1%E5%85%A5%E7%90%86%E8%A7%A3oracle-universal-installer-oui.html ; here is the text version.

[Read More]

The Impact of asm_power_limit on I/O

According to the note Oracle Sun Database Machine High Availability Best Practices (Doc ID 1069521.1), Oracle recommends setting asm_power_limit to 4 on the Exadata Database Machine; this recommended value minimizes the impact on applications of the I/O overhead generated by ASM rebalance on Exadata.

Rebalancing ASM data faster (a higher ASM_POWER_LIMIT) potentially increases the impact on applications; the maximum asm_power_limit is 11.

Tests show that with ASM_POWER_LIMIT=1, adding or removing an Exadata cell costs about 30 MB/s of throughput, while with ASM_POWER_LIMIT=11 the cost is about 330 MB/s.

This throughput impact does not include the compaction phase; compaction helps performance but is not required for data correctness.
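
Neither the note nor this summary shows the commands involved; as a minimal sketch (the diskgroup name DATA and the power values are illustrative assumptions), the rebalance power can be checked and adjusted on the ASM instance, and a running rebalance watched, roughly like this:

SQL> show parameter asm_power_limit

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------
asm_power_limit                      integer     4

SQL> -- override the power for one rebalance of diskgroup DATA only
SQL> alter diskgroup DATA rebalance power 4;

Diskgroup altered.

SQL> -- watch the estimated time remaining while the rebalance runs
SQL> select group_number, operation, state, power, est_minutes
  2    from v$asm_operation;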

http://www.askmaclean.com/archives/asm_power_limit%E5%AF%B9io%E7%9A%84%E5%BD%B1%E5%93%8D.html 

Understanding the MAP Files Generated by the AMDU Tool

AMDU is a metadata dump tool Oracle developed for ASM; its full name is ASM Metadata Dump Utility (AMDU). In "Using AMDU to Extract Data Files from a Diskgroup That Cannot Be Mounted" we covered how AMDU extracts database files; today we look at what the MAP files generated by AMDU's dump mode mean. [Read More]

The Secret of DML UPDATE/DELETE and CR Consistent Reads

This question comes from a thread on the OTN Chinese forum, "Lost updates in a large transaction":

The environment is Oracle 10.2.0.4 on Linux x64.
There is a large table, on the order of a million rows, with col1 = 0 in every row.
t1: Transaction A starts and updates col1 of every row to 1.
t2: Transaction B starts, updates one row (located by primary key) to 2, then commits.
t3: Transaction A finishes and commits.
t4: Querying the table shows col1 = 1 in every row; transaction B's update has been lost.
Why is that, and what is the logic behind it?
Thanks!

What I want to point out about this question is that Oracle provides read consistency for transactions as well, known as transaction-level read consistency:

The database can also provide read consistency to all queries in a transaction, known as transaction-level read consistency. In this case, each statement in a transaction sees data from the same point in time, which is the time at which the transaction began.

Oracle Database Concepts
11g Release 2 (11.2)
Part Number E10713-04

To prove and demonstrate transaction-level read consistency, I designed the following experiment:

There is a large table with two columns (T1, T2). Among its rows are one with T1=600000, T2='U2' and one with T1=900000, T2='U1'; T1 is the primary key.

At time A), session A locks the row with T1=600000 using SELECT ... FOR UPDATE.

At the later time B), session B issues update TAB set t2='U3' where t2='U2', i.e. it tries to update every row with T2='U2'. Because the row T1=600000, T2='U2' happens to be locked by session A, session B keeps waiting on 'enq: TX - row lock contention'; the row with T1=900000, T2='U1' sits in a block that session B will only reach after it gets past the wait on the T1=600000, T2='U2' row.

At the later time C), session C runs update TAB set t2='U2' where t1=900000 and commits, changing the row T1=900000, T2='U1' into T1=900000, T2='U2', so that it now satisfies session B's update predicate t2='U2'.

At time D), session A commits and releases the lock, and session B resumes. When it reaches the T1=900000 row there are two possible outcomes:

1) If the UPDATE DML obeys transaction-level read consistency, it should see the before-image of the block as of session B's transaction-start environment SCN (env SCN). In other words, even though session C's update made t2='U2' satisfy the predicate, for the sake of consistency session B still has to apply undo to the block holding that row until it reaches the best CR block as of session B's transaction start time; in that consistent-read image t2='U1', which does not satisfy session B's update predicate, so session B updates only one row in the whole transaction, the row T1=600000, T2='U2'.

2) If the UPDATE DML does not obey transaction-level read consistency, session B sees the current read-committed image, i.e. the block image already updated and committed by session C. In that image the row is T1=900000, T2='U2', which satisfies session B's predicate, so session B updates two rows.

Let's look at what the experiment actually shows.

Building the test environment:

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE    11.2.0.3.0      Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production

create table update_cr tablespace users as select rownum t1, 'MACLEAN     ' T2
from dual connect by level <= 1000000;

Table created.

SQL> update update_cr set t2='U2' where t1=600000;

1 row updated.

SQL> update update_cr set t2='U1' where t1=900000;

1 row updated.

SQL> commit;

Commit complete.

SQL> select * from update_cr where t1=600000 or t1=900000;

        T1 T2
---------- ------------
    600000 U2
    900000 U1

At time A), session A locks the row with T1=600000 using SELECT ... FOR UPDATE:

session A:

SQL> select * from update_cr where t1=600000 for update;

        T1 T2
---------- ------------
    600000 U2

At the later time B), session B issues update TAB set t2='U3' where t2='U2', i.e. it tries to update every row with T2='U2'. Because the row T1=600000, T2='U2' happens to be locked by session A, session B keeps waiting on 'enq: TX - row lock contention'; the row with T1=900000, T2='U1' sits in a block that session B will only reach after it gets past the wait on the T1=600000, T2='U2' row.

session B:

SQL> alter system flush buffer_cache;

System altered.

SQL> /

System altered.

SQL> alter session set events '10046 trace name context forever,level 8 : 10201 trace name context forever,level 10';

Session altered.

SQL> update update_cr set t2='U3' where t2='U2';

At the later time C), session C runs update TAB set t2='U2' where t1=900000 and commits, changing the row T1=900000, T2='U1' into T1=900000, T2='U2', so that it now satisfies session B's update predicate t2='U2'.

session C:

SQL> select * from update_cr where t1=900000;

        T1 T2
---------- ------------
    900000 U1

SQL> update update_cr set t2='U2' where t1=900000;

1 row updated.

SQL> commit;

Commit complete.

SQL> alter system flush buffer_cache;

System altered.

After that, session A commits and session B is able to continue its UPDATE, which gives the result of the experiment:

session A executes commit;

session B:

SQL> update update_cr set t2='U3' where t2='U2';

1 row updated.

The result above shows that the UPDATE modified only one row, which confirms view 1): "If the UPDATE DML obeys transaction-level read consistency, it should see the before-image of the block as of session B's transaction-start environment SCN (env SCN); even though session C's update made t2='U2' satisfy the predicate, for consistency session B still has to apply undo to the block holding that row until it reaches the best CR block as of session B's transaction start time, and in that consistent-read image t2='U1', which does not satisfy session B's update predicate, so session B updates only one row in the whole transaction, T1=600000, T2='U2'."

That is, DML such as UPDATE/DELETE obeys transaction-level read consistency in Oracle; what it "sees" is a consistent read as of the transaction's start time, and this is one of the reasons such DML produces consistent reads.

Looking back at the 10201 (undo apply) and 10046 traces produced by session B above, we can find more detail:

SQL> select dbms_rowid.rowid_block_number(rowid),dbms_rowid.rowid_relative_fno(rowid) from update_cr where t1=600000 or t1=900000;

DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID)
------------------------------------ ------------------------------------
                                2589                                    4
                                3558                                    4

WAIT #139924171154784: nam='db file sequential read' ela= 19504 file#=3 block#=128 blocks=1 obj#=0 tim=1258427129863987
Applying CR undo to block 4 : 1000a1d itl entry 03:
xid: 0x0001.010.00000277 uba: 0x00c0300e.007c.1e
flg: ---- lkc: 1 fsc: 0x0000.00000000
WAIT #139924171154784: nam='db file sequential read' ela= 12957 file#=3 block#=12302 blocks=1 obj#=0 tim=1258427129877291
CRS upd rd env [0x7f42a284a704]: (scn: 0x0000.00223eb9 xid: 0x0000.000.00000000 uba: 0x00000000.0000.00 statement num=0 parent xid: 0x0000.000.00000000 st-scn: 0x0000.00000000 hi-scn: 0x0000.00000000 ma-scn: 0x0000.00223e78 flg: 0x00000060) undo env [0x7fffe3fe7f60]: (
scn: 0x0000.00223eb9 xid: 0x0001.010.00000277 uba: 0x00c0300e.007c.1e statement num=2 parent xid: 0x0000.000.00000000 st-scn: 0xe404.00000000 hi-scn: 0x2100.00000000 ma-scn: 0x0000.00000000 flg: 0x62515e33)
CRS upd (before): 0xa5fe9198 cr-scn: 0x0000.00223eb9 xid: 0x0000.000.00000000 uba: 0x00000000.0000.00 cl-scn: 0x0000.00223eba sfl: 0
CRS upd (after) : 0xa5fe9198 cr-scn: 0x0000.00223eb9 xid: 0x0001.010.00000277 uba: 0x00c0300e.007c.1e cl-scn: 0x0000.00223eba sfl: 0

*** 2009-11-16 22:06:01.253
WAIT #139924171154784: nam='enq: TX - row lock contention' ela= 31358812 name|mode=1415053318 usn<<16 | slot=65552 sequence=631 obj#=76969 tim=1258427161253791
WAIT #139924171154784: nam='db file sequential read' ela= 36051 file#=4 block#=2589 blocks=1 obj#=76969 tim=1258427161290180
WAIT #139924171154784: nam='db file scattered read' ela= 14718 file#=4 block#=2590 blocks=98 obj#=76969 tim=1258427161305684
WAIT #139924171154784: nam='db file scattered read' ela= 26432 file#=4 block#=2690 blocks=126 obj#=76969 tim=1258427161338364
WAIT #139924171154784: nam='db file scattered read' ela= 38289 file#=4 block#=2818 blocks=126 obj#=76969 tim=1258427161384445
WAIT #139924171154784: nam='db file scattered read' ela= 24265 file#=4 block#=2946 blocks=126 obj#=76969 tim=1258427161416302
WAIT #139924171154784: nam='db file scattered read' ela= 21288 file#=4 block#=3074 blocks=126 obj#=76969 tim=1258427161444684
WAIT #139924171154784: nam='db file scattered read' ela= 23840 file#=4 block#=3202 blocks=126 obj#=76969 tim=1258427161476063
WAIT #139924171154784: nam='db file scattered read' ela= 27439 file#=4 block#=3330 blocks=126 obj#=76969 tim=1258427161511429
WAIT #139924171154784: nam='db file scattered read' ela= 24424 file#=4 block#=3458 blocks=126 obj#=76969 tim=1258427161543429
Applying CR undo to block 4 : 1000de6 itl entry 03:
xid: 0x0003.01f.00000345 uba: 0x00c03092.008f.34
flg: --U- lkc: 1 fsc: 0x0000.00223ef9
WAIT #139924171154784: nam='db file sequential read' ela= 8033 file#=3 block#=12434 blocks=1 obj#=0 tim=1258427161557534
CRS upd rd env [0x7f42a284a704]: (scn: 0x0000.00223eb9 xid: 0x0000.000.00000000 uba: 0x00000000.0000.00 statement num=0 parent xid: 0x0000.000.00000000 st-scn: 0x0000.00000000 hi-scn: 0x0000.00000000 ma-scn: 0x0000.00223e78 flg: 0x00000060) undo env [0x7fffe3fe7f60]: (
scn: 0x0000.00223ef8 xid: 0x0003.01f.00000345 uba: 0x00c03092.008f.34 statement num=8192 parent xid: 0x0000.000.0baf3fa0 st-scn: 0x3fa0.00000000 hi-scn: 0x0000.00000000 ma-scn: 0x0000.00000000 flg: 0x00000000)
CRS upd (before): 0xa4fdaa08 cr-scn: 0x0000.00223eb9 xid: 0x0000.000.00000000 uba: 0x00000000.0000.00 cl-scn: 0x0000.00223eff sfl: 0
CRS upd (after) : 0xa4fdaa08 cr-scn: 0x0000.00223eb9 xid: 0x0003.01f.00000345 uba: 0x00c03092.008f.34 cl-scn: 0x0000.00223eff sfl: 0
WAIT #139924171154784: nam='db file scattered read' ela= 13029 file#=4 block#=3586 blocks=126 obj#=76969 tim=1258427161572511
WAIT #139924171154784: nam='db file scattered read' ela= 22282 file#=4 block#=3714 blocks=126 obj#=76969 tim=1258427161602324
WAIT #139924171154784: nam='db file sequential read' ela= 6127 file#=4 block#=522 blocks=1 obj#=76969 tim=1258427161615461
WAIT #139924171154784: nam='db file scattered read' ela= 13503 file#=4 block#=3842 blocks=42 obj#=76969 tim=1258427161629323

The trace above shows that CR undo was applied to the block holding the T1=600000 row, rdba 1000a1d => datafile 4 block 2589, with environment SCN 0x0000.00223eb9:

Applying CR undo to block 4 : 1000a1d itl entry 03:
xid: 0x0001.010.00000277 uba: 0x00c0300e.007c.1e
flg: ---- lkc: 1 fsc: 0x0000.00000000
WAIT #139924171154784: nam='db file sequential read' ela= 12957 file#=3 block#=12302 blocks=1 obj#=0 tim=1258427129877291
CRS upd rd env [0x7f42a284a704]: (scn: 0x0000.00223eb9 xid: 0x0000.000.00000000 uba: 0x00000000.0000.00 statement num=0 parent xid: 0x0000.000.00000000 st-scn: 0x0000.00000000 hi-scn: 0x0000.00000000 ma-scn: 0x0000.00223e78 flg: 0x00000060) undo env [0x7fffe3fe7f60]: (
scn: 0x0000.00223eb9 xid: 0x0001.010.00000277 uba: 0x00c0300e.007c.1e statement num=2 parent xid: 0x0000.000.00000000 st-scn: 0xe404.00000000 hi-scn: 0x2100.00000000 ma-scn: 0x0000.00000000 flg: 0x62515e33)
CRS upd (before): 0xa5fe9198 cr-scn: 0x0000.00223eb9 xid: 0x0000.000.00000000 uba: 0x00000000.0000.00 cl-scn: 0x0000.00223eba sfl: 0
CRS upd (after) : 0xa5fe9198 cr-scn: 0x0000.00223eb9 xid: 0x0001.010.00000277 uba: 0x00c0300e.007c.1e cl-scn: 0x0000.00223eba sfl: 0

For the block holding the T1=900000 row, rdba 1000de6 => datafile 4 block 3558, undo was likewise applied, and the environment SCN is again 00223eb9:

Applying CR undo to block 4 : 1000de6 itl entry 03:
xid: 0x0003.01f.00000345 uba: 0x00c03092.008f.34
flg: --U- lkc: 1 fsc: 0x0000.00223ef9
WAIT #139924171154784: nam='db file sequential read' ela= 8033 file#=3 block#=12434 blocks=1 obj#=0 tim=1258427161557534
CRS upd rd env [0x7f42a284a704]: (scn: 0x0000.00223eb9 xid: 0x0000.000.00000000 uba: 0x00000000.0000.00 statement num=0 parent xid: 0x0000.000.00000000 st-scn: 0x0000.00000000 hi-scn: 0x0000.00000000 ma-scn: 0x0000.00223e78 flg: 0x00000060) undo env [0x7fffe3fe7f60]: (
scn: 0x0000.00223ef8 xid: 0x0003.01f.00000345 uba: 0x00c03092.008f.34 statement num=8192 parent xid: 0x0000.000.0baf3fa0 st-scn: 0x3fa0.00000000 hi-scn: 0x0000.00000000 ma-scn: 0x0000.00000000 flg: 0x00000000)
CRS upd (before): 0xa4fdaa08 cr-scn: 0x0000.00223eb9 xid: 0x0000.000.00000000 uba: 0x00000000.0000.00 cl-scn: 0x0000.00223eff sfl: 0
CRS upd (after) : 0xa4fdaa08 cr-scn: 0x0000.00223eb9 xid: 0x0003.01f.00000345 uba: 0x00c03092.008f.34 cl-scn: 0x0000.00223eff sfl: 0
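
As a side note, the rdba values shown in the trace (1000a1d and 1000de6) can be decoded into file#/block# with DBMS_UTILITY rather than by hand; a quick sketch:

SQL> select dbms_utility.data_block_address_file(to_number('1000a1d','xxxxxxxx')) file#,
  2         dbms_utility.data_block_address_block(to_number('1000a1d','xxxxxxxx')) block#
  3    from dual;

     FILE#     BLOCK#
---------- ----------
         4       2589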

http://www.askmaclean.com/archives/dml-updatedelete%E4%B8%8Ecr%E4%B8%80%E8%87%B4%E6%80%A7%E8%AF%BB%E7%9A%84%E7%A7%98%E5%AF%86.html 

Silent Installation: Silently Installing a Single-Instance 11gR2 DB Server and Creating a Database Manually

Steps to silently install a single-instance 11gR2 DB server and create a database manually

Silent installation is the method we turn to when the OUI graphical interface cannot be used to install the Oracle DB software, or when the software has to be deployed at scale.
A silent installation needs no GUI; it can be carried out entirely from the command line.

Specifically, a silent single-instance 11gR2 installation can skip the response file; the steps are as follows:

1. Unpack the installation media. On Linux, if you are installing what is currently the latest patchset, 11.2.0.3 (recommended), first unzip the installation media.

On AIX you may also need to run the required rootpre.sh first.

2. Create the necessary directories and set up the oracle user's ulimit settings and kernel parameters

[root@mlab2 grid]# mkdir /u01

[root@mlab2 grid]# chown oracle:oinstall /u01
[root@mlab2 grid]# su - oracle

3. Run the silent installation with runInstaller -silent
$ ./runInstaller -silent -debug -force \
FROM_LOCATION=/home/oracle/database/stage/products.xml \
oracle.install.option=INSTALL_DB_SWONLY \
UNIX_GROUP_NAME=oinstall \
INVENTORY_LOCATION=/u01/app/oracle/oraInventory \
ORACLE_HOME=/u01/app/oracle/product/11201/db_1 \
ORACLE_HOME_NAME="OraDb11g_Home9" \
ORACLE_BASE=/u01/app/oracle \
oracle.install.db.InstallEdition=EE \
oracle.install.db.isCustomInstall=false \
oracle.install.db.DBA_GROUP=dba \
oracle.install.db.OPER_GROUP=dba \
DECLINE_SECURITY_UPDATES=true

4. Switch to the root user and run the required scripts

[oracle@mlab2 bin]$ su - root
Password:
[root@mlab2 ~]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete.
[root@mlab2 ~]# /u01/app/oracle/product/11201/db_1/root.sh
Check /u01/app/oracle/product/11201/db_1/install/root_mlab2.oracle.com_2009-11-23_20-26-32.log for the output of root script

5. Set the required environment variables

[oracle@mlab2 ~]$ cat db2.sh

export ORACLE_SID=CRMV
export ORACLE_HOME=/u01/app/oracle/product/11201/db_1
export ORACLE_BASE=/u01/app/oracle
export PATH=$ORACLE_HOME/bin:/bin:/sbin:/usr/bin:/usr/sbin:$ORACLE_HOME/OPatch:/opt/oracle.SupportTools

6. Create the required directories

[root@mlab2 ~]# mkdir /oradata/
[root@mlab2 ~]# chown oracle:oinstall /oradata

[oracle@mlab2 ~]$ mkdir -p /oradata/CRMV/controlfile
[oracle@mlab2 dbs]$ mkdir /oradata/CRMV/logfile
[oracle@mlab2 dbs]$ mkdir /oradata/CRMV/datafile

7. Create the init.ora PFILE initialization parameter file
[oracle@mlab2 ~]$ source db2.sh
[oracle@mlab2 ~]$ cd $ORACLE_HOME
[oracle@mlab2 db_1]$ cd dbs
[oracle@mlab2 dbs]$ cat init
initCRMV.ora init.ora

[oracle@mlab2 dbs]$ cat initCRMV.ora
db_name=CRMV
sga_target=1200M
control_files=('/oradata/CRMV/controlfile/control01.dbf','/oradata/CRMV/controlfile/control02.dbf')
undo_management=AUTO
undo_tablespace=UNDOTBS

8. Create the password file

[oracle@mlab2 dbs]$ orapwd file=orapwCRMV entries=10 password=oracle
[oracle@mlab2 dbs]$ ls -l orapwCRMV
-rw-r----- 1 oracle oinstall 2560 Nov 23 20:58 orapwCRMV

9. Create the database manually

CREATE DATABASE CRMV
USER SYS IDENTIFIED BY oracle
USER SYSTEM IDENTIFIED BY oracle
LOGFILE GROUP 1 ('/oradata/CRMV/logfile/redo01.log') SIZE 256M,
GROUP 2 ('/oradata/CRMV/logfile/redo02.log') SIZE 256M,
GROUP 3 ('/oradata/CRMV/logfile/redo03.log') SIZE 256M
MAXLOGFILES 50
MAXLOGMEMBERS 5
MAXLOGHISTORY 10000
MAXDATAFILES 3000
MAXINSTANCES 10
CHARACTER SET AL32UTF8
NATIONAL CHARACTER SET UTF8
DATAFILE '/oradata/CRMV/datafile/system01.dbf' SIZE 825M REUSE
EXTENT MANAGEMENT LOCAL
SYSAUX DATAFILE '/oradata/CRMV/datafile/sysaux01.dbf' SIZE 825M REUSE
DEFAULT TEMPORARY TABLESPACE tempts1
TEMPFILE '/oradata/CRMV/datafile/temp01.dbf'
SIZE 200M REUSE
UNDO TABLESPACE undotbs
DATAFILE '/oradata/CRMV/datafile/undotbs01.dbf'
SIZE 600M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;

For this step you can also refer to: http://docs.oracle.com/cd/B19306_01/server.102/b14231/create.htm#g1034097

10. Create the non-mandatory tablespaces, such as USERS, plus any application or index tablespaces
SQL> alter system set db_create_file_dest='/oradata/CRMV';

System altered.
CONNECT SYS/password AS SYSDBA
-- create a user tablespace to be assigned as the default tablespace for users
CREATE TABLESPACE users LOGGING
DATAFILE SIZE 525M AUTOEXTEND ON NEXT 16M MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL;
-- create a tablespace for indexes, separate from user tablespace
CREATE TABLESPACE indx LOGGING
DATAFILE SIZE 325M AUTOEXTEND ON NEXT 16M MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL;

SQL>
SQL> alter database default tablespace users;

Database altered.

SQL> alter database default temporary tablespace tempts1;
alter database default temporary tablespace tempts1
*
ERROR at line 1:
ORA-12907: tablespace TEMPTS1 is already the default temporary tablespace

11. Run the two required dictionary scripts, catalog.sql and catproc.sql

CONNECT SYS/password AS SYSDBA
@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql
EXIT

1* select comp_name,version from dba_server_registry
SQL> /

COMP_NAME VERSION
---------------------------------------- ------------------------------
Oracle Database Catalog Views 11.2.0.3.0
Oracle Database Packages and Types 11.2.0.3.0

2 rows selected.

Some other scripts that are worth running:

@?/rdbms/admin/catblock.sql
@?/rdbms/admin/dbmspool.sql

conn system/oracle

@?/sqlplus/admin/pupbld.sql
@?/sqlplus/admin/help/hlpbld.sql helpus.sql

12. Set the necessary parameters and enable archive logging

SQL> create spfile from pfile;

File created.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount;
ORACLE instance started.

Total System Global Area 1252663296 bytes
Fixed Size 2227944 bytes
Variable Size 335544600 bytes
Database Buffers 905969664 bytes
Redo Buffers 8921088 bytes
Database mounted.
SQL> alter system set deferred_segment_creation=false;

System altered.
SQL> alter database archivelog ;

Database altered.
SQL> alter database open;

Database altered.
SQL> alter system archive log current;

System altered.

SQL> /

System altered.

SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /u01/app/oracle/product/11201/db_1/dbs/arch
Oldest online log sequence 3
Next log sequence to archive 5
Current log sequence 5
For more recommended 11g parameter settings, see http://www.askmaclean.com/archives/11gr2-features-disable.html

13. If you need to rename the DB's global name, run the following:

SQL> alter database rename global_name to LIUXIANGBING;

Database altered.

SQL> select * from global_name;

GLOBAL_NAME
--------------------------------------------------------------------------------
LIUXIANGBING

14. Configure the network settings such as the LISTENER and SQLNET.ORA

http://www.askmaclean.com/archives/silent-install-11gr2-create-database-by-manual.html 

Using Event 44951 to Resolve LOB Space 'enq: HW - contention' Waits

Concurrent inserts into and updates of tables that hold LOBs commonly contend on the LOB segment's high water mark. This LOB HWM contention can be worse in an ASSM (Automatic Segment Space Management) tablespace than in an MSSM (Manual Segment Space Management) one, because under MSSM each HWM bump grabs 128 chunks at a time, whereas under ASSM only the requested number of chunks is acquired (default 1). (This is worse in ASSM than MSSM because ASSM only gets the amount of space requested while MSSM gets 128 chunks at a time.) [Read More]
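
The event named in the title is not shown in this excerpt. As a hedged sketch based on the usual My Oracle Support guidance, event 44951 is set system-wide with a level that controls how many chunks each LOB high-water-mark bump reserves (1024 is the commonly quoted maximum; the exact value should be tested in your own environment):

SQL> -- reserve up to 1024 chunks per LOB HWM bump instead of the ASSM default
SQL> alter system set events '44951 trace name context forever, level 1024';

System altered.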

11g Real Application Testing: How to Use Database Replay

How to use Database Replay

1.1 Capturing the workload

1. For the period whose workload you want to capture, run the following PL/SQL:

   execute dbms_workload_capture.start_capture('&CAPTURE_NAME','&DIRECTORY_NAME',default_action=>'INCLUDE');

 

CAPTURE_NAME => the name of this capture

The capture can be monitored through the DBA_WORKLOAD_CAPTURES view.

DIRECTORY_NAME => the name of an Oracle directory object in which the capture files will be stored. Testing so far shows that capturing 10 minutes of a busy OLTP database consumes about 1 GB of disk space, so give this directory plenty of room.
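
As an illustrative sketch (the directory name and OS path below are assumptions, not from the original post), the directory object might be created on the capture database like this:

SQL> create or replace directory CAPTURE_DIR as '/home/oracle/capture_dir';

Directory created.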

2. The capture can be monitored with the following SQL:

Query 1: select id,name,status,start_time,end_time,connects,user_calls,dir_path from dba_workload_captures where id = (select max(id) from dba_workload_captures);

 

set pagesize 0 long 30000000 longchunksize 1000

select dbms_workload_capture.report(&ID,'TEXT') from dual;

where ID is the value returned by query 1.

3. When no more workload needs to be captured, finish the capture with:

execute dbms_workload_capture.finish_capture; 

 

1.2 Preprocessing the capture

1. Transfer the capture files to the target database host and create the necessary Oracle directory object.

2. Preprocess the capture files on the target database:

execute DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE('&DIRECTORY_NAME');

 

Note: Capture Preprocess By Process_capture Can Not Complete [ID 1265686.1]

Bug 9742032  Database replay: dbms_workload_replay.process_capture takes a lot of time

-- ***********************************************************
--  PROCESS_CAPTURE
--    Processes the workload capture found in capture_dir in place.
--    Analyzes the workload capture found in the capture_dir and
--    creates new workload replay specific metadata files that are
--    required to replay the given workload capture.
--    This procedure can be run multiple times on the same
--    capture directory - useful when this procedure encounters
--    unexpected errors or is cancelled by the user.
--    Once this procedure runs successfully, the capture_dir can be used
--    as input to INITIALIZE_REPLAY() in order to replay the captured
--    workload present in capture_dir.
--    Before a workload capture can be replayed in a particular database
--    version, the capture needs to be "processed" using this
--    PROCESS_CAPTURE procedure in that same database version.
--    Once created, a processed workload capture can be used to replay
--    the captured workload multiple times in the same database version.
--    For example:
--      Say workload "foo" was captured in "rec_dir" in Oracle
--      database version 10.2.0.4
--      In order to replay the workload "foo" in version 11.1.0.1
--      the workload needs to be processed in version 11.1.0.1
--      The following procedure needs to be executed in a 11.1.0.1 database
--      in order to process the capture directory "rec_dir"
--        DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE('rec_dir');
--      Now, rec_dir contains a valid 11.1.0.1 processed workload capture
--      that can be used to replay the workload "foo" in 11.1.0.1 databases
--      as many number of times as required.
--    The procedure will take as input the following parameters:
--      capture_dir - name of the workload capture directory object.
--                    (case sensitive)
--                    The directory object must point to a valid OS
--                    directory that has appropriate permissions.
--                    New files will be added to this directory.
--                    (MANDATORY)
--      parallel_level - number of oracle processes used to process the
--                       capture in a parallel fashion.
--                       The NULL default value will auto-compute the
--                       parallelism level, whereas a value of 1 will enforce
--                       serial execution.

3. Note that the preprocessing above can be affected by bug 8919603.

1.3 Starting the replay

Use the wrc tool's calibrate mode to estimate how many replay client machines are needed:

wrc mode=calibrate replaydir=$REPLAY_DIR
$REPLAY_DIR is the preprocessed capture directory.

 

Recommendation:

Consider using at least 75 clients divided among 19 CPU(s)

You will need at least 142 MB of memory per client process.

If your machine(s) cannot match that number, consider using more clients.

Workload Characteristics:

- max concurrency: 2830 sessions

- total number of sessions: 70362

Assumptions:

- 1 client process per 50 concurrent sessions

- 4 client process per CPU

- 256 KB of memory cache per concurrent session

- think time scale = 100

- connect time scale = 100

- synchronization = TRUE

Prepare the database environment: flash the database back to the restore point taken at capture time and create a user:

shutdown abort;        ==> shut down both instances
startup mount;         ==> start one instance and mount it
flashback database to restore point prereplay;

 

alter database open resetlogs;

startup;               ==> start the other instance

create user orasupport identified by oracle;

grant dba to orasupport;

Run the following to initialize and prepare the replay:

exec DBMS_WORKLOAD_REPLAY.INITIALIZE_REPLAY(replay_name =>'&REPLAY_NAME',replay_dir => '&REPLAY_DIR');

 

exec DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY(synchronization=>false, connect_time_scale=>80, think_time_scale=>25, SCALE_UP_MULTIPLIER=>1);

synchronization => whether the original commit order must be preserved; for a stress test it can be false.

connect_time_scale => scales session connect times as a percentage. Set to 80 here, meaning a session that originally logged in 10 minutes after the capture started will now log in after 8 minutes. Note that setting this too low makes large numbers of sessions log in at once and can trigger ORA-18/ORA-20 errors; it is set to 80 here to increase the load.

think_time_scale => scales the interval between two successive SQL calls; if two SQL calls were originally 100s apart, setting think_time_scale to 25 shrinks the interval to 25s.

SCALE_UP_MULTIPLIER => the factor by which the query workload is scaled up; it is advisable to start at 1 (1:1).

-- ***********************************************************
--  PREPARE_REPLAY
--    Puts the DB state in REPLAY mode. The database
--    should have been initialized for replay using
--    DBMS_WORKLOAD_REPLAY.INITIALIZE_REPLAY(), and optionally any
--    capture time connection strings that require remapping have been
--    already done using DBMS_WORKLOAD_REPLAY.REMAP_CONNECTION().
--    One or more external replay clients (WRC) can be started
--    once the PREPARE_REPLAY procedure has been executed.
--    The procedure will take as input the following parameters:
--      synchronization - Turns synchronization to the given scheme during
--                        workload replay.
--                        When synchronization is SCN, the COMMIT order
--                        observed during the original workload capture
--                        will be preserved during replay.
--                        Every action that is replayed will be executed
--                        ONLY AFTER all of it's dependent COMMITs (all
--                        COMMITs that were issued before the given action
--                        in the original workload capture) have finished
--                        execution.
--                        When synchronization is OBJECT_ID, a more advanced
--                        synchronization scheme is used.
--                        Every action that is replayed will be executed
--                        ONLY AFTER the RELEVANT COMMITs have finished
--                        executing. The relevant commits are the ones that
--                        were issued before the given action in the
--                        orginal workload capture and that had modified
--                        at least one of the database objects the given
--                        action is referencing (either implicitely or
--                        explicitely).
--                        This OBJECT_ID scheme has the same logical
--                        property of making sure that any action will see
--                        the same data it saw during capture, but will
--                        allow more concurrency during replays for the
--                        actions that do not touch the same objects/tables.
--                        DEFAULT VALUE: SCN, preserve commit order.
--                        For legacy reason, there is a boolean version of
--                        this procedure:
--                          TRUE  means 'SCN'
--                          FALSE means 'OFF'
--      connect_time_scale       - Scales the time elapsed between the
--                                 instant the workload capture was started
--                                 and session connects with the given value.
--                                 The input is interpreted as a % value.
--                                 Can potentially be used to increase or
--                                 decrease the number of concurrent
--                                 users during the workload replay.
--                                 DEFAULT VALUE: 100
--                                 For example, if the following was observed
--                                 during the original workload capture:
--                                 12:00 : Capture was started
--                                 12:10 : First session connect  (10m after)
--                                 12:30 : Second session connect (30m after)
--                                 12:42 : Third session connect  (42m after)
--                                 If the connect_time_scale is 50, then the
--                                 session connects will happen as follows:
--                                 12:00 : Replay was started
--                                         with 50% connect time scale
--                                 12:05 : First session connect  ( 5m after)
--                                 12:15 : Second session connect (15m after)
--                                 12:21 : Third session connect  (21m after)
--                                 If the connect_time_scale is 200, then the
--                                 session connects will happen as follows:
--                                 12:00 : Replay was started
--                                         with 200% connect time scale
--                                 12:20 : First session connect  (20m after)
--                                 13:00 : Second session connect (60m after)
--                                 13:24 : Third session connect  (84m after)
--      think_time_scale         - Scales the time elapsed between two
--                                 successive user calls from the same
--                                 session.
--                                 The input is interpreted as a % value.
--                                 Can potentially be used to increase or
--                                 decrease the number of concurrent
--                                 users during the workload replay.
--                                 DEFAULT VALUE: 100
--                                 For example, if the following was observed
--                                 during the original workload capture:
--                                 12:00 : User SCOTT connects
--                                 12:10 : First user call issued
--                                         (10m after completion of prevcall)
--                                 12:14 : First user call completes in 4mins
--                                 12:30 : Second user call issued
--                                         (16m after completion of prevcall)
--                                 12:40 : Second user call completes in 10m
--                                 12:42 : Third user call issued
--                                         ( 2m after completion of prevcall)
--                                 12:50 : Third user call completes in 8m
--                                 If the think_time_scale is 50 during the
--                                 workload replay, then the user calls
--                                 will look something like below:
--                                 12:00 : User SCOTT connects
--                                 12:05 : First user call issued 5 mins
--                                         (50% of 10m) after the completion
--                                         of prev call
--                                 12:10 : First user call completes in 5m
--                                         (takes a minute longer)
--                                 12:18 : Second user call issued 8 mins
--                                         (50% of 16m) after the completion
--                                         of prev call
--                                 12:25 : Second user call completes in 7m
--                                         (takes 3 minutes less)
--                                 12:26 : Third user call issued 1 min
--                                         (50% of 2m) after the completion
--                                         of prev call
--                                 12:35 : Third user call completes in 9m
--                                         (takes a minute longer)
--      think_time_auto_correct  - Auto corrects the think time between calls
--                                 appropriately when user calls takes longer
--                                 time to complete during replay than
--                                 how long the same user call took to
--                                 complete during the original capture.
--                                 DEFAULT VALUE: TRUE, reduce
--                                 think time if replay goes slower
--                                 than capture.
--                                 For example, if the following was observed
--                                 during the original workload capture:
--                                 12:00 : User SCOTT connects
--                                 12:10 : First user call issued
--                                         (10m after completion of prevcall)
--                                 12:14 : First user call completes in 4m
--                                 12:30 : Second user call issued
--                                         (16m after completion of prevcall)
--                                 12:40 : Second user call completes in 10m
--                                 12:42 : Third user call issued
--                                         ( 2m after completion of prevcall)
--                                 12:50 : Third user call completes in 8m
--                                 If the think_time_scale is 100 and
--                                 the think_time_auto_correct is TRUE
--                                 during the workload replay, then
--                                 the user calls will look something
--                                 like below:
--                                 12:00 : User SCOTT connects
--                                 12:10 : First user call issued 10 mins
--                                         after the completion of prev call
--                                 12:15 : First user call completes in 5m
--                                         (takes 1 minute longer)
--                                 12:30 : Second user call issued 15 mins
--                                         (16m minus the extra time of 1m
--                                          the prev call took) after the
--                                         completion of prev call
--                                 12:44 : Second user call completes in 14m
--                                         (takes 4 minutes longer)
--                                 12:44 : Third user call issued immediately
--                                         (2m minus the extra time of 4m
--                                          the prev call took) after the
--                                         completion of prev call
--                                 12:52 : Third user call completes in 8m
--      scale_up_multiplier      - Defines the number of times the query workload
--                                 is scaled up during replay. Each captured session
--                                 is replayed concurrently as many times as the
--                                 value of the scale_up_multiplier. However, only
--                                 one of the sessions in each set of identical
--                                 replay sessions executes both queries and updates.
--                                 The remaining sessions only execute queries.
--                                 More specifically note that:
--                                   1. One replay session (base session) of each set
--                                      of identical sessions will replay every call
--                                      from the capture as usual
--                                   2. The remaining sessions (scale-up sessions) will
--                                      only replay calls that are read-only.
--                                      Thus, DDL, DML, and PLSQL calls that
--                                      modified the database will be
--                                      skipped. SELECT FOR UPDATE statements are also skipped.
--                                   3. Read-only calls from the scale-up are
--                                      synchronized appropriately and obey the
--                                      timings defined by think_time_scale, connect_time_scale,
--                                      and think_time_auto_correct. Also the queries
--                                      are made to wait for the appropriate commits.
--                                   4. No replay data or error divergence
--                                      records will be generated for the
--                                      scale-up sessions.
--                                   5. All base or scale-up sessions that
--                                      replay the same capture file will connect
--                                      from the same workload replay client.
--          capture_sts - If this parameter is TRUE, a SQL tuning set
--                        capture is also started in parallel with workload
--                        capture. The resulting SQL tuning set can be
--                        exported using DBMS_WORKLOAD_REPLAY.EXPORT_AWR
--                        along with the AWR data.
--                        Currently, parallel STS capture
--                        is not supported in RAC. So, this parameter has
--                        no effect in RAC.
--                        Furthermore capture filters defined using the
--                        dbms_workload_replay APIs do not apply to the
--                        sql tuning set capture.
--                        The calling user must have the approriate
--                        privileges ('administer sql tuning set').
--                        DEFAULT VALUE: FALSE
--     sts_cap_interval - This parameter specifies the capture interval
--                        of the SQL set capture from the cursor cache in
--                        seconds. The default value is 300.
--    Prerequisites:
--      -> The database has been initialized for replay using
--         DBMS_WORKLOAD_REPLAY.INITIALIZE_REPLAY().
--      -> Any capture time connections strings that require remapping
--         during replay have already been remapped using
--         DBMS_WORKLOAD_REPLAY.REMAP_CONNECTION().

Once the above is done, start the WRC clients:

nohup  wrc  orasupport/oracle replaydir=$REPLAY_DIR  DSCN_OFF=TRUE &

 

It is advisable to start 75-100 WRC clients per instance, launched in the background with nohup.

MODE=REPLAY (default)
---------------------
Keyword     Description
----------------------------------------------------------------
USERID      username
PASSWORD    password
SERVER      server connection identifier (Default: empty string)
REPLAYDIR   replay directory (Default:.)
WORKDIR     directory for trace files
DEBUG       ON, OFF (Default: OFF)
CONNECTION_OVERRIDE  TRUE, FALSE (Default: FALSE)
            TRUE   All replay threads connect using SERVER,
                   settings in DBA_WORKLOAD_CONNECTION_MAP will be ignored
            FALSE  Use settings from DBA_WORKLOAD_CONNECTION_MAP
SERIALIZE_CONNECTS  TRUE, FALSE (Default: FALSE)
            TRUE   All the replay threads will connect to
                   the database in a serial fashion one after
                   another. This setting is recommended when
                   the replay clients use the bequeath protocol
                   to communicate to the database server.
            FALSE  Replay threads will connect to the database
                   in a concurrent fashion mimicking the original
                   capture behavior.
DSCN_OFF    TRUE, FALSE (Default: FALSE)
            TRUE   Ignore all dependencies due to block
                   contention during capture when synchronizing
                   the replay.
            FALSE  Honor all captured dependencies.

MODE=CALIBRATE
--------------
Provides an estimate of the number of replay clients needed
Keyword     Description
----------------------------------------------------------------
REPLAYDIR   replay directory (Default:.)
Advanced parameters:
PROCESS_PER_CPU       Maximum number of client process than can be run
                      per CPU (Default: 4)
THREADS_PER_PROCESS   Maximum number of threads than can be run within
                      a client process (Default: 50)

MODE=LIST_HOSTS
---------------
Displays all host names involved in the capture
Keyword     Description
----------------------------------------------------------------
REPLAYDIR   the workload directory (Default:.)

MODE=GET_TABLES
---------------
Lists all objects referenced by captured SQL statements
Keyword     Description
----------------------------------------------------------------
REPLAYDIR   the workload directory (Default:.)

Run the following to actually start the replay:

exec DBMS_WORKLOAD_REPLAY.start_replay();

Monitor progress via EM or with the query below:

Select id, name,status from dba_workload_replays; 

 

Once the replay completes, the replay report can be obtained from EM or with the following:

set pagesize 0 long 30000000 longchunksize 1000

select dbms_workload_replay.report(&ID,'TEXT') from dual;

&ID can be obtained from the query above.


http://www.askmaclean.com/archives/database-replay.html 

Oracle Enqueue Locks: Enqueue HW

HW really is a famous enqueue lock, second in fame perhaps only to TM and TX. For a DBA running an OLTP database with heavy concurrent INSERTs, HW enqueue waits are everyday fare, yet detailed descriptions of this wait event are few and far between. Here we pull together what is known about the HW enqueue lock. [Read More]
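
As a quick diagnostic sketch (following the usual description of the HW enqueue, where P2 is the tablespace number and P3 the data block address of the segment header being extended), sessions currently stuck on the wait can be listed like this:

SQL> select sid, sql_id, p2 tablespace#, p3 header_dba
  2    from v$session
  3   where event = 'enq: HW - contention';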

Understanding Oracle ASM in Depth (Part 1): Basic Concepts

Automatic Storage Management is the automated database storage solution that Oracle pioneered (ahead of other RDBMS vendors) in 10g and refined further in 11g. ASM gives database administration a simple, effective storage management interface that works across servers and storage platforms. ASM combines a filesystem and a volume manager in one and is designed specifically for Oracle database files; it keeps the management simplicity of a filesystem while delivering high-performance asynchronous I/O. ASM improves database scalability and saves DBA time, making it possible to manage a fast-changing database environment with more agility and efficiency. [Read More]
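
For a first look at an ASM instance, a couple of basic queries (run while connected to the +ASM instance) show the diskgroups and the disks behind them; this is only a sketch of the starting point the series builds on:

SQL> select group_number, name, state, type, total_mb, free_mb
  2    from v$asm_diskgroup;

SQL> select group_number, disk_number, name, path, header_status
  2    from v$asm_disk;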

Problems Brought by the 11gR2 Cursor Sharing Enhancements, plus _cursor_features_enabled, _cursor_obsolete_threshold and event 106001

Version 11gR2 introduced a number of cursor sharing and mutex enhancements, and these features brought some problems of their own (mainly in 11.2.0.1 and 11.2.0.2; they are essentially fixed in 11.2.0.3).

Cursor Obsolescence is an enhancement to SQL cursor management: with the feature enabled, once the number of child cursors under a parent cursor exceeds a threshold, that parent cursor is marked obsolete and a new parent cursor is started. This has two benefits:

[Read More]

gc buffer busy/gcs log flush sync and log file sync

If you find the text above in an ADDM report (?/rdbms/admin/addmrpt), you can be fairly confident that the source of the gc buffer busy waits is log file sync (even if not in essence), so tackle the log file sync problem first. A few log file sync problems do come down to bugs, but far more are caused by hardware factors such as storage, HBAs and links. Once log file sync is resolved, the gc buffer busy waits usually disappear as well. [Read More]
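
A quick way to gauge whether log file sync really is slow (and is dragging gcs log flush sync and gc buffer busy along with it) is to compare average wait times, for example:

SQL> select event, total_waits,
  2         round(time_waited_micro/nullif(total_waits,0)/1000, 2) avg_ms
  3    from v$system_event
  4   where event in ('log file sync', 'gcs log flush sync',
  5                   'gc buffer busy acquire', 'gc buffer busy release')
  6   order by time_waited_micro desc;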

[Tech Share] Opening the Hawk Eye of Oracle Tuning: Understanding the AWR Performance Report in Depth


Current version of the presentation material for download: 【Maclean Liu技术分享】开Oracle调优鹰眼,深入理解AWR性能报告_20130303版.pdf.pdf (1.79 MB, downloaded 398 times)

[11g New Feature] DEFERRED_SEGMENT_CREATION: Deferred Segment Creation

deferred_segment_creation (deferred segment creation) is a new feature in 11.2: when an empty table or partition is created, allocation of the actual table segment is deferred until the first row is inserted.
The feature is controlled by the DEFERRED_SEGMENT_CREATION parameter, which defaults to TRUE. [Read More]
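
A minimal sketch of the behavior (the table name is illustrative): with the parameter at its default of TRUE, no segment exists until the first row arrives.

SQL> create table defer_demo (id number) segment creation deferred;

Table created.

SQL> select segment_name from user_segments where segment_name = 'DEFER_DEMO';

no rows selected

SQL> insert into defer_demo values (1);

1 row created.

SQL> select segment_name from user_segments where segment_name = 'DEFER_DEMO';

SEGMENT_NAME
--------------------------------------------------------------------------------
DEFER_DEMO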