Saturday Nov 30, 2013

Things to Consider when Planning the Redo logs for Oracle Database

This is a very basic and generic discussion from a performance point of view. Customers still have to do their due diligence in understanding redo logs, and how they work in an Oracle database, before finalizing the redo log configuration for their deployments.

  • size them properly
    • log writer writes to a single redo log file until either it is full or a manual log switch is requested
          Oracle supports multiplexed redo logs for availability, but this behavior of writing to a file until it is full or a log switch happens still holds
    • if the transactions generate a lot of redo before a database commit, consider large sizes in tens of gigabytes for redo logs
    • if not sized properly, it leads to unnecessary log switches, which in turn increase checkpoint activity, resulting in an unnecessary slowdown of database operations
          two redo logs, each at least 5G in size, might be a good start. Observe the log switches and checkpoints, and increase (or decrease, though there is no performance benefit) the file size accordingly. See the example following this list

  • do not mix redo logs with the rest of the database or anything else
    • in a normal functioning database, most of the time, log writer simply writes redo entries sequentially to redo logs
    • any slowdown in writing the redo data to the logs hurts the performance of the database
    • it is best not to share the disks/volumes on which the redo logs are hosted with anything else
          that is, keep a set of disks or volumes exclusive to the redo logs

  • ensure that the underlying disks or I/O medium used to store the redo logs are fast, optimally configured and can sustain the amount of I/O bandwidth needed to write the redo entries to the redo logs
        if those requirements are not met, it could lead to 'log file sync' waits, which will slow down the database transactions

  • redo logs on non-volatile flash storage may have performance benefits over the traditional hard disk drives
    • check this blog post out, Redo logs on F40 PCIe Cards, for related discussion (keywords: 4K block size for redo logs, block alignment)
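
As a rough illustration of the sizing exercise above, the following sketch shows one way to observe the log switch frequency and to add a larger redo log group. The group number, path and size are hypothetical; adjust them to the target environment. Undersized groups can later be dropped with ALTER DATABASE DROP LOGFILE GROUP <n> once they become INACTIVE.

SQL> SELECT TO_CHAR(FIRST_TIME, 'YYYY-MM-DD HH24') HOUR, COUNT(*) LOG_SWITCHES
  2  FROM V$LOG_HISTORY
  3  GROUP BY TO_CHAR(FIRST_TIME, 'YYYY-MM-DD HH24')
  4  ORDER BY 1;

SQL> ALTER DATABASE ADD LOGFILE GROUP 5 ('/REDO/redo05.dbf') SIZE 5G;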

Saturday Aug 31, 2013

[Oracle Database] Unreliable AWR reports on T5 & Redo logs on F40 PCIe Cards

(1) AWR report shows bogus wait events and times on SPARC T5 servers

Here is a sample from one of the Oracle 11g R2 databases running on a SPARC T5 server with Solaris 11.1 SRU 7.5.

Top 5 Timed Foreground Events

Event                          Waits     Time(s)      Avg wait (ms)    % DB time  Wait Class
latch: cache buffers chains    278,727   812,447,335        2914850  13307324.15  Concurrency
library cache: mutex X         212,595   449,966,330        2116542   7370136.56  Concurrency
buffer busy waits              219,844   349,975,251        1591925   5732352.01  Concurrency
latch: In memory undo latch     25,468    37,496,800        1472310    614171.59  Concurrency
latch free                       2,602    24,998,583        9607449    409459.46  Other

Reason:
Unknown. There is a pending bug 17214885 - Implausible top foreground wait times reported in AWR report.

Tentative workaround:
Disable power management as shown below.

# poweradm set administrative-authority=none

# svcadm disable power
# svcadm enable power

Verify the setting by running poweradm list.

Also disable NUMA I/O object binding by setting the following parameter in /etc/system (requires a system reboot).

set numaio_bind_objects=0
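
Once the system is back up, one way to confirm the /etc/system setting took effect is to read the live kernel variable with mdb (assuming the variable name is unchanged in your Solaris SRU):

# echo "numaio_bind_objects/D" | mdb -k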

Oracle Solaris 11 added support for the NUMA I/O architecture. Here is a brief explanation of NUMA I/O from the Solaris 11 : What's New web page.

Non-Uniform Memory Access (NUMA) I/O : Many modern systems are based on a NUMA architecture, where each CPU or set of CPUs is associated with its own physical memory and I/O devices. For best I/O performance, the processing associated with a device should be performed close to that device, and the memory used by that device for DMA (Direct Memory Access) and PIO (Programmed I/O) should be allocated close to that device as well. Oracle Solaris 11 adds support for this architecture by placing operating system resources (kernel threads, interrupts, and memory) on physical resources according to criteria such as the physical topology of the machine, specific high-level affinity requirements of I/O frameworks, actual load on the machine, and currently defined resource control and power management policies.

Do not forget to roll back these changes after applying the fix for the database bug 17214885, when available.

(2) Redo logs on F40 PCIe cards (non-volatile flash storage)

Per the F40 PCIe card user's guide, the Sun Flash Accelerator F40 PCIe Card is designed to provide best performance for data transfers that are multiples of 8k in size, using addresses that are 8k aligned. To achieve optimal performance, the size of the read/write data should be an integer multiple of this block size and the data transferred should be block aligned. I/O operations that are not block aligned and that do not use sizes that are a multiple of the block size may suffer performance degradation, especially for write operations.

Oracle redo log files default to a block size that is equal to the physical sector size of the disk, typically 512 bytes, and in a normally functioning environment the database writes to the redo logs most of the time. Oracle database supports a maximum block size of 4K for redo logs. Hence, to achieve optimal performance for redo write operations on F40 PCIe cards, tune the environment as shown below.

  1. Configure the following init parameters
    _disk_sector_size_override=TRUE
    _simulate_disk_sectorsize=4096
    
  2. Create redo log files with 4K block size
    eg.,
    SQL> ALTER DATABASE ADD LOGFILE '/REDO/redo.dbf' size 20G blocksize 4096;
    
  3. [Solaris only] Append the following line to /kernel/drv/sd.conf (requires a reboot)
    sd-config-list="ATA     3E128-TS2-550B01","disksort:false,\
                 cache-nonvolatile:true, physical-block-size:4096";
    
  4. [Solaris only][F20] To enable maximum throughput from the MPT driver, append the following line to /kernel/drv/mpt.conf and reboot the system.
    mpt_doneq_thread_n_prop=8;
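
After the 4K redo logs are in place, the block size can be verified by querying V$LOG (the BLOCKSIZE column is available in recent 11g R2 releases; treat this as a sketch if your release differs):

SQL> SELECT GROUP#, BLOCKSIZE, BYTES/1024/1024/1024 SIZE_GB FROM V$LOG;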
    

This tip is applicable to all kinds of flash storage that Oracle sells or sold including F20/F40 PCIe cards and F5100 storage array. sd-config-list in sd.conf may need some adjustment to reflect the correct vendor id and product id.

Tuesday Mar 05, 2013

SuperCluster Best Practices : Deploying Oracle 11g Database in Zones

To be clear, this post is about a white paper that has been out there for more than two months. Access it through the following URL.

  Best Practices for Deploying Oracle Solaris Zones with Oracle Database 11g on SPARC SuperCluster

The focus of the paper is on databases and zones. On SuperCluster, customers have the choice of running their databases in logical domains that are dedicated to running Oracle Database 11g R2. With exclusive access to Exadata Storage Servers, those domains are aptly called "Database" domains. If the requirement mandates, it is possible to create and use all logical domains as "database domains" or "application domains" or a mix of those. Since the focus is on databases, the paper talks only about the database domains and how zones can be created, configured and used within each database domain for fine grained control over multiple databases consolidated in a SuperCluster environment.

When multiple databases (including RAC databases) are being consolidated in database logical domains, zones are one of the options that fulfill requirements such as fault, operation, network, security and resource isolation; multiple RAC instances in a single logical domain; and separate identity and independent manageability for database instances.

The best practices cover the following topics. Some of those are applicable to standalone, non-engineered environments as well.

Solaris Zones

  • CPU, memory and disk space allocation
  • Zone Root on Sun ZFS Storage Appliance
  • Network configuration
  • Use of DISM
  • Use of ZFS filesystem
  • SuperCluster specific zone deployment tool, ssc_exavm
  • ssctuner utility

Oracle Database

  • Exadata Storage Grid (Disk Group) Configuration
  • Disk Group Isolation
    • Shared Storage approach
    • Dedicated Storage Server approach
  • Resizing Grid Disks

Oracle RAC Configuration

Securing the Databases

Example Database Consolidation Scenarios

  • Consolidation example using Half-Rack SuperCluster
  • Consolidation example using Full-Rack SuperCluster

Acknowledgements

A large group of experts reviewed the material and provided quality feedback. Hence they deserve credit for their work and time. Listed below are some of those reviewers (sincere apologies if I missed listing any major contributors).

Kesari Mandyam, Binoy Sukumaran, Gowri Suserla, Allan Packer, Jennifer Glore, Hazel Alabado, Tom Daly, Krishnan Shankar, Gurubalan T, Rich Long, Prasad Bagal, Lawrence To, Rene Kundersma, Raymond Dutcher, David Brean, Jeremy Ward, Suzi McDougall, Ken Kutzer, Larry McIntosh, Roger Bitar, Mikel Manitius

Friday Nov 23, 2012

emca fails with "Database instance is unavailable" though available

The following example shows the symptoms of failure, and the exact error message.

$ emca -repos create

...
Password for SYSMAN user:  

Do you wish to continue? [yes(Y)/no(N)]: Y
Nov 19, 2012 10:33:42 AM oracle.sysman.emcp.DatabaseChecks \
         checkDbAvailabilityImpl
WARNING: ORA-01034: ORACLE not available

Nov 19, 2012 10:33:42 AM oracle.sysman.emcp.DatabaseChecks \
         throwDBUnavailableException
SEVERE: 
Database instance is unavailable. Fix the ORA error thrown and 
run EM Configuration Assistant again.

Some of the possible reasons may be : 

1) Database may not be up. 
2) Database is started setting environment variable ORACLE_HOME 
with trailing '/'. Reset ORACLE_HOME and bounce the database. 

For eg. Database is started setting environment variable 
ORACLE_HOME=/scratch/db/ . Reset ORACLE_HOME=/scratch/db  and bounce 
the database.

Fix:

Ensure that the ORACLE_HOME is pointing to the right location in $ORACLE_HOME/bin/emca file.

If ORACLE_HOME was copied over from another location rather than installed from scratch, it likely results in a wrong ORACLE_HOME location in several Enterprise Manager (EM) specific scripts and files. This usually happens when the directory structure on the target machine is not identical to the structure on the original/source machine, including the top-level directory location where Oracle RDBMS was properly installed using the installer.
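
A quick way to find the stale path is to search the EM scripts under the new home for hardcoded ORACLE_HOME values and correct them; the grep below is only a sketch.

$ grep -n ORACLE_HOME $ORACLE_HOME/bin/emca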

Tuesday Feb 28, 2012

Oracle RDBMS & Solaris : Few Random Tips (Feb 2012)

These tips are just some quick solutions or workarounds. Use these quickies at your own risk.

[#1] Oracle Data Pump

Q: How to exclude the table definition while importing a table using Oracle Data Pump import utility?

A: Use EXCLUDE=TABLE/TABLE option.

eg.,

impdp login/password DUMPFILE=<DUMP_FILENAME> LOGFILE=<LOGFILE_NAME> \
 DIRECTORY=<DB_DIR_NAME> TABLES=<TABLE_NAME> EXCLUDE=TABLE/TABLE



[#2] Workaround to ORA-01089: immediate shutdown in progress - no operations are permitted

When the database is in the middle of an instance shutdown, if another shutdown or startup is attempted, Oracle RDBMS may throw the above ORA-01089 error. The workaround is to force Oracle to start the database instance using the STARTUP FORCE option. This option shuts down the database instance (if running) as an abort would, and then starts it up.

eg.,

SQL> STARTUP FORCE



[#3] Quick steps to upgrade the Oracle database from version 11.2.0.[1 or 2] to 11.2.0.3

Execute the following in the same sequence as sysdba.

startup upgrade
!cd $ORACLE_HOME/rdbms/admin
@utlu112i.sql		/* pre-upgrade information tool */
exec dbms_stats.gather_dictionary_stats (DEGREE => 64);
@catupgrd.sql		/* create/modify data dictionary tables */
@utlu112s 		/* all components should be in VALID state */
shutdown immediate
startup
@catuppst.sql		/* upgrade actions that do not require DB in UPGRADE mode */
@utlrp.sql		/* recompile stored PL/SQL and Java code */
SELECT count(*) FROM dba_invalid_objects;		
                        /* verify that all packages and classes are valid */
exit



[#4] Q: Solaris: how to get rid of zombie processes?

A: Run the following with appropriate user privileges.

ps -eaf | grep defunct | grep -v grep | preap `awk '{ print $2 }'`

An alternative (not as robust as the previous one, but it may still work as expected):

prstat -n 500 1 1 | grep zombie | preap `awk '{ print $1 }'`
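
Note that the backquoted awk in the commands above reads its input from the preceding pipe. An equivalent and perhaps more explicit form of the first command:

preap `ps -eaf | grep defunct | grep -v grep | awk '{ print $2 }'`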



[Added on 03/01/2012]

[#5] Solaris: Many TCP listen drops

eg.,

# netstat -sP tcp | grep tcpListenDrop
        tcpListenDrop       =2442553     tcpListenDropQ0     =     0

To alleviate numerous TCP listen drops, bump up the value of the tunable tcp_conn_req_max_q.

# ndd -set /dev/tcp tcp_conn_req_max_q <value>
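
eg., to raise the queue limit to 1024 (an arbitrary illustrative value; choose one based on the expected connection rate, and note that ndd settings do not persist across reboots):

# ndd -set /dev/tcp tcp_conn_req_max_q 1024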



[Added on 03/02/2012]

[#6] Solaris ZFS: listing all properties and values for a zpool

Run: zfs get all <pool_name> as any OS user. Note that this reports the properties of the pool's top-level ZFS dataset.

eg.,

% zpool list
NAME    SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
rpool   276G   167G   109G    60%  ONLINE  -
spec    556G   168G   388G    30%  ONLINE  -

% zfs get all rpool
NAME   PROPERTY              VALUE                  SOURCE
rpool  type                  filesystem             -
rpool  creation              Fri May 27 17:06 2011  -
...
rpool  compressratio         1.00x                  -
rpool  mounted               yes                    -
rpool  quota                 none                   default
rpool  reservation           none                   default
rpool  recordsize            128K                   default
...
rpool  checksum              on                     default
rpool  compression           off                    default
...
rpool  logbias               latency                default
rpool  sync                  standard               default
rpool  rstchown              on                     default



[#7] Solaris: listing all ZFS tunables

Run: echo "::zfs_params" | mdb -k with root/super-user privileges

eg.,

# echo "::zfs_params" | mdb -k
arc_reduce_dnlc_percent = 0x3
zfs_arc_max = 0x10000000
zfs_arc_min = 0x10000000
arc_shrink_shift = 0x5
zfs_mdcomp_disable = 0x0
zfs_prefetch_disable = 0x0
..
..
zio_injection_enabled = 0x0
zvol_immediate_write_sz = 0x8000

Monday Oct 10, 2011

Oracle Database on NFS : Resolving "ORA-27086: unable to lock file - already in use" Error

Some Context

The Oracle database was hosted on a ZFS Storage Appliance (NAS). The database files are accessible from the database server node via NFS-mounted filesystems. Solaris 10 is the operating system on the DB node.

Someone forgot to shut down the database instance and unmount the remote filesystems before rebooting the database server node. After the system booted up, Oracle RDBMS failed to bring up the database due to locked-out data files.

eg.,

SQL> startup
ORACLE instance started.

Total System Global Area 1.7108E+10 bytes
Fixed Size		    2165208 bytes
Variable Size		 9965671976 bytes
Database Buffers	 6845104128 bytes
Redo Buffers		  295329792 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: '/orclvol4/entDB/system01.dbf'

======================
Extract from alert log:
======================

...
ALTER DATABASE OPEN
Fri Aug 05 21:30:54 2011
Errors in file /oracle112/diag/rdbms/entdb/entDB/trace/entDB_dbw0_7235.trc:
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: '/orclvol4/entDB/system01.dbf'
ORA-27086: unable to lock file - already in use
SVR4 Error: 11: Resource temporarily unavailable
Additional information: 8
Additional information: 21364
Errors in file /oracle112/diag/rdbms/entdb/entDB/trace/entDB_dbw0_7235.trc:
ORA-01157: cannot identify/lock data file 2 - see DBWR trace file
ORA-01110: data file 2: '/orclvol4/entDB/sysaux01.dbf'
ORA-27086: unable to lock file - already in use
SVR4 Error: 11: Resource temporarily unavailable
Additional information: 8
Additional information: 21364
...

Reason for the lock failure:

Because of the sudden, ungraceful shutdown of the database, file locks on the data files were not released by the NFS server (ZFS SA in this case). The NFS server held on to the file locks even after the NFS client (the DB server node in this example) was restarted. Due to this, Oracle RDBMS was not able to lock those data files residing on the NFS server (ZFS SA). As a result, the database instance failed to start up in exclusive mode.

Workaround

Manually clear the NFS locks as outlined below.

On NFS Client (database server node):

  1. Shutdown the mounted database
  2. Unmount remote (NFS) filesystems
  3. Execute: clear_locks -s <nfs_server_host>

    eg.,

    # clear_locks -s sup16
    Clearing locks held for NFS client ipsedb1 on server sup16
    clear of locks held for ipsedb1 on sup16 returned success
    

On NFS Server (ZFS SA):
    (this step may not be necessary but wouldn't hurt to perform)

  1. Execute: clear_locks <nfs_client_host>

    eg.,

    sup16# clear_locks 10.129.207.93
    Clearing locks held for NFS client 10.129.207.93 on server sup16
    clear of locks held for 10.129.207.93 on sup16 returned success
    

Again back on NFS Client (database server node):

  1. Restart NFS client
        (this step may not be necessary but wouldn't hurt to perform)
    # svcadm -v disable nfs/client
    # svcadm -v enable nfs/client
    
  2. Mount remote/NFS filesystems
  3. Finally start the database

Also see:
Listing file locks on Solaris 10

Saturday Sep 10, 2011

Oracle RDBMS : Generic Large Object (LOB) Performance Guidelines

This blog post is generic in nature and based on my recent experience with a content management system where SecureFile BLOBs are critical in storing and retrieving the checked-in content. It is strongly suggested to check the official documentation in addition to these brief guidelines. In general, Oracle Database SecureFiles and Large Objects Developer's Guide 11g Release 2 (11.2) is a good starting point when creating tables involving SecureFiles and LOBs.

Guidelines

  • Tablespace: create the LOB in a different tablespace isolated from the rest of the database
  • Block size: consider larger block size (default 8 KB) if the expected size of the LOB is big
  • Chunk size: consider larger chunk size (default 8 KB) if larger LOBs are expected to be stored and retrieved
  • Inline or Out-of-line: choose "DISABLE STORAGE IN ROW" (out-of-line) if the average LOB size is expected to be > 4 KB. The default inlining is fine for smaller LOBs
  • CACHE or NOCACHE: consider bypassing the database buffer cache (NOCACHE) if a large number of LOBs are stored and not expected to be retrieved frequently
  • COMPRESS or NOCOMPRESS: choose COMPRESS option if storage capacity is a concern and a constraint. It saves disk space at the expense of some performance overhead. In a RAC database environment, it is recommended to compress the LOBs to reduce the interconnect traffic
  • De-duplication: by default, duplicate LOBs are stored as a separate copy in the database. Choosing DEDUPLICATE option enables sharing the same data blocks for similar files thus reducing storage overhead and simplifying storage management
  • Partitioning: consider partitioning the parent table to maximize application performance. Hash partitioning is one of the options if there is no potential partition key in the table
  • Zero-Copy I/O protocol: turned on by default. Turning it off in a RAC database environment could be beneficial. Set the initialization parameter _use_zero_copy_io=FALSE to turn off the Zero-Copy I/O protocol
  • Shared I/O pool: the database uses the shared I/O pool to perform large I/O operations on SecureFile LOBs. The shared I/O pool uses shared memory segments. If this pool is not large enough, or if there is not enough memory available in this pool for a SecureFile LOB I/O operation, Oracle uses a portion of PGA until there is sufficient memory available in the shared I/O pool. Hence it is recommended to size the shared I/O pool appropriately by monitoring the database during peak activity. Relevant initialization parameters: _shared_io_pool_size and _shared_iop_max_size
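
To illustrate a few of these guidelines together, here is a hypothetical SecureFile table definition (the table, column and tablespace names are made up; the COMPRESS and DEDUPLICATE options require the Advanced Compression license):

SQL> CREATE TABLE content_repo (
  2    doc_id   NUMBER PRIMARY KEY,
  3    doc_body BLOB)
  4  LOB (doc_body) STORE AS SECUREFILE (
  5    TABLESPACE ts_lob_data
  6    DISABLE STORAGE IN ROW
  7    NOCACHE
  8    COMPRESS MEDIUM
  9    DEDUPLICATE);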

Also see:
Oracle Database Documentation : LOB Performance Guidelines

Wednesday May 25, 2011

Oracle Database: How to Figure Out if a Tablespace is Empty

It is not uncommon for Oracle DBAs to create and drop tablespaces for various reasons throughout the life of a database management system. It is a good practice to double-check whether a tablespace is really empty before dropping it. One way is to visually check the "Tablespaces" section in the Oracle Enterprise Manager (OEM) database console. However, sometimes the graphical interface may return false positives. Another way is to rely on data dictionary views to obtain relevant information as accurately as possible. In the latter case, the DBA_SEGMENTS / USER_SEGMENTS views are useful in mapping segments such as tables, indexes, table/index partitions etc., to a tablespace.

eg.,

The following example queries USER_SEGMENTS view to list the segments and their types stored in a tablespace called "TS_SALES_DATA32K".

SQL> COLUMN SEGMENT_NAME FORMAT A30
SQL> COLUMN SEGMENT_TYPE FORMAT A30
SQL> 
SQL> SELECT SEGMENT_NAME, SEGMENT_TYPE
  2  FROM USER_SEGMENTS
  3  WHERE TABLESPACE_NAME = 'TS_SALES_DATA32K';

SEGMENT_NAME                   SEGMENT_TYPE
------------------------------ ------------------------------
SALES_DATA                     TABLE

The tablespace "TS_SALES_DATA32K" is holding one table called "SALES_DATA". The following query returns no rows meaning tablespace "TS_SALES_DATA" is empty - hence it can be dropped with no hesitation.

SQL> SELECT SEGMENT_NAME, SEGMENT_TYPE         
  2  FROM USER_SEGMENTS       
  3  WHERE TABLESPACE_NAME = 'TS_SALES_DATA';

no rows selected
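
Since the tablespace is empty, it can now be dropped with a plain DROP TABLESPACE; had it not been empty, the INCLUDING CONTENTS clause would be required.

SQL> DROP TABLESPACE TS_SALES_DATA;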

Here is another query that lists all the tablespaces in a database along with the number of segments/objects stored in each of those tablespaces. Note the outer join, which ensures that empty tablespaces also show up, with a count of zero. It is possible to extract similar information in different ways using more efficient queries.

SQL> COLUMN TABLESPACE FORMAT A40
SQL> 
SQL> SELECT UT.TABLESPACE_NAME "TABLESPACE", COUNT (US.SEGMENT_NAME) "NUM SEGMENTS"
  2  FROM USER_TABLESPACES UT, USER_SEGMENTS US
  3  WHERE UT.TABLESPACE_NAME = US.TABLESPACE_NAME (+)
  4  GROUP BY (UT.TABLESPACE_NAME)
  5  ORDER BY COUNT (US.SEGMENT_NAME) DESC;

TABLESPACE                               NUM SEGMENTS
---------------------------------------- -----------
TS_DP                                         114989
TS_DP_X                                          306
..
TS_SALES_DATA32K                                   1
TS_SALES_DATA                                      0

13 rows selected.

SEE ALSO:
Oracle Database Concepts : Data Blocks, Extents, and Segments



(Copy of this blog post is also available at:
http://technopark02.blogspot.com/2011/05/oracle-database-how-to-figure-out-if.html)

Sunday Jan 09, 2011

Oracle 11g : Poor Performance Accessing V$SESSION_FIX_CONTROL

PeopleSoft HCM, Financials/SCM 9.x customers may have to patch their Oracle database server with RDBMS patch 9699654. Rest of the Oracle customers: read the symptoms and decide.

In a couple of PeopleSoft deployments, it was observed that the following SQL is the top query when all queries are sorted by elapsed time or CPU time. 11.2.0.1.0 is the Oracle database server version.


SELECT VALUE FROM V$SESSION_FIX_CONTROL WHERE BUGNO = :B1 AND SESSION_ID = USERENV('SID')

The target query is being executed thousands of times. The poor performance is due to the lack of a proper index. Here is the explain plan that exhibits the performance issue.

-------------------------------------------------------------------------------
| Id  | Operation        | Name       | Starts | E-Rows | A-Rows |   A-Time   |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |            |      1 |        |      1 |00:00:00.02 |
|*  1 |  FIXED TABLE FULL| X$QKSBGSES |      1 |      1 |      1 |00:00:00.02 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(("BUGNO_QKSBGSEROW"=:B1 AND
	      "SID_QKSBGSEROW"=USERENV('SID') AND "INST_ID"=USERENV('INSTANCE')))

20 rows selected.

Oracle Corporation accepted this behavior as a bug and agreed to fix it in Oracle RDBMS 12.1. Meanwhile, an RDBMS patch was made available to the customers running 11.2.0.1 or later. 9699654 is the bug# (Bad performance of V$SESSION_FIX_CONTROL query) - so, Solaris SPARC customers can download the RDBMS patch 9699654 directly from the support web site. Customers on other platforms: please search the bug database and support web site with appropriate keywords.

After applying the RDBMS patch 9699654, the optimizer was using an index and the query performance was improved as expected. Also the target SQL query was no longer the top SQL - in fact, no references to this particular query were found in the AWR report. The new explain plan is shown below.

----------------------------------------------------------------------------------------------
| Id  | Operation               | Name               | Starts | E-Rows | A-Rows |   A-Time   |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |                    |      1 |        |      1 |00:00:00.01 |
|*  1 |  FIXED TABLE FIXED INDEX| X$QKSBGSES (ind:1) |      1 |      1 |      1 |00:00:00.01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(("BUGNO_QKSBGSEROW"=:B1 AND "SID_QKSBGSEROW"=USERENV('SID') AND
              "INST_ID"=USERENV('INSTANCE')))

20 rows selected.

Tuesday Jun 29, 2010

Identifying Ideal Oracle Database Objects for Flash Storage and Accelerators

The Sun Storage F5100 Flash Array and Sun Flash Accelerator F20 PCIe Card help accelerate I/O bound applications such as databases. The following are some of the guidelines to identify Oracle database objects that can benefit by using the flash storage. Even though the title explicitly states "Oracle", some of these guidelines are applicable to other databases and non-database products. Exercise discretion, evaluate and experiment before implementing these recommendations as they are.

  • Heavily used database tables and indexes are ideal for flash storage

    • The database workloads with no I/O bottlenecks may not show significant performance gains
    • The database workloads with severe I/O bottlenecks can fully realize the benefits of flash devices

      • Top 5 Timed Foreground Events section in any AWR report that was collected on the target database system is useful in finding whether disk I/O is a bottleneck

        • A large number of waits and a large amount of DB time spent waiting for some blocked resource under the User I/O wait class is an indication of I/O contention on the system
  • Identify the I/O intensive tables and indexes in a database with the help of Oracle Enterprise Manager Database Control, a web-based tool for managing Oracle database(s)

    • - The "Performance" page in OEM Database Control helps you quickly identify and analyze performance problems
    • - Historical and the real-time database activity can be viewed from the "performance" page.
      • The same page also provides information about the top resource consuming database objects
  • An alternate way to identify the I/O intensive objects in a database is to analyze the AWR reports that are generated over a period of time, especially when the database is busy (see the sample query following this list)

    • Scan through the "SQL ordered by .." tables in each AWR report
    • Look for the top INSERT & UPDATE statements with high elapsed times and DB times
      • The database tables that are updated frequently and repeatedly, along with the indexes created on such tables, are good candidates for flash devices

    • "SQL ordered by Reads" is useful in identifying the database tables with a large number of physical reads
      • The database table(s) from which large amounts of data are read/fetched from physical disk(s) are also good candidates for flash devices

        • To identify I/O intensive indexes, look through the explain plans of the top SQLs that are sorted by Physical Reads

  • Examine the File IO Stats section in any AWR report that was collected on the target database system

    • Consider moving the database files with heavy reads, writes and relatively high average buffer wait times to flash volumes
  • Examine Segments by Physical Reads, Segments by Physical Writes and Segments by Buffer Busy Waits sections in AWR report

    • The database tables and indexes with a large number of physical reads, physical writes and buffer busy waits may benefit from the flash acceleration
  • Sun flash storage may not be ideal for storing Oracle redo logs

    • Sun Flash Modules (FMOD) in the F5100 array and the F20 Flash Accelerator Card are optimized for a 4K sector size

        A redo log write that is not aligned with the beginning of the 4K physical sector results in a significant performance degradation

    • In general, Oracle redo log files default to a block size that is equal to the physical sector size of the disk, which is typically 512 bytes

      • The majority of recent Oracle Database platforms detect the 4K sector size on Sun flash devices
      • Oracle database automatically creates redo log files with a 4K block size on file systems created on Sun flash devices
        • However, with a block size of 4K for the redo logs, there will be a significant increase in redo wastage that may offset the expected performance gains
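
In addition to AWR reports, a quick way to rank segments by physical I/O is to query V$SEGMENT_STATISTICS directly. The query below is a sketch (11g-compatible syntax) listing the top ten segments by physical reads.

SQL> SELECT * FROM (
  2    SELECT OWNER, OBJECT_NAME, OBJECT_TYPE, VALUE PHYSICAL_READS
  3    FROM V$SEGMENT_STATISTICS
  4    WHERE STATISTIC_NAME = 'physical reads'
  5    ORDER BY VALUE DESC)
  6  WHERE ROWNUM <= 10;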

F5100 Flash Storage and F20 PCIe Flash Accelerator Card as Oracle Database Smart Flash Cache

In addition to the I/O intensive database objects, customers running Oracle 11g Release 2 or later versions have the flexibility of using flash devices to turn on the "Database Smart Flash Cache" feature to reduce physical disk I/O. The Database Smart Flash Cache is a transparent extension of the database buffer cache using flash storage technology. The flash storage acts as a Level 2 cache to the (Level 1) SGA. Database Smart Flash Cache can significantly improve the performance of Oracle databases by reducing the amount of disk I/O at a much lower cost than adding an equivalent amount of RAM.

F20 Flash Accelerator offers an additional benefit - since it is a PCIe card, the I/O operations bypass disk controller overhead.

The database flash cache can be enabled by setting appropriate values to the following Oracle database parameters.


	db_flash_cache_file
	db_flash_cache_size
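
eg., a hypothetical configuration placing a 100 GB flash cache file on a flash-backed filesystem (the path and size are made up; with SCOPE=SPFILE, the settings take effect at the next instance restart):

SQL> ALTER SYSTEM SET db_flash_cache_file='/flash/orcl_flash_cache.dat' SCOPE=SPFILE;
SQL> ALTER SYSTEM SET db_flash_cache_size=100G SCOPE=SPFILE;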

Check Oracle Database Administrator's Guide 11g Release 2 (11.2) : Configuring Database Smart Flash Cache documentation for the step-by-step instructions to configure Database Smart Flash Cache on flash devices.
