Saturday Nov 30, 2013

Things to Consider when Planning the Redo logs for Oracle Database

This is a very basic, generic discussion from the performance point of view. Customers still have to do their due diligence in understanding redo logs and how they work in Oracle Database before finalizing the redo log configuration for their deployments.

  • size them properly
    • the log writer writes to a single redo log file until either it is full or a manual log switch is requested
          Oracle supports multiplexed redo logs for availability, but this behavior of writing to a file until it is full or a log switch happens still holds
    • if transactions generate a lot of redo before a database commit, consider large redo logs in the tens of gigabytes
    • if the logs are not sized properly, unnecessary log switches result, which in turn increase checkpoint activity and needlessly slow down database operations
          two redo logs of at least 5 GB each might be a good start. observe the log switches and checkpoints, as shown in the query below, and increase (or decrease, though there is no performance benefit) the file size accordingly
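As a sanity check, a query along the following lines (a minimal sketch using the standard v$log_history view) shows the log switch frequency per hour; frequent switches during peak activity usually suggest undersized redo logs.

-- log switches per hour, most recent first
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*) AS log_switches
FROM v$log_history
GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER BY hour DESC;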

  • do not mix redo logs with the rest of the database or anything else
    • in a normally functioning database, most of the time, the log writer simply writes redo entries sequentially to the redo logs
    • any slowdown in writing the redo data to the logs hurts the performance of the database
    • it is best not to share the disks/volumes hosting the redo logs with anything else
          that is, keep a set of disks or volumes exclusive to the redo logs

  • ensure that the underlying disks or I/O medium used to store the redo logs are fast, optimally configured and can sustain the amount of I/O bandwidth needed to write the redo entries to the redo logs
if those requirements are not met, 'log file sync' waits result and slow down database transactions; one way to gauge those waits is shown below
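A minimal sketch, assuming access to the standard v$system_event view: comparing 'log file sync' (foreground commit waits) against 'log file parallel write' (the log writer's own I/O) indicates whether the redo devices are keeping up.

-- cumulative redo-related waits since instance startup
SELECT event, total_waits,
       time_waited_micro / 1000000 AS seconds_waited
FROM v$system_event
WHERE event IN ('log file sync', 'log file parallel write');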

  • redo logs on non-volatile flash storage may have performance benefits over the traditional hard disk drives
    • check this blog post out, Redo logs on F40 PCIe Cards, for related discussion (keywords: 4K block size for redo logs, block alignment)

Tuesday Mar 05, 2013

SuperCluster Best Practices : Deploying Oracle 11g Database in Zones

To be clear, this post is about a white paper that has been out for more than two months. Access it through the following URL.

  Best Practices for Deploying Oracle Solaris Zones with Oracle Database 11g on SPARC SuperCluster

The focus of the paper is on databases and zones. On SuperCluster, customers have the choice of running their databases in logical domains that are dedicated to running Oracle Database 11g R2. With exclusive access to Exadata Storage Servers, those domains are aptly called "Database" domains. If requirements mandate, it is possible to create and use all logical domains as "database domains", as "application domains", or as a mix of the two. Since the focus is on databases, the paper talks only about database domains: how zones can be created, configured and used within each database domain for fine-grained control over multiple databases consolidated in a SuperCluster environment.

When multiple databases (including RAC databases) are being consolidated in database logical domains, zones are one option that fulfills requirements such as fault, operation, network, security and resource isolation; multiple RAC instances in a single logical domain; and separate identity and independent manageability for database instances.

The best practices cover the following topics. Some of those are applicable to standalone, non-engineered environments as well.

Solaris Zones

  • CPU, memory and disk space allocation
  • Zone Root on Sun ZFS Storage Appliance
  • Network configuration
  • Use of DISM
  • Use of ZFS filesystem
  • SuperCluster specific zone deployment tool, ssc_exavm
  • ssctuner utility

Oracle Database

  • Exadata Storage Grid (Disk Group) Configuration
  • Disk Group Isolation
    • Shared Storage approach
    • Dedicated Storage Server approach
  • Resizing Grid Disks

Oracle RAC Configuration

Securing the Databases

Example Database Consolidation Scenarios

  • Consolidation example using Half-Rack SuperCluster
  • Consolidation example using Full-Rack SuperCluster

Acknowledgements

A large group of experts reviewed the material and provided quality feedback, and they deserve credit for their work and time. Listed below are some of those reviewers (sincere apologies if I missed any major contributors).

Kesari Mandyam, Binoy Sukumaran, Gowri Suserla, Allan Packer, Jennifer Glore, Hazel Alabado, Tom Daly, Krishnan Shankar, Gurubalan T, Rich long, Prasad Bagal, Lawrence To, Rene Kundersma, Raymond Dutcher, David Brean, Jeremy Ward, Suzi McDougall, Ken Kutzer, Larry Mctintosh, Roger Bitar, Mikel Manitius

Monday Sep 24, 2012

E-Business Suite : Role of CHUNK_SIZE in Oracle Payroll

Different batch processes in the Oracle Payroll flow have the ability to spawn multiple child processes (or threads) to complete the work at hand. The number of child processes to fork is controlled by the THREADS parameter in the APPS.PAY_ACTION_PARAMETERS view.

THREADS parameter

The default value for the THREADS parameter is 1, which is fine for a single-processor system but not optimal for modern multi-core, multi-processor systems. Setting the THREADS parameter to a value equal to or less than the total number of [virtual] processors available on the system may improve the performance of payroll processing. On the down side, since multiple child processes operate against the same set of payroll tables in the HR schema, the database may experience undesired consequences such as buffer busy waits and index contention, which give back some of the gains achieved by using multiple child processes/threads to process the work. A couple of other action parameters, CHUNK_SIZE and CHUNK SHUFFLE, help alleviate the database contention.

eg.,

Set a value for THREADS parameter as shown below.

CONNECT APPS/APPS_PASSWORD

UPDATE PAY_ACTION_PARAMETERS
SET PARAMETER_VALUE = DESIRED_VALUE
WHERE PARAMETER_NAME = 'THREADS';

COMMIT;

(I am not aware of any maximum value for the THREADS parameter.)


CHUNK_SIZE parameter

The size of each commit unit for the batch process is controlled by the CHUNK_SIZE action parameter. In other words, chunking is the act of splitting the assignment actions into commit groups of desired size represented by the CHUNK_SIZE parameter. The default value is 20, and each thread processes one chunk at a time -- which means each child process inserts or processes 20 assignment actions at any time.

When multiple threads are configured, each thread picks up a chunk to process, completes the assignment actions and then picks up another chunk. This is repeated until all the chunks are exhausted.

It is possible to use different chunk sizes in different batch processes. During the initial phase of processing, CHUNK_SIZE number of assignment actions are inserted into the relevant table(s). When multiple child processes are inserting data at the same time into the same set of tables, as explained earlier, the database may experience contention. The default value of 20 is mostly optimal in such a case. Experiment with different values for the initial phase by adjusting CHUNK_SIZE in steps of +/-10 and observe the performance impact. A larger value may make sense during the main processing phase. Again, experimentation is the key to finding the suitable value for your environment. Start with a large value such as 2000 for the chunk size, then increment or decrement the size by 500 at a time until an optimal value is found.

eg.,

Set a value for CHUNK_SIZE parameter as shown below.

CONNECT APPS/APPS_PASSWORD

UPDATE PAY_ACTION_PARAMETERS
SET PARAMETER_VALUE = DESIRED_VALUE
WHERE PARAMETER_NAME = 'CHUNK_SIZE';

COMMIT;

The CHUNK_SIZE action parameter accepts values from 1 to 16,000.


CHUNK SHUFFLE parameter

By default, chunks of assignment actions are processed sequentially by all threads - which may not be a good thing, especially given that all child processes/threads perform similar actions against the same set of tables at almost the same time. In other words, the default behavior leads to contention in the database (in data blocks, for example).

It is possible to relieve some of that database contention by randomizing the processing order of chunks of assignment actions. This behavior is controlled by the CHUNK SHUFFLE action parameter. Chunk processing is not randomized unless explicitly configured.

eg.,

Set chunk shuffling as shown below.

CONNECT APPS/APPS_PASSWORD

UPDATE PAY_ACTION_PARAMETERS
SET PARAMETER_VALUE = 'Y'
WHERE PARAMETER_NAME = 'CHUNK SHUFFLE';

COMMIT;
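Once set, a quick query against the same APPS.PAY_ACTION_PARAMETERS view used above confirms the values in effect before the next payroll run.

SELECT PARAMETER_NAME, PARAMETER_VALUE
FROM PAY_ACTION_PARAMETERS
WHERE PARAMETER_NAME IN ('THREADS', 'CHUNK_SIZE', 'CHUNK SHUFFLE');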

Finally, I recommend checking out the following document for additional details and other pay action tunable parameters that may speed up Oracle Payroll processing.
    My Oracle Support Doc ID: 226987.1 Oracle 11i & R12 Human Resources (HRMS) & Benefits (BEN) Tuning & System Health Checks

Also, experiment with different combinations of parameters and values until the right set of action parameters and values is found for your deployment.

Thursday Sep 23, 2010

OOW 2010 : Accelerate and Bullet-Proof Your Siebel CRM Deployment with Oracle's Sun Servers

The best practices slides from today's OpenWorld presentation can be downloaded from the following location.

        Siebel on Oracle Solaris : Best Practices, Tuning Tips

The entire presentation with proper disclaimers and Oracle Solaris Cluster specific slides will be posted on Oracle's web site soon. Stay tuned.

Tuesday Jun 29, 2010

Identifying Ideal Oracle Database Objects for Flash Storage and Accelerators

The Sun Storage F5100 Flash Array and the Sun Flash Accelerator F20 PCIe Card help accelerate I/O bound applications such as databases. The following are some guidelines to identify Oracle database objects that can benefit from flash storage. Even though the title explicitly states "Oracle", some of these guidelines are applicable to other databases and non-database products. Exercise discretion, evaluate and experiment before implementing these recommendations as-is.

  • Heavily used database tables and indexes are ideal for flash storage

    • Database workloads with no I/O bottlenecks may not show significant performance gains
    • Database workloads with severe I/O bottlenecks can fully realize the benefits of flash devices

      • The Top 5 Timed Foreground Events section in any AWR report collected on the target database system is useful in finding whether disk I/O is a bottleneck

        • A large number of waits and a large amount of DB time spent waiting for resources under the User I/O wait class are an indication of I/O contention on the system

  • Identify the I/O intensive tables and indexes in a database with the help of Oracle Enterprise Manager Database Control, a web-based tool for managing Oracle database(s)

    • The "Performance" page in OEM Database Control helps you quickly identify and analyze performance problems
    • Historical and real-time database activity can be viewed from the "Performance" page
      • The same page also provides information about the top resource consuming database objects

  • An alternate way to identify the I/O intensive objects in a database is to analyze the AWR reports generated over a period of time, especially when the database is busy (see the query sketch after this list)

    • Scan through the "SQL ordered by .." tables in each AWR report
    • Look for the top INSERT & UPDATE statements with high elapsed and DB times
      • Database tables that are updated frequently & repeatedly, along with the indexes created on such tables, are good candidates for flash devices

    • "SQL ordered by Reads" is useful in identifying the database tables with a large number of physical reads
      • Database table(s) from which large amounts of data are read/fetched from physical disk(s) are also good candidates for flash devices

        • To identify I/O intensive indexes, look through the explain plans of the top SQLs sorted by Physical Reads

  • Examine the File IO Stats section in any AWR report collected on the target database system

    • Consider moving database files with heavy reads, heavy writes and relatively high average buffer wait times to flash volumes

  • Examine the Segments by Physical Reads, Segments by Physical Writes and Segments by Buffer Busy Waits sections in the AWR report

    • Database tables and indexes with a large number of physical reads, physical writes and buffer busy waits may benefit from flash acceleration

  • Sun flash storage may not be ideal for storing Oracle redo logs

    • Sun Flash Modules (FMOD) in the F5100 array and the F20 Flash Accelerator Card are optimized for a 4K sector size

          A redo log write that is not aligned with the beginning of the 4K physical sector results in significant performance degradation

    • In general, Oracle redo log files default to a block size equal to the physical sector size of the disk, which is typically 512 bytes

      • The majority of recent Oracle Database platforms detect the 4K sector size on Sun flash devices
      • Oracle Database automatically creates redo log files with a 4K block size on file systems created on Sun flash devices
        • However, with a 4K block size for the redo logs, there will be a significant increase in redo wastage that may offset the expected performance gains
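For a quick look outside of AWR, a query along these lines (a minimal sketch against the standard v$segment_statistics view; the top-10 cutoff is illustrative) lists the segments with the most physical reads since instance startup.

-- top 10 segments by physical reads since instance startup
SELECT *
FROM (SELECT owner, object_name, object_type, value AS physical_reads
      FROM v$segment_statistics
      WHERE statistic_name = 'physical reads'
      ORDER BY value DESC)
WHERE ROWNUM <= 10;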

F5100 Flash Storage and F20 PCIe Flash Accelerator Card as Oracle Database Smart Flash Cache

In addition to the I/O intensive database objects, customers running Oracle 11g Release 2 or later versions have the flexibility of using flash devices to turn on the "Database Smart Flash Cache" feature to reduce physical disk I/O. The Database Smart Flash Cache is a transparent extension of the database buffer cache using flash storage technology. The flash storage acts as a Level 2 cache to the (Level 1) SGA. Database Smart Flash Cache can significantly improve the performance of Oracle databases by reducing the amount of disk I/O at a much lower cost than adding an equivalent amount of RAM.

F20 Flash Accelerator offers an additional benefit - since it is a PCIe card, the I/O operations bypass disk controller overhead.

The database flash cache can be enabled by setting appropriate values for the following Oracle database parameters.


	db_flash_cache_file
	db_flash_cache_size
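For example, a minimal sketch (the flash cache file path and the 64 GB size are hypothetical; size the cache to your available flash capacity):

-- enable Database Smart Flash Cache (11.2); takes effect after an instance restart
ALTER SYSTEM SET db_flash_cache_file = '/flashfs/oradb_flash_cache.dat' SCOPE = SPFILE;
ALTER SYSTEM SET db_flash_cache_size = 64G SCOPE = SPFILE;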

Check the Oracle Database Administrator's Guide 11g Release 2 (11.2): Configuring Database Smart Flash Cache documentation for step-by-step instructions to configure Database Smart Flash Cache on flash devices.

