Friday Feb 28, 2014

[Solaris] Changing hostname, Parallel Compression, pNFS, Upgrading SRUs and Clearing Faults

[1] Solaris 11+ : changing hostname

Starting with Solaris 11, a system's identity (nodename) is configured through the config/nodename property of the svc:/system/identity:node SMF service. Solaris 10 and prior versions keep this information in the /etc/nodename configuration file.

The following example demonstrates the commands to change the hostname from "ihcm-db-01" to "ehcm-db-01".

eg.,
# hostname
ihcm-db-01

# svccfg -s system/identity:node listprop config
config                       application        
config/enable_mapping       boolean     true
config/ignore_dhcp_hostname boolean     false
config/nodename             astring     ihcm-db-01
config/loopback             astring     ihcm-db-01
#

# svccfg -s system/identity:node setprop config/nodename="ehcm-db-01"

# svccfg -s system/identity:node refresh  -OR- 
	# svcadm refresh svc:/system/identity:node
# svcadm restart system/identity:node

# svccfg -s system/identity:node listprop config
config                       application        
config/enable_mapping       boolean     true
config/ignore_dhcp_hostname boolean     false
config/nodename             astring     ehcm-db-01
config/loopback             astring     ehcm-db-01

# hostname
ehcm-db-01
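
A quick way to confirm the committed value is to read the property back with svcprop, which is part of the standard SMF tool set; in this example it should simply print the new nodename.

# svcprop -p config/nodename svc:/system/identity:node
ehcm-db-01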

[2] Parallel Compression

This topic is not Solaris specific, but it certainly helps Solaris users who are frustrated with the single-threaded implementation of the officially supported compression tools such as compress, gzip and zip.

pigz (pig-zee) is a parallel implementation of gzip that is well suited to the latest multi-processor, multi-core machines. By default, pigz breaks the input up into chunks of 128 KB and compresses each chunk in parallel with the help of lightweight threads. The number of compression threads is set by default to the number of online processors. Both the chunk size and the number of threads are configurable.

Compressed files can be restored to their original form using the -d option of either pigz or gzip. As per the man page, decompression is not parallelized out of the box, but it may still show some improvement compared to the older tools.
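
Both the thread count and the chunk size, along with the compression level, can be adjusted on the command line. A minimal sketch based on the options documented in the pigz man page (the file name is simply the one used in the example below):

$ pigz -p 32 -b 512 -9 PT8.53.04.tar     # 32 compression threads, 512 KB chunks, best compression
$ pigz -d PT8.53.04.tar.gz               # decompress; plain gunzip works equally well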

The following example demonstrates the advantage of using pigz over gzip in compressing and decompressing a large file.

eg.,

Original file, and the target hardware.

$ ls -lh PT8.53.04.tar 
-rw-r--r--   1 psft     dba         4.8G Feb 28 14:03 PT8.53.04.tar

$ psrinfo -pv
The physical processor has 8 cores and 64 virtual processors (0-63)
  The core has 8 virtual processors (0-7)
	...
  The core has 8 virtual processors (56-63)
    SPARC-T5 (chipid 0, clock 3600 MHz)

gzip compression.

$ time gzip --fast PT8.53.04.tar 

real    3m40.125s
user    3m27.105s
sys     0m13.008s

$ ls -lh PT8.53*
-rw-r--r--   1 psft     dba         3.1G Feb 28 14:03 PT8.53.04.tar.gz

/* the following prstat, vmstat outputs show that gzip is compressing the 
	tar file using a single thread - hence low CPU utilization. */

$ prstat -p 42510

   PID USERNAME  SIZE   RSS STATE   PRI NICE      TIME  CPU PROCESS/NLWP      
 42510 psft     2616K 2200K cpu16    10    0   0:01:00 1.5% gzip/1

$ prstat -m -p 42510

   PID USERNAME USR SYS TRP TFL DFL LCK SLP LAT VCX ICX SCL SIG PROCESS/NLWP  
 42510 psft      95 4.6 0.0 0.0 0.0 0.0 0.0 0.0   0  35  7K   0 gzip/1

$ vmstat 2

 r b w   swap  free  re  mf pi po fr de sr s0 s1 s2 s3   in   sy   cs us sy id
 0 0 0 776242104 917016008 0 7 0 0 0  0  0  0  0 52 52 3286 2606 2178  2  0 98
 1 0 0 776242104 916987888 0 14 0 0 0 0  0  0  0  0  0 3851 3359 2978  2  1 97
 0 0 0 776242104 916962440 0 0 0 0 0  0  0  0  0  0  0 3184 1687 2023  1  0 98
 0 0 0 775971768 916930720 0 0 0 0 0  0  0  0  0 39 37 3392 1819 2210  2  0 98
 0 0 0 775971768 916898016 0 0 0 0 0  0  0  0  0  0  0 3452 1861 2106  2  0 98

pigz compression.

$ time ./pigz PT8.53.04.tar 

real    0m25.111s	<== wall clock time is 25s compared to gzip's 3m 40s
user    17m18.398s
sys     0m37.718s

/* the following prstat, vmstat outputs show that pigz is compressing the 
        tar file using many threads - hence busy system with high CPU utilization. */

$ prstat -p 49734

   PID USERNAME  SIZE   RSS STATE   PRI NICE      TIME  CPU PROCESS/NLWP      
49734 psft       59M   58M sleep    11    0   0:12:58  38% pigz/66

$ vmstat 2

 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr s0 s1 s2 s3   in   sy   cs us sy id
 0 0 0 778097840 919076008 6 113 0 0 0 0 0  0  0 40 36 39330 45797 74148 61 4 35
 0 0 0 777956280 918841720 0 1 0 0 0  0  0  0  0  0  0 38752 43292 71411 64 4 32
 0 0 0 777490336 918334176 0 3 0 0 0  0  0  0  0 17 15 46553 53350 86840 60 4 35
 1 0 0 777274072 918141936 0 1 0 0 0  0  0  0  0 39 34 16122 20202 28319 88 4 9
 1 0 0 777138800 917917376 0 0 0 0 0  0  0  0  0  3  3 46597 51005 86673 56 5 39

$ ls -lh PT8.53.04.tar.gz 
-rw-r--r--   1 psft     dba         3.0G Feb 28 14:03 PT8.53.04.tar.gz

$ gunzip PT8.53.04.tar.gz 	<== shows that the pigz compressed file is 
                                         compatible with gzip/gunzip

$ ls -lh PT8.53*
-rw-r--r--   1 psft     dba         4.8G Feb 28 14:03 PT8.53.04.tar

Decompression.

$ time ./pigz -d PT8.53.04.tar.gz 

real    0m18.068s
user    0m22.437s
sys     0m12.857s

$ time gzip -d PT8.53.04.tar.gz 

real    0m52.806s <== compare gzip's 52s decompression time with pigz's 18s
user    0m42.068s
sys     0m10.736s

$ ls -lh PT8.53.04.tar 
-rw-r--r--   1 psft     dba         4.8G Feb 28 14:03 PT8.53.04.tar

Of course, other tools such as Parallel BZIP2 (PBZIP2), a parallel implementation of the bzip2 tool, are worth a try too. The idea here is simply to highlight that there are better tools out there to get the job done quickly compared to the existing/old tools bundled with the operating system distribution.


[3] Solaris 11+ : Upgrading SRU

Assuming the package repository is already set up for network updates on a Solaris 11+ system, the following commands are helpful in upgrading an SRU.

  • List all available SRUs in the repository.

    # pkg list -af entire
  • Upgrade to the latest and greatest.

    # pkg update

    To find out what changes will be made to the system, try a dry run of the system update.

    # pkg update -nv
  • Upgrade to a specific SRU.

    # pkg update entire@<FMRI>

    Find the Fault Managed Resource Identifier (FMRI) string by running the pkg list -af entire command.

Note that it is not so easy to downgrade an SRU to a lower version, as doing so may break the system. Should there be a need to downgrade or switch between different SRUs, relying on Boot Environments (BE) might be a good idea; a rough sketch follows this note. Check the Creating and Administering Oracle Solaris 11 Boot Environments document for details.
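
Here is a rough sketch of updating into a new boot environment and falling back to the pre-update SRU if needed. The BE names are examples only, and the exact pkg behavior around BE creation may vary with the pkg version in use.

# beadm create pre-sru-update        # optional: explicitly preserve the current state
# pkg update --be-name sru-latest    # update into a new, named boot environment

# beadm list                         # if a fallback is needed, locate the old BE ...
# beadm activate pre-sru-update      # ... activate it, and reboot to complete the fallback
# init 6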


[4] Parallel NFS (pNFS)

Just a quick note: RFC 5661, Network File System (NFS) Version 4.1, introduced a new feature called "Parallel NFS" or pNFS, which allows NFS clients to access storage devices containing file data directly. When file data for a single NFS v4 server is stored on multiple and/or higher-throughput storage devices, using pNFS can result in a significant improvement in file access performance. However, Parallel NFS is an optional feature in NFS v4.1. Though a prototype was made available a few years ago when OpenSolaris was still alive, as of today Solaris has no support for pNFS. Stay tuned for updates from the Oracle Solaris teams.

Here is an interesting write-up from one of our colleagues at Oracle|Sun (dated 2007) -- NFSv4.1's pNFS for Solaris.

(Credit to Rob Schneider and Tom Gould for initiating this topic)


[5] SPARC hardware : Check for and clear faults from ILOM

There are a couple of ways to check for faults using the ILOM command line interface.

By running:

  1. show faulty command from ILOM command prompt, or
  2. fmadm faulty command from within the ILOM faultmgmt shell

Once found, use the clear_fault_action property with the set command to clear the fault for a FRU.

The following example checks for the faulty FRUs from ILOM faultmgmt shell, then clears it out.

eg.,

-> start /SP/faultmgmt/shell
Are you sure you want to start /SP/faultmgmt/shell (y/n)? y

faultmgmtsp> fmadm faulty

------------------- ------------------------------------ -------------- --------
Time                UUID                                 msgid          Severity
------------------- ------------------------------------ -------------- --------
2014-02-26/16:17:11 18c62051-c81d-c569-a4e6-e418db2f84b4 PCIEX-8000-SQ  Critical
        ...
        ...
Suspect 1 of 1
   Fault class  : fault.io.pciex.rc.generic-ue
   Certainty    : 100%
   Affects      : hc:///chassis=0/motherboard=0/cpuboard=1/chip=2/hostbridge=4
   Status       : faulted

   FRU
      Status            : faulty
      Location          : /SYS/PM1
      Manufacturer      : Oracle Corporation
      Name              : TLA,PM,T5-4,T5-8
        ...

Description : A fault has been diagnosed by the Host Operating System.

Response    : The service required LED on the chassis and on the affected
              FRU may be illuminated.

        ...

faultmgmtsp> exit

-> set /SYS/PM1 clear_fault_action=True
Are you sure you want to clear /SYS/PM1 (y/n)? y
Set 'clear_fault_action' to 'True'

Note that this procedure clears the fault from the SP but not from the host.
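
On the Solaris host side, the fault management utility can be used to check for and clear the corresponding fault. A minimal sketch assuming the standard fmadm tooling (the UUID placeholder refers to the one reported in the faulty output):

# fmadm faulty                 # list the faults known to the host's fault manager
# fmadm repair <UUID>          # mark the faulty resource as repaired on the host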

Tuesday Feb 25, 2014

AIX customers: Run for the Hills ..

.. or keep your cool and embrace Solaris.

When Oracle acquired Sun, IBM tried to capitalize on the situation just like every other competitor Sun had: doubts were raised about Oracle's ability to turn Sun's hardware business around, and Solaris customers were advised to flee SPARC. Fast forward four years, and Oracle appears to have dispelled those doubts with a proven long-term commitment to the Solaris/SPARC business, consistent investment, and delivery on established roadmaps. Besides, Oracle has been innovating in the server space with engineered systems that are pre-integrated to reduce the cost and complexity of IT infrastructures while increasing productivity and performance.

On the other hand, judging by the recent turn of events at IBM, such as selling off critical server technologies, a declining data center business, employee furloughs and layoffs, it appears that Big Blue has its own struggles to deal with. In any case, irrespective of what is happening at IBM, AIX customers who are contemplating a migration to a modern operating platform that is reliable, secure, cloud-ready and offers a rich set of features to virtualize, consolidate, diagnose, debug and, most importantly, scale and perform, have an attractive alternative: Oracle Solaris. Act before it is too late.

Unfortunately, migrating larger deployments from one platform to another is not as easy as migrating desktop users from one operating system to another. So, Oracle put together a set of documents to make the AIX to Solaris transition as smooth as possible for existing AIX customers. Access the AIX-to-Solaris migration pages at:

     http://www.oracle.com/aixtosolaris
     Modernizing IBM AIX/Power to Oracle Solaris/SPARC (Oracle Technology Network)

The above pages have pointers to white papers such as IBM AIX to Oracle Solaris Technology Mapping Guide (for system admins, power users), Simplify the Migration of Oracle Database and Oracle Applications from AIX to Oracle Solaris (for DBAs, application specific admins) and IBM AIX Technologies Compared to Oracle Solaris 11 along with hands-on labs, training, blogs and other useful resources. Check those out, and use the contact information available in those pages to speak or chat with relevant Oracle team(s) who can help get started with the migration process. Good luck.

Saturday Nov 30, 2013

Things to Consider when Planning the Redo logs for Oracle Database

This is a very basic and generic discussion from a performance point of view. Customers still have to do their due diligence in understanding redo logs, and how they work in Oracle database, before finalizing the redo log configuration for their deployments.

  • size them properly
    • log writer writes to a single redo log file until either it is full or a manual log switch is requested
          Oracle supports multiplexed redo logs for availability, but this behavior of writing to a file until it is full or a log switch happens still holds
    • if the transactions generate a lot of redo before a database commit, consider large sizes in tens of gigabytes for redo logs
    • if not sized properly, it leads to unnecessary log switches, which in turn increase checkpoint activity, resulting in an unnecessary slowdown of database operations
          two redo logs, each at least 5G in size, might be a good start. Observe the log switches and checkpoints, and increase (or decrease, though there is no performance benefit) the file size accordingly; a quick way to watch the log switch frequency is sketched after this list

  • do not mix redo logs with the rest of the database or anything else
    • in a normally functioning database, most of the time, the log writer simply writes redo entries sequentially to the redo logs
    • any slowdown in writing the redo data to the logs hurts the performance of the database
    • best not to share the disks/volumes on which redo logs are hosted, with anything else
          set of disks, volumes exclusive to redo logs, that is

  • ensure that the underlying disks or I/O medium used to store the redo logs are fast, optimally configured and can sustain the amount of I/O bandwidth needed to write the redo entries to the redo logs
        if those requirements are not met, it could lead to 'log file sync' waits, which will slow down the database transactions

  • redo logs on non-volatile flash storage may have performance benefits over the traditional hard disk drives
    • check this blog post out, Redo logs on F40 PCIe Cards, for a related discussion (keywords: 4K block size for redo logs, block alignment)
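
As a rough aid for the sizing exercise above, the log switch frequency can be watched through V$LOG_HISTORY, and larger redo log groups can be added online. A minimal SQL sketch; the file path, group number and size are examples only.

-- count log switches per hour; frequent switches usually point to undersized redo logs
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour, COUNT(*) AS log_switches
  FROM v$log_history
 GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
 ORDER BY 1;

-- add a larger redo log group; drop the smaller groups once they become INACTIVE
ALTER DATABASE ADD LOGFILE GROUP 5 ('/redo/redo05.log') SIZE 5G;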

Monday Oct 14, 2013

[Script] Breakdown of Oracle SGA into Solaris Locality Groups

Goal: for a given process, find out how the SGA was allocated in different locality groups on a system running Solaris operating system.

Download the shell script, sga_in_lgrp.sh. The script accepts any Oracle database process id as input, and prints out the memory allocated in each locality group.

Usage: ./sga_in_lgrp.sh <pid>

eg.,

# prstat -p 12820

   PID USERNAME  SIZE   RSS STATE   PRI NICE      TIME  CPU PROCESS/NLWP
 12820 oracle     32G   32G sleep    60  -20   0:00:16 0.0% oracle/2

# ./sga_in_lgrp.sh 12820

Number of Locality Groups (lgrp): 4
------------------------------------

lgroup 1 :   8.56 GB
lgroup 2 :   6.56 GB
lgroup 3 :   6.81 GB
lgroup 4 :  10.07 GB

Total allocated memory:  32.00 GB

For those who want to have a quick look at the source code, here it is.

# cat sga_in_lgrp.sh

#!/bin/bash

# check the argument count
if [ $# -lt 1 ]
then
        echo "usage: ./sga_in_lgrp.sh <oracle pid>"
        exit 1
fi

# find the number of locality groups
lgrp_count=$(kstat -l lgrp | tail -1 | awk -F':' '{ print $2 }')
echo -e "\nNumber of Locality Groups (lgrp): $lgrp_count"
echo -e "------------------------------------\n"

# save the ism output using pmap
pmap -sL $1 | grep ism | sort -k5 > /tmp/tmp_pmap_$1

# calculate the total amount of memory allocated in each lgroup
for i in `seq 1 $lgrp_count`
do
        echo -n "lgroup $i : "
        grep "$i   \[" /tmp/tmp_pmap_$1 | awk '{ print $2 }' | sed 's/K//g' | 
               awk '{ sum+=$1} END {printf ("%6.2f GB\n", sum/(1024*1024))}'
done

echo
echo -n "Total allocated memory: "
awk '{ print $2 }' /tmp/tmp_pmap_$1 | sed 's/K//g' | awk '{ sum+=$1} END 
         {printf ("%6.2f GB\n\n", sum/(1024*1024))}'

rm /tmp/tmp_pmap_$1

Like many things in life, there will always be a better or simpler way to achieve this. If you find one, do not fret over this approach. Please share, if possible.

Saturday Aug 31, 2013

[Oracle Database] Unreliable AWR reports on T5 & Redo logs on F40 PCIe Cards

(1) AWR report shows bogus wait events and times on SPARC T5 servers

Here is a sample from one of the Oracle 11g R2 databases running on a SPARC T5 server with Solaris 11.1 SRU 7.5

Top 5 Timed Foreground Events

Event                          Waits     Time(s)      Avg wait (ms)  % DB time     Wait Class
latch: cache buffers chains    278,727   812,447,335  2914850        13307324.15   Concurrency
library cache: mutex X         212,595   449,966,330  2116542        7370136.56    Concurrency
buffer busy waits              219,844   349,975,251  1591925        5732352.01    Concurrency
latch: In memory undo latch    25,468    37,496,800   1472310        614171.59     Concurrency
latch free                     2,602     24,998,583   9607449        409459.46     Other

Reason:
Unknown. There is a pending bug 17214885 - Implausible top foreground wait times reported in AWR report.

Tentative workaround:
Disable power management as shown below.

# poweradm set administrative-authority=none

# svcadm disable power
# svcadm enable power

Verify the setting by running poweradm list.

Also disable NUMA I/O object binding by setting the following parameter in /etc/system (requires a system reboot).

set numaio_bind_objects=0

Oracle Solaris 11 added support for NUMA I/O architecture. Here is a brief explanation of NUMA I/O from Solaris 11 : What's New web page.

Non-Uniform Memory Access (NUMA) I/O : Many modern systems are based on a NUMA architecture, where each CPU or set of CPUs is associated with its own physical memory and I/O devices. For best I/O performance, the processing associated with a device should be performed close to that device, and the memory used by that device for DMA (Direct Memory Access) and PIO (Programmed I/O) should be allocated close to that device as well. Oracle Solaris 11 adds support for this architecture by placing operating system resources (kernel threads, interrupts, and memory) on physical resources according to criteria such as the physical topology of the machine, specific high-level affinity requirements of I/O frameworks, actual load on the machine, and currently defined resource control and power management policies.

Do not forget to roll back these changes after applying the fix for database bug 17214885, when it becomes available.

(2) Redo logs on F40 PCIe cards (non-volatile flash storage)

Per the F40 PCIe card user's guide, the Sun Flash Accelerator F40 PCIe Card is designed to provide the best performance for data transfers that are multiples of 8K in size and that use addresses that are 8K aligned. To achieve optimal performance, the size of the read/write data should be an integer multiple of this block size and the data transferred should be block aligned. I/O operations that are not block aligned and that do not use sizes that are a multiple of the block size may suffer performance degradation, especially for write operations.

Oracle redo log files default to a block size that is equal to the physical sector size of the disk, typically 512 bytes. And in a normally functioning environment, the database writes to the redo log most of the time. Oracle Database supports a maximum block size of 4K for redo logs. Hence, to achieve optimal performance for redo write operations on F40 PCIe cards, tune the environment as shown below.

  1. Configure the following init parameters
    _disk_sector_size_override=TRUE
    _simulate_disk_sectorsize=4096
    
  2. Create redo log files with 4K block size
    eg.,
    SQL> ALTER DATABASE ADD LOGFILE '/REDO/redo.dbf' size 20G blocksize 4096;
    
  3. [Solaris only] Append the following line to /kernel/drv/sd.conf (requires a reboot)
    sd-config-list="ATA     3E128-TS2-550B01","disksort:false,\
                 cache-nonvolatile:true, physical-block-size:4096";
    
  4. [Solaris only][F20] To enable maximum throughput from the MPT driver, append the following line to /kernel/drv/mpt.conf and reboot the system.
    mpt_doneq_thread_n_prop=8;
    

This tip is applicable to all kinds of flash storage that Oracle sells or sold including F20/F40 PCIe cards and F5100 storage array. sd-config-list in sd.conf may need some adjustment to reflect the correct vendor id and product id.
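
To verify that newly created redo log groups actually use the 4K block size, a quick check against V$LOG can be used (assuming Oracle 11g R2 or later, where V$LOG exposes a BLOCKSIZE column):

SQL> SELECT group#, thread#, blocksize, bytes/1024/1024/1024 AS size_gb, status FROM v$log;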

Tuesday Jul 30, 2013

Oracle Tips : Solaris lgroups, CT optimization, Data Pump, Recompilation of Objects, ..

1. [Re]compiling all objects in a schema
exec DBMS_UTILITY.compile_schema(schema => 'SCHEMA');

To recompile only the invalid objects in parallel:

exec UTL_RECOMP.recomp_parallel(<NUM_PARALLEL_THREADS>, 'SCHEMA');

A NULL value for SCHEMA recompiles all invalid objects in the database.
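
To see what is invalid before and after recompiling, the standard DBA_OBJECTS view can be queried; a minimal sketch:

SELECT owner, object_type, object_name
  FROM dba_objects
 WHERE status = 'INVALID'
 ORDER BY owner, object_type, object_name;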


2. SGA breakdown in Solaris Locality Groups (lgroup)

To find the breakdown, execute pmap -L <pid> | grep shm. Then separate the lines related to each locality group and sum up the values in the 2nd column to arrive at the total SGA memory allocated in that locality group.

(I'm pretty sure there will be a much easier way that I am not currently aware of.)
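
A condensed, single-pipeline variant of the same idea is sketched below. It assumes the mapping size (in KB) is in column 2 and the locality group in column 5 of the pmap -sL output, so verify those column positions against the actual output on your system before relying on the numbers.

$ pmap -sL <pid> | grep shm | awk '{ gsub(/K/, "", $2); sum[$5] += $2 }
      END { for (l in sum) printf("lgroup %s : %6.2f GB\n", l, sum[l]/(1024*1024)) }'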


3. Default values for shared pool, java pool, large pool, ..

If the *pool parameters were not set explicitly, executing the following query is one way to find out what they are currently set to.

eg.,
SQL> select * from v$sgainfo;

NAME                                  BYTES RES
-------------------------------- ---------- ---
Fixed SGA Size                      2171296 No
Redo Buffers                      373620736 No
Buffer Cache Size                8.2410E+10 Yes
Shared Pool Size                 1.7180E+10 Yes
Large Pool Size                   536870912 Yes
Java Pool Size                   1879048192 Yes
Streams Pool Size                 268435456 Yes
Shared IO Pool Size                       0 Yes
Granule Size                      268435456 No
Maximum SGA Size                 1.0265E+11 No
Startup overhead in Shared Pool  2717729536 No
Free SGA Memory Available                 0
12 rows selected.

4. Fix to PLS-00201: identifier 'GV$SESSION' must be declared error

Grant select privilege on gv_$SESSION to the owner of the database object that failed to compile.

eg.,
SQL> alter package OWF_MGR.FND_SVC_COMPONENT compile body;
Warning: Package Body altered with compilation errors.

SQL> show errors
Errors for PACKAGE BODY OWF_MGR.FND_SVC_COMPONENT:

LINE/COL ERROR
-------- -----------------------------------------------------------------
390/22   PL/SQL: Item ignored
390/22   PLS-00201: identifier 'GV$SESSION' must be declared

SQL> grant select on gv_$SESSION to OWF_MGR;
Grant succeeded.

SQL> alter package OWF_MGR.FND_SVC_COMPONENT compile body;
Package body altered.

5. Solaris Critical Thread (CT) optimization for Oracle log writer (lgwr)

Critical Thread is a scheduler optimization available in Oracle Solaris 10 Update 10 and later releases. Latency-sensitive, single-threaded components of software, such as Oracle Database's log writer, benefit from CT optimization.

On a high level, LWPs marked as critical will be granted more exclusive access to the hardware. For example, on SPARC T4 and T5 systems, such a thread will be assigned exclusive access to a core as much as possible. CT optimization won't delay scheduling of any runnable thread in the system.

Critical Thread optimization is enabled by default. However, users of the system have to hint the OS by explicitly marking a thread or two "critical" as shown below.

priocntl -s -c FX -m 60 -p 60 -i pid <pid_of_critical_single_threaded_process>

From the database point of view, the log writer (lgwr) is one such process that can benefit from CT optimization on the Solaris platform. Oracle DBAs can either make the lgwr process 'critical' once the database is up and running (a sketch follows), or simply patch the 11.2.0.3 database software by installing RDBMS patch 12951619 to let the database take care of it automatically. I believe Oracle 12c does it by default. Future releases of 11g software may make lgwr critical out of the box.
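
For the manual route, a hedged sketch of finding the lgwr process of a given instance and marking it critical (the SID "PROD" is only an example; the priocntl invocation is the same one shown above):

# pgrep -f ora_lgwr_PROD
# priocntl -s -c FX -m 60 -p 60 -i pid $(pgrep -f ora_lgwr_PROD)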

Those who install the database patch 12951619 need to carefully follow the post installation steps documented in the patch README to avoid running into unwanted surprises.


6. ORA-14519 error while importing a table from a Data Pump export dump
ORA-14519: Conflicting tablespace blocksizes for table : Tablespace XXX block \
size 32768 [partition specification] conflicts with previously specified/implied \
tablespace YYY block size 8192
 [object-level default]
Failing sql is:
CREATE TABLE XYZ
..

All partitions in table XYZ are using 32K blocks whereas the implicit default partition is pointing to an 8K block tablespace. The workaround is to use the REMAP_TABLESPACE option on the Data Pump impdp command line to remap the implicit default tablespace of the partitioned table to the tablespace where the rest of the partitions reside, as sketched below.
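
A hedged sketch of the workaround, reusing the placeholder names from the error above (YYY is the implied object-level default tablespace, XXX the tablespace used by the partitions):

impdp <userid>/<password> directory=<directory> dumpfile=<dump_file>.dmp \
    tables=XYZ remap_tablespace=YYY:XXX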


7. Index building task in Data Pump import process

When the Data Pump import process is running, index building is performed with just one thread by default. This becomes a bottleneck and causes the import to take a long time, especially if many large tables with millions of rows are being imported into the target database. One way to speed up the import is to skip index building during the data import task with the help of the EXCLUDE=INDEX impdp command line option, and then extract the index definitions for all the skipped indexes from the Data Pump dump file as shown below.

impdp <userid>/<password> directory=<directory> dumpfile=<dump_file>.dmp \
    sqlfile=<index_def_file>.sql INCLUDE=INDEX

Edit <index_def_file>.sql to set the desired degree of parallelism for building each index (a sketch follows), and finally execute <index_def_file>.sql to build the indexes once the data import task is complete.
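
For example, a typical edit to one of the generated CREATE INDEX statements might look like the following; all object names here are hypothetical.

-- in <index_def_file>.sql: build the index with a higher degree of parallelism
CREATE INDEX app.order_items_ix ON app.order_items (order_id)
  TABLESPACE app_idx
  PARALLEL 16;

-- after the build completes, reset the degree of parallelism for normal operation
ALTER INDEX app.order_items_ix NOPARALLEL;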

Sunday Jun 30, 2013

Solaris Tips : Assembler, Format, File Descriptors, Ciphers & Mount Points

1. Most Oracle software installers need assembler

Assembler (as) is not installed by default on Solaris 11.
     Find and install it as shown below.

eg.,
# pkg search assembler
INDEX       ACTION VALUE                           PACKAGE        
pkg.fmri    set    solaris/developer/assembler     pkg:/developer/assembler@0.5.11-0.175.1.5.0.3.0

# pkg install pkg:/developer/assembler

The assembler binary used to be under the /usr/ccs/bin directory on Solaris 10 and prior versions.
     There is no /usr/ccs/bin on Solaris 11; its contents were moved to /usr/bin



2. Non-interactive retrieval of the entire list of disks that format reports

If the format utility cannot show the entire list of disks in a single screen on stdout, it shows some and prompts the user to "hit space for more or s to select" before moving to the next screen with a few more disks. Run one of the following commands to retrieve the entire list of disks in a single shot.

format < /dev/null

	-or-

echo "\n" | format



3. Finding system wide file descriptors/handles in use

Run the following kstat command as any user (privileged or non-privileged).

kstat -n file_cache -s buf_inuse

Going through /proc (process filesystem) is less efficient and may lead to inaccurate results due to the inclusion of duplicate file handles.



4. ssh connection to a Solaris 11 host fails with error Couldn't agree a client-to-server cipher (available: aes128-ctr,aes192-ctr,aes256-ctr,arcfour128,arcfour256,arcfour)

Solution: add 3des-cbc to the list of accepted ciphers in the sshd configuration file.

Steps:

  1. Append the following line to /etc/ssh/sshd_config
    Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour128,arcfour256,\
       arcfour,3des-cbc
  2. Restart ssh daemon
    svcadm -v restart ssh



5. UFS: Finding the last mount point for a device

The fsck utility reports the last mount point on which the filesystem was mounted (it won't show the mount options, though). The filesystem should be unmounted when running fsck.

eg.,
# fsck -n /dev/dsk/c0t5000CCA0162F7BC0d0s6
** /dev/rdsk/c0t5000CCA0162F7BC0d0s6 (NO WRITE)
** Last Mounted on /export/oracle
** Phase 1 - Check Blocks and Sizes
...
...

Friday May 31, 2013

Oracle Internet Directory 11g Benchmark on SPARC T5

SUMMARY

System Under Test (SUT)     Oracle's SPARC T5-2 server
Software     Oracle Internet Directory 11gR1-PS6
Target Load     50 million user entries
Reference URL     OID/T5 benchmark white paper

Oracle Internet Directory (OID) is an LDAP v3 Directory Server that has multi-threaded, multi-process, multi-instance process architecture with Oracle database as the directory store.


BENCHMARK WORKLOAD DESCRIPTION

Five test scenarios were executed in this benchmark - each test scenario performing a different type of LDAP operation. The key metrics are throughput -- the number of operations completed per second, and latency -- the time it took in milliseconds to complete an operation.

TEST SCENARIOS & RESULTS

1. LDAP Search operation : search for and retrieve specific entries from the directory

In this test scenario, each LDAP Search operation matches a single unique entry. Each Search operation results in the lookup of an entry in such a way that no client looks up the same entry twice, no two clients look up the same entry, and all entries are looked up randomly.

#Clients    Throughput (Operations/Second)    Latency (milliseconds)
1,000       944,624                           1.05

2. LDAP Add operation : add entries, their object classes, attributes and values to the directory

In this test scenario, 16 concurrent LDAP clients added 500,000 entries of object class InetOrgPerson with 21 attributes to the directory.

#Clients    Throughput (Operations/Second)    Latency (milliseconds)
16          1,000                             15.95

3. LDAP Compare operation : compare a given attribute value to the attribute value in a directory entry

In this test scenario, userpassword attribute was compared. That is, each LDAP Compare operation matches user password of a user.

#Clients    Throughput (Operations/Second)    Latency (milliseconds)
1,000       594,426                           1.68

4. LDAP Modify operation : add, delete or replace attributes for entries

In this test scenario, 50 concurrent LDAP clients each updated a unique entry at a time, and a total of 50 million entries were updated. The attribute being modified was not indexed.

#Clients    Throughput (Operations/Second)    Latency (milliseconds)
50          16,735                            2.98

5. LDAP Authentication operation : authenticates the credentials of a user

In this test scenario, 1000 concurrent LDAP clients authenticated 50 million users.

#Clients    Throughput (Operations/Second)    Latency (milliseconds)
1,000       305,307                           3.27

BONUS: LDAP Mixed operations Test

In this test scenario, 1000 LDAP clients were used to perform LDAP Search, Bind and Modify operations concurrently.
Operation breakdown (load distribution): Search: 65%. Bind: 30%. Modify: 5%

LDAP Operation    #Clients    Throughput (Operations/Second)    Latency (milliseconds)
Search            650         188,832                           3.86
Bind              300         87,159                            1.08
Modify            50          14,528                            12

And finally, the:

HARDWARE CONFIGURATION

1 x Oracle SPARC T5-2 Server
    » 2 x 3.6 GHz SPARC T5 sockets each with 16 Cores (Total Cores: 32) and 8 MB L3 cache
    » 512 GB physical memory
    » 2 x 10 GbE cards
    » 1 x Sun Storage F5100 Flash Array with 80 flash modules
    » Oracle Solaris 11.1 operating system

ACKNOWLEDGEMENTS

Major credit goes to our colleague, Ramaprakash Sathyanarayan

Friday Apr 12, 2013

Siebel 8.1.1.4 Benchmark on SPARC T5

Hardly six months after announcing Siebel 8.1.1.4 benchmark results on Oracle SPARC T4 servers, we have a brand new set of Siebel 8.1.1.4 benchmark results on Oracle SPARC T5 servers. There have been no updates to the Siebel benchmark kit in the last couple of years, so we continued to use the Siebel 8.1.1.4 benchmark workload to measure the performance of Siebel Financial Services Call Center and Order Management business transactions on the recently announced SPARC T5 servers.

Benchmark Details

The latest Siebel 8.1.1.4 benchmark was executed on a mix of SPARC T5-2, SPARC T4-2 and SPARC T4-1 servers. The benchmark test simulated the actions of a large corporation with 40,000 concurrent active users. To date, this is the highest user count we achieved in a Siebel benchmark.


User Load Breakdown & Achieved Throughput

Siebel Application Module         %Total Load    #Users    Business Trx per Hour
Financial Services Call Center    70             28,000    273,786
Order Management                  30             12,000    59,553
Total                             100            40,000    333,339

Average Transaction Response Times for both Financial Services Call Center and Order Management transactions were under one second.


Software & Hardware Specification

  • Application Server : Siebel 8.1.1.4 | 2 x SPARC T5-2 | per server: 2 chips, 32 cores, 256 vCPUs, 3.6 GHz SPARC-T5, 512 GB memory | Solaris 10 1/13 (S10U11)
  • Database Server : Oracle 11g R2 11.2.0.2 | 1 x SPARC T4-2 | per server: 2 chips, 16 cores, 128 vCPUs, 2.85 GHz SPARC-T4, 256 GB memory | Solaris 10 8/11 (S10U10)
  • Web Server : iPlanet Web Server 7.0.9 (7 U9) | 1 x SPARC T4-1 | per server: 1 chip, 8 cores, 64 vCPUs, 2.85 GHz SPARC-T4, 128 GB memory | Solaris 10 8/11 (S10U10)
  • Load Generator : Oracle Application Test Suite 9.21.0043 | 1 x SunFire X4200 | per server: 2 chips, 4 cores, 4 vCPUs, 2.6 GHz AMD Opteron 285 SE, 16 GB memory | Windows 2003 R2 SP2
  • Load Drivers (Agents) : Oracle Application Test Suite 9.21.0043 | 8 x SunFire X4170 | per server: 2 chips, 12 cores, 12 vCPUs, 2.93 GHz Intel Xeon X5670, 48 GB memory | Windows 2003 R2 SP2

Additional Notes:

  • Siebel Gateway Server was configured to run on one of the application server nodes
  • Four Siebel application servers were configured in the Siebel Enterprise to handle 40,000 concurrent users
    • - Each SPARC T5-2 was configured to run two Siebel application server instances
    • - Each of the Siebel application server instances on the SPARC T5-2 servers was separated using the Solaris virtualization technology, Zones
    • - 40,000 concurrent user sessions were load balanced across all four Siebel application server instances
  • Siebel database was hosted on a Sun Storage F5100 Flash Array consisting of 80 x 24 GB flash modules (FMODs)
    • - Siebel 8.1.1.4 benchmark workload is not I/O intensive and does not require flash storage for better I/O performance
  • Fourteen iPlanet Web Server virtual servers were configured with Siebel Web Server Extension (SWSE) plug-in to handle 40,000 concurrent user load
    • - All fourteen iPlanet Web Server instances forwarded HTTP requests from Siebel clients to all four Siebel application server instances in a round robin fashion
  • Oracle Application Test Suite (OATS) was stable and held up amazingly well over the entire duration of the test run.
  • The benchmark test results were validated and thoroughly audited by the Siebel benchmark and PSR teams
    • - Nothing new here. All Sun-published Siebel benchmarks, including the SPARC T4 one, were properly audited before being released to the outside world

Resource Utilization

Component                     #Users    CPU%     Memory Footprint
Gateway/Application Server    20,000    67.03    205.54 GB
Application Server            20,000    66.09    206.24 GB
Database Server               40,000    33.43    108.72 GB
Web Server                    40,000    29.48    14.03 GB

Finally, how does this benchmark stack up against other published benchmarks? Short answer is "very well". Head over to the Oracle Siebel Benchmark White Papers webpage to do the comparison yourself.



[Credit to our hard working colleagues in SAE, Siebel PSR, benchmark and Oracle Platform Integration (OPI) teams. Special thanks to Sumti Jairath and Venkat Krishnaswamy for the last minute fire drill]

Copy of this blog post is also available at:
Siebel 8.1.1.4 Benchmark on SPARC T5

Tuesday Mar 05, 2013

SuperCluster Best Practices : Deploying Oracle 11g Database in Zones

To be clear, this post is about a white paper that's been out there for more than two months. Access it through the following url.

  Best Practices for Deploying Oracle Solaris Zones with Oracle Database 11g on SPARC SuperCluster

The focus of the paper is on databases and zones. On SuperCluster, customers have the choice of running their databases in logical domains that are dedicated to running Oracle Database 11g R2. With exclusive access to Exadata Storage Servers, those domains are aptly called "Database" domains. If the requirements mandate, it is possible to create and use all logical domains as "database domains" or "application domains" or a mix of those. Since the focus is on databases, the paper talks only about the database domains and how zones can be created, configured and used within each database domain for fine-grained control over multiple databases consolidated in a SuperCluster environment.

When multiple databases (including RAC databases) are being consolidated in database logical domains, zones are one of the options that fulfill requirements such as fault, operation, network, security and resource isolation; multiple RAC instances in a single logical domain; and separate identity and independent manageability for database instances.

The best practices cover the following topics. Some of those are applicable to standalone, non-engineered environments as well.

Solaris Zones

  • CPU, memory and disk space allocation
  • Zone Root on Sun ZFS Storage Appliance
  • Network configuration
  • Use of DISM
  • Use of ZFS filesystem
  • SuperCluster specific zone deployment tool, ssc_exavm
  • ssctuner utility

Oracle Database

  • Exadata Storage Grid (Disk Group) Configuration
  • Disk Group Isolation
    • Shared Storage approach
    • Dedicated Storage Server approach
  • Resizing Grid Disks

Oracle RAC Configuration
Securing the Databases, and

Example Database Consolidation Scenarios

  • Consolidation example using Half-Rack SuperCluster
  • Consolidation example using Full-Rack SuperCluster

Acknowledgements

A large group of experts reviewed the material and provided quality feedback. Hence they deserve credit for their work and time. Listed below are some of those reviewers (sincere apologies if I missed listing any major contributors).

Kesari Mandyam, Binoy Sukumaran, Gowri Suserla, Allan Packer, Jennifer Glore, Hazel Alabado, Tom Daly, Krishnan Shankar, Gurubalan T, Rich long, Prasad Bagal, Lawrence To, Rene Kundersma, Raymond Dutcher, David Brean, Jeremy Ward, Suzi McDougall, Ken Kutzer, Larry Mctintosh, Roger Bitar, Mikel Manitius
