Tuesday Jun 29, 2010

Identifying Ideal Oracle Database Objects for Flash Storage and Accelerators

The Sun Storage F5100 Flash Array and the Sun Flash Accelerator F20 PCIe Card help accelerate I/O-bound applications such as databases. The following guidelines help identify Oracle database objects that can benefit from flash storage. Even though the title explicitly says "Oracle", some of these guidelines apply to other databases and non-database products as well. Exercise discretion, and evaluate and experiment before implementing these recommendations as-is.

  • Heavily used database tables and indexes are ideal candidates for flash storage

    - Database workloads with no I/O bottlenecks may not show significant performance gains
    - Database workloads with severe I/O bottlenecks can fully realize the benefits of flash devices

      • The "Top 5 Timed Foreground Events" section in any AWR report collected on the target database system is useful in determining whether disk I/O is a bottleneck

        • A large number of waits and a large amount of DB time spent waiting on events under the User I/O wait class are an indication of I/O contention on the system
  • Identify the I/O intensive tables and indexes in a database with the help of Oracle Enterprise Manager Database Control, a web-based tool for managing Oracle databases

    - The "Performance" page in OEM Database Control helps you quickly identify and analyze performance problems
    - Both historical and real-time database activity can be viewed from the "Performance" page
      • The same page also provides information about the top resource-consuming database objects
  • An alternate way to identify the I/O intensive objects in a database is to analyze the AWR reports generated over a period of time, especially when the database is busy

    - Scan through the "SQL ordered by .." tables in each AWR report
    - Look for the top INSERT and UPDATE statements with high elapsed times and DB times
      • Database tables that are updated frequently and repeatedly, along with the indexes created on such tables, are good candidates for flash devices

    - "SQL ordered by Reads" is useful in identifying the database tables with a large number of physical reads
      • Database tables from which large amounts of data are read/fetched from physical disks are also good candidates for flash devices

        • To identify I/O intensive indexes, look through the explain plans of the top SQL statements sorted by physical reads

  • Examine the "File IO Stats" section in any AWR report collected on the target database system

    - Consider moving the database files with heavy reads, heavy writes and relatively high average buffer wait times to flash volumes
  • Examine the "Segments by Physical Reads", "Segments by Physical Writes" and "Segments by Buffer Busy Waits" sections in the AWR report

    - Database tables and indexes with a large number of physical reads, physical writes or buffer busy waits may benefit from flash acceleration (a query that surfaces such segments outside of AWR is sketched after this list)
  • Sun flash storage may not be ideal for storing Oracle redo logs

    - Sun Flash Modules (FMODs) in the F5100 array and the F20 Flash Accelerator Card are optimized for a 4K sector size

        A redo log write that is not aligned with the beginning of the 4K physical sector results in significant performance degradation

    - In general, Oracle redo log files default to a block size that is equal to the physical sector size of the disk, which is typically 512 bytes

      • The majority of recent Oracle Database platforms detect the 4K sector size on Sun flash devices
      • On file systems created on Sun flash devices, the Oracle database automatically creates redo log files with a 4K block size
        • However, with a 4K block size for the redo logs, there will be a significant increase in redo wastage that may offset the expected performance gains
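
Outside of AWR, the V$SEGMENT_STATISTICS dynamic performance view offers a quick way to spot the hottest segments since instance startup. Here is a minimal sketch; swap the statistic name for 'physical writes' or 'buffer busy waits' to rank the segments by those statistics instead.

SQL> select owner, object_name, object_type, value
  2  from (select owner, object_name, object_type, value
  3        from V$SEGMENT_STATISTICS
  4        where statistic_name = 'physical reads'
  5        order by value desc)
  6  where rownum <= 10
  7  /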

F5100 Flash Storage and F20 PCIe Flash Accelerator Card as Oracle Database Smart Flash Cache

In addition to hosting I/O intensive database objects, customers running Oracle 11g Release 2 or later versions have the flexibility of using flash devices to turn on the "Database Smart Flash Cache" feature to reduce physical disk I/O. The Database Smart Flash Cache is a transparent extension of the database buffer cache using flash storage technology. The flash storage acts as a Level 2 cache to the (Level 1) SGA. Database Smart Flash Cache can significantly improve the performance of Oracle databases by reducing the amount of disk I/O, at a much lower cost than adding an equivalent amount of RAM.

The F20 Flash Accelerator offers an additional benefit: since it is a PCIe card, I/O operations bypass the disk controller overhead.

The database flash cache can be enabled by setting appropriate values for the following Oracle database parameters.


	db_flash_cache_file
	db_flash_cache_size
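
For example, assuming the flash device is presented to the database server as a file system mounted at /flashfs (a hypothetical mount point) and a 64 GB flash cache is desired, the parameters could be set as shown below. Both parameters are stored in the server parameter file, so the instance must be restarted for the new settings to take effect.

SQL> alter system set db_flash_cache_file='/flashfs/oracle_flash_cache.dat' scope=spfile;

System altered.

SQL> alter system set db_flash_cache_size=64G scope=spfile;

System altered.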

Check the Oracle Database Administrator's Guide 11g Release 2 (11.2) : Configuring Database Smart Flash Cache documentation for step-by-step instructions on configuring Database Smart Flash Cache on flash devices.

Wednesday Apr 28, 2010

Oracle Database: Say No Thanks! to a New Index

.. unless you are working with a database that is largely read-only, or unless the new index fixes a critical performance issue with no side effects.

Two topics are covered in this blog entry, with plenty of simple examples:

  1. Why creating new indexes on a heavily modified table may not be a good idea, and
  2. How to identify unused indexes
Read on.

Indexes are double-edged swords: they may improve the performance of targeted queries, but in some cases they may inadvertently degrade the performance of other queries that were not targeted. In any case, exercise caution while adding a new index to the database. After adding a new index, monitor the overall performance of the database, not just the targeted query.

If DML statements that modify data (INSERT, UPDATE or DELETE) are executed a large number of times on a table, make sure that the addition of a new index on the same table does not negatively affect the performance of those DML operations. Usually this is not a problem if the SQL being executed simply retrieves, but does not add or modify, the existing data. In all other cases, there is some performance overhead induced by each new index. For example, if there are 10 indexes created on a table DUMMY, adding a new row of data to the table DUMMY may require the database management system to update all 10 indexes behind the scenes.

Here is an example demonstrating the performance overhead of a new index on a table.


SQL> CREATE TABLE VIDEO
  2  (BARCODE VARCHAR(10) NOT NULL,
  3  TITLE VARCHAR2(25) NOT NULL,
  4  FORMAT VARCHAR2(10),
  5  PRICE NUMBER,
  6  DATA_OF_RELEASE DATE)
  7  /

Table created.

SQL> insert into VIDEO values ('9301224321', 'AVATAR', 'BLU-RAY', 19.99, '22-APR-2010');

1 row created.

..

SQL> insert into VIDEO values ('3782460017', 'THE SIMPSONS - SEASON 20', 'BLU-RAY', 29.99, '04-JUL-2009');

1 row created.

SQL> select * from VIDEO;

BARCODE 	TITLE				    FORMAT		 PRICE DATA_OF_RELEASE
--------------- ----------------------------------- --------------- ---------- ---------------
9301224321	AVATAR				    BLU-RAY		 19.99 22-APR-10
7619203043	BEN-HUR 			    VHS 		  9.79 12-MAR-63
7305832093	THE MATRIX			    DVD 		 12.29 03-DEC-99
4810218795	MEMENTO 			    DVD 		  8.49 02-FEB-02
3782460017	THE SIMPSONS - SEASON 20	    BLU-RAY		 29.99 04-JUL-09

SQL> select * from USER_INDEXES where TABLE_NAME = 'VIDEO';

no rows selected

SQL> alter session set events '10046 trace name context forever, level 8';

Session altered.

SQL> select * from VIDEO where FORMAT = 'BLU-RAY';

BARCODE 	TITLE				    FORMAT		 PRICE DATA_OF_RELEASE
--------------- ----------------------------------- --------------- ---------- ---------------
9301224321	AVATAR				    BLU-RAY		 19.99 22-APR-10
3782460017	THE SIMPSONS - SEASON 20	    BLU-RAY		 29.99 04-JUL-09

SQL> alter session set events '10046 trace name context off';

Session altered.

SQL trace file has the following contents.


SQL ID: 0pu5s70nsdnzv
Plan Hash: 3846322456
SELECT *
FROM
 VIDEO WHERE FORMAT = :"SYS_B_0"


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0         16          0           2
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0         16          0           2

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 28  

Rows     Row Source Operation
-------  ---------------------------------------------------
      2  TABLE ACCESS FULL VIDEO (cr=16 pr=0 pw=0 time=3 us cost=4 size=100 card=2)

Let's create an index and see what happens.


SQL> create index VIDEO_IDX1 on VIDEO (FORMAT);

Index created.

SQL>  alter session set events '10046 trace name context forever, level 8';

Session altered.

SQL> select * from VIDEO where FORMAT = 'BLU-RAY';

BARCODE 	TITLE				    FORMAT		 PRICE DATA_OF_RELEASE
--------------- ----------------------------------- --------------- ---------- ---------------
9301224321	AVATAR				    BLU-RAY		 19.99 22-APR-10
3782460017	THE SIMPSONS - SEASON 20	    BLU-RAY		 29.99 04-JUL-09

SQL> alter session set events '10046 trace name context off';

Session altered.

The latest contents of the trace file are as follows. Notice the reduction in buffer gets from 16 to 4; that is, the new index reduced the query's logical I/O by 75%.


SQL ID: 0pu5s70nsdnzv
Plan Hash: 2773508764
SELECT *
FROM
 VIDEO WHERE FORMAT = :"SYS_B_0"


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          4          0           2
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          4          0           2

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 28  (CS90)

Rows     Row Source Operation
-------  ---------------------------------------------------
      2  TABLE ACCESS BY INDEX ROWID VIDEO (cr=4 pr=0 pw=0 time=12 us cost=2 size=100 card=2)
      2   INDEX RANGE SCAN VIDEO_IDX1 (cr=2 pr=0 pw=0 time=10 us cost=1 size=0 card=2)(object id 76899)


Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   MODE: ALL_ROWS
      2   TABLE ACCESS (BY INDEX ROWID) OF 'VIDEO' (TABLE)
      2    INDEX   MODE: ANALYZED (RANGE SCAN) OF 'VIDEO_IDX1' (INDEX)

So far so good. Let's add a new row of data and examine the trace file one more time. From here on, keep an eye on the "current" column (logical I/Os performed due to an INSERT, UPDATE or DELETE) and notice how it changes with different actions such as adding and removing indexes, and inserting new rows of data.


SQL ID: dnb2d8cpdj56p
Plan Hash: 0
INSERT INTO VIDEO 
VALUES
 (:"SYS_B_0", :"SYS_B_1", :"SYS_B_2", :"SYS_B_3", :"SYS_B_4")


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          1          7           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          1          7           1

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 28  (CS90)

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  LOAD TABLE CONVENTIONAL  (cr=1 pr=0 pw=0 time=0 us)


Rows     Execution Plan
-------  ---------------------------------------------------
      0  INSERT STATEMENT   MODE: ALL_ROWS
      0   LOAD TABLE CONVENTIONAL OF 'VIDEO'

Now drop the index, delete and re-insert the last row, and collect the tracing data again.


SQL> drop index VIDEO_IDX1;

Index dropped.

SQL> delete from VIDEO where BARCODE ='4457332907';

1 row deleted.

SQL> commit;

Commit complete.

SQL> alter session set events '10046 trace name context forever, level 8';

Session altered.

SQL> insert into VIDEO values ('4457332907', 'KING OF THE HILL - ALL', 'DVD', 90.00, '01-JAN-2011');

1 row created.

SQL> alter session set events '10046 trace name context off';

Session altered.

The contents of the latest trace file are shown below.


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.01       0.00          0          2          5           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.01       0.00          0          2          5           1

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 28  (CS90)

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  LOAD TABLE CONVENTIONAL  (cr=1 pr=0 pw=0 time=0 us)


Rows     Execution Plan
-------  ---------------------------------------------------
      0  INSERT STATEMENT   MODE: ALL_ROWS
      0   LOAD TABLE CONVENTIONAL OF 'VIDEO'

This time, create two indexes, then delete and re-insert the same row, and see what happens.


SQL> CREATE INDEX VIDEO_IDX1 ON VIDEO (FORMAT);

Index created.

SQL> CREATE INDEX VIDEO_IDX2 ON VIDEO (TITLE);

Index created.

Trace file contents:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          1          9           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          1          9           1

Notice the two additional logical I/Os compared to the earlier insert with a single index (look under the "current" column). Those additional logical input/output operations are the result of the new indexes. The number goes up as we add more indexes and more data to the table VIDEO.


SQL> delete from VIDEO where BARCODE ='4457332907';

1 row deleted.

SQL> commit;

Commit complete.

SQL> create index VIDEO_IDX3 on VIDEO (PRICE, DATA_OF_RELEASE);

Index created.

SQL>  alter session set events '10046 trace name context forever, level 8';

Session altered.

SQL> insert into VIDEO values ('4457332907', 'KING OF THE HILL - ALL', 'DVD', 90.00, '01-JAN-2011');

1 row created.

SQL> alter session set events '10046 trace name context off';

Session altered.


SQL trace:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          1         11           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          1         11           1

You can try other operations such as UPDATE, DELETE on your own.

Since there are only a few rows of data in the table VIDEO, it is hard to notice the real performance impact in these examples. If you really want to see the negative performance impact of a large number of indexes on a heavily updated table, try adding thousands or millions of rows of data and a few more indexes.

Moral of the story: indexes aren't always cheap; they may have some overhead associated with them. Be aware of those overheads and ensure that the index maintenance overhead does not offset the performance gains resulting from the indexes created on a particular table.


Monitoring Index Usage

Now we know the possible disadvantages of having too many indexes on a heavily updated table. One way to reduce the index maintenance overhead is to instrument the indexes, so that we can monitor their usage from time to time and remove the unused ones. To start monitoring the index usage, alter the index with the keywords MONITORING USAGE.


SQL> select index_name from user_indexes where table_name = 'VIDEO';

INDEX_NAME
--------------------------------------------------------------------------------
VIDEO_IDX3
VIDEO_IDX1
VIDEO_IDX2

SQL> alter index VIDEO_IDX1 MONITORING USAGE;

Index altered.

SQL> alter index VIDEO_IDX2 MONITORING USAGE;

Index altered.

SQL> alter index VIDEO_IDX3 MONITORING USAGE;

Index altered.

Once the indexes are instrumented, query the V$OBJECT_USAGE view occasionally to see whether the instrumented indexes are being used in executing SQL queries.


SQL> select * from VIDEO where BARCODE LIKE '%22%';

BARCODE 	TITLE				    FORMAT		 PRICE DATA_OF_RELEASE
--------------- ----------------------------------- --------------- ---------- ---------------
9301224321	AVATAR				    BLU-RAY		 19.99 22-APR-10

SQL> select * from VIDEO where FORMAT = 'VHS';

BARCODE 	TITLE				    FORMAT		 PRICE DATA_OF_RELEASE
--------------- ----------------------------------- --------------- ---------- ---------------
7619203043	BEN-HUR 			    VHS 		  9.79 12-MAR-63

SQL> select * from VIDEO where PRICE < 20;

BARCODE 	TITLE				    FORMAT		 PRICE DATA_OF_RELEASE
--------------- ----------------------------------- --------------- ---------- ---------------
4810218795	MEMENTO 			    DVD 		  8.49 02-FEB-02
7619203043	BEN-HUR 			    VHS 		  9.79 12-MAR-63
7305832093	THE MATRIX			    DVD 		 12.29 03-DEC-99
9301224321	AVATAR				    BLU-RAY		 19.99 22-APR-10

SQL> select * from VIDEO where FORMAT = 'BLU-RAY' AND DATA_OF_RELEASE < '01-JAN-2010';

BARCODE 	TITLE				    FORMAT		 PRICE DATA_OF_RELEASE
--------------- ----------------------------------- --------------- ---------- ---------------
3782460017	THE SIMPSONS - SEASON 20	    BLU-RAY		 29.99 04-JUL-09


SQL> column INDEX_NAME format A25
SQL> column START_MONITORING format A30

SQL> select INDEX_NAME, USED, START_MONITORING 
  2  from V$OBJECT_USAGE 
  3  where INDEX_NAME LIKE 'VIDEO_IDX%'
  4  /

INDEX_NAME		  USED	    START_MONITORING
------------------------- --------- ------------------------------
VIDEO_IDX1		  YES	    04/27/2010 01:10:20
VIDEO_IDX2		  NO	    04/27/2010 01:10:25
VIDEO_IDX3		  YES	    04/27/2010 01:10:31

In the above example, the index VIDEO_IDX2 was not used during the index monitoring period. If we are convinced that the queries to be executed in the future are similar to the ones executed during the monitoring period, we can go ahead and remove the index VIDEO_IDX2 to reduce the performance overhead of updates on the table VIDEO.

To stop monitoring the index usage, alter the index with the keywords NOMONITORING USAGE.


SQL> alter index VIDEO_IDX1 NOMONITORING USAGE;

Index altered.
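
If subsequent monitoring cycles confirm that VIDEO_IDX2 is never used, dropping it eliminates its maintenance overhead altogether. Continuing the example above:

SQL> drop index VIDEO_IDX2;

Index dropped.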

Thursday Feb 18, 2010

PeopleSoft Campus Solutions 9.0 benchmark on Sun SPARC Enterprise M4000 and X6270 blades

Oracle|Sun published the PeopleSoft Campus Solutions 9.0 benchmark results today. Here is the direct URL to the benchmark results white paper:

      PeopleSoft Enterprise Campus Solutions 9.0 using Oracle 11g on a Sun SPARC Enterprise M4000 & Sun Blade X6270 Modules

Sun published three PeopleSoft benchmarks on the SPARC platform over the last 12-month period -- one OLTP and two batch benchmarks[1]. The latest benchmark is somewhat special for at least a couple of reasons:

  • The Campus Solutions 9.0 workload has both online transactions and batch processes, and
  • This is the very first time Sun published a PeopleSoft benchmark on x64 hardware running Oracle Enterprise Linux

The summary of the benchmark test results is shown below. These numbers were extracted from the very first page of the benchmark results white papers, where Oracle|PeopleSoft highlights the significance of the test results and the actual numbers that are of interest to customers. Test results are sorted by hourly throughput (invoices & transcripts per hour) in descending order. Click on the link underneath the vendor name to open the corresponding benchmark result.

While analyzing these test results, remember that the higher the throughput, the better. In the case of online transactions, it is desirable to keep the response times as low as possible.

(Prettier version of the following table is at:
        Oracle PeopleSoft Campus Solutions 9.0 Benchmark Test Results
)

Oracle PeopleSoft Campus Solutions 9.0 Benchmark Test Results

All figures below were measured at a peak load of 6,000 concurrent users. Online transaction figures are average response times in seconds; batch throughput is per hour. Values marked with * are averages across the tier's nodes (see footnote 2).

Vendor: Sun
  DB tier:     1 x M4000 with 2 x 2.53GHz SPARC64 VII QC processors, 32GB RAM;
               1 x Sun Storage Flash Accelerator F20 with 4 x 24GB FMODs;
               1 x ST2540 array with 11 x 136.7GB SAS 15K RPM drives
               (Solaris 10; CPU 37.29%; Mem 20.94 GB)
  APP tier:    2 x X6270 blades with 2 x 2.93GHz Xeon 5570 QC processors, 24GB RAM
               (OEL4 U8; CPU 41.69%*; Mem 4.99 GB*)
  WEB+PS tier: 1 x X6270 blade with 2 x 2.8GHz Xeon 5560 QC processors, 24GB RAM
               (OEL4 U8; CPU 33.08%; Mem 6.03 GB)
  Online avg response times (sec): Logon 0.64, LSC 0.78, Page Load 0.82, Page Save 1.57
  Batch throughput/hr: Invoices 31,797; Transcripts 36,652

Vendor: HP
  DB tier:  1 x Integrity rx6600 with 4 x 1.6GHz Itanium 9050 DC procs, 32GB RAM;
            1 x HP StorageWorks EVA8100 array with 58 x 146GB drives
            (HP-UX 11iv3; CPU 61%; Mem 30 GB)
  APP tier: 2 x BL460c blades with 2 x 3.16GHz Xeon 5460 QC procs, 16GB RAM
            (RHEL4U5; CPU 61.81%; Mem 3.6 GB)
  WEB tier: 1 x BL460c blade with 2 x 3GHz Xeon 5160 DC procs, 8GB RAM
            (RHEL4U5; CPU 44.36%; Mem 3.77 GB)
  PS tier:  1 x BL460c blade with 2 x 3GHz Xeon 5160 DC procs, 8GB RAM
            (RHEL4U5; CPU 21.90%; Mem 1.48 GB)
  Online avg response times (sec): Logon 0.71, LSC 0.91, Page Load 0.83, Page Save 1.63
  Batch throughput/hr: Invoices 22,753; Transcripts 30,257

Vendor: HP
  DB tier:  1 x ProLiant DL580 G4 with 4 x 3.4GHz Xeon 7140M DC procs, 32GB RAM;
            1 x HP StorageWorks XP128 array with 28 x 73GB drives
            (Win2003R2; CPU 70.37%; Mem 21.26 GB)
  APP tier: 4 x BL480c G1 blades with 2 x 3GHz Xeon 5160 DC procs, 12GB RAM
            (Win2003R2; CPU 65.61%; Mem 2.17 GB)
  WEB tier: 1 x BL460c G1 blade with 2 x 3GHz Xeon 5160 DC procs, 12GB RAM
            (Win2003R2; CPU 54.11%; Mem 3.13 GB)
  PS tier:  1 x BL460c G1 blade with 2 x 3GHz Xeon 5160 DC procs, 12GB RAM
            (Win2003R2; CPU 32.44%; Mem 1.40 GB)
  Online avg response times (sec): Logon 0.72, LSC 1.17, Page Load 0.94, Page Save 1.80
  Batch throughput/hr: Invoices 17,621; Transcripts 25,423

This is all public information. Feel free to compare the hardware configurations and the data presented in the table and draw your own conclusions. Since both Sun and HP used the same benchmark workload and toolkit, and ran the benchmark with the same number of concurrent users and job streams for the batch processes, the comparison should be pretty straightforward.

Hopefully the following paragraphs will provide relevant insights into the benchmark and the application.

Caution in interpreting the Online Transaction Response Times

Average response times for the online transactions were measured using HP's QuickTest Pro (QTP) tool; this is a benchmark requirement. QTP test scripts have a dependency on the web browser (IE in particular), hence they are extremely sensitive to web browser latencies, remote desktop/VNC latencies and other latencies induced by the operating system. Be aware that all of these latencies are factored into the transaction response times, so the final average transaction response times might be skewed a little. In other words, the reported average transaction response times may not be very accurate; in most cases we are looking at approximate values, and the actual values might be better than the ones reported in the benchmark report. (I really wish Oracle|PeopleSoft would throw away some of the skewed samples to make the data more accurate and reliable.) Please keep this in mind when looking at the response times of the online transactions.

Quick note about Consolidation

In our benchmark environment, we had the PeopleSoft Process Scheduler (batch server) set up on the same node as the web server. In general, Oracle recommends setting up the process scheduler either on the database server node or on a dedicated system. However, in the benchmark environment we chose not to run the process scheduler on the database server node, as it would hurt the performance of the online transactions. At the same time, we noticed plenty of idle CPU cycles on the web server node even at the peak load of 6,000 concurrent users, so we decided to run the process scheduler on the web server node. If customers are not comfortable with this kind of setup, they can use any supported virtualization technology (e.g., Logical Domains, Containers on Solaris, Oracle VM on OEL) to separate the process scheduler from the web server, allocating the system resources as they like. It is just a matter of choice.

PeopleSoft Load Balancing

PeopleSoft has a load balancing mechanism built into the web server to forward incoming requests to an appropriate application server in the enterprise, and within the application server to send each request to an appropriate application server process, PSAPPSRV. (I'm not 100% sure, but I think the application server balances the load among application server processes in a round-robin fashion on *nix platforms, whereas on Windows it forwards all requests to a single application server process until it reaches the configured limit before moving on to the next available process.) However, this built-in load balancing is not perfect. Most of the time, the number of requests processed by each of the identically configured application server processes [running on different application server nodes in the enterprise] may not be even. This minor shortcoming can lead to uneven resource usage across different nodes in the PeopleSoft deployment. You can notice this in the CPU and memory usage reported for the two app server nodes in the benchmark environment (check the benchmark results white paper).

Sun Flash Accelerator F20 PCIe Card

To reduce I/O latency, hot tables and hot indexes were placed on a Sun Flash Accelerator F20 PCIe Card in this benchmark. The F20 card has a total capacity of 96 GB with 4 x 24GB Flash Modules (FMODs). Although this workload is only moderately I/O intensive, the batch processes in this benchmark generate a lot of I/O for a few minutes in the steady state of the benchmark. The flash accelerator handled the burst of I/O activity pretty well, and as a result the performance of the batch processing improved.

Check the white paper Best Practices for Oracle PeopleSoft Enterprise Payroll for North America using the Sun Storage F5100 Flash Array or Sun Flash Accelerator F20 PCIe Card to learn more about the top flash products offered by Oracle|Sun and how they can be deployed in a PeopleSoft environment for maximum benefit.

Solaris specific Tuning

On almost all versions of Solaris 10, the kernel uses 4M as the maximum page size even though the underlying hardware supports pages as large as 256M. Large pages may improve the performance of some memory intensive workloads, such as the Oracle database, by reducing the number of virtual-to-physical address translations and thereby reducing expensive dTLB/iTLB misses. In the benchmark environment, the following values were set in the /etc/system configuration file of the database server node to enable 256M pages for the process heap and for ISM.


* 256M pages for process heap
set max_uheap_lpsize=0x10000000

* 256M pages for ISM
set mmu_ism_pagesize=0x10000000

While we are on this topic: the Linux configuration was out-of-the-box. No OS tuning was performed in this benchmark.

Tuning Tip for Solaris Customers

Even though we did not set up the middle-tier on a Solaris box in this benchmark, this particular tuning tip is still valid and may help customers running the application server on Solaris. Consider lowering the shell limit for file descriptors to a value of 512 or less if it is set to anything greater than 512. As of today (until the release of PeopleTools 8.50), certain parts of the PeopleSoft code call the file control routine, fcntl(), and the file close routine, fclose(), in a loop "ulimit -n" number of times to close a bunch of files that were opened to perform a specific task. In general, PeopleSoft processes won't open hundreds of files; hence the above mentioned behavior results in a ton of dummy calls that error out. Besides, those system calls are not cheap: they consume CPU cycles. It gets worse when a number of PeopleSoft processes exhibit this behavior simultaneously (high system CPU% is one of the symptoms that helps identify it). Oracle|PeopleSoft is currently trying to address this performance issue. Meanwhile, customers can lower the file descriptor shell limit to reduce its intensity and impact.

We have not observed this behavior on OEL when running the benchmark, but be sure to trace the system calls and figure out whether the shell limit for file descriptors needs to be lowered even on Linux or other supported platforms.


______________________________________

Footnotes:

1. PeopleSoft benchmarks on Sun platform in year 2009-2010

  1. PeopleSoft HRMS 8.9 SELF-SERVICE Using ORACLE on Sun SPARC Enterprise M3000 and Enterprise T5120 Servers -- online transactions (OLTP)
  2. PeopleSoft Enterprise Payroll 9.0 using Oracle for Solaris on a Sun SPARC Enterprise M4000 (8 streams) -- batch workload
  3. PeopleSoft Enterprise Payroll 9.0 using Oracle for Solaris on a Sun SPARC Enterprise M4000 (16 streams) -- batch workload



2. *HP's benchmark results white paper did not show the CPU and memory breakdown numbers separately for each of the application server nodes; it only shows the average of the average CPU and memory utilization across all app server nodes under "App Servers". Sun's average CPU and memory numbers [shown in the above table] were calculated in the same way for consistency.

Wednesday Dec 23, 2009

Accessing MySQL Database(s) from StarOffice / OpenOffice.org Suite of Applications

This blog post is organized into two major sections and several sub-sections. The major sections focus on the tasks to be performed on the MySQL server and on the *Office client, while the sub-sections describe the steps to be performed in detail.

To illustrate the examples in this exercise, we will create a new MySQL database user with the user ID SOUSER. This new user will be granted read-only access to a couple of tables in a MySQL database called ISVe. The database can be accessed from any host in the network. ben10.sfbay is the hostname of the MySQL server.

Tasks to be Performed at the MySQL Server

This section is intended only for MySQL server administrators. If you are an end-user, skip ahead to Tasks to be Performed at the Client side.

Create a new MySQL user and grant required privileges.

eg.,

% mysql -u root -p
Enter password: *****
Server version: 5.1.25-rc-standard Source distribution
..

mysql> CREATE USER SOUSER IDENTIFIED BY 'SOUSER';
Query OK, 0 rows affected (0.00 sec)

mysql> show grants for SOUSER;
+-------------------------------------------------------------------------------------------------------+
| Grants for SOUSER@%                                                                                   |
+-------------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'SOUSER'@'%' IDENTIFIED BY PASSWORD '*8370607DA2602E52F463FF3B2FFEA53E81B9314C' | 
+-------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

mysql> USE ISVe;
Database changed

mysql> show tables;
+--------------------------+
| Tables_in_ISVe           |
+--------------------------+
| CustomReport             | 
| CustomSQL                | 
| ISVeOldProjects          | 
| ISVeOrg                  | 
| ISVeProject              | 
| ISVeProjectExecution     | 
| ISVeProjectGoalAlignment | 
| ISVeProjectMiscInfo      | 
| ISVeProjectScoping       | 
| ISVeProjectStatus        | 
| ISVeProjects             | 
| ISVeProjectsVW           | 
| ISVeSearchLog            | 
| LastRefreshed            | 
+--------------------------+
14 rows in set (0.00 sec)

mysql> GRANT SELECT ON ISVe.ISVeOldProjects TO 'SOUSER'@'%';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT SELECT ON ISVe.ISVeProjects TO 'SOUSER'@'%';
Query OK, 0 rows affected (0.00 sec)

mysql> show grants for SOUSER;
+-------------------------------------------------------------------------------------------------------+
| Grants for SOUSER@%                                                                                   |
+-------------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'SOUSER'@'%' IDENTIFIED BY PASSWORD '*8370607DA2602E52F463FF3B2FFEA53E81B9314C' | 
| GRANT SELECT ON `ISVe`.`ISVeOldProjects` TO 'SOUSER'@'%'                                              | 
| GRANT SELECT ON `ISVe`.`ISVeProjects` TO 'SOUSER'@'%'                                                 | 
+-------------------------------------------------------------------------------------------------------+
3 rows in set (0.00 sec)

mysql> quit
Bye

Check the database connectivity and the accessibility from a remote location.


% mysql -h ben10.sfbay -D ISVe -u SOUSER -pSOUSER
Server version: 5.1.25-rc-standard Source distribution

mysql> show tables;
+-----------------+
| Tables_in_ISVe  |
+-----------------+
| ISVeOldProjects |
| ISVeProjects    |
+-----------------+
2 rows in set (0.03 sec)

mysql> select count(*) from ISVeOldProjects;
+----------+
| count(*) |
+----------+
|     2880 |
+----------+
1 row in set (0.04 sec)

mysql> select count(*) from ISVeProjects;
+----------+
| count(*) |
+----------+
|     4967 |
+----------+
1 row in set (0.33 sec)

mysql> delete from ISVeOldProjects;
ERROR 1142 (42000): DELETE command denied to user 'SOUSER'@'vpn-192-155-222-19.SFBay.Sun.COM' for table 'ISVeOldProjects'

mysql> delete from ISVeProjects;
ERROR 1142 (42000): DELETE command denied to user 'SOUSER'@'vpn-192-155-222-19.SFBay.Sun.COM' for table 'ISVeProjects'

mysql> quit
Bye


Tasks to be Performed at the Client side (End-User's Workstation)


The StarOffice and OpenOffice.org suites of applications can access the MySQL Server using either JDBC or a native driver.

MySQL Connector/J is a platform-independent JDBC Type 4 driver developed specifically to connect to a MySQL database. Using Connector/J, it is possible to connect to almost any version of MySQL Server from any version of StarOffice or OpenOffice.org.

Sun|MySQL recently developed a native MySQL driver to facilitate connections from the StarOffice / OpenOffice.org suite of applications to a MySQL database. The new native driver is called MySQL Connector/OpenOffice.org. However, the current version of the MySQL Connector for OO.o is compatible only with OpenOffice.org 3.1 / StarOffice 9.1 or newer, and it can connect only to MySQL Server 5.1 or later versions. This native connector is supposed to be faster than the Java connector.

We will explore both MySQL connectors in this section.

Note:
As an end user, you need not be concerned about the internal workings of these MySQL connectors. You just need to install and configure the drivers so that the *Office applications can connect to the MySQL database in a seamless fashion.

I. Connector/J approach

  1. Installation steps for MySQL Connector/J

    Using the following navigation, find the location of the JRE that is being used by StarOffice / OpenOffice.org

    • Launch StarOffice / OpenOffice.org
    • Tools Menu -> Options
    • In the 'Options' window, StarOffice / OpenOffice.org -> Java

    Here is a sample screen capture from a Mac running StarOffice 9.

    In the above example, /System/Library/Frameworks/JavaVM.framework/Versions/1.5.0/Home is the location of the JRE. From here onwards, this location will be referred to as JRE_HOME.

    1. Download the connector from the following web page:

              http://dev.mysql.com/downloads/connector/j/

      As of this writing, 5.1.10 is the current version for Connector/J

    2. Extract the driver and the rest of the files from the compressed [downloaded] archive

      eg.,
      
      % gunzip -c mysql-connector-java-5.1.10.tar.gz | tar -xvf -
      
      
    3. Locate the jar file that contains the driver, mysql-connector-java-5.1.10-bin.jar, and copy it into the <JRE_HOME>/lib/ext directory with 'root' privileges.

      eg.,
      
      % sudo cp mysql-connector-java-5.1.10-bin.jar /System/Library/Frameworks/JavaVM.framework/Versions/1.5.0/Home/lib/ext
      
      % ls -l /System/Library/Frameworks/JavaVM.framework/Versions/1.5.0/Home/lib/ext/*connector*jar
      /System/Library/Frameworks/JavaVM.framework/Versions/1.5.0/Home/lib/ext/mysql-connector-java-5.1.10-bin.jar
      
      
    4. Restart StarOffice / OpenOffice.org

    This concludes the installation of MySQL Connector/J.



    2. Configuration steps for Connector/J
    1. Launch StarOffice / OpenOffice.org

    2. In the Welcome screen, click on "Database". A database wizard pops up to help us create, open or connect to an existing database from StarOffice / OpenOffice.org.

    3. Since our main interest is only to connect to an existing database in this exercise, click on "Connect to an existing database" radio button and select "MySQL" from the drop-down menu that is underneath the selected radio button.

      Click on "Next >>" button

    4. In the next screen, select JDBC by clicking on "Connect using JDBC (Java Database Connectivity)" radio button

      Click on "Next >>" button

    5. In "Set up connection to a MySQL database using JDBC" screen, provide the name of the database, hostname or IP address of the MySQL database server (server URL) that you want to connect to along with the port# on which the MySQL server is actively listening for new database connections.

      The MySQL JDBC driver class text field will be automatically filled with the string com.mysql.jdbc.Driver. Leave that string intact, and click on the "Test Class" button to make sure that the relevant class can be loaded with no issues. Unless the driver class is loaded successfully, you will not be able to connect to the MySQL database. If the class fails to load, double check the installation steps for MySQL Connector/J.

      Click on "Next >>" button

      Note:
      In the above screenshot, notice that the "Name of the database" field was filled with ISVe?zeroDateTimeBehavior=convertToNull (it is not completely visible in the above screen capture, but you just have to believe me). In this example, ISVe is the database name and zeroDateTimeBehavior is a configuration property, set to the value convertToNull. Without this configuration property, Connector/J throws an exception when it encounters date values such as 0000-00-00. In such cases, the error message will be something similar to: java.sql.SQLException: Value '0000-00-00' can not be represented as java.sql.Date.

      Configuration properties define how Connector/J will make a connection to a MySQL server. The list of Connector/J configuration properties are documented in the following web page:

              http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-configuration-properties.html

      If you have more than one configuration property, you can define all of those properties in the "Name of the database" field. The syntax is:
          <MySQL_DB_Name>?<Config_Property1=Value>&<Config_Property2=Value>&..&<Config_Propertyn=Value>

      For example, ISVe?zeroDateTimeBehavior=convertToNull&autoReconnect=true sets two properties (autoReconnect is another documented Connector/J configuration property, shown here purely for illustration).

    6. Provide the database user name and the password details in "Set up the user authentication" screen. Click on "Password required" check box if there is a password setup for the database user.

      Click on "Test Connection" button to ensure a successful connection to the MySQL database using the credentials provided in this window.

      Click on "Next >>" button

    7. In the final screen, simply accept the default values and click on the 'Finish' button.

      "Yes, register the database for me" and "Open the database for editing" are the defaults selected in this screen.

      When you click on the 'Finish' button, you will be prompted to provide a name for the database and save it as a file on your local machine. The saved file contains information about the database, including the queries, reports and forms that are linked to the MySQL database. The actual data remains in the MySQL database, so you need not worry about the size of the file being saved on your local machine; it will be small.

    8. Ultimately the Database area of the Base main window appears as shown in the following screen capture.

      Notice the RDBMS name, Type of Connectivity, MySQL Database Name along with the configuration properties, Database user name and the Database server hostname at the bottom of the window.

      You will be able to query the database, create new forms/reports, etc. from this window. Unfortunately, a discussion of those topics is beyond the scope of this blog post, so we will stop here.




    II. Connector/OpenOffice.org approach

    MySQL Connector for OpenOffice.org is a MySQL driver for the OpenOffice.org suite of applications. Even though it appears to be a native driver, MySQL Connector/OpenOffice.org has no implementation of the MySQL client-server protocol; it is in reality a proxy on top of the MySQL Connector for C++, aka MySQL Connector/C++.

    Unlike MySQL Connector/J, Connector/OpenOffice.org has no dependency on a JRE, and it can easily be installed using the OpenOffice.org Extension Manager. Due to the underlying native code, Connector/OpenOffice.org may outperform Connector/J.

    1. Installation steps for MySQL Connector/OpenOffice.org

    Before installing the connector, make sure that you have OpenOffice.org 3.1 [or later] -OR- StarOffice 9.1 [or later] installed, and that the version of the MySQL server hosting the database is at least 5.1. If any of these requirements is not met, skip this entire section and check I. Connector/J approach for instructions that may work with your current versions of StarOffice / OpenOffice and MySQL server.

    1. Download the connector for your platform from the following location:

              http://extensions.services.openoffice.org/project/mysql_connector
    2. Launch StarOffice / OpenOffice.org

    3. Bring up the "Extension Manager" by clicking on Tools Menu -> Extension Manager ...

    4. Click on "Add" button, then locate the OpenOffice connector that you downloaded in step #1 (see two steps above). Click on "Open" button. The name of the connector will be something similar to mysql-connector-ooo-....oxt.

    5. Choose appropriate response to the question "For whom do you want to install the extension?". In this example, I chose the option "Only for me".

    6. Read the "Extension Software License Agreement" and accept the agreement to install the Connector/OpenOffice.org as an extension to StarOffice / OpenOffice.org

    7. Restart StarOffice / OpenOffice.org to complete the installation.



    2. Configuration steps for MySQL Connector/OpenOffice.org
    1. Launch StarOffice / OpenOffice.org

    2. In the Welcome screen, click on "Database". A database wizard pops up to help us create, open or connect to an existing database from StarOffice / OpenOffice.org.

    3. Since our main interest is only to connect to an existing database in this exercise, click on "Connect to an existing database" radio button and select "MySQL" from the drop-down menu that is underneath the selected radio button.

      Click on "Next >>" button

    4. In the next screen, select "Connect native" radio button

      Click on "Next >>" button

    5. In "Set up connection to a MySQL database" screen, provide the name of the database, hostname or IP address of the MySQL database server (server URL) that you want to connect to along with the port# on which the MySQL server is actively listening for new database connections. If the MySQL Server is running on the same machine as that of the StarOffice / OpenOffice.org application, you can provide the location of the socket under "Socket" field. If not, leave it blank.

      Click on "Next >>" button

    6. Provide the database user name and the password details in "Set up the user authentication" screen. Click on "Password required" check box if there is a password setup for the database user.

      Click on "Test Connection" button to ensure a successful connection to the MySQL database using the credentials provided in this window.

      Click on "Next >>" button

    7. In the final screen, simply accept the default values and click on the 'Finish' button.

      "Yes, register the database for me" and "Open the database for editing" are the defaults selected in this screen.

      When you click on the 'Finish' button, you will be prompted to provide a name for the database and save it as a file on your local machine. The saved file contains information about the database, including the queries, reports and forms that are linked to the MySQL database. The actual data remains in the MySQL database, so you need not worry about the size of the file being saved on your local machine; it will be small.

    8. Ultimately the Database area of the Base main window appears as shown in the following screen capture.

      Notice the RDBMS name, Type of Connectivity, MySQL Database Name along with the configuration properties, Database user name and the Database server hostname at the bottom of the window.

      You will be able to query the database, create new forms/reports, etc. from this window. Unfortunately, a discussion of those topics is beyond the scope of this blog post, so we will stop here.


    That is all there is to installing and configuring the MySQL connectors for the *Office suite of applications. Now enjoy the flexibility of fetching data from your favorite office productivity software.

About

Benchmark announcements, HOW-TOs, Tips and Troubleshooting
