Monday Sep 28, 2009

Problems with Solaris and a DISM enabled Oracle Database

Resolving problems becomes ever more challenging in today's complex application environments, and error messages can sometimes mislead or confuse the system administrator / DBA trying to resolve the issue. Through my involvement with the VOSJEC (Veritas Oracle Sun Joint Escalation Center), we've seen a steady increase in the number of cases raised by customers using Oracle with DISM enabled on Solaris systems. In the majority of cases disabling DISM appears to resolve the issue, but with DISM use becoming more popular and making DBAs' lives easier, I personally don't feel that's the right course of action. DISM essentially gives the DBA the ability to resize the SGA on the fly without taking down the database, maximising availability and avoiding downtime. See the Administering Oracle Database on Solaris documentation for further details.

For background on other DISM-related issues, have a read of Solution 230653 : DISM Troubleshooting For Oracle9i and Later Releases, which explains DISM and what can go wrong with it.

How do I know if the customer is using DISM or not...

* There should be an ora_dism process running on the system, so check the ps listing (explorer / guds).

* Check the Oracle alert log, as it should mention the DISM process starting. It should also log the fact that the DISM process has died, which means your Oracle shared memory segments aren't locked and can cause other problems; see Solution 230653 : DISM Troubleshooting For Oracle9i and Later Releases.
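
The two checks above can be sketched as a quick script. This is a minimal sketch assuming the default ora_dism process naming; the alert log lines below are hypothetical examples written to a sample file, not real Oracle output.

```shell
# Check 1: is the DISM lock process running? The [o] trick stops grep
# from matching its own process entry.
if ps -ef | grep '[o]ra_dism' >/dev/null; then
  echo "ora_dism process found - DISM is in use"
else
  echo "no ora_dism process - DISM not in use (or has died)"
fi

# Check 2: scan a sample alert log for DISM messages. These two lines
# are hypothetical stand-ins for what the real alert log would contain.
cat > /tmp/alert_sample.log <<'EOF'
Starting background process DISM
DISM started, OS id=23220
EOF
grep -c 'DISM' /tmp/alert_sample.log
```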

So, what's the issue, I hear you asking? Well, several customers complained about Oracle batch jobs crashing, or the entire database falling over, with something like:

Sun Mar 15 14:53:28 2009
Errors in file /u01/app/oracle/admin/JSPSOL0/udump/jspsol03_ora_23220.trc:
ORA-27091: unable to queue I/O
ORA-27072: File I/O error
SVR4 Error: 12: Not enough space
Additional information: 4
Additional information: 938866512
Additional information: -1

OR

Mon Aug 24 09:01:17 2009
Errors in file /opt/oracle/admin/MW/bdump/mw1_lgwr_18124.trc:
ORA-00340: IO error processing online log 1 of thread 1
ORA-00345: redo log write error block 167445 count 11
ORA-00312: online log 1 thread 1: '/dev/vx/rdsk/mwdbdg/redo11'
ORA-27070: async read/write failed
SVR4 Error: 12: Not enough space
Additional information: 1
Additional information: 167445
LGWR: terminating instance due to error 340

The first likely cause would be some kind of I/O issue: perhaps we've run out of space on the backing storage layer. Unfortunately that wasn't the reason in this case, as all filesystems had enough free space available. (Note that SVR4 error 12 is ENOMEM, so "Not enough space" here refers to memory, not disk.) The next candidate is insufficient swap space: when using DISM you must have at least as much swap space configured as there are DISM segments in use; see Solution 214947 : Solaris[TM] Operating System: DISM double allocation of memory for details. Again, the customer appeared to have the correct swap space configuration for the DISM segment currently in use.
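
That swap-sizing rule can be sanity-checked with a quick sketch. The sizes below are hypothetical; on a real Solaris system you'd take the swap figure from 'swap -l' and the segment size from sga_max_size.

```shell
# Sketch with hypothetical sizes: DISM requires configured swap at least
# as large as the DISM segment in use (see Solution 214947).
sga_bytes=$((8 * 1024 * 1024 * 1024))    # e.g. an 8 GB sga_max_size
swap_bytes=$((16 * 1024 * 1024 * 1024))  # total configured swap

if [ "$swap_bytes" -ge "$sga_bytes" ]; then
  echo "swap is large enough for the DISM segment"
else
  echo "swap is too small for the DISM segment"
fi
```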
Now, what I actually suspected was that CR 6559612 "multiple softlocks on a DISM segment should decrement availrmem just once" was coming into play here. If you read engineering's comments on that CR, it can only happen if DISM isn't locked properly and we end up going down a different code path due to seg_pdisable being set to 1. There is another reason for seg_pdisable being set, and that's when we're trying to handle page retirement. Coincidentally, both of the situations mentioned above had pending page retirements due to CR 6587140 - "page_retire()/page_trycapture should not try to retire non relocatable kernel pages" (fixed in KJP 141414-06, latest 10), so there was a window of opportunity for CR 6559612 to happen every once in a while.

From what I can currently work out, CR 6559612 - "multiple softlocks on a DISM segment should decrement availrmem just once" is a duplicate of CR 6603296 - "Multiple writes into dism segment reduces available swap", which also mentions duplicate CR 6644570 - "Application using large page sizes with DISM on Huron got pread to return error ENOMEM".

There seems to be a family of related issues which are all fixed as part of CR 6423097 - "segvn_pagelock() may perform very poorly" which includes the above:

6526804 DR delete_memory_thread, AIO, and segvn deadlock
6557794 segspt_dismpagelock() and segspt_shmadvise(MADV_FREE) may deadlock
6557813 seg_ppurge_seg() shouldn't flush all unrelated ISM/DISM segments
6557891 softlocks/pagelocks of anon pages should not decrement availrmem for memory swapped pages
6559612 multiple softlocks on a DISM segment should decrement availrmem just once
6562291 page_mem_avail() is stuck due to availrmem overaccounting and lack of seg_preap() calls
6596555 locked anonymous pages should not have assigned disk swap slots
6639424 hat_sfmmu.c:hat_pagesync() doesn't handle well HAT_SYNC_STOPON_REF and HAT_SYNC_STOPON_MOD flags
6639425 optimize checkpage() optimizations
6662927 page_llock contention during I/O

These are all fixed in the development release of Solaris Nevada build 91 but haven't yet been backported. From looking at the bug reports, it seems likely that CR 6423097 and the associated CRs will be fixed in Solaris 10 U8 (scheduled for release 9th October 2009).

So, that would lead us to the following suggestions:

1/ CR 6423097 mentions disabling large page support completely to work around the issue, which you could do and still continue to use DISM:

Workaround for nevada and S10U4 is to add to /etc/system:

set max_uheap_lpsize = 0x2000
set max_ustack_lpsize = 0x2000
set max_privmap_lpsize = 0x2000
set max_shm_lpsize = 0x2000
set use_brk_lpg = 0
set use_stk_lpg = 0
and reboot.

For S10U3 and earlier S10 releases, the workaround is to add to /etc/system:

set exec_lpg_disable = 1
set use_brk_lpg = 0
set use_stk_lpg = 0
set use_zmap_lpg = 0
and reboot.

NOTE: this is *not* the same as disabling large page coalescing, as the above completely disables large page support. Disabling large page coalescing is sometimes required depending on the application workload; see Solution 215536 : Oracle(R) 10g on Solaris[TM] 10 may run slow for further details.
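
Before rebooting it's worth confirming the tunables actually made it into /etc/system. A minimal sketch, using /tmp/system.sample as a hypothetical stand-in for the real /etc/system:

```shell
# Write a sample copy of the S10U4/nevada workaround settings, then
# verify each expected tunable is present before trusting a reboot.
cat > /tmp/system.sample <<'EOF'
set max_uheap_lpsize = 0x2000
set max_ustack_lpsize = 0x2000
set max_privmap_lpsize = 0x2000
set max_shm_lpsize = 0x2000
set use_brk_lpg = 0
set use_stk_lpg = 0
EOF

for tunable in max_uheap_lpsize max_ustack_lpsize max_privmap_lpsize \
               max_shm_lpsize use_brk_lpg use_stk_lpg; do
  grep -q "^set ${tunable}" /tmp/system.sample && echo "${tunable}: set"
done
```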

2/ Disable DISM entirely for the short term and use ISM with Oracle whilst waiting for the release of Solaris 10 update 8.
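
As I understand it, Oracle selects DISM when sga_max_size is set larger than the running SGA; keeping the two equal (or leaving sga_max_size unset) makes the instance fall back to ISM. A hypothetical init.ora fragment (the 8G sizes are illustrative, not from the cases above):

```
# Hypothetical init.ora fragment: with sga_max_size equal to sga_target
# (or unset), the instance uses ISM rather than DISM
sga_target   = 8G
sga_max_size = 8G
```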

Conclusion / Actions
-------------------------------------

The above is essentially a hit list of things to be aware of when seeing errors of this nature. If the customer isn't using DISM at all, then that may take you down another avenue. If they are, then remember that "Applications Drive System Behavior": you need to understand the architecture, and when and where things run, to tie together the loose ends of any problem and be able to prove root cause or not. See my previous blog entry about dealing with performance issues, as it essentially talks about the same troubleshooting process I use all the time in diagnosing complex problems.

About

I'm an RPE (Revenue Product Engineering) Engineer supporting Solaris on Exadata, Exalogic and Super Cluster. I attempt to diagnose and resolve any problems with Solaris on any of the Engineered Systems.
