Thursday Apr 15, 2010

My Last Day and Blog Post at Sun

I am getting close to finishing my 10th year at Sun Microsystems, Inc., now part of Oracle America, Inc. However, before I finish the year, it is time for me to move on to new ventures.

Today is my last day at Sun/Oracle.

My new coordinates  and my LinkedIn Profile.

Stay in touch.

Thursday Oct 02, 2008

The New Sun Microsystems

It is the start of the last quarter of 2008, and we engineers in ISV Engineering are still baffled when we hear new startups, and sometimes even old partners, say "Oh, we are a startup, I don't think we can afford Sun Microsystems." It's like a flashback to 1999! Yes, in 1999 we were selling $4 million servers based only on SPARC. In 2008, however, we sell many servers for well under $1,000. (HINT: Join Sun Startup Essentials.) In fact, we now sell systems based on various CPU architectures: AMD Opteron, Intel Xeon, UltraSPARC T2, and SPARC64 VII. We also have one of the biggest open-source portfolios around: MySQL, Java, OpenSolaris, GlassFish, NetBeans, and more (I for one cannot even remember all our open-source projects). So what we figured is that it is hard for non-Sun folks to distinguish the current Sun from the Sun of 1999. Essentially, our biggest competition in 2008 is the lingering image of Sun from 1999 as a seller of expensive SPARC systems.

It is hard to push that message out when you don't have an advertising budget, but we have to get the message out. So what am I going to do about it?

Just to show that we have changed since the last decade, the next time I have to introduce myself, I am going to say:

"I am Jignesh and I work for The New Sun Microsystems"

to highlight that fact and push the message out. I am changing the introduction on my blog and in my email signature to reflect the same. I am hoping people will read or hear the "New" moniker and ask their favorite Sun contacts "What's new?", and that's exactly the question I want those potential customers to ask us.

Update on Nov 6, 2008: Check out the New website

Monday Jun 25, 2007

Postgres and Sun Studio 12 build parameters and postgresql.conf values

A lot of people have been requesting the compiler options and postgresql.conf settings used in the Postgres scalability test on the Sun Fire T2000 that I had done previously.

I had not published them before since I am not sure whether these options are optimal for all cases. Anyway, I am providing them now, as it looks like they will help a lot of people jumpstart with Postgres on Solaris; in the meanwhile I can continue doing more tests and tweak them for another entry if required.

The PostgreSQL binaries were configured with Sun Studio 12 as follows:

$ ./configure --prefix=/usr/local/pg824 CC=/opt/SUNWspro/bin/cc CFLAGS="-xO3 -xarch=native \
-xspace -W0,-Lt -W2,-Rcond_elim -Xa -xildoff -xc99=none -xCC" --without-readline

Again, I am told that the above options may not be optimal for Sun Studio 12. I will update them after doing more tests with other options.

The main postgresql.conf options modified are as follows:

max_connections = 1000
unix_socket_directory = '/tmp'
temp_buffers = 1000
max_prepared_transactions = 450
maintenance_work_mem = 512MB
max_fsm_pages = 208000
max_fsm_relations = 10000
vacuum_cost_delay = 50
bgwriter_delay = 200ms
wal_sync_method = fdatasync
full_page_writes = off
wal_buffers = 2500
commit_delay = 10
checkpoint_segments = 256
checkpoint_timeout = 300
checkpoint_warning = 99
random_page_cost = 1.5
cpu_tuple_cost = 0.001
cpu_index_tuple_cost = 0.0005
cpu_operator_cost = 0.00025
effective_cache_size = 40GB
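For reference, several of the settings above are expressed in pages rather than bytes. Assuming PostgreSQL's default 8 kB block size, a quick shell sketch converts them:

```shell
# Convert page-count settings to approximate sizes, assuming the
# default 8 kB PostgreSQL block size (BLCKSZ).
BLCKSZ_KB=8
for setting in wal_buffers=2500 max_fsm_pages=208000; do
  name=${setting%%=*}
  pages=${setting#*=}
  echo "$name = $pages pages = $(( pages * BLCKSZ_KB / 1024 )) MB"
done
```

Similarly, checkpoint_segments = 256 allows roughly 256 x 16 MB = 4 GB of WAL to accumulate between checkpoints, which is why the pg_xlog file system needs room.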

Of course, I shouldn't forget the network tunables that were also used:

ndd -set /dev/tcp tcp_conn_req_max_q 16384
ndd -set /dev/tcp tcp_conn_req_max_q0 16384
ndd -set /dev/tcp tcp_xmit_hiwat 131072
ndd -set /dev/tcp tcp_recv_hiwat 131072
ndd -set /dev/tcp tcp_naglim_def 1
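Note that ndd settings do not persist across reboots, so the same commands are typically collected into a startup script. A dry-run sketch that just prints each command (drop the leading echo to actually apply them, as root, on Solaris):

```shell
# Print (dry run) the ndd commands for the TCP tunables above.
# Remove the leading "echo" to apply them for real as root on Solaris.
for kv in \
    tcp_conn_req_max_q=16384 \
    tcp_conn_req_max_q0=16384 \
    tcp_xmit_hiwat=131072 \
    tcp_recv_hiwat=131072 \
    tcp_naglim_def=1
do
  echo ndd -set /dev/tcp "${kv%%=*}" "${kv#*=}"
done
```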

The actual database was divided across three file systems: one for $PGDATA, one for pg_xlog, and one more for index tablespaces, all mounted with the forcedirectio option. Your mileage may vary, but hopefully for the better.
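That layout could be expressed in /etc/vfstab roughly as follows; this is only a sketch, and the device names and mount points here are hypothetical examples, not the ones from the actual test:

```
# device to mount   device to fsck      mount point  FS  fsck  boot  options
/dev/dsk/c1t1d0s6   /dev/rdsk/c1t1d0s6  /pgdata      ufs 2     yes   forcedirectio
/dev/dsk/c1t2d0s6   /dev/rdsk/c1t2d0s6  /pgxlog      ufs 2     yes   forcedirectio
/dev/dsk/c1t3d0s6   /dev/rdsk/c1t3d0s6  /pgindex     ufs 2     yes   forcedirectio
```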

Monday Mar 19, 2007

Sun Studio tools and DB2 V8.2 on Solaris 10 Zones

Every DBA, system administrator, or software developer at some point needs to "drill down" into the call flow of how an application interacts with the operating system beneath it.

Various tools and features, including DTrace, exist to answer the question of "what's going on" in a particular setup. While DTrace is more dynamic, it can be overkill to capture all events with DTrace and then run a huge post-processing job to understand what all those events really mean and how they interact with each other.

This is where the Sun Studio Performance Analyzer tools still have an edge over DTrace (a personal view; many may not agree), especially when the engineer trying to figure out the problem does not have access to the system that has the problem.

Let us now look at how the Sun Studio Performance Analyzer tools can be used to collect such information about DB2 V8.2 running on Solaris 10 (in Zones or otherwise).

Typically, information is collected using the "collect" tool and analyzed using the GUI "analyzer" tool. There is also a text analyzer called "er_print". The text analyzer is primarily useful when the information (referred to as "experiments" in this context) is on a remote server somewhere on the West Coast and the engineer is located on the East Coast. (Java remote display event handling still has a long way to go.) Generally, an experiment is collected by starting the application as "collect $applicationname", or by invoking dbx on the process, issuing "set collector enable true", and then "cont"-ing the process.

However, things are not that simple with DB2 V8.2. DB2 V8.2 is a multi-process database engine, which means that if you want to collect on each process, you have to enable collection on each one manually. Also, DB2 V8.2 (like DB2 V9.1 and all prior versions) uses setuid flags to obtain root privileges when starting the database engine. The collect tool works by preloading certain libraries, and by default the Solaris security mechanism does not allow unsecured libraries to be preloaded across "userid" boundaries (user -> root -> user). Another problem is default file permissions: if root creates a file, the user will not have permission to write to it.

The good news is that there is a workaround for all of the above problems, and that's the real content of this blog entry.

For the sake of this blog entry, let's assume that "db2inst1" is the DB2 instance owner that hosts the database and /export/home/db2inst1 is the home directory of the user "db2inst1". You will need "root" access (or at least get the system administrator to do it for you) in order to set up all the workarounds. (This is where Solaris 10 Zones are ideal, since sysadmins can create a zone and give developers root access to it without really compromising the entire system. That is how the groundwork for this blog entry was done.) In this case, the DB2 instance directory will be /export/home/db2inst1/sqllib. For a single-partition DB2 instance (where sqllib/db2nodes.cfg contains exactly one line), if you try to execute the collect statement on db2start, you will see the following message:

-bash-3.00$ /opt/SUNWspro/bin/collect  -F all -A copy -d /export/home/db2inst1/experiments $HOME/sqllib/adm/db2start
Creating experiment database /export/home/db2inst1/experiments/ ... db2start: warning: 
: open failed: illegal insecure pathname
03/14/2007 14:30:58     0   0   SQL1063N  DB2START processing was successful.
SQL1063N  DB2START processing was successful.

DB2 has started and an experiment is created; however, the experiment will be empty, because the library is not allowed to cross the setuid boundary since it sits on an insecure path. Also, when you cross such setuid environments, a few files created for profile information while the process has an effective userid of root will not be writable by the process once its effective userid is "db2inst1" again. Hence umask also has to be modified.
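The umask point can be illustrated with plain shell (the temporary file names here are purely illustrative): a file created under the default umask 022 is not group- or world-writable, while umask 000 leaves it writable by everyone:

```shell
# File created under the default umask 022: mode 644 (-rw-r--r--)
umask 022
f1=$(mktemp -u); : > "$f1"
ls -l "$f1" | cut -c1-10    # -rw-r--r--

# File created under umask 000: mode 666 (-rw-rw-rw-)
umask 000
f2=$(mktemp -u); : > "$f2"
ls -l "$f2" | cut -c1-10    # -rw-rw-rw-

rm -f "$f1" "$f2"
```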

To work around this problem, use the crle command with root privileges as follows.

First find out the default settings:

# crle -64

Default configuration file (/var/ld/64/ld.config) not found
  Default Library Path (ELF):   /lib/64:/usr/lib/64  (system default)
  Trusted Directories (ELF):    /lib/secure/64:/usr/lib/secure/64  (system default)

Modify the trusted directories path by taking the old list and appending the directory that contains the dbx runtime libraries:

# crle -64 -s /lib/secure/64:/usr/lib/secure/64:/opt/SUNWspro/bin/../prod/bin/../lib/v9/dbxruntime

NOTE: There should be no trailing forward slash after the directory name; otherwise it will not work for that directory.

Verify the setting again. Note that we are changing this only for 64-bit binaries.

# crle -64

Configuration file [version 4]: /var/ld/64/ld.config
  Default Library Path (ELF):   /lib/64:/usr/lib/64  (system default)
  Trusted Directories (ELF):    /lib/secure/64:/usr/lib/secure/64:/opt/SUNWspro/bin/../prod/bin/../lib/v9/dbxruntime

Command line:
  crle -64 -c /var/ld/64/ld.config -s \
/lib/secure/64:/usr/lib/secure/64:/opt/SUNWspro/bin/../prod/bin/../lib/v9/dbxruntime

Now stop the DB2 engine if it is still running, then modify umask and restart the DB2 engine as follows:

-bash-3.00$ umask 000
-bash-3.00$ /opt/SUNWspro/bin/collect -F all -A copy -d \
/export/home/db2inst1/experiments $HOME/sqllib/adm/db2start
Creating experiment database /export/home/db2inst1/experiments/ ...
03/19/2007 10:46:53     0   0   SQL1063N  DB2START processing was successful.
SQL1063N  DB2START processing was successful.
-bash-3.00$ db2sampl
-bash-3.00$ db2 connect to sample

   Database Connection Information

 Database server        = DB2/SUN64 8.2.6
 SQL authorization ID   = DB2INST1
 Local database alias   = SAMPLE

-bash-3.00$ db2 "select count(*) from employee"


  1 record(s) selected.

-bash-3.00$ db2 connect reset
DB20000I  The SQL command completed successfully.
-bash-3.00$ db2stop
03/19/2007 10:52:37     0   0   SQL1064N  DB2STOP processing was successful.
SQL1064N  DB2STOP processing was successful.

When db2stop is executed, it also marks the end of the collection for the DB2 engine instance. Now you can use "er_print" or the "analyzer" GUI as follows:

-bash-3.00$ cd /export/home/db2inst1/experiments
-bash-3.00$ /opt/SUNWspro/bin/er_print

In this case, since we know the collection has lots of sub-processes, we can load them all at once as follows:

-bash-3.00$ /opt/SUNWspro/bin/er_print *.er
(/opt/SUNWspro/bin/er_print) limit 10
(/opt/SUNWspro/bin/er_print) functions
Functions sorted by metric: Exclusive User CPU Time

Excl.     Incl.      Name
User CPU  User CPU
  sec.      sec.
30.111    30.111     
 1.651     1.651     take_deferred_signal
 1.231     1.231     memset
 0.741     1.081     sqlno_qnc_bound_by_func(sqlno_globals*,sqlno_set,sqlno_set,sqlnq_qnc*,sqlno_qtb*,int*,
 0.700     6.545     sqlnp_parser(sqlnp_cb*)
 0.550     0.550     memcpy
 0.550     1.281     sqlogmblkEx
 0.500     0.771     sqlno_bit2intD(sqlno_globals*,sqlno_sdb*,sqlno_apcb*,sqlno_set,sqlno_iset**)
 0.470     2.051     sqlno_prop_ff(sqlno_globals*,sqlno_apcb*,sqlno_qtb*,sqlno_join_context*,sqlno_set,sqlno_set,
 0.350     0.921     sqlno_fd_normalize_order(sqlno_globals*,sqlno_order_class*,sqlno_order_class*,


If you use the analyzer GUI, you have to select all the processes before you start looking at the data.

Also, if this is associated with a technical problem, you can tar up the directory /export/home/db2inst1/experiments and send it to the technical support team, which can then look at the data using the same tools.

In another entry we will see how to collect data using the Sun Studio tools when the DB2 engine is used with DPF, the Database Partitioning Feature. This case is easily identified when there is more than one line in the $HOME/sqllib/db2nodes.cfg file.
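That check can be scripted; here is a sketch that uses a throwaway sample file (on a real system you would point it at the instance owner's $HOME/sqllib/db2nodes.cfg instead):

```shell
# Create a sample db2nodes.cfg for illustration; a real instance keeps
# this file in $HOME/sqllib/db2nodes.cfg of the instance owner.
nodes_cfg=$(mktemp)
printf '0 myhost 0\n' > "$nodes_cfg"

# One line means a single-partition instance; more than one means DPF.
if [ "$(wc -l < "$nodes_cfg")" -gt 1 ]; then
  echo "DPF (multi-partition) instance"
else
  echo "single-partition instance"
fi
rm -f "$nodes_cfg"
```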


Jignesh Shah is a Principal Software Engineer in Application Integration Engineering, Oracle Corporation. AIE enables integration of ISV products, including Oracle's, with Unified Storage Systems. You can also follow me on my blog.

