Saturday Apr 03, 2010

Random Notes - DB and JDBC work

Doing DB tasks after a while gave me the feeling that I was relearning a few things.. happens, I guess !
Just to make sure it doesn't happen again,

Random Notes

- I was able to use the Hibernate Toolkit for creating a CallableStatement.
- Passing a String array is okay; make sure to implement the anonymous java.sql.Array class.
- Don't use unescaped commas within an array element;
Postgres treats commas in an array literal as element separators.
- Debugging is a challenge: if you have insert statements within a function, you cannot use commit inside it. Keep the inserts inside a begin .. end; block and, from the Java client, call beginTransaction and then commit. The log lines will not appear without the commit.
- For substring, use substr(string, from [, count]);
- If you create a temp table, use CREATE TEMP TABLE ... WITHOUT OIDS and EXECUTE
that statement.
For any inserts or other statements on the temp table, use EXECUTE again.
Otherwise the Java client (or any client) will error out saying the relation with
that OID does not exist, because cached plans still point at the dropped temp table.
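To illustrate the comma issue, here is a minimal sketch of a helper that builds a Postgres array literal, double-quoting each element so embedded commas survive. The class and method names are my own, not from the product code:

```java
public class PgArrayLiteral {

    // Build a Postgres array literal like {"a","b,c"} from Java strings.
    // Double-quoting each element lets commas, braces and spaces survive;
    // backslashes and embedded quotes must themselves be escaped.
    static String toLiteral(String[] elements) {
        StringBuilder sb = new StringBuilder("{");
        for (int i = 0; i < elements.length; i++) {
            if (i > 0) sb.append(',');
            String e = elements[i]
                    .replace("\\", "\\\\")   // escape backslashes first
                    .replace("\"", "\\\"");  // then embedded double quotes
            sb.append('"').append(e).append('"');
        }
        return sb.append('}').toString();
    }

    public static void main(String[] args) {
        // prints {"plain","has,comma"}
        System.out.println(toLiteral(new String[] {"plain", "has,comma"}));
    }
}
```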

Monday Jun 08, 2009

:wq blog

I guess I have decided to stop here.
Looking back at my previous posts, the blog, without any recent update, looks as stale and frivolous as reassurances of an economic rebound ! :(
There isn't a mentionable personal achievement since I moved to the bay area about a year ago.
Well, yes there is.. I managed to stay (live, drive and visit tourist places) without wearing Sun Glasses !! :)
Ok, it was nice blogging here.

Tuesday May 13, 2008

Controlling Threads with concurrent package

For long, I have been using the Runnable interface to get long-running, non-interactive work done in the background. In some previous posts, I had mentioned the crude way of using ThreadGroup to achieve a pooling sort of control. It was also fun working on a task scheduler that ran tasks in a controlled set of threads. Logically, what do you need ? The pool size, the Runnable targets, an infinite loop and a data structure to control the thread scheduling !! So the code looked really.. as we call it in India.. 'Fundooo' ! But when I look at the concurrent package, the same code looks like a floppy disk in front of a DVD. I haven't explored the package completely, so I don't want to say a Blu-ray disc.
Just for my understanding, let me post a test program that I can anytime compile and run later to refresh my memory.
import java.util.concurrent.*;
import java.util.Random;

public class TestConcurrent {
    public static void main(String args[]) {
        ExecutorService runner = Executors.newSingleThreadExecutor();
        ExecutorService pool = Executors.newFixedThreadPool(5);

        int style = 1;
        try {
            style = Integer.parseInt(args[0]);
        } catch (Exception e) {}

        switch (style) {
            case 1:
                System.out.println("Running in separate threads...");
                for (int oldstyle = 0; oldstyle < 10; oldstyle++) {
                    new Thread(new TestRunnable()).start();
                }
                break;
            case 2:
                System.out.println("Running in a worker thread....");
                for (int newstyle = 0; newstyle < 10; newstyle++) {
                    runner.submit(new TestRunnable());
                }
                break;
            case 3:
                System.out.println("Running in a thread pool....");
                for (int newstyle = 0; newstyle < 10; newstyle++) {
                    pool.submit(new TestRunnable());
                }
                break;
        }

        // Let the executors finish their queued tasks, then exit.
        runner.shutdown();
        pool.shutdown();
        System.out.println("Main is over....");
    }
}

class TestRunnable implements Runnable {

    private static int i = 0;
    private int member = 0;
    private int runtime = 0;

    public TestRunnable() {
        member = ++i;
        runtime = new Random().nextInt(10);
    }

    public void run() {
        System.out.println("I am instance " + member + " with runtime " + runtime + " seconds");
        try {
            Thread.sleep(runtime * 1000);
        } catch (Exception e) {
            System.out.println("Interrupted");
        }
        System.out.println("Instance " + member + " runtime over...!");
    }
}

Usage is obvious:
$ java TestConcurrent [1|2|3]
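The same package also covers the case where a background task has to hand a result back. A small sketch using Callable and Future (the squaring is just a toy stand-in for a long computation; the class and method names are mine):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class TestFutures {

    // Submit Callables instead of Runnables so each task can return a value.
    static int sumOfSquares(int upTo) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        List<Future<Integer>> results = new ArrayList<Future<Integer>>();
        for (int n = 1; n <= upTo; n++) {
            final int task = n;
            results.add(pool.submit(new Callable<Integer>() {
                public Integer call() {
                    return task * task; // stand-in for a long computation
                }
            }));
        }
        int sum = 0;
        for (Future<Integer> f : results) {
            sum += f.get(); // blocks until that task finishes
        }
        pool.shutdown();
        return sum;
    }

    public static void main(String[] args) throws Exception {
        // prints Sum of squares: 55
        System.out.println("Sum of squares: " + sumOfSquares(5));
    }
}
```

Compared to plain Threads, the caller keeps a handle per task and gets checked exceptions and return values for free.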

Tuesday Apr 29, 2008

10 Satellites at a time

     Well, I guess people might think this is about Sun Connection Satellite or xVM OPS Center. But I'll write about those in my systems management posts. This is about an important page added to space history by Indian scientists. ISRO reached the enviable milestone of launching 10 satellites into space on a single PSLV and successfully placing them into their desired orbits. This brief news covers the story and compares it with similar attempts by other nations in the past.

Friday Apr 25, 2008

IPL: Bangalore Vs Chennai -- Got the tickets

I have tickets to watch the Bangalore Vs Chennai clash on Monday, right here in Chinnaswamy Stadium ! It's going to be FUN. The match is scheduled to start at 8.00 at night and the stadium is within walking distance of the Sun office on Kasturba Road. Perfect setup !

The top attractions for me ( not in any order )
  1. Muralitharan Vs Kallis and Boucher
  2. Kumble, if he joins, Vs Hayden, Dhoni and Hussey
  3. The crowd and the atmosphere in the floodlights
  4. The Washington Redskins cheerleaders, of course
  5. A chance to appear on a national channel ( by doing crazy acts and hoping to catch the cameraman's attention ). Let's see how it goes.

A week into the tournament, it's really a major hit all over India. That reminds me, about a month back, when the Under-19 team was back fresh from their World Cup victory, I managed to talk to Sayyd Iqbal Abdulla in person while both of us were waiting for departure at the Bangalore airport. I also managed to get him and my son in one frame. He is now part of the Kolkata Knight Riders squad, which also features the likes of Sourav Ganguly and Ricky Ponting, with Chris Gayle soon to join. I am sure this pic is going to be priceless.

Wednesday Apr 16, 2008

Bangalore Road Rage

This afternoon, I had to go to LIC of India to pay my insurance premium. I was late by a couple of days, so I reached there a bit worried about whether the middle-aged cashier with a grave face would accept the money or throw it back at me and show me the door. But he let me go with some late fee. On the way back, I realized that Abhijeet was at his driving best (!). I vividly remember getting a feeling of near weightlessness, or going into a trance for some time. Then I decided to really make it a memorable experience. :) Here is the small clip. Notice that after 50 seconds, it really, really got exciting... !! I said, I have paid the premium just now...

Wednesday Feb 06, 2008

TT spectators

We had a local SysNet Table Tennis championship tournament a few months back. It was fun. The matches were organized at the players' convenience, based on the workload each one had at the time. TT is a very popular stress buster in our team and nearly every team member participated. On this particular day, you can see almost the entire SysNet IEC team that showed up to support us. Or at least that's what it looked like till the match was over. And sure, they cheered, jeered and applauded. But check out what happened right after the match. If it were only the latter half of the video, the title could have been 'Spectators riot at TT game' :)

Thursday Jan 10, 2008

My Notes on the Migration to PostgreSQL Experience

         Recently, I was involved in changing the database implementation for one of the products. The product had been using the most popular database, and there were several reasons to move, ranging from performance goals, maintainability and platform support requirements to licensing cost. After taking a look at several replacement candidates, the team narrowed it down to PostgreSQL. The decision eventually became very easy with the availability of PostgreSQL in Solaris 10 and its enterprise-ready features.
         I wish I had the time to carefully note the minutest details of the porting experience, but this is a set of short notes. Let me try to explain the requirements in brief. The product has a central server layer that periodically collects data from tens, or sometimes hundreds, of systems. The collected data needs to be processed and stored in the database for generating reports and graphs. The data retention policy is to keep rolling it up so that the data stays over a long duration at a gradually reducing granularity: the freshly collected data is the most granular, while the older data is summarized over a period of time and purged as and when required.

Porting the code
     Migrating just the data was nearly a piece of cake with the help of a downloadable utility. After manually creating the Postgres schema, we were able to migrate the data from the older version of the product and use it for the prototype. Deciding on the datatypes to be used isn't rocket science, as there is an equivalent or better type in Postgres for nearly every datatype. Keeping performance in mind, the numeric datatype was seldom used, as it consumes 14+ bytes; but there were not many pitfalls there.
     While porting the procedural language code to PostgreSQL functions, the team learnt that most of the code could be reused as is. However, some functions don't compile as is, but Postgres has equivalents such as COALESCE and a wealth of date operators and functions. The operators and type casting with :: come in very handy.
PL/pgSQL itself is not the best way of doing things if the old blocks of code were already hitting the roof of the utilization levels, but we can talk about that a little later.
     At the same time, a lot of code written in C to do 'bulk loads' into the database tables was replaced with a single COPY statement. Amazing !! The COPY statement required changing the format of the source file of the 'bulk load' operation, but that was a very, very minor overhead. All that was required was to read the old format line by line and convert it into single-delimiter-separated fields, something easily done using a perl script. A huge amount of code REMOVED at the cost of a small perl script and a call to the COPY statement.

     So, the product is at a stage where the business logic is ported. It's functional and can handle prototype/dummy data. But when actual data starts flowing in, the size will go up and test the limits of the database's performance. The database design of the old implementation made heavy use of partitioning techniques in order to scale up to several GBs of data. New partitions were created dynamically while old partitions were dropped, after summarizing their data as per the retention policy. Postgres 8.x has a partitioning mechanism that, on the face of it, looks very different. But as we went on implementing it, we found it simpler to administer. For (a) the table owner can also own the partitions, eliminating the need to bring in the most privileged user. (b) The partitions are tables, making them easy to manipulate from an administration point of view. (c) Partition indexing automatically becomes local, as it's just like indexing a table. ... and several such reasons.
     One of the stumbling blocks we faced was that Postgres partitioning works perfectly with the help of Postgres rules for the insert command, but the COPY command does not follow the rules. So the way out was to
     * create the partitions and rules
     * create a temp table and load all the data into it using COPY
     * use insert into < original_table > select from < temp table > order by < partition field >
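From the Java side, the three steps above could look something like this. The table and column names (metrics, metrics_stage, sample_time) are placeholders I made up, not the product schema, and the statements are only assembled here, not executed against a live database:

```java
import java.util.Arrays;
import java.util.List;

public class PartitionLoadPlan {

    // The three-step workaround as plain SQL strings, ready to be run
    // one by one through a JDBC Statement on an open connection.
    static List<String> copyIntoPartitions(String dataFile) {
        return Arrays.asList(
            // 1. Stage the file into a temp table; COPY bypasses insert rules.
            "CREATE TEMP TABLE metrics_stage (LIKE metrics)",
            "COPY metrics_stage FROM '" + dataFile + "'",
            // 2. Re-insert through the parent table so the partition rules fire;
            //    ordering by the partition key keeps each partition's writes clustered.
            "INSERT INTO metrics SELECT * FROM metrics_stage ORDER BY sample_time",
            // 3. Clean up the staging table.
            "DROP TABLE metrics_stage"
        );
    }

    public static void main(String[] args) {
        for (String sql : copyIntoPartitions("/tmp/batch.dat")) {
            System.out.println(sql + ";");
        }
    }
}
```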
     The next hurdle was the pre-partitioned tables to be migrated. A migration utility will not retain the partitions easily. Hence,
* Solution A: Refer to the metadata to find out whether the table has been partitioned, and get the partition info. This requires a higher privileged user.
* Solution B: Create the maximum possible set of partitions going backwards from the current date. Eventually, when a partition becomes old enough, it will be dropped anyway as per the design.

Postgres initial configuration
     So, now we are all set, with the data and business logic ported to Postgres. The partitions are in place to improve query performance and enable effective maintenance.
But can PL/pgSQL scale while processing huge amounts of data and give performance on par with the old database ?
That's when database tuning came into the picture.
     * Shared buffers adjusted to
       f(x) = (x / 3) * (1024 / 8)  for 511 < x < 2049
            = 682                   for x > 2048
     * Work memory adjusted to 1/4th of shared buffers
       f(x) = (x / 2) * (1024 / 16) for 511 < x < 1025
     * Maintenance work memory, effective cache size and max fsm pages set to 2 times work_mem
     * constraint_exclusion set to on. (This boosts query performance when partitioned tables are queried.)
     * A manual vacuum and analyze forced just before running the batch jobs ( instead of autovacuum )

Directories for tablespaces
The idea was to have 3 directories on separate file systems, preferably on separate disks.
The first dir would have the smaller tables, more or less static in nature.
The second dir would have the medium-sized tables holding the summarized, less granular data.
The third dir would have the large tables holding the most granular, non-summarized data.
The indexes were placed in the second dir, holding the medium tables.
The application data stays in yet another directory, and if the above three dirs do not use the same filesystem, we get it as the fourth file system. This gives pg_xlog its own filesystem and, if configured, a different disk.

Business Logic Updates
     It seemed we were all set, but the first round of testing itself revealed we were far from it.
The batch job functions seemed to take forever, so code changes were needed. Remember, it's nearly reused-and-ported code in PL/pgSQL. The main point is that PL/pgSQL usage of cursors needs special treatment, especially when there are loops. The older implementation had nested loops performing singular inserts, with intermediate commit statements after a certain transaction count. PL/pgSQL does not allow that, and it's not the best approach in the first place.
A careful look at the nested loops, and we quickly figured out that one of the loops could be eliminated by replacing it with an INSERT .. SELECT. The huge bonus we get is that it now becomes a single transaction. We also figured out that intersections using EXCEPT don't go well with PL/pgSQL performance. After running the new query with EXPLAIN and EXPLAIN ANALYZE, we figured out the indexing changes required. In particular, a lot of function-based indexes were needed. One needs an immutable function to do so in Postgres, and it makes for beautiful code. Postgres indexing is very different in some cases, especially composite indexes, and as said, function-based indexes can be used very effectively. With all that in place, the performance improved magically ! Now we had a situation where the PL/pgSQL blocks were faster than the older implementation.

New bottleneck: the conversion script
     Remember the conversion perl script I talked about, used to get the data into a single-delimiter format so that the COPY statement could pick it up ? Well, only after PL/pgSQL started performing (better) did we come to know that the perl script was the new bottleneck. So we converted it into a multiprocess script that forks off the conversion logic for every 0.1 M rows. Now, even the script and the PL/pgSQL blocks put together outperformed the older setup.
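The forking was done in perl; as a rough sketch of the same fan-out idea in Java, here is a chunked converter using the concurrent package. The whitespace-to-tab conversion is a stand-in for the real format change, and the names are mine:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.*;

public class ParallelConvert {

    static final int CHUNK = 100_000; // the post used 0.1 M rows per worker

    // Convert each input line to a single-delimiter form, with fixed-size
    // chunks of lines handed to a thread pool instead of forked processes.
    static List<String> convert(List<String> lines) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        List<Future<List<String>>> parts = new ArrayList<>();
        for (int start = 0; start < lines.size(); start += CHUNK) {
            final List<String> chunk =
                    lines.subList(start, Math.min(start + CHUNK, lines.size()));
            parts.add(pool.submit(() -> {
                List<String> out = new ArrayList<>();
                for (String line : chunk) {
                    // stand-in conversion: collapse whitespace to a tab delimiter
                    out.add(line.trim().replaceAll("\\s+", "\t"));
                }
                return out;
            }));
        }
        // Reassemble the chunks in their original order.
        List<String> result = new ArrayList<>();
        for (Future<List<String>> f : parts) result.addAll(f.get());
        pool.shutdown();
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(convert(Arrays.asList("a  b", "c d")));
    }
}
```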

         While obviously a lot more can be done to make it perform better, this short experience was good enough to give me a feel for the strengths of PostgreSQL. Although the Postgres emblem represents 'The elephant never forgets', I think it should be the elephant with tremendous power and strength, yet friendly and useful like our Indian elephants. :) I would highly recommend using PostgreSQL for your applications.

Wednesday Jan 02, 2008

A Good Software, User Interface and Cooking.

Whenever I show my presentation slides to my uber boss, I receive comments on possible improvements... We have discussed the impact of a slick UI experience. I am not so much of a UI person, but these things seem to matter a lot more these days. A s/w product with a very robust and technically strong backend but a plain vanilla UI may not appeal to end users, while a flamboyant UI magnifies the effectiveness and triggers easy, quick and widespread adoption... Well, a few weeks back I tried out my cooking skills on ..

[Read More]

Tuesday Dec 18, 2007

IEC Sports Day 2007

The one and a half weeks of FUN are now over. The Sun Microsystems India Engineering Center Annual Sports Days concluded with a prize distribution this afternoon. The winners, especially the team event winners, looked visibly pleased with the trophies. Obviously due to their win or their run up to the finals, but the size of the trophies alone looked large enough to make it special for them. The well-publicized event, although announced a little late, was open for one and a half days for registration. And boy, there were 550+ e-mails on the alias when I checked last, out of which 400+ were registration mails.
Agreed, many sent more than one mail as per the process, enrolling for one form of sport each, but a team game registration was a single mail per 6 to 8 member team. I must mention that cricket alone had 33 registered teams with no common members, which means we had ~200 distinct cricket players, with at least 33 girls, ready to face the tennis ball darted at them from 22 yards ! 200+ for one sport alone. That was quite amazing !!!
         I was tempted to put it under the sports category, in continuation of my previous post.
Some of the matches proved to be mismatches, but the enhanced rules or formats of some of the games gave us very closely contested battles in more than 80% of the cases. The amateurs were pitted against experts, but they had skewed rules to help them baffle the experts. The favorites were suddenly seen running out of ideas when the over-enthusiasts started playing the games their own way, their own style. It was fun !!
         Unfortunately, the last day was ruined by unexpected rain. The cricket semis and finals, saved for the last day, had to be reduced to a bowl-out. Sure, there were other ways of deciding the winner when the conditions were totally unplayable, but we thought we would rather see the teams PLAY and win. Athletics and other outdoor games had to be canceled. Football really saved the day for the sports lovers. The mud and the small to mid-size pools of water right in the middle of the field did not stop the finalists or the referee. The indoor games, Badminton, TT, Chess, carrom, foosball, were of course unaffected.
         Not everything went as per the plan. There were moments of conflict. Many of us went through frustration and disappointment. But in the end, all of us walked away with more friends, and surely everyone had a lot of FUN.. !!

Friday Dec 14, 2007

Solaris Containers came to the rescue

One of my friends, leading a quality team responsible for performance/stress testing of a network-intensive application, had something interesting to discuss with me a few months back.
The application has multiple instances of a component communicating with a central server over the network. In a customer environment, this piece of the software has one instance per OS instance. It does a lot of snmp talk back and forth with the server. The test case was to have ~1500 instances talking to a single server at a time. His team has a simulation script that runs 50 to 100 instances on one box. He had used 15 to 30 Solaris 8 and 9 hosts some 4 yrs back to test it successfully. Amazing effort ! Because that many instances are good enough to drive a system crazy. And managing 15 to 30 such systems must have been SOMEthing!
The current situation was that he had a lab with reduced resources, a smaller team and very few days to repeat it with the latest version of the same s/w. So I gave him the obvious suggestion ! Virtualize the host(s) to get the magic set of 15 to 30 OS instances. And the even more obvious choice of virtualization technology in this case: Solaris Containers. All the team needed was multiple OS instances of the same OS flavor. Within this constraint, containers are extremely lightweight. The team had a 24 vCPU, 48 GB RAM box that was selected to host them.
So it all looked great on paper. One of his team members configured some 8 zones to start with and started instantiating 100 instances per zone. The moment the instances in just 2 of them started talking over the network, the system refused to respond to any of the terminals connected. Although there were s/w partitions in the form of zones, all were sharing a single NIC.
A team meeting with a few glum faces and a few 'I knew it' remarks... And here is where the exclusive IP stack support added to containers came to the rescue !!! With additional NICs on board and an upgrade to Solaris 10 Update 4, the zones were reconfigured to have the exclusive IP type. And it worked !! 30 boxes down to a single box. So easy to manage, fast boot and shutdown, nearly instantaneous replication with cloning to get 16 containers running 100 instances each. Cool stuff !!!

Wednesday Nov 14, 2007

Sun Management Center 4.0 is out

Now that Sun Management Center 4.0 has been released, I guess I can come back to blogging a bit.

SunMC 4.0

A lot of resources are available in the form of blogs, documents, forums, web casts. Some of them are:

  1. SunMC 4.0 in action managing Containers on the UltraSparc 2 Systems
  2. The product page
  3. Steve's blog
  4. The Sun Connection Blog
  5. The Big Admin Wiki

Let me try posting some feature specific information in my next posts.

Wednesday May 16, 2007

Footprints on the world map

This map shows the countries I have visited so far. It looks cool...

[Read More]

Saturday Apr 21, 2007

What's new in container manager

We added a lot of cool features to Container Manager recently by means of 125830-03, 125831-03, 125832-03, 125833-03, 125834-03.

[Read More]


