Friday Jul 25, 2008

SubType Check Optimization in Java!

Many of our friends keep asking why they should move to Java SE 5.0 or Java SE 6, and most of the time you need a reply impressive enough that they will start using it and discover the further benefits themselves. I was reading about the gradual performance improvements across JDK versions, which is quite interesting. Java has identified all the reasons it was considered slow (there is a very nice article on why Java was slower than C++) and optimized them as far as possible. One of the reasons mentioned in that article was "lots of casts", and that is true: the code of a good, big project performs millions of cast checks in Java, and those checks certainly need attention for optimization. From JDK 5 onwards, the HotSpot VM does fast subtype checking. This blog is a small talk on that topic; for details, read the article written by John Rose and Cliff Click.

Prior to JDK 5, every subtype cached the supertypes against which a cast had already succeeded. The cache holds 2 entries, and if a lookup misses, the check falls back to a call into the VM, which resolves the question and caches the answer if it is positive. So any miss in the cache is a costly operation, because we need to make a VM call. In the worst scenario, seen in SPECjbb, three supertypes rotate through the 2-entry cache:

So, [A, B] in cache <---- C resolved by the VM and cached; A is evicted.

[B, C] in cache <---- A misses, VM call (positive), cached; B is evicted.

[C, A] in cache <---- B misses, VM call (positive), cached; C is evicted!
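The thrashing above can be sketched in plain Java. This is a toy model, not HotSpot internals: the class and method names (SubtypeCacheDemo, checkSubtype, the vmCalls counter) are all illustrative, but the 2-entry eviction behaviour matches the rotation described.

```java
import java.util.ArrayDeque;

// Toy model of the pre-JDK-5 check: a 2-entry per-type cache of known
// supertypes, with a counted "VM call" on every miss.
public class SubtypeCacheDemo {
    static int vmCalls = 0;

    static class SubtypeCache {
        private final ArrayDeque<String> cache = new ArrayDeque<>(2);

        // On a miss, "ask the VM" and cache the positive answer,
        // evicting the older of the two entries.
        void checkSubtype(String superType) {
            if (cache.contains(superType)) return; // hit: cheap
            vmCalls++;                             // miss: costly VM call
            if (cache.size() == 2) cache.removeFirst();
            cache.addLast(superType);
        }
    }

    public static void main(String[] args) {
        SubtypeCache c = new SubtypeCache();
        // Three supertypes rotating through a 2-entry cache:
        // every single check misses, as in the SPECjbb worst case.
        String[] rotation = {"A", "B", "C"};
        for (int round = 0; round < 10; round++) {
            c.checkSubtype(rotation[round % 3]);
        }
        System.out.println("VM calls for 10 checks: " + vmCalls);
    }
}
```

Running it shows 10 VM calls for 10 checks: the cache never helps once the third type enters the rotation.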

So, in basic terms, we cannot trust the complexity: the VM calls keep happening at runtime. Hence it is better to move to a better algorithm. The new algorithm passes the code through an optimizer that checks more at compile time, for example when both the base class and the derived class are already known. It tries to resolve as much of the check as possible at compile time, inlines the entire check, and removes the need for VM calls. The complexity is quite consistent: it takes a single load for most objects and object arrays.

The new scheme also divides subtype checks into primary and secondary checks. Classes, arrays of classes, and arrays of primitive values go through the primary check, whereas interfaces and arrays of interfaces are handled by the secondary check. Finally, a smart algorithm combines the two.

In the primary subtype check:

Aim: is S a subtype of T? First calculate the depth of S and T, where depth means the number of supertypes on the path up to the root. Though the VM does this at a lower (assembly) level, I am writing code in Java to find the depth. Here is the code using the reflection API:

package findparent;

public class Main {

    public static void main(String[] args) {
        String[] display = new String[10];
        int i = 1;
        FifthClass tc = new FifthClass();
        Class<?> classname = tc.getClass();
        display[0] = classname.getName();
        Class<?> parent = classname.getSuperclass();
        // Walk up the superclass chain until we reach java.lang.Object.
        while (!parent.getName().equals("java.lang.Object")) {
            display[i] = parent.getName();
            parent = parent.getSuperclass();
            i++;
        }
        display[i] = "java.lang.Object";
        for (int j = 0; j <= i; j++) {
            System.out.println(display[j]);
        }
        System.out.println("Depth of tc is " + i);
    }
}

class FirstClass {
}

class SecondClass extends FirstClass {
}

class ThirdClass extends SecondClass {
}

class FourthClass extends ThirdClass {
}

class FifthClass extends FourthClass {
}

Now the algorithm says:

S.is_subtype_of(T) :=
    return (T.depth <= S.depth) ? (T == S.display[T.depth]) : false;
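The pseudocode above can be sketched as runnable Java. This is a minimal model, assuming each type carries a precomputed "display" array of its supertype chain indexed by depth (display[0] is the root, display[depth] is the type itself); the Type class and field names are illustrative, not the VM's actual data structures.

```java
// Sketch of the primary subtype check with precomputed depth/display.
public class PrimaryCheck {

    static class Type {
        final String name;
        final Type[] display; // supertype chain, root first
        final int depth;      // index of this type in its own display

        Type(String name, Type parent) {
            this.name = name;
            this.depth = (parent == null) ? 0 : parent.depth + 1;
            this.display = new Type[depth + 1];
            if (parent != null)
                System.arraycopy(parent.display, 0, display, 0, depth);
            display[depth] = this;
        }

        // S.isSubtypeOf(T) := T.depth <= S.depth && S.display[T.depth] == T
        boolean isSubtypeOf(Type t) {
            return t.depth <= this.depth && this.display[t.depth] == t;
        }
    }

    public static void main(String[] args) {
        Type object = new Type("Object", null);
        Type first  = new Type("FirstClass", object);
        Type second = new Type("SecondClass", first);
        System.out.println(second.isSubtypeOf(first));  // true
        System.out.println(first.isSubtypeOf(second));  // false
    }
}
```

Note that the whole check is one bounds comparison and one array load: no loop over the supertype chain, which is exactly why it is so cheap to inline.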

And there are further optimizations on top of this, which we will check in the next blog :-). I will also try to cover how the secondary subtype check is done and how the two checks are combined.

Until then, build a big inheritance tree and try to see the difference between an older JDK and JDK 5 onwards.

Saturday Jun 28, 2008

Why JDK 6 Again?

Again, a comparison and a reason to use JDK 6. A few weeks back one of our team members gave a presentation on JVM performance improvements in Mustang. I have collected a useful point from it here and compared it against an older JDK version. Here is the code:

import java.util.*;

public class RuntimePerformanceGC {

    public static void main(String[] args) {
        RuntimePerformanceGC ob = new RuntimePerformanceGC();
        Vector<String> v = new Vector<String>();
        long t1 = System.currentTimeMillis();
        ob.addItems(v);
        long t2 = System.currentTimeMillis();
        System.out.println(t2 - t1);
    }

    public void addItems(Vector<String> v) {
        for (int i = 0; i < 500000; i++)
            v.add("Item" + i);
    }
}

And the output in ms is:

JDK 1.4.2: 984

JDK 6: 578

You can see a massive difference: about a 41 percent improvement in time. Why? Here goes:

This is because of runtime performance optimization. The new lock coarsening optimization technique implemented in HotSpot eliminates the unlock and relock operations when a lock is released and then reacquired with no meaningful work done in between those operations. And that is exactly what happens in our example: since Vector's operations are thread safe, it takes the lock for each add, releases it, and then takes it again for the next one.
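The effect can be imitated by hand. The sketch below is illustrative, not what the JIT literally emits: the coarsened version simply holds the Vector's monitor once around the whole loop, so each reentrant add skips the contended lock/unlock pair, which is roughly the transformation HotSpot applies automatically when it sees back-to-back lock and unlock on the same object.

```java
import java.util.Vector;

// Hand-coarsened vs. fine-grained locking on a Vector.
public class CoarseningSketch {

    static void addItemsFineGrained(Vector<String> v) {
        for (int i = 0; i < 500000; i++)
            v.add("Item" + i);          // lock, add, unlock - per call
    }

    static void addItemsCoarsened(Vector<String> v) {
        synchronized (v) {              // one lock for the whole loop
            for (int i = 0; i < 500000; i++)
                v.add("Item" + i);      // monitor already held (reentrant)
        }
    }

    public static void main(String[] args) {
        Vector<String> a = new Vector<String>();
        Vector<String> b = new Vector<String>();
        long t1 = System.currentTimeMillis();
        addItemsFineGrained(a);
        long t2 = System.currentTimeMillis();
        addItemsCoarsened(b);
        long t3 = System.currentTimeMillis();
        System.out.println("fine-grained: " + (t2 - t1) + " ms");
        System.out.println("coarsened:    " + (t3 - t2) + " ms");
        System.out.println("same result:  " + a.equals(b));
    }
}
```

Both versions produce the same Vector; only the number of lock transitions differs. (Be careful with coarsening by hand in real code: holding a lock across a long loop can hurt other threads' latency, which is why the JIT only coarsens when nothing meaningful happens between the unlock and the relock.)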

So, I have just tried to give one more reason to use JDK 6 ;-). This is of course not the only reason for the big improvement; I will try to cover some more in upcoming blogs :-)


Wednesday Apr 30, 2008

Compiler Optimization Can Cause Problems

Last week, I created a presentation on multi-threading in Java. Though I covered this fact in the presentation, I still wanted to blog about it. In the multi-threading world, compiler optimization can cause serious problems. Just check my small code:

public class NonVolatileProblem extends Thread {

    ChangeFlag cf;

    public NonVolatileProblem(ChangeFlag cf) {
        this.cf = cf;
    }

    public static void main(String[] args) {
        ChangeFlag cf = new ChangeFlag();
        NonVolatileProblem th1 = new NonVolatileProblem(cf);
        NonVolatileProblem th2 = new NonVolatileProblem(cf);

        th1.start();
        th2.start();
    }

    public void run() {
        cf.method1();
        cf.method2();
    }
}

class ChangeFlag {

    boolean flag = false;

    public void method1() {
        flag = false;
        try {
            Thread.sleep(1000);
        } catch (Exception e) {
            System.out.println("Don't want to be here");
        }
        if (flag) {
            System.out.println("This can be reached");
        }
        System.out.println("Value of flag:" + flag);
    }

    public void method2() {
        flag = true;
    }
}

Check out the if (flag) block. If the compiler optimizes the code and removes the if (flag) part, thinking that the flag value will always be false, then we have a situation here (FBI style of speaking :-D), because the other thread can change the value and make flag true. Just run this code 5-6, maybe 10 times, and you will be able to see the output statement "This can be reached". Just for the sake of demonstrating that, I have added the sleep statement. Here is what I got on my 3rd run of the code :)

Value of flag:false
This can be reached
Value of flag:true

Handling this type of situation is not difficult: the specification says to add the keyword volatile before the variable flag, which tells the compiler not to optimize away reads of the variable just because it has seen some initial value or declaration.
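The fix is one word. Here is the ChangeFlag class rewritten with volatile (the class is renamed VolatileFlag here only to keep it distinct from the example above): every read of the field now goes to memory, so the write from the other thread is guaranteed to become visible.

```java
// Same class as above, with the one-word volatile fix applied.
class VolatileFlag {

    volatile boolean flag = false;   // forces a real read on every access

    public void method1() {
        flag = false;
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        if (flag) {                  // cannot be folded away by the JIT
            System.out.println("This can be reached");
        }
        System.out.println("Value of flag:" + flag);
    }

    public void method2() {
        flag = true;
    }
}
```

Note that volatile only guarantees visibility of the single field; it does not make compound operations like check-then-act atomic, for which you would still need synchronized.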

About

Hi, I am Vaibhav Choudhary working in Sun. This blog is all about simple concept of Java and JavaFX.
