Monday May 11, 2009

Fedora 9 and Mac VMware Fusion 2.0.4

UPDATE (5/11/2009): The openjdk6 patches were deleted from this blog; they are no longer needed if you are using the latest OpenJDK 6 sources.

Thought I would provide some notes on setting up a Fedora 9 VMware image on my Mac laptop using VMware Fusion 2.0.4. I used the same steps for both Fedora 9 32-bit (i386) and 64-bit (x86_64); however, I had some trouble installing x86_64, which even seemed to trigger a Mac OS X panic at one point after doing the yum update. I'm not sure what that was all about; it only happened once. This was strictly a local Fedora install, so I didn't need to deal with any of the networking issues of setting up a real physical machine.

I'll try and re-create the order of things as best I can:

  1. Create VMware Virtual Machine from Fedora 9 install iso image. I set it up to have a 20GB disk (You cannot change this disk size afterwards!). I'm using 768Mb RAM (512Mb caused slow builds) and during the install I asked for the "Software Development" packages.

  2. Update your system and make sure you have all you need. Logged in as root:

    yum install kernel kernel-headers kernel-devel
    yum install hg ksh tcsh csh cups cups-devel freetype freetype-devel lesstif-devel
    yum groupinstall "X Software Development" "Development Tools" "Java Development"
    yum update
    This will take a while. A reboot after you are all done would be a good idea.
  3. Install VMware tools. Once you extract the VMware tools folder vmware-tools-distrib, once again logged in as root, run the installer:

    cd vmware-tools-distrib
    ./vmware-install.pl
    The list of questions to answer is long and convoluted; mostly the default answer works fine, but in some cases the installer seems to think you are using a remote login and you have to say "yes" to continue the installation.
  4. Mouse problems: for some reason all my single clicks were being treated as double clicks, which drove me nuts. I found a posting that solved the problem; I used option 2 and, logged in as root, edited the file /etc/X11/xorg.conf to add the following lines:

    Section "ServerFlags"
            Option      "AutoAddDevices" "false"
    EndSection
    A reboot of your virtual machine is necessary for this fix to take effect.
  5. The default limit on file descriptors is very low. To allow for a larger limit, add the following lines to the file /etc/security/limits.conf, again logged in as root:

    * soft nofile 64000
    * hard nofile 64000
    You need to log out and log back in for these new limits to take effect.
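Once you are logged back in, a quick way to confirm the change (ulimit reports the current shell's limits; both numbers should read 64000 after the edit above):

```shell
# Verify the per-shell file descriptor limits after logging back in
# (these should report 64000 once the limits.conf change is active):
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
echo "soft nofile limit: ${soft}"
echo "hard nofile limit: ${hard}"
```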
  6. Recently it was discovered that the upgraded kernel-headers package has trimmed down the files it delivers to /usr/include/linux/ (e.g. dirent.h). Although this doesn't impact OpenJDK building, it could impact builds of parts of the Sun JDK (the plugin). To avoid this missing include file problem you have to do this step last, because the steps above need the latest and matching kernel-headers files. To get the older kernel-headers package, run:

    yum remove kernel-headers glibc-headers
    yum install kernel-headers-2.6.25 glibc-headers
    Bugs have been filed on the Sun JDK to see if we can break this dependency on the /usr/include/linux/ files.

That's the basic system setup. In addition I also setup my own home directory with the following so I can build the OpenJDK:

  1. Get webrev tool:

    mkdir -p ${HOME}/bin
    cd ${HOME}/bin
    chmod a+x webrev
  2. Get latest ant:

    mkdir -p ${HOME}/import/ant_home
    cd ${HOME}/import/ant_home
    tar -xzf apache*.tar.gz
    mv apache-ant-1.7.1/* .
  3. Get forest extension:

    mkdir -p ${HOME}/hgrepos
    cd ${HOME}/hgrepos
    hg clone hgforest
  4. Setup your ${HOME}/.hgrc file:

    cat > ${HOME}/.hgrc <<EOF
    [ui]
    username = ${USER}
    ssh = ssh -C

    [trusted]
    groups = wheel

    [extensions]
    forest = ${HOME}/hgrepos/hgforest/forest.py

    [defaults]
    clone = --pull
    fclone = --pull
    fetch = -m Merge
    ffetch = -m Merge
    EOF
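One detail worth noting: the heredoc delimiter EOF is unquoted, so the shell expands ${USER} when the file is written, and the generated .hgrc contains your actual user name rather than the literal string ${USER}. A small sketch of that expansion (the /tmp path is hypothetical, purely for illustration):

```shell
# Unquoted EOF: ${USER} is expanded at write time, so the file holds the
# actual user name, not the string ${USER} (hypothetical /tmp path):
cat > /tmp/hgrc.check <<EOF
username = ${USER}
EOF
cat /tmp/hgrc.check
```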
  5. Get OpenJDK7 sources (jdk7 build source forest):

    mkdir -p ${HOME}/hgrepos/jdk7
    cd ${HOME}/hgrepos/jdk7
    hg fclone jdk7-build
  6. Get OpenJDK6 sources (jdk6 master source forest):

    mkdir -p ${HOME}/hgrepos/jdk6
    cd ${HOME}/hgrepos/jdk6
    hg fclone jdk6-master

Now to see if I can build both OpenJDK7 and OpenJDK6:

# To get rid of a few sanity errors
export LANG=C

# My own private copy of ant
export ANT_HOME=${HOME}/import/ant_home

# Use the JDK that is part of Fedora 9
export ALT_BOOTDIR=/usr/lib/jvm/java-openjdk

# Add java and ant to the PATH
export PATH=${ALT_BOOTDIR}/bin:${ANT_HOME}/bin:${PATH}

# Go to the root of the jdk7 source forest
cd ${HOME}/hgrepos/jdk7/jdk7-build

# Build jdk7
#  Don't run javadoc, too slow, needs 1024Mb RAM minimum
make NO_DOCS=true

# Go to the root of the jdk6 source forest
cd ${HOME}/hgrepos/jdk6/jdk6-master

# Build jdk6
#  Don't run javadoc, too slow, needs 1024Mb RAM minimum
make NO_DOCS=true

SUCCESS! They both build.


Wednesday Dec 31, 2008

Where has JVMPI gone?

Have you seen this error before?

FATAL ERROR: JVMPI, an experimental interface, is no longer supported.
Please use the supported interface: the JVM Tool Interface (JVM TI).
For information on temporary workarounds contact:

For a long time now, since we released JDK 1.5, we have been warning people that the VM profiling interface JVMPI is going away. Starting with the JDK 6 update 3 release (JDK6u3), it is gone for good.

If you really need JVMPI, your best bet is to use a JDK 1.5 or older release, and also to find out about transitioning to JVM TI. More often than not, you have become dependent on a tool that uses JVMPI, in which case you should try to upgrade that tool to a version that uses JVM TI instead. But if you have written your own JVMPI code, see the JVMPI transition article for help in transitioning to JVM TI.

NOTE: Getting this message indicates that JVMPI has been requested of the JVM. A request for JVMPI must be made prior to JVM initialization and regardless of whether JVMPI is eventually used at runtime, just the request for it will have a negative performance impact on your Java application. In most situations, JVMPI should never be requested unless some kind of performance work is being done and slower performance is considered acceptable. JVM TI does not have many of the JVMPI limitations.
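To make "requested of the JVM" concrete: JVMPI was handed out to a native profiler library during JVM startup, and such libraries were commonly loaded with the old -Xrun option; JVM TI agents are loaded with -agentlib or -agentpath instead. A sketch of the two command-line styles (myprofiler and MyApp are hypothetical names used purely for illustration):

```shell
# Old style: -Xrun loads a library that may then ask the JVM for JVMPI
# during startup, which triggers the fatal error above on JDK6u3+.
old="java -Xrunmyprofiler MyApp"
# New style: -agentlib/-agentpath load a JVM TI agent instead.
new="java -agentlib:myprofiler MyApp"
echo "JVMPI era:  ${old}"
echo "JVM TI era: ${new}"
```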



Monday Jan 07, 2008

Building and Porting the OpenJDK: A Shopping Cart

What is the difference between Building the OpenJDK and Porting it? Certainly porting requires you to build it, but porting can mean much more, and with different levels of porting effort. A different operating system or hardware architecture is certainly a major porting effort. But a different C++ compiler could also be a porting effort, probably to a lesser degree, but don't underestimate it.

The number of OS/Arch/C++ combinations out there is pretty large. Even if you picked Linux/X86/g++, it doesn't mean much; it would either build and work, or build and work after some small degree of porting effort. If the g++ compiler is significantly older, newer, or different from what has been successfully used before, there is a small chance you will encounter some porting effort. If the version of Linux you are using is very old, or very new, you also run the risk of adding some small porting effort. As to whether the changes made to the OpenJDK to make it work should be made permanent, that depends on the changes. Sometimes the porting effort can be a bit like the Whac-A-Mole game: you whack a fix into one OS/Arch/C++ combination and a different broken combination pops up. :^(

Now I'm not suggesting that every build of the OpenJDK is a "port", but in a minor way it is a bit of a port if the specific combination has never been tried before. If you suspect you might be doing a bit of a port and are having problems, I'd recommend a little divide-and-conquer approach: maybe get langtools to build first, then hotspot, then maybe corba, jaxp, and jaxws, leaving the jdk for last. The order may depend on what kind of port you are doing.
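The divide-and-conquer order above can be sketched as a simple loop over the component repositories (the per-repository build command here is just a placeholder; a real forest would run make, or whatever your port requires, inside each directory):

```shell
# Build one component repository at a time, in rough dependency order,
# so a failure points at a specific component (sketch only).
built=""
for repo in langtools hotspot corba jaxp jaxws jdk; do
  echo "=== building ${repo} ==="
  # (cd "${repo}" && make)   # placeholder: run inside a real source forest
  built="${built} ${repo}"
done
echo "order:${built}"
```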

So here is my Shopping Cart:

  • A C and C++ compiler, plus all the trappings that go with it, like /usr/include files and /usr/lib libraries to support building C++ applications. The risk of problems here includes the same ones any C/C++ project might face, but we have a few extra problems that crop up from time to time.
    • Anything other than the specific compiler versions listed in the OpenJDK Build README runs some risk of not working perfectly. In practice, even when problems arise in this area, the effort level to correct them is low, and the frequency of these problems has gone down over the years. However, a big warning here: the Hotspot VM is built with high C++ optimization levels, and it's not unusual for compiler or VM bugs to manifest themselves when the C++ compiler and the Hotspot code mix it up at runtime. This rarely happens with minor compiler revisions, but major compiler updates or new compilers do pose a risk, due to the different compiler vendors' approaches to the various language and system standards. If you suspect some kind of compiler runtime problem is happening, you will need to selectively turn down the optimization on the suspect C++ source files, or just turn down the optimization level completely. Keep in mind that the overall performance of your resulting JDK build could suffer from turning down this C++ optimization level.
    • The Hotspot VM is your major hurdle for the C++ compiler, a great deal of code needs to compile. The rest of the OpenJDK contains many C source files, but getting them to compile and work is considered minor compared to the Hotspot C++ source files. The non-Hotspot native code is more likely to encounter issues with the system /usr/include files or /usr/lib libraries.
    • If the hardware architecture isn't X86 or sparc, then you have some assembly code to write for Hotspot, and possibly a small amount for the java launcher code. I don't have details on this, but I think we will probably get these details from the OpenJDK Porters Group. You will definitely want to be connected with this group if you are doing any porting work.
    • If the OS isn't Solaris, Linux, or Windows, you will encounter more problems I'm sure. The differences in the /usr/include files can pose some problems, but if any amount of native C/C++ code has been ported to this OS, the problems should be well known. The C/C++ code in the OpenJDK should follow the C/C++ language standards, however with any project this size, it would be hard to guarantee that 100% of the language/system "undefined behaviors" have been avoided.
  • A boot JDK (at least JDK 1.5) to run javac/javah/javadoc (from langtools) that builds the OpenJDK class files. The boot JDK also provides the jar utility. It's possible that a JDK 6 is required to build either jaxp or jaxws areas of the OpenJDK, but the dependencies are on class files added to JDK 6, some investigation into this needs to be done. Ideally we would like to allow for just a JDK 1.5 boot.
  • The binary plugs (which will go away someday).
  • Other Bits and Pieces: Also needed are the CUPS include files, the X11 include files and ALSA sound files, Windows users also need the Platform SDK and DirectX 9 SDK. All of these are somewhat like the C++ compiler, and are installed once on your system.
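As a rough check of the shopping cart on a Linux system, something like the following reports which basic tools are missing (tool names taken from the list above; javac stands in for the boot JDK, and the exact set you need depends on your platform, so treat this as illustrative):

```shell
# Report missing shopping-cart tools instead of failing outright, since
# exact requirements vary by platform (sketch only; sh is included as a
# control that should always be found).
missing=""
for tool in sh gcc g++ make javac; do
  command -v "${tool}" >/dev/null 2>&1 || missing="${missing} ${tool}"
done
if [ -n "${missing}" ]; then
  echo "still need:${missing}"
else
  echo "basic C/C++ and boot JDK tools found"
fi
```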

I hope this helps in some way.

As Landon Fuller is finding out with his little porting adventure, some platforms are easier than others.




