
Buck's Blog

  • February 6, 2015

Thread Stuck at readBytesPinned / writeBytesPinned on JRockit

David Buck
Principal Member of Technical Staff
Introduction

Sometimes you will see one or more threads in your application hang
while executing a method called either readBytesPinned or
writeBytesPinned. This is a common occurrence and does not indicate any
JVM-level issue. In this post I will explain what these methods do and
why they might block.


Background

Before explaining what these methods do, it is important to introduce
the idea of object pinning. Pinning is where we temporarily tag an
object on the heap so that the garbage collector will not try to move
the object until we remove the tag. Normally, an object might be moved
from one address to another if it is being promoted (i.e. being moved
from young space to old space) or as part of compaction (defragmentation). But if an object is pinned, the GC will not try to
move it until it is unpinned.


So why would we want to pin an object? There are several scenarios where
pinning is important, but in the case of readBytesPinned or
writeBytesPinned, it is simply a performance optimization. Pinning a
buffer (a byte array) during an I/O operation allows us to hand its
address directly to the operating system. Because the buffer is pinned,
we do not need to worry that the garbage collector will try to move it
to a different address before the I/O operation finishes. If we were not
able to pin the buffer, we would need to allocate additional native
(off-heap) memory to pass to the OS's native I/O call and also copy data
between the on-heap and off-heap buffers. So by pinning the buffer to a
constant address on the heap, we avoid both having to do an otherwise
redundant native memory allocation and a copy. If this sounds similar to
the use case for NIO’s direct buffers, you've got the right idea.
Basically, JRockit gives you the best of both worlds, the I/O speed of
direct I/O operations, and the safety / convenience of pure-Java memory
management (no off-heap allocations).
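
To make the comparison concrete, here is a minimal sketch (not from the original post) of the NIO direct-buffer approach mentioned above: the buffer is allocated off-heap, so the GC never moves it and native I/O code can use its address directly, which is the same property pinning gives an ordinary on-heap byte array.

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        // A direct buffer lives outside the Java heap, so the garbage
        // collector never relocates it; the OS can read and write its
        // address for the lifetime of the buffer, much like a pinned array.
        ByteBuffer direct = ByteBuffer.allocateDirect(8192);

        // A normal heap buffer, by contrast, may be moved by the GC,
        // so native I/O against it requires pinning or copying.
        ByteBuffer heap = ByteBuffer.allocate(8192);

        System.out.println("direct.isDirect() = " + direct.isDirect());
        System.out.println("heap.isDirect()   = " + heap.isDirect());
    }
}
```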


Let’s Try It!

Now let's try a really contrived example to see what this might look like:


First, we’ll make a named pipe and open it by redirecting some output.
(Please don’t forget to kill the cat process when you are done.)


$ mkfifo pipe

$ cat - > pipe &


Now let's make a trivial Java program [1] that tries to read from our
new pipe.


import java.io.FileInputStream;

public class PipeRead {

    public static void main(String[] args) throws Exception {

        FileInputStream in = new FileInputStream("pipe");

        in.read(new byte[10]);

    }

}


Finally, we compile and run.


$ javac PipeRead.java

$ java PipeRead


Now if we collect a thread dump, we can see that the main thread is
blocked waiting for data (that in this case will never come) from our pipe.


"Main Thread" id=1 idx=0x4 tid=2570 prio=5 alive, in native

    at jrockit/io/FileNativeIO.readBytesPinned(Ljava/io/FileDescriptor;[BII)I(Native Method)

    at jrockit/io/FileNativeIO.readBytes(FileNativeIO.java:32)

    at java/io/FileInputStream.readBytes([BII)I(FileInputStream.java)

    at java/io/FileInputStream.read(FileInputStream.java:198)

    at PipeRead.main(PipeRead.java:6)

    at jrockit/vm/RNI.c2java(JJJJJ)V(Native Method)

    -- end of trace


If we were to try HotSpot on the exact same test case, we would see it
doing a blocking read just like JRockit does.


"main" prio=10 tid=0x00007f25e4006800 nid=0xa8a runnable [0x00007f25e9c44000]

   java.lang.Thread.State: RUNNABLE

    at java.io.FileInputStream.readBytes(Native Method)

    at java.io.FileInputStream.read(FileInputStream.java:198)

    at PipeRead.main(PipeRead.java:6)


So even though the top of the stack trace for JRockit has
JRockit-specific classes/methods, the JVM itself does not have anything
to do with why the thread is stopped. It is simply waiting for input
from a data source it is trying to read from.


Troubleshooting

So what should you do when you have a thread that appears stuck in a
call to readBytesPinned or writeBytesPinned? That depends entirely on
where the application is trying to read data from or write data to.


Lets look at a real-world example of a thread stuck doing a blocking read:


    "ExecuteThread: '2' for queue: 'weblogic.kernel.Default'" id=20 idx=0x2e tid=16946 prio=5 alive, in native, daemon

        at jrockit/net/SocketNativeIO.readBytesPinned(I[BIII)I(Native Method)

        at jrockit/net/SocketNativeIO.socketRead(Ljava/io/FileDescriptor;[BIII)I(Unknown Source)[inlined]

        at java/net/SocketInputStream.socketRead0(Ljava/io/FileDescriptor;[BIII)I(Unknown Source)[inlined]

        at java/net/SocketInputStream.read([BII)I(SocketInputStream.java:113)[optimized]

        at oracle/net/ns/Packet.receive()V(Unknown Source)[inlined]

        at oracle/net/ns/DataPacket.receive()V(Unknown Source)[optimized]

        at oracle/net/ns/NetInputStream.getNextPacket()V(Unknown Source)[optimized]

        at oracle/net/ns/NetInputStream.read([BII)I(Unknown Source)[inlined]

        at oracle/net/ns/NetInputStream.read([B)I(Unknown Source)[inlined]

        at oracle/net/ns/NetInputStream.read()I(Unknown Source)[optimized]

        at oracle/jdbc/driver/T4CMAREngine.unmarshalUB1()S(T4CMAREngine.java:1099)[optimized]

    <rest of stack omitted>


In the above case, you can tell from the stack trace that the JDBC
(database) driver is doing a blocking read from a network socket. So the
typical next step would be to see if there is a reason why the expected
data may have been delayed (or may never arrive at all). For example,
the database server we are talking to could be hung, there could be a
network issue that is delaying (or even dropping) the database's
response, or there could be some sort of protocol mismatch where both
parties believe it is the other side's turn to talk. Analyzing log files
on both sides may provide clues as to what happened. If the issue is
reproducible, collecting a network trace and analyzing it with a tool
like Wireshark may also prove useful.
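
If hanging indefinitely on a silent peer is unacceptable, one common mitigation is a socket read timeout, which turns an open-ended blocking read into a catchable exception. The sketch below is illustrative and not from the original post; the local server simply stands in for a hung database or unresponsive remote peer.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class ReadTimeoutDemo {
    public static void main(String[] args) throws IOException {
        // A local server that accepts the connection but never sends any
        // data, standing in for a hung database server or remote peer.
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("localhost", server.getLocalPort());
             Socket peer = server.accept()) {

            // Without a timeout, this read would block forever,
            // just like the stuck readBytesPinned call in the dump above.
            client.setSoTimeout(500); // give up after 500 ms

            try {
                client.getInputStream().read(new byte[10]);
                System.out.println("read returned");
            } catch (SocketTimeoutException e) {
                System.out.println("read timed out");
            }
        }
    }
}
```

Whether a timeout is appropriate depends on the protocol; some drivers and frameworks expose their own timeout settings that should be preferred over raw socket options.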


Obviously, this is just one of countless scenarios where a thread may
get stuck waiting on some external (to the JVM) resource. But other
cases should be similar: you must look further down in the stack,
determine what the application is waiting for (where it is expecting to
receive data from or trying to send data to) and then troubleshoot the
issue from there. Sometimes, tools like lsof, truss, or strace can come
in handy here. Very often, this troubleshooting involves investigating
other processes or even other machines across your network.


Conclusion

Seeing a thread block temporarily at readBytesPinned or writeBytesPinned
is completely normal and does not usually indicate a problem. However,
if one or more threads block on a call to either of these methods for
an unreasonable amount of time, you should examine further down the
stack trace and attempt to determine what external resource the Java
application is waiting for.


[1] Obviously, this is horrible code. Real production code would
include proper exception handling and of course close the
FileInputStream when we are finished with it. You have been warned.
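
For the record, a slightly more robust version might use try-with-resources so the stream is closed even if the read throws. This is a sketch, not code from the post; the class name is made up.

```java
import java.io.FileInputStream;
import java.io.IOException;

public class PipeReadSafe {
    public static void main(String[] args) {
        // try-with-resources guarantees the stream is closed on every
        // exit path, including when read() throws an IOException.
        try (FileInputStream in = new FileInputStream("pipe")) {
            int n = in.read(new byte[10]);
            System.out.println("read " + n + " bytes");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```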
