Thursday May 24, 2007

Monitoring: why, when, what

Recently a discussion popped up around monitoring. We like to have DTrace probes installed in as many software products as possible. The technical problems to tackle are minor. Inserting DTrace probes in your code is not really rocket science. And indeed that is a good thing!

Basically you can use DTrace probes to provide information about your application, through the DTrace framework, to others. One advantage, in my view the most significant one, is the ability to create context: knowing what else was happening when an application probe fired. Monitoring the application itself is valuable and should have been done in the first place; realizing it with DTrace adds the benefit of context (besides being rather easy :-)

Why do you want to monitor an application? To learn about the behaviour of your application as it is deployed on some platform. Is it just knowledge for the sake of knowledge? I tend to think not! If I know my application is not behaving as expected but I cannot influence it, then what good is that knowledge? So besides the instrumentation there is a need for "knobs" that affect the application's behaviour.

Only if we know the knobs and the counters, and understand their relationship, can we master the nuances and actually improve the deployment. This implies that we must understand our application too, at least well enough to recognize whether there is a problem.

Up till now I have talked about generic behaviour. Let's go a little deeper and get more concrete, and look at things from a performance and scalability point of view. Since I am quite familiar with databases, let's assume I am working on a database. I know the source quite well, understand its architecture and have a profound knowledge of its modules. What do I want to know? This can be higher level things, from a user perspective for example. The user executes transactions. Besides wanting these transactions handled properly ;^) the user most likely wants them to finish in an acceptable time. Learning how long a transaction takes could be the first (global) thing we would like to know. A timer can be activated right before the transaction starts, and stopped once the transaction ends. Storing each result, or adding it to a running total of "time elapsed in transactions", are two ways of monitoring. Through a DTrace probe, access to these numbers can be provided.
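
As an illustration, here is a minimal D script sketch. It assumes the database publishes hypothetical USDT probes mydb*:::transaction-start and mydb*:::transaction-done; the provider and probe names are made up for this example.

    #!/usr/sbin/dtrace -s
    /* Time every transaction: remember the start time per thread and
       aggregate the elapsed time when the transaction completes. */

    mydb*:::transaction-start
    {
            self->start = timestamp;        /* nanoseconds since boot */
    }

    mydb*:::transaction-done
    /self->start/
    {
            @elapsed["transaction time (ns)"] = quantize(timestamp - self->start);
            self->start = 0;                /* release the thread-local */
    }

The quantize() aggregation prints a power-of-two histogram, so slow outliers stand out instead of disappearing in an average.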

What do we learn, knowing the time spent in a transaction, or the time spent in transaction handling overall? Basically nothing! OK, now we see a billion seconds elapsed. So what? That is the nature of a database: execute transactions. We still do not have a clue where this time was consumed. There is no context. Adding a counter that shows the time the database engine was waiting for the application, the user or another external party that requested execution of the transactions can at least teach us whether most of the time is spent outside or inside the database while handling transactions. If, say, 80% of the time is external, we should look at the application that uses this database.
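
A sketch of how that split could be measured, again assuming hypothetical probes mydb*:::client-wait-start and mydb*:::client-wait-done that fire whenever the engine sits idle waiting for the next request of an open transaction:

    /* Sum how much time was spent waiting on the client
       rather than working inside the engine. */

    mydb*:::client-wait-start
    {
            self->wait = timestamp;
    }

    mydb*:::client-wait-done
    /self->wait/
    {
            @external["waiting on client (ns)"] = sum(timestamp - self->wait);
            self->wait = 0;
    }

Comparing this sum with the total transaction time from the previous script tells us which side of the connection deserves the attention.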

Assuming time is spent in the database, we still do not have a clue about where in the database. Understanding the why is still way in the future. A transaction is a set of SQL statements executed in a certain order. Given the data set this transaction works on, it should always return the same answer as long as that data set does not change. Monitoring the data set itself is most likely not the way to go: chances are it contains so many elements that it becomes just too much to handle ;^)

One next step could be to monitor the SQL statements during the phases they go through (parsing, execution, retrieving rows, etc.). At least this added granularity teaches us more about where time is consumed. Let's side-step a little: "where time is consumed" implies something. On any computer time is either spent on the CPU or spent waiting for a resource to become available. This resource could be a record in the database that is locked by another user; a latch or mutex that must be acquired before a code section may be executed; or a CPU becoming available. How can a process wait? Or better, a thread, since in these days of multi-threaded applications the process is just an environment that threads run in. Any wait is done by the kernel, since waiting is nothing more than "being taken off the CPU and not being scheduled for CPU until some condition occurs". The only way for a thread to end up in the kernel is by doing a system call.
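
To make the on-CPU versus waiting distinction concrete, here is a sketch that sums the time the database's threads spend off CPU, using the standard sched provider; the process name mydb is an assumption for this example:

    /* Sum off-CPU (waiting) time per thread of the hypothetical mydb process. */

    sched:::off-cpu
    /execname == "mydb"/
    {
            self->off = timestamp;
    }

    sched:::on-cpu
    /self->off/
    {
            @offcpu[pid, tid] = sum(timestamp - self->off);
            self->off = 0;
    }

Because self-> variables are thread-local they survive the context switch, so the subtraction really measures how long each thread sat waiting before it was put back on a CPU.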

System calls, and the time spent in them, can already be measured with DTrace. However, the need is not for system calls per se, but for those that are made in the context of a given SQL statement within a given transaction. This is why adding DTrace probes is preferred over a home-made monitoring implementation: DTrace can provide the context around the system calls.
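
This is straightforward to express in D once statement probes exist. A sketch, again using the hypothetical mydb*:::statement-start and mydb*:::statement-done probes; the syscall provider itself is standard DTrace:

    /* Attribute system call time to the SQL statement that caused it. */

    mydb*:::statement-start
    {
            self->in_stmt = 1;
    }

    syscall:::entry
    /self->in_stmt/
    {
            self->sys = timestamp;
    }

    syscall:::return
    /self->sys/
    {
            @systime[probefunc] = sum(timestamp - self->sys);
            self->sys = 0;
    }

    mydb*:::statement-done
    {
            self->in_stmt = 0;
    }

Only the system calls issued between the two application probes are counted, which is exactly the context a stand-alone system call trace cannot give.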

So we implement probes throughout our code to learn which SQL statements take how long, and in which code sections these statements spend most of their time. Do we now know why a transaction takes so long? Maybe. If we learned that most of our time is spent reading blocks from disk, then surely there is a strong I/O component influencing our execution time. We also know it is the I/O component of SQL statement S, so a solution could be to look at that statement. Perhaps the statement is not written as optimally as it could be; perhaps adding (or removing) an index will enable the engine to resolve it faster.
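
If the statement-level probes point at disk reads, the suspicion can be cross-checked at the device level with DTrace's standard io provider, without any application probes. A minimal sketch (note that it observes all I/O on the system, not just that of statement S):

    /* Physical I/O latency per device, keyed on device and block number
       because io:::done may fire in interrupt context on another thread. */

    io:::start
    {
            iostart[args[0]->b_edev, args[0]->b_blkno] = timestamp;
    }

    io:::done
    /iostart[args[0]->b_edev, args[0]->b_blkno]/
    {
            @latency[args[1]->dev_statname] =
                quantize(timestamp - iostart[args[0]->b_edev, args[0]->b_blkno]);
            iostart[args[0]->b_edev, args[0]->b_blkno] = 0;
    }

This complements rather than replaces the statement-level probes: it tells us whether the device itself is slow, while the probes tell us which statement is paying for it.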

The statements of a transaction are executed in a potentially highly concurrent environment. This means a single transaction can still be fast when executed in an idle database. However, if more users execute the same or other statements that influence our transaction, then we need to gather more information about "where time flies by".

Many databases buffer their disk blocks or their records in a pool. This pool is then accessed by a thread while executing a statement. Since the pool changes over time (a buffer that now holds info on table T, record R, will in a second hold information on a different record of a different table), access needs to be regulated. Assume a hash array is used to implement fast lookups. A beautiful hashing function is crafted, and we assume all the lists that originate from the hash buckets are equally long; in other words, the blocks are uniformly distributed over the buckets. Now, is this really the case? This could be something worthwhile to instrument. Through this instrumentation we can actually learn how long the chains really are, and understand why so much CPU is lost travelling these chains.
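
A sketch of what that could look like, assuming a hypothetical probe mydb*:::bufpool-hash-search that fires after every lookup, with arg0 being the number of chain entries walked and arg1 the bucket index (both made up for this example):

    /* Distribution of hash chain lengths, plus the busiest buckets. */

    mydb*:::bufpool-hash-search
    {
            @walked["chain entries walked"] = quantize(arg0);
            @hot[arg1] = count();
    }

    END
    {
            trunc(@hot, 10);        /* only report the ten busiest buckets */
    }

A lopsided quantize() histogram immediately shows when the "uniformly distributed" assumption does not hold in practice.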

There probably is a lock on each bucket, regulating access to the chain that starts from it. If many sessions have to wait to acquire this lock, then it is likely that two or more hot blocks happen to fall on the single chain guarded by this lock. If we can see this (by monitoring), we can influence it by resizing the hash array (our knob), or perhaps even by changing the parameters that influence the hash function.

In conclusion, the answer to "what do we instrument" is highly dependent on the level of detail that needs to be understood. If the application administrator is not aware of the inner workings of the application, then "global" probes that can be understood in a higher context (transaction time) are likely the only ones of value. The disadvantage is the information they give: little. In my view the best way of placing probes is in levels: starting with global ones that are easy to understand, through others that demand more knowledge about the application but provide more information, to the final ones that really require in-depth insight but will reveal what is actually happening down there in the engine. The actual placement of the probes in the application source is therefore highly dependent on the application.


Tuesday Oct 24, 2006

Yet another DTrace related article

Recently I visited an ISV in Denmark. One of the presentations I did was about DTrace. I had only an hour to tell them the beauty of this. My initial thinking was "impossible." However, the feedback received was very positive indeed. One guy's response was "I never thought it possible that someone can tell a story for more than an hour, on a subject of 10 seconds, and still have a very quiet and focused audience throughout this process." This made me conclude it would be a good idea to share this message with a wider audience. Initially I thought it would be nice to blog about it. However, to tell a story around 10-15 slides turns out to become quite a lengthy piece, so I decided to try to make a publication out of it. The subject is logging in to a system with ssh: after giving the login name it took 10 seconds before the password prompt appeared. Way too long. The story tells how DTrace was used to find the reason and the solution. It is not about DTrace per se, but about the way things evolved over time. Too often the root cause is not clear; first a better understanding of the situation is needed, and here too DTrace can help us. The objective of the article is to help others develop a creative way of thinking about the use of this tool.
The complete story is now available here.


Enjoy!! And do not hesitate to contact me if you have questions, remarks and/or suggestions.



