There are basically two ways of profiling a Java application, known under the names of “sampling” and “instrumentation”. The difference between them is quite big: one periodically gathers statistical data, whereas the other integrates into the code, setting an entry and an exit point, and is therefore able to deliver more than statistics: the EXACT number of times a method has been called, the time it needed to perform, and so on.
CPU sampling tries to give you an overview of where your application is spending the most time. There are usually two times reported:
- Self time – As I was mentioning in my previous blog about wall times and CPU own times, the self time “is a measure of how much real time elapses from start to end, including time that passes due to programmed (artificial) delays or waiting for resources to become available”. In the example below, where some Eclipse cache coordination is involved, the process dealing with cache coordination over JMS has a self time of 1115152 milliseconds. Although it looks like a high value at first, the one giving us the relevant information is actually the Self Time (CPU), which tells us how long it was actually consuming CPU time doing the job it should do, out of the total amount of 1115152 milliseconds.
- Self Time (CPU) – This is the time actually spent by the CPU executing method code, and this is the one you are most interested in. This is where you have to dig: invocation count vs. CPU own time. A low invocation count with a large CPU own time usually means your code is inefficient.
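To make the wall-time vs. CPU-time distinction concrete, here is a minimal sketch using the standard `ThreadMXBean` API. The class name `TimeComparison` and the workload are my own illustration, and thread CPU time measurement must be supported by the JVM (it is on mainstream HotSpot builds):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

// Illustrates what the profiler reports: "self time" is wall-clock time,
// "self time (CPU)" is only the time the thread actually burned CPU.
public class TimeComparison {

    // Returns {wallMillis, cpuMillis} for a task that first waits, then computes.
    static long[] measure() throws InterruptedException {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        long wallStart = System.nanoTime();
        long cpuStart = bean.getCurrentThreadCpuTime();

        Thread.sleep(200);                              // waiting: counts toward wall time only
        long x = 0;
        for (int i = 0; i < 20_000_000; i++) x += i;    // busy work: counts toward both
        if (x < 0) throw new IllegalStateException();   // keep the loop from being optimized away

        long wallMs = (System.nanoTime() - wallStart) / 1_000_000;
        long cpuMs = (bean.getCurrentThreadCpuTime() - cpuStart) / 1_000_000;
        return new long[] { wallMs, cpuMs };
    }

    public static void main(String[] args) throws InterruptedException {
        long[] t = measure();
        System.out.println("self time (wall): " + t[0] + " ms");
        System.out.println("self time (CPU):  " + t[1] + " ms");
    }
}
```

The wall figure will include the 200 ms sleep; the CPU figure will not, which is exactly the gap you see between the two columns in the profiler.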
Looking further down at the second method, oracle.net.ns.packet.receive(), we can see that the self time and the self time (CPU) are the same. This means that of the amount of time reported as wall time (or self time), it practically used the whole time doing work in our context. Once you have found your hot spot, you can access its backtrace and see who was the biggest consumer inside the method.
The fact that sampling uses statistical gathering of data makes it pretty unintrusive. Of course, there will always be a small overhead, but nothing comparable to bytecode instrumentation. Therefore, it is always a good starting point in identifying hot spots when the CPU is running like a headless chicken. If the problem is there, and it is that obvious, you will find it by using sampling.
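For illustration, a toy sampler can be built on nothing more than `Thread.getAllStackTraces()`: snapshot every thread's stack on a timer and count which method sits on top; the methods with the most hits are your statistical hot spots. Real profilers are far more sophisticated, and the `MiniSampler` name here is purely hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// A minimal sketch of how a sampling profiler works: periodically snapshot
// every thread's stack and count the topmost frame of each one.
public class MiniSampler {
    private final Map<String, Integer> hits = new HashMap<>();

    // Take one snapshot of all live threads and tally the top-of-stack methods.
    public void sample() {
        for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
            StackTraceElement[] stack = e.getValue();
            if (stack.length > 0) {
                String top = stack[0].getClassName() + "." + stack[0].getMethodName();
                hits.merge(top, 1, Integer::sum);
            }
        }
    }

    public Map<String, Integer> report() {
        return hits;
    }

    public static void main(String[] args) throws InterruptedException {
        MiniSampler sampler = new MiniSampler();
        for (int i = 0; i < 50; i++) {   // ~50 samples at 10 ms intervals
            sampler.sample();
            Thread.sleep(10);
        }
        sampler.report().forEach((m, n) -> System.out.println(m + " -> " + n + " samples"));
    }
}
```

Note why this is cheap: between samples the application runs completely undisturbed, which is why the overhead stays low compared to instrumentation. It is also why the counts are only statistical, never EXACT.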
The biggest advantage of instrumentation over sampling is that it is an ongoing process: listening, gathering, acting like a filter on your application’s code, counting not only the number of times a method was invoked (an EXACT number), but also the time it took to perform.
Like I said, in order to do that, instrumentation adds probes. This means inserting custom bytecode for recording method invocations, object creation and other operations that take place inside the method. This is INTRUSIVE, and will definitely lower the performance of your application, but it will give you the accuracy you need to tell the difference between a simple alarm and a real problem.
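To give an idea of what those probes amount to, here is a hand-written sketch of what an instrumented method effectively looks like after bytecode rewriting: an entry probe grabs a timestamp, and an exit probe in a finally block updates an invocation counter and a running time total. `Probe` and `doWork` are hypothetical names for illustration, not any real agent's API:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Sketch of the bookkeeping behind instrumentation: exact invocation
// counts and total elapsed time per method, updated on every single call.
public class Probe {
    static final ConcurrentHashMap<String, LongAdder> calls = new ConcurrentHashMap<>();
    static final ConcurrentHashMap<String, LongAdder> nanos = new ConcurrentHashMap<>();

    static long enter() {
        return System.nanoTime();
    }

    static void exit(String method, long start) {
        calls.computeIfAbsent(method, k -> new LongAdder()).increment();
        nanos.computeIfAbsent(method, k -> new LongAdder()).add(System.nanoTime() - start);
    }

    // What an instrumented method effectively looks like after rewriting:
    static int doWork(int n) {
        long t = enter();                  // injected entry probe
        try {
            int sum = 0;
            for (int i = 0; i < n; i++) sum += i;
            return sum;
        } finally {
            exit("doWork", t);             // injected exit probe
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) doWork(10_000);
        System.out.println("doWork invocations: " + calls.get("doWork").sum());
        System.out.println("doWork total ms:    " + nanos.get("doWork").sum() / 1_000_000);
    }
}
```

Every call pays for the two probes, the map lookups and the counter updates, which is exactly the overhead that makes instrumentation intrusive, and exactly the bookkeeping that makes its numbers exact rather than statistical.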
As a personal opinion, I would always start with sampling whenever I see some CPU anomaly (strange usage trend, high usage, etc.), drilling in only when necessary, especially when it comes to profiling applications deployed in some fancy, complex application servers.