Blog

Orient meets Italy – An exotic salad dressing with ginger, Thai basilico and balsamic cream

Tonight, while my wife was putting the kids to sleep, I thought about creating a fresh, exotic, splashy dressing for a classic garden salad (lettuce, cucumbers, tomatoes, radish, avocado).

So I started with fresh ginger and squeezed lemon juice. To mellow the sourness and spiciness of the two, I used a little bit of balsamic cream and honey. I mixed that well with olive oil and added chopped Thai basilico and lemon balm. Thai basilico has a freshness so different from the Italian one; mixed with the lemon balm it just ROCKS.

My wife said it was a dream of a dressing, and since she is my best critic, I figured it was worth writing down. So here it goes:

Oriental/Italian dressing

Dressing for two portions

Ingredients:

  • a 2 cm cube of ginger, freshly shredded
  • squeezed juice of a quarter of a lime
  • one tablespoon of balsamic cream
  • about 5 tablespoons of best-quality extra-virgin olive oil (you do NOT want a bitter olive oil, it will spoil it all)
  • 5 small leaves of Thai basilico
  • 2 small leaves of lemon balm
  • one teaspoon of honey

Steps:

  1. Shred the ginger and mix it with the lime juice and balsamic cream
  2. Add the teaspoon of honey and mix well
  3. Mix in the olive oil and beat it well for a minute or so
  4. Chop the Thai basilico and lemon balm and mix them in

 

The shift

I’m on paternity leave. That gives me a little more time to create instead of just cook. And I love this.

Time is now the most precious resource one can have. And of it, my family takes the most, which makes me happy. We get to do a lot during the days (and nights, haha), and it is just awesome to leave work aside for some time…

And the “me” time, the small amount of time after the kids are asleep… it’s all about cooking. I guess it’s pretty much the only hobby I have left, but the one I will never give up.

So for the next couple of months, I’ll just be spending time with my wife and kids, travelling, cooking and enjoying life the way I see it. No more tech stuff for a while 🙂

See you around

 

 

Groovy JMX Bean Monitoring

Head Note

It’s been a while since I last wrote an article; I guess life got a little more complex once it was filled by our two little kids 🙂 Time is now such an expensive resource, haha.

Anyway, back to the topic.

I have finally had some time at work to put into refactoring our Load Testing Framework. One of the key topics I had set my eyes on was a reliable way of monitoring the application servers: a way with very low overhead, stable and, most of all, adaptable. First I played a lot with the REST monitoring interface that Glassfish offers (check my other posts relating to that). REST monitoring was quite cool, but it only allowed monitoring what Glassfish exposed by setting the respective monitoring levels (JDBC connection pool, EJB container, etc.). While that may suffice for some, it had the following disadvantages:

  • Monitoring could be performed only on the levels exposed by Glassfish (through module monitoring levels), meaning I could not get any system load information, for example
  • Each resource had to be queried separately
  • Each query result had to be parsed in order to be imported into the monitoring database
  • Comparatively high overhead
  • Sometimes, under heavy load, resources were not available for querying

Besides that, I had to use a combination of curl for performing the requests, XML processing and Unix text tools (like sed or awk) to get all these results straight. Not to mention the effort I had to put in whenever I needed to set up a new monitoring item.

Further on, I had the problem that I could only monitor one application server at a time. Since we run our tests in a distributed environment, I needed something I could use to easily monitor one to many application servers at the same time. Since I had played with Groovy before, doing some integration with our Jenkins, I set my eyes on it once more.

JMX MBean Monitoring with Groovy

For the sake of keeping this article simple, we’ll use the best free Java APM  Tool there is on the market: VisualVM.

As defined in the JMX Specification, client applications (like ours) obtain MBeans through an MBean server connection. Once we have obtained an MBean server connection, we can use it to query the underlying beans and retrieve their attributes (of course, operating on the beans is possible as well).
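
As a minimal sketch of that first step (assuming an MBean server reachable at localhost:8686 without credentials; GlassFish’s secured setup with admin/adminadmin is handled by the Connector class further below):

[code language="java"]
import javax.management.remote.JMXConnectorFactory as JmxFactory
import javax.management.remote.JMXServiceURL as JmxUrl

// Hypothetical endpoint; adjust host and port to your server
def url = new JmxUrl('service:jmx:rmi:///jndi/rmi://localhost:8686/jmxrmi')
def connection = JmxFactory.connect(url).MBeanServerConnection

// A quick sanity check: how many MBeans does this server expose?
println "MBeans registered: ${connection.MBeanCount}"
[/code]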

Let’s have a look at two software components exposing a lot of debugging information through MBeans: Glassfish and OpenMQ.

Glassfish MBean Monitoring

Glassfish by default exposes its JMX server on port 8686. A default connection will also require a username and password, which by default are admin / adminadmin. Once you have connected to the JMX server, you’ll notice a new tab called “MBeans”.

Glassfish MBeans Tab

Module Monitoring Levels in Glassfish

As you can see, the exposed MBeans are on the left. The most interesting for us are the ones giving “runtime” information on performance metrics like used database connections, committed transactions, queue lengths, number of open connections and so on. In Glassfish you get all this for free by enabling the so-called “Module Monitoring” (just expand the MBean called “amx” and navigate to the child called “Module Monitoring Levels”).

Module Monitoring Levels in Glassfish

As you can see, we have enabled the monitoring for some of the modules, among them being:

  • JDBC Connection Pool: information regarding the number of acquired logical connections, physical connections, connection timeouts, etc.
  • Thread Pool: information regarding the number of active threads, total threads, etc
  • JVM: information regarding the current but also peak usage of the memory spaces

Let’s use the JDBC Connection Pool MBean to check on the current pool usage statistics. Expand the node called “amx:jdbc-connection-pool-mon”

JDBC Connection Pool Monitoring – Attribute value view

We can see now metrics like:

  • number of free connections in the pool
  • number of logical connections acquired from the pool
  • number of currently used jdbc connections in the pool
  • etc.

So if we wanted to take a live peek at the system and check on its resource usage, we could do that easily by following the steps above. But that is not all there is to it. How about things that Glassfish does not expose via its main MBean “amx”, things like garbage collection statistics, memory usage statistics for all memory spaces, or operating system statistics like CPU usage? Let’s take a look at the “java.lang” MBean.

Other MBeans

We can retrieve CPU usage, compilation statistics, memory space statistics, garbage collection statistics and so on… The one thing missing is regularly checking this information and aggregating the results, which brings me to today’s topic. Before that, let’s summarize a little what we have achieved up to now:

  1. Connect to a Glassfish Server using JMX connection (service:jmx:rmi:///jndi/rmi://server:8686/jmxrmi)
  2. Connect to an MBean Server using VisualVM and the MBeans plugin
  3. Configure monitoring levels in Glassfish (Module Monitoring Levels MBean)
  4. Retrieve some performance metrics from the JDBC Connection Pool Monitor
  5. Retrieve other performance metrics independent of Glassfish (JVM, Operating System, etc.)

Dynamically querying MBeans with Groovy

Let’s say we would like to keep a constant eye on the JDBC connection pool and retrieve its metrics every couple of seconds. Let’s take a look at the GroovyMBean class.

Its constructor looks like this:

GroovyMBean(MBeanServerConnection server, ObjectName name)

We need an MBean server connection and an object name. The object names of all exposed MBeans can be viewed in the Metadata tab of the MBeans browser:

JDBC Connection Pool – MBean Metadata
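
If you prefer code over clicking through VisualVM, the object names can also be listed programmatically. A small sketch, assuming `connection` is the MBeanServerConnection returned by the Connector class shown below:

[code language="java"]
import javax.management.ObjectName

// List every MBean registered under the GlassFish "amx" domain, together with its object name
connection.queryNames(new ObjectName('amx:*'), null).each { objectName ->
    println objectName
}
[/code]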

So let us first create a Connector class that connects us to the JMX server. We will later use it to connect to a whole list of servers.

[code language="java"]
import javax.management.remote.JMXConnector
import javax.management.remote.JMXConnectorFactory as JmxFactory
import javax.management.remote.JMXServiceURL as JmxUrl

public class Connector {
    static server
    def serverUrl, user, password

    Connector(serverUrl, user, password) {
        this.serverUrl = serverUrl
        this.user = user
        this.password = password
    }

    def connect() {
        HashMap environment = new HashMap()
        String[] credentials = [user, password]
        environment.put(JMXConnector.CREDENTIALS, credentials)

        // Connect to the remote MBean server
        def jmxUrl = 'service:jmx:rmi:///jndi/rmi://' + serverUrl + '/jmxrmi'
        try {
            println jmxUrl
            server = JmxFactory.connect(new JmxUrl(jmxUrl), environment).MBeanServerConnection
            return server
        }
        catch (Exception e) {
            println("Could not connect to MBean server")
            //System.exit(0)
        }
    }
}
[/code]

Since we now have the connection, we need the MBean’s object name in order to work with it. I will use the term “entry point” instead of “object name”. All we have to do now is create a new GroovyMBean object:

[code language="java"]
def entryPoint = 'amx:pp=/mon/server-mon[server],type=jdbc-connection-pool-mon,name=resources/EocPool'
def monitoringBean = new GroovyMBean(connection, entryPoint)
[/code]

We could now just go after the bean’s attributes by retrieving them using the full path. Let’s say we want to monitor the number of connections acquired:

[code language="java"]def connAcquired = monitoringBean.numconnacquired.count[/code]

That would be all there is to it; this gives us back the value of the attribute. We could of course build a map of attributes and retrieve them one by one:

[code language="java"]attributeMap = [numconnacquired:'count', numconnfree:'current', numconnused:'current'][/code]

[code language="java"]attributeMap.each { attribute, field -> println "$attribute " + monitoringBean."$attribute"."$field" }[/code]

This would then look something like this:

numconnacquired 2117663
numconnfree 41
numconnused 54

Of course, if we wanted to monitor other beans as well, we would have to create an attribute map for each of them too. Why not go the other way around? Get the MBean, get all its attributes, retrieve the values of all attributes, and use only whatever we need. Let’s create a map of MBeans, each containing a label (that we will later use for logging) and the entry point (object name). We will first do this for the JDBC Connection Pool and Thread Pool monitoring beans:

[code language="java"]
beanMap = [
    ThreadPoolMonitor: [label:'Monitor – Thread Pool', entryPoint:'amx:pp=/mon/server-mon[server],type=thread-pool-mon,name=network/jk-main-listener-1/thread-pool'],
    JdbcMonitor:       [label:'Monitor – JDBC Pool',   entryPoint:'amx:pp=/mon/server-mon[server],type=jdbc-connection-pool-mon,name=resources/EocPool'],
]

// connectorMap holds one MBean server connection per server (built with the Connector class above)
for (e in connectorMap) {
    beanMap.each { key, value -> setMonitoringBean(e.key, e.value, beanMap."$key".entryPoint, beanMap."$key".label) }
}
[/code]

Let’s take a look at the setMonitoringBean method, which takes the following arguments: server, connection, object name and label:

[code language="java"]
public static void setMonitoringBean(server, connection, entryPoint, label) {
    // Initialize the value map. We will hold all attributes and values in this map
    def valueMap = [:]
    def timestamp = new Date().format("yyyy-MM-dd HH:mm:ss")
    try {
        // Connect to the MBean using the given server and MBean connection information
        def monitoringBean = new GroovyMBean(connection, entryPoint)
        // Get the MBean's existing attributes and put them into a list so we can traverse it
        def attributeList = [monitoringBean.listAttributeDescriptions()].flatten()
        // We need to split each attribute description so we can check on its type. Currently we support
        // two kinds: composite types (having child attributes) and simple numeric types (single values)
        // Traverse each attribute and store the values in the value map
        attributeList.each {
            def splitter = it.split(' ')
            def attributeType = splitter[1], attributeName = splitter[2]
            if (attributeType == 'javax.management.openmbean.CompositeData') {
                // Only store numeric values; filter out timestamp attributes
                monitoringBean."${attributeName}".contents.each { key, value ->
                    try {
                        if ("$value".matches("[0-9].*") && !"$key".matches(".*Time")) valueMap.put(label + "|" + attributeName + "-" + key, value)
                    }
                    catch (Exception e) { logFile << 'Exception returned when checking attribute: ' + attributeName + '\t' + e + '\n' }
                }
            }
            else {
                if (attributeType == 'long' || attributeType == 'double' || attributeType == 'java.lang.Long' || attributeType == 'java.lang.Integer') {
                    // Directly store the value of the attribute, since it is a simple attribute
                    try {
                        def valueHolder = monitoringBean."${attributeName}"
                        valueMap.put(label + "|" + attributeName, valueHolder)
                    }
                    catch (Exception e) { logFile << 'Exception returned when checking attribute: ' + attributeName + '\t' + e + '\n' }
                }
            }
        }
        // Flush the valueMap into the result file
        valueMap.each { entry -> resultFile << server + "|" + timestamp + "|" + "$entry".replaceAll('=', '|') + "|" + testId + "\n" }
        valueMap.clear()
    }
    catch (Exception e) {
        logFile << 'Something went wrong\n'
        println e
        valueMap.clear()
    }
}
[/code]

This would now return the following results:

[code]
myserver|2015-10-29 17:23:11|Monitor – Thread Pool|corethreads-count|5
myserver|2015-10-29 17:23:11|Monitor – Thread Pool|currentthreadsbusy-count|0
myserver|2015-10-29 17:23:11|Monitor – Thread Pool|totalexecutedtasks-count|49424
myserver|2015-10-29 17:23:11|Monitor – Thread Pool|maxthreads-count|1024
myserver|2015-10-29 17:23:11|Monitor – Thread Pool|currentthreadcount-count|83
myserver|2015-10-29 17:23:11|Monitor – JDBC Pool|numpotentialconnleak-count|0
myserver|2015-10-29 17:23:11|Monitor – JDBC Pool|numconnsuccessfullymatched-count|0
myserver|2015-10-29 17:23:11|Monitor – JDBC Pool|numconnfailedvalidation-count|0
myserver|2015-10-29 17:23:11|Monitor – JDBC Pool|numconnreleased-count|343676
myserver|2015-10-29 17:23:11|Monitor – JDBC Pool|waitqueuelength-count|0
myserver|2015-10-29 17:23:11|Monitor – JDBC Pool|numconnfree-current|95
myserver|2015-10-29 17:23:11|Monitor – JDBC Pool|numconnfree-highWaterMark|95
myserver|2015-10-29 17:23:11|Monitor – JDBC Pool|numconnfree-lowWaterMark|0
myserver|2015-10-29 17:23:11|Monitor – JDBC Pool|connrequestwaittime-current|0
myserver|2015-10-29 17:23:11|Monitor – JDBC Pool|connrequestwaittime-highWaterMark|4586
myserver|2015-10-29 17:23:11|Monitor – JDBC Pool|connrequestwaittime-lowWaterMark|0
myserver|2015-10-29 17:23:11|Monitor – JDBC Pool|numconnused-current|0
myserver|2015-10-29 17:23:11|Monitor – JDBC Pool|numconnused-highWaterMark|67
myserver|2015-10-29 17:23:11|Monitor – JDBC Pool|numconnused-lowWaterMark|0
myserver|2015-10-29 17:23:11|Monitor – JDBC Pool|numconndestroyed-count|0
myserver|2015-10-29 17:23:11|Monitor – JDBC Pool|numconnacquired-count|343676
myserver|2015-10-29 17:23:11|Monitor – JDBC Pool|averageconnwaittime-count|0
myserver|2015-10-29 17:23:11|Monitor – JDBC Pool|numconntimedout-count|0
myserver|2015-10-29 17:23:11|Monitor – JDBC Pool|numconnnotsuccessfullymatched-count|0
myserver|2015-10-29 17:23:11|Monitor – JDBC Pool|numconncreated-count|95

[/code]

Some words on the attributes and their types. An attribute can be of type:

  • composite (with subattributes)
  • long
  • integer
  • string
  • boolean

That is the reason we need to check each attribute to see whether we have to retrieve its subattributes or its plain value. Doing this, we can dynamically retrieve all attributes of an MBean and decide afterwards which ones to use and which not.

If we add a map of servers as well, we only need to connect once and can then retrieve the results by polling the servers periodically:

[code language="java"]
def serverList = [
server1:"server1:"+monitoringPort,
server2:"server2:"+monitoringPort,
server3:"server3:"+monitoringPort,
]
[/code]

All we have to do now is poll the MBeans in a loop. My implementation relies on the existence of a control file: as long as the file exists, the beans are polled at a 5-second interval.
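
To make that concrete, here is a minimal sketch of the polling loop (assuming `connectorMap` and `beanMap` have been built as above, and `runtimeFolder` points to the test’s working directory; the complete script at the end of this post does exactly this):

[code language="java"]
def pollTimer = 5000                                   // poll every 5 seconds
def monitoringFile = new File(runtimeFolder + "/monitoring/control_file")

// Keep polling as long as the control file exists; deleting the file stops the monitoring
while (monitoringFile.exists()) {
    for (e in connectorMap) {
        beanMap.each { key, value ->
            setMonitoringBean(e.key, e.value, beanMap."$key".entryPoint, beanMap."$key".label)
        }
    }
    sleep(pollTimer)
}
[/code]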

Aggregation of JMX Monitoring collected with Groovy

All we need to do now is to import the data into a database of our choice, and draw the charts accordingly. This would look something like:

Groovy JMX JDBC Connection Pool Monitoring

Adding monitoring for a new MBean

All you have to do is extend the bean map with one or more new beans. Let’s say we would like some statistics on the memory spaces:

[code language="java"]
beanMap = [
    ThreadPoolMonitor: [label:'Monitor – Thread Pool', entryPoint:'amx:pp=/mon/server-mon[server],type=thread-pool-mon,name=network/jk-main-listener-1/thread-pool'],
    JdbcMonitor:       [label:'Monitor – JDBC Pool',   entryPoint:'amx:pp=/mon/server-mon[server],type=jdbc-connection-pool-mon,name=resources/EocPool'],
    MemoryEden:        [label:'Monitor – MemoryEden',  entryPoint:'java.lang:type=MemoryPool,name=Par Eden Space'],
    MemoryPerm:        [label:'Monitor – MemoryPerm',  entryPoint:'java.lang:type=MemoryPool,name=CMS Perm Gen'],
    MemoryOld:         [label:'Monitor – MemoryOld',   entryPoint:'java.lang:type=MemoryPool,name=CMS Old Gen'],
    MemorySurvivor:    [label:'Monitor – MemorySurvivor', entryPoint:'java.lang:type=MemoryPool,name=Par Survivor Space']
][/code]

That’s it. No other hassle, nothing more. Just add a new server or a new bean, and there you go. And the best part of it: it connects only once to each of the servers and then acts as an aggregator… Groovy, isn’t it?

An aggregated report could then look like this:

Groovy JMX Monitoring Report

Feel free to use the Groovy script I have created, adapt it, extend it and so on. Critical opinions are as welcome as improvement ideas 😉 Let’s keep the open source going… APM tools can be so expensive nowadays.

I can go home to my kids now. Now this is groovy!

[code language="java"]
/*
Created: Alexandru Ersenie
Groovy script for monitoring Application Server over JMX Protocol
Usage: groovy $script_name $test_id $workspace $jmxPort
Usage example: groovy jmx_mon.groovy 21322 /home/testing 22086

Define the list of servers you want to monitor : class Main -> serverList
Define the list of mbeans you want to monitor : class Main -> beanMap
Define the user and password for the JMX Connection: class Connector (default admin:adminadmin)
Define the polling period in milliseconds: class Main -> pollTimer (default: 5000)
*/
import javax.management.ObjectName
import javax.management.remote.JMXConnectorFactory as JmxFactory
import javax.management.remote.JMXServiceURL as JmxUrl
import java.util.HashMap
import javax.management.remote.*
import java.text.DateFormat
import java.util.regex.*

public class Monitoring {
    String connection, entryPoint
    static logFile, resultFile, monitoringFile, testId

    // Constructor
    Monitoring() {
        this.connection = connection
        this.entryPoint = entryPoint
    }

    public static void setMonitoringBean(server, connection, entryPoint, label) {
        // Initialize the value map. We will hold all attributes and values in this map
        def valueMap = [:]
        def timestamp = new Date().format("yyyy-MM-dd HH:mm:ss")
        try {
            // Connect to the MBean using the given server and MBean connection information
            def monitoringBean = new GroovyMBean(connection, entryPoint)
            // Get the MBean's existing attributes and put them into a list so we can traverse it
            def attributeList = [monitoringBean.listAttributeDescriptions()].flatten()
            // We need to split each attribute description so we can check on its type. Currently we support
            // two kinds: composite types (having child attributes) and simple numeric types (single values)
            // Traverse each attribute and store the values in the value map
            attributeList.each {
                def splitter = it.split(' ')
                def attributeType = splitter[1], attributeName = splitter[2]
                if (attributeType == 'javax.management.openmbean.CompositeData') {
                    // Only store numeric values; filter out timestamp attributes
                    monitoringBean."${attributeName}".contents.each { key, value ->
                        try {
                            if ("$value".matches("[0-9].*") && !"$key".matches(".*Time")) valueMap.put(label + "|" + attributeName + "-" + key, value)
                        }
                        catch (Exception e) { logFile << 'Exception returned when checking attribute: ' + attributeName + '\t' + e + '\n' }
                    }
                }
                else {
                    if (attributeType == 'long' || attributeType == 'double' || attributeType == 'java.lang.Long' || attributeType == 'java.lang.Integer') {
                        // Directly store the value of the attribute, since it is a simple attribute
                        try {
                            def valueHolder = monitoringBean."${attributeName}"
                            valueMap.put(label + "|" + attributeName, valueHolder)
                        }
                        catch (Exception e) { logFile << 'Exception returned when checking attribute: ' + attributeName + '\t' + e + '\n' }
                    }
                }
            }
            // Flush the valueMap into the result file
            valueMap.each { entry -> resultFile << server + "|" + timestamp + "|" + "$entry".replaceAll('=', '|') + "|" + testId + "\n" }
            valueMap.clear()
        }
        catch (Exception e) {
            logFile << 'Something went wrong\n'
            println e
            valueMap.clear()
        }
    }

    public static void main(String[] args) {
        testId = args[0]
        def runtimeFolder = args[1]
        def monitoringPort = args[2]
        def beanMap = [:]
        def pollTimer = 5000
        resultFile = new File(runtimeFolder + "/monitoring/glassfish_stats.log")
        monitoringFile = new File(runtimeFolder + "/monitoring/control_file")
        logFile = new File(runtimeFolder + "/run.log")
        resultFile.write ''

        // Map of servers to monitor: logical name -> host:port
        def serverList = [
            server1: "server1:" + monitoringPort,
            server2: "server2:" + monitoringPort,
            server3: "server3:" + monitoringPort,
        ]

        // One MBean server connection per server
        def connectorMap = [:]
        if (monitoringFile.exists()) {
            serverList.each { key, value ->
                if (key != 'tstjms201c') {
                    println key
                    connectorMap.put(key, new Connector(value, 'admin', 'adminadmin').connect())
                }
                else {
                    println key
                    connectorMap.put(key, new Connector(value, 'admin', 'admin').connect())
                }
            }
        }
        else {
            println("Monitoring file not found, monitoring will now exit")
            logFile << 'Monitoring file not found, monitoring will not be performed'
            System.exit(0)
        }

        // Map of MBeans to monitor: label used for logging + entry point (object name)
        beanMap = [
            ThreadPoolMonitor: [label:'Monitor – Thread Pool', entryPoint:'amx:pp=/mon/server-mon[server],type=thread-pool-mon,name=network/jk-main-listener-1/thread-pool'],
            JdbcMonitor:       [label:'Monitor – JDBC Pool',   entryPoint:'amx:pp=/mon/server-mon[server],type=jdbc-connection-pool-mon,name=resources/EocPool'],
            MemoryEden:        [label:'Monitor – MemoryEden',  entryPoint:'java.lang:type=MemoryPool,name=Par Eden Space'],
            MemoryPerm:        [label:'Monitor – MemoryPerm',  entryPoint:'java.lang:type=MemoryPool,name=CMS Perm Gen'],
            MemoryOld:         [label:'Monitor – MemoryOld',   entryPoint:'java.lang:type=MemoryPool,name=CMS Old Gen'],
            MemorySurvivor:    [label:'Monitor – MemorySurvivor', entryPoint:'java.lang:type=MemoryPool,name=Par Survivor Space']
        ]

        // Poll all beans on all servers as long as the control file exists
        while (monitoringFile.exists()) {
            for (e in connectorMap) {
                beanMap.each { key, value -> setMonitoringBean(e.key, e.value, beanMap."$key".entryPoint, beanMap."$key".label) }
            }
            sleep(pollTimer)
        }
    }
}

public class Connector {
    static server
    def serverUrl, user, password

    Connector(serverUrl, user, password) {
        this.serverUrl = serverUrl
        this.user = user
        this.password = password
    }

    def connect() {
        HashMap environment = new HashMap()
        String[] credentials = [user, password]
        environment.put(JMXConnector.CREDENTIALS, credentials)

        // Connect to the remote MBean server
        def jmxUrl = 'service:jmx:rmi:///jndi/rmi://' + serverUrl + '/jmxrmi'
        try {
            println jmxUrl
            server = JmxFactory.connect(new JmxUrl(jmxUrl), environment).MBeanServerConnection
            return server
        }
        catch (Exception e) {
            println("Could not connect to MBean server")
            //System.exit(0)
        }
    }
}

[/code]

JAMDL – Java Automatic Memory Leak Detector using JMap, Jasper and MySQL

I had been working on this idea for a couple of years, trying to give it a shape, but never really finding the time for the details. The basic concept was simple:

  1. Start monitoring of objects
  2. Run a test
  3. Collect and import monitoring metrics
  4. Use math and estimation to detect memory leaks

The memory leak

When does an object become a memory leak suspect? Quite simple. I will use two acronyms here:

  1. SNI – Start Number of Instances
  2. ENI – End Number of Instances

Taking the shortest path, one would say: whenever this condition holds:

ENI - SNI > 0

That means: “if the number of instances at the end of the test is higher than the number at the beginning of the test, we have a memory leak.” Well, not necessarily:

  • some objects may be initialized only by the test itself when loading specific classes, so they were simply not there when the application server was started
  • soft references: the objects may still be collected, meaning it is up to the garbage collector to decide when to remove them
  • session timeouts: some users close their sessions by logging out, but others (most of them) just close their browser -> the session timeout is responsible for removing the session and the attached objects, and that timeout may vary depending on the implementation, meaning it is not the test end timestamp that is decisive, but the test end timestamp + timeout

Just saying “higher number of instances” is therefore not enough. We need to evaluate the delta between SNI and ENI; we can, for example, use the standard deviation. Even when measuring only two values, if the difference is high enough we can assume that something is not going as planned and that we might have a memory leak.
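
To make the idea concrete, here is a minimal Groovy sketch of such a check (the names `sniMap` and `eniMap` and the threshold of 100 are made-up examples for illustration, not values used by the tool described here):

[code language="java"]
// sniMap / eniMap: class name -> number of instances, taken from the first and last snapshot
def isLeakSuspect(long sni, long eni, long threshold = 100) {
    def delta = eni - sni
    // The sample standard deviation of just two values boils down to |delta| / sqrt(2)
    def deviation = Math.abs(delta) / Math.sqrt(2)
    return delta > 0 && deviation > threshold
}

def suspects = eniMap.findAll { className, eni -> isLeakSuspect(sniMap[className] ?: 0L, eni) }
suspects.each { className, eni -> println "Suspect: $className (SNI=${sniMap[className] ?: 0}, ENI=$eni)" }
[/code]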

Monitoring the JVM Object Map

jmap is a great tool (part of the JDK) that we can use to inspect the memory map. Using jmap, one can at any point in time (with some overhead, of course, but not that much) retrieve a list of all objects residing in the heap. The results look something like this:

Java Object Map – Objects and number of instances in the heap

1:       1803751      166592520  [C
2:        347130      103348904  [B
3:        565934       74272832  <constMethodKlass>
4:        565934       72452448  <methodKlass>
5:        242821       62836312  [I
6:         55222       60738280  <constantPoolKlass>
7:         55221       40555296  <instanceKlassKlass>
8:        484966       38797280  java.lang.reflect.Method
9:        886626       35851896  [Ljava.lang.Object;
10:        391682       32626104  [Ljava.util.HashMap$Entry;
11:         46464       32591136  <constantPoolCacheKlass>
12:       1351668       32440032  java.lang.String
13:        748338       23946816  java.util.HashMap$Entry
14:        426004       20448192  java.util.HashMap
15:        501680       20067200  java.util.LinkedHashMap$Entry
16:        820685       19696440  java.util.ArrayList
17:        233944       16843968  java.lang.reflect.Field
18:        360930       11549760  java.util.concurrent.ConcurrentHashMap$HashEntry

The first column is the position of the class in the histogram. jmap sorts by occupied bytes, so this rank can change from one snapshot to the next; the class name is what identifies an object type across snapshots.

The second column is the number of instances of that class currently in the heap.

The third column is the total size (in bytes) that those instances occupy in the heap.
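
One possible way to capture such a snapshot from a Groovy script (not necessarily how JAMDL itself does it) is to shell out to jmap and keep the instance count per class name. A rough sketch; the method name, the hard-coded PID and the parsing details are my own assumptions:

[code language="java"]
// Take a class histogram of live objects for the given JVM process id.
// The -histo:live option counts only reachable objects (it typically forces a GC first).
def takeSnapshot(String pid) {
    def snapshot = [:]                                  // class name -> number of instances
    def proc = ["jmap", "-histo:live", pid].execute()
    proc.text.eachLine { line ->
        // Histogram rows look like: "  12:   1351668   32440032  java.lang.String"
        def cols = line.trim().split(/\s+/)
        if (cols.size() >= 4 && cols[0].endsWith(':') && cols[1].isLong()) {
            // The leading rank is ignored; the class name is the stable key across snapshots
            snapshot[cols[3..-1].join(' ')] = cols[1].toLong()
        }
    }
    return snapshot
}

def sniMap = takeSnapshot('12345')                      // e.g. right before starting the test
[/code]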

As I mentioned before, in order to perform memory analysis, we need to gather at least two metrics:

  • object occupancy before starting the test scenario – SNI
  • object occupancy after ending the test scenario – ENI

Those of you reading this post should know by now the two types of garbage collection (young and full) and how garbage collection works, so I will jump straight into the details of the problem.

One may now ask: what if some objects are dead and just waiting to be collected by the next garbage collection? Would the results still be reliable?

We need to make sure that both SNI and ENI are measured after a FULL GARBAGE COLLECTION. That way we make sure there is no dead object still waiting to be collected.
So our scenario up to now runs like this:

  1. Full Garbage Collection -> Retrieve SNI for all live objects
  2. Run the test
  3. Wait for the session timeout
  4. Full Garbage Collection -> Retrieve ENI for all live objects

On the other hand, we would still want to see what happens to the objects WHILE the test is running: are they collected by the young garbage collector at all? Do we see a continuously increasing line, or does it decrease as well? As I said, we are talking about suspects, so we need more proof to decide whether we are dealing with a leak or not.

So let’s add a loop and retrieve the TNI (temporary number of instances) every couple of seconds.

Presuming our performance test triggers at least a couple of young garbage collections, we can retrieve the heap occupancy map regularly. Adding timestamps to the results then allows us to see the lifecycle of each object during the performance test. This would look something like:

 

ID                Instances   Bytes   Name      TimeStamp
262:          1035         281520  MyObject1,15-00-57
457:          1035          91080  MyObject2,15-00-57
613:           475          45600  MyObject3,15-00-57
642:           414          39744  MyObject4,15-00-57
689:           267          32040  MyObject5,15-00-57
862:           177          18408 MyObject6,15-00-57
1434:           118           4720  MyObject7,15-00-57
283:           788         214336  MyObject1,15-01-30
493:           788          69344  MyObject2,15-01-30
662:           369          35424  MyObject3,15-01-30
699:           308          29568  MyObject4,15-01-30
733:           214          25680  MyObject5,15-01-30
955:           135          14040 MyObject6,15-01-30
1405:           118           4720  MyObject7,15-01-30
285:           726         197472  MyObject1,15-02-03
495:           726          63888  MyObject2,15-02-03
657:           345          33120  MyObject3,15-02-03
696:           284          27264  MyObject4,15-02-03
726:           202          24240  MyObject5,15-02-03
973:           118          12272 MyObject6,15-02-03
1365:           118           4720  MyObject7,15-02-03
318:           411         111792  MyObject1,15-02-36
556:           411          36168  MyObject2,15-02-36
716:           217          20832  MyObject3,15-02-36
786:           138          16560  MyObject5,15-02-36
818:           156          14976  MyObject4,15-02-36
1290:           120           4800  MyObject7,15-02-36

AMDL Reports – Object Lifecycle Reports

We can now expect two types of graphs:

Memory Leak – Increasing trend and no garbage collection

Here we see an increasing trend, without any decreases over time, meaning the object is not being collected at all.

Object Lifecycle – No Memory Leak – Stable trend, increasing and decreasing line

Here we see a stable trend, where the objects are being collected

Assuming that at the end of the test we import all monitoring data into the database and then generate reports containing the three items (SNI, TNI, ENI), the full list of steps to perform JAMDL would now be:

  1. Full Garbage Collection -> Retrieve SNI for all live objects
  2. Start and run the test
  3. Perform TNI collection while the test is running and the session timeout has not occurred
  4. Wait for the session timeout and stop TNI collection
  5. Full Garbage Collection -> Retrieve ENI for all live objects
  6. Import the results into the database
  7. Compute the deviation between SNI and ENI
  8. Automatic generation of performance report containing all memory leak suspects that resulted from point 7

This is how an AMDL session would look in VisualVM:

AMDL Visual VM Session

 AMDL Main Report with Memory Leak Suspects

Integrating the results into the main report could look like this (for presentation purposes I have set the deviation threshold very low, to 10):

AMDL Performance Report

We can now drill into the two memory leak suspects and see whether there is indeed a memory leak:

Object Lifecycle – Drill down report – No memory leak

And since the post is about memory leaks, this is what one looks like:

Memory Leak – Increasing trend and no garbage collection

Using a relational database, you can decide on your own implementation of the deviation:

  1. ENI – SNI: You can compute the difference between ENI and SNI and set a threshold. For example, if at the end of the test there are 100 instances more, report that as a suspect
  2. STDEV(ENI,SNI): You can compute the deviation between the two values and set a threshold

It is up to you to decide on the implementation that suits you best.

 

One last word: theoretically you can use this even in a production environment, as long as you do not retrieve the memory map too often and do not force full garbage collections. In that case, of course, the monitoring timespan must be long enough to allow objects to be collected by the old generation garbage collection… nevertheless, it is a point to think about that could save you some time and actively report possible memory leak suspects.

Cheers, have fun and enjoy. I will gladly help with further information regarding any of the 8 points above.

Alex

 

 

 

One Step Monitoring of Key Indicators in Glassfish 3.1 via REST

Ever since upgrading Glassfish from v3.0.1 to v3.1.2.2, I made a note to myself to redesign and simplify the active monitoring I was using in my Load Testing scripts, so that I could easily monitor things like:

  • JDBC Connections used
  • JDBC Connections timed out
  • JDBC Connections free
  • Http-threads busy

and so on.

Since the interface has changed, and my previous monitoring implementation also relied on pretty much hardcoded values (of which I am absolutely no fan), it was time to redesign this in a smarter way: make use of the new interface and try to get all my metrics in one step, or in as few steps as possible.

Glassfish 3.1.2.2 Rest Monitoring

There is no real need to remind you how to enable/disable monitoring (surely you know it by now), but then again, it does not hurt to detail it once more.

First we check on the current status of the monitoring levels

asadmin -p 11048 --passwordfile /opt/glassfish/portal/v3.1.2.2/passwords get server.monitoring-service.module-monitoring-levels.*

*
server.monitoring-service.module-monitoring-levels.connector-connection-pool=OFF
server.monitoring-service.module-monitoring-levels.connector-service=OFF
server.monitoring-service.module-monitoring-levels.deployment=OFF
server.monitoring-service.module-monitoring-levels.ejb-container=OFF
server.monitoring-service.module-monitoring-levels.http-service=OFF
server.monitoring-service.module-monitoring-levels.jdbc-connection-pool=HIGH
server.monitoring-service.module-monitoring-levels.jersey=OFF
server.monitoring-service.module-monitoring-levels.jms-service=OFF
server.monitoring-service.module-monitoring-levels.jpa=OFF
server.monitoring-service.module-monitoring-levels.jvm=OFF
server.monitoring-service.module-monitoring-levels.orb=OFF
server.monitoring-service.module-monitoring-levels.security=OFF
server.monitoring-service.module-monitoring-levels.thread-pool=OFF
server.monitoring-service.module-monitoring-levels.transaction-service=OFF
server.monitoring-service.module-monitoring-levels.web-container=HIGH
server.monitoring-service.module-monitoring-levels.web-services-container=OFF

Now we set the desired monitoring level, let’s take the http-service as example:

asadmin -p 11048 --passwordfile /opt/glassfish/portal/v3.1.2.2/passwords set server.monitoring-service.module-monitoring-levels.http-service=HIGH
server.monitoring-service.module-monitoring-levels.http-service=HIGH
Command set executed successfully.

Let’s check if it is really enabled (we choose XML as the output format; available formats are HTML, XML and JSON):

curl -k -s -u admin:adminadmin -H "Accept: application/xml" https://server:11048/monitoring/domain/server/network/jk-main-listener-1/thread-pool

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<map>
<entry key="extraProperties">
<map>
<entry key="entity">
<map>
<entry key="corethreads">
<map>
<entry key="unit" value="count"/>
<entry key="starttime">
<number>1392653042076</number>
</entry>
<entry key="count">
<number>5</number>
</entry>
<entry key="description" value="Core number of threads in the thread pool"/>
<entry key="name" value="CoreThreads"/>
<entry key="lastsampletime">
<number>1392722067843</number>
</entry>
</map>
</entry>
<entry key="currentthreadsbusy">
<map>
<entry key="unit" value="count"/>
<entry key="starttime">
<number>1392653042077</number>
</entry>
<entry key="count">
<number>0</number>
</entry>
<entry key="description" value="Provides the number of request processing threads currently in use in the listener thread pool serving requests"/>
<entry key="name" value="CurrentThreadsBusy"/>
<entry key="lastsampletime">
<number>1392738373205</number>
</entry>
</map>
</entry>
<entry key="totalexecutedtasks">
<map>
<entry key="unit" value="count"/>
<entry key="starttime">
<number>1392653042077</number>
</entry>
<entry key="count">
<number>123022</number>
</entry>
<entry key="description" value="Provides the total number of tasks, which were executed by the thread pool"/>
<entry key="name" value="TotalExecutedTasksCount"/>
<entry key="lastsampletime">
<number>1392738373205</number>
</entry>
</map>
</entry>
<entry key="maxthreads">
<map>
<entry key="unit" value="count"/>
<entry key="starttime">
<number>1392653042076</number>
</entry>
<entry key="count">
<number>1024</number>
</entry>
<entry key="description" value="Maximum number of threads allowed in the thread pool"/>
<entry key="name" value="MaxThreads"/>
<entry key="lastsampletime">
<number>1392722067843</number>
</entry>
</map>
</entry>
<entry key="currentthreadcount">
<map>
<entry key="unit" value="count"/>
<entry key="starttime">
<number>1392653042077</number>
</entry>
<entry key="count">
<number>150</number>
</entry>
<entry key="description" value="Provides the number of request processing threads currently in the listener thread pool"/>
<entry key="name" value="CurrentThreadCount"/>
<entry key="lastsampletime">
<number>1392737736187</number>
</entry>
</map>
</entry>
</map>
</entry>
<entry key="childResources">
<map/>
</entry>
</map>
</entry>
<entry key="message" value=""/>
<entry key="exit_code" value="SUCCESS"/>
<entry key="command" value="Monitoring Data"/>
</map>

Of course you can open the same URL in a browser and have it nicely displayed, but we need the “curled” version in order to further extract the desired values.

Extracting xml tags with awk and xmllint under bash

Let’s say we now need to extract following metrics:

  • currentthreadsbusy
  • totalexecutedtasks
  • maxthreads
  • currentthreadcount

Since my Linux distribution did not ship an XML extractor, but did have an XML parser, and I wanted this solution to be portable to any Linux machine, I decided to go the hard way and use awk and xmllint.

xmllint : The xmllint program parses one or more XML files, specified on the command line as XML-FILE (or the standard input if the filename provided is – ). It prints various types of output, depending upon the options selected. It is useful for detecting errors both in XML code and in the XML parser itself.

I used a trick here and reformatted the XML response, displaying it as pretty-printed XML with line breaks.

curl -k -s -u admin:adminadmin -H "Accept: application/xml" https://myserver:11048/monitoring/domain/server/network/jk-main-listener-1/thread-pool | xmllint --format -

Now I need to get my monitoring items. The trick I used here is as follows:

  1. Extract everything starting with the item I am looking for, in this case currentthreadcount, searching up to the pattern “/map”. This will return the following fragment:

    <entry key="currentthreadcount">
    <map>
    <entry key="unit" value="count"/>
    <entry key="starttime">
    <number>1392653042077</number>
    </entry>
    <entry key="count">
    <number>150</number>
    </entry>
    <entry key="description" value="Provides the number of request processing threads currently in the listener thread pool"/>
    <entry key="name" value="CurrentThreadCount"/>
    <entry key="lastsampletime">
    <number>1392737736187</number>
    </entry>
    </map>

  2. Further, I need to extract the monitoring value I am interested in. This can be, in my case, either “count” or “current”, so I will use awk once more and look for the following pattern:

    awk '/<entry key="count|current">/,/<\/entry>/'

  3. This will now return the following fragment:

    <entry key="count">
    <number>150</number>
    </entry>

  4. The only thing left here to do is use a regular expression to extract only digits:

    grep -o '[0-9]*'

Let us put it all together now:

  1. Store the response of the curl request into a variable:

    http_mon_response=`curl -k -s -u admin:adminadmin -H "Accept: application/xml" https://myserver:11048/monitoring/domain/server/network/jk-main-listener-1/thread-pool`

  2. Retrieve the desired metric:

    val=`echo $http_mon_response | xmllint --nowarning --format - | awk '/currentthreadcount/,/\/map/' | awk '/<entry key="count|current">/,/<\/entry>/' | grep -o '[0-9]*' `

Since we want to make this dynamic, and use one request, and extract as many metrics as possible, let’s write a small for loop that does that for us.

Suppose we want to retrieve the following monitoring metrics:

JDBC

  • numconnused
  • numconnfree
  • numconntimedout

HTTP

  • currentthreadsbusy

We will create a function called trace_gf_statistics that will post the curl request regularly, and write the outputs into an external file:

function trace_gf_statistics
{
# List of jdbc monitoring items to be retrieved
jdbc_names_short=(numconnused numconnfree numconntimedout)

# List of http thread pool monitoring items to be retrieved
http_names=(currentthreadsbusy)

# Only run the monitoring while this file exists. This file will be removed by the controlling process once the monitoring is stopped
status=`ls /tmp | grep glassfish_stats`
while [ "$status" != "" ];
do
MONITOR_TIMESTAMP=`date +%H-%M-%S`

# Store the JDBC Metrics into a variable
jdbc_mon_response=`curl -k -s -u admin:adminadmin -H "Accept: application/xml" https://myserver:11048/monitoring/domain/server/resources/EocPool`

# Store the HTTP Thread Pool Metrics into a variable
http_mon_response=`curl -k -s -u admin:adminadmin -H "Accept: application/xml" https://myserver:11048/monitoring/domain/server/network/jk-main-listener-1/thread-pool`

# Now iterate through all monitoring items we defined in the beginning and output the results
for jdbc_mon_item in ${jdbc_names_short[@]} ;
do
val=`echo $jdbc_mon_response | xmllint --nowarning --format - | awk '/'${jdbc_mon_item}'/,/\/map/' | awk '/<entry key="count|current">/,/<\/entry>/' | grep -o '[0-9]*' `
echo $MONITOR_TIMESTAMP":JDBC-"$jdbc_mon_item:$val >> ${JMETER_RESULTS}/glassfish_stats.log
done
for http_mon_item in ${http_names[@]} ;
do
val=`echo $http_mon_response | xmllint --nowarning --format - | awk '/'${http_mon_item}'/,/\/map/' | awk '/<entry key="count|current">/,/<\/entry>/' | grep -o '[0-9]*' `
echo $MONITOR_TIMESTAMP":HTTP-"$http_mon_item:$val >> ${JMETER_RESULTS}/glassfish_stats.log
done

# Post the requests every 3 seconds and then check for the existence of the status file
sleep 3
status=`ls /tmp | grep glassfish_stats`
done
}

Results

And this is how a typical monitoring output file looks like, separated by “:” delimiter.

16-32-32:JDBC-numconnused:131
16-32-32:JDBC-numconnfree:31
16-32-32:JDBC-numconntimedout:0
16-32-32:HTTP-currentthreadsbusy:58
16-32-35:JDBC-numconnused:110
16-32-35:JDBC-numconnfree:10
16-32-35:JDBC-numconntimedout:0
16-32-35:HTTP-currentthreadsbusy:40
16-32-38:JDBC-numconnused:110
16-32-38:JDBC-numconnfree:10
16-32-38:JDBC-numconntimedout:0
16-32-38:HTTP-currentthreadsbusy:36
16-32-42:JDBC-numconnused:103
16-32-42:JDBC-numconnfree:3
16-32-42:JDBC-numconntimedout:0
16-32-42:HTTP-currentthreadsbusy:27
16-32-45:JDBC-numconnused:121
16-32-45:JDBC-numconnfree:21
16-32-45:JDBC-numconntimedout:0
16-32-45:HTTP-currentthreadsbusy:43
16-32-48:JDBC-numconnused:83
16-32-48:JDBC-numconnfree:17
16-32-48:JDBC-numconntimedout:0
16-32-48:HTTP-currentthreadsbusy:7
16-32-51:JDBC-numconnused:126
16-32-51:JDBC-numconnfree:37
16-32-51:JDBC-numconntimedout:0
16-32-51:HTTP-currentthreadsbusy:64
16-32-55:JDBC-numconnused:204
16-32-55:JDBC-numconnfree:74
16-32-55:JDBC-numconntimedout:0
16-32-55:HTTP-currentthreadsbusy:127

You can now import the delimited file into whatever reporting tool you like, generating reports like this:

Glassfish Rest JDBC Monitoring Report

JasperReport-RestMonitoring-JDBC-Monitoring-Report

Glassfish Rest HTTP Thread Pool Monitoring Report

JasperReport-RestMonitoring-HTTP-Monitoring-Report

Needless to say, you can extend the monitoring items in the script above with whatever monitors you may need. It suffices to add a corresponding metric array and curl request, and to iterate over the new monitoring items.

This is a sample of my performance report, while using Jasper Server and Jasper Reports:

JasperReport-RestMonitoring

I will probably update this script regularly, so come back soon for a new, improved version of it.

Cheers

Alex

Load and performance testing for J2EE – Slides made public

Hi all,

it has been a while since I posted on my blog. Although I would have liked to post more often, a lot of things changed in my life, the biggest of them being our son Philip, who came into the world last year in December. I therefore decided to take a little time off and focus more on our family, spending some quality time with its newest member 🙂

In light of this, I would now like to return with a post that I have been postponing for a while, and share with you the slides I prepared for a presentation I held in Hamburg last year, organized by the Java User Group Hamburg, focusing on Load and Performance Testing for J2EE.

I can only say it was a very successful presentation, attended by about 50+ members of the group, on a highly interesting but rarely talked-about topic… performance testing in Java.

You will find things like Performance Basics (scope, metrics, factors on performance, generating load, performance reports), Monitoring (Monitoring types, active and reactive monitoring, CPU, Garbage Collection monitoring, Heap and other monitoring) and Tools (open source tools for monitoring, reporting and analysing)

I would be happy to hear your feedback on this one, be it an opinion, a question or even criticism… all are welcome.

Load and Performance Testing for J2EE – An approach using open source tools – By Alexandru Ersenie

Cheers,

Alex

 

P.S. I will start answering to the comments in the days to come. Sorry for the delay

IReport / Jasper Reports – Working with subreports and collections in Jasper Reports

Hi, and sorry for not updating for a while. I am currently under heavy load and can scarcely find time to write, although I have several new topics prepared. I am also working on a presentation on Java Performance Testing and Monitoring, which I will probably hold here in Hamburg on the 18th of July. More on that, for those interested, in a follow-up post.

Now let’s dive into the subject: Working with subreports and collections in Jasper Reports

It seems that several users have been facing this problem, so I thought I would write an explanatory post on the topic.

1. Building the main report

Let’s start with the main report. It looks like this:

Image

The sub-report is the grey box with the yellowish highlighting. The main report passes the following four parameters to the sub-report, of which one is a collection:

  • http_request
  • filterstop
  • ic_testconfig
  • filterstart

Please notice how the name matches the expression EXACTLY (the passed parameter has to be named exactly the same as the local variable used in the subreport).

Also notice the properties of the sub-report, highlighted in the screenshot below:

  • Subreport Expression has to be: “repo:statistics”, where statistics is the name of the IReport file containing the designed subreport
  • Expression Class: java.lang.String
  • Connection type: Use a connection expression

Image

2. Creating the sub-report

Let’s create a sub-report in the repository, with the name we just configured in the main report: statistics

My recommendation is to create a single folder in your repository, called “subreports”, and add all sub-reports there. In my case, I have three sub-reports (we will only focus on the statistics subreport):

  • statistics
  • detailed_statistics
  • hudson_statistics

The structure in the repository looks like this:

Image

Let’s take a look at my sub-report. It receives the collection as an input parameter. The collection is actually a series of test IDs that I use to build a report over several test runs (for example, if I run a test twice, once with ID 123 and once with ID 124, and I want to see a single report of all transactions for both test runs, I pass both IDs as input):

Image

The query is the one that takes the collection input parameter and processes it. Let’s see what that looks like:

select
count(t) as totaltransactions,
avg(t) as responseaverage,
……
from
testresults tr
where $X{IN,tr.testrun_id,ic_testconfig} and DATE_FORMAT(DATE_ADD('1970-01-01 00:00:00', INTERVAL ts*1000 MICROSECOND),'%H:%i:%s') between $P{filterstart} and $P{filterstop}
group by tr.lb

We will now add this sub-report into the Jasper Server Repository, by adding a new resource from the JRXML File we just created for our subreport. We will have to assign two identifiers:

  • Label: statistics
  • Name: rootavg

 

Jasper Server Repository – Adding a JRXML Resource

 

Now we add the two identifiers mentioned above

Jasper Server Repository – Labeling the JRXML Resource

We can now refresh the repository in the local IReport instance and see the sub-report added in the location we chose, with the identifiers we just assigned (label is statistics, name is rootavg):

Subreport in IReport Repository

3. Adding the sub-report as a resource to the main report

We have now added both the main report and the sub-report. Well, it is not enough to define the sub-report in the main report. The main report has to know where the called sub-report resides, therefore we need to add it as a resource of the main report. Remember that these resources have to be defined and available in the JasperServer Repository (Server Side).

We start by editing the main report on the server side, and adding the resources:

Add subreport as resource in Jasper Server repository

We still have to add the parameters on the server side. Remember we want to use a collection. In order to do that, we use a “Multi Select Query Type” Input Control:

 

Jasper Server Add Input Control

We configure it as a “Multi Select Query Type” with the same name we are going to use in our report, that being “ic_testconfig”:

Jasper Server Collection Input Control

After refreshing the Repository in IReport, our Report looks like this. Notice the input controls, and how the collection item is now available

Configured repository with subreports

4. Final

Let’s review the steps once again:

  1. Create the main report, and decide on the parameters you want to pass to the sub-report. Upload the main report to the Jasper Server, and add the parameters on the server side too. The resources always have to be synchronized
  2. Create the sub-report, and the query that will receive the collection. Upload the sub-report to Jasper Server
  3. Add the sub-report as a resource of the main report in Jasper Server
  4. Watch out for the query syntax when using collections:
    1. where $X{IN,tr.testrun_id,ic_testconfig}

I think that’s it. I tried to make it as explanatory as possible in the short amount of time I have at my disposal these days. I am really sorry for the delay in replying to comments and posting new content. I hope to get things off my plate in the near future.

Cheers,

Alex

 

Wine experience – Bellingham Cabernet Sauvignon & Cabernet Franc – 2009

Bellingham Cabernet Sauvignon & Cabernet Franc - 2009
Bellingham Cabernet Cuvee

At the risk of disappointing my technically oriented followers, I needed to share the experience I had upon trying the South African Bellingham 2009.

Having had (really) bad experiences with South African Shiraz – I cannot even remember having had one that I’d say was close to good – I spotted this bottle while actually looking for a bottle of Argentinian Malbec. Once again, I repeated to myself, marketing and bottle presentation play an enormous role. I still believe red, black and white to be winning colors in bottle design. Of course this view is as subjective as it gets, but hey, I am no wine expert… I just love trying, enjoying, and finding out all about it.

I guess I’d become a fan of Cabernet Sauvignon. Having tried a limited vintage of Romania’s Dealu Mare 10202 Cabernet Sauvignon, 2007 edition, I couldn’t take my mind off it anymore… So instead of opening another bottle of the seven left (the Romanian ones, I mean), I decided to try the South African one. Excellent decision.

Bellingham – as I read tonight – derives from “Bellinchamp” (pretty fields). If you search on YouTube, the imagery is fantastic. The estate lies not far from Cape Town and takes advantage of the coolness provided by the Drakenstein mountains… just take a look at this great scene (photo from Wikipedia: http://en.wikipedia.org/wiki/File:Stellenbosch_WC_ZA.jpg)

Image

These guys have a long (though still young compared to Europe) tradition in South African wine, having produced the first rosé in South Africa in 1949 and the first Shiraz marketed in South Africa in 1956. It looks like they specialize in producing some great cuvées, mixing Cabernet Sauvignon with Cabernet Franc and Merlot, or Malbec with Merlot, and so on. One of their whites was selected by Jancis Robinson for her list of 65 great whites in December 2011. That already sends the right message! You can find the full list here:

http://www.jancisrobinson.com/articles/a201112014/layout/print.html

Back to the wine…

The 2009 Bellingham Cabernet Sauvignon & Franc is subjected to slow maturation in French oak barrels for 12 months (40% new oak, 30% second fill and 30% third fill). A mere splash of Cabernet Franc (14%) is added in the final blend.

http://sites.wine.co.za/Directory/Wine.aspx?WINEID=29528

First of all, this is still a young one… a very young one. Once opened, you will find it too acidic, confusing your nose but… winning your eyes. Don’t taste it yet… allow it to breathe for about 20 minutes to half an hour, and then get back to it…

Get a glimpse of its amazing deep violet color before diving into a powerful and intense mixture of currants, wood and nuts… Take a deep breath before tasting it… you will notice how smooth it is now, and you will feel the Cabernet cuvée giving off a distinctly strong, aromatic smell.

It works well with poultry (I tried it with some shiitake risotto and chicken), but I am sure it will be highly complemented by a well-spiced steak, or some strong, raw, spicy cheese like the ones the Swiss make (Le Gruyère, Appenzeller, Santa Klara).

At 8 to 10 € a bottle, I will definitely get another couple of bottles, “revisiting” them a couple of years from now.

Well, time for another glass, and “The King’s Speech”.

Enjoy…

Glassfish – Vertical clustering with multiple domains

Introduction

There are two reasons why I started to try this out:

  1. It allows performing load balancing tests in a clustered system using one single server (therefore the hardware requirement shrinks to one server instead of two)
  2. By using two domains under one Glassfish installation, one can optimize the memory usage on a single server that has lots of RAM at its disposal

The server I was running before doing this had something like 8 GB RAM and 4 CPUs. So I was thinking of using the 8 GB for two JVMs instead of one. There are several reasons for working with a smaller JVM, the first being the efficiency of garbage collection (other reasons include, for example, faster lookups in large hashmaps, keeping redundant objects in memory for a shorter time, etc.).

Glassfish Horizontal Cluster
Glassfish Horizontal Cluster - ModJK balancing

We are currently using JMS for cache synchronization, so I had the option of using Glassfish’s embedded JMS Server (which I did) or a separate, standalone JMS Server (which I also tried, and it works perfectly). This did not work out of the box, since there are some configuration items to take into consideration.

Ok, so here is the concept:

  1. Create, configure and start the first domain with the embedded JMS Server: ports, topics, JMS configuration, JVM configuration, mod_jk, thread pools, etc.
  2. Restart the first domain for configurations to apply
  3. Create the second domain
  4. Configure the second domain: ports, topics, jms configuration, jvm configuration, mod_jk, thread pools etc.
  5. Start the second domain using the first domain as JMS Server

When creating the first domain, there will be some default settings like:

  • http listener port: 8080
  • https listener port: 8181
  • JMS port 7676
  • jvm Xmx 512M
  • etc.

While this is very convenient when using one domain per physical server, arbitrary ports will be assigned when you create the second domain. Therefore you will want both domains to follow a naming convention. In order to avoid any conflicts, the following configuration items have to be kept synchronized (the ports have to be unique). The list contains the default configuration of a freshly set up domain:

Port configuration

<iiop-listener id="orb-listener-1" port="3700" address="0.0.0.0" lazy-init="true" />
<iiop-listener id="SSL" port="3820" address="0.0.0.0" security-enabled="true">
<iiop-listener id="SSL_MUTUALAUTH" port="3920" address="0.0.0.0" security-enabled="true">
<jmx-connector port="8686" address="0.0.0.0" security-enabled="false" name="system" auth-realm-name="admin-realm" />
<jvm-options>-Dosgi.shell.telnet.port=6666</jvm-options>
<network-listener port="8080" protocol="http-listener-1" transport="tcp" name="http-listener-1" thread-pool="http-thread-pool" />
<network-listener port="8181" protocol="http-listener-2" transport="tcp" name="http-listener-2" thread-pool="http-thread-pool" />
<network-listener port="4848" protocol="admin-listener" transport="tcp" name="admin-listener" thread-pool="http-thread-pool" />
<network-listener port="8010" protocol="jk-connector" transport="tcp" name="jk-connector" jk-enabled="true" thread-pool="http-thread-pool" />

I suggest a suffix-based naming convention for the ports: each configured port should end in the number of the domain. For example:

Domain 1's HTTP port would be 8081 instead of 8080; Domain 2's would then be 8082, and so on.

This is how it would look for two domains:

Glassfish Multiple Domain Port configuration
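In case the screenshot is not available, here is roughly how the port layout works out for the two domains, compiled from the asadmin commands used throughout this post:

Port / Listener        Default   Domain 1   Domain 2
admin-listener         4848      4841       4842
http-listener-1        8080      8081       8082
http-listener-2        8181      8181       8182
jk-connector           8010      8011       8012
jmx-connector          8686      8681       8682
orb-listener-1         3700      3701       3702
iiop SSL               3820      3821       3822
iiop SSL_MUTUALAUTH    3920      3921       3922
osgi telnet            6666      6661       6662
JMS                    7676      7676       7676 (remote, pointing at domain 1's broker)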

JMS Configuration

Since the first domain runs the master JMS broker, we need to configure all other domains to use its JMS server. There will therefore be one JMS configuration for the first domain, and a different one for all other domains:

Domain 1 JMS Configuration

<jms-service default-jms-host="default_JMS_host" type="LOCAL">
<jms-host host="0.0.0.0" name="default_JMS_host" />
</jms-service>

Domain 2 JMS Configuration
<jms-service default-jms-host="default_JMS_host" type="REMOTE">
<jms-host host="127.0.0.1" name="default_JMS_host" />
</jms-service>

Creating and configuring Domain 1

There is nothing easier than creating a domain in Glassfish. This will create all the default options for you to start and get it running. We will configure it right away to use 4841 as the administration port, instead of 4848.
asadmin create-domain --adminport 4841 domain1
In order to be able to reconfigure the default settings, we need to start the domain
asadmin start-domain domain1
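Most of the asadmin calls that follow authenticate non-interactively against the admin port using a user name and a password file (the -p and -W arguments). If you do not have such a file yet, a minimal sketch of its content looks like this (the value is of course a placeholder):

# /opt/glassfish/passwords - referenced via asadmin -W
AS_ADMIN_PASSWORD=yourAdminPassword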

We will want to do the following:

  • delete the default memory settings and re-create them with our configured values
  • reset the JMS Service configuration
  • create a JMS topic
  • reset the default ports to the ones following the naming convention

Reconfigure Memory Settings

asadmin -p 4841 -W /opt/glassfish/passwords delete-jvm-options "-client:-Xmx512m:-XX\:NewRatio=2:-XX\:MaxPermSize=192m"

Since we do not know which telnet port has been assigned to the domain (it is chosen arbitrarily), we have to delete this configuration in a dynamic way:

asadmin -p 4841 -W /opt/glassfish/passwords delete-jvm-options `asadmin -p 4841 -W /opt/glassfish/passwords get server.* | grep -o '\-Dosgi.shell.telnet.port=[0-9]*'`
Now we can re-create the memory settings, and configure the garbage collection according to our needs. The configuration below deals with a 3.5GB JVM, and a CMS Garbage Collector. More details are available by searching this blog:
asadmin -p 4841 -W /opt/glassfish/passwords create-jvm-options "-server:-Dosgi\.shell\.telnet\.port=6661:-DjvmRoute=lb1:-XX\:MaxPermSize=512m:-Xmx3550m:-Xms3550m:-XX\:NewSize=1500m:-XX\:MaxNewSize=1500m:-XX\:ParallelGCThreads=2:-XX\:\+UseConcMarkSweepGC:-XX\:\+UseParNewGC:-XX\:SurvivorRatio=3:-XX\:TargetSurvivorRatio=90:-XX\:MaxTenuringThreshold=4:-XX\:\+CMSParallelRemarkEnabled:-XX\:\+CMSPermGenSweepingEnabled:-XX\:\+CMSClassUnloadingEnabled:-XX\:\+PrintGCDetails:-Xloggc\:\${com.sun.aas.instanceRoot}/logs/jgc.log:-Xss128k"
We have now configured the JVM options, and the telnet port. Let's proceed with configuring JMS

Reset the JMS Service Configuration

As mentioned before, this domain's JMS server acts as the master, so we need to set its type to LOCAL.

asadmin -p 4841 -W /opt/glassfish/passwords set server.jms-service.type=LOCAL
asadmin -p 4841 -W /opt/glassfish/passwords set server.jms-service.jms-host.default_JMS_host.lazy-init=false
asadmin -p 4841 -W /opt/glassfish/passwords set server.jms-service.jms-host.default_JMS_host.host=0.0.0.0
asadmin -p 4841 -W /opt/glassfish/passwords set server.jms-service.jms-host.default_JMS_host.port=7676

Create a JMS Topic


asadmin -p 4841 -W /opt/glassfish/passwords create-jmsdest --desttype topic "Sample"
asadmin -p 4841 -W /opt/glassfish/passwords create-jms-resource --restype javax.jms.Topic --property Name=Sample "jms/SampleTopic"
asadmin -p 4841 -W /opt/glassfish/passwords create-jms-resource --restype javax.jms.TopicConnectionFactory --property transaction-support=NoTransaction "jms/SampleTopicFactory"
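If you want to double-check the resources, asadmin can list them (same admin port and password file as above):

asadmin -p 4841 -W /opt/glassfish/passwords list-jms-resources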

We are now ready for the last step, configuring the remaining ports

Reset the default ports to the ones following the naming convention

Configure the http listeners

asadmin -p 4841 -W /opt/glassfish/passwords set server.network-config.network-listeners.network-listener.http-listener-1.port=8081
asadmin -p 4841 -W /opt/glassfish/passwords set configs.config.server-config.network-config.protocols.protocol.http-listener-1.http.compression=off
asadmin -p 4841 -W /opt/glassfish/passwords set configs.config.server-config.network-config.protocols.protocol.http-listener-1.http.compressable-mime-type=text/html,text/xml,text/plain,text/javascript,text/css
asadmin -p 4841 -W /opt/glassfish/passwords set server.network-config.network-listeners.network-listener.http-listener-2.port=8181
asadmin -p 4841 -W /opt/glassfish/passwords set configs.config.server-config.network-config.protocols.protocol.http-listener-2.http.compression=off
asadmin -p 4841 -W /opt/glassfish/passwords set configs.config.server-config.network-config.protocols.protocol.http-listener-2.http.compressable-mime-type=text/html,text/xml,text/plain,text/javascript,text/css

Configure the thread pool (default is minimum of 2 and max of 5)

asadmin -p 4841 -W /opt/glassfish/passwords set configs.config.server-config.thread-pools.thread-pool.http-thread-pool.max-thread-pool-size=200
asadmin -p 4841 -W /opt/glassfish/passwords set configs.config.server-config.thread-pools.thread-pool.http-thread-pool.min-thread-pool-size=100

Configure ModJK Connector (this needs its own listener)

asadmin -p 4841 -W /opt/glassfish/passwords create-http-listener --listenerport 8011 --listeneraddress 0.0.0.0 --defaultvs server jk-connector
asadmin -p 4841 -W /opt/glassfish/passwords set configs.config.server-config.network-config.network-listeners.network-listener.jk-connector.jk-enabled=true

Configure JMX Connector

asadmin -p 4841 -W /opt/glassfish/passwords set server.admin-service.jmx-connector.system.port=8681

Configure other Domain Ports

asadmin -p 4841 -W /opt/glassfish/passwords set server.iiop-service.iiop-listener.SSL.port=3821
asadmin -p 4841 -W /opt/glassfish/passwords set server.iiop-service.iiop-listener.SSL_MUTUALAUTH.port=3921
asadmin -p 4841 -W /opt/glassfish/passwords set server.iiop-service.iiop-listener.orb-listener-1.port=3701

We are now ready to restart the domain
asadmin -p 4841 -W /opt/glassfish/passwords restart-domain domain1

Creating and configuring Domain 2

asadmin create-domain --adminport 4842 domain2
In order to be able to reconfigure the default settings, we need to start the domain
asadmin start-domain domain2

We will want to do the following:

  • delete the default memory settings and re-create them with our configured values
  • reset the JMS Service configuration, and configure it to use the Domain 1 JMS Server
  • create a JMS topic
  • reset the default ports to the ones following the naming convention

Reconfigure Memory Settings

asadmin -p 4842 -W /opt/glassfish/passwords delete-jvm-options "-client:-Xmx512m:-XX\:NewRatio=2:-XX\:MaxPermSize=192m"

Since we do not know which telnet port has been assigned to the second domain (it is chosen arbitrarily), we have to delete this configuration in a dynamic way:

asadmin -p 4842 -W /opt/glassfish/passwords delete-jvm-options `asadmin -p 4842 -W /opt/glassfish/passwords get server.* | grep -o '\-Dosgi.shell.telnet.port=[0-9]*'`
Now we can re-create the memory settings and configure the garbage collection according to our needs. The configuration for the second domain has been adapted to a smaller JVM. Pay attention to how you name your jvmRoute parameter, since this will be used when configuring the Apache ModJK workers (in this case lb2, for load balancer 2):
asadmin -p 4842 -W /opt/glassfish/passwords create-jvm-options "-server:-Dosgi\.shell\.telnet\.port=6662:-DjvmRoute=lb2:-XX\:MaxPermSize=512m:-Xmx2500m:-Xms2500m:-XX\:NewSize=1500m:-XX\:MaxNewSize=1500m:-XX\:ParallelGCThreads=2:-XX\:\+UseConcMarkSweepGC:-XX\:\+UseParNewGC:-XX\:SurvivorRatio=3:-XX\:TargetSurvivorRatio=90:-XX\:MaxTenuringThreshold=4:-XX\:\+CMSParallelRemarkEnabled:-XX\:\+CMSPermGenSweepingEnabled:-XX\:\+CMSClassUnloadingEnabled:-XX\:\+PrintGCDetails:-Xloggc\:\${com.sun.aas.instanceRoot}/logs/jgc.log:-Xss128k"
We have now configured the JVM options, and the telnet port. Let's proceed with configuring JMS

Reset the JMS Service Configuration

As mentioned before, this domain's JMS service acts as a slave, so we need to set its type to REMOTE.

asadmin -p 4842 -W /opt/glassfish/passwords set server.jms-service.type=REMOTE
asadmin -p 4842 -W /opt/glassfish/passwords set server.jms-service.jms-host.default_JMS_host.lazy-init=false
asadmin -p 4842 -W /opt/glassfish/passwords set server.jms-service.jms-host.default_JMS_host.host=127.0.0.1
asadmin -p 4842 -W /opt/glassfish/passwords set server.jms-service.jms-host.default_JMS_host.port=7676
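To verify that the second domain now points at the first domain's broker, you can read the values back with asadmin get:

asadmin -p 4842 -W /opt/glassfish/passwords get server.jms-service.type
asadmin -p 4842 -W /opt/glassfish/passwords get server.jms-service.jms-host.default_JMS_host.host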

Create a JMS Topic

The destination has previously been created on domain 1, so we do not need to create it again (if you try, you will get an error). We will therefore skip step 1 and proceed with creating only the JMS resources.
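If you want to confirm that the physical destination really exists on domain 1 before skipping the step, listing the destinations on the first domain should show it:

asadmin -p 4841 -W /opt/glassfish/passwords list-jmsdest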

asadmin -p 4842 -W /opt/glassfish/passwords create-jms-resource --restype javax.jms.Topic --property Name=Sample "jms/SampleTopic"
asadmin -p 4842 -W /opt/glassfish/passwords create-jms-resource --restype javax.jms.TopicConnectionFactory --property transaction-support=NoTransaction "jms/SampleTopicFactory"

We are now ready for the last step, configuring the remaining ports

Reset the default ports to the ones following the naming convention

Configure the http listeners

asadmin -p 4842 -W /opt/glassfish/passwords set server.network-config.network-listeners.network-listener.http-listener-1.port=8082
asadmin -p 4842 -W /opt/glassfish/passwords set configs.config.server-config.network-config.protocols.protocol.http-listener-1.http.compression=off
asadmin -p 4842 -W /opt/glassfish/passwords set configs.config.server-config.network-config.protocols.protocol.http-listener-1.http.compressable-mime-type=text/html,text/xml,text/plain,text/javascript,text/css
asadmin -p 4842 -W /opt/glassfish/passwords set server.network-config.network-listeners.network-listener.http-listener-2.port=8182
asadmin -p 4842 -W /opt/glassfish/passwords set configs.config.server-config.network-config.protocols.protocol.http-listener-2.http.compression=off
asadmin -p 4842 -W /opt/glassfish/passwords set configs.config.server-config.network-config.protocols.protocol.http-listener-2.http.compressable-mime-type=text/html,text/xml,text/plain,text/javascript,text/css

Configure the thread pool (default is minimum of 2 and max of 5)

asadmin -p 4842 -W /opt/glassfish/passwords set configs.config.server-config.thread-pools.thread-pool.http-thread-pool.max-thread-pool-size=200
asadmin -p 4842 -W /opt/glassfish/passwords set configs.config.server-config.thread-pools.thread-pool.http-thread-pool.min-thread-pool-size=100

Configure ModJK Connector (this needs its own listener)

asadmin -p 4842 -W /opt/glassfish/passwords create-http-listener --listenerport 8012 --listeneraddress 0.0.0.0 --defaultvs server jk-connector
asadmin -p 4842 -W /opt/glassfish/passwords set configs.config.server-config.network-config.network-listeners.network-listener.jk-connector.jk-enabled=true

Configure JMX Connector

asadmin -p 4842 -W /opt/glassfish/passwords set server.admin-service.jmx-connector.system.port=8682

Configure other Domain Ports

asadmin -p 4842 -W /opt/glassfish/passwords set server.iiop-service.iiop-listener.SSL.port=3822
asadmin -p 4842 -W /opt/glassfish/passwords set server.iiop-service.iiop-listener.SSL_MUTUALAUTH.port=3922
asadmin -p 4842 -W /opt/glassfish/passwords set server.iiop-service.iiop-listener.orb-listener-1.port=3702

We are now ready to restart the domain
asadmin -p 4842 -W /opt/glassfish/passwords restart-domain domain2
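At this point both domains should be up, each on its own admin port. A quick sanity check (the exact output format may differ between Glassfish versions):

asadmin list-domains
# expected, roughly:
# domain1 running
# domain2 running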

MOD JK Workers Configuration

I suggest configuring your workers using a template. It is easier to maintain, since you update the common settings in one single place.

Let’s define a worker template

worker.template.type=ajp13
worker.template.lbfactor=50
worker.template.socket_keepalive=false
worker.template.socket_connect_timeout=300
worker.template.ping_mode=A
worker.template.ping_timeout=1000
worker.template.connection_pool_size=32
worker.template.connection_pool_timeout=600

We can now define our two workers, each linked to one Glassfish domain. Pay attention to the worker names and ports: the names have to match the jvmRoute values defined in the domains' JVM options, and the ports have to match the jk-connector listeners created earlier (8011 and 8012).

Configure worker for Domain 1

worker.lb1.reference=worker.template
worker.lb1.host=glassfish_server
worker.lb1.port=8011
worker.lb1.lbfactor=50
worker.lb1.socket_keepalive=1

Configure worker for Domain 2
worker.lb2.reference=worker.template
worker.lb2.host=glassfish_server
worker.lb2.port=8012
worker.lb2.lbfactor=50
worker.lb2.socket_keepalive=1

Configure the load balancer

worker.lb.type=lb
worker.lb.balance_workers=lb1,lb2
worker.lb.sticky_session=True
worker.lb.sticky_session_force=false
worker.lb.method=B


# list workers visible to apache
worker.list=lb
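On the Apache side, the workers.properties file above still has to be wired into the httpd configuration. A minimal sketch, with paths that are assumptions you should adapt to your setup, could look like this:

# load mod_jk and point it at the workers file
LoadModule jk_module modules/mod_jk.so
JkWorkersFile /etc/httpd/conf/workers.properties
JkLogFile     /var/log/httpd/mod_jk.log
JkLogLevel    info
# forward everything to the "lb" load balancer worker defined above
JkMount /* lb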

Domains running – Check processes

You should now have two root processes:

  • domain 1 glassfish, with embedded JMS
  • domain 2 glassfish

This should look like:

14031 ?        Sl     2:07 /usr/java/jdk1.6.0_23/bin/java -cp /opt/glassfishv3/glassfish/modules/glassfish.jar -XX:+UnlockDiagnosticVMOptions -XX:ParallelGCThreads=2 -XX:+UseConcMarkSweepGC -XX:MaxPermSize=51
12394 ?        Sl     3:16 /usr/java/jdk1.6.0_23/bin/java -cp /opt/glassfishv3/glassfish/modules/glassfish.jar -XX:+UnlockDiagnosticVMOptions -XX:ParallelGCThreads=2 -XX:+UseConcMarkSweepGC -XX:MaxPermSize=51
12446 ?        S      0:00  \_ /bin/sh /opt/glassfishv3/mq/bin/imqbrokerd -javahome /usr/java/jdk1.6.0_23/jre -Dimq.cluster.nowaitForMasterBroker=true -varhome /opt/glassfishv3/glassfish/domains/domain1/imq -
12480 ?        Sl     0:04      \_ /usr/java/jdk1.6.0_23/jre/bin/java -cp /opt/glassfishv3/mq/bin/../lib/imqbroker.jar:/opt/glassfishv3/mq/bin/../lib/imqutil.jar:/opt/glassfishv3/mq/bin/../lib/jsse.jar:/opt/g
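The listing above is just a process tree filtered for Glassfish and the embedded message broker; something along these lines should reproduce it (exact ps flags vary by distribution):

ps axf | grep -Ei 'glassfish|imqbroker'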

You can now check whether the workers are configured properly by running some HTTP requests against your server. In the log output you should see which worker each request has been directed to:

"GET /home" 200 2601 14984 "https://myserver/" "Mozilla/5.0 (Windows; U; Windows NT 6.1; de; rv:1.9.2.8)
Gecko/20100722 Firefox/3.6.8" lb 0.014809 F<lb2 0 OK> L<lb2 0 OK> - the request has been directed to the second domain

"GET /home" 200 2601 14984 "https://myserver/" "Mozilla/5.0 (Windows; U; Windows NT 6.1; de; rv:1.9.2.8)
Gecko/20100722 Firefox/3.6.8" lb 0.014809 F<lb1 0 OK> L<lb1 0 OK> - the request has been directed to the first domain
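To quickly generate a handful of requests and watch how they are distributed, a small loop is enough (the URL is a placeholder; add -k only if you are using a self-signed certificate):

for i in $(seq 1 10); do
  curl -s -k -o /dev/null https://myserver/home
done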

Performance Impact

I ran the same test twice, directing all requests first to domain 1, and then to domain 2.

Let's review the JVM and garbage collection strategy for the two domains, and then compare the results in terms of response times and garbage collection statistics.

Domain 1 JVM Configuration – Bigger JVM


<jvm-options>-DjvmRoute=lb1</jvm-options>
<jvm-options>-XX:MaxPermSize=512m</jvm-options>
<jvm-options>-Xmx3550m</jvm-options>
<jvm-options>-Xms3550m</jvm-options>
<jvm-options>-XX:NewSize=1500m</jvm-options>
<jvm-options>-XX:MaxNewSize=1500m</jvm-options>
<jvm-options>-XX:ParallelGCThreads=2</jvm-options>
<jvm-options>-XX:+UseConcMarkSweepGC</jvm-options>
<jvm-options>-XX:+UseParNewGC</jvm-options>
<jvm-options>-XX:SurvivorRatio=3</jvm-options>
<jvm-options>-XX:TargetSurvivorRatio=90</jvm-options>
<jvm-options>-XX:MaxTenuringThreshold=4</jvm-options>
<jvm-options>-XX:+CMSParallelRemarkEnabled</jvm-options>
<jvm-options>-XX:+CMSPermGenSweepingEnabled</jvm-options>
<jvm-options>-XX:+CMSClassUnloadingEnabled</jvm-options>
<jvm-options>-XX:+PrintGCDetails</jvm-options>

Response times for Domain 1

Request            90% response time (ms)
Homepage           688
Login              1375
Add transaction    243
Logout             555

Domain 2 JVM Configuration – Smaller JVM


<jvm-options>-DjvmRoute=lb2</jvm-options>
<jvm-options>-XX:MaxPermSize=512m</jvm-options>
<jvm-options>-Xmx2500m</jvm-options>
<jvm-options>-Xms2500m</jvm-options>
<jvm-options>-XX:NewSize=1500m</jvm-options>
<jvm-options>-XX:MaxNewSize=1500m</jvm-options>
<jvm-options>-XX:ParallelGCThreads=2</jvm-options>
<jvm-options>-XX:+UseConcMarkSweepGC</jvm-options>
<jvm-options>-XX:+UseParNewGC</jvm-options>
<jvm-options>-XX:SurvivorRatio=3</jvm-options>
<jvm-options>-XX:TargetSurvivorRatio=90</jvm-options>
<jvm-options>-XX:MaxTenuringThreshold=4</jvm-options>
<jvm-options>-XX:+CMSParallelRemarkEnabled</jvm-options>
<jvm-options>-XX:+CMSPermGenSweepingEnabled</jvm-options>
<jvm-options>-XX:+CMSClassUnloadingEnabled</jvm-options>
<jvm-options>-XX:+PrintGCDetails</jvm-options>

Response Times for Domain 2

Request            90% response time (ms)
Homepage           338
Login              491
Add transaction    156
Logout             219

Now let's do a head-to-head comparison, also adding the number of performed requests and the performance improvement from the bigger domain 1 configuration to the smaller domain 2 configuration:

# of Requests   Request           90% Domain 1 (ms)   90% Domain 2 (ms)   Performance Improvement
150             Homepage          688                 338                 103 %
150             Login             1375                491                 180 %
15 000          Add transaction   243                 156                  55 %
150             Logout            555                 219                 153 %

As you can see, there is a major improvement, from 55 % to 180 %, depending on the business transaction. Having taken all other outside variables into consideration, I can conclude that the smaller JVM definitely brings an improvement.

Let’s take a look at some garbage collection statistics:

Garbage Collection duration:

Garbage Collection Duration statistics

Garbage Collection Time Statistics – Head-to-Head Comparison

Statistic                              Domain 1    Domain 2
Percentage of time in GC               0.217 %     0.098 %
Time spent in Full GC                  15.877 s    4.758 s
Percentage of time in Full GC          0.118 %     0.035 %
Average duration, Parallel Scavenge    0.16 s      0.113 s
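If you want to pull similar numbers out of your own runs, the raw data is in the jgc.log files configured earlier via -Xloggc. A quick way to eyeball the full collections, assuming the default domain directory layout used above:

grep "Full GC" /opt/glassfishv3/glassfish/domains/domain1/logs/jgc.log | tail -n 5
grep "Full GC" /opt/glassfishv3/glassfish/domains/domain2/logs/jgc.log | tail -n 5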

Final

This post was about vertical clustering in Glassfish. It can help anyone who, for lack of hardware, wants to develop and test in a clustered system using one single Glassfish installation and multiple domains. I have reviewed the pros and cons, the things to take care of when configuring multiple domains, and the performance difference observed by running load tests against the two configurations.

I hope you can use this in your deployment scenarios, and that this post clears at least some of the unanswered questions relating to this topic. Feel free to ask if you still have questions; I will try to answer them as soon as possible.

Cheers,

Alex