Configure Apache and Tomcat servers together
The most common way to deploy your application in a production environment is to hide Tomcat behind Apache. This approach has good and bad parts, but it gives you a lot of flexibility and everything Apache has to offer. There are a couple of alternatives for putting these two servers together:
- mod_jk, the older connector developed under the Tomcat project, uses Tomcat's AJP protocol. Because AJP is a binary protocol, it is expected to be faster than HTTP, which is text based.
- mod_proxy is the support module for the HTTP protocol. It is TCP based and uses plain-text HTTP. When a web client makes a request to Apache, Apache makes the same call to Tomcat and passes Tomcat's response back to the web client. This connector has been part of Apache for a very long time, so it is also available for older Apache versions. It is the simplest way to put Apache in front of Tomcat, but also the slowest.
- mod_proxy_ajp is newer and is part of Apache 2.2. It works like mod_proxy, but as the name says it uses the AJP protocol for sending data to and receiving data from Tomcat. It also uses TCP and is expected to be faster than plain mod_proxy.
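As a rough sketch, the Apache-side configuration for each of the three connectors might look like this (hostnames, ports and the /myapp context path are placeholders; the exact module paths depend on your Apache installation):

```apache
# mod_jk: needs a separate workers.properties file defining the worker,
# e.g. worker.tomcat1.type=ajp13, worker.tomcat1.host=localhost,
#      worker.tomcat1.port=8009
LoadModule jk_module modules/mod_jk.so
JkWorkersFile conf/workers.properties
JkMount /myapp/* tomcat1

# mod_proxy: plain HTTP proxying to Tomcat's HTTP connector (port 8080)
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
ProxyPass        /myapp http://localhost:8080/myapp
ProxyPassReverse /myapp http://localhost:8080/myapp

# mod_proxy_ajp (Apache 2.2+): same ProxyPass syntax, but AJP protocol
# talking to Tomcat's AJP connector (port 8009)
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
ProxyPass /myapp ajp://localhost:8009/myapp
```

Note how mod_proxy_ajp reuses the familiar ProxyPass directive and only changes the URL scheme from http:// to ajp://, which is part of its appeal.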
Tomcat Clustering & Java Servlet Specification
After reading more about Tomcat Clustering, I realized that its main purpose is to offer fault tolerance, failover and high availability. I read a lot about load balancing, but when it comes to Java Servlets I found out that the only balancing choice you have is sticky sessions. This limitation comes from the Java Servlet Specification rather than from Tomcat, but it makes sense.
For an application to be "distributed" you have to mark it as "distributable" by adding the <distributable/> tag in web.xml:
<web-app>
<distributable />
</web-app>
There are multiple ways to balance client requests across your server pool, but when it comes to the Java Servlet Specification you have only one choice, as the spec says:
“Within an application that is marked as distributable, all requests that are part of a session can only be handled on a single JVM at any one time.”
“You may have multiple JVMs, each handling requests from different clients concurrently for any given distributable web application”
So, I guess you can kiss round robin and all the other load balancing options goodbye, but at least Tomcat will provide you with failover, scalability and high availability.
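To make the sticky-session constraint concrete, here is a minimal sketch using Apache's mod_proxy_balancer in front of two Tomcats (the member hostnames and route names tomcat1/tomcat2 are placeholders; each route= value must match the jvmRoute attribute on the corresponding Tomcat's <Engine> element in server.xml):

```apache
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so

<Proxy balancer://mycluster>
    # route= must match each Tomcat's jvmRoute in server.xml
    BalancerMember ajp://tomcat1.example.com:8009 route=tomcat1
    BalancerMember ajp://tomcat2.example.com:8009 route=tomcat2
</Proxy>

# stickysession pins each client to the Tomcat that created its session;
# Tomcat appends ".jvmRoute" to the JSESSIONID cookie, and the balancer
# uses that suffix to route all further requests to the same member
ProxyPass /myapp balancer://mycluster/myapp stickysession=JSESSIONID
```

New sessions are still distributed across members, so you keep scalability; the restriction is only that all requests belonging to one session go to the same JVM, exactly as the specification requires.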