
The clustering guide describes concerns and configuration guidelines for deploying CAS in a high availability (HA) environment.

Implementing clustering introduces CAS server security concerns


It's easy to visualize the requirements to secure the path of sensitive information when working with a single-server installation of CAS:

  1. Protect user passwords with SSL encryption
  2. Secure the communication between the CAS server and the credential store
  3. Assure that the Ticket Granting Cookie is only sent from the browser to the CAS server
  4. Assure that Proxy Tickets are only issued to an SSL-protected endpoint
  5. Secure the validation of Service Tickets and Proxy Tickets with SSL encryption

It is also easy to visualize how clustering CAS servers may create additional security concerns. This article, while thorough in explaining the need for CAS servers to share their data with one another, does not aim to explain how to secure these additional network communication channels. It is imperative that implementers analyze each of the steps described below for potential security weaknesses in their network environments.

Relevant webinar


See also the relevant September 2010 Jasig CAS Community Call, with both slides and audio available, which featured a presentation with a perspective on clustering CAS from Howard Gilbert at Yale University.


Clustering is essential if your CAS instance is to be "highly available," or HA in manager-speak. Since CAS is a stateful application, there must be a way for each CAS instance to know about what the other CAS instance has done. It would be nice to just use one CAS instance (and one instance on the appropriate hardware can probably easily handle your login needs), but if that instance fails, you do not want all of your users to have to log in again.

As mentioned above, CAS is a stateful application, and stateful in more than one way. CAS keeps track of users in the application's session, and it keeps track of the services the user visits and the tickets used to visit those services. Although service and proxy tickets are only stored in memory for a brief amount of time, if you are load balancing and clustering CAS, each instance of CAS must immediately know about those tickets; if they do not, CAS simply will not work most of the time. You may think that LB sticky sessions will save you, but they won't! Sticky sessions are good for sending the user (via a web browser) back to the same CAS instance, but they do not solve the problem that applications also use CAS, and the LB may have already determined (via sticky sessions) that a particular application should be talking to a different CAS instance!

So, there are several things that need to be done for clustering to work:

  • Replicate user login information
  • Replicate tickets
  • Ensure all tickets (TGTs, service, and proxy tickets) are unique across JVMs

Since CAS is a Java application (and based on Spring at that), there are many ways to do clustering. Furthermore, there is no easy "on/off" switch for clustering, hence this document. The CAS clustering described here takes advantage of the Spring aspects of CAS, and implements the clustering purely via XML configuration! (Of course, we do use Java classes that have already been written by the CAS team.)


This HOW TO makes the following assumptions:

  • CAS 3.0.6 or greater (the instructions below cover the 3.0.x, 3.1.x, and 3.2.x branches)
  • Tomcat 5.5 or 6.0
  • JBOSS 4 running "all"
  • You know how to deploy CAS 3.0.x / CAS 3.1.x in Tomcat
  • You know how to configure Tomcat (or at least poke blindly at the controls until they let you go)
  • That CAS is configured to actually work, i.e., users can actually use your CAS for authN
  • You have some load balancing mechanism for your (soon to be) clustered environment
  • You have checked in with your network administrators about using Multicast on your network
  • One CAS instance per host - if you have more, you will have to make some adjustments, but they should be obvious


Guaranteeing Ticket Uniqueness

If you are using CAS 3.2.x, feel free to skip this step. It is already part of your implementation.

Since all the tickets need to be unique across JVMs, we will configure this part first, and it is the easiest part to do, too.

The first problem you need to solve is what unique identifier to use. I chose the hostname of the server from which CAS is being served. Because this is Java and we do everything via XML configuration rather than Java code, we will solve this problem using the applicationContext.xml file and one other file external to CAS. The benefit of this approach is that a single deployable (WAR file) can be used across all nodes of the cluster, with host-specific properties resolved from the filesystem of each host. We use this strategy at Virginia Tech and it works very well.

By default CAS gets vital host-specific configuration properties from a properties file packed in the WAR file. Place a copy of that file in a convenient filesystem location accessible to the Java process running the servlet container, e.g.,


The contents of the external file should be exactly the same as the copy distributed with CAS:

In order for CAS to load properties from the filesystem instead of the classpath of the unpacked WAR file, you must modify the file /WEB-INF/applicationContext.xml.
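A minimal sketch of the change, using the standard Spring PropertyPlaceholderConfigurer; the filesystem path below is a placeholder, so adjust it to wherever you put the file:

```xml
<!-- Load host-specific properties from the filesystem instead of the classpath.
     The path below is an assumption; point it at your external properties file. -->
<bean id="propertyPlaceholderConfigurer"
      class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
  <property name="location" value="file:/etc/cas/cas.properties"/>
</bean>
```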


The property placeholder is used by ticket generators to tag tickets issued by a particular cluster node:
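For example, the generator beans can reference the placeholder as the ticket suffix. The ${host.name} property name and the 50-character length below are assumptions based on the stock CAS 3.x configuration; verify them against your distribution:

```xml
<bean id="ticketGrantingTicketUniqueIdGenerator"
      class="org.jasig.cas.util.DefaultUniqueTicketIdGenerator">
  <!-- maximum length of the generated ticket identifier -->
  <constructor-arg index="0" type="int" value="50"/>
  <!-- per-node suffix resolved from the external properties file -->
  <constructor-arg index="1" value="${host.name}"/>
</bean>
```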


This creates tickets, for example, like the following:

TGT-2-Lj1aIVkEqGDCSLaXwXVQlIcYQcyyqcI0tuR-<hostname of your server>

Tomcat Session Replication

Since CAS stores the login information in the application session, we need to set up session replication between our Tomcat instances.


Note that there was an approach (sometimes referenced in older resources) for preserving application login state via a Spring Web Flow 1.0 configuration option. Spring Web Flow 2.0+ (used in modern versions of CAS) no longer has this feature, meaning this state must be maintained some other way (such as the Tomcat session replication covered here).

The first thing you need to do is tell CAS (the application) that it is distributable [1]. So, in the CAS web.xml file you need to add the <distributable/> tag. The web.xml file is located here:

CAS 3.0.x

CAS 3.1.x & CAS 3.2.x

In this file, I put the distributable tag right below the context-param section:


Now you need to tell Tomcat to replicate the session information by adding Cluster elements under the Host elements. In the following examples, data is replicated via UDP multicast since it requires the least amount of host-specific configuration. An alternative is to use TCP, where each node must explicitly know about its peers. Regardless of your choice, you should thoroughly test node failure with your replication strategy to determine whether your network supports graceful node loss and recovery.

Tomcat 5.5.x server.xml
Tomcat 6.x server.xml
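For Tomcat 6.x, the minimal multicast setup is just the default Cluster element nested inside the Host element (Tomcat 5.5.x uses the older org.apache.catalina.cluster.tcp.SimpleTcpCluster package and a more verbose configuration). A sketch:

```xml
<Host name="localhost" appBase="webapps">
  <!-- Default all-to-all session replication over UDP multicast.
       Multicast address, port, and TTL can be tuned via a nested
       Membership element if the defaults clash with your network. -->
  <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
</Host>
```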

See the Apache Tomcat clustering documentation for more information on Tomcat 6 clustering.

Note 1: Again, please check with your network administrator before turning this on. I have set mcastTTL to 1 because my network admin told me, "If you want to force it to stay within your subnet, my understanding is that you can do so by using a TTL of 1." If you want to do clustering outside of a single subnet, you will probably have to change this value, or remove the mcastTTL attribute altogether.

Note 2: You will see a lot of references to the jvmRoute attribute of the Engine tag, but you only need to specify that if you are clustering more than one Tomcat on one host. In that case, you will have to specify the jvmRoute that corresponds to the Apache worker you have specified for that Tomcat instance.

Note 3: If your Tomcat cluster doesn't work (a Tomcat instance not seeing the other members), you may need to replace auto in tcpListenAddress="auto" with the server's IP address.

Note 4: If your Tomcat cluster still doesn't work ensure that the TCP and UDP ports on the servers are not being blocked by a host-based firewall, that your network interface has multicast enabled, and that it has the appropriate routes for multicast.

Note 5: If you see a large stacktrace in the cas.log file that ends with a root cause of "Cannot assign requested address", it's likely due to the JVM trying to use IPv6 sockets while your system is using IPv4. Set the JVM to prefer IPv4 by setting the Java system property java.net.preferIPv4Stack=true. You can set the CATALINA_OPTS environment variable so Tomcat will pick it up automatically:
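For example, a sketch of the export (the exact startup script that should contain it, e.g. setenv.sh, varies by installation):

```shell
# Force the JVM to prefer IPv4 sockets so multicast membership
# works on IPv4-only networks
export CATALINA_OPTS="$CATALINA_OPTS -Djava.net.preferIPv4Stack=true"
```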


Now start up your two (or more) Tomcat instances (on separate hosts!) and you should see something like the following in the catalina.out log:

May 22, 2007 4:25:54 PM org.apache.catalina.cluster.tcp.SimpleTcpCluster memberAdded
INFO: Replication member added:org.apache.catalina.cluster.mcast.McastMember
     [tcp://,catalina,,4001, alive=5]

Conversely, in the catalina.out log on my other server, I see:

May 22, 2007 4:27:13 PM org.apache.catalina.cluster.tcp.SimpleTcpCluster memberAdded
INFO: Replication member added:org.apache.catalina.cluster.mcast.McastMember
     [tcp://,catalina,,4001, alive=5]

Excellent, you now have clustering of users' login information for CAS. Test it out by logging into CAS, stopping Tomcat on the server you logged in at, and then hitting the login page again; CAS should show you the "you are already logged in" page.

Ticket Cache Replication

Now we need to set up the ticket cache replication using the org.jasig.cas.ticket.registry.JBossCacheTicketRegistry class. We implement this by editing the applicationContext.xml config file again.
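The registry and cache beans might look like the following (the bean ids and the JBossCacheFactoryBean helper class are taken from the stock CAS 3.x distribution; verify them against your version):

```xml
<!-- Replaces the default in-memory ticket registry with the replicated one -->
<bean id="ticketRegistry" class="org.jasig.cas.ticket.registry.JBossCacheTicketRegistry">
  <property name="cache" ref="cache"/>
</bean>

<!-- Builds the TreeCache from the replication config found on the classpath -->
<bean id="cache" class="org.jasig.cas.util.JBossCacheFactoryBean">
  <property name="configLocation" value="classpath:jbossTicketCacheReplicationConfig.xml"/>
</bean>
```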


Note 1: There must be no space between classpath: and jbossTicketCacheReplicationConfig.xml; otherwise you will get a file-not-found exception.

In the cache bean above, there is a property with a value of classpath:jbossTicketCacheReplicationConfig.xml so now we have to find and do something with this file.

This cache configuration file started out life as jbossTestCache.xml. Since I do not like to put things into production with the word "test" in them, I changed the name (and a few things inside the file). This file is located at:

CAS 3.0.x

CAS 3.1.x & CAS 3.2.x

Open this file up and get ready for some editing. I discovered that the default file did not work in my installation, as others on the CAS mailing list also noted. Scott Battaglia sent an edited version to the list. [2]

You have to comment out the following lines:


Next, you have to edit the mcast_addr attribute. In the ClusterConfig section, set the mcast_addr to a value appropriate for your network, and if your hosts are on the same subnet, set ip_ttl to 1. You may also need to set the bind_addr property to the IP address on which you want this host to listen for TreeCache updates. This is especially true if you are using bonding and/or IPv6 on your system:
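A sketch of the relevant UDP attributes inside the ClusterConfig element; the addresses below are placeholders, and mcast_addr must be a multicast address agreed upon with your network administrators:

```xml
<ClusterConfig>
  <!-- bind_addr pins the interface used for TreeCache traffic;
       ip_ttl="1" keeps multicast within the local subnet -->
  <UDP mcast_addr="239.255.0.1" mcast_port="48866"
       bind_addr="192.0.2.10" ip_ttl="1"/>
  <!-- remaining JGroups protocol stack elements unchanged ... -->
</ClusterConfig>
```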

Now that you have edited this file, you have to get it onto your CLASSPATH. I have decided to put it directly into my Tomcat directory:

For JBOSS, this is a good location:

If you know of a better way to get it on your CLASSPATH by putting it somewhere in the localPlugins directory, please let me know.

Now, the hard part: rounding up the ten jars needed to make JBossCache work! JBossCache for CAS requires the following jars (skip this if you are running on JBoss):


CAS 3.0.x

You can get all of these jar files in the JBossCache distribution. [3] Once you have these jars, put them in your localPlugins/lib directory:


CAS 3.1.x

Using Maven 2, this is not as hard as it is with the CAS 3.0.x branch.

Add the following dependency to the pom.xml file located in the cas-server-webapp folder, and it will include the JBoss Cache libraries in cas.war:

Remark: this dependency is needed only if you are NOT using the JBoss Application Server.
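The dependency might look like the following; the artifact coordinates are based on the CAS 3.1-era modules and the version is a placeholder, so verify both against your CAS release:

```xml
<dependency>
  <groupId>org.jasig.cas</groupId>
  <artifactId>cas-server-integration-jboss</artifactId>
  <version>${project.version}</version>
</dependency>
```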


CAS 3.2.x on JBOSS (or probably any CAS implementation on JBoss)

You need to exclude some jars from the deployment otherwise they will conflict with JBOSS.


Ok, now let's test this thing! Build cas.war and redeploy to your two (or more) Tomcat instances and you should see the JBossCache info in the catalina.out log:

2007-05-23 16:59:34,486 INFO [org.jasig.cas.util.JBossCacheFactoryBean] - <Starting TreeCache service.>

GMS: address is

In the catalina.out log on my other server, I see:

2007-05-23 17:01:22,113 INFO [org.jasig.cas.util.JBossCacheFactoryBean] - <Starting TreeCache service.>

GMS: address is

If you see this, and no Java exceptions, you are doing well! If you see Java exceptions, they are probably related to Tomcat not being able to find the jbossTicketCacheReplicationConfig.xml file on its CLASSPATH, or it can't find some class related to JBossCache, i.e., one of the jars is missing.

Ensuring Ticket Granting Ticket Cookie Visibility

The last step before you can test whether CAS is set up to be clustered correctly is to ensure that the ticket granting ticket (TGT) cookie set in users' browsers is visible to all of the nodes in the CAS cluster. Using your favorite text editor (shameless plug for vim), open the cas-servlet.xml file and look for the warnCookieGenerator and ticketGrantingTicketCookieGenerator beans. Both of these beans need to have the cookieDomain property set to the domain in which the TGT cookie should be visible. Edit the bean declarations based on the following example (substitute your domain as necessary):
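A sketch of the edited bean declarations, with example.edu as a placeholder domain; the class and cookie names below follow the stock CAS 3.x cas-servlet.xml, so verify them against your version:

```xml
<bean id="warnCookieGenerator"
      class="org.jasig.cas.web.support.CookieRetrievingCookieGenerator">
  <property name="cookieSecure" value="true"/>
  <property name="cookieName" value="CASPRIVACY"/>
  <property name="cookiePath" value="/cas"/>
  <!-- substitute your own domain -->
  <property name="cookieDomain" value="example.edu"/>
</bean>

<bean id="ticketGrantingTicketCookieGenerator"
      class="org.jasig.cas.web.support.CookieRetrievingCookieGenerator">
  <property name="cookieSecure" value="true"/>
  <property name="cookieName" value="CASTGC"/>
  <property name="cookiePath" value="/cas"/>
  <!-- substitute your own domain -->
  <property name="cookieDomain" value="example.edu"/>
</bean>
```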

Protect your ticket granting cookies!


Warning: do not set the cookieDomain any wider than absolutely necessary. All hosts in the cookieDomain must be absolutely trusted - at a security level of your CAS server itself. Ideally all clustered CAS server instances will appear to the end user's web browser to be answering the very same URLs (e.g., the cluster is fronted by a hardware load balancer) and so the cookieDomain can be maximally restrictive.

Setting the cookie domain such that untrusted servers have access to the Ticket Granting Cookie will allow those servers to hijack the end user's single sign on session and acquire service tickets in his or her name to illicitly authenticate to CASified applications.




You will need to deploy CAS as a .war file into JBoss's farm directory at:

After you have started your cluster servers, ensure you have a cluster by checking the JBoss DefaultPartition. The CurrentView should show all the IPs of your cluster. If not, you will need to research why your cluster is not finding the other nodes.

Service Management

If you use the service management feature to restrict access to the CAS server based on CAS client service URLs/URL patterns, a Quartz job like the following must be added to one of your Spring contexts. The purpose of the job is to notify the other nodes of service changes by reloading the services from the backing store. A service registry implementation that supports clustering, e.g., JpaServiceRegistryDaoImpl or LdapServiceRegistryDao, is required for proper clustering support. Both the Service Manager Reload Job and Trigger should be added to ticketRegistry.xml.

Service Manager Reload Job

In order for the above job to fire, the trigger must be added to the Quartz scheduler bean as follows:
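A sketch of the job, trigger, and scheduler wiring. The bean names here are assumptions; the servicesManager bean and its reload method come from the stock CAS ReloadableServicesManager, and the two-minute intervals are illustrative:

```xml
<!-- Job: periodically re-read services from the shared backing store -->
<bean id="serviceRegistryReloaderJobDetail"
      class="org.springframework.scheduling.quartz.MethodInvokingJobDetailFactoryBean">
  <property name="targetObject" ref="servicesManager"/>
  <property name="targetMethod" value="reload"/>
</bean>

<!-- Trigger: fire every two minutes after a two-minute start delay -->
<bean id="periodicServiceRegistryReloaderTrigger"
      class="org.springframework.scheduling.quartz.SimpleTriggerBean">
  <property name="jobDetail" ref="serviceRegistryReloaderJobDetail"/>
  <property name="startDelay" value="120000"/>
  <property name="repeatInterval" value="120000"/>
</bean>

<!-- Register the trigger with the Quartz scheduler -->
<bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
  <property name="triggers">
    <list>
      <ref bean="periodicServiceRegistryReloaderTrigger"/>
    </list>
  </property>
</bean>
```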



The following references are used in this document:

  1. Tomcat Clustering/Session Replication HOW-TO