Resilient Enterprise Messaging with JBoss A-MQ & Red Hat Enterprise Linux


Aug 8, 2013

Background

At JUDCon 2013 in Boston, Scott Cranton and I presented a talk entitled Resilient Enterprise Messaging with Fuse & Red Hat Enterprise Linux. This technical article is the follow-up work from that presentation.

JBoss A-MQ is built on ActiveMQ, a robust messaging platform that supports STOMP, JMS, AMQP, and modern principles of Message-Oriented Middleware (MOM). It is built from the ground up to be loosely coupled and asynchronous in nature, which gives ActiveMQ native high-availability capabilities. An administrator can easily configure an ActiveMQ master/slave architecture with a shared filesystem. In the future, this will be augmented with Replicated LevelDB.

Red Hat Enterprise Linux is a popular, stable, and scalable Linux distribution with High Availability and Resilient Storage add-on support built on CMAN, RGManager, Corosync, and GFS2. The High Availability and Resilient Storage add-ons expand upon the high-availability capabilities built into ActiveMQ and provide a robust, complete solution for enterprise deployments that require deeper clustering capabilities.

There are two main architectures commonly used to provide fault tolerance to a messaging server. The first is master/slave, which is easy to configure but requires 2N resources as it scales. The second is made up of active nodes and redundant nodes; a redundant node can take over for any one of the active nodes should it fail. The active/redundant architecture requires more software and more initial configuration, but uses only N+1 or N+2 resources as it scales.

This article will explore the technical requirements and best practices for designing and building an N+1 architecture using JBoss A-MQ, Red Hat Enterprise Linux, the High Availability Add-On, and the Resilient Storage Add-On.

Scaling

The native high-availability features of ActiveMQ allow it to scale quite well and will satisfy most messaging requirements. As the messaging platform grows, administrators can add pairs of clustered servers while sharding queues and topics. Even shared storage can be scaled by providing different LUNs or NFSv4 shares to each pair of clustered servers. The native high availability in ActiveMQ provides great scaling and decent fault resolution in smaller environments (2-6 nodes).

As a messaging platform grows larger (6+ nodes), the paired master/slave architecture of the native high availability can start to require a lot of hardware resources. For example, at 10 nodes of messaging, the 2N architecture requires another 10 nodes of equal or greater resources for resiliency, for a total of 20 messaging nodes. At this point it becomes attractive to investigate an N+1 or N+2 architecture. In an N+2 architecture, for example, acceptable fault tolerance may be provided to the same 10-node messaging platform with only two extra nodes of equal or greater resources, requiring a total of 12 nodes instead of 20.

Fault Tolerance

The native master/slave architecture of ActiveMQ provides decent fault detection. Both nodes are configured to provide the exact same messaging service. As each node starts, it attempts to gain a lock on the shared filesystem. Whichever node gets the lock first starts serving traffic. The slave node then periodically attempts to get the lock. If the master fails, the slave will obtain the lock on its next attempt and then provide access to the same message queues the master did. This works great for basic hardware and software failures.
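The lock in question comes from the broker's persistence adapter. A minimal sketch of the relevant activemq.xml fragment, assuming a shared mount at /mnt/amq (path and broker name hypothetical): both brokers point at the same KahaDB directory, and the adapter's file lock arbitrates which one becomes master.

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker1">
    <persistenceAdapter>
        <!-- Master and slave reference the same shared directory; the
             broker holding the lock file serves traffic. -->
        <kahaDB directory="/mnt/amq/kahadb"/>
    </persistenceAdapter>
</broker>
```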

As fault-tolerance requirements expand, failure-scenario checking can be mapped into service clustering software such as the Red Hat Enterprise Linux High Availability Add-On. This allows administrators to write advanced checks and expand upon them as lessons are learned in production. Administrators can ensure that the ActiveMQ processes and shared storage are monitored, as with the native master/slave high availability, but can also utilize JMX, network port checks, and things as advanced as internal looking-glass services to ensure that the messaging platform is available to its clients. Finally, the failover logic can be embedded in the clustering software, which allows administrators to create failover domains and easily add single nodes as capacity is required.

From a fault-tolerance perspective, it is worth stressing that in either architecture, the failover node must have equal or greater resources. For example, if the workload running on a messaging node requires 65% of the memory, CPU, or network bandwidth, the failover node must be able to satisfy those requirements. And while the workload may consume that amount of resources on the day the messaging platform goes into production, the requirements will typically grow over time. If, at the end of two years, the workload has grown to 85%, it may now require more capacity than an undersized failover node can provide, causing an outage. It is an anti-pattern to have failover nodes that are not of equal or greater resource capacity than the production nodes.

Architecture

To create an example architecture that demonstrates client-to-broker and broker-to-broker message flow, we will configure two brokers as separate services in the clustering software. The producer and consumer will be sample Java code run from any computer that has network connectivity to the messaging cluster. Messages will flow as follows:

Resilient Messaging Architecture - Logical Architecture

The two services, AMQ-East and AMQ-West, will be limited to their respective failover domains, East and West. This prevents AMQ-East and AMQ-West from running on the same physical node. Failover domains also allow administrators to designate several failover servers and scale out capacity within the clustering software. For this tutorial, the environment will be composed of the following systems and failover domains.

Resilient Messaging Architecture - Normal State

 

 

ActiveMQ

The following installation should be performed identically on amq02.example.com, amq03.example.com, and amq04.example.com.

Installation

ActiveMQ is supported by Red Hat in two configurations. It can be run inside a Karaf container, which is the preferred method in non-clustered environments or when configuration will be managed by the Fuse Management Console (ZooKeeper). ActiveMQ can also be run directly on the JVM. This is the preferred method for clustered setups and is the method employed in this tutorial.

Red Hat provides a tarball with standalone ActiveMQ in the extras directory.

 

For simplicity, the standalone version is installed and linked in /opt.
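The steps amount to unpacking the archive and creating a stable symlink. The archive name and version below are illustrative; use the tarball from your distribution's extras directory.

```shell
# Unpack the standalone ActiveMQ distribution into /opt
# (archive name/version illustrative).
sudo tar -xzf apache-activemq-5.8.0.redhat-60024-bin.tar.gz -C /opt

# Link a stable path so init scripts survive version upgrades.
sudo ln -s /opt/apache-activemq-5.8.0.redhat-60024 /opt/activemq
```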

 

Configuration

ActiveMQ comes with several example configuration files. Use the static network-of-brokers configuration files as a foundation for the clustered pair. Notice that the ports differ by default; this is so that both brokers can run on the same machine. In a production environment, you may or may not want this to be possible.

 

The original network-of-brokers configuration files are set up so that both brokers run on the same machine. Change the default from localhost to the activemq-east failover IP:
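The edits amount to binding each broker's transport connector to its own service address and pointing the network connector at its peer. A sketch for the east broker follows; the addresses are hypothetical stand-ins for the AMQ-East and AMQ-West floating IPs.

```xml
<!-- activemq-east: bind to the AMQ-East floating IP (addresses hypothetical) -->
<transportConnectors>
    <transportConnector name="openwire" uri="tcp://192.168.100.20:61616"/>
</transportConnectors>

<networkConnectors>
    <!-- Static network of brokers: forward to the AMQ-West floating IP -->
    <networkConnector uri="static:(tcp://192.168.100.21:61616)"/>
</networkConnectors>
```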

 

Storage

The storage requirements in a clustered ActiveMQ messaging platform based on RGManager are different from those of the ActiveMQ master/slave architecture. A standard filesystem such as ext3, ext4, or Btrfs, or a shared filesystem such as NFS or GFS2, can be used. This allows the architect to use the filesystem that maps best to functional and throughput requirements. Each filesystem has advantages and disadvantages.

EXT4

This is an obvious choice for reliability and throughput. Because there is no lock manager, it may provide better performance. An EXT4 filesystem is not clustered, however, and must be mounted and unmounted by the cluster software, which takes extra time during a failover.
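In cluster.conf, "managed by the cluster software" takes the form of an fs resource inside the service, so RGManager mounts the filesystem before starting the broker and unmounts it on relocation. A sketch, with hypothetical device and mountpoint:

```xml
<!-- ext4 data volume managed by RGManager (device/mountpoint hypothetical) -->
<fs name="amq-east-data" device="/dev/mapper/amq-east-data"
    mountpoint="/mnt/amq-east" fstype="ext4" force_unmount="1"/>
```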

GFS2

We have seen good success with GFS2. It allows each node in the cluster to mount the filesystem by default and saves the cluster software from having to handle mounts and unmounts, providing quicker failover during an outage. GFS2 also has the advantage of typically residing on Fibre Channel storage and as such is out of band from the standard corporate network.

NFS

Unlike the master/slave architecture built into ActiveMQ, any version of NFS can be used with the clustered architecture. Like GFS2, NFS has the advantage of being mounted at boot, providing quicker failovers during an outage. NFS traditionally uses the standard enterprise network, however, and in my experience may be susceptible to impact from other users on the network.

Clustering

The following installation should be performed identically on amq02.example.com, amq03.example.com, and amq04.example.com.

Init Scripts

These init scripts were developed to be scalable. Configuration data is embedded in a separate configuration file in /etc/sysconfig. As new brokers are added, the administrator can simply copy the wrapper script. For example, camq-west could be copied to camq-north to start a North broker.

Attentive readers may also notice artifacts in this code implying that these scripts are also used for clustered systems that rely on the Fuse Management Console, built on ZooKeeper. While that will not be used in this tutorial, most of the required infrastructure is included in these scripts.
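A minimal sketch of the wrapper pattern (paths and variable names hypothetical): the init script itself is generic, sourcing everything broker-specific from the matching /etc/sysconfig file, so adding a broker is just copying and renaming the pair.

```shell
#!/bin/sh
# /etc/init.d/camq-west -- thin wrapper (paths/variable names hypothetical).
# To add a broker, copy this file (e.g. to camq-north) and create the
# matching /etc/sysconfig file; nothing else changes.
# chkconfig: - 85 15

# Broker-specific settings, e.g. /etc/sysconfig/camq-west containing:
#   AMQ_HOME=/opt/activemq
#   AMQ_CONFIG=/opt/activemq/conf/activemq-west.xml
#   AMQ_USER=activemq
. "/etc/sysconfig/$(basename "$0")"

case "$1" in
  start)  su -s /bin/sh "$AMQ_USER" -c \
            "'$AMQ_HOME/bin/activemq' start 'xbean:file:$AMQ_CONFIG'" ;;
  stop)   su -s /bin/sh "$AMQ_USER" -c "'$AMQ_HOME/bin/activemq' stop" ;;
  status) "$AMQ_HOME/bin/activemq" status ;;
  *)      echo "Usage: $0 {start|stop|status}" >&2; exit 2 ;;
esac
```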

 

 

 

 

Cluster Configuration File

This configuration file was generated by Luci for a three-node cluster built in the Solutions Architect's Lab at Red Hat (not the Crunchtools Lab). It is provided for guidance and may require additional tuning for your environment.
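For orientation, a skeletal cluster.conf of the same shape is sketched below. Fencing is omitted for brevity, and the addresses and failover-domain membership are illustrative, not a reproduction of the generated file.

```xml
<?xml version="1.0"?>
<cluster config_version="1" name="amqcluster">
  <clusternodes>
    <clusternode name="amq02.example.com" nodeid="1"/>
    <clusternode name="amq03.example.com" nodeid="2"/>
    <clusternode name="amq04.example.com" nodeid="3"/>
  </clusternodes>
  <rm>
    <failoverdomains>
      <!-- Restricted domains keep AMQ-East and AMQ-West off the same node -->
      <failoverdomain name="East" ordered="1" restricted="1">
        <failoverdomainnode name="amq03.example.com" priority="1"/>
        <failoverdomainnode name="amq04.example.com" priority="2"/>
      </failoverdomain>
      <failoverdomain name="West" ordered="1" restricted="1">
        <failoverdomainnode name="amq02.example.com" priority="1"/>
        <failoverdomainnode name="amq04.example.com" priority="2"/>
      </failoverdomain>
    </failoverdomains>
    <service domain="East" name="AMQ-East" recovery="relocate">
      <ip address="192.168.100.20" monitor_link="on"/>
      <script file="/etc/init.d/camq-east" name="camq-east"/>
    </service>
    <service domain="West" name="AMQ-West" recovery="relocate">
      <ip address="192.168.100.21" monitor_link="on"/>
      <script file="/etc/init.d/camq-west" name="camq-west"/>
    </service>
  </rm>
</cluster>
```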

 

Test Code & Cluster Failover

The test code is pulled from the Fuse by Example repo on GitHub.

Producer

Modify the producer code to send messages to the AMQ-East broker. Make the following change.
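In the sample producer this amounts to changing the broker URL. The address below is a hypothetical stand-in for the AMQ-East floating IP, and the exact field name may differ in the repo.

```java
// Point the producer at the AMQ-East floating IP instead of localhost.
// The failover: transport reconnects automatically after a cluster
// relocation, so in-flight tests survive a node failure.
ConnectionFactory factory =
        new ActiveMQConnectionFactory("failover:(tcp://192.168.100.20:61616)");
```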

 

Consumer

Modify the consumer code to receive messages from the AMQ-East broker. Make the following change.

 

Cluster

To set up the experiment, make sure that the cluster is in the following state:
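The state can be checked and adjusted with the standard RGManager tools; node assignments below are illustrative.

```shell
# Show cluster membership and where each service is currently running.
clustat

# If a service is not in its home domain, relocate it manually.
clusvcadm -r AMQ-East -m amq03.example.com
clusvcadm -r AMQ-West -m amq02.example.com
```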

 

Run the Code

Run the producer in one terminal and the consumer in another. You will see the messages flow:

Resilient Messaging Architecture - Logical Architecture
 

Terminal One (amq01.example.com): Build & Run the Producer
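Assuming the repo's standard Maven layout, the producer can be built and run with something like the following; the module path is hypothetical, so check the repo for the actual one.

```shell
cd fuse-by-example/simple-jms-producer   # module path hypothetical
mvn clean compile
# Run the producer; if the pom does not configure a main class,
# pass it explicitly with -Dexec.mainClass=...
mvn exec:java
```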

 

Terminal Two (amq01.example.com): Build & Run the Consumer

 

Testing Failover

Resilient Messaging Architecture - Failover State 1

Terminal Three (amq04.example.com): Failover Tests

Now tell the clustering software to fail the AMQ-East service back and forth between amq03.example.com and amq04.example.com. This example fails back and forth five times, but the count can easily be changed to 25 or 500 for more robust testing.
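A simple loop with clusvcadm does the job; the sleep gives clients time to reconnect between relocations, and the interval is a judgment call for your environment.

```shell
# Relocate AMQ-East between its two failover-domain nodes five times.
for i in $(seq 1 5); do
    clusvcadm -r AMQ-East -m amq04.example.com
    sleep 60
    clusvcadm -r AMQ-East -m amq03.example.com
    sleep 60
done
```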

 
