
Deploying ActiveMQ with Red Hat Fuse in a Kubernetes Cluster

If you are planning to deploy Fuse ESB in a Kubernetes cluster, you may need to integrate it with a message broker. (For more details on deploying Red Hat JBoss Fuse, read our blog, Deploying Red Hat JBoss Fuse using Azure Container Services and Kubernetes.) Your choice of message broker platform and deployment configuration will depend on several factors. Here are some requirements that typically impact your technology stack selection:

  1. Need for persistent messaging and durable delivery
  2. Specific requirements on high availability (HA)/failover
  3. Low-latency delivery
  4. Velocity (message rate, e.g. > 1M messages/sec)
  5. Volume (message size, e.g. > 1 MB)

If requirements 3 through 5 are most critical for you (for example, you need real-time processing or big data handling), an independent network of brokers may be a better option than a cluster.

For this tutorial, let’s assume we simply need durable messaging and a simple infrastructure to process a low volume of messages. In this case, the obvious solution is to enable the embedded ActiveMQ broker in Red Hat Fuse and configure it to use an external database.

 

 

In the event of failure when using this solution, one of the standby nodes is automatically activated. This ensures no messages are lost because all of the messages always persist in the database outside of the cluster.

The pros of this solution:

  • This is a simple deployment into an existing Red Hat Fuse installation.
  • You have simple failover options, such as a self-healing cluster or a redundant message database. Alternatively, you can simply rely on database backups.

The cons of this solution:

  • Using an external database decreases performance.
  • A high number of messages or large message sizes may lead to excessive memory consumption in the message broker, which can kill the Red Hat Fuse node.

For our tutorial, this solution nicely fits our needs. To get started, we’ll introduce changes on three layers:

  1. ActiveMQ configuration
  2. Red Hat Fuse configuration (a Dockerfile to create the Docker image with Fuse installed and configured)
  3. Kubernetes configuration

Configuring ActiveMQ

Change the persistence configuration by defining the JDBC persistence adapter to connect to the MS SQL Server database. After this change, we’ll have a master/slave cluster configuration in which the first node that starts becomes the ActiveMQ master. This node locks the database, while the other nodes remain in standby mode:

 

sed -i "s/<\/beans>/$(sed -e 's/[\&/]/\\&/g' -e 's/$/\\n/' persistence.txt  | tr -d '\n')<\/beans>/" etc/activemq.xml

sed -i "s/<kahaDB.*/<jdbcPersistenceAdapter dataDirectory=\"..\/activemq-data\" dataSource=\"#mssql-ds\" ><adapter><transact-jdbc-adapter\/><\/adapter><\/jdbcPersistenceAdapter>/" etc/activemq.xml

 

Where “persistence.txt” contains:

 

<bean id="mssql-ds" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
   <property name="driverClassName" value="com.microsoft.sqlserver.jdbc.SQLServerDriver"/>
   <property name="url" value="jdbc:sqlserver://HOST:1433;databaseName=DATABASE" />
   <property name="username" value="****"/>
   <property name="password" value="****"/>
</bean>
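
For reference, after both substitutions the persistence section of etc/activemq.xml should look roughly like this (a sketch of the expected result; the mssql-ds bean from persistence.txt is appended just before the closing </beans> tag, and the rest of the file is unchanged):

<jdbcPersistenceAdapter dataDirectory="../activemq-data" dataSource="#mssql-ds">
   <adapter>
      <transact-jdbc-adapter/>
   </adapter>
</jdbcPersistenceAdapter>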

 

Next, reconfigure ActiveMQ to enable message scheduling and bind the broker to all network interfaces:

 

sed -i 's/<broker/<broker schedulerSupport="true" /' etc/activemq.xml
sed -i 's/tcp:\/\/${bindAddress}:${bindPort}/tcp:\/\/0.0.0.0:61616/' etc/activemq.xml
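
After these two substitutions, the broker element and the OpenWire transport connector in etc/activemq.xml should read roughly as follows (a sketch; attribute values other than schedulerSupport and the connector URI depend on your base configuration):

<broker schedulerSupport="true" xmlns="http://activemq.apache.org/schema/core" ...>
    ...
    <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
    ...
</broker>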

 

 

 

Configuring Red Hat Fuse

To configure Red Hat Fuse, install the “activemq-camel” feature and several libraries. To do this, add the following commands to the Red Hat Fuse Dockerfile described in our post on Deploying Red Hat JBoss Fuse using Azure Container Services:

 

bin/client 'osgi:install -s mvn:commons-pool/commons-pool/1.6'
bin/client 'osgi:install -s mvn:commons-dbcp/commons-dbcp/1.4'
bin/client 'osgi:install -s mvn:com.microsoft.sqlserver/mssql-jdbc/6.2.0.jre8'
 
bin/client features:install activemq-camel

 

The complete Dockerfile for Red Hat Fuse with ActiveMQ is:

 

# Use latest jboss/base-jdk:8 image as the base
FROM jboss/base-jdk:8

MAINTAINER Evgeny Pishnyuk <maintainer-email@gmail.com>

ENV DEPLOY_LOCAL_STORAGE=install
ENV DEPLOY_CLOUD_STORAGE=https://your-cloud-storage-with-prepared-artifacts

ENV FUSE_VERSION 6.3.0.redhat-262

RUN curl $DEPLOY_CLOUD_STORAGE/jboss-fuse-karaf-$FUSE_VERSION.zip > /opt/jboss/jboss-fuse-karaf.zip
WORKDIR /opt/jboss
RUN unzip jboss-fuse-karaf.zip -d /opt/jboss && rm *.zip
RUN ln -s "jboss-fuse-$FUSE_VERSION" jboss-fuse

# Switch to the Fuse directory so the relative paths below resolve
WORKDIR /opt/jboss/jboss-fuse

# Enable the default admin user. Consider changing the default password.
RUN sed -i 's/#admin/admin/' etc/users.properties

# Install the components we need
RUN bin/fuse server & \
sleep 30 && \
bin/client log:clear && \
bin/client 'osgi:install -s mvn:xom/xom/1.2.5' && \
bin/client features:install camel-jetty && \
bin/client features:install camel-xmljson && \
bin/client 'osgi:install -s mvn:commons-pool/commons-pool/1.6' && \
bin/client 'osgi:install -s mvn:commons-dbcp/commons-dbcp/1.4' && \
bin/client 'osgi:install -s mvn:com.microsoft.sqlserver/mssql-jdbc/6.2.0.jre8' && \
bin/client features:install activemq-camel && \
sleep 10 && \
bin/client log:display && \
bin/client 'shutdown -f' && \
sleep 5

# In practice it is often better to split the image at this point and build on it via Docker image inheritance

WORKDIR /opt/jboss/jboss-fuse

COPY $DEPLOY_LOCAL_STORAGE/*.jar /opt/deploy/

# Deploy our service in a separate step to speed up Docker image rebuilds

RUN bin/fuse server & \
sleep 30 && \
bin/client log:clear && \
bin/client 'osgi:install -s file:/opt/deploy/some-service.jar' && \
sleep 10 && \
bin/client log:display && \
bin/client 'shutdown -f' && \
sleep 5

# Add ports of your services
EXPOSE 8181 8101 1099 44444 61616 1883 5672 61613 61617 8883 5671 61614

CMD /opt/jboss/jboss-fuse/bin/fuse server
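
With the Dockerfile in place, build the image and push it to your container registry. The registry and tag below match the image referenced later in the Kubernetes StatefulSet; replace them with your own:

docker build -t xxx.azurecr.io/rhesb:latest .
docker push xxx.azurecr.io/rhesb:latest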

 

Configuring Kubernetes

Run Red Hat Fuse in a Kubernetes StatefulSet to get predictable DNS names for the nodes. The names follow this pattern:

 

{StatefulSet.name}-{replica_no}.{Headless.name}

 

In our example, the names are: rhesb-0.activemq and rhesb-1.activemq.

 

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
 name: rhesb
spec:
 serviceName: "activemq"
 replicas: 2
 template:
  metadata:
   labels:
    name: rhesb
    app: rhesb
  spec:
   terminationGracePeriodSeconds: 5
   imagePullSecrets:
    - name: regsecret
   containers:
   - name: rhesb
     imagePullPolicy: Always
     image: xxx.azurecr.io/rhesb:latest
     ports:
     - containerPort: 8051
      # ... other needed ports
     readinessProbe:
      tcpSocket:
        port: 8181
      initialDelaySeconds: 60
      periodSeconds: 20

 

The Kubernetes headless service resolves the node names via DNS. The headless service name must match the serviceName in the StatefulSet.

 

apiVersion: v1
kind: Service
metadata:
  name: activemq
spec:
  ports:
  - port: 61616
    protocol: TCP
    targetPort: 61616
  selector:
    name: rhesb
  clusterIP: None
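
Assuming the two manifests above are saved as rhesb-statefulset.yaml and activemq-service.yaml (the file names are arbitrary), apply them and, optionally, verify that the headless service resolves the peer node names from inside a pod (this check assumes nslookup is available in the container image):

kubectl apply -f activemq-service.yaml
kubectl apply -f rhesb-statefulset.yaml
kubectl exec rhesb-0 -- nslookup rhesb-1.activemq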

 

Usage Example

In the Camel configuration fragment below (Camel is the integration framework bundled with Red Hat Fuse), we define the ActiveMQ connection properties in the bean with the ID “activemq” and implement two Camel routes that move a message back and forth between two queues every minute.

Note:

  • Replace the localhost in failover:(tcp://localhost:61616) with the list of actual node addresses during container startup (the resulting brokerURL is sketched after this note).
  • The Camel <from> URI contains ?acknowledgementModeName=CLIENT_ACKNOWLEDGE, which means a message is acknowledged only after the route completes without exceptions.
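
With two replicas named rhesb-0 and rhesb-1 behind the activemq headless service, the substituted brokerURL should look roughly like this (a sketch; the failover transport accepts a comma-separated list of broker URIs):

<property name="brokerURL" value="failover:(tcp://rhesb-0.activemq:61616,tcp://rhesb-1.activemq:61616)?initialReconnectDelay=100" />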

 

<bean id="activemq" class="org.apache.activemq.camel.component.ActiveMQComponent" >
        <property name="connectionFactory">
          <bean class="org.apache.activemq.ActiveMQConnectionFactory">
            <property name="brokerURL" value="failover:(tcp://localhost:61616)?initialReconnectDelay=100" />
            <property name="userName" value="admin"/>
            <property name="password" value="admin"/>
          </bean>
        </property>
</bean>
<camelContext id="MockPulseAPIContext" xmlns="http://camel.apache.org/schema/blueprint">
        <route id="poc_activemqincoming">
            <from id="qfrom" uri="activemq:queue:incoming?acknowledgementModeName=CLIENT_ACKNOWLEDGE"/>
            <to id="qto" uri="activemq:queue:sleep1min"/>
            <log id="_logq" message="Incoming ${body}"/>
        </route>
 
        <route id="poc_activemqdelayed">
            <from id="qfromdelayed" uri="activemq:queue:sleep1min?acknowledgementModeName=CLIENT_ACKNOWLEDGE"/>
            <delay asyncDelayed="true" id="qdelay">
                <constant>60000</constant>
            </delay>
            <to id="qtodelayed" uri="activemq:queue:incoming"/>
        </route>
</camelContext>

 

Testing

To test your configuration, send a “Something for the Masses” message to the incoming queue and confirm that the message is processed on both nodes.
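
One way to confirm this, assuming kubectl access to the cluster and the admin user enabled in the Dockerfile above: after sending the message to the incoming queue with any JMS client, inspect the Fuse log on both pods for the “Incoming ...” line produced by the poc_activemqincoming route:

kubectl exec rhesb-0 -- /opt/jboss/jboss-fuse/bin/client log:display | grep "Incoming"
kubectl exec rhesb-1 -- /opt/jboss/jboss-fuse/bin/client log:display | grep "Incoming"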

 

 

If the ActiveMQ master node shuts down or fails, one of the standby nodes is promoted to master, so no messages in the ActiveMQ queues are lost.

 
