Software architectures frequently require asynchronous interactions. However, messaging is only one of many middleware components your architecture needs, so you shouldn't spend too much time standing up and configuring it. When planning how to implement your messaging middleware, follow best practices and select a component that is fast and efficient.
Focus on these key messaging requirements to identify a component that is:
1. Open, free, and backed by an active user community
2. Highly available (HA) and fault tolerant
3. Stable: after deployment, you don't want to worry about it
4. Fast: target a processing speed of 5–10K messages per second
5. Able to persist messages in the event of a failure
6. Easy to deploy
7. Free of vendor lock-in, so you can move from one cloud to another or to on-premises infrastructure
8. Complete with all basic, general application-level messaging requirements
9. Based on mainstream technology for easy maintenance and support
With these requirements in mind, let’s choose our technology stack for this tutorial.
We’ll select Amazon Web Services (AWS) as our cloud provider because AWS is a leader in the field and has a fairly easy setup. Remember that we are looking for a messaging component with no vendor lock-in, so we can easily move to another vendor (for example, Azure) in the future.
For the messaging broker, we’ll choose RabbitMQ, the most widely deployed open source message broker. RabbitMQ meets requirements 1, 2, 5, 8, and 9 from our list above.
When selecting a cluster management tool, we must ensure that requirements 2, 3, 6, 7, and 9 from our list are addressed. Kubernetes is becoming increasingly popular and is basically a de facto standard today. However, Kubernetes doesn’t meet requirement 6 on our list. So, you’ll need a solution that insulates you from the complexity of Kubernetes. Enter Kublr — a solution that allows you to quickly set up a production-ready Kubernetes cluster and focus on RabbitMQ.
Now that we have a stack that meets all of our requirements, let’s get started.
Step One: Create a Kubernetes Cluster
To set up your cluster, visit the Kublr website and sign up for the private beta.
For guidance on how to create your Kubernetes cluster, refer to our Kublr Quick Start Guide.
Next, ensure you have installed kubectl and have a Kubernetes config file for access to your cluster.
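With kubectl and your config file in place, a quick sanity check confirms the cluster is reachable. The config path below is only an example; use whatever location you saved the file downloaded from Kublr to:

```shell
# Verify the kubectl client is installed
kubectl version --client

# Point kubectl at the cluster config downloaded from Kublr
# (example path; adjust to where you saved yours)
export KUBECONFIG=$HOME/.kube/kublr-config

# List the cluster nodes to confirm connectivity
kubectl get nodes
```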
Step Two: Deploy RabbitMQ
For detailed instructions on deploying RabbitMQ, read Wes Morgan’s insightful blog post on running a RabbitMQ cluster on Kubernetes with StatefulSets. The steps outlined in that post work perfectly in a dev environment, but not necessarily as well in production. For example, if you need to send one or two million messages to RabbitMQ, it may fail. Why? There are specific, key details about the configuration and deployment of RabbitMQ that you must consider. In the next steps, we’ll do some fine-tuning to make our messaging more stable.
First, let’s consider memory. RabbitMQ under Docker usually sees all of the memory available on the host VM, which can cause many issues. Modify your Kubernetes definition to add memory limits. If, for example, you have a total of 14GB of RAM on your VM, you could limit RabbitMQ’s usage to half of that total.
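As a sketch, the memory limit might look like this in the StatefulSet’s container spec (the container name, image, and exact values below are illustrative, not taken from a specific manifest):

```yaml
# Container spec excerpt: cap RabbitMQ at roughly half of a 14GB node
containers:
  - name: rabbitmq
    image: rabbitmq:3-management
    resources:
      requests:
        memory: "4Gi"
      limits:
        memory: "7Gi"
```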
Note that RabbitMQ itself still sees all 14GB of memory. The fix is to set the `vm_memory_high_watermark` to help prevent memory outages.
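A minimal sketch in the new-style rabbitmq.conf format (the 6GiB value is an example; it should sit below the container’s memory limit):

```ini
# rabbitmq.conf
# Cap RabbitMQ's usable memory explicitly, since inside the
# container it still reports the full 14GB of the host VM
vm_memory_high_watermark.absolute = 6GiB
```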
Another potential cause of failure is a lack of disk space. Consider how many messages a producer can send to the broker within an hour, and what happens if the consuming client can’t process them within that time frame. Certainly something you’d want to avoid. Note that message sizes shown in the RabbitMQ administration console do not correspond to message sizes on disk.
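One way to guard against this is RabbitMQ’s free disk space limit, which blocks publishers when free space drops below a threshold (the 5GB value below is an example; size it for your workload):

```ini
# rabbitmq.conf
# Stop accepting new messages when free disk space falls below 5GB
disk_free_limit.absolute = 5GB
```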
In a production environment, long queues occur quite often for various reasons. Lazy queues, which move messages to disk as early as possible instead of keeping them in memory, are one of the most useful options for making RabbitMQ more stable. Whenever you expect long queues, use lazy queues.
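Lazy queues can be enabled cluster-wide with a policy rather than per queue declaration; a sketch (the policy name is illustrative, and the `"^"` pattern matches all queues):

```shell
# Apply lazy mode to every queue via a policy
rabbitmqctl set_policy lazy-all "^" '{"queue-mode":"lazy"}' --apply-to queues
```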
Finally, configure liveness probes and test your results!
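As a starting point, probes based on RabbitMQ’s own diagnostics commands might look like this (the timing values are illustrative, and `rabbitmq-diagnostics` assumes a reasonably recent RabbitMQ image):

```yaml
# Probe excerpt for the RabbitMQ container spec
livenessProbe:
  exec:
    command: ["rabbitmq-diagnostics", "-q", "ping"]
  initialDelaySeconds: 60
  periodSeconds: 30
  timeoutSeconds: 10
readinessProbe:
  exec:
    command: ["rabbitmq-diagnostics", "-q", "check_running"]
  initialDelaySeconds: 20
  periodSeconds: 30
  timeoutSeconds: 10
```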