Hydra Messaging and Queuing

When it comes to messaging and queuing, it's important to consider the level of underlying delivery assurance your application requires. Hydra offers "basic" messaging and queuing which isn't intended as a replacement for systems such as MQTT, RabbitMQ and Kafka. As such, Hydra doesn't offer many of the features present in those systems.

What follows is thus an explanation of the functionality that Hydra "does" offer.

Hydra queuing, like most of Hydra, relies on functionality built into Redis. Hydra uses a well-documented pattern of atomic message queuing that is popular among Redis users: the Redis rpush, rpoplpush and lrem commands are used to manage the status of messages in a list structure that represents the queue. That's just background, and not necessarily something you need to worry about, since the goal of Hydra is to simplify such concerns.
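
For the curious, here's a minimal sketch of that pattern. It uses the ioredis client (an assumption for illustration only; Hydra manages its own Redis connection) and the queue key names described later in this document:

  const Redis = require('ioredis');
  const redis = new Redis();

  // illustrative key names; Hydra builds these internally
  const received = 'hydra:service:email-service:mqrecieved';
  const inProgress = 'hydra:service:email-service:mqinprogress';

  async function demo() {
    // enqueue: push a serialized message onto the received list
    await redis.rpush(received, JSON.stringify({bdy: {htmlBody: 'some html markup'}}));

    // dequeue: atomically move one message from received to in-progress
    const raw = await redis.rpoplpush(received, inProgress);
    const message = JSON.parse(raw);
    console.log('this message is now exclusively ours to process', message);

    // acknowledge: once processed, remove the message from in-progress
    await redis.lrem(inProgress, -1, raw);
  }

  demo();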

Hydra queuing works by queuing a message to the message queue of an existing service. That means Hydra doesn't have the concept of a shared queue that all microservices can work with. Rather, any queued message is placed in the message queue for a particular service.

To further explore this let's imagine an `email-service` which creates and delivers emails.

Any other microservice which wants to send an email can send a message to the email-service.

Such a message might look like this:

{
  "to": "email-service:/",
  "mid": "2cae7508-c459-4794-86c6-42eb78f32573",
  "ts": "2018-02-16T13:34:51.540Z",
  "ver": "UMF/1.4.6",
  "bdy": {
    "to": "[email protected]",
    "from": "[email protected]",
    "htmlBody": "some html markup"
  }
}

That message might be sent from, say, an accounting-service to the email-service, which in turn would queue the message for eventual delivery.

Let's consider Hydra's message queuing functions in light of our email example.

queueMessage

The accounting-service would use the hydra queueMessage function to place a message in the email-service queue. The actual message would look like the one we saw earlier.

When the queueMessage function receives a UMF message it takes the value of the to field and parses it to extract the service name. In our case that's the email-service. The service name is used internally to determine which queue to place the message in. A look inside the hydra source code reveals that messages are placed in a Redis list called hydra:service:{serviceName}:mqrecieved. The last portion of the key is the mqrecieved queue. More about that later.

  /**
   * @name queueMessage
   * @summary Queue a message
   * @param {object} message - UMF message to queue
   * @return {promise} promise - resolving to the message that was queued or a rejection.
   */
  queueMessage(message)
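
In code, the accounting-service side might look something like the sketch below. The frm field and the createUMFMessage helper (used here to fill in fields such as mid, ts and ver) are assumptions about typical Hydra usage, and the email addresses are placeholders:

  const hydra = require('hydra');
  // assumes hydra.init(...) has already been called elsewhere in the service

  async function sendInvoiceEmail() {
    const message = hydra.createUMFMessage({
      to: 'email-service:/',
      frm: 'accounting-service:/',
      bdy: {
        to: '[email protected]',
        from: '[email protected]',
        htmlBody: 'some html markup'
      }
    });
    // places the message in the email-service's mqrecieved queue
    await hydra.queueMessage(message);
  }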

getQueuedMessage

With an email message placed in the mqrecieved queue of the email-service, that service is now able to pull out a single message and begin processing it.

To do that our email-service simply calls the hydra getQueuedMessage function using its own service name. Now this is an important consideration: any service can call getQueuedMessage and supply the name of another service to help that service process messages! This is not recommended, but it is possible. It's designed that way for developers who "know what they're doing". In our case our email-service would simply use getQueuedMessage('email-service') to retrieve the message the accounting-service queued.

  /**
   * @name getQueuedMessage
   * @summary retrieve a queued message
   * @param {string} serviceName - whose queue might provide a message
   * @return {promise} promise - resolving to the message that was dequeued or a rejection.
   */
  getQueuedMessage(serviceName)
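
Inside the email-service, that might look roughly like the following sketch, where deliverEmail is a hypothetical helper and the polling interval is arbitrary:

  const hydra = require('hydra');
  // assumes hydra.init(...) has already been called and deliverEmail(bdy) exists

  setInterval(async () => {
    const message = await hydra.getQueuedMessage('email-service');
    if (message && message.bdy) {
      await deliverEmail(message.bdy);
      // marking the message as completed is covered in the next section
    }
  }, 1000);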

Now you may be wondering what happens when there are multiple instances of our email-service, each checking the email message queue for queued emails. Won't that lead to duplicate message processing?

The answer is no, because getQueuedMessage() is atomic: multiple calls to it won't return the same message. So multiple service instances can attempt to pull messages at the same time and only one of them will receive a given message. Hydra accomplishes this using the Redis rpoplpush command. The way that works is that a message is read from the mqrecieved queue and placed in the mqinprogress queue, so the next call to getQueuedMessage won't see the original message in the received queue because it's been moved to the in-progress queue. Again, that's just an implementation detail and not something you necessarily need to worry about.

So once an instance of our email-service constructs and sends an email, it marks the queued message as having been successfully processed.

markQueueMessage

To do that, our email-service calls markQueueMessage(message, completed, reason), passing in the actual message, followed by a completed flag (true or false) and an optional reason string.

  /**
   * @name markQueueMessage
   * @summary Mark a queued message as either completed or not
   * @param {object} message - message in question
   * @param {boolean} completed - (true / false)
   * @param {string} reason - if not completed this is the reason processing failed
   * @return {promise} promise - resolving to the message that was dequeued or a rejection.
   */
  markQueueMessage(message, completed, reason)

If our email-service isn't able to send the email, it can call markQueueMessage with the completed parameter set to false. That will cause the message to be requeued for another service instance to try.

The reason field is useful for indicating why a message was marked as completed or not completed.

Marking a message as completed (true) will remove it from the mqinprogress queue.
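
Putting the pieces together, the email-service's processing step might look something like this sketch (deliverEmail is again a hypothetical helper):

  const hydra = require('hydra');
  // assumes hydra.init(...) has already been called and deliverEmail(bdy) exists

  async function processNextEmail() {
    const message = await hydra.getQueuedMessage('email-service');
    if (!message || !message.bdy) {
      return; // nothing queued right now
    }
    try {
      await deliverEmail(message.bdy);
      // completed: removes the message from the mqinprogress queue
      await hydra.markQueueMessage(message, true);
    } catch (err) {
      // not completed: the message is requeued so another instance can retry
      await hydra.markQueueMessage(message, false, err.message);
    }
  }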

Tips and Tricks

As I stated earlier, Hydra message queuing is basic, but it can also be powerful and extremely fast thanks to Redis.

Given the reliance on Redis, it's important not to create large queued messages, as Redis performance would be impacted at scale. One solution to this concern is to queue a small message which points to a database record or file system storage.
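
For example, rather than queuing a fully rendered email, you might queue a small message whose body only references where the real content lives. The emailRecordId field and its value below are purely illustrative:

{
  "to": "email-service:/",
  "ver": "UMF/1.4.6",
  "bdy": {
    "emailRecordId": "7d2d6d48-9f21-4d54-8f0a-0c2a3e2f9b11"
  }
}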

A nice trick that we've used is to have a service queue messages into its own queue. The use case goes as follows: say that a service receives a request which can't, or doesn't need to, be processed immediately. The service could queue a message for later processing by sending it to itself. Because other instances of your service might be checking the queue, another instance might receive the message and process it. I'm reminded of the game of volleyball, where one player pushes the ball into the air to allow another player to slam it across the net.
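
In code, the trick is simply to address a queued message to your own service name. Here's a rough sketch, assuming a hypothetical report-service that defers work to itself:

  const hydra = require('hydra');
  // inside report-service; assumes hydra.init(...) has already been called

  async function deferReportGeneration() {
    const message = hydra.createUMFMessage({
      to: 'report-service:/',             // queue the message to ourselves
      frm: 'report-service:/',
      bdy: {
        jobType: 'generate-monthly-report' // illustrative payload
      }
    });
    await hydra.queueMessage(message);
    // later, any instance of report-service can pick it up via
    // hydra.getQueuedMessage('report-service') and process it
  }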

If you need more message queuing capability than Hydra offers out of the box, you may want to look at Kue, or one of the fine full-blown message queuing systems widely available.
