Cluster example 3
This example describes how to connect a normal Queue Manager to a fault-tolerant cluster complex (cluster QMB) and use simple workload balancing.
This example is based on seven WebSphere MQ Queue Managers, four of which form a cluster. The figure shows the configuration: QMB1-QMB4 are cluster Queue Managers, and QMX1-QMX3 are normal Queue Managers. The configuration of QMX2 is explained later on.
QMB1 and QMB3 act as "gateway" Queue Managers, and they host the full repositories for the cluster.
QMB2 and QMB4 act as server Queue Managers hosting the server queue QMQ, which offers workload balancing (round robin by default).
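On QMB2 and QMB4, the server queue could be defined roughly like this (a minimal MQSC sketch; the cluster name QMB is taken from the text, the remaining attributes are assumptions):

```
* On QMB2 and QMB4: the clustered server queue
* CLUSTER(QMB) advertises the queue to the cluster;
* DEFBIND(NOTFIXED) lets the workload algorithm pick a target per message
DEFINE QLOCAL(QMQ) CLUSTER(QMB) DEFBIND(NOTFIXED) REPLACE
```

Because the same queue name is advertised from two Queue Managers, the cluster workload algorithm alternates between the two instances.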
QMX1 through QMX3 act as normal application Queue Managers hosting "client" applications that require answers from the servers. These might be ERP, WWW, or dot-something solutions. This means that QMX1-3 must be able to send messages to the QMQ queue on QMB2 and QMB4. QMX1-3 have a normal channel pair against either QMB1 and/or QMB3 - yes, we're dealing with a single-point-of-failure configuration.
On QMB1 and QMB3 we have a QMGR alias, which is a special QREMOTE(ANY.CLUSTER) RNAME('') RQMNAME('') XMITQ('') that "just" creates a "null" reference saying that the Queue Manager also has to respond to the name ANY.CLUSTER. This gives QMB1 and QMB3 the ability to select the right cluster target queue and do workload balancing.
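In MQSC, this Queue Manager alias could look like this (a sketch; the attribute values are exactly those given above):

```
* On QMB1 and QMB3: Queue Manager alias for ANY.CLUSTER
* Blank RNAME/RQMNAME/XMITQ means: strip the alias here and resolve
* the target queue locally, via the cluster workload algorithm
DEFINE QREMOTE('ANY.CLUSTER') RNAME('') RQMNAME('') XMITQ('') REPLACE
```

Any message arriving addressed to Queue Manager ANY.CLUSTER is then treated as if it were addressed to the gateway itself, and the queue name is resolved against the cluster.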
On QMX1-3 there is a QREMOTE definition of the server queue: QREMOTE(SERVER) RNAME('SERVER') RQMNAME('ANY.CLUSTER') XMITQ('QMBx.XMIT.QUEUE'), which has the job of sending the messages towards the Queue Manager ANY.CLUSTER via QMB1/QMB3 using the XMITQ.
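On QMX1, for example, the client-side definition might be (a sketch; the transmission queue name QMB1.XMIT.QUEUE is an assumption for a QMX1-to-QMB1 channel):

```
* On QMX1: remote definition of the server queue
* RQMNAME('ANY.CLUSTER') defers the real target choice to the gateway
DEFINE QREMOTE(SERVER) RNAME('SERVER') RQMNAME('ANY.CLUSTER') +
       XMITQ('QMB1.XMIT.QUEUE') REPLACE
```

An application on QMX1 simply opens the queue SERVER; the routing towards the cluster is entirely handled by this definition.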
When the message arrives on QMB1 or QMB3 it has RQMNAME('ANY.CLUSTER'), and now it's up to QMB1 or QMB3 to resolve the name and select the right server queue on the right Queue Manager.
All queues are defined with DEFBIND(NOTFIXED); otherwise this configuration won't work. This makes it possible for all Queue Managers to send messages to SERVER.
When a reply is returned from the servers QMB2 and QMB4, they use names like QMX1.REPLY, which refers to the queue REPLY on QMX1. This also means that QMX2 is able to send messages to QMX1.REPLY (if there is a QREMOTE on QMX2 with RNAME('QMX1.REPLY') and RQMNAME('ANY.CLUSTER')).
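The reply path could be sketched like this (assumptions: the queue names REPLY and QMX1.REPLY as above, a gateway-to-QMX1 transmission queue named QMX1.XMIT.QUEUE, a QMX2-to-QMB1 transmission queue named QMB1.XMIT.QUEUE, and that the gateway definition is advertised with CLUSTER(QMB) so the servers can resolve it):

```
* On QMB1 and QMB3: route QMX1.REPLY back to queue REPLY on QMX1,
* advertised in the cluster so QMB2/QMB4 can resolve the name
DEFINE QREMOTE('QMX1.REPLY') RNAME('REPLY') RQMNAME('QMX1') +
       XMITQ('QMX1.XMIT.QUEUE') CLUSTER(QMB) REPLACE

* On QMX2: let local applications reach QMX1's reply queue via the cluster
DEFINE QREMOTE('QMX1.REPLY') RNAME('QMX1.REPLY') RQMNAME('ANY.CLUSTER') +
       XMITQ('QMB1.XMIT.QUEUE') REPLACE
```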
Why use such a configuration? There are many reasons. One could be that QMX1-3 host only non-critical applications; another that QMX1-3 are located at some of your business partners and your company doesn't want to expose its WebSphere MQ configuration. I could think of a dozen more reasons for doing it this way.
Explanation of configuration
Queue Manager QMX1 is connected to QMB1 with a set of channels (SDR-RCVR); Queue Manager QMX3 is connected to QMB3 in the same way.
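One direction of such a channel pair might be defined like this (a sketch; the channel name, connection name, and port are assumptions — the reverse QMB1-to-QMX1 direction needs a matching pair the other way):

```
* On QMX1: transmission queue and sender channel towards QMB1
DEFINE QLOCAL('QMB1.XMIT.QUEUE') USAGE(XMITQ) REPLACE
DEFINE CHANNEL('QMX1.TO.QMB1') CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('qmb1.example.com(1414)') XMITQ('QMB1.XMIT.QUEUE') REPLACE

* On QMB1: the matching receiver channel (same channel name)
DEFINE CHANNEL('QMX1.TO.QMB1') CHLTYPE(RCVR) TRPTYPE(TCP) REPLACE
```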
Why not connect QMX1 to both QMB1 and QMB3?
That's a great idea, but we can't share an XMITQ, and a QREMOTE can't point to more than one XMITQ. That's the issue seen from QMX1->QMB1; in the other direction, QMB1->QMX1 and QMB3->QMX1, it is possible. But the real problem is that messages are routed through QMB1 and QMB3, and if one of the sender channels stops, a lot of messages will sit in its XMITQ until the channel problem is fixed. So it's not a clever solution at all.
Explanation on QMX2
If we would like to improve availability, we could have a double channel set like the one on QMX2, so it can connect to either QMB1 or QMB3. QMX2 has only one XMITQ, which is shared between the two channels (remember that only one sender channel may open it at a time), so we start the sender channel manually (with an automation tool). And to minimize the number of messages that could sit in the XMITQ of QMB1/QMB3, we keep QMX2.REPLY put-enabled (PUT(ENABLED)) only on the Queue Manager that is running the channel.
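The QMX2 side of this setup could be sketched as follows (assumptions: channel names, the shared transmission queue name, connection names, and ports):

```
* On QMX2: one shared transmission queue, two sender channels.
* Only one sender channel may have the XMITQ open at a time,
* so the channels are started manually / by an automation tool.
DEFINE QLOCAL('QMB.XMIT.QUEUE') USAGE(XMITQ) REPLACE
DEFINE CHANNEL('QMX2.TO.QMB1') CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('qmb1.example.com(1414)') XMITQ('QMB.XMIT.QUEUE') REPLACE
DEFINE CHANNEL('QMX2.TO.QMB3') CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('qmb3.example.com(1414)') XMITQ('QMB.XMIT.QUEUE') REPLACE

* Start whichever channel should be active right now
START CHANNEL('QMX2.TO.QMB1')
```

And on the gateways, the automation would toggle the put status of the reply routing so replies only queue up where the channel is running (sketch):

```
* On the gateway currently running the channel to QMX2:
ALTER QREMOTE('QMX2.REPLY') PUT(ENABLED)
* On the other gateway:
ALTER QREMOTE('QMX2.REPLY') PUT(INHIBITED)
```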