tmqueue — Transactional Message Queue server.
This is a special ATMI server used for transactional persistent queue operations. tmqueue is the backend server used by the tpenqueue(3) and tpdequeue(3) calls. The queue server works in pair with a tmsrv(8) instance, where both tmqueue and tmsrv are configured in an XA environment with NDRX_XA_RMLIB=libndrxxaqdisk.so (the actual disk driver) and NDRX_XA_DRIVERLIB=libndrxxaqdisks.so (static registration) or NDRX_XA_DRIVERLIB=libndrxxaqdiskd.so (dynamic registration) driver loaders. The related tmsrv instance shall be started before the tmqueue process.
The actual queue configuration is described in the configuration file q.conf(5), which may be present either as a separate file or as part of a common-configuration ini section.
The current storage backend uses the file system for persistence. For more details on back-end configuration see libndrxxaqdisks(8).
tpenqueue(3) and tpdequeue(3) can be invoked either as part of a global transaction or outside of one. If no global transaction is used, tmqueue will open the transaction internally.
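The following is a minimal sketch of both invocation modes. The queue space name MYSPACE, queue name TESTQ, transaction timeout and payload are assumptions for illustration only; for the caller-managed transaction mode the calling process is assumed to be configured for the same XA environment as tmqueue, so that tpopen() and tpbegin() can be used.

--------------------------------------------------------------------------------
/* Minimal sketch of both modes; queue space "MYSPACE", queue "TESTQ",
 * timeout and payload are illustrative assumptions only. */
#include <stdio.h>
#include <string.h>
#include <atmi.h>

int enqueue_example(void)
{
    TPQCTL qc;
    char *buf = tpalloc("STRING", NULL, 32);

    if (NULL==buf)
    {
        fprintf(stderr, "tpalloc: %s\n", tpstrerror(tperrno));
        return -1;
    }
    strcpy(buf, "HELLO QUEUE");

    /* Mode 1: caller-managed global transaction (requires an XA-configured
     * environment in the calling process). */
    if (0!=tpopen() || 0!=tpbegin(60, 0))
    {
        fprintf(stderr, "cannot start transaction: %s\n", tpstrerror(tperrno));
        tpfree(buf);
        return -1;
    }

    memset(&qc, 0, sizeof(qc));
    if (0!=tpenqueue("MYSPACE", "TESTQ", &qc, buf, 0, 0))
    {
        fprintf(stderr, "tpenqueue: %s (diagnostic %ld)\n",
                tpstrerror(tperrno), qc.diagnostic);
        tpabort(0);
        tpfree(buf);
        return -1;
    }

    if (0!=tpcommit(0))
    {
        fprintf(stderr, "tpcommit: %s\n", tpstrerror(tperrno));
        tpfree(buf);
        return -1;
    }

    /* Mode 2: no global transaction started by the caller --
     * tmqueue opens and commits the transaction internally. */
    memset(&qc, 0, sizeof(qc));
    if (0!=tpenqueue("MYSPACE", "TESTQ", &qc, buf, 0, 0))
    {
        fprintf(stderr, "tpenqueue: %s (diagnostic %ld)\n",
                tpstrerror(tperrno), qc.diagnostic);
    }

    tpfree(buf);
    return 0;
}
--------------------------------------------------------------------------------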
The queue server internally uses thread pools for handling the work. Multiple thread pools are used to avoid deadlocks, where internal processing might make XA driver calls which in the end send notifications back to the same server.
Three thread pools are used:
1) The first one accepts the ATMI tpcall(3) requests.
2) The second thread pool is used by the forward services (which basically are the senders for automatic queues).
3) The third thread pool handles notifications received from the transaction manager (TM). The TM notifies the queue server about XA operation progress (prepare, abort, commit).
Every instance of tmqueue will advertise the following list of services:
For example, for Queue Space MYSPACE, Cluster Node ID 6 and Enduro/X Server ID 100, the services will look like:
The automatic forwarder pool, at every configured scan interval, scans the automatic queues for non-locked messages. Once such a message is found, it is submitted to a worker thread. The worker thread makes a synchronous call to the target service (srvnm from q.conf(5)), waits for the answer and either updates the tries counter or removes the message on success. If the message was submitted with TPQREPLYQ, then on success the response message from the invoked service is enqueued to the reply queue. If the message's destination service fails for the configured number of attempts, the message is forwarded to TPQFAILUREQ; in this case the original TPQCTL is preserved.
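A hedged example of enqueuing a message to an automatic queue with both a reply queue and a failure queue requested. The queue names AUTOQ, REPLYQ and DEADQ are assumptions; the automatic queue itself must be defined with autoq=y and a target srvnm in q.conf(5).

--------------------------------------------------------------------------------
/* Sketch only: queue space and queue names are assumed for illustration. */
#include <string.h>
#include <atmi.h>

int enqueue_for_forwarding(char *buf, long len)
{
    TPQCTL qc;

    memset(&qc, 0, sizeof(qc));

    /* Request that the forwarder places the service reply on success,
     * or the original message after all attempts have failed,
     * into the given queues of the same queue space. */
    qc.flags = TPQREPLYQ | TPQFAILUREQ;
    strcpy(qc.replyqueue, "REPLYQ");
    strcpy(qc.failurequeue, "DEADQ");

    return tpenqueue("MYSPACE", "AUTOQ", &qc, buf, len, 0);
}
--------------------------------------------------------------------------------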
The tmqueue server is optimized to wake up the forwarder dispatch thread when a new processable message is enqueued into an automatic queue. If the thread which enqueued the new message detects that the forwarder thread is sleeping, an immediate wakeup of the forwarder thread is performed. This approach is not fully synchronized with the dispatcher thread (for performance reasons), thus it is still possible that the scan-time sleep is performed even though a processable message was enqueued.
During startup, tmqueue reads messages from the XA backend driver. All messages are read. Any active (neither prepared nor committed) messages found in the persistent storage are rolled back at startup. Only prepared and committed commands/messages are read and stored in the message lists. Any prepared command automatically places a lock mark on the corresponding messages.
In case tmqueue was shut down with incomplete transactions:
For each transaction, the tmqueue server keeps internal accounting with an active transaction expiry setting. If the transaction is not completed within the given time frame, tmqueue will automatically roll back the transaction.
For more examples of persistent queue usage, configuration, command line and coding samples, see the atmitests/test028_tmq folder.
When a commit is performed for several messages enqueued within a global transaction, each message is unblocked (made available in the queue space for dequeue or automatic forwarding) separately, message by message. This means that during commit execution, even though the commit is not yet completed, part of the messages will already appear as committed.
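To illustrate, the sketch below enqueues several messages in one global transaction; while tpcommit() is still running, some of the messages may already be visible to consumers before the call returns. The queue space and queue names are assumptions.

--------------------------------------------------------------------------------
/* Sketch: batch enqueue within one global transaction. */
#include <string.h>
#include <atmi.h>

int enqueue_batch(char *bufs[], long lens[], int count)
{
    TPQCTL qc;
    int i;

    if (0!=tpbegin(60, 0))
    {
        return -1;
    }

    for (i=0; i<count; i++)
    {
        memset(&qc, 0, sizeof(qc));
        if (0!=tpenqueue("MYSPACE", "TESTQ", &qc, bufs[i], lens[i], 0))
        {
            tpabort(0);
            return -1;
        }
    }

    /* Messages are unblocked one by one as the commit progresses. */
    return tpcommit(0);
}
--------------------------------------------------------------------------------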
The NOJOIN flag is not supported by the libndrxxaqdisks and libndrxxaqdiskd XA drivers.
Only one tmqueue server is allowed per queue space. Thus it is mandatory that the <min> and <max> tags are set to 1. Otherwise incorrect queue logic may be expected.