concurrent request delayed performance

Added by Mohammed Imran 8 months ago

Hello,

We are using Enduro/X core (server) and Enduro/X Java (client) to provide SOAP web services as a replacement for Oracle Tuxedo.

Concurrent requests are delayed after processing by the Enduro/X core; the Enduro/X Java side appears to delay processing of the response queue, although everything works well with single requests. Our current setup is 3 tmsrv, 1 admsrv, 1 tpesrv and 1 instance of the Enduro/X server application (which calls the API). We do not know whether anything is missing or whether something else needs to be added to improve concurrent transaction processing. We are also not sure how to verify that all tmsrv instances take part in processing concurrent requests. The sequence of methods we use is below.

/* Client flow start */

AtmiCtx ctx = new AtmiCtx();
TypedUbf ub = (TypedUbf)ctx.tpalloc("UBF", "", Request_Length);  // allocate UBF buffer
ub.Bchg(reqclass.Request_queue, 0, Request);                     // set the request field
ub = (TypedUbf)ctx.tpcall("Request_function", ub, 0);            // synchronous service call

/* Server flow start */

Bget(p_ub, Request_queue, 0, Request, 0L);   /* read request field */
API_function(Request, Response);             /* tpbegin -> tpsuspend / tpresume -> tpcommit */
Bchg(p_ub, Response_queue, 0, Response, 0L); /* set response field */
tpreturn
/* Server flow end */

result = ub.BgetString(reqclass.Response_queue, 0);
ctx.tpterm();

/* Client flow end */


Replies (3)

RE: concurrent request delayed performance - Added by Madars 8 months ago

Hello,

1) The service is in C/C++, right?

2) What is the response time for the core service? Maybe it is doing some long processing and thus concurrent requests wait on this service.

2.1) Can you please give the following output:

$ xadmin psc

This will show per-service request statistics (counters and response times).

2.2) Did you start multiple copies of the core service in ndrxconfig.xml, i.e. with <min> > 1?
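For reference, a minimal sketch of such an entry in ndrxconfig.xml (the server name, srvid and log path here are placeholders, not taken from your configuration):

```
<!-- inside the <servers> section of ndrxconfig.xml -->
<server name="coresv">                  <!-- placeholder binary name -->
    <min>5</min>                        <!-- start 5 copies at boot -->
    <max>10</max>                       <!-- allow up to 10 copies -->
    <srvid>200</srvid>                  <!-- base server id; placeholder -->
    <sysopt>-e ${NDRX_APPHOME}/log/coresv.log -r</sysopt>
</server>
```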

3) Is your Java web handler capable of issuing Enduro/X requests concurrently (multiple threads)? You need several threads, each with its own allocated AtmiCtx, and the http/s threads should dispatch the workload across these Enduro/X sender threads.
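A minimal sketch of that dispatch pattern in plain Java. Here `Ctx` is a hypothetical stand-in for a worker-owned AtmiCtx wrapper and its `call()` stands in for tpcall(); the real Enduro/X Java classes are omitted so the sketch stays self-contained:

```java
import java.util.concurrent.*;

public class SenderPool {
    // Hypothetical stand-in for a per-thread Enduro/X context
    // (real code would wrap AtmiCtx and issue tpcall() here).
    static class Ctx {
        final long threadId = Thread.currentThread().getId();
        String call(String req) { return "reply:" + req + ":ctx" + threadId; }
    }

    // Each sender thread lazily allocates its own context; a context
    // is never shared between threads.
    private static final ThreadLocal<Ctx> CTX = ThreadLocal.withInitial(Ctx::new);
    private final ExecutorService pool;

    SenderPool(int nThreads) {
        pool = Executors.newFixedThreadPool(nThreads);
    }

    // HTTP handler threads call this; the blocking service call runs
    // on a dedicated sender thread that owns its context.
    Future<String> submit(String request) {
        return pool.submit(() -> CTX.get().call(request));
    }

    void shutdown() { pool.shutdown(); }

    public static void main(String[] args) throws Exception {
        SenderPool sp = new SenderPool(4);
        Future<String> f = sp.submit("PING");
        System.out.println(f.get()); // e.g. reply:PING:ctx<id> (thread id varies)
        sp.shutdown();
    }
}
```

The key point is that the contexts live in thread-local storage of a fixed sender pool, so each context is used by exactly one thread, while any number of HTTP threads may call submit().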

4) Can you run the test and grab the following shell outputs:

Service response times:

$ xadmin psc

Queue statistics:

$ xadmin pq
$ xadmin pqa

Transaction statistics:

$ xadmin pt

5) tmsrv can handle traffic concurrently: it is highly multi-threaded by itself, and it also supports the ndrxconfig.xml <min> > 1 concurrency.

You can enable tmsrv logs to see what threads are doing:

Edit app.ini:


[@debug]
...
# add:
tmsrv=ndrx=5 file=

Restart the tmsrv:

$ xadmin sreload tmsrv

and check the logs, e.g. (the exact path depends on the tmsrv -e <log file> setting):


$ tail -f $NDRX_APPHOME/log/tmsrv-rm1.log

PS,

We have the "endurox-connect" product, which gives you out-of-the-box JSON web service handlers: it converts JSON<->UBF and submits the message to the configured target service. Additionally, if SOAP is required, "restincl" provides a generic "ext" mode, where you configure two services, one for parsing the message body and one for building it, and in this way you can write converters for SOAP messages, while all the other tasks, such as http/s handling, http routing and concurrency, are done by endurox-connect (the restincl binary) itself. "restincl" is highly concurrent and can easily handle up to several tens of thousands of requests/responses per second (of course, the target services must also be quick and load balanced).

The converter services are simple ones: such a service receives a UBF request with the http message data fields (http body, headers), and the converter's task is to build the target UBF, which restincl later submits to the core service. For the response, the converter builds the http header/body data fields from the response UBF buffer (which is filled by your core service's responses).

RE: concurrent request delayed performance - Added by Mohammed Imran 8 months ago

Hi Madars,

Please find response below.

1) That's right, it's C++.

2) Most of the requests complete within 10ms; a few take 2 to 3 seconds to process on the core API server (e.g. ACM-X111 in the attached ndrx config), and for this server we see that the other concurrent requests get delayed, because of which the end-to-end time for concurrent requests also increases.

2.1) Attached. We are not able to understand the output of these statistics; everything shows 0ms. What does this time represent?
2.2) Yes, we used <min> and <max> = 2 for one of the core servers in ndrxconfig.xml. It does reduce the load to some extent, but not significantly, as we are expecting a huge number of incoming requests. ndrxconfig.xml attached for your reference.

3) Yes, we are using the Spring Boot framework and it can handle concurrent requests; we do see the requests going through the handler, but the core server does not process them until one of its instances becomes available. Does increasing the number of core server instances increase performance? Is there any limit on the maximum number of core servers that can be used?

4) Attached.

5) How do multiple tmsrv instances work here? Is tmsrv responsible for assigning tasks to the next available core server? Does having multiple tmsrv instances improve load balancing, or are multiple API core servers needed as well to handle the load?

RE: concurrent request delayed performance - Added by Madars 8 months ago

Hi Mohammed,

2.1) Attached. We are not able to understand the output of these statistics; everything shows 0ms. What does this time represent?

Did you run some test prior to capturing the output? If the output was captured on a system which was booted but had not processed any work, then the number of completed requests and the response times would show as 0ms.

3) Yes, we are using the Spring Boot framework and it can handle concurrent requests; we do see the requests going through the handler, but the core server does not process them until one of its instances becomes available. Does increasing the number of core server instances increase performance? Is there any limit on the maximum number of core servers that can be used?

Yes, it will increase performance: with a higher number of processes, the system can handle a larger number of concurrent requests for the slow service.
The upper limit on the number of servers is 32000, but in the default configuration it is about 10000.
So you can easily increase the count to something like 20..50 (this depends, however, on your process logic, the number of DB connections, and whether the DB can handle that many connections).

5) How do multiple tmsrv instances work here? Is tmsrv responsible for assigning tasks to the next available core server? Does having multiple tmsrv instances improve load balancing, or are multiple API core servers needed as well to handle the load?

Each tmsrv can handle 10 concurrent XA operations related to two-phase commit. If you set <min> larger, then at the point where those 10 threads of the first tmsrv instance for the XA resource are exhausted, the second instance starts to take over the traffic.

tmsrv does not assign any tasks to servers. It only handles the support routines for distributed transaction processing (i.e. tpbegin, tpcommit, etc.).
So to get more concurrent processing, you should start a larger number of your core servers.

PS,
In order to locate the bottleneck, I would recommend loading the system constantly with a high number of concurrent requests, and during that time inspecting:

$ xadmin pq

If the number of enqueued messages stays constantly above 0 for some service, this is an indication of a bottleneck related to that service.

You should start worrying about tmsrv only if, in this scenario, you see the number of enqueued messages grow for any of these services:

@TM-3                 0       0 0 0 0 0 0 0 0 0 0 0
@TM-1-3               0       0 0 0 0 0 0 0 0 0 0 0
@TM-1-3-4             0       0 0 0 0 0 0 0 0 0 0 0
@TM-1-3-5             0       0 0 0 0 0 0 0 0 0 0 0
@TM-1-3-6             0       0 0 0 0 0 0 0 0 0 0 0
