Did You Know - Request wait time is not a good measure of concurrent manager health
For many years now, all the big players have followed the line that the length of time a request waits to be
processed (pending time) is a measure of concurrent manager health. The inference is that the longer requests wait, the unhealthier the concurrent managers are.
That’s not to say that a request sitting in the queue longer than normal isn’t having a problem,
but as a measure of manager health, well, that’s a stretch!
So why is wait time not a good measure?
Scenario 1 – Overnight managers
What if you have an Overnight manager that only wakes up at 7:00pm? Placing a request in that queue at 3:00pm
will cause it to remain in a Pending status until the overnight manager starts, and
by 7:00pm it will have a wait time of 4 hours.
Scenario 2 – Resubmit a prior request
It is common practice for users to resubmit a prior request; maybe they want the same report but don’t
want to type the parameters again. When the request is resubmitted, the new request picks up the requested start
date of the prior one. Guess what: even if the new request runs immediately, it will show a wait time equal to
the gap between when the original request ran and when the latest one started. This could be days…
Scenario 3 – On-hold request
Try this one. Submit a concurrent program and place it on hold before it runs. Now, in 5 days’ time, take it off
hold and let it run. No surprises for guessing the wait time will be 5 days.
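The common thread in the three scenarios above is that reported wait time is effectively the gap between a request’s requested start time and its actual start time. A minimal sketch of that arithmetic (the function and field names here are illustrative, not the actual E-Business Suite columns):

```python
from datetime import datetime, timedelta

def wait_time(requested_start: datetime, actual_start: datetime) -> timedelta:
    """Wait time as typically reported: gap between requested and actual start."""
    return actual_start - requested_start

# Scenario 1: request queued at 3:00pm for an overnight manager that wakes at 7:00pm.
queued = datetime(2009, 5, 1, 15, 0)
started = datetime(2009, 5, 1, 19, 0)
print(wait_time(queued, started))  # 4:00:00 -- four hours, yet nothing is wrong

# Scenario 2: a resubmitted request inherits the original requested start date,
# so even an immediate run reports days of "waiting".
original_requested = datetime(2009, 4, 28, 9, 0)
resubmitted_started = datetime(2009, 5, 1, 9, 0)
print(wait_time(original_requested, resubmitted_started))  # 3 days, 0:00:00
```

In each case the large wait time is an artifact of how the start times are recorded, not a sign of manager trouble.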
Scenario 4 – Custom manager to run slow requests
Suppose you have created a concurrent manager called “SLOW MANAGER” to run all the slow, resource-intensive requests.
Any request sent to that queue should be long running. If you get a few of these requests in the queue, the wait time
should be quite large.
Once you start creating custom managers, wait time should be calculated on a per-manager basis, as each
custom manager will have a different processing profile.
All of these examples are normal processing within an Oracle E-Business Suite environment. In none of these scenarios
does a long wait time indicate a problem with the concurrent manager configuration.
There is a better solution; one I have used for most of my career and built into the Quest products
I designed. The number of requests in the queues is the best indicator of
concurrent manager health I have found.
For example, if you know the normal peak queue load for the Standard manager is 50 pending requests,
and the pending (normal) count climbs to 75 or 100, you know something must be wrong; that is, there are a
few slow-processing jobs holding up the managers.
If the slow queue has a normal peak of 5 requests, alert when it gets to 10 or 15.
None of this is rocket science; it’s just about understanding queue behavior.
This has to be implemented on a manager-by-manager basis, as each manager has a different processing profile:
the Standard manager is different to the “SLOW” manager, which in turn is different to the “FAST” manager.
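The alerting idea above can be sketched as a simple per-manager threshold check. The manager names and threshold values below are illustrative assumptions based on the examples in this article; in a real system the pending counts would be polled from the concurrent request tables:

```python
# Illustrative per-manager alert thresholds, tuned from each queue's
# normal peak pending load (assumed values, not a standard configuration).
THRESHOLDS = {
    "STANDARD": 75,  # normal peak ~50 pending requests
    "SLOW": 10,      # normal peak ~5 pending requests
    "FAST": 30,      # hypothetical fast-request queue
}

def check_queues(pending_counts: dict) -> list:
    """Return an alert message for each manager whose pending-request
    count exceeds that manager's own threshold."""
    alerts = []
    for manager, threshold in THRESHOLDS.items():
        count = pending_counts.get(manager, 0)
        if count > threshold:
            alerts.append(f"{manager}: {count} pending (threshold {threshold})")
    return alerts

print(check_queues({"STANDARD": 100, "SLOW": 4}))
# ['STANDARD: 100 pending (threshold 75)']
```

Note that each manager is judged only against its own normal load: 100 pending in the Standard queue raises an alert, while 4 pending in the slow queue stays quiet.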
Tip for custom concurrent programs
Now here is an idea. All custom concurrent programs should be assigned to the “SLOW” manager until they earn
the right to be moved to the standard manager. The developers will not like this, but it does limit the
damage a dodgy concurrent program can do in a production environment.
Want to know more? Concurrent Managers have continued to be such a problem area for so many sites that I have now also devoted 3 full
PAMtutorials to the subject of Concurrent Managers,
so I would encourage anyone interested in better managing their Concurrent Managers to check these out as well!
Last update: May 2009