When Conventional Thinking Fails: A Performance Case Study in Order Management Workflow Customization
By gaurav.verma on May 17, 2007
Think about it: an order management workflow interacts with a variety of components like core workflow, order management and pricing application call interfaces. The nature of a customization may unwittingly worsen the situation. An undesirable combination of these aspects can create a situation that is very difficult to diagnose and troubleshoot, as shall be demonstrated here. This article is essentially the distilled wisdom of a severity 1 situation at a client site that was unable to ship ordered items to its customers because the OM order line workflow background process had a severe performance issue. The business impact was tremendous: $1.5 to 2 million worth of shipping and invoicing was being held up.
At times, performance troubleshooting requires some functional insight as well. TKPROF and 10046 traces are not a panacea for all performance issues. In this case, some insight into the nature of Order Management APIs was also required to get traction on the performance problem. A painful discovery path followed, riddled with bumps and insightful discoveries. The troubleshooting approach used was really out of the box: it involved breaking down the code being executed and applying logical deduction.
The generic learnings from the ordeal are presented in this case study. It is hoped that they will help the Oracle Applications user community be conscious of the hidden implications of customizing OM workflows.

Summary of learnings from the case study:
- Don't define expensive custom activities after the START_FULFILLMENT step in the OM workflow. Doing so amplifies the performance hit many times over, especially when the Oracle Configurator and Order Management modules are being used in tandem. (Better still, don't have expensive custom workflow activities at all.)
- Batch processing should be preferred over piecemeal processing, especially when an API provides for it. This eliminates most of the repetitive overhead.
- The whitebox (drill-down) approach works for taking apart a baffling performance problem. Trying to simulate a test case usually leads to the heart of the performance issue in a more reliable way.
- Getting extensive debugs and traces is great, but only when it is known what is being looked for. Asking the right probing questions is very important. Question, question, question.
- A well thought out plan with minimal tracing and a drill-down approach can bring better results than a shotgun or blunderbuss approach.
- Sometimes, high-level functional knowledge of the processing being done can be very useful in understanding the nature of the problem. A balance between strictly technical and purely functional knowledge can be fruitful in solving performance problems.