Performance Tuning 201 - The ProcessData Predicament

One of the most common questions in the field is: how do I get my Sterling Integrator (SI) servers and Processes to perform better? There are articles on controlling your Process Queues, tuning your Sterling Integrator installation, looking at your system infrastructure (i.e. network, database, and file storage performance), and advice on fine-tuning (or overhauling) your Business Processes and Translation Objects.
Yet one of the most overlooked subjects is possibly the greatest contributor to slow performance and wasted system resources. Neglect (or ineffective control) of this same subject leads to greater complexity in developing, maintaining, supporting, and troubleshooting Business Processes. This subject is also key to gaining visibility into Process execution behavior.
The subject I am talking about is ProcessData.
This is the first installment of a series of SI-Axis articles covering the topic "Performance Tuning 201: The ProcessData Predicament":

• Part 1: The ProcessData Predicament (describing ProcessData and the problems around its misuse)
• Part 2: Planning ProcessData (outlining the need, and how, to plan your ProcessData structure)
• Part 3: The Wildcard Assign (describing the wildcard assign and its impact)
• Part 4: Taking Control of ProcessData (applying all of the above in your Business Processes)

Part 1: What is ProcessData?

To take better control of ProcessData, you must first understand what ProcessData is.
ProcessData is an XML DOM (Document Object Model) which resides in memory during the execution of a Process.
A DOM is a technique of structuring XML in memory in such a way that it is fast to address, search, manipulate, and add to the XML content, in any direction. This is in contrast to SAX (the Simple API for XML), a streaming parser that essentially only reads in one direction, but uses fewer resources to do so.
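To make this concrete, here is a minimal, hypothetical sketch of what ProcessData content might look like when viewed in SI; the PrimaryDocument reference follows the general shape SI displays, but the SCIObjectID value and the remaining element names are invented for illustration:

<ProcessData>
  <!-- Reference to the Primary Document; the SCIObjectID value is a placeholder -->
  <PrimaryDocument SCIObjectID="placeholder-object-id"/>
  <!-- Hypothetical elements written into ProcessData by earlier process steps -->
  <Orders>
    <Order>
      <OrderNumber>12345</OrderNumber>
      <Customer>ACME Corp</Customer>
    </Order>
  </Orders>
  <!-- Intermediate working values that are often never cleaned up -->
  <temp_data>...</temp_data>
</ProcessData>

Every element here is held in the in-memory DOM for as long as the Process runs, or until it is explicitly released.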
Every Process that runs on SI (synchronously or asynchronously) has its own ProcessData.
The system memory used by the ProcessData DOM can be as much as 10x (ten times) the size of the raw XML you see when you view the ProcessData content. The size of the DOM content also directly influences the throughput of the DOM parser: the larger your ProcessData is, the slower every XPath instruction executed against it becomes.
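As an illustration, every Assign in a BPML Business Process evaluates an XPath expression against this ProcessData DOM. In the hypothetical sketch below (reusing the invented Orders structure from earlier), each assign has to walk the relevant portion of the DOM, so its cost grows with the size of ProcessData:

<sequence name="XPath_Examples">
  <!-- Counts Order elements by walking the in-memory DOM -->
  <assign to="OrderCount" from="count(Orders/Order)"/>
  <!-- Copies a single value out of the DOM into a new ProcessData element -->
  <assign to="FirstCustomer" from="Orders/Order[1]/Customer/text()"/>
</sequence>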
When talking in terms of performance tuning, it is commonly known that Persistence and ProcessData are directly linked:

• Lots of ProcessData + Full Persistence = Poor Performance + Huge Storage Capacity Requirements

But in the context of the previous paragraph:

• Large volumes of ProcessData = High System Memory + CPU Usage = Poor Performance

... even if Persistence is completely turned off.
Large volumes of ProcessData do not just negatively affect process execution and system resource usage. Users are asked to search through large volumes of ProcessData (often unstructured and obscure even to the most technical of users) to identify content that reveals how a Process executed, so they can fulfill their tasks of supporting, reporting on, or enhancing that Process. The messier your ProcessData is, the more difficult it is for the appropriate users to read.
So how do I take control of my ProcessData? The first thing that comes to almost every user's mind is the Release Service. The Release Service is a very powerful and useful tool, but not necessarily the first one in your toolkit you should reach for (a sketch of how it is typically invoked follows below).
Why, you may ask, do I say that? Resolving the ProcessData predicament starts before you even create your first Business Process, and continues for every process step you add.
Planning is the key to establishing effective ProcessData control.
To find more articles like this, visit: http://www.siaxis.com