Thanks, Andreas, for the quick reply, really appreciated :)
To answer your questions: both types of input are valid, small and large files. The small files are spread throughout the day, since they are generated on request, while the large files pile up over each month. For performance reasons I am focusing here on the large files, since the processing window for them is limited and should be kept to a minimum.
About the design of our project: it is as you described, one input should create more than one version of the output files. You mentioned:
"you might want to prepare the job in one application, for instance split it up and process it in another with many threads (using more processor power)."
What I did in my design is that one job receives the input, and in the message stage about 4 processes are called to produce the outputs in the different versions, as you mentioned. Does this match what you advised?
Or should I modify the design so that there is a separate job for each output version, with the input connector connected to all of these jobs? That way I would have multiple jobs working in parallel and could avoid the callproc function. But would this cause a side effect such as multiple copies of the input, which would affect the DB space?
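To make sure I understand the fan-out idea, here is a rough, tool-agnostic sketch in plain Python (not StreamServe configuration; `render_version` and the version names are made up for illustration) of one input being handed to several parallel workers, one per output version, instead of a single job calling the processes one after another:

```python
from multiprocessing import Pool


def render_version(args):
    """Hypothetical worker: turn the shared input into one output version."""
    input_data, version = args
    # Stand-in for real formatting work (PDF rendering, HTML layout, ...).
    return f"{version}:{input_data.upper()}"


def process_input(input_data, versions):
    """Fan the single input out to one parallel worker per output version."""
    with Pool(processes=len(versions)) as pool:
        return pool.map(render_version, [(input_data, v) for v in versions])


if __name__ == "__main__":
    # One input, four output versions produced in parallel.
    outputs = process_input("invoice data", ["pdf", "html", "txt", "xml"])
    print(outputs)
```

Note that in this sketch the input is passed to each worker rather than copied to disk, which is the same trade-off as the question above: parallel jobs gain throughput at the cost of each job holding its own copy of the input while it runs.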
And to be honest, this statement was strange to me:
" The set up you have could really perform great, but it might not perform better than an ordinary 4 core with 4GB RAM! "
I thought great performance is reached when StreamServe hits 100% utilization of the server resources, so that I get the maximum throughput. Am I right?