  • Question

    How to Get Maximum Performance?

  • Ahmad Saleh
    0 likes 198 views

    Dear Experts,

    We have two powerful application servers:

    64-bit Windows Server 2012 R2

    128 GB RAM

    32 processors

    How can we utilize these servers to get the best performance? What factors, configuration, and design choices would give the best results?

    Note that we are using version 5.6.2.

    Your advice is appreciated.

    Regards.


    Monday 10 April, 2017
  • Andreas Hjelle
    1 likes

    Hi,

    I would definitely start the other way round: what kind of setup do I need to get the work done? It really depends on what kind of jobs you will receive. Large files or small files? Jobs spread out during the day, or piled up at specific times?

    For instance, if you receive one really large file, you will need lots of RAM, but the number of processors might not help at all.

     

    If you have jobs where one input creates different, complex outputs, and you need to process as a singleton (-sync) to get the output in a particular order, you might want to prepare the job in one application, for instance split it up and process it in another with many threads (using more processor power).
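
    Very roughly, the idea looks like this in a plain Python sketch (not StreamServe code; the chunk size and the render_document() step are just placeholders I made up for illustration):

```python
# Conceptual sketch only, not StreamServe code: a single "prepare" step splits
# one large input in order, then many threads process the chunks in parallel.
# The chunk size and render_document() are made-up placeholders.
from concurrent.futures import ThreadPoolExecutor

def prepare(big_input, chunk_size=1000):
    """Phase 1 (runs as a singleton): split the large input into ordered chunks."""
    return [big_input[i:i + chunk_size] for i in range(0, len(big_input), chunk_size)]

def render_document(record):
    """Stands in for the heavy per-document formatting work."""
    return record.upper()

def process(chunks, threads=32):
    """Phase 2: fan the chunks out over many worker threads; map() keeps the order."""
    with ThreadPoolExecutor(max_workers=threads) as pool:
        rendered = pool.map(lambda chunk: [render_document(r) for r in chunk], chunks)
    return [doc for chunk in rendered for doc in chunk]

if __name__ == "__main__":
    documents = process(prepare([f"record-{i}" for i in range(10_000)]))
    print(len(documents))  # 10000
```

    The ordered prepare step plays the role of the singleton, and the thread pool plays the role of the second application with many threads.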

     

    The setup you have could really perform great, but it might not perform better than an ordinary 4-core with 4 GB of RAM!

     

    Also note that the network and the database will affect processing, not only the StreamServe server.

     

    And again: start by analyzing the input and output.

     

    Good luck.

    Regards,

    andreas

    Monday 10 April, 2017
  • Ahmad Saleh
    0 likes

    Thanks Andreas for the quick reply, really appreciated :)

    To answer your questions: both types of input are valid, small and large files. The small files are spread out during the day, since they are generated on request, while the large files pile up for each month. For performance, I am focusing here on the large files, since the processing window is limited and the run time should be kept as short as possible.

    About the design of our project, it is like what you mentioned: one input should create more than one version of the output files. You wrote:

    "you might want to prepare the job in one application, for instance split it up and process it in another with many threads (using more processor power)."

    What I did in my design is that one job receives the input, and in the msg stage about 4 processes are called to produce the outputs in different versions, as you mentioned. Does this match what you advised?

    Or should I modify the design to have a job for each output version and connect the input connector to all of these jobs, so that I get multiple jobs working in parallel and move away from the CallProc function? But would this cause a side effect such as multiple copies of the input, which would affect the database space?
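
    To make the comparison concrete, here is a small plain-Python sketch of the two options I am weighing (the version names, the 0.5 s delay and make_version() are only illustrative placeholders, not our real processes):

```python
# Illustration only (plain Python, not StreamServe): option A calls the four
# "processes" one after another inside a single job, option B runs one job per
# output version in parallel. make_version() and the 0.5 s delay are made up.
import time
from concurrent.futures import ThreadPoolExecutor

VERSIONS = ["version_a", "version_b", "version_c", "version_d"]

def make_version(message, version):
    time.sleep(0.5)                       # stands in for the real formatting work
    return f"{message}-{version}"

def option_a_one_job(message):
    """One job, four called processes in sequence (CallProc-style): ~4 x 0.5 s."""
    return [make_version(message, v) for v in VERSIONS]

def option_b_parallel_jobs(message):
    """One job per output version, all running at the same time: ~0.5 s in total."""
    with ThreadPoolExecutor(max_workers=len(VERSIONS)) as pool:
        return list(pool.map(lambda v: make_version(message, v), VERSIONS))

if __name__ == "__main__":
    for option in (option_a_one_job, option_b_parallel_jobs):
        start = time.perf_counter()
        option("input-message")
        print(option.__name__, round(time.perf_counter() - start, 2), "s")
```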

    And to be honest, this statement was strange to me:

    "The setup you have could really perform great, but it might not perform better than an ordinary 4-core with 4 GB of RAM!"

    I think great performance is when StreamServe reaches 100% utilization of the server resources, so that I get the maximum throughput. Am I right?

     

    Regards,

    Ahmad.


    Wednesday 12 April, 2017
  • Andreas Hjelle
    1 likes

    Hi,

    From your answer I assume then that the large job is what you need to look into.

    There are several ways to make large jobs perform better.

    Job scaling: this is very fast to implement and will let you utilize more of the server's capabilities.

    Split the job up into smaller batches and increase the number of threads on the input and output queues. Also use different queues for different output types (mail, print, file). The startup argument asynchronqueue might help to increase performance as well, while the startup argument sync will decrease performance.

    Analyze the job execution and rewrite or remove time-consuming processes. If you run millions of documents in one batch, a fraction of a second per document makes a huge difference.
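
    As a rough illustration of the batching and the separate output queues (a plain Python sketch, not StreamServe configuration; the batch size, thread counts and the delivery step are assumptions):

```python
# Conceptual sketch only, not StreamServe configuration: split one big batch
# into smaller batches and give each output type (mail, print, file) its own
# queue with its own number of worker threads. Batch size, thread counts and
# the "delivery" step are made-up assumptions.
import queue
import threading

BATCH_SIZE = 500
OUTPUT_THREADS = {"mail": 4, "print": 2, "file": 8}

def split_into_batches(records, batch_size=BATCH_SIZE):
    for i in range(0, len(records), batch_size):
        yield records[i:i + batch_size]

def output_worker(output_type, work_queue):
    while True:
        batch = work_queue.get()
        if batch is None:                   # sentinel: no more work for this worker
            break
        # placeholder for the real mail/print/file delivery
        print(f"{output_type}: delivered a batch of {len(batch)} documents")

def run(records):
    queues = {name: queue.Queue() for name in OUTPUT_THREADS}
    workers = []
    for name, count in OUTPUT_THREADS.items():
        for _ in range(count):
            t = threading.Thread(target=output_worker, args=(name, queues[name]))
            t.start()
            workers.append(t)

    for batch in split_into_batches(records):
        for q in queues.values():           # every batch goes to every output type here
            q.put(batch)

    for name, count in OUTPUT_THREADS.items():
        for _ in range(count):
            queues[name].put(None)          # one sentinel per worker thread
    for t in workers:
        t.join()

if __name__ == "__main__":
    run([f"doc-{i}" for i in range(5_000)])
```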

     

    Good luck!

    andreas

    Tuesday 18 April, 2017
  • Robert Mühlberg
    1 likes

    We also divided our incoming jobs into two classes years ago:

    1) One directory or HTTP port for all small jobs -> InConnector "smal" -> queue "smal" -> its own number of parallel threads (so they don't have to wait while the big ones are busy)

    2) A second directory or HTTP port for big jobs -> InConnector "big" -> queue "big" -> its own number of parallel threads (at most equal to the number of CPU cores)

    In addition: very big jobs are divided into smaller parts so that more than one CPU of the server is really used at the same time, but then each part creates its own output file per job, based on "OutputMode Job".
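
    Roughly the same idea as a plain Python sketch (the size limit, the worker counts and handle_job() are only assumptions for illustration, not our real connector or queue settings):

```python
# Conceptual sketch only, not StreamServe connector/queue settings: small jobs
# get their own queue and workers, so they never wait behind the big ones.
# The size limit, worker counts and handle_job() are made-up assumptions.
import queue
import threading

SIZE_LIMIT = 1 * 1024 * 1024            # jobs above ~1 MB count as "big" here
WORKERS = {"small": 8, "big": 4}        # "big" kept at or below the CPU core count

def handle_job(queue_name, job_id):
    print(f"{queue_name} queue processed job {job_id}")

def worker(queue_name, work_queue):
    while True:
        job_id = work_queue.get()
        if job_id is None:               # sentinel: shut this worker down
            break
        handle_job(queue_name, job_id)

def main():
    queues = {name: queue.Queue() for name in WORKERS}
    threads = []
    for name, count in WORKERS.items():
        for _ in range(count):
            t = threading.Thread(target=worker, args=(name, queues[name]))
            t.start()
            threads.append(t)

    # route each incoming job by size, like the two in-connectors / two queues
    incoming = [("invoice-1", 200_000), ("monthly-statement", 50_000_000), ("invoice-2", 150_000)]
    for job_id, size_in_bytes in incoming:
        target = "big" if size_in_bytes > SIZE_LIMIT else "small"
        queues[target].put(job_id)

    for name, count in WORKERS.items():
        for _ in range(count):
            queues[name].put(None)        # one sentinel per worker thread
    for t in threads:
        t.join()

if __name__ == "__main__":
    main()
```

    The point is only that the small queue keeps its own workers, so a big month-end batch never blocks the small on-request jobs.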

    And have a look at your database server processes and at what you are storing there.

    Good luck Robert

    Wednesday 19 April, 2017
  • Ahmad Saleh
    0 likes

    Thanks Andreas, this is very useful information.

    Also thanks Robert, but I don't quite understand your point. Do you mean that by configuring two paths, each with its own settings, I can match the need and control the overall behavior?

    Regards.


    Thursday 20 April, 2017
  • Robert Mühlberg
    2 likes

    Hi Ahmad,

    Yes, if you define different InConnectors (each with its own queue) for every group of jobs [one group for small jobs, one group for big jobs], then you can set different behavior per group.

    Good luck Robert

    Thursday 20 April, 2017