Test strategy for your StreamServe implementation

Wednesday 22 September, 2010
Anders Ekstrom

Testing a StreamServe solution is as important as testing any other software system. Once properly tested, StreamServe solutions are usually very reliable and produce predictable results. Every implementation needs its own custom test plan, but this article gives some general pointers for ensuring the quality of a StreamServe implementation.

Test strategy

With good testing, most errors and defects can be captured and corrected before the solution goes live.

What is typically not covered in testing (subject to maintenance focus instead):

Experience with properly tested StreamServe solutions shows that a crash in a live environment is most likely caused by one of the following:
  • Changes in the data stream sent to StreamServe (voiding the interface contract) 
  • Resource files removed or inaccessible, e.g. images or database content used by the StreamServe process: files on disk are removed or their access rights changed, or database resources have changed IDs, are missing, or are unreachable due to database failure, network failure, or access rights.
  • Log file area is full
  • Access rights or passwords changed for service accounts used by the solution, e.g. the database account, the StreamServe service account, or the e-mail server account
  • DB or disk is full, too packed with data, or otherwise non-operational.
  • Service packs of OS or other system components have been applied
  • New security policies for e.g. the web server or antivirus software.
  • Network changes: ports blocked, IP addresses changed, etc.
  • Other changes in infrastructure


Test focus

Testing is expensive and should focus on where defects are likely to occur and where adequate testing is possible. That is:

  • Testing should focus on functionality and logic in the components of the solution that are changed.
  • Regression testing should test the overall flow of the process to make sure that it is intact.
  • Testing should be document and integration centric (provided that the solution is structured in a document-centric way, which it usually is); typically, all documents sourced from one business system are tested together.

Initial testing and regression testing

Initial testing of a system should cover the complete solution, together with a review of the project source code to make sure that guidelines are followed. Regression testing should reuse test cases and test data where appropriate to make sure that the overall process is operational. At runtime the StreamServe projects run in different services on the server; all logic and documents that reside in a changed service should be the focus of regression testing, but limited regression testing should also be done for other services residing on the same hardware (or virtual hardware), particularly if they use the same databases (which they usually do).


Deployment and typical environments

Deployment to an environment should be preceded by a database backup as well as a backup of the previously deployed package (unless these backups are already part of the CM process and/or daily database backup routines).
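As a minimal sketch of the package backup step (paths and naming are hypothetical, not part of any StreamServe tooling), the previously deployed package can be copied aside with a timestamp before the new one overwrites it:

```python
import shutil
import datetime
from pathlib import Path

def backup_deployment(package: Path, backup_dir: Path) -> Path:
    """Copy the currently deployed package aside with a timestamp
    before a new deployment overwrites it. Paths are hypothetical."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    target = backup_dir / f"{package.stem}-{stamp}{package.suffix}"
    shutil.copy2(package, target)  # copy2 preserves timestamps/metadata
    return target
```

In practice this would be part of the deployment script, so the backup can never be forgotten.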

There should be a well described test procedure and dedicated test environments available to support this procedure. Minimum number of environments is:

  • Developer seat
    Used by the developer to do component testing
  • System test
    A complete test environment that developers have access to and that has an (almost) identical software configuration to the production environment. Developers are able to deploy to this environment.
  • Production environment
    The environment used for production, where developers have no access rights. Deployment to this environment should follow CM processes.

Most medium to large organizations add at least one more environment to the test chain: the acceptance test (UAT) environment, which is identical to the production environment (software and hardware) and where developers have no access rights, so that deployment needs to go through the CM processes as well.


Test classes, test code

The use of test code is not very common when testing StreamServe solutions (even though many modern guidelines for system development suggest that writing test code is the most important aspect of achieving efficient testing). This is because such code is often hard to produce. Since changes in a StreamServe solution often cause layout changes in documents, it does not make sense to binary-check the integrity of output files: the output files have changed, and most output formats such as PCL, AFP and PDF can change quite extensively in their binary structure even for a small change in the layout. Changes that are not document-centric typically involve changes in drivers, software upgrades, or other changes to distribution, and most commonly these also affect the binary integrity of the output, making test code hard to write and maintain.

However, using output from an earlier (tested) version of the solution, or output generated by a previous output management system, can greatly increase testing efficiency. Typically, printouts of a document batch (probably limited to a couple of hundred documents) are great to use when testing the solution, making page-by-page comparison possible for the tester. PDF files provide the same possibility electronically, although testing is unfortunately often faster on paper.
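Where binary comparison fails, comparing the extracted text of old and new output page by page often still works. A minimal sketch, assuming each document has already been rendered to a list of per-page text strings (the extraction step itself is outside this sketch):

```python
import difflib

def compare_pages(reference: list[str], candidate: list[str]) -> list[str]:
    """Compare two documents rendered as lists of per-page text
    (e.g. text extracted from the old and new PDF output).
    Returns a human-readable report of pages that differ."""
    report = []
    for i, (ref, new) in enumerate(zip(reference, candidate), start=1):
        if ref != new:
            diff = difflib.unified_diff(
                ref.splitlines(), new.splitlines(),
                fromfile=f"reference p{i}", tofile=f"candidate p{i}",
                lineterm="")
            report.append("\n".join(diff))
    if len(reference) != len(candidate):
        report.append(
            f"page count differs: {len(reference)} vs {len(candidate)}")
    return report
```

An empty report means the new version reproduces the reference output; any entry points the tester directly at the page to inspect on paper or in the PDF.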

Tools for automatic comparison of output data are available from StreamServe.

Test and review cases

Use cases

All use cases for the solution should be used to generate test scenarios that are then used to test the solution. This should be done in accordance with traditional test methodologies.


Input data

Input data should be checked and tested to make sure that it follows the interface contract. For XML it is recommended to use schemas or DTDs to validate the input data. StreamServe is able to do this validation at runtime, but smaller test files are easily checked with XML tools like XML Spy. A lot of time is saved by always verifying that the input data files used for testing conform to the interface contract.
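A cheap pre-test check can catch the most common contract violations before a file is ever fed to StreamServe. This sketch only checks well-formedness and the presence of some hypothetical contract-mandated elements; full XSD/DTD validation should still be done with a dedicated tool such as XML Spy:

```python
import xml.etree.ElementTree as ET

# Required elements per the (hypothetical) interface contract.
REQUIRED_PATHS = ["Header/CustomerNo", "Header/InvoiceNo", "Lines/Line"]

def check_input(xml_text: str) -> list[str]:
    """Quick sanity check of a StreamServe input file: well-formedness
    plus presence of contract-mandated elements. Returns a list of
    problems; an empty list means the file passed."""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError as exc:
        return [f"not well-formed: {exc}"]
    return [f"missing required element: {path}"
            for path in REQUIRED_PATHS
            if root.find(path) is None]
```

Running this over every test file in the batch takes seconds and prevents hours of chasing "defects" that are really just malformed test data.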



Build and deployment instructions

These instructions are typically updated during the lifespan of the solution, and testing them to make sure they are accurate is always very important. Deployment instructions must be usable in catastrophic scenarios, so the organization can deploy to completely new hardware in a short time span without delaying production more than absolutely necessary. The rule for testing the instructions is that the person who wrote them cannot test them; the best testing is typically done by someone who knows as little as possible about the solution, StreamServe, the network, and the environment. The instructions should cover all gaps in knowledge, making it "idiot proof" to follow them.

Steering logic, logging and other framework components

A set of data that tests all possible steering logic, logging and other framework components should be available, covering boundary checks for this logic (including data that is outside scope).
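When building such a data set, boundary-value analysis gives a systematic way to cover each numeric steering field. A minimal illustrative helper (the field and its range are assumptions, not from any StreamServe API):

```python
def boundary_values(lo: int, hi: int) -> list[int]:
    """Generate boundary-test values for a numeric steering field
    with valid range [lo, hi]: just outside, on, and just inside
    each boundary, plus one mid-range value."""
    return [lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1]
```

Feeding each of these values into a test document exercises the steering logic on both sides of every decision point, including the out-of-scope cases the text above calls for.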

Output distribution integration

A set of test data that tests all distribution channels and their boundaries should be available, covering all output connectors and using e.g. test e-mail addresses to make sure that no real customers receive test documents.
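One way to guarantee that no real customer is reached is to rewrite every outgoing address to a safe test domain before the test run. A minimal sketch; the domain is a hypothetical example, not a StreamServe feature:

```python
import re

TEST_DOMAIN = "test.example.com"  # hypothetical safe catch-all domain

def redirect_recipient(address: str) -> str:
    """Rewrite an outgoing e-mail address so test runs never reach
    real customers: keep the local part for traceability, but force
    the safe test domain."""
    local = address.split("@", 1)[0]
    # Replace anything unusual in the local part so the result is valid.
    safe_local = re.sub(r"[^A-Za-z0-9._-]", "_", local)
    return f"{safe_local}@{TEST_DOMAIN}"
```

Keeping the original local part makes it easy to see in the test mailbox which customer record produced which document.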

Document layout, content and calculations

This is a common focus point for testing and to test this a number of test data batches should be available:

  • Single document test files for component testing and for the review functionality in StreamStudio. Typically 3-5 examples for every document type.
  • Normal-sized data batches with production (or production-like) data
  • Boundary-checking data in large enough batches to cover all boundary checks
  • Batches (split into several files for cluster testing) for performance testing

Testing is done by feeding an input file to StreamServe and then reviewing the output. In component and system test, using a PDF output connector for testing (it is possible to create physical platform layers in the StreamServe platform to automatically direct all output to test connectors when running in a test environment) will likely make testing more efficient.
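The cluster-testing batches mentioned above need to be split into one input file per node. A minimal sketch of an even round-robin split, assuming the batch can be read as a list of independent document records:

```python
def split_batch(records: list[str], parts: int) -> list[list[str]]:
    """Split a batch of document records into `parts` roughly equal
    groups, round-robin, so each cluster node gets its own input file
    of comparable size for a performance test."""
    return [records[i::parts] for i in range(parts)]
```

Round-robin splitting keeps the per-node workload balanced even if document size varies systematically through the batch (e.g. large invoices sorted first).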

Performance and throughput

To test performance and throughput an environment that is not used by other applications is preferable. Testing includes the use of:

  • StreamServe's built-in log functionality
  • Custom log functionality (e.g. ODBC logging)
  • System monitoring software to monitor CPU usage, network usage, and other resource usage
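The log-based techniques above boil down to extracting timestamps and computing documents per second. A minimal sketch; the log line format here is a hypothetical example, not the actual StreamServe log format:

```python
from datetime import datetime

def throughput_docs_per_sec(log_lines: list[str]) -> float:
    """Estimate throughput from (hypothetical) log lines of the form
    'YYYY-MM-DD HH:MM:SS Document completed'. Uses the first and last
    completion timestamp as the measurement window."""
    times = [datetime.strptime(line[:19], "%Y-%m-%d %H:%M:%S")
             for line in log_lines if "Document completed" in line]
    if len(times) < 2:
        return 0.0
    span = (times[-1] - times[0]).total_seconds()
    return len(times) / span if span > 0 else float(len(times))
```

Comparing this figure across runs (and against CPU/network graphs from the system monitor) shows whether a change in the solution or the environment has affected throughput.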


Integration with source system

Depending on the integration with the source system (file based or directly coupled), testers may need to initiate documents from within the actual source system. File-based integration should use job scheduling software, and this software should be monitored to make sure the integration keeps working.
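For file-based integration, one simple monitoring check is to flag input files that sit too long in the hot folder, which is a sign that the scheduled job has stopped picking them up. A minimal sketch, with the folder layout as an assumption:

```python
import time
from pathlib import Path

def stale_input_files(hot_folder: Path, max_age_seconds: int) -> list[Path]:
    """Return files in a file-based integration hot folder that have
    not been picked up within max_age_seconds, indicating that the
    scheduled StreamServe job may not be running."""
    now = time.time()
    return [p for p in hot_folder.iterdir()
            if p.is_file() and now - p.stat().st_mtime > max_age_seconds]
```

Run periodically (e.g. from the scheduling software itself), a non-empty result should raise an alert to operations.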

Change logs

Change logs provide important documentation of changes to the global resources of the solution (maintained by the lead developer or the person responsible for CM) and to the different integrations with source systems. Change logs per document template are also important for monitoring changes, keeping track of regression testing, and making fallback to earlier versions possible in a manageable way.
