Many of us think of StreamServe merely as a tool for making documents nicer and tidier and having them delivered wherever you want. But have you ever considered how powerful StreamServe is as an integration tool?
With the capability to connect over a range of different protocols and to virtually any data source, a fast transformation process and delivery to almost any location – isn't this what integration is all about?
Common parts of an integration flow
- Connect to delivery system
- Collect data
- Transform data
- Send data
- Connect to receiving system
- Report result
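The steps above can be sketched as a generic pipeline. This is plain Python pseudodesign, not StreamServe code; every function name here is invented for illustration:

```python
# A minimal sketch of the integration flow steps listed above.
# All names are illustrative stand-ins, not a StreamServe API.

def connect(system):
    # Placeholder: open a connection to the named system.
    return f"connected to {system}"

def collect(source):
    # Placeholder: pull a record from the source system.
    return {"order_id": 42, "amount": 100.0}

def transform(data):
    # Map field names into the shape the receiving system expects.
    return {"OrderNo": data["order_id"], "Total": data["amount"]}

def send(payload, destination):
    # Placeholder: deliver the payload to the receiving system.
    return True

def report(payload):
    # Report the result of the flow.
    return f"delivered order {payload['OrderNo']}"

def run_flow(source, destination):
    connect(source)                  # connect to delivery system
    data = collect(source)           # collect data
    payload = transform(data)        # transform data
    send(payload, destination)       # send data / connect to receiver
    return report(payload)           # report result
```

Real flows plug real connectors and transformations into each of these slots; the shape of the chain stays the same.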
Not all integrations involve every one of these parts. Some flows simply collect data and deliver it straight to the endpoint without "adding value" to the chain. Others perform complex transformations, merging data from different sources.
Examples of connections that can be made in StreamServe: scan a mailbox, scan a file share, poll WebSphere or JMS queues, expose web services and connect to web services, and retrieve/collect data from databases using JDBC.
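To make the "scan a file share" style of input concrete, here is what that kind of polling looks like in plain Python. This is a conceptual sketch of what a directory input connector does, not how StreamServe actually implements it:

```python
import os
import time

def scan_directory(path, handler, interval=5.0, runs=1):
    """Poll a directory and hand each file to `handler` -- a rough
    stand-in for a directory input connector. Files are removed
    after handling so each one is processed exactly once."""
    for _ in range(runs):
        for name in sorted(os.listdir(path)):
            full = os.path.join(path, name)
            if os.path.isfile(full):
                handler(full)     # e.g. kick off the flow for this file
                os.remove(full)   # consume the file
        time.sleep(interval)
```

A real connector adds error handling, locking, and retries on top of this basic loop, but the poll-consume rhythm is the same.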
The data can be in any form, but you have to structure it in some way if you need to examine the content.
An integration task often involves transformation. In StreamServe, the possible transformations are not limited to turning a flat file into a page-formatted PDF. It could just as easily be purchase order files from one ERP system transformed into order input files for another system, or a combination of input files from a manufacturing system and a logistics system sent as a record to the invoicing system.
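The purchase-order case boils down to field mapping between two record layouts. A sketch, with entirely hypothetical field names on both sides (real ERP layouts will differ):

```python
# Hypothetical mapping from one ERP system's purchase order record
# to another system's order-input record. All field names invented.

def po_to_order_input(po: dict) -> dict:
    return {
        "OrderNumber": po["PONumber"],
        "CustomerId": po["Supplier"]["Id"],
        "Lines": [
            {"Item": line["Article"], "Qty": line["Quantity"]}
            for line in po["Lines"]
        ],
    }
```

In StreamServe you would express this mapping with the tool's building blocks rather than hand-written code, which is a large part of why such flows are quick to build.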
Delivery is, of course, as diverse as collection. One special feature of StreamServe is the ability to invoke a range of processes based on the same input data: first you create a document for email, then you store the document in the archive, produce an XML file for a business partner, and finally deliver a PDF document as the response to the web service that sent the initial request.
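That fan-out pattern, one input driving several delivery processes, can be sketched generically (again plain Python, with made-up channel names, not StreamServe process definitions):

```python
# One input document, several delivery processes over the same data.
# Channel names and delivery functions are illustrative.

def fan_out(document, channels):
    """Run every delivery process over the same input document."""
    return {name: deliver(document) for name, deliver in channels.items()}

channels = {
    "email":   lambda doc: f"emailed {doc}",
    "archive": lambda doc: f"archived {doc}",
    "xml":     lambda doc: f"sent {doc} as xml to partner",
}
```

The point is that the input is collected and parsed once, and each process consumes the same parsed data.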
One recurring issue with great integration tools like Microsoft BizTalk Server is their ability to handle large files. StreamServe has always been good at handling large batches of data, and even more so now, with full 64-bit support and the job threading/job scaling functionality introduced in 5.6. I'm really looking forward to trying it on those large files.
From a development point of view, StreamServe is quite a fast tool to work with. Since much of the development is based on predefined building blocks, you will not need to test as thoroughly as you would custom code (such as .NET or Java custom components). Consider, for instance, the scenario where you pick up an XML file and, based on one or two fields in the file, route it to different locations. This task is very easy in StreamServe: you only need to define a couple of fields in the incoming XML to trigger the event, define the fields you select delivery on, and set up a RedirectOUT connector with dynamic delivery. That's it; no custom pipelines or other plumbing. And of course you can use the standard connectors for getting the data (directory, mail in, web service) and delivering it (MQ, sftp, JDBC).
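For comparison, the equivalent logic written as custom code looks like this. The XML structure, field path, and route table are all invented for the example; in StreamServe the same decision is configuration on the event and the RedirectOUT connector rather than code:

```python
import xml.etree.ElementTree as ET

# Hypothetical routing table: field value -> delivery location.
ROUTES = {
    "SE": "/out/sweden",
    "NO": "/out/norway",
}

def route(xml_text, default="/out/other"):
    """Pick a delivery location based on one field in the incoming XML."""
    root = ET.fromstring(xml_text)
    country = root.findtext("Customer/Country")  # assumed element path
    return ROUTES.get(country, default)
```

Even this small amount of code needs unit tests, deployment, and maintenance, which is exactly the overhead the building-block approach avoids.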