Wednesday, December 5, 2012

WPS: Error flows in Mediations WID 7.5

Error flows :

An error flow is a new type of mediation flow that catches all unhandled errors. A mediation flow generated using IBM Integration Designer V7.5 automatically includes a single error flow within each operation, as shown below. Mediation flows that are migrated to use the text-based mediation flow format will have an empty error flow added to the operation.
Error flow

The error flow consists of an Error Input node, the entry point to the flow, an Input Response node for optionally returning a response, and an Input Fault node for each modelled fault, if defined. The flow can contain any combination of mediation primitives and can be used to log errors, handle errors externally through the use of the Service Invoke primitive, or end the flow with an unmodelled fault by using a Fail primitive. The error flow is invoked when an unwired fail terminal is encountered in a request or response flow, including the fail terminal on the Callout Response node. If an unwired fail terminal is encountered in the error flow itself, or if the error flow is not wired, then the flow will fail with an unmodelled fault. After the processing of the error flow is complete, the request or response flow is abandoned and no further processing occurs. A response or fault returned by the error flow replaces any response previously created by other flows.

WPS : Service Invoke mediation primitive enhancements in WID 7.5

Service Invoke mediation primitive: A new mode of operation called Message Enrichment Mode has been added to the Service Invoke mediation primitive. It enables service invoke requests to be constructed from parts of the SMO, and responses to be easily merged back into the SMO being processed in the mediation flow. In this way, the same SMO passes through the Service Invoke primitive and is augmented with information from the response to the service invoke. This removes the need for additional transformation logic before and after the primitive, which previous releases required to achieve this scenario.

For example, imagine a scenario where a back-end service is used to augment a message with a customer's detailed address information. A single Service Invoke primitive can now construct a new request message containing the customer id information and send it to a back-end service. It can then merge the completed address in the response back into the SMO at a specific point, and flow the whole message on for further processing in the flow. In this way, Service Invoke primitives can now be chained together to merge input from several services. Transport headers can optionally be propagated from the SMO into the request for the Service Invoke and can be merged back into the SMO from the response.

When the Service Invoke mediation primitive is added to the canvas, a dialog opens asking you to configure the reference operation used to invoke the back-end service. At this point, you can select the new Message Enrichment Mode. This decision changes the configuration required for the Service Invoke primitive and can be reversed later only by deleting and recreating the primitive on the canvas.

Select Message Enrichment Mode when creating a Service Invoke primitive
Once Message Enrichment Mode is selected and the Service Invoke primitive is created and wired into the flow, new configuration parameters appear in the Properties panel of the primitive. For each input, output, or fault part of the interface of the reference operation used for the Service Invoke, you must select an XPath expression to identify its mapping to the SMO flowing through the primitive:

Defining the input and output arguments
In addition to the parameter mappings, you can choose to propagate headers from the SMO into the request and also to propagate headers from the response back into the SMO.
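To make the role of these XPath expressions concrete, here is a minimal plain-Java sketch using only the JDK's javax.xml.xpath API (not the WPS runtime): an "input" expression selects the value that feeds the back-end request, and an "output" expression identifies where the response is merged back. The SMO structure and element names below are simplified, hypothetical stand-ins, not the real SMO schema.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;

public class SmoEnrichmentSketch {
    public static void main(String[] args) throws Exception {
        // Simplified stand-in for an SMO; the real SMO (context/headers/body) differs.
        String smo = "<smo><body><customer><id>42</id><address/></customer></body></smo>";
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(smo.getBytes(StandardCharsets.UTF_8)));
        XPath xpath = XPathFactory.newInstance().newXPath();

        // "Input" mapping: where the request value for the back-end service comes from.
        Node id = (Node) xpath.evaluate("/smo/body/customer/id", doc, XPathConstants.NODE);

        // "Output" mapping: where the back-end response is merged back into the SMO.
        Node address = (Node) xpath.evaluate("/smo/body/customer/address", doc, XPathConstants.NODE);
        address.setTextContent("1 Main St, Springfield"); // simulated service response

        System.out.println("request id=" + id.getTextContent()
                + ", enriched address=" + address.getTextContent());
    }
}
```

The same document object flows through both expressions, which mirrors how a single SMO passes through the Service Invoke primitive and comes out augmented.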
  
 

Friday, October 19, 2012

WPS : For generating failed events for Mediations

Issue: Failed events are not generated for mediations in the following situation.
Scenario: An interface with one operation; in that flow, another module with a request-response operation is invoked through a Callout node using the asynchronous invocation style. All fail terminals in the request and response flows are left unwired, and a Fail primitive is added to the error flow. Still, no failed events are generated.

Solution: If the mediation is the initiating flow and failed events are needed, use the Service Invoke primitive with the asynchronous invocation style instead of the Callout node. Failed events are then generated for the mediation when a failure occurs at the target process or mediation.
Example: MED1 --> MED2 (exception occurs, for example the target is not available): failed events are generated for MED2.

** If the flow is initiated by a long-running BPEL process (LRP), failed events are generated for that LRP when a failure occurs in a mediation.
Example: LRP --> MED1 --> MED2 (exception occurs, for example the target is not available): failed events are generated for the LRP.


NOTE: Failed events are generated only with the asynchronous invocation pattern.

Friday, September 28, 2012

WPS: Event Handling and its usage in Business Processes

Introduction : Event handlers enable a running business process to react to events that might be triggered by a partner. By definition, events occur independently and asynchronously. There may be zero or multiple events at any time. Event handlers can be associated with either a scope or with the business process (which in turn is also a scope). When a scope starts, all associated event handlers are enabled. The event handlers belonging to a scope are disabled when the scope ends. If the scope ends with a fault, the processing of the event handler is terminated.
There are two types of events:
  • Incoming messages that correspond to a WSDL operation. A status query or a cancellation are common examples of such events. A correlation must be specified for the incoming messages.
  • Alarms that go off after a user-defined period of time, or when a predefined point in time is reached. You can specify alarm events to repeat after a specified period of time.
Events can happen at the following times:

    at any time during the process's lifetime
    any number of times (that is, 0, 1, 2, ... n times)

While a scope is active, the event handlers associated with that scope wait for specified events. If no event occurs while the scope is active, the event handler does nothing and is disabled when the scope completes. This behavior is different from a receive or pick activity: a receive activity must receive the message it is waiting for before processing can continue. Event handlers stop waiting after the associated scope has completed.
With a receive activity, the process must wait for the request before it can continue; with an event handler (on event), there is no need to block waiting for the event.

Tuesday, July 31, 2012

WPS: XSLT Maps in detail in V7.5


Introduction :
An XML map exists to transform a source XML document into a target XML document.
Note : The XML document that is produced needs to be complete in that it contains all the expected data and the document also needs to be valid and match its corresponding schema.
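Since an XML map is realized as an XSL transform (the mediation primitive is even called XSL Transformation), a minimal, self-contained sketch using the JDK's built-in XSLT engine (javax.xml.transform) illustrates the idea. The stylesheet and element names below are hypothetical examples, not output generated by the tooling; the stylesheet "moves" a name field unchanged and "converts" a boolean flag to an int.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XsltMapSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical stylesheet standing in for the XSLT behind an XML map.
        String xslt =
            "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>" +
            "  <xsl:template match='/customer'>" +
            "    <target>" +
            // A Move-style mapping: copy the field unchanged.
            "      <name><xsl:value-of select='name'/></name>" +
            // A Convert-style mapping: boolean 'true'/'false' becomes 1/0.
            "      <activeFlag><xsl:value-of select=\"number(active = 'true')\"/></activeFlag>" +
            "    </target>" +
            "  </xsl:template>" +
            "</xsl:stylesheet>";
        String source = "<customer><name>Ann</name><active>true</active></customer>";

        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xslt)));
        t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(source)), new StreamResult(out));
        System.out.println(out);
    }
}
```

The target document produced is complete and valid against its (implied) schema, which is exactly the requirement the note above describes.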

 
 
Maps are generally created within a Mediation Module project for use within a particular Mediation Flow or within a Module project for use within a particular Business Process Flow. Within a Mediation Flow, map files are created using the XSL Transformation primitive. Within a Business Process Flow, map files are created using the Data Map basic action.

When creating maps within a Mediation Flow, a mapping root is required. The mapping root determines which part of the primitive input message is used as the mapping input and which part of the primitive output message is used as the mapping output. 
In the case of mediation flows, the messages are SMO messages that are broken up as follows:
  • Context
  • Headers
  • Body


Mapping refinements :

                   Once you create an association between the source and target, the association is called a transformation or a mapping. Each mapping can have a single refinement to indicate what type of mapping it is. This section describes the refinements.


1. Move :  Move is the most basic refinement. It takes a simple or complex field on the source side and moves it unchanged from the source to the target.
2. Convert : The Convert refinement is used to do simple conversions between simple data types. An example usage of the Convert refinement is to convert a Boolean value (true or false) to an Int value (1 or 0). Another example usage would be to extract a specific type of value from a string.
3. Assign : The Assign refinement is used when you want to assign a constant value to a target element or attribute. Assign is only available for assigning a value to simple type fields, such as string and int. 

4. Local Map: A Local map is a tool for organizing a mapping file. It allows you to nest mappings for complex types so that the top level mapping does not become cluttered with too much detail. Nothing will move from the source to the target until you go inside the Local map and create mappings using refinements.  Local map is used as a container mapping to localize nested mappings (such as Move), which perform the actual transformations.
A Local map contains a single input field and a single output field. In cases where multiple input fields are required, a Merge mapping replaces a Local mapping, but it behaves similarly.
Once you have created a Local map between a source and target, you can double-click the Local map refinement to navigate into the map. Once inside the Local map you can create the child mappings, and you can navigate back to the parent mapping by using the "Up a level" icon in the top right corner of the mapping area. While inside a Local map, a gray background indicates that you are working within a nested mapping.
A Local map is not reusable. In cases where you are mapping source and target types that you know will be mapped the same way in other maps, consider using a Submap that you can reuse and share among many mapping files.
5. Merge : A Merge refinement is similar to a Local map in the sense that it is a container for nesting other mappings. Unlike a Local map, Merge supports multiple source inputs. This allows you to take data from two different source fields and merge them into a single target field.

6. Sub Maps : A Submap refinement is a mapping between two specific types that is stored in a separate file. A Submap is a root mapping in a regular map file, which you can reference from any other map file making it ideal for reuse. Since Submaps are designed for re-use, we recommend that you store Submaps in libraries where they can be easily shared and reused amongst dependent modules.
 Note: In some cases, you may find that you cannot create a Submap for a desired type because the type is not defined in an XSD file. This can be the case if the type is defined in a WSDL file. The Submap creation wizard will not allow you to create a Submap with a non-XSD-defined type as the input or output. In this case, you can refactor the type out of the WSDL file by doing the following:
  1. In the Business Integration view, locate the desired type in the Data Types category of the module or referenced library project.
  2. Right-click the type and select Refactor > Extract In-lined Business Objects.
After extracting the desired type, you can create a Submap using the extracted type as an input or output. The Submap refinement is not available when working with local elements or anonymous types. In the case of local elements or anonymous types, reusable mappings are not an option at this time.
Tip: In cases where there are many maps and submaps within a module or library, you can use the Data Map Catalog to view a detailed summary of available maps. To view the Data Map Catalog, select a project in the Business Integration view, right-click the project and select Open Data Map Catalog.
7. Built-in Functions : There are a few common built-in functions that you can use within the Mapping Editor, such as Concat, Normalize, and Substring. In addition to these, there are over 60 XPath and EXSLT functions that you can use to transform data.
The following sets of functions are available in an XML map:
Core Transformation
String Functions
Date and Time Functions
Qname Functions
List Functions
Custom Transforms
Node Functions
Diagnostic Functions
Boolean Functions
Math Functions
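As a rough idea of what several of these built-in functions do, the following sketch evaluates the corresponding standard XPath 1.0 functions directly with the JDK's javax.xml.xpath API, outside the Mapping Editor (the expressions need no real input document, so an empty one is used as the evaluation context):

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class XPathFunctionSketch {
    public static void main(String[] args) throws Exception {
        // Empty document as a dummy evaluation context for standalone function calls.
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();
        XPath xpath = XPathFactory.newInstance().newXPath();

        // Counterparts of the Concat, Substring, and Normalize transforms:
        System.out.println(xpath.evaluate("concat('John', ' ', 'Doe')", doc));   // John Doe
        System.out.println(xpath.evaluate("substring('WebSphere', 1, 3)", doc)); // Web (XPath is 1-based)
        System.out.println(xpath.evaluate("normalize-space('  a   b  ')", doc)); // a b
    }
}
```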

WPS : How to increase heap size/memory for Process Server

Increasing the memory allocation resolves the out-of-memory exception. If an out-of-memory error occurs on startup of WebSphere Process Server:
Increase the Java™ Virtual Machine (JVM) parameter MaxPermSize to 512M in the server.xml file of the created profile.

1. Go to <installroot>/profiles/<profileName>/config/cells/<cellname>/nodes/<nodename>/servers/<servername>/server.xml
2. Open the file and search for "genericJvmArguments".
3. Scroll towards the end of the file and find the "genericJvmArguments" attribute within the jvmEntries element.
4. Add -XX:MaxPermSize=512m as the last value of the genericJvmArguments attribute.
Ex:
<jvmEntries xmi:id="..." genericJvmArguments="${IBMSCMX} ${IBMGCPOLICY_GENCON} -XX:MaxPermSize=512m">

Alternatively, if the server can start successfully but then later runs out of memory, the MaxPermSize can be increased using the Administrative Console.

  1. Log in to the Administration Console
  2. Click Servers > Server Types > WebSphere application servers > WebSphere Process Server.
  3. Under Server Infrastructure, click Java and Process Management > Process Definitions > Additional Properties > Java Virtual Machine.
  4. In the Generic JVM arguments field, change the MaxPermSize value to -XX:MaxPermSize=numeric value, where numeric value is a quarter of the value entered for the Maximum Heap Size. For example, if your Maximum Heap Size is 3000 M, set MaxPermSize to 750 M. If your Maximum Heap Size is less than 2048 M, set MaxPermSize to 512 M.
  5. Important: If MaxPermSize does not exist in the Generic JVM arguments field, add it to the field but do not replace existing information in the Generic JVM arguments field with the MaxPermSize information.
  6. Click OK to save your changes.
  7. Click Save to save your changes to the master configuration.
  8. Log out of the Administration Console.
  9. Restart your server.

Tuesday, June 12, 2012

WPS : BO CROSS COPY IMPLEMENTATION ERROR.

Issue: When running the application, it fails at the XSLT mapping stage with an error like "BO CROSS COPY IMPLEMENTATION ERROR".
Solution: The XSLT mapping validates the data, so make sure the data you pass in matches the types the map expects. The error occurs when the data is in a different format than expected.
Example: the map expects a float type but an int type is sent.

WPS : MQ overview

As a WPS developer you need to have basic knowledge of MQ, so this post helps you a little with an MQ overview.


IBM WebSphere MQ allows different applications to communicate asynchronously through queues across different operating systems, different processors, and different application systems.
WebSphere MQ includes the Message Queue Interface (MQI), a common low-level application program interface (API). Applications use MQI to read and write messages to the queues.
A queue manager is a system program that provides queuing services, and owns and manages the set of resources that are used by WebSphere MQ. These resources include queues, channels, process definitions, and so on.
A queue is a data structure used to store messages. There are several types of queue objects available in WebSphere MQ:
·         Local queue object – identifies a local queue belonging to the queue manager to which the application is connected. All queues are local queues in that each queue belongs to a queue manager, and for that queue manager, the queue is a local queue.
·         Remote queue object – identifies a queue belonging to another queue manager that is a different queue manager from the one to which the application is connected. This queue must be defined as a local queue to the queue manager to which the remote queue object belongs.
·         Alias queue object – is not a queue, but an object pointer to a local or remote queue.
·         Model queue object – defines a set of queue attributes that is used as a template to create a dynamic queue.
Messages can be put to all types of queue objects, but they can be read only from local queue objects.
In addition to the queue object types that are available in WebSphere MQ, there are some other concepts about queues as well:
·         Remote queue definitions – are definitions for queues that are owned by another queue manager, and not queues themselves.
Remote queue definitions enable an application to put a message to a remote queue without having to specify the name of the remote queue or the remote queue manager, or the name of the transmission queue.
·         Predefined queues – are created by an administrator using the appropriate MQ Series commands (MQSC) or WebSphere MQ programmable command format (PCF) commands. Predefined queues are permanent, existing independently of the applications that use them, and persisting through WebSphere MQ restarts.
·         Dynamic queues – are created when an application issues an MQOPEN request specifying the name of a model queue. The queue created is based on a template queue definition, which is called a model queue. The attributes of dynamic queues are inherited from the model queue from which they are created.
·         Cluster queue objects – are hosted by a cluster queue manager and are made available to other queue managers in the cluster.
A channel is a logical communication link between a WebSphere MQ client and a WebSphere MQ server, or between two WebSphere MQ servers. There are two categories of channel in WebSphere MQ:
·         Message channels – are one-way links that connect two queue managers via message channel agents.
·         MQI channels – connect a WebSphere MQ client to a queue manager on a server machine, and are established when you issue an MQCONN or MQCONNX call. An MQI channel is a two-way link used to transfer only MQI calls and responses.
There are two channel types for MQI channel definitions:
o        Client-connection channel – connects to the WebSphere MQ client.
o        Server-connection channel – connects to the server running the queue manager, which communicates with the WebSphere MQ application that is running in a WebSphere MQ client environment.
The MQ channel supports the industry-standard Secure Sockets Layer (SSL) protocol. See your WebSphere MQ documentation from IBM for information on whether SSL is available on your platform in version 5.3 or 6.0 of MQ.
A process definition defines a process that executes when incoming messages cause a trigger event.
A WebSphere MQ message consists of two parts:
·         Message header – message control information that contains a fixed-sized portion and a variable-sized portion.
·         Message body – application data that contains any type of data (text or binary).
When you use rfhCommand to publish a publication, if the message payload returned by msgrecv is set to:
o        MQRHRF – the RF header is included in the message body.
o        MQRHRH – the RF header is not included.
You can obtain the name-value pairs in the RF header by querying @@msgproperties.
If the message body contains characters, code-set conversions are available either through MQ native services, or through user exit handlers. The format of the message body is defined by a field in the message header. MQ does not enumerate all possible message body formats, although some formats are provided in samples. Applications can enter any name of the format. For instance, “MQSTR” contains string data and “MQRHRF” contains topics for MQ pub/sub.
WebSphere MQ message types include:
·         Datagram – no reply is expected.
·         Request – a reply is expected.
·         Reply – reply to a request message.
·         Report – contains status information from the queue manager or another application.
When messages are sent, various message header properties can be set, such as expiration, persistence, priority, correlation ID, and reply queue.
Message grouping enables you to organize a group of messages into a logically named group. Within a group, each logical message can further be divided into segments. A group is identified by a name, each logical message within a group is identified by a sequence number (starting with 1), and each segment of a logical message is identified by the offset of the message data with respect to the logical message. Segmented messages are not supported by MQ pub/sub, and an attempt to send a segmented message results in an error.
In a queue, messages appear in the physical order in which they were sent to the queue. This means that messages of different groups may be interspersed, and, within a group, the sequence numbers of the messages may be out of order (the latter can occur if two applications are sending messages with the same group ID and partitioned sequence numbers).
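The reordering a receiver must do to read such a queue in logical order can be sketched in plain Java. This is not the WebSphere MQ API; the group IDs, sequence numbers, and payloads below are made up to show two interspersed groups being reassembled.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class MessageGroupSketch {
    // Minimal stand-in for a grouped message: group name, sequence number, data.
    record Msg(String groupId, int seq, String payload) {}

    public static void main(String[] args) {
        // Physical (send) order on the queue: groups interspersed,
        // and sequence numbers within a group out of order.
        List<Msg> physical = List.of(
                new Msg("A", 2, "a2"), new Msg("B", 1, "b1"),
                new Msg("A", 1, "a1"), new Msg("B", 2, "b2"));

        // Logical order: bucket messages by group, then sort each group
        // by its sequence number (which starts at 1 within a group).
        Map<String, List<Msg>> groups = new TreeMap<>();
        for (Msg m : physical) {
            groups.computeIfAbsent(m.groupId(), k -> new ArrayList<>()).add(m);
        }
        groups.values().forEach(g -> g.sort(Comparator.comparingInt(Msg::seq)));

        groups.forEach((id, g) -> {
            StringBuilder line = new StringBuilder(id + ":");
            g.forEach(m -> line.append(' ').append(m.payload()));
            System.out.println(line);
        });
    }
}
```

In WebSphere MQ itself, asking for messages in logical order delegates this reassembly to the queue manager rather than the application.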
When messages are received, the read mode can be either:
·         Destructive – message is removed, or
·         Nondestructive – the message is retained. This is known as “browsing,” and allows applications to peruse one or more messages before deciding to remove a particular message from the queue.
Receivers can select particular messages by specifying message header properties such as correlation ID or message ID.
When messages are read—as either destructive or nondestructive—the order in which they are returned can be physical or logical. The order is defined by the queue definition. The queue can be defined as being in priority order or first-in, first-out order.