Thursday, 8 October 2015

Design Principles: Fan-In vs Fan-Out

From:
http://it.toolbox.com/blogs/enterprise-solutions/design-principles-fanin-vs-fanout-16088

The fan-out of a module is the number of its immediately subordinate modules. As a rule of thumb, the optimum fan-out is seven, plus or minus two. This rule of thumb is based on the psychological study by George Miller, in which he determined that the human mind has difficulty dealing with more than seven things at once.

The fan-in of a module is the number of its immediately superordinate (i.e., parent or boss) modules. The designer should strive for high fan-in at the lower levels of the hierarchy. This simply means that there are normally common low-level functions that should be identified and made into common modules to reduce redundant code and increase maintainability. High fan-in can also increase portability if, for example, all I/O handling is done in common modules.

Object-Oriented Considerations

In object-oriented systems, fan-in and fan-out relate to interactions between objects.  In object-oriented design, high fan-in generally contributes to a better design of the overall system.  High fan-in shows that an object is being used extensively by other objects, and is indicative of re-use.

High fan-out in object-oriented design is indicated when an object must deal directly with a large number of other objects. This is indicative of a high degree of class interdependency. In general, the higher the fan-out of an object, the poorer the overall system design.

Strengths of Fan-In

High fan-in reduces redundancy in coding.  It also makes maintenance easier.  Modules developed for fan-in must have good cohesion, preferably functional. Each interface to a fan-in module must have the same number and types of parameters.

Designing Modules That Consider Fan-In/Fan-Out

The designer should strive for a software structure with moderate fan-out in the upper levels of the hierarchy and high fan-in in the lower levels of the hierarchy. Some examples of common modules that result in high fan-in are I/O modules, edit modules, and modules simulating a high-level command (such as calculating the number of days between two dates).
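As an illustrative sketch (the class and method names are made up), a small, functionally cohesive module like the date calculation below naturally attracts high fan-in, because many higher-level modules can call it instead of duplicating the logic:

import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

// A common low-level module with a single, well-defined function:
// many modules call it, so it has high fan-in
public final class DateUtils {

    private DateUtils() {
    }

    // Returns the number of whole days between two dates
    public static long daysBetween(LocalDate start, LocalDate end) {
        return ChronoUnit.DAYS.between(start, end);
    }
}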

Use factoring to solve the problem of excessive fan-out.  Create an intermediate module to factor out modules with strong cohesion and loose coupling. 

In the example, fan-out is reduced by creating a module X to reduce the number of modules invoked directly by Z.
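A minimal sketch of the same idea in code (the module names Z and X follow the example; the leaf operations are made up): instead of Z invoking every low-level module directly, the cohesive input-handling calls are factored into an intermediate module X, which lowers Z's fan-out:

// Before factoring, Z would call readRecord, validateRecord and
// normalizeRecord directly, along with its other subordinate modules.
// After factoring, Z delegates that cohesive group of calls to X.
public class Z {
    private final X x = new X();

    public void process() {
        x.handleInput();        // one call replaces three direct calls
        // ... Z's remaining direct calls go here
    }
}

class X {
    void handleInput() {
        readRecord();
        validateRecord();
        normalizeRecord();
    }

    void readRecord() { /* ... */ }
    void validateRecord() { /* ... */ }
    void normalizeRecord() { /* ... */ }
}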

JMS Pending Messages - What Harm Do They Do?

Generally it is not recommended to store a large number of pending messages in the EMS server, for the following reasons:

1.     Pending messages in an EMS queue consume memory and/or disk space.
-       While the number of pending messages remains reasonable, they are held in memory and EMS still treats them as ready for quick consumption by the client application.
-       When the pending messages for a queue exceed a certain size, they are swapped out to the EMS data store file, which is normally located on shared disk / SAN storage.
So careful planning of the EMS server RAM and data store size is always needed when designing the solution.

2.     Pending messages swapped out to the data store may impact EMS performance.
When an EMS queue holds a huge volume of pending messages, those messages are swapped out to the EMS data store file, which is normally located on shared disk / SAN storage.
In that case, if a queue consumer wants to consume messages from the queue (e.g. consumers recover after x hours), the EMS server needs to swap the messages back in, which may severely impact EMS server performance. Longer queue message recovery is also expected in this situation.

3.     If the number of pending messages becomes huge, it can degrade the performance of the entire EMS server; this overall impact should be kept in mind.

4.     Slower disaster recovery for persistent messages.
Persistent messages sent to a queue are always written to disk. So the more persistent messages pending on the queue, the longer it takes EMS to recover them in a disaster recovery (DR) situation, especially when EMS must still receive high-throughput / large messages at the same time.


A large number of pending messages can be caused by many factors, such as slow or absent consumers, a busy server, or network issues. In addition, queue properties such as maxbytes and maxmsgs can help limit the growth of queue depth.
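For example, a queue's depth can be capped in queues.conf (the queue name and limits below are illustrative, not from a real configuration):

orders.inbound.q secure,maxbytes=512MB,maxmsgs=100000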

TIBCO EMS Message Delivery

Persistent Messages Sent to Queues

Persistent messages sent to a queue are always written to disk. Should the server fail before sending persistent messages to consumers, the server can be restarted and the persistent messages will be sent to the consumers when they reconnect to the server.
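As a minimal sketch of a persistent producer (the server URL, credentials, and queue name are illustrative assumptions; the EMS client jars from <TIBCO_EMS_HOME>/clients/java must be on the classpath):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import com.tibco.tibjms.TibjmsConnectionFactory;

public class PersistentQueueSender {
    public static void main(String[] args) throws Exception {
        // Illustrative connection details
        ConnectionFactory factory = new TibjmsConnectionFactory("tcp://localhost:7222");
        Connection connection = factory.createConnection("bw_user", "password");
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("sample.persistent.q");

        MessageProducer producer = session.createProducer(queue);
        // PERSISTENT delivery: the message is written to the server's store,
        // so it survives a server restart (the disk write is asynchronous by default)
        producer.setDeliveryMode(DeliveryMode.PERSISTENT);

        TextMessage message = session.createTextMessage("order-12345");
        producer.send(message);

        connection.close();
    }
}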



Persistent Messages Sent to Topics
Persistent messages published to a topic are written to disk ONLY IF that topic has at least one durable subscriber or one subscriber with a fault-tolerant connection to the EMS server.



Non-durable subscribers that re-connect after a server failure are considered newly created subscribers and are not entitled to receive any messages created prior to the time they are created.




When using file storage, persistent messages received by the EMS server are by default written asynchronously to disk.
When a producer sends a persistent message, the server does not wait for the write-to-disk operation to complete before returning control to the producer.
This means that the producer has no way to detect a failure to persist the message and take corrective action if the server fails before completing the write-to-disk operation.



What do you do if you want to SYNCHRONOUSLY write to disk?

You can set the mode parameter to sync for a given file storage in the stores.conf file to specify that persistent messages for the topic or queue be synchronously written to disk.
When mode = sync, the persistent producer remains blocked until the server has completed the write-to-disk operation.
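As an illustrative sketch (the store name, file name, and queue name are assumptions), a synchronous file-based store can be defined in stores.conf and then referenced from queues.conf:

[sync-store]
  type=file
  file=sync-store.db
  mode=sync

my.critical.q store=sync-store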


Monday, 9 February 2015

BW SSL Security

BW SSL Security Related Topic
========================================================================

- TIBCO ActiveMatrix BusinessWorks can use Secure Sockets Layer (SSL)
to provide secure communication. The successor to SSL is Transport Layer Security (TLS), but in this document the term SSL is used synonymously with TLS.

- Secure Sockets Layer (SSL)
is a protocol that uses public and private keys to secure communication between parties.

- When an SSL connection is requested,
the initiator (or client) and responder (or server) perform a handshake where digital identities or certificates are exchanged to ensure that both parties are who each party expects.

- SSL can also be used to specify an encryption algorithm for the data that is exchanged between the parties.
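Outside of BusinessWorks, the same handshake and cipher negotiation can be observed in plain Java (a minimal sketch; the host, port, and truststore path are assumptions):

import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class SslHandshakeSketch {
    public static void main(String[] args) throws Exception {
        // Trusted certificates used to verify the server's identity
        System.setProperty("javax.net.ssl.trustStore", "/opt/certs/truststore.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "changeit");

        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket("example.com", 443)) {
            // The handshake exchanges and verifies certificates and
            // negotiates the cipher suite used to encrypt the data
            socket.startHandshake();
            System.out.println("Cipher suite: " + socket.getSession().getCipherSuite());
        }
    }
}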
========================================================================

- Introduction of TIBCO BW Security
TIBCO ActiveMatrix BusinessWorks can act as an initiator or a responder in an SSL connection. Several types of connections can optionally use SSL, such as,
FTP Connection
HTTP Connection
JMS Connection
Rendezvous Transport
In addition, the following activities can also specify SSL connections
ActiveEnterprise Adapter activities using JMS or RV transports
Send HTTP Request
SOAP Request Reply

- TIBCO ActiveMatrix BusinessWorks uses digital certificates
to validate the identity of parties in an SSL connection. TIBCO ActiveMatrix BusinessWorks requires that both initiators (clients) and responders (servers) must present certificates during SSL handshake. Typically, only the server is required to present its certificate to the client for verification, but TIBCO ActiveMatrix BusinessWorks enforces a bi-lateral model where both client and server must present certificates.
=====================================================================




Different ways in TIBCO for identifying the identity in SSL:

1> TIBCO ActiveMatrix BusinessWorks uses the Identity resource 
to configure the identity of activities that act as initiators (clients) or responders (servers) in an SSL connection. The Identity resource stores the certificate of the activity (initiator or responder) and the location of the folder in the project that contains the trusted certificates of other parties that can participate in an SSL connection.

2> Identity resources contain information that is used to authorize a connection.
The responder (or server) in an SSL connection request must have an identity, and the initiator (client) must also have an identity. The Identity resource can be used to store one of the following types of identities:

3> Username/Password. 
Used to store a username and password. Useful when basic client authentication is needed. Typically not used within TIBCO ActiveMatrix BusinessWorks.

4> Certificate/Private Key.
Used when the certificate and the private key are stored in two separate files. Typically certificates are stored in Privacy-enhanced Electronic Mail (PEM) format. The URL for the certificate and key must be provided, as well as the password for the key. This identity can be used when acting as an initiator or responder in an SSL connection.

5> Identity File.
Used when the certificate includes the key information in the certificate file. The URL and file type of the certificate must be provided, as well as the password for the key. The certificate can be in one of the following formats:
Entrust.
JCEKS. Java Cryptography Extension KeyStore file format.
JKS. Java KeyStore file format.
PEM. Privacy-enhanced Electronic Mail file format.
PKCS12. Public Key Cryptography Standard (12) file format.

6> Trusted certificates
are typically issued by a trusted third party, such as a certificate authority. There are several commercial certificate authorities, such as Entrust or VeriSign.
========================================================================

- Both clients and servers can also store a list of trusted certificates.
When a connection is requested, each party presents their certificate and that certificate is checked against the list of trusted certificates. If the certificate is not found, the connection is refused. Checking trusted certificates allows clients to ensure that they are connecting to the correct server. For servers, trusted certificates are used to ensure only the authorized clients can connect to the server.

- Checking a certificate involves
checking the certificate of the party that signed the certificate.
There can be a hierarchy of intermediate certificates, also known as a certificate chain, that must be checked up to the root certificate to ensure that a certificate is authentic.
TIBCO ActiveMatrix BusinessWorks requires that all intermediate certificates are stored in the trusted certificate location so that certificates can be properly verified.
========================================================================

- Adding Certificates to a project. To add a certificate in PEM format to a project
1. Select a folder into which the certificate will be imported
2. From the menu bar, choose Tools > Trusted Certificates > Import into PEM Format.
3. Provide the certificate URL when prompted.
Certificates in PKCS7 and PEM formats can be imported (these formats do not store keys); a new copy of the certificate is created when the import is done. If the certificate to be imported is already in PEM format, a new copy is created as is. Certificates from storage formats that require a password (PKCS12 and KeyStore) cannot be imported.
========================================================================

- Storing trusted certificates outside the project. (Avoids re-create EAR file and re-deploy)
1. Create a folder in the file system where the trusted certificates will be stored. Copy this folder to each machine where the process engine will be deployed, or the location can be a shared network area accessible by all process engines.

2. In the TIBCO ActiveMatrix BusinessWorks, create a global variable named BW_GLOBAL_TRUSTED_CA_STORE

3. Set BW_GLOBAL_TRUSTED_CA_STORE to the location of the trusted certificates folder on the file system. The location can either be the same for all deployed engines, or the global variable value can be changed when the project is deployed to the location on the machine where the trusted certificates are placed. The value for BW_GLOBAL_TRUSTED_CA_STORE must be a file URL (e.g. file:///c:/tibco/certs).

4. Specify a value in the Trusted Certificates field in the SSL Configuration dialog. When the project runs, the value of BW_GLOBAL_TRUSTED_CA_STORE overrides the location specified in the Trusted Certificates field.
========================================================================

- For connections that can use SSL,
there is a checkbox on the configuration that, when checked, enables the Configure SSL button. Clicking this button brings up an SSL configuration dialog with options specific to the type of activity or connection being configured. Potential configuration fields:
FTP Connection. Used to specify the FTP server. TIBCO BW acts as an initiator.

-Trusted Certificates folder.
Folder in the project containing one or more trusted certificates. This folder is checked when an FTP activity connects to ensure that the responder’s certificate is from a trusted authority. This prevents connections to rogue servers.

- Identity.
Identity resource that contains the client digital certificate and a private key. Optional.

- Verify Host Name.
Specifies that the host name of the FTP server is checked against the host name listed in the server’s digital certificate. If they do not match, the connection is refused. If a hostname equivalent is specified in the Host field but it does not match the certificate’s host name, the connection is refused.

- Strong Cipher Suites Only.
When checked, this field specifies that the minimum strength of the cipher suites used can be set with the bw.plugin.security.strongcipher.minstrength custom engine property. The default value of the property disables cipher suites with an effective key length below 128 bits. When this field is unchecked, only cipher suites with an effective key length of up to 128 bits can be used.
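For example (the value below is an assumption, not a recommended setting), the custom engine property can be added to the engine's .tra file to require stronger suites:

bw.plugin.security.strongcipher.minstrength=256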
========================================================================

HTTP Connection
- Requires Client Authentication.
Checking this field requires initiators to present their digital certificate before connecting to the HTTP server. When this field is checked, the Trusted Certificates folder becomes enabled so that a location containing the list of trusted certificates can be specified.

- Identity.
Identity resource that contains the client digital certificate and a private key. Optional.

- Verify Host Name.
Specifies that the host name of the HTTP server is checked against the host name listed in the server’s digital certificate. If they do not match, the connection is refused. If a hostname equivalent is specified in the Host field but it does not match the certificate’s host name, the connection is refused.

- Strong Cipher Suites Only.
When checked, this field specifies that the minimum strength of the cipher suites used can be set with the bw.plugin.security.strongcipher.minstrength custom engine property. The default value of the property disables cipher suites with an effective key length below 128 bits. When this field is unchecked, only cipher suites with an effective key length of up to 128 bits can be used.
========================================================================



DB - SQL Server: Options for outputting the query result set to a file (e.g. CSV)

Option 1: 
In Microsoft SQL Server Management Studio (SSMS)

In SSMS, "Query" menu item... "Results to"... "Results to File"
or
Shortcut = Ctrl+Shift+F

Then type the SQL query in the query window and execute it; a window will pop up asking for the output file.


Option 2:
Use the SQLCMD command in a Windows command prompt:

- SQL Server authentication:
SQLCMD -U Tib_Dev -P Tib_Dev -S MASSQL02\Inst2 -Q "USE Tib_DEV_CLE select top 10 eventid, transactionid, eventdatetime, eventtype, hostname, projectname from dbo.event" -s "," -o "C:\MyData1.csv"


-Windows Authentication:
SQLCMD -S STETIBSQL01 -E -Q "USE Tib_PROD_REF select consignmentid from Consignmentv2ErrorReplay" -s "," -o "C:\MyData.csv"






Sunday, 1 February 2015

TIBCO - How To - Configure SoapUI for TIBCO JMS





How to: Configure SoapUI for TIBCO JMS
– Version 1.00

DISTRIBUTION DATE: 2 February 2015  STATUS: Draft

By: Warren Chen


Introduction

This document provides a guide for configuring SoapUI for TIBCO EMS.


There are some details that you need to know about:

a.    The EMS Server connection url:
e.g. For the server ULTTIB03 (IP:10.20.10.71)
It is tcp://10.20.10.71:7222
b.    The EMS factory details:
Run the show factory command after you connect to the EMS server using the admin login:
c.    EMS Authentication details (User/Password)

Ensure that you have the sample queue set up already in EMS Server

e.g. st.shippingOrderService.v1.Inquiry.q.v1
and that the queue details appear in the following files:
d.    acl.conf:
QUEUE=st.shippingOrderService.v1.Inquiry.q.v1 USER=bw_user PERM=receive,send
e.    queue.conf
st.shippingOrderService.v1.Inquiry.q.v1 secure,maxbytes=10GB,store=$sys.failsafe
f.     user.conf
bw_user:$2$mK5LYAty$x5nM+Dx/VBd0svqbdJkjsOzq:"StarGate user."
admin:$2$IObMucWx$iyojS0rlJdWYAxNaA7VuBv/d:"Administrator"
[Note: bw_user relates to acl.conf for queue accessibility, while admin is the admin user for the EMS server]



Instructions to set up the configuration in SoapUI

(examples use SoapUI 4.5):
Open SoapUI, then start HermesJMS:
Once HermesJMS starts up, create a new session called “TIBCO.JNDI”
After the new session “TIBCO.JNDI” is created, you then need to add the jar files and create a SoapUI plugin provider.
In this case we will create a provider called “TIBCO.JNDI”.
In the “Providers” tab of the session configuration window, right click and create new group:
Type in the name “TIBCO.JNDI”, then click OK.
Add external TIBCO EMS Client Jar files into the newly created “TIBCO.JNDI”:
Browse to <TIBCO_EMS_HOME>/clients/java/ then multi-select all the jar files.
Then click on Open to finish this step.
Once finished, you should be able to see “TIBCO.JNDI” in the “Connection Factory/Loader:” drop-down in the “Sessions” tab of the session configuration window.

Go to the session tab in the configuration window.
In the Plug In section, select “TIBCO EMS” as the plugin, then configure the connection details, username and password for the EMS server:
In the Connection Factory section, select “TIBCO.JNDI” as the loader, then select “com.tibco.tibjms.TibjmsQueueConnectionFactory” as the Class, then finish the configuration as follows:
After that, leave the rest of the sections at their defaults, then click OK:
Check the EMS connection
Right click on the newly created “TIBCO.JNDI” session and discover the Queues/Topics to check connectivity with the EMS server that you just configured:
If successful, you should see a list of destinations being discovered:
Queues:                                                    Topics:
 
Double click on the queue that you will work on and see if you can view the queue details. If you cannot, it means you didn’t configure the “Connection Factory” section correctly in the “Sessions” tab of the Session configuration window:
e.g. The queue details:

e.g. If you cannot see the queue:
Also check the queue property in HermesJMS:
Load the JMS web service WSDL into SOAPUI then start configuring the endpoint.
For the project you are configuring, select the port type then create the endpoint:
[Note: the Receive/Subscribe destination will need to be changed later; otherwise it won’t work, as the value is not correct.]
Open up the request of one Soap-Action under the port that you are configuring:
A few things that you need to do here:
1>   Configure the “JMSReplyTo:”
Configure it as [QueueName].reply
e.g. In this case: st.shippingOrderService.v1.Inquiry.q.v1.reply
[Note: After this configuration, you should see st.shippingOrderService.v1.Inquiry.q.v1.reply show up next time in the “Add JMS endpoint” configuration window/”Receive/Subscribe destination” drop down]
2>   Tick “Add SoapAction as property” field:
3>   After configuration, it should look like this:
Configure JMS property tab:
Add “SOAPAction” property and the value should be /[SOAPAction]
[Note: The soap action value should be the same as in WSDL]
e.g.:
Start testing the soap request/response action
Check the endpoint is now changed to:
Jms://[SessionName]::[request destination queue]::[reply destination]
[Note: the reply destination should be the same as “JMSReplyTo” field in the JMS Headers Tab]
During testing, you may receive the SOAP response as unformatted XML:
this is due to an incompatibility in the SOAP Envelope namespace.
Change the name space from:
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
To:
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://www.w3.org/2003/05/soap-envelope">
Then you can reformat the xml and get:
Beautiful!!

TIBCO - How To - TIBCO JDBC Query Result in Subsets



TIBCO JDBC Query Result in Subsets
– Version 1.00

DISTRIBUTION DATE: 2 February 2015  STATUS: Draft

By: Warren Chen
 

Introduction

When using JDBC Query in TIBCO processes, there can be cases where you get a large number of records as the query result but you don’t want to process and use all the records at once. Rather, you want the JDBC Query results to be processed as subsets.
In this step-by-step JDBC tutorial, I will explain how you can process JDBC Query results in subsets in a loop in TIBCO processes.

Example Scenario

For this tutorial, we take a scenario where we want to query a database table which has a large number of records, and we want to send the query results in subsets of 2 to a JMS queue using the JMS Queue Sender activity.
The table that we want to query has a total of 10 Records as shown below:


Now let’s proceed step by step with the tutorial to query and process the records from above table in subsets.

Step 1: Create JDBC Connection and Configure JDBC Query Activity

For this tutorial, I am using Oracle Database. Add JDBC Connection resource in your project from JDBC Palette and configure it based on your own database information. Configuration in my case is as shown below:


Now create a process and add JDBC Query activity in the process. In the configuration tab, select the connection and also specify the query to be executed as shown in screenshot below:

 
Now go to Advanced Tab and check the option for Process in Subsets. This checkbox enables you to process query results in subsets.
 

Now if you go to the Input tab, you will observe that a new field, subsetSize, has been added. Specify a value for it. I have specified 2, as I want to process query results in subsets of 2.
 

Now, in order to send the query results to a JMS queue, I have created an XML Schema. This schema will be used to map the JDBC Query results to the body of the JMS Queue Sender. The schema is shown in the screenshot below:
 

Now we need to add JMS Queue Sender activity in the process. For this we first need to have JMS Connection available. I have configured JMS Connection as shown below:
 

Add JMS Queue Sender and configure it by specifying JMS Connection, Queue name and other fields as shown in the diagram below:
 

After selecting the XML Schema in the Input Editor, go to the Input tab and map the query results to the body of the JMS Queue Sender as you can see below:
 

Now select the JDBC Query and JMS Queue Sender activities and enclose them in a group by choosing the Create Group option from the toolbar.
In the Group Action, choose Repeat-Until-True and use the lastSubset variable value in the Condition field. lastSubset will be true when the last subset of the query result is being processed in the loop.
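For instance, assuming the JDBC Query activity in the group is named "JDBC Query" (the exact XPath below is an assumption and depends on your activity name and output schema), the group's Condition field can look like:

$JDBC-Query/lastSubset = "true"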
 

This completes our process design and configuration of all activities.
Now let’s proceed towards testing step to see how JDBC Query with Subsets feature works.




Step 2: Test JDBC Subset Query Process

Load the JDBC subset query process in the Designer tester and run it. As you can see below, it has run successfully:

 
You can go to EMS Server and verify number of Pending Messages in the queue:
 

Please note that the number of messages is 6 instead of 5, as the loop ran 6 times: the lastSubset condition became true only on the 6th iteration.
That’s it for today. Contact me for any further help or guidance needed.