Thursday, 8 October 2015

Design Principles: Fan-In vs Fan-Out

From:
http://it.toolbox.com/blogs/enterprise-solutions/design-principles-fanin-vs-fanout-16088

The fan-out of a module is the number of its immediately subordinate modules.  As a rule of thumb, the optimum fan-out is seven, plus or minus two.  This rule of thumb is based on George Miller's psychological research, which found that the human mind has difficulty dealing with more than about seven things at once.

The fan-in of a module is the number of its immediately superordinate (i.e., parent or boss) modules.  The designer should strive for high fan-in at the lower levels of the hierarchy.  This simply means that normally there are common low-level functions that exist that should be identified and made into common modules to reduce redundant code and increase maintainability.  High fan-in can also increase portability if, for example, all I/O handling is done in common modules.

Object-Oriented Considerations

In object-oriented systems, fan-in and fan-out relate to interactions between objects.  In object-oriented design, high fan-in generally contributes to a better design of the overall system.  High fan-in shows that an object is being used extensively by other objects, and is indicative of re-use.

High fan-out in object-oriented design is indicated when an object must deal directly with a large number of other objects.  This is indicative of a high degree of class interdependency.  In general, the higher the fan-out of an object, the poorer is the overall system design.
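Both metrics fall out of a simple dependency map. A minimal Python sketch (the module names and the call map are invented for illustration):

```python
# Illustrative dependency map: each module -> the modules it invokes directly.
calls = {
    "Main":     ["Orders", "Billing"],
    "Orders":   ["Log", "DateUtil"],
    "Billing":  ["Log", "DateUtil"],
    "Log":      [],
    "DateUtil": [],
}

def fan_out(module):
    # Number of immediately subordinate modules.
    return len(calls[module])

def fan_in(module):
    # Number of modules that invoke this one directly.
    return sum(module in callees for callees in calls.values())

print(fan_out("Main"))     # 2
print(fan_in("DateUtil"))  # 2 -- a shared low-level module: high fan-in
```

Here DateUtil has high fan-in because both Orders and Billing reuse it directly, which is exactly the kind of low-level re-use recommended above.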

Strengths of Fan-In

High fan-in reduces redundancy in coding.  It also makes maintenance easier.  Modules developed for fan-in must have good cohesion, preferably functional. Each interface to a fan-in module must have the same number and types of parameters.

Designing Modules That Consider Fan-In/Fan-Out

The designer should strive for software structure with moderate fan-out in the upper levels of the hierarchy and high fan-in in the lower levels of the hierarchy. Some examples of common modules which result in high fan-in are: I/O modules, edit modules, modules simulating a high level command (such as calculating the number of days between two dates).

Use factoring to solve the problem of excessive fan-out.  Create an intermediate module to factor out modules with strong cohesion and loose coupling. 

[Figure: structure chart showing an intermediate module X inserted between module Z and its subordinate modules]
In the example, fan-out is reduced by creating a module X to reduce the number of modules invoked directly by Z.
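The same factoring can be sketched in Python (all function names are invented; x stands in for the intermediate module):

```python
# Before factoring: z invokes five subordinates directly (fan-out = 5).
def a(): return "a"
def b(): return "b"
def c(): return "c"
def d(): return "d"
def e(): return "e"

def z_before():
    return [a(), b(), c(), d(), e()]

# After factoring: the cohesive, loosely coupled group c/d/e moves behind
# an intermediate module x, so z's direct fan-out drops to 3.
def x():
    return [c(), d(), e()]

def z_after():
    return [a(), b(), *x()]

assert z_before() == z_after()  # behavior unchanged; only fan-out shrinks
```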

JMS Pending Messages - What Harm Do They Do?

Generally it is not recommended to store a large number of pending messages in the EMS server, for the following reasons:

1.     Pending messages in an EMS queue consume memory and/or disk space.
-       While the number of pending messages remains reasonable, they are held in memory and EMS treats them as ready for quick consumption by the client application.
-       When the pending messages for a queue exceed a certain size, they are swapped out to the EMS data store file, normally located on shared disk / SAN storage.
So careful planning of the EMS server's RAM and data store size is always needed when designing the solution.

2.     Pending messages swapped out to the data store can degrade EMS performance.
When an EMS queue holds a huge volume of pending messages, those messages are swapped out to the EMS data store file, normally located on shared disk / SAN storage.
If a queue consumer then wants to consume messages from the queue (e.g. consumers recover after x hours), the EMS server needs to swap the messages back in, which may severely impact EMS server performance. Longer queue message recovery times are also expected in this situation.

3.     If the number of pending messages becomes huge, it can degrade the performance of the entire EMS server, which should be kept in mind as an overall concern.

4.     Disaster recovery is slower for persistent messages.
Persistent messages sent to a queue are always written to disk. So the more persistent messages pending on the queue, the longer EMS takes to recover them in a DR situation, especially when EMS must still receive high-throughput / large messages at the same time.
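The RAM and data-store sizing mentioned in point 1 can be roughed out with simple arithmetic. A Python back-of-the-envelope sketch — every number here is an invented assumption, not a figure from this article:

```python
avg_msg_size = 10 * 1024   # assume a 10 KiB average message
pending_msgs = 500_000     # assume a worst-case backlog of 500k messages
overhead = 1.2             # assume ~20% per-message server overhead

bytes_needed = pending_msgs * avg_msg_size * overhead
print(f"~{bytes_needed / 2**30:.1f} GiB")  # ~5.7 GiB
```

If that figure exceeds the RAM you can give the EMS server, the excess ends up swapped into the data store file, so the same estimate also bounds the disk you should budget.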


A large number of pending messages can be caused by many factors, such as slow consumers or no consumers, a busy server, or network issues. In addition, queue properties such as maxBytes and maxMsgs can help limit the growth of queue depth.
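For instance, such limits are configured per queue in queues.conf. The queue name and values below are invented for illustration; check the property syntax against the EMS documentation for your version:

```
# queues.conf (illustrative entry)
orders.queue  maxmsgs=100000,maxbytes=512MB,overflowPolicy=rejectIncoming
```

With an overflow policy set, producers that would push the queue past these limits are handled explicitly (e.g. rejected) instead of the backlog silently growing.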

TIBCO EMS Message Delivery

Persistent Messages Sent to Queues

Persistent messages sent to a queue are always written to disk. Should the server fail before sending persistent messages to consumers, the server can be restarted and the persistent messages will be sent to the consumers when they reconnect to the server.



Persistent Messages Sent to Topics
Persistent messages published to a topic are written to disk ONLY IF that topic has at least one durable subscriber or one subscriber with a fault-tolerant connection to the EMS server.



Non-durable subscribers that re-connect after a server failure are considered newly created subscribers and are not entitled to receive any messages created prior to the time they were created.




When using file storage, persistent messages received by the EMS server are by default written asynchronously to disk.
When a producer sends a persistent message, the server does not wait for the write-to-disk operation to complete before returning control to the producer.
This means that the producer has no way of detecting a failure to persist the message and taking corrective action if the server fails before completing the write-to-disk operation.



What do you do if you want to SYNCHRONOUSLY write to disk?

You can set the mode parameter to sync for a given file storage in the stores.conf file to specify that persistent messages for the topic or queue be synchronously written to disk.
When mode = sync, the persistent producer remains blocked until the server has completed the write-to-disk operation.
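A hedged sketch of such a store definition — the store name and file name are invented; consult the stores.conf reference for your EMS version:

```
# stores.conf (illustrative entry)
[sync-store]
    type=file
    file=sync-store.db
    mode=sync
```

A queue or topic is then pointed at this store (for example via a store property on the destination), so its persistent messages are written to disk synchronously.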