Wednesday, June 3, 2015

Follow me on BusHorn.com

You can now find all my article contributions on http://bushorn.com/author/a_bouchama/

BusHorn was started with the idea of creating a knowledge-sharing platform for Open Source Middleware users and developers. Open Source Middleware/SOA products have gained significant momentum over the past 5 years.

Tuesday, January 7, 2014

Flow Activity Monitoring: Logging, Analyzing and Monitoring Data from our Camel Integration Flows using ElasticSearch, Logstash and Kibana.

Many times we receive requests like: Where's my message? Are you sure that my result has been routed to the right destination? How many orders have been received from our partner?

All monitoring solutions give only part of the answer. We have experimented with wireTap in our Camel routes, but then we have to design a module that consumes these messages and stores them in a database.

The challenge for us was: how can we design our flows without thinking about monitoring? How can we audit integration flows already in production without any impact?

I think the solution is: Flow Activity Monitoring.

1. Each Camel flow deployed in JBoss Fuse is audited by Fuse BAI [REF-1]. You have to develop a BAI backend flow that consumes from the endpoint vm:audit and logs the content of each event using Camel MDC logging (see the sketch after this list).
2. You have to set up a custom logging appender in JBoss Fuse in order to create a specific log file for each deployed CamelContext:

e.g.:
# Appender App
log4j.appender.app=org.apache.log4j.sift.MDCSiftingAppender
log4j.appender.app.key=camel.contextId
log4j.appender.app.default=unknown
log4j.appender.app.appender=org.apache.log4j.RollingFileAppender
log4j.appender.app.appender.layout=org.apache.log4j.PatternLayout
log4j.appender.app.appender.layout.ConversionPattern=%d{ISO8601} | %-5.5p | %X{camel.contextId} | %X{camel.routeId} | %X{camel.exchangeId} | %m%n
log4j.appender.app.appender.file=${karaf.home}/log/apps/mediation-$\\{camel.contextId\\}.log
log4j.appender.app.appender.append=true
log4j.appender.app.appender.maxFileSize=1MB
log4j.appender.app.appender.maxBackupIndex=10

3. Logstash [REF-2] listens to all logs in the directory ${karaf.home}/log/apps/, parses and filters each message into header fields, property fields, body, breadcrumb, etc., then sends the output to ElasticSearch [REF-3].
4. Kibana provides a nice graphical representation ;)
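For step 1, here is a minimal sketch of what such a BAI backend route could look like in the Camel Java DSL (the route id and logger name are illustrative, and MDC logging has to be enabled on the CamelContext so that camel.contextId, camel.routeId and camel.exchangeId are populated for the appender configured above):

import org.apache.camel.LoggingLevel;
import org.apache.camel.builder.RouteBuilder;

public class AuditBackendRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Fuse BAI publishes audit events to vm:audit; we simply log them,
        // and the MDC sifting appender splits the output per CamelContext.
        from("vm:audit")
            .routeId("bai-audit-backend")
            .log(LoggingLevel.INFO, "audit", "${body}");
    }
}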

Here's a diagram of the solution:



Kibana dashboard:



Conclusion:
- Your mission-critical projects need management and monitoring. Today, this is possible using only open source products: Logstash, Elasticsearch and Kibana.

- The upcoming JBoss Fuse 6.1 release will be able to send all message information to Elasticsearch automatically by enabling insight-camel, though the downside is that this assumes Fabric and 6.1.

Best regards,
Abdellatif BOUCHAMA (@a_bouchama)



References
[REF-1] - https://github.com/jboss-fuse/fuse/tree/master/bai
[REF-2] - http://logstash.net/
[REF-3] - http://www.elasticsearch.org/

Monday, June 24, 2013

What's new in JMS 2.0?

The first stable version of JMS, version 1.1, came out in 2002. More than a decade later, the specification is widely used and today there are multiple open-source and commercial implementations.

Yet in those eleven years, many vendors have developed new capabilities beyond those provided by JMS.

JMS 2.0 was finally released on 21 May 2013. Let's look at the new features:

- DeliveryDelay

A new DeliveryDelay setting enables a producer to specify a time interval that must elapse before a message is delivered to a consumer.

This can be useful for delayed processing, for example at the end of the day.
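As a minimal sketch with the new simplified API (the connectionFactory and queue objects are assumed to already exist):

import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;

public class DelayedSender {
    public void sendDelayed(ConnectionFactory connectionFactory, Queue queue) {
        try (JMSContext context = connectionFactory.createContext()) {
            context.createProducer()
                   .setDeliveryDelay(60000)   // do not deliver before 60 seconds have elapsed
                   .send(queue, "order to be processed later");
        }
    }
}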

- Sending a message asynchronously

In JMS 1.1, the client calls the send() method and the API only returns control once the message has been sent successfully. With version 2.0, it is possible to call this method asynchronously and get control back immediately; the API then invokes a callback (a CompletionListener) to indicate whether the message was sent.

The decision to use this feature should be made on a case by case basis. Does my sending service need to be as scalable as possible? Am I able to replay a flow if an error is detected asynchronously?
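A minimal sketch of an asynchronous send with a CompletionListener (again, connectionFactory and queue are assumed to exist):

import javax.jms.CompletionListener;
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Message;
import javax.jms.Queue;

public class AsyncSender {
    public void sendAsync(ConnectionFactory connectionFactory, Queue queue) {
        try (JMSContext context = connectionFactory.createContext()) {
            context.createProducer()
                   .setAsync(new CompletionListener() {
                       @Override
                       public void onCompletion(Message message) {
                           // the provider confirms the message has been sent
                       }
                       @Override
                       public void onException(Message message, Exception exception) {
                           // the send failed; decide here whether to replay it
                       }
                   })
                   .send(queue, "order sent asynchronously");
        }
    }
}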

- JMSXDeliveryCount becomes mandatory

The JMSXDeliveryCount property, which indicates how many times a message has been delivered, was optional in version 1.1. It becomes mandatory in version 2.0.

This break in backward compatibility only affects JMS providers; clients themselves are not affected by this change.
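A minimal sketch of reading the delivery count on a received message (the consumer is assumed to exist):

import javax.jms.JMSConsumer;
import javax.jms.Message;

public class RedeliveryAwareReceiver {
    public void process(JMSConsumer consumer) throws Exception {
        Message message = consumer.receive();
        int deliveries = message.getIntProperty("JMSXDeliveryCount"); // 1 on the first delivery
        if (deliveries > 1) {
            // the message is being redelivered; for example, divert it after too many attempts
        }
    }
}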

- Hierarchy of topics

A very interesting feature is the possibility to organize topics into a hierarchy.

Suppose we have the following four topics:

CLIENTS.FRANCE.ACHAT
CLIENTS.FRANCE.VENTE
CLIENTS.USA.ACHAT
CLIENTS.USA.VENTE

It will be possible for JMS clients to subscribe using wildcard patterns.

If you want to subscribe to all topics relating to French customers:

CLIENTS.FRANCE.*

Or all purchases:

CLIENTS.*.ACHAT

Using this feature well requires good governance of topic naming and of the types of data conveyed.
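A sketch of what such a subscription could look like with the JMS 2.0 API (the exact wildcard syntax depends on the provider; connectionFactory is assumed to exist):

import javax.jms.ConnectionFactory;
import javax.jms.JMSConsumer;
import javax.jms.JMSContext;
import javax.jms.Topic;

public class FrenchClientsSubscriber {
    public void listen(ConnectionFactory connectionFactory) {
        try (JMSContext context = connectionFactory.createContext()) {
            // subscribe to every topic under CLIENTS.FRANCE using a wildcard
            Topic topic = context.createTopic("CLIENTS.FRANCE.*");
            JMSConsumer consumer = context.createConsumer(topic);
            String body = consumer.receiveBody(String.class); // blocks until a message arrives
            System.out.println(body);
        }
    }
}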

- Multiple consumers on the same durable subscription

Currently, a JMS queue can have multiple consumers; the JMS provider uses a round-robin algorithm to load-balance between them.

But for JMS topics, it is currently possible to have only one consumer per durable subscription.
With version 2.0 it becomes possible to connect multiple clients to the same durable subscription. The round-robin mechanism also applies to topics, allowing better scalability.
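A minimal sketch: two client instances running this code share the same durable subscription and the provider load-balances messages between them (the subscription name is illustrative; connectionFactory is assumed to exist):

import javax.jms.ConnectionFactory;
import javax.jms.JMSConsumer;
import javax.jms.JMSContext;
import javax.jms.Topic;

public class SharedDurableClient {
    public void listen(ConnectionFactory connectionFactory) {
        try (JMSContext context = connectionFactory.createContext()) {
            Topic topic = context.createTopic("CLIENTS.FRANCE.VENTE");
            // every client using the name "orders-subscription" shares the same durable subscription
            JMSConsumer consumer = context.createSharedDurableConsumer(topic, "orders-subscription");
            String body = consumer.receiveBody(String.class);
            System.out.println(body);
        }
    }
}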

- Batch mode

Although already implemented by some providers, such as the webMethods JMS Broker, this feature allows messages to be received in batch mode, i.e. a JMS client receives several messages at once.

In 1.1, we had the following method:
void onMessage(Message message);

In 2.0 we have:
void onMessages(Message[] messages);

Conclusion

JMS 2.0 brings some very useful new features. The API is greatly simplified and it also allows the use of CDI for dependency injection.
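As a sketch of how the simplified API and CDI could fit together (the JNDI names are illustrative):

import javax.annotation.Resource;
import javax.inject.Inject;
import javax.jms.JMSConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;

public class OrderService {

    @Inject
    @JMSConnectionFactory("jms/ConnectionFactory") // illustrative JNDI name
    private JMSContext context;

    @Resource(lookup = "jms/orders")               // illustrative JNDI name
    private Queue orders;

    public void placeOrder(String order) {
        context.createProducer().send(orders, order);
    }
}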

Monday, June 17, 2013

TOGAF 9 Certified


It’s official now - I passed the exam and am allowed to carry the TOGAF 9 Certified logo!

It was quite some work to prepare for the exam and took much longer than expected, but I learned a lot about Enterprise Architecture and about TOGAF as a framework and method for establishing an EA capability and actually doing architecture work.

Now I feel much more comfortable discussing Enterprise Architecture and how to implement SOA with TOGAF 9.1.

I gotta look for some new goal, though…

Monday, November 12, 2012

How to use PerfHarness for Camel routes & ActiveMQ


Use Performance Harness for JMS

§   This is the same tool that the WebSphere Message Broker and WebSphere MQ teams use when measuring the products.
§   Available to download on Alphaworks: http://www.alphaworks.ibm.com/tech/perfharness
§   Supports testing JMS, MQ, HTTP, SOAP
§   The tool provides:
Throttled operation (fixed rate or number of messages)
Multiple destinations
Live performance reporting
JNDI
Non JNDI for IBM JMS Providers

Running PerfHarness

§   Requires Java 5 minimum
§   Set min and max heap size for the tool: -ms256M -mx256M
§   You have to put all the PerfHarness jars in %ActiveMQ_HOME%\examples\perfharness

Example Output:

Sender1: START
rateR=3137.33,threads=1
rateR=4488.75,threads=1
rateR=4621.33,threads=1
rateR=4680.78,threads=1
rateR=4683.67,threads=1
rateR=4693.88,threads=1
rateR=4683.56,threads=1
Sender1: STOP
totalIterations=128133,avgDuration=26.53,maxrateR=4683.56
ControlThread1: STOP

The first case: find the max rate for sending messages to a named queue and receiving messages from a named queue.

Sender.bat:

The module to measure, “-tc jms.r11.Sender”, sends messages to a named queue destination, “-d dynamicQueues/testqueue”.
The messages sent by this batch file “Sender.bat” are persistent (“-pp”) and transacted (“-tx”).

java -cp "..\..\activemq-all-5.3.0-fuse-01-00.jar;./perfharness.jar" JMSPerfHarness -pc JNDI -ii org.apache.activemq.jndi.ActiveMQInitialContextFactory -iu tcp://localhost:61616?jms.useAsyncSend=true -cf ConnectionFactory -d dynamicQueues/testqueue -tc jms.r11.Sender -nt 1 -us system -pw manager -tx true -pp true

Receiver.bat:

The module to measure, “-tc jms.r11.Receiver”, receives messages from a named queue destination, “-d dynamicQueues/testqueue”.
The messages received by this batch file “Receiver.bat” are persistent (“-pp”) and transacted (“-tx”).

java -cp "..\..\activemq-all-5.3.0-fuse-01-00.jar;./perfharness.jar" JMSPerfHarness -pc JNDI -ii org.apache.activemq.jndi.ActiveMQInitialContextFactory -iu tcp://localhost:61616 -cf ConnectionFactory -d dynamicQueues/testqueue -tc jms.r11.Receiver -nt 1 -us system -pw manager -tx true -pp true

The second case: find the max rate of my Camel route.

Consider the following Camel route: the input is the queue “orders” and the output is the queue “orderstatus”.
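For reference, a minimal sketch of such a route in the Camel Java DSL (the processing step is a placeholder; in the real route the reply sent to “orderstatus” must keep the request's CorrelationId so the Requestor can match it):

import org.apache.camel.builder.RouteBuilder;

public class OrdersRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("activemq:queue:orders")
            .log("processing order ${header.JMSCorrelationID}") // placeholder for the real processing
            .to("activemq:queue:orderstatus");
    }
}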



We can find the max rate of this Camel route by sending a message to the queue “orders” and then waiting for a reply on the output queue “orderstatus” with a matching CorrelationId.
The messages sent by this batch file “Req_Reply.bat” are persistent (“-pp”), transacted (“-tx”) and carry JMS headers (“-pf”) and a body (“-mf”).
The tool will run for 120 seconds (“-rl”) and print stats every 5 seconds (“-ss”).


Req_Reply.bat

set BROKER_URL=tcp://localhost:61616?jms.useAsyncSend=true
set USER=system
set PASSWORD=manager
set QUEUE_OUT=dynamicQueues/orders
set QUEUE_IN=dynamicQueues/orderstatus
REM -nt Number of producer threads
set NT=1
java -cp "..\..\activemq-all-5.3.0-fuse-01-00.jar;./perfharness.jar" JMSPerfHarness -pc JNDI -ii org.apache.activemq.jndi.ActiveMQInitialContextFactory -iu %BROKER_URL% -cf ConnectionFactory -tc jms.r11.Requestor -iq %QUEUE_OUT% -oq %QUEUE_IN% -to 30000 -pf C:\Dev\tools\bench\InputMessages\JMSProperties.txt -mf C:\Dev\tools\bench\InputMessages\messageBody.xml -nt %NT% -ss 5 -rl 120 -pp true -tx true -us %USER% -pw %PASSWORD%


Enjoy !!

Tuesday, January 31, 2012

Developing Audit Logging for Fuse ESB (ServiceMix) & Karaf

For many secured environments there's a requirement to log every user management action.

The idea is to have an audit-logging module that gives the production team a trace of all administrative tasks performed in Fuse ESB (ServiceMix) or Karaf over the following channels: SSH, WebConsole and JMX.

The trace should contain information such as the logged-in user, the command performed, the channel used, the date, etc.

To run the service you need to download the Event Admin service jar and add it to the system folder under the appropriate path, e.g. <Fuse-ESB-install>/system/org/apache/felix/org.apache.felix.eventadmin/1.2.8-fuse-00-43/org.apache.felix.eventadmin-1.2.8-fuse-00-43.jar

Then add the following to the etc/startup.properties file to auto-start the EventAdmin service (which will generate the events):

org/apache/felix/org.apache.felix.eventadmin/1.2.8-fuse-00-43/org.apache.felix.eventadmin-1.2.8-fuse-00-43.jar=9

The service contains a class, LoggingEventListener, that implements EventHandler and simply logs the events it receives:

@Override
public void handleEvent(Event event) {
    // Format the event topic and all its properties into a single log entry
    StringBuffer buffer = new StringBuffer();
    buffer.append(String.format("Event [%n"));
    buffer.append(String.format("Topic: %s%n", event.getTopic()));
    for (String name : event.getPropertyNames()) {
        buffer.append(String.format("%n%s = %s", name, event.getProperty(name)));
    }
    buffer.append("]");
    LOGGER.info(buffer.toString());
}

These events are filtered according to the service properties set in blueprint.xml:

<bean id="handler" class="com.abouchama.LoggingEventListener" />

<service ref="handler" interface="org.osgi.service.event.EventHandler">
    <service-properties>
        <entry key="event.topics" value="org/apache/*"/>
    </service-properties>
</service>

You can build & install the bundle from GitHub and deploy it. Once that's done you should see output like the following example:
Example of log:

17:22:34,119 | INFO | Thread-11 | LoggingEventListener | com.abouchama.LoggingEventListener 25 | Event [
Topic: org/apache/felix/service/command/EXECUTING
 
command = osgi:list
event.topics = org/apache/felix/service/command/EXECUTING
event.subject = Subject:
                    Principal: UserPrincipal[karaf]
]

In this example you can see a sample entry for an action taken via SSH. Log entries contain information like:
                         - Username : karaf
                         - Command performed : osgi:list
                         - Event : org/apache/felix/service/command/EXECUTING

Enjoy :)

Friday, January 20, 2012

Continuous Integration/Delivery featuring Maven, Nexus and Sonar


Continuous Integration:

In an enterprise project, it is important to continually check for non-regression of the product being built. Like unit tests, acceptance tests are part of the test harness to implement on a project.
Below are the usual set of tasks.
  • Build
  • Unit Test
  • Run Code Quality Checks
  • Deploy
  • Run Acceptance Test
In my current project, we have chosen the following tools for our Continuous Integration strategy:
  • Maven to build and unit test
  • Sonar to perform code quality checks
  • Nexus as Maven repository
  • Shell Scripting to deploy
In today's post, we will go over how to use Maven and Nexus to build and publish binaries.

Maven to Build and Unit Test:

Building Java projects with Maven is really easy. You just need to have the maven-compiler-plugin in your pom.xml. Java sources will be compiled without any extra work if you follow the standard Maven guidelines for your project folder structure.
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>2.3.2</version>
</plugin>
Unit testing is possible with the maven-surefire-plugin, which can run tests with testing frameworks like JUnit, TestNG, etc.
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <forkMode>pertest</forkMode>
        <childDelegation>false</childDelegation>
        <useFile>true</useFile>
        <failIfNoTests>false</failIfNoTests>
        <includes>
            <include>**/*Test.java</include>
        </includes>
    </configuration>
</plugin>

Maven to Publish artifacts to Nexus:

All Maven projects have artifacts that are generated by the build. An artifact can be a jar, war, zip, ear or pom file. All these artifacts need to be stored in a repository for versioning purposes.
Your project's pom.xml declares Nexus as the Maven repository in the distributionManagement section. Make sure your Maven settings file has the authentication details needed to publish to the Nexus repository. The Maven deploy goal then needs to be executed to deploy to the Nexus repository.
pom.xml:
<project>
...
    <distributionManagement>
        <repository>
            <id>releases</id>
            <uniqueVersion>false</uniqueVersion>
            <name>Company Releases</name>
            <url>http://localhost:8081/nexus/content/repositories/releases</url>
        </repository>
        <snapshotRepository>
            <id>snapshots</id>
            <uniqueVersion>false</uniqueVersion>
            <name>Company Snapshots</name>
            <url>http://localhost:8081/nexus/content/repositories/snapshots</url>
        </snapshotRepository>
    </distributionManagement>
...
</project>
settings.xml:
<settings>
...
    <servers>
        <server>
            <id>snapshots</id>
            <username>deploy</username>
            <password>deploypwd</password>
        </server>
        <server>
            <id>releases</id>
            <username>deploy</username>
            <password>deploypwd</password>
        </server>
    </servers>
...
</settings>

NB: to prepare your release and create a new tag in the source code repository, run: mvn release:prepare -Dresume=false (or alternatively mvn release:clean release:prepare). To deploy the new tag to the staging repository, run: mvn release:perform

Continuous delivery:
I invite you to read this interesting book: http://continuousdelivery.com/2010/02/continuous-delivery/

Enjoy !! :-)