All posts by tarimanga

Quickstart a git Linux box for repository sharing

I needed to quickly set up a Linux box for some git-over-ssh repository sharing. What follows are my DevOps notes on this task and setup.


If not already installed, install openssh-server and git. On RHEL/CentOS/Fedora:

sudo yum install openssh-server
sudo yum install git

On Debian/Ubuntu:

sudo apt-get install openssh-server
sudo apt-get install git

Add a ‘git’ user. On Debian/Ubuntu:

sudo adduser --disabled-password git

On RHEL/CentOS/Fedora, useradd has no --disabled-password option, so create the user and lock the password instead:

sudo useradd git
sudo passwd -l git

Configuration of /etc/ssh/sshd_config

As I preferred to keep the option PasswordAuthentication yes as-is in the sshd configuration, I proceeded to add the following snippet at the end of the file:

Match User git
PasswordAuthentication no

Also check the setting of the AuthorizedKeysFile option; it comes in handy in the next section.

Configuration of Authorized Keys

Some systems have the ‘AuthorizedKeysFile‘ option configured in /etc/ssh/sshd_config as ‘AuthorizedKeysFile %h/.ssh/authorized_keys‘; in that case, proceed as follows:

sudo su - git
mkdir .ssh
chmod 700 .ssh
touch .ssh/authorized_keys
chmod 600 .ssh/authorized_keys

Then add to .ssh/authorized_keys one line per authorized client, containing its SSH public key, in the example form of:

ssh-rsa AAA<...>AbcdE== client1
ssh-rsa AAA<...>fgHiL== client2

This setup is fine for most systems.

However, some other systems have the ‘AuthorizedKeysFile‘ option configured in /etc/ssh/sshd_config as ‘AuthorizedKeysFile /etc/ssh/keys/‘ or similar; in that case add under ‘/etc/ssh/keys/‘ (following the previous example) one line per authorized client, containing its SSH public key, in the example form of:

ssh-rsa AAA<...>AbcdE== client1
ssh-rsa AAA<...>fgHiL== client2

Ensure bash is used

Check that the ‘git‘ entry in /etc/passwd uses bash as its shell, in the example form of:

git:x:1001:1001::/home/git:/bin/bash

Setup bare git project repository

Connect via ssh using the git account (or alternatively still via SSH but using another account, followed by ‘sudo su - git‘), then run the following commands:

cd ~
mkdir <git-project-name>.git
cd ~/<git-project-name>.git
git --bare init

Setup push with Eclipse

Add a remote, for example from the project’s “Repository view”, “Remotes” node, “Create remote…”, and configure the push URI:



For the refs mapping I prefer to push all the branches, given it’s a private remote server:

From    refs/heads/*    To  refs/heads/*



It’s very convenient to set up a Linux box for git repository sharing via ssh.

My first Java 8 Lambda applied while using Drools

In one of the projects I’m working on, a Java EE web application using Drools, data is sharded (split) across many knowledge sessions, each with its own knowledge base definitions – for different business reasons not described in this post. From an end-user perspective it is sometimes required to perform the “union” of the results coming from a given Drools query, which may be defined in several, but not necessarily all, of the knowledge sessions.

Technically the query is identified by a specific name, and I need to “pull” the query results from each of the knowledge sessions which actually have such a query defined, finally merging all results into a single response.

In order to determine whether a given knowledge session actually contains the query or not, I came up with:

boolean containsQuery = kieSession.getKieBase().getKiePackages().stream()
		.anyMatch(p -> p.getQueries().stream()
			.anyMatch(q -> q.getName().equals( queryName )));
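Given that check, the “union” across sessions can then be sketched with filter + flatMap. The following is a runnable sketch where plain Maps and String lists stand in for the knowledge sessions and query results (the Drools types aren’t needed to show the shape):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class UnionOfQueryResults {
    public static void main(String[] args) {
        // each "session" is sketched as a map from query name to its results
        List<Map<String, List<String>>> sessions = Arrays.asList(
                Map.of("q", Arrays.asList("r1", "r2")),
                Map.of("other", Arrays.asList("x")),
                Map.of("q", Arrays.asList("r3")));

        String queryName = "q";

        // keep only the sessions that define the query, then merge all their results
        List<String> merged = sessions.stream()
                .filter(s -> s.containsKey(queryName))
                .flatMap(s -> s.get(queryName).stream())
                .collect(Collectors.toList());

        System.out.println(merged); // [r1, r2, r3]
    }
}
```

In the real application the filter step is the containsQuery check above, and the flatMap step runs the named query against each matching session.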

Why I like it

Before Java 8 I had to use external iteration, and this was a bit tedious, especially for optimization purposes: explicitly iterating through packages and query names, I had to manually break out of the iterations once the query was actually found.
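For comparison, the external-iteration version looks roughly like the following runnable sketch, where plain collections stand in for the Drools KiePackage and Query types; note the labeled break needed to exit both loops once the query is found:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;

public class ContainsQueryLoop {

    // External-iteration version: nested loops with a manual early exit,
    // roughly what the pre-Java 8 code had to do over packages and queries.
    static boolean containsQuery(List<Map.Entry<String, List<String>>> packages, String queryName) {
        boolean found = false;
        outer:
        for (Map.Entry<String, List<String>> pkg : packages) {   // stands in for KiePackage
            for (String query : pkg.getValue()) {                // stands in for Query.getName()
                if (query.equals(queryName)) {
                    found = true;
                    break outer; // manual short-circuit once the query is found
                }
            }
        }
        return found;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, List<String>>> pkgs = Arrays.asList(
                Map.entry("pkg.a", Arrays.asList("queryX")),
                Map.entry("pkg.b", Arrays.asList("queryY", "queryZ")));
        System.out.println(containsQuery(pkgs, "queryZ")); // true
        System.out.println(containsQuery(pkgs, "missing")); // false
    }
}
```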

Now that Java 8 is here with lambdas and Streams, and now that I can use it on this codebase too, writing code for this kind of operation is much simpler, and in my opinion the result also reads more “fluently”.

Why I like Java 8 Lambdas, and Streams

Because I like Functional Programming concepts. As above, I can switch from external iteration to internal iteration: not only do I no longer have to manually manage iterators, but I can also expect optimizations to pop up “automagically” (e.g.: the code above should terminate early, returning true as soon as any of the lambdas returns true), and I can finally pass a function (lambda) instead of declaring anonymous classes every time!
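The early termination is easy to verify: in this sketch a counter in peek() shows that anyMatch stops pulling elements from the stream as soon as the predicate matches:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class AnyMatchShortCircuit {
    public static void main(String[] args) {
        List<String> queries = Arrays.asList("a", "b", "target", "c", "d");
        AtomicInteger inspected = new AtomicInteger();

        boolean found = queries.stream()
                .peek(q -> inspected.incrementAndGet()) // count how many elements are pulled
                .anyMatch(q -> q.equals("target"));

        // anyMatch is a short-circuiting terminal operation:
        // it stops consuming the stream once the predicate returns true
        System.out.println(found);           // true
        System.out.println(inspected.get()); // 3, not 5
    }
}
```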

Running the Rule Engine EXercise over a month

My reex2014 – a Rule Engine EXercise for 2014 has been running live for over a month now; time to draw some functional and technical feedback.

Real case scenario #1

A functional example based on a real case scenario starts with:

Which raises the relevant Alerts:


Some time later, this happens:

Which again raises the related Alerts:


Later, when traffic resumes:


Real case scenario #2

Another functional example based on a real case scenario: traffic momentarily suspended due to a person in the tunnel, followed by resumption of traffic operations.

The reex2014 web application offers a representation of the relevant Alerts:



Which are also available on the Android widgets accordingly:

Technical notes

This is the first time I use the OpenShift Online PaaS to host an application which has to run over some time, rather than a quick one-shot/disposable experiment. I’m very pleased with what you can do, especially with the public, free offering!
The only couple of pain-points I had to deal with are:

  • Tedious work-arounds are needed for Maven dependencies which are not publicly available. Yes, custom Maven dependencies can be installed via ssh once the gear is already created, but I couldn’t find a reasonable way to create a gear starting from an existing GitHub repo, if the latter contains non-public Maven dependencies.
  • Despite the application being constantly used, it sometimes restarted anyway. This happened twice. The latter time was due, I suppose, to “security updates” related to the infamous Shellshock/bashbug apocalypse, but the first time, frankly, I couldn’t trace back the reason.

In any case, not bad at all, considering it is a free public offering!

The Camel framework proved once again really, REALLY great for providing ESB logic and effective integrations! In this case I use it as a micro-ESB to interconnect the JavaEE application with Twitter, Rome RSS readers, etc. In other contexts, I use the Camel framework for ETL, ESB and integration needs, and I’m super pleased by how effective it is.


The exercise has already proved its benefit and I’m using it daily for my commuting needs!

Evolutions could target general improvements to the rules, to avoid duplication of Alerts when the PA is made in multiple languages, and to cover more RSS-related cases. On the JavaEE side, the next step is definitely support for PUSH notifications, which would need to be enabled on the Android projects as well.

reex2014 – a Rule Engine EXercise for 2014

I’m a Computer Science Engineer and I do believe there is a whole new range of unexplored applications for Expert Systems (AI) in Big Data scenarios, also within the Corporate business. You can read more about me on my LinkedIn profile.

I’ve been interested in Rule Engine applications since studying at University, and since then this has also grown into a kind of NERD interest on a personal level.

This is the reason why every year I attempt a hobby project where I can find an interesting application.

The theme I’ve chosen for this year, 2014, is to solve a very practical problem: monitor data sources and social media for potential issues on the public transport I use for commuting. From a technological perspective, I also wanted to seize the chance to experiment with integrating several technologies.

In summary


  • Monitor data sources and social media for potential public transport issues
  • Use Expert Systems (AI) – Rule Engine (Drools)
  • Experiment for integration of other technologies with Java EE: like PaaS (OpenShift), Camel, Android


  • This is not an exercise of Sentiment analysis nor of Natural language processing
  • This is not an exercise to imitate other more complex systems for public transport information communications

Analyze RSS feed

Detect strike warning alerts, and others, from the RSS stream.


Analyze Twitter feed

Detect alerts for:

  • Several tweets concerning a specific one of the different metro lines
  • Metro delays
  • Service interruptions

and others.


Android widgets

Distinct widgets for:

  • A list displaying all Alerts
  • A summary display of the inferred knowledge

with a Settings page.


Source Code

The source code of this project is on GitHub.


This reex2014 project is NOT affiliated with, endorsed, or sponsored by any of the sources of information it connects to. This work has, instead, been created as a demonstration of technological integration.

All trademarks are the property of their respective owners.

Expert Systems and JavaEE on ARM: a simple benchmark

This post reports my findings while experimenting with – and running a simple, overall – benchmark of a JavaEE use case on ARM platforms. I currently have on my desk a Raspberry Pi (model B) and an Odroid-U2: given my interest in Expert Systems, I thought this could be a great way to test them out!

Premise: I’m not a guru of Expert Systems; in fact I consider myself just a happy power user, so it is not my intention to delve into the debate on how an Expert System should be benchmarked – that is not in the scope of this post. Likewise, it is not in the scope of this post to report a fully comprehensive benchmark comparison of running Java/JavaEE on these platforms.

In fact, much simpler:

GOAL: Given the use case of a JavaEE application which provides a reasoning service, benchmark the overall performance on the different platforms.

The Use Case

For the reasoning service, I use my all-time favorite, JBoss Drools. On their GitHub repository, they provide several examples and benchmarks, based on published papers related to the Rete algorithm. Again, while I’m aware of the big discussion about whether these benchmarks are still relevant nowadays, given the progress of Expert System algorithms, that debate does not impact this use case, because here the benchmark is used only for a relative comparison.

I have a very simple webservice:

@WebService
public class WaltzWs {

	@EJB
	WaltzKb waltzKb;

	@WebMethod
	public String waltz(@WebParam(name="WaltzDTO") WaltzDTO dto) {

		StatefulKnowledgeSession session = waltzKb.getKbase().newStatefulKnowledgeSession();

		long start = System.currentTimeMillis();
		session.setGlobal( "time", start );

		// insert the content of the SOAP message into the Working memory
		for (Line l : dto.getLine()) {
			session.insert( l );
		}

		// evaluate all the rules
		session.fireAllRules();
		session.dispose();

		long time = System.currentTimeMillis() - start;
		System.err.println( time );

		return "time: "+time;
	}
}
which exposes the reasoning functionality via a webservice call. When the webservice is consumed, a new Knowledge session is created, the content of the SOAP message is inserted into the Working memory, and then all the rules are evaluated. This webservice corresponds to the second half of the Waltz benchmark from the Drools GitHub repo linked above.

For the actual Knowledge base, this is created by a Singleton EJB:

@Startup
@Singleton
public class WaltzKb {
	private static final transient Logger logger = LoggerFactory.getLogger(WaltzKb.class);
	private KnowledgeBase kbase;

	@PostConstruct
	public void init() {
		KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
		kbuilder.add( ResourceFactory.newClassPathResource("waltz.drl", WaltzKb.class), ResourceType.DRL );
		if (kbuilder.hasErrors()) {
			for (KnowledgeBuilderError error : kbuilder.getErrors()) {
				logger.error("DRL Error "+error);
			}
		}
		Collection<KnowledgePackage> pkgs = kbuilder.getKnowledgePackages();
		kbase = KnowledgeBaseFactory.newKnowledgeBase();
		kbase.addKnowledgePackages( pkgs );
	}

	public KnowledgeBase getKbase() {
		return kbase;
	}
}
Taking the Webservice + Singleton EJB approach, I can have several webservice calls happening at the same time, each with its own Knowledge session, while actually the Knowledge base is efficiently shared among them.
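The pattern can be sketched without any Drools or EJB dependency: one shared, read-only structure built once (the singleton’s knowledge base), and per-call private state derived from it (the session). The names below are illustrative:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SharedBaseDemo {
    // Analogue of the Singleton-EJB pattern: one shared, read-only "knowledge base",
    // while each webservice call gets its own short-lived "session" built from it.
    static final List<String> SHARED_BASE = List.of("rule1", "rule2", "rule3"); // built once

    static String handleCall(String payload) {
        // per-call session: private mutable state derived from the shared base
        StringBuilder session = new StringBuilder();
        SHARED_BASE.forEach(r -> session.append(r).append(':').append(payload).append(' '));
        return session.toString().trim();
    }

    public static void main(String[] args) throws Exception {
        // several "webservice calls" happening at the same time
        ExecutorService pool = Executors.newFixedThreadPool(4);
        Future<String> a = pool.submit(() -> handleCall("A"));
        Future<String> b = pool.submit(() -> handleCall("B"));
        System.out.println(a.get()); // rule1:A rule2:A rule3:A
        System.out.println(b.get()); // rule1:B rule2:B rule3:B
        pool.shutdown();
    }
}
```

This is safe because the shared base is never mutated after initialization; all mutation happens in the per-call session.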

All the code, and the benchmark project file with results, are available on GitHub.

The Benchmark

In order to load test this JavaEE application, i.e.: the webservice, I use SoapUI:


I created two webservice request templates, reflecting the “12” and “50” data files of the original JBoss Drools “waltz” benchmark. Then, before actually running the load test, I consumed the webservice a couple of times, just to “warm up” the JavaEE container – in this case, JBoss AS.

I performed a load test session of 60s with 1 thread first – i.e.: all webservice calls are sequential, each awaiting the previous call to return before starting a new one. This was followed by other load test sessions of 60s, this time with 2, 3 and 4 threads – i.e.: concurrent webservice calls, similar to a stress test of the system being used by multiple “users”.

There are some limitations applying here; that’s why I stated all the premises above, to warn that this cannot be considered a comprehensive benchmark – it is more of a simple one to get the overall figures:

  • The Raspberry Pi is single core, so this platform is at a disadvantage when the load test session is performed with 2+ threads.
  • for the performance baseline I’m using a MacBook Air (mid-2011, 1.8 GHz Intel Core i7) running JDK 6 as the JVM, while on both ARM platforms I’ve got JDK 8ea, build 1.8.0-ea-b99. So yes, the JVM and architecture of the baseline are quite a different beast, but again, this is just to get an overall performance indicator.
  • while on both the MacBook Air and the Odroid I can keep the flags -server -Xmx512m when starting the JBoss AS JavaEE container, this is not possible on the Raspberry Pi, where I have to change them into -client -Xmx400m, given the memory constraints and the fact that the Server VM is currently implemented only from ARMv7 onwards, while the Raspberry Pi is ARMv6. Please bear in mind that on ARM this is an Early Access version of the JVM.
  • the performance baseline test is performed on localhost, so the overhead of the LAN is not included in its figures.

The Results

I have to say I’m quite impressed with the results. Although this is only an overall performance indicator, it provides great insight: there is plenty of potential in using JavaEE on an ARM embedded platform – and I’m specifically referring to the Odroid. The Raspberry Pi suffers a lot in this case; possibly an unfair comparison, due to the computationally intensive use case of this scenario.

Below are the results of the load test; columns are the type of test (waltz12, waltz50) and the number of threads used for the load session, rows are the platforms (localhost is the baseline MacBook Air), and figures are the average response time of the webservice in ms within the load test session.


Below are the same results, this time expressed as a percentage relative to localhost (the MacBook Air) as the baseline.


My perspective on these results, considering the Raspberry Pi and the Odroid: the Odroid is an ARM embedded platform like the RPi, but with 4x the cores, 4x the RAM, and priced at $89 vs $35 (meaning 2.5x), which is still very cheap. I think what makes the most difference is the fact that it is multi-core. With these specs, the performance of the above use case scenario, with reference to the Intel i7 baseline, improves from ~130x slower on the Raspberry Pi to ~4x slower on the Odroid. IMHO, this is A LOT.

Why do I blog this

I do believe this is a good experiment to show the potential of JavaEE on ARM embedded platforms; I’m really curious to run these tests again once the JDK is fully released! Given the small size of these platforms and their small power requirements, I think this is a great way to have Pervasive and Mobile Expert Systems!

(Bladerunner mode ON:) I also believe we might see in the future a platform shift in data centers as we know them today: from the current platforms to smaller and less power-hungry ones, like the two ARM platforms I’ve presented in this post. Potentially this also makes a case for shifting from air cooling to liquid cooling, by submerging these tiny computers in mineral oil?

Hacking with a Pervasive Expert System (AI) my electronic toothbrush


I’ve just made some progress on my Mobile & Pervasive Expert System tool, finally enabling some Pervasive dimensions but, most importantly, drafting a first iteration of the AI side of the system.

The current goal for this step: enable the system to be pervasively aware of when I’m using my electronic toothbrush, infer how long I’ve used it, and then finally post the result to Facebook.

The current implementation, which I’m going to briefly describe in this post, is based on the following technologies and Java libraries:

Description of the process

I will start describing from the physical world. I’ve put an IR sensor in front of the toothbrush stand, as shown in the picture above.

Then I connected this sensor, using some simple electronics, to the XBee module in order to perform the ADC and therefore have the sensor’s measurement available in digital form.

This XBee mesh provides a wireless way to communicate the readings from the toothbrush stand, located in the bathroom, to the room where my pc/server running the software application is located.

This concludes the physical/hardware part; in fact, the biggest part is actually in the software world!

Fortunately there is a Java library for the XBee modules called “xbee-api”, available on Google Code, so in my case I just needed to write an Apache Camel component to serve as a wrapper for the sensor readings. The code I’ve written for this is available on GitHub, although the wrapper implementation is only partial at present – implementing only the “receive” direction of the packets, from the XBee mesh to the USB: meaning from the XBee to the Camel Endpoint, in turn to the Camel Consumer, which in turn makes the packets available on the Camel routes.

The great thing about this approach is that you can later latch onto the Camel routing in a very simple way, thanks to this wrapper:


// sketch of the routes; the endpoint URIs were elided in the original post,
// and the choice/when structure is assumed from the two log statements
from("xbee:...") // the custom XBee wrapper endpoint
    .to("seda:xbee");

from("seda:xbee")
    .choice()
        .when(simple("${body.containsAnalog}")) // illustrative predicate
            .log("route log ${body.getAnalog1()}")
        .otherwise()
            .log("I don't know what to do with this packet from the XBee mesh: ${body}");

In this case I’ve used Camel SEDA endpoints to make the two routes asynchronous to each other.

Then comes the Expert System, the AI part. I’ve always been a big fan of, and used extensively, JBoss/Red Hat Drools.

The main idea is to be able to write a simple rule, something like the following mock-up:

[screenshot: mock-up of the rule]

In the example picture, the rule should be self-explanatory: it is responsible for inferring when a fact, representing the current and previous status of the “Home Toothbrush” object, has changed from UNDOCKED to DOCKED, where this transition lasted for more than 30 seconds.

However, in order to get there, I first needed to transform the sensor readings into that status object.

So first, I needed to have all XBee analog readings represented as an Event:

declare ZNetRxIoSampleResponse
    @role( event )
end

This way, I see each sensor reading as a proper Event in the CEP (Complex Event Processing) working memory of the Rule Engine.

This, in turn, enables me to write a rule to detect the current status of the toothbrush object as DOCKED:

/*
 * Detect Docked
 * The analog sensor reading for docked is about 1023.
 * The rule shall detect the Home Toothbrush as docked when the average is above 950, including at least 3 analog sensor readings.
 */
rule "Detect Docked"
when
    accumulate ( ZNetRxIoSampleResponse( containsAnalog == true, $analog1 : analog1 ) over window:length( 3 );
                 $avg : average( $analog1 ),
                 $count : count( $analog1 );
                 $avg > 950 , $count == 3 )
    $cp : CurStatusPrevStatus( id == "Home Toothbrush" , curStatus != "DOCKED" )
then
    // consequence (elided in the original post): update $cp so that curStatus becomes "DOCKED"
end

In this case I want the toothbrush object set to DOCKED when the average of the last 3 sensor readings is above 950 (this is the converted ADC value from the XBee).

Notice the accumulate in this case constrains not only the average, but also the count: this is important because otherwise the sliding window defined by window:length(3) would also trigger during the initial warm-up of the system, when only one or two sensor reading Events are available in the working memory. I want the status detected over at least 3 consecutive readings.
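The warm-up issue is easy to see in a plain-Java analogue of the accumulate over window:length(3); the readings below are made up for illustration:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class SlidingWindowDocked {
    // Plain-Java analogue of the DRL accumulate over window:length(3):
    // average the last 3 readings, but only report DOCKED once the window is full,
    // which is what the $count == 3 constraint guards against during warm-up.
    public static void main(String[] args) {
        int[] readings = {980, 990, 1000, 400, 300}; // illustrative ADC values
        Deque<Integer> window = new ArrayDeque<>();
        for (int r : readings) {
            window.addLast(r);
            if (window.size() > 3) window.removeFirst(); // keep only the last 3
            double avg = window.stream().mapToInt(Integer::intValue).average().orElse(0);
            boolean docked = avg > 950 && window.size() == 3;
            System.out.println("reading=" + r + " avg=" + avg + " docked=" + docked);
        }
    }
}
```

Without the window-size guard, the very first reading of 980 would already report DOCKED; with it, DOCKED is only reported after three consecutive high readings.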

The rest of the work is really all about unit testing, testing, and more testing; and then writing a little more JavaEE code and deploying it on the JBoss AS container…

…and it works! :)

And you can find the source code, evolving, on GitHub of course.

Why do I blog this

I think a lot of what revolves around the “Internet of Things” nowadays is just the tip of the iceberg; to me it’s really all about technologies becoming more and more pervasive in our daily life. Personally, I’m very interested in how Expert Systems can help in this scenario, and I find amazing the little hacks you can start to do even at home!!

Welcome to my RPi: simple JavaEE exercise with webservice and JMS

So my Raspberry Pi has arrived and I was eager to try out some Java programming on it; actually, I wanted a first simple exercise to stress test it with a JavaEE container, for which I chose JBoss AS 7. Of course the overall performance is nothing comparable to the usual servers where you would normally deploy a JavaEE application; however, for small/home projects it doesn’t seem too bad either, in order to get started!


The goal for this exercise is simple: develop a webservice, exposed in a JavaEE application, which “spools” the content of the webservice call onto a JMS Queue.


  • Raspberry Pi (model B)
  • Java 7 SE for Embedded
  • JBoss AS 7 as the JavaEE container
  • Apache Camel
  • SoapUI to test the webservice
  • Hermes JMS as a console to access the JMS Queue

First things first: in order to install the Java SE 7 JRE, I broadly followed the instructions found here, plus a maybe plain but still interesting YouTube video of what seems to be a Java User Group session – check out the related James Gosling video as well; it’s not RPi-related, but a very interesting talk nevertheless from THAT James Gosling!

Anyway, once the JRE is installed on the RPi:
JRE on the RPi

… it’s time to install JBoss AS 7:

In this case, what I did is a custom standalone.xml configuration file with all the basics, plus JMS, for which HornetQ is the implementation in JBoss AS.

Time to code!

With the JBoss Developer Studio IDE (basically Eclipse IDE + JBoss Tools), Maven for a simple webapp, and the Java Compiler switched to 1.6:


and some simple dependencies:



The former block is to have Apache Camel simplify as much as possible, while keeping loosely coupled, the integration between the webservice and JMS; the latter block is for the JavaEE libraries provided by the JBoss AS container.

Then, time to code the webservice:

@WebService
public class SpoolOnQueue {

	@EJB
	CamelBootstrap cb;

	@WebMethod
	public String sayHello(String name) {
	    cb.getProducerTemplate().sendBody("direct:spoolOnJms", name);
	    return "Your message has been spooled on JMS";
	}
}
This is actually rather simple: it defines a webservice class thanks to the @WebService and related @WebMethod annotations, with a sayHello() method of our interest, in charge of spooling the content of the message onto the JMS queue via Camel’s sendBody() to a specific route.

There is a CamelBootstrap cb dependency injection, which is the JavaEE component in charge of managing the Camel context and routing, defined as:

@Startup
@Singleton
public class CamelBootstrap {
	private CamelContext camelContext;
	private ProducerTemplate producerTemplate;

	public CamelContext getCamelContext() {
		return camelContext;
	}

	public ProducerTemplate getProducerTemplate() {
		return producerTemplate;
	}

	@PostConstruct
	protected void init() throws Exception {
		camelContext = new DefaultCamelContext();
		camelContext.addRoutes(new RouteBuilder() {
			@Override
			public void configure() throws Exception {
				// in the JMS connection we can use # for the ConnectionFactory because by not using Spring the default for Camel is the JNDIRegistry
				// just remember that by default a Queue is implied, as per the Camel JMS docs
				from("direct:spoolOnJms")
					.log("spoolOnJms: ${body}")
					.to("jms:sample?connectionFactory=#ConnectionFactory");
			}
		});
		camelContext.start();
		producerTemplate = camelContext.createProducerTemplate();
	}
}
The CamelBootstrap therefore is a simple @Startup, @Singleton JavaEE bean, in charge of initializing the CamelContext, Camel’s ProducerTemplate, and the one and only Camel route of our interest here:

  • starting at direct:spoolOnJms
  • logging the body content
  • and spooling the body content onto the jms:sample JMS Queue
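The whole route can be mimicked without Camel or JMS on the classpath, as a sanity check of the flow; here a BlockingQueue stands in for the “sample” JMS Queue:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class SpoolAnalogue {
    // Camel-free analogue of the route "direct:spoolOnJms" -> log -> "jms:sample",
    // with a BlockingQueue standing in for the JMS Queue.
    static final BlockingQueue<String> SAMPLE_QUEUE = new LinkedBlockingQueue<>();

    static void spoolOnJms(String body) {
        System.out.println("spoolOnJms: " + body); // the .log(...) step
        SAMPLE_QUEUE.add(body);                    // the "to jms:sample" step
    }

    public static void main(String[] args) throws InterruptedException {
        spoolOnJms("Ciao here is some content to place on JMS. Matteo.");
        // what a JMS console like HermesJMS would later show on the queue
        System.out.println(SAMPLE_QUEUE.take());
    }
}
```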

Time to amend the web.xml configuration file to the 3.0 Servlet specs (which also implies EJB 3 in this case) and to link the aforementioned SpoolOnQueue webservice class:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xmlns="http://java.sun.com/xml/ns/javaee"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
         version="3.0">
  <display-name>Archetype Created Web Application</display-name>
</web-app>

The last thing before packaging up and deploying is to define the JMS Queue, which in the Camel route is referenced as “sample”. There are several ways to do so, from the JBoss console, to the JBoss CLI, to configuration files, etc.; in this case I opted for a deployable configuration file – in the WEB-INF/ directory it’s enough to place a HornetQ configuration file whose name ends with -jms.xml, et voilà!:

<?xml version="1.0" encoding="UTF-8"?>
<messaging-deployment xmlns="urn:jboss:messaging-deployment:1.0">
   <hornetq-server>
      <jms-destinations>
         <jms-queue name="sample">
            <entry name="jms/queue/sample"/>
            <entry name="java:jboss/exported/jms/queue/sample"/>
         </jms-queue>
      </jms-destinations>
   </hornetq-server>
</messaging-deployment>

Note to self: in this case the -jms.xml HornetQ configuration file is placed in the WEB-INF/ directory because the application is packaged as a WAR; otherwise it would go in the usual META-INF/ directory.

Time to deploy:

You may note the deployment takes longer compared with today’s servers or machines where you normally test and deploy these artifacts; however, it’s not so bad for a home project. It reminds me of the performance of my old Pentium II with JBoss 3 (or 5? can’t remember), with the difference that nowadays JavaEE 6 and JBoss AS 7 simplify and improve things to a great extent.

Then, I use SoapUI to consume the SpoolOnQueue webservice with a call for a simple “Ciao here is some content to place on JMS. Matteo.” message:

In the JBoss log there is now a line for the Camel route having started and spooled the body content onto the JMS Queue.

Then, time to connect HermesJMS to this remote HornetQ / JBoss AS, as I hope to have contributed here:

When I start AS7 I can see in the log the RemoteConnectionFactory JNDI binding, although it cannot be seen in the console on :9990 because of some bugs. Still:

12:47:16,653 INFO [] (pool-4-thread-2) JBAS011601: Bound messaging object to jndi name java:jboss/exported/jms/RemoteConnectionFactory

With reference to the aforementioned -jms.xml HornetQ configuration file, from a REMOTE perspective: the RemoteConnectionFactory is on java:jboss/exported/jms/RemoteConnectionFactory, and the Queue is on java:jboss/exported/jms/queue/sample.

Therefore, to configure HermesJMS:

Step 1: Create a classpath group for HornetQ in HermesJMS.
Only one JAR is needed: jboss-client.jar, inside the bin/client directory of the JBoss AS 7 / JBoss EAP 6 installation.

Step 2: Configure a HornetQ session in HermesJMS:
Class: hermes.JNDIContextFactory
Loader: HornetQ – the one defined in step #1

Step 3: In the Destinations of the HornetQ session which you are defining as part of step #2, add a new Destination:
Name: jms/queue/sample
Domain: QUEUE

Now in HermesJMS, in the left tree structure, you have Sessions > Your Session > Queue, which you can double-click to connect to. Note that for the binding you set the RemoteConnectionFactory without the java:jboss/exported prefix, and the same goes for the Queue: from java:jboss/exported/jms/queue/sample I stripped away the java:jboss/exported prefix. Also note that you can define the username/password either via the (securityPrincipal, securityCredentials) properties as above, or via the “Connection” settings fields (User, Password) at the bottom of the Session Preferences page of step #2.

So, because the webservice has been consumed, which in turn started the Camel route, the message got spooled onto the JMS Queue:

The message now appears on the “sample” JMS Queue, inspected here thanks to the HermesJMS console application.

Why do I blog this

I wanted a first “stress test” exercise with my new Raspberry Pi, not only developing some Java on it, but using a recent JavaEE container: the performance is not the best – it still takes time to deploy or to compile .jsp pages (more in following posts) – however I do believe that for small/home projects the Raspberry Pi is a very interesting, cheap (both in terms of money and power consumption) and cool platform on which to deploy simple JavaEE applications!
ps: code available on GitHub.