
Running the Rule Engine EXercise over a month

My reex2014 – a Rule Engine EXercise for 2014 – has been running live for over a month now, so it's time to share some functional and technical feedback.

Real case scenario #1

A first functional example, based on a real case scenario, starts with:
[screenshot]

Which raises the relevant Alerts:
[screenshot]

~

Some time later, this happens:
[screenshot]

Which again raises the related Alerts:
[screenshot]

~

Later, when traffic resumes:
[screenshot]

Accordingly:
[screenshot]

Real case scenario #2

Another functional example based on a real case scenario: traffic momentarily suspended due to a person in the tunnel, followed by a resumption of traffic operations.

The reex2014 web application offers a representation of the relevant Alerts:
[screenshots]

Which are also available on the Android widgets:
[screenshot]

Technical notes

This is the first time I use the OpenShift Online PaaS to host an application which has to run over some time, rather than a quick one-shot/disposable experiment. I'm very pleased with what you can do, especially with the public, free offering!
The only pain points I have had to deal with are:

  • Tedious workarounds are needed for Maven dependencies which are not publicly available. Yes, custom Maven dependencies can be installed via ssh once the gear has been created, but I couldn't find a reasonable way to create a gear starting from an existing GitHub repo if the latter contains non-public Maven dependencies.
  • Despite the application being constantly used, it sometimes restarts anyway. This happened twice. The second time was due, I suppose, to “security updates” related to the infamous shellshock/bashbug apocalypse, but the first time, frankly, I couldn't trace back the reason.

In any case, not bad at all, considering it is a free public offering!

The Camel framework proved once again really, REALLY great for providing ESB logic and effective integrations! In this case I use it as a micro-ESB to interconnect the JavaEE application with Twitter, Rome-based RSS feeds, etc. In other contexts I use Camel for ETL, ESB and integration needs, and I'm super pleased by how effective it is.
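
For illustration, here is a minimal sketch of what such a micro-ESB can look like with the Camel Java DSL; the endpoint URIs, the feed address and the alertService bean are assumptions for illustration only, not the actual reex2014 routes:

import org.apache.camel.builder.RouteBuilder;

public class TrafficInfoRoutes extends RouteBuilder {
	@Override
	public void configure() throws Exception {
		// poll an RSS feed of traffic bulletins (camel-rss uses Rome under the hood)
		// and split it into single entries, one Exchange per feed entry
		from("rss:http://example.org/traffic-news.rss?splitEntries=true&consumer.delay=300000")
			.to("bean:alertService?method=onFeedEntry");

		// poll the Twitter timeline of a traffic-information account
		from("twitter://timeline/user?type=polling&delay=300&user=exampleTrafficAccount"
				+ "&consumerKey=...&consumerSecret=...&accessToken=...&accessTokenSecret=...")
			.to("bean:alertService?method=onTweet");
	}
}

The alertService bean would then turn the incoming feed entries and tweets into facts for the rule engine.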

Conclusions

The exercise proved its benefit already and I’m using it daily for my commuting needs!

Evolutions could target general improvements to the rules, to avoid duplicated Alerts when the PA is made in multiple languages, and to cover more RSS-related cases. On the JavaEE side, the next target is definitely support for push notifications, which would need to be enabled in the Android projects as well.


Expert Systems and JavaEE on ARM: a simple benchmark

This post reports my findings from experimenting with, and running a simple, overall benchmark of, a JavaEE use case on ARM platforms. I currently have a Raspberry Pi (model B) and an Odroid-U2 on my desk: given my interest in Expert Systems, I thought this could be a great way to test them out!

[photo]

Premise: I'm not a guru on Expert Systems; in fact I consider myself just a happy power user, so it is not my intention to delve into the debate on how an Expert System should be benchmarked, which is not in the scope of this post. Likewise, it is not in the scope of this post to report a fully comprehensive benchmark comparison of running Java/JavaEE on these platforms.

In fact, it's much simpler:

GOAL: Given the use case of a JavaEE application which provides a reasoning service, benchmark the overall performance on the different platforms.

The Use Case

For the reasoning service I use my all-time favorite, JBoss Drools. On their GitHub repository they provide several examples and benchmarks, based on published papers related to the Rete algorithm. Again, while I'm aware of the big discussion about whether these benchmarks are still relevant nowadays, given the progress in Expert System algorithms, that debate does not impact this use case, because here the benchmark is used only for a relative comparison.

I have a very simple webservice:

@Stateless
@WebService
public class WaltzWs {
	@EJB
	WaltzKb waltzKb;

	@WebMethod
	public String waltz(@WebParam(name="WaltzDTO")WaltzDTO dto) {
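		// each webservice call gets its own, independent Knowledge session;
		// the Knowledge base itself is shared (see WaltzKb below)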

		StatefulKnowledgeSession session = waltzKb.getKbase().newStatefulKnowledgeSession();

		for (Line l : dto.getLine()) {
			session.insert(l);
		}
		session.insert(dto.getStage());
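		// timing starts only after the facts are inserted, so the measured time covers rule evaluation only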
		long start = System.currentTimeMillis();
		session.setGlobal( "time", start );

		session.fireAllRules();
		long time = System.currentTimeMillis() - start;
		System.err.println( time );

		session.dispose();
		return "time: "+time;
	}
}

which exposes the reasoning functionality via a webservice call. When the webservice is consumed, a new Knowledge session is created, the content of the SOAP message is inserted into the Working memory, and then all the rules are evaluated. This webservice relates to the second half of the Waltz benchmark linked above on the Drools GitHub repo.

The actual Knowledge base is created by a Singleton EJB:

@Singleton
@Startup
public class WaltzKb {
	private static final transient Logger logger = LoggerFactory.getLogger(WaltzKb.class);
	private KnowledgeBase kbase;

	@PostConstruct
	public void init() {
		KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
		kbuilder.add( ResourceFactory.newClassPathResource("waltz.drl", WaltzKb.class), ResourceType.DRL );
		if (kbuilder.hasErrors()) {
			for (KnowledgeBuilderError error : kbuilder.getErrors()) {
				logger.error("DRL Error "+error);
			}
		}
		Collection<KnowledgePackage> pkgs = kbuilder.getKnowledgePackages();
		kbase = KnowledgeBaseFactory.newKnowledgeBase();
		kbase.addKnowledgePackages( pkgs );
	}

	public KnowledgeBase getKbase() {
		return kbase;
	}
}

With the Webservice + Singleton EJB approach, I can have several webservice calls happening at the same time, each with its own Knowledge session, while the Knowledge base is efficiently shared among them.

All the code, and the benchmark project file with the results, is available on GitHub.

The Benchmark

In order to load test this JavaEE application, i.e.: the webservice, I use SoapUI:

[screenshot: SoapUI load test setup]

I created two webservice request templates, reflecting the “12” and “50” data files of the original JBoss Drools “waltz” benchmark. Then, before actually running the load test, I consumed the webservice a couple of times, just to “warm up” the JavaEE container – in this case, JBoss AS.

I performed a load test session of 60s with 1 thread first – i.e. all webservice calls are sequential, each waiting for the previous call to return before starting a new one. This was followed by other 60s load test sessions, this time with 2, 3 and 4 threads – i.e. concurrent webservice calls, similar to a stress test of the system being used by multiple “users”.
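
For reference, here is a minimal sketch of what one such 60s session amounts to, outside SoapUI. The WaltzWsService/getWaltzWsPort client classes and the buildWaltz50Request() helper are hypothetical names, standing in for a JAX-WS client generated from the WSDL and for a request built from the “50” data file:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class WaltzLoadTest {

	public static void main(String[] args) throws Exception {
		final int threads = Integer.parseInt(args[0]);           // 1, 2, 3 or 4
		final long endAt = System.currentTimeMillis() + 60000L;  // 60s session
		final AtomicInteger calls = new AtomicInteger();

		ExecutorService pool = Executors.newFixedThreadPool(threads);
		for (int t = 0; t < threads; t++) {
			pool.submit(new Runnable() {
				public void run() {
					// one client proxy per thread: calls are sequential within a thread,
					// concurrent across threads
					WaltzWs port = new WaltzWsService().getWaltzWsPort();  // hypothetical generated client
					WaltzDTO request = buildWaltz50Request();              // hypothetical request builder
					while (System.currentTimeMillis() < endAt) {
						port.waltz(request);
						calls.incrementAndGet();
					}
				}
			});
		}
		pool.shutdown();
		pool.awaitTermination(2, TimeUnit.MINUTES);
		System.out.println("calls completed in 60s with " + threads + " thread(s): " + calls.get());
	}

	private static WaltzDTO buildWaltz50Request() {
		// left out: populating the WaltzDTO from the waltz "50" data file
		throw new UnsupportedOperationException("sketch only");
	}
}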

There are some limitations here; that's why I included all the premises above, warning that this cannot be considered a comprehensive benchmark, just a simple one to get overall figures:

  • Raspberry Pi is single-core, so this platform is put at a disadvantage when the load test session is performed with 2+ threads.
  • for the performance baseline I'm using a MacBook Air (mid-2011, 1.8 GHz Intel Core i7) running JDK 6, while on both ARM platforms I've got JDK 8ea, build 1.8.0-ea-b99. So yes, the JVM and architecture of the baseline are quite a different beast, but again, this is just to get an overall performance indicator.
  • while on both the MacBook Air and the Odroid I can keep the flags -server -Xmx512m when starting the JavaEE container JBoss AS, this is not possible on the Raspberry Pi, where I have to change them to -client -Xmx400m, given the memory constraints and the fact that the Server VM is currently implemented only from ARMv7 onwards, while the Raspberry Pi is an ARMv6. Please bear in mind that on ARM this is an Early Access version of the JVM.
  • the performance baseline test is performed on localhost, so the overhead of the LAN is not included in its figures.

The Results

I have to say I'm quite impressed with the results. Although it is only an overall performance indicator, it gives a great indication that there is plenty of potential in using JavaEE on an ARM embedded platform – and I'm specifically referring to the Odroid. The Raspberry Pi suffers a lot in this case, possibly an unfair comparison given the computationally intensive use case of this scenario.

Below are the results of the load test; the columns are the type of test (waltz12, waltz50) and the number of threads used for the load session, the rows are the platforms (localhost is the baseline MacBook Air), and the figures are the average response time of the webservice, in ms, within the load test session.

[screenshot: results table, average response time in ms]

Below are the same results, this time with the figures expressed as percentages relative to localhost (MacBook Air) as the baseline.

[screenshot: results table, percentage relative to the localhost baseline]

My perspective on these results, considering the Raspberry Pi and the Odroid: the Odroid is also an ARM embedded platform like the RPi, but with 4x the cores and 4x the RAM, priced at $89 vs $35 (meaning 2.5x), which is still very cheap. I think what makes the biggest difference is that it is multi-core. With these specs, the performance of the above use case scenario, with reference to the Intel i7 baseline, improves from ~130x slower on the Raspberry Pi to ~4x slower on the Odroid; in other words, for 2.5x the price you get a roughly 30x improvement in this workload. I mean, IMHO, this is A LOT.

Why do I blog this

I do believe this is a good experiment to show the potential of JavaEE on ARM embedded platforms; I'm really curious to perform these tests again once the JDK is fully released! Given the small size of these platforms and their low power requirements, I think this is a great way to have Pervasive and Mobile Expert Systems!

(Bladerunner mode ON:) I also believe we might see, in the future, a platform shift in data centers as we know them today: from the current platforms to smaller and less power-hungry ones, like the two ARM platforms I've presented in this post. Potentially this also makes a case for shifting from air cooling to liquid cooling, by submerging these tiny computers in mineral oil?

A lean kick-start

I work as an Information System Analyst. I’m a geek and really passionate about technology. This blog is about my interest in mobile & pervasive technologies.

To kick-start the blog, in this post I’ll describe a quick experiment I’ve made by using conductive thread.

Conductive thread is just plain thread, although with the additional characteristic of being a good electrical conductor, while common thread usually is not.

Conductive thread placed on the index finger of the glove

In this case I've applied the conductive thread to the tip of the index finger of the glove, in such a way as to enable it for use with capacitive touchscreens, the common type of touchscreen found nowadays in smartphones and tablets.

Normally this type of glove is not suitable for use with capacitive touchscreens: being made of rubber/plastic it does indeed provide good grip, ultimately the primary purpose of such a glove, but at the same time, being a very poor conductor, it does not allow the fingertips to interact properly with the touchscreen.

Conductive thread loose on the inside of the glove

I bought a sample of conductive thread on eBay for less than 5 Euro, which is normally enough for this kind of experiment. I applied the conductive thread in a dot pattern approximately 5 mm wide, and left the thread slightly loose on the inside of the glove.

As a result, the capacitive touchscreen recognises the pattern made with the thread just as it would the tip of a human finger, as demonstrated in the video above.

Why do I blog this

It will be interesting to see how the paradigm of touchscreen use evolves in the industry, in general.
While capacitive touchscreens are, at the time of writing, on almost every smartphone and tablet available, the usual interaction might be spoiled in industrial environments due to usage constraints, e.g. the use of gloves. Conductive thread applied only to the fingertips convinces me more, personally, than a full glove made of this kind of fabric (I've seen the latter commercialized now, but I'm unaware if the former is also available on the market). Also, it would not solve other basic related issues, e.g. dirty gloves leaving the touchscreen even dirtier.
Possibly pens with conductive tips will ultimately be the widely adopted solution; at last the pen would be re-discovered as one of the greatest inventions of mankind, after the wheel…