Thursday, December 18, 2008

Sharing an Oracle JDBC pool in SMX4

In a previous post, I talked about how to share a JDBC pool using ServiceMix 4; in that example, I used a Postgres connection pool. I wanted to do the same with Oracle as the base driver and the commons-dbcp pooling library, and went on my merry way. But then, disaster struck: I got a dreaded ClassNotFoundException. "Can this be?" I thought... isn't OSGi supposed to shield me from class-loading hell and put some colour back into my graying hair?

Delving in, I figured out what the issue was, and, I'm glad to say, the ServiceMix OSGi kernel was behaving exactly as it should. I had wrapped the commons-dbcp JAR without much thought: it turns out that code in the commons-dbcp org.apache.commons.dbcp.BasicDataSource class (see the configuration below) does a Class.forName() call to load the oracle.jdbc.driver.OracleDriver class.

<bean id="oracle-ds" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
<property name="driverClassName" value="oracle.jdbc.driver.OracleDriver"/>
<property name="url" value="jdbc:oracle:thin:@localhost:1521:BLAH"/>
<property name="username" value="blah"/>
<property name="password" value="blah"/>
<property name="maxActive" value="10"/>
<property name="poolPreparedStatements" value="true"/>
</bean>

Now, OSGi bundles have to specify the packages they import using the Import-Package manifest header. That's all well and good if you know what these packages are in advance. Poor commons-dbcp can't know in advance what kind of driver it will be asked to load, so Import-Package just doesn't work here. Instead, you've got to allow the commons-dbcp bundle to import anything: you can do this using the DynamicImport-Package manifest header, setting it to DynamicImport-Package: *.
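
To make this concrete, here's a minimal sketch of the kind of manifest headers the re-wrapped commons-dbcp bundle needs - the symbolic name and version here are illustrative, and a real wrap will also carry the usual Import-Package/Export-Package entries generated by the wrapping tool:


Manifest-Version: 1.0
Bundle-SymbolicName: org.apache.commons.dbcp
Bundle-Version: 1.2.2
DynamicImport-Package: *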

We're currently patching the commons-dbcp bundle in the ServiceMix bundle repository so that it includes the DynamicImport-Package: * header by default.

Wednesday, December 17, 2008

How to share a single JDBC Pool across SMX4 bundles

In the last week I have been working on a solution involving a number of Camel routes deployed into FUSE ESB 4 (ServiceMix 4). The new OSGi archetypes for ServiceMix 4 (servicemix-osgi-camel-archetype, servicemix-osgi-cxf-code-first-archetype and servicemix-osgi-cxf-wsdl-first-archetype) are really handy in that regard - nice work Ashwin Karpe! Those new archetypes have been checked in to SVN, but may not be in the release build yet, so if you want them you may need to check them out and build them locally.

Because SMX4 is built on OSGi, you can deploy any Java artifact, not just JBI components. I put this to the test: my customer wanted a single, shared JDBC pool across all of his bundles. You can do this by creating the pool in a simple Spring file and registering it as a service for all the other bundles to use. To do this, I installed the PostgreSQL driver into my SMX container, using the nifty "wrap:" feature that takes a plain JAR and osgi-ifies it. Then, I created a Spring context file to instantiate the bean, like this:


<bean id="postgresPoolingDS" class="org.postgresql.ds.PGPoolingDataSource">
<property name="serverName" value="localhost"/>
<property name="databaseName" value="play"/>
<property name="portNumber" value="0"/>
<property name="user" value="play"/>
<property name="password" value="pa55w0rd"/>
<property name="dataSourceName" value="postgres"/>
<property name="initialConnections" value="1"/>
<property name="maxConnections" value="100"/>
</bean>

<osgi:service id="postgresPoolingDSService" ref="postgresPoolingDS"
auto-export="all-classes"/>


Here's the really neat bit: when you drop this Spring file into the deploy directory of ServiceMix 4, the ServiceMix kernel will osgi-ify the file and deploy it as a bundle. Hey presto: the pool is created and registered as a service in the OSGi service registry.

To access this service from one of my Camel routes, I used the <osgi:reference> element to get a reference to the dataSource in my bundle's Spring file. You can then inject the reference into your route and away you go.


<osgi:reference id="dataSource" interface="javax.sql.DataSource" />


The "look-up" is done by interface type: in this case I'm OK as there's only one javax.sql.DataSource registered. I should really do a more explicit query, but I guess I'll leave that as an exercise for the interested reader.

For me, this kind of thing is OSGi at its best: allowing sharing and reuse of Java classes and Java objects in a really nice, modular way.

Monday, December 1, 2008

Take care when propagating transport headers through Camel!

Last week I worked with a fellow FUSE enthusiast on a very cool proof-of-concept showing how to route XML messages containing partially encrypted message payload. We used FUSE Services Framework (Apache CXF) to create a SOAP service and consumer, and used the nifty WSS4J interceptors (thanks to Glen Mazza's excellent blog entry on the subject) to encrypt selected elements of the SOAP message. CXF is such a great project - we got our service running really quickly without any trouble at all.

Feeling boisterous and buoyant with the taste of success, we forged ahead and put an intermediary Camel route between the service and the consumer, to do some content-based routing based on the non-encrypted part of the payload. This kind of XPath routing is really easy to do with FUSE Mediation Router (Apache Camel). Thinking that our coffee break was only minutes away, we ran a quick test, only to witness an exception on the SOAP consumer, deep in some HTTP-commons code: "bad chunk character '60'". Very strange, we thought: we knew that the unencrypted version worked fine. And we knew that it worked fine when the consumer talked directly to the service.

We investigated. A cursory look at the ASCII table shows that 60 is the code for '<'. Hmmm. Instantly this stank of something going on with the XML payload. But what? Our hopes for fresh coffee were now overrun by a burning desire to figure this out. We looked at all the angles and eventually, after an hour or so of scratching our heads, saw the light. The CXF consumer (the client!) was transmitting the payload using HTTP chunking, which has two effects at the wire level. First, the Transfer-Encoding transport header is set to "chunked"; second, the payload is transmitted "chunked", by breaking it up into smaller chunks and transmitting each chunk preceded by a line containing a number indicating the size of the chunk.

CXF provides support for chunking, as do the Camel HTTP and Jetty components; so the problem was not the chunking per se. Here's what was going on: the Camel route was receiving the payload and reassembling (or "unchunking") the content; however, our route was passing on the Transfer-Encoding header intact. So, when the payload arrived at the target service, the payload was unchunked, but the header suggested the opposite. The server read the first line of the payload to get the chunk size, got a '<', and correctly argued that this is an invalid character to describe a chunk size.

We disabled chunking on the client side in CXF (one way to do this is shown below), and then everything worked fine - effectively this just removes the problem by cutting it off at source. In general, though, I think it raises a word of caution: be mindful of the headers you propagate through integration flows.
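
For reference, one way to disable chunking on a CXF client is via the http-conf conduit configuration in the consumer's Spring file (the wildcard conduit name below matches all HTTP conduits):


<http-conf:conduit name="*.http-conduit"
    xmlns:http-conf="http://cxf.apache.org/transports/http/configuration">
    <http-conf:client AllowChunking="false"/>
</http-conf:conduit>


Alternatively, stripping the stale header inside the route (for example with removeHeader("Transfer-Encoding") in the Camel DSL) should achieve the same end.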

Monday, November 10, 2008

Does Unique Selling Point have any meaning for Open Source professional services firms?

On my way home from a gig in Kiev last week, I read a truly enlightening, fun and excellent white paper from the smart folk of Wellesley Hills Group, posted at QuickArrow. The paper - or rather, collection of articles - entitled "Truth and Lies in Professional Services Marketing", argues that the dynamics of a services business are different from those of a product business: consequently, the rules and mantras of marketing need to be tweaked (or thrown out) when applied to professional services businesses.

I was particularly interested in it because, in the last year, I have studied for a Diploma in Business Development at the Irish Management Institute; I graduate next week (with distinction: hurrah!). The Dip. Bus. Dev. is a great program, particularly for myself as a person of technical origin. Regularly I would meet with my assigned business mentor, who would administer a ritual and deserved mental beating as she questioned everything I had ever held dear. One of the most difficult things for me on this course was to let go of my need for scientific, clear reasoning, and embrace fuzzier, more intuitive business concepts.

But back to that paper! Let's say you're running a professional services firm that specializes in open source. What can you say is your Unique Selling Point? Hmmm. That you know the source? Everyone knows the source, dude - that's the whole point! That you have committers on your team? Better... however, anyone with smarts can become a committer in an open and vibrant community, so if this is your USP and your open source projects are successful, then your competitors will quickly become committers too: there goes your USP, gently flying out the window. The open nature of open source means that the barriers to entry are low; sure, becoming an expert on an open-source project takes time, but anyone with smarts and the willingness to invest the time can eventually break into your space. Back to the USP: you could talk about experience, customer-focus, reliability... but then, so will all your competitors, so they're not good unique selling point candidates either.

So: forget about "uniqueness" for now. The nature of open source services is that, if your project is successful, then your firm will become one of many - an ecosystem, if you will - offering similar services around the code. Being a PS firm in open source is very much like being a law firm, a chartered accountancy firm, or a doctor: all rely on an open body of knowledge in which they have become expert, and they sell that expertise for a price. It's not about "uniqueness": when you want a lawyer, accountant or doctor, you do not care if they're unique or different: you care that you can trust these guys to solve your problem and make things right. A strong reputation, with credible proof of expertise, is more important than a unique selling point.

Successfully marketing your services, then, is not about USPs; it's about having a genuine, credible reputation. So how can you build this? By taking part in the community. By contributing fixes, patches, and documentation. By helping newbie users into the community. By speaking, writing and blogging about the knowledge in which you are expert, at community events (with your peers) and at industry events. By being clear about how your services can help others who don't have your level of experience or expertise. By being reliable, trustworthy, and professional with your customers - so that they're happy to endorse you.

Perhaps, it can be argued, this "reputation" becomes your unique selling point. However, it's a reputation built on real evidence, not on slick marketing material or finely-tuned elevator pitches.

Give that paper a read - it's well worth it!

Wednesday, November 5, 2008

Explicitly creating dead letter queues in ActiveMQ using individualDeadLetterStrategy

There are lots of great features in ActiveMQ that are documented only in the code, which is a shame: it means developers are sometimes "six degrees of Google searchin'" away from finding out how to enable a feature.

I came across this one while working with a customer: they're happy with the concept of the default dead letter queue in ActiveMQ, but need to know how to configure an individual DLQ for each of their queues. The answer is to configure an individualDeadLetterStrategy, and I've updated the ActiveMQ wiki pages to show how to configure this. The change should make its way through to the Apache ActiveMQ web site overnight.
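
For the impatient, the configuration looks something like this (a sketch of the broker's destinationPolicy; the "DLQ." prefix and the catch-all ">" queue wildcard are just examples):


<broker xmlns="http://activemq.apache.org/schema/core">
    <destinationPolicy>
        <policyMap>
            <policyEntries>
                <!-- give every queue its own dead letter queue, prefixed with "DLQ." -->
                <policyEntry queue=">">
                    <deadLetterStrategy>
                        <individualDeadLetterStrategy queuePrefix="DLQ." useQueueForQueueMessages="true"/>
                    </deadLetterStrategy>
                </policyEntry>
            </policyEntries>
        </policyMap>
    </destinationPolicy>
</broker>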

Monday, November 3, 2008

JMS/JCA Flows in ServiceMix: the wrong level of abstraction for distributed ESB

The ongoing argument over REST versus RPC has created useful debate around programming abstractions, particularly with respect to their use in distributed computing. Abstractions are useful in that they remove underlying complexities and allow us to focus on the task at hand. However, abstractions fail if they hide away important detail that could influence design or implementation decisions. For example, the problem with RPC, some RESTful folks would argue, is that in making remote invocations look like local invocations, RPC facilitates poor design decisions that neglect caching, error handling, or optimization of network traffic. And they may have a good point; however, in this blog entry I don't wish to go down the well-beaten REST/RPC track. Instead, I want to discuss the use of a particular abstraction in implementing integration solutions with FUSE ESB (Enterprise Apache ServiceMix). I've found that while this abstraction - the JMS/JCA flows - offers exciting possibilities, it is the wrong abstraction, and should be avoided.

I had been pondering for some time the use of abstractions within the ServiceMix implementation of the JBI Normalized Message Router (NMR). The NMR is used to send messages from one component to another; in ServiceMix, this is implemented using a number of different "flows". Depending on the kind of message exchange (synchronous or asynchronous), the quality of service (reliable, transacted), and whether the target service is clustered, the NMR will choose a flow to match.

Four flows are provided in ServiceMix - single-threaded (ST), SEDA, JMS and JCA. Of these, the latter two have interesting and exotic qualities. First, because they're both based on JMS, they are reliable: messages can be persisted as they travel between components. Second, because they are implemented using ActiveMQ JMS queues, if the ServiceMix container is deployed in a cluster then messages can transparently be delivered across JVM containers. This gives us a truly "distributed ESB".

Many are drawn to this visionary usage of the NMR; however, I strongly believe that clustering and persistence at the NMR level is the wrong level of abstraction. First, you are unwittingly exposing your solution to a potentially unacceptable performance hit. With a little knowledge of the NMR's message-passing semantics, you'll see that even a simple integration solution (like my File -> Pipeline EIP -> transformer -> JMS) takes more NMR messages than you think: in this case, seven messages are used. Wow - a simple little integration solution using the JMS flow takes a total of three persisted JMS queues and four temporary queues! And each of those persistent queues, in ActiveMQ, will need its own thread! In fact, if you've left the default flow settings enabled in ServiceMix, you'll see that we create two queues (and hence, two threads) for each service on the NMR: one for the JMS flow, and another for the JCA flow! I was curious about the impact of these flows on ServiceMix threading, so I played around with the configuration. I found that by simply disabling the JMS and JCA flows in ServiceMix 3, I was able to reduce the number of threads on startup by about 50%. Yup, that's right: I halved the number of threads lurking in the JVM.
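
If you want to try this yourself, the flows are listed on the container element in conf/servicemix.xml; a sketch, assuming the flowNames attribute supported by your ServiceMix 3 version:


<!-- register only the single-threaded and SEDA flows; no JMS/JCA flows (or their queues and threads) are created -->
<sm:container id="jbi" flowNames="st,seda">
    <!-- activation specs, components etc. as before -->
</sm:container>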

So, a deeper look shows that this flow abstraction is going to consume resources and impact your latency and throughput. It gets even hairier, though: with a flow like the one I detailed above, it's possible that in a clustered environment your message exchange could start out on the file component of machine A, get pipelined by machine B, get transformed back on machine A, and then get sent to the JMS provider on machine B. All that remote network traffic, when all I ever wanted was to do a little transformation on a file and drop it into a queue.

I had the good fortune to be in town with Guillaume Nodet, Godfather of ServiceMix, last week, and talked to him about this; Guillaume is an intelligent and fun guy, and if you get a chance to meet him then take it! It turns out that the JMS/JCA flow feature of ServiceMix was there before the NMR was refactored to become JBI-compliant. So, back when it was invented, it wasn't burdened by the additional NMR traffic mandated by the JBI spec. These flows made more sense back then, but really don't make so much sense now. You can get everything you want in terms of reliability by using explicit queues as checkpoints in a transactional SEDA architecture: just use the SEDA flow and, where transaction propagation is required, set the message exchange to be synchronous.

Rather than engage in feature debridement in ServiceMix 3, Guillaume has made the right move in choosing to leave these flows in ServiceMix 3 but omit them from ServiceMix 4. There is talk of reimplementing the JMS flow for ServiceMix 4 for "backwards compatibility", but I for one would rather see Guillaume invest his and the community's efforts in other, more pressing stuff. A really nice feature of ServiceMix 4 is that you will be able to get transactional propagation on the SEDA flow with asynchronous as well as synchronous messages, so those who invest in the SEDA approach now can avail of better threading performance at the flick of a switch when they move to ServiceMix 4.

Guillaume and I agreed that if you want to introduce clustering, reliability or transactionality into your integration solution, you should do so using explicitly named JMS queues. That way, you have full control over when and where your message exchanges are persisted, transacted or clustered. It's easy, and more importantly, it's the Right Level of Abstraction for integration solutions.

Friday, October 24, 2008

AMQP back on the radar...

Just learned that Microsoft has joined up with AMQP. This is great news - I was playing with AMQP some time ago, but in recent months I got the feeling that there was a lot of sitting on the fence about whether people were going to take it up or not. There's nothing so sad as taking a great idea, with a well-thought-out standard, and seeing it get a lacklustre take-up. This is just the kind of injection AMQP needed.

As I recall, Apache Camel already has support for AMQP, so that'll be a very nice bridging tool for organizations that want to migrate piecemeal towards an AMQP-based infrastructure; a sketch of such a bridge follows.
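
Something like this in the Camel Java DSL, for instance - assuming the camel-amqp component is on the classpath and configured with your broker's connection details, and that the queue names here are purely illustrative:


import org.apache.camel.builder.RouteBuilder;

public class AmqpBridgeRoute extends RouteBuilder {
    public void configure() {
        // consume from the legacy JMS broker, forward to the AMQP broker
        from("jms:queue:orders").to("amqp:queue:orders");
    }
}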

It'll be nice to see what the full extent of Microsoft's participation will be: are they going to buy in wholesale, or are they going to just look nice in the background of the wedding photographs, enjoying the party? Will they be actively implementing AMQP brokers, or will they just add some AMQP connectivity options to their existing integration offerings?

Things are getting interesting again for AMQP :)

Wednesday, October 22, 2008

Sneak Preview: my FUSE ESB / ServiceMix screencasts...

Over the last few weeks I've been putting some screencasts together on how to get "over the hump" with ServiceMix: you've read all about it, you've downloaded it, and then you wonder: where do I start?

In the screencasts, I show:

  • Three things to make you more productive with ServiceMix - see how to set up your development environment to get stuff done faster.

  • How to create a simple flow - from file pickup to JMS queue.

  • How to XSLT transform file data and place on a JMS queue using a pipeline EIP.


The screencasts aren't long: 8, 25 and 19 minutes respectively. Use the streamed videos below to get an idea of what's going on in the screencasts: if you like them, then let me know and I'll send an FTP link where you can download the full-resolution 800x600 QuickTime movies.

[Streamed video: Three things to make you more productive with ServiceMix]

[Streamed video: How to create a simple flow - from file pickup to JMS queue]

[Streamed video: How to XSLT transform file data and place on a JMS queue using a pipeline EIP]

FUSE TV: Andy Warhol may have been right...

Didn't Warhol say something about everyone having 15 minutes of fame in the future? I, along with some of my Progress colleagues, have been recorded for FUSE TV. My own slot is 13m 13s, which means I may have only 1m 47s left. Hmmmm... better use those precious seconds sparingly...

    Wednesday, October 8, 2008

    m2eclipse: developing with ServiceMix / FUSE archetypes has never been so easy

    Hurrah for Cool Tooling!

Maven archetypes are great: anyone developing with ServiceMix will know and appreciate good archetypes, but will probably grumble disapprovingly at the long command lines they entail. In ServiceMix, the smx-arch command-line short-cut made things easier, but I was always uncomfortable with having to switch from Eclipse to the command line to create my new projects.

In the last few days, Guillaume Nodet put a tiny little script onto the FUSE repository (http://repo.open.iona.com/maven2) that generates an archetype-catalog.xml file containing all the archetypes. You can import this URL into m2eclipse, and then use the really neat m2eclipse dialogs to create your Maven/Eclipse projects directly from the catalog.

    I love it so much I added an entry on the FUSE wiki; go there for more details!

    Monday, October 6, 2008

    Carpe springem - community to the rescue?

I was following a thread on the CXF users list when Dan Kulp introduced "FreeSpring" (http://www.freespring.org), a community-based effort that has come about to alleviate the effects of the new SpringSource licensing system. Their aim is to solve three key problems for Spring users who do not want to pay SpringSource subscriptions for fixes after the three-month post-release dust-settling period: creating versioned binary distributions, applying community fixes, and, importantly, providing distributions through Maven.

I admire the initiative, but was of course keen to find out how freespring.org plans to keep the lights on... their intent is to get some corporate backing from benevolent organisations (perhaps SpringSource competitors?) and also to sell advertising space. I think the latter may be a go-er: freespring.org has the opportunity to become a hub for open-minded Java developers.

    Thursday, September 25, 2008

    Using AspectJ to diagnose ServiceMix component performance

An inventive FUSE customer showed me an approach to diagnosing his ServiceMix integration flow by injecting some aspect-oriented code, and it's so cool that I thought it worth sharing. The problem is this: given an integration flow, how can you diagnose how long each endpoint in the flow is taking to do its business? This customer was experiencing exceptionally high latency on his integration flow (in the order of 900ms) that just seemed plain wrong: the question was, where in the flow was he incurring the hit? Was it in transformation, EIP, resequencing, or something else?

I used AspectJ to reproduce my customer's approach, weaving some diagnosis code into the ServiceMix call stack. You can do this by creating a simple aspect like the one below, which is called whenever a call is made to Flow.send(). The measurePerformance() method does some logging, but also puts a simple timer around the message invocation. The result will only be accurate to the millisecond; however, if you're trying to isolate big-elephant-in-the-room bottlenecks then this should be sufficient.


package ps.progress.com;

import javax.jbi.messaging.MessageExchange;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class PerformanceAdvice {

    @Around("execution(* org.apache.servicemix.jbi.nmr.flow.Flow.send(..))")
    public Object measurePerformance(ProceedingJoinPoint thisJoinPoint) {
        Object ret = null;
        try {
            // The first argument to Flow.send() is the message exchange.
            MessageExchange me = (MessageExchange) thisJoinPoint.getArgs()[0];
            System.out.println("Invoking on " + me.getEndpoint().getServiceName()
                    + "... (status: " + me.getStatus() + ", role: "
                    + (me.getRole() == MessageExchange.Role.PROVIDER ? "PROVIDER" : "CONSUMER")
                    + ")");
            long startTime = System.currentTimeMillis();
            ret = thisJoinPoint.proceed();
            long endTime = System.currentTimeMillis();
            System.out.println(me.getEndpoint().getServiceName()
                    + " done; elapsed time = " + (endTime - startTime) + " ms.");
        } catch (Throwable e) {
            System.out.println(e.getMessage());
        }
        return ret;
    }
}



Weaving this code into ServiceMix (I used version 3.3.1.6-fuse from http://open.iona.com) was surprisingly easy: I added the JARs for AspectJ (aspectjlib.jar, aspectjtools.jar, aspectjrt.jar, aspectjweaver.jar) into ServiceMix's lib directory. I also jarred up my PerformanceAdvice class and dropped it into the lib directory too. Finally, I modified the ServiceMix configuration file - conf/servicemix.xml - to create my advice and turn on auto-proxying for AspectJ. First, I added the Spring AOP namespace to the <beans> tag:


    xmlns:aop="http://www.springframework.org/schema/aop"


Then, I added the following elements within the <beans> element:


<!-- Turn on AspectJ auto-proxying -->
<aop:aspectj-autoproxy/>

<!-- Create my performance advice aspect -->
<bean id="performanceAdvice" class="ps.progress.com.PerformanceAdvice"/>


    And that's it! To test, I ran a simple integration flow that reads from a JMS queue and uses a pipeline to transform the message and send the result to another queue. Here's the output:

    Invoking on {http://progress.com/ps/smx/demo/transformer}Pipeline... (status: Active, role: PROVIDER)
    {http://progress.com/ps/smx/demo/transformer}Pipeline done; elapsed time = 0 ms.
    Invoking on {http://progress.com/ps/smx/demo/transformer}XsltTransformer... (status: Active, role: PROVIDER)
    {http://progress.com/ps/smx/demo/transformer}XsltTransformer done; elapsed time = 1 ms.
    Invoking on {http://progress.com/ps/smx/demo/transformer}XsltTransformer... (status: Active, role: CONSUMER)
    {http://progress.com/ps/smx/demo/transformer}XsltTransformer done; elapsed time = 0 ms.
    Invoking on {http://progress.com/ps/smx/demo/transformer}JmsOutput... (status: Active, role: PROVIDER)
    {http://progress.com/ps/smx/demo/transformer}JmsOutput done; elapsed time = 0 ms.
    Invoking on {http://progress.com/ps/smx/demo/transformer}JmsOutput... (status: Done, role: CONSUMER)
    {http://progress.com/ps/smx/demo/transformer}JmsOutput done; elapsed time = 1 ms.
    Invoking on {http://progress.com/ps/smx/demo/transformer}XsltTransformer... (status: Done, role: PROVIDER)
    {http://progress.com/ps/smx/demo/transformer}XsltTransformer done; elapsed time = 0 ms.
    Invoking on {http://progress.com/ps/smx/demo/transformer}Pipeline... (status: Done, role: CONSUMER)
    {http://progress.com/ps/smx/demo/transformer}Pipeline done; elapsed time = 1 ms.



You can see that there are lots of messages being sent - we're most interested in those where the status is Active and the role is PROVIDER (you can modify the PerformanceAdvice class to print only those, if you like). From this, I can see that the call to the XsltTransformer endpoint is taking just 1ms: a nice verification that my XSLT transform isn't doing anything crazy.

Using an approach similar to this, my customer was able to show that his bottleneck was actually in one of his own handmade components, which invoked a back-end server using RMI - the hit was in the order of 850ms, the lion's share of the latency. I'm not sure what he's done since then to minimize this hit, but at least we know where the problem is.

In summary: Aspect-Oriented Programming is great for weaving cross-cutting concerns such as logging, security or transactionality into your application code - and it's also a nice tool to have for performance diagnosis.

    Friday, September 19, 2008

    CXF, JMS & the risk of acknowledged-but-not-processed messages

May God speed Christian Schneider and his refactoring of the CXF JMS endpoints to be more configurable. I was looking into the reliability of the JMS implementation in CXF and, from the code, I see that the JMS acknowledgement mode used by CXF is hard-coded to Session.AUTO_ACKNOWLEDGE. So, I thought, it's not transactional - but that's OK, right? As long as the message is acknowledged after my CXF implementation code runs, then I'm good... right?

I wanted to verify that this was the case, so I modified the jms_queue demo to exit unmercifully in the middle of the CXF impl code for the greetMe() and greetMeOneWay() methods. I had hoped that this would mean that the incoming message, having been unacknowledged, would be redelivered - however, it appears that the message is not redelivered. It looks like CXF is acknowledging the message before we actually process it. Ouch.

The implications of this are serious: if you're implementing a listener for one-way (in-only) messages with CXF, then there is a possibility that messages delivered just before an emergency server shutdown will have been acknowledged but not processed: lost forever. To get around this for now, I'd recommend using Camel to listen transactionally on the JMS queue and send the message to CXF for marshalling (as per Christian's article "Better JMS Transport for CXF"). That way, if the server goes down, the message will be redelivered at a later stage. Of course, your code should check whether the message is a redelivery, and take appropriate action to ensure that your application remains consistent.
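
A sketch of that pattern in the Camel Java DSL (the endpoint URIs and bean name are illustrative, and the jms component must be backed by a transaction-capable connection factory):


import org.apache.camel.builder.RouteBuilder;

public class TransactedJmsToCxfRoute extends RouteBuilder {
    public void configure() {
        // Consume inside a JMS transaction: if the route fails before completing,
        // the message rolls back onto the queue and is redelivered.
        from("jms:queue:greetMeQueue?transacted=true")
            .to("cxf:bean:greeterEndpoint");
    }
}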

The impact of this "early acknowledgement" on request-response services over JMS is not so fatal, as failure of the server will result in no reply message getting to the client - a timeout will alert the client that something has gone wrong, and they can resend.

    Tuesday, September 9, 2008

    Setting up a PostgreSQL database for ActiveMQ

    The default ActiveMQ configuration file (in conf/activemq.xml) has a sample fragment of Spring configuration that shows how to use PostgreSQL as a data source. So, pointing ActiveMQ to PostgreSQL is easy; the problem you may run into is how to configure PostgreSQL in the first place with an appropriate account for ActiveMQ.

The instructions below assume that you've installed PostgreSQL in the directory /usr/local/pgsql, and that it is run by the user 'postgresql'. They also assume that you're using a bash or sh shell; however, I'm sure it's not rocket science to map what follows to a DOS shell for Windows.

    You should include the PostgreSQL bin directory in your path:


    $ export PATH=/usr/local/pgsql/bin:$PATH


    Decide on a file system location for the postgresql data files, say: /Users/postgresql/data. Set the PGDATA variable to point to this directory.


    $ export PGDATA=/Users/postgresql/data


    The commands that follow will use $PGDATA to determine the location of the database files.

    Initialize the database data directory:


    $ initdb


    ... This will initialize the database in the $PGDATA directory.

    Now start the database controller:


    $ pg_ctl -l logfile start


You should see the message 'server starting'.

Now create a database called activemq-db:


    $ createdb activemq-db


    To log into the database, use the psql command:


    $ psql activemq-db


This gets you in as the current user ('postgresql'). The default security is very lax, so let's enforce password protection for the database. To do this, first create a password for the postgresql user:


    activemq-db=> alter user postgresql with password 'foo';


Create a user for activemq with an appropriate password:


    activemq-db=> create user activemq with password 'pa55w0rd';


    Exit the psql shell using the \q command:


    \q


Edit the $PGDATA/pg_hba.conf file to specify that the activemq user can access the activemq-db database.


    # TYPE DATABASE USER CIDR-ADDRESS METHOD
    host all postgresql 127.0.0.1 255.255.255.255 md5
    local all postgresql md5
    host activemq-db activemq 127.0.0.1 255.255.255.255 md5
    local activemq-db activemq md5


Now, get the database to reload its configuration:


    $ pg_ctl reload


You should now be prompted for a password when you log into the database as the activemq user.


    $ psql activemq-db activemq
    Password for user activemq:
    Welcome to psql 8.3.3, the PostgreSQL interactive terminal.

    Type: \copyright for distribution terms
    \h for help with SQL commands
    \? for help with psql commands
    \g or terminate with semicolon to execute query
    \q to quit

    activemq-db=>


In ActiveMQ, configure your postgres-ds data source to use the database 'activemq-db', with user 'activemq' and password 'pa55w0rd'; a sketch of the configuration follows.
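
Here's roughly what the relevant fragment of conf/activemq.xml looks like - the postgres-ds bean id matches the sample shipped in the default configuration, though you should check your version's sample file for the exact shape:


<bean id="postgres-ds" class="org.postgresql.ds.PGPoolingDataSource">
    <property name="serverName" value="localhost"/>
    <property name="databaseName" value="activemq-db"/>
    <property name="user" value="activemq"/>
    <property name="password" value="pa55w0rd"/>
</bean>

<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost">
    <persistenceAdapter>
        <jdbcPersistenceAdapter dataSource="#postgres-ds"/>
    </persistenceAdapter>
</broker>


Now, when you start up, ActiveMQ will create the necessary tables to persist messages. You can validate that it has created these tables by using the \d command in psql: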


    activemq-db=> \d
    List of relations
    Schema | Name | Type | Owner
    --------+---------------+-------+----------
    public | activemq_acks | table | activemq
    public | activemq_lock | table | activemq
    public | activemq_msgs | table | activemq
    (3 rows)


    And... you're done!

    Wednesday, July 16, 2008

    JAX-RS (JSR-311) & Apache CXF

Yesterday I did my first webinar on open.iona.com (see "Open Source in the Enterprise, Webinar III" at http://open.iona.com/resources/video-archived-webinars/). Dana Gardner covered big trends in the middleware and infrastructure sector, I talked about our support in FUSE for building RESTful services using JAX-RS, and my fellow IONA-ian Roland Tritsch did a live code demo showing, among other things, how to access both RESTful and SOAP services using browser-based JavaScript connected to a FUSE (Enterprise Apache CXF) service. Nice!

I must echo Dan Kulp's thanks to Sergey Beryozkin for putting together some really nice work on CXF's JAX-RS implementation. I had been using the original JRA annotations with CXF previously, and the new JAX-RS stuff is simpler to use: no need to worry about wrapping parameter payload, and it's much easier to specify content types. Thanks Sergey!
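
To give a flavour, a JAX-RS resource class looks something like this - a hedged sketch against the JSR-311 API (annotation names varied slightly across the early drafts, and the Customer type here is purely illustrative):


import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.xml.bind.annotation.XmlRootElement;

@Path("/customers")
public class CustomerResource {

    @XmlRootElement
    public static class Customer {
        public String id;
        public String name;
    }

    // GET /customers/{id}, marshalled to XML by the JAXB provider
    @GET
    @Path("/{id}")
    @Produces("application/xml")
    public Customer getCustomer(@PathParam("id") String id) {
        Customer c = new Customer();
        c.id = id;
        c.name = "A. Customer";
        return c;
    }
}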

There's some good documentation on the CXF JAX-RS implementation on the CXF site. There's also a list of to-dos there; one thing I'd love to see is support for marking responses as cacheable, so that we really cash in (errr, excuse the pun) on REST.

    Wednesday, June 25, 2008

    Reflections on Progress and IONA from Reykjavik


I have spent the day traveling on airplanes, and finally arrived in beautiful Reykjavik at about 1600, without baggage but very glad to be in Iceland again. It's a beautiful, awe-inspiring landscape and the people are lovely. Is this a thing where folk from island nations naturally resonate with each other? Anyway: I'm here doing some work on Apache CXF with a big player in the telco market here. I look forward to getting stuck in tomorrow.

I've been out of touch for most of the day, gleaning information on a drip about today's big news that Progress Software has agreed to purchase the company I work for, IONA Technologies, later this year. This is a really great move for Progress; IONA has strong technology offerings, an innovative engineering department, and a smart services organization hell-bent on customer success. It's also a fantastic opportunity for the great people I work with in IONA to play an even bigger role in the SOA market. Today's statement also indicated continued investment in IONA's Orbix, Artix and open-source FUSE offerings; I'm looking forward to seeing how Progress will approach a blended, hybrid business model. And I can't wait to get my hands on Progress's technology :)

IONA has been such a great place to work, since my first incarnation here in 1995 as an engineer. Over dinner, I found myself reminiscing about all the places I've traveled to while working as a consultant in the "old IONA", and got so excited that I mapped them on Google Maps. Check it out! Now, where is the new Progress going to take us?

    Friday, June 20, 2008

    Tales from the Serverside Prague - Day III

    Hurrah for Scala! Ted Neward showed us all the sheer joy of Scala this morning. I think I might be hooked. Yet now I am truly torn: to which language do I dedicate my spare cycles - jRuby, Groovy or Scala? The answer I think is all of them, for the sheer fun of it. The exotic part of me says jRuby first, but then the pragmatic part of me says Scala might be the way to go.

I've been dipping in and out of the day since then: I sat in on John Davies's replay of his extreme transaction processing material. After his talk, the feedback forms were glowing with positives like "the best material in the whole conference". Nice work John - just what we'd expect from an IONA employee ;)

Ola Bini from ThoughtWorks showed off a couple of jRuby test frameworks; one of the nice side applications of this, apart from testing your jRuby code, is that you can use jRuby to test your Java code. As it turns out, I spent most of his talk down the back writing some JUnit test cases for a project I'm working on at the moment; there is no doubt that Ola's tests in jRuby looked a hell of a lot less verbose than my Java-bound test framework.

I attended the fireside chat on Architectural Patterns in Mule by Antoine Borg. He's doing a great job; that said, it's all the same as the stuff I covered in my own fireside chat yesterday! The EIPs supported by Mule are implemented in XML; while functionally equivalent, it's nowhere near as ergonomic as the Camel DSL, in my humble opinion. I've done XML-based EIP before, and you can get bogged down in XML. It is nice to see that Mule 2.0 is using Spring Schemas to simplify the XML configuration.

Off home in a few hours to Dublin. This year's TSSJS in Prague was a really high-quality event; hats off to the speakers, and a big thank you to the TechTarget team!

    Tales from the Serverside Prague - Day II

Neal Ford gave a cracking keynote this morning on the evolution of programming abstractions towards domain-specific languages. He talked about internal DSLs (like Apache Camel) using fluent interfaces in languages like Java, and proposed that internal DSLs work better in languages like Ruby and Groovy. Ultimately though, internal DSLs are limited by the language they're embedded in, forcing you to match the parent language's syntax. In an external DSL, you design your language from scratch; this year a number of products (like MPS from JetBrains) will address the problem of how to easily create domain-specific languages.

    There was a whole part of Neal's talk about considering XML to be "deprecated". My friend Ciaran McHale has believed this since day one ;) Neal did concede that XML is "marginally OK as a data format", but in terms of configuration or communicating intent it's overkill.

Eugene Ciurana's presentation on real-world SOA and event-driven architecture showed a really nice case study of a SOA system. One of the biggest challenges in it is how to get information from millions of network-enabled children's toys. Interesting stuff, and also interesting in terms of the price tag: he saved quite a lot of money by using open-source infrastructure along with best-of-breed commercial products.

I hooked up with Ted Neward and the new editor of TSS, Peter Varhol, on the way to John Davies's talk on extreme transaction processing. Ted is in great form; we had a chat about DSLs and what it might take to get the JVM to make the most of multi-core CPUs. Dammit, we should have had a tape recorder; who knows though, maybe there were some secret agents knocking around. That said, many of the things we chatted about came up again in the expert panel on next-generation languages, with Ted Neward, Ola Bini and Guillaume LaForge. Ola's jRuby talk later on showed me the light; I am now torn between going headlong into either a Scala or a jRuby development binge.

Kirk Pepperdine blew everyone away with a presentation on Java performance and concurrency. The rise of the multi-core CPU means that now, more than ever, sloppy multi-threaded code is going to begin to throw up problems that never arose on our single-CPU machines. Hmmm. Maybe I'll hang on to this old single-CPU Dell laptop for a bit longer...

I gave a fireside chat on implementing enterprise integration patterns using Apache software. I've been working with the FUSE distribution of ServiceMix, ActiveMQ, Camel and CXF for some time now, and it was nice to get a chance to share the experience. I showed off the Camel DSL, and we went on to talk about innovation adoption (getting around the "let's just code it up in Java" objection), open-source licensing (is there a best license / business model?), and whether standards really matter. On that last point I think we reached some agreement: standards matter at the edges of your system, where you're working with other parties. Internally, however, you're free to do whatever you want.

    Wednesday, June 18, 2008

    Tales from the Serverside Prague - Day I

Day one at the Server Side Java Symposium in Prague; but the fun started last night, when Kirk Pepperdine smacked me for suggesting that the agenda was very like last year's. Last year in Barcelona was all about caching technologies; this year the focus is on dynamic languages: jRuby, Groovy and Scala are all on the agenda.

Stephan Janssen kicked off the day's proceedings with a really good keynote on the landscape of technologies for developing rich internet applications, including DHTML, Google Web Toolkit, Flex/AIR and JavaFX. What was most impressive is that he showed how the parleys.com web site (an educational web site for posting up tech talks, including audio, video and slides) has been built using all four (five, if you include the Silverlight version). Kudos to Stephan: I've always felt that when you apply a number of technologies to the same problem you get a really good picture of which works best. The message? DHTML was difficult. GWT was great, but needed one or two tickles to support Internet Explorer. Flex/AIR was great, but you have two different development streams for your online and offline application code. Finally, JavaFX is very promising, but adoption may be hindered by the fact that the nice guys at Sun haven't got a version of JavaFX for the Apple Mac yet.

Costin Leau from SpringSource gave an in-depth view of how Spring Dynamic Modules supports OSGi development and deployment. Benefits? Better modularity, versioning of artifacts, and operations control. I look forward to seeing how the new SpringSource Application Platform (based on Tomcat, OSGi and Spring) gets on. Hmmm. Am I the only one that's shocked, though, that after more than 12 years of Java, we're still coming up with new ways to prevent class-loading problems?

Nati Shalom of GigaSpaces gave a session notable for discussing the business drivers behind the need for scalable systems: the number of financial transactions, the amount of data, and the number of users are constantly growing. Further, the number of traffic spikes (overload situations that can bring down your business) is growing too. The challenge is to scale up cost-effectively without sacrificing reliability and performance. Over-provisioning is one expensive way of catering for peak traffic, but it has led to average industry server utilization rates of 15-20%.

    Michael Keith from Oracle presented on SOA using Service Component Architecture (SCA). SCA is all very reasonable and sensible, but I'm not convinced that it's something that developers can get really excited about.

Mike Aizatsky of Google gave a gem of a presentation on synchronization models for multi-threaded applications, describing nine models, including the unexpected "No Multithreading" threading model. His advice: pick the simplest model; don't over-engineer your solution. I really enjoyed this talk and got a lot out of it. On the way out of Mike's talk I caught the end of Holly Cummins's presentation on Java performance tuning and contented(!) locks; it's a pity these two sessions were on at the same time, as they would have really complemented each other.

    Wednesday, April 9, 2008

    Making powerpoint movies: RESTful services with FUSE

I made my first powerpoint movie last week, on building RESTful services with FUSE. It's part of the FUSE Master Class, and has been posted on the open.iona.com wiki. I've presented that material in a number of places now, but there really is something very challenging about presenting to nothing but Camtasia on a dusty laptop. Not having an audience to feed off is unnerving: I found when I did a playback that there was an irritating amount of "ommm" and "ehhh", and I ended up doing a lot of re-takes.

In the end, I settled on writing a script for my narration. Reading from a script does take a bit of the magic out of the delivery, but you can still improvise around it, and it saves a whole lot of time. Also, I filmed segments of no more than four or five slides at a time. This keeps you on track and allows you to redo stuff quickly. I have a hunch it also saves on edit time, compared to trying to edit large tracts of movie.

Camtasia's an OK tool - this being my first time doing screen capture, I don't have anything else to compare it with. I do drool with envy at my colleague's MacBook Pro: Apple's Keynote provides built-in support for "filming" your presentation, and then you can use iMovie to edit in talking heads to minimise the "death-by-powerpoint" effect and add a little character. Gotta get me one of those machines :)

    Monday, April 7, 2008

That "not known to this context" JAXB exception

I was throwing together a Java code-first example for a CXF web service this morning, and I encountered a scary gotcha. I wanted to create a simple customer information service, with an interface like this:

public interface CustomerService {
    Customer findCustomerByName(String firstName, String lastName);
}


    ... The Customer type is a straightforward bean, as per below:


public class Customer {
    private String firstName;
    private String lastName;
    private String skypeId;
    private String telephoneNumber;
    private int age;

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public String getSkypeId() {
        return skypeId;
    }

    public void setSkypeId(String skypeId) {
        this.skypeId = skypeId;
    }

    public String getTelephoneNumber() {
        return telephoneNumber;
    }

    public void setTelephoneNumber(String telephoneNumber) {
        this.telephoneNumber = telephoneNumber;
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        this.age = age;
    }

    public Customer(String firstName, String lastName, String skypeId,
            String telephoneNumber, int age) {
        super();
        this.firstName = firstName;
        this.lastName = lastName;
        this.skypeId = skypeId;
        this.telephoneNumber = telephoneNumber;
        this.age = age;
    }
}


    When my service implementation returns a Customer, I get an exception:

    org.apache.cxf.interceptor.Fault: Marshalling Error: com.iona.ps.codefirst.Customer is not known to this context

Looks quite scary. I lost a lot of time trying to figure out what was going on, thinking that maybe I needed to add JAXB annotations to my Customer class. It turns out, however, that the problem is that my bean class doesn't have a default constructor. Ouch. Without a default constructor in place, the JAXB runtime cannot create a default instance of the class. Adding the default constructor to the bean makes everything better.
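
In other words, all the Customer bean needed was this one addition:


public Customer() {
    // Required by JAXB: without a no-argument constructor, the runtime
    // cannot instantiate the bean, and marshalling fails as above.
}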

I've seen through Google that lots of people have encountered this problem before - it feels like a usability issue to me. While it may be the case that you need a default constructor in JAXB, CXF should give a more helpful message than "not known to this context".

    Thursday, March 6, 2008

    Little fluffy cloudsmith

I built my first public software distribution over the last week using Cloudsmith. Cloudsmith is such a cool idea: you can assemble your own favourite software distributions and share them with like-minded people, who can use the Cloudsmith "materializer" to download the lot in one go. Check out my FUSE Big Bang Distro, containing the latest (at the time of writing) set of IONA FUSE components for Windows. I've also created a distribution I call Joolbox, for all those bits and pieces you need in Java development. Right now it just contains Maven, Ant and a few other bits and pieces... watch that space.

I found the current beta UI (based on Seam) to be a little clunky at times; for example, there's no hour-glass when you're waiting, which can be a little disconcerting when you don't know if you've really pressed something or not. However, this clunkiness was more than made up for by the responsiveness of the Cloudsmith team, who were able to iron things out; they assure me that a new update of the UI is underway for their big rollout at EclipseCon. In retrospect, it would have gone a lot easier if I'd actually Followed the Instructions on each dialogue rather than just racing ahead three sheets to the wind ;)

What might Cloudsmith lead to? Thinking with a big hat on, Cloudsmith could lead to the absolute democratization of software distribution: anyone anywhere can rapidly assemble a full software stack from freely available components, and then provide services, support and training on that distribution. And just think of the potential value of having all the world's developers huddle together as a community in the cloud. In the meantime, Cloudsmith has the potential to solve some of my problems in the small, like how to rapidly pull down all my development tools the next time my hard drive fails, or how to share my development environment with customers and colleagues.

    Thursday, February 21, 2008

    Confused with JBI exchanges? Use the spec, Luke.

    In a previous entry I mentioned my new found love of JBI. After my initial flirtations, my love has grown deeper: I believe that JBI and I are now something of an item. Hurrah!

I'd like to share something about my new relationship. At first, I had admired JBI from afar, and had hoped that I would be able to master JBI without having to go off and read the specification. In some respects this is true - you can build a Fuse ESB JBI-compliant solution entirely through configuration, without any code and without having to dig deep into JBI. This is good: after all, the fact that Fuse ESB uses JBI internally should not be a concern to most users.

However, if you want to code up your own POJO, then you need to understand how your code should interact with JBI's Normalised Message Router (NMR) for the different message exchange paradigms (e.g. in-only, robust-in-only, in-out). Implementing an in-only provider? Your code should set the status of the exchange to DONE, and then send the exchange back to the NMR. Doing in-out? Your code should set a response, leave the status alone, send the exchange back to the NMR, and then expect to receive a DONE exchange later on. The sketch below shows the in-only case.
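
Here's a minimal sketch of the in-only provider case as a ServiceMix POJO - this assumes the servicemix-bean style of deployment, where a DeliveryChannel is injected via @Resource, and it omits error handling:


import javax.annotation.Resource;
import javax.jbi.messaging.DeliveryChannel;
import javax.jbi.messaging.ExchangeStatus;
import javax.jbi.messaging.MessageExchange;
import javax.jbi.messaging.MessagingException;

import org.apache.servicemix.MessageExchangeListener;

public class InOnlyProvider implements MessageExchangeListener {

    @Resource
    private DeliveryChannel channel;

    public void onMessageExchange(MessageExchange exchange) throws MessagingException {
        if (exchange.getStatus() == ExchangeStatus.ACTIVE) {
            // ... process exchange.getMessage("in") here ...
            // then close out the exchange and hand it back to the NMR
            exchange.setStatus(ExchangeStatus.DONE);
            channel.send(exchange);
        }
    }
}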

    For new users accustomed to simple RPC approaches this feels a little strange, confusing, and perhaps a little frustrating. I empathise: if it's an "in-only" message exchange, then why does my code have to send back the exchange? If it's an "in-out" message exchange, and my code has returned a response, then why should I subsequently receive a notification that the exchange is done?

I urge you, though, to resolve the awkward silence by looking to the JBI 1.0 specification - it's remarkably readable, and the sections on the NMR provide some real insight into what the JBI container expects of you. With these insights in place you can do some great stuff. The answers, by the way, to the questions above concern the decoupling in time of the consumer from the provider, and support for reliability in the underlying message exchange.

    Thursday, February 14, 2008

    Fuse ESB and JBI, my new-found loves...

I've been having the pleasure of working with Ashwin Karpe on producing some training material on JBI, in particular the Fuse ESB implementation of JBI, based on Apache ServiceMix. I first looked into JBI a few years ago, back when Apache CXF was Celtix and we were all a lot younger. Back then, I just couldn't see the point of JBI: whenever someone would try to explain it to me, they would attempt to explain something I didn't know (JBI) in terms of something I still didn't know (Binding Components (BCs), Service Engines (SEs), the Normalized Message Router (NMR), etc.). My eyes would gloss over, and I'd go back to what I was doing previously.

My eyes were opened when I got to play with the great set of JBI components in FUSE ESB. Using FUSE, I can create listeners (for example HTTP, JMS, FTP, file) and wire them, using enterprise integration patterns (like filters, transformers and content-based routers), into some sweet integration flows, sending data on to other systems (again using transport components like HTTP, JMS, FTP and file). Now, that's something tangible that I know I can use when I'm out on the road. Just recently, a guy I know asked me about creating an FTP plugin for CXF to accept CSV (comma-separated value) payload: maybe a year or two ago I would have tried to find a way to do that; now, I'd just do it using Fuse ESB.

Most of what you do with FUSE ESB/ServiceMix can be done by simply customising (via configuration) the transport components and patterns that are provided in the box. Occasionally, though, you may want to write a POJO that sits in your integration flow: I'm currently looking into writing a chapter in our training material on how to wire POJOs (i.e. MessageExchangeListeners) into a flow.

    In the meantime, for anyone interested in playing with JBI, take a look at the documentation on open.iona.com and Tom Purcell's blog, which has some interesting entries on JBI from a beginner's perspective.

    Monday, January 28, 2008

    OOP 2008

Just back from OOP 2008 in Munich - this is a fantastic conference, with a great programme and a really smart bunch of attendees. My thanks to Wolfgang Reuter and all the OOP staff for putting together such a great event.

I gave two presentations at the event: a 45-minute presentation on using CXF and GWT to create and consume RESTful services, and then a full-day workshop on building services with CXF. I really enjoyed both; in particular, I was chuffed with the attendance and enthusiasm for CXF at the workshop. One of the attendees has already written up some thoughts on the workshop - thanks James!

    I'll happily send on the PDF of my presentation slides for the RESTful services session - just drop me a line in the comments below.

    Monday, January 14, 2008

    The Dark Side of the Source

One of the great benefits of using open source - or any source-available software in general - is that, well, you have the source. If you run into a problem, you can debug right into the source code of your third-party libraries and figure out what's going on. Fantastic. As a consultant on the road, I welcome the ability to dive deep.

However, there is a dark side to source-code availability: you can get sucked very easily into debugging code that isn't yours. Think about that. You encounter a bug and, instead of having X lines of code to debug (where X is the number of lines of application code), you've got X + Y lines of code to debug (where Y is the number of open-source lines of code). Now, the extra lines of code (Y) may give you greater visibility in solving your problem. Or, you may end up losing time trying to figure out somebody else's code: I call this "The Dark Side of the Source". Recently, I found myself deep in code involving Jetty Continuations, and lost a lot of time trying to figure out what was going on. In the end we had to get directly in touch with the code's author to get a fix: if only we'd known to go there first!

So: having the source is great. But don't assume that just because you have the source you can solve all the problems all the time: the source will help you, but you have to be mindful of the impact on your productivity. Go with an open-source community that can help you if you get into trouble, and one that gives you a professional support offering for when you get into deeper trouble. The guys at RedMonk (Michael Coté and Stephen O'Grady) have written a nice paper on this - "Working with Open-Source Companies". I got the paper for free from the open.iona.com communities site; I recommend the read.