All geeky stuff

Lombok for java and GWT project

Lately I heard of this wonderful library for Java projects and had a go with it. Most Java developers have used an IDE to generate those getter/setter, toString, equals and hashCode methods. They are annoying and of little interest to the code reader. Groovy takes that away from us, but what if you still want to use a strongly typed language or are bound to Java? Lombok is the rescuer.

Getting it working in Maven isn't hard. Just include the dependency in your pom with provided scope (it isn't needed at runtime, only at compile time, which is cool). It then does its magic and generates all that boilerplate code for your annotated classes. To have your IDE recognize the generated members, install a plugin for it. I am using IntelliJ and the whole process is quite smooth. The IntelliJ plugin supports most of Lombok's features (it's missing two at the moment, but I don't care about those two that much). Eclipse has a plugin as well.
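The dependency declaration is just a few lines; a sketch (the version number here is an assumption, use whatever is current):

```xml
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <!-- version is an assumption; pick the latest -->
    <version>0.10.8</version>
    <scope>provided</scope>
</dependency>
```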


If you want to make it work in Maven with GWT, that's a bit tricky. First, according to the official Lombok site, you need to add the following VM argument to your compile script:
java -javaagent:lombok.jar=ECJ (rest of arguments)
It needs to point to your actual lombok.jar location. After a bit of research I found that maven-dependency-plugin (version 2.4) has a new goal "properties", which stores each dependency's jar path in a property named "groupId:artifactId:type".
So having the following in your pom will get it done nicely.

    <!--this will get set by maven dependency:properties goal and be used in gwt-maven-plugin-->
                <!-- compile, generateAsync, test -->
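A sketch of what the two plugin sections might look like (plugin versions and the exact gwt-maven-plugin goals are assumptions; the key piece is the ${org.projectlombok:lombok:jar} property populated by the properties goal):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-dependency-plugin</artifactId>
    <version>2.4</version>
    <executions>
        <execution>
            <goals>
                <goal>properties</goal>
            </goals>
        </execution>
    </executions>
</plugin>
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>gwt-maven-plugin</artifactId>
    <configuration>
        <!-- this property gets set by the dependency:properties goal above -->
        <extraJvmArgs>-javaagent:${org.projectlombok:lombok:jar}=ECJ</extraJvmArgs>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>compile</goal>
                <goal>generateAsync</goal>
                <goal>test</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```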

But this only makes your GWT code compile and run. When you want to run GWT in DEV mode, or use any of the tools that work on source code rather than bytecode, you need to delombok your code first. Again, in Maven this is a bit tricky. There is an Ant task included in the jar, but it uses the JDK's tools.jar, which means using maven-antrun will be tricky since, from memory, it only uses the JRE. There is a Lombok Maven plugin available at Sonatype OSS.
Add the following to your pom:
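A sketch of the delombok plugin configuration (coordinates follow the lombok-maven-plugin's documented usage; the version is an assumption):

```xml
<plugin>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok-maven-plugin</artifactId>
    <!-- version is an assumption -->
    <version>0.10.8.0</version>
    <executions>
        <execution>
            <phase>generate-sources</phase>
            <goals>
                <goal>delombok</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```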


Then change your IDE and Maven to include the generated sources as source and exclude the annotated ones, and you can now run your GWT project in DEV mode as usual :)

FileNet, JAAS and WebLogic Troubleshooting

This week I was asked to help troubleshoot a strange problem with IBM FileNet. My colleague was working on a small project that uses the FileNet API to connect to it and pull back some data.

    public void connect() {
        Connection connection = Factory.Connection.getConnection(uri);
        Subject subject = UserContext.createSubject(connection, username, password, null);
        Domain domain = Factory.Domain.getInstance(connection, null);
        // Get an object store
        objectStore = Factory.ObjectStore.fetchInstance(domain, objectStoreName, null);
    }

The unit test runs fine (connecting to the real server), but after deploying to WebLogic it starts to throw an authentication exception:

com.filenet.api.exception.EngineRuntimeException: E_NOT_AUTHENTICATED: The user is not authenticated. errorStack={
        at com.filenet.apiimpl.core.UserPasswordToken.getSubject(

and the root cause is:

Caused by: http://filenetserver:9999: Destination unreachable; nested exception is:
       Response: '404: Not Found' for url: 'http://filenetserver:9999/bea_wls_internal/HTTPClntLogin/a.tun?wl-login=http+dummy+WLREQS+;rand=5604411675924680429&AS=2048&HL=19'; No available router to destination

It failed at this line: Subject subject = UserContext.createSubject(connection, username, password, null);

After digging around I found this article. Basically, FileNet uses JAAS for authentication. I don't know much about JAAS, but since it uses the SPI design pattern, it looks to me like WebLogic's JAAS infrastructure kicks in and tries to redirect to an unknown URL, hence the exception.

I tried hosting the web app in Jetty and it works fine, which confirmed the issue is with WebLogic. Class loading issues are not uncommon when dealing with SPI and a full-blown EE server…

After a few rounds of trial and error I finally got it working properly.
Create a jaas.config file on disk with this content:

FileNetP8 {
    com.filenet.api.util.WSILoginModule required;
};

Then add a JVM argument to the WebLogic domain's startup script (you should also be able to add it under Server Start in the WebLogic console) pointing java.security.auth.login.config at the file, e.g. -Djava.security.auth.login.config=/path/to/jaas.config

If this works for you, good on you. But I had to do more to get it working. First I followed the article, which specifies WebSphere's login module as required in the config.

Then it complained that the WSLoginModuleProxy class could not be found. From TCPMon I could see that the remote server is WebSphere v7.0, so I grabbed a copy of WebSphere Application Server v7 and used jarscan to find the jars I needed (couldn't find them in a Maven repo). The two jars needed are:

I ran mvn install:install-file to put these two into my local repo and bundled them in the web app. It started to work. But afterwards I tried taking these two dependencies out of my pom and redeploying the app, and it still works. Could it be a caching thing, or are they not really needed? I don't know, but it was certainly a good two-day troubleshooting exercise 🙂

Junit, hamcrest, mockito and powermock

JUnit, Hamcrest and Mockito are my must-have libraries. Lately I have had to work on a legacy, smelly code base. Before I refactor the hell out of it, I really don't have much confidence without any tests to cover myself. There are static methods and new-object creations sitting around, so I had to bring in PowerMock, which can mock static and final methods as well as object creation. Although it's powerful, I am not a fan of it: it's hard to set up, and since it modifies normal class loading it may cause side effects and slower test runs. A colleague of mine used to say: "if you find yourself using a lot of PowerMock, it probably reflects that the code is poorly designed." Well, I get no choice with legacy code…

pom is easy.
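A sketch of the test dependencies (the PowerMock version is an assumption; junit-dep 4.10 and hamcrest-core 1.2.1 match the dependency tree later in this post):

```xml
<dependency>
    <groupId>org.powermock</groupId>
    <artifactId>powermock-module-junit4</artifactId>
    <!-- version is an assumption -->
    <version>1.4.10</version>
    <scope>test</scope>
    <exclusions>
        <exclusion>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.powermock</groupId>
    <artifactId>powermock-api-mockito</artifactId>
    <version>1.4.10</version>
    <scope>test</scope>
    <exclusions>
        <exclusion>
            <groupId>org.mockito</groupId>
            <artifactId>mockito-all</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit-dep</artifactId>
    <version>4.10</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.hamcrest</groupId>
    <artifactId>hamcrest-core</artifactId>
    <version>1.2.1</version>
    <scope>test</scope>
</dependency>
```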



The reason to exclude junit and mockito-all is that they both contain a legacy hamcrest core. If you get:
java.lang.NoSuchMethodError: org.hamcrest.Matcher.describeMismatch(Ljava/lang/Object;Lorg/hamcrest/Description;)V

you may want to consider changing junit to junit-dep, including hamcrest-core as a direct dependency, and specifying the latest version of both.


Spring Batch, Java based Spring 3 Configuration and Mongo DB

A toy program to experiment with interesting libraries. I have been working with Spring Batch at work for a few weeks now, and here is some reference for afterwards. I also used it to experiment with Spring's Java configuration approach, which can more or less alleviate some of the verbosity of Spring Batch job definitions. And Mongo DB, which I am completely new to. There is a blog(1) that talks about the pros and (mostly) cons, which I tend to agree with. But overall it's still quite a handy framework for some of these tasks.

Ok, enough talk, let's get started. Spring Batch consists of a bunch of beans that I think can be categorized into two groups: job definition and job execution.
Job definition:
A Job has various Steps (a default sequential flow, but it can be conditional); each Step has a Chunk of work plus pluggable job execution event listeners and a transaction boundary.
Obviously the most concrete portion is the Chunk: it has a mandatory reader and writer, and optionally a processor sitting in between.
Job execution:
At minimum you need a Job Launcher, a Job Repository and a transaction manager. Optionally you can have a job registry, a job operator and a job explorer.


As a Nokia phone user
When I export my text messages into a csv file
I want to parse and persist the file into database (Mongo)
So that it’s easier to do future viewing and searching.
Sample received text: sms,deliver,"006141234567","","","2009.03.04 20:49","","blah blah blah"
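Conceptually, the parsing the job needs to do can be sketched in plain Java (this is just an illustration of the record format, not the Spring Batch code; the class name and regex below are my own, and the quote handling is a simplification of real CSV rules):

```java
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SmsLineParser {

    // Match either a double-quoted field (possibly empty) or a bare unquoted field.
    private static final Pattern FIELD = Pattern.compile("\"([^\"]*)\"|([^,]+)");

    public static List<String> tokenize(String line) {
        List<String> fields = new ArrayList<>();
        Matcher m = FIELD.matcher(line);
        while (m.find()) {
            // group(1) is the quoted content, group(2) a bare token
            fields.add(m.group(1) != null ? m.group(1) : m.group(2));
        }
        return fields;
    }

    public static void main(String[] args) throws Exception {
        String line = "sms,deliver,\"006141234567\",\"\",\"\",\"2009.03.04 20:49\",\"\",\"blah blah blah\"";
        List<String> fields = tokenize(line);
        // only columns 0, 1, 2, 5 and 7 carry data we care about
        System.out.println(fields.get(2) + " @ " + fields.get(5) + ": " + fields.get(7));
        // the timestamp uses a non-standard format, hence the custom date editor later
        Date ts = new SimpleDateFormat("yyyy.MM.dd HH:mm").parse(fields.get(5));
        System.out.println(ts);
    }
}
```

The same field indices (0,1,2,5,7) show up again in the tokenizer configuration of the real reader bean.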


1. maven

mvn dependency:tree yields the following:

[INFO] +- org.codehaus.groovy:groovy-all:jar:1.8.1:compile
[INFO] +- org.slf4j:slf4j-api:jar:1.6.1:compile
[INFO] +- org.slf4j:slf4j-log4j12:jar:1.6.1:compile
[INFO] +- org.slf4j:jcl-over-slf4j:jar:1.6.1:compile
[INFO] +- log4j:log4j:jar:1.2.16:compile
[INFO] +- org.springframework:spring-core:jar:3.1.0.M2:compile
[INFO] |  +- org.springframework:spring-asm:jar:3.1.0.M2:compile
[INFO] |  \- commons-logging:commons-logging:jar:1.1.1:compile
[INFO] +- org.springframework:spring-context:jar:3.1.0.M2:compile
[INFO] |  +- org.springframework:spring-aop:jar:3.1.0.M2:compile
[INFO] |  +- org.springframework:spring-beans:jar:3.1.0.M2:compile
[INFO] |  \- org.springframework:spring-expression:jar:3.1.0.M2:compile
[INFO] +- cglib:cglib-nodep:jar:2.1_3:compile
[INFO] +-
[INFO] |  +-
[INFO] |  \- org.mongodb:mongo-java-driver:jar:2.6.5:compile
[INFO] +- org.springframework:spring-tx:jar:3.1.0.M2:compile
[INFO] |  \- aopalliance:aopalliance:jar:1.0:compile
[INFO] +- org.springframework.batch:spring-batch-core:jar:2.1.8.RELEASE:compile
[INFO] |  +- org.springframework.batch:spring-batch-infrastructure:jar:2.1.8.RELEASE:compile
[INFO] |  +- com.thoughtworks.xstream:xstream:jar:1.3:compile
[INFO] |  |  \- xpp3:xpp3_min:jar:1.1.4c:compile
[INFO] |  \- org.codehaus.jettison:jettison:jar:1.1:compile
[INFO] +- hsqldb:hsqldb:jar:
[INFO] +- org.springframework:spring-test:jar:3.1.0.M2:test
[INFO] +- org.hamcrest:hamcrest-library:jar:1.2.1:test
[INFO] |  \- org.hamcrest:hamcrest-core:jar:1.2.1:test
[INFO] \- junit:junit-dep:jar:4.10:test

spring-batch excludes spring-aop since I am using Spring 3.1.0.M2, which already has it.
The gmaven plugin compiles the Groovy classes:
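A sketch of a typical gmaven setup (the plugin version is an assumption; providerSelection 1.8 matches the groovy-all 1.8.1 dependency above):

```xml
<plugin>
    <groupId>org.codehaus.gmaven</groupId>
    <artifactId>gmaven-plugin</artifactId>
    <!-- version is an assumption -->
    <version>1.3</version>
    <configuration>
        <providerSelection>1.8</providerSelection>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>generateStubs</goal>
                <goal>compile</goal>
                <goal>generateTestStubs</goal>
                <goal>testCompile</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```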


2. Domain

Pretty simple.

class Record implements Comparable<Record> {
    String messageType
    String sendOrReceive
    String number
    Date timestamp
    String content

    int compareTo(Record other) {
        timestamp <=> other.timestamp   // assumption: records are ordered by time
    }

    String toString() {
        "$messageType : $number at $timestamp [$content]"
    }
}

3. Spring java based configuration

Fun starts. First annotate a java/groovy class to be the configuration context.

@Configuration
class AppContext {
    //this is from @PropertySource above
    @Autowired Environment env
    //PropertySource included properties can be used this way and take advantage of spring's type resolving power
    @Value('${input}') Resource input
  • @Configuration: marks this class as a Spring configuration. It's also a subtype of @Component, which means if you do component scanning it will be picked up.
  • @ImportResource: imports xml configuration. I define the jobs in xml since they have better support via the xml namespace.
  • @ComponentScan: this is only possible in Spring 3.1+, I think. Or maybe the Spring context runner only supports annotation-driven contexts from 3.1+? I can't remember, but just embrace the latest and greatest :). Be careful that the package you scan does not include the config class itself, since @Configuration is also @Component. Of course you can filter or exclude it, but it's easier to put your config class somewhere separate.
  • @PropertySource: this is a really cool feature in 3.1.0.M2. Now you have property placeholder config free from xml. You need to define an @Autowired Environment field to access those properties, or use @Value. See below.

My properties file (under test/resources):
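The reader bean below looks up input, input.fields and input.included (and the test later reads an error property), so the file presumably looks something like this sketch (the exact values and file names are assumptions):

```
input=classpath:sms.csv
input.fields=messageType,sendOrReceive,number,timestamp,content
input.included=0,1,2,5,7
error=file:target/skipped-records.txt
```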


Now define our batch job reader bean.

@Bean(name = "smsFileReader")
FlatFileItemReader<Record> reader() {
    String[] inputFields = env.getProperty('input.fields', String[].class)
    int[] includedFields = env.getProperty('input.included', int[].class)
    //default use comma as delimiter
    def tokenizer = new DelimitedLineTokenizer(names: inputFields, includedFields: includedFields)
    def mapper = new BeanWrapperFieldSetMapper<Record>(targetType: Record, customEditors: customEditors())
    def lineMapper = new DefaultLineMapper<Record>(lineTokenizer: tokenizer, fieldSetMapper: mapper)

    new FlatFileItemReader<Record>(resource: input, lineMapper: lineMapper)
}

As you can see, there is another way to use @PropertySource: the Environment field injection. However, it can't resolve the Resource type, i.e. Resource file = env.getProperty('input', Resource.class) will throw an exception. @Value works fine, so I stick with that.
Here I use the FlatFileItemReader class shipped with Spring Batch. It requires a line mapper with a line tokenizer and a field set mapper. Using Groovy really cuts down a lot of the verbose code compared to Java or xml bean definitions.
When defining the tokenizer, you can (optionally) define the included fields to be mapped to your domain. Looking at the sample row sms,deliver,"006141234567","","","2009.03.04 20:49","","blah blah blah", only columns 0,1,2,5 and 7 are needed. This is in my properties file.
The field set mapper can also take optional custom editors if you want custom property editing, just like a normal java bean. Here, since the date format is special, we need a custom date editor.

private def customEditors() {
    def customEditors = [:]
    def format = new SimpleDateFormat("yyyy.MM.dd HH:mm")
    customEditors.put(Date, new CustomDateEditor(format, true))
    customEditors
}

There are a lot of readers and writers shipped with Spring Batch: file, JDBC, Hibernate, JMS etc. But since I decided to use Mongo DB I need to write my own. All you need to do is implement an interface.

class RecordWriter implements ItemWriter<Record> {
    MongoTemplate mongoTemplate

    void write(List<? extends Record> items) {
        // assumption: the original closure body saves each record to Mongo
        items.each { }
    }
}

4. Job execution infrastructure

To complete the minimum requirements for running Spring Batch, we also need some infrastructure beans:

@Bean //default use method name as bean name
JobLauncher jobLauncher() {
    new SimpleJobLauncher(jobRepository: jobRepository())
}

@Bean
JobRepository jobRepository() {
    new MapJobRepositoryFactoryBean(transactionManager()).jobRepository
}

@Bean
PlatformTransactionManager transactionManager() {
    new ResourcelessTransactionManager()
}

Note: as the reference blog(1) complains, you have to have a transaction manager and a job repository to "persist" job execution state even if you don't need it at all. Spring Batch does give you the option of a map-based repository and a resourceless transaction manager, but the map-based repository is not thread safe. At work I have to use an in-memory database and the DDL schema scripts shipped with Spring Batch (for various DBMSs) to create the schema on start up. The reason for persisting job execution state is that in case of failure, each job instance knows where to restart when re-run. It also makes sure you don't have more than one instance of a job running at any given time, to avoid conflicts.

There are also some mongo db related beans:

@Bean
MongoTemplate mongoTemplate() {
    new MongoTemplate(mongo().object, "smsdb")
}

@Bean
MongoFactoryBean mongo() {
    MongoFactoryBean mongo = new MongoFactoryBean()
    return mongo
}

5. Job definition

After all the infrastructure work, we are ready to write the real job definition. I chose xml since it has pretty good namespace support; it's also flexible, and it makes sense to do this part in xml.

<?xml version="1.0" encoding="UTF-8"?>
<bean:beans xmlns="http://www.springframework.org/schema/batch"
            xmlns:bean="http://www.springframework.org/schema/beans"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                http://www.springframework.org/schema/batch http://www.springframework.org/schema/batch/spring-batch-2.1.xsd">

    <job id="sampleJob">
        <step id="singleStep">
            <tasklet>
                <chunk commit-interval="100" reader="smsFileReader" writer="recordWriter" skip-limit="10000">
                    <skippable-exception-classes>
                        <include class="org.springframework.batch.item.file.FlatFileParseException" />
                    </skippable-exception-classes>
                </chunk>
                <listeners>
                    <listener ref="skippedRecordListener" />
                </listeners>
            </tasklet>
        </step>
    </job>
</bean:beans>

You can define skippable exception types and set a skip limit so your job keeps running until it hits that limit. You can also register (optional) listeners to deal with various events. Here I just append the lines with reading/parsing errors to a file.

//first generic argument is the item type when skipped in process, second is skipped in write
class SkippedRecordListener extends SkipListenerSupport<Object, Record> {
    private static final Logger LOGGER = LoggerFactory.getLogger(SkippedRecordListener)

    Resource errorFile

    def ln = System.getProperty('line.separator')

    void onSkipInRead(Throwable t) {
        if (t in FlatFileParseException) {
            def parseException = t as FlatFileParseException
            LOGGER.error("error reading line {}: {}", parseException.lineNumber, parseException.input)
            errorFile.file << "$parseException.input$ln"
        } else {
            LOGGER.error("error reading", t)
        }
    }
}

6. Wire all up and test

Assuming you have Mongo DB installed and running (shame there is no embedded version), the following test exercises everything, including error record handling.

class BatchIntegrationTest {
    private static final Logger LOGGER = LoggerFactory.getLogger(BatchIntegrationTest)
    static ApplicationContext context
    static MongoOperations mongoOps

    @BeforeClass
    static void setupContext() {
        context = new AnnotationConfigApplicationContext(AppContext.class)
        mongoOps = context.getBean(MongoOperations)
    }

    @Before
    public void setup() {
        mongoOps.dropCollection(Record)   // assumption: start each test with an empty collection
    }

    @Test
    public void smoke() throws Exception {
        File errorFile = context.getResource(context.environment.getProperty('error')).file

        JobLauncher jobLauncher = context.getBean(JobLauncher);
        Job job = context.getBean(Job);
        JobExecution execution =, new JobParametersBuilder().toJobParameters());

        Assert.assertThat(execution.exitStatus.exitCode, equalTo("COMPLETED"));

        def records = mongoOps.find(new Query(), Record)
        Assert.assertThat(records, hasSize(2));
        Assert.assertThat(errorFile.readLines(), contains(
                "sms,deliver,this is a totally busted line",
                "another intentionally corrupted record"));
    }
}

If you are using Spring 3.1.0.M2 you should be able to use the Spring configuration test runner, but here I just construct the context manually.

Complete source is on github.

1. Spring Batch or How Not to Design an API
2. Java-Based Configuration of Spring Dependency Injection
3. Spring Batch user guide

Maven bash autocompletion

Googling it will turn up quite a number of options.
Maven 2 site: here
A page on willcode4beer site: here
Bash Completion script from Ludovic Claude’s PPA: here

Ludovic Claude's PPA version is the best. Download it, put it under /etc/bash_completion.d/, and enjoy the tab tab 🙂

