
Sunday, March 29, 2015

JCodec - Grabbing and saving picture frames from video

JCodec is an open source pure Java implementation of video and audio codecs and formats (a statement lifted from the website, http://jcodec.org/index.html). It's a nifty API with a nice license (BSD), but sadly, as with much open source software, the documentation can get a little stale.

The code listed on the website (as of Mar 29, 2015) to get a single frame works on v0.1.5, but, being a person who wants to work with the latest and greatest, I got v0.1.9 from Maven (http://search.maven.org/#search%7Cga%7C1%7Cjcodec) and tried to run the code sample.

Alas, it didn't compile. After searching high and low for a remedy in the javadocs, getting sidetracked into trying Apache Imaging (still in incubation) and taking a look at OpenIMAJ (www.openimaj.org/), I still couldn't find a way to get a single frame with v0.1.9.

I finally got frustrated and just downloaded the v0.1.5 sources to see how it was implemented back then. It turns out v0.1.9 removed the dependency on javax.imageio, which is why the example code didn't work. So I took some code from v0.1.5 and reapplied it to v0.1.9 just to test it out:



import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.io.File;

import javax.imageio.ImageIO;

import org.jcodec.api.FrameGrab;
import org.jcodec.common.model.ColorSpace;
import org.jcodec.common.model.Picture;
import org.jcodec.scale.ColorUtil;
import org.jcodec.scale.Transform;
import org.junit.Test;

public class JCodecTests {

 @Test
 public void testSingleFrame_v019() throws Exception {
  int frameNumber = 150;
  Picture frame = FrameGrab.getNativeFrame(new File("D:/yk/mymovie.mp4"), frameNumber);

  //Convert the native frame to RGB before turning it into a BufferedImage
  Transform transform = ColorUtil.getTransform(frame.getColor(), ColorSpace.RGB);
  Picture rgb = Picture.create(frame.getWidth(), frame.getHeight(), ColorSpace.RGB);
  transform.transform(frame, rgb);

  //toBufferedImage is a helper lifted from the v0.1.5 sources (see the sketch below)
  BufferedImage bi = toBufferedImage(rgb);
  ImageIO.write(bi, "png", new File("frame_150.png"));
 }
}


It's a little bit 'exposed', but this would work, for now.
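For reference, the toBufferedImage helper called above was adapted from the v0.1.5 sources; a minimal sketch of the conversion, assuming the Picture has already been transformed to RGB as above and that its samples map straight onto a TYPE_3BYTE_BGR buffer:

 private static BufferedImage toBufferedImage(Picture rgb) {
  //Copy the interleaved RGB samples into a TYPE_3BYTE_BGR image,
  //roughly what the v0.1.5 helper did; depending on the version,
  //the channel order may need swapping
  BufferedImage dst = new BufferedImage(rgb.getWidth(), rgb.getHeight(), BufferedImage.TYPE_3BYTE_BGR);
  byte[] dstData = ((DataBufferByte) dst.getRaster().getDataBuffer()).getData();
  int[] srcData = rgb.getPlaneData(0);
  for (int i = 0; i < dstData.length; i++) {
   dstData[i] = (byte) srcData[i];
  }
  return dst;
 }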

Lesson learnt: Go directly to the source! Long live open source!

Friday, March 28, 2014

Base64 in Java 8 - It's Not Too Late To Join In The Fun

Finally, Java 8 is out. Finally, there's a standard way to do Base64 encoding. For too long we have been relying on Apache Commons Codec (which is great anyway). Memory-conscious coders would resort to sun.misc.BASE64Encoder and sun.misc.BASE64Decoder just to avoid adding extra JAR files to their programs, provided they were absolutely sure of running only on a Sun/Oracle JDK. These classes are still lurking around in Java 8.

To try things out, I've furnished a JUnit test to show how to use the following APIs to encode:

  • Commons Codec: org.apache.commons.codec.binary.Base64
  • Java 8's new java.util.Base64
  • The sort-of evergreen internal code of Sun/Oracle's JDK: sun.misc.BASE64Encoder

package org.gizmo.util;

import java.util.Random;

import org.apache.commons.codec.binary.Base64;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import static org.junit.Assert.assertArrayEquals;

import sun.misc.BASE64Encoder;

public class Base64Tests {

 private static byte[] randomBinaryData = new byte[5000000];
 private static long durationCommons = 0;
 private static long durationJava8 = 0;
 private static long durationSun = 0;
 
 private static byte[] encodedCommons;
 private static byte[] encodedJava8;
 private static String encodedSun;
 
 @BeforeClass
 public static void setUp() throws Exception {
  
  //We want to test the APIs against the same data
  new Random().nextBytes(randomBinaryData);  
 }

 @Test
 public void testSunBase64Encode() throws Exception {
  
  BASE64Encoder encoder = new BASE64Encoder();

  long before = System.currentTimeMillis();

  encodedSun = encoder.encode(randomBinaryData);
  
  long after = System.currentTimeMillis();
  durationSun = after-before;
  System.out.println("Sun: " + durationSun);
 } 
 
 @Test
 public void testJava8Base64Encode() throws Exception {
  
  long before = System.currentTimeMillis();

  java.util.Base64.Encoder encoder = java.util.Base64.getEncoder();
  encodedJava8 = encoder.encode(randomBinaryData);
  
  long after = System.currentTimeMillis();
  durationJava8 = after-before;
  System.out.println("Java8: " + durationJava8);
 }
 
 @Test
 public void testCommonsBase64Encode() throws Exception {
  
  long before = System.currentTimeMillis();
  
  encodedCommons = Base64.encodeBase64(randomBinaryData);
  
  long after = System.currentTimeMillis();
  durationCommons = after-before;
  System.out.println("Commons: " + durationCommons);
 }

 @AfterClass
 public static void report() throws Exception {

  //Sanity check
  assertArrayEquals(encodedCommons, encodedJava8);
  System.out.println(durationCommons*1.0/durationJava8);
 }
}



What about the performance of these 3 approaches? Base64 encoding is a small enough operation that there are fewer ways to screw it up, but you never know what lies beneath the surface. From rough timing (in the JUnit tests), the 3 methods can be ranked as follows, from fastest to slowest: Java 8, Commons, Sun. A sample of the timings (encoding a byte array of size 5,000,000):

Sun: 521
Commons: 160
Java8: 37

Java 8's method ran about 4x faster than Commons, and about 14x faster than Sun. But this sample is simplistic; do benchmark for yourselves and come to your own conclusions.
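
The test above only exercises encoding. For completeness, here's a minimal round trip with the new java.util.Base64 API (the URL-safe and MIME variants are obtained via getUrlEncoder() and getMimeEncoder() respectively):

 byte[] input = "Hello, Base64!".getBytes("UTF-8");

 //Encode to a String, then decode back to the original bytes
 String encoded = java.util.Base64.getEncoder().encodeToString(input);
 byte[] decoded = java.util.Base64.getDecoder().decode(encoded);

 System.out.println(encoded);
 System.out.println(new String(decoded, "UTF-8"));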

So, which API should you use? As any expert will tell you...it depends. If you have enough power to dictate that your code runs only on Java 8 and above, then by all means use the new java.util.Base64. If you need to support multiple JDK versions and vendors, stick with Commons Codec or some other 3rd-party API. Or wait until the older Java versions are out of circulation and rewrite your precious codebase. Or move on to another programming language.

Note: I did not even mention using sun.misc.BASE64Encoder. Avoid it when possible. Perhaps one day this class will be removed in another (allos) version of the JDK...it isn't present in other (heteros) JDKs from other vendors.


Tuesday, May 14, 2013

Invoking EJB3 in WebSphere using Spring


Here are the steps to call a method of an EJB (v3.0) deployed in WebSphere 7 from a plain old Java client, either using Spring to shield your code from unnecessary implementation details, or in a bare-bones, straightforward coding style.

Note: Tested against WAS 7 only.

Common Steps

1. Firstly, the Remote interface class is needed and will be referenced in your client code to call the EJB method. Copy the class file or Java source to your Java client workspace. In my example, the remote interface class is org.gizmo.minyakangin.CapKapakRemote
2. If you have a local development WebSphere 7, run the following command to create the stub class

[Class_Folder_Path]:>[WAS_HOME]\AppServer\bin\createEJBStubs.bat org.gizmo.minyakangin.CapKapakRemote -cp .

You should see an output similar to the one below:

Processing the org.gizmo.minyakangin.CapKapakRemote input file.
Command Successful

A file with the name _CapKapakRemote_Stub.class will be created under folder [Class_Folder_Path]\org\gizmo\minyakangin.

3. A missing stub file may produce the following errors when getting a reference to the EJB:

java.lang.ClassCastException: Unable to load class: org.gizmo.minyakangin._CapKapakRemote_Stub

java.lang.ClassCastException: org.omg.stub.java.rmi._Remote_Stub incompatible with org.gizmo.minyakangin.CapKapakRemote

4. At the WebSphere Admin Console, locate the EJB application and ensure a proper JNDI reference is assigned e.g. ejb/CapKapak. It is best not to rely on the default JNDI reference. But if you insist, the default is the fully qualified remote class name i.e. org.gizmo.minyakangin.CapKapakRemote.



5. Also, check the bootstrap port of the application server instance hosting the EJB (my environment is 9812):



6. At your Java client workspace, ensure that com.ibm.ws.webservices.thinclient_7.0.0.jar (under [WAS_HOME]\AppServer\runtimes) and com.ibm.ws.runtime.jar (under [WAS_HOME]\AppServer\plugins) are included in the classpath. I'm not sure why including only com.ibm.ws.ejb.thinclient_7.0.0.jar didn't work in my case.

Caveat: The steps only work using IBM JDK (v7). You may need to do some 'hacking' to get it working under a different JVM.


Using Spring 3.x

Using Spring, you can work against a plain Java interface (no EJB classes or annotations needed). Here's the interface:


package org.gizmo.minyakangin;

public interface CapKapak {
	public String getBrand();
}



An EJB3 'bean' can be easily constructed using a JNDI object factory bean, and it's even easier to just use the 'jee' namespace, as below:




<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:jee="http://www.springframework.org/schema/jee"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
        http://www.springframework.org/schema/jee
        http://www.springframework.org/schema/jee/spring-jee-3.0.xsd">

	<jee:jndi-lookup id="capKapak" jndi-name="ejb/CapKapak">
		<jee:environment>
			java.naming.factory.initial=com.ibm.websphere.naming.WsnInitialContextFactory
			java.naming.provider.url=corbaloc:iiop:localhost:9812
		</jee:environment>
	</jee:jndi-lookup>

</beans>




Note: The 'java.naming.provider.url' value should refer to the WebSphere server's bootstrap address as shown above.

Then in the code (using Spring's JUnit class):


package org.gizmo.minyakangin;

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.AbstractJUnit4SpringContextTests;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = {
		"classpath:org/minyakangin/spring-context.xml"})
public class EjbTests extends AbstractJUnit4SpringContextTests {

	CapKapakRemote capKapak;
	
	@Before
	public void before() {
		capKapak = (CapKapakRemote)applicationContext.getBean("capKapak");
	}
	
	@Test
	public void testLoginManager() throws Exception {
		String brand = capKapak.getBrand();
		System.out.println(brand);
	}
}



Using plain Java



package org.gizmo.minyakangin;

import java.util.Properties;

import javax.naming.InitialContext;
import javax.rmi.PortableRemoteObject;


public class StandaloneEjbClient {

	public static void main (String args[]) throws Exception {
		Properties props = new Properties();
		props.put("java.naming.factory.initial", "com.ibm.websphere.naming.WsnInitialContextFactory");
		props.put("java.naming.provider.url", "corbaloc:iiop:localhost:9812");

		InitialContext ctx = new InitialContext(props);
		
		Object ejbBusIntf = ctx.lookup("ejb/CapKapak");
		CapKapakRemote capKapak = (CapKapakRemote)PortableRemoteObject.narrow(ejbBusIntf, CapKapakRemote.class);
		String brand = capKapak.getBrand();
		System.out.println(brand);
	}
}






Tuesday, April 16, 2013

Strategy Pattern Applied - KeyStoreTemplate

Update: I wrongly attributed the design pattern implementation below to the Template pattern; it's actually the Strategy pattern, as pointed out by a user on Reddit. However, the article below is kept in its original form, with just the title changed. Sorry for the confusion :)

The Template design pattern is useful if you have methods that follow a similar pattern but differ only in a few places. Usually, the 'generic' method will call one or several abstract methods, which are then implemented in a concrete class.

The JdbcTemplate of the Spring Framework was (and still is) a life saver. No more unclosed Connection objects that lead to JDBC connection leaks. No more try/catch blocks that obscure the code logic. Converting the JDBC DAOs of legacy systems to use JdbcTemplate provided immediate benefits in terms of simplicity (eliminating copy-and-paste code) and robust resource management (e.g. closing Connections, ResultSets, Statements).

There are a lot of useful articles detailing the what, why and how of this design pattern, so I will not rehash them here. I will just document how I applied this pattern to solve an age-old problem.

I decided to follow JdbcTemplate's design style and come up with a KeyStoreTemplate, a class that handles importing and exporting keys to/from a keystore. To keep things simple, this class will only handle 2 things: importing a key, and exporting a key.

Firstly, the KeyStoreTemplate code (abridged for simplicity):


import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.Key;
import java.security.KeyStore;
import java.security.Provider;

import org.apache.commons.io.IOUtils;

public class KeyStoreTemplate {

 private final File keyStoreFile;
 private final Provider provider;
 private final String keyStoreType;
 private final String keyStorePassword;

 /**
 * Just enough parameters to open a keystore.
 */
 public KeyStoreTemplate(File keyStoreFile, Provider provider, String keyStoreType, String keyStorePassword) {
  this.keyStoreFile = keyStoreFile;
  this.provider = provider;
  this.keyStoreType = keyStoreType;
  this.keyStorePassword = keyStorePassword;
 }

 /**
 * This is to get/export a key from the keystore.
 */
 public Key getFromKeyStore(KeyStoreExporter exporter) {
  FileInputStream fis = null;
  try {
   KeyStore ks = KeyStore.getInstance(keyStoreType, provider);

   //Keystore file must exist!
   if(keyStoreFile.exists()) {
    fis = new FileInputStream(keyStoreFile);
    ks.load(fis, keyStorePassword.toCharArray());
   } else {
    throw new RuntimeException("Keystore file cannot be located!");
   }

   String keyPass = keyStorePassword;

   return exporter.exportKey(ks, keyPass);

  } catch (Exception e) {
   throw new RuntimeException("Something went wrong!", e);
  } finally {
   IOUtils.closeQuietly(fis);
  }
 }

 /**
 * This is to save/import a key to the keystore.
 */
 public void saveToKeyStore(KeyStoreImporter importer) {

  FileInputStream fis = null;
  FileOutputStream fos = null;
  try {
   KeyStore ks = KeyStore.getInstance(keyStoreType, provider);

   //Keystore file must exist!
   if(keyStoreFile.exists()) {
    fis = new FileInputStream(keyStoreFile);
    ks.load(fis, keyStorePassword.toCharArray());
   } else {
    throw new RuntimeException("Keystore file cannot be located!");
   }

   String keyPass = keyStorePassword;

   importer.importPrivateKey(ks, keyPass);

   fos = new FileOutputStream(keyStoreFile);
   ks.store(fos, keyStorePassword.toCharArray());

  } catch (Exception e) {
   throw new RuntimeException("Something went wrong!", e);
  } finally {
   IOUtils.closeQuietly(fis);
   IOUtils.closeQuietly(fos);
  }
 }

}


Notice that there are 2 interfaces: KeyStoreExporter and KeyStoreImporter. These interfaces are implemented by the caller of KeyStoreTemplate to do the actual work, without worrying about the KeyStore plumbing. The KeyStore object is passed in, much like the ResultSet object is passed in when using JdbcTemplate.


import java.security.GeneralSecurityException;
import java.security.KeyStore;

public interface KeyStoreImporter {

 public void importPrivateKey(KeyStore keyStore, String keyPassword) throws GeneralSecurityException;
}


import java.security.GeneralSecurityException;
import java.security.Key;
import java.security.KeyStore;

public interface KeyStoreExporter {

 public Key exportKey(KeyStore keyStore, String keyPassword) throws GeneralSecurityException;
}


So, how do you use them? Examples below:


//Key generation and key import
KeyPair keyPair = generateRSAKeyPair();
final Certificate[] certs = CryptoUtils.generateCertificate(keyPair);

RSAPublicKey publicKey = (RSAPublicKey)keyPair.getPublic();
final RSAPrivateKey privateKey = (RSAPrivateKey)keyPair.getPrivate();

logger.debug("[generateAndStoreRSAKeyPair] pubKey={},privKey={}", publicKey.getClass(), privateKey.getClass());

keyStoreTemplate.saveToKeyStore(new KeyStoreImporter() {
 @Override
 public void importPrivateKey(KeyStore ks, String keyPassword) throws KeyStoreException {
  ks.setKeyEntry(alias, privateKey, keyPassword.toCharArray(), certs);
 }
});


//Method to get an RSA public key
public RSAPublicKey getRSAPublicKey(final String alias) throws BusinessException {

 return (RSAPublicKey)keyStoreTemplate.getFromKeyStore(new KeyStoreExporter() {
  @Override
  public Key exportKey(KeyStore keyStore, @SuppressWarnings("unused") String keyPassword) throws GeneralSecurityException {
   Certificate cert = keyStore.getCertificate(alias);
   RSAPublicKey publicKey = (RSAPublicKey)cert.getPublicKey();
   return publicKey;
  }
 });
}


//Method to get a secret key
public SecretKey getSecretKey(final String alias) throws BusinessException {

 return (SecretKey)keyStoreTemplate.getFromKeyStore(new KeyStoreExporter() {
  @Override
  public Key exportKey(KeyStore keyStore, String keyPassword) throws GeneralSecurityException {
   SecretKey secretKey = (SecretKey)keyStore.getKey(alias, keyPassword.toCharArray());
   return secretKey;
  }
 });
}

Thursday, March 14, 2013

Cryptography Using JCA - Services In Providers


The Java Cryptography Architecture (JCA) is an extensible framework that enables you to perform cryptographic operations. JCA also promotes implementation independence (your program should not care about who's providing the cryptographic service) and implementation interoperability (your program should not be tied to a specific provider of a particular cryptographic service).

JCA allows numerous cryptographic services, e.g. ciphers, key generators and message digests, to be bundled up in a java.security.Provider class and registered declaratively in a special file (java.security) or programmatically via the java.security.Security class (method 'addProvider').
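
As an illustration of the programmatic route, here's a minimal sketch that registers the BouncyCastle provider and looks it up again by name ('BC' is BouncyCastle's provider name; any other Provider implementation works the same way):

Security.addProvider(new BouncyCastleProvider());

Provider bc = Security.getProvider("BC");
System.out.println(bc.getName() + " " + bc.getVersion());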

Although JCA is a standard, different JDKs implement it differently. Between the Sun/Oracle and IBM JDKs, the IBM JDK is somewhat more 'orderly' than Oracle's. For instance, IBM's uber provider (com.ibm.crypto.provider.IBMJCE) implements the following keystore formats: JCEKS, PKCS12KS (PKCS12) and JKS. The Oracle JDK 'spreads' the keystore format implementations across the following providers:

  • sun.security.provider.Sun - JKS
  • com.sun.crypto.provider.SunJCE - JCEKS
  • com.sun.net.ssl.internal.ssl.Provider - PKCS12

Despite the popular recommendation to write applications that do not point to a specific Provider class, there are some use cases that require an application to know exactly what services a Provider class is offering. This requirement becomes more prevalent when supporting multiple application servers that may be tightly coupled to a particular JDK, e.g. WebSphere bundled with the IBM JDK.

I usually use Tomcat + Oracle JDK for development (more lightweight, faster), but my testing/production setup is WebSphere + IBM JDK. To further complicate matters, my project needs a hardware security module (HSM), which uses the JCA API via the provider class com.ncipher.provider.km.nCipherKM. So, when I am at home (without access to the HSM), I want to continue writing code and at least get it tested against a JDK provider. I can then switch to the nCipherKM provider for another round of unit testing before committing the code to source control.

The usual assumption is that one Provider class is enough, e.g. IBMJCE for IBM JDKs, SunJCE for Oracle JDKs. So the usual solution is to implement a class that specifies one provider, using reflection to avoid a compile-time dependency on provider classes that may not be present:


//Pick ONE of the following, depending on the runtime JDK/HSM

//For nShield HSM
Class<?> c = Class.forName("com.ncipher.provider.km.nCipherKM");
Provider provider = (Provider)c.newInstance();

//For Oracle JDK
Class<?> c = Class.forName("com.sun.crypto.provider.SunJCE");
Provider provider = (Provider)c.newInstance();

//For IBM JDK
Class<?> c = Class.forName("com.ibm.crypto.provider.IBMJCE");
Provider provider = (Provider)c.newInstance();



This design was OK, until I encountered a NoSuchAlgorithmException while running some unit test cases on the Oracle JDK. And the algorithm I was using was RSA, a common algorithm! How could this be, when the documentation says RSA is supported? The same test cases worked fine on the IBM JDK.

Upon further investigation, I realised, much to my dismay, that the SunJCE provider does not have an implementation of the KeyPairGenerator service for RSA. An implementation is, however, found in the provider class sun.security.rsa.SunRsaSign. So the assumption of "1 provider to provide them all" is broken. But thanks to JCA's open API, a Provider object can be passed in when requesting a Service instance, e.g.


KeyGenerator kgen = KeyGenerator.getInstance("AES", provider);
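
Similarly, for the RSA case above, the generator can be requested by provider name instead of a Provider object; a minimal sketch, assuming an Oracle JDK where the 'SunRsaSign' provider is registered:

//On Oracle JDKs the RSA KeyPairGenerator lives in SunRsaSign, not SunJCE
KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA", "SunRsaSign");
kpg.initialize(2048);
KeyPair keyPair = kpg.generateKeyPair();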


To help with my inspection of the various Provider objects, I've furnished a JUnit test to pretty-print out the various services of each registered Provider instance in a JDK.


package org.gizmo.jca;

import java.security.Provider;
import java.security.Provider.Service;
import java.security.Security;
import java.util.Comparator;
import java.util.SortedSet;
import java.util.TreeSet;

import javax.crypto.KeyGenerator;

import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.junit.Test;

public class CryptoTests {

 @Test
 public void testBouncyCastleProvider() throws Exception {
  Provider p = new BouncyCastleProvider();
  String info = p.getInfo();
  System.out.println(p.getClass() + " - " + info);
  printServices(p);
 }
 
 @Test
 public void testProviders() throws Exception {
  
  Provider[] providers = Security.getProviders();
  for(Provider p : providers) {
   String info = p.getInfo();
   System.out.println(p.getClass() + " - " + info);
   printServices(p);
  }
 }
 
 private void printServices(Provider p) {
  SortedSet<Service> services = new TreeSet<Service>(new ProviderServiceComparator());
  services.addAll(p.getServices());
  
  for(Service service : services) {
   String algo = service.getAlgorithm();
   System.out.println("==> Service: " + service.getType() + " - " + algo);
  }
 }

 /**
  * This is to sort the various Services to make it easier on the eyes...
  */
 private class ProviderServiceComparator implements Comparator<Service> {

  @Override
  public int compare(Service object1, Service object2) {
   String s1 = object1.getType() + object1.getAlgorithm();
   String s2 = object2.getType() + object2.getAlgorithm();

   return s1.compareTo(s2);
  }

 }
}


Anyway, if the algorithms you use are common and strong enough for your needs, the BouncyCastle provider can be used. It works well across JDKs (tested against IBM & Oracle). BouncyCastle does not support JKS or JCEKS keystore formats, but if you are not fussy, the BC keystore format works just fine. BouncyCastle is also open source and can be freely included in your applications.
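
For example, once the BC provider is registered, its own keystore format can be used through the standard KeyStore API; a minimal sketch ('BKS' is BouncyCastle's keystore type, and the file name and password here are made up for illustration):

Security.addProvider(new BouncyCastleProvider());

KeyStore ks = KeyStore.getInstance("BKS", "BC");
ks.load(null, null); //creates a new, empty keystore in memory
try (FileOutputStream fos = new FileOutputStream("demo.bks")) {
 ks.store(fos, "changeit".toCharArray());
}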

Tip: JKS keystores cannot store SecretKeys. You can try it as your homework :)

Hope this post will enlighten you to explore JCA further, or at least be aware of the pitfalls of 'blissful ignorance' when working with JCA.

Wednesday, January 02, 2013

IntelliJ CE for Maven WebApps


IntelliJ Community Edition is a free and capable IDE for plain old Java programming, and is also good for Android development. When it comes to enterprise/web development, the Ultimate Edition is the better choice if you can afford it. However, you can still do some simple Java web development using Maven and running your application server outside of IntelliJ. No doubt it's less integrated, but you can at least breathe some life into your exploration of IntelliJ.

Software used:

  • Oracle JDK 7
  • Maven v3.0.4
  • IntelliJ Community Edition (CE) v12.0.1
  • Apache Tomcat 7


Before starting, ensure that the M2_HOME environment variable has been set to point to your local Maven home folder.

IntelliJ Steps:

  1. Enable on-the-fly compilation. Go to File > Settings > Compiler, check 'Use external build' and 'Make project automatically'.
  2. Ensure the Maven plugin is enabled. Go to File > Settings > Plugins (under IDE Settings), item 'Maven Integration' should be checked. Restart the IDE if needed/prompted.
  3. Create a new project. 
    1. Go to File > New Project..., choose 'Maven Module' (under Java). Set the project location to the desired folder which will house your pom.xml file. Click 'Next' (proceed to create the folder if needed).
    2. Check 'Create from archetype' and select 'maven-archetype-webapp'. Click 'Next'. Click 'Finish'.
  4. At the right side of the IDE, there will be a button for 'Maven Projects'. Click it to bring up the Maven panel. From the panel, you can then run targets e.g. install, clean, package and check the dependencies.
  5. Point the class files to the web application's WEB-INF/classes folder. Right-click the project root folder, select 'Open Module Settings'. At the 'Paths' tab, under 'Compile output', select/ensure 'Use module compile output path', and change the 'Output Path' to your project's 'src/main/webapp/WEB-INF/classes' folder*.
  6. Add the 'maven-dependency-plugin' plugin to copy dependency JARs to the WEB-INF/lib folder. Add the following snippet to the pom.xml file (top-level):
    
     <profiles>
      <profile>
       <id>local</id>
       <build>
        <plugins>
         <plugin>
          <artifactId>maven-dependency-plugin</artifactId>
          <executions>
           <execution>
            <phase>install</phase>
            <goals>
             <goal>copy-dependencies</goal>
            </goals>
            <configuration>
             <outputDirectory>${basedir}/src/main/webapp/WEB-INF/lib</outputDirectory>
            </configuration>
           </execution>
          </executions>
         </plugin>
        </plugins>
       </build>
      </profile>
     </profiles>
    
  7. This plugin will instruct Maven to copy all dependency JARs into the 'outputDirectory' path when executing the 'install' task when using profile 'local'. At the Maven panel, check 'local' under Profiles and right-click 'install' under Lifecycles, and select 'Run mavenweb [install]'. After running you should see the junit-3.8.1.jar in the WEB-INF/lib folder.
  8. Add a Tomcat webapp config file (context descriptor). This is to specify the Maven webapp folder to be loaded by Tomcat. The simplest is a single Context element whose docBase points to the project's webapp folder (the path below is a placeholder for your own project location):
    
    <Context docBase="[Project_Folder_Path]/src/main/webapp" reloadable="true"/>
    
  9. Save the contents to the file @ [TOMCAT_HOME]\conf\Catalina\localhost\mavenweb.xml
  10. Run Tomcat. You can use 'catalina start' command under [TOMCAT_HOME]\bin to fire up an instance. Then to test the web app, point your browser to http://127.0.0.1:8080/mavenweb/index.jsp. You should be greeted with a 'Hello World!'.
* I use the 'src' folder so that there's no need to run any Maven 'compile' task and Tomcat can detect a change and reload the webapp. But it's up to personal preference, as this approach means extra care must be taken to clean up or ignore files for source control check-ins.

Final note
If you need to use a proxy to access the Internet, you can configure Maven to use it by right-clicking the project root folder and selecting 'Maven' > [Open/Create] settings.xml. Then add the proxy settings, e.g.


    <proxies>
        <proxy>
            <active>true</active>
            <protocol>http</protocol>
            <host>127.0.0.1</host>
            <port>8118</port>
            <username>proxyuser</username>
            <password>somepassword</password>
            <nonProxyHosts>www.google.com|*.somewhere.com</nonProxyHosts>
        </proxy>
    </proxies>

Links:
  • Maven - http://maven.apache.org/download.cgi
  • IntelliJ Community Edition (CE) - http://www.jetbrains.com/idea/
  • http://forum.springsource.org/showthread.php?100694-Maven-dependencies-to-lib-directory
  • http://maven.apache.org/guides/mini/guide-proxies.html

Monday, December 10, 2012

Generating Certificates With BouncyCastle

Usually, after a serious public-private key generation session, the private key needs to be stored. The obvious place is a keystore. The commonly used method to accomplish this with the java.security.KeyStore class is:

void setKeyEntry(String alias, byte[] key, Certificate[] chain) 

You'll need at least one Certificate object to accompany the private key. You may be tempted (or may have already attempted) to 'skip' producing the Certificate object by using this method instead:

void setEntry(String alias, KeyStore.Entry entry, KeyStore.ProtectionParameter protParam)

But you'll discover that a java.security.KeyStore.PrivateKeyEntry object needs a Certificate[] object passed in as well.

This led to the code below to generate the required Certificate object, just enough to fulfil the requirements to save the private key:


@Deprecated
 private Certificate[] generateCertificateOld(KeyPair keyPair) throws Exception {
        Date startDate = new Date(System.currentTimeMillis());
        Date expiryDate = DateUtils.parseDate("01/01/2100", new String[] { "dd/MM/yyyy" });
        BigInteger serialNumber = new BigInteger(String.valueOf(System.currentTimeMillis()));

        X509V1CertificateGenerator certGen = new X509V1CertificateGenerator();
        X500Principal dnName = new X500Principal("CN=Storage Certificate");

        certGen.setSerialNumber(serialNumber);
        certGen.setIssuerDN(dnName);
        certGen.setNotBefore(startDate);
        certGen.setNotAfter(expiryDate);
        certGen.setSubjectDN(dnName); // note: same as issuer
        certGen.setPublicKey(keyPair.getPublic());
        certGen.setSignatureAlgorithm("SHA256withRSA");

        return new Certificate[] {certGen.generate(keyPair.getPrivate())};
 }
 

Of course, with the aid of an IDE, you'll see that the BouncyCastle methods used are flagged as deprecated (as of v1.47). So, being a good Java coder, I followed the advice in the BC javadocs and replaced the deprecated methods with the following equivalent code:


private Certificate[] generateCertificate(KeyPair keyPair) throws Exception {
  X509v1CertificateBuilder certGen = new JcaX509v1CertificateBuilder(
    new X500Name("CN=Storage Certificate"),
    BigInteger.valueOf(System.currentTimeMillis()),
    new Date(System.currentTimeMillis()),
    DateUtils.parseDate("01/01/2100", new String[] { "dd/MM/yyyy" }),
    new X500Name("CN=Storage Certificate"),
    keyPair.getPublic());

  JcaContentSignerBuilder contentSignerBuilder = new JcaContentSignerBuilder("SHA256withRSA");
  ContentSigner contentSigner = contentSignerBuilder.build(keyPair.getPrivate());
  
  X509CertificateHolder certHolder = certGen.build(contentSigner);

  JcaX509CertificateConverter certConverter = new JcaX509CertificateConverter();
  Certificate cert = certConverter.getCertificate(certHolder);

  Certificate[] certs = { cert };
  return certs;
 }
 

May your sanity be in check as you navigate through the waters of Java security.

Thursday, November 01, 2012

Late Adopter's Guide To JSF


'The Late Adopter's Guide' is a short composition to help developers who are so busy supporting legacy Java technologies (e.g. JDK 1.4, EJB 2.1, Struts 1.x, JSP), like me, that they have little time to try out newer stuff, or who faced technical limitations when trying to introduce newer stuff into existing systems. But now you are itching to learn that one technology you've been craving all along.

This post will focus on JSF (Java Server Faces).

JSF isn't a new technology, but it was saddled with limited functionality in its early iterations. If it were not part of the Java EE standard, it wouldn't have survived till today. Now at version 2.1, JSF is mature, and now's a good time to pick it up.

A lot of stuff happened between version 1 and 2.1. This guide will let you skip the noise and bring you to the goodness of 2.1 without getting bogged down by legacy issues, e.g. adding Facelets support to JSF 1.x.

My recommendation of certain APIs over others is based on personal experience, so feel free to try out others if you are compelled to do so.

So, here are the pointers to smooth JSF-ing:

1) Use JSF 2.1 (and above)
Do not use a lower version of JSF. This will save you lots of heartache. If your system does not meet the minimal requirements for JSF 2.1, then it's better to stick with existing frameworks you're currently using (if it gets the job done, why not?).

JSF 2.1 requires Java 1.5 or later, JSP 2.1, JSTL 1.2 and a Java Servlet 2.5 implementation. Tomcat 6 or 7 is recommended.

Use the latest JSF 2.1 with all its new features, e.g. Facelets and parameterized methods in EL.

2) Use the Mojarra JSF distribution
It's the RI (reference implementation) and stable. You can try Apache MyFaces but the real gem of JSF is not really the framework, but the component libraries.

3) Use Facelets
Forget about JSPs, even though you'll be tempted to leverage your existing JSP skills and tools, e.g. JSTL, taglibs. Facelets are just XHTML pages, which are quite similar to JSPs, except that they won't allow you to embed Java code using scriptlet tags (<% %>), and you'll have to learn the new EL (expression language) syntax, which is not JSTL but is almost functionally the same. However, if you have a JSTL or OGNL background, EL will be a piece of cake for you.

Facelets also has built-in support for page templating, so there's no need for 3rd-party page templating libraries like SiteMesh.

4) Don't worry about JBoss EL
There was a buzz about JBoss EL, which lets you pass parameters to a managed bean's methods (why was it not included in earlier JSF versions? I have no idea). This feature has been incorporated into JSF 2.1.

5) Manage JSF Managed Beans via Spring
If you are an ardent Spring user, then it's recommended to use Spring to manage JSF's managed beans' dependencies. JSF's dependency injection framework is decent but you'll definitely miss Spring's more powerful and sophisticated features.

Or you can go CDI if you want to go standard Java EE.

6) Use a component library
The best thing about JSF is there are so many component libraries to choose from. RichFaces, OpenFaces, IceFaces, you name it.

But if you want to skip R&D and dive right into the action, I recommend to start with PrimeFaces. But chances are that if you are comfortable with a particular library, you won't be inclined to switch. Caveat emptor?

If you don't want to use a component library, you'll be missing most of the goodness that JSF has to offer.

You can use multiple component libraries in a single web application, but it would be messy and incur unnecessary memory overhead.

Summary

Well, I hope this abridged guide is a good start for you to look in the right direction in learning JSF. Lots of good tutorials in the wild :)


Thursday, October 04, 2012

Springing to Spring's Defense

Lately there have been fewer Spring-related articles posted (or publicized). Some took it as a sign of decline in its popularity. What about the silent majority? Well, I believe that this bunch of people are not undecided, but are already happily saving the world from oblivion using Spring. Here's my story.

This blog entry's title is funny because, in actual fact, Spring doesn't need any defending. It has been around since the early 2000s, when J2EE was exciting but full of teething issues. Ever since its introduction, the Java community has been gobbling up every bit of goodness that came out of Rod Johnson's team. I can happily say that even though Spring has sort of 'matured' and may be daunting to a beginner, it still does one thing really well: IoC. The other Spring sub-projects are complements that build upon an already solid framework, which is really "true reuse" in action.

So, without further ado, here's what I like about Spring that keeps me coming back to it:

1. Solid IoC Framework
This is worth repeating here. From the simple DTD to the flexible XML schemas and config annotations, the features for applying dependency injection make almost all hand-made singletons and factories unnecessary.

2. Great utility APIs
An example: JdbcTemplate made hand-coded DataSource-Connection-PreparedStatement-ResultSet code blocks (prone to connection leaks) obsolete. Using it doesn't require the IoC container, which is great for refactoring legacy code without introducing drastic architectural changes. There are dozens more such utility APIs that I won't elaborate on here; I'll leave them to the reader to explore.

3. Great 'glue' APIs
You like Struts, Quartz, Velocity, Freemarker, even Axis? How about JSF, Struts2, Hibernate or JPA? Spring has the integration code necessary for you to use a wide variety of other frameworks/APIs without attempting to supersede them. Spring even complements them with its own offerings, e.g. Spring MVC, Spring Web Services and Spring Security. You are not forced to accept the entire Spring stack. Just pick and choose.

4. Pick & Choose
Code-wise and even package-wise, Spring now comes in modules packed in different JAR files. The Spring developers wisely decided to abandon the "1 JAR to run them all" approach and let users include only the minimal JAR files required to use a specific feature.

5. Unit-Testing Support
EJB3 and above have better support than their predecessors, but there's still no beating Spring with its support for running the IoC container outside of the application server. I use this feature every day :)

Well, I'll stop here. I want to continue saving the world with Spring. Have a good day.

Friday, September 28, 2012

Turbo-charge your Android emulator for faster development


I came across an article which claims to boost the Android emulator's performance using Intel's Hardware Accelerated Execution Manager (HAXM) driver. It got me excited, and I decided to verify this claim. This blog entry is my story.

My tools:

  • Android SDK r20.0.3
  • Intellij Community Edition 11.1.3


Basically, the special 'enhancement' provided by Intel is an x86 Atom system image which utilizes the HAXM driver to enable better emulator performance. I'll not repeat the technical details here; you can access the links below for more info.

Caveat: This trick only works on Intel hardware with Intel Virtualization Technology (VT-x) enabled (usually via the BIOS).

Also, Intel x86 system images are currently (as of this blog posting) available for Android versions 2.3.3 (Gingerbread), 4.0.3 (ICS) and 4.1 (Jelly Bean) only.

To avoid headaches, set the environment variable ANDROID_SDK_HOME to point to your Android SDK root folder before proceeding.

High-level steps:
1. Download & install relevant packages via Android SDK Manager
2. Create Android Virtual Devices (AVD)
3. Create an Android Module project in IntelliJ CE
4. Test launching the Android application using the AVDs

1. Download relevant packages via Android SDK Manager

Launch the SDK Manager and ensure the following are installed:

  • Intel x86 Atom System Images (shown below is for Android 2.3.3)
  • Intel x86 Emulator Accelerator (HAXM)



Next, you'll need to install the HAXM driver manually. Go to the Android SDK root folder and navigate to extras\intel\Hardware_Accelerated_Execution_Manager. Execute file IntelHaxm.exe to install.

2. Create Android Virtual Devices (AVD)

Launch the AVD Manager and create 2 AVDs with the same options but different Target:

  • DefaultAVD233 - Android 2.3.3 - API Level 10
  • IntelAVD233 - Intel Atom x86 System Image (Intel Corporation) - API Level 10




3. Create an Android Module project in IntelliJ CE

In IntelliJ, create a new project of type "Android Module", as shown:



Under "Android SDK", select the appropriate Android platform. You'll need to point to your Android SDK root folder in order to choose the appropriate build target. As shown below, "Android 2.3.3" is chosen:



Ensure that the "Target Device" option is set to Emulator, then click "Finish" to complete the project creation.

4. Test launching the Android application using the AVDs

Ok, we'll test using the default Android 2.3.3 AVD first.

At the IntelliJ menubar, select "Run" > "Edit Configurations...". Go to the "Target Device" section. At the "Prefer Android Virtual Device" option, select "DefaultAVD233". Then Run the Android application. After a while, you should see the emulator window with the "Hello World" message.

To run with the Intel AVD, choose the "IntelAVD233" instead.

What's most exciting is the speed of the emulator launch (timed from clicking 'Run' in IntelliJ until the 'Hello World' message is shown in the emulator). The rough timings recorded on my notebook (Intel i3 380M, 3GB RAM):

  • DefaultAVD233 - 1m 7s
  • IntelAVD233 - 35s


Wow, that's fast (~50% faster), and that's without tuning other parameters to speed things up even further.

References:
http://www.developer.com/ws/android/development-tools/supercharge-your-android-emulator-speed-with-intel-emulation-technologies.html
http://software.intel.com/en-us/articles/installing-the-intel-atom-x86-system-image-for-android-emulator-add-on-from-the-android-sdk
http://stackoverflow.com/questions/1554099/slow-android-emulator
http://developer.android.com/sdk/index.html
http://www.jetbrains.com/idea/download/

Thursday, September 27, 2012

Embedding HSQLDB server instance in Spring

I was using XAMPP happily for development until I had to host the application somewhere accessible via the Internet for the client to test and use. I have a VPS with only 384 MB of RAM, and, needing to find a way fast, I decided to install XAMPP on the VPS. Because of the low RAM, when MySQL was running, Tomcat failed to start, even though the initial Java heap size was set to 64m. I managed to host the site temporarily on Jelastic, before moving to OpenShift.

I toyed with the idea of combining the database and application server instances in 1 JVM to reduce RAM usage (compared to running MySQL + Tomcat). After searching the Internet, I came across several articles on running HSQL server instances together with Tomcat. No doubt I'd have to update my site to be compatible with HSQL first, but as a POC (proof of concept), I decided to explore the feasibility of running the HSQL server instance in a Spring container.

There are several reasons to run the HSQL server just like a bean in Spring:
1. All-in-one configuration. Everything that needs to be configured is done in Spring. There are examples on the Net of running the HSQL instance alongside Tomcat, but this requires adding stuff to Tomcat (see links below).
2. Application server independence. 'Theoretically' (in quotes as I successfully tested this in Tomcat only), since everything is done in Spring, there's little or nothing that needs to be configured in the appserver.

The HSQL server 'bean' is also meant to launch an instance in network mode (not in-process, i.e. 'mem' or 'file'). Some reasons for this:
1. 'mem' in-process access is the fastest, but is not persistent. There are other means to initiate a 'mem' data source using Spring's spring-jdbc tags, which is a better approach.
2. 'file' in-process access is persistent, but like 'mem', it can only be accessed within the same Java process.
3. Network mode (hsql) is both persistent and accessible using external JDBC client tools. This is useful for troubleshooting and verification.

After reading HSQLDB's documentation, here's the code that does the HSQL server instance bean lifecycle management:

package org.gizmo.hsql.spring;

import java.io.IOException;
import java.util.Properties;

import org.hsqldb.Server;
import org.hsqldb.persist.HsqlProperties;
import org.hsqldb.server.ServerAcl.AclFormatException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.SmartLifecycle;

public class HyperSqlDbServer implements SmartLifecycle
{
 private final Logger logger = LoggerFactory.getLogger(HyperSqlDbServer.class);
 private HsqlProperties properties;
 private Server server;
 private boolean running = false;
 
 public HyperSqlDbServer(Properties props)
 {
  properties = new HsqlProperties(props);
 }
 
 @Override
 public boolean isRunning()
 {
  if(server != null)
   server.checkRunning(running);
  return running;
 }

 @Override
 public void start()
 {
  if(server == null)
  {
   logger.info("Starting HSQL server...");
   server = new Server();
   try
   {
    server.setProperties(properties);
    server.start();
    running = true;
   }
   catch(AclFormatException afe)
   {
    logger.error("Error starting HSQL server.", afe);
   }
   catch (IOException e)
   {
    logger.error("Error starting HSQL server.", e);
   }
  }
 }

 @Override
 public void stop()
 {
  logger.info("Stopping HSQL server...");
  if(server != null)
  {
   server.stop();
   running = false;
  }
 }

 @Override
 public int getPhase()
 {
  return 0;
 }

 @Override
 public boolean isAutoStartup()
 {
  return true;
 }

 @Override
 public void stop(Runnable runnable)
 {
  stop();
  runnable.run();
 }
}

The abridged Spring configuration:


<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd">

 <bean class="org.gizmo.hsql.spring.HyperSqlDbServer" id="hsqldb" init-method="start">
  <constructor-arg>
   <value>
    server.database.0=file:d:/hsqldb/demobase
    server.dbname.0=demobase
    server.remote_open=true
    hsqldb.reconfig_logging=false
   </value>
  </constructor-arg>
 </bean>
</beans>



Sample output when starting Spring in Tomcat:
[Server@1e893ae]: [Thread[pool-2-thread-1,5,main]]: checkRunning(false) entered
[Server@1e893ae]: [Thread[pool-2-thread-1,5,main]]: checkRunning(false) exited
[Server@1e893ae]: Initiating startup sequence...
[Server@1e893ae]: Server socket opened successfully in 7 ms.
Sep 27, 2012 9:26:23 AM org.hsqldb.persist.Logger logInfoEvent
INFO: checkpointClose start
Sep 27, 2012 9:26:23 AM org.hsqldb.persist.Logger logInfoEvent
INFO: checkpointClose end
[Server@1e893ae]: Database [index=0, id=0, db=file:d:/hsqldb/demobase, alias=demobase] opened sucessfully in 442 ms.
[Server@1e893ae]: Startup sequence completed in 451 ms.
[Server@1e893ae]: 2012-09-27 09:26:23.395 HSQLDB server 2.2.8 is online on port 9001
[Server@1e893ae]: To close normally, connect and execute SHUTDOWN SQL
[Server@1e893ae]: From command line, use [Ctrl]+[C] to abort abruptly
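
Since the server runs in network mode, any JDBC client can connect to it over port 9001; a minimal sketch (using HSQLDB's default 'SA' account with an empty password, assuming it hasn't been changed):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HsqlClientCheck {
 public static void main(String[] args) throws Exception {
  Class.forName("org.hsqldb.jdbc.JDBCDriver");
  try (Connection conn = DriverManager.getConnection("jdbc:hsqldb:hsql://localhost:9001/demobase", "SA", "");
       Statement st = conn.createStatement();
       ResultSet rs = st.executeQuery("CALL CURRENT_TIMESTAMP")) {
   while (rs.next()) {
    System.out.println("Connected, server time is " + rs.getString(1));
   }
  }
 }
}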


References:

  • http://hsqldb.org/doc/2.0/guide/index.html
  • http://dkuntze.wordpress.com/2009/01/28/hsql-on-tomcat/
  • http://www.ibm.com/developerworks/data/library/techarticle/dm-0508bader/


Tuesday, September 18, 2012

Spring 3.1 - Loading Properties For XML Configuration From Database

Spring makes it easy to inject values obtained from properties files via its PropertyPlaceholderConfigurer (pre-Spring 3.1) and PropertySourcesPlaceholderConfigurer (Spring 3.1). These classes implement the BeanFactoryPostProcessor interface, which enables them to manipulate the values within the Spring XML configuration file before the beans are initialized. So if you specify ${jdbc.driverClassName} for the property 'driverClassName', this placeholder will be replaced with the value keyed by 'jdbc.driverClassName' in a properties file.

Apart from properties files, a database table can also be a source of key-value pairs. Great, so just extend PropertySourcesPlaceholderConfigurer, have it read a table containing the key-value pairs, populate them, and we're done!

However, there's a slight problem. If the DataSource bean also relies on values obtained from a properties file (e.g. JDBC URL, username, password), and, being good Springers, we inject this bean into the class extending PropertySourcesPlaceholderConfigurer, the bean container will fail to start up properly, because the 'jdbc.driverClassName' variable cannot be resolved. Strange, but true.

The reason for this is that any bean injected into a BeanFactoryPostProcessor class will trigger bean initialization BEFORE the BeanFactoryPostProcessor classes are run. You know, dependency injection...all dependent beans have to be ready before being injected into the consumer. So this creates a cyclic-dependency kind of situation: all dependencies in the XML configuration are resolved first, before the BeanFactoryPostProcessor classes are run.

So, how do you get around this? Well, there's a trick you can employ. A BeanFactoryPostProcessor class has access to the ConfigurableListableBeanFactory object via the 'postProcessBeanFactory' method. From this object, you can call 'getBean' and get a reference to any bean by its id or type. And guess what, you can get the vaunted DataSource bean without triggering premature bean initialization.

Let's say there's a table 'sys_param' with the following data:

 PARAM_CD        PARAM_VALUE  
 --------------  --------------
 service.charge  1.5          
 rebate.amount   15.99        
 smtp.ip         173.194.79.16

The DbPropertySourcesPlaceholderConfigurer is shown here:


package org.gizmo.labs.utils.spring;

import javax.sql.DataSource;

import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.context.support.PropertySourcesPlaceholderConfigurer;

public class DbPropertySourcesPlaceholderConfigurer extends PropertySourcesPlaceholderConfigurer
{
 @Override
 public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) throws BeansException
 {
  DataSource dataSource = beanFactory.getBean(DataSource.class);
  DbProperties dbProps = new DbProperties(dataSource);
  
  setProperties(dbProps);
  super.postProcessBeanFactory(beanFactory);
 }
}



The DbProperties class will make use of the DataSource reference and queries the database to get the key-value pairs:


package org.gizmo.labs.utils.spring;

import java.util.List;
import java.util.Map;
import java.util.Properties;

import javax.sql.DataSource;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.jdbc.core.JdbcTemplate;

public class DbProperties extends Properties
{
 private final Logger logger = LoggerFactory.getLogger(DbProperties.class);
 private static final long serialVersionUID = 1L;

 public DbProperties(DataSource dataSource)
 {
  super();
  JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource); 
  List<Map<String, Object>> l = jdbcTemplate.queryForList("select param_cd, param_value from sys_param");
  
  for(Map<String, Object> m: l)
  {
   logger.debug("Loading from DB: [{}:{}]", m.get("PARAM_CD"), m.get("PARAM_VALUE"));
   setProperty((m.get("PARAM_CD")).toString(), (m.get("PARAM_VALUE")).toString());
  }
 }
}



To demonstrate that the values from the table are properly injected, here's the class which acts as the consumer:


package org.gizmo.labs.utils.spring;

import java.math.BigDecimal;

import org.apache.commons.lang.builder.ReflectionToStringBuilder;
import org.apache.commons.lang.builder.ToStringStyle;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.InitializingBean;

public class DbPropConsumer implements InitializingBean
{
 private final Logger logger = LoggerFactory.getLogger(DbPropConsumer.class);

 private BigDecimal serviceCharge;
 private double rebateAmount;
 private String smtpIp;
 
 @Override
 public void afterPropertiesSet() throws Exception
 {
  logger.debug("I have consumed: {}", this);
 }

 public String toString()
 {
  return ReflectionToStringBuilder.toString(this, ToStringStyle.MULTI_LINE_STYLE);
 } 
 
 public BigDecimal getServiceCharge() {
  return serviceCharge;
 }

 public void setServiceCharge(BigDecimal serviceCharge) {
  this.serviceCharge = serviceCharge;
 }

 public double getRebateAmount() {
  return rebateAmount;
 }

 public void setRebateAmount(double rebateAmount) {
  this.rebateAmount = rebateAmount;
 }

 public String getSmtpIp() {
  return smtpIp;
 }

 public void setSmtpIp(String smtpIp) {
  this.smtpIp = smtpIp;
 }

}


Last but not least, the Spring configuration (DataSource bean not shown, simplified for clarity):



<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-3.1.xsd">

 <bean class="org.springframework.context.support.PropertySourcesPlaceholderConfigurer">
  <property name="order" value="1"/>
  <property name="locations">
   <list>
    <value>classpath:system.properties</value>
   </list>
  </property>
 </bean>

 <bean class="org.gizmo.labs.utils.spring.DbPropertySourcesPlaceholderConfigurer">
  <property name="order" value="2"/>
  <property name="placeholderPrefix" value="%{"/>
  <property name="placeholderSuffix" value="}"/>
 </bean>

 <bean class="org.gizmo.labs.utils.spring.DbPropConsumer" lazy-init="false">
  <property name="serviceCharge" value="%{service.charge}"/>
  <property name="rebateAmount" value="%{rebate.amount}"/>
  <property name="smtpIp" value="%{smtp.ip}"/>
 </bean>
</beans>



The first 2 bean definitions are the BeanFactoryPostProcessor classes, and to ensure the first one is run first, the 'order' property is set (lower means higher precedence).

For the DbPropertySourcesPlaceholderConfigurer, a different placeholder prefix and suffix is used for clarity (notice the placeholders for DbPropConsumer).

So, upon Spring container startup, you should be able to view a similar output as below:
2012-09-18 00:03:14, DEBUG, org.gizmo.labs.utils.spring.DbProperties, Loading from DB: [service.charge:1.5]
2012-09-18 00:03:14, DEBUG, org.gizmo.labs.utils.spring.DbProperties, Loading from DB: [rebate.amount:15.99]
2012-09-18 00:03:14, DEBUG, org.gizmo.labs.utils.spring.DbProperties, Loading from DB: [smtp.ip:173.194.79.16]

2012-09-18 00:03:14, DEBUG, org.gizmo.labs.utils.spring.DbPropConsumer, I have consumed: org.gizmo.labs.utils.spring.DbPropConsumer@189b939[
  logger=Logger[org.gizmo.labs.utils.spring.DbPropConsumer]
  serviceCharge=1.5
  rebateAmount=15.99
  smtpIp=173.194.79.16
]

Sunday, September 16, 2012

Spring 3.1 Bean Profiles

One of the most exciting new features (to me) introduced in Spring 3.1 is bean definition profiles (http://blog.springsource.com/2011/02/11/spring-framework-3-1-m1-released/). Bean definition profiles are simply bean configurations that are 'activated' based on the existence of an indicator or marker.

A lot of examples of bean definition profiles use the JDBC DataSource bean to illustrate their usefulness. Usually, for unit testing, a DriverManagerDataSource is sufficient, as shown below:



<bean class="org.springframework.jdbc.datasource.DriverManagerDataSource" id="dataSource">
 <property name="driverClassName" value="${jdbc.driverClassName}">
 <property name="url" value="${jdbc.url}">
 <property name="username" value="${jdbc.username}">
 <property name="password" value="${jdbc.password}">
</property></property></property></property></bean>


In order to deploy the application (typically a webapp) for production use, the DataSource bean is usually configured via a JNDI lookup instead, as shown:


<bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
 <property name="jndiName" value="${jndi.datasource}"/>
</bean>

The JNDI lookup way also requires configuring the DataSource connection pool at the application server.

Before Spring 3.1, in order to make one of the bean definitions 'active', the desired bean definition needed to be loaded last, to override the earlier-appearing definition; e.g. the 'dataSource' bean using DriverManagerDataSource would need to be in a Spring XML file that is loaded last. One of the ways to achieve this is to specify that XML file as the last entry in the comma-separated value of the 'contextConfigLocation' context-param in web.xml. An Ant task may be invoked to automate the 'deletion' of this file entry so that, when deployed to production, the 'dataSource' bean using the JNDI lookup will be the 'active' bean. Or just comment out the undesired bean :). It was clumsy and led to different builds for different environments.

So, how does Spring 3.1 solve this problem? Spring 3.1 allows the incorporation of the 2 different bean definitions with the same id in the same XML file, by having nested 'beans' XML elements, as shown (simplified for clarity):



<beans xmlns="http://www.springframework.org/schema/beans" xsi:schemalocation="
     http://www.springframework.org/schema/beans 
     http://www.springframework.org/schema/beans/spring-beans-3.1.xsd">

 <!-- Generic bean definitions ABOVE -->
  
 <beans profile="openshift,default">
  <bean class="org.springframework.jndi.JndiObjectFactoryBean" id="dataSource">
         <property name="jndiName" value="${jndi.datasource}">
     </property></bean>
 </beans>
 
 <beans profile="testing">
     <bean class="org.springframework.jdbc.datasource.DriverManagerDataSource" id="dataSource">
         <property name="driverClassName" value="${jdbc.driverClassName}">
         <property name="url" value="${jdbc.url}">
         <property name="username" value="${jdbc.username}">
         <property name="password" value="${jdbc.password}">
     </property></property></property></property></bean>
 </beans>
</beans>


A few things to note:
- The nested beans XML elements must be placed at the end of the configuration XML file
- Ensure that the Spring 3.1 Beans XSD file is used (earlier versions won't work).
- It is recommended to have one bean profile to be the default profile.

To 'activate' a bean profile, one of the ways is to pass in the system property 'spring.profiles.active'. For my testing environment I use Tomcat, so I placed this property in setEnv.bat (not shipped with Tomcat by default; just create it under the ${TOMCAT_HOME}/bin folder and Tomcat will process it at startup). So for testing, the value to pass is: -Dspring.profiles.active=testing

If the 'spring.profiles.active' value is not specified, then the default profile will be the active profile.

To check which profiles are active, you can get the Environment object from an ApplicationContext, as shown:

Environment env = ctx.getEnvironment();
String[] activeProfiles = env.getActiveProfiles();


My example activates only one profile at a time, but the Spring API allows multiple active profiles: just comma-separate them in the value of 'spring.profiles.active'. Proceed with caution, though, as too many profiles may lead to confusion.
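Profiles can also be activated programmatically before the context is refreshed; a minimal sketch (the XML file name is illustrative):

//org.springframework.context.support.GenericXmlApplicationContext
GenericXmlApplicationContext ctx = new GenericXmlApplicationContext();
ctx.getEnvironment().setActiveProfiles("testing");
ctx.load("classpath:applicationContext.xml");
ctx.refresh();

With this, the 'testing' profile is active regardless of any -Dspring.profiles.active value passed on the command line.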

Friday, September 14, 2012

Default transformations for AES, DES, DESede (Triple DES) in Java


I posted on StackOverflow regarding the default transformation used for AES in Java (http://stackoverflow.com/questions/6258047/java-default-crypto-aes-behavior/10084030). Initially I couldn't find any mention of it in the JDK documentation, but I recently stumbled upon it (http://docs.oracle.com/javase/6/docs/technotes/guides/security/crypto/CryptoSpec.html#trans).

I was working with a vendor who was using .NET to emulate the AES encryption of a sensitive value that my Java web service requires as input. He kept asking which mode (e.g. ECB, CBC) and which padding (e.g. NoPadding, PKCS5Padding) to use. I checked the Java code used to encrypt, and the only input needed is "AES":

Cipher cipher = Cipher.getInstance("AES");

So I dug deeper into the JDK documentation and found that the mode and padding can be specified in a single transformation string, e.g. AES/CBC/NoPadding. From the documentation, I took the recommended transformations (http://docs.oracle.com/javase/6/docs/technotes/guides/security/StandardNames.html) and wrote a JUnit test to compare the cipher texts of the various transformation options against the default plain-vanilla "AES".

I've extended it further to include DES and DESede (Triple DES). Running it on Oracle JDK 7 yielded the following output (the transformation that produced the same cipher text as the default transformation prints 'true'):

AES/ECB/PKCS5Padding: true
AES/ECB/NoPadding: Input length not multiple of 16 bytes
AES/CBC/NoPadding: Input length not multiple of 16 bytes
AES/CBC/PKCS5Padding: false
==========================================================
DES/CBC/NoPadding: Input length not multiple of 8 bytes
DES/CBC/PKCS5Padding: false
DES/ECB/NoPadding: Input length not multiple of 8 bytes
DES/ECB/PKCS5Padding: true
==========================================================
DESede/CBC/NoPadding: Input length not multiple of 8 bytes
DESede/CBC/PKCS5Padding: false
DESede/ECB/NoPadding: Input length not multiple of 8 bytes
DESede/ECB/PKCS5Padding: true
==========================================================

The JUnit test for reference:

package org.gizmo.crypto;

import java.security.Provider;
import java.security.Security;
import java.util.ArrayList;
import java.util.List;

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

import org.apache.commons.codec.binary.Hex;
import org.junit.Before;
import org.junit.Test;

public class DefaultEncryptionAlgoTests
{
 private List<MyCipher> myCiphers = new ArrayList<MyCipher>();
 
 private String plainText = "Sticks and stones may break my bones but coding makes me joyous";

 private class MyCipher
 {
  public String cipherName;
  public String[] algorithms;
  public SecretKey secretKey;
 }
 
 @Before
 public void generateKeys() throws Exception
 {
  //AES
  MyCipher c = new MyCipher();
  c.cipherName = "AES";
  c.algorithms = new String[] {"AES/ECB/PKCS5Padding", "AES/ECB/NoPadding", "AES/CBC/NoPadding", "AES/CBC/PKCS5Padding"};
  KeyGenerator kgen = KeyGenerator.getInstance(c.cipherName);
  kgen.init(128);
  c.secretKey = kgen.generateKey();
  myCiphers.add(c);
  
  //DES
  c = new MyCipher();
  c.cipherName = "DES";
  c.algorithms = new String[] {"DES/CBC/NoPadding", "DES/CBC/PKCS5Padding", "DES/ECB/NoPadding", "DES/ECB/PKCS5Padding"};
  kgen = KeyGenerator.getInstance(c.cipherName);
  kgen.init(56);
  c.secretKey = kgen.generateKey();
  myCiphers.add(c);
  
  //DESede (or Triple DES)
  c = new MyCipher();
  c.cipherName = "DESede";
  c.algorithms = new String[] {"DESede/CBC/NoPadding", "DESede/CBC/PKCS5Padding", "DESede/ECB/NoPadding", "DESede/ECB/PKCS5Padding"};
  kgen = KeyGenerator.getInstance(c.cipherName);
  kgen.init(168);
  c.secretKey = kgen.generateKey();
  myCiphers.add(c);
 }

 @Test
 public void testSecurityProvider() throws Exception
 {
  for (Provider provider: Security.getProviders())
  {
   System.out.println(provider.getName());
   for (String key: provider.stringPropertyNames())
   {
    System.out.println("\t" + key + "\t" + provider.getProperty(key));
   }
  }
 }
 
 @Test
 public void testAlgorithms() throws Exception
 {
  for(MyCipher c :  myCiphers)
  {
   //Default algorithm
   Cipher cipher = Cipher.getInstance(c.cipherName);
   cipher.init(Cipher.ENCRYPT_MODE, c.secretKey);
         byte[] cipherText = cipher.doFinal(plainText.getBytes());
         String defaultAlgoEncryptedHex = Hex.encodeHexString(cipherText);
   
   //Possible algorithms
   for(String a : c.algorithms)
   {
    try
    {
     cipher = Cipher.getInstance(a);
     cipher.init(Cipher.ENCRYPT_MODE, c.secretKey);
     cipherText = cipher.doFinal(plainText.getBytes());
     
           String encryptedHex = Hex.encodeHexString(cipherText);
           
           System.out.println(a + ": " + defaultAlgoEncryptedHex.equals(encryptedHex));
    }
    catch (Exception e)
    {
     System.out.println(a + ": " + e.getMessage());
    }
   }   
   System.out.println("==========================================================");
  }
 }
}


Anyway, to make things clear in your program, it's best to specify the full transformation string rather than relying on the default transformations in Java.
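As an illustration, a minimal sketch of spelling the transformation out, including the IV handling that CBC mode requires (key generation and plaintext handling are simplified here):

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;

public class ExplicitTransformationDemo {

 public static void main(String[] args) throws Exception {
  SecretKey key = KeyGenerator.getInstance("AES").generateKey();

  Cipher encrypt = Cipher.getInstance("AES/CBC/PKCS5Padding");
  encrypt.init(Cipher.ENCRYPT_MODE, key);      //provider picks a random IV
  byte[] iv = encrypt.getIV();                 //keep it, decryption needs it
  byte[] cipherText = encrypt.doFinal("a sensitive value".getBytes("UTF-8"));

  Cipher decrypt = Cipher.getInstance("AES/CBC/PKCS5Padding");
  decrypt.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
  System.out.println(new String(decrypt.doFinal(cipherText), "UTF-8"));
 }
}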

Thursday, September 13, 2012

Lazy-initializing beans in Spring: Exceptions


Lazy-initializing beans in a Spring container is generally desirable to allow for faster startup times, especially if there are lots of beans involved. However, some beans must be loaded or initialized when the application starts, e.g. a background Quartz scheduler running jobs, or a bean that performs first-level diagnostics or health checks and writes the results to a log file.

By default, all beans with scope="singleton" are initialized during container startup, but this behaviour can be changed by adding default-lazy-init="true" at the 'beans' top-level element, as shown (simplified for clarity):
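A minimal illustration (the bean definitions are omitted, and the schema version shown is an assumption):

<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-3.1.xsd"
    default-lazy-init="true">

    <!-- singleton beans declared here are now lazily initialized by default -->

</beans>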





To ensure a Spring-managed bean is initialized during container startup regardless of the attribute default-lazy-init="true" at the 'beans' top-level element in the Spring configuration XML (or in Java code), you can do one of the following:

  • Configure your bean definition to force initialization by adding attribute lazy-init="false", as shown in the example below:
    
    
  • Have your class implement the interface org.springframework.context.SmartLifecycle. This will guarantee the class will be initialized as long as isAutoStartup() is implemented to return 'true'. See the class Javadoc (http://static.springsource.org/spring/docs/3.0.x/javadoc-api/org/springframework/context/SmartLifecycle.html) for more info.
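For the first option, it is simply a matter of adding lazy-init="false" to the bean element. For the second option, a minimal sketch (the class name is illustrative):

import org.springframework.context.SmartLifecycle;

public class StartupHealthCheck implements SmartLifecycle {

 private volatile boolean running = false;

 public void start() {
  //run first-level diagnostics / health checks here
  running = true;
 }

 public void stop() {
  running = false;
 }

 public void stop(Runnable callback) {
  stop();
  callback.run();
 }

 public boolean isRunning() {
  return running;
 }

 public boolean isAutoStartup() {
  return true;  //forces eager startup even with default-lazy-init="true"
 }

 public int getPhase() {
  return 0;     //start order relative to other Lifecycle beans
 }
}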


Thursday, September 06, 2012

Logback NT Event Log Appender

I've been searching the Net for a logback-based Windows event log appender, but couldn't find any. Log4j has a built-in NTEventLogAppender, as stated in http://stackoverflow.com/questions/8945187/logback-and-windows-event-system-integration, but I came across a nice alternative in the form of log4jna (https://github.com/dblock/log4jna), which doesn't require placing native DLLs in a system directory.

So I based my logback Windows event log appender on org.apache.log4jna.nt.Win32EventLogAppender, to speed things up. After some trial and error, I concluded that it is better to manually create the Windows registry keys and set up some values to ensure the implementation works, instead of relying on the current user's administrative rights and having the registry keys created during the appender's initialization. So the code is 'simplified'. More info @ http://code.dblock.org/log4jna-log4jerror-could-not-register-event-source-access-is-denied

By default, the log records will be placed under the Application log, which also contains records from other sources/systems. Viewing the records specific to your application then requires great perception or the use of filters. Since I am using Windows 7 (development) and Windows Server 2008 Standard (testing/production), the application can instead have a dedicated entry under 'Applications and Services Logs', as shown below:


To create it, just add the following using regedit: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\eventlog\${Custom_Log_Name}\${Custom_Source} 

Replace:
  • ${Custom_Log_Name} with the value that will be shown under 'Applications and Services Logs'. In my example, it is 'MCI-CA'.
  • ${Custom_Source} with the source value that you'll need to set and specify in the logback.xml configuration file. In my example, it is 'mci'.
Note: the 'source' value of the appender needs to be unique (system-wide).

Next, configure the appender in logback.xml:

<configuration>

    <!-- appender names and the root level are illustrative; the encoder patterns
         and the 'mci' source value are the ones used in this post -->
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n%throwable</pattern>
        </encoder>
    </appender>

    <!-- set 'class' to your port of log4jna's Win32EventLogAppender -->
    <appender name="eventlog" class="...">
        <source>mci</source>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n%throwable</pattern>
        </encoder>
    </appender>

    <root level="DEBUG">
        <appender-ref ref="console" />
        <appender-ref ref="eventlog" />
    </root>

</configuration>

Whack in the JARs to classpath: jna.jar, platform.jar (both from log4jna zip package), logback-core-1.0.5.jar, logback-classic-1.0.5.jar, slf4j-api-1.6.4.jar

Also, copy Win32EventLogAppender.dll (from log4jna) to the Windows system32 folder (e.g. C:\Windows\System32) and add a registry value (refer to the .reg file in the zip linked at the bottom). You can refer to http://code.dblock.org/log4jna-the-description-for-event-id-4096-from-source-log4jna-cannot-be-found for more details.

Then just get the logger instance and start logging away! I've furnished a simple JUnit test as a sample (link below). I used the system property to load a specific logback.xml file e.g. -Dlogback.configurationFile=D:\eclipse-workspaces\indigo-workspace\Sandbox\src\logback.xml
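Obtaining the logger is the usual SLF4J pattern; a minimal sketch (the class name is illustrative):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class EventLogDemo {

 private static final Logger log = LoggerFactory.getLogger(EventLogDemo.class);

 public static void main(String[] args) {
  log.info("Hello from the NT event log appender");
  log.error("Something went wrong", new RuntimeException("boom"));
 }
}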

Caveat: The logback appender can be coded to cater for logback 'classic' and 'access', but I only implemented it for 'classic'. See http://logback.qos.ch/manual/appenders.html for more details.

Accompanying files can be accessed here: http://dl.dropbox.com/u/103052906/blog/tech/nt-logback/logback-ntappender.zip

Thursday, December 15, 2011

Shortcut to apply SyntaxHighlighter in Blogger for Java code

This is a shortcut post to enable syntax highlighting of your Java code examples in Blogger.

First, open the HTML template under "Design". Locate the <head> section, then add the following:
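The includes are the stock hosted SyntaxHighlighter files plus the Java brush; the URLs below are assumed from the standard SyntaxHighlighter setup (adjust the theme to taste):

<link href='http://alexgorbatchev.com/pub/sh/current/styles/shCore.css' rel='stylesheet' type='text/css'/>
<link href='http://alexgorbatchev.com/pub/sh/current/styles/shThemeDefault.css' rel='stylesheet' type='text/css'/>
<script src='http://alexgorbatchev.com/pub/sh/current/scripts/shCore.js' type='text/javascript'></script>
<script src='http://alexgorbatchev.com/pub/sh/current/scripts/shBrushJava.js' type='text/javascript'></script>
<script type='text/javascript'>
    SyntaxHighlighter.all();
</script>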






Then, in your posts, just enclose <pre class="brush: java">...</pre> tags around your code snippets.

Monday, December 12, 2011

Bad Certificate in WAS 5.1.0 (IBM JDK 1.4.1)

Once upon a time, there was a legacy web application handling loan processing. It ran on Windows 2000 on an IBM WebSphere Application Server (WAS) 5.1.0 installation (yes, no fixpacks!). It ran well until recently, when an error occurred as the system tried to access a remote URL over HTTPS/SSL. The log files were investigated and, to the horrified faces of the support personnel...

javax.net.ssl.SSLHandshakeException: bad certificate

A few recommendations here and there led to them summoning me to have a look at this issue. At first I thought it was just a simple case of not importing the SSL certificates into the keystore, but after using ikeyman (a WAS tool) to do the necessary, it still didn't work. OK, time to Google...

Version 5.1.0 of WAS runs on IBM JDK 1.4.1 (not even 1.4.2), which made matters worse. A quick search on Google yielded two seemingly relevant results.

Problem is, both links didn't actually provide any solution, but they did nudge me a little towards a workaround, which I am documenting here right now.

This problem is not common if the WAS installation is patched up to at least v5.1.1, but given the system's legacy status and the likelihood of it being replaced by another system early next year, there was no real incentive to patch it and 'hope for the best'. The vendor's support personnel was also a blur-case contract worker who spewed common buzzwords yet lacked any substance and logic.

So, what to do, what to do??? There was no source code available (the vendor decided not to give an earlier contract worker his time off, so he retaliated by deleting all the source code... a case of a coder gone cuckoo). So I decided to decompile the application classes using JD-GUI (http://java.decompiler.free.fr/?q=jdgui), after learning from the IT support girl which URL triggers the remote host handshaking. A quick look at the web.xml file and the servlet responsible for the remote request was identified.

After looking through the servlet code, I was able to identify the root cause and reconfirmed it by writing some JUnit tests:
  • Test #1 - Using Sun JDK1.4.2, no imported certs: Fail, "unknown certificate" (expected)
  • Test #2 - Using Sun JDK1.4.2, with imported certs: Pass (expected)
  • Test #3 - Using IBM JDK1.4.1 (similar to WAS 5.1.0), no imported certs: Fail (expected)
  • Test #4 - Using IBM JDK1.4.1 (similar to WAS 5.1.0), with imported certs: Fail, "bad certificate" (expected)

So the problem is the JDK, not the application code (sort of). IBM JDK1.4.1 is using IBM JSSE v1 (not even v2). I wanted to try using IBM JSSE v2 API files, but after looking through several Google results, no one was triumphant doing this.

To call the remote URL over HTTPS/SSL, the Apache HttpClient API was used. A quick look at the online guide (http://hc.apache.org/httpclient-3.x/sslguide.html) gave me the idea of replacing the default class handling HTTPS connections with a custom class coded to use the Sun JSSE implementation. To do this, some class-overriding was necessary. The final code for the socket factory is below (I had to dig through the Sun JSSE classes to find the implementation class, which is 'internal' and not recommended to be referenced externally, but desperate times require desperate measures):


import java.io.IOException;
import java.net.InetAddress;
import java.net.Socket;
import java.net.UnknownHostException;

import org.apache.commons.httpclient.protocol.SecureProtocolSocketFactory;

import com.sun.net.ssl.internal.ssl.SSLSocketFactoryImpl;

public class SunJsseSslSocketFactory implements SecureProtocolSocketFactory {

 public Socket createSocket(String host, int port) 
  throws IOException, UnknownHostException
 {
  SSLSocketFactoryImpl sfi = new SSLSocketFactoryImpl();
  return sfi.createSocket(host, port);
 }

 public Socket createSocket(String host, int port, 
  InetAddress clientHost, int clientPort) 
   throws IOException, UnknownHostException
 {
  SSLSocketFactoryImpl sfi = new SSLSocketFactoryImpl();
  return sfi.createSocket(host, port, clientHost, clientPort);
 }

 public Socket createSocket(Socket socket, String host, 
  int port, boolean autoClose) 
   throws IOException, UnknownHostException
 {
  SSLSocketFactoryImpl sfi = new SSLSocketFactoryImpl();
  return sfi.createSocket(socket, host, port, autoClose);
 }

}


Then, to modify the HttpClient call to use this new socket factory class:

Protocol myHTTPS = new Protocol( "https", new SunJsseSslSocketFactory(), url.getPort() );

This line is placed before any calls to the remote URL, and only needed to be executed once.
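For reference, HttpClient 3.x can also register the protocol globally so that every subsequent 'https' call picks it up (whether the original fix was wired in this way or via a per-host configuration is not shown here):

Protocol.registerProtocol("https", myHTTPS);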

Two other steps complete the fix:
  • Place the Sun JSSE file (jsse.jar) into [WAS_HOME]/java/jre/lib/ext
  • Add the Sun JSSE provider into file [WAS_HOME]/java/jre/lib/security/java.security:

security.provider.6=com.sun.net.ssl.internal.ssl.Provider

In the end, the fix was applied to the production boxes and the remote call finally worked. All credit goes to the HttpClient API makers who at least coded some hooks for this customisation.

Monday, December 20, 2010

XMLBeans - First Steps

I attempted to revisit XMLBeans to generate XML-Java mapping code based on the IFX schema. I had tried JAXB v1 a few years ago but failed miserably (there might be some tweaking that would make it work, but I searched high and low to no avail). The multi-XSD file structure from the IFX Forum website was used.

In the XSD folder (all files are from the IFX Forum):
$pain.001.001.01.xsd
$pain.002.001.01.xsd
$pain.004.001.01.xsd
IFX170_XSD.xsd
RemitDetail_Type.xsd

Using XMLBeans 2.5.0 distribution, the command I run:
E:\experiments\exploded-apis\xmlbeans-2.5.0\bin>scomp -mx 1024M -out ifx17.jar E:\yk\xpdesktop\IFX-SIT\IFX1.7_XSD\IFX170_XSD.xsd

IFX is a HUGE schema so the heap size is increased to 1024M in order for the command to complete successfully.

Results:
Time to build schema type system: 3.39 seconds
Time to generate code: 109.938 seconds
Time to compile code: 145.939 seconds
Compiled types to: ifx17.jar

Wokay, this is good. First try without fancy options, and a JAR file is produced. I couldn't achieve the same thing in JAXB v1. Will need to try out using JAXB v2. More on this later.
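In the meantime, the generated JAR can already be exercised through XMLBeans' generic XmlObject API; a minimal sketch (the instance document name is illustrative):

import java.io.File;

import org.apache.xmlbeans.XmlObject;
import org.apache.xmlbeans.XmlOptions;

public class IfxParseDemo {

 public static void main(String[] args) throws Exception {
  //ifx17.jar (and the XMLBeans runtime) must be on the classpath so the schema types resolve
  XmlObject doc = XmlObject.Factory.parse(new File("sample-ifx-message.xml"));
  System.out.println("Valid against the compiled schema: " + doc.validate());
  System.out.println(doc.xmlText(new XmlOptions().setSavePrettyPrint()));
 }
}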

Wednesday, December 15, 2010

Java VisualVM - Remote Display, Firewall-Protected, Offline Environment

VisualVM can be used for remote monitoring of Java applications. Things work fine for an environment without firewall "interference". For folks that are fortunate enough to have the protection of a firewall, here's a trick which worked for me, and may work for you.


Basically, I used SSH + X-Windows to view the VisualVM GUI on my local PC. In my current company's testing environment, SSH access is given and a certain number of ports, e.g. 80-90 (HTTP) and 443-449 (HTTPS), are open for access as well. The rest are inaccessible, i.e. blocked by the firewall. No doubt firewall ports can be opened to allow the RMI traffic essential for VisualVM to connect to the remote machine, but the out-of-the-box RMI implementation assigns its port numbers dynamically, so the assigned ports are only valid for that particular run; a restart will result in different ports. There are ways to work around this behaviour, but exploring that option is out of the scope of this blog entry.


So the tools I used: Putty (for SSH), NetSarang's Xmanager2 (for X-Windows display), and of course, Sun JDK6 (u21).


First, I ran Putty to access Machine A (a Linux box) via SSH. Xmanager2 was also running in the background in Passive mode. At the Putty console, I ran the command below to enable X-Window display on my PC (assuming IP=10.123.124.67):


export DISPLAY=10.123.124.67:0.0


Then just run VisualVM on Machine A. On Machine A (could be different in your own environment):


/usr/java/jdk1.6.0_21/bin/jvisualvm &


The VisualVM GUI will then appear on my PC, but running on Machine A. So you can just "remotely" monitor your Java applications running on Machine A as a local client.


But unfortunately, the stock VisualVM application that comes with Sun JDK6 doesn't have the MBeans viewer plugin installed. And Machine A does not have access to the Internet, so online installation of the plugin is not possible. But fortunately, the plugin files (*.nbm) can be downloaded and installed offline (https://visualvm.dev.java.net/pluginscenters.html).


So that's it. I will be using the MBeans viewer to monitor my application's Tomcat database connection pool, after I'm done converting from Apache DBCP to Tomcat Jdbc Pool. More on this later.

If there's no firewall to make this complicated, you can add this to your Java application startup command:

-Dcom.sun.management.jmxremote=true
-Dcom.sun.management.jmxremote.port=9394
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false

This will disable SSL and authentication (username/password). You can then use VisualVM to connect to the remote machine.
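Putting the flags together, the startup command would look something like this (the application JAR name is purely illustrative); VisualVM can then be pointed at <host>:9394 as a JMX connection:

java -Dcom.sun.management.jmxremote=true \
     -Dcom.sun.management.jmxremote.port=9394 \
     -Dcom.sun.management.jmxremote.ssl=false \
     -Dcom.sun.management.jmxremote.authenticate=false \
     -jar myapp.jar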