Sunday, December 12, 2010

Deleting .svn directories in directory tree

If you use Subversion, you've probably noticed a hidden directory ".svn" in every directory managed by Subversion. This is where Subversion keeps its version data. If you're on Linux and want to get rid of these directories for any reason, you can use the following command:

rm -rf `find . -type d -name .svn`

This command finds and deletes every directory named ".svn". Note that the backtick substitution breaks on paths containing spaces; `find . -type d -name .svn -prune -exec rm -rf {} +` is a more robust variant.
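For a platform-independent alternative, the same cleanup can be sketched with Java's NIO file-walking API. This is a minimal sketch; the class name and layout are my own, not from the original post:

```java
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.Comparator;
import java.util.stream.Stream;

public class SvnCleaner {
    // Recursively delete every ".svn" directory below the given root.
    public static void clean(Path root) throws IOException {
        Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult preVisitDirectory(Path dir,
                    BasicFileAttributes attrs) throws IOException {
                if (dir.getFileName() != null
                        && dir.getFileName().toString().equals(".svn")) {
                    // Delete the whole .svn subtree, deepest entries first.
                    try (Stream<Path> entries = Files.walk(dir)) {
                        entries.sorted(Comparator.reverseOrder())
                               .forEach(p -> p.toFile().delete());
                    }
                    return FileVisitResult.SKIP_SUBTREE;
                }
                return FileVisitResult.CONTINUE;
            }
        });
    }

    public static void main(String[] args) throws IOException {
        // Clean the directory given as argument, or the current directory.
        clean(Paths.get(args.length > 0 ? args[0] : "."));
    }
}
```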

Saturday, December 4, 2010

Calling PL/SQL code in Java

In this blogpost, I will show you how to call Oracle PL/SQL code from Java. Before we continue with the Java code, let's create a simple Oracle package we can use to test our code.

Compile the following package specification and body in the Oracle database.

Package specification:

create or replace
package test_package as

procedure test_procedure(v varchar2);

function test_function(v varchar2) return varchar2;

end test_package;

Package body:

create or replace
package body test_package as

procedure test_procedure(v varchar2) is
begin
    null;
end test_procedure;

function test_function(v varchar2) return varchar2 is
begin
    return v;
end test_function;

end test_package;

The package contains a procedure and a function that returns a VARCHAR2 type. We can call the procedure with the following Java code:

DataSource ds = ...
Connection con = ds.getConnection();
CallableStatement cs =
        con.prepareCall("{call test_package.test_procedure(?)}");
cs.setString(1, "Calling test_procedure!");
cs.execute();

The function is slightly different, because we have to register the type the function returns.

DataSource ds = ...
Connection con = ds.getConnection();
CallableStatement cs =
        con.prepareCall("{? = call test_package.test_function(?)}");
cs.registerOutParameter(1, java.sql.Types.VARCHAR);
cs.setString(2, "Calling test_function!");
cs.execute();
String s = cs.getString(1);

The function can also be called in another way. Like this:

DataSource ds = ...
Connection con = ds.getConnection();
PreparedStatement ps = con.prepareStatement(
        "SELECT test_package.test_function(?) S FROM dual");
ps.setString(1, "Calling test_function!");
ResultSet r = ps.executeQuery();
String result = null;

if ( {
    result = r.getString("S");
}


This way is similar to executing a standard SQL-statement.

Friday, November 26, 2010

Plug-ins for Eclipse

Eclipse is my favorite Java IDE (Integrated Development Environment). One of the best features offered by Eclipse is the possibility of adding (free) third-party plug-ins, which makes Eclipse flexible and customizable to satisfy your requirements. Here is a list of plug-ins I have installed in Eclipse:

Integrate SVN in Eclipse with this tool. This tool makes it effortless to keep your project files in sync with your SVN repository.

If you use PHP for development, then this tool is useful. It provides features like syntax highlighting, code completion, and an integrated manual.

If you use JavaScript, you can use this tool to make the coding experience better. It provides functionality similar to PHPEclipse, but for JavaScript instead of PHP.

This plug-in forces you to write Java code that adheres to a coding standard. Checkstyle also checks for class design problems, duplicate code, and potential bugs.

This tool, well, finds bugs in your Java code. It uses static analysis on Java bytecode to detect bug patterns.

This is definitely my favorite tool to draw UML diagrams. Very easy to use, and it's free!

Friday, November 19, 2010

XML Document validation with XML Schema

In today's blogpost, I will show you how to validate an XML document against its XSD schema. Before we begin, create an example XML schema:

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="">
  <xs:element name="person">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="name" type="xs:string"/>
        <xs:element name="age" type="xs:integer"/>
        <xs:element name="birthdate" type="xs:date"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>

You can use this schema to generate an XML-document with Eclipse:

<?xml version="1.0" encoding="UTF-8"?>
<person xmlns:xsi=""
        xsi:noNamespaceSchemaLocation="schema.xsd">
  <name>John</name>
  <age>30</age>
  <birthdate>1980-01-01</birthdate>
</person>

Now, the last thing we need to do, is to build the validator:

package com.javaeenotes;

import java.io.File;
import java.io.IOException;

import javax.xml.XMLConstants;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;
import javax.xml.transform.dom.DOMSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

import org.w3c.dom.Document;
import org.xml.sax.SAXException;

public class Main {
    private static final String SCHEMA = "schema.xsd";
    private static final String DOCUMENT = "document.xml";

    public static void main(String[] args) {
        Main m = new Main();
        m.validate();
    }

    public void validate() {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);

        try {
            DocumentBuilder parser = dbf.newDocumentBuilder();
            Document document = parser.parse(new File(DOCUMENT));
            SchemaFactory factory = SchemaFactory
                    .newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(new File(SCHEMA));
            Validator validator = schema.newValidator();
            validator.validate(new DOMSource(document));
        } catch (SAXException e) {
            e.printStackTrace();
        } catch (IllegalArgumentException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } catch (ParserConfigurationException e) {
            e.printStackTrace();
        }
    }
}

The validator will throw an exception if the validation process fails. Based on the specific type of error, the following exceptions could be thrown:

  • SAXException: XML-document not valid.

  • IllegalArgumentException: XML-artifact/instruction not supported.

  • IOException: document/schema file read error.

  • ParserConfigurationException: parser could not be configured.
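The same validation API also works without files on disk. As a self-contained illustration, here is a minimal sketch that validates documents held in memory; the schema and class name are made-up examples:

```java
import java.io.StringReader;

import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

import org.xml.sax.SAXException;

public class InMemoryValidation {
    // A tiny schema: a single root element "age" of type xs:integer.
    static final String SCHEMA =
        "<?xml version='1.0'?>"
      + "<xs:schema xmlns:xs=''>"
      + "<xs:element name='age' type='xs:integer'/>"
      + "</xs:schema>";

    public static boolean isValid(String xml) throws Exception {
        SchemaFactory factory =
            SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Validator validator = factory
            .newSchema(new StreamSource(new StringReader(SCHEMA)))
            .newValidator();
        try {
            validator.validate(new StreamSource(new StringReader(xml)));
            return true;
        } catch (SAXException e) {
            return false; // document does not conform to the schema
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(isValid("<age>30</age>"));      // true
        System.out.println(isValid("<age>thirty</age>"));  // false
    }
}
```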

Saturday, November 13, 2010

Saving large XMLTypes in Oracle databases

If you try to save a large XMLType that is over 4k characters long, you'll get this error:

can bind a LONG value only for insert into a LONG column

The Oracle database has trouble binding large strings. One solution is to use a temporary CLOB to store the XML data, and bind that to the SQL statement.

Connection con = ...
String xml = "<test>test</test>";
oracle.sql.CLOB clob = null;

clob = CLOB.createTemporary(con, true, CLOB.DURATION_SESSION);
clob.setString(1, xml);

PreparedStatement s = con.prepareStatement(
        "INSERT INTO table (xml) VALUES(XMLType(?))");

s.setObject(1, clob);
s.executeUpdate();

Using this method, you can store large XML documents as XMLType in the Oracle database. But the XMLType still has a limitation: it cannot store text nodes or attribute values larger than 64k. Oracle suggests using an XMLType column with CLOB storage for really large XML documents. But then we cannot use the native XML parsing functions, which defeats the purpose of the XMLType column...

In newer versions of Oracle XML DB, this limit has been lifted.


Thursday, November 11, 2010

Retrieve remote web service client host and IP

If you've created a web service and you want to find out the host name or IP address of the remote client, the following code will be useful.

import javax.annotation.Resource;
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.servlet.http.HttpServletRequest;
import javax.xml.ws.WebServiceContext;
import javax.xml.ws.handler.MessageContext;

@WebService
public class WebService {
    @Resource
    WebServiceContext wsc;

    @WebMethod
    public String webMethod() {
        MessageContext mc = wsc.getMessageContext();
        HttpServletRequest req = (HttpServletRequest)
                mc.get(MessageContext.SERVLET_REQUEST);
        return "Client: " +
                req.getRemoteHost() + " (" +
                req.getRemoteAddr() + ").";
    }
}

The code above will return the host and IP-address of the remote web service client back to the client.

Saturday, November 6, 2010

Using Oracle WebLogic deployment plans

As a developer, I often develop applications that have to be deployed and run in different execution environments. Initially, I start out developing for a development environment. Application tests are executed on a testing environment. The end-users can test and accept the system in the acceptance environment. The final environment is the production environment, where the application is actually used by end-users. This software development cycle is called a DTAP-street.

This has consequences for the Java development, configuration, and deployment process. In every execution environment there are different systems we have to connect to, different database/JDBC names we have to use, different IP addresses/ports we can use, etcetera. This affects the way we package our application (WAR, JAR, and EAR). We want to avoid changing code or annotations for every execution environment, so that we don't need a specifically tailor-made package for each one.

This blogpost is about how to deal with environment-dependent parameters, like IP addresses and JDBC names, when you use Oracle WebLogic as your application server. We put these parameters in a deployment plan: for every execution environment, we create a specific deployment plan. The parameters in the deployment plan override the default parameters in the application when it is deployed. This enables us to use the same application package (WAR, JAR, and EAR) for all the different execution environments.
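Conceptually, the override mechanism works like layering java.util.Properties: packaged defaults are shadowed by environment-specific values. A minimal sketch, where the property names and values are made up for illustration:

```java
import java.util.Properties;

public class DeploymentDefaults {
    // Packaged defaults play the role of the deployment descriptors;
    // the override layer plays the role of a deployment plan.
    public static Properties withOverrides() {
        Properties defaults = new Properties();
        defaults.setProperty("jdbc.name", "jdbc/devDB");
        defaults.setProperty("service.port", "7001");

        // Environment-specific values shadow the defaults.
        Properties production = new Properties(defaults);
        production.setProperty("jdbc.name", "jdbc/prodDB");
        return production;
    }

    public static void main(String[] args) {
        Properties p = withOverrides();
        System.out.println(p.getProperty("jdbc.name"));    // overridden: jdbc/prodDB
        System.out.println(p.getProperty("service.port")); // inherited: 7001
    }
}
```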

An example application EAR file is created in this blogpost to illustrate the use of a deployment plan. It consists of the following steps:

  1. Create an EJB project, a web project, and an EAR project that contains the first two projects
  2. Use JNDI lookup and resource injection to read out or inject environment parameters from deployment descriptors
  3. Create an Oracle WebLogic deployment plan to override the parameters in the standard deployment descriptors

First, we create an EJB project with the following stateless session bean:

package com.javaeenotes;

import javax.annotation.Resource;
import javax.ejb.Stateless;

@Stateless(mappedName = "ejb/ejbEnv")
public class EjbEnv implements EjbEnvRemote, EjbEnvLocal {
    @Resource(name = "ejbVar1")
    private String var1;

    @Resource(name = "ejbVar2")
    private int var2;

    public String getVar1() {
        return var1;
    }

    public int getVar2() {
        return var2;
    }
}
Make sure you also implement the local and remote interfaces to expose the "getter" methods. We have two @Resource annotations to mark the variables we are going to inject with parameters in the ejb-jar.xml deployment descriptor.

Now, create the ejb-jar.xml deployment descriptor:

<?xml version="1.0" encoding="UTF-8"?>
<ejb-jar xmlns=""
         xmlns:xsi=""
         version="3.0">
  <enterprise-beans>
    <session>
      <ejb-name>EjbEnv</ejb-name>
      <env-entry>
        <env-entry-name>ejbVar1</env-entry-name>
        <env-entry-type>java.lang.String</env-entry-type>
        <env-entry-value>Environment variables from ejb-jar.xml</env-entry-value>
      </env-entry>
      <env-entry>
        <env-entry-name>ejbVar2</env-entry-name>
        <env-entry-type>java.lang.Integer</env-entry-type>
        <env-entry-value>999</env-entry-value>
      </env-entry>
    </session>
  </enterprise-beans>
</ejb-jar>
When the stateless session bean is loaded, both parameters will be injected into the attributes of the bean. Notice that we don't need "setter" methods to do this.

Next, create a web project with a simple servlet:

package com.javaeenotes;

import java.io.IOException;

import javax.ejb.EJB;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class WebEnv extends HttpServlet {
    @EJB
    private EjbEnvRemote ejbEnv;

    protected void doGet(HttpServletRequest request,
            HttpServletResponse response)
            throws ServletException, IOException {

        try {
            Context env = (Context) new InitialContext()
                    .lookup("java:comp/env");

            String s = (String) env.lookup("webVar1");
            int i = ((Integer) env.lookup("webVar2")).intValue();

            response.getWriter().write("webVar1: " + s + "\n");
            response.getWriter().write("webVar2: " + i + "\n");

            response.getWriter().write("ejbVar1: " + ejbEnv.getVar1() + "\n");
            response.getWriter().write("ejbVar2: " + ejbEnv.getVar2() + "\n");
        } catch (NamingException e) {
            throw new ServletException(e);
        }
    }
}

This servlet uses JNDI-lookup to read parameters defined in the web.xml deployment descriptor. You can also see that the stateless session bean we created earlier is injected as attribute when this class is instantiated. When this servlet is called, it will print the environment parameters defined in both the ejb-jar.xml and web.xml.

The web.xml deployment descriptor looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns=""
         xmlns:xsi=""
         id="env_web" version="2.5">
  <servlet>
    <servlet-name>WebEnv</servlet-name>
    <servlet-class>com.javaeenotes.WebEnv</servlet-class>
  </servlet>
  <servlet-mapping>
    <servlet-name>WebEnv</servlet-name>
    <url-pattern>/WebEnv</url-pattern>
  </servlet-mapping>
  <env-entry>
    <env-entry-name>webVar1</env-entry-name>
    <env-entry-type>java.lang.String</env-entry-type>
    <env-entry-value>Environment variables from web.xml</env-entry-value>
  </env-entry>
  <env-entry>
    <env-entry-name>webVar2</env-entry-name>
    <env-entry-type>java.lang.Integer</env-entry-type>
    <env-entry-value>888</env-entry-value>
  </env-entry>
</web-app>
Finally, package both projects in an EAR file, and deploy it on the Oracle WebLogic application server. Use the browser to view the output of the servlet. It will print:

webVar1: Environment variables from web.xml
webVar2: 888
ejbVar1: Environment variables from ejb-jar.xml
ejbVar2: 999

Viewing the output, we can verify that it works!

The last step is to create a deployment plan "Plan.xml", which we use to override the parameters defined in both deployment descriptors. The example contents of the deployment plan:

<?xml version='1.0' encoding='UTF-8'?>
<deployment-plan xmlns="">
  <!-- application and module names below are examples; use your own -->
  <application-name>EnvEar</application-name>

  <variable-definition>
    <variable>
      <name>WebEnv_Var1</name>
      <value>NEW environment variables from web.xml</value>
    </variable>
    <variable>
      <name>WebEnv_Var2</name>
      <value>800</value>
    </variable>
    <variable>
      <name>EjbEnv_Var1</name>
      <value>NEW environment variables from ejb-jar.xml</value>
    </variable>
    <variable>
      <name>EjbEnv_Var2</name>
      <value>900</value>
    </variable>
  </variable-definition>

  <module-override>
    <module-name>env_web.war</module-name>
    <module-type>war</module-type>
    <module-descriptor external="false">
      <root-element>web-app</root-element>
      <uri>WEB-INF/web.xml</uri>
      <variable-assignment>
        <name>WebEnv_Var2</name>
        <xpath>/web-app/env-entry/[env-entry-name="webVar2"]/env-entry-value</xpath>
      </variable-assignment>
      <!-- a similar <variable-assignment> assigns WebEnv_Var1 to webVar1 -->
    </module-descriptor>
  </module-override>

  <module-override>
    <module-name>env_ejb.jar</module-name>
    <module-type>ejb</module-type>
    <module-descriptor external="false">
      <root-element>ejb-jar</root-element>
      <uri>META-INF/ejb-jar.xml</uri>
      <variable-assignment>
        <name>EjbEnv_Var2</name>
        <xpath>/ejb-jar/enterprise-beans/session/env-entry/[env-entry-name="ejbVar2"]/env-entry-value</xpath>
      </variable-assignment>
      <!-- a similar <variable-assignment> assigns EjbEnv_Var1 to ejbVar1 -->
    </module-descriptor>
  </module-override>
</deployment-plan>
At the top we define the variables we want to use to override parameters:

<variable>
  <name>WebEnv_Var1</name>
  <value>NEW environment variables from web.xml</value>
</variable>
<variable>
  <name>EjbEnv_Var1</name>
  <value>NEW environment variables from ejb-jar.xml</value>
</variable>

Then we use the <variable-assignment>-element to specify, with XPath, what we want to override. If you're not familiar with XPath, find a tutorial for an explanation. Simply said, XPath makes it possible to "walk" to the element you want to override. The value between the <xpath>-elements is actually one line; it is only split up here because otherwise it won't fit the blog layout.


<xpath>/web-app/env-entry/
[env-entry-name="webVar2"]/
env-entry-value</xpath>

This actually means: replace the content of the <env-entry-value>-element whose <env-entry-name> equals "webVar2" with the value of the variable named "WebEnv_Var2" in the variable definitions.
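For comparison, here is how the same selection looks in standard XPath, using the JDK's javax.xml.xpath API on a stripped-down web.xml held in a string. Note that WebLogic's deployment-plan dialect writes the predicate slightly differently (with an extra "/" before the bracket); the class and method names below are my own:

```java
import java.io.ByteArrayInputStream;

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;

import org.w3c.dom.Document;

public class XPathDemo {
    // Look up the value of the env-entry with the given name.
    public static String envEntryValue(String webXml, String entryName)
            throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(webXml.getBytes("UTF-8")));
        // Standard XPath: select the env-entry-value of the matching env-entry.
        return XPathFactory.newInstance().newXPath().evaluate(
                "/web-app/env-entry[env-entry-name='" + entryName
                        + "']/env-entry-value", doc);
    }

    public static void main(String[] args) throws Exception {
        String webXml = "<web-app>"
                + "<env-entry><env-entry-name>webVar2</env-entry-name>"
                + "<env-entry-value>888</env-entry-value></env-entry>"
                + "</web-app>";
        System.out.println(envEntryValue(webXml, "webVar2")); // prints 888
    }
}
```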

If we deploy the same EAR-file, but this time with the deployment plan, the servlet will output:

webVar1: NEW environment variables from web.xml
webVar2: 800
ejbVar1: NEW environment variables from ejb-jar.xml
ejbVar2: 900

The environment parameters defined in the deployment plan successfully override the parameters defined in the deployment descriptors.


Sunday, October 31, 2010

Creating file realm users in JBoss

As promised in an earlier post, I'll show you how to create file realm users in the JBoss application server. A realm is a database that contains user, password, and role information. The application server can use this information to identify users in order to provide container managed security to applications. Realm information can be stored and retrieved in various ways:

  • JDBC
  • Datasource
  • JNDI
  • File (JBoss calls this MemoryRealm)
  • JAAS

We'll only be discussing the file realm in this blogpost. When you use a file realm, the information is stored in a plain text file.

1. Create a realm
The first step is to create a realm by editing server.xml in your deployment directory of your server. For this example, we'll use the default server configuration in the directory: <JBOSS_DIR>/server/default/deploy/jbossweb.sar. Decide if the realm is shared by applications. You can share it at three different levels:

  • Shared across all applications on all virtual hosts (<Engine>)
  • Shared across all applications on a particular host (<Host>)
  • Not shared, only used by one application (<Context>)

Create and place the <Realm>-element nested in one of the elements mentioned above. Example: if you want the realm to be shared by all applications on a host, you nest the <Realm>-element in <Host>.

<Realm className="org.apache.catalina.realm.MemoryRealm"
       digest="MD5" pathname="conf/users.xml" />

It is recommended to specify a digest (for example MD5) so that the passwords are stored hashed; otherwise the passwords are stored in clear text. If the pathname attribute is not specified, the default file conf/tomcat-users.xml is used.
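To see what a digested password looks like, here is a minimal sketch that computes the hex digest a realm file would store. The helper class is my own, and you should check your JBoss version's documentation for the exact digest algorithm and encoding it expects:

```java
import java.security.MessageDigest;

public class DigestPassword {
    // Compute the hex digest of a password, as typically stored
    // in a digested realm file instead of the clear-text password.
    public static String digest(String password, String algorithm)
            throws Exception {
        MessageDigest md = MessageDigest.getInstance(algorithm);
        byte[] hash = md.digest(password.getBytes("UTF-8"));
        StringBuilder sb = new StringBuilder();
        for (byte b : hash) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Print the MD5 digest of an example password.
        System.out.println(digest("qwerty1", "MD5"));
    }
}
```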

2. Create file with users
Create the file specified in the realm configuration, and add users to the file in the following format:

<user name="john" password="..."
      roles="user" />
<user name="peter" password="..."
      roles="admin" />
<user name="carol" password="..."
      roles="role1,role2" />

3. Restart JBoss
The realm becomes active when JBoss is restarted.

Sunday, October 17, 2010

Securing JAX-WS web services in Eclipse

In this tutorial, we're going to secure the web service created in the previous blogposts. If you haven't done so already, please go through the previous two blogposts, because this blogpost builds on the web service created there. We're using container-managed security. The tutorial consists of three parts: in the first part, we implement BASIC authentication; in the second part, we extend BASIC authentication with Secure Socket Layer (SSL); in the third part, we replace BASIC authentication with mutual authentication using certificates.

PART 1: BASIC authentication


  1. Create realm users
  2. Modify server application for BASIC authentication
  3. Modify client application for BASIC authentication

1. Create realm users
In this step we're creating users that are authorized to use the web service. The steps described here are only applicable for the Glassfish application server. I will describe the steps needed for the JBoss application server in a future blogpost.

First, make sure Glassfish is running, and go to the admin console or interface. Use the menu tree on the left to browse to: Security -> Realm -> file

Now in the next screen, click on "Manage Users".

This will take you to the screen "File Users". Click on "New..." to create a new user. In this example, create a user "peter" in the group "wsusers". The password is "qwerty1".

Finally, click "Finish". We have now completed creating a user in Glassfish.

2. Modify server application for BASIC authentication
In this step, we have to modify two files: web.xml and sun-web.xml. If you don't have these files, just create them under the WEB-INF directory. In the web.xml file, we protect the web service by adding: a security constraint, login configuration, and roles. To do this, we add the following code directly nested in <web-app>:

<!-- Security constraint for resource only accessible to role -->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>Authorized users only</web-resource-name>
    <url-pattern>/*</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <role-name>user</role-name>
  </auth-constraint>
</security-constraint>

<!-- BASIC authorization -->
<login-config>
  <auth-method>BASIC</auth-method>
  <realm-name>file</realm-name>
</login-config>

<!-- Definition of role -->
<security-role>
  <role-name>user</role-name>
</security-role>
In the file sun-web.xml, we map the user group created in the application server to the role "user". Put the following code directly under <sun-web-app>:

<security-role-mapping>
  <role-name>user</role-name>
  <group-name>wsusers</group-name>
</security-role-mapping>

At this point, you can deploy the application and test it with the web service client. If everything is correct, a 401 authorization error will be displayed when the client tries to connect to the web service without credentials.

3. Modify client application for BASIC authentication
In this step, we're going to supply a username and password when connecting to the web service from the client. The altered start() method in the client class should look like this:

public void start() {
    // This statement is not needed when run in container.
    service = new ExampleWSService();
    ExampleWS port = service.getExampleWSPort();

    // Authentication
    BindingProvider bindProv = (BindingProvider) port;
    Map<String, Object> context = bindProv.getRequestContext();
    context.put(BindingProvider.USERNAME_PROPERTY, "peter");
    context.put(BindingProvider.PASSWORD_PROPERTY, "qwerty1");

    System.out.println(port.multiply(3, 4));
}

If we run the client now, we notice that we can successfully connect to the web service without getting authorization errors.

PART 2: Implementing Secure Socket Layer (SSL)

Now, we continue by adding Transport Level Security (TLS), using SSL to encrypt the transport channel. This will prevent eavesdropping on, and tampering with, the communication between client and server.


  1. Export server certificate from server keystore
  2. Create client truststore and import server certificate
  3. Modify server application for SSL
  4. Modify client application for SSL

1. Export server certificate from server keystore
Open a command prompt and navigate to the root directory of the client. Create a new directory called "server" with the command "mkdir server" and navigate into it. This will be our working directory when manipulating the server keystore/truststore. Copy the following files from the Glassfish_installation/domains/domain1/config directory to the newly created server directory: cacerts.jks (truststore) and keystore.jks (keystore).

Now, we can use the keytool to export the server certificate from the server keystore using the default store password "changeit":

keytool -export -alias s1as -keystore keystore.jks
-storepass changeit -file server.cer

The exported server certificate is now written to the file server.cer.

2. Create client truststore and import server certificate
The next step is to create the client truststore and import the server certificate to it. Create a "client" directory in the root directory of the client, and navigate to it using the command prompt. Now use the keytool to create the client truststore and import the server certificate:

keytool -import -v -trustcacerts -alias s1as -keypass changeit
-file ../server/server.cer -keystore client_cacerts.jks
-storepass changeit

When prompted to trust the certificate, type "yes" and press enter to accept.

3. Modify server application for SSL
Add the following code inside the <security-constraint>-element in web.xml to enable SSL:

<user-data-constraint>
  <transport-guarantee>CONFIDENTIAL</transport-guarantee>
</user-data-constraint>

When you run the client now, you'll get a 302-error, indicating that the resource has moved.

4. Modify client application for SSL
Edit the service class (the generated by wsimport) and look up the WSDL URL in the code. Replace the http:// URL with its https:// equivalent; by default, Glassfish listens for HTTPS on port 8181.

Save, and run the client with the trust store supplied as VM parameters (the standard JSSE system properties):
You can configure the VM parameters in Eclipse here: Run -> Run Configurations

PART 3: Mutual authentication

In the final part, we're replacing the BASIC authentication with mutual authentication with certificates.


  1. Create client keystore
  2. Export client certificate
  3. Import client certificate into server truststore
  4. Modify server application for mutual authentication
  5. Modify client application for mutual authentication
  6. Configure Glassfish for mutual authentication

1. Create client keystore
Now, create a client keystore to store the client's own certificate.

keytool -genkey -alias client -keypass changeit
-storepass changeit -keystore client_keystore.jks

Answer the questions when prompted. Note that the distinguished-name values you enter here (name, organizational unit, organization, city, state, country) must match the <principal-name> configured on the server in step 4.

2. Export client certificate
Navigate to the "client" directory with the command prompt. The client certificate is needed on the server side. So we need to export it from the client keystore using the following command:

keytool -export -alias client -keystore client_keystore.jks
-storepass changeit -file client.cer

3. Import client certificate into server truststore
Navigate to the "server" directory and import the client certificate with the command:

keytool -import -v -trustcacerts -alias client
-keystore cacerts.jks -keypass changeit
-file ../client/client.cer

Use the default password "changeit" when asked, and finally type "yes" to accept the certificate.

4. Modify server application for mutual authentication
The file web.xml has to be changed for mutual authentication configuration. Lookup the <auth-method>-element and replace BASIC with CLIENT-CERT.

The sun-web.xml file has also to be modified. Instead of mapping the role to a realm group, we have to map the role to a certificate with the <principal-name>-element:

<security-role-mapping>
  <role-name>user</role-name>
  <!-- <group-name>wsusers</group-name> -->
  <principal-name>CN=Name, OU=Department,
O=Organization, L=City, ST=State,
C=Country</principal-name>
</security-role-mapping>

5. Modify client application for mutual authentication
The username and password we used in the client code in part 1 are no longer needed, so we can remove that part.

The client is now finished, and can be run with the following VM settings (the standard JSSE key store and trust store system properties):
6. Configure Glassfish for mutual authentication
If you experience connection problems when using the client, you're probably using the default Glassfish installation. By default, mutual authentication is disabled. To enable mutual authentication, go to: Configuration -> Network Listeners -> http-listener-2. Click the SSL tab. Make sure "SSL3" is not checked. The checkboxes "TLS" and "Client Authentication" should be checked.

Everything should be working now. In a future blogpost, I will show you how to configure realm users for JBoss.

Friday, October 8, 2010

Web service client with JAX-WS in Eclipse

In this blogpost, I will use JAX-WS to show how easy it is to create a simple client for the web service we created in the previous blogpost. The client can be implemented in various ways, for example as a web application or an EJB. In both cases, the client runs in a Java container, so you can take advantage of the resource injection facility provided by the container. This means you can use annotations to automatically inject the web service object where needed. We start out by creating a standard client, which does not run in a container.

Let's get started by creating a standard Java project in Eclipse. When done, start up a command prompt and navigate to the root directory of the newly created project. Use the wsimport command to generate the classes you need to access the web service. You need the URL of the WSDL for this to work. The following command generates the sources and classes, and places them in the default Eclipse directories.

wsimport -s src -d bin <WSDL-URL>

Now, create a new class, which acts as the client. An example client:

package com.javaeenotes;

import javax.xml.ws.WebServiceRef;

public class WebserviceClient {

    // This annotation only has effect in a container.
    // Adjust the WSDL location to your own deployment.
    @WebServiceRef(wsdlLocation =
            "http://localhost:8080/webservice/ExampleWSService?wsdl")
    private static ExampleWSService service;

    public static void main(String[] args) {
        WebserviceClient wsc = new WebserviceClient();

    public void start() {
        // This statement is not needed when run in container.
        service = new ExampleWSService();
        ExampleWS port = service.getExampleWSPort();
        System.out.println(port.multiply(3, 4));
    }
}
The resource injection annotation is not necessary here, because the client does not run in a container. I left it here to show how to use annotation to inject the resource into your class when run in a container.

Sunday, October 3, 2010

Web service with JAX-WS in Eclipse

In this tutorial, I'll show you how to use JAX-WS to build a web service in Eclipse. JAX-WS stands for Java API for XML Web Services. You can use it to build web services and clients that communicate using XML messages. We are not using the built-in web service generation tool provided by Eclipse. How to build a sample client will be explained in a future blog post.

The basic steps for building a web service:

  1. Code the implementation class
  2. Annotate the class
  3. Use wsgen to generate web service artifacts
  4. Deploy in Glassfish from Eclipse

1. Code the implementation class

First create a dynamic web project in Eclipse. I called my project webservice, but you can call your project anything you like. Make sure the target runtime is Glassfish, and select Minimal Configuration in the configuration screen.

In the next screen, accept the default output folder or choose a custom one. Memorize the output folder, because we need this when we generate the web service artifacts.

You don't have to select the last checkbox to auto generate the web.xml deployment descriptor, but you'll need it when your application is also a web application.

Click finish, and your project is ready to use.
Create a new class in the newly created project. This will be the implementation class of the web service. I called mine ExampleWS.

2. Annotate the class

Now it's time to complete the class and use JAX-WS annotation to make it a valid web service.

package com.javaeenotes;

import javax.jws.WebMethod;
import javax.jws.WebService;

public class ExampleWS {

    @WebMethod
    public int sum(int a, int b) {
        return a + b;
    }

    @WebMethod
    public int multiply(int a, int b) {
        return a * b;
    }

    @WebMethod
    public String greet(String name) {
        return "Hello " + name + "!";
    }
}

3. Use wsgen to generate web service artifacts

Start a command prompt and navigate to the root of the project directory. Make sure the JDK is in your PATH, or else you can't execute the wsgen command. Create a wsdl directory in WebContent/WEB-INF/ or else wsgen will complain. Use the wsgen command to generate the artifacts like this:

wsgen -classpath build/classes/ -wsdl
-r WebContent/WEB-INF/wsdl -s src
-d build/classes/ com.javaeenotes.ExampleWS

This command will generate the source, class files, and WSDL file for the web service implemented as com.javaeenotes.ExampleWS. When wsgen complains about not finding the class, you have to build the project first. When the command is successful, you can refresh the project and you'll find the artifacts in your project tree.

The generated WSDL-file needs some editing before you can use it. Open the WSDL-file and search for REPLACE_WITH_ACTUAL_URL. Replace this string with the actual web service URL, and save the file.

4. Deploy in Glassfish from Eclipse

I assume you have Glassfish integrated in Eclipse, or else you have to build a WAR-file using ANT and manually deploy it into Glassfish. There is a blog post about integrating Glassfish into Eclipse. You can use the search engine to find it.

Deploy the project in Glassfish by right-clicking the project, click Run As, and finally Run on Server. Select the Glassfish server in the selection screen, and complete this step. When the configuration is finished, Eclipse tries to open a web page, which fails with a 404 error. Don't worry, this is normal, because we just created a web application without pages. Our project contains only a web service.

We can test the newly created web service by using SOAP-UI or the integrated Web Services Explorer in Eclipse. To use the Web Services Explorer, browse to the WSDL-file, right-click on it, select Web Services, and click on Test with Web Services Explorer. This will present you with the web services test screen, which is pretty easy to use.

Saturday, September 25, 2010

Apache log4j

Apache log4j is a very popular framework for adding logging facilities to Java applications. Logging is essential for debugging, localizing problems, and security auditing. Logging statements do not have to be removed when the application is ready for deployment, and have little influence on performance when the log level is configured appropriately.

Log4j is a hierarchical logger, and has six log levels to control/grade logging messages:

  1. TRACE
  2. DEBUG
  3. INFO
  4. WARN
  5. ERROR
  6. FATAL

The list is ordered by severity, where the last level is the highest. The configured log level determines what is logged: a message is logged when its level is equal to or higher than the configured level. Example: when the log level is WARN, only WARN, ERROR, and FATAL messages are logged, and TRACE, DEBUG, and INFO messages are suppressed. When the log level is TRACE, the lowest level, all messages are logged. The log level can be changed during run-time.
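The threshold rule can be sketched with a plain enum, since log4j's levels are simply ordered by severity. This is a simplified model for illustration, not log4j's actual API:

```java
public class LogLevelDemo {
    // log4j's levels, ordered from least to most severe.
    public enum Level { TRACE, DEBUG, INFO, WARN, ERROR, FATAL }

    // A message is logged when its level is at least as severe
    // as the configured threshold.
    public static boolean isLogged(Level threshold, Level message) {
        return message.ordinal() >= threshold.ordinal();
    }

    public static void main(String[] args) {
        System.out.println(isLogged(Level.WARN, Level.INFO));   // false: suppressed
        System.out.println(isLogged(Level.WARN, Level.ERROR));  // true
        System.out.println(isLogged(Level.TRACE, Level.DEBUG)); // true: TRACE logs everything
    }
}
```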

Another control mechanism for hierarchical logging is that each class of the application obtains its own logger, attached to its class hierarchy, like the statement below in the constructor of Foo:

Logger log = Logger.getLogger(Foo.class);

This way, you can configure the logger to print log messages from a selection of classes only. You can use the root logger to print log messages from all classes, or use a class-specific logger to print only messages from that class and/or its children.

Apache log4j can be configured in two ways:

  1. Using property files (example:
  2. Using XML files (example: log4j.xml)

I prefer the properties file way, because it's less verbose. We can configure many things in the configuration file, like:

  • output log file
  • log file name pattern
  • initial log level
  • log line format
  • log file management

Example application
Let's create an example application that uses Apache log4j:

package test;

import org.apache.log4j.Logger;

public class LogMain {
    private Logger log;

    public static void main(String[] args) {
        LogMain app = new LogMain();;
    }

    public LogMain() {
        System.out.print("Application started.\n");
        this.log = Logger.getLogger(LogMain.class);
    }

    public void run() {
        this.log.trace("TRACE message!");
        this.log.debug("DEBUG message!");"INFO message!");
        this.log.warn("WARN message!");
        this.log.error("ERROR message!");
        this.log.fatal("FATAL message!");
    }
}

To configure Apache log4j, you can use a properties file or an XML file. The properties file looks like this:

log4j.rootLogger=INFO, CONSOLE, FILE

log4j.appender.CONSOLE.layout.conversionPattern=%d{HH:mm:ss:SSS} - %p - %C{1} - %m%n

log4j.appender.FILE.layout.conversionPattern=%d{HH:mm:ss:SSS} - %p - %C{1} - %m%n

The equivalent XML-file is this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">

<log4j:configuration xmlns:log4j="">

<appender name="file" class="org.apache.log4j.DailyRollingFileAppender">
<param name="file" value="logs/app.log" />
<param name="datePattern" value="yyyyMMdd." />
<param name="append" value="true" />
<layout class="org.apache.log4j.PatternLayout">
<param name="ConversionPattern"
value="%d{HH:mm:ss:SSS} - %p - %C{1} - %m%n" />
</layout>
</appender>

<appender name="console" class="org.apache.log4j.ConsoleAppender">
<layout class="org.apache.log4j.PatternLayout">
<param name="ConversionPattern"
value="%d{HH:mm:ss:SSS} - %p - %C{1} - %m%n" />
</layout>
</appender>

<root>
<priority value="info" />
<appender-ref ref="console" />
<appender-ref ref="file" />
</root>

</log4j:configuration>
Put the file in the class path, and Apache log4j will automatically find and load it.

The example configuration uses two log appenders that define the output of logging messages:

  1. DailyRollingFileAppender: automatically rotates the log file every day
  2. ConsoleAppender: prints log messages to the console

The output can also be customized by the layout class, which is configured to use a conversion pattern to format the output. The output of the example is formatted like this:

19:23:54:938 - WARN - LogMain - WARN message!
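Each token in the conversion pattern maps to a part of the log event: %d{HH:mm:ss:SSS} is the timestamp, %p the priority (level), %C{1} the unqualified class name, %m the message, and %n a newline. Below is a rough sketch of the substitution, purely as an illustration of what the formatter produces, not log4j's actual code:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class PatternSketch {
    // Formats a log line the way the pattern
    // "%d{HH:mm:ss:SSS} - %p - %C{1} - %m%n" would.
    static String format(Date d, String priority, String className, String msg) {
        String time = new SimpleDateFormat("HH:mm:ss:SSS").format(d);          // %d{...}
        String simpleName = className.substring(className.lastIndexOf('.') + 1); // %C{1}
        return time + " - " + priority + " - " + simpleName + " - " + msg + "\n";
    }

    public static void main(String[] args) {
        System.out.print(format(new Date(), "WARN", "test.LogMain", "WARN message!"));
    }
}
```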

This blog post gives you enough information to start using Apache log4j. For more information you can also read the manual here.

Extra note
When using Apache log4j for applications in containers (web and EJB), it might clash with the logging used by the container. For example: Oracle Application Server (OAS) also uses Apache log4j for its application server logging, and will not deploy applications that bundle Apache log4j. This problem can be circumvented by disabling the library apache.commons.logging for deployment.

Sunday, September 19, 2010

SUN certificates rebranded to Oracle

Oracle has rebranded SUN certificates to Oracle certificates.

Click this link to see how the SUN certification titles have been rebranded under the Oracle certification program.

What this means:

  • SUN certificates you're holding will still be valid, and will not expire.
  • Exam objectives remain unchanged for current certification exams.
  • Candidates who pass the exam after 1-3 September 2010 will get the Oracle-branded certificate.

Glassfish integration in Eclipse 3.6 (Helios)

Developing in Eclipse for Glassfish is easier when Glassfish is integrated into the IDE. This way, it's possible to right-click the project to deploy the selected project automatically in Glassfish.

In this tutorial, I will show you how to integrate a local Glassfish installation into Eclipse 3.6 (Helios). Before we start, I assume you've installed Eclipse 3.6, the Java JRE, the Java JDK, and the Java EE SDK. Install the JRE and the JDK before you install the Java EE SDK.

The first step is to start up Eclipse and locate the servers tab, which is found at the bottom of the screen for default installations.

Now, right-click somewhere in the tab, select "new", and then "server". The following screen will pop up, with the default server adapters.

The server adapter for Glassfish is not included by default. So we have to click "Download additional server adapters" to download the server adapter for Glassfish. A screen will pop up with a selection of server adapters. Select "Oracle Glassfish Server Tools", which is the adapter we're looking for.

Click "next" to download and install the server adapter. Restart Eclipse, when the installation is finished. After the restart, the Glassfish server adapter will show up in the list from the servers tab.

Select it, and click next. The next screen lets you configure the server adapter. Select a working JRE environment and select the installation path of your Glassfish application server.

Click next to go to the next screen. If you have set an administrator password during the installation of Glassfish, you have to fill in this password in the form. Otherwise, use the default values and click next to proceed.

The next screen lets you select existing projects to be added to the newly configured server adapter. The screen is empty in this tutorial, because we're using a fresh install.

When the configuration is successful, you'll see the server adapter in the servers tab. Now, you can start using it by right-clicking on the server.

From this menu, you can:

  • start/stop the server
  • add/remove new projects
  • publish/unpublish projects
  • view the administration console
  • update Glassfish

Have fun playing with Glassfish in Eclipse!

Friday, September 17, 2010

Number of Sun Certified Enterprise Architects (SCEA) in the world

According to this topic, there are over 5900 SCEAs in the world. Below is an incomplete list of the numbers of SCEAs per country.

Argentina: 14
Australia: 107
Austria: 27
Belgium: 52
Brazil: 200
Canada: 435
China: 138
Denmark: 58
Finland: 45
France: 45
Germany: 276
Hong Kong: 146
India: 421
Italy: 46
Netherlands: 142
New Zealand: 28
Norway: 47
Sweden: 72
UK: 348
Uruguay: 1
USA: 2334

Monday, September 6, 2010

SoapUI

SoapUI is a tool that has a lot in common with the tool discussed in my previous blog post. SoapUI also lets you analyze traffic, but this time it's all about web services traffic, like SOAP and RESTful web services.

One neat feature is importing WSDL-documents in order to create automatic simulation requests. A generated request template can be filled out, before SoapUI sends it to the server. The response of the web service will be nicely printed on the screen for you to analyze. This way, you can easily test or try out your web service.


Friday, August 27, 2010

WebScarab

WebScarab is a tool created by OWASP that can be used to analyze the HTTP(S) traffic between your browser and the web server. In order to intercept traffic, you have to configure WebScarab as a proxy server in your browser. Whenever a request is sent to the web server by your browser, WebScarab intercepts the request and holds it for reviewing, or even modifying, before actually sending the request to the web server. The complete request can be analyzed in all its detail, which includes the HTTP headers.

It's a great tool for debugging complex problems or reviewing the security of your applications. Download WebScarab here.

Thursday, August 19, 2010

SyntaxHighlighter

I have a colleague who is always monitoring 2392 blogs, 503 RSS-feeds, and 3922 websites, so he doesn't miss out on the little things that make life or work just a little bit more efficient. His laptop and PC have over 293 little tools installed, which enable him to get his work done three clicks faster than the regular guy. He is Tool Man Tony.

Yesterday, he forwarded me an e-mail notification of one of his (many) RSS-feeds, which drew my attention. It was a blog post about a syntax highlighter tool, like the one found in your favorite IDE (Integrated Development Environment). But this one is different. This one is meant for highlighting code in web pages.

The tool is called SyntaxHighlighter. It's basically a set of JavaScript files you can include in your blog or website to highlight code fragments, which makes your code much more readable. Additionally, it can also number the lines of your code fragments automatically. All of this happens client-side. A screenshot of highlighted code:

SyntaxHighlighter supports most popular programming languages. The complete list of supported languages can be found here.

Installing and using SyntaxHighLighter is easy, and consists of the following basic steps:

  1. Download the installation files here.
  2. Extract the files and include shCore.js and shCore.css in your web page.
  3. Add brushes (syntax files) for the languages you want to highlight.
  4. Use <pre> or <script> tags to enclose your code fragment.
  5. Call the JavaScript method SyntaxHighlighter.all() to invoke SyntaxHighlighter.

As you can see, SyntaxHighlighter is loosely coupled to your web page, and is very easy to use.

Thanks Tool Man Tony for the tip!

Official SyntaxHighlighter site:

Tuesday, August 17, 2010

Getting Oracle SQL result set pages

When you want to display the result of a query that returns a large number of rows, it's common to divide the complete result in pages, and display them one page at a time. To offer the user a way of navigating through the pages, there are clickable page numbers or "next"/"previous" buttons at the bottom of the page. This technique is called result set paging.

To implement this efficiently, the paging should also be built into the query; it's not efficient to return the complete result from the database and then discard the rows we don't want to display.

Let's say that you want to page the following query:

SELECT col1, col2
FROM example_table
WHERE col2 = 'some_condition'

Before we can divide the result set, we need to number the rows of the result set. The pseudo column "rownum", provided by Oracle databases, numbers the rows of the result. The first row of the result is always 1.

Using this knowledge, we can retrieve the "page" we want using the WHERE clause with the corresponding start row and end row numbers. In the example below, we want to fetch rows from number 201 to 300.

SELECT *
FROM (SELECT r.*, ROWNUM AS row_number
      FROM (SELECT col1, col2
            FROM example_table
            WHERE col2 = 'some_condition'
            ORDER BY col1 DESC) r)
WHERE row_number >= 201 AND row_number <= 300;

The reason why we wrap the original query in another SELECT query, is that the database assigns ROWNUMs before the ordering, which makes our ordering ineffective for paging. To make sure the ROWNUMs are ordered, we retrieve the ROWNUM column in the outer SELECT query. The database will then assign the ROWNUM in the already correctly ordered result set.

This query can be optimized for newer Oracle databases by removing the top outer SELECT clause, while leaving the WHERE clause intact.
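As a sketch, the paged query can be assembled in Java from a page number and page size. The helper below is my own illustration, not part of any library; in production code the row bounds would be bound as query parameters rather than concatenated:

```java
public class PagedQuery {
    // Builds the nested Oracle paging query shown above for a
    // 1-based page number and a page size.
    static String build(String innerQuery, int page, int pageSize) {
        int startRow = (page - 1) * pageSize + 1;
        int endRow = page * pageSize;
        return "SELECT * FROM (SELECT r.*, ROWNUM AS row_number FROM ("
                + innerQuery + ") r) WHERE row_number >= " + startRow
                + " AND row_number <= " + endRow;
    }

    public static void main(String[] args) {
        // Page 3 with a page size of 100 covers rows 201 to 300.
        System.out.println(build(
            "SELECT col1, col2 FROM example_table ORDER BY col1 DESC", 3, 100));
    }
}
```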

Friday, August 13, 2010

Mind mapping with FreeMind

Mind mapping is a learning method that uses tree-like graphs to help your brain structure information visually. The brain is better at remembering visuals than words.

As an experiment, I am using mind mapping to prepare for the Sun Certified Developer for Java Web Services (SCDJWS) certificate. I will publish this in the future. Unlike the traditional way of drawing mind maps using pen and paper, I'm using a tool recommended by a colleague. The tool is called FreeMind and can be downloaded here.

My first impression was how compactly it stores a lot of information, thanks to the ability to fold away nodes to hide related information, and to zoom in by unfolding directly connected nodes.

Although mind mapping is originally a learning method, I find it's also suited for storing information you regularly need in a structured way, because it makes the information much more "findable".

Monday, August 2, 2010

Friday, July 16, 2010

Extend Oracle tablespace files

Sometimes you need to enlarge a tablespace file when its size is not sufficient to handle transactions. This is the case when you get an error like:

ORA-01650: Unable to extend rollback segment RBS6
by %s in tablespace ROLLBACKSEGS

In this case, the rollback segment is not large enough to handle the database transaction. Now, you have to check if the autoextensible flag is set or if the datafile is too small.

First, find the correct database file of the rollback segment:

SELECT * FROM dba_rollback_segs;

Use the file ID from the result above to find the correct file on the file system:

SELECT * FROM dba_data_files;

Now enlarge the file:

ALTER DATABASE DATAFILE '<path_to_datafile>' RESIZE 1024m;

or set autoextend to ON:

ALTER DATABASE DATAFILE '<path_to_datafile>' AUTOEXTEND ON;

Monday, June 28, 2010

Java 4 Ever

This is a really funny video about Java!

Monday, June 21, 2010

Sonar (Code Quality Management System)

Sonar is an open system to manage code quality. It monitors the 7 axes of code quality:

  • Architecture & Design
  • Comments
  • Duplications
  • Coding rules
  • Unit tests
  • Potential bugs
  • Complexity

More info:

Wednesday, June 16, 2010

Setting the response MIME-type in JavaServer Faces

To set the MIME-type in JSF, use the following code in your JSP-page:

<% response.setContentType("text/plain"); %>

In this example, the MIME-type is set to plain text. The browser will treat the page as plain text and will not render it as an HTML page.

Here is a list of common MIME-types.

Monday, June 14, 2010

Fixing ^M characters in VI

The newline in UNIX-like OSes is represented differently than in Windows. When a Windows file is viewed in the VI editor, you see ^M where each line ends. You can remove the ^M characters with the following VI substitution command:

:%s/^M//g

To type the ^M character, hold the CTRL-key and press V, then M.

Wednesday, June 9, 2010

Getting client information in JavaServer Faces

I always log client information in JSF applications in debug mode. I use the following code in a managed bean to retrieve the IP-address, hostname and HTTP-headers from the browser.

// Get the request object.
HttpServletRequest request = (HttpServletRequest)

// Get the header names. Use them to retrieve the actual
// values with request.getHeader(name).
Enumeration<?> headerNames = request.getHeaderNames();

// Get the IP-address of the client.
String ipAddress = request.getRemoteAddr();

// Get the hostname of the client.
String hostname = request.getRemoteHost();

Friday, May 21, 2010

MySQLs auto_increment in Oracle

MySQL has a reserved keyword "auto_increment" that can be used on the ID/primary key column of a new table. Whenever a new record is added to the table, this field is automatically set to a unique incremented integer value.

In the Oracle database, this feature is missing. But we can simulate this feature using the SEQUENCE and TRIGGER functionality provided by Oracle.

A sequence is a sequential value stored in the database. It starts with an initial user supplied value, and can be incremented by an arbitrary value to a given maximum value. The statement to create a sequence for table ATABLE:

create sequence ATABLE_SEQ
minvalue 1
maxvalue 999999999999999999999999999
start with 1
increment by 1
cache 20;

Now, we want to hook up a trigger on an INSERT event of the table. Whenever a record is inserted into the table, the current value of the sequence is incremented and set in the primary key field of the inserted record.

create or replace trigger ATABLE_TRIG
before insert on ATABLE
for each row
  -- "id" stands for the primary key column of ATABLE.
  select atable_seq.nextval into from dual;

Thursday, May 20, 2010

Accessing MS Active Directory using LDAP

In this example, we'll implement a simple client that performs two simple lookups in Microsoft Active Directory to retrieve groups and users through the LDAP interface. The LDAP interface can be accessed using the standard JNDI API.

The Microsoft implementation of the LDAP interface is limited to 1000 query results. So to handle more than 1000 results, we have to maintain a cookie that stores the current position of retrieved results and pass the cookie back to the server to continue the query from the cookie stored position. This is called a paged query.

We first start by creating a LdapContext object, which we'll use to access Active Directory through the LDAP interface. The Context object is configured using a Hashtable.

Hashtable<String, String> env = new Hashtable<String, String>();
env.put(Context.SECURITY_AUTHENTICATION, "simple");
env.put(Context.SECURITY_PRINCIPAL, "<user>");
env.put(Context.SECURITY_CREDENTIALS, "<password>");
env.put(Context.PROVIDER_URL, "ldap://<host>:389");
env.put(Context.REFERRAL, "follow");

// We want to use a connection pool to
// improve connection reuse and performance.
env.put("com.sun.jndi.ldap.connect.pool", "true");

// The maximum time in milliseconds we are going
// to wait for a pooled connection.
env.put("com.sun.jndi.ldap.connect.timeout", "300000");

LdapContext ctx = new InitialLdapContext(env, null);

Next, we need to specify how we are going to search the directory with a SearchControls object, from where we want to start the search with a search base string, and how we want to filter results.

SearchControls searchCtls = new SearchControls();
// We start our search from the search base.
// Be careful! The string needs to be escaped!
String searchBase = "dc=company,dc=com";

// There are different search scopes:
// - OBJECT_SCOPE, to search the named object
// - ONELEVEL_SCOPE, to search in only one level of the tree
// - SUBTREE_SCOPE, to search the entire subtree

// We only want groups, so we filter the search for
// objectClass=group. We use wildcards to find all groups with the
// string "<group>" in its name. If we want to find users, we can
// use: "(&(objectClass=person)(cn=*<username>*))".
String searchFilter = "(&(objectClass=group)(cn=*<group>*))";

// We want all results.

// We want to wait indefinitely to get all results.

// We only want to retrieve the "distinguishedName" attribute.
// You can specify other attributes/properties if you want here.
String returnedAtts[] = { "distinguishedName" };

// Active Directory limits our results, so we need multiple
// requests to retrieve all results. The cookie is used to
// save the current position.
byte[] cookie = null;

// We want 500 results per request.
ctx.setRequestControls(new Control[] {
    new PagedResultsControl(500, Control.CRITICAL) });

// The request loop starts here.
do {
    // Start the search with our configuration.
    NamingEnumeration<SearchResult> answer =, searchFilter, searchCtls);

    // Loop through the search results.
    while (answer.hasMoreElements()) {
        SearchResult sr =;
        Attributes attr = sr.getAttributes();
        Attribute a = attr.get("distinguishedName");
        // Print our wanted attribute value.
        System.out.println((String) a.get());
    }

    // Find the cookie in our response and save it. The cookie
    // becomes null when there are no more pages to fetch.
    cookie = null;
    Control[] controls = ctx.getResponseControls();
    if (controls != null) {
        for (int i = 0; i < controls.length; i++) {
            if (controls[i] instanceof PagedResultsResponseControl) {
                PagedResultsResponseControl prrc =
                    (PagedResultsResponseControl) controls[i];
                cookie = prrc.getCookie();
            }
        }
    }

    // Use the cookie to configure our new request
    // to start from the saved position in the cookie.
    ctx.setRequestControls(new Control[] {
        new PagedResultsControl(500, cookie, Control.CRITICAL) });
} while (cookie != null);

// We are done, so close the Context object.
ctx.close();

We can also print all attributes of an object, given its full distinguished name, using the following code:

String dN = "CN=JohnDoe,OU=Users,DC=company,DC=com";
Attributes answer = ctx.getAttributes(dN);

for (NamingEnumeration<?> ae = answer.getAll(); ae.hasMore();) {
    Attribute attr = (Attribute);
    String attributeName = attr.getID();
    for (NamingEnumeration<?> e = attr.getAll(); e.hasMore();) {
        System.out.println(attributeName + "=" +;
    }
}
Monday, May 17, 2010

Search implementation strategies

In this blog post, I will talk briefly about two approaches in providing search functionality in a JEE application. I will talk about how content is offered to a core search engine like Lucene, which is responsible for indexing, storing and retrieving. Lucene offers a library to build search engines. It does not provide a way to retrieve or accept content from sources for indexing. Third party vendors that provide complete search engine products based on Lucene exist to fulfill this need. How the indexing works is outside the scope of this blog post. Interesting sites powered by Lucene can be found here.

The two approaches I'm going to discuss:

  • Web crawler/spider
  • Enterprise search

The web crawler is an automated bot that starts by indexing a web page supplied by a user or read from an internal list. Every hyperlink found in the page is also scheduled to be indexed. In this way, the web crawler hops from page to page, indexing the content of every page it visits. The web crawler conforms to a pull model, as it initiates the request for content to index.
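As a toy sketch of this pull model, the crawl can be seen as a queue-driven traversal of the link graph. This is purely an illustration: the in-memory map stands in for real HTTP fetching and link extraction, and no real crawler is this simple.

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Collections;
import java.util.Deque;
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class CrawlerSketch {
    // Pull-model sketch: starting from a seed page, schedule every
    // hyperlink found, and index each page exactly once.
    static Set<String> crawl(String seed, Map<String, List<String>> links) {
        Set<String> indexed = new LinkedHashSet<String>();
        Deque<String> queue = new ArrayDeque<String>();
        while (!queue.isEmpty()) {
            String page = queue.poll();
            if (!indexed.add(page)) {
                continue; // already indexed, skip
            }
            for (String link : links.getOrDefault(page, Collections.<String>emptyList())) {
                queue.add(link);
            }
        }
        return indexed;
    }

    public static void main(String[] args) {
        Map<String, List<String>> web = new HashMap<String, List<String>>();
        web.put("a.html", Arrays.asList("b.html", "c.html"));
        web.put("b.html", Arrays.asList("a.html")); // cycle, indexed only once
        web.put("d.html", Arrays.asList("a.html")); // never linked to, so never found
        System.out.println(crawl("a.html", web)); // [a.html, b.html, c.html]
    }
}
```

Note how d.html is never indexed: this is exactly the disadvantage listed below that pages not reachable by hyperlinks stay invisible to a crawler.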

Enterprise search, which conforms to a push model, is an API or service that waits for clients to provide content for indexing. A typical scenario is a CMS that connects to the search engine web service, passing the content as a parameter. The client initiates the indexing process in this case.

Advantages of web crawling:

  • No modification needed in existing application
  • Easy to implement search on multiple websites

Disadvantages of web crawling:

  • Pages not reachable by hyperlinks cannot be indexed
  • New content only visible in search engine after a crawler visit
  • Crawling impacts performance due to crawler traffic
  • Only public HTML-pages can be indexed by default

Advantages of enterprise search:

  • Finer control of search results (authorization, meta-data, keywords)
  • New content explicitly added/removed/updated by application
  • Support for all kinds of content (even non-public content or database content)
  • Minimal performance impact

Disadvantages of enterprise search:

  • Integration code needed in every application to facilitate search functionality

There is no single best search strategy. Choosing between the two approaches depends heavily on the context and requirements of the system.

Friday, May 14, 2010

DeMilitarized Zone (DMZ)

Most of the time, your web server or application server is protected by a firewall that limits external traffic to the server. To improve security even more, an additional firewall can be installed between the web server and the internal network that hosts the data being served. In most cases, this will be the database server.

The area between the two firewalls is called the DeMilitarized Zone or DMZ. When an intruder manages to compromise the web server, he still has to find a way to circumvent the inner firewall to actually get to the internal network or data.

Here is a PDF article that explains how to design a DMZ for your network. More information about DMZs can be found here at Wikipedia.

Saturday, May 8, 2010

JavaServer Faces Tutorial

An excellent tutorial about JavaServer Faces can be found at JSF Tutorial. I use this link regularly as a reference. When I search for information about how to do something in JSF, I often end up on this site. It's an excellent resource about everything related to software engineering.

Monday, May 3, 2010

Version control with GIT on Windows

Git is a BitKeeper-inspired version control system created by Linus Torvalds. It features fast, distributed, decentralized version control. Originally developed for Linux, the application is now also available on Windows under the name msysGit.

If you prefer an interface integrated with Windows Explorer, you can also download the Tortoise interface here: It still needs msysGit to work.

Quick reference of commands I regularly use:

Basic commands:
git init - place current directory under version control
git add . - stage all files in the current directory for commit
git commit - commit all staged files
git commit -a - commit all changes to tracked files (staging them automatically)
git log - view history of changes
git diff - show changes between the working tree and the index, or between commits

Clone commands:
git clone <path_to_existing_repos> <clone_name> - clone an existing repository
git pull - fetch and merge the latest changes from the remote repository
git push - push the latest changes to the remote repository
git pull <existing clone> [<branch>] - fetch and merge the latest changes from a clone
git push <existing clone> [<branch>] - push the latest changes to a clone

Branch commands:
git branch - list all branches
git branch <branch_name> - create a new branch in the current repository
git checkout <branch_name> - switch to the named branch
git merge <branch_name> - merge the named branch into the current branch

There is also a GIT plugin available for Eclipse:

Next, I will describe a real life scenario of a team working on the same project.

John starts a new project and creates a local repository from his working directory:

cd project
git init
git add .
git commit

Now he wants to share this project with his team members. To do that, John creates a central empty "bare" project that is going to hold all the project files:

cd <path_to_central_directory>
mkdir project
cd project
git init --bare

Once the central repository is ready, John pushes the content from his local working directory to the central repository:

git remote add origin <path_to_central_directory>/project
git push origin master

Now Karen can join this project by cloning the central repository to her local directory:

git clone <path_to_central_directory>/project

She can now edit some files and push the changes to the central repository:

git add file
git commit
git push [origin master]

John can fetch the changes from the central repository and merge them into his own repository with:

git fetch origin
git merge origin/master

Or do both in one command:

git pull origin master

Now, John wants to start a new version in a branch:

git branch new_version

The currently checked-out branch can be viewed using this command:

git branch

John can start editing in the new branch by checking it out:

git checkout new_version

When John is done, he checks out the master branch again and merges in the changes from the new branch with:

git merge new_version


Wednesday, April 28, 2010

Importing data with SQL*Loader

Oracle created a tool for loading large amounts of data into an Oracle database. The tool is called SQL*Loader and can be run by using the command sqlldr.

In this example, we load data from a Microsoft Excel sheet. Before we can do that, we have to save the sheet as CSV (Comma Separated Values). Next, we define the structure of the data in a CTL-file, which also contains configuration data for the import process.

An example CTL-file (import.ctl):

options (errors=100, discard='import.discards')
load data
infile 'import.csv'
badfile 'import.errors'
insert
into table import_table
fields terminated by ','
optionally enclosed by '"'
(col1, col2)
In the configuration above, we load the data in the table "import_table". The table has two columns: COL1 and COL2. The CSV-file has two values on every line, which are separated by a comma.

This configuration has two options defined:

  • errors=100
  • discard=import.discards

This means that the load is aborted after 100 errors, and that discarded lines are saved in the file import.discards.

Other configuration settings:

  • infile: the file to load
  • badfile: the file to save errors when they occur
  • insert: insert data in empty table. Other modes: append, replace, truncate

We can start the load process by executing the following command:

sqlldr <user>/<password>@<host> control=import.ctl log=import.log