
Friday, October 28, 2011

MyBatis and Oracle Database

While I was using MyBatis with the Oracle Database, I ran into two problems:

  1. Generating primary keys when inserting database rows
  2. Mapping Java boolean type to a database type
The next sections describe how I solved these problems.

Inserting new rows

Unlike other databases such as MySQL, the Oracle database does not provide functionality to generate primary keys automatically when inserting rows. Normally, I would create a sequence together with a trigger in the Oracle database. The sequence is a thread-safe number generator, and the trigger is custom code that is called on certain types of events. In this case the event is an "insert", on which the trigger sets the primary key of the new row.

With MyBatis, we still need the sequence; the trigger functionality is provided by MyBatis itself. Let's get started by creating an example class in Java. I left the "getters" and "setters" out, but we still need them in the actual code of course.

public class Client {
    private Integer id;
    private String name;

    // Setters and Getters for the attributes here.
}

Next, we have to prepare the database by creating a table that corresponds to the "Client" class, and a sequence.

CREATE TABLE client (
    id INTEGER primary key,
    name VARCHAR2(100)
);

CREATE SEQUENCE client_seq
    START WITH 1
    INCREMENT BY 1
    NOCACHE
    NOCYCLE;

MyBatis uses "mapper" interfaces (defined and provided by us) to map the class to actual SQL statements. Additional configuration can be provided either by XML or by annotations. When the configuration is simple, I prefer annotations; complex configurations are often only possible with XML. The following mapper interface provides the mapping for inserting the "Client" class. The interface is only used by MyBatis, so we don't have to provide an actual implementation of it.

public interface ClientMapper {
    @Insert("INSERT INTO client (id, name) VALUES (#{id}, #{name})")
    @SelectKey(
            keyProperty = "id",
            before = true,
            resultType = Integer.class,
            statement = { "SELECT client_seq.nextval AS id FROM dual" })
    public Integer insertClient(Client client);
}

The "@SelectKey" annotation tells MyBatis to get the primary key value from the sequence, and use it as primary key when executing the insert statement.

Mapping boolean types

The final example illustrates how we deal with boolean types when using MyBatis. We extend the "Client" class with a boolean attribute "admin".

public class Client {
    private Integer id;
    private String name;
    private boolean admin;

    // Setters and Getters for the attributes here.
}

Add a "VARCHAR2(1)" column that acts as a boolean attribute to the corresponding "client" database table. The column value "Y" corresponds to boolean true, and "N" to boolean false.

CREATE TABLE client (
    id INTEGER primary key,
    name VARCHAR2(100),
    admin VARCHAR2(1)
);

MyBatis doesn't know how the Java boolean is mapped to a database type, as there's no primitive database boolean type. We have to provide MyBatis with a converter, which implements the "TypeHandler" interface. This converter will then be used to convert a Java boolean to a database type, and vice versa.

package com.javaeenotes;

import java.sql.CallableStatement;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.ibatis.type.JdbcType;
import org.apache.ibatis.type.TypeHandler;

public class BooleanTypeHandler implements TypeHandler {

    private static final String YES = "Y";
    private static final String NO = "N";


    @Override
    public Boolean getResult(ResultSet resultSet, String name)
            throws SQLException {

        return valueOf(resultSet.getString(name));
    }


    @Override
    public Boolean getResult(CallableStatement statement, int position)
            throws SQLException {

        return valueOf(statement.getString(position));
    }


    @Override
    public void setParameter(
            PreparedStatement statement,
            int position,
            Boolean value,
            JdbcType jdbcType) throws SQLException {

        statement.setString(position, value.booleanValue() ? YES : NO);
    }


    private Boolean valueOf(String value) throws SQLException {

        if (YES.equalsIgnoreCase(value)) {
            return Boolean.TRUE;
        } else if (NO.equalsIgnoreCase(value)) {
            return Boolean.FALSE;
        } else {
            throw new SQLException("Unexpected value " + value
                    + " found where " + YES + " or " + NO
                    + " was expected.");
        }
    }
}

Now, configure MyBatis to use the type handler for booleans in the mapper interface where needed.


    @Insert("INSERT INTO client (id, name, admin) VALUES (#{id}, #{name}, #{admin,typeHandler=com.javaeenotes.BooleanTypeHandler})")
    @SelectKey(
            keyProperty = "id",
            before = true,
            resultType = Integer.class,
            statement = { "SELECT client_seq.nextval AS id FROM dual" })
    public Integer insertClient(Client client);

    @Select("SELECT * FROM client WHERE id = #{clientId}")
    @Results(
            value = {
                @Result(
                    property = "id",
                    column = "id"),

                @Result(
                    property = "name",
                    column = "name"),

                @Result(
                    property = "admin",
                    column = "admin",
                    typeHandler = BooleanTypeHandler.class)
             })
    public Client getClient(int clientId);

    @Update("UPDATE client SET name = #{name}, admin = #{admin,typeHandler=com.javaeenotes.BooleanTypeHandler} WHERE id = #{id}")
    public void updateClient(Client client);

Thursday, October 13, 2011

Adding Oracle JDBC driver to Maven 2 repository

I like my library JAR-files to be managed by Maven 2. It makes life much easier. Sadly, due to the binary license there is no public Maven 2 repository with the Oracle JDBC driver. So we have to add the JDBC driver manually to our local Maven 2 repository.

First, download the latest Oracle JDBC driver here: http://www.oracle.com/technetwork/indexes/downloads/index.html. Do a search on "JDBC" to find the correct download link.

If you're using Maven inside Eclipse like I am, the "mvn" command will not work from the command prompt, so we have to install Maven 2 on the system itself. You can download Maven 2 here: http://maven.apache.org/download.html. Simply unzip it, and make sure the "bin" directory is in your command path.

Now, we can install the JAR file with the following command:

mvn install:install-file -DgroupId=com.oracle -DartifactId=ojdbc6
  -Dversion=11.2.0.3 -Dpackaging=jar -Dfile=ojdbc6.jar -DgeneratePom=true
Be sure to check the parameters, like the version number.
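After the install, the JAR ends up in your local repository (usually ~/.m2/repository) under a path derived from the coordinates. As a rough sketch of how those coordinates map to a path (the RepoPath class below is my own illustration, not part of Maven):

```java
public class RepoPath {

    // Maven lays out the local repository as: groupId with dots
    // replaced by slashes / artifactId / version / artifactId-version.jar
    public static String relativePath(String groupId, String artifactId,
            String version) {
        return groupId.replace('.', '/') + "/" + artifactId + "/" + version
                + "/" + artifactId + "-" + version + ".jar";
    }

    public static void main(String[] args) {
        // Where the install-file command above places the driver.
        System.out.println(relativePath("com.oracle", "ojdbc6", "11.2.0.3"));
    }
}
```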

Finally, we can use the Oracle JDBC driver in our Maven 2 project by adding the following dependency to our "pom.xml" file:

<dependency>
    <groupId>com.oracle</groupId>
    <artifactId>ojdbc6</artifactId>
    <version>11.2.0.3</version>
</dependency>

Friday, January 21, 2011

Converting a database CLOB into a String

I've seen code that converts database CLOBs to Strings by using character streams and BufferedReaders to read the CLOB in a loop until it is fully read. A simpler way is to use the getSubString() method of the Clob object. Since you can obtain the length of the CLOB, you can read the whole CLOB into a String like this:

DataSource ds = ...
Connection c = null;
PreparedStatement ps = null;
ResultSet r = null;

try {
    c = ds.getConnection();
    ps = c.prepareStatement("SELECT clobColumn FROM someTable");
    r = ps.executeQuery();

    while (r.next()) {
        Clob clob = r.getClob(1);

        if (clob != null && clob.length() > 0) {
            String s = clob.getSubString(1, (int) clob.length());
            // Do something with the string.
        }
    }
} catch (SQLException e) {
    // Handle exception.
} finally {
    // Close Connection, PreparedStatement, and ResultSet.
}

Beware if the data in the CLOB is really large (larger than Integer.MAX_VALUE). In that case, you'll still need a streaming method.
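For the streaming case, a small helper that drains a Reader into a StringBuilder is enough. This is a generic sketch (the ClobUtil class is my own); with a real Clob you would pass clob.getCharacterStream() to it:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

public class ClobUtil {

    // Drain a character stream into a String in fixed-size chunks,
    // so we never depend on the CLOB length fitting in an int.
    public static String readFully(Reader reader) throws IOException {
        StringBuilder sb = new StringBuilder();
        char[] buffer = new char[8192];
        int read;
        while ((read = reader.read(buffer)) != -1) {
            sb.append(buffer, 0, read);
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        // StringReader stands in for clob.getCharacterStream() here.
        System.out.println(readFully(new StringReader("hello clob")));
    }
}
```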

Saturday, December 4, 2010

Calling PL/SQL code in Java

In this blog post, I will show you how to call Oracle PL/SQL code from Java. Before we continue with the Java code, let's create a simple Oracle package we can use to test our code.

Compile the following package specification and body in the Oracle database.

Package specification:

create or replace
package test_package as

procedure test_procedure(v varchar2);

function test_function(v varchar2) return varchar2;

end test_package;

Package body:

create or replace
package body test_package as

procedure test_procedure(v varchar2) is
begin
    null;
end;

function test_function(v varchar2) return varchar2 is
begin
    return v;
end;

end test_package;

The package contains a procedure and a function that returns a VARCHAR2 type. We can call the procedure with the following Java code:

DataSource ds = ...
Connection con = ds.getConnection();
CallableStatement cs =
        con.prepareCall("{call test_package.test_procedure(?)}");
cs.setString(1, "Calling test_procedure!");
cs.executeUpdate();
cs.close();

The function is slightly different, because we have to register the type the function returns.

DataSource ds = ...
Connection con = ds.getConnection();
CallableStatement cs =
        con.prepareCall("{? = call test_package.test_function(?)}");
cs.registerOutParameter(1, java.sql.Types.VARCHAR);
cs.setString(2, "Calling test_function!");
cs.execute();
String s = cs.getString(1);
cs.close();

The function can also be called in another way. Like this:

DataSource ds = ...
Connection con = ds.getConnection();
PreparedStatement ps = con.prepareStatement(
        "SELECT test_package.test_function(?) S FROM dual");
ps.setString(1, "Calling test_function!");
ResultSet r = ps.executeQuery();
String result = null;

if (r.next()) {
    result = r.getString("S");
}

r.close();
ps.close();
con.close();

This way is similar to executing a standard SQL-statement.

Saturday, November 13, 2010

Saving large XMLTypes in Oracle databases

If you try to save a large XMLType that is over 4k characters long, you'll get this error:

ORA-01461:
can bind a LONG value only for insert into a LONG column

The Oracle database has trouble binding large strings directly. One solution is to use a temporary CLOB to store the XML data, and bind that to the SQL statement.

Connection con = ...
String xml = "<test>test</test>";

oracle.sql.CLOB clob = CLOB.createTemporary(con, true,
        CLOB.DURATION_SESSION);
clob.open(CLOB.MODE_READWRITE);
clob.setString(1, xml);
clob.close();

PreparedStatement s = con.prepareStatement(
        "INSERT INTO table (xml) VALUES (XMLType(?))"
);

s.setObject(1, clob);
s.executeUpdate();

// Free the temporary CLOB only after the insert has been executed.
clob.freeTemporary();

Using this method, you can store large XML documents as XMLType in the Oracle database. But there is still a limitation: the XMLType cannot store text nodes or attribute values larger than 64k. Oracle suggests using an XMLType column with CLOB storage for really large XML documents. But then we cannot use the native XML-parsing functions, which defeats the purpose of the XMLType column...

In newer versions of Oracle XML DB, this limit has been lifted.


Tuesday, August 17, 2010

Getting Oracle SQL result set pages

When you want to display the result of a query that returns a large number of rows, it's common to divide the complete result in pages, and display them one page at a time. To offer the user a way of navigating through the pages, there are clickable page numbers or "next"/"previous" buttons at the bottom of the page. This technique is called result set paging.

To implement this efficiently, the paging should also be built into the query: it is not efficient to have the database return the complete result, only to discard the unwanted rows afterwards.

Let's say that you want to page the following query:

SELECT col1, col2
FROM example_table
WHERE col2 = 'some_condition'
ORDER BY col1 DESC

Before we can divide the result set, we need to number the rows of the result set. The pseudo column "rownum", provided by Oracle databases, numbers the rows of the result. The first row of the result is always 1.

Using this knowledge, we can retrieve the "page" we want using the WHERE clause with the corresponding start row and end row numbers. In the example below, we want to fetch rows from number 201 to 300.

SELECT *
FROM (SELECT r.*, ROWNUM AS row_number
      FROM (SELECT col1, col2
            FROM example_table
            WHERE col2 = 'some_condition'
            ORDER BY col1 DESC) r)
WHERE row_number >= 201 AND row_number <= 300;

The reason we wrap the original query in another SELECT is that the database assigns ROWNUM values before the ordering is applied, which would make our ordering useless for paging. By selecting ROWNUM in the outer query, it is assigned on the already correctly ordered result set.
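If you build such queries in Java, the wrapping is easy to generate. The sketch below (the PagingQuery class is my own invention) produces the nested ROWNUM query for a one-based page number; in production code you would bind the row numbers as parameters instead of concatenating them:

```java
public class PagingQuery {

    // Wrap a base query in the nested ROWNUM construct described above.
    // page is one-based; pageSize is the number of rows per page.
    public static String paged(String baseQuery, int page, int pageSize) {
        int startRow = (page - 1) * pageSize + 1;
        int endRow = page * pageSize;
        return "SELECT * FROM (SELECT r.*, ROWNUM AS row_number FROM ("
                + baseQuery
                + ") r) WHERE row_number >= " + startRow
                + " AND row_number <= " + endRow;
    }

    public static void main(String[] args) {
        // Page 3 with 100 rows per page selects rows 201 to 300.
        System.out.println(paged(
                "SELECT col1, col2 FROM example_table ORDER BY col1 DESC",
                3, 100));
    }
}
```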

This query can be optimized for newer Oracle databases by removing the top outer SELECT clause, while leaving the WHERE clause intact.

Friday, July 16, 2010

Extend Oracle tablespace files

Sometimes you need to enlarge a tablespace file because its size is not enough to handle transactions. This is the case when you get an error like:

ORA-01650: Unable to extend rollback segment RBS6
by %s in tablespace ROLLBACKSEGS

In this case, the rollback segment is not large enough to handle the database transaction. Now, you have to check if the autoextensible flag is set or if the datafile is too small.

First, find the correct database file of the rollback segment:

SELECT * FROM dba_rollback_segs;

Use the file ID from the result above to find the correct file on the file system:

SELECT * FROM dba_data_files;

Now enlarge the file:

ALTER DATABASE DATAFILE 'D:\ORACLE\ROLLBACKSEGS01.DBF'
RESIZE 1024m;

or set the autoextend to ON:

ALTER DATABASE DATAFILE 'C:\ORACLE\ROLLBACKSEGS01.DBF'
AUTOEXTEND ON NEXT 256m MAXSIZE 2048m;

Friday, May 21, 2010

MySQLs auto_increment in Oracle

MySQL has a reserved keyword "auto_increment" that can be used on the ID/primary key column of a new table. Whenever a new record is added to the table, this field is automatically set to a unique, incremented integer value.

In the Oracle database, this feature is missing. But we can simulate this feature using the SEQUENCE and TRIGGER functionality provided by Oracle.

A sequence is a sequential value stored in the database. It starts with an initial user supplied value, and can be incremented by an arbitrary value to a given maximum value. The statement to create a sequence for table ATABLE:

create sequence ATABLE_SEQ
minvalue 1
maxvalue 999999999999999999999999999
start with 1
increment by 1
cache 20;

Now, we want to hook up a trigger on an INSERT event of the table. Whenever a record is inserted into the table, the current value of the sequence is incremented and set in the primary key field of the inserted record.

create or replace trigger ATABLE_TRIG
before insert on ATABLE
for each row
begin
    select atable_seq.nextval into :new.id from dual;
end;

Friday, May 14, 2010

DeMilitarized Zone (DMZ)

Most of the time, your web server or application server is protected by a firewall that limits external traffic to the server. To improve security even more, an additional firewall can be installed between the web server and the internal network that holds the data being served. In most cases, this is the database server.

The area between the two firewalls is called the DeMilitarized Zone or DMZ. When an intruder manages to compromise the web server, he still has to find a way to circumvent the inner firewall to actually get to the internal network or data.

Here is a PDF article that explains how to design a DMZ for your network. More information about the DMZ concept can be found at Wikipedia.

Wednesday, April 28, 2010

Importing data with SQL*Loader

Oracle created a tool for loading large amounts of data into an Oracle database. The tool is called SQL*Loader and can be run by using the command sqlldr.

In this example, we load data from a Microsoft Excel sheet. Before we can do that, we have to save the sheet as CSV (Comma Separated Values). Next, we define the structure of the data in a CTL-file, which also contains configuration data for the import process.

An example CTL-file (import.ctl):

options (errors=100, discard='import.discards')
load data
infile 'import.csv'
badfile 'import.errors'
insert
into table import_table
fields terminated by ','
optionally enclosed by '"'
(
COL1,
COL2
)

In the configuration above, we load the data in the table "import_table". The table has two columns: COL1 and COL2. The CSV-file has two values on every line, which are separated by a comma.
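If you generate the CSV input from Java instead of Excel, the fields have to match the control file: terminated by commas, optionally enclosed by double quotes. A minimal sketch (the CsvField class is my own, and it assumes SQL*Loader's convention of doubling embedded quote characters):

```java
public class CsvField {

    // Quote a value only when needed, doubling embedded quotes, to match:
    // fields terminated by ',' optionally enclosed by '"'
    public static String quote(String value) {
        if (value.contains(",") || value.contains("\"")) {
            return "\"" + value.replace("\"", "\"\"") + "\"";
        }
        return value;
    }

    public static void main(String[] args) {
        // One line of the import.csv file with two columns.
        System.out.println(quote("plain") + "," + quote("a,b"));
    }
}
```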

This configuration has two options defined:

  • errors=100
  • discard=import.discards

This means that the load is aborted after 100 errors, and that discarded lines are saved in the file "import.discards".

Other configuration settings:

  • infile: the file to load
  • badfile: the file to save errors when they occur
  • insert: insert data in empty table. Other modes: append, replace, truncate


We can start the load process by executing the following command:

sqlldr <user>/<password>@<host>
control=import.ctl log=import.log


Friday, April 9, 2010

How to use a DataSource

In this post, we'll use the DataSource that is configured in the previous post.

Get the initial JNDI context object, which provides access to the JNDI interface of the application server.

Context ctext = new InitialContext();

Now, use the logical name that is configured in the application server to retrieve the datasource.

DataSource ds =
(DataSource)ctext.lookup("jdbc/<logical_name>");

Get a connection from the pool. The two parameters are optional; when they are omitted, the application server uses the default credentials configured for the pool.

Connection conn = ds.getConnection("<username>","<password>");

Use the connection.

PreparedStatement stmt = conn.prepareStatement(
        "UPDATE table SET int_col = ? WHERE varchar_col = ?");
stmt.setInt(1, 75);
stmt.setString(2, "condition");
stmt.executeUpdate();

Finally, return the connection to the pool.

conn.close();

Wednesday, April 7, 2010

Configuring an Oracle DataSource on Glassfish

A DataSource connection has important advantages over the use of DriverManager:

- The use of logical names in the application, instead of using hard coded database addresses (in property files)
- Connection pooling managed by the container
- Distributed transactions managed by the container

To configure an Oracle datasource on Glassfish, follow these steps:

1. Copy the JDBC driver of Oracle to the lib directory of your Glassfish installation directory.

2. Restart Glassfish.

3. Login to the admin page.

4. Browse to the Connection Pools page (Resources -> JDBC -> Connection Pools)

5. Create a new connection pool by clicking on New.

6. Give the pool a name (NOT prefixed with jdbc/), choose javax.sql.DataSource as the resource type, and Oracle as the database vendor.

7. Now click next. The driver class is automatically detected, so it's automatically filled in. Next, make sure you fill in the properties: user, password and URL.

Example:

url=jdbc:oracle:thin:@<host>:1521:<sid>
user=<user>
password=<password>

8. Save and ping to see if it works. If so, the pool is ready to be used.

9. Now browse to the JDBC Resources page (Resources -> JDBC -> JDBC Resources).

10. Create a new JDBC resource by clicking on New.

11. Fill in the JDBC resource name (prefixed with jdbc/), and select the newly created connection pool in Pool Name. The DataSource is now ready for use.

Wednesday, March 24, 2010

Daily remote sync Oracle refresh group

One way to integrate Oracle systems is to share data between databases. In this example, we will create a group of materialized views that is synced daily with a remote database.

In this scenario we have a couple of remote tables on an external Oracle database. The tables can be accessed using a database link we create.

create public database link <link_name>
connect to <schema>
using '<tns_service_name>';

The <link_name> variable is the name you want to use for the database link. The <schema> variable is the schema name of the remote database. The <tns_service_name> variable is the TNS-name of the remote database.

We fill in these variables with example data.

create public database link MY_DB_LINK
connect to REMOTE_SCHEMA
using 'remote_database';

Now we create two materialized views of the two remote tables using the database link we just created.


CREATE MATERIALIZED VIEW m_table1 AS
SELECT col1, col2 FROM table1@remote_database;
CREATE MATERIALIZED VIEW m_table2 AS
SELECT col1, col2 FROM table2@remote_database;

Finally, we create the refresh group using the PL/SQL code below. The materialized views are refreshed every day at 6:00 in the morning.


begin
    dbms_refresh.make(
        name      => 'mviews',
        list      => 'm_table1, m_table2',
        next_date => to_date('06:00:00', 'hh24:mi:ss'),
        interval  => 'sysdate+1'
    );
end;

To check that everything works, query the following internal Oracle views to see whether everything we created was registered successfully:

  • dba_mviews
  • dba_refresh
  • dba_refresh_children
  • dba_jobs

Sometimes the job just doesn't want to start. The database may be configured to not allow database jobs at all. Use the following query to check the number of jobs that are allowed to run on the database:

select t.value from v$parameter t
where name = 'job_queue_processes';

The number must be greater than 0; the standard value is 4. You can alter the value using:

alter system set job_queue_processes = 4;

Monday, March 22, 2010

Migrating Oracle database content

There are many times that I have to migrate data from one Oracle database to another.

Depending on the situation, I can choose between two different methods. The first method is to use SQL*Plus to copy data from one remote table to a local table.

copy from <schema>/<password>@<host> -
insert <new_table> -
using select * from <old_table>

Instead of INSERT, you can also use: APPEND, CREATE or REPLACE. For more information about the differences, please follow the link at the end of this post.

When multiple tables and/or triggers are involved, I use the second method, which uses the exp/imp tools provided by Oracle. First, an export file is created using exp. In this example, I migrate only two tables (table1 and table2). You can add more tables if you want.

exp <schema>/<password>@<host>  file=export.dmp log=export.log tables=(table1, table2) rows=yes indexes=no

Finally, we load the export file into the new database. All the new tables and triggers are created automatically. We use imp to do the import.

imp <schema>/<password>@<host> file=export.dmp full=yes

Make sure the exp/imp tools are the same version as the database, or it won't work.
