0 Replies · Latest reply: Mar 1, 2011 12:07 PM by Paul Jackson

HSql cache loader seems to be committing after each put()

Paul Jackson Newbie

I have been doing some performance testing of various approaches to a persistent map. I tested HSQLDB 1.8 (no JBoss Cache yet) and found it extremely fast (1,000,000 records written and read in under 5 seconds) until I configured it for durable transactions (via statement.execute("SET WRITE_DELAY FALSE")), at which point performance became abysmal (100,000 records took 7 minutes).

An approach I have used with other implementations is to batch multiple records into a single transaction, but there is no way to do that with HSQLDB 1.8. I thought JBoss Cache might be a good way to add this functionality: since it supports transactions, I might be able to layer it over the JDBC cache loader and commit a number of puts as one batch. I coded this up and could see it building a jbosshsql.script file while it ran, so I thought it might be working. Alas, performance was on par with the straight HSQLDB implementation with no write delay.
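For context on why turning off WRITE_DELAY hurts so much: with per-record durability every commit forces a disk sync, while batching amortizes one sync over many records. Here is a self-contained sketch (plain java.nio, not HSQLDB; the record format and batch sizes are arbitrary) that counts the sync points under each policy:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class FsyncBatchDemo {

    // Write `records` small records, forcing them to disk every `batchSize`
    // writes. Returns the number of syncs (i.e. durable commit points).
    static long write(Path path, int records, int batchSize) throws IOException {
        long syncs = 0;
        try (FileChannel ch = FileChannel.open(path,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                StandardOpenOption.TRUNCATE_EXISTING)) {
            for (int i = 0; i < records; i++) {
                ch.write(ByteBuffer.wrap(("record-" + i + "\n").getBytes()));
                if ((i + 1) % batchSize == 0) {
                    ch.force(true);   // one fsync per durable commit point
                    syncs++;
                }
            }
        }
        return syncs;
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("fsync", ".dat");
        // Per-record durability: one sync per record
        System.out.println(write(p, 1000, 1));    // 1000 syncs
        // Batched durability: one sync per 100 records
        System.out.println(write(p, 1000, 100));  // 10 syncs
        Files.delete(p);
    }
}
```

Each sync is a round trip to the disk, so cutting 1,000 syncs down to 10 is roughly the speedup I was hoping to get from batching.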

 

Here's the main bit of my setup code:

    // Local cache with a transaction manager and an HSQLDB-backed JDBC cache loader
    CacheFactory<K, V> factory = new DefaultCacheFactory<K, V>();
    cache = factory.createCache(false);

    Configuration config = cache.getConfiguration();
    config.setClusterName(name);
    config.setCacheMode(Configuration.CacheMode.LOCAL);
    config.setTransactionManagerLookupClass("org.jboss.cache.transaction.GenericTransactionManagerLookup");

    JDBCCacheLoaderConfig jdbcCacheLoaderConfig = new JDBCCacheLoaderConfig();
    jdbcCacheLoaderConfig.setDriverClass("org.hsqldb.jdbcDriver");
    jdbcCacheLoaderConfig.setJdbcURL("jdbc:hsqldb:file:" + path.getAbsolutePath());
    jdbcCacheLoaderConfig.setJdbcUser("SA");
    jdbcCacheLoaderConfig.setJdbcPassword("");
    jdbcCacheLoaderConfig.setConnectionFactoryClass("org.jboss.cache.loader.NonManagedConnectionFactory");
    jdbcCacheLoaderConfig.setNodeType("VARBINARY");

    CacheLoaderConfig cacheLoaderConfig = new CacheLoaderConfig();
    cacheLoaderConfig.addIndividualCacheLoaderConfig(jdbcCacheLoaderConfig);
    config.setCacheLoaderConfig(cacheLoaderConfig);

    cache.create();
    cache.start();

    rootNode = cache.getRoot();
    transactionManager = cache.getConfiguration().getRuntimeConfig().getTransactionManager();
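The put loop I am batching looks roughly like this (a sketch, not my exact code; batchSize, the "/records" Fqn, and the key/value helpers are placeholders):

```java
// Sketch: batch many puts into one JTA transaction so the JDBC cache
// loader can (in theory) flush them to HSQLDB in a single commit.
transactionManager.begin();
try {
    Node<K, V> node = rootNode.addChild(Fqn.fromString("/records"));
    for (int i = 0; i < batchSize; i++) {
        node.put(key(i), value(i));   // queued in the transaction
    }
    transactionManager.commit();      // expecting one DB transaction here
} catch (Exception e) {
    transactionManager.rollback();
    throw e;
}
```

My expectation was that nothing would hit the database until the commit() call, so the cost of the durable write would be paid once per batch rather than once per put().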

 

My first thought was that I was being stupid: although JBoss Cache was queueing up the records, at commit time it was perhaps still writing them to the database one at a time. But when I look at the script file it was producing, it does appear to be batching them into consolidated INSERTs:

 

CREATE SCHEMA PUBLIC AUTHORIZATION DBA
CREATE MEMORY TABLE JBOSSCACHE(FQN VARCHAR(255) NOT NULL PRIMARY KEY,NODE VARBINARY,PARENT VARCHAR(255))
CREATE MEMORY TABLE JBOSSCACHE_D(I CHAR)
CREATE USER SA PASSWORD ""
GRANT DBA TO SA
SET WRITE_DELAY 10
SET SCHEMA PUBLIC
INSERT INTO JBOSSCACHE VALUES('/','aced0005770700200c98bf010974000d5f746573745f69645f373939..... // This goes on and on.

 

Any thoughts?

 

Thanks,

-Paul