2 Replies Latest reply: Mar 22, 2011 8:55 PM by Jeff Constantin

RpcManagerImpl:113 unexpected error while replicating

Jeff Constantin Newbie

I am using Infinispan 4.2.1.CR4 with the Sun JVM 6 and Tomcat 6, and I built a simple web application as an Infinispan POC. The app lets me perform basic operations on a cache: put, get, and delete values, view the cache inventory, view cluster information, clear a cache, and create a new cache dynamically. I deploy the WAR, start one Tomcat instance, and everything appears to work as expected. I start a second Tomcat instance; it can see the cache, and I can retrieve values from the cache created on the first instance. So far so good. Then I try to add a cache dynamically ("cacheD"), and the exception below is thrown while replicating. If I ask either node for cluster information, "cacheD" exists and has inventory. The app has let me create a few caches dynamically before this exception is thrown.

 

I am very new to Infinispan (one week), but this smells like a race condition between defining a new cache and the background replication threads trying to distribute that cache's data to other nodes that have not defined it yet. Or I am simply not following the proper implementation pattern.
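To make the race I have in mind concrete, here is a tiny stdlib-only sketch (no Infinispan; the class and names are made up for illustration) of the check-then-act shape my code has, and the putIfAbsent shape that would avoid two concurrent creators:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class CheckThenAct {
    private static final ConcurrentMap<String, Object> caches =
            new ConcurrentHashMap<String, Object>();

    // Racy shape (what my code does): between containsKey() and put(),
    // another thread -- or another node -- can also observe "absent".
    static Object getRacy(String name) {
        if (!caches.containsKey(name)) {
            caches.put(name, new Object()); // a second writer silently overwrites
        }
        return caches.get(name);
    }

    // Atomic shape: putIfAbsent() guarantees exactly one winner per name,
    // and every caller ends up holding the same instance.
    static Object getAtomic(String name) {
        Object created = new Object();
        Object existing = caches.putIfAbsent(name, created);
        return existing != null ? existing : created;
    }

    public static void main(String[] args) {
        Object a = getAtomic("cacheD");
        Object b = getAtomic("cacheD");
        System.out.println(a == b); // prints true: both callers see the same instance
    }
}
```

Obviously this only models the in-JVM half of the problem; the exception suggests the cross-node analogue, where the remote node receives a replication command for a cache it has not defined yet.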

 

Any help is greatly appreciated.

I have included my code to create the cache dynamically and the resulting configuration values below.

exception from my log file:

2011-03-13 07:57:40,068 []   INFO DistributedMemoryCache:79 - INFINISPAN: cacheName [cacheD] exist...returning
2011-03-13 07:57:40,193 []  ERROR RpcManagerImpl:113 - unexpected error while replicating
org.infinispan.manager.NamedCacheNotFoundException: Cache: cacheD
        at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:123)
        at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommand(CommandAwareRpcDispatcher.java:159)
        at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:144)
        at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:578)
        at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:489)
        at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:365)
        at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:771)
        at org.jgroups.JChannel.up(JChannel.java:1465)
        at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:954)
        at org.jgroups.protocols.pbcast.FLUSH.up(FLUSH.java:478)
        at org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER.up(STREAMING_STATE_TRANSFER.java:265)
        at org.jgroups.protocols.FRAG2.up(FRAG2.java:190)
        at org.jgroups.protocols.FlowControl.up(FlowControl.java:419)
        at org.jgroups.protocols.FlowControl.up(FlowControl.java:419)
        at org.jgroups.protocols.pbcast.GMS.up(GMS.java:888)
        at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:234)
        at org.jgroups.protocols.UNICAST.up(UNICAST.java:310)
        at org.jgroups.protocols.pbcast.NAKACK.handleMessage(NAKACK.java:806)
        at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:671)
        at org.jgroups.protocols.BARRIER.up(BARRIER.java:120)
        at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:169)
        at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:269)
        at org.jgroups.protocols.MERGE2.up(MERGE2.java:210)
        at org.jgroups.protocols.Discovery.up(Discovery.java:292)
        at org.jgroups.protocols.PING.up(PING.java:67)
        at org.jgroups.protocols.TP.passMessageUp(TP.java:1093)
        at org.jgroups.protocols.TP.access$100(TP.java:56)
        at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1633)
        at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1615)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)
2011-03-13 07:58:44,832 []   INFO DistributedMemoryCache:79 - INFINISPAN: checking for cacheName [cacheD]
2011-03-13 07:58:44,832 []   INFO DistributedMemoryCache:79 - INFINISPAN: cacheName [cacheD] exist...returning

 

 

cache configuration file:

<?xml version="1.0" encoding="UTF-8"?>
<infinispan
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="urn:infinispan:config:4.2 http://www.infinispan.org/schemas/infinispan-config-4.2.xsd"
      xmlns="urn:infinispan:config:4.2">

   <global>
      <transport
         clusterName="edgeCluster"
         transportClass="org.infinispan.remoting.transport.jgroups.JGroupsTransport"/>
      <globalJmxStatistics enabled="true"/>
   </global>

   <default>
      <jmxStatistics enabled="true"/>
      <clustering mode="distribution">
         <l1 enabled="true" lifespan="60000"/>
         <hash numOwners="2" rehashRpcTimeout="120000"/>
         <async/>
      </clustering>
   </default>
</infinispan>

 

 

how I create cache dynamically:

      /*
       * see if the cache already exists
       */
      if( manager.getCacheNames( ) != null && manager.getCacheNames( ).contains( cacheName ) )
      {
         return manager.getCache( cacheName );
      }
      else
      {
         /*
          * define the new cache using the default cache's configuration
          */
         Configuration configuration = manager.getCache( ).getConfiguration( );
         manager.defineConfiguration( cacheName, configuration );

         /*
          * get the new cache
          */
         Cache cache = manager.getCache( cacheName );

         /*
          * add a LoggingListener
          */
         cache.addListener( new LoggingListener( ) );

         /*
          * dump cache configuration values so I can see the configuration
          */
         configurationValues( cache.getName( ), cache.getConfiguration( ) );

         /*
          * return the new cache
          */
         return cache;
      }

 

 

Configuration: [cacheD]
-------------------------------
getCacheModeString = DIST_ASYNC
getCacheLoaderManagerConfig = CacheLoaderManagerConfig{shared=false, passivation=false, preload='false', cacheLoaderConfigs.size()=0}
getCacheMode = DIST_ASYNC
getCacheStopTimeout = 30000
getClass = class org.infinispan.config.Configuration
getConcurrencyLevel = 32
getConsistentHashClass = org.infinispan.distribution.ch.DefaultConsistentHash
getCustomInterceptors = []
getDeadlockDetectionSpinDuration = 100
getEvictionMaxEntries = -1
getEvictionStrategy = NONE
getEvictionThreadPolicy = DEFAULT
getEvictionWakeUpInterval = 5000
getExpirationLifespan = -1
getHashFunctionClass = org.infinispan.util.hash.MurmurHash2
getIsolationLevel = READ_COMMITTED
getL1Lifespan = 60000
getLockAcquisitionTimeout = 10000
getName = null
getNumOwners = 2
getRehashRpcTimeout = 120000
getRehashWaitTime = 60000
getReplQueueClass = org.infinispan.remoting.ReplicationQueueImpl
getReplQueueInterval = 5000
getReplQueueMaxElements = 1000
getStateRetrievalInitialRetryWaitTime = 500
getStateRetrievalLogFlushTimeout = 60000
getStateRetrievalMaxNonProgressingLogWrites = 100
getStateRetrievalNumRetries = 5
getStateRetrievalRetryWaitTimeIncreaseFactor = 2
getStateRetrievalTimeout = 10000
getSyncReplTimeout = 15000
getTransactionManagerLookup = null