10 Replies Latest reply: Jul 11, 2012 5:51 AM by Michał Chmielarz RSS

Infinispan cluster

Michał Chmielarz Newbie

Hello Folks!

 

I wonder whether it is possible to set up an Infinispan cluster that mixes nodes running inside JBoss with nodes running Infinispan in standalone mode. I.e. I'd like a topology where the Infinispan "server" nodes run on JBoss and the "client" nodes run as plain Java SE applications, for example. Does that make sense? I'm a newbie to Infinispan, so this question may look silly, but please help me figure out whether it is possible at all.

 

thanks

  • 1. Re: Infinispan cluster
    Galder Zamarreño Master

    In theory that's possible, but it needs some work: to get it working properly you need to make both sides use the same JGroups stack, and there are differences in how each side is configured. The AS uses its own configuration format to hide away the JGroups configuration, so you'd need to figure out the equivalent of that configuration in JGroups XML format or programmatically. Doing this requires a deep understanding of JGroups and of how JBoss AS configures JGroups.
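    For illustration, the standalone side would load a plain JGroups stack file, and the protocol list (protocols, their order, and their settings) has to mirror what the AS configuration produces. A minimal UDP sketch for JGroups 3.0 — the protocol choices and values here are illustrative, not the AS defaults, so each one needs to be checked against the AS side:

    ```xml
    <!-- jgroups-udp.xml: illustrative JGroups 3.0 UDP stack for a standalone node.
         Every protocol and its attributes must match what the AS side actually runs. -->
    <config xmlns="urn:org:jgroups">
        <UDP mcast_addr="228.6.7.8" mcast_port="46655"/>
        <PING/>
        <MERGE2/>
        <FD_SOCK/>
        <FD_ALL/>
        <VERIFY_SUSPECT/>
        <BARRIER/>
        <pbcast.NAKACK use_mcast_xmit="true"/>
        <UNICAST2/>
        <pbcast.STABLE/>
        <pbcast.GMS/>
        <UFC/>
        <MFC/>
        <FRAG2/>
    </config>
    ```

    A mismatch in even one protocol (e.g. UNICAST on one side vs. UNICAST2 on the other) can make the two sides behave inconsistently even when they appear to form a cluster.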

  • 2. Re: Infinispan cluster
    Michał Chmielarz Newbie

    Hi Galder,

     

    Since posting, I have given both Infinispan instances — the standalone one and the one on JBoss — the same JGroups configuration. The logs show that both instances see each other in the cluster and exchange some messages, but the cache view they form isn't stable. I'm getting an IllegalStateException on both Infinispan nodes.
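    For context, the standalone node is bootstrapped roughly like this — a sketch against the Infinispan 5.1 programmatic API, where the JGroups file name and cache name are illustrative:

    ```java
    import org.infinispan.Cache;
    import org.infinispan.configuration.cache.CacheMode;
    import org.infinispan.configuration.cache.Configuration;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.configuration.global.GlobalConfiguration;
    import org.infinispan.configuration.global.GlobalConfigurationBuilder;
    import org.infinispan.manager.DefaultCacheManager;

    public class StandaloneNode {
        public static void main(String[] args) {
            // Point the transport at the same JGroups stack file the AS side uses
            // ("jgroups-udp.xml" is an illustrative name).
            GlobalConfiguration global = new GlobalConfigurationBuilder()
                    .transport().defaultTransport()
                    .clusterName("testy")
                    .addProperty("configurationFile", "jgroups-udp.xml")
                    .build();

            // Clustered cache; the cache mode must match what is configured on the AS side.
            Configuration cfg = new ConfigurationBuilder()
                    .clustering().cacheMode(CacheMode.REPL_SYNC)
                    .build();

            DefaultCacheManager manager = new DefaultCacheManager(global, cfg);
            Cache<String, String> cache = manager.getCache("Demo");
            cache.put("hello", "world");
        }
    }
    ```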

     

    For JBoss:

    13:37:09,053 ERROR [org.infinispan.cacheviews.CacheViewsManagerImpl] (CacheViewInstaller-1,ppl-poz-nb0074/testy) ISPN000172: Failed to prepare view CacheView{viewId=3, members=[PPL-POZ-NB0074-33316, PPL-POZ-NB0074-33316, PPL-POZ-NB0074-33316]} for cache  Demo, rolling back to view CacheView{viewId=2, members=[PPL-POZ-NB0074-33316, PPL-POZ-NB0074-33316]}: java.util.concurrent.ExecutionException: java.lang.IllegalStateException: Cannot prepare new view CacheView{viewId=3, members=[PPL-POZ-NB0074-33316, PPL-POZ-NB0074-33316, PPL-POZ-NB0074-33316]} on cache Demo, we are currently preparing view CacheView{viewId=3, members=[PPL-POZ-NB0074-33316, PPL-POZ-NB0074-33316, PPL-POZ-NB0074-33316]}

        at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:232) [rt.jar:1.6.0_30]

        at java.util.concurrent.FutureTask.get(FutureTask.java:91) [rt.jar:1.6.0_30]

        at org.infinispan.cacheviews.CacheViewsManagerImpl.clusterPrepareView(CacheViewsManagerImpl.java:319) [infinispan-core-5.1.2.FINAL.jar:5.1.2.FINAL]

        at org.infinispan.cacheviews.CacheViewsManagerImpl.clusterInstallView(CacheViewsManagerImpl.java:250) [infinispan-core-5.1.2.FINAL.jar:5.1.2.FINAL]

        at org.infinispan.cacheviews.CacheViewsManagerImpl$ViewInstallationTask.call(CacheViewsManagerImpl.java:876) [infinispan-core-5.1.2.FINAL.jar:5.1.2.FINAL]

        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) [rt.jar:1.6.0_30]

        at java.util.concurrent.FutureTask.run(FutureTask.java:138) [rt.jar:1.6.0_30]

        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_30]

        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_30]

        at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_30]

    Caused by: java.lang.IllegalStateException: Cannot prepare new view CacheView{viewId=3, members=[PPL-POZ-NB0074-33316, PPL-POZ-NB0074-33316, PPL-POZ-NB0074-33316]} on cache Demo, we are currently preparing view CacheView{viewId=3, members=[PPL-POZ-NB0074-33316, PPL-POZ-NB0074-33316, PPL-POZ-NB0074-33316]}

        at org.infinispan.cacheviews.CacheViewInfo.prepareView(CacheViewInfo.java:102) [infinispan-core-5.1.2.FINAL.jar:5.1.2.FINAL]

        at org.infinispan.cacheviews.CacheViewsManagerImpl.handlePrepareView(CacheViewsManagerImpl.java:481) [infinispan-core-5.1.2.FINAL.jar:5.1.2.FINAL]

        at org.infinispan.commands.control.CacheViewControlCommand.perform(CacheViewControlCommand.java:126) [infinispan-core-5.1.2.FINAL.jar:5.1.2.FINAL]

        at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:95) [infinispan-core-5.1.2.FINAL.jar:5.1.2.FINAL]

        at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommand(CommandAwareRpcDispatcher.java:221) [infinispan-core-5.1.2.FINAL.jar:5.1.2.FINAL]

        at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:201) [infinispan-core-5.1.2.FINAL.jar:5.1.2.FINAL]

        at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:456) [jgroups-3.0.6.Final.jar:3.0.6.Final]

        at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:363) [jgroups-3.0.6.Final.jar:3.0.6.Final]

        at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:238) [jgroups-3.0.6.Final.jar:3.0.6.Final]

        at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:543) [jgroups-3.0.6.Final.jar:3.0.6.Final]

        at org.jgroups.JChannel.up(JChannel.java:716) [jgroups-3.0.6.Final.jar:3.0.6.Final]

        at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1026) [jgroups-3.0.6.Final.jar:3.0.6.Final]

        at org.jgroups.protocols.FRAG2.up(FRAG2.java:181) [jgroups-3.0.6.Final.jar:3.0.6.Final]

        at org.jgroups.protocols.FlowControl.up(FlowControl.java:418) [jgroups-3.0.6.Final.jar:3.0.6.Final]

        at org.jgroups.protocols.FlowControl.up(FlowControl.java:400) [jgroups-3.0.6.Final.jar:3.0.6.Final]

        at org.jgroups.protocols.pbcast.GMS.up(GMS.java:889) [jgroups-3.0.6.Final.jar:3.0.6.Final]

        at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:244) [jgroups-3.0.6.Final.jar:3.0.6.Final]

        at org.jgroups.protocols.UNICAST.up(UNICAST.java:332) [jgroups-3.0.6.Final.jar:3.0.6.Final]

        at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:602) [jgroups-3.0.6.Final.jar:3.0.6.Final]

        at org.jgroups.protocols.BARRIER.up(BARRIER.java:102) [jgroups-3.0.6.Final.jar:3.0.6.Final]

        at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:177) [jgroups-3.0.6.Final.jar:3.0.6.Final]

        at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288) [jgroups-3.0.6.Final.jar:3.0.6.Final]

        at org.jgroups.protocols.MERGE2.up(MERGE2.java:205) [jgroups-3.0.6.Final.jar:3.0.6.Final]

        at org.jgroups.protocols.Discovery.up(Discovery.java:359) [jgroups-3.0.6.Final.jar:3.0.6.Final]

        at org.jgroups.protocols.TP.passMessageUp(TP.java:1180) [jgroups-3.0.6.Final.jar:3.0.6.Final]

        at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1728) [jgroups-3.0.6.Final.jar:3.0.6.Final]

        at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1710) [jgroups-3.0.6.Final.jar:3.0.6.Final]

        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_30]

        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_30]

        at java.lang.Thread.run(Thread.java:619) [rt.jar:1.6.0_30]

     

    And for standalone version:

    [13:37:08,442] [WARN] (CacheViewControlCommand.java:141) - ISPN000071: Caught exception when handling command CacheViewControlCommand{cache=Demo, type=PREPARE_VIEW, sender=ppl-poz-nb0074/testy, newViewId=2, newMembers=[PPL-POZ-NB0074-33316, PPL-POZ-NB0074-33316], oldViewId=1, oldMembers=[PPL-POZ-NB0074-33316]}

    java.lang.IllegalStateException: Cannot prepare new view CacheView{viewId=2, members=[PPL-POZ-NB0074-33316, PPL-POZ-NB0074-33316]} on cache Demo, we are currently preparing view CacheView{viewId=2, members=[PPL-POZ-NB0074-33316, PPL-POZ-NB0074-33316]}

        at org.infinispan.cacheviews.CacheViewInfo.prepareView(CacheViewInfo.java:102)

        at org.infinispan.cacheviews.CacheViewsManagerImpl.handlePrepareView(CacheViewsManagerImpl.java:481)

        at org.infinispan.commands.control.CacheViewControlCommand.perform(CacheViewControlCommand.java:126)

        at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:95)

        at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommand(CommandAwareRpcDispatcher.java:221)

        at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:201)

        at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:456)

        at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:363)

        at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:238)

        at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:543)

        at org.jgroups.JChannel.up(JChannel.java:716)

        at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1026)

        at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)

        at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)

        at org.jgroups.protocols.FlowControl.up(FlowControl.java:400)

        at org.jgroups.protocols.pbcast.GMS.up(GMS.java:889)

        at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:244)

        at org.jgroups.protocols.UNICAST.up(UNICAST.java:332)

        at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:602)

        at org.jgroups.protocols.BARRIER.up(BARRIER.java:102)

        at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:177)

        at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288)

        at org.jgroups.protocols.MERGE2.up(MERGE2.java:205)

        at org.jgroups.protocols.Discovery.up(Discovery.java:359)

        at org.jgroups.protocols.TP.passMessageUp(TP.java:1180)

        at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1728)

        at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1710)

        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)

        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

        at java.lang.Thread.run(Thread.java:619)

     

     

    I assume this happens when an error occurs while one node sends its current state to the other, but I have no idea how to fix it. Do you have any suspicion as to what the cause is, or how to avoid it?

  • 3. Re: Infinispan cluster
    Michał Chmielarz Newbie

    Hi Galder, once again.

     

    I've connected the JBoss server and the standalone cache node through the JGroups configuration, as you suggested. I can see both nodes exchanging data, but I still get the same errors I posted earlier. It seems one of the sides tries to add itself to the cache view a second time. It is probably the standalone node — the logs show that an attempt to add that node again triggers the exceptions. Do you have an idea what could be wrong? Could you look at the logs below?

     


    For Java SE cache node:


    [12:15:16,877] [TRACE] [Timer-4,PPL-POZ-NB0074-17416] (UNICAST.java:519) - PPL-POZ-NB0074-17416 --> XMIT(ppl-poz-nb0074/testy: #3)

    [12:15:16,877] [TRACE] [Timer-4,PPL-POZ-NB0074-17416] (TP.java:1076) - sending msg to ppl-poz-nb0074/testy, src=PPL-POZ-NB0074-17416, headers are RequestCorrelator: id=200, type=REQ, id=2, rsp_expected=true, UNICAST: DATA, seqno=3, UDP: [channel_name=testy]

    [12:15:16,894] [TRACE] [OOB-1,PPL-POZ-NB0074-17416] (TP.java:1160) - received [dst: PPL-POZ-NB0074-17416, src: ppl-poz-nb0074/testy (3 headers), size=6 bytes, flags=OOB|DONT_BUNDLE], headers are RequestCorrelator: id=200, type=RSP, id=2, rsp_expected=false, UNICAST2: DATA, seqno=7, conn_id=1, UDP: [channel_name=testy]

    [12:15:16,894] [TRACE] [OOB-1,PPL-POZ-NB0074-17416] (FlowControl.java:455) - ppl-poz-nb0074/testy used 6 credits, 499604 remaining

    [12:15:16,903] [TRACE] [OOB-1,PPL-POZ-NB0074-17416] (TP.java:1160) - received [dst: PPL-POZ-NB0074-17416, src: ppl-poz-nb0074/testy (3 headers), size=218 bytes, flags=OOB|DONT_BUNDLE], headers are RequestCorrelator: id=200, type=REQ, id=5, rsp_expected=true, UNICAST2: DATA, seqno=8, conn_id=1, UDP: [channel_name=testy]

    [12:15:16,904] [TRACE] [OOB-1,PPL-POZ-NB0074-17416] (FlowControl.java:455) - ppl-poz-nb0074/testy used 218 credits, 499386 remaining

    [12:15:16,904] [TRACE] [OOB-1,PPL-POZ-NB0074-17416] (RequestCorrelator.java:451) - calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 5

    [12:15:16,905] [TRACE] [OOB-1,PPL-POZ-NB0074-17416] (CommandAwareRpcDispatcher.java:220) - Attempting to execute command: CacheViewControlCommand{cache=DDS_EE, type=PREPARE_VIEW, sender=ppl-poz-nb0074/testy, newViewId=3, newMembers=[ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416, PPL-POZ-NB0074-17416], oldViewId=2, oldMembers=[ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416]} [sender=ppl-poz-nb0074/testy]

    [12:15:16,907] [TRACE] [OOB-1,PPL-POZ-NB0074-17416] (CacheViewsManagerImpl.java:469) - DDS_EE: Preparing cache view CacheView{viewId=3, members=[ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416, PPL-POZ-NB0074-17416]}, committed view is CacheView{viewId=2, members=[ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416]}

    [12:15:16,907] [TRACE] [OOB-1,PPL-POZ-NB0074-17416] (CacheViewInfo.java:99) - DDS_EE: Preparing view CacheView{viewId=3, members=[ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416, PPL-POZ-NB0074-17416]}

    [12:15:16,907] [TRACE] [OOB-1,PPL-POZ-NB0074-17416] (BaseStateTransferManagerImpl.java:319) - Received new cache view: DDS_EE CacheView{viewId=3, members=[ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416, PPL-POZ-NB0074-17416]}

    [12:15:16,908] [DEBUG] [OOB-1,PPL-POZ-NB0074-17416] (AbstractWheelConsistentHash.java:134) - Using 1 virtualNodes to initialize consistent hash wheel

    [12:15:16,909] [TRACE] [OOB-2,PPL-POZ-NB0074-17416] (TP.java:1160) - received [dst: PPL-POZ-NB0074-17416, src: ppl-poz-nb0074/testy (3 headers), size=218 bytes, flags=OOB|DONT_BUNDLE], headers are RequestCorrelator: id=200, type=REQ, id=6, rsp_expected=true, UNICAST2: DATA, seqno=9, conn_id=1, UDP: [channel_name=testy]

    [12:15:16,910] [TRACE] [OOB-2,PPL-POZ-NB0074-17416] (FlowControl.java:455) - ppl-poz-nb0074/testy used 218 credits, 499168 remaining

    [12:15:16,910] [TRACE] [OOB-2,PPL-POZ-NB0074-17416] (RequestCorrelator.java:451) - calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 6

    [12:15:16,911] [TRACE] [OOB-2,PPL-POZ-NB0074-17416] (CommandAwareRpcDispatcher.java:220) - Attempting to execute command: CacheViewControlCommand{cache=DDS_EE, type=PREPARE_VIEW, sender=ppl-poz-nb0074/testy, newViewId=3, newMembers=[ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416, PPL-POZ-NB0074-17416], oldViewId=2, oldMembers=[ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416]} [sender=ppl-poz-nb0074/testy]

    [12:15:16,911] [TRACE] [OOB-2,PPL-POZ-NB0074-17416] (CacheViewsManagerImpl.java:469) - DDS_EE: Preparing cache view CacheView{viewId=3, members=[ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416, PPL-POZ-NB0074-17416]}, committed view is CacheView{viewId=2, members=[ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416]}

    [12:15:16,911] [TRACE] [OOB-2,PPL-POZ-NB0074-17416] (CacheViewInfo.java:99) - DDS_EE: Preparing view CacheView{viewId=3, members=[ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416, PPL-POZ-NB0074-17416]}

    [12:15:16,912] [WARN] [OOB-2,PPL-POZ-NB0074-17416] (CacheViewControlCommand.java:141) - ISPN000071: Caught exception when handling command CacheViewControlCommand{cache=DDS_EE, type=PREPARE_VIEW, sender=ppl-poz-nb0074/testy, newViewId=3, newMembers=[ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416, PPL-POZ-NB0074-17416], oldViewId=2, oldMembers=[ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416]}

    java.lang.IllegalStateException: Cannot prepare new view CacheView{viewId=3, members=[ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416, PPL-POZ-NB0074-17416]} on cache DDS_EE, we are currently preparing view CacheView{viewId=3, members=[ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416, PPL-POZ-NB0074-17416]}

        at org.infinispan.cacheviews.CacheViewInfo.prepareView(CacheViewInfo.java:102)

        at org.infinispan.cacheviews.CacheViewsManagerImpl.handlePrepareView(CacheViewsManagerImpl.java:481)

        at org.infinispan.commands.control.CacheViewControlCommand.perform(CacheViewControlCommand.java:126)

        at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:95)

        at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommand(CommandAwareRpcDispatcher.java:221)

        at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:201)

        at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:456)

        at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:363)

        at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:238)

        at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:543)

        at org.jgroups.JChannel.up(JChannel.java:716)

        at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1026)

        at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)

        at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)

        at org.jgroups.protocols.FlowControl.up(FlowControl.java:400)

        at org.jgroups.protocols.pbcast.GMS.up(GMS.java:889)

        at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:244)

        at org.jgroups.protocols.UNICAST.up(UNICAST.java:332)

        at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:602)

        at org.jgroups.protocols.BARRIER.up(BARRIER.java:102)

        at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:177)

        at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288)

        at org.jgroups.protocols.MERGE2.up(MERGE2.java:205)

        at org.jgroups.protocols.Discovery.up(Discovery.java:359)

        at org.jgroups.protocols.TP.passMessageUp(TP.java:1180)

        at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1728)

        at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1710)

        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)

        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

        at java.lang.Thread.run(Thread.java:619)

    [12:15:16,910] [TRACE] [OOB-1,PPL-POZ-NB0074-17416] (AbstractWheelConsistentHash.java:135) - Positions are: {1101562664=ppl-poz-nb0074/testy, 1771768555=PPL-POZ-NB0074-17416}

    [12:15:16,925] [TRACE] [OOB-1,PPL-POZ-NB0074-17416] (AbstractWheelConsistentHash.java:149) - Consistent hash initialized: DefaultConsistentHash {1101562664: ppl-poz-nb0074/testy, 1771768555: PPL-POZ-NB0074-17416}

    [12:15:16,925] [TRACE] [OOB-2,PPL-POZ-NB0074-17416] (RequestCorrelator.java:506) - sending rsp for 6 to ppl-poz-nb0074/testy

    [12:15:16,925] [DEBUG] [OOB-1,PPL-POZ-NB0074-17416] (ReplicatedStateTransferTask.java:79) - Commencing state transfer 3 on node: PPL-POZ-NB0074-17416. Before start, data container had 4 entries

    [12:15:16,925] [TRACE] [OOB-2,PPL-POZ-NB0074-17416] (FlowControl.java:338) - bypassing flow control because of synchronous response Thread[OOB-2,PPL-POZ-NB0074-17416,5,Thread Pools]

    [12:15:16,926] [DEBUG] [OOB-1,PPL-POZ-NB0074-17416] (StateTransferLockImpl.java:225) - Blocking new write commands for cache view 3

    [12:15:16,926] [TRACE] [OOB-2,PPL-POZ-NB0074-17416] (UNICAST.java:415) - PPL-POZ-NB0074-17416 --> DATA(ppl-poz-nb0074/testy: #7, conn_id=0)

    [12:15:16,926] [TRACE] [OOB-1,PPL-POZ-NB0074-17416] (StateTransferLockImpl.java:244) - New write commands blocked

    [12:15:16,926] [TRACE] [OOB-2,PPL-POZ-NB0074-17416] (TP.java:1076) - sending msg to ppl-poz-nb0074/testy, src=PPL-POZ-NB0074-17416, headers are RequestCorrelator: id=200, type=RSP, id=6, rsp_expected=false, UNICAST: DATA, seqno=7, UDP: [channel_name=testy]

    [12:15:16,927] [TRACE] [OOB-1,PPL-POZ-NB0074-17416] (ReplicatedStateTransferTask.java:88) - No joiners in view 3, skipping replication

    [12:15:16,928] [TRACE] [OOB-1,PPL-POZ-NB0074-17416] (RequestCorrelator.java:506) - sending rsp for 5 to ppl-poz-nb0074/testy

     


    For JBoss cache node:


    12:15:16,878 FINER [org.jgroups.protocols.UDP] (OOB-19,null) received [dst: ppl-poz-nb0074/testy, src: PPL-POZ-NB0074-17416 (3 headers), size=56 bytes, flags=OOB|DONT_BUNDLE|RSVP], headers are RequestCorrelator: id=200, type=REQ, id=2, rsp_expected=true, UNICAST: DATA, seqno=3, UDP: [channel_name=testy]

    12:15:16,880 FINER [org.jgroups.protocols.UFC] (OOB-19,null) PPL-POZ-NB0074-17416 used 56 credits, 1999872 remaining

    12:15:16,880 FINER [org.jgroups.blocks.RequestCorrelator] (OOB-19,null) calling (org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher) with request 2

    12:15:16,882 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (OOB-19,null) Attempting to execute command: CacheViewControlCommand{cache=DDS_EE, type=REQUEST_JOIN, sender=PPL-POZ-NB0074-17416, newViewId=0, newMembers=null, oldViewId=0, oldMembers=null} [sender=PPL-POZ-NB0074-17416]

    12:15:16,883 DEBUG [org.infinispan.cacheviews.CacheViewsManagerImpl] (OOB-19,null) DDS_EE: Node PPL-POZ-NB0074-17416 is joining

    12:15:16,884 TRACE [org.infinispan.cacheviews.PendingCacheViewChanges] (OOB-19,null) DDS_EE: Node PPL-POZ-NB0074-17416 is joining

    12:15:16,884 TRACE [org.infinispan.cacheviews.CacheViewsManagerImpl] (OOB-19,null) Waking up cache view installer thread

    12:15:16,885 TRACE [org.infinispan.cacheviews.CacheViewsManagerImpl] (CacheViewTrigger,ppl-poz-nb0074/testy) Woke up, shouldRecoverViews=false

    12:15:16,885 FINER [org.jgroups.blocks.RequestCorrelator] (OOB-19,null) sending rsp for 2 to PPL-POZ-NB0074-17416

    12:15:16,886 TRACE [org.infinispan.cacheviews.PendingCacheViewChanges] (CacheViewTrigger,ppl-poz-nb0074/testy) Previous members are [ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416], joiners are [PPL-POZ-NB0074-17416], leavers are [], recovered after merge = false

    12:15:16,886 FINER [org.jgroups.protocols.UFC] (OOB-19,null) bypassing flow control because of synchronous response Thread[OOB-19,null,5,Thread Pools]

    12:15:16,887 TRACE [org.infinispan.cacheviews.PendingCacheViewChanges] (CacheViewTrigger,ppl-poz-nb0074/testy) DDS_EE: created new view CacheView{viewId=3, members=[ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416, PPL-POZ-NB0074-17416]}

    12:15:16,888 FINER [org.jgroups.protocols.UNICAST2] (OOB-19,null) ppl-poz-nb0074/testy --> DATA(PPL-POZ-NB0074-17416: #7, conn_id=1)

    12:15:16,889 FINER [org.jgroups.protocols.UDP] (OOB-19,null) sending msg to PPL-POZ-NB0074-17416, src=ppl-poz-nb0074/testy, headers are RequestCorrelator: id=200, type=RSP, id=2, rsp_expected=false, UNICAST2: DATA, seqno=7, conn_id=1, UDP: [channel_name=testy]

    12:15:16,889 DEBUG [org.infinispan.cacheviews.CacheViewsManagerImpl] (CacheViewInstaller-1,ppl-poz-nb0074/testy) Installing new view CacheView{viewId=3, members=[ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416, PPL-POZ-NB0074-17416]} for cache DDS_EE

    12:15:16,892 TRACE [org.infinispan.cacheviews.CacheViewsManagerImpl] (CacheViewInstaller-1,ppl-poz-nb0074/testy) DDS_EE: Preparing view 3 on members [ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416, PPL-POZ-NB0074-17416]

    12:15:16,895 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (transport-thread-6) dests=[ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416, PPL-POZ-NB0074-17416], command=CacheViewControlCommand{cache=DDS_EE, type=PREPARE_VIEW, sender=ppl-poz-nb0074/testy, newViewId=3, newMembers=[ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416, PPL-POZ-NB0074-17416], oldViewId=2, oldMembers=[ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416]}, mode=SYNCHRONOUS, timeout=1000

    12:15:16,897 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (transport-thread-6) Replication task sending CacheViewControlCommand{cache=DDS_EE, type=PREPARE_VIEW, sender=ppl-poz-nb0074/testy, newViewId=3, newMembers=[ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416, PPL-POZ-NB0074-17416], oldViewId=2, oldMembers=[ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416]} to addresses [PPL-POZ-NB0074-17416, PPL-POZ-NB0074-17416]

    12:15:16,899 FINER [org.jgroups.blocks.Request] (transport-thread-6) sending request (id=5)

    12:15:16,900 FINER [org.jgroups.protocols.UNICAST2] (transport-thread-6) ppl-poz-nb0074/testy --> DATA(PPL-POZ-NB0074-17416: #8, conn_id=1)

    12:15:16,901 FINER [org.jgroups.protocols.UDP] (transport-thread-6) sending msg to PPL-POZ-NB0074-17416, src=ppl-poz-nb0074/testy, headers are RequestCorrelator: id=200, type=REQ, id=5, rsp_expected=true, UNICAST2: DATA, seqno=8, conn_id=1, UDP: [channel_name=testy]

    12:15:16,902 FINER [org.jgroups.blocks.Request] (transport-thread-6) sending request (id=6)

    12:15:16,903 FINER [org.jgroups.protocols.UNICAST2] (transport-thread-6) ppl-poz-nb0074/testy --> DATA(PPL-POZ-NB0074-17416: #9, conn_id=1)

    12:15:16,906 FINER [org.jgroups.protocols.UDP] (transport-thread-6) sending msg to PPL-POZ-NB0074-17416, src=ppl-poz-nb0074/testy, headers are RequestCorrelator: id=200, type=REQ, id=6, rsp_expected=true, UNICAST2: DATA, seqno=9, conn_id=1, UDP: [channel_name=testy]

    12:15:16,908 TRACE [org.infinispan.cacheviews.CacheViewsManagerImpl] (transport-thread-7) DDS_EE: Preparing cache view CacheView{viewId=3, members=[ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416, PPL-POZ-NB0074-17416]}, committed view is CacheView{viewId=2, members=[ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416]}

    12:15:16,911 TRACE [org.infinispan.cacheviews.CacheViewInfo] (transport-thread-7) DDS_EE: Preparing view CacheView{viewId=3, members=[ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416, PPL-POZ-NB0074-17416]}

    12:15:16,912 TRACE [org.infinispan.statetransfer.BaseStateTransferManagerImpl] (transport-thread-7) Received new cache view: DDS_EE CacheView{viewId=3, members=[ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416, PPL-POZ-NB0074-17416]}

    12:15:16,914 DEBUG [org.infinispan.distribution.ch.DefaultConsistentHash] (transport-thread-7) Using 1 virtualNodes to initialize consistent hash wheel

    12:15:16,916 TRACE [org.infinispan.distribution.ch.DefaultConsistentHash] (transport-thread-7) Positions are: {1101562664=ppl-poz-nb0074/testy, 1771768555=PPL-POZ-NB0074-17416}

    12:15:16,917 TRACE [org.infinispan.distribution.ch.DefaultConsistentHash] (transport-thread-7) Consistent hash initialized: DefaultConsistentHash {1101562664: ppl-poz-nb0074/testy, 1771768555: PPL-POZ-NB0074-17416}

    12:15:16,919 DEBUG [org.infinispan.statetransfer.ReplicatedStateTransferTask] (transport-thread-7) Commencing state transfer 3 on node: ppl-poz-nb0074/testy. Before start, data container had 4 entries

    12:15:16,921 DEBUG [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-7) Blocking new write commands for cache view 3

    12:15:16,922 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (transport-thread-7) New write commands blocked

    12:15:16,923 TRACE [org.infinispan.statetransfer.ReplicatedStateTransferTask] (transport-thread-7) No joiners in view 3, skipping replication

    12:15:16,927 FINER [org.jgroups.protocols.UDP] (OOB-20,null) received [dst: ppl-poz-nb0074/testy, src: PPL-POZ-NB0074-17416 (3 headers), size=2360 bytes, flags=OOB|DONT_BUNDLE], headers are RequestCorrelator: id=200, type=RSP, id=6, rsp_expected=false, UNICAST: DATA, seqno=7, UDP: [channel_name=testy]

    12:15:16,929 FINER [org.jgroups.protocols.UFC] (OOB-20,null) PPL-POZ-NB0074-17416 used 2360 credits, 1997512 remaining

    12:15:16,929 FINER [org.jgroups.protocols.UDP] (OOB-19,null) received [dst: ppl-poz-nb0074/testy, src: PPL-POZ-NB0074-17416 (3 headers), size=6 bytes, flags=OOB|DONT_BUNDLE], headers are RequestCorrelator: id=200, type=RSP, id=5, rsp_expected=false, UNICAST: DATA, seqno=8, UDP: [channel_name=testy]

    12:15:16,932 FINER [org.jgroups.protocols.UFC] (OOB-19,null) PPL-POZ-NB0074-17416 used 6 credits, 1997506 remaining

    12:15:16,933 FINER [org.jgroups.blocks.Request] (OOB-19,null) received response for request 5, sender=PPL-POZ-NB0074-17416, val=SuccessfulResponse{responseValue=null}

    12:15:16,956 FINER [org.jgroups.blocks.Request] (OOB-20,null) received response for request 6, sender=PPL-POZ-NB0074-17416, val=ExceptionResponse

    12:15:16,957 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (transport-thread-6) Responses: [sender=PPL-POZ-NB0074-17416, retval=ExceptionResponse, received=true, suspected=false]

     

    12:15:16,959 ERROR [org.infinispan.cacheviews.CacheViewsManagerImpl] (CacheViewInstaller-1,ppl-poz-nb0074/testy) ISPN000172: Failed to prepare view CacheView{viewId=3, members=[ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416, PPL-POZ-NB0074-17416]} for cache  DDS_EE, rolling back to view CacheView{viewId=2, members=[ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416]}: java.util.concurrent.ExecutionException: java.lang.IllegalStateException: Cannot prepare new view CacheView{viewId=3, members=[ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416, PPL-POZ-NB0074-17416]} on cache DDS_EE, we are currently preparing view CacheView{viewId=3, members=[ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416, PPL-POZ-NB0074-17416]}

        at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:232) [rt.jar:1.6.0_30]

        at java.util.concurrent.FutureTask.get(FutureTask.java:91) [rt.jar:1.6.0_30]

        at org.infinispan.cacheviews.CacheViewsManagerImpl.clusterPrepareView(CacheViewsManagerImpl.java:319) [infinispan-core-5.1.2.FINAL.jar:5.1.2.FINAL]

        at org.infinispan.cacheviews.CacheViewsManagerImpl.clusterInstallView(CacheViewsManagerImpl.java:250) [infinispan-core-5.1.2.FINAL.jar:5.1.2.FINAL]

        at org.infinispan.cacheviews.CacheViewsManagerImpl$ViewInstallationTask.call(CacheViewsManagerImpl.java:876) [infinispan-core-5.1.2.FINAL.jar:5.1.2.FINAL]
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) [rt.jar:1.6.0_30]
        at java.util.concurrent.FutureTask.run(FutureTask.java:138) [rt.jar:1.6.0_30]
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_30]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_30]
        at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_30]
    Caused by: java.lang.IllegalStateException: Cannot prepare new view CacheView{viewId=3, members=[ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416, PPL-POZ-NB0074-17416]} on cache DDS_EE, we are currently preparing view CacheView{viewId=3, members=[ppl-poz-nb0074/testy, PPL-POZ-NB0074-17416, PPL-POZ-NB0074-17416]}
        at org.infinispan.cacheviews.CacheViewInfo.prepareView(CacheViewInfo.java:102) [infinispan-core-5.1.2.FINAL.jar:5.1.2.FINAL]
        at org.infinispan.cacheviews.CacheViewsManagerImpl.handlePrepareView(CacheViewsManagerImpl.java:481) [infinispan-core-5.1.2.FINAL.jar:5.1.2.FINAL]
        at org.infinispan.commands.control.CacheViewControlCommand.perform(CacheViewControlCommand.java:126) [infinispan-core-5.1.2.FINAL.jar:5.1.2.FINAL]
        at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:95) [infinispan-core-5.1.2.FINAL.jar:5.1.2.FINAL]
        at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommand(CommandAwareRpcDispatcher.java:221) [infinispan-core-5.1.2.FINAL.jar:5.1.2.FINAL]
        at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:201) [infinispan-core-5.1.2.FINAL.jar:5.1.2.FINAL]
        at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:456) [jgroups-3.0.6.Final.jar:3.0.6.Final]
        at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:363) [jgroups-3.0.6.Final.jar:3.0.6.Final]
        at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:238) [jgroups-3.0.6.Final.jar:3.0.6.Final]
        at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:543) [jgroups-3.0.6.Final.jar:3.0.6.Final]
        at org.jgroups.JChannel.up(JChannel.java:716) [jgroups-3.0.6.Final.jar:3.0.6.Final]
        at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1026) [jgroups-3.0.6.Final.jar:3.0.6.Final]
        at org.jgroups.protocols.FRAG2.up(FRAG2.java:181) [jgroups-3.0.6.Final.jar:3.0.6.Final]
        at org.jgroups.protocols.FlowControl.up(FlowControl.java:418) [jgroups-3.0.6.Final.jar:3.0.6.Final]
        at org.jgroups.protocols.FlowControl.up(FlowControl.java:400) [jgroups-3.0.6.Final.jar:3.0.6.Final]
        at org.jgroups.protocols.pbcast.GMS.up(GMS.java:889) [jgroups-3.0.6.Final.jar:3.0.6.Final]
        at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:244) [jgroups-3.0.6.Final.jar:3.0.6.Final]
        at org.jgroups.protocols.UNICAST.up(UNICAST.java:332) [jgroups-3.0.6.Final.jar:3.0.6.Final]
        at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:602) [jgroups-3.0.6.Final.jar:3.0.6.Final]
        at org.jgroups.protocols.BARRIER.up(BARRIER.java:102) [jgroups-3.0.6.Final.jar:3.0.6.Final]
        at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:177) [jgroups-3.0.6.Final.jar:3.0.6.Final]
        at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288) [jgroups-3.0.6.Final.jar:3.0.6.Final]
        at org.jgroups.protocols.MERGE2.up(MERGE2.java:205) [jgroups-3.0.6.Final.jar:3.0.6.Final]
        at org.jgroups.protocols.Discovery.up(Discovery.java:359) [jgroups-3.0.6.Final.jar:3.0.6.Final]
        at org.jgroups.protocols.TP.passMessageUp(TP.java:1180) [jgroups-3.0.6.Final.jar:3.0.6.Final]
        at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1728) [jgroups-3.0.6.Final.jar:3.0.6.Final]
        at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1710) [jgroups-3.0.6.Final.jar:3.0.6.Final]
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_30]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_30]
        at java.lang.Thread.run(Thread.java:619) [rt.jar:1.6.0_30]

    12:15:16,964 FINER [org.jgroups.protocols.UDP] (OOB-20,null) received [dst: ppl-poz-nb0074/testy, src: PPL-POZ-NB0074-17416 (3 headers), size=6 bytes, flags=OOB|DONT_BUNDLE], headers are RequestCorrelator: id=200, type=RSP, id=2, rsp_expected=false, UNICAST: DATA, seqno=4, UDP: [channel_name=testy]
    12:15:16,990 FINER [org.jgroups.protocols.UDP] (OOB-19,null) received [dst: ppl-poz-nb0074/testy, src: PPL-POZ-NB0074-17416 (3 headers), size=4 bytes, flags=OOB|DONT_BUNDLE], headers are RequestCorrelator: id=200, type=RSP, id=3, rsp_expected=false, UNICAST: DATA, seqno=5, UDP: [channel_name=testy]
    12:15:16,990 FINER [org.jgroups.protocols.UFC] (OOB-20,null) PPL-POZ-NB0074-17416 used 6 credits, 1997500 remaining
    12:15:16,994 FINER [org.jgroups.protocols.UFC] (OOB-19,null) PPL-POZ-NB0074-17416 used 4 credits, 1997502 remaining
    12:15:17,001 FINER [org.jgroups.protocols.UDP] (OOB-19,null) received [dst: ppl-poz-nb0074/testy, src: PPL-POZ-NB0074-17416 (3 headers), size=6 bytes, flags=OOB|DONT_BUNDLE], headers are RequestCorrelator: id=200, type=RSP, id=4, rsp_expected=false, UNICAST: DATA, seqno=6, UDP: [channel_name=testy]
    12:15:17,002 FINER [org.jgroups.protocols.UFC] (OOB-19,null) PPL-POZ-NB0074-17416 used 6 credits, 1997490 remaining

    Michał

  • 4. Re: Infinispan cluster
    Dmitry Udalov Newbie

    I have similar problems, but in my case, apart from those exceptions, the cluster cannot be formed at all. The result also depends on the startup order. If node A starts first, I see tons of exceptions on both nodes:

       java.lang.IllegalStateException: Cannot prepare new view

       org.infinispan.transaction.tm.DummyTransaction (DummyTransaction.java:310) - ISPN000112: exception while committing

     

    If node B is started first, things are pretty silent. In that case node “A” reports

          no physical address for 44274f1c-dfcf-ec92-fde3-ad7d03ff0ae8, dropping message

    while another node reports

          dropped message 2 from A (sender not in table [B])

     

    The address reported by the first node doesn’t look related to the second node; I guess it’s some internal UUID (AdditionalDataUUID, PayloadUUID, TopologyUUID). It’s hard to tell exactly which, since none of these classes reports its type.

     

    The address reported by node B makes good sense, but I’m afraid JGroups cannot resolve that host “A”, so it drops the message without explaining why. I assume there could be some missing permissions or services that make it impossible for JGroups to build a cluster, but how do I find out what exactly? The bundled McastReceiverTest/McastSenderTest tests didn’t reveal any problem.

     

    I figured out that one of the requirements is having PING enabled, which by default is not the case on Windows 7. But what are the other requirements? It’s really a nightmare to build a cluster. So what does “sender not in table” mean? Why is there no “physical address for …”? What needs to be done to make it happy? Anybody?

  • 5. Re: Infinispan cluster
    Dan Berindei Apprentice

    @Michal, it appears there are two issues here:

    1. Your DDS-ES cache on JBoss AS sent two CacheViewControlCommand(REQUEST_JOIN) commands to the coordinator. I'm not sure why; please attach the full AS log so I can investigate.

    2. Because of the way we handle joins, receiving a join request from a node that's already a member of the cache view results in a new cache view with a duplicate member. The exception you're seeing means that the same node received the CacheViewControlCommand(PREPARE_VIEW, 3) twice. I've opened ISPN-2096 for this issue.

     

    @Dmitry: It could be that your JGroups configuration is not exactly the same on both nodes. Please open a new forum thread and post your configurations and the full logs.

    BTW, having PING enabled means having the PING protocol in your JGroups stack configuration. It doesn't have anything to do with the ping command that may or may not exist in your OS.
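    For reference, a UDP stack with the PING discovery protocol looks roughly like the sketch below. This is an illustrative fragment only — the multicast address, timeouts, and other attribute values are placeholders, not a recommended configuration — but the protocol order mirrors the one visible in the stack trace above (UDP transport, PING/Discovery, MERGE2, failure detection, NAKACK, UNICAST, STABLE, GMS, flow control, FRAG2):

    ```xml
    <!-- jgroups.xml (sketch; attribute values are placeholders) -->
    <config xmlns="urn:org:jgroups">
        <UDP mcast_addr="228.6.7.8" mcast_port="46655"/>
        <PING timeout="3000" num_initial_members="2"/> <!-- the PING discovery protocol -->
        <MERGE2/>
        <FD_SOCK/>
        <FD_ALL/>
        <BARRIER/>
        <pbcast.NAKACK use_mcast_xmit="true"/>
        <UNICAST/>
        <pbcast.STABLE/>
        <pbcast.GMS join_timeout="5000"/>
        <UFC/>
        <MFC/>
        <FRAG2/>
    </config>
    ```

    Both sides of the cluster must load an identical stack like this; any protocol that appears on one side but not the other (or with a different variant) can break membership.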

  • 6. Re: Infinispan cluster
    Michał Chmielarz Newbie

    Hi Dan,

     

    I'm attaching both log files - for server and standalone client.

     

    If I put some data on the standalone node, I can see that the server receives it and stores it in the JDBC store. So the main problem is the doubled REQUEST_JOIN command. I saw it in the log but have no idea why it happens.

     

    Thanks for your help,

    Michał

  • 7. Re: Infinispan cluster
    Michał Chmielarz Newbie

    Hi Dimitry,

     

    I don't have a problem with setting up the Infinispan cluster itself (one that spans a single Java SE node and a single JBoss instance). I can see in the logs that JGroups sets up its own internal cluster view and that Infinispan creates a cache view containing both nodes. The problem is exactly what Dan points to in point 2 of his post: a doubled REQUEST_JOIN command sent from one node.

     

    Cheers,

    Michał

  • 8. Re: Infinispan cluster
    Dmitry Udalov Newbie

    I don't have problems forming a cluster when the instances of my Java application run on the same box. The problem happens only when the nodes are on different boxes. Since I use the same configuration files, I guess there is something fishy about the boxes' configurations. However, they passed the McastReceiverTest/McastSenderTest tests, and those tests worked fine both with and without -Djava.net.preferIPv4Stack=true.

     

    I opened another discussion https://community.jboss.org/thread/200765 for that.

  • 9. Re: Infinispan cluster
    Dan Berindei Apprentice

    Michal, sorry for the long delay, I missed your reply.

     

    A colleague helped me understand what the problem was: one node was configured to use UNICAST, and the other was configured to use UNICAST2. Because of this, each node received and processed the other's messages, but never saw that the other had received its own messages, so it kept resending them.

     

    se_node.log:[15:50:01,726] [TRACE] [main] (TP.java:1076) - sending msg to ppl-poz-nb0074/testy, src=PPL-POZ-NB0074-3324, headers are GMS: GmsHeader[JOIN_REQ]: mbr=PPL-POZ-NB0074-3324, UNICAST: DATA, seqno=1, first, UDP: [channel_name=testy]

    server.log:15:50:01,844 FINER [org.jgroups.protocols.UDP] (Incoming-1,null) sending msg to ppl-poz-nb0074/testy, src=ppl-poz-nb0074/testy, headers are GMS: GmsHeader[VIEW_ACK]: view=null, UNICAST2: DATA, seqno=1, first, UDP: [channel_name=testy]
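    These two log lines show the mismatch directly: the standalone node's JOIN_REQ carries a UNICAST header, while the server's reply carries a UNICAST2 header. The fix is to declare the same reliable-unicast protocol in both stacks, e.g. (a fragment only, with attributes omitted):

    ```xml
    <!-- In BOTH JGroups stacks: pick one reliable-unicast protocol and use it everywhere -->
    <pbcast.NAKACK/>
    <UNICAST/> <!-- or UNICAST2 on both sides, but never a mix -->
    <pbcast.STABLE/>
    ```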

  • 10. Re: Infinispan cluster
    Michał Chmielarz Newbie

    Hi Dan!

     

    Thank you for the response!

     

    I've additionally noticed that the failure detection protocols must be the same on both sides. I resolved the issue by using the same JGroups protocol stack on both sides. So, besides the unicast and FD protocols, the other protocols probably have to be identical for both types of nodes as well.
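    To make that concrete: one way to guarantee identical stacks is to keep a single JGroups XML file and point the transport of every node at it — the standalone application and the AS deployment alike. A hedged sketch for Infinispan 5.1 (the file name jgroups-shared.xml is made up for illustration; the cluster name matches the channel_name=testy seen in the logs):

    ```xml
    <infinispan>
        <global>
            <transport clusterName="testy">
                <properties>
                    <!-- both node types load the exact same stack definition -->
                    <property name="configurationFile" value="jgroups-shared.xml"/>
                </properties>
            </transport>
        </global>
    </infinispan>
    ```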