3 Replies Latest reply: Apr 12, 2011 11:41 AM by Chris Collison

Verifying Infinispan setup

Chris Collison Newbie

I've got Infinispan running in stand-alone mode as my Hibernate L2 region.factory_class (per the docs here: Link), have picked Atomikos (3.7.0) to handle transactions, and am using Tomcat 6.0.26 as my main app server. I've got entities marked as Cacheable and queries set up as cacheable as well. Infinispan starts up with the default config just fine and prints a fair amount to the logs during startup. I then start another app server instance (Spring tc Server) on another port (9080) and have it use the default Infinispan config file as well.

 

I'm using the following Maven dependency to pull in Infinispan:

            <dependency>
                <groupId>org.hibernate</groupId>
                <artifactId>hibernate-infinispan</artifactId>
                <version>3.6.2.Final</version>
            </dependency>

 

It doesn't look like the two instances discover one another.

Log entries from Spring tc server:

 

2011-04-08 17:01:48,485 INFO  org.jgroups.JChannel  - JGroups version: 2.11.0.GA
2011-04-08 17:01:49,415 WARN  org.jgroups.protocols.UDP  - send buffer of socket java.net.DatagramSocket@60437dcb was set to 640KB, but the OS only allocated 131.07KB. This might lead to performance p$
2011-04-08 17:01:49,415 WARN  org.jgroups.protocols.UDP  - receive buffer of socket java.net.DatagramSocket@60437dcb was set to 20MB, but the OS only allocated 131.07KB. This might lead to performance$
2011-04-08 17:01:49,415 WARN  org.jgroups.protocols.UDP  - send buffer of socket java.net.MulticastSocket@6ef7cbcc was set to 640KB, but the OS only allocated 131.07KB. This might lead to performance $
2011-04-08 17:01:49,415 WARN  org.jgroups.protocols.UDP  - receive buffer of socket java.net.MulticastSocket@6ef7cbcc was set to 25MB, but the OS only allocated 131.07KB. This might lead to performanc$
2011-04-08 17:01:50,477 INFO  org.infinispan.remoting.transport.jgroups.JGroupsTransport  - Received new cluster view: [marathon-18017|0] [marathon-18017]
2011-04-08 17:01:50,481 INFO  org.infinispan.remoting.transport.jgroups.JGroupsTransport  - Cache local address is marathon-18017, physical addresses are [fe80:0:0:0:221:70ff:fee8:e73e:51280]
2011-04-08 17:01:50,481 INFO  org.infinispan.factories.GlobalComponentRegistry  - Infinispan version: Infinispan 'Ursus' 4.2.1.CR1
2011-04-08 17:01:50,538 INFO  org.infinispan.factories.ComponentRegistry  - Infinispan version: Infinispan 'Ursus' 4.2.1.CR1

 

 

Log entries from Tomcat server:

2011-04-08 17:11:52,591 INFO  org.jgroups.JChannel  - JGroups version: 2.11.0.GA
2011-04-08 17:11:53,520 WARN  org.jgroups.protocols.UDP  - send buffer of socket java.net.DatagramSocket@4ce2db0 was set to 640KB, but the OS only allocated 131.07KB. This might lead to performance pr$
2011-04-08 17:11:53,520 WARN  org.jgroups.protocols.UDP  - receive buffer of socket java.net.DatagramSocket@4ce2db0 was set to 20MB, but the OS only allocated 131.07KB. This might lead to performance $
2011-04-08 17:11:53,520 WARN  org.jgroups.protocols.UDP  - send buffer of socket java.net.MulticastSocket@7c198046 was set to 640KB, but the OS only allocated 131.07KB. This might lead to performance $
2011-04-08 17:11:53,521 WARN  org.jgroups.protocols.UDP  - receive buffer of socket java.net.MulticastSocket@7c198046 was set to 25MB, but the OS only allocated 131.07KB. This might lead to performanc$
2011-04-08 17:11:54,579 INFO  org.infinispan.remoting.transport.jgroups.JGroupsTransport  - Received new cluster view: [marathon-29794|0] [marathon-29794]
2011-04-08 17:11:54,582 INFO  org.infinispan.remoting.transport.jgroups.JGroupsTransport  - Cache local address is marathon-29794, physical addresses are [fe80:0:0:0:221:70ff:fee8:e73e:34960]
2011-04-08 17:11:54,583 INFO  org.infinispan.factories.GlobalComponentRegistry  - Infinispan version: Infinispan 'Ursus' 4.2.1.CR1
2011-04-08 17:11:54,655 INFO  org.infinispan.factories.ComponentRegistry  - Infinispan version: Infinispan 'Ursus' 4.2.1.CR1

 

 

Both appear to be starting their own cluster view. The box hostname is "marathon" and there doesn't appear to be any interaction between the two instances. Both show plenty of individual Atomikos transaction activity, and I can view each instance's cache contents and verify that caching is working correctly based on front-end user interactions, but nothing is being updated between the two instances.

 

So my question is: shouldn't they be able to discover each other on the same physical machine, running on two different ports, with the default included Infinispan config file? If not, what steps are needed? Do I need to include my own Infinispan config file, or change the JGroups config?

 

Ultimately the application will be hosted in EC2, and I understand that S3_PING and JDBC_PING work for discovery there. I would like to verify that the default Infinispan config works with our setup first, though.

  • 1. Verifying Infinispan setup
    Chris Collison Newbie

    A few more days of tinkering and creating a custom Infinispan config file, but no progress. I looked at the network status with netstat, and it shows UDP activity on the jgroups.udp.mcast_port:

     

    Proto     Recv-Q      Send-Q      Local Address               Foreign Address             State

    udp       0           0           ff0e::8:8:8:45588           *:*

    udp       0           0           ff0e::8:8:8:45588           *:*

     

    Each instance is registering an identical UDP entry, which matches the default flush-udp.xml entry in the JGroups jar.

     

    The transport section of my infinispan-configs.xml:

    <transport transportClass="org.infinispan.remoting.transport.jgroups.JGroupsTransport"
               clusterName="dev-cluster" distributedSyncTimeout="50000">
       <!-- Note that the JGroups transport uses sensible defaults if no configuration property is defined. -->
       <properties>
          <!-- TODO: Change to udp.xml once streaming transfer requirement has been removed. -->
          <property name="configurationFile" value="flush-udp.xml"/>
       </properties>
       <!-- See the JGroupsTransport javadocs for more flags -->
    </transport>

     

    Is anything wrong with what I'm trying to do? Anyone care to point me in the right direction?

  • 2. Verifying Infinispan setup
    Sanne Grinovero Master

    Hi Chris,

    1. Try using IPv4 instead of IPv6 via this JVM parameter: -Djava.net.preferIPv4Stack=true
    2. Make sure JGroups is binding to the correct interface: -Djgroups.bind_addr=ExternalIPAddressHere (this only works if there's a property placeholder named jgroups.bind_addr in the JGroups configuration you're using; alternatively you can hardcode it in your configuration file)
    3. The latest versions are 3.6.3.Final for Hibernate, 4.2.1.FINAL for Infinispan, and 2.12.0.Final for JGroups. I believe JGroups especially should be updated if you want to use IPv6.
    4. Verify firewall settings? Try disabling all security measures first, then narrow it down.
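    As a sketch of points 1 and 2, you could pass both flags to Tomcat through a bin/setenv.sh fragment (the bind address below is purely a placeholder; use your box's external IP):

    ```shell
    # Hypothetical bin/setenv.sh fragment; 192.0.2.10 is an illustrative address.
    # Forces the JVM onto the IPv4 stack and pins the JGroups bind interface.
    CATALINA_OPTS="$CATALINA_OPTS -Djava.net.preferIPv4Stack=true"
    CATALINA_OPTS="$CATALINA_OPTS -Djgroups.bind_addr=192.0.2.10"
    export CATALINA_OPTS
    ```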

     

    If that doesn't help, please post your JGroups configuration. You could also try the JGroups demo; it's very light, so you can quickly figure out whether the network is configured properly.
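    For example, running the Draw demo from two terminals is a quick multicast check (jar name/path is illustrative; adjust to the JGroups version on your classpath):

    ```shell
    # Run this in two terminals; if multicast discovery works, each Draw
    # window's title bar should show a member count of 2.
    java -cp jgroups-2.11.0.GA.jar \
         -Djava.net.preferIPv4Stack=true \
         org.jgroups.demos.Draw -props flush-udp.xml
    ```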

     

    Finally, for best performance don't ignore those buffer-size warnings (they don't affect functionality, so they're not related to this problem).

  • 3. Verifying Infinispan setup
    Chris Collison Newbie

    1) Done, it changed the UDP traffic entry in netstat accordingly.

    2) Done, I just used the default flush-udp.xml for JGroups with a bind_addr added to it.

    3) Done, probably something that needed to be done anyway.

    4) I'm using Red Hat Core 13 x86_64 on my dev box with relatively basic out-of-the-box firewall settings. I went ahead and disabled iptables temporarily and tried again.

     

    Sure enough this showed up in the logs:

    2011-04-12 10:24:54,760 INFO  org.infinispan.remoting.transport.jgroups.JGroupsTransport  - Received new cluster view: [marathon-30523|1] [marathon-30523, marathon-56181]

    It looks like it uses port 45588 as the default mcast port (mcast_port="${jgroups.udp.mcast_port:45588}"). It seems to be sufficient to add this port to the firewall exceptions to let JGroups work correctly. Eventually this will run on EC2 instances, which will require some more config setup with JDBC_PING.
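    Rather than leaving iptables disabled, opening just that port should be enough. A sketch (the exact chain position and persistence command vary by distribution; run as root):

    ```shell
    # Allow JGroups multicast discovery on the default mcast port (45588),
    # then persist the rule so it survives a firewall restart.
    iptables -I INPUT -p udp --dport 45588 -j ACCEPT
    service iptables save
    ```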

     

    I think I'm good to go for now. Thanks for the help!