51 Replies — Latest reply on Jul 18, 2013 12:23 PM by kasik
      • 15. Re: Infinispan on EC2
        galder.zamarreno

        Shadar, check out this link, where this exception is explained: http://shlomoswidler.com/2009/08/amazon-s3-gotcha-using-virtual-host.html

         

        There are some workarounds you might be able to use.
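
        For reference, here is a minimal S3_PING sketch (the bucket name is hypothetical). The gotcha described in the link comes from bucket names that break virtual-host-style HTTPS URLs, so keeping the name lowercase and free of dots sidesteps it:

        ```xml
        <!-- Sketch only: a bucket name in lowercase and without dots
             avoids the virtual-host SSL certificate mismatch described
             in the linked article. Credentials elided. -->
        <S3_PING location="my-infinispan-bucket"
                 access_key="..."
                 secret_access_key="..." />
        ```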

        • 16. Re: Infinispan on EC2
          raulraja

          Having issues running Infinispan on EC2 as well. I'm trying to get Infinispan to work on Elastic Beanstalk.

          I've tried both JDBC_PING and S3_PING to get JGroups talking to all the instances, but despite seeing the records and S3 files for the cluster, the instances just don't communicate with each other. I've also ensured it's not a firewall or security group issue, as the ports are open.

          When I enable full debug logging for Infinispan, JGroups, and related components, this is what I see:

           

          Thread-2 06/22 19:22:01 DEBUG org.infinispan.marshall.jboss.JBossMarshaller - Using JBoss Marshalling

          Thread-2 06/22 19:22:01 INFO org.infinispan.remoting.transport.jgroups.JGroupsTransport - ISPN00078: Starting JGroups Channel

          Thread-2 06/22 19:22:01 INFO org.jgroups.JChannel - JGroups version: 2.12.0.Final

          Thread-2 06/22 19:22:01 DEBUG org.jgroups.conf.ClassConfigurator - Using jg-magic-map.xml as magic number file and jg-protocol-ids.xml for protocol IDs

          Thread-2 06/22 19:22:02 DEBUG org.jgroups.stack.Configurator - set property TCP.bind_addr to default value /10.122.26.239

          Thread-2 06/22 19:22:02 DEBUG org.jgroups.stack.Configurator - set property TCP.diagnostics_addr to default value /224.0.75.75

          Thread-2 06/22 19:22:02 DEBUG org.jgroups.stack.Configurator - set property FD_SOCK.bind_addr to default value /10.122.26.239

          Thread-2 06/22 19:22:03 INFO org.infinispan.remoting.transport.jgroups.JGroupsTransport - ISPN00079: Cache local address is ip-10-122-26-239-10078, physical addresses are [10.122.26.239:7800]

          Thread-2 06/22 19:22:03 DEBUG org.infinispan.remoting.transport.jgroups.JGroupsTransport - Waiting on view being accepted

           

          And it waits there forever...

           

          I looked at the implementation of that class and can see that the message is displayed because no members have joined the cluster.

          Any help or ideas would be greatly appreciated.

           

          Thanks!

          • 17. Re: Infinispan on EC2
            sannegrinovero
            I've also ensured it's not a firewall or security group issue, as the ports are open.

            Did you double-check by opening another connection on the configured ports? Personally, I often forget that there are several firewalls:

            1. Amazon's security settings between machines
            2. The operating system's own firewall (iptables on Linux; try disabling it to debug connectivity issues, then re-enable it)

             

            Also, you have several network interfaces: make sure JGroups is binding to the right one. By default it binds to the first non-loopback interface, but this may not be the one you mean to use.
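
            For example, the interface can be pinned explicitly in the JGroups transport config instead of relying on the default (a sketch; the address shown is illustrative):

            ```xml
            <!-- Pin JGroups to a specific interface rather than the first
                 non-loopback one; the default here can still be overridden
                 at startup with -Djgroups.tcp.address=... -->
            <TCP bind_addr="${jgroups.tcp.address:10.122.26.239}"
                 bind_port="7800" />
            ```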

            • 18. Re: Infinispan on EC2
              raulraja

              I have mapped elastic IPs to those instances to ensure the ports were open from anywhere.

              telnet 50.19.246.61 7800

              Trying 50.19.246.61...

              Connected to ec2-50-19-246-61.compute-1.amazonaws.com.

              Escape character is '^]'.

              Connection closed by foreign host.

               

              It seems like it's not a firewall issue. I'm using JDBC_PING now, with the same results I obtained with S3_PING.

              Thread-2 06/22 21:29:21 INFO org.infinispan.remoting.transport.jgroups.JGroupsTransport - ISPN00079: Cache local address is ip-10-87-35-77-43753, physical addresses are [10.87.35.77:7800]

              Thread-2 06/22 21:29:21 DEBUG org.infinispan.remoting.transport.jgroups.JGroupsTransport - Waiting on view being accepted

              Timer-2,infinispan-cluster,ip-10-87-35-77-43753 06/22 21:30:21 DEBUG org.jgroups.protocols.JDBC_PING - Removed 62fec09b-80b8-fafd-a575-4783d28ddf1f for clustername infinispan-cluster from database.

              Timer-2,infinispan-cluster,ip-10-87-35-77-43753 06/22 21:30:21 DEBUG org.jgroups.protocols.JDBC_PING - Registered 62fec09b-80b8-fafd-a575-4783d28ddf1f for clustername infinispan-cluster into database.

               

              I can see that communication with the database is working, yet the instances are not able to see each other.

              It seems from the database entries that the addresses being used to publish the instances to the cluster are the private EC2 hostnames:

              f8153851-c2b9-9f6a-9cd5-d27ca78ad5aainfinispan-cluster œÕÒ|§ŠÕªø 8Q¹Ÿj  ip-10-88-230-187-31132  

              Xæ» x

               

              Is there a way to tell JDBC_PING to use the elastic IPs, or the actual IPs, rather than the hostnames when inserting records in the DB?

              The same applies to the S3_PING method.
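
              If it helps, the JGroups TCP transport has an `external_addr` attribute intended for NAT setups, which (if I understand it correctly) makes a node advertise that address to the rest of the cluster instead of its bind address. A sketch, assuming it also applies to the discovery records; the addresses shown are illustrative:

              ```xml
              <!-- Sketch: bind to the private interface but advertise a
                   different (e.g. elastic) IP to the other members. -->
              <TCP bind_addr="10.87.35.77"
                   external_addr="50.19.246.61"
                   bind_port="7800" />
              ```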

               

              Here are my configs in case it helps. I couldn't find a way to attach files to this post, sorry, and thanks for any help!

               

              This is my jgroups config:

               

              <config>

                        <TCP bind_port="7800" />

                        <JDBC_PING connection_url="jdbc:mysql://..."

                                               connection_username="..."

                                               connection_password="..."

                                               connection_driver="com.mysql.jdbc.Driver" />

                        <MERGE2 max_interval="30000" min_interval="10000" />

                        <FD_SOCK start_port="9777" />

              </config>

               

              And this is my infinispan config:

               

              <?xml version="1.0" encoding="UTF-8"?>

              <infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"

                      xsi:schemaLocation="urn:infinispan:config:5.0 http://www.infinispan.org/schemas/infinispan-config-5.0.xsd"

                      xmlns="urn:infinispan:config:5.0">

               

               

               

               

                  <!-- *************************** -->

                  <!-- System-wide global settings -->

                  <!-- *************************** -->

               

               

               

               

                  <global>

               

               

               

               

                      <!-- Duplicate domains are allowed so that multiple deployments with default configuration

                          of Hibernate Search applications work - if possible it would be better to use JNDI to share

                          the CacheManager across applications -->

                      <globalJmxStatistics enabled="false" cacheManagerName="Lucene" allowDuplicateDomains="true"/>

               

               

               

               

                      <!-- If the transport is omitted, there is no way to create distributed or clustered

                          caches. There is no added cost to defining a transport but not creating a cache that uses one,

                          since the transport is created and initialized lazily. -->

                      <!--<transport clusterName="Lucene" distributedSyncTimeout="50000">-->

                          <!--&lt;!&ndash; Note that the JGroups transport uses sensible defaults if no configuration-->

                              <!--property is defined. See the JGroupsTransport javadocs for more flags &ndash;&gt;-->

                      <!--</transport>-->

               

               

                                  <transport clusterName="infinispan-cluster" distributedSyncTimeout="50000"

                                                      transportClass="org.infinispan.remoting.transport.jgroups.JGroupsTransport">

                                            <properties>

                                               <property name="configurationFile" value="jgroups-jdbc_ping-aws.xml" />

                                            </properties>

                                   </transport>

               

               

               

               

                      <!-- Used to register JVM shutdown hooks. hookBehavior: DEFAULT, REGISTER, DONT_REGISTER.

                          Hibernate Search takes care to stop the CacheManager so registering is not needed -->

                      <shutdown hookBehavior="DONT_REGISTER"/>

               

               

               

               

                  </global>

               

                  <!-- *************************** -->

                  <!-- Default "template" settings -->

                  <!-- *************************** -->

               

                  <default>

               

                      <locking lockAcquisitionTimeout="20000" writeSkewCheck="false" concurrencyLevel="500" useLockStriping="false"/>

               

                                  <storeAsBinary enabled="false" />

               

               

                      <!--<lazyDeserialization enabled="false"/>-->

               

               

                      <!-- Invocation batching is required for use with the Lucene Directory -->

                      <invocationBatching enabled="true"/>

               

                      <!-- This element specifies that the cache is clustered. modes supported: distribution

                          (d), replication (r) or invalidation (i). Don't use invalidation to store Lucene indexes (as

                          with Hibernate Search DirectoryProvider). Replication is recommended for best performance of

                          Lucene indexes, but make sure you have enough memory to store the index in your heap.

                          Also distribution scales much better than replication on high number of nodes in the cluster. -->

                      <clustering mode="replication">

               

               

                          <!-- Prefer loading all data at startup than later -->

                          <stateRetrieval timeout="50000" logFlushTimeout="30000" fetchInMemoryState="true"

                                  alwaysProvideInMemoryState="true"/>

               

                          <!-- Network calls are synchronous by default -->

                          <sync replTimeout="50000"/>

                      </clustering>

               

                      <jmxStatistics enabled="false"/>

               

                      <eviction maxEntries="-1" strategy="NONE"/>

               

                      <expiration maxIdle="-1"/>

               

                  </default>

               

                  <!-- ******************************************************************************* -->

                  <!-- Individually configured "named" caches.                                         -->

                  <!--                                                                                 -->

                  <!-- While default configuration happens to be fine with similar settings across the -->

                  <!-- three caches, they should generally be different in a production environment.   -->

                  <!--                                                                                 -->

                  <!-- Current settings could easily lead to OutOfMemory exception as a CacheStore     -->

                  <!-- should be enabled, and maybe distribution is desired.                           -->

                  <!-- ******************************************************************************* -->

               

               

               

               

                  <!-- *************************************** -->

                  <!--  Cache to store Lucene's file metadata  -->

                  <!-- *************************************** -->

                  <namedCache name="LuceneIndexesMetadata">

                      <!--<eviction maxEntries="1000" strategy="LIRS"/>-->

                      <loaders passivation="false" shared="true">

                          <loader class="org.infinispan.loaders.file.FileCacheStore" fetchPersistentState="true">

                              <properties>

                                  <property name="location" value="lucene_cache"/>

                              </properties>

                          </loader>

                      </loaders>

                      <clustering mode="replication">

                          <stateRetrieval fetchInMemoryState="true" logFlushTimeout="30000"/>

                          <sync replTimeout="25000"/>

                      </clustering>

                  </namedCache>

               

               

               

               

                  <!-- **************************** -->

                  <!--  Cache to store Lucene data  -->

                  <!-- **************************** -->

                  <namedCache name="LuceneIndexesData">

                      <!--<eviction maxEntries="1000" strategy="LIRS"/>-->

                      <loaders passivation="false" shared="true">

                          <loader class="org.infinispan.loaders.file.FileCacheStore" fetchPersistentState="true">

                              <properties>

                                  <property name="location" value="lucene_cache"/>

                              </properties>

                          </loader>

                      </loaders>

                      <clustering mode="replication">

                          <stateRetrieval fetchInMemoryState="true" logFlushTimeout="30000"/>

                          <sync replTimeout="25000"/>

                      </clustering>

                  </namedCache>

               

                  <!-- ***************************** -->

                  <!--  Cache to store Lucene locks  -->

                  <!-- ***************************** -->

                  <namedCache name="LuceneIndexesLocking">

                      <clustering mode="replication">

                          <stateRetrieval fetchInMemoryState="true" logFlushTimeout="30000"/>

                          <sync replTimeout="25000"/>

                      </clustering>

                  </namedCache>

               

              </infinispan>

              • 19. Re: Infinispan on EC2
                raulraja

                I can also confirm that the instances are able to see each other when SSHing from one to another using their private EC2 hostnames:

                [root@ip-10-87-35-77 ~]# ssh ec2-user@ip-10-122-26-239

                The authenticity of host 'ip-10-122-26-239 (10.122.26.239)' can't be established.

                RSA key fingerprint is 17:b0:3a:d5:d1:1d:db:f1:fa:a1:be:f6:64:27:3f:e8.

                 

                My firewall rules on EC2 for the security group assigned to the Elastic Beanstalk cluster are these:

                 

                 

                ICMP:
                  ALL            source sg-e271ee8b (default)
                TCP:
                  0 - 65535      source sg-e271ee8b (default)
                  12000          source sg-e271ee8b (default)
                  22 (SSH)       source 0.0.0.0/0
                  80 (HTTP)      source 0.0.0.0/0
                  3306 (MYSQL)   source 0.0.0.0/0
                  7800 - 7850    source 0.0.0.0/0
                  9777           source 0.0.0.0/0
                • 20. Re: Infinispan on EC2
                  sannegrinovero

                  Hi,

                  that's not your complete JGroups configuration, is it? It seems your stack is missing some protocols; I'd suggest starting from a tested configuration before attempting to tune it. You can find some example configurations in the distribution, including some EC2-based examples, and even infinispan-core.jar contains a configuration for EC2: jgroups-ec2.xml
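
                  For example, the transport element in your Infinispan config could point at the bundled file directly (a sketch, assuming it resolves from the classpath the same way your own file does):

                  ```xml
                  <!-- jgroups-ec2.xml ships inside infinispan-core.jar,
                       so it can be referenced by name from the classpath. -->
                  <transport clusterName="infinispan-cluster"
                             transportClass="org.infinispan.remoting.transport.jgroups.JGroupsTransport">
                      <properties>
                          <property name="configurationFile" value="jgroups-ec2.xml" />
                      </properties>
                  </transport>
                  ```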

                   

                  Also, while your port settings look OK to me, I might be wrong, as some of those settings are ranges and you're quite strict about which ports are open and which are not. I'd try starting with a fully open configuration.

                   

                  Consider using groups instead of ports: if you create an "infinispan" security group in EC2, you can specify that all nodes in the "infinispan" group are completely open to traffic between each other, while still keeping all other ports blocked from outside the group, i.e. the rest of the world. Usually I open only port 80 to the public, and 22 "on demand", but prefer to be flexible about which ports are open between nodes.
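
                  As an illustration of the self-referencing rule (using the modern AWS CLI syntax, which postdates this thread; the group name is hypothetical):

                  ```
                  # Allow all traffic between members of the "infinispan" security
                  # group; this rule opens nothing to the outside world.
                  aws ec2 authorize-security-group-ingress \
                      --group-name infinispan \
                      --protocol -1 \
                      --source-group infinispan
                  ```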

                   

                   

                  Finally, a cost hint: don't use your elastic IPs for clustering:

                  • the machine itself doesn't know about them, which makes your configuration harder
                  • you pay for data traffic going through external IPs, while data transfer between nodes in the same region is free
                  • 21. Re: Infinispan on EC2
                    raulraja

                    Thanks for the comments, Sanne. I got past this issue by adding a more complete JGroups file and tweaking the config based on the files you mentioned.

                    I'm facing a different issue now. It seems that any time a state transfer is about to take place between nodes, the nodes are unable to keep sockets open for the transfer. I can also see the nodes ping each other continuously and receive view invalidation messages, but they are still unable to complete state transfers. I made sure the security group rules are fully open on all ports for now, to rule out firewall issues when binding the sockets.
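
                    One thing that may be worth checking: the state provider opens a separate server socket on an ephemeral port (39264 in the log is one such), which a strict firewall would not cover. If I recall correctly, STREAMING_STATE_TRANSFER lets you pin that port — a sketch, assuming the `bind_port` attribute behaves as described; the port number is illustrative:

                    ```xml
                    <!-- Sketch: pin the state-transfer server socket to a fixed
                         port (the default of 0 picks an ephemeral one), so a
                         firewall rule can be written for it. -->
                    <STREAMING_STATE_TRANSFER bind_port="9800" />
                    ```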

                     

                    Here are some of the errors I see regarding state transfer, followed by my new config:

                     

                    Thread-2 06/26 23:24:49 INFO org.infinispan.remoting.transport.jgroups.JGroupsTransport - ISPN00078: Starting JGroups Channel

                    Thread-2 06/26 23:24:49 INFO org.jgroups.JChannel - JGroups version: 2.12.0.Final

                    Thread-2 06/26 23:24:49 DEBUG org.jgroups.conf.ClassConfigurator - Using jg-magic-map.xml as magic number file and jg-protocol-ids.xml for protocol IDs

                    Thread-2 06/26 23:24:49 DEBUG org.jgroups.protocols.pbcast.GMS - changed role to org.jgroups.protocols.pbcast.ClientGmsImpl

                    Thread-2 06/26 23:24:49 DEBUG org.jgroups.stack.Configurator - set property TCP.bind_addr to default value /10.35.81.89

                    Thread-2 06/26 23:24:49 DEBUG org.jgroups.stack.Configurator - set property TCP.diagnostics_addr to default value /224.0.75.75

                    Thread-2 06/26 23:24:49 DEBUG org.jgroups.stack.Configurator - set property FD_SOCK.bind_addr to default value /10.35.81.89

                    Thread-2 06/26 23:24:49 DEBUG org.jgroups.stack.Configurator - set property VERIFY_SUSPECT.bind_addr to default value /10.35.81.89

                    Thread-2 06/26 23:24:49 DEBUG org.jgroups.stack.Configurator - set property STREAMING_STATE_TRANSFER.bind_addr to default value /10.35.81.89

                    Thread-2 06/26 23:24:49 DEBUG org.jgroups.protocols.FRAG2 - received CONFIG event: {bind_addr=/10.35.81.89}

                    Thread-2 06/26 23:24:50 DEBUG org.jgroups.protocols.pbcast.GMS - initial_mbrs are [own_addr=ip-10-85-154-136-43344, view id=[ip-10-85-154-136-43344|4], is_server=true, is_coord=true, logical_name=ip-10-85-154-136-43344, physical_addrs=10.85.154.136:7800]

                    Thread-2 06/26 23:24:50 DEBUG org.jgroups.protocols.pbcast.GMS - election results: {ip-10-85-154-136-43344=1}

                    Thread-2 06/26 23:24:50 DEBUG org.jgroups.protocols.pbcast.GMS - sending handleJoin(ip-10-35-81-89-57293) to ip-10-85-154-136-43344

                    Thread-2 06/26 23:24:51 DEBUG org.jgroups.protocols.pbcast.NAKACK -

                    [setDigest()]

                    existing digest:  []

                    new digest:       ip-10-35-81-89-57293: [0 : 0], ip-10-85-154-136-43344: [3341 : 3347 (3347)]

                    resulting digest: ip-10-35-81-89-57293: [0 : 0 (0)], ip-10-85-154-136-43344: [3341 : 3347 (3347)]

                    Thread-2 06/26 23:24:51 DEBUG org.jgroups.protocols.pbcast.GMS - [ip-10-35-81-89-57293]: JoinRsp=[ip-10-85-154-136-43344|5] [ip-10-85-154-136-43344, ip-10-35-81-89-57293] [size=2]

                     

                     

                     

                     

                    Thread-2 06/26 23:24:51 DEBUG org.jgroups.protocols.pbcast.GMS - new_view=[ip-10-85-154-136-43344|5] [ip-10-85-154-136-43344, ip-10-35-81-89-57293]

                    Thread-2 06/26 23:24:51 DEBUG org.jgroups.protocols.pbcast.GMS - ip-10-35-81-89-57293: view is [ip-10-85-154-136-43344|5] [ip-10-85-154-136-43344, ip-10-35-81-89-57293]

                    Thread-2 06/26 23:24:51 DEBUG org.jgroups.protocols.FD_SOCK - VIEW_CHANGE received: [ip-10-85-154-136-43344, ip-10-35-81-89-57293]

                    Thread-2 06/26 23:24:51 DEBUG org.jgroups.protocols.pbcast.STABLE - [ergonomics] setting max_bytes to 800KB (2 members)

                    Thread-2 06/26 23:24:51 DEBUG org.infinispan.remoting.transport.jgroups.JGroupsTransport - New view accepted: [ip-10-85-154-136-43344|5] [ip-10-85-154-136-43344, ip-10-35-81-89-57293]

                    Thread-2 06/26 23:24:51 INFO org.infinispan.remoting.transport.jgroups.JGroupsTransport - ISPN00094: Received new cluster view: [ip-10-85-154-136-43344|5] [ip-10-85-154-136-43344, ip-10-35-81-89-57293]

                    Thread-2 06/26 23:24:51 DEBUG org.jgroups.protocols.pbcast.GMS - ip-10-35-81-89-57293:

                    Thread-2 06/26 23:24:51 INFO org.infinispan.remoting.transport.jgroups.JGroupsTransport - ISPN00079: Cache local address is ip-10-35-81-89-57293, physical addresses are [10.35.81.89:7800]

                    Thread-2 06/26 23:24:51 DEBUG org.infinispan.remoting.transport.jgroups.JGroupsTransport - Waiting on view being accepted

                    Thread-2 06/26 23:24:51 INFO org.infinispan.factories.GlobalComponentRegistry - ISPN00128: Infinispan version: Infinispan 'Pagoa' 5.0.0.CR6

                    FD_SOCK pinger,infinispan-hibernate-search-cluster,ip-10-35-81-89-57293 06/26 23:24:51 DEBUG org.jgroups.protocols.FD_SOCK - ping_dest is ip-10-85-154-136-43344, pingable_mbrs=[ip-10-85-154-136-43344, ip-10-35-81-89-57293]

                    Thread-2 06/26 23:24:51 DEBUG org.infinispan.interceptors.InterceptorChain - Interceptor chain size: 9

                    Thread-2 06/26 23:24:51 DEBUG org.infinispan.interceptors.InterceptorChain - Interceptor chain is:

                              >> org.infinispan.interceptors.BatchingInterceptor

                              >> org.infinispan.interceptors.InvocationContextInterceptor

                              >> org.infinispan.interceptors.TxInterceptor

                              >> org.infinispan.interceptors.NotificationInterceptor

                              >> org.infinispan.interceptors.CacheLoaderInterceptor

                              >> org.infinispan.interceptors.CacheStoreInterceptor

                              >> org.infinispan.interceptors.LockingInterceptor

                              >> org.infinispan.interceptors.ReplicationInterceptor

                              >> org.infinispan.interceptors.CallInterceptor

                    Thread-2 06/26 23:24:51 DEBUG org.infinispan.statetransfer.StateTransferManagerImpl - Initiating state transfer process

                    Thread-2 06/26 23:24:51 INFO org.infinispan.remoting.rpc.RpcManagerImpl - ISPN00074: Trying to fetch state from ip-10-85-154-136-43344

                    Thread-2 06/26 23:24:51 DEBUG org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER - GET_STATE: asking ip-10-85-154-136-43344 for state, passing down a SUSPEND_STABLE event, timeout=20000

                    Thread-2 06/26 23:24:51 DEBUG org.jgroups.protocols.pbcast.STABLE - suspending message garbage collection

                    Thread-2 06/26 23:24:51 DEBUG org.jgroups.protocols.pbcast.STABLE - resume task started, max_suspend_time=22000

                    Incoming-1,infinispan-hibernate-search-cluster,ip-10-35-81-89-57293 06/26 23:24:51 DEBUG org.jgroups.protocols.pbcast.NAKACK -

                    [overwriteDigest()]

                    existing digest:  ip-10-35-81-89-57293: [0 : 0 (0)], ip-10-85-154-136-43344: [3341 : 3347 (3347)]

                    new digest:       ip-10-35-81-89-57293: [0 : 0 (0)], ip-10-85-154-136-43344: [3341 : 3347 (3347)]

                    resulting digest: ip-10-35-81-89-57293: [0 : 0 (0)], ip-10-85-154-136-43344: [3341 : 3347 (3347)]

                    Incoming-1,infinispan-hibernate-search-cluster,ip-10-35-81-89-57293 06/26 23:24:51 DEBUG org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER - Connecting to state provider /10.85.154.136:39264, original buffer size was 43690 and was reset to 8192

                    Incoming-1,infinispan-hibernate-search-cluster,ip-10-35-81-89-57293 06/26 23:24:51 DEBUG org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER - Connected to state provider, my end of the socket is /10.35.81.89:41869 passing inputstream up...

                    Incoming-1,infinispan-hibernate-search-cluster,ip-10-35-81-89-57293 06/26 23:24:51 DEBUG org.infinispan.statetransfer.StateTransferManagerImpl - Applying state

                    Incoming-1,infinispan-hibernate-search-cluster,ip-10-35-81-89-57293 06/26 23:24:51 ERROR org.infinispan.remoting.transport.jgroups.JGroupsTransport - ISPN00096: Caught while requesting or applying state

                    org.infinispan.statetransfer.StateTransferException: java.io.EOFException: The stream ended unexpectedly.  Please check whether the source of the stream encountered any issues generating the stream.

                              at org.infinispan.statetransfer.StateTransferManagerImpl.applyInMemoryState(StateTransferManagerImpl.java:311)

                              at org.infinispan.statetransfer.StateTransferManagerImpl.applyState(StateTransferManagerImpl.java:280)

                              at org.infinispan.remoting.InboundInvocationHandlerImpl.applyState(InboundInvocationHandlerImpl.java:230)

                              at org.infinispan.remoting.transport.jgroups.JGroupsTransport.setState(JGroupsTransport.java:604)

                              at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.handleUpEvent(MessageDispatcher.java:711)

                              at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:771)

                              at org.jgroups.JChannel.up(JChannel.java:1441)

                              at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1074)

                              at org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER.connectToStateProvider(STREAMING_STATE_TRANSFER.java:523)

                              at org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER.handleStateRsp(STREAMING_STATE_TRANSFER.java:462)

                              at org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER.up(STREAMING_STATE_TRANSFER.java:223)

                              at org.jgroups.protocols.FRAG2.up(FRAG2.java:189)

                              at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)

                              at org.jgroups.protocols.FlowControl.up(FlowControl.java:400)

                              at org.jgroups.protocols.pbcast.GMS.up(GMS.java:891)

                              at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:246)

                              at org.jgroups.protocols.UNICAST.handleDataReceived(UNICAST.java:613)

                              at org.jgroups.protocols.UNICAST.up(UNICAST.java:294)

                              at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:703)

                              at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:133)

                              at org.jgroups.protocols.FD.up(FD.java:275)

                              at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:275)

                              at org.jgroups.protocols.MERGE2.up(MERGE2.java:209)

                              at org.jgroups.protocols.Discovery.up(Discovery.java:291)

                              at org.jgroups.protocols.TP.passMessageUp(TP.java:1102)

                              at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1658)

                              at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1640)

                              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)

                              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)

                              at java.lang.Thread.run(Thread.java:636)

                    Caused by: java.io.EOFException: The stream ended unexpectedly.  Please check whether the source of the stream encountered any issues generating the stream.

                              at org.infinispan.marshall.VersionAwareMarshaller.objectFromObjectStream(VersionAwareMarshaller.java:193)

                              at org.infinispan.statetransfer.StateTransferManagerImpl.applyInMemoryState(StateTransferManagerImpl.java:306)

                              ... 29 more

                    Caused by: java.io.EOFException: Read past end of file

                              at org.jboss.marshalling.SimpleDataInput.eofOnRead(SimpleDataInput.java:126)

                              at org.jboss.marshalling.SimpleDataInput.readUnsignedByteDirect(SimpleDataInput.java:263)

                              at org.jboss.marshalling.SimpleDataInput.readUnsignedByte(SimpleDataInput.java:224)

                              at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:209)

                              at org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:37)

                              at org.infinispan.marshall.jboss.GenericJBossMarshaller.objectFromObjectStream(GenericJBossMarshaller.java:191)

                              at org.infinispan.marshall.VersionAwareMarshaller.objectFromObjectStream(VersionAwareMarshaller.java:191)

                              ... 30 more

                     

                     

                    ------------

                     

                    <config xmlns="urn:org:jgroups"

                                        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"

                                        xsi:schemaLocation="urn:org:jgroups JGroups-2.12.xsd">

                     

                     

                              <!--bind_addr="${jgroups.tcp.address:127.0.0.1}"-->

                              <TCP

                                                  bind_addr="${jgroups.tcp.address:127.0.0.1}"

                                                  bind_port="${jgroups.tcp.port:7800}"

                                                  loopback="true"

                                                  port_range="30"

                                                  recv_buf_size="20000000"

                                                  send_buf_size="640000"

                                                  discard_incompatible_packets="true"

                                                  max_bundle_size="64000"

                                                  max_bundle_timeout="30"

                                                  enable_bundling="true"

                                                  use_send_queues="true"

                                                  sock_conn_timeout="300"

                                                  enable_diagnostics="false"

                     

                     

                                                  thread_pool.enabled="true"

                                                  thread_pool.min_threads="2"

                                                  thread_pool.max_threads="30"

                                                  thread_pool.keep_alive_time="5000"

                                                  thread_pool.queue_enabled="false"

                                                  thread_pool.queue_max_size="100"

                                                  thread_pool.rejection_policy="Discard"

                     

                     

                                                  oob_thread_pool.enabled="true"

                                                  oob_thread_pool.min_threads="2"

                                                  oob_thread_pool.max_threads="30"

                                                  oob_thread_pool.keep_alive_time="5000"

                                                  oob_thread_pool.queue_enabled="false"

                                                  oob_thread_pool.queue_max_size="100"

                                                  oob_thread_pool.rejection_policy="Discard"

                                                  />

                     

                     

                              <S3_PING secret_access_key="${aws.secret.key}" access_key="${aws.access.key}" location="${jgroups.s3.bucket:jgroups}" />

                     

                     

                              <MERGE2 max_interval="30000"

                                                  min_interval="10000"/>

                              <FD_SOCK/>

                              <FD timeout="3000" max_tries="3"/>

                              <VERIFY_SUSPECT timeout="1500"/>

                              <pbcast.NAKACK

                                                  use_mcast_xmit="false" gc_lag="0"

                                                  retransmit_timeout="300,600,1200,2400,4800"

                                                  discard_delivered_msgs="false"/>

                              <UNICAST timeout="300,600,1200"/>

                              <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000"

                                                               max_bytes="400000"/>

                              <pbcast.GMS print_local_addr="false" join_timeout="7000" view_bundling="true"/>

                              <UFC max_credits="2000000" min_threshold="0.10"/>

                              <MFC max_credits="2000000" min_threshold="0.10"/>

                              <FRAG2 frag_size="60000"/>

                              <pbcast.STREAMING_STATE_TRANSFER />

                    </config>

                     

                    --------------------

                     

                    <?xml version="1.0" encoding="UTF-8"?>

                    <infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"

                                                  xsi:schemaLocation="urn:infinispan:config:5.0 http://www.infinispan.org/schemas/infinispan-config-5.0.xsd"

                                                  xmlns="urn:infinispan:config:5.0">

                     

                     

                              <!-- *************************** -->

                              <!-- System-wide global settings -->

                              <!-- *************************** -->

                     

                     

                              <global>

                     

                     

                                        <!-- Duplicate domains are allowed so that multiple deployments with default configuration

                                                                      of Hibernate Search applications work - if possible it would be better to use JNDI to share

                                                                      the CacheManager across applications -->

                                        <globalJmxStatistics enabled="false" cacheManagerName="HibernateSearch" allowDuplicateDomains="true"/>

                     

                     

                     

                     

                                        <transport clusterName="infinispan-hibernate-search-cluster" distributedSyncTimeout="20000" transportClass="org.infinispan.remoting.transport.jgroups.JGroupsTransport">

                                                  <properties>

                                                            <property name="configurationFile" value="jgroups-ec2.xml"/>

                                                  </properties>

                                        </transport>

                     

                     

                                        <!-- Used to register JVM shutdown hooks. hookBehavior: DEFAULT, REGISTER, DONT_REGISTER.

                                                                      Hibernate Search takes care to stop the CacheManager so registering is not needed -->

                                        <shutdown hookBehavior="DONT_REGISTER"/>

                     

                     

                              </global>

                     

                     

                              <!-- *************************** -->

                              <!-- Default "template" settings -->

                              <!-- *************************** -->

                     

                     

                              <default>

                     

                     

                                        <locking lockAcquisitionTimeout="20000" writeSkewCheck="false" concurrencyLevel="500" useLockStriping="false"/>

                     

                     

                                        <lazyDeserialization enabled="false"/>

                     

                     

                                        <!-- Invocation batching is required for use with the Lucene Directory -->

                                        <invocationBatching enabled="true"/>

                     

                     

                                        <!-- This element specifies that the cache is clustered. modes supported: distribution

                                                                      (d), replication (r) or invalidation (i). Don't use invalidation to store Lucene indexes (as

                                                                      with Hibernate Search DirectoryProvider). Replication is recommended for best performance of

                                                                      Lucene indexes, but make sure you have enough memory to store the index in your heap.

                                                                      Also distribution scales much better than replication on high number of nodes in the cluster. -->

                                        <clustering mode="replication">

                     

                     

                                                  <!-- Prefer loading all data at startup than later -->

                                                  <stateRetrieval timeout="20000" logFlushTimeout="30000" fetchInMemoryState="true" alwaysProvideInMemoryState="true"/>

                     

                     

                                                  <!-- Network calls are synchronous by default -->

                                                  <sync replTimeout="20000"/>

                                        </clustering>

                                        <jmxStatistics enabled="false"/>

                                        <eviction maxEntries="-1" strategy="NONE"/>

                                        <expiration maxIdle="-1"/>

                     

                     

                              </default>

                     

                     

                              <!-- ******************************************************************************* -->

                              <!-- Individually configured "named" caches.                                         -->

                              <!--                                                                                 -->

                              <!-- While default configuration happens to be fine with similar settings across the -->

                              <!-- three caches, they should generally be different in a production environment.   -->

                              <!--                                                                                 -->

                              <!-- Current settings could easily lead to OutOfMemory exception as a CacheStore     -->

                              <!-- should be enabled, and maybe distribution is desired.                           -->

                              <!-- ******************************************************************************* -->

                     

                     

                              <!-- *************************************** -->

                              <!--  Cache to store Lucene's file metadata  -->

                              <!-- *************************************** -->

                              <namedCache name="LuceneIndexesMetadata">

                                        <loaders passivation="false" shared="true">

                                                  <loader class="com.fortysevendeg.cloud.cluster.infinispan.CustomFileCacheStore" fetchPersistentState="true">

                                                            <properties>

                                                                      <property name="location" value="lucene_cache"/>

                                                            </properties>

                                                  </loader>

                                        </loaders>

                                        <clustering mode="replication">

                                                  <stateRetrieval fetchInMemoryState="true" logFlushTimeout="30000"/>

                                                  <sync replTimeout="25000"/>

                                        </clustering>

                              </namedCache>

                     

                     

                              <!-- **************************** -->

                              <!--  Cache to store Lucene data  -->

                              <!-- **************************** -->

                              <namedCache name="LuceneIndexesData">

                                        <loaders passivation="false" shared="true">

                                                  <loader class="com.fortysevendeg.cloud.cluster.infinispan.CustomFileCacheStore" fetchPersistentState="true">

                                                            <properties>

                                                                      <property name="location" value="lucene_cache"/>

                                                            </properties>

                                                  </loader>

                                        </loaders>

                                        <clustering mode="replication">

                                                  <stateRetrieval fetchInMemoryState="true" logFlushTimeout="30000"/>

                                                  <sync replTimeout="25000"/>

                                        </clustering>

                              </namedCache>

                     

                     

                              <!-- ***************************** -->

                              <!--  Cache to store Lucene locks  -->

                              <!-- ***************************** -->

                              <namedCache name="LuceneIndexesLocking">

                                        <clustering mode="replication">

                                                  <stateRetrieval fetchInMemoryState="true" logFlushTimeout="30000"/>

                                                  <sync replTimeout="25000"/>

                                        </clustering>

                              </namedCache>

                     

                     

                     

                     

                     

                     

                    </infinispan>

                     

                    -------

                     

                    I also have another infinispan file for the hibernate 2nd Level cache.

                     

                    Thanks for your time and any advice.

                    • 22. Re: Infinispan on EC2
                      sannegrinovero

                      The exception is reporting that the received stream was incomplete, but this error is not necessarily due to networking issues: it's possible that the sending node encountered an error while marshalling state.

                       

                       As the exception message suggests, did you find any errors in the logs of the sending node?

                      • 23. Re: Infinispan on EC2
                        raulraja

                        Sanne,

                         

                         I shut down both nodes, started one, and let it come up successfully alone in the cluster; the logs confirm it is the only node in the cluster.

                         Afterwards I started the second node, and this is the stack trace I see on it. I can reproduce the same behavior on my local network.

                         

                        NODE 1

                         

                        Incoming-1,infinispan-hibernate-search-cluster,ip-10-35-81-89-22964 06/27 18:46:33 DEBUG org.infinispan.remoting.transport.jgroups.JGroupsTransport - New view accepted: [ip-10-35-81-89-22964|2] [ip-10-35-81-89-22964]

                        Incoming-1,infinispan-hibernate-search-cluster,ip-10-35-81-89-22964 06/27 18:46:33 INFO org.infinispan.remoting.transport.jgroups.JGroupsTransport - ISPN00094: Received new cluster view: [ip-10-35-81-89-22964|2] [ip-10-35-81-89-22964]

                        Incoming-1,infinispan-hibernate-search-cluster,ip-10-35-81-89-22964 06/27 18:46:50 DEBUG org.infinispan.remoting.transport.jgroups.JGroupsTransport - New view accepted: [ip-10-35-81-89-22964|3] [ip-10-35-81-89-22964, ip-10-85-154-136-16711]

                        Incoming-1,infinispan-hibernate-search-cluster,ip-10-35-81-89-22964 06/27 18:46:50 INFO org.infinispan.remoting.transport.jgroups.JGroupsTransport - ISPN00094: Received new cluster view: [ip-10-35-81-89-22964|3] [ip-10-35-81-89-22964, ip-10-85-154-136-16711]

                        STREAMING_STATE_TRANSFER-sender-2,infinispan-hibernate-search-cluster,ip-10-35-81-89-22964 06/27 18:46:50 DEBUG org.infinispan.statetransfer.StateTransferManagerImpl - Generating state.  Can provide? true

                        STREAMING_STATE_TRANSFER-sender-2,infinispan-hibernate-search-cluster,ip-10-35-81-89-22964 06/27 18:46:50 DEBUG org.infinispan.statetransfer.StateTransferManagerImpl - Writing 129 StoredEntries to stream

                        STREAMING_STATE_TRANSFER-sender-2,infinispan-hibernate-search-cluster,ip-10-35-81-89-22964 06/27 18:46:50 DEBUG org.infinispan.statetransfer.StateTransferManagerImpl - State generated, closing object stream

                        STREAMING_STATE_TRANSFER-sender-2,infinispan-hibernate-search-cluster,ip-10-35-81-89-22964 06/27 18:46:50 DEBUG org.infinispan.statetransfer.StateTransferManagerImpl - Generating state.  Can provide? true

                        STREAMING_STATE_TRANSFER-sender-2,infinispan-hibernate-search-cluster,ip-10-35-81-89-22964 06/27 18:46:50 DEBUG org.infinispan.statetransfer.StateTransferManagerImpl - Writing 94 StoredEntries to stream

                        STREAMING_STATE_TRANSFER-sender-2,infinispan-hibernate-search-cluster,ip-10-35-81-89-22964 06/27 18:46:50 DEBUG org.infinispan.statetransfer.StateTransferManagerImpl - State generated, closing object stream

                        STREAMING_STATE_TRANSFER-sender-2,infinispan-hibernate-search-cluster,ip-10-35-81-89-22964 06/27 18:46:51 DEBUG org.infinispan.statetransfer.StateTransferManagerImpl - Generating state.  Can provide? true

                        STREAMING_STATE_TRANSFER-sender-2,infinispan-hibernate-search-cluster,ip-10-35-81-89-22964 06/27 18:46:51 DEBUG org.infinispan.statetransfer.StateTransferManagerImpl - Writing 0 StoredEntries to stream

                        STREAMING_STATE_TRANSFER-sender-2,infinispan-hibernate-search-cluster,ip-10-35-81-89-22964 06/27 18:46:51 DEBUG org.infinispan.statetransfer.StateTransferManagerImpl - State generated, closing object stream

                        Incoming-1,infinispan-hibernate-cache-cluster,ip-10-35-81-89-31542 06/27 18:46:52 DEBUG org.infinispan.remoting.transport.jgroups.JGroupsTransport - New view accepted: [ip-10-35-81-89-31542|1] [ip-10-35-81-89-31542, ip-10-85-154-136-48077]

                        Incoming-1,infinispan-hibernate-cache-cluster,ip-10-35-81-89-31542 06/27 18:46:52 INFO org.infinispan.remoting.transport.jgroups.JGroupsTransport - ISPN00094: Received new cluster view: [ip-10-35-81-89-31542|1] [ip-10-35-81-89-31542, ip-10-85-154-136-48077]

                        STREAMING_STATE_TRANSFER-sender-1,infinispan-hibernate-cache-cluster,ip-10-35-81-89-31542 06/27 18:46:52 DEBUG org.infinispan.statetransfer.StateTransferManagerImpl - Generating state.  Can provide? true

                        STREAMING_STATE_TRANSFER-sender-1,infinispan-hibernate-cache-cluster,ip-10-35-81-89-31542 06/27 18:46:52 DEBUG org.infinispan.statetransfer.StateTransferManagerImpl - Writing 1 StoredEntries to stream

                        STREAMING_STATE_TRANSFER-sender-1,infinispan-hibernate-cache-cluster,ip-10-35-81-89-31542 06/27 18:46:52 DEBUG org.infinispan.statetransfer.StateTransferManagerImpl - State generated, closing object stream

                        STREAMING_STATE_TRANSFER-sender-1,infinispan-hibernate-cache-cluster,ip-10-35-81-89-31542 06/27 18:46:53 DEBUG org.infinispan.statetransfer.StateTransferManagerImpl - Generating state.  Can provide? true

                        STREAMING_STATE_TRANSFER-sender-1,infinispan-hibernate-cache-cluster,ip-10-35-81-89-31542 06/27 18:46:53 DEBUG org.infinispan.statetransfer.StateTransferManagerImpl - Writing 1 StoredEntries to stream

                        STREAMING_STATE_TRANSFER-sender-1,infinispan-hibernate-cache-cluster,ip-10-35-81-89-31542 06/27 18:46:53 DEBUG org.infinispan.statetransfer.StateTransferManagerImpl - State generated, closing object stream

                        STREAMING_STATE_TRANSFER-sender-1,infinispan-hibernate-cache-cluster,ip-10-35-81-89-31542 06/27 18:46:55 DEBUG org.infinispan.statetransfer.StateTransferManagerImpl - Generating state.  Can provide? true

                        STREAMING_STATE_TRANSFER-sender-1,infinispan-hibernate-cache-cluster,ip-10-35-81-89-31542 06/27 18:46:55 DEBUG org.infinispan.statetransfer.StateTransferManagerImpl - Writing 1 StoredEntries to stream

                        STREAMING_STATE_TRANSFER-sender-1,infinispan-hibernate-cache-cluster,ip-10-35-81-89-31542 06/27 18:46:55 DEBUG org.infinispan.statetransfer.StateTransferManagerImpl - State generated, closing object stream

                        STREAMING_STATE_TRANSFER-sender-1,infinispan-hibernate-cache-cluster,ip-10-35-81-89-31542 06/27 18:46:59 DEBUG org.infinispan.statetransfer.StateTransferManagerImpl - Generating state.  Can provide? true

                        STREAMING_STATE_TRANSFER-sender-1,infinispan-hibernate-cache-cluster,ip-10-35-81-89-31542 06/27 18:46:59 DEBUG org.infinispan.statetransfer.StateTransferManagerImpl - Writing 1 StoredEntries to stream

                        STREAMING_STATE_TRANSFER-sender-1,infinispan-hibernate-cache-cluster,ip-10-35-81-89-31542 06/27 18:46:59 DEBUG org.infinispan.statetransfer.StateTransferManagerImpl - State generated, closing object stream

                        STREAMING_STATE_TRANSFER-sender-1,infinispan-hibernate-cache-cluster,ip-10-35-81-89-31542 06/27 18:47:07 DEBUG org.infinispan.statetransfer.StateTransferManagerImpl - Generating state.  Can provide? true

                        STREAMING_STATE_TRANSFER-sender-1,infinispan-hibernate-cache-cluster,ip-10-35-81-89-31542 06/27 18:47:07 DEBUG org.infinispan.statetransfer.StateTransferManagerImpl - Writing 1 StoredEntries to stream

                        STREAMING_STATE_TRANSFER-sender-1,infinispan-hibernate-cache-cluster,ip-10-35-81-89-31542 06/27 18:47:07 DEBUG org.infinispan.statetransfer.StateTransferManagerImpl - State generated, closing object stream

                         

                        NODE 2

                         

                        Thread-2 06/27 18:46:55 WARN org.infinispan.remoting.rpc.RpcManagerImpl - ISPN00075: Could not find available peer for state, backing off and retrying

                        Thread-2 06/27 18:46:59 INFO org.infinispan.remoting.rpc.RpcManagerImpl - ISPN00074: Trying to fetch state from ip-10-35-81-89-31542

                        Incoming-1,infinispan-hibernate-cache-cluster,ip-10-85-154-136-48077 06/27 18:46:59 DEBUG org.infinispan.statetransfer.StateTransferManagerImpl - Applying state

                        Incoming-1,infinispan-hibernate-cache-cluster,ip-10-85-154-136-48077 06/27 18:46:59 ERROR org.infinispan.remoting.transport.jgroups.JGroupsTransport - ISPN00096: Caught while requesting or applying state

                        org.infinispan.statetransfer.StateTransferException: java.lang.NullPointerException

                                  at org.infinispan.statetransfer.StateTransferManagerImpl.applyInMemoryState(StateTransferManagerImpl.java:311)

                                  at org.infinispan.statetransfer.StateTransferManagerImpl.applyState(StateTransferManagerImpl.java:280)

                                  at org.infinispan.remoting.InboundInvocationHandlerImpl.applyState(InboundInvocationHandlerImpl.java:230)

                                  at org.infinispan.remoting.transport.jgroups.JGroupsTransport.setState(JGroupsTransport.java:604)

                                  at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.handleUpEvent(MessageDispatcher.java:711)

                                  at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:771)

                                  at org.jgroups.JChannel.up(JChannel.java:1441)

                                  at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1074)

                                  at org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER.connectToStateProvider(STREAMING_STATE_TRANSFER.java:523)

                                  at org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER.handleStateRsp(STREAMING_STATE_TRANSFER.java:462)

                                  at org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER.up(STREAMING_STATE_TRANSFER.java:223)

                                  at org.jgroups.protocols.FRAG2.up(FRAG2.java:189)

                                  at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)

                                  at org.jgroups.protocols.FlowControl.up(FlowControl.java:400)

                                  at org.jgroups.protocols.pbcast.GMS.up(GMS.java:891)

                                  at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:246)

                                  at org.jgroups.protocols.UNICAST.handleDataReceived(UNICAST.java:613)

                                  at org.jgroups.protocols.UNICAST.up(UNICAST.java:294)

                                  at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:703)

                                  at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:133)

                                  at org.jgroups.protocols.FD.up(FD.java:275)

                                  at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:275)

                                  at org.jgroups.protocols.MERGE2.up(MERGE2.java:209)

                                  at org.jgroups.protocols.Discovery.up(Discovery.java:291)

                                  at org.jgroups.protocols.TP.passMessageUp(TP.java:1102)

                                  at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1658)

                                  at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1640)

                                  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)

                                  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)

                                  at java.lang.Thread.run(Thread.java:636)

                        Caused by: java.lang.NullPointerException

                                  at org.jboss.marshalling.reflect.SerializableClass$LazyWeakConstructorRef.access$300(SerializableClass.java:569)

                                  at org.jboss.marshalling.reflect.SerializableClass.invokeConstructor(SerializableClass.java:380)

                                  at org.jboss.marshalling.reflect.SerializableClass.callNoArgConstructor(SerializableClass.java:355)

                                  at org.jboss.marshalling.river.RiverUnmarshaller.doReadNewObject(RiverUnmarshaller.java:1248)

                                  at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:272)

                                  at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:209)

                                  at org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:37)

                                  at org.infinispan.container.entries.ImmortalCacheEntry$Externalizer.readObject(ImmortalCacheEntry.java:132)

                                  at org.infinispan.container.entries.ImmortalCacheEntry$Externalizer.readObject(ImmortalCacheEntry.java:123)

                                  at org.infinispan.marshall.jboss.ExternalizerTable$ExternalizerAdapter.readObject(ExternalizerTable.java:357)

                                  at org.infinispan.marshall.jboss.ExternalizerTable.readObject(ExternalizerTable.java:245)

                                  at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:351)

                                  at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:209)

                                  at org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:37)

                                  at org.infinispan.marshall.exts.SetExternalizer.readObject(SetExternalizer.java:78)

                                  at org.infinispan.marshall.exts.SetExternalizer.readObject(SetExternalizer.java:47)

                                  at org.infinispan.marshall.jboss.ExternalizerTable$ExternalizerAdapter.readObject(ExternalizerTable.java:357)

                                  at org.infinispan.marshall.jboss.ExternalizerTable.readObject(ExternalizerTable.java:245)

                                  at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:351)

                                  at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:209)

                                  at org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:37)

                                  at org.infinispan.marshall.jboss.GenericJBossMarshaller.objectFromObjectStream(GenericJBossMarshaller.java:191)

                                  at org.infinispan.marshall.VersionAwareMarshaller.objectFromObjectStream(VersionAwareMarshaller.java:191)

                                  at org.infinispan.statetransfer.StateTransferManagerImpl.applyInMemoryState(StateTransferManagerImpl.java:306)

                                  ... 29 more

                        Caused by: an exception which occurred:

                                  in object of type org.hibernate.cache.infinispan.util.CacheHelper$EvictAll

                                            -> classloader hierarchy:

                                            -> type classloader = WebappClassLoader

                          context:

                          delegate: false

                          repositories:

                            /WEB-INF/classes/

                        ----------> Parent Classloader:

                        org.apache.catalina.loader.StandardClassLoader@2897a560

                         

                         (After this point a list of all the jars in my lib folder is displayed, and the state transfer repeats with the same result.)

                        • 24. Re: Infinispan on EC2
                          raulraja
                          • 25. Re: Infinispan on EC2
                            hussam.galal

                            Thanks Galder,

                             

                             The exception was very misleading, and here is the pitfall that cost me a couple of days: the bucket name in the S3_PING configuration contained dots, e.g. mylab.infinispan.cache. After replacing it with a bucket name that contains no dots, it works and the exception is no longer thrown.
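
                             In case it helps anyone else, a sketch of the corrected S3_PING element (the bucket name here is only an example):

```xml
<!-- Bucket names containing dots break S3's virtual-hosted-style URLs
     (the wildcard SSL certificate no longer matches), so use dashes instead: -->
<S3_PING secret_access_key="${aws.secret.key}"
         access_key="${aws.access.key}"
         location="mylab-infinispan-cache"/>
```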

                            • 26. Re: Infinispan on EC2
                              sannegrinovero

                               Raúl Raja Martínez wrote:

                               

                              Found this: https://issues.jboss.org/browse/JBMAR-121

                               Hi Raúl, good that you found that. Could you try updating JBoss Marshalling to a recent snapshot?

                               

                               Also, am I right in understanding that you only have this issue when using Infinispan as a Hibernate cache?

                              • 27. Re: Infinispan on EC2
                                raulraja

                                Sanne,

                                 

                                I finally got it all working. These have been my findings in case anybody runs in a similar scenario.

                                 

                                 1. The released JBoss Marshalling build simply does not report the real exception; one of the latest snapshots shows the real exceptions.

                                 

                                 2. All versions of the hibernate-infinispan module shipped with hibernate-core contained private static classes that implemented Externalizable but provided no default constructor, and were private on top of that, neither of which is acceptable to JBoss Marshalling.
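
                                 For reference, this is the contract those classes violated; a minimal sketch using plain java.io serialization (the class name just echoes CacheHelper$EvictAll and is only illustrative — JBoss Marshalling enforces the same rule): an Externalizable class must be accessible and expose a public no-arg constructor, because the unmarshaller instantiates it reflectively before calling readExternal().

```java
import java.io.*;

public class Main {

    // Illustrative stand-in for hibernate-infinispan's CacheHelper$EvictAll:
    // the class must NOT be private, and MUST have a public no-arg constructor,
    // or deserialization fails (cf. the NPE in SerializableClass above).
    public static class EvictAll implements Externalizable {
        private long timestamp;

        public EvictAll() { }                       // required by Externalizable

        public EvictAll(long timestamp) { this.timestamp = timestamp; }

        public long getTimestamp() { return timestamp; }

        @Override
        public void writeExternal(ObjectOutput out) throws IOException {
            out.writeLong(timestamp);
        }

        @Override
        public void readExternal(ObjectInput in) throws IOException {
            timestamp = in.readLong();
        }
    }

    public static void main(String[] args) throws Exception {
        // Round-trip through a stream, as state transfer does.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(new EvictAll(42L));
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            EvictAll copy = (EvictAll) ois.readObject();
            System.out.println("round-trip timestamp: " + copy.getTimestamp());
        }
    }
}
```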

                                 

                                 3. Hibernate versions 4.0 and above, including betas and snapshots, had already fixed this issue, but unfortunately none of the current Hibernate Search snapshots or the latest code on GitHub has been migrated to 4.0. You would think that if Hibernate Core is launching a 4.x beta, Hibernate Search would have a compatible snapshot to go with it, but my finding is that they are more decoupled than I thought they would be.

                                 

                                 4. The mismatch between the available hibernate-search and hibernate-core builds and sources made it impossible to run Infinispan as a cache provider and as a Lucene directory at the same time with any of the Hibernate Core 4.x builds.

                                 I had to manually patch the latest hibernate-infinispan in the 3.x series so that those classes are no longer private Externalizables, and create a manual build of hibernate-core that I could run with the latest Hibernate Search, so I can use Infinispan for both the cache and Lucene.

                                 

                                 5. Besides all of the above, I had to create separate Infinispan configurations for the cache and for Lucene, since both integration libraries create named caches that collide with each other when using fully qualified entity class names, and those caches have default sync timeouts that were not appropriate when the initial state transfer takes longer than 20 seconds. Also, the log messages for those named caches, even at TRACE, did not include the cache name, so I had to get the sources and step through the debugger to understand what the issues were.

                                 

                                 6. For JGroups I had to create two separate configurations, one for the cache and another for search, since when shared, all kinds of state transfer exceptions were thrown if both cache managers were bound to the same set of ports on my Elastic Beanstalk cluster.

                                 

                                 7. Hibernate and Infinispan report errors when all nodes of the cluster are initialized at the same time.

                                 

                                In conclusion....

                                 

                                 I'm not sure how anybody has ever gotten Infinispan working in a cluster with hibernate-infinispan and hibernate-search at the same time.

                                 This is my last working config, after having patched Hibernate so that its Externalizable classes are compatible with JBoss Marshalling.

                                 It took me several days, but as a result of all this research I have a better understanding of how Infinispan, JGroups and Hibernate work together, and I got a clustered Hibernate app on Elastic Beanstalk, which allows us to deploy on EC2 with zero server config and scale up as needed.

                                 I tested the configuration below with 4 nodes and it works fine if the nodes are initialized sequentially, but it sometimes fails if all nodes are started in parallel, due to initial state transfer syncs. I wish Infinispan had a way to let nodes wait for initialization until other nodes are ready.
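
                                 Lacking such a built-in wait, one workaround is a small polling gate run at application startup before touching the caches. This is a generic sketch under the assumption that you can read the current membership count from somewhere, e.g. cacheManager.getMembers().size() on the embedded CacheManager; the class and parameter names are mine, not from Infinispan:

```java
import java.util.function.IntSupplier;

public class ClusterStartupGate {

    // Poll a membership count (e.g. () -> cacheManager.getMembers().size())
    // until it reaches `expected` members, checking every `pollMs` ms.
    // Returns true if the cluster formed in time, false on timeout.
    public static boolean awaitClusterSize(IntSupplier currentSize, int expected,
                                           long timeoutMs, long pollMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (currentSize.getAsInt() >= expected) {
                return true;
            }
            Thread.sleep(pollMs);
        }
        // One final check before giving up
        return currentSize.getAsInt() >= expected;
    }
}
```

                                 A node would call this after starting its CacheManager and delay (or stagger) heavy cache use until the expected number of peers is visible, which sidesteps the parallel-startup state transfer races.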

                                 

                                 Anyway, here is the config that I used. Feel free to criticize, or ask any questions if I can be of help in tracking any of these issues down....

                                 

                                persistence.xml

                                 

                                {code:xml}

                                 

                                <?xml version="1.0" encoding="UTF-8" ?>

                                <persistence xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd" version="2.0">

                                 

                                          <persistence-unit name="appPersistenceUnit" transaction-type="RESOURCE_LOCAL">

                                  ... domain classes here

                                                    <properties>

                                                              <property name="hibernate.ejb.naming_strategy" value="org.hibernate.cfg.DefaultComponentSafeNamingStrategy"/>

                                                              <property name="hibernate.search.default.directory_provider" value="infinispan"/>

                                                              <property name="hibernate.search.infinispan.configuration_resourcename" value="hibernate-search-infinispan.xml"/>

                                 

                                                              <property name="hibernate.search.indexing_strategy" value="event" />

                                                              <!--<property name="hibernate.cache.provider_class" value="org.hibernate.cache.SingletonEhCacheProvider"/>-->

                                                              <property name="hibernate.cache.use_second_level_cache" value="true"/>

                                                              <property name="hibernate.cache.use_query_cache" value="true"/>

                                                              <property name="hibernate.generate_statistics" value="true" />

                                                              <property name="hibernate.cache.use_structured_entries" value="true" />

                                                              <property name="hibernate.cache.region.factory_class" value="org.hibernate.cache.infinispan.InfinispanRegionFactory"/>

                                                              <property name="hibernate.cache.infinispan.cfg" value="hibernate-cache-infinispan.xml"/>

                                                              <property name="hibernate.cache.infinispan.cachemanager" value="java:CacheManager"/>

                                                              <property name="javax.persistence.sharedCache.mode" value="ENABLE_SELECTIVE" />

                                                    </properties>

                                          </persistence-unit>

                                 

                                </persistence>

                                 

                                {code}

                                hibernate-cache-infinispan.xml

                                 


                                {code:xml}

                                 

                                 

                                <?xml version="1.0" encoding="UTF-8"?>

                                 

                                <infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"

                                                              xsi:schemaLocation="urn:infinispan:config:5.0 http://www.infinispan.org/schemas/infinispan-config-5.0.xsd"

                                                              xmlns="urn:infinispan:config:5.0">

                                 

                                          <!-- *************************** -->

                                          <!-- System-wide global settings -->

                                          <!-- *************************** -->

                                 

                                          <global>

                                 

                                                    <!-- Duplicate domains are allowed so that multiple deployments with default configuration

                                                                                  of Hibernate Search applications work - if possible it would be better to use JNDI to share

                                                                                  the CacheManager across applications -->

                                                    <globalJmxStatistics enabled="false" cacheManagerName="HibernateCache" allowDuplicateDomains="true"/>

                                 

                                 

                                 

                                 

                                                    <transport clusterName="infinispan-hibernate-cache-cluster" distributedSyncTimeout="60000" transportClass="org.infinispan.remoting.transport.jgroups.JGroupsTransport">

                                                              <properties>

                                                                        <property name="configurationFile" value="jgroups-ec2-cache.xml"/>

                                                              </properties>

                                                    </transport>

                                 

                                 

                                                    <!-- Used to register JVM shutdown hooks. hookBehavior: DEFAULT, REGISTER, DONT_REGISTER.

                                                                                  Hibernate Search takes care to stop the CacheManager so registering is not needed -->

                                                    <shutdown hookBehavior="DONT_REGISTER"/>

                                 

                                 

                                          </global>

                                 

                                 

                                          <!-- *************************** -->

                                          <!-- Default "template" settings -->

                                          <!-- *************************** -->

                                 

                                 

                                          <default>

                                                    <locking lockAcquisitionTimeout="60000" writeSkewCheck="false" concurrencyLevel="500" useLockStriping="false"/>

                                 

                                 

                                                    <!--<lazyDeserialization enabled="false"/>-->

                                                    <storeAsBinary enabled="false"/>

                                 

                                 

                                                    <!-- Invocation batching is required for use with the Lucene Directory -->

                                                    <invocationBatching enabled="true"/>

                                 

                                 

                                                    <!-- This element specifies that the cache is clustered. modes supported: distribution

                                                                                  (d), replication (r) or invalidation (i). Don't use invalidation to store Lucene indexes (as

                                                                                  with Hibernate Search DirectoryProvider). Replication is recommended for best performance of

                                                                                  Lucene indexes, but make sure you have enough memory to store the index in your heap.

                                                                                  Also distribution scales much better than replication on high number of nodes in the cluster. -->

                                                    <clustering mode="replication">

                                 

                                 

                                                              <!-- Prefer loading all data at startup rather than later -->

                                                              <stateRetrieval timeout="60000" logFlushTimeout="60000" fetchInMemoryState="true" alwaysProvideInMemoryState="true"/>

                                 

                                 

                                                              <!-- Network calls are synchronous by default -->

                                                              <sync replTimeout="60000"/>

                                                    </clustering>

                                                    <jmxStatistics enabled="false"/>

                                                    <eviction maxEntries="-1" strategy="NONE"/>

                                                    <expiration maxIdle="-1"/>

                                 

                                 

                                          </default>

                                 

                                 

                                </infinispan>

                                 

                                {code}

                                hibernate-search-infinispan.xml

                                 

                                {code:xml}

                                 

                                <?xml version="1.0" encoding="UTF-8"?>

                                 

                                <infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"

                                                              xsi:schemaLocation="urn:infinispan:config:5.0 http://www.infinispan.org/schemas/infinispan-config-5.0.xsd"

                                                              xmlns="urn:infinispan:config:5.0">

                                 

                                 

                                          <!-- *************************** -->

                                          <!-- System-wide global settings -->

                                          <!-- *************************** -->

                                 

                                 

                                          <global>

                                 

                                 

                                                    <!-- Duplicate domains are allowed so that multiple deployments with default configuration

                                                                                  of Hibernate Search applications work - if possible it would be better to use JNDI to share

                                                                                  the CacheManager across applications -->

                                                    <globalJmxStatistics enabled="false" cacheManagerName="HibernateSearch" allowDuplicateDomains="true"/>

                                 

                                 

                                 

                                 

                                                    <transport clusterName="infinispan-hibernate-search-cluster" distributedSyncTimeout="60000" transportClass="org.infinispan.remoting.transport.jgroups.JGroupsTransport">

                                                              <properties>

                                                                        <property name="configurationFile" value="jgroups-ec2-search.xml"/>

                                                              </properties>

                                                    </transport>

                                 

                                 

                                                    <!-- Used to register JVM shutdown hooks. hookBehavior: DEFAULT, REGISTER, DONT_REGISTER.

                                                                                  Hibernate Search takes care to stop the CacheManager so registering is not needed -->

                                                    <shutdown hookBehavior="DONT_REGISTER"/>

                                 

                                 

                                          </global>

                                 

                                 

                                          <!-- *************************** -->

                                          <!-- Default "template" settings -->

                                          <!-- *************************** -->

                                 

                                 

                                          <default>

                                 

                                 

                                                    <locking lockAcquisitionTimeout="60000" writeSkewCheck="false" concurrencyLevel="500" useLockStriping="false"/>

                                 

                                 

                                                    <!--<lazyDeserialization enabled="false"/>-->

                                                    <storeAsBinary enabled="false"/>

                                 

                                 

                                                    <!-- Invocation batching is required for use with the Lucene Directory -->

                                                    <invocationBatching enabled="true"/>

                                 

                                 

                                                    <!-- This element specifies that the cache is clustered. modes supported: distribution

                                                                                  (d), replication (r) or invalidation (i). Don't use invalidation to store Lucene indexes (as

                                                                                  with Hibernate Search DirectoryProvider). Replication is recommended for best performance of

                                                                                  Lucene indexes, but make sure you have enough memory to store the index in your heap.

                                                                                  Also distribution scales much better than replication on high number of nodes in the cluster. -->

                                                    <clustering mode="distribution">

                                 

                                 

                                                              <!-- Prefer loading all data at startup rather than later -->

                                                              <stateRetrieval timeout="60000" logFlushTimeout="60000" fetchInMemoryState="true" alwaysProvideInMemoryState="true"/>

                                 

                                 

                                                              <!-- Network calls are synchronous by default -->

                                                              <sync replTimeout="60000"/>

                                                    </clustering>

                                                    <jmxStatistics enabled="false"/>

                                                    <eviction maxEntries="-1" strategy="NONE"/>

                                                    <expiration maxIdle="-1"/>

                                 

                                 

                                          </default>

                                 

                                 

                                          <!-- ******************************************************************************* -->

                                          <!-- Individually configured "named" caches.                                         -->

                                          <!--                                                                                 -->

                                          <!-- While default configuration happens to be fine with similar settings across the -->

                                          <!-- three caches, they should generally be different in a production environment.   -->

                                          <!--                                                                                 -->

                                          <!-- Current settings could easily lead to OutOfMemory exception as a CacheStore     -->

                                          <!-- should be enabled, and maybe distribution is desired.                           -->

                                          <!-- ******************************************************************************* -->

                                 

                                 

                                          <!-- *************************************** -->

                                          <!--  Cache to store Lucene's file metadata  -->

                                          <!-- *************************************** -->

                                          <namedCache name="LuceneIndexesMetadata">

                                                    <loaders passivation="false" shared="true">

                                                              <loader class="com.fortysevendeg.commons.cloud.cluster.infinispan.CustomFileCacheStore" fetchPersistentState="true">

                                                                        <properties>

                                                                                  <property name="location" value="lucene_cache"/>

                                                                        </properties>

                                                              </loader>

                                                    </loaders>

                                                    <clustering mode="replication">

                                                              <stateRetrieval fetchInMemoryState="true" logFlushTimeout="60000"/>

                                                              <sync replTimeout="60000"/>

                                                    </clustering>

                                          </namedCache>

                                 

                                 

                                          <!-- **************************** -->

                                          <!--  Cache to store Lucene data  -->

                                          <!-- **************************** -->

                                          <namedCache name="LuceneIndexesData">

                                                    <loaders passivation="false" shared="true">

                                                              <loader class="com.fortysevendeg.commons.cloud.cluster.infinispan.CustomFileCacheStore" fetchPersistentState="true">

                                                                        <properties>

                                                                                  <property name="location" value="lucene_cache"/>

                                                                        </properties>

                                                              </loader>

                                                    </loaders>

                                                    <clustering mode="replication">

                                                              <stateRetrieval fetchInMemoryState="true" logFlushTimeout="60000"/>

                                                              <sync replTimeout="60000"/>

                                                    </clustering>

                                          </namedCache>

                                 

                                 

                                          <!-- ***************************** -->

                                          <!--  Cache to store Lucene locks  -->

                                          <!-- ***************************** -->

                                          <namedCache name="LuceneIndexesLocking">

                                                    <clustering mode="replication">

                                                              <stateRetrieval fetchInMemoryState="true" logFlushTimeout="60000"/>

                                                              <sync replTimeout="60000"/>

                                                    </clustering>

                                          </namedCache>

                                 

                                </infinispan>

                                 

                                {code}

                                jgroups-ec2-cache.xml

                                 

                                {code:xml}

                                 

                                <config xmlns="urn:org:jgroups"

                                                    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"

                                                    xsi:schemaLocation="urn:org:jgroups JGroups-2.12.xsd">

                                 

                                 

                                          <!--bind_addr="${jgroups.tcp.address:127.0.0.1}"-->

                                          <TCP

                                 

                                 

                                                              bind_port="${jgroups.tcp.port:8800}"

                                                              loopback="true"

                                                              port_range="30"

                                                              recv_buf_size="20000000"

                                                              send_buf_size="640000"

                                                              discard_incompatible_packets="true"

                                                              max_bundle_size="64000"

                                                              max_bundle_timeout="30"

                                                              enable_bundling="true"

                                                              use_send_queues="true"

                                                              sock_conn_timeout="300"

                                                              enable_diagnostics="false"

                                 

                                 

                                                              thread_pool.enabled="true"

                                                              thread_pool.min_threads="2"

                                                              thread_pool.max_threads="30"

                                                              thread_pool.keep_alive_time="5000"

                                                              thread_pool.queue_enabled="false"

                                                              thread_pool.queue_max_size="100"

                                                              thread_pool.rejection_policy="Discard"

                                 

                                 

                                                              oob_thread_pool.enabled="true"

                                                              oob_thread_pool.min_threads="2"

                                                              oob_thread_pool.max_threads="30"

                                                              oob_thread_pool.keep_alive_time="5000"

                                                              oob_thread_pool.queue_enabled="false"

                                                              oob_thread_pool.queue_max_size="100"

                                                              oob_thread_pool.rejection_policy="Discard"

                                                              />

                                 

                                 

                                          <S3_PING secret_access_key="${aws.secret.key}" access_key="${aws.access.key}" location="${jgroups.s3.bucket:jgroups}" />

                                 

                                 

                                          <MERGE2 max_interval="30000"

                                                              min_interval="10000"/>

                                          <FD_SOCK/>

                                          <FD timeout="3000" max_tries="3"/>

                                          <VERIFY_SUSPECT timeout="1500"/>

                                          <pbcast.NAKACK

                                                              use_mcast_xmit="false" gc_lag="0"

                                                              retransmit_timeout="300,600,1200,2400,4800"

                                                              discard_delivered_msgs="false"/>

                                          <UNICAST timeout="300,600,1200"/>

                                          <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000"

                                                                           max_bytes="400000"/>

                                 

                                 

                                          <pbcast.GMS print_local_addr="false" join_timeout="7000" view_bundling="true"/>

                                          <UFC max_credits="2000000" min_threshold="0.10"/>

                                          <MFC max_credits="2000000" min_threshold="0.10"/>

                                          <FRAG2 frag_size="60000"/>

                                          <pbcast.STREAMING_STATE_TRANSFER />

                                </config>

                                 

                                {code}

                                jgroups-ec2-search.xml

                                 

{code:xml}
<config xmlns="urn:org:jgroups"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="urn:org:jgroups JGroups-2.12.xsd">

    <!--bind_addr="${jgroups.tcp.address:127.0.0.1}"-->
    <TCP bind_port="${jgroups.tcp.port:7800}"
         loopback="true"
         port_range="30"
         recv_buf_size="20000000"
         send_buf_size="640000"
         discard_incompatible_packets="true"
         max_bundle_size="64000"
         max_bundle_timeout="30"
         enable_bundling="true"
         use_send_queues="true"
         sock_conn_timeout="300"
         enable_diagnostics="false"

         thread_pool.enabled="true"
         thread_pool.min_threads="2"
         thread_pool.max_threads="30"
         thread_pool.keep_alive_time="5000"
         thread_pool.queue_enabled="false"
         thread_pool.queue_max_size="100"
         thread_pool.rejection_policy="Discard"

         oob_thread_pool.enabled="true"
         oob_thread_pool.min_threads="2"
         oob_thread_pool.max_threads="30"
         oob_thread_pool.keep_alive_time="5000"
         oob_thread_pool.queue_enabled="false"
         oob_thread_pool.queue_max_size="100"
         oob_thread_pool.rejection_policy="Discard"/>

    <S3_PING secret_access_key="${aws.secret.key}" access_key="${aws.access.key}" location="${jgroups.s3.bucket:jgroups}"/>

    <MERGE2 max_interval="30000"
            min_interval="10000"/>
    <FD_SOCK/>
    <FD timeout="3000" max_tries="3"/>
    <VERIFY_SUSPECT timeout="1500"/>
    <pbcast.NAKACK use_mcast_xmit="false" gc_lag="0"
                   retransmit_timeout="300,600,1200,2400,4800"
                   discard_delivered_msgs="false"/>
    <UNICAST timeout="300,600,1200"/>
    <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000"
                   max_bytes="400000"/>
    <pbcast.GMS print_local_addr="false" join_timeout="7000" view_bundling="true"/>
    <UFC max_credits="2000000" min_threshold="0.10"/>
    <MFC max_credits="2000000" min_threshold="0.10"/>
    <FRAG2 frag_size="60000"/>
    <pbcast.STREAMING_STATE_TRANSFER/>
</config>
{code}

                                • 28. Re: Infinispan on EC2
                                  sannegrinovero

                                  Great work! And thanks for posting this.

                                  So you had to fix the Infinispan integration in Hibernate core 3.6.x, right? If you send a patch or a pull request we can include the fix, so that you won't have to maintain your own version.

                                   

                                  Personally I admit we regularly test the Infinispan/Hibernate Search integration (also via continuous integration), but I didn't run it with the 2nd level cache recently: older versions didn't have the externalizer and so were not affected by this bug.

                                  In fact we're working on Hibernate Search for Hibernate 4, but it will still take some time.

                                   

                                  I didn't understand your point 5. Which names are clashing exactly? All cache names in InfinispanDirectoryProvider are configurable, and they also default to "Lucene-something", which is hardly going to clash with the 2LC configurations?

                                  About the missing cache names in TRACE messages, I think that's a very good idea. Could you point out where you would have needed them? If you prefer to just change it and send a pull request, again I'll make sure we pull it in. Feel free to make any improvements you want directly in the code; if you have no time, point them out on JIRA so at least we can track/prioritize them, or if you're not sure, open a thread on the mailing list.

                                   

                                  I see you're using S3 for the JGroups ping; that's fine, but I would have expected people using Hibernate to use JDBC_PING, as you could point it to the same datasource used by Hibernate: one less authorization point to configure, and one less resource to manage.
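                                  As a rough sketch (assuming a JGroups 2.12 stack as in the configuration posted above; the JNDI name is a placeholder for whatever datasource Hibernate is already configured with), the S3_PING element could be swapped for something like:

                                  {code:xml}
<!-- Hypothetical sketch: replace S3_PING with JDBC_PING and reuse Hibernate's
     datasource. The JNDI name below is a placeholder for your environment. -->
<JDBC_PING datasource_jndi_name="java:comp/env/jdbc/MyDataSource" />
                                  {code}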

                                  About the cache loaders used for the index: since you're using a database, why not store the index in the database too? Or on S3, since you have that set up as well. Using the local disk doesn't sound very good, as you will lose the data when you scale down.
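                                  A minimal sketch of what a database-backed store for the index caches might look like with the Infinispan 5.x JDBC cache store (the JNDI location, table prefix, and column types below are placeholders to adapt to your schema and database):

                                  {code:xml}
<!-- Hypothetical sketch: back the Lucene index caches with a shared JDBC store
     instead of the local disk. All values below are placeholders. -->
<loaders passivation="false" shared="true">
   <loader class="org.infinispan.loaders.jdbc.stringbased.JdbcStringBasedCacheStore"
           fetchPersistentState="false" purgeOnStartup="false">
      <properties>
         <property name="stringsTableNamePrefix" value="ISPN_LUCENE"/>
         <property name="connectionFactoryClass"
                   value="org.infinispan.loaders.jdbc.connectionfactory.ManagedConnectionFactory"/>
         <property name="datasourceJndiLocation" value="java:comp/env/jdbc/MyDataSource"/>
         <property name="idColumnName" value="ID"/>
         <property name="idColumnType" value="VARCHAR(255)"/>
         <property name="dataColumnName" value="DATA"/>
         <property name="dataColumnType" value="BLOB"/>
         <property name="timestampColumnName" value="TIMESTAMP"/>
         <property name="timestampColumnType" value="BIGINT"/>
      </properties>
   </loader>
</loaders>
                                  {code}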

                                   

                                  I'm also thinking that it's just silly to need all these configuration files to basically say "cluster: yes"; alas, most options are really needed, so it won't be that simple any time soon. Still, if you have any suggestions on what we could do to make life a bit easier, I'll be glad to hear them.

                                  • 29. Re: Infinispan on EC2
                                    sannegrinovero

                                    Raúl, thinking more about this, it seems strange that such a bug as having the wrong externalizers in the Infinispan second level cache was applied only on Hibernate4. Could you find which JIRA that was and check if the patch applied to 4 could be easily ported? I know you fixed it too, but I'd like to see if there are other differences which might not be included in your patch. Also you should open a new JIRA issue for this and link to the previous issue so that others will be able to find it and figure out what happened.