
    MDB outbound config in cluster

    sv_srinivaas

Hi,

       

I'm using HornetQ 2.2.5-Final, the OS is Windows XP, and the app server is JBoss 5.1.0 GA.

       

I have a question regarding the RemoteJmsXA configuration. Can I give a comma-separated list of hosts and ports in the outbound connection properties in the -ds.xml, as shown below, so that my MDBs can send the reply message to any of the available JMS nodes in the cluster? I don't see any example that specifies a list of host/port pairs in the outbound config of the JCA adapter, hence I thought of asking whether this is something we should not do.

       

<tx-connection-factory>
   <jndi-name>RemoteJmsXA</jndi-name>
   ...
   <config-property name="ConnectorClassName" type="java.lang.String">org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</config-property>
   <config-property name="ConnectionParameters" type="java.lang.String">host=jms1;port=5446, host=jms2;port=5446</config-property>
   ...
</tx-connection-factory>
       
      

       

Cluster config info: I've configured two live JMS nodes in a cluster and a third node (standalone and not part of the cluster) where I have the MDB deployed to consume messages from the JMS cluster.
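Roughly, each live node's hornetq-configuration.xml declares a connector for itself plus a cluster connection that lists the other node as a static connector. The snippet below is just a sketch of that shape; the connector names, hosts, and ports are placeholders for my setup, not the exact file I'm running:

<connectors>
   <!-- connector this node advertises to the cluster (placeholder host/port) -->
   <connector name="netty">
      <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
      <param key="host" value="jms1"/>
      <param key="port" value="5445"/>
   </connector>
   <!-- connector pointing at the other live node -->
   <connector name="netty-jms2">
      <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
      <param key="host" value="jms2"/>
      <param key="port" value="5445"/>
   </connector>
</connectors>

<cluster-connections>
   <cluster-connection name="my-cluster">
      <address>jms</address>
      <connector-ref>netty</connector-ref>
      <!-- static list of the other cluster members instead of UDP discovery -->
      <static-connectors>
         <connector-ref>netty-jms2</connector-ref>
      </static-connectors>
   </cluster-connection>
</cluster-connections>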

       

On starting the JMS cluster followed by the MDB, I see the consumer count on jms node1 as 7 and on jms node2 as 8 (as the default MDB pool size is 15). Is this the expected behavior?
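If I understand it right, that count comes from the resource adapter's maxSession activation property (default 15), with the 15 sessions getting spread across the cluster nodes. A minimal sketch of setting it explicitly in ejb-jar.xml would be something like this (the MDB name here is a placeholder):

<message-driven>
   <ejb-name>MyReplyMDB</ejb-name>
   <activation-config>
      <!-- number of concurrent JMS sessions/consumers the adapter creates for this MDB -->
      <activation-config-property>
         <activation-config-property-name>maxSession</activation-config-property-name>
         <activation-config-property-value>15</activation-config-property-value>
      </activation-config-property>
   </activation-config>
</message-driven>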

       

Then I sent 2 messages from the client (a Java application); they got load balanced, and each JMS node got 1 message. Since the MDBs were configured to connect to both nodes in the cluster, both messages were processed simultaneously. So far everything looks fine.

       

Now when the MDB tries to send the reply message (using RemoteJmsXA), there is no issue as long as both JMS nodes are alive. But if the JMS node specified in the outbound config goes down (after the message is read and before the reply is sent), then the MDB fails to send the reply message, which is why I would like to know whether I can specify a list of JMS nodes in the outbound config as well.

       

      Thanks!

        • 1. Re: MDB outbound config in cluster
          gaohoward

I don't think that works. However, why do you want an MDB to send a message to different nodes in the cluster? What's your use case?

          • 2. Re: MDB outbound config in cluster
            sv_srinivaas

Yong, thanks so much for the quick response. My requirement is to have a cluster of JMS nodes containing request and reply queues. An MDB on a remote machine should consume messages from the request queue on any of the nodes in the cluster, process them (within a transaction), and send the reply message back to the reply queue on whichever node is available in the cluster.

             

            Currently this is what is happening:

1. Sent a message to the jms cluster; the msg went to jms node2, and the MDB consumed it and sent the reply msg back to jms node2

            2. Sent another message and this time msg was sent to jms node1

            3. While MDB was processing the message, I killed jms node1

4. After processing the msg, the MDB failed to send the reply message even though jms node2 (which is part of the cluster) was still alive. Instead I see the below exception in the MDB server logs, and I also don't see the message being redelivered to jms node2.

             

            INFO [STDOUT] message This is a text message

            INFO [STDOUT] Kill a node and press enter

            WARN [RemotingConnectionImpl] Connection failure has been detected: The connection was disconnected because of server shutdown [code=4]

            WARN [loggerI18N] Can't find resource for bundle java.util.PropertyResourceBundle, key com.arjuna.ats.internal.jta.transaction.arjunacore.timeouterror: [key='com.arjuna.ats.internal.jta.transaction.arjunacore.timeouterror']TransactionImple.enlistResource, XAException.XAER_RMERR, < 131075, 28, 26, 494545535110110048495050584951515658521011015654495351585110145535110110048495050584951515658521011015654495351585252 >,

             

If this is an issue with the transaction / message acknowledgment, shouldn't the MDB roll back the transaction and redeliver the request msg back to the request queue, so that it gets reprocessed by another MDB instance from jms node2?

             

Note: I've attached the MDB code and XMLs.

            • 3. Re: MDB outbound config in cluster
              gaohoward

              In that case I think you probably need to set up backup servers so that the client can fail over to the backup if the live server is killed.
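Roughly, that means marking the second server as a shared-store backup in its hornetq-configuration.xml. A minimal sketch (the directory path is a placeholder; see the live-backup examples in the distro for the full files):

<!-- on the backup node -->
<backup>true</backup>
<shared-store>true</shared-store>
<!-- journal, bindings and large-message directories must point at the same shared storage as the live node -->
<journal-directory>/shared/journal</journal-directory>

<!-- on the live node, so clients fail over even on a clean shutdown -->
<failover-on-shutdown>true</failover-on-shutdown>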

              • 4. Re: MDB outbound config in cluster
                sv_srinivaas

                Yong,

Thanks for your suggestions. I moved to the live-backup model and it works fine so far, except for a minor issue. Again, this is not an issue with live-backup as such; rather, it is an issue with client reattachment.

                 

                 

I'm using HornetQ 2.2.5-Final, JBoss 5.1.0 GA, and the OS is Windows XP.

                 

                 

1. Just started the live JMS node.

2. Sent a few messages using MDBRemoteFailoverStaticClientExample.java (which comes with the distro) to the live node and it worked fine.

3. Then I pulled the network cable of the live node for 15 to 20 seconds and reconnected it.

4. Tried sending more messages using MDBRemoteFailoverStaticClientExample.java and got the below exception from the sender Java client.

                 

                 

                run:

                [java] Sending message: This is message 1 - gid : 1

                [java] Exception in thread "main" javax.jms.JMSException: The specified network name is no longer available

                [java] at org.hornetq.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:286)

                [java] at org.hornetq.core.client.impl.ClientProducerImpl.doSend(ClientProducerImpl.java:287)

                [java] at org.hornetq.core.client.impl.ClientProducerImpl.send(ClientProducerImpl.java:142)

                [java] at org.hornetq.jms.client.HornetQMessageProducer.doSend(HornetQMessageProducer.java:451)

                [java] at org.hornetq.jms.client.HornetQMessageProducer.send(HornetQMessageProducer.java:199)

                [java] at org.hornetq.javaee.example.MDBRemoteFailoverStaticClientGroupMessageExample.main(MDBRemoteFailoverStaticClientGroupMessageExample.java:62)

                [java] Caused by: HornetQException[errorCode=6 message=The specified network name is no longer available]

                [java] ... 6 more

                [java] Java Result: 1

                 

                 

Please find the XMLs for the live node attached.

I did all the config mentioned in the reattach-node example that comes with the distro, but I still can't get this to work. Please help!
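For reference, these are the kinds of settings I copied from the reattach-node example into the connection factory in hornetq-jms.xml (a sketch with my values, not the exact attached file):

<connection-factory name="NettyConnectionFactory">
   <connectors>
      <connector-ref connector-name="netty"/>
   </connectors>
   <entries>
      <entry name="/ConnectionFactory"/>
   </entries>
   <!-- buffer confirmed commands so the client can re-attach after a short network outage -->
   <confirmation-window-size>1048576</confirmation-window-size>
   <retry-interval>1000</retry-interval>
   <retry-interval-multiplier>1.0</retry-interval-multiplier>
   <reconnect-attempts>-1</reconnect-attempts>
</connection-factory>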

                • 5. Re: MDB outbound config in cluster
                  gaohoward

What does your network infrastructure look like? Did you try using IP addresses instead of host names in the connectors?

                  • 6. Re: MDB outbound config in cluster
                    sv_srinivaas

Hi, I'm using a Windows NFS shared drive as the common storage location for both live and backup. Just now I found that the connection to the NFS share gets terminated the moment I pull the cable from the live node, and I do see a warning message from Windows that the network path to the NFS drive is no longer valid. Is this something specific to Windows NFS?

                     

I don't see any issue when killing or stopping the live node; it fails over to the backup without any problem, and even multiple failback/failover cycles work fine. The MDB and Spring listeners are able to fail over without any issues. Only the sender Java client application hits this problem, and only when the network cable is pulled from the live node for a few seconds.

Also, I'm using only IP addresses and not host names.

                     

Note: Currently I'm testing large messages, and with this I again get the below exception while sending from the Java client. I've set <min-large-message-size>10240</min-large-message-size> in the JMS connection factory (a sketch of that entry is shown below). I'm using the example code that comes with the distro for sending a large message (I commented out the consumer code since I have an MDB for that).
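Here is roughly what that entry looks like (just the relevant bit, not the full file):

<connection-factory name="NettyConnectionFactory">
   ...
   <!-- messages above 10 KB are treated as large messages -->
   <min-large-message-size>10240</min-large-message-size>
</connection-factory>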

                     

                    Are these two issues related by any means?

                     

                    javax.jms.JMSException
                     at org.hornetq.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:286)
                     at org.hornetq.core.client.impl.ClientProducerImpl.largeMessageSendStreamed(ClientProducerImpl.java:542)
                     at org.hornetq.core.client.impl.ClientProducerImpl.largeMessageSend(ClientProducerImpl.java:371)
                     at org.hornetq.core.client.impl.ClientProducerImpl.doSend(ClientProducerImpl.java:279)
                     at org.hornetq.core.client.impl.ClientProducerImpl.send(ClientProducerImpl.java:142)
                     at org.hornetq.jms.client.HornetQMessageProducer.doSend(HornetQMessageProducer.java:451)
                     at org.hornetq.jms.client.HornetQMessageProducer.send(HornetQMessageProducer.java:199)
                     at com.jms.queue.LargeMsgSender.sendMessage(LargeMsgSender.java:91)
                     at com.jms.queue.LargeMsgSender.runExample(LargeMsgSender.java:59)
                     at com.jms.queue.LargeMsgSender.main(LargeMsgSender.java:27)
                    Caused by: HornetQException[errorCode=0 message=null]
                     ... 10 more
                    
                    

                     


                    • 7. Re: MDB outbound config in cluster
                      hgupta

                      Hi Venkat,

                       

Were you able to resolve the issue that you mentioned here? I am facing the same issue with my HornetQ JMS cluster with an MDB connecting from a remote machine.

                       

                      Thanks,

                      Himanshu

                      • 8. Re: MDB outbound config in cluster
                        clebert.suconic

Himanshu: What version are you using? All you need to do is provide the remote configuration to the Resource Adapter. The issue here was probably something that was solved in a later version.
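That is, the connector properties in the hornetq-ra.rar's META-INF/ra.xml (or their overrides in your deployment descriptors) just need to point at the remote server, along these lines (host/port here are placeholders):

<config-property>
   <config-property-name>ConnectorClassName</config-property-name>
   <config-property-type>java.lang.String</config-property-type>
   <config-property-value>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</config-property-value>
</config-property>
<config-property>
   <config-property-name>ConnectionParameters</config-property-name>
   <config-property-type>java.lang.String</config-property-type>
   <config-property-value>host=jms1;port=5445</config-property-value>
</config-property>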

                         

Provide us with some of the configuration you're using so we can figure out what's going wrong.

                        • 9. Re: MDB outbound config in cluster
                          hgupta

                          Hi Clebert,

                           

                          Thanks for the reply.

                           

                          Please see the following topic where I am trying to get closure on the issue: https://community.jboss.org/message/751516#751516

                           

The reason I commented and asked for help here is that I am seeing the same issue that Venkat reported in his second comment.

                           

                          Thanks,

                            Himanshu

                          • 10. Re: MDB outbound config in cluster
                            sv_srinivaas

                            Himanshu,

                             

I had issues with JBoss 5.1.0 and HornetQ 2.2.5-Final, but I was able to make the MDBs work in a JMS cluster using JBoss 7.1.1-Final and HornetQ 2.2.13-Final. I used the standalone configuration with static connectors to make this work. Please find the sample XML attached; a stripped-down sketch of the relevant part is below.
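In short, the messaging subsystem in standalone-full.xml gets a pooled-connection-factory backed by netty connectors that point at the remote JMS nodes (the names, hosts and ports below are placeholders from my PoC, so adjust them for your setup):

<!-- messaging subsystem: connectors to the two remote JMS nodes -->
<connectors>
   <netty-connector name="remote-jms1" socket-binding="remote-jms1"/>
   <netty-connector name="remote-jms2" socket-binding="remote-jms2"/>
</connectors>

<!-- the resource adapter the MDBs and RemoteJmsXA lookups use -->
<pooled-connection-factory name="remote-hornetq-ra">
   <connectors>
      <connector-ref connector-name="remote-jms1"/>
      <connector-ref connector-name="remote-jms2"/>
   </connectors>
   <entries>
      <entry name="java:/RemoteJmsXA"/>
   </entries>
</pooled-connection-factory>

<!-- socket-binding-group: where the remote nodes actually live -->
<outbound-socket-binding name="remote-jms1">
   <remote-destination host="jms1" port="5445"/>
</outbound-socket-binding>
<outbound-socket-binding name="remote-jms2">
   <remote-destination host="jms2" port="5445"/>
</outbound-socket-binding>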

                             

                            Srinivaas Venkat

                            • 11. Re: MDB outbound config in cluster
                              hgupta

Thanks Venkat. Can you please also share the configuration changes (annotations or ejb xml file) you made for this?

                              • 12. Re: MDB outbound config in cluster
                                sv_srinivaas

Himanshu, I just did a PoC on the JMS cluster and hence did not configure any EJB descriptors or the like. I just had an MDB with a sysout, and the objective was to check the load balancing and failover of the MDBs.
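For what it's worth, the only EJB-level configuration you would typically add on top of that is pointing the MDB at the request queue (and at the remote resource adapter, either by making it the default adapter in the ejb3 subsystem or with the JBoss-specific @ResourceAdapter annotation). A minimal ejb-jar.xml sketch, with placeholder names since my PoC did not actually use a descriptor:

<message-driven>
   <ejb-name>TestMDB</ejb-name>
   <activation-config>
      <activation-config-property>
         <activation-config-property-name>destinationType</activation-config-property-name>
         <activation-config-property-value>javax.jms.Queue</activation-config-property-value>
      </activation-config-property>
      <activation-config-property>
         <activation-config-property-name>destination</activation-config-property-name>
         <activation-config-property-value>queue/requestQueue</activation-config-property-value>
      </activation-config-property>
   </activation-config>
</message-driven>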