1 Reply Latest reply on Jul 14, 2010 4:30 PM by mikemil

    MDBs using RemoteJMSProvider seem to stop under a large number of MDBs

    mikemil

      Hi,


      We are in the process of migrating from JBoss MQ to JBoss Messaging, running on JBoss 4.2.3.GA with the jboss-remoting.jar from Remoting 2.2.3. A little background: we have one Central server and many Store servers. We create transactions on the Store servers, which save to our database and then put the transaction data on a local store queue. On our Central server, we have a set of MDBs using a Remote JMS Provider definition to pull the transaction data out of each store queue (via the MDB) and save it to our Central server. We have about 400 stores running against our Central server. For the most part we are using the base configuration from the JBoss documentation, except that we sit behind a firewall, so in the remoting configuration we set the secondaryBindPort and secondaryConnectPort values to the same value, 27900, based on some previous responses from Ron Sigal in the Remoting forums.
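
      For anyone hitting the same firewall issue, the change goes in remoting-bisocket-service.xml. This is only a sketch of the relevant fragment, based on the stock JBoss Messaging bisocket connector config (everything not shown is left at the shipped defaults; 27900 is just our chosen firewall port):

          <mbean code="org.jboss.remoting.transport.Connector"
                 name="jboss.messaging:service=Connector,transport=bisocket">
            <attribute name="Configuration">
              <config>
                <invoker transport="bisocket">
                  <attribute name="serverBindAddress">${jboss.bind.address}</attribute>
                  <attribute name="serverBindPort">4457</attribute>
                  <!-- Firewall setup: pin the secondary (control) connection
                       to a fixed, known port the firewall can allow through. -->
                  <attribute name="secondaryBindPort">27900</attribute>
                  <attribute name="secondaryConnectPort">27900</attribute>
                  <!-- ...remaining attributes as shipped... -->
                </invoker>
                <handlers>
                  <handler subsystem="JMS">org.jboss.jms.server.remoting.JMSServerInvocationHandler</handler>
                </handlers>
              </config>
            </attribute>
          </mbean>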


      Our Central server deploys 400 MDBs, one pointed at each Store server, to pull the transaction data back. Once the Central server has started successfully, we intermittently encounter problems where transactions stop being delivered to the MDBs running on Central. This problem does not happen with lower numbers of Store servers, such as 200.
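
      For context, each of those MDBs is wired to its store with the usual provider-loader-plus-invoker-binding pair. A sketch of one store's pair follows; the provider name, host, and pool sizes are placeholders rather than our exact values, and this is the EJB 2.x container invoker style (adjust if you are on the EJB3 resource adapter):

          <!-- jms-ds.xml: one JMSProviderLoader per store -->
          <mbean code="org.jboss.jms.jndi.JMSProviderLoader"
                 name="jboss.messaging:service=JMSProviderLoader,name=Store1JMSProvider">
            <attribute name="ProviderName">Store1JMSProvider</attribute>
            <attribute name="ProviderAdapterClass">org.jboss.jms.jndi.JNDIProviderAdapter</attribute>
            <attribute name="FactoryRef">XAConnectionFactory</attribute>
            <attribute name="QueueFactoryRef">XAConnectionFactory</attribute>
            <attribute name="TopicFactoryRef">XAConnectionFactory</attribute>
            <attribute name="Properties">
              java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
              java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces
              java.naming.provider.url=store1.example.com:1099
            </attribute>
          </mbean>

          <!-- jboss.xml: point the MDB container at that provider -->
          <invoker-proxy-binding>
            <name>store1-message-driven-bean</name>
            <invoker-mbean>default</invoker-mbean>
            <proxy-factory>org.jboss.ejb.plugins.jms.JMSContainerInvoker</proxy-factory>
            <proxy-factory-config>
              <JMSProviderAdapterJNDI>Store1JMSProvider</JMSProviderAdapterJNDI>
              <ServerSessionPoolFactoryJNDI>StdJMSPool</ServerSessionPoolFactoryJNDI>
              <MaximumSize>15</MaximumSize>
              <MaxMessages>1</MaxMessages>
            </proxy-factory-config>
          </invoker-proxy-binding>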


      Using JConsole, we have noticed that each MDB pointing at a store gets 4-5 threads created for it (a quick thread-count sketch follows the list):

      1) Connection Consumer for dest JBossQueue(mdb name here)

      2) control:Socket[addr=store1/ipaddr of store1, port=27900, localport=xyz...]  --> appears only while the store is connected

      3) control:Socket[addr=store1/ipaddr of store1, port=27900, localport=abc...]  --> appears only while the store is connected

      4) WorkerThread#[ipaddr of store1]  --> started the first time transaction data arrives from store 1; does not appear to go away

      5) JMS SessionPool Worker-z  --> the thread the MDB code actually runs in
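
      If anyone wants to tally these without scrolling through JConsole, counting by thread-name prefix works. Just a sketch using the standard ThreadMXBean; it counts the threads of whatever JVM it runs in, so it has to execute inside the server (the main() here is only to show the logic, and the prefixes are the ones listed above):

          import java.lang.management.ManagementFactory;
          import java.lang.management.ThreadInfo;
          import java.lang.management.ThreadMXBean;

          public class MessagingThreadTally {
              public static void main(String[] args) {
                  ThreadMXBean mx = ManagementFactory.getThreadMXBean();
                  int consumers = 0, control = 0, workers = 0, pool = 0;
                  for (ThreadInfo info : mx.getThreadInfo(mx.getAllThreadIds())) {
                      if (info == null) continue; // thread exited between snapshot and lookup
                      String name = info.getThreadName();
                      if (name.startsWith("Connection Consumer"))  consumers++;
                      else if (name.startsWith("control:"))        control++;
                      else if (name.startsWith("WorkerThread#"))   workers++;
                      else if (name.startsWith("JMS SessionPool")) pool++;
                  }
                  System.out.printf("consumers=%d control=%d workers=%d sessionPool=%d%n",
                                    consumers, control, workers, pool);
              }
          }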


      Is there some limitation on the number of remote MDBs that Messaging can handle reliably? The exact same code and configuration works with smaller numbers of MDBs. With 400 MDBs, those threads alone could add up to roughly 2000 (400 stores x 5 threads each), assuming all stores are connected to our Central server. Does this pose any problems for Messaging?



        • 1. Re: MDBs using RemoteJMSProvider seem to stop under a large number of MDBs
          mikemil

          FYI for anyone who may find my initial post:


          I think we have worked through our problem, which at this point we will categorize as 'bad architecture'. We had a hub & spoke architecture with over 800 spokes. The hub had a remote MDB pointing at each spoke, and each spoke had a remote MDB pointing at the hub. The main problem was at the hub: using Messaging with the Remoting bisocket transport meant that, if all spokes were connected, the hub ended up with over 2400 threads (800 spokes, each getting a Connection Consumer thread plus 2 separate control socket threads because of the firewall between hub and spokes), on top of everything else our hub server was trying to do.


          Data flowed correctly for some time, then things appeared to break, and apparently we were getting networking problems on port 4457. We have changed our architecture to no longer use the remote MDBs, and it seems much more stable.


          Not sure exactly what in the flow was broken, but things are much more stable without the remote MDBs - it may simply have been too many remote MDBs.