18 Replies. Latest reply: May 1, 2012 5:07 PM by craigm123
  • 15. Re: Journals do not get cleaned up fast enough under high load
    Clebert Suconic Master

    You have configured a static cluster, so the bridge to the other node will be created even if the node isn't connected, and messages will build up on the bridge queue.

     

    If you configure paging at # (the global level), the bridge queue will also be paged; otherwise everything will just build up on that queue.

     

    You have to configure the cluster properly so the nodes can connect to each other.
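
     

    (A static HornetQ cluster is typically wired up along these lines in hornetq-configuration.xml. This is only a sketch: the connector names, host, and port below are placeholders, not values from the attached configs.)

    ```xml
    <!-- Connector describing how other nodes (and their bridges) reach this node -->
    <connectors>
       <connector name="netty-connector">
          <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
          <param key="host" value="localhost"/>
          <param key="port" value="5445"/>
       </connector>
    </connectors>

    <!-- Static cluster connection: the other node's connector is listed explicitly.
         "other-node-connector" must also be declared under <connectors>, pointing
         at the other node's host/port. -->
    <cluster-connections>
       <cluster-connection name="my-cluster">
          <address>jms</address>
          <connector-ref>netty-connector</connector-ref>
          <static-connectors>
             <connector-ref>other-node-connector</connector-ref>
          </static-connectors>
       </cluster-connection>
    </cluster-connections>
    ```

    With a static list like this, each node opens a bridge to every listed connector; if the host/port is wrong, the bridge queue fills up exactly as described above.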

  • 16. Re: Journals do not get cleaned up fast enough under high load
    craigm123 Newbie

    I gave that a shot.  Here are the new address settings for both mb1/cfg/hornetq-configuration.xml and mb2/cfg/hornetq-configuration.xml.

     

       <address-settings>

          <address-setting match="#">

             <redistribution-delay>0</redistribution-delay>

             <max-size-bytes>${message.memory.max:10485760}</max-size-bytes>

             <page-size-bytes>5242880</page-size-bytes>

             <address-full-policy>PAGE</address-full-policy>

          </address-setting>

       </address-settings>
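
     

    (Side note: the ${message.memory.max:10485760} placeholder means max-size-bytes defaults to 10 MiB unless the message.memory.max system property is set. Assuming the run script passes JVM options through, an override might look like the hypothetical invocation below.)

    ```shell
    # Hypothetical: raise the global max-size-bytes to 50 MiB by setting the
    # system property the config substitutes. JAVA_OPTS passthrough by the
    # script is an assumption, not something verified here.
    JAVA_OPTS="-Dmessage.memory.max=52428800" ./run-clustered.sh
    ```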

     

    I again ran the ./run-clustered.sh (same as above) and still see the journal directory of one of the brokers growing well beyond what I expect. 

     

    Every 1.0s: du -ch mb1 mb2                                                                                    Tue May  1 10:13:57 2012

     

    12K     mb1/scripts

    4.0K    mb1/data/large-messages

    397M    mb1/data/paging/d158a4c9-93b0-11e1-ada8-a1259ed0a225

    397M    mb1/data/paging

    4.1M    mb1/data/bindings

    501M    mb1/data/journal

    901M    mb1/data

    32K     mb1/cfg

    901M    mb1

    12K     mb2/scripts

    4.0K    mb2/data/large-messages

    394M    mb2/data/paging/d24bb767-93b0-11e1-bc57-6bee5219c840

    394M    mb2/data/paging

    4.1M    mb2/data/bindings

    111M    mb2/data/journal

    508M    mb2/data

    32K     mb2/cfg

    508M    mb2

    1.4G    total

     

    And after I hit Ctrl+C to end the sending client, the journal size remains the same.

     

    You mentioned I need to properly configure the cluster nodes so they can connect to each other.  What do I have misconfigured?  I see messages going to both paging directories of the two nodes and the log messages indicate they are clustered.

  • 17. Re: Journals do not get cleaned up fast enough under high load
    Clebert Suconic Master

    >>  I see messages going to both paging directories of the two nodes and the log messages indicate they are clustered.

     

     

    That means that the other node is not receiving the messages. The bridge queue should be emptied as the bridge sends to the other node.

     

    It seems to be a connectivity issue.

  • 18. Re: Journals do not get cleaned up fast enough under high load
    craigm123 Newbie

    I set up another example to show that both hornetq nodes are indeed clustered and connected.

     

    In the new attachment, run the run-clustered-withsender.sh script.  This will do the following:

    - Build the client java code

    - Start both hornetq nodes (same configuration as before)

    - Start the sending application, which will send 100k messages to one of the two hornetq nodes (I have specifically set it up to connect to mb1)

    - Stop the sending application

    - Start the listening application, which will receive all 100k messages from the other hornetq node (I have specifically set it up to connect to mb2)

    - Wait for all the messages to be received, then shut down

    - Stop the hornetq nodes

     

    To execute the above, do the following:

    > ./run-clustered-withsender.sh

     

    When sending traffic, you can observe the paging directories on both nodes grow (as expected).  On mb1, the journal directory also grows (I'm still not sure why this occurs).

     

    When receiving traffic, the paging directory on mb1 shrinks right away (messages are forwarded to mb2, which has the listener, as expected), and the mb2 paging directory shrinks as messages are consumed.

     

    Getting back to my original question and concern, I simply need a way to limit the size of the journal directory.  Is there a way to set a hard limit on the size of the journal directory or on the number of journal files that can be created?
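
     

    (For context, the journal-related knobs I'm aware of in hornetq-configuration.xml tune file size and compaction rather than imposing a hard cap; the values below are the documented defaults, not my settings.)

    ```xml
    <journal-file-size>10485760</journal-file-size>             <!-- size of each journal file (10 MiB) -->
    <journal-min-files>2</journal-min-files>                    <!-- number of files pre-created -->
    <journal-compact-min-files>10</journal-compact-min-files>   <!-- compact only once this many files exist -->
    <journal-compact-percentage>30</journal-compact-percentage> <!-- compact when live data falls below 30% -->
    ```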
