    Journals do not get cleaned up fast enough under high load

    craigm123

      I have a two-node symmetric cluster (without backup nodes) where both nodes write their journals to a ramdisk.  The ramdisk is set to 1 GB on both machines.  Under high load, when both machines are taking traffic (both have producers and consumers), I see new journal files created very frequently and only cleaned up later.  I've had to raise journal-compact-percentage to 75 just so the journals are compacted well before I run out of space on the ramdisk.  For example, over a couple of days I see the journal id (the id in the file name) get into the 200k to 300k range.

       

      Oddly enough, when I force all consumers and producers to be on just one of the two nodes, I see very little new journal creation and cleanup. 

       

      I have 22 queues, of which about 6 have roughly 2000 messages going in and out every second (each machine handles about 60 Mbps of traffic in and out, split evenly).  Each queue has max-size-bytes set to 10485760, so the journals shouldn't fill up with much more than 220 MiB of live message data, right?  My journal-min-files is set to 10 and my journal file size is 10485760, so I start out with 100 MiB of journal space.
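
      For reference, here's roughly what the relevant journal settings look like in my hornetq-configuration.xml (a sketch of the values described above, not a verbatim copy of the attached config):

            <!-- pre-allocate 10 x 10 MiB = 100 MiB of journal files on the ramdisk -->
            <journal-min-files>10</journal-min-files>
            <journal-file-size>10485760</journal-file-size>
            <!-- raised from the default (30, if I recall) so compaction kicks in earlier -->
            <journal-compact-percentage>75</journal-compact-percentage>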

       

      I also see this message in the logs, but I'm not sure if it's related -

       

      [2012-04-28 00:16:57,325] 183289211 WARN  [Thread-0 (HornetQ-server-HornetQServerImpl::serverUUID=b67431cf-8f1c-11e1-b7e8-000c29848f2f-31583366)] org.hornetq.core.server.impl.QueueImpl - Couldn't finish waiting executors. Try increasing the thread pool size

      java.lang.Exception: trace

              at org.hornetq.core.server.impl.QueueImpl.flushExecutor(QueueImpl.java:549)

              at org.hornetq.core.server.impl.QueueImpl.addTail(QueueImpl.java:441)

              at org.hornetq.core.postoffice.impl.PostOfficeImpl.addReferences(PostOfficeImpl.java:1255)

              at org.hornetq.core.postoffice.impl.PostOfficeImpl.access$200(PostOfficeImpl.java:79)

              at org.hornetq.core.postoffice.impl.PostOfficeImpl$1.done(PostOfficeImpl.java:1077)

              at org.hornetq.core.persistence.impl.journal.OperationContextImpl$1.run(OperationContextImpl.java:239)

              at org.hornetq.utils.OrderedExecutorFactory$OrderedExecutor$1.run(OrderedExecutorFactory.java:100)

              at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)

              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

              at java.lang.Thread.run(Thread.java:662)

       

      Attaching my HornetQ config files, as well as the start script I use to bring it up (which reads the local.properties file) and the HornetQ log from when the ramdisk filled up.

        • 1. Re: Journals do not get cleaned up fast enough under high load
          clebert.suconic

          What version?

           

          There was a bug at some point causing this.

           

           

          You need to provide a thread dump. The queue was busy doing something else; it seems like a deadlock.

          • 2. Re: Journals do not get cleaned up fast enough under high load
            craigm123

            Thanks for responding so quickly!

             

            I'm using 2.2.11.  I was able to reproduce the issue and have stack traces from both nodes in the cluster.
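
            In case it helps with reproducing, I grabbed the dumps using the JDK's jstack (any standard JVM thread-dump method would do; the pid lookup here is just illustrative):

            > jps -l | grep HornetQBootstrapServer    # find the broker's pid
            > jstack <pid> > node1-threads.txt        # dump its threads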

             

            Here are the steps I took to reproduce:

            - Started both nodes and verified they were clustered

            - Started the producer application that sends traffic to two different queues

            - Waited until the queue sizes on those two queues were around 1,000,000 items each

            - Started the consumer applications - they take data off and put data back into other queues (it's a modify-and-pass-along sort of app)

            - Watched the disk space on the brokers and saw the ramdisk fill up on one HornetQ node, then the other

             

            A colleague of mine pointed out that this looks like HORNETQ-798.  I only see the issue after items in the queue start getting depaged.  I'll give 2.2.16 a try and see if it resolves the issue.

             

            Craig

            • 3. Re: Journals do not get cleaned up fast enough under high load
              craigm123

              I tried paging on 2.2.14 and 2.2.16 (but kept my clients on 2.2.11) and managed to reproduce the issue on both versions.  This time I only had a producer, no consumer.  I sent in about 10k messages per second to two different queues (5k per queue) and saw the ramdisk fill all the way up.  It was paging too, but the journals were getting created faster than the paging could keep up.  The paging dir is on a RAID 10 set of 15k SAS drives, so it can't support a huge amount of IO.

               

              Attaching the logs and thread dump for each run.

               

              Craig

              • 4. Re: Journals do not get cleaned up fast enough under high load
                clebert.suconic

                That's the issue I was referring to. You should try 2.2.14 or beyond. Also upgrade the clients.

                 

                Are you really using paging? It seems you don't have paging configured at all. Maybe you are running out of memory or something because of the missing configuration.

                • 5. Re: Journals do not get cleaned up fast enough under high load
                  craigm123

                  I upgraded the clients to 2.2.16.  I'm definitely paging, though maybe there's something I'm missing in the config?

                   

                        <address-setting match="jms.#">

                           <redistribution-delay>0</redistribution-delay>

                           <max-size-bytes>10485760</max-size-bytes>

                           <page-size-bytes>5242880</page-size-bytes>

                           <address-full-policy>PAGE</address-full-policy>

                        </address-setting>

                   

                  This is what the journal and paging directories look like after I have two different queues with 300k items in each (each item is around 1-4 KB in size).  I have purposely not turned on any consumers.

                   

                  [17:48 root@myhost messagebroker-feedback]$ ls /ramdisk/data/journal/

                  hornetq-data-10.hq  hornetq-data-20.hq  hornetq-data-30.hq  hornetq-data-40.hq  hornetq-data-50.hq  hornetq-data-60.hq  hornetq-data-70.hq

                  hornetq-data-11.hq  hornetq-data-21.hq  hornetq-data-31.hq  hornetq-data-41.hq  hornetq-data-51.hq  hornetq-data-61.hq  hornetq-data-71.hq

                  hornetq-data-12.hq  hornetq-data-22.hq  hornetq-data-32.hq  hornetq-data-42.hq  hornetq-data-52.hq  hornetq-data-62.hq  hornetq-data-72.hq

                  hornetq-data-13.hq  hornetq-data-23.hq  hornetq-data-33.hq  hornetq-data-43.hq  hornetq-data-53.hq  hornetq-data-63.hq  hornetq-data-73.hq

                  hornetq-data-14.hq  hornetq-data-24.hq  hornetq-data-34.hq  hornetq-data-44.hq  hornetq-data-54.hq  hornetq-data-64.hq  hornetq-data-74.hq

                  hornetq-data-15.hq  hornetq-data-25.hq  hornetq-data-35.hq  hornetq-data-45.hq  hornetq-data-55.hq  hornetq-data-65.hq  hornetq-data-75.hq

                  hornetq-data-16.hq  hornetq-data-26.hq  hornetq-data-36.hq  hornetq-data-46.hq  hornetq-data-56.hq  hornetq-data-66.hq  hornetq-data-7.hq

                  hornetq-data-17.hq  hornetq-data-27.hq  hornetq-data-37.hq  hornetq-data-47.hq  hornetq-data-57.hq  hornetq-data-67.hq  hornetq-data-8.hq

                  hornetq-data-18.hq  hornetq-data-28.hq  hornetq-data-38.hq  hornetq-data-48.hq  hornetq-data-58.hq  hornetq-data-68.hq  hornetq-data-9.hq

                  hornetq-data-19.hq  hornetq-data-29.hq  hornetq-data-39.hq  hornetq-data-49.hq  hornetq-data-59.hq  hornetq-data-69.hq  server.1.lock

                  hornetq-data-1.hq   hornetq-data-2.hq   hornetq-data-3.hq   hornetq-data-4.hq   hornetq-data-5.hq   hornetq-data-6.hq   server.lock

                   

                  [17:48 root@myhost messagebroker-feedback]$ ls -R data/paging/

                  data/paging/:

                  556dc63f-92ec-11e1-af95-8d8be61396ef  56821c70-92ec-11e1-af95-8d8be61396ef

                   

                   

                  data/paging/556dc63f-92ec-11e1-af95-8d8be61396ef:

                  000000001.page  000000008.page  000000015.page  000000022.page  000000029.page  000000036.page  000000043.page  000000050.page  000000057.page

                  000000002.page  000000009.page  000000016.page  000000023.page  000000030.page  000000037.page  000000044.page  000000051.page  000000058.page

                  000000003.page  000000010.page  000000017.page  000000024.page  000000031.page  000000038.page  000000045.page  000000052.page  address.txt

                  000000004.page  000000011.page  000000018.page  000000025.page  000000032.page  000000039.page  000000046.page  000000053.page

                  000000005.page  000000012.page  000000019.page  000000026.page  000000033.page  000000040.page  000000047.page  000000054.page

                  000000006.page  000000013.page  000000020.page  000000027.page  000000034.page  000000041.page  000000048.page  000000055.page

                  000000007.page  000000014.page  000000021.page  000000028.page  000000035.page  000000042.page  000000049.page  000000056.page

                   

                   

                  data/paging/56821c70-92ec-11e1-af95-8d8be61396ef:

                  000000001.page  000000008.page  000000015.page  000000022.page  000000029.page  000000036.page  000000043.page  000000050.page  000000057.page

                  000000002.page  000000009.page  000000016.page  000000023.page  000000030.page  000000037.page  000000044.page  000000051.page  address.txt

                  000000003.page  000000010.page  000000017.page  000000024.page  000000031.page  000000038.page  000000045.page  000000052.page

                  000000004.page  000000011.page  000000018.page  000000025.page  000000032.page  000000039.page  000000046.page  000000053.page

                  000000005.page  000000012.page  000000019.page  000000026.page  000000033.page  000000040.page  000000047.page  000000054.page

                  000000006.page  000000013.page  000000020.page  000000027.page  000000034.page  000000041.page  000000048.page  000000055.page

                  000000007.page  000000014.page  000000021.page  000000028.page  000000035.page  000000042.page  000000049.page  000000056.page

                   

                  At this point my ramdisk is about 70-80% full.

                   

                  I stopped the producer, and after some time the journal directory got compacted and looked like this...

                   

                  [17:51 root@myhost messagebroker-feedback]$ ls /ramdisk/data/journal/

                  hornetq-data-76.hq  hornetq-data-79.hq  hornetq-data-82.hq  hornetq-data-85.hq  hornetq-data-88.hq  server.1.lock

                  hornetq-data-77.hq  hornetq-data-80.hq  hornetq-data-83.hq  hornetq-data-86.hq  hornetq-data-89.hq  server.lock

                  hornetq-data-78.hq  hornetq-data-81.hq  hornetq-data-84.hq  hornetq-data-87.hq  hornetq-data-90.hq

                  • 6. Re: Journals do not get cleaned up fast enough under high load
                    clebert.suconic

                    From your logs it seems you ran out of disk:

                     

                    [2012-04-29 00:42:28,447] 223520 ERROR [JournalImpl::FilesExecutor] org.hornetq.core.journal.impl.JournalFilesRepository - Error pre allocating the file

                    HornetQException[errorCode=208 message=Error pre allocating the file]

                              at org.hornetq.core.asyncio.impl.AsynchronousFileImpl.fill(Native Method)

                              at org.hornetq.core.asyncio.impl.AsynchronousFileImpl.fill(AsynchronousFileImpl.java:415)

                              at org.hornetq.core.journal.impl.AIOSequentialFile.fill(AIOSequentialFile.java:176)

                              at org.hornetq.core.journal.impl.JournalFilesRepository.createFile(JournalFilesRepository.java:585)

                              at org.hornetq.core.journal.impl.JournalFilesRepository.takeFile(JournalFilesRepository.java:531)

                              at org.hornetq.core.journal.impl.JournalFilesRepository.pushOpenedFile(JournalFilesRepository.java:498)

                              at org.hornetq.core.journal.impl.JournalFilesRepository$1.run(JournalFilesRepository.java:94)

                              at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)

                              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

                              at java.lang.Thread.run(Thread.java:662)

                    [2012-04-29 00:42:29,792] 224865 ERROR [JournalImpl::FilesExecutor] org.hornetq.core.journal.impl.JournalFilesRepository - Error pre allocating the file

                    HornetQException[errorCode=208 message=Error pre allocating the file]

                              at org.hornetq.core.asyncio.impl.AsynchronousFileImpl.fill(Native Method)

                              at org.hornetq.core.asyncio.impl.AsynchronousFileImpl.fill(AsynchronousFileImpl.java:415)

                              at org.hornetq.core.journal.impl.AIOSequentialFile.fill(AIOSequentialFile.java:176)

                              at org.hornetq.core.journal.impl.JournalFilesRepository.createFile(JournalFilesRepository.java:585)

                              at org.hornetq.core.journal.impl.JournalFilesRepository.takeFile(JournalFilesRepository.java:531)

                              at org.hornetq.core.journal.impl.JournalFilesRepository.pushOpenedFile(JournalFilesRepository.java:498)

                              at org.hornetq.core.journal.impl.JournalFilesRepository$1.run(JournalFilesRepository.java:94)

                              at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)

                              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

                              at java.lang.Thread.run(Thread.java:662)

                    [2012-04-29 00:42:34,785] 229858 WARN  [Old I/O server worker (parentId: 697340805, [id: 0x29909385, /10.50.4.99:5445])] org.hornetq.core.journal.impl.JournalFilesRepository - Couldn't open a file in 60 Seconds

                    • 7. Re: Journals do not get cleaned up fast enough under high load
                      craigm123

                      Yes, that's right, but only after the ramdisk reaches 100% capacity (only the journal directory is on the ramdisk).  Once it does reach 100%, the node does not recover and I need to kill the process, remove the data directories, and restart it.
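
                      For completeness, the recovery procedure I use is basically the following (paths per my setup; start.sh is a stand-in name for my actual start script):

                      > kill -9 <broker-pid>
                      > rm -rf /ramdisk/data/journal/* data/paging data/bindings data/large-messages
                      > ./start.sh    # stand-in for the attached start script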

                       

                      I tried the same experiment with only one of the two nodes in the cluster (same configs, I just didn't start the second node) and I was not able to reproduce the issue.  I allowed the two queues to grow to 100k messages each, and the number of journal files never seemed to change much.  Here's what those directories look like -

                       

                      [19:19 root@myhost messagebroker-feedback]$ ls /ramdisk/data/journal/

                      hornetq-data-10.hq  hornetq-data-2.hq  hornetq-data-4.hq  hornetq-data-6.hq  hornetq-data-8.hq  server.1.lock

                      hornetq-data-1.hq   hornetq-data-3.hq  hornetq-data-5.hq  hornetq-data-7.hq  hornetq-data-9.hq  server.lock

                       

                      [19:19 root@myhost messagebroker-feedback]$ ls -R data/paging/

                      data/paging/:

                      3e2094ae-92f9-11e1-a4b6-6743b1fbbabd  3ec0cc4f-92f9-11e1-a4b6-6743b1fbbabd

                       

                       

                      data/paging/3e2094ae-92f9-11e1-a4b6-6743b1fbbabd:

                      000000001.page  000000013.page  000000025.page  000000037.page  000000049.page  000000061.page  000000073.page  000000085.page  000000097.page

                      000000002.page  000000014.page  000000026.page  000000038.page  000000050.page  000000062.page  000000074.page  000000086.page  000000098.page

                      000000003.page  000000015.page  000000027.page  000000039.page  000000051.page  000000063.page  000000075.page  000000087.page  000000099.page

                      000000004.page  000000016.page  000000028.page  000000040.page  000000052.page  000000064.page  000000076.page  000000088.page  000000100.page

                      000000005.page  000000017.page  000000029.page  000000041.page  000000053.page  000000065.page  000000077.page  000000089.page  000000101.page

                      000000006.page  000000018.page  000000030.page  000000042.page  000000054.page  000000066.page  000000078.page  000000090.page  000000102.page

                      000000007.page  000000019.page  000000031.page  000000043.page  000000055.page  000000067.page  000000079.page  000000091.page  address.txt

                      000000008.page  000000020.page  000000032.page  000000044.page  000000056.page  000000068.page  000000080.page  000000092.page

                      000000009.page  000000021.page  000000033.page  000000045.page  000000057.page  000000069.page  000000081.page  000000093.page

                      000000010.page  000000022.page  000000034.page  000000046.page  000000058.page  000000070.page  000000082.page  000000094.page

                      000000011.page  000000023.page  000000035.page  000000047.page  000000059.page  000000071.page  000000083.page  000000095.page

                      000000012.page  000000024.page  000000036.page  000000048.page  000000060.page  000000072.page  000000084.page  000000096.page

                       

                       

                      data/paging/3ec0cc4f-92f9-11e1-a4b6-6743b1fbbabd:

                      000000001.page  000000013.page  000000025.page  000000037.page  000000049.page  000000061.page  000000073.page  000000085.page  000000097.page

                      000000002.page  000000014.page  000000026.page  000000038.page  000000050.page  000000062.page  000000074.page  000000086.page  000000098.page

                      000000003.page  000000015.page  000000027.page  000000039.page  000000051.page  000000063.page  000000075.page  000000087.page  000000099.page

                      000000004.page  000000016.page  000000028.page  000000040.page  000000052.page  000000064.page  000000076.page  000000088.page  000000100.page

                      000000005.page  000000017.page  000000029.page  000000041.page  000000053.page  000000065.page  000000077.page  000000089.page  000000101.page

                      000000006.page  000000018.page  000000030.page  000000042.page  000000054.page  000000066.page  000000078.page  000000090.page  address.txt

                      000000007.page  000000019.page  000000031.page  000000043.page  000000055.page  000000067.page  000000079.page  000000091.page

                      000000008.page  000000020.page  000000032.page  000000044.page  000000056.page  000000068.page  000000080.page  000000092.page

                      000000009.page  000000021.page  000000033.page  000000045.page  000000057.page  000000069.page  000000081.page  000000093.page

                      000000010.page  000000022.page  000000034.page  000000046.page  000000058.page  000000070.page  000000082.page  000000094.page

                      000000011.page  000000023.page  000000035.page  000000047.page  000000059.page  000000071.page  000000083.page  000000095.page

                      000000012.page  000000024.page  000000036.page  000000048.page  000000060.page  000000072.page  000000084.page  000000096.page

                      • 8. Re: Journals do not get cleaned up fast enough under high load
                        clebert.suconic

                        The journal server is not supposed to run out of disk.

 

                        That's treated as a system-level failure. All we can do about it is restart the server.

 

                        There's an event for hardware failures that should cover this case by calling server.stop(). This has nothing to do with the previous error you were seeing.

                        • 9. Re: Journals do not get cleaned up fast enough under high load
                          craigm123

                          It's my understanding that the journal directory should not grow much beyond max-size-bytes times the number of queues, on top of the pre-allocated journal files.  I see this hold true when I run with one node in clustered mode, but not with two nodes.

                           

                          I attached an application you can use to see the problem I'm referring to.  The client application sends messages to one queue as fast as it can.  The queue is set up to page after it reaches 10 MiB.
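
                          Rough math for what I'd expect here (one queue, and assuming the test uses the same 10 MiB journal-file-size as my production config):

                          pre-allocated journal: journal-min-files x journal-file-size = 10 x 10 MiB = 100 MiB
                          live (unpaged) messages: at most max-size-bytes = 10 MiB
                          expected journal footprint: roughly 100-110 MiB

                          That matches the ~101M journal in the standalone run below, but not the clustered node whose journal keeps growing.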

                           

                          There are two run scripts...

                          - run-clustered.sh builds the client, starts up two different brokers, then starts the client (Ctrl+C to stop)

                          - run-standalone.sh builds the client, starts up one broker, then starts the client

                           

                          The run scripts do not stop the brokers, but I created a stop.sh script to do that.  The clean.sh script cleans up the compiled bits from the client, and removes the data directories and log files from the brokers.
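
                          (stop.sh is roughly the following; an illustrative sketch, the actual attached script may differ:)

                          #!/bin/sh
                          # kill any running HornetQ bootstrap JVMs (sketch only)
                          for pid in $(jps -l | awk '/HornetQBootstrapServer/ {print $1}'); do
                              kill "$pid"
                          done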

                           

                          I ran it on my desktop, where both journal and paging directories are on my SSD (no ramdisk).  I watched disk usage as the application was running.

                           

                          With clustering...

                           

                          Every 1.0s: du -ch mb1 mb2                                                                                                    Mon Apr 30 15:34:05 2012

                           

                          12K     mb1/scripts

                          4.0K    mb1/data/large-messages

                          135M    mb1/data/paging/86300b97-9314-11e1-aebb-97f74ee83f62

                          135M    mb1/data/paging

                          4.1M    mb1/data/bindings

                          191M    mb1/data/journal

                          329M    mb1/data

                          32K     mb1/cfg

                          329M    mb1

                          12K     mb2/scripts

                          4.0K    mb2/data/large-messages

                          131M    mb2/data/paging/86d4620b-9314-11e1-a521-cbe7d0cd9926

                          131M    mb2/data/paging

                          4.1M    mb2/data/bindings

                          101M    mb2/data/journal

                          235M    mb2/data

                          32K     mb2/cfg

                          235M    mb2

                          563M    total

                           

                          After just a few seconds of running the application, I see the journal directory of one of the nodes grow quite fast.  Once I stop the client (but keep the nodes running), the journal directory does not empty out.

                           

                          Without clustering...

                           

                          Every 1.0s: du -ch mb1 mb2                                                                                                    Mon Apr 30 15:35:49 2012

                           

                          12K     mb1/scripts

                          4.0K    mb1/data/large-messages

                          474M    mb1/data/paging/bdf8e62c-9314-11e1-a1d0-1941ea45adee

                          474M    mb1/data/paging

                          4.1M    mb1/data/bindings

                          101M    mb1/data/journal

                          578M    mb1/data

                          32K     mb1/cfg

                          578M    mb1

                          12K     mb2/scripts

                          32K     mb2/cfg

                          48K     mb2

                          578M    total

                           

                          This time I don't see the journal directory grow at all.

                           

                          Again, thanks for the help!

                          • 10. Re: Journals do not get cleaned up fast enough under high load
                            clebert.suconic

                            It seems to me that one of the nodes was down, messages were being load balanced to that node while it was down, and so messages were building up there.

                             

                             

                            How can I easily run this test? Please provide a step-by-step. I'm doing three things in parallel, but if I can easily run your test I will be able to figure out what you are saying.

                             

                             

                            Before coming back to me with instructions, please first check whether one of your cluster nodes is down. You may be able to configure everything to page by simply using # instead of jms.# in the address settings.
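
                            Something like this, keeping the other settings from your config (just a sketch; the idea is that a # match also covers HornetQ's internal cluster store-and-forward addresses, which don't start with jms.):

                                  <address-setting match="#">
                                     <redistribution-delay>0</redistribution-delay>
                                     <max-size-bytes>10485760</max-size-bytes>
                                     <page-size-bytes>5242880</page-size-bytes>
                                     <address-full-policy>PAGE</address-full-policy>
                                  </address-setting>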

                             

                             

                            Thanks,

                             

                             

                            Clebert

                            • 11. Re: Journals do not get cleaned up fast enough under high load
                              craigm123

                              When running the clustered test in the test application, I'm sure both nodes are up.

                               

                              Here are some instructions -

                               

                              To run the clustered setup

                              > tar -xzf hornetq-test.tgz

                              > cd hornetq-test

                              > ./run-clustered.sh

                               

                              While the client is sending traffic, run this to watch disk usage

                              > watch -n 1 du -ch mb*

                               

                              To stop, Ctrl+C to stop run-clustered.sh, then

                              > ./stop.sh

                              > ./clean.sh

                               

                              This will stop the hornetq nodes and clean up

                               

                              To run the non-clustered setup (same configs, but starts only one node)

                              > ./run-standalone.sh

                               

                              Watch the disk usage again

                               

                              To stop, Ctrl+C to stop run-standalone.sh

                              > ./stop.sh

                              • 12. Re: Journals do not get cleaned up fast enough under high load
                                clebert.suconic

                                Client, what client?

                                 

                                The step-by-step certainly doesn't work, as the mb1 directory is under where run-standalone.sh is located.

                                 

                                Although this is simple stuff and I could figure it out by reading your stuff, I would prefer you tell me exactly what you are doing... really step by step.

                                 

                                something like:

                                 

                                - download hornetq.zip (is that really what you are using)

                                - unzip your stuff at...

                                - run the client at...

                                 

                                You should validate your step-by-step at the end.

                                 

                                 

                                This is for two reasons:

                                 

                                - It will save me time

                                - Sometimes the error is in the details of your procedure. If I have to figure out what you're doing on my own, I may accidentally fix whatever is wrong and won't be able to replicate your issue

                                 

                                -> a waste of time in the end.

                                • 13. Re: Journals do not get cleaned up fast enough under high load
                                  craigm123

                                  Sorry, maybe I left off some details about what the shell scripts do -

                                   

                                  run-clustered.sh builds the client (hornetq-test/jmstest), then starts the first broker (hornetq-test/mb1), then the second one (hornetq-test/mb2), then starts the client.  Here's my shell output from executing the above instructions to a 't'.

                                   

                                  craig@server:/temp$ wget https://community.jboss.org/servlet/JiveServlet/download/733095-56969/hornetq-test.tgz

                                  --2012-04-30 18:30:40--  https://community.jboss.org/servlet/JiveServlet/download/733095-56969/hornetq-test.tgz

                                  Resolving community.jboss.org... 209.132.182.48

                                  Connecting to community.jboss.org|209.132.182.48|:443... connected.

                                  HTTP request sent, awaiting response... 200 OK

                                  Length: 12151606 (12M) [application/gzip]

                                  Saving to: `hornetq-test.tgz'

                                   

                                   

                                  100%[============================================================================================>] 12,151,606  2.19M/s   in 5.0s   

                                   

                                   

                                  2012-04-30 18:30:45 (2.34 MB/s) - `hornetq-test.tgz' saved [12151606/12151606]

                                   

                                   

                                  craig@server:/temp$ tar -xzf hornetq-test.tgz

                                  craig@server:/temp$ cd hornetq-test/

                                  craig@server:/temp/hornetq-test$ ./run-clustered.sh

                                  Buildfile: /temp/hornetq-test/jmstest/build.xml

                                   

                                   

                                  init:

                                      [mkdir] Created dir: /temp/hornetq-test/jmstest/target

                                   

                                   

                                  compile:

                                      [javac] Compiling 2 source files to /temp/hornetq-test/jmstest/target

                                   

                                   

                                  package:

                                        [jar] Building jar: /temp/hornetq-test/jmstest/jmstest.jar

                                   

                                   

                                  BUILD SUCCESSFUL

                                  Total time: 1 second

                                  ***********************************************************************************

                                  java

                                  -XX:-HeapDumpOnOutOfMemoryError

                                  -Xms2048M -Xmx4096M

                                  -Dcom.sun.management.jmxremote

                                  -Dcom.sun.management.jmxremote.authenticate=false

                                  -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=6003 -Djava.rmi.server.hostname=localhost -Dlocal.ip=localhost -Dlocal.port=5444 -Dcluster.user=root -Dcluster.password=root -Djournals.dir=data/journal -Djournals.num=10 -Dmessage.memory.max=10485760 -Dotherserver.ip=localhost -Dotherserver.port=5445  -XX:+UseParallelGC -XX:+AggressiveOpts -XX:+UseFastAccessorMethods -Dhornetq.config.dir=cfg -Djava.util.logging.config.file=cfg/logging.properties -Djava.library.path=../lib -classpath /temp/hornetq-test/mb1/../lib/xml-apis-2.9.1.jar:/temp/hornetq-test/mb1/../lib/xercesImpl-2.9.1.jar:/temp/hornetq-test/mb1/../lib/spring-webmvc-3.0.3.RELEASE.jar:/temp/hornetq-test/mb1/../lib/spring-web-3.0.3.RELEASE.jar:/temp/hornetq-test/mb1/../lib/spring-tx-3.0.3.RELEASE.jar:/temp/hornetq-test/mb1/../lib/spring-test-3.0.3.RELEASE.jar:/temp/hornetq-test/mb1/../lib/spring-oxm-3.0.3.RELEASE.jar:/temp/hornetq-test/mb1/../lib/spring-jms-3.0.3.RELEASE.jar:/temp/hornetq-test/mb1/../lib/spring-expression-3.0.3.RELEASE.jar:/temp/hornetq-test/mb1/../lib/spring-core-3.0.3.RELEASE.jar:/temp/hornetq-test/mb1/../lib/spring-context-support-3.0.3.RELEASE.jar:/temp/hornetq-test/mb1/../lib/spring-context-3.0.3.RELEASE.jar:/temp/hornetq-test/mb1/../lib/spring-beans-3.0.3.RELEASE.jar:/temp/hornetq-test/mb1/../lib/spring-asm-3.0.3.RELEASE.jar:/temp/hornetq-test/mb1/../lib/spring-aop-3.0.3.RELEASE.jar:/temp/hornetq-test/mb1/../lib/netty-3.2.3.Final.jar:/temp/hornetq-test/mb1/../lib/log4j-1.2.16.jar:/temp/hornetq-test/mb1/../lib/junit-4.8.2.jar:/temp/hornetq-test/mb1/../lib/jnpserver-4.2.3.GA.jar:/temp/hornetq-test/mb1/../lib/jbossxb-2.0.1.GA.jar:/temp/hornetq-test/mb1/../lib/jboss-reflect-2.0.2.GA.jar:/temp/hornetq-test/mb1/../lib/jboss-mdr-2.0.2.GA.jar:/temp/hornetq-test/mb1/../lib/jboss-logging-spi-2.0.5.GA.jar:/temp/hornetq-test/mb1/../lib/jboss-kernel-2.0.9.GA.jar:/temp/hornetq-test/mb1/../lib/jboss-jms-api-1.1.0.GA.jar:/temp/hornetq-test/mb1/../lib/jboss-dependency-2.0.9.GA.jar:/temp/hornetq-test/mb1/../lib/jboss-common-core-2.2.14.GA.jar:/temp/hornetq-test/mb1/../lib/jaxb-api-2.1.9.jar:/temp/hornetq-test/mb1/../lib/hornetq-spring-integration-2.2.16.Final.jar:/temp/hornetq-test/mb1/../lib/hornetq-logging-2.2.16.Final.jar:/temp/hornetq-test/mb1/../lib/hornetq-jms-client-2.2.16.Final.jar:/temp/hornetq-test/mb1/../lib/hornetq-jms-2.2.16.Final.jar:/temp/hornetq-test/mb1/../lib/hornetq-jboss-as-integration-2.2.16.Final.jar:/temp/hornetq-test/mb1/../lib/hornetq-core-client-2.2.16.Final.jar:/temp/hornetq-test/mb1/../lib/hornetq-core-2.2.16.Final.jar:/temp/hornetq-test/mb1/../lib/hornetq-bootstrap-2.2.16.Final.jar:/temp/hornetq-test/mb1/../lib/dtdparser121-1.2.1.jar:/temp/hornetq-test/mb1/../lib/commons-logging-1.1.1.jar:/temp/hornetq-test/mb1/../lib/commons-lang-2.6.jar:/temp/hornetq-test/mb1/../lib/commons-io-1.3.2.jar:/temp/hornetq-test/mb1/../lib/commons-httpclient-3.0.1.jar:/temp/hornetq-test/mb1/../lib/commons-digester-1.8.1.jar:/temp/hornetq-test/mb1/../lib/commons-configuration-1.7.jar:/temp/hornetq-test/mb1/../lib/commons-collections-3.2.1.jar:/temp/hornetq-test/mb1/../lib/commons-codec-1.5.jar:/temp/hornetq-test/mb1/../lib/commons-beanutils-1.8.3.jar:/temp/hornetq-test/mb1/../lib/aopalliance-1.0.jar:/temp/hornetq-test/mb1/../lib/activation-1.1.1.jar:cfg:/temp/hornetq-test/mb1/schemas/ org.hornetq.integration.bootstrap.HornetQBootstrapServer hornetq-beans.xml

                                  ***********************************************************************************

                                  ***********************************************************************************

                                  java

                                  -XX:-HeapDumpOnOutOfMemoryError

                                  -Xms2048M -Xmx4096M

                                  -Dcom.sun.management.jmxremote

                                  -Dcom.sun.management.jmxremote.authenticate=false

                                  -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=6004 -Djava.rmi.server.hostname=localhost -Dlocal.ip=localhost -Dlocal.port=5445 -Dcluster.user=root -Dcluster.password=root -Djournals.dir=data/journal -Djournals.num=10 -Dmessage.memory.max=10485760 -Dotherserver.ip=localhost -Dotherserver.port=5444  -XX:+UseParallelGC -XX:+AggressiveOpts -XX:+UseFastAccessorMethods -Dhornetq.config.dir=cfg -Djava.util.logging.config.file=cfg/logging.properties -Djava.library.path=../lib -classpath /temp/hornetq-test/mb2/../lib/xml-apis-2.9.1.jar:/temp/hornetq-test/mb2/../lib/xercesImpl-2.9.1.jar:/temp/hornetq-test/mb2/../lib/spring-webmvc-3.0.3.RELEASE.jar:/temp/hornetq-test/mb2/../lib/spring-web-3.0.3.RELEASE.jar:/temp/hornetq-test/mb2/../lib/spring-tx-3.0.3.RELEASE.jar:/temp/hornetq-test/mb2/../lib/spring-test-3.0.3.RELEASE.jar:/temp/hornetq-test/mb2/../lib/spring-oxm-3.0.3.RELEASE.jar:/temp/hornetq-test/mb2/../lib/spring-jms-3.0.3.RELEASE.jar:/temp/hornetq-test/mb2/../lib/spring-expression-3.0.3.RELEASE.jar:/temp/hornetq-test/mb2/../lib/spring-core-3.0.3.RELEASE.jar:/temp/hornetq-test/mb2/../lib/spring-context-support-3.0.3.RELEASE.jar:/temp/hornetq-test/mb2/../lib/spring-context-3.0.3.RELEASE.jar:/temp/hornetq-test/mb2/../lib/spring-beans-3.0.3.RELEASE.jar:/temp/hornetq-test/mb2/../lib/spring-asm-3.0.3.RELEASE.jar:/temp/hornetq-test/mb2/../lib/spring-aop-3.0.3.RELEASE.jar:/temp/hornetq-test/mb2/../lib/netty-3.2.3.Final.jar:/temp/hornetq-test/mb2/../lib/log4j-1.2.16.jar:/temp/hornetq-test/mb2/../lib/junit-4.8.2.jar:/temp/hornetq-test/mb2/../lib/jnpserver-4.2.3.GA.jar:/temp/hornetq-test/mb2/../lib/jbossxb-2.0.1.GA.jar:/temp/hornetq-test/mb2/../lib/jboss-reflect-2.0.2.GA.jar:/temp/hornetq-test/mb2/../lib/jboss-mdr-2.0.2.GA.jar:/temp/hornetq-test/mb2/../lib/jboss-logging-spi-2.0.5.GA.jar:/temp/hornetq-test/mb2/../lib/jboss-kernel-2.0.9.GA.jar:/temp/hornetq-test/mb2/../lib/jboss-jms-api-1.1.0.GA.jar:/temp/hornetq-test/mb2/../lib/jboss-dependency-2.0.9.GA.jar:/temp/hornetq-test/mb2/../lib/jboss-common-core-2.2.14.GA.jar:/temp/hornetq-test/mb2/../lib/jaxb-api-2.1.9.jar:/temp/hornetq-test/mb2/../lib/hornetq-spring-integration-2.2.16.Final.jar:/temp/hornetq-test/mb2/../lib/hornetq-logging-2.2.16.Final.jar:/temp/hornetq-test/mb2/../lib/hornetq-jms-client-2.2.16.Final.jar:/temp/hornetq-test/mb2/../lib/hornetq-jms-2.2.16.Final.jar:/temp/hornetq-test/mb2/../lib/hornetq-jboss-as-integration-2.2.16.Final.jar:/temp/hornetq-test/mb2/../lib/hornetq-core-client-2.2.16.Final.jar:/temp/hornetq-test/mb2/../lib/hornetq-core-2.2.16.Final.jar:/temp/hornetq-test/mb2/../lib/hornetq-bootstrap-2.2.16.Final.jar:/temp/hornetq-test/mb2/../lib/dtdparser121-1.2.1.jar:/temp/hornetq-test/mb2/../lib/commons-logging-1.1.1.jar:/temp/hornetq-test/mb2/../lib/commons-lang-2.6.jar:/temp/hornetq-test/mb2/../lib/commons-io-1.3.2.jar:/temp/hornetq-test/mb2/../lib/commons-httpclient-3.0.1.jar:/temp/hornetq-test/mb2/../lib/commons-digester-1.8.1.jar:/temp/hornetq-test/mb2/../lib/commons-configuration-1.7.jar:/temp/hornetq-test/mb2/../lib/commons-collections-3.2.1.jar:/temp/hornetq-test/mb2/../lib/commons-codec-1.5.jar:/temp/hornetq-test/mb2/../lib/commons-beanutils-1.8.3.jar:/temp/hornetq-test/mb2/../lib/aopalliance-1.0.jar:/temp/hornetq-test/mb2/../lib/activation-1.1.1.jar:cfg:/temp/hornetq-test/mb2/schemas/ org.hornetq.integration.bootstrap.HornetQBootstrapServer hornetq-beans.xml

                                  ***********************************************************************************

                                  0      INFO  [main] org.springframework.context.support.ClassPathXmlApplicationContext - Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@42b988a6: startup date [Mon Apr 30 18:31:02 PDT 2012]; root of context hierarchy

                                  31     INFO  [main] org.springframework.beans.factory.xml.XmlBeanDefinitionReader - Loading XML bean definitions from URL [file:cfg/senderContext.xml]

                                  263    INFO  [main] org.springframework.beans.factory.support.DefaultListableBeanFactory - Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@7b2884e0: defining beans [transportConfig1,transportConfig2,connectionFactory,connectionFactoryAdapter,cachingConnectionFactory,jmsTemplate,testQueue,jmsSender]; root of factory hierarchy

                                  349    INFO  [main] com.myco.jmstest.SenderMain - Sending 1000000 messages

                                  440    INFO  [JmsSender-7] org.springframework.jms.connection.CachingConnectionFactory - Established shared JMS Connection: org.hornetq.jms.client.HornetQConnection@1167e3a5

                                  502    INFO  [JmsSender-3] com.myco.jmstest.JmsSender - Messages sent: 1

                                  5503   INFO  [JmsSender-0] com.myco.jmstest.JmsSender - Messages sent: 21633

                                  10504  INFO  [JmsSender-18] com.myco.jmstest.JmsSender - Messages sent: 36251

                                  15505  INFO  [JmsSender-5] com.myco.jmstest.JmsSender - Messages sent: 34185

                                  20506  INFO  [JmsSender-10] com.myco.jmstest.JmsSender - Messages sent: 27839

                                  25507  INFO  [JmsSender-3] com.myco.jmstest.JmsSender - Messages sent: 31964

                                  30546  INFO  [JmsSender-13] com.myco.jmstest.JmsSender - Messages sent: 25929

                                  ^Ccraig@server:/temp/hornetq-test$

                                   

                                  Before the Ctrl+C above, I was executing this in another shell

                                   

                                  craig@server:/var/www/jquery$ cd /temp/hornetq-test/

                                  craig@server:/temp/hornetq-test$ watch -n 1 du -ch mb*

                                   

                                  After running, I want to shut down the brokers and clean up.

                                   

                                  craig@server:/temp/hornetq-test$ ./stop.sh

                                  craig@server:/temp/hornetq-test$ ./clean.sh

                                   

                                  run-standalone.sh does everything the run-clustered.sh script does; it just doesn't start the second broker.  Again, here's my shell output for that -

                                   

                                  craig@server:/temp/hornetq-test$ ./run-standalone.sh

                                  Buildfile: /temp/hornetq-test/jmstest/build.xml

                                   

                                   

                                  init:

                                      [mkdir] Created dir: /temp/hornetq-test/jmstest/target

                                   

                                   

                                  compile:

                                      [javac] Compiling 2 source files to /temp/hornetq-test/jmstest/target

                                   

                                   

                                  package:

                                        [jar] Building jar: /temp/hornetq-test/jmstest/jmstest.jar

                                   

                                   

                                  BUILD SUCCESSFUL

                                  Total time: 1 second

                                  ***********************************************************************************

                                  java

                                  -XX:-HeapDumpOnOutOfMemoryError

                                  -Xms2048M -Xmx4096M

                                  -Dcom.sun.management.jmxremote

                                  -Dcom.sun.management.jmxremote.authenticate=false

                                  -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=6003 -Djava.rmi.server.hostname=localhost -Dlocal.ip=localhost -Dlocal.port=5444 -Dcluster.user=root -Dcluster.password=root -Djournals.dir=data/journal -Djournals.num=10 -Dmessage.memory.max=10485760 -Dotherserver.ip=localhost -Dotherserver.port=5445  -XX:+UseParallelGC -XX:+AggressiveOpts -XX:+UseFastAccessorMethods -Dhornetq.config.dir=cfg -Djava.util.logging.config.file=cfg/logging.properties -Djava.library.path=../lib -classpath /temp/hornetq-test/mb1/../lib/xml-apis-2.9.1.jar:/temp/hornetq-test/mb1/../lib/xercesImpl-2.9.1.jar:/temp/hornetq-test/mb1/../lib/spring-webmvc-3.0.3.RELEASE.jar:/temp/hornetq-test/mb1/../lib/spring-web-3.0.3.RELEASE.jar:/temp/hornetq-test/mb1/../lib/spring-tx-3.0.3.RELEASE.jar:/temp/hornetq-test/mb1/../lib/spring-test-3.0.3.RELEASE.jar:/temp/hornetq-test/mb1/../lib/spring-oxm-3.0.3.RELEASE.jar:/temp/hornetq-test/mb1/../lib/spring-jms-3.0.3.RELEASE.jar:/temp/hornetq-test/mb1/../lib/spring-expression-3.0.3.RELEASE.jar:/temp/hornetq-test/mb1/../lib/spring-core-3.0.3.RELEASE.jar:/temp/hornetq-test/mb1/../lib/spring-context-support-3.0.3.RELEASE.jar:/temp/hornetq-test/mb1/../lib/spring-context-3.0.3.RELEASE.jar:/temp/hornetq-test/mb1/../lib/spring-beans-3.0.3.RELEASE.jar:/temp/hornetq-test/mb1/../lib/spring-asm-3.0.3.RELEASE.jar:/temp/hornetq-test/mb1/../lib/spring-aop-3.0.3.RELEASE.jar:/temp/hornetq-test/mb1/../lib/netty-3.2.3.Final.jar:/temp/hornetq-test/mb1/../lib/log4j-1.2.16.jar:/temp/hornetq-test/mb1/../lib/junit-4.8.2.jar:/temp/hornetq-test/mb1/../lib/jnpserver-4.2.3.GA.jar:/temp/hornetq-test/mb1/../lib/jbossxb-2.0.1.GA.jar:/temp/hornetq-test/mb1/../lib/jboss-reflect-2.0.2.GA.jar:/temp/hornetq-test/mb1/../lib/jboss-mdr-2.0.2.GA.jar:/temp/hornetq-test/mb1/../lib/jboss-logging-spi-2.0.5.GA.jar:/temp/hornetq-test/mb1/../lib/jboss-kernel-2.0.9.GA.jar:/temp/hornetq-test/mb1/../lib/jboss-jms-api-1.1.0.GA.jar:/temp/hornetq-test/mb1/../lib/jboss-dependency-2.0.9.GA.jar:/temp/hornetq-test/mb1/../lib/jboss-common-core-2.2.14.GA.jar:/temp/hornetq-test/mb1/../lib/jaxb-api-2.1.9.jar:/temp/hornetq-test/mb1/../lib/hornetq-spring-integration-2.2.16.Final.jar:/temp/hornetq-test/mb1/../lib/hornetq-logging-2.2.16.Final.jar:/temp/hornetq-test/mb1/../lib/hornetq-jms-client-2.2.16.Final.jar:/temp/hornetq-test/mb1/../lib/hornetq-jms-2.2.16.Final.jar:/temp/hornetq-test/mb1/../lib/hornetq-jboss-as-integration-2.2.16.Final.jar:/temp/hornetq-test/mb1/../lib/hornetq-core-client-2.2.16.Final.jar:/temp/hornetq-test/mb1/../lib/hornetq-core-2.2.16.Final.jar:/temp/hornetq-test/mb1/../lib/hornetq-bootstrap-2.2.16.Final.jar:/temp/hornetq-test/mb1/../lib/dtdparser121-1.2.1.jar:/temp/hornetq-test/mb1/../lib/commons-logging-1.1.1.jar:/temp/hornetq-test/mb1/../lib/commons-lang-2.6.jar:/temp/hornetq-test/mb1/../lib/commons-io-1.3.2.jar:/temp/hornetq-test/mb1/../lib/commons-httpclient-3.0.1.jar:/temp/hornetq-test/mb1/../lib/commons-digester-1.8.1.jar:/temp/hornetq-test/mb1/../lib/commons-configuration-1.7.jar:/temp/hornetq-test/mb1/../lib/commons-collections-3.2.1.jar:/temp/hornetq-test/mb1/../lib/commons-codec-1.5.jar:/temp/hornetq-test/mb1/../lib/commons-beanutils-1.8.3.jar:/temp/hornetq-test/mb1/../lib/aopalliance-1.0.jar:/temp/hornetq-test/mb1/../lib/activation-1.1.1.jar:cfg:/temp/hornetq-test/mb1/schemas/ org.hornetq.integration.bootstrap.HornetQBootstrapServer hornetq-beans.xml

                                  ***********************************************************************************

                                  0      INFO  [main] org.springframework.context.support.ClassPathXmlApplicationContext - Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@2e8f4fb3: startup date [Mon Apr 30 18:38:57 PDT 2012]; root of context hierarchy

                                  31     INFO  [main] org.springframework.beans.factory.xml.XmlBeanDefinitionReader - Loading XML bean definitions from URL [file:cfg/senderContext.xml]

                                  292    INFO  [main] org.springframework.beans.factory.support.DefaultListableBeanFactory - Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@56c163f: defining beans [transportConfig1,transportConfig2,connectionFactory,connectionFactoryAdapter,cachingConnectionFactory,jmsTemplate,testQueue,jmsSender]; root of factory hierarchy

                                  375    INFO  [main] com.myco.jmstest.SenderMain - Sending 1000000 messages

                                  493    INFO  [JmsSender-4] org.springframework.jms.connection.CachingConnectionFactory - Established shared JMS Connection: org.hornetq.jms.client.HornetQConnection@614a75bb

                                  562    INFO  [JmsSender-8] com.myco.jmstest.JmsSender - Messages sent: 3

                                  5567   INFO  [JmsSender-3] com.myco.jmstest.JmsSender - Messages sent: 26427

                                  ^Ccraig@server:/temp/hornetq-test$

                                   

                                  Were you getting errors when running the scripts?

                                  • 14. Re: Journals do not get cleaned up fast enough under high load
                                    clebert.suconic

                                    Ah... you already have the servers as part of the zip. I thought I needed to make a reference to the server somewhere.

                                     

                                     

                                    It's a bit hard to understand where the bits are coming from (like where the test is), but I will take a look between later today and tomorrow. (It's a bit late here now.)
