5 Replies Latest reply on Jan 26, 2016 2:06 PM by jbertram

    Why is Direct ByteBuffer ever increasing on HornetQ server leading to OOM?

    tushargargus

      Configuration

       

      I have set up a standalone HornetQ (2.4.7-Final) cluster on Ubuntu 12.04.3 LTS (GNU/Linux 3.8.0-29-generic x86_64). The instance has 16 GB of RAM and 2 cores, and I have allocated -Xms5G -Xmx10G to the JVM.
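
      For illustration only (not part of the original setup), a small diagnostic along the following lines can confirm the effective heap and direct-memory limits from inside the broker JVM; the HotSpotDiagnosticMXBean call is HotSpot-specific:

          // Hypothetical diagnostic sketch: prints the effective heap limit and the
          // MaxDirectMemorySize VM option (reported as "0" when the flag is not set,
          // in which case HotSpot caps direct memory at roughly the max heap size).
          import java.lang.management.ManagementFactory;
          import com.sun.management.HotSpotDiagnosticMXBean;

          public class MemoryLimits {
              public static void main(String[] args) {
                  System.out.println("Max heap (bytes): " + Runtime.getRuntime().maxMemory());
                  HotSpotDiagnosticMXBean hotspot =
                          ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
                  System.out.println("MaxDirectMemorySize: "
                          + hotspot.getVMOption("MaxDirectMemorySize").getValue());
              }
          }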

       

      Following are the address settings in the HornetQ configuration:

       

             <address-settings>

                <address-setting match="jms.queue.pollingQueue">

                   <dead-letter-address>jms.queue.DLQ</dead-letter-address>

                   <expiry-address>jms.queue.ExpiryQueue</expiry-address>

                   <redelivery-delay>86400000</redelivery-delay>

                   <max-delivery-attempts>10</max-delivery-attempts>

                   <max-size-bytes>1048576000</max-size-bytes>

                   <page-size-bytes>10485760</page-size-bytes>

                   <address-full-policy>PAGE</address-full-policy>

                   <message-counter-history-day-limit>10</message-counter-history-day-limit>

                </address-setting>

                <address-setting match="jms.queue.offerQueue">

                   <dead-letter-address>jms.queue.DLQ</dead-letter-address>

                   <expiry-address>jms.queue.ExpiryQueue</expiry-address>

                   <redelivery-delay>3600000</redelivery-delay>

                   <max-delivery-attempts>25</max-delivery-attempts>

                   <max-size-bytes>1048576000</max-size-bytes>

                   <page-size-bytes>10485760</page-size-bytes>

                   <address-full-policy>PAGE</address-full-policy>

                   <message-counter-history-day-limit>10</message-counter-history-day-limit>

                </address-setting>

                <address-setting match="jms.queue.smsQueue">

                   <dead-letter-address>jms.queue.DLQ</dead-letter-address>

                   <expiry-address>jms.queue.ExpiryQueue</expiry-address>

                   <redelivery-delay>3600000</redelivery-delay>

                   <max-delivery-attempts>25</max-delivery-attempts>

                   <max-size-bytes>1048576000</max-size-bytes>

                   <page-size-bytes>10485760</page-size-bytes>

                   <address-full-policy>PAGE</address-full-policy>

                   <message-counter-history-day-limit>10</message-counter-history-day-limit>

                </address-setting>

                <!--default for catch all-->

                <!-- delay redelivery of messages for 1hr -->

                <address-setting match="#">

                   <dead-letter-address>jms.queue.DLQ</dead-letter-address>

                   <expiry-address>jms.queue.ExpiryQueue</expiry-address>

                   <redelivery-delay>3600000</redelivery-delay>

                   <max-delivery-attempts>25</max-delivery-attempts>

                   <max-size-bytes>1048576000</max-size-bytes>

                   <page-size-bytes>10485760</page-size-bytes>

                   <address-full-policy>PAGE</address-full-policy>

                   <message-counter-history-day-limit>10</message-counter-history-day-limit>

                </address-setting>

             </address-settings>

       

      There are 10 other queues that fall under the default address-setting matched by the wildcard.
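
      (For reference, with these values each address can hold 1048576000 bytes = 1000 MiB of messages in memory before paging starts, written out in 10485760-byte = 10 MiB page files; across the three named addresses plus the ten or so queues matched by the wildcard, the per-address limits add up to roughly 13 GB in total.)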

       

       

      Problem

       

      Over a period of time the direct ByteBuffer memory gradually grows, eventually spilling into swap space and finally throwing OutOfMemoryError ("Direct buffer memory").

       

      I have tried a lot of JVM and JMS tuning, but to no avail. Even specifying -XX:MaxDirectMemorySize=4G to the JVM only resulted in an earlier OOME for the same reason. It seems that either the ByteBuffers are never being read (and so released) or the GC isn't reclaiming the unreferenced memory.
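
      (For illustration, a diagnostic along these lines, not part of the application, tracks the "direct" pool via the standard BufferPoolMXBean; the same figures are visible remotely under the java.nio:type=BufferPool,name=direct MBean in JConsole.)

          // Hypothetical diagnostic sketch: run inside (or attached to) the broker JVM,
          // it samples the "direct" buffer pool once a minute so the growth can be
          // correlated with broker activity.
          import java.lang.management.BufferPoolMXBean;
          import java.lang.management.ManagementFactory;

          public class DirectBufferWatch {
              public static void main(String[] args) throws InterruptedException {
                  while (true) {
                      for (BufferPoolMXBean pool :
                              ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
                          if ("direct".equals(pool.getName())) {
                              System.out.printf("direct buffers: count=%d used=%d capacity=%d bytes%n",
                                      pool.getCount(), pool.getMemoryUsed(), pool.getTotalCapacity());
                          }
                      }
                      Thread.sleep(60_000);
                  }
              }
          }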

       

      Has anybody faced the same issue before?

       

      Any suggestions are welcome and thanks in advance.

        • 1. Re: Why is Direct ByteBuffer ever increasing on HornetQ server leading to OOM?
          jbertram

          It's impossible to offer much help with the current information.  Can you simplify the use-case and provide a reproducible test-case?  At the very least, can you provide a clear description of exactly what the application is doing?

          • 2. Re: Why is Direct ByteBuffer ever increasing on HornetQ server leading to OOM?
            tushargargus

            jbertram

             

             The application using this HornetQ JMS server cluster both produces and consumes messages, storing and reading persistent messages across different queues. We create ObjectMessages from various message objects serialized to String and also set the HDR_DUPLICATE_DETECTION_ID message property for duplicate detection.
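
             Roughly, the producer side looks like the sketch below (class and variable names are placeholders, not our actual code, and the real Connection/Session comes through the Spring caching layer described next):

             // Illustrative sketch only; names are hypothetical placeholders.
             import javax.jms.Connection;
             import javax.jms.MessageProducer;
             import javax.jms.ObjectMessage;
             import javax.jms.Queue;
             import javax.jms.Session;
             import org.hornetq.api.core.Message;  // for HDR_DUPLICATE_DETECTION_ID ("_HQ_DUPL_ID")

             public class PollingProducer {
                 public void send(Connection connection, Queue queue, String payload, String businessKey) throws Exception {
                     Session session = connection.createSession(true, Session.SESSION_TRANSACTED); // transacted session
                     try {
                         MessageProducer producer = session.createProducer(queue);
                         ObjectMessage message = session.createObjectMessage(payload); // message object serialized to String
                         // duplicate-detection ID cached by the broker (see id-cache-size below)
                         message.setStringProperty(Message.HDR_DUPLICATE_DETECTION_ID.toString(), businessKey);
                         producer.send(message);
                         session.commit();
                     } finally {
                         session.close();
                     }
                 }
             }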

             

             The producers and consumers execute in transacted session mode. We use Spring's CachingConnectionFactory to cache sessions and producers, while the consumers are set up with Spring's DefaultMessageListenerContainer using concurrency="5-10" and consumer caching.
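
             In code, that wiring is roughly the following (a sketch only; the listener and the bean/method names are placeholders):

             // Illustrative Spring wiring sketch; names are hypothetical.
             import javax.jms.ConnectionFactory;
             import javax.jms.MessageListener;
             import org.springframework.jms.connection.CachingConnectionFactory;
             import org.springframework.jms.listener.DefaultMessageListenerContainer;

             public class JmsWiring {

                 public CachingConnectionFactory cachingConnectionFactory(ConnectionFactory hornetQConnectionFactory) {
                     CachingConnectionFactory ccf = new CachingConnectionFactory(hornetQConnectionFactory);
                     ccf.setSessionCacheSize(10);  // cache sessions
                     ccf.setCacheProducers(true);  // cache producers created from those sessions
                     return ccf;
                 }

                 public DefaultMessageListenerContainer listenerContainer(ConnectionFactory connectionFactory,
                                                                          MessageListener listener,
                                                                          String queueName) {
                     DefaultMessageListenerContainer dmlc = new DefaultMessageListenerContainer();
                     dmlc.setConnectionFactory(connectionFactory);
                     dmlc.setDestinationName(queueName);
                     dmlc.setMessageListener(listener);
                     dmlc.setSessionTransacted(true);                                    // transacted consumers
                     dmlc.setConcurrency("5-10");                                        // 5 to 10 concurrent consumers
                     dmlc.setCacheLevel(DefaultMessageListenerContainer.CACHE_CONSUMER); // cache consumers
                     return dmlc;
                 }
             }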

             

             In the HornetQ server JMS configuration we have specified <consumer-window-size>5242880</consumer-window-size> on the connection factory. Moreover, the following other settings have been specified in hornetq-configuration.xml:

             

            <id-cache-size>800000</id-cache-size>

             

            <connectors>

                <connector name="netty">

                 <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>

                 <param key="host"  value="${hornetq.remoting.netty.host:localhost}"/>

                 <param key="port"  value="${hornetq.remoting.netty.port:5445}"/>

                 <param key="use-nio"  value="true"/>

                 <param key="tcp-receive-buffer-size"  value="65536"/>

                 <param key="batch-delay" value="100"/>

                </connector>

            </connectors>


            <acceptors>

              <acceptor name="netty">

                 <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>

                 <param key="host"  value="${hornetq.remoting.netty.host:localhost}"/>

                 <param key="port"  value="${hornetq.remoting.netty.port:5445}"/>

                 <param key="use-nio"  value="true"/>

                 <param key="tcp-receive-buffer-size"  value="65536"/>

                 <param key="batch-delay" value="100"/>

                 <param key="direct-deliver" value="false"/>

              </acceptor>

            </acceptors>

             

             Please let me know what other specifics you want to know about the application and I will furnish those details.

            • 3. Re: Why is Direct ByteBuffer ever increasing on HornetQ server leading to OOM?
              jbertram

              Can you work up a test-case I can use to reproduce what you're seeing?

              • 4. Re: Why is Direct ByteBuffer ever increasing on HornetQ server leading to OOM?
                tushargargus

                 I understand that a test-case would make it easy for you to reproduce the issue I am facing, but while I put one together, could you please give some pointers on where the problem might lie?

                • 5. Re: Why is Direct ByteBuffer ever increasing on HornetQ server leading to OOM?
                  jbertram

                  If I could give you any pointers I would.  However, there's just too much to sort through given your current description, and I don't have a lot of time to whittle it down with you.  If you can get me something I can reproduce then I can investigate a bit more.