I'm facing a totally unexpected issue with the JBoss 5.1 JPA implementation.
Actually, I was trying to write my own interceptor to collect some statistics about the performance of my EJBs when I found this phenomenon:
My interceptor didn't show the real time of the EJB method invocations; it showed very small values compared to the real time. I realized this because the average response times from JBoss were about 10-20s while the interceptor was showing about 1-1.5s. All my EJBs use CMT and I don't work with transactions directly. So when I found out that after a method invocation was done, JBoss was waiting for some time before returning results up the chain of methods, I decided to try calling tx.commit() myself instead of using CMT. And I was really surprised by what I found out.
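This gap is actually expected with CMT: the container commits the transaction only after the interceptor chain has unwound, so an @AroundInvoke timer never sees the commit time. A minimal plain-Java sketch of that effect (the sleep durations are made-up stand-ins for the method body and the container-managed commit, not measurements from the thread):

```java
// Sketch (plain Java, no container): with CMT, the container commits AFTER the
// interceptor chain returns, so an @AroundInvoke-style timer only measures the
// method body. The sleeps below are illustrative stand-ins.
public class CmtTimingGap {

    // What a timing interceptor effectively does around proceed().
    static long timeMs(Runnable work) {
        long start = System.nanoTime();
        work.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    static void sleep(long ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        long bodyMs = timeMs(() -> sleep(100));   // what the interceptor sees
        long commitMs = timeMs(() -> sleep(400)); // container commit, outside the chain
        System.out.println("interceptor saw ~" + bodyMs + "ms");
        System.out.println("client saw ~" + (bodyMs + commitMs) + "ms");
    }
}
```

So a 1-1.5s interceptor reading next to 10-20s client responses just means most of the time is spent after the chain returns, i.e. in the commit.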
When the load on the server starts growing, the time of the tx.commit() invocation grows with the number of concurrent clients. Like this:
1 client: 18:07:45,720 INFO [STDOUT] 161 tx.commit() took : 66ms
from 328ms to 1545ms
from 950ms to 1732ms
from 2358ms to 4692ms
from 5947ms to 9553ms
from 8408ms to 13046ms
Could someone tell me why the tx.commit() invocation starts taking so long?
I'm using a MySQL DB, but I don't think that it could cause these issues. I think the problem is somewhere in Hibernate, but I'm not sure what it is.
I will appreciate any help.
Thanks for the reply. But it looks a little bit obvious to me that when the server is getting more concurrent requests it should start slowing down; I just can't understand what causes such a long time for transactions to commit.
Is this an issue with MySQL? If so, I will probably have to move to Oracle.
Is this an issue with Hibernate or second-level caches? If so, I will probably have to move to another ORM or to plain JDBC.
Is this an issue with the JBoss 5.1 EJB/JPA implementation? If so, I will have to move to another server.
I'm sure it's not acceptable for my application to start responding to requests in 20-30s when there are ~150 concurrent requests.
P.S. I guess that I will eventually find an answer/solution that satisfies me, but I just wonder if someone has already faced and solved such a problem before.
Anyway, thank you for the response. I do appreciate it when someone tries to help. That is what the community exists for.
Something you can try is to store all your data in an in-memory database and see how the system performs. This would eliminate the overhead of disk access from the measurements.
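If you want to try that without changing code, one way (a sketch only; it assumes the H2 jar has been dropped into server/default/lib, and the JNDI and database names here are made up) is to swap the MySQL descriptor for an in-memory H2 one in the same *-ds.xml format the server already uses:

```xml
<!-- Sketch of a JBoss 5.x *-ds.xml descriptor pointing at an in-memory H2 DB.
     "InMemoryDS" and "perftest" are illustrative names, not from this thread. -->
<datasources>
  <local-tx-datasource>
    <jndi-name>InMemoryDS</jndi-name>
    <connection-url>jdbc:h2:mem:perftest;DB_CLOSE_DELAY=-1</connection-url>
    <driver-class>org.h2.Driver</driver-class>
    <user-name>sa</user-name>
    <password></password>
  </local-tx-datasource>
</datasources>
```

If the commit times flatten out with this datasource, the bottleneck is on the MySQL/disk side; if they don't, that points back at transaction handling in the server or in Hibernate.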
I suggest gathering data using a profiling tool. It's not more than a day's worth of work (maybe even hours) to find the hotspots in the code.
Thank you for your response. I was actually going to use something like JProfiler to find the hotspots. I don't think that an in-memory database could help me, because it won't match my requirements anyway, but still, if the issue is inside MySQL I could move to Oracle DB.
I actually don't believe that the problem is in disk access; I would rather bet on some kind of lock somewhere.
P.S. I'm also going to play with the flush method and with decreasing the size of my transactions to try to get rid of this issue. Let's see if it helps.
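For the flush/transaction-size experiment, one common shape is to flush and clear the persistence context every N persisted entities, so Hibernate's dirty-checking and write-out work doesn't all pile up at commit time. A sketch assuming an injected EntityManager (BATCH_SIZE and the generic entity list are illustrative, not from the original code):

```java
import javax.persistence.EntityManager;

// Sketch: periodic flush/clear keeps the persistence context small, so less
// work is deferred to tx.commit(). BATCH_SIZE is an illustrative value.
public class BatchedWriter {
    private static final int BATCH_SIZE = 50;

    public static void saveAll(EntityManager em, Iterable<?> entities) {
        int count = 0;
        for (Object entity : entities) {
            em.persist(entity);
            if (++count % BATCH_SIZE == 0) {
                em.flush(); // push pending inserts to the database now
                em.clear(); // detach managed entities so commit re-checks less
            }
        }
    }
}
```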
Regards and thanks again,
Several performance issues were identified in JBoss Application Server 5.1 and subsequently fixed in Enterprise Application Platform 5.1. These included issues with the transaction manager itself.
You can download a trial version of EAP 5.1 from http://www.jboss.com/downloads/.
Please let me know if that solves your issue.
Thanks Carlo. I tried EAP 5.1 but it behaved almost the same. I also tried making my transactions smaller, using the entity manager's flush method, using another isolation level in mysql-ds.xml, etc. But it performed the same under high load, so I suppose the reason is either in JBoss/Hibernate or in MySQL. I'm going to try JProfiler soon, and I will probably also try moving to Oracle DB.
So thanks everyone again. I will post the results here as soon as they are available, and maybe we will continue with this.