-
1. Re: 2nd level cache for JPA connector and LargeValues
rhauch Dec 16, 2011 9:05 AM (in response to jamat)
I think we are including them in the 2nd level cache, which is a ridiculous oversight. If you don't mind, would you please log a bug in JIRA, and we'll get it fixed in next week's 2.7 release. Thanks.
-
2. Re: 2nd level cache for JPA connector and LargeValues
jamat Dec 16, 2011 9:25 AM (in response to rhauch)
https://issues.jboss.org/browse/MODE-1349
Kind of related: it seems that I have an issue with the Lucene indexing of large binary values.
(Note: I was just testing the behavior of ModeShape with large binaries, not really focusing on the Lucene indexes.)
In my repository I have about 20 large binaries:
mysql> select LENGTH, TYPE from MODE_SIMPLE_LARGE_VALUES;
+-----------+--------+
| LENGTH | TYPE |
+-----------+--------+
| 112932843 | BINARY |
| 112932843 | BINARY |
| 112932843 | BINARY |
| 112932843 | BINARY |
| 112932842 | BINARY |
| 112932840 | BINARY |
| 112932843 | BINARY |
| 112932842 | BINARY |
| 112932843 | BINARY |
| 112932843 | BINARY |
| 112932842 | BINARY |
| 3017 | BINARY |
| 112932843 | BINARY |
| 112932842 | BINARY |
| 112932842 | BINARY |
| 112932842 | BINARY |
| 112932842 | BINARY |
| 112932842 | BINARY |
| 112932843 | BINARY |
| 112932842 | BINARY |
+-----------+--------+
20 rows in set (0.00 sec)
I was adding and deleting a node and I got stuck.
The deletion thread is waiting for a lock that is held by another thread which is also waiting:
"modeshape-indexing" prio=10 tid=0x0f829800 nid=0x400e waiting on condition [0x0d00b000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x86f7a760> (a java.util.concurrent.CountDownLatch$Sync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:747)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:905)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1217)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:207)
at org.modeshape.graph.request.CompositeRequestChannel.addAndAwait(CompositeRequestChannel.java:283)
at org.modeshape.graph.search.SearchEngineIndexer.indexSubgraph(SearchEngineIndexer.java:394)
at org.modeshape.graph.search.SearchEngineIndexer.index(SearchEngineIndexer.java:252)
at org.modeshape.graph.search.SearchEngineIndexer.index(SearchEngineIndexer.java:160)
at org.modeshape.search.lucene.LuceneSearchEngine.index(LuceneSearchEngine.java:274)
at org.modeshape.jcr.RepositoryQueryManager$SelfContained.process(RepositoryQueryManager.java:418)
at org.modeshape.jcr.RepositoryQueryManager$SelfContained$4$1.run(RepositoryQueryManager.java:341)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
I am trying to debug to see what is going on, but you may be quicker.
-
3. Re: 2nd level cache for JPA connector and LargeValues
rhauch Dec 16, 2011 9:25 AM (in response to rhauch)
We're having a discussion on IRC. We're not certain, but we think the @Lob annotation on our LargeValueEntity class means that the LargeValueEntity won't be cached, regardless of our setting. We're trying to verify this; if that's correct, then there shouldn't be a problem.
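As an aside, the behavior we're hoping for can be sketched in plain Java (hypothetical class and method names, not ModeShape or Hibernate code): a cache that simply refuses to hold values beyond a size threshold, so large values are always re-read from the store instead of filling the heap.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch, NOT ModeShape/Hibernate code: illustrates the behavior
// we are hoping @Lob gives us, i.e. values over a size threshold never enter
// the second-level cache and must always be re-read from the store.
public class SizeAwareCache {
    private final Map<String, byte[]> cache = new HashMap<>();
    private final int maxCacheableBytes;

    public SizeAwareCache(int maxCacheableBytes) {
        this.maxCacheableBytes = maxCacheableBytes;
    }

    /** Cache the value only if it is small enough; large values bypass the cache. */
    public void put(String key, byte[] value) {
        if (value.length <= maxCacheableBytes) {
            cache.put(key, value);
        }
    }

    /** Returns the cached value, or null if it was too large to cache (or never stored). */
    public byte[] get(String key) {
        return cache.get(key);
    }
}
```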
-
4. Re: 2nd level cache for JPA connector and LargeValues
jamat Dec 16, 2011 12:00 PM (in response to rhauch)
After further debugging, the reason my Lucene indexing thread was stuck is that one of the READ_BRANCH requests failed with a java.lang.OutOfMemoryError. This error is not caught, and we end up with this deadlock. I guess we cannot do much about the error itself, but ModeShape should at least be more vocal about it.
My concern, though, is how to avoid the OOM in the first place. I was running with a 2 GB heap, and it is true that all my binaries together exceed that size, but I wonder why ModeShape tries to load everything.
One thing I have just noticed: INDEX_READ_DEPTH is not set in my config file, and the default value is 4. I will try a value of 2 and see if it behaves better.
-
5. Re: 2nd level cache for JPA connector and LargeValues
jamat Dec 16, 2011 12:11 PM (in response to jamat)
2 did not help. Nor did 1.
Can you tell me how this Lucene indexing works? It seems to me that my whole repository is retrieved from the database, so no matter what heap size I have, if my repository is bigger than the heap it will not work.
What am I missing?
-
6. Re: 2nd level cache for JPA connector and LargeValues
rhauch Dec 16, 2011 1:02 PM (in response to jamat)
"After further debugging, the reason my Lucene indexing thread was stuck is that one of the READ_BRANCH requests failed with a java.lang.OutOfMemoryError. This error is not caught, and we end up with this deadlock. I guess we cannot do much about the error itself, but ModeShape should at least be more vocal about it."
You don't happen to have the stack trace for this OOM exception, do you? That'd make it a lot easier to track down where we should be catching the problem and releasing the lock.
-
7. Re: 2nd level cache for JPA connector and LargeValues
jamat Dec 16, 2011 1:10 PM (in response to rhauch)
Sorry, I do not have it, and I am trying something else right now.
I do not know where the exception is generated, but in
modeshape-graph/src/main/java/org/modeshape/graph/request/processor/RequestProcessor.java
public void process( CompositeRequest request ) {
when we call process(embedded), we only catch RuntimeException.
When the OOM is thrown, this thread terminates, and later, when we try to add a new request to the CompositeRequestChannel, we are stuck because that request will never be processed.
HTH.
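The failure mode can be reproduced in miniature (hypothetical names, not the actual ModeShape classes): a worker that only catches RuntimeException lets an OutOfMemoryError (an Error, not a RuntimeException) escape, so the CountDownLatch the caller awaits on is never counted down and the caller hangs forever. Releasing the latch in a finally block guarantees the waiter wakes up even when an Error is thrown.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch, NOT ModeShape code: a worker processes a request and a
// caller awaits its completion. Catching only RuntimeException would let an
// OutOfMemoryError skip the countDown(); the finally block prevents that.
public class LatchSafetyDemo {
    public static boolean processAndAwait(Runnable request, long timeoutMs) {
        CountDownLatch done = new CountDownLatch(1);
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.submit(() -> {
            try {
                request.run(); // may throw an Error, e.g. OutOfMemoryError
            } catch (RuntimeException e) {
                // the only case the original code handled
            } finally {
                done.countDown(); // always release waiters, even on Error
            }
        });
        try {
            // true if the worker finished (normally or not) within the timeout
            return done.await(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        } finally {
            pool.shutdownNow();
        }
    }
}
```

Without the finally block, the first call below would time out instead of returning promptly.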
-
8. Re: 2nd level cache for JPA connector and LargeValues
rhauch Dec 16, 2011 1:25 PM (in response to jamat)
"Can you tell me how this Lucene indexing works? It seems to me that my whole repository is retrieved from the database, so no matter what heap size I have, if my repository is bigger than the heap it will not work."
We tried hard to make sure that we don't keep everything in memory, and I'm a bit surprised to hear that the "indexReadDepth" repository option isn't affecting what you're seeing.
When rebuilding the indexes, the indexing logic reads through the workspace content one portion (subgraph) at a time. The "indexReadDepth" repository option (in the configuration file) controls how deep each subgraph should be: if the value is 1, only a node, its properties, and the references to its children are read; if the value is 2, a node, its properties, its children (with their properties), and the references to the grandchildren are read into the subgraph.
If your structure contains 'nt:file'-like nodes, where the binary values sit one level below (i.e., "myFile/jcr:content/jcr:data"), then with a value of '1' or '2' the subgraph should never contain the binary values of more than one file; at least 3 levels are required before a subgraph can contain more than one binary value. If, however, your structure is flatter than this (with the binaries not so deep), then even '2' could be a problem. A value of '1', though, never loads the properties of more than one node at a time.
Can you post the section of the configuration that shows your repository options? I'd just like to make sure the option is set properly.
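The depth arithmetic above can be sketched as follows (a simplified model, not ModeShape's actual traversal; all names are illustrative): model the workspace as a tree, load properties for every node within the read depth, and count how many binary-carrying nodes end up loaded at once.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch, NOT ModeShape's indexer: counts how many binary-carrying
// nodes have their properties loaded by a subgraph read of a given depth.
// A read of depth d loads properties for nodes at levels 0..d-1 below the root.
public class SubgraphDepthDemo {
    public static Map<String, List<String>> children = new HashMap<>();
    public static Map<String, Boolean> hasBinary = new HashMap<>();

    public static void node(String path, boolean binary, String... kids) {
        children.put(path, List.of(kids));
        hasBinary.put(path, binary);
    }

    /** Count binaries whose owning node's properties are loaded by reading 'depth' levels. */
    public static int binariesLoaded(String root, int depth) {
        if (depth <= 0) return 0;
        int count = hasBinary.getOrDefault(root, false) ? 1 : 0;
        for (String child : children.getOrDefault(root, List.of())) {
            count += binariesLoaded(child, depth - 1);
        }
        return count;
    }

    public static void main(String[] args) {
        // Two nt:file-like nodes; the binaries live on the jcr:content children.
        node("/", false, "/a", "/b");
        node("/a", false, "/a/jcr:content");
        node("/a/jcr:content", true);
        node("/b", false, "/b/jcr:content");
        node("/b/jcr:content", true);

        System.out.println(binariesLoaded("/", 2));  // file properties only -> 0 binaries
        System.out.println(binariesLoaded("/", 3));  // jcr:content properties too -> 2 binaries
        System.out.println(binariesLoaded("/a", 2)); // one file walked at depth 2 -> 1 binary
    }
}
```

This matches the rule of thumb above: with the binaries one level below the files, a read depth of 2 per file loads at most one binary at a time, while a depth-3 read from higher up pulls in every binary at once.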
-
9. Re: 2nd level cache for JPA connector and LargeValues
rhauch Dec 16, 2011 1:27 PM (in response to rhauch)
I've been looking at the indexing code, and there's one section where it seems like we might be holding onto the subgraphs longer than we should. I'll do some debugging to see if my hunch is right. I'm not sure why we haven't seen this problem before in our testing; maybe our tests just don't use enough large binary values.
-
10. Re: 2nd level cache for JPA connector and LargeValues
jamat Dec 16, 2011 1:41 PM (in response to rhauch)
I think the parameter was correctly configured (but see below anyway), as I could see that the READ_BRANCH requests were reading only one or two levels. Also, I do not have a flat structure; the binary values are in nodes that have one parent and one grandparent.
My config (only the index part):
<!-- Explicitly specify the directory where the index files should be stored. -->
<mode:option jcr:name="queryIndexDirectory" mode:value="${jboss.server.data.dir}/modeshape/repositories/store/indexes"/>
<mode:option jcr:name="QUERY_INDEXES_UPDATED_SYNCHRONOUSLY" mode:value="false"/>
<mode:option jcr:name="INDEX_READ_DEPTH" mode:value="2"/>
<!-- Should indexes be rebuilt synchronously when the repository restarts? Default: true -->
<mode:option jcr:name="queryIndexesRebuiltSynchronously" mode:value="true"/>
<!-- specifies the strategy (always or ifMissing) used to determine which query indexes need to be rebuilt when the repository restarts -->
<mode:option jcr:name="rebuildQueryIndexOnStartup" mode:value="ifMissing"/>
-
11. Re: 2nd level cache for JPA connector and LargeValues
rhauch Dec 16, 2011 3:46 PM (in response to jamat)
When used in the XML file, the names of the options should be camel-cased, while the Java constants are upper-cased. Therefore, your configuration should look more like this (the changed names are "queryIndexesUpdatedSynchronously" and "indexReadDepth"):
<!-- Explicitly specify the directory where the index files should be stored. -->
<mode:option jcr:name="queryIndexDirectory" mode:value="${jboss.server.data.dir}/modeshape/repositories/store/indexes"/>
<mode:option jcr:name="queryIndexesUpdatedSynchronously" mode:value="false"/>
<mode:option jcr:name="indexReadDepth" mode:value="2"/>
<mode:option jcr:name="queryIndexesRebuiltSynchronously" mode:value="true"/>
<!-- specifies the strategy (always or ifMissing) used to determine which query indexes need to be rebuilt when the repository restarts -->
<mode:option jcr:name="rebuildQueryIndexOnStartup" mode:value="ifMissing"/>
-
12. Re: 2nd level cache for JPA connector and LargeValues
jamat Dec 16, 2011 3:53 PM (in response to rhauch)
To tell you the truth, I found out about those parameters while reading the code, and that is how they are spelled there.
I may have missed the complete list of parameters in the documentation, though.
-
13. Re: 2nd level cache for JPA connector and LargeValues
rhauch Dec 16, 2011 4:02 PM (in response to jamat)
All of the repository options are listed in Section 6.2 of the Reference Guide.
Also, earlier you mentioned that RequestProcessor.process(CompositeRequest) only catches RuntimeException around the processing of each embedded request. That works because each process(Request) call can't throw checked exceptions.
Don't worry about not having the trace anymore. But if you do come across it again, please let us know.
-
14. Re: 2nd level cache for JPA connector and LargeValues
jamat Dec 17, 2011 3:00 AM (in response to rhauch)
"All of the repository options are listed in Section 6.2 of the Reference Guide."
Of course. Silly me.
So I added a catch-all to be able to print the stack trace of the exception, and this is what I got:
23:31:35,641 ERROR [STDERR] java.lang.OutOfMemoryError: Java heap space
23:31:35,642 ERROR [STDERR] at java.util.Arrays.copyOf(Arrays.java:2786)
23:31:35,642 ERROR [STDERR] at java.io.ByteArrayOutputStream.toByteArray(ByteArrayOutputStream.java:133)
23:31:35,642 ERROR [STDERR] at org.hibernate.type.descriptor.java.DataHelper.extractBytes(DataHelper.java:190)
23:31:35,642 ERROR [STDERR] at org.hibernate.type.descriptor.java.PrimitiveByteArrayTypeDescriptor.wrap(PrimitiveByteArrayTypeDescriptor.java:124)
23:31:35,643 ERROR [STDERR] at org.hibernate.type.descriptor.java.PrimitiveByteArrayTypeDescriptor.wrap(PrimitiveByteArrayTypeDescriptor.java:41)
23:31:35,643 ERROR [STDERR] at org.hibernate.type.descriptor.sql.BlobTypeDescriptor$5.doExtract(BlobTypeDescriptor.java:115)
23:31:35,643 ERROR [STDERR] at org.hibernate.type.descriptor.sql.BasicExtractor.extract(BasicExtractor.java:64)
23:31:35,643 ERROR [STDERR] at org.hibernate.type.AbstractStandardBasicType.nullSafeGet(AbstractStandardBasicType.java:254)
23:31:35,643 ERROR [STDERR] at org.hibernate.type.AbstractStandardBasicType.nullSafeGet(AbstractStandardBasicType.java:250)
23:31:35,643 ERROR [STDERR] at org.hibernate.type.AbstractStandardBasicType.nullSafeGet(AbstractStandardBasicType.java:230)
23:31:35,644 ERROR [STDERR] at org.hibernate.type.AbstractStandardBasicType.hydrate(AbstractStandardBasicType.java:331)
23:31:35,644 ERROR [STDERR] at org.hibernate.persister.entity.AbstractEntityPersister.hydrate(AbstractEntityPersister.java:2283)
23:31:35,644 ERROR [STDERR] at org.hibernate.loader.Loader.loadFromResultSet(Loader.java:1527)
23:31:35,644 ERROR [STDERR] at org.hibernate.loader.Loader.instanceNotYetLoaded(Loader.java:1455)
23:31:35,644 ERROR [STDERR] at org.hibernate.loader.Loader.getRow(Loader.java:1355)
23:31:35,644 ERROR [STDERR] at org.hibernate.loader.Loader.getRowFromResultSet(Loader.java:611)
23:31:35,644 ERROR [STDERR] at org.hibernate.loader.Loader.doQuery(Loader.java:829)
23:31:35,644 ERROR [STDERR] at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:274)
23:31:35,644 ERROR [STDERR] at org.hibernate.loader.Loader.loadEntity(Loader.java:2037)
23:31:35,644 ERROR [STDERR] at org.hibernate.loader.entity.AbstractEntityLoader.load(AbstractEntityLoader.java:86)
23:31:35,645 ERROR [STDERR] at org.hibernate.loader.entity.AbstractEntityLoader.load(AbstractEntityLoader.java:76)
23:31:35,645 ERROR [STDERR] at org.hibernate.persister.entity.AbstractEntityPersister.load(AbstractEntityPersister.java:3293)
23:31:35,645 ERROR [STDERR] at org.hibernate.event.def.DefaultLoadEventListener.loadFromDatasource(DefaultLoadEventListener.java:496)
23:31:35,645 ERROR [STDERR] at org.hibernate.event.def.DefaultLoadEventListener.doLoad(DefaultLoadEventListener.java:477)
23:31:35,645 ERROR [STDERR] at org.hibernate.event.def.DefaultLoadEventListener.load(DefaultLoadEventListener.java:227)
23:31:35,645 ERROR [STDERR] at org.hibernate.event.def.DefaultLoadEventListener.proxyOrLoad(DefaultLoadEventListener.java:285)
23:31:35,645 ERROR [STDERR] at org.hibernate.event.def.DefaultLoadEventListener.onLoad(DefaultLoadEventListener.java:152)
23:31:35,645 ERROR [STDERR] at org.hibernate.impl.SessionImpl.fireLoad(SessionImpl.java:1090)
23:31:35,645 ERROR [STDERR] at org.hibernate.impl.SessionImpl.get(SessionImpl.java:1005)
23:31:35,645 ERROR [STDERR] at org.hibernate.impl.SessionImpl.get(SessionImpl.java:998)
23:31:35,645 ERROR [STDERR] at org.hibernate.ejb.AbstractEntityManagerImpl.find(AbstractEntityManagerImpl.java:614)
23:31:35,646 ERROR [STDERR] at org.hibernate.ejb.AbstractEntityManagerImpl.find(AbstractEntityManagerImpl.java:589)
It does not show more than this.
I am doing another test to take a jstack just before the exception, to capture the rest of the stack:
"search-store-7-thread-1" prio=10 tid=0x122d7000 nid=0x26a6 runnable [0x0f10b000]
java.lang.Thread.State: RUNNABLE
at java.util.Arrays.copyOf(Arrays.java:2786)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94)
- locked <0x888c29e0> (a java.io.ByteArrayOutputStream)
at org.hibernate.type.descriptor.java.DataHelper.extractBytes(DataHelper.java:170)
at org.hibernate.type.descriptor.java.PrimitiveByteArrayTypeDescriptor.wrap(PrimitiveByteArrayTypeDescriptor.java:124)
at org.hibernate.type.descriptor.java.PrimitiveByteArrayTypeDescriptor.wrap(PrimitiveByteArrayTypeDescriptor.java:41)
at org.hibernate.type.descriptor.sql.BlobTypeDescriptor$5.doExtract(BlobTypeDescriptor.java:115)
at org.hibernate.type.descriptor.sql.BasicExtractor.extract(BasicExtractor.java:64)
at org.hibernate.type.AbstractStandardBasicType.nullSafeGet(AbstractStandardBasicType.java:254)
at org.hibernate.type.AbstractStandardBasicType.nullSafeGet(AbstractStandardBasicType.java:250)
at org.hibernate.type.AbstractStandardBasicType.nullSafeGet(AbstractStandardBasicType.java:230)
at org.hibernate.type.AbstractStandardBasicType.hydrate(AbstractStandardBasicType.java:331)
at org.hibernate.persister.entity.AbstractEntityPersister.hydrate(AbstractEntityPersister.java:2283)
at org.hibernate.loader.Loader.loadFromResultSet(Loader.java:1527)
at org.hibernate.loader.Loader.instanceNotYetLoaded(Loader.java:1455)
at org.hibernate.loader.Loader.getRow(Loader.java:1355)
at org.hibernate.loader.Loader.getRowFromResultSet(Loader.java:611)
at org.hibernate.loader.Loader.doQuery(Loader.java:829)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:274)
at org.hibernate.loader.Loader.loadEntity(Loader.java:2037)
at org.hibernate.loader.entity.AbstractEntityLoader.load(AbstractEntityLoader.java:86)
at org.hibernate.loader.entity.AbstractEntityLoader.load(AbstractEntityLoader.java:76)
at org.hibernate.persister.entity.AbstractEntityPersister.load(AbstractEntityPersister.java:3293)
at org.hibernate.event.def.DefaultLoadEventListener.loadFromDatasource(DefaultLoadEventListener.java:496)
at org.hibernate.event.def.DefaultLoadEventListener.doLoad(DefaultLoadEventListener.java:477)
at org.hibernate.event.def.DefaultLoadEventListener.load(DefaultLoadEventListener.java:227)
at org.hibernate.event.def.DefaultLoadEventListener.proxyOrLoad(DefaultLoadEventListener.java:285)
at org.hibernate.event.def.DefaultLoadEventListener.onLoad(DefaultLoadEventListener.java:152)
at org.hibernate.impl.SessionImpl.fireLoad(SessionImpl.java:1090)
at org.hibernate.impl.SessionImpl.get(SessionImpl.java:1005)
at org.hibernate.impl.SessionImpl.get(SessionImpl.java:998)
at org.hibernate.ejb.AbstractEntityManagerImpl.find(AbstractEntityManagerImpl.java:614)
at org.hibernate.ejb.AbstractEntityManagerImpl.find(AbstractEntityManagerImpl.java:589)
at org.modeshape.connector.store.jpa.model.simple.SimpleJpaRepository$LargeValueSerializer.read(SimpleJpaRepository.java:873)
at org.modeshape.connector.store.jpa.util.Serializer.deserializePropertyValues(Serializer.java:850)
at org.modeshape.connector.store.jpa.util.Serializer.deserializeProperty(Serializer.java:696)
at org.modeshape.connector.store.jpa.util.Serializer.deserializeAllProperties(Serializer.java:614)
at org.modeshape.connector.store.jpa.model.simple.SimpleJpaRepository$JpaNode.ensurePropertiesLoaded(SimpleJpaRepository.java:619)
at org.modeshape.connector.store.jpa.model.simple.SimpleJpaRepository$JpaNode.getProperties(SimpleJpaRepository.java:685)
at org.modeshape.connector.store.jpa.model.simple.SimpleRequestProcessor.process(SimpleRequestProcessor.java:127)
at org.modeshape.graph.request.processor.RequestProcessor.process(RequestProcessor.java:299)
at org.modeshape.graph.request.processor.RequestProcessor.process(RequestProcessor.java:376)
at org.modeshape.graph.request.processor.RequestProcessor.process(RequestProcessor.java:239)
at org.modeshape.connector.store.jpa.model.simple.SimpleJpaConnection.execute(SimpleJpaConnection.java:133)
at org.modeshape.graph.connector.RepositoryConnectionPool$ConnectionWrapper.execute(RepositoryConnectionPool.java:1135)
at org.modeshape.graph.request.CompositeRequestChannel$2.call(CompositeRequestChannel.java:193)
at org.modeshape.graph.request.CompositeRequestChannel$2.call(CompositeRequestChannel.java:183)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
This is it.
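For what it's worth, the trace shows the BLOB being fully materialized into a byte[] via ByteArrayOutputStream (the Arrays.copyOf frames are the buffer doubling as it grows), which needs heap proportional to the value's size. A streaming copy through a fixed-size buffer, sketched below with illustrative names (not Hibernate's or ModeShape's API), keeps memory usage constant regardless of the value's length.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.UncheckedIOException;

// Illustrative sketch, NOT Hibernate/ModeShape code: copy a (potentially huge)
// value stream through a small fixed buffer instead of materializing it all
// into one byte[]. Memory use is bounded by the buffer, not the value size.
public class StreamingCopy {
    /** Copy 'in' to 'out' using a fixed 8 KB buffer; returns the number of bytes copied. */
    public static long copy(InputStream in, OutputStream out) {
        byte[] buffer = new byte[8192]; // constant memory, independent of value size
        long total = 0;
        try {
            int n;
            while ((n = in.read(buffer)) != -1) {
                out.write(buffer, 0, n);
                total += n;
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return total;
    }
}
```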