I noticed that while using the Infinispan cache, the following objects end up taking roughly as much memory as the raw data objects stored in the Infinispan map.
Just wondering if others have noticed the same. If yes, are there ways to release this memory or avoid it? This causes our memory footprint to double! I'm using Infinispan 5.1.1.FINAL.
Any pointers would be greatly appreciated.
Yes, those "infrastructure" objects take quite some space, but to really double the amount of memory used you must be storing many very small values. Is that the case?
There is an upcoming patch for version 5.2 which will make it possible to replace the map implementation used internally: Java 8 will ship a new concurrent map which has no internal segments at all, and since the implementation is already available it can be backported in a separate jar for us to use.
Are you measuring a well warmed-up JVM? Those objects are mostly a cost only before the JIT decides to compile the map implementation. And of course profiling is going to lie as well: one needs to look at the heap sizes with a good number of entries in the map, but without instrumenting those entries, which would prevent the JIT from reaching optimal code.
No, we never observed such high memory consumption, but yes, you could already try the new map as explained at http://infinispan.blogspot.com/2012/03/jdk-8-backported-concurrenthashmaps-in.html
While some of these internal objects are "bulky", taking anywhere close to 5 MB per entry is far from reasonable; there must be something wrong. I'm bringing this up with teammates, but I'd appreciate some help from you: is there any test you could share, or the dumps?
From the heap dump report you posted it would appear that you have about 2124 ConcurrentHashMaps, and between them those maps have 11186942 Segments. Since the number of segments is derived from the concurrencyLevel, that would imply a concurrency level of ~5000, which is a bit excessive (it's supposed to be the number of threads writing to the map concurrently).
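To illustrate the arithmetic above, here is a small sketch (the `segmentsFor` helper is mine, mirroring how the pre-Java-8 ConcurrentHashMap rounds the requested concurrencyLevel up to the next power of two when sizing its Segment array; the dump figures are the ones quoted above):

```java
public class SegmentMath {
    // Pre-Java-8 ConcurrentHashMap sizes its Segment[] array as the
    // smallest power of two >= concurrencyLevel.
    static int segmentsFor(int concurrencyLevel) {
        int ssize = 1;
        while (ssize < concurrencyLevel) {
            ssize <<= 1;
        }
        return ssize;
    }

    public static void main(String[] args) {
        long totalSegments = 11_186_942L; // Segments seen in the heap dump
        long maps = 2_124L;               // ConcurrentHashMaps in the dump
        System.out.println(totalSegments / maps); // ~5266 segments per map
        System.out.println(segmentsFor(16));      // default level 16 -> 16 segments
        System.out.println(segmentsFor(5000));    // level 5000 -> 8192 segments
    }
}
```

Each of those segments carries its own lock state and table array, which is why an oversized concurrencyLevel multiplies the per-map overhead.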
Almost all of the ConcurrentHashMap$HashEntry arrays and ReentrantLock$NonfairSyncs are also retained by the CHM Segments (because Segment extends ReentrantLock).
If I got it right, decreasing your concurrency level from 5000 to 500 should make your memory usage much more reasonable. If I didn't, please post your config and we'll look further.
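For reference, a minimal sketch of where that knob lives in the declarative configuration (cache name hypothetical; the `concurrencyLevel` attribute on the `locking` element is what I'm assuming drives the internal map's segment count in the 5.x schema):

```xml
<namedCache name="myCache">
   <!-- concurrencyLevel should roughly match the number of
        concurrently writing threads, not the number of entries -->
   <locking concurrencyLevel="500" />
</namedCache>
```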