ClientSessionImpl uses synchronized on its producers field, which is a ConcurrentHashSet (actually a HornetQ wrapper around ConcurrentHashMap).
There is a private method Set<ClientProducerInternal> cloneProducers() that takes a copy of all the keys. I think this method is what motivated the double concurrency policy. I suggest we live with the weakly consistent view and copy the set using its iterator.
If that weak consistency during the clone is not acceptable, I suggest just using a regular HashSet there, as wrapping every access in synchronized defeats the purpose of a concurrent set.
In certain places we can afford to have just the ConcurrentHashMap in play (adding / removing), but at the point where we iterate we just need an instant copy.
But if every access goes through a synchronized block, then I agree there's no reason to use a ConcurrentHashMap.
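A minimal sketch of the suggested approach: adds and removes hit the concurrent set directly with no external lock, and cloneProducers() takes a weakly consistent snapshot via the set's iterator (the HashSet copy constructor iterates internally). Note this is an assumption-laden illustration, not HornetQ's actual code: Producer and SessionSketch are hypothetical names, and Java 8's ConcurrentHashMap.newKeySet() stands in for HornetQ's own ConcurrentHashSet wrapper.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for ClientProducerInternal
interface Producer { String id(); }

class SessionSketch {
    // Concurrent set: adds/removes are thread-safe without any synchronized block.
    // Stand-in for HornetQ's ConcurrentHashSet wrapper around ConcurrentHashMap.
    private final Set<Producer> producers = ConcurrentHashMap.newKeySet();

    void addProducer(Producer p)    { producers.add(p); }
    void removeProducer(Producer p) { producers.remove(p); }

    // Instant copy via the set's weakly consistent iterator -- no lock needed.
    // Concurrent adds/removes during the copy may or may not be reflected,
    // which is the "live with the concurrency" trade-off discussed above.
    Set<Producer> cloneProducers() {
        return new HashSet<>(producers);
    }
}
```

The returned snapshot is independent of the live set, so callers can iterate it safely while other threads keep adding and removing producers.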
This was fixed while testing on EAP, under a lot of pressure to get it resolved. Maybe I didn't do a proper analysis of the fix.