6 Replies Latest reply: Feb 14, 2011 12:01 PM by Shane Johnson

Failed Joins / Invalidation

Shane Johnson Novice

Why does RehashTask.invalidateInvalidHolders block until all responses have been received?

 

for (Future f : futures) f.get();

 

My first thought was that this might be for consistency reasons. However, even when one or more of these requests results in an exception, the join still completes. In addition, while the node may enter a FAILED state, it still remains in the cluster topology.

 

Perhaps this line can be removed. Or perhaps a timeout can be added to the get() call, and the resulting TimeoutException caught and logged.
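A minimal sketch of that second option, assuming the futures are the invalidation RPC responses. The class/method names and the timeout value are illustrative only, not Infinispan's actual code or defaults:

```java
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Hypothetical sketch: bound each get() with a timeout so one slow or
// failed responder cannot block the invalidation phase indefinitely.
public class BoundedInvalidationWait {
    static void awaitResponses(List<? extends Future<?>> futures,
                               long timeout, TimeUnit unit)
            throws InterruptedException {
        for (Future<?> f : futures) {
            try {
                f.get(timeout, unit);
            } catch (TimeoutException te) {
                // Log and move on instead of failing the whole join.
                System.err.println("Invalidation response timed out; continuing");
            } catch (ExecutionException ee) {
                // A remote failure is logged, not treated as fatal to the join.
                System.err.println("Invalidation failed: " + ee.getCause());
            }
        }
    }
}
```

With this shape, a slow node costs at most one timeout interval per outstanding future rather than blocking forever.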

 

In our testing, the invalidation process can take longer than 15s if there are tens of thousands of entries to be invalidated. We're still trying to determine why it takes so long. In any event...

 

Here is why I am looking at this.

 

  1. Nodes 1, 2, and 3 started.
  2. Node 4 starts and attempts to join.
  3. Node 4 sends an invalidation request to Node 1.
  4. Node 4 throws a timeout exception after 15s.
  5. Node 4 finishes the join.
  6. Node 4 enters a FAILED state.

 

There appear to be two problems here.

 

One is obviously that the timeout is causing this node to enter a FAILED state. The other is that even though this node is in a FAILED state, it is still a part of the cluster topology.

 

Any node added in the future that includes Node 4 in a rehash will subsequently fail.

  • 1. Failed Joins / Invalidation
    Kapil Nayar Newbie

    Just to add to what Shane has indicated.

     

    I think the cache entries have already been migrated to this new (failed) node. It really depends on the point of failure (timeout): if it occurs while the invalidation request is being processed, then the node has already applied the state. Meanwhile, the node doing the invalidating appears to keep going and finishes.

     

    As a result, we may be in a state where we have a failed Cache node with valid entries not accessible to the rest of the cluster.

     

    Kapil

  • 2. Failed Joins / Invalidation
    Shane Johnson Novice

    Good call Kapil.

     

    This leads me to believe that perhaps the join process should be as follows.

     

    1. Send State Request
    2. Send Update Topology Request
    3. Send Invalidation Request

     

    If the state request (rehashing) fails, then the topology should not be updated and the cluster should continue to function properly.

    If the topology request fails, the cluster should continue to function properly.

     

    If the invalidation request fails, there may be issues. I'm wondering if an invalidation request is actually necessary. Could it not be part of the update topology request, so that rather than being sent a list of entries to invalidate, each node determines which entries to invalidate based on the topology change?
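A rough sketch of that idea: on a topology update, each node walks its local store and drops keys it no longer owns under the new consistent hash. The `ownerOf` function and all names here are hypothetical stand-ins, not Infinispan's API:

```java
import java.util.Iterator;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch: local invalidation driven by the new topology.
// ownerOf stands in for the cluster's consistent-hash lookup.
public class LocalInvalidation {
    static <K, V> int invalidateNonOwned(Map<K, V> localStore,
                                         String selfAddress,
                                         Function<K, String> ownerOf) {
        int removed = 0;
        Iterator<K> it = localStore.keySet().iterator();
        while (it.hasNext()) {
            K key = it.next();
            if (!selfAddress.equals(ownerOf.apply(key))) {
                it.remove();   // key is owned elsewhere under the new topology
                removed++;
            }
        }
        return removed;
    }
}
```

This trades a cluster-wide RPC for a purely local scan, so there is no invalidation request left to time out.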

  • 3. Failed Joins / Invalidation
    Manik Surtani Master

    @Shane you are correct that the invalidation should not block.  I've created a JIRA for this; it will be addressed in 4.2.1.

     

    See ISPN-914

  • 4. Failed Joins / Invalidation
    Shane Johnson Novice

    Thanks Manik,

     

    That will be helpful. Do you have any thoughts on the join process as a whole? We actually have two issues with it: invalidation and state transfer. Either one can time out and fail. This fix will certainly eliminate the timeout problem with invalidation. However, both issues stem from updating the topology before the rehash completes. If either task fails, the node enters a FAILED state, yet the join still completes 'successfully' and the node remains in the topology.

     

    Perhaps the simplest thing to do would be to remove the node from the topology if either of these two tasks fails?

  • 5. Failed Joins / Invalidation
    Erik Salter Newbie

    In addition to this -- what should the application layer do if the node enters a FAILED state?  I didn't see any listeners that allow the cache (or cache manager) to restart.