
    This article outlines the design of remote event handling in the Hot Rod protocol: https://issues.jboss.org/browse/ISPN-374

     

    • The idea is that Hot Rod servers should be able to notify remote clients of events such as cache entry created, cache entry modified, etc.
    • Clients would opt in to listen to these events to avoid swamping all connected clients.
    • It assumes that clients are able to maintain persistent connections to the servers
    Minimum set of requirements
    • Hot Rod protocol
      • A new version of the Hot Rod protocol is established: version 20
      • Hot Rod protocol to be enhanced with two new operations: add listener and remove listener
      • Ping to be enhanced to add a source byte[] payload that identifies the client source. This way, multiple clients belonging to the same source can act as a single entity:
        • Only one of the clients in the single source entity will receive a notification (in case the client does pooling) - by default, this will be the last connected client of that source
        • With this source id, the server can figure out whether the event was generated locally by that source or not, and so avoid sending notifications back to the source.
        • NOTE: Eventually, when we add authentication, the source id will come via that command rather than ping.
      • These listeners will be able to listen for the following remote events:
        • cache entry created
        • cache entry modified
        • cache entry removed
      • Add listener operation takes (see the wire-format sketch after this list):
        • request operation byte: 0x1B
        • response operation byte: 0x1C
        • listener id (vint)
        • event interest (vint where each interested event is represented by a bit)
        • event granularity:
          • 0x00 - listen for interested events on a particular key on a cache (key follows as byte[] and cache name comes in header)
          • 0x01 - listen for interested events on all keys in a particular cache (again, cache name comes in header)
        • if the same listener id is added twice for the same source id, the listener is overridden, taking on the new event interest (and key if relevant)
      • Remove listener operation takes:
        • request operation byte: 0x1D
        • response operation byte: 0x1E
        • listener id (vint)
      • When fired, a remote event response would look something like this:
        • [Response header with op code 0x51], listener id, event (byte: 0x00 entry created, 0x01 entry modified, 0x02 entry removed), key (byte[])
    • Hot Rod Java client
      • Add addListener(Object) and removeListener(Object) to RemoteCache (see the listener sketch after this list)
      • Define @RemoteListener, @RemoteCacheEntryCreated, @RemoteCacheEntryModified, @RemoteCacheEntryRemoved
      • To identify listeners, client needs to be able to generate unique listener ids
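
    As an illustration of the protocol operations above, here is a minimal wire-format sketch, assuming the standard Hot Rod request/response headers are handled elsewhere and that vints use the usual 7-bits-per-byte encoding. The bit assignments for the event interest mask and the length-prefixing of the key are assumptions, not part of the design.

        import java.io.ByteArrayOutputStream;
        import java.io.DataInputStream;
        import java.io.IOException;

        // Sketch only: encodes the operation-specific payload of an "add listener"
        // request (op code 0x1B) and decodes the body of a remote event response
        // (op code 0x51), as outlined in the list above.
        public class RemoteEventWireSketch {

           // Event interest bits (assumed assignment), one per event type.
           static final int CREATED  = 1 << 0; // cache entry created
           static final int MODIFIED = 1 << 1; // cache entry modified
           static final int REMOVED  = 1 << 2; // cache entry removed

           // Unsigned variable-length int: 7 bits per byte, high bit set on all
           // bytes except the last.
           static void writeVInt(ByteArrayOutputStream out, int value) {
              while ((value & ~0x7F) != 0) {
                 out.write((value & 0x7F) | 0x80);
                 value >>>= 7;
              }
              out.write(value);
           }

           static int readVInt(DataInputStream in) throws IOException {
              int result = 0, shift = 0, b;
              do {
                 b = in.readUnsignedByte();
                 result |= (b & 0x7F) << shift;
                 shift += 7;
              } while ((b & 0x80) != 0);
              return result;
           }

           // Payload of an add listener request with key granularity (0x00):
           // listener id, event interest mask, granularity byte, then the key.
           static byte[] addListenerPayload(int listenerId, int eventInterest, byte[] key)
                 throws IOException {
              ByteArrayOutputStream out = new ByteArrayOutputStream();
              writeVInt(out, listenerId);     // listener id (vint)
              writeVInt(out, eventInterest);  // event interest (vint bit mask)
              out.write(0x00);                // granularity: single key in this cache
              writeVInt(out, key.length);     // key as length-prefixed byte[]
              out.write(key);
              return out.toByteArray();
           }

           // Body of a remote event response, after the response header with op
           // code 0x51: listener id, event type byte, key.
           static void readRemoteEvent(DataInputStream in) throws IOException {
              int listenerId = readVInt(in);
              int eventType = in.readUnsignedByte(); // 0x00 created, 0x01 modified, 0x02 removed
              byte[] key = new byte[readVInt(in)];
              in.readFully(key);
              System.out.printf("listener %d, event 0x%02x, key of %d bytes%n",
                    listenerId, eventType, key.length);
           }
        }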

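    On the Java client side, a listener registered through addListener could look roughly like the sketch below. Only the annotation names and addListener/removeListener on RemoteCache come from this design; the RemoteCacheEntryEvent type and its getKey() accessor are assumptions made for illustration.

        // Minimal sketch of a client-side remote listener using the proposed
        // annotations; the event parameter type is hypothetical.
        @RemoteListener
        public class LoggingListener {

           @RemoteCacheEntryCreated
           public void created(RemoteCacheEntryEvent event) {
              System.out.println("Created: " + event.getKey());
           }

           @RemoteCacheEntryModified
           public void modified(RemoteCacheEntryEvent event) {
              System.out.println("Modified: " + event.getKey());
           }

           @RemoteCacheEntryRemoved
           public void removed(RemoteCacheEntryEvent event) {
              System.out.println("Removed: " + event.getKey());
           }
        }

        // Registration, assuming a RemoteCache instance named remoteCache:
        //    LoggingListener listener = new LoggingListener();
        //    remoteCache.addListener(listener);
        //    ...
        //    remoteCache.removeListener(listener);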
     

    Optional requirements
    1. With the near cache use case in mind and the design above, if a node updates a value in its near cache and has signed up for cache modified events, as it stands it would also receive a notification from the server. However, this node does not know that the event it received is due to its own modification, and near caches might want to deal with this differently (i.e. if I just updated a value, I know the others will be invalidated, but I don't want to invalidate myself because I know I have the latest data). Using something like a channel's id to identify whether a notification is local or not is not enough, because a single Hot Rod client (i.e. a remote cache store client) might open 20 channels. So, it might be useful for the source of the modifications (put, replace, remove) to be identified at a logical level. This way, the server could identify when a modification comes from a particular logical entity and could indicate that the event "is local", which the client could use to act differently (see the sketch after this list). This can be taken even further to optimise remote events, by only sending one event for all channels belonging to the same logical entity.
    2. Be able to sign up for interested events on all keys in all caches in a cache manager (granularity: 0x02)
    3. At the protocol level, put/replace operations could be combined, or extended, to enable adding remote listeners at the same time as the put/replace is called, as opposed to calling add listener and then calling put/replace (or vice versa)
    4. Should clients acknowledge receipt of events? Do we want to add event retransmission?
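
    To sketch how optional requirement 1 could surface in the client API, a near cache listener might skip invalidation when the server flags an event as coming from its own logical source. The isLocal() accessor and the RemoteCacheEntryEvent type are assumptions, not part of the design above.

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;

        // Hypothetical near cache invalidator that relies on an "is local" flag
        // carried by the event (optional requirement 1).
        @RemoteListener
        public class NearCacheInvalidator {

           private final ConcurrentMap<Object, Object> nearCache = new ConcurrentHashMap<>();

           @RemoteCacheEntryModified
           public void modified(RemoteCacheEntryEvent event) {
              if (event.isLocal()) {
                 // This client performed the modification itself, so its near cache
                 // already holds the latest value: no need to invalidate.
                 return;
              }
              nearCache.remove(event.getKey());
           }

           @RemoteCacheEntryRemoved
           public void removed(RemoteCacheEntryEvent event) {
              nearCache.remove(event.getKey());
           }
        }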

     

    Update (6/11/2012)

     

    - The timing of the events, as highlighted by Vincent, has to be taken into account. If sent too early, it could lead to retrieving stale data (from a node to which data has not yet been replicated), but if sent too late, there is the potential of the operation committing on the server but not on the client. The former is more dangerous, so it's probably better to send notifications after the event. Now, to deal with potential stale data, servers could keep track of which data has been retrieved by which client (something similar is already available in the distributed mode L1 implementation), and if the notification fails to reach the client, they could retry.

     

    - JMS, although it provides a quick fix for remote events, has a few issues, such as being limited to Java clients only, and it would not work on its own. For example, how would a client know that some notification originated locally? Clients would still need to somehow send a client ID of some sort to identify the source of the operation.