8 Replies Latest reply: Aug 20, 2012 9:01 AM by Galder Zamarreño

more than one hotrod server

jessica jia Newbie

With Beta2, the Hot Rod client and server are working for me. I can use a client to put data into a server and get it back. When I start another client, it can also get the data the previous client put in. Then I started another server on another network node, passing in a configuration file with that node's IP address, and started another client against it, hoping the servers would share the data, so that this new client of the new server could get the data the previous client put into the previous server. But this did not work. The servers do not seem to know of each other's existence. When I start each server, I pass in the same configuration file I use to cluster plain Infinispan caches across different JVMs and network nodes. Is this the right way to do it, or do I need to do something else?

 

I would really appreciate any help,

 

Jessica

  • 1. Re: more than one hotrod server
    Galder Zamarreño Master

    Can you show the configuration that you're using?

     

    Did you check the logs to see whether the two Hot Rod servers are forming a cluster?

     

    In order for the two Hot Rod servers to cluster, both have to be started with a configuration in which the transport is correctly set up for clustering. See the GUI demo in the Infinispan distribution for an example configuration.
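    As a rough illustration, a minimal clustered configuration might look like the sketch below. This assumes the Infinispan 4.x XML schema; the clusterName value and the jgroups-tcp.xml file name are placeholders, so check the GUI demo's files for the exact form:

    ```xml
    <infinispan>
       <global>
          <!-- Both servers must use the same clusterName to form one cluster -->
          <transport clusterName="demoCluster">
             <properties>
                <!-- Points at the JGroups protocol stack definition -->
                <property name="configurationFile" value="jgroups-tcp.xml"/>
             </properties>
          </transport>
       </global>
       <default>
          <!-- Replicate cache entries synchronously to all members -->
          <clustering mode="replication">
             <sync/>
          </clustering>
       </default>
    </infinispan>
    ```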

  • 2. Re: more than one hotrod server
    jessica jia Newbie

    Galder,

     

    The servers do not seem to be connected. I am using the same config file, just with a different IP address on each node. When I use the same config file for a plain Infinispan cache (not a Hot Rod server), it works fine. Attached are the config files and log files; let me know if you need more information.

     

    The following is an excerpt from the log file:

     

    1) The server seems to be up, and it seems to have picked up the right IP address and port:

     

    2010-06-03 10:08:19,453 INFO  [Main$] (main) Start main with args: -r, hotrod, -c, /home/jia/infinispan-4.1.0.BETA2/etc/config-samples/proto-r-t.xml
    2010-06-03 10:08:21,038 INFO  [JGroupsTransport] (InfinispanServer-Main) Starting JGroups Channel
    2010-06-03 10:08:21,181 INFO  [JChannel] (InfinispanServer-Main) JGroups version: 2.10.0.Beta2

    -------------------------------------------------------------------
    GMS: address=ccserv3-40702, cluster=demoCluster, physical address=170.137.230.104:7800
    -------------------------------------------------------------------
    2010-06-03 10:08:25,294 INFO  [JGroupsTransport] (InfinispanServer-Main) Received new cluster view: [ccserv3-40702|0] [ccserv3-40702]
    2010-06-03 10:08:25,306 INFO  [JGroupsTransport] (InfinispanServer-Main) Cache local address is ccserv3-40702, physical addresses are [170.137.230.104:7800]
    2010-06-03 10:08:25,307 INFO  [GlobalComponentRegistry] (InfinispanServer-Main) Infinispan version: Infinispan 'Radegast' 4.1.0.BETA2
    2010-06-03 10:08:25,335 INFO  [ComponentRegistry] (InfinispanServer-Main) Infinispan version: Infinispan 'Radegast' 4.1.0.BETA2
    2010-06-03 10:08:25,495 INFO  [ComponentRegistry] (InfinispanServer-Main) Infinispan version: Infinispan 'Radegast' 4.1.0.BETA2

     

    2) I do not really understand the following part: why does the server seem to be pinging 127.0.0.1:7800, 7801, and 7802? Obviously that cannot go anywhere. It is basically stuck here; it keeps doing this again and again. So I wonder whether some configuration change can fix this?

     

    2010-06-03 10:08:21,626 2173  DEBUG [org.jgroups.protocols.FRAG2] (InfinispanServer-Main:) received CONFIG event: {bind_addr=/170.137.230.104}
    2010-06-03 10:08:22,238 2785  TRACE [org.jgroups.blocks.MessageDispatcher$ProtocolAdapter] (InfinispanServer-Main:) setting local_addr (null) to ccserv3-40702
    2010-06-03 10:08:22,238 2785  DEBUG [org.jgroups.protocols.FRAG2] (InfinispanServer-Main:) received CONFIG event: {flush_supported=true}
    2010-06-03 10:08:22,240 2787  TRACE [org.jgroups.protocols.pbcast.STABLE] (InfinispanServer-Main:) stable task started
    2010-06-03 10:08:22,251 2798  TRACE [org.jgroups.protocols.TCP] (InfinispanServer-Main:) joined /224.0.75.75:7500 on e1000g1
    2010-06-03 10:08:22,252 2799  TRACE [org.jgroups.protocols.TCP] (InfinispanServer-Main:) joined /224.0.75.75:7500 on e1000g0
    2010-06-03 10:08:22,252 2799  TRACE [org.jgroups.protocols.TCP] (InfinispanServer-Main:) joined /224.0.75.75:7500 on lo0
    2010-06-03 10:08:22,267 2814  TRACE [org.jgroups.protocols.FD_SOCK] (FD_SOCK server socket acceptor,demoCluster,ccserv3-40702:) waiting for client connections on /170.137.230.104:48472
    2010-06-03 10:08:22,272 2819  TRACE [org.jgroups.protocols.TCPPING] (Timer-2,demoCluster,ccserv3-40702:) [FIND_INITIAL_MBRS] sending PING request to 127.0.0.1:7801
    2010-06-03 10:08:22,273 2820  TRACE [org.jgroups.protocols.TCPPING] (Timer-2,demoCluster,ccserv3-40702:) [FIND_INITIAL_MBRS] sending PING request to 127.0.0.1:7802
    2010-06-03 10:08:22,273 2820  TRACE [org.jgroups.protocols.TCPPING] (Timer-2,demoCluster,ccserv3-40702:) [FIND_INITIAL_MBRS] sending PING request to 127.0.0.1:7800
    2010-06-03 10:08:22,274 2821  TRACE [org.jgroups.protocols.TCP] (Timer-3,demoCluster,ccserv3-40702:) sending msg to 127.0.0.1:7802, src=ccserv3-40702, headers are TCPPING: [PING: type=GET_MBRS_REQ, cluster=demoCluster, arg=own_addr=ccserv3-40702, view id=null, is_server=false, is_coord=false, logical_name=ccserv3-40702, physical_addrs=170.137.230.104:7800], TCP: [channel_name=demoCluster]
    2010-06-03 10:08:22,274 2821  TRACE [org.jgroups.protocols.TCP] (Timer-1,demoCluster,ccserv3-40702:) sending msg to 127.0.0.1:7801, src=ccserv3-40702, headers are TCPPING: [PING: type=GET_MBRS_REQ, cluster=demoCluster, arg=own_addr=ccserv3-40702, view id=null, is_server=false, is_coord=false, logical_name=ccserv3-40702, physical_addrs=170.137.230.104:7800], TCP: [channel_name=demoCluster]
    2010-06-03 10:08:22,274 2821  TRACE [org.jgroups.protocols.TCP] (Timer-4,demoCluster,ccserv3-40702:) sending msg to 127.0.0.1:7800, src=ccserv3-40702, headers are TCPPING: [PING: type=GET_MBRS_REQ, cluster=demoCluster, arg=own_addr=ccserv3-40702, view id=null, is_server=false, is_coord=false, logical_name=ccserv3-40702, physical_addrs=170.137.230.104:7800], TCP: [channel_name=demoCluster]
    2010-06-03 10:08:22,277 2824  TRACE [org.jgroups.protocols.TCP] (Timer-3,demoCluster,ccserv3-40702:) dest=127.0.0.1:7802 (110 bytes)
    2010-06-03 10:08:22,277 2824  TRACE [org.jgroups.protocols.TCP] (Timer-1,demoCluster,ccserv3-40702:) dest=127.0.0.1:7801 (110 bytes)
    2010-06-03 10:08:22,277 2824  TRACE [org.jgroups.protocols.TCP] (Timer-4,demoCluster,ccserv3-40702:) dest=127.0.0.1:7800 (110 bytes)
    2010-06-03 10:08:22,307 2854  TRACE [org.jgroups.blocks.TCPConnectionMap$Mapper] (Timer-3,demoCluster,ccserv3-40702:) failed creating connection to 127.0.0.1:7802
    2010-06-03 10:08:22,308 2855  TRACE [org.jgroups.blocks.TCPConnectionMap$Mapper] (Timer-1,demoCluster,ccserv3-40702:) failed creating connection to 127.0.0.1:7801
    2010-06-03 10:08:22,310 2857  TRACE [org.jgroups.blocks.TCPConnectionMap$Mapper] (Timer-4,demoCluster,ccserv3-40702:) failed creating connection to 127.0.0.1:7800
    2010-06-03 10:08:23,774 4321  TRACE [org.jgroups.protocols.TCPPING] (Timer-3,demoCluster,ccserv3-40702:) [FIND_INITIAL_MBRS] sending PING request to 127.0

     

    Thank you very much,

     

    Jessica

  • 3. Re: more than one hotrod server
    Galder Zamarreño Master

    The logs do not match what you have in the flush-tcp.xml configuration, so that file is not in use. The logs show TCPPING in action.

     

    Even though you modified flush-tcp.xml, the copy of that file inside jgroups.jar might be the one being picked up. So you can either rename your file and change the Infinispan configuration to point to it, or, if you want to carry on using TCPPING, just pass -Djgroups.tcpping.initial_hosts=host1[7800],host2[7800] as a system property to override the XML configuration.

     

    See http://community.jboss.org/docs/DOC-12352 for more info on JGroups system properties and http://community.jboss.org/docs/DOC-10915 for more info on how TCPPING works.
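    To sketch how TCPPING fits into the picture, here is roughly what the discovery section of a JGroups 2.x TCP stack looks like (the values shown are placeholders): initial_hosts lists the addresses pinged at startup, and the ${...} syntax is what lets the -Djgroups.tcpping.initial_hosts system property override the default:

    ```xml
    <config>
       <TCP bind_port="7800"/>
       <!-- Defaults to loopback unless the jgroups.tcpping.initial_hosts
            system property overrides it -->
       <TCPPING timeout="3000"
                initial_hosts="${jgroups.tcpping.initial_hosts:localhost[7800],localhost[7801]}"
                port_range="1"
                num_initial_members="2"/>
       <!-- remaining protocols (MERGE2, FD_SOCK, pbcast.NAKACK, pbcast.GMS, ...) omitted -->
    </config>
    ```

    A loopback default like this would also explain the PING requests to 127.0.0.1 in the logs above: the stack actually in use still carries its stock initial_hosts value.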

     

    Btw, in the file you have, the multicast port is far too big (ports must be below 65536): ${jgroups.udp.mcast_port:4666666630}

  • 4. Re: more than one hotrod server
    fealves78 Newbie

    Galder,

     

    I believe I am having the same problem as Jessica... I am trying to make two Infinispan/Hot Rod servers connect and sync with each other using TCP/IP only (UDP and multicast are not available on my network). Could you please review my configuration files and let me know what I am doing wrong? I am using Infinispan 5.1.0.0Alpha2, and my configuration files are as follows:

     

    - 1st Server (10.81.0.52): config.xml and tcp.xml

    - 2nd Server (10.81.0.54): config2.xml and tcp2.xml

     

    This is how I am starting the servers:

     

    - 1st Server: ./startServer.sh -r hotrod -c config.xml

    - 2nd Server: ./startServer.sh -r hotrod -c config2.xml

     

    I also tried:

     

    - 1st Server:  ./startServer.sh -r hotrod -c config.xml -Djgroups.tcpiping.initial_hosts=10.81.0.52[7900],10.81.0.54[7900] --host=10.81.0.52 --port=7900

    - 2nd Server:  ./startServer.sh -r hotrod -c config2.xml -Djgroups.tcpiping.initial_hosts=10.81.0.52[7900],10.81.0.54[7900] --host=10.81.0.54 --port=7900

     

     

    Thank you in advance for your help.

     

    Francisco.

  • 5. Re: more than one hotrod server
    Galder Zamarreño Master

    Looks OK. Most likely there's a firewall or routing issue of some sort between the two machines. Make sure the machines' firewalls are disabled, and that any router firewall that could block communication between them is disabled too. If that doesn't work, I'd suggest you check whether the cluster forms by following the instructions in http://www.jgroups.org/manual/html/ch02.html. Those instructions are generally aimed at UDP setups, but they can be tailored for TCP.

     

    Btw, don't use --host and --port for the JGroups IP/port combination, because those parameters configure the server endpoint on which remote cache requests are received.

  • 6. Re: more than one hotrod server
    frank luo Newbie

    Hi fealves78,

    I have successfully set up a cluster your way, thank you. But what does the client's configuration file look like? Now I want to access the cluster from a third node (a client node), but I don't know how to write the client's configuration file. Is it the same as the configuration file on a server node? The configuration files on the two server nodes are not the same.

  • 7. Re: more than one hotrod server
    frank luo Newbie

    Hi Galder,

    I have set up a cluster using the method fealves78 used. But now I don't know how to write a correct configuration file for a client to access the cluster. I thought the configuration file for the client node should be the same as the one for the server nodes, but the configuration files on the two server nodes are not the same. What does the client's configuration file look like? Could you help me?

  • 8. Re: more than one hotrod server
    Galder Zamarreño Master

    If you start a Hot Rod server, then use a Hot Rod client to talk to it; the client does not use the server's configuration file: https://docs.jboss.org/author/x/NgY5
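    To sketch what the client side could look like for the setup discussed above: a Hot Rod client only needs the servers' Hot Rod endpoints. Below is a minimal hotrod-client.properties sketch, assuming the classic infinispan.client.hotrod.server_list property and the default Hot Rod port 11222 (adjust the port if the servers were started with --port; the IPs are taken from fealves78's setup):

    ```properties
    # Semicolon-separated list of Hot Rod server endpoints (host:port)
    infinispan.client.hotrod.server_list=10.81.0.52:11222;10.81.0.54:11222
    ```

    If I recall the old client API correctly, RemoteCacheManager looks for hotrod-client.properties on the classpath by default; the same properties can also be passed to it programmatically.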