17 Replies. Latest reply on Mar 6, 2006 10:05 AM by brian.stansberry

    MBean Clustering, JBoss 4.0.3

    hannes.koller

      Hi,
      I have searched the forums, but have found no recent information on this. This question has been asked before (in 2003):


      I have a set of MBeans that provide services for my application. I would like to deploy these MBeans on multiple servers (for load-balancing and failover). I do not want to set them up as singletons, but I do want one and only one MBean to be utilized per request. I also would like to have the invocations of the MBeans load-balanced (any flavor is fine) and fault-tolerant (so that if a server goes down, the invocation of the service is routed to another server).


      I am having the exact same problem and I am wondering if a nice built-in solution for this has been developed in the meantime. If someone could point me in the right direction (articles, example code, anything) I would be most grateful. TIA. :)




        • 1. Re: MBean Clustering, JBoss 4.0.3
          starksm64

          Nothing built in. You can use the same HA proxy framework to create your own HA JMX invoker.

          http://docs.jboss.org/jbossas/jboss4guide/r4/html/ch2.chapter.html#ch2.remoteaccess.sect

          • 2. Re: MBean Clustering, JBoss 4.0.3
            hannes.koller

            Thanks for your reply. In the meantime I have tried to implement a clustered MBean with round-robin scheduling (I found a message where the poster claims to have done this, at http://www.jboss.org/index.html?module=bb&op=viewtopic&p=3834455#3834455).

            I have created an MBean (WorkflowControllerService) which extends the HAServiceMBeanSupport class and exposes a test() method. The jboss-service.xml of the MBean looks as follows:

            <?xml version="1.0" encoding="UTF-8"?>
            
            <server>
            
             <!-- Create JRMPHA proxy for our service -->
             <mbean code="org.jboss.proxy.generic.ProxyFactoryHA" name="jboss.test:service=ProxyFactory,name=HAService,protocol=jrmpha">
            
             <!-- Use the default partition -->
             <depends optional-attribute-name="PartitionObjectName">jboss:service=DefaultPartition</depends>
            
             <!-- Use the standard JRMPInvoker from conf/jboss-service.xml -->
             <depends optional-attribute-name="InvokerName">jboss:service=invoker,type=jrmpha</depends>
            
             <!-- The load balancing policy -->
             <attribute name="LoadBalancePolicy">org.jboss.ha.framework.interfaces.RoundRobin</attribute>
            
             <!-- The target MBean -->
             <depends optional-attribute-name="TargetName">arsenal.at:service=WorkflowController</depends>
            
             <!-- Where to bind the proxy -->
             <attribute name="JndiName">MUH</attribute>
            
             <!-- The interface exposed to the client -->
             <attribute name="ExportedInterface">at.arsenal.spirit.services.workflow.WorkflowControllerServiceMBean</attribute>
            
             <!-- Client side behaviour -->
             <attribute name="ClientInterceptors">
             <interceptors>
             <interceptor>org.jboss.proxy.ClientMethodInterceptor</interceptor>
             <interceptor>org.jboss.invocation.InvokerInterceptor</interceptor>
             </interceptors>
             </attribute>
             </mbean>
            
            
             <mbean code="at.arsenal.spirit.services.workflow.WorkflowControllerService" name="arsenal.at:service=WorkflowController">
            
             <depends>
             jboss:service=HAJNDI
             </depends>
             </mbean>
            </server>
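
            The MBean class itself is trivial so far; a minimal sketch of what I have (relying on the HAServiceMBeanSupport base class for everything else):

             public class WorkflowControllerService extends HAServiceMBeanSupport
              implements WorkflowControllerServiceMBean {

              // the method I want to expose through the HA proxy
              public void test() {
               log.info("test() called on " + getServiceName());
              }
             }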
            


            The service can be deployed just fine, and when I do a lookup I get a valid JRMPInvokerProxyHA proxy. But if I try to invoke the test() method on the proxy, I get an exception. Example code and exception follow:

            WorkflowControllerServiceMBean controller = (WorkflowControllerServiceMBean)ctx.lookup("MUH"); //lookup proxy
            log.debug("Calling Test Method on Controller");
            controller.test(); //exception here
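
            (For completeness, ctx here is an InitialContext pointing at HA-JNDI; something like the following, where the host is just my local setup and 1100 is the default HA-JNDI port:)

             import java.util.Properties;
             import javax.naming.Context;
             import javax.naming.InitialContext;

             Properties env = new Properties();
             env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
             env.put(Context.URL_PKG_PREFIXES, "org.jboss.naming:org.jnp.interfaces");
             env.put(Context.PROVIDER_URL, "192.168.0.10:1100"); // HA-JNDI port
             Context ctx = new InitialContext(env);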
            


            The exception which is thrown is:

            
            08:48:01,000 INFO [STDOUT] java.lang.IllegalArgumentException: null object name
            08:48:01,000 INFO [STDOUT] at org.jboss.mx.server.registry.BasicMBeanRegistry.get(BasicMBeanRegistry.java:494)
            08:48:01,001 INFO [STDOUT] at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:638)
            08:48:01,002 INFO [STDOUT] at org.jboss.invocation.jrmp.server.JRMPInvokerHA.invoke(JRMPInvokerHA.java:163)
            08:48:01,002 INFO [STDOUT] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            08:48:01,003 INFO [STDOUT] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            08:48:01,003 INFO [STDOUT] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            08:48:01,003 INFO [STDOUT] at java.lang.reflect.Method.invoke(Method.java:585)
            08:48:01,004 INFO [STDOUT] at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:294)
            08:48:01,004 INFO [STDOUT] at sun.rmi.transport.Transport$1.run(Transport.java:153)
            08:48:01,005 INFO [STDOUT] at java.security.AccessController.doPrivileged(Native Method)
            08:48:01,005 INFO [STDOUT] at sun.rmi.transport.Transport.serviceCall(Transport.java:149)
            08:48:01,006 INFO [STDOUT] at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:460)
            08:48:01,006 INFO [STDOUT] at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:701)
            08:48:01,007 INFO [STDOUT] at java.lang.Thread.run(Thread.java:595)
            08:48:01,007 INFO [STDOUT] at sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:247)
            08:48:01,008 INFO [STDOUT] at sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:223)
            08:48:01,008 INFO [STDOUT] at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:126)
            08:48:01,008 INFO [STDOUT] at org.jboss.invocation.jrmp.server.JRMPInvoker_Stub.invoke(Unknown Source)
            08:48:01,009 INFO [STDOUT] at org.jboss.invocation.jrmp.interfaces.JRMPInvokerProxyHA.invoke(JRMPInvokerProxyHA.java:172)
            08:48:01,009 INFO [STDOUT] at org.jboss.invocation.InvokerInterceptor.invokeInvoker(InvokerInterceptor.java:227)
            08:48:01,010 INFO [STDOUT] at org.jboss.invocation.InvokerInterceptor.invoke(InvokerInterceptor.java:167)
            08:48:01,010 INFO [STDOUT] at org.jboss.proxy.ClientMethodInterceptor.invoke(ClientMethodInterceptor.java:59)
            08:48:01,011 INFO [STDOUT] at org.jboss.proxy.ClientContainer.invoke(ClientContainer.java:86)
            08:48:01,013 INFO [STDOUT] at $Proxy63.test(Unknown Source)
            08:48:01,014 INFO [STDOUT] at at.arsenal.spirit.request.RequestListenerServlet.service(RequestListenerServlet.java:80)
            08:48:01,014 INFO [STDOUT] at javax.servlet.http.HttpServlet.service(HttpServlet.java:810)
            08:48:01,014 INFO [STDOUT] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
            08:48:01,015 INFO [STDOUT] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
            08:48:01,015 INFO [STDOUT] at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:81)
            08:48:01,016 INFO [STDOUT] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202)
            08:48:01,016 INFO [STDOUT] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
            08:48:01,017 INFO [STDOUT] at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
            08:48:01,017 INFO [STDOUT] at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:178)
            08:48:01,018 INFO [STDOUT] at org.jboss.web.tomcat.security.CustomPrincipalValve.invoke(CustomPrincipalValve.java:39)
            08:48:01,018 INFO [STDOUT] at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:159)
            08:48:01,019 INFO [STDOUT] at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:59)
            08:48:01,019 INFO [STDOUT] at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:126)
            08:48:01,019 INFO [STDOUT] at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:105)
            08:48:01,020 INFO [STDOUT] at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:107)
            08:48:01,020 INFO [STDOUT] at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:148)
            08:48:01,021 INFO [STDOUT] at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:856)
            08:48:01,021 INFO [STDOUT] at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.processConnection(Http11Protocol.java:744)
            08:48:01,022 INFO [STDOUT] at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:527)
            08:48:01,022 INFO [STDOUT] at org.apache.tomcat.util.net.MasterSlaveWorkerThread.run(MasterSlaveWorkerThread.java:112)
            08:48:01,023 INFO [STDOUT] at java.lang.Thread.run(Thread.java:595)
            
            


            I traced the exception and the root cause seems to be the following: an Invocation object is created, which contains (among other things) the invoked method name, the invocation context and an objectName Integer field, which seems to contain some kind of hash value. In the invoke() method of the JRMPInvokerHA class, a lookup is done to get the ObjectName associated with this hash value:

            Integer beanNameHash = (Integer) invocation.getObjectName();
            ObjectName mbean = (ObjectName) Registry.lookup(beanNameHash);
            

            (http://www.cenqua.com/clover/eg/jboss/report/org/jboss/invocation/jrmp/server/JRMPInvokerHA.html, line 194)


            This lookup returns null in my tests, and thus causes the subsequent exception. I checked the hash value in the objectName field and tried to find it manually in the org.jboss.system.Registry.entries map, but it is not there.

            So my theory is that the actual MBean that should be called by the proxy is not registered. As I am completely new to JBoss and clustering, I would be most thankful if someone could tell me what I am doing wrong here, or maybe point me to the source code of a complete example which uses the HAServiceMBeanSupport class.

            Thanks for your time. :-)

            • 3. Re: MBean Clustering, JBoss 4.0.3
              hannes.koller

              Ok, never mind, I figured it out :-)

              If anybody is interested: the MBean needs to register itself with the Registry during startup. It also needs to expose an invoke() method, which injects the method resolution into the remote invocation... err, sounds confusing. Anyway, the classes at

              http://cvs.sourceforge.net/viewcvs.py/jboss/jbosstest/src/main/org/jboss/test/jmx/ha/

              gave me an impression of how it has to be done. Thanks for your time anyway :-)
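
              In a nutshell, the pattern looks like this (a minimal sketch of the two essential pieces; marshalledInvocationMapping is the hash-to-Method map that startService() also builds, omitted here for brevity):

               public void startService() throws Exception {
                super.startService();
                // bind the service name under its hash so that JRMPInvokerHA's
                // Registry.lookup(beanNameHash) can resolve the target MBean
                Registry.bind(new Integer(serviceName.hashCode()), serviceName);
               }

               public Object invoke(Invocation invocation) throws Exception {
                // remote calls arrive as MarshalledInvocations carrying only a method
                // hash; inject the method map so invocation.getMethod() can resolve it
                if (invocation instanceof MarshalledInvocation) {
                 ((MarshalledInvocation) invocation).setMethodMap(marshalledInvocationMapping);
                }
                return invocation.getMethod().invoke(this, invocation.getArguments());
               }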

              • 4. Re: MBean Clustering, JBoss 4.0.3
                hannes.koller

                Sorry if I am being annoying... one more thing I stumbled upon:

                Clustering the MBean and deploying it via the JBoss farming service works very well now. I tried the RoundRobin and RandomRobin load-balancing policies, and they work the way one would expect (impressively easy once you get it going, I love it :-) ). The next thing I tried was fooling around with the failover behavior of the clustered MBeans, and I noticed some strange behavior (I am not blaming JBoss but rather my own cluelessness, but I really don't know what I am doing wrong here)...

                I have a cluster of three nodes. Two of them (Node A and Node B) run the clustered MBean service I have been developing. They are deployed via the farming service, and each of them creates a ProxyFactoryHA. The third node (Node C) looks up the proxy via HA-JNDI and performs a number of invocations on the MBean. These invocations are evenly distributed between the two MBeans, as one would expect.

                Now my problem is the failover: as soon as one of the MBean nodes (say Node A) "fails" (by shutting down the server via ctrl+c or the shutdown script), I would expect Node B to receive all the invocations from Node C. But instead, an exception is thrown on Node C:

                java.lang.reflect.UndeclaredThrowableException
                at $Proxy62.test(Unknown Source)
                at at.arsenal.spirit.request.RequestListenerServlet.service(RequestListenerServlet.java:83)
                at javax.servlet.http.HttpServlet.service(HttpServlet.java:810)
                at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
                at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
                at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:81)
                at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202)
                at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
                at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
                at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:178)
                at org.jboss.web.tomcat.security.CustomPrincipalValve.invoke(CustomPrincipalValve.java:39)
                at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:159)
                at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:59)
                at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:126)
                at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:105)
                at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:107)
                at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:148)
                at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:856)
                at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.processConnection(Http11Protocol.java:744)
                at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:527)
                at org.apache.tomcat.util.net.MasterSlaveWorkerThread.run(MasterSlaveWorkerThread.java:112)
                at java.lang.Thread.run(Thread.java:595)
                
                
                Caused by: org.jboss.invocation.ServiceUnavailableException: Service unavailable.
                at org.jboss.invocation.jrmp.interfaces.JRMPInvokerProxyHA.invoke(JRMPInvokerProxyHA.java:292)
                at org.jboss.invocation.InvokerInterceptor.invokeInvoker(InvokerInterceptor.java:227)
                at org.jboss.invocation.InvokerInterceptor.invoke(InvokerInterceptor.java:167)
                at org.jboss.proxy.ClientMethodInterceptor.invoke(ClientMethodInterceptor.java:59)
                at org.jboss.proxy.ClientContainer.invoke(ClientContainer.java:86)
                ... 22 more
                


                Still, the cluster detects the failed node correctly and updates its state; therefore I think the problem somehow has to do with the ProxyFactoryHA.

                I fooled around a little and discovered that if I shut down the ProxyFactoryHA MBean of a node by hand (by calling the stop() method of the MBean via the JMX console), then all works as expected. The other node gets all the invocations, no exceptions. If I start the ProxyFactoryHA again (via the JMX console), the invocations are distributed again.

                But as soon as I shut down the node via ctrl+c, the invocations fail. When I restart the shut-down node, the invocations succeed again, but now all calls are answered by the node that was restarted. The node that was running all along is now ignored.

                Now this is just a guess, but I believe part of the problem is that the JBoss server should call the stop() method of the ProxyFactoryHA when it is shut down normally...
                but this would not solve the problem of a node failing (the failing node would not call the stop() method of the ProxyFactoryHA, because it failed). So my guess is that something in my configuration (posted a few messages above) is wrong... or maybe the ProxyFactoryHA is not supposed to be able to handle the situation of a node failing?

                Can anybody help me with this? Thanks a lot. :-)



                • 5. Re: MBean Clustering, JBoss 4.0.3
                  brian.stansberry

                  You're getting the ServiceUnavailableException (bottom of your stack trace) when you've killed just one node. The client proxy should have targets for Nodes A and B; you shouldn't get ServiceUnavailable unless calls to both fail. This leads me to suspect the proxy only had a target for Node A.

                  Recommend that on the client you enable TRACE level logging for categories org.jboss.invocation and org.jboss.ha. This should give you insight into what's going on inside the client proxy.
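
                  For example, something like this in the client's conf/log4j.xml (a sketch; in the JBoss 4 log4j setup the TRACE priority needs the org.jboss.logging.XLevel class, if I remember the syntax right):

                   <category name="org.jboss.invocation">
                    <priority value="TRACE" class="org.jboss.logging.XLevel"/>
                   </category>
                   <category name="org.jboss.ha">
                    <priority value="TRACE" class="org.jboss.logging.XLevel"/>
                   </category>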

                  When you kill a node via ctrl-c (or the shutdown script), stop() should be called, as ctrl-c results in a ShutdownHook running. Should be no different than calling stop() from the JMX console.

                  If a true failure of Node A occurred (e.g. the server powered off), failover should still work. The proxy should have a target list that includes Node B, and it should fail over to it. After a period of time Node B will recognize that A is dead and will remove A from the proxy target list. A clean shutdown on A just makes this process cleaner, since the stop() method results in B being notified that A is no longer providing the service.

                  • 6. Re: MBean Clustering, JBoss 4.0.3
                    hannes.koller

                    Thanks for your reply. I have turned on TRACE as you suggested, and found out the following:


                    This is a line from my logs when the Invocations work as expected (deployed to two nodes via farming):

                    2006-03-01 09:18:19,678 TRACE [org.jboss.invocation.jrmp.interfaces.JRMPInvokerProxyHA] Init, clusterInfo: org.jboss.ha.framework.interfaces.FamilyClusterInfoImpl@c513c5e5{familyName=DefaultPartition/arsenal.at:service=WorkflowController/H,targets=[JRMPInvoker_Stub[UnicastRef2 [liveRef: [endpoint:[192.168.0.10:4447](remote),objID:[4b46ef39:109b4b3c025:-8000, 2]]]], JRMPInvoker_Stub[UnicastRef2 [liveRef: [endpoint:[192.168.0.12:4447](remote),objID:[-40e2315:109b4e0dbe2:-8000, 2]]]]],currentViewId=-243866888,isViewMembersInSyncWithViewId=true,cursor=1,arbitraryObject=null}, policy=org.jboss.ha.framework.interfaces.RoundRobin@8ab562
                    


                    Notice the two liveRefs that are present in the targets[]. Everything works fine...

                    As soon as I shut down one node the same trace looks like this:

                    2006-03-01 09:20:12,082 TRACE [org.jboss.invocation.jrmp.interfaces.JRMPInvokerProxyHA] Init, clusterInfo: org.jboss.ha.framework.interfaces.FamilyClusterInfoImpl@c513c5e5{familyName=DefaultPartition/arsenal.at:service=WorkflowController/H,targets=[],currentViewId=0,isViewMembersInSyncWithViewId=true,cursor=1,arbitraryObject=null}, policy=org.jboss.ha.framework.interfaces.RoundRobin@a3a716
                    


                    _Both_ liveRefs have vanished and targets[] is empty, which explains the ServiceUnavailableException, as you expected. So it boils down to this question: why are both liveRefs removed when I shut down one node?

                    (As I am absolutely new to this, it is highly probable that I am making some kind of obvious mistake which I am overlooking.)

                    Thanks for helping :)




                    • 7. Re: MBean Clustering, JBoss 4.0.3
                      hannes.koller

                      Ok, to further illustrate this behavior, I have played around with the JMX console.

                      The DistributedReplicantManager of the DefaultPartition has a listContent() method which displays the availability of services in the cluster partition. My setup is as follows: I have the service (WorkflowControllerService) deployed on two nodes, 192.168.0.10 and 192.168.0.12. On a third node I invoke the listContent() method on the local DistributedReplicantManager, and the following happens:

                      * start all three nodes
                      * deploy WorkflowControllerService on 192.168.0.10 and 192.168.0.12 with farming
                      * start up the JMX console on 192.168.0.11 (the third node) and invoke listContent()

                      -----------------------------------------------
                      Service : HAJNDI
                      
                       - Service *is* also available locally
                       - 192.168.0.10:1099
                       - 192.168.0.12:1099
                      
                      -----------------------------------------------
                      Service : arsenal.at:service=WorkflowController
                      
                       - Service is *not* available locally
                       - 192.168.0.10:1099
                       - 192.168.0.12:1099
                      
                      -----------------------------------------------
                      Service : DCacheBridge-DefaultJGBridge
                      
                       - Service *is* also available locally
                       - 192.168.0.10:1099
                       - 192.168.0.12:1099
                      



                      Ok, that's what I expected to see... At this stage the load-balanced invocations work fine... but now:

                      * kill node 192.168.0.12 with ctrl+c (or the other one; it does not matter, I have tried it with both)

                      * invoke listContent() on 192.168.0.11

                      -----------------------------------------------
                      Service : HAJNDI
                      
                       - Service *is* also available locally
                       - 192.168.0.10:1099
                      
                      -----------------------------------------------
                      Service : DCacheBridge-DefaultJGBridge
                      
                       - Service *is* also available locally
                       - 192.168.0.10:1099
                      


                      There! Both of the WorkflowController instances are gone. I would expect the instance on 192.168.0.10 to still be there.

                      The funny thing is, if I do not shut down the whole server but only use the stop() and destroy() methods of the associated ProxyFactoryHA on node 192.168.0.10, then it works as expected:

                      -- The output of listContent() then shows the remaining instance of the WorkflowController on 192.168.0.12, and everything works well.
                      -- I can even restart the ProxyFactoryHA using create() and start(); listContent() then displays it again, and invocations are RoundRobin-scheduled again, as I would expect them to be.

                      But as soon as I shut down one node completely, it breaks everything, and I can only get it to work again by manually undeploying the service from the farm (deleting the .sar from the farm/ directory) and then redeploying the service.

                      Awfully long post, sorry about that. But I am really not sure what to make of this :/


                      • 8. Re: MBean Clustering, JBoss 4.0.3
                        hannes.koller

                        Me again... :)

                        I managed to narrow down the area where things go wrong: undeployment.

                        When a node is shut down with ctrl+c, JBoss undeploys all the MBeans. Naturally it also undeploys my custom MBean. What seems to go wrong here is that when it undeploys the ProxyFactoryHA, it removes not only the one instance of the service from the DistributedReplicantManager but all of them.

                        Why do I believe this?

                        I tried to put a sleep() call in the stop() method of my MBean so I had time to reload the listContent() view of the DistributedReplicantManager while the MBean was being undeployed. I discovered that _all_ instances of arsenal.at:service=WorkflowController disappear as soon as the _first_ cluster node starts to undeploy the MBean (since the ProxyFactoryHA depends on that MBean, I assume it is removed before the stop() method on my MBean is called, and the sleep() gives me time to reload the view)...
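
                        (Roughly what I did, for anyone who wants to reproduce this; the 30 seconds is just an arbitrary pause I picked:)

                         public void stopService() throws Exception {
                          super.stopService();
                          // artificial pause so I can refresh the listContent() view while
                          // this node is in the middle of undeploying (debugging aid only)
                          Thread.sleep(30000);
                          Registry.unbind(new Integer(serviceName.hashCode()));
                         }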

                        Consequently I tested a real node failure by pulling the network cable on one of the nodes, and it _worked_: the cluster discovered the dead member, and the view in listContent() lost only the service instance of the failed node. All subsequent invocations were passed to the living node. No exceptions. I restarted the "dead" node and reconnected it to the network. It rejoined the cluster, and the invocations were distributed among the two nodes again.

                        That's why I am fairly certain that things go wrong during undeployment. Perhaps now the situation is clearer and someone can give me a hint on this? Thanks a lot. :)



                        • 9. Re: MBean Clustering, JBoss 4.0.3
                          hannes.koller

                          Ha, I figured it out (talking to myself seems to help a great deal ;) No offense, I know you are most likely on the other side of the planet and are probably still asleep while I am writing this)

                          Anyway... I had a look at the ProxyFactoryHA source, and came across this:

                          protected void containerIsAboutToStop()
                          {
                           if (target != null)
                           {
                            target.setInvocationsAuthorization(HATarget.DISABLE_INVOCATIONS);
                            target.disable();
                           }
                          }
                          


                          After asking Google for some more source code, I found out that target.disable() does a

                           this.partition.getDistributedReplicantManager().remove(this.replicantName);
                          

                          which, it seems to me, removes all replicas (removes its local replicant and propagates the remove operation to all other nodes in the cluster)...

                          I take it this is not a bug, but rather the expected behavior for this class? As I said, I am pretty new to this and could not find much documentation. Anyway, I discovered earlier that the stop() and destroy() methods of the ProxyFactoryHA seem to be sufficient to disable the replicant, so I simply made a new subclass that does nothing in the containerIsAboutToStop() method:

                          public class CustomProxyFactoryHA extends ProxyFactoryHA {

                           protected void containerIsAboutToStop() {
                            // do nothing: stop()/destroy() already remove the local replicant,
                            // so we skip the target.disable() call the superclass makes here
                           }
                          }
                          


                          and updated my jboss-service.xml file:

                           <mbean code="at.arsenal.spirit.commons.jndimbean.CustomProxyFactoryHA" name="jboss.test:service=ProxyFactory,name=HAService,protocol=jrmpha">
                          ........
                          



                          .... now everything seems to work absolutely smoothly. Turn a node off, switch it back on... everything behaves as I would expect it to.

                          Looks like this solved my problem. Or am I missing something else which I could have broken with this patch? :-)

                          • 10. Re: MBean Clustering, JBoss 4.0.3
                            brian.stansberry

                            Yep, I think I'm on the other side of the planet.

                            Thanks for your hard work on this -- later today I'll dig into your last post and try to give you deeper feedback. But the behavior you describe, where cleanly shutting down a node causes across-the-cluster removal of the service, certainly shouldn't be happening. At this point you're getting to be a guru on this stuff, so I doubt it's due to any config mistake on your part.

                            • 11. Re: MBean Clustering, JBoss 4.0.3
                              starksm64

                              I would create a JIRA issue with the example work you have done, as DistributedReplicantManager.remove does not remove all values for the given key. It just removes the value associated with the key. The behavior you describe does not occur when shutting down a node with clustered ejbs, so something does not seem correct with the usage.

                              • 12. Re: MBean Clustering, JBoss 4.0.3
                                hannes.koller

                                 

                                "scott.stark@jboss.org" wrote:
                                DistributedReplicantManager.remove does not remove all values for the given key. It just removes the value associated with the key.


                                Yes, you are correct. I tested remove's behavior in the JMX console today, and it does what it is supposed to do. My bad. Still, the patch I posted yesterday solved my problem, and I was curious why, so I played around some more... as soon as a target.destroy() appears in the containerIsAboutToStop() method, the behavior I described occurs...

                                The problem with this is the following: the containerIsAboutToStop() method gets called from the state-change listener. When one replicant is undeployed (for whatever reason), this method is called, but not only for the replicant which is going down: for ALL the replicants (they all registered a listener, after all). Consequently, every replicant thinks it is about to stop and disables itself. Now that was a tough one. :-)

                                I guess that's the core reason for the behavior I have been noticing over the last few days. It should not be so hard to fix: the ProxyFactoryHA just has to check whether the ServiceMBean.STOPPING event was meant for itself or for a different replica (see the sketch below).
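
                                Something along these lines, perhaps (just a sketch; I am assuming the factory receives standard JMX notifications and knows its target's ObjectName, so the targetName field and the super call are hypothetical):

                                 public void handleNotification(javax.management.Notification notification, Object handback) {
                                  // only react to state changes emitted by OUR target service;
                                  // getSource() is the ObjectName of the MBean that sent the event
                                  if (targetName.equals(notification.getSource())) {
                                   super.handleNotification(notification, handback);
                                  }
                                 }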

                                "scott.stark@jboss.org" wrote:

                                The behavior you describe does not occur when shutting down a node with clustered ejbs, so something does not seem correct with the usage.


                                This is just a guess, but I think this behavior does not occur with EJBs because they never receive ServiceMBean.STOPPING events?



                                • 13. Re: MBean Clustering, JBoss 4.0.3
                                  starksm64

                                  I think this is a problem related to how you're integrating with the clustering JMX, though. The ProxyFactoryHA should not be seeing a state-change notification from a remote node; this should only come from the local node. What have you clustered?

                                  • 14. Re: MBean Clustering, JBoss 4.0.3
                                    hannes.koller

                                    Hmm, at the moment I have clustered a simple test MBean; its interface is called WorkflowControllerServiceMBean:


                                    public interface WorkflowControllerServiceMBean extends HAMBean {
                                     public void test();
                                    }
                                    


                                    I extracted the invoke() method into a separate interface called HAMBean:

                                    import org.jboss.invocation.Invocation;
                                    import org.jboss.system.ServiceMBean;
                                    
                                    public interface HAMBean extends ServiceMBean {
                                     public Object invoke(Invocation invocation) throws Exception;
                                    }
                                    


                                    The implementation of the WorkflowControllerService just outputs some string to the console.

                                    public class WorkflowControllerService
                                     extends AbstractHAMBean implements WorkflowControllerServiceMBean {
                                    
                                     public void test() {
                                      System.out.println("TEST TEST TESTTEST TEST TESTTEST TEST TESTTEST TEST TEST");
                                     }
                                    }
                                    


                                    The AbstractHAMBean class that the WorkflowController extends contains the methods for setting up the method hashes and registering the service-name hash in the Registry. It is more or less a copy-and-paste from http://cvs.sourceforge.net/viewcvs.py/jboss/jbosstest/src/main/org/jboss/test/jmx/ha/HAService.java?rev=1.2.6.1&view=markup

                                    import java.lang.reflect.InvocationTargetException;
                                    import java.lang.reflect.Method;
                                    import java.lang.reflect.UndeclaredThrowableException;
                                    import java.security.Principal;
                                    import java.util.Collections;
                                    import java.util.HashMap;
                                    import java.util.Map;
                                    
                                    import org.jboss.ha.jmx.HAServiceMBeanSupport;
                                    import org.jboss.invocation.Invocation;
                                    import org.jboss.invocation.MarshalledInvocation;
                                    import org.jboss.security.SecurityAssociation;
                                    import org.jboss.system.Registry;
                                    
                                    public class AbstractHAMBean extends HAServiceMBeanSupport implements HAMBean {
                                    
                                     protected Map marshalledInvocationMapping = null;
                                    
                                     public void startService() throws Exception {
                                     super.startService();
                                     log.info("Calculating Method Hashes for " + serviceName);
                                     Class[] ifs = this.getClass().getInterfaces();
                                     HashMap<Long,Method> tmpMap = new HashMap<Long,Method>();
                                     for (Class i : ifs) {
                                     log.info("Adding Methods Interface: " + i.getName());
                                     // Calculate method hashes for remote invocation
                                     Method[] methods = i.getMethods();
                                     for (int m = 0; m < methods.length; m++) {
                                     Method method = methods[m];
                                     Long hash = new Long(MarshalledInvocation.calculateHash(method));
                                     //log.info("Adding Method: " + method.getName());
                                     tmpMap.put(hash, method);
                                     }
                                     }
                                    
                                     Registry.bind(new Integer(serviceName.hashCode()), serviceName);
                                     marshalledInvocationMapping = Collections.unmodifiableMap(tmpMap);
                                    
                                     log.info("Started MBean: " + serviceName);
                                     }
                                    
                                     public void stopService() throws Exception {
                                     super.stopService();
                                     Registry.unbind(new Integer(serviceName.hashCode()));
                                     log.info("Stopped MBean: " + serviceName);
                                     }
                                    
                                     /**
                                     * This is the "remote" entry point
                                     */
                                     public Object invoke(Invocation invocation) throws Exception {
                                     // Invoked remotely, inject method resolution
                                     if (invocation instanceof MarshalledInvocation) {
                                     MarshalledInvocation mi = (MarshalledInvocation) invocation;
                                     mi.setMethodMap(marshalledInvocationMapping);
                                     }
                                     Method method = invocation.getMethod();
                                     Object[] args = invocation.getArguments();
                                    
                                     // Setup any security context (only useful if something checks it, this
                                     // impl doesn't)
                                     Principal principal = invocation.getPrincipal();
                                     Object credential = invocation.getCredential();
                                     SecurityAssociation.setPrincipal(principal);
                                     SecurityAssociation.setCredential(credential);
                                    
                                     // Dispatch the invocation
                                     try {
                                     return method.invoke(this, args);
                                     } catch (InvocationTargetException e) {
                                     Throwable t = e.getTargetException();
                                     if (t instanceof Exception) {
                                     throw (Exception) t;
                                     } else {
                                     throw new UndeclaredThrowableException(t, method.toString());
                                     }
                                     } finally {
                                     // Clear the security context
                                     SecurityAssociation.clear();
                                     }
                                     }
                                    
                                    }
                                    




                                    The associated jboss-service.xml looks like this:

                                    <?xml version="1.0" encoding="UTF-8"?>
                                    
                                    <server>
                                    
                                     <!-- Create JRMPHA proxy for our service -->
                                     <mbean code="org.jboss.proxy.generic.ProxyFactoryHA" name="arsenal.at:service=ProxyFactoryHA,name=WorkflowControllerFactory,protocol=jrmpha">
                                     <!-- Use the default partition -->
                                     <depends optional-attribute-name="PartitionObjectName">jboss:service=DefaultPartition</depends>
                                    
                                     <!-- Use the standard JRMPInvoker from conf/jboss-service.xml -->
                                     <depends optional-attribute-name="InvokerName">jboss:service=invoker,type=jrmpha</depends>
                                    
                                     <!-- The load balancing policy -->
                                     <attribute name="LoadBalancePolicy">org.jboss.ha.framework.interfaces.RoundRobin</attribute>
                                    
                                     <!-- We depend on the HA-JNDI naming service -->
                                     <depends>jboss:service=HAJNDI</depends>
                                    
                                     <!-- The target MBean -->
                                     <depends optional-attribute-name="TargetName">arsenal.at:service=WorkflowController</depends>
                                    
                                     <!-- Where to bind the proxy -->
                                     <attribute name="JndiName">WorkflowController</attribute>
                                    
                                     <!-- The interface exposed to the client -->
                                     <attribute name="ExportedInterface">at.arsenal.spirit.services.workflow.WorkflowControllerServiceMBean</attribute>
                                    
                                     <!-- Client side behaviour -->
                                     <attribute name="ClientInterceptors">
                                     <interceptors>
                                     <interceptor>org.jboss.proxy.ClientMethodInterceptor</interceptor>
                                     <interceptor>org.jboss.invocation.InvokerInterceptor</interceptor>
                                     </interceptors>
                                     </attribute>
                                     </mbean>
                                    
                                     <mbean code="at.arsenal.spirit.services.workflow.WorkflowControllerService" name="arsenal.at:service=WorkflowController">
                                     <depends>jboss:service=HAJNDI</depends>
                                     </mbean>
                                    </server>
                                    


                                    The .sar with the MBean is deployed with farming on several nodes. Associated farm-service.xml:

                                    <?xml version="1.0" encoding="UTF-8"?>
                                    <server>
                                     <mbean code="org.jboss.ha.framework.server.FarmMemberService"
                                     name="jboss:service=FarmMember,PartitionName=DefaultPartition">
                                     <depends>jboss:service=${jboss.partition.name:DefaultPartition}</depends>
                                     <attribute name="PartitionName">DefaultPartition</attribute>
                                     <attribute name="ScanPeriod">5000</attribute>
                                     <attribute name="URLs">farm/</attribute>
                                     <attribute name="FilterInstance"
                                     attributeClass="org.jboss.deployment.scanner.DeploymentFilter"
                                     serialDataType="javaBean">
                                     <property name="prefixes">#,%,\,,.,_$</property>
                                     <property name="suffixes">#,$,%,.BAK,.old,.orig,.rej,.bak,.sh,\,v,~</property>
                                     <property name="matches">.make.state,.nse_depinfo,CVS,CVS.admin,RCS,RCSLOG,SCCS,TAGS,core,tags</property>
                                     </attribute>
                                     <attribute name="URLComparator">org.jboss.deployment.scanner.PrefixDeploymentSorter</attribute>
                                     <attribute name="Deployer">jboss.system:service=MainDeployer</attribute>
                                     </mbean>
                                    </server>
                                    


                                    That's the configuration that showed the behavior of all replicants disappearing when one node is shut down. It would be great if you could give me a hint if something is wrong with this setup. Thanks :-)

