RichFaces 4 Test Plan

Introduction

The following test plan describes the formal testing to be performed by the Red Hat/JBoss QA team for the RichFaces project. This test plan covers the items included in the test project, the specific risks to product quality we intend to address, the test environment, problems that could threaten the success of testing, test tools and harnesses we will need to develop, and the test execution process. Development of unit tests occurs outside of the test team's area, but the QA team shall provide a correct test environment so that they can be run effectively in the QA lab as part of the testing process. This document also lays out the strategies, resources, and roles involved in performing RichFaces testing as a distinct testing subproject within JBoss QE.

     

Expansion of test coverage as well as development of automated tests shall be a continuous process. The test team will develop both automated and manual tests to cover the quality risks identified in discussion with the development team, augmenting that test set to the extent possible under resource and time constraints with community-based tests. As a release date approaches, the RichFaces QA team will receive the tagged code base from the development team approximately one week before the scheduled release date and will execute those tests against it.

 

Scope

  • Features to be developed and tested
    • Successful run of unit tests
    • Automated and Manual browser compatibility tests
    • Tools and Testing Framework Extension or Development
    • Regression Tests
    • Integration testing with CDI
    • Integration testing with Seam
    • Integration testing with Portal Server through Portlet Bridge
    • Application Server Compatibility
    • Testing on cloud environments (Priority II)
    • Load and Performance
    • Failover in Clustered Deployment
    • Backward Compatibility when applicable
    • Compatibility with JBoss Developer Studio (JBDS)
    • Documentation
  • Features not to be developed and tested
    • Unit tests
    • Integration with other portal servers such as Liferay (depends on time and resource constraints)
    • Mobile Client

Software Risk Issues

  • JSF 2 is new, so there is a learning curve
  • RF4 is currently in the alpha stage, so priorities may have to be reshuffled as the situation evolves
  • New versions of Seam, Portal, or Weld may not be in sync with RichFaces development
  • Too many combinations to test, such as AS versions, Tomcat versions, and Portal versions
  • Too many front-end interfaces, primarily the browser + OS matrix

 

Test Deliverables

  • Report showing a 100% pass rate for unit tests
  • Automated and manual test scripts and results for browser compatibility tests
  • Automated and manual test scripts and results for integration testing with Seam and Portal Servers
  • Automated and manual test scripts and results for load and performance testing

Test Approach

Test Development:

  • Richfaces Selenium Library
    • An extension to Selenium that provides a more type-safe way to develop Selenium test cases in terms of jQuery and CSS locators.
    • The library provides a cleaner way to make tests pass across different Firefox and IE versions.
  • Component Testing Framework
    • A new application based on JSF 2 and RichFaces to test all possible attributes and events of each RichFaces component.
    • This application also allows a component to be tested inside container components such as data tables and panels.
    • For each component, a corresponding functional test case will be written to test the component.
  • Example Applications
    • Example applications of varying complexity that will not only help us proactively test components as used in practical applications but also provide examples for the community.
    • Different example applications that show integration with Weld and Portal.
  • Performance Testing
    • Several performance test suites will be developed using SmartFrog and automated in Hudson. These tests shall range from a simple application with a few components to a complex application with multiple components.
    • CPU and memory profiling data shall be captured.
    • Tools for client-side performance are not yet identified; we may have to fall back on manual testing using Firebug and similar browser plugins.
  • Regression Testing
    • For each bug found by the community, QE, or development, we will add a corresponding test to the regression suite or add a unit test case. This test suite is expected to grow gradually.
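As a rough illustration of the type safety the Selenium library extension aims for, a locator wrapper could look like the sketch below. This is a hypothetical sketch only; the `JQueryLocator` class and its methods are illustrative and not the actual RichFaces Selenium Library API.

```java
// Hypothetical sketch of a type-safe locator wrapper in the spirit of the
// RichFaces Selenium Library described above; names are illustrative, not
// the real library API.
class JQueryLocator {
    private final String selector;

    private JQueryLocator(String selector) {
        this.selector = selector;
    }

    // Build a locator from a raw jQuery/CSS selector string.
    static JQueryLocator jq(String selector) {
        return new JQueryLocator(selector);
    }

    // Narrow this locator to a descendant, instead of concatenating raw
    // strings inside every test case.
    JQueryLocator getDescendant(JQueryLocator child) {
        return new JQueryLocator(selector + " " + child.selector);
    }

    // The string form handed to Selenium's jQuery locator strategy.
    String getAsString() {
        return "jquery=" + selector;
    }
}
```

Composing locators through methods rather than string concatenation is what lets tests survive small markup changes across Firefox and IE with fewer per-browser hacks.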

Test Execution

  • Unit tests
    • Unit tests based on various frameworks such as JUnit, EasyMock, and JSFUnit will be run in Hudson [More details to be filled]
    • RF QE will monitor builds and fix problems caused by environment misconfiguration
  • Component Testing
    • RF has many components, so listing how to test each of them is not feasible in this test plan.
    • We will use the Component Testing Framework mentioned in Test Development.
    • Assuming that not every release changes all the components, a list of components whose behavior has changed shall be given to QE so that QE can focus manual testing on those components.
  • Functional Testing
    • A review of issues reported in the user forum for 3.3.X, as well as interaction with several groups within JBoss such as JOPR and Portal, shows that most issues are found when RF components are used together. A new testing framework similar to test-applications shall be used, as mentioned above, which allows components to be tested with modal panels, data tables, and other container components.
    • We will use Component Testing Framework mentioned in Test Development.
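To make the component-in-container coverage concrete, the framework could enumerate the combinations to exercise. The `ContainerMatrix` helper below is a hypothetical sketch, not part of any existing framework, and the component names used with it are examples only.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper illustrating how the Component Testing Framework could
// enumerate component-in-container test cases; not an actual framework class.
class ContainerMatrix {
    // For each component, produce one standalone case plus one case per
    // container the component is nested inside.
    static List<String> combinations(List<String> components, List<String> containers) {
        List<String> cases = new ArrayList<String>();
        for (String component : components) {
            cases.add(component + " (standalone)");
            for (String container : containers) {
                cases.add(component + " inside " + container);
            }
        }
        return cases;
    }
}
```

Enumerating the matrix up front makes it easy to see which component/container pairs still lack a functional test.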

 

  • Integration and Browser Compatibility tests with Selenium
    • Rewrite the component demo app tests and run them with all supported browsers available in the QA lab.
    • Run tests developed for functional testing against different browsers.
    • After analysis of commonly found bugs and gaps in test coverage, RF QE shall develop test applications and use them with Selenium to automate and expand test coverage.
  • JBoss Portal Compatibility tests
    • The RF demo app as well as other example applications shall be used along with the JBoss Portlet Bridge to test compatibility with the latest GateIn Portal and EPP server. JSF 2 support in the bridge is still a work in progress, so there is a hard dependency on the bridge and this testing cannot be well defined yet.
    • RF QE shall work with JBoss Portal QE to automate these tests in a portal environment.
    • There are some unknowns here.
  • Seam framework and CDI compatibility tests (TBD)
    • Existing Richfaces based application in Seam codebase as well as new applications shall be used to test compatibility with Seam framework
    • RF QE shall work with Seam QE to automate these tests.
    • RF QE shall develop example applications that use CDI
    • Open question: should both Seam 2 and Seam 3 be supported?
  • JOPR testing
    • The latest JOPR release shall be tested with each release of RichFaces when applicable.
  • Manual tests
    • No automation by itself is enough to verify the look and feel as well as the complete functionality of a web application or web application framework.
    • The rendering and functionality of each demo app and major component shall be verified manually using each of the available skins.
  • Application Server Compatibility
    • RF QE shall automate testing of various demo and example applications using Selenium with all the certified application and web servers mentioned in the Test Environment section.
  • Cloud Environment testing
    • A subset of tests, such as the component demo and example applications, will be run on GAE first and then on other cloud providers as time permits. This is priority II.
  • Load and Performance Testing
      • Base Coverage
        • A very simple demo application shall be used for performance testing to analyze and verify the RF framework.
        • Performance shall be monitored as the number of concurrent users ramps up from 100 to 2000 in increments of 100.
        • TODO: define the expected average response time under each load.
        • This demo application shall be run with 1000 concurrent users for a period of 12 hours as part of stress testing. Average response time shall be monitored and analyzed.
        • Grinder/SmartFrog shall be used for load generation as well as monitoring. We will have to see whether SmartFrog-Sniff can be used here.
      • Expanded Coverage
        • Once base is covered, a more complex application shall be used for load testing
        • Memory leaks and usage analysis
      • Client Performance
        • Tools TBD
        • Performance in the browser needs to be monitored.
        • Looking for ideas.
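The ramp-up described above (100 to 2000 users in steps of 100) can be generated programmatically before being fed to the load driver. The `RampSchedule` class below is a minimal hypothetical sketch, not part of Grinder or SmartFrog.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch (hypothetical, not a Grinder/SmartFrog API): generate the
// concurrent-user steps for the ramp-up described in the plan.
class RampSchedule {
    // Returns the user counts from `start` to `max` inclusive, in `step`
    // increments, e.g. 100, 200, ..., 2000.
    static List<Integer> userSteps(int start, int max, int step) {
        List<Integer> steps = new ArrayList<Integer>();
        for (int users = start; users <= max; users += step) {
            steps.add(users);
        }
        return steps;
    }
}
```

With the plan's parameters this yields 20 load levels, one measurement window per level.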

     

    • Security Testing
      • TBD
    • Misc
      • Global namespace pollution
      • Usability testing (will be covered as part of developing example applications)
    • Backward Compatibility
      • Yet to be defined
    • JBDS Compatibility
      • RF QE shall use JBDS to develop and test demo and example applications
    • Documentation Check
      • RF QE will do a sanity check of the RF documentation but shall not be responsible for documentation development.

    Pass/Fail Criteria

    Criteria depend on whether the release is a final release or a milestone release.

    For Final releases:

    • All unit tests pass
    • All automated integration tests pass
    • All major and minor Richfaces components pass with manual testing across different test environments
    • Seam and Portal compatibility tests pass
    • Documentation is in sync with development

    For Milestone Releases:

    • All unit tests pass
    • All automated integration tests pass
    • All major Richfaces components pass with manual testing across priority I test environments

    For Alpha releases:

    • Alpha releases have too many moving parts, so passing unit tests is the only requirement
    • QE shall use this release to get familiar with new codebase and for test development

    Suspension Criteria and Resumption Criteria

    Testing shall be suspended if:

    • unit tests fail
    • there is a problem in the RichFaces demo app
    • the application does not deploy in a priority I test environment

     

    Test Environment

    • OS & Browser

     

    OS        Browser             Priority
    Linux     Firefox 3.6, 3.5    I
    Windows   IE8 / IE7           I
    Windows   Firefox 3.6, 3.5    I
    Windows   Chrome              II
    Windows   Opera               III
    Linux     Chrome              III
    Mac       Safari              III

     

    • Application and Web Servers
      • Latest compatible community JBoss Application Server, such as JBoss AS 6.*
      • Latest compatible Apache Tomcat such as Tomcat 6.x
    • Java Versions
      • Sun JVM Version 6
      • Open JDK 6
      • Sun JVM version 5 (TBD)
    • JSF Version
      • Only the Reference Implementation shall be used. Other implementations such as MyFaces shall not be tested and will be left to the community.
    • Seam and Weld
      • The latest compatible release
    • JBoss Portal/GateIn
      • The latest JBoss Portal release along with corresponding JBoss Portlet Bridge

    Test Case Tracking

         Jira shall be used for test case management and tracking.

     

    Bug Tracking

         Jira shall be used as the bug tracking tool. Once a bug is resolved, a snippet of code and/or an explanation shall be added as a comment in Jira to help better understand the problem so that a test case can be added.

     

    Staffing/Team

    • Prabhat Jha
    • Lukas Fryc
    • Pavol Pitonak
    • Somebody from Portal team
    • TBD from Seam/Weld team
    • TBD from JON team

     

    Schedule

    • Dev will tag the code base in SVN 2 weeks prior to expected release date
    • QE shall work with the tag; the tag may need to be updated based on intermediate test results
    • Any bug not fixed in the current tag shall be marked to be fixed in the next release if it is not a GA release
    • For a GA release, a bug must be fixed if it is a blocker; otherwise it shall be postponed to the next release.

    Responsibilities

    • Prabhat Jha is the lead and 50% devoted to Richfaces project
    • Lukas will expand the test coverage by writing new example applications, tools and tests and automating them
    • Pavol will expand the test coverage by writing new example applications, tools and tests and automating them
    • Others to be decided

    Planning Risks and Contingencies

    • Lack of hardware/software: it is possible that not all hardware and software will be available. In that case, tests dependent on the missing hardware/software combination shall not be executed.
    • Delay in training on the application and tools: we will rely on manual tests with the available applications.