7 Replies Latest reply on Apr 6, 2012 11:25 AM by lfryc

    Web Consistency Testing

    lfryc

      I came across an interesting project for testing differences between pages across rendering engines / browsers:

       

      http://webconsistencytesting.com/

       

      It has goals similar to RushEye's, but the implementation really differs.

       

      It would be interesting to explore how we could integrate with it using Drone.

        • 1. Re: Web Consistency Testing
          dan.j.allen

          Absolutely. In fact, in the process we may discover the need for an abstraction layer around diff engines.

           

          Btw, this would be a nice GSoC proposal. Even if it doesn't get picked up by GSoC, putting together a brief statement of work helps establish goals and ultimately get it kicked off.

          • 2. Re: Web Consistency Testing
            aslak

            hmm, this could be done with RushEye as well, right? Just a slightly different use case: use one image as the base of comparison for another?
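
            That reference/target framing could be illustrated with a plain pixel diff, independent of RushEye's actual API. A minimal sketch (the class and method names here are illustrative, not from either tool):

```java
import java.awt.image.BufferedImage;

public class PixelDiff {

    /** Returns the number of pixels that differ between two same-sized images. */
    static int countDifferingPixels(BufferedImage reference, BufferedImage target) {
        if (reference.getWidth() != target.getWidth()
                || reference.getHeight() != target.getHeight()) {
            throw new IllegalArgumentException("Images must have the same dimensions");
        }
        int diff = 0;
        for (int y = 0; y < reference.getHeight(); y++) {
            for (int x = 0; x < reference.getWidth(); x++) {
                if (reference.getRGB(x, y) != target.getRGB(x, y)) {
                    diff++;
                }
            }
        }
        return diff;
    }

    public static void main(String[] args) {
        // Two tiny in-memory "screenshots": one from the reference browser,
        // one from the target browser, identical except for a single pixel.
        BufferedImage reference = new BufferedImage(4, 4, BufferedImage.TYPE_INT_RGB);
        BufferedImage target = new BufferedImage(4, 4, BufferedImage.TYPE_INT_RGB);
        target.setRGB(2, 2, 0xFF0000); // introduce one differing (red) pixel

        System.out.println(countDifferingPixels(reference, target)); // prints 1
    }
}
```

            In practice the two images would be screenshots of the same page taken in two browsers; the diff count (or a thresholded version of it) then decides pass/fail.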

            • 3. Re: Web Consistency Testing
              dan.j.allen

              You said that the implementation really differs. Could you quickly enumerate what makes the two approaches so different?

              • 4. Re: Web Consistency Testing
                dan.j.allen

                On a bit of a tangent, I came across information about openQA from openSUSE, which also compares images, but of desktop screenshots.

                 

                "openQA can test any operating system which is able to run in a virtual machine. By taking screenshots of the process and comparing these to reference images it determines if tests are passed or not. The OS-autoinst tool, part of openQA, controls the testing process by sending virtual keyboard and mouse events to the operating system being installed and run. It is able to respond to what is shown on the screen so it can handle a variety of issues while not having to stop the test."

                 

                http://news.opensuse.org/2011/10/11/opensuse-announces-first-public-release-of-openqa/

                 

                This would sort of be where we start to cross over with the Fedora QA team. I think working at a conceptual level with them is the right place to begin (sharing techniques, not necessarily code).

                • 5. Re: Web Consistency Testing
                  lfryc

                  (I wish I had noticed the comments in this thread sooner.)

                   

                  Web Consistency Testing compares the layout of the page, verifying the position of DOM elements.

                   

                  In the RushEye project, we focus on automated image comparison.

                  We use images generated automatically from any Selenium/WebDriver test suite.
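
                  The layout-based approach can be sketched without a browser: assume the bounding boxes of a few elements have already been collected from each browser (e.g. via WebDriver's getLocation() and getSize()), then compare them within a tolerance. The element names, tolerance, and helper below are illustrative assumptions, not part of Web Consistency Testing or RushEye:

```java
import java.awt.Rectangle;
import java.util.HashMap;
import java.util.Map;

public class LayoutDiff {

    /**
     * Reports elements whose bounding box differs between two browsers
     * by more than the given tolerance (in pixels) in any coordinate.
     */
    static Map<String, Rectangle> findShiftedElements(
            Map<String, Rectangle> reference, Map<String, Rectangle> target, int tolerance) {
        Map<String, Rectangle> shifted = new HashMap<>();
        for (Map.Entry<String, Rectangle> e : reference.entrySet()) {
            Rectangle ref = e.getValue();
            Rectangle tgt = target.get(e.getKey());
            if (tgt == null
                    || Math.abs(ref.x - tgt.x) > tolerance
                    || Math.abs(ref.y - tgt.y) > tolerance
                    || Math.abs(ref.width - tgt.width) > tolerance
                    || Math.abs(ref.height - tgt.height) > tolerance) {
                shifted.put(e.getKey(), tgt);
            }
        }
        return shifted;
    }

    public static void main(String[] args) {
        // Hypothetical element boxes captured from two rendering engines.
        Map<String, Rectangle> browserA = new HashMap<>();
        browserA.put("#header", new Rectangle(0, 0, 960, 80));
        browserA.put("#sidebar", new Rectangle(0, 80, 200, 600));

        Map<String, Rectangle> browserB = new HashMap<>();
        browserB.put("#header", new Rectangle(0, 0, 960, 80));
        browserB.put("#sidebar", new Rectangle(0, 95, 200, 600)); // shifted 15px down

        System.out.println(findShiftedElements(browserA, browserB, 5).keySet()); // prints [#sidebar]
    }
}
```

                  This is what makes the two approaches differ: the layout diff points at a specific DOM element, while an image diff points at pixels.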

                   

                  Additionally, what should make the project really successful is a tool that simplifies the review phase.

                   

                   

                  There are currently two GSoC project proposals: implementing the review tool and improving RushEye's algorithms.

                  • 6. Re: Web Consistency Testing
                    lfryc

                    One disadvantage of image comparison is that it is almost impossible to compare rendered pages across browser implementations.

                     

                    That is something Web Consistency Testing can offer: taking one browser as the reference and a second as the target of the test.

                    • 7. Re: Web Consistency Testing
                      lfryc

                      The openQA project seems very interesting.

                       

                      However, I don't find a web interface a good choice for the review process.

                       

                       

                      The RushEye process is based on running the regular comparison in a CI environment.

                      The results are separated per Selenium test method (similar to openQA).

                       

                      Once a test fails, you can review it from the CI web view, but when you need to make modifications,

                      you start the desktop client.

                       

                      It downloads configurations and test results.

                      Then it downloads images on demand and allows you to review them.

                      Once you recognize the problem, you can make a change to the test suite, which is one of:

                       

                      • accept an added/removed test
                      • accept a modified image
                      • define a mask that filters out your false positives

                       

                      What's great is that you can use several algorithms to mask issues that span many images and quickly re-verify locally.

                      For example, a change of the logo in the page header.
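
                      That masking idea can be sketched as an image diff that simply skips masked pixels; one mask (say, over the logo region) can then be reused across every screenshot in the suite. This is a plain-Java illustration, not RushEye's actual masking algorithm:

```java
import java.awt.Rectangle;
import java.awt.image.BufferedImage;

public class MaskedDiff {

    /** Compares two same-sized images, ignoring any pixel inside the mask. */
    static boolean matches(BufferedImage reference, BufferedImage target, Rectangle mask) {
        for (int y = 0; y < reference.getHeight(); y++) {
            for (int x = 0; x < reference.getWidth(); x++) {
                if (mask != null && mask.contains(x, y)) {
                    continue; // masked region: differences here are expected, skip
                }
                if (reference.getRGB(x, y) != target.getRGB(x, y)) {
                    return false;
                }
            }
        }
        return true;
    }

    public static void main(String[] args) {
        BufferedImage before = new BufferedImage(10, 10, BufferedImage.TYPE_INT_RGB);
        BufferedImage after = new BufferedImage(10, 10, BufferedImage.TYPE_INT_RGB);
        after.setRGB(1, 1, 0x00FF00); // simulate a changed logo pixel in the header

        Rectangle logoMask = new Rectangle(0, 0, 4, 4); // mask covering the logo area

        System.out.println(matches(before, after, null));     // prints false
        System.out.println(matches(before, after, logoMask)); // prints true
    }
}
```

                      With the mask applied, the same expected change (a new logo) no longer fails every test that happens to show the header.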