I came across an interesting project for testing differences between pages across rendering engines / browsers:
It has goals similar to RushEye, but the implementation differs significantly.
It would be interesting to explore how we could integrate with it using Drone.
On a bit of a tangent, I came across information about openQA from openSUSE, which also compares images, but of desktop screenshots.
"openQA can test any operating system which is able to run in a virtual machine. By taking screenshots of the process and comparing these to reference images it determines if tests are passed or not. The OS-autoinst tool, part of openQA, controls the testing process by sending virtual keyboard and mouse events to the operating system being installed and run. It is able to respond to what is shown on the screen so it can handle a variety of issues while not having to stop the test."
This would sort of be where we start to cross over with the Fedora QA team. I think working at a conceptual level with them is the right place to begin (sharing techniques, not necessarily code).
(I wish I had noticed the comments in this thread sooner.)
Web Consistency Testing compares the layout of the page, verifying the position of DOM elements.
In the RushEye project, we focus on automated image comparison.
We use images generated automatically from any Selenium/WebDriver test suite.
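The core idea can be sketched as a pixel-by-pixel comparison with a per-channel tolerance. This is only a minimal illustration of the concept, not RushEye's actual algorithm; images here are just nested lists of RGB tuples rather than real screenshot data:

```python
# Minimal sketch of tolerance-based image comparison (not RushEye's
# actual algorithm): an image is a list of rows of (R, G, B) tuples,
# and two pixels match when every channel differs by at most `tolerance`.

def images_match(expected, actual, tolerance=10):
    """Return True when both images have the same dimensions and every
    pixel pair is within the per-channel tolerance."""
    if len(expected) != len(actual):
        return False
    for row_e, row_a in zip(expected, actual):
        if len(row_e) != len(row_a):
            return False
        for (r1, g1, b1), (r2, g2, b2) in zip(row_e, row_a):
            if (abs(r1 - r2) > tolerance or
                    abs(g1 - g2) > tolerance or
                    abs(b1 - b2) > tolerance):
                return False
    return True

# A tiny 1x2 "screenshot" and a slightly noisy copy still match...
base = [[(200, 200, 200), (10, 10, 10)]]
near = [[(205, 198, 200), (10, 12, 10)]]
print(images_match(base, near))   # -> True (small differences tolerated)

# ...but a strongly changed pixel fails the comparison.
far = [[(200, 200, 200), (120, 10, 10)]]
print(images_match(base, far))    # -> False
```

In practice the input images would come from Selenium/WebDriver screenshots and the comparison would be far more sophisticated; the tolerance is what lets anti-aliasing and rendering noise pass while real regressions fail.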
Additionally, what should make the project really successful is a tool that simplifies the review phase.
There are currently two GSoC project proposals: implementing the review tool and improving RushEye's algorithms.
The openQA project seems very interesting.
However, I don't find a web interface a good choice for the review process.
The RushEye process is based on running the regular comparison in a CI environment.
The results are separated per Selenium test method (similarly to openQA).
Once a test fails, you can review it from the CI web view, but when you need to make modifications,
you start the desktop client.
It downloads configurations and test results,
then fetches images on demand and lets you review them.
Once you recognize the problem, you can make a change to the test suite, which is one of:
- accept an added/removed test
- accept a modified image
- define a mask that filters your false positives out
What's great is that you can use several algorithms to mask issues that appear across many images, such as a change of the logo in the page header, and quickly re-verify locally.
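To illustrate the mask idea (my own sketch, not RushEye's actual API or data model): a mask can be thought of as a set of rectangles whose pixels are excluded from the comparison, so a changed logo in a fixed header region is filtered out once for every screenshot it appears in:

```python
# Illustrative sketch of an ignore-mask (not RushEye's actual API):
# a mask is a list of rectangles (x, y, width, height); any pixel
# inside a rectangle is excluded from the diff.

def masked(x, y, mask):
    """True when pixel (x, y) falls inside any mask rectangle."""
    return any(mx <= x < mx + mw and my <= y < my + mh
               for mx, my, mw, mh in mask)

def diff_pixels(expected, actual, mask=()):
    """Return coordinates of differing pixels, skipping masked areas.
    Images are equal-sized lists of rows of (R, G, B) tuples."""
    diffs = []
    for y, (row_e, row_a) in enumerate(zip(expected, actual)):
        for x, (pe, pa) in enumerate(zip(row_e, row_a)):
            if pe != pa and not masked(x, y, mask):
                diffs.append((x, y))
    return diffs

# Two 2x2 screenshots differing only in the top-left "logo" pixel:
old = [[(255, 0, 0), (9, 9, 9)], [(9, 9, 9), (9, 9, 9)]]
new = [[(0, 0, 255), (9, 9, 9)], [(9, 9, 9), (9, 9, 9)]]
print(diff_pixels(old, new))                       # -> [(0, 0)]
print(diff_pixels(old, new, mask=[(0, 0, 1, 1)]))  # -> [] (logo masked out)
```

The same mask can then be applied across the whole suite's screenshots, which is what makes the local re-verification quick.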