
    TestSuite - redesign of some tests and "false" coverage

    maeste

      Hi all,

       

      A deep look at our test suite reveals that the coverage report is not telling the whole truth about our test situation.

      Coverage is sometimes overestimated. To put it better: Cobertura is not wrong from a pure coverage point of view, but some of our tests (for example the parsers' ones) are not well defined in terms of the atomicity of the behaviors they verify. In a lot of cases, to meet our very short release cycle, we have written generic tests stressing all possible problems in just one test. For example, I've written this horrible test myself:

       

      @Test
      public void shouldParseAnyExample() throws Exception
      {
         FileInputStream is = null;
         //given
         File directory = new File(Thread.currentThread().getContextClassLoader().getResource("ds").toURI());
         for (File xmlFile : directory.listFiles(new FileSuffixFilter("-ds.xml")))
         {
            System.out.println(xmlFile.getName());
            try
            {
               is = new FileInputStream(xmlFile);
               DsParser parser = new DsParser();
               //when
               DataSources ds = parser.parse(is);
               //then
               assertThat(ds.getDataSource().size() + ds.getXaDataSource().size(), is(1));
            }
            finally
            {
               if (is != null)
                  is.close();
            }
         }
      }

       

      Of course it gives us good coverage, parsing a lot of files, but in case of failure the reason could become a pain to track down, and it doesn't help much as a regression test either.

      I think we should review this kind of code to have more proper UNIT tests.

      From this point of view, I'd also very much like to limit XML files as test resources as far as possible, defining the XML to be parsed inline inside the tests (for this kind of work the Scala language and its native XML type would be a dream). This approach has the plus of being a real unit test: a failure contains everything needed to read and understand the given-when-then conditions. The minus is that we would provide fewer example files in our test suite. See the sketch below.
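
       

      To give an idea, here is a minimal sketch of what such an inline test could look like. It assumes DsParser accepts any InputStream (above it is fed a FileInputStream), and the element and attribute names in the string (datasources, datasource, jndi-name, connection-url) are just for illustration, not taken from the real schema:

       

      import static org.hamcrest.CoreMatchers.is;
      import static org.junit.Assert.assertThat;

      import java.io.ByteArrayInputStream;
      import java.io.InputStream;

      import org.junit.Test;

      public class DsParserInlineTestCase
      {
         @Test
         public void shouldParseSingleDataSourceDefinedInline() throws Exception
         {
            //given: the XML under test sits right next to the assertions
            String xml =
               "<datasources>" +
               "  <datasource jndi-name=\"java:/ExampleDS\">" +
               "    <connection-url>jdbc:h2:mem:test</connection-url>" +
               "  </datasource>" +
               "</datasources>";
            InputStream is = new ByteArrayInputStream(xml.getBytes("UTF-8"));
            try
            {
               DsParser parser = new DsParser();
               //when
               DataSources ds = parser.parse(is);
               //then: one atomic behavior verified per test
               assertThat(ds.getDataSource().size(), is(1));
               assertThat(ds.getXaDataSource().size(), is(0));
            }
            finally
            {
               is.close();
            }
         }
      }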

      Please let me know what you think. My ideal solution would be to keep both approaches: keep the above test parsing real XML files, but move it to an integration-test phase (no coverage consideration there), and add the PURE UNIT tests described above with inline XML generation.
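
       

      Assuming we stay on Maven, one possible way to wire that split would be something like the following sketch: surefire keeps running the fine-grained unit tests in the test phase, while the file-based tests, renamed to a *ITCase convention (which matches failsafe's default includes), run in the integration-test phase. The plugin wiring below is illustrative only:

       

      <!-- sketch: unit tests stay in surefire (test phase, counted for coverage) -->
      <plugin>
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-surefire-plugin</artifactId>
         <configuration>
            <excludes>
               <exclude>**/*ITCase.java</exclude>
            </excludes>
         </configuration>
      </plugin>
      <!-- sketch: file-based *ITCase tests run in the integration-test phase -->
      <plugin>
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-failsafe-plugin</artifactId>
         <executions>
            <execution>
               <goals>
                  <goal>integration-test</goal>
                  <goal>verify</goal>
               </goals>
            </execution>
         </executions>
      </plugin>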

       

      best regards

      S.

        • 1. Re: TestSuite - redesign of some tests and "false" coverage
          jesper.pedersen

          I think that individual XML files are easier to understand for contributors - especially if they contain comments, and when the metadata is supposed to be mixed (XML and annotations).

           

          Using other JVM languages for testing could be a benefit in the long run, but I think using a "standard" approach to the problem at the moment is best.

           

          And of course splitting up test cases is good - a big bang approach is too difficult to maintain.