5 Replies Latest reply on Aug 27, 2009 12:04 AM by johnbailey

    Archive Conversion Design Discussion

    johnbailey

      In fleshing out the design of an archive conversion utility, I wanted to get some more input. There seem to be a few possible use cases for converting archive types.

      1. Change the internal storage archive of an existing container archive (this seems like the primary use case):

      JavaArchive javaMemArchive = new JavaArchiveImpl(new MemoryMapArchiveImpl("test.jar"));
      
      JavaArchive javaVfsArchive = MagicArchiveConverter.asVfsArchive(javaMemArchive);
       ....
      


      2. Change the container archive but maintain storage archive (not sure how often this will occur):

      JavaArchive javaMemArchive = new JavaArchiveImpl(new MemoryMapArchiveImpl("test.jar"));
      
      WebArchive warMemArchive = MagicArchiveConverter.asWebArchive(javaMemArchive);
      ...
      


      3. Change the container archive and storage archive:
      JavaArchive javaMemArchive = new JavaArchiveImpl(new MemoryMapArchiveImpl("test.jar"));
      
      WebArchive warMemArchive = MagicArchiveConverter.asWebArchive(javaMemArchive);
      
      WebArchive waVfsArchive = MagicArchiveConverter.asVfsArchive(warMemArchive);
      ...
      


      Am I missing any use cases?

      The third is not really a separate use case, as it is just 1 and 2 applied in sequence, but it would be nice to do both in one call.

      In previous discussions we talked about a builder-style API with nice "asVfs"-like methods. An issue I see with the builder-style API is that extension becomes a problem: as new impls are added, we would need to update the base. Should we be looking at more of a factory to support extension? We would most likely lose some of the niceness of the API, but it would be more flexible.
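      To make the factory direction concrete, here is a rough sketch (all names hypothetical, simplified stand-ins for the real types) of how a registry-based factory could stay extensible without touching the base API as new impls show up:

      ```java
      import java.util.HashMap;
      import java.util.Map;
      import java.util.function.Function;

      // Hypothetical sketch: new archive types register a conversion function,
      // so adding an impl never requires changing the base converter API.
      public class ConverterRegistry
      {
         // Minimal stand-in for the archive types discussed above
         public interface Archive { String getName(); }

         public static class VfsArchive implements Archive
         {
            private final String name;
            public VfsArchive(String name) { this.name = name; }
            public String getName() { return name; }
         }

         private static final Map<Class<?>, Function<Archive, ? extends Archive>> converters = new HashMap<>();

         // Extension point: impls register themselves instead of patching the base
         public static <T extends Archive> void register(Class<T> target, Function<Archive, T> converter)
         {
            converters.put(target, converter);
         }

         public static <T extends Archive> T convert(Archive source, Class<T> target)
         {
            Function<Archive, ? extends Archive> converter = converters.get(target);
            if (converter == null)
            {
               throw new IllegalArgumentException("No converter registered for " + target.getName());
            }
            return target.cast(converter.apply(source));
         }

         // Tiny self-check: convert an anonymous source into a VfsArchive
         public static String demo()
         {
            register(VfsArchive.class, source -> new VfsArchive(source.getName()));
            Archive source = () -> "test.jar";
            return convert(source, VfsArchive.class).getName();
         }
      }
      ```

      The trade-off is visible at the call site: convert(archive, VfsArchive.class) instead of a fluent asVfsArchive(), which is exactly the niceness we would be giving up.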

      Any thoughts?


        • 1. Re: Archive Conversion Design Discussion
          johnbailey

          To start with, I will assume converting between storage types is the primary use case. I have started to prototype some archive conversion strategies for storage archives. Eventually we will need to allow converting the storage type in an existing container archive, but I am starting with just the conversion of storage archives, as it is a prerequisite.

          Below is an example syntax I have been prototyping for very basic conversion:

          MemoryMapArchive memoryMapArchive = new MemoryMapArchiveImpl("testArchive.jar");
          
          VfsArchive newArchive = ArchiveConverter.convert(memoryMapArchive).to(VfsArchive.class);
          
          


          The above example assumes there is a known implementation type for the VfsArchive, and that it can be constructed using only the archive name. This relies on a very basic ArchiveFactory I have created to construct Archive implementations. Once constructed, all the content will be copied from the source archive to the newly created one.
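          For illustration only, a factory along those lines might look like the sketch below (the types here are simplified stand-ins, not the actual prototype):

          ```java
          import java.util.HashMap;
          import java.util.Map;

          // Hypothetical sketch of a very basic ArchiveFactory: map an archive type
          // to a known implementation class and construct it via a (String name)
          // constructor.
          public class ArchiveFactory
          {
             public interface Archive { String getName(); }

             // Example impl exposing the name-only constructor the factory relies on
             public static class MemoryMapArchive implements Archive
             {
                private final String name;
                public MemoryMapArchive(String name) { this.name = name; }
                public String getName() { return name; }
             }

             private static final Map<Class<?>, Class<?>> impls = new HashMap<>();
             static
             {
                impls.put(Archive.class, MemoryMapArchive.class);
             }

             public static <T extends Archive> T create(Class<T> type, String name)
             {
                Class<?> impl = impls.get(type);
                if (impl == null)
                {
                   throw new IllegalArgumentException("No known implementation for " + type);
                }
                try
                {
                   // Only works for impls constructible from the archive name alone
                   return type.cast(impl.getConstructor(String.class).newInstance(name));
                }
                catch (ReflectiveOperationException e)
                {
                   throw new RuntimeException("Could not construct " + impl.getName(), e);
                }
             }
          }
          ```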

          There will be Archive implementations that cannot be constructed with only the name. For these cases there is an alternate conversion syntax that allows for custom conversion strategies.

          Below is an example syntax for custom conversion:

          
          MemoryMapArchive memoryMapArchive = new MemoryMapArchiveImpl("testArchive.jar");
          
          VfsArchiveConverter converter = new VfsConverter();
          VfsConverterMetadata metadata = new VfsConverterMetadata(root);
          
          VfsArchive newArchive = ArchiveConverter.convert(memoryMapArchive).using(converter, metadata);
          
          // or ....
          
          VfsArchive newArchive = ArchiveConverter.convert(memoryMapArchive).using(new VfsConverter(root));
          
          // or ....
          
          VfsArchive newArchive = ArchiveConverter.convert(memoryMapArchive).using(new VfsConverter());
          
          
          


          This example could then allow the conversion of any archive type using whatever strategy necessary to make the conversion.

          This is not the slickest syntax ever, but something similar could be used to get an easily extensible conversion system in place. Then alternate conversion impls could be created to provide even nicer APIs/SPIs.
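          As a sketch of how the fluent entry point could hang together (hypothetical names, simplified types), convert(...) would just return a stage object exposing the using(...) strategy hook:

          ```java
          // Hypothetical sketch of the fluent conversion entry point described above.
          public class ArchiveConverter
          {
             public interface Archive { String getName(); }

             // A conversion strategy from any source archive to a target archive type
             public interface Converter<T extends Archive> { T convert(Archive source); }

             public static Stage convert(Archive source)
             {
                return new Stage(source);
             }

             public static class Stage
             {
                private final Archive source;
                Stage(Archive source) { this.source = source; }

                // Custom-strategy style: delegate entirely to the supplied converter;
                // a to(Class) variant could look up a default converter the same way
                public <T extends Archive> T using(Converter<T> converter)
                {
                   return converter.convert(source);
                }
             }

             // Tiny self-check with a converter that just prefixes the name
             public static String demo()
             {
                Archive source = () -> "test.jar";
                Archive converted = convert(source).using(s -> (Archive) () -> "vfs:" + s.getName());
                return converted.getName();
             }
          }
          ```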

          Below is where I could see it going:

          MemoryMapArchive memoryMapArchive = new MemoryMapArchiveImpl("testArchive.jar");
          
          VfsArchive newArchive = VfsArchiveConverter.convert(memoryMapArchive).toVfs(root);
          
          


          Which internally could make the call above with a custom converter and metadata.

          Anyway, just throwing this out to get some thoughts on the syntax and overall design. I am not stuck on it; it is just the first design I came up with.

          What do you think?

          • 2. Re: Archive Conversion Design Discussion
            alrubinger

             

            "johnbailey" wrote:
            1. Change the internal storage archive of an existing container archive (this seems like the primary use case):


            I think this is the only case we need to be concerned with at this point. If even that.

            I hate to throw a wrench in this system, but I've recently had a moment of clarity and discussed a bit on IRC with Aslak earlier.

            The thing is that storage engines aren't as interesting to the user as export format options.

            For instance:

            JavaArchive archive = ArchiveFactory.createJavaArchive("name.jar")


            The above creates a Java container view, backed by a default storage archive. The storage impl isn't important; it could be MemoryMap for the time being. What's nice about this is that the user doesn't care about the storage engine and just gets the view they want.

            But what *is* important is an "export" option. After the user is done mucking around and adding resources, classes, and the like, he'll want to deploy it.

            At this point the deployment tools can:

            InputStream in = ZipExporter.exportZip(archive);


            ...obtaining the InputStream of a real JAR. In the case of Embedded, this gets passed into deployment and AS never knows that an archive was used at all. Or a user can serialize the thing to disk and make an actual JAR file. Alternatively:

            File file = ExplodedExporter.exportExploded(archive, parent);


            ...to obtain the root of an exploded representation.

            So the shift in approach I'm proposing is that storage engines aren't as important as how archives are exported. We can process the add(), contains(), delete(), etc. operations however is most efficient, and only when the user is done does he want some real view of it.
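            For instance, treating an archive's content as a simple path-to-bytes map (an assumption for the sketch, not the real Archive API), the ZIP export side is straightforward with java.util.zip:

            ```java
            import java.io.ByteArrayInputStream;
            import java.io.ByteArrayOutputStream;
            import java.io.IOException;
            import java.io.InputStream;
            import java.util.Map;
            import java.util.zip.ZipEntry;
            import java.util.zip.ZipOutputStream;

            // Hypothetical sketch: however the storage engine holds content
            // internally, a real ZIP only comes into existence at export time.
            public class ZipExportSketch
            {
               public static InputStream exportZip(Map<String, byte[]> content) throws IOException
               {
                  ByteArrayOutputStream bytes = new ByteArrayOutputStream();
                  try (ZipOutputStream zip = new ZipOutputStream(bytes))
                  {
                     for (Map.Entry<String, byte[]> entry : content.entrySet())
                     {
                        // One ZIP entry per archive path
                        zip.putNextEntry(new ZipEntry(entry.getKey()));
                        zip.write(entry.getValue());
                        zip.closeEntry();
                     }
                  }
                  return new ByteArrayInputStream(bytes.toByteArray());
               }
            }
            ```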

            WDYT?

            S,
            ALR

            • 3. Re: Archive Conversion Design Discussion
              johnbailey

              I agree. I have taken some of the prototyping and implemented a ZipExporter as described. The example below demonstrates the usage.

              @Test
              public void testExportZip() throws Exception
              {
                 // Get an archive instance
                 MemoryMapArchive archive = new MemoryMapArchiveImpl("testArchive.jar");

                 // Add some content
                 Asset assetOne = new ClassLoaderAsset("org/jboss/declarchive/impl/base/asset/Test.properties");
                 Path pathOne = new BasicPath("test.properties");
                 archive.add(pathOne, assetOne);
                 Asset assetTwo = new ClassLoaderAsset("org/jboss/declarchive/impl/base/asset/Test2.properties");
                 Path pathTwo = new BasicPath("nested", "test2.properties");
                 archive.add(pathTwo, assetTwo);

                 // Export as Zip InputStream
                 InputStream zipStream = ZipExporter.exportZip(archive);

                 // Validate the InputStream was created
                 Assert.assertNotNull(zipStream);

                 // Create a temp file
                 File outFile = File.createTempFile("test", ".zip");

                 // Write Zip contents to file
                 writeOutFile(outFile, zipStream);

                 // Use standard ZipFile library to read in written Zip file
                 ZipFile expectedZip = new ZipFile(outFile);

                 // Validate the entries
                 assertAssetInZip(expectedZip, pathOne, assetOne);
                 assertAssetInZip(expectedZip, pathTwo, assetTwo);
              }
              


              I will continue to work on the ExplodedExporter as well.

              • 4. Re: Archive Conversion Design Discussion
                alrubinger

                 

                "johnbailey" wrote:
                The example below demonstrates the usage.


                Beautiful.

                "johnbailey" wrote:
                I will continue to work on the ExplodedExporter as well.


                Cool, thanks.

                BTW, the API for these should be in "api", the impls in "impl-base". API can be a factory that returns an exporter instance by creating one via reflection. Therefore we can lock down the API without a dependency upon impl-base.

                S,
                ALR

                • 5. Re: Archive Conversion Design Discussion
                  johnbailey

                   


                  "alrubinger" wrote:
                  BTW, the API for these should be in "api", the impls in "impl-base". API can be a factory that returns an exporter instance by creating one via reflection. Therefore we can lock down the API without a dependency upon impl-base.


                  That is how I have it. API just loads a default instance based on a FQN class name.
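                  For anyone following along, the FQN-based loading amounts to something like this sketch (class names hypothetical; in the real split the default impl would live in impl-base, not nested in the factory):

                  ```java
                  // Hypothetical sketch: the api module instantiates an exporter by fully
                  // qualified class name, so it never compiles against impl-base directly.
                  public class ExporterFactory
                  {
                     public interface Exporter { String describe(); }

                     // In the real split this FQN would name a class in impl-base; a nested
                     // class is used here only so the sketch is self-contained.
                     private static final String DEFAULT_IMPL = ExporterFactory.class.getName() + "$DefaultExporter";

                     public static Exporter createExporter()
                     {
                        try
                        {
                           return (Exporter) Class.forName(DEFAULT_IMPL).getDeclaredConstructor().newInstance();
                        }
                        catch (ReflectiveOperationException e)
                        {
                           throw new IllegalStateException("Could not load exporter impl: " + DEFAULT_IMPL, e);
                        }
                     }

                     public static class DefaultExporter implements Exporter
                     {
                        public String describe() { return "default"; }
                     }
                  }
                  ```

                  Because the lookup goes through Class.forName, the api module needs only the interface at compile time; the impl is a runtime dependency.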