For as long as I can recall I've liked good tools and been infuriated by those that get in the way of me being "creative" or working "efficiently". (Subjective terms, I know.) Whether it was the MetaComCo C/Pascal compiler tools for my Atari all those years back (great customization capabilities) or plain emacs (yes, it's a tool!), I've admired the groups who can make good tools. Over the years I've met many tooling groups from HP, Bluestone, BEA, Microsoft and of course JBoss/Red Hat. Some of the more successful groups have been staffed by a combination of good engineers and people with strong HCI skills (including psychology backgrounds). But they've all usually had the same comment to make about how tooling is seen by developers: under-appreciated and an afterthought. That's a shame because, as far as I'm concerned, a good tooling experience can bring a better ROI than adding some super-cool feature here or there.


I think the key to good tooling is involving the developers of the product for which tooling is being developed as early as possible. Where I've seen this work well is when there's actually a tooling person sitting in that team full time, learning about the requirements and acting as a conduit to impart that knowledge to the rest of the tooling team. Doing it any other way often takes longer (inefficient?) and may result in something that isn't quite what's required. Despite the fact that they're all engineers, there is often an impedance mismatch between tooling engineers and project engineers; almost a language barrier. But for good tooling to work well, the conversations need to be bi-directional. This is another reason why the approach of having a tooling person sitting in the project works well, as it provides immediacy of responses.


Max, our tooling lead, is keen to say that good tools shouldn't be used to cover up poor underlying projects. He's right! I've seen that happen a lot across the industry: the tools look fantastic but there's very little under the covers, or what is there is horribly baroque and hard to understand. Designing for tooling from the outset should be considered in the same way as designing for robustness, performance or security. It's not something that is easy to retrofit.


Good tools (and yes, I count JBDS in that list) also grow with you. Too often I've seen tools that are either way too complex to use for beginners or are so basic as to encourage you to grow out of them pretty quickly and look for something else. (There's a reason I've been using shell and emacs for 20 years.) And of course in this world of ever changing runtimes, you really want a tool suite (or IDE in this case) that can work with more than one at a time: I hate having to fire up different IDEs for different versions of the same product, especially when there may only be a few months age difference between the runtime versions.


Fortunately we have some great people here who are passionate about tooling and understand its importance in making the whole product experience work well. That doesn't mean we've got to that nirvana just yet, but we are on the right path. We need to work more closely with the projects and vice versa in order to push this mantra of thinking about tooling through all phases of the project lifecycle and not just after the fact. The improvements we've made over the past couple of years are pretty significant and there's much more to come. I'm excited and maybe this will finally encourage me to move away from emacs ;-)


BTW, thanks to Max for the title of this entry!


You've probably all seen or heard of Transformers ("Transformers ... robots in disguise.") The gist is that these robots are flexible enough to be reconfigured into a variety of different forms depending upon the need at hand. Pretty important if you need to battle enemies from the stars and then make your way silently through the streets disguised as a Bugatti Veyron. But what, you ask, has this to do with JBoss? Well, we've been working on our own adaptable infrastructure for a few years; not so we can fight Decepticons, but so that we can offer a way for the same software components to be used in a variety of different environments without requiring major rewrites, different implementations, recompilations or several months of on-site consultants. We also want to support a range of different frameworks or component models, such as SCA, Ruby and OSGi.


So how have we been able to accomplish this? With the JBoss Microcontainer. It has been in development for several years and is an evolution of the original JMX Micro-kernel. The basic concept is pretty simple: you can define your core services/components and their interdependencies, whatever their flavour (e.g., POJOs, MBeans), dynamically and potentially on-the-fly. What was a full-featured JEE application server one minute could be a scaled-down embedded ESB the next. What was a basic Web server yesterday could seamlessly acquire transactions and security tomorrow.
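To make the idea concrete, here is a minimal sketch of that style of wiring: named components declared with their dependencies, installed on demand in dependency order. The class and method names (`MiniKernel`, `describe`, `install`) are illustrative only, not the actual JBoss Microcontainer API.

```java
// Hypothetical microcontainer-style kernel: names and shapes are
// illustrative, not the real JBoss Microcontainer API.
import java.util.*;
import java.util.function.Function;

public class MiniKernel {
    private final Map<String, Object> instances = new LinkedHashMap<>();
    private final Map<String, List<String>> dependsOn = new HashMap<>();
    private final Map<String, Function<Map<String, Object>, Object>> factories = new HashMap<>();

    // Declare a component by name, its dependencies, and a factory that
    // receives the already-installed dependency instances.
    public void describe(String name, List<String> deps,
                         Function<Map<String, Object>, Object> factory) {
        dependsOn.put(name, deps);
        factories.put(name, factory);
    }

    // Install a component, recursively installing its dependencies first.
    public Object install(String name) {
        if (instances.containsKey(name)) return instances.get(name);
        Map<String, Object> resolved = new HashMap<>();
        for (String dep : dependsOn.getOrDefault(name, List.of())) {
            resolved.put(dep, install(dep));
        }
        Object instance = factories.get(name).apply(resolved);
        instances.put(name, instance);
        return instance;
    }

    public static void main(String[] args) {
        MiniKernel kernel = new MiniKernel();
        // A basic web server that acquires a transaction service on demand.
        kernel.describe("transactions", List.of(), deps -> "tx-manager");
        kernel.describe("web", List.of("transactions"),
                deps -> "web-server-using-" + deps.get("transactions"));
        System.out.println(kernel.install("web")); // web-server-using-tx-manager
    }
}
```

Because components are described rather than hard-wired, the same kernel can assemble a large or a small runtime from the same parts, which is the "JEE server one minute, embedded ESB the next" point above.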


The aim here is clear: to allow existing investments in components that have proven their maturity over the years to be used in both lightweight and heavyweight environments. Other approaches to solving this problem typically revolve around completely different technology stacks, requiring different expertise, learning curves, support contracts, etc. And that kind of solution does not evolve with your changing requirements (at least not without going back to the vendor to arrange delivery of the new product, learning it, training, etc.).


But what about other deployment models, such as OSGi and Spring? Although JBoss is popular, there are people who need to use these alternative frameworks/component models. In the past that meant embracing an entire framework for everything, on the assumption that the choice you make today is the right choice for tomorrow. Unfortunately frameworks come and go, and requirements change, so an investment in something today is not necessarily the right approach for the future. But in that case, what do you do when you're left with an OSGi bundle and you don't want to stay with OSGi, for example? Fortunately the JBoss Microcontainer offers a possible solution there too: by supporting a flexible state-machine model of components, it can handle native component-model deployments as well as foreign component models on the same codebase, and track dependencies across those component models.
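The state-machine idea can be sketched very simply: each component, whatever its origin, moves through the same lifecycle states, and may only advance when its dependencies are at least as far along. The state names and classes here are illustrative assumptions, not the real Microcontainer lifecycle.

```java
// Hypothetical component lifecycle state machine in the spirit described
// above; states and class names are illustrative only.
import java.util.*;

public class LifecycleDemo {
    enum State { DESCRIBED, INSTANTIATED, INSTALLED }

    static class Component {
        final String name;
        final List<Component> deps;
        State state = State.DESCRIBED;

        Component(String name, Component... deps) {
            this.name = name;
            this.deps = List.of(deps);
        }

        // Advance one state, but only if every dependency has reached
        // at least the target state; otherwise stay put.
        boolean advance() {
            State next = switch (state) {
                case DESCRIBED -> State.INSTANTIATED;
                case INSTANTIATED, INSTALLED -> State.INSTALLED;
            };
            for (Component d : deps) {
                if (d.state.ordinal() < next.ordinal()) return false; // blocked
            }
            state = next;
            return true;
        }
    }

    public static void main(String[] args) {
        Component tx = new Component("transactions");
        // Could be an OSGi bundle, a POJO, an MBean: same machine either way.
        Component esb = new Component("esb", tx);
        System.out.println(esb.advance());   // false: transactions still DESCRIBED
        tx.advance(); tx.advance();          // transactions -> INSTALLED
        System.out.println(esb.advance() && esb.advance()); // true: esb -> INSTALLED
    }
}
```

The point is that a foreign component model only needs to be mapped onto these states for its dependencies to be tracked alongside native deployments.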


The architecture of the Microcontainer has evolved over the past few years, so even if you looked at it a while ago it's worth looking again. For example, we've added flexibility to the deployment model such that we now support AOP-like manipulation of a metadata pipeline down to the final component deployer. There's also a Virtual File System for deployments, which is a major improvement over the past. Finally, it's now possible to declare that any implementors of an interface should be "injected/un-injected" via specified methods, which allows containers to define plugin interfaces and have plugin implementors associated with the container automatically as the plugins are instantiated. These examples and others show how much thought and effort has gone into this new architecture so that we can deliver on the promise of flexibility and adaptability for user requirements now and in the future. We spent a lot of time doing this so that we could do it right once and for all: the future is bright for JBoss and its users, because we know we won't have to re-architect again in a year's time when another deployment environment comes along, or when some subtle difference in needs forces a rethink of "fat" versus "thin" deployments or "rich" versus "poor". You can safely deploy to the new Microcontainer in the knowledge that it's future-proofing you.
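The "inject implementors of an interface via specified methods" feature can be illustrated with a small sketch: the container notices when an instantiated component implements a plugin interface and pushes it through whatever injection methods were registered. All names here (`Container`, `onPlugin`, `instantiated`) are hypothetical, not the real API.

```java
// Hypothetical sketch of interface-implementor injection; names are
// illustrative only, not the JBoss Microcontainer API.
import java.util.*;
import java.util.function.Consumer;

public class PluginDemo {
    interface Plugin { String id(); }

    static class Container {
        private final List<Consumer<Plugin>> injectors = new ArrayList<>();
        private final List<Plugin> plugins = new ArrayList<>();

        // Register the "injection method" that plugin implementors
        // should be passed through.
        void onPlugin(Consumer<Plugin> injector) {
            injectors.add(injector);
            plugins.forEach(injector); // inject plugins that already exist
        }

        // Called whenever any component is instantiated; implementors of
        // the plugin interface are routed to the registered injectors.
        void instantiated(Object component) {
            if (component instanceof Plugin p) {
                plugins.add(p);
                injectors.forEach(i -> i.accept(p));
            }
        }
    }

    public static void main(String[] args) {
        Container c = new Container();
        List<String> seen = new ArrayList<>();
        c.onPlugin(p -> seen.add(p.id()));
        c.instantiated((Plugin) () -> "metrics"); // a plugin: injected
        c.instantiated("just-a-string");          // not a plugin: ignored
        System.out.println(seen);                 // [metrics]
    }
}
```

The attraction of this pattern is that the container never needs to know concrete plugin classes in advance; declaring the interface and the injection method is enough.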


As an industry, one thing we often fail to remember is that standards come and go, but core requirements remain. If you look at the evolution of distributed systems over the past four decades, for example, you'll see the transition through DCE, CORBA, JEE, Web Services, etc. These all define their own component model(s), APIs, development methodologies and so on. Yet at the heart of them all, critical services such as transactions, messaging and security remain the same. The only thing that changes is the way in which they are wrapped into the infrastructure. Well, that's something we've tried to embrace with the new Microcontainer: leveraging our tried and tested components and providing a way to use them in environments/standards past, present and future.
