
Mark Little's Blog


So I mentioned in a separate entry that I have problems with the term "web-scale". I get what the comment author is saying in that entry, and I agree with several of the sentiments expressed, but it did raise a few interesting thoughts in my head. Some of them I thought I'd addressed over the years, but perhaps it is time to revisit. Or simply reference again in case your favourite search engine isn't working.

 

I've definitely addressed the issue of monolithic application servers in the past, whether through articles like this one or presentations. Monolithic application servers do exist, but there are application servers (Java and Java EE based) that simply don't match those characteristics. Whilst an application server may not be right for your use cases, it's far better to rule it out based upon objective analysis than on some (almost) subjective definition. I said a few months back that containers aren't all bad! Even as early as last year there were some good discussions about monoliths, microservices and balls of mud that would be well worth spending a few minutes reading and thinking about, because a knee-jerk reaction to a so-called monolithic architecture isn't necessarily going to give you the benefits you're after.

 

It's also interesting to think that the move we're seeing in the industry towards separate services is very much in line with what we had back in the CORBA days (something I've repeated a few times). No, I'm not suggesting CORBA got it all right! Something else that was very much at the heart of CORBA was asynchronous invocations. This wasn't just something that was an add-on/bolt-on to synchronous APIs. Now in the transition from CORBA to Java EE we lost some of this work in favour of the more traditional (synchronous) RPC approach. If you've done your homework (or were around back then) you'll understand why RPC was (and still is) used so much in distributed systems: most of the languages we use(d) are procedural and it's natural to want to extend that paradigm to distributed programming and make remote invocations opaque. Now of course there are well-known issues with this approach, but that's not really the subject of this article.

 

The point I'm trying to make is that asynchronous programming isn't the domain of new languages and frameworks. Whilst I do agree that many of the APIs we have grown used to today are synchronous in nature and retro-fitting asynchronous approaches on top would be hard or impossible, it's worth remembering that there's often a good separation between interface and implementation in many mature implementations. For instance, some component implementations within various application servers date back to the CORBA days (or even further), and may have even been designed with asynchrony in mind. So whilst the APIs of some Java EE components may not easily be used asynchronously, it's worth remembering that there is a large body of research and development which will help that rapidly evolving front of our industry build on much of what is there today, assuming of course that some implementations aren't there already. As I've said many times before, I hate reinventing the wheel.
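To make the interface/implementation point concrete, here is a minimal sketch in plain Java of surfacing an existing synchronous interface through an asynchronous facade. The QuoteService name and its implementation are invented for illustration; the point is only that the same implementation can back both styles of API:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// A hypothetical synchronous interface of the kind many mature
// components expose: the caller blocks until the result arrives.
interface QuoteService {
    String quoteFor(String symbol);
}

public class AsyncFacade {
    private final QuoteService delegate;
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    AsyncFacade(QuoteService delegate) {
        this.delegate = delegate;
    }

    // The asynchronous view: the unchanged synchronous implementation,
    // surfaced as a CompletableFuture so callers can compose rather than block.
    CompletableFuture<String> quoteForAsync(String symbol) {
        return CompletableFuture.supplyAsync(() -> delegate.quoteFor(symbol), pool);
    }

    void shutdown() {
        pool.shutdown();
    }

    public static void main(String[] args) throws Exception {
        AsyncFacade facade = new AsyncFacade(symbol -> symbol + ": 42.0");
        System.out.println(facade.quoteForAsync("RHT").get()); // blocking only for the demo
        facade.shutdown();
    }
}
```

The facade changes nothing in the implementation; it only stops the caller's thread from being the one that waits, which is exactly why a good interface/implementation split makes retro-fitting asynchrony tractable.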

Mark Little

Web-scale: a complaint

Posted by Mark Little May 10, 2015

An interesting comment to an entry I made the other day got me thinking about "web-scale". It's a term I may have used. I know others have too. Yet WTF does it really mean?

Mark Little

Microservices and events

Posted by Mark Little Apr 25, 2015

OK it's Saturday night (my time) and I've spent the last week travelling across the Atlantic for meetings so have had a bit of time to think on a few things. One of them is how reactive platforms and microservices fit together. I took the time to write down some of what I've been thinking. I think frameworks such as Vert.x and Node.js have a lot to offer microservices and other use cases, but as Robert Frost once said we still have "miles to go before I sleep". That is good because it means there's a lot of exciting research and development still to be done, building on the work of others such as Fischer, Lynch and Paterson.

I've had a few conversations with different people recently on the subject of microservices and (distributed) transactions. One of my biggest worries about microservices in general is that we end up repeating the same conversations and disagreements that we had with SOA, REST and other related technologies. And one of the longest running debates we had during those specific waves was around transactions and their applicability, or not. I continue to believe that transactions are useful for some SOA-based services, though that doesn't mean they have to be atomic transactions. Therefore, I feel that they also have a role to play for microservices and took the opportunity to write down some of my thoughts on the topic.
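One non-atomic alternative worth spelling out is compensation: instead of holding locks until a global commit, each completed step registers an undo action, and a later failure runs the undos in reverse. This is a minimal sketch with invented names, not any product's API:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of a compensation-based (non-atomic) transaction: each completed
// step of work registers a compensating action, and cancelling the whole
// unit of work runs those compensations in reverse (LIFO) order.
public class CompensatingWork {
    private final Deque<Runnable> compensations = new ArrayDeque<>();

    // Run a step and remember how to undo it if the overall work fails.
    void perform(Runnable work, Runnable compensation) {
        work.run();
        compensations.push(compensation);
    }

    // On failure, undo all completed steps, most recent first.
    void cancel() {
        while (!compensations.isEmpty()) {
            compensations.pop().run();
        }
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        CompensatingWork tx = new CompensatingWork();
        tx.perform(() -> log.append("book-hotel;"), () -> log.append("cancel-hotel;"));
        tx.perform(() -> log.append("book-flight;"), () -> log.append("cancel-flight;"));
        tx.cancel(); // e.g. payment failed, so undo in reverse order
        System.out.println(log); // book-hotel;book-flight;cancel-flight;cancel-hotel;
    }
}
```

Intermediate states are visible to other parties before the work completes, which is precisely the trade-off that makes this style suit loosely coupled services better than atomic transactions do.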

In one of my previous entries I may have inadvertently given the impression that in order to do microservices you need Linux containers, such as Docker. I want to clarify that, so have written a new article which hopefully explains where those containers matter, especially in the Java world where we already have a container in the form of the JVM.

After writing my previous entry on how a container image, such as Docker, makes a natural unit of failure for an individual microservice or groups of related services, I wanted to follow up on one of the things I didn't get into detail on: state. Where does the state of the microservice reside? Is it within the container image, which may seem intuitively correct, or is it outside of the image? I've been thinking about state and replication for many years (well over 20) and was revisiting one of my earlier papers recently and got to thinking that it was relevant to microservices, or at least containers (since of course a container doesn't need to be tied to only being useful for microservices!) With that in mind, I found a few hours today to write up some thoughts on state and containers in general, but specifically microservices. Hopefully it makes some sense.

Another mini-vacation so another few hours to sit and write down some things that have been rumbling around in my head for a while. This time it's about Docker (container) images, microservices and how they can be combined to provide a natural unit of failure when building multi-service applications or composite services. Check it out on my personal blog, but hopefully the conclusion should be obvious:

 

"If you are building multiple microservices, or using them from other groups and organisations, within your applications or composite service(s), then do some thinking about how they are related and if they should fail as a unit then pull them together into a single image."

It's a Saturday evening and I've finally had time to put down further thoughts on container-less development. OK, ignore the fact that I'm at home on a Saturday night instead of out "on the town" - but work is still my hobby (or is it vice versa?) and I'm enjoying it!

Mark Little

Microservices and history

Posted by Mark Little Feb 21, 2015

I've finally made time to write down some of my thoughts on microservices and how they should relate to SOA and other approaches which have gone before. I've cross-linked the article here for those who may be interested. As I mentioned in the InfoQ interview, I think some of the recent advances in development tools, such as docker-based containers, should make service development easier. However, if we'd had them 10 years ago they'd have fallen under the banner of SOA and I don't think we'd have coined a new term to describe a specific implementation. Whether it's SOA, microservices or some other term, I do think that in the last year or so there have been some interesting developments around frameworks, languages, containers etc. that do make a difference for distributed service-based applications. We just need to join them together with all of the good things we did over the past decade or so and called SOA.

I've been thinking about approaches to container-less development and deployment for a while. However, it was whilst I was at DevConf.cz the other day that I decided to write down some of my thoughts. Well that and the interview I did with Markus for my keynote at JavaLand in a few weeks. I wanted to write something that was objective though and without pushing a particular implementation agenda - often hard given what I do as my day job. However, that's the reason I wrote it over on my personal blog and am cross-linking it here. Obviously I do think that the work we've been doing with AS7/WildFly/EAP is one of the main catalysts for improving the whole dev-to-deploy experience when people work with a Java EE container, with projects such as Forge and Arquillian as critical pieces in that puzzle. There's more to be done and I'm excited about some of the improvements the teams have planned. And I also think that new approaches such as Vert.x offer a view of where things are going and can still benefit from experiences and components elsewhere.

Mark Little

The Red Hat Platform

Posted by Mark Little Dec 15, 2014

As the saying goes ... a long time ago in a galaxy far, far away ... Red Hat was created around Linux and did a great job of working with communities and customers to provide the best enterprise-grade Linux platform around, and it continues to do so to this day. However, over the years Red Hat's aspirations and reach have grown to include virtualisation, storage, cloud, middleware and mobile, to name but five new pieces. Whilst the operating system is critical for running applications or services and Linux has all the hooks you'd need to build a huge range of them, they're mostly too low level for many developers. Hence the need for these advanced capabilities, so you don't have to build them yourself - someone else has done the hard work for you. Additionally, at least where enterprise software is concerned, those companies and groups will have ironed out many of the bugs and security flaws. Following on from this logic, whilst we've implemented a number of critical technologies "in house", Red Hat has also acquired them - why build when we can buy? JBoss, FeedHenry, eNovance, Makara, Inktank ... We've done a pretty good job of bringing in key software components and adding them to our evolving stack.

 

I can't think of another pure open source company that has such a deep and broad stack as Red Hat. And this is important for the evolving world we live in today. Whether it's running critical back-end services on RHEL, Java and Java EE with EAP, integration via Fuse, both on or off OpenShift, or even addressing Internet of Things with something like MQTT (A-MQ) and Vert.x, we've got so many of the software tools that a huge range of developers need these days. And these aren't disparate capabilities that don't know about each other - we're continually working hard to make everything we do work well together. In essence, the Red Hat Platform is all of our deep, broad stack integrated and operating together. Of course pieces of this Platform can be used independently of each other, but that's just another compelling reason for developers to consider using it - you're not locked into having to use all or nothing.

Mark Little

Open source trends talk

Posted by Mark Little Nov 9, 2014

A few weeks ago I had the privilege of giving the keynote at OpenSlava 2014. Rather than talk about JBoss or Red Hat technologies/projects I decided to try something new and talk about how open source has impacted our lives over the past few decades. Most of us take this stuff for granted and I know that's definitely the case for me: having lived through all of it and helped participate in some of it, I found pulling together the presentation both enjoyable and a useful reminder. I hope to get the opportunity to give the presentation again as I've already thought of a few other things I'd like to add to it and also get more audience participation. Hopefully those people at OpenSlava got as much out of it as I did writing it.

 

The video of the presentation is available online and linked earlier. The actual presentation is also available. However, for those who can't watch the video I thought I'd include a few of the slides. We all know about Linux and the huge impact it has had on the software industry since it started over twenty years ago:

 

[Slide: Linux and its impact on the software industry]

Not only was it used within the PS3, but it's also the basis of Android, a fact many of those phone users don't know. Then of course we have to remember the role open source played in creating the Web:

 

[Slide: open source and the creation of the Web]

And for those in the audience who didn't know, the gopher is from Caddyshack! Of course we talk about the Web and implicitly believe we all understand what "it" means and has become. I thought this image of the connectivity map for the Web helped bring it, and hence the impact open source has had on our lives, to life:

 

[Slide: connectivity map of the Web]

Of course Java, though not open source originally, has had its own significant impact on the software industry. I like to think that JBoss/Red Hat have played an important role in that journey too. But as this slide shows, it's also had an impact outside of our industry, including an important one for many kids: Minecraft. I've seen children teach themselves all about Java, jars, manifest files etc. just so they can create mods for Minecraft and exchange them with their friends! I'm not sure any other language could have had quite that impact without open source!

 

[Slide: Java's impact beyond the software industry, including Minecraft]

I've already touched on how open source is helping to drive mobile, and in the presentation I also mentioned cloud. Of course there's also Big Data and NoSQL, which are most definitely being driven by open source first and foremost. Where open source was once the follower, it is now the leader.

 

[Slide: open source driving mobile, cloud, Big Data and NoSQL]

As I concluded in the presentation, open source is no longer a second-class citizen, the domain of some developers working "on the edge". It's often the first line of attack to a problem and has become accepted by many users and developers. Where we take it now is no longer constrained by hurdles such as persuading people open source is a viable alternative (often there is no alternative). Now the limitations are our imagination and determination to make open source solutions work!

 

[Slide: open source as a first-class citizen]

I'll leave it as an exercise to the interested reader, but I've written many times about the capabilities that we have in the JBoss middleware products and projects. Many of them have been developed over several years, some, such as Narayana/JBoss Transactions, since before Java was even called Java! Our customers regularly deploy the software we have developed in conjunction with our communities into mission-critical environments.

 

[Slide: JBoss middleware capabilities]

Obviously given our heritage we've focused on what this all means for Java and Java EE. But our various polyglot efforts, around JRuby, Clojure, Ceylon and other languages have shown clearly that the capabilities can be made available to a wider variety of languages than just Java. But obviously not all languages people are interested in run on the JVM and even those that do, such as JavaScript, have a massive non-JVM following. I've said time and again that our industry can't afford to re-invent wheels, especially critical wheels such as transactions, security and high performance messaging. In most cases it's a waste of time or brings dubious value. Making these capabilities available to other languages even outside of the JVM has to be a serious consideration before any wheel reinvention takes place! But how do we do this?

 

In the "good old days" before J2EE we had CORBA, where everything was a service. With J2EE and threads within the Java language, people took co-location of capabilities as the default - remote invocations do add some overhead after all (give them a try!) Recently we've had SOA, where business components can be services. Along comes cloud and Software-as-a-Service takes off - the clue's in the name! Services offer one such possibility, and with the increasing adoption of REST (typically over HTTP) this is an approach with which many users and developers are comfortable. Exposing business components as services is just one step. Taking core capabilities and exposing them as services (just as CORBA did) is the obvious next step and something others have done, especially in the cloud arena. About 5 years ago I gave a presentation about this, a slide from which is included, with circles representing core Java EE capabilities and rectangles as containers, JVMs or VMs:

 

[Slide: core Java EE capabilities (circles) within containers, JVMs or VMs (rectangles)]
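The "core capability exposed as a service" idea can be sketched in a few lines of plain Java using the JDK's built-in HTTP server. The capability here (a toy transaction-id allocator) and the /tx/begin path are invented for illustration, not any product's API:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.UUID;

// A toy "capability as a service": a transaction-id allocator that would
// traditionally live in-process, exposed instead over HTTP so that any
// language with an HTTP client can use it.
public class TxIdService {
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/tx/begin", exchange -> {
            byte[] body = UUID.randomUUID().toString().getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body); // respond with a fresh transaction id
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = start(0); // port 0: pick any free port
        System.out.println("tx-id service on port " + server.getAddress().getPort());
        server.stop(0);
    }
}
```

Nothing about the capability itself changes; only its boundary does, which is why the same circles in the slide can sit inside a JVM today and behind an HTTP endpoint tomorrow.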

Now where's all of this going? Well we (Red Hat) have made some progress towards this approach over the years, but recently we've seen a significant acceleration. With the advent of xPaaS, integrated services and products for the DevOps world, and now Fabric8, Kubernetes and OpenShift, the creation of docker-based services (components) is happening now. Our aim is to ensure that these services are available to all (public, private, hybrid cloud as well as non-cloud deployments). Within the cloud they'll appear via Fabric8/OpenShift within xPaaS, perhaps as individual services or applications, or within a composite - time will tell, as we are quite early in some aspects of this development process.

 

Ultimately what docker, Kubernetes, Fabric8, OpenShift and many other efforts allow us to do is realise a long-held belief and define an enterprise middleware layer that is independent of programming language - the Red Hat JBoss Enterprise Middleware layer. Often our industry spends too much time focussing on specific languages, the latest shiny object, and not enough on what it takes to build enterprise mission-critical applications. This is all part of improving developer productivity, because without these capabilities (transactions, messaging etc.) someone has to implement them, and if that's your developers then they're clearly not able to spend time working on the applications or business components you're paying them to build! With our approach we're exposing these services to whatever language makes sense for our customers and communities; those we don't get around to can be worked on by our communities, as that's the beauty of open source!

 

We are embracing and extending our Java heritage but in whatever language makes sense. We have the heritage and pedigree and we should build upon this. The JBoss name should become synonymous with this enterprise layer with associated capabilities (services) irrespective of the programming language you're using. We started this with JBossEverywhere but it needs to go much further. In the polyglot, mobile and cloud era the language for backend services can be varied and often down to an individual developer, but the maturity and enterprise worthiness that the backend provides should not be open to debate. If you are relying upon your backend to be available, reliable, fault tolerant etc. then the maturity, pedigree and heritage you're using are extremely important.

 

JBoss is a platform for the future.

Mark Little

FeedHenry meets JBoss

Posted by Mark Little Sep 18, 2014

I've done a few acquisitions over the years along with this kind of blog post welcoming the new team to the fold. It's with great pleasure that I can do it again and this time for FeedHenry. The official press release can be located here, but I want to be more personal than that. We've been working in the area of mobile and backend-as-a-service for a while now, with the AeroGear and LiveOak projects. FeedHenry complements them (and vice versa) as well as offering the opportunity to make the collective an even more compelling developer and deployer experience for our customers and communities - especially when you factor in OpenShift and xPaaS! The technology that we've acquired is wonderful, but let's not forget their (our) team: from what I've seen over the past few months they are as passionate about their work as we are within JBoss and Red Hat, so that should be a good meeting of minds. The challenges of working remotely are something we have met and bettered here for many years, so I'm not worried that the FeedHenry team is located in one place and the rest of the teams are dotted around the globe.

 

Now of course the fact that FeedHenry uses Node.js (and Java) within their offerings gives us another opportunity: exploring the Node.js developer landscape in more depth and seeing how we can bring some of our enterprise capabilities to the fore. I expect projects such as Vert.x and Nodyn to play a critical role here. All of the teams involved in the integration and forward development of the JBoss/FeedHenry combination will have ample opportunities to learn from each other and benefit from individual and shared experiences. So don't worry that the acquisition of a company that uses Node.js means we're ditching Java because nothing could be further from the truth. We are a polyglot company and this acquisition bolsters that story!

 

Onward!

Mark Little

Enterprise IoT

Posted by Mark Little Jul 30, 2014

It may seem strange to think of enterprises and IoT, but there are already many companies looking to use the data they obtain from "edge" devices, such as sensors, in their day-to-day business. Of course as a result we've all heard of Big Data and how it plays here, with the need to store Petabytes of information so that analytics can be performed on it, both in real time and in batches. But this is just the start of where enterprise and IoT will intersect. When I talk about Enterprise IoT I'm implicitly thinking about the kinds of scenarios I've discussed before around JBoss Everywhere. Now of course we should abstract away the specific technology because you could do this in a variety of ways. However, the use cases remain.

 

Today people consider edge devices as dumb; message exchanges are best effort; they're typically stateless; they do one thing well, such as read the ambient temperature; they only know about one endpoint with which they exchange data; they can't be easily (field) updated; network bandwidth is limited. All of these restrictions are imposed by the limited scenarios we have today, or the limits of the technology. However, if the last 50 years of software and hardware evolution have told us anything, it's that these constraints will change, and probably more rapidly than many people believe.

 

I believe we are in the very early years of IoT and as technology waves of the past have shown us, these are typically the periods where advancements come quick and furious. Why should edge devices be limited by memory or processor? A Raspberry Pi today costs $30 but in less than 18 months something equivalent will be sub $10 and the form factor will look more like an RFID chip. Developers will want to store more data on these devices so they can do local processing (prediction), maintaining state to help them determine what to do next. They will become much more autonomous, with some of them having peer-to-peer relationships with other devices, clients and servers within the IoT cloud. These devices will communicate with each other. They may work together for fault tolerance and reliability, sharing data, requiring consistent updates to that data. And they'll need to be much more secure so they cannot be hacked or spoofed. Certain message exchanges may need to be delivered reliably as well as capturing an audit trail for non-repudiation purposes.

 

The potential for Enterprise IoT is significant and only limited by our imaginations. In just the same way, the edge devices of two decades ago evolved into today's enterprise-critical laptops and servers! And of course I want to see JBoss and Red Hat technologies and communities leading the way.
