Microservices tools and frameworks


I am fairly new to the Microservices world, but I have more experience with Service-Oriented Architecture, Event-Driven Architecture and related styles. So lately I began to read more about the Microservices architecture style, and in each blog or news item I came across the name of a new tool or framework. So the idea came up to make a reference architecture and a tool sheet in which each tool is mapped to that reference architecture. I have now reached a state where I can use your help! Which concepts are missing, and which tools can I add?
Please leave comments.

Reference Architecture

This is my first attempt at a kind of hybrid Microservices reference architecture, in which an integration layer is used to expose COTS products and legacy systems.

Tool/framework sheet

The next sheet is freely available and contains a list of tools and frameworks that can be used within the Microservices ecosystem. I know it is far from complete, so that's why I need you!
The type is the clearest characteristic of each tool; other characteristics are also present. This list of characteristics needs to be extended too, and here your help is also welcome!


Microservices Conference 2018 in Berlin - A small recap


On the 22nd and 23rd of March I went to the "Microservices Conference 2018, MicroXchg" in Berlin. This blog post is a short recap of my experience at this conference, which was held for the 4th time.
My goal was to learn more about the Microservices world, to extend my toolset of possible architecture solutions and, of course, to see a bit of Berlin. Furthermore, I interviewed Chris Richardson about Microservices and his upcoming book on patterns. This interview is included in this blog as well.
A note: everything within this blog is purely my own opinion.


The conference was held at Kalkscheune in Berlin, a nice location with big conference rooms. There were 5 different rooms and everything was recorded.

Overall program

There were 4 parallel sessions plus 1 in-depth session. The regular sessions were 50 minutes long, which was enough time to get some detailed information, and there was also time for questions. As I wanted to get a global idea of the Microservices world, I did not attend any of the in-depth sessions, so I cannot say anything about their level.
The division of session types was:

  • Technical sessions (i.e. tooling, languages, products): 21 sessions
  • Architectural sessions (i.e. modeling, design, architecture): 17 sessions
  • Organizational sessions (i.e. impact on organisations): 3 sessions


I did not speak to everyone, but in my opinion the majority of the audience was technically oriented and mostly German, although there were also foreign speakers. This gives me the impression that Microservices is still mostly a technical IT party. Maybe not everyone agrees with me on that, but the number of managers and CIOs was close to nil, or maybe even zero. If you want to use the Microservices architecture style within your organization, there will also be a lot of structural consequences, and this was somewhat underexposed at the conference.
Of course there were also a couple of stands of the sponsors of the event.


Micro frontends

I went to the conference with a couple of questions; one of them was whether the UI is part of a Microservice or not. There were 2 great sessions about this topic, by Matthias Laug and Elisabeth Engel. Even the term "micro frontends" was introduced (at least, a new term for me). In his talk, Matthias argued that the front-end part should be deployed as one microservice, together with the backend part. Aggregation of several Microservices is done using frames, and this should be a thin layer.


Sagas

A talk by Chris Richardson covered using sagas to implement data consistency within Microservices, using local transactions instead of two-phase-commit transactions. A saga uses compensation handling and/or event-driven architectures to deal with data consistency. This does not make the design of such systems easier.
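To make the idea concrete, here is a minimal, hypothetical sketch of a saga in Python. The step names are invented for illustration; real frameworks (such as Eventuate) implement this in a far more robust, event-driven way.

```python
def run_saga(steps):
    """Run a list of (action, compensation) pairs as local transactions.

    If a step fails, run the compensations of the already completed
    steps in reverse order instead of relying on two-phase commit."""
    completed = []
    for action, compensation in steps:
        try:
            action()
        except Exception:
            for undo in reversed(completed):
                undo()
            return False
        completed.append(compensation)
    return True


# Demo with hypothetical order/payment steps: the second step fails,
# so the first step's compensation is executed.
log = []

def reserve_credit():
    log.append("credit reserved")

def release_credit():
    log.append("credit released")

def create_order():
    raise RuntimeError("order service unavailable")

saga_ok = run_saga([(reserve_credit, release_credit), (create_order, lambda: None)])
```

After this run, `saga_ok` is `False` and the log shows that the credit reservation was compensated. The design cost Chris mentioned is visible even here: every step needs an explicit, correct "undo".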

Event storming

An interesting talk by Lutz Huehnken about designing microservice boundaries using event storming. This is a method where you do not look at the business entities, but at the events happening within the processes. The focus is more on the dynamics than on the nouns and structures. The events then lead to the commands needed and to the entities (nouns) involved. He argued that this leads to better reactive systems, as described in the Reactive Manifesto (https://www.reactivemanifesto.org/). I think we will use the design technique that has the closest relation to the business problem at hand, but I am thrilled to have new tools available again.

Service Mesh

I had also read about service meshes, so I was curious what this is all about. Fortunately there was a talk on this subject by Daniel Bryant. A service mesh handles the communication between services. As services can run in several containers across several hosts, requests must be routed to a service instance, so runtime service discovery is needed; a service mesh takes care of this. It can also contain additional (business) rules, so that, for example, European requests are sent to different containers or hosts. Linkerd, Conduit and Istio are some examples of service meshes. However, Daniel warned that care must be taken when using them in production, because they are not mature enough yet. To me it also sounded like the new replacement of the ESB, because it is likewise a central component within service communication and contains routing rules. Let's see how these service meshes evolve.
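As a toy illustration of the routing idea only: the rules and host names below are invented, and real meshes such as Istio express this declaratively in configuration rather than in code.

```python
# Rule-based routing, as a service mesh sidecar might apply it:
# each rule is (predicate on request metadata, target host pool).
ROUTING_RULES = [
    (lambda req: req.get("region") == "eu", ["eu-host-1", "eu-host-2"]),
]
DEFAULT_POOL = ["host-1", "host-2"]

def route(request):
    """Return the host for a request: first matching rule wins."""
    for predicate, pool in ROUTING_RULES:
        if predicate(request):
            return pool[0]  # a real mesh would also load-balance within the pool
    return DEFAULT_POOL[0]
```

So a request tagged `region="eu"` lands on an EU host, and everything else falls through to the default pool; the point is that this logic lives in the mesh, not in the services themselves.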

How to be an architect in a Microservices world

This talk was by the architect Felix Muller, about his experiences within the architecture field. For me there were no specific new elements. He talked about (software) architects in teams and an architecture board for alignment, which is not very different from the way organisations already work. Technical reviews were also mentioned, which are not specifically new (at least for me). In a true Microservices architecture you hope to need less alignment because of the self-management of the teams and the loosely coupled services. In practice I think this will not be the case, because the services will also share common facilities (i.e. CI/CD pipelines, service meshes, PaaS and IaaS, security, API gateways).

Patterns and anti-patterns

Stefan Tilkov is a well-known speaker and I know him as one of the first RESTafarians. It is always good fun to watch him, and he always has good content to think about. His talk was about patterns and anti-patterns. Such patterns are used as a standard for communication, but are not (yet?) part of an official pattern catalog. Chris Richardson is also working on Microservices patterns that can be used. The following anti-patterns were discussed:
  • Distributed monolith (microservices gone through the roof, getting too complex)
  • Decoupling illusion (technical separation does not match the business domain separation)
  • Micro platform (standardization of shared functionality)
  • Entity Service (wide business entities as boundaries)
  • Anemic Service (layering in data services)
  • Unjustified re-use (extremely generic utility functions)
  • Autonomous cells (decentralized domain focused cells)

Size patterns
  • Function as a Service (FaaS, small services, serverless)
  • microSOA (small self hosted, synchronous)
  • Distributed Domain Driven Design (business events)
  • Self-contained systems (UI+DB)
  • Monoliths

Interview with Chris Richardson

One of the speakers at the conference was Chris Richardson, a well-known man within the Microservices world. He is currently writing a book on Microservices patterns, which will be released soon (see also http://microservices.io/patterns/microservices.html). I got the chance to meet him and interview him. Here you can see and listen to the interview. Be aware: it was my first interview, I was a little bit nervous and I used my phone to record it. Oh, and forgive me for the tight blouse ;-)


To conclude, I would like to point out the following:
  • The conference was well attended, but mainly by a technical audience
  • There was a good mix of tooling, framework and architecture sessions
  • It gave me a good impression of the status of the Microservices architectural pattern, which on the tooling level is still immature on some points (i.e. service meshes)
  • Hopefully Service Mesh will not become the new ESB hell
  • Breaking down the problem domain into Microservices is hard
  • Languages/frameworks emerge that are more closely related to the modeling of events (i.e. Eventuate). This means that the model is once again embedded in the language itself (just like it was for object-oriented design and OO languages)
  • Entity modeling and business-capabilities modeling are really two different modeling techniques, of which entity modeling is the one most wrongly(?) used for modeling Microservices
  • CQRS and Event Sourcing are two topics I have to dig into
  • There are a lot of (new) terms and tools within the Microservices world
  • Hopefully next time all presentations and slides will be available on the conference site: http://microxchg.io/2018/index.html
In a next blog I hope to share a Microservices reference architecture with a sheet of possible tools/frameworks/products for each concept within that architecture.


Let me take you through some new features of WSO2 API Manager 2.0.0

WSO2 has recently released a new version of the API Manager: version 2.0.0. So what are the new features of the product?
I will take you through some of them.


The API Manager consists of three packages to be downloaded:
  • API Manager
  • Tooling (for Eclipse)
  • Analytics (Full fledged Data Analytics Server, ready to be used)

Traffic Manager

The API Manager has a new component besides the Publisher, Store and Key Manager: the Traffic Manager. This component handles the throttling policies.

Advanced throttling

With the previous version it was possible to set simple throttling limits, e.g. 20 allowed calls per minute. With the new version, more advanced policies can be configured, filtering on properties such as:

  • IP Address and range
  • HTTP request headers
  • JWT claims
  • Query parameters

Custom Rules

Custom throttling policies can also be configured, using the scripting language Siddhi; the corresponding dialog is shown after logging into the admin console (https://:9443/admin). You can use the following keys to define the policy: resourceKey, userId, apiContext, apiVersion, appTenant, apiTenant, appId.
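To illustrate conceptually what such a policy does, here is a hypothetical Python sketch of property-based throttling. The limit, the header name and the (absent) time-window handling are all invented; the real Traffic Manager evaluates Siddhi queries over the keys listed above.

```python
from collections import defaultdict

LIMIT = 5  # made-up example: max calls per window for a given key

counters = defaultdict(int)

def allow(request):
    """Throttle per (userId, apiContext), but only for requests that
    carry a particular header (property-based filtering)."""
    if request.get("headers", {}).get("x-channel") != "mobile":
        return True  # this policy only applies to mobile traffic
    key = (request["userId"], request["apiContext"])
    counters[key] += 1
    return counters[key] <= LIMIT
```

The sixth "mobile" call for the same user and API context is rejected, while non-mobile traffic passes untouched; a real Siddhi policy adds the time window and tenant dimensions on top of this idea.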

Subscription Tiers

It will also be easier to edit the subscription tiers and to add tiers.

This adds a lot of new possible use cases to the throttling capabilities of the API Manager. The Message Broker and Complex Event Processor components are used to implement the Traffic Manager.
The Siddhi syntax is of course very technical, but it is a good step.

Log Analyzer

The Log Analyzer has been added and is especially useful when you are not able to log into the server yourself, which can be the case in a multi-tenant cloud environment.
You are, for example, able to view the number of errors and warnings of the application.

I am wondering how useful this feature is in standalone mode.

API Store look-and-feel

The API Store has a new theme and looks better. Some screen shots are shown below.



For the applications you are able to select the possible grant types for the token generation of that application.


Configuring analytics with the API Manager is easy when you use the analytics server package from the API Manager download page: you just have to set Enabled to true within the api-manager.xml configuration file. I think it will be more complex when you want to use an existing DAS or BAM server.
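For reference, the fragment in api-manager.xml looks roughly like this (abridged from memory; check the file shipped with your version for the exact element names):

```xml
<Analytics>
    <!-- Set to true to publish API statistics to the analytics server -->
    <Enabled>true</Enabled>
</Analytics>
```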

Batch statistics

Some more statistics on the usage of the APIs have been added.

Geolocation-based statistics can also be configured.

Realtime analytics

It is also possible to configure realtime analytics and to receive mails when something extraordinary happens. Note that the admin and Publisher/Store possibilities differ; there are more settings available as admin.


A client recently asked if he could receive mails when new API versions are available. Well, WSO2 has added this feature. For now you have to configure it within some configuration files rather than through a nice UI, and a notification is only sent when a new API version is available. However, the first step has been taken towards implementing more notifications (for example when an API becomes obsolete or deprecated).

So these were some of the most important features added to the product. I will keep you informed in case I try some more!


WSO2 ESB - Design for testability


Testability is one of the most underestimated qualities of software, and this is also the case for WSO2 ESB projects. It is important to design your integrations for testability, and this starts with the way you set up the proxies. This blog gives some guidelines you can use to design for testability.


Sequences are the WSO2 way to group mediators. A proxy has by default an in-sequence, an out-sequence and optionally a fault-sequence. These sequences can be split up into sub-sequences, which is good for reuse, but also a way to split up the design. A proxy usually contains the following functional parts:
  • Validation
  • Transformation
  • Sending to an endpoint

These parts can be put in separate sequences. This has the advantage that the parts can be reused in other proxies, and it is also the way to enable testing of these parts.

Design for testability

The sequences are the basis for the testability of WSO2 ESB proxies. The following design guidelines and steps can be used for testing your proxies.
  1. Configure separately testable parts within a separate sequence
    A good guideline for splitting is that the parts should be as independent as possible from other parts. Good examples are: input validation, data transformation, sending a message to an endpoint, a step within an iteration.

  2. Define a separate Developer Studio project for the test proxies
    These are the proxies that contain a testable sequence and can be called from soapUI, for example. This way the test package can be deployed separately, so the test project is not deployed on production.

  3. Define a soapUI project for testing the component
    It is wise to define a separate soapUI project for each component (WSO2 proxy) you want to test. Note that this can also be used within a Continuous Integration environment for automated testing.


The following is an example in which a data transformation is tested.
Step 1 – Define data transformation in a separate sequence
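As an illustration, such a sequence could look as follows in Synapse configuration (the sequence name and the XSLT registry resource are hypothetical):

```xml
<sequence xmlns="http://ws.apache.org/ns/synapse" name="TransformOrderSequence">
    <!-- The data transformation under test, kept free of endpoint calls -->
    <xslt key="gov:xslt/order-transform.xslt"/>
</sequence>
```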

Step 2 – Define separate ESB project with test proxy
Note that the sequence is referenced and that the result of the sequence is returned with the respond mediator.
This result can later be checked within soapUI using assertions.
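A test proxy along these lines could look as follows (the names are hypothetical; the sequence key refers to the sequence defined in step 1):

```xml
<proxy xmlns="http://ws.apache.org/ns/synapse" name="TestTransformOrderProxy"
       transports="http" startOnLoad="true">
    <target>
        <inSequence>
            <!-- Call the sequence under test and return its result to the caller -->
            <sequence key="TransformOrderSequence"/>
            <respond/>
        </inSequence>
    </target>
</proxy>
```

Because the proxy does nothing but call the sequence and respond, any assertion failure in soapUI points directly at the sequence under test.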


Step 3 - Define a soapUI project with test cases
In the request message, put a SOAP message as expected by the sequence. Note that the ESB uses SOAP as its common message format within the mediation. The response can be checked with assertions.



The sequence split can be used to configure testable parts within a WSO2 ESB implementation.

This may not always be easy to do, but hopefully you can use it to test your ESB implementations.
Feel free to comment on this blog! All feedback is welcome.


WSO2 ESB 5.0.0: Data Mapper

One of the features I missed within the WSO2 Developer Studio was a data mapper.
We had to write our own XSLT or PayloadFactory within the product.
But everything is going to change with the release of ESB 5.0.0!

Or is it? ...

There is a first blog item on the data mapper that I read and tried.

A very good blog indeed. However, when I tried the new Developer Studio, I was somewhat disappointed. I am positive about the fact that the mapper is a real resource (even a project) now, but I miss a lot of features (for example XPath functions, an if-then-else construct, constraints) within the mapper.

As you can see from the picture below, only the following operators are supported:

  • Concat
  • Split
  • LowerCase
  • UpperCase

And when you investigate the generated project files, you will notice that JavaScript is generated to do the actual mapping.

So my first impression is one of disappointment, but it is a good start! Hopefully new features and operations will be added soon, because at the moment I doubt whether it is production-ready.

But hey that's my opinion ;-)