Wednesday, July 8, 2015

Microservice with Spring Boot and Docker with Weave (Full Example)

For the time being I'm working as a Docker developer (not on the Docker project itself).

I work for a company here in Gothenburg (Sweden), hired as a consultant.

This has given me a lot of opportunities to use Docker and learn its pitfalls.

Multihost:

One of the bigger problems has been multihost applications: having containers on different hosts that talk to each other.

One company that set out to solve this problem is WeaveWorks (http://weave.works/).
But the new experimental branch of Docker 1.7 has the new libnetwork, so you can do multihost networking there too.

But as I said, it is in the experimental branch and some of its functions aren't mature enough.
This is one of the reasons I've been looking at Weave. Another is that the experimental branch of Docker also has plugin support, and Weave is one of the first to actually create a network plugin (many big companies are working on this).

So I can use Weave as I do now or I can use it later as a plugin.

Components:
The Weave binary, which pulls the Weave, WeaveExec and WeaveDNS images (these components run as containers).

So what are some pros of Weave?


  • Multicast (or unicast) support
  • The containers get their own IPs, so they behave more like traditional hosts
  • Containers can have hostnames that just work with WeaveDNS
  • Password protection for your Weave network
  • NaCl encryption
  • Uses "vanilla" Docker, so no real changes to your containers
  • The network has no single point of failure: start a network on one host, connect a second to that host, connect a third to the second, and they all find each other and register each other as peers.

Some cons:

  • Sparse documentation (e.g. when encryption is used, plus more advanced examples). The only documentation on how to use it is the simple getting-started guides; there are no real guides for configuration.
  • Not apparent what the Weave binary actually does



I'm probably going to give a session at the next Docker Meetup Gothenburg (Aug 2015) and maybe at
JavaForum Gothenburg.

So for these sessions I have prepared two demos: one with minimal code and one with more complete code (still very little code).

Demo

So the demo consists of two services:

Weave Producer:

The Producer is a Spring Boot REST service that reads an environment variable and returns it. It listens on the URL host:8080/rest/hostname
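To make it concrete, the endpoint could look roughly like this (a minimal sketch; the names PRODUCER_NAME and HostnameController are my own, not necessarily the demo's exact code):

    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class HostnameController {

        // Resolved from a hypothetical environment variable,
        // set when the container is created (docker run -e ...)
        @Value("${PRODUCER_NAME:unknown}")
        private String producerName;

        // Answers on host:8080/rest/hostname
        @RequestMapping("/rest/hostname")
        public String hostname() {
            return producerName;
        }
    }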

This is packaged as a standalone JAR. I put it in a Docker container (an image extended from OpenJRE).

Weave Consumer:

The Consumer is a Spring Boot REST service that retrieves a value from the Producer and then adds some text. The result is meant to show which Consumer and which Producer were involved.

This is packaged as a standalone JAR. I put it in a Docker container (an image extended from OpenJRE).

Here comes the fun!
Since the Consumer doesn't know the exact host it will talk to, it talks to a DNS name:
weave-service.weave.local

weave-service is my name for the service, and weave.local is the mandatory part. The name is read from a properties file, so I can change it.

So a hostname might not seem like a magical thing, but the fun part is that you can start several containers of the same image with the same hostname!

WeaveDNS understands that they are the same service. I think (not sure) that it will load-balance between them.

But what I do know is that if you have 1 Consumer and 2 Producers, and the Producer that the Consumer is using goes down (docker stop, or just unavailable), then, since I coded a retry function, the Consumer automagically starts using the other one (the DNS understands that the container has gone down and removes it from the DNS list).
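The consumer side plus a naive retry could look roughly like this (a sketch with my own names, assuming the port and URL from above; not the demo's exact code):

    import org.springframework.web.client.RestClientException;
    import org.springframework.web.client.RestTemplate;

    public class ProducerClient {

        private final RestTemplate restTemplate = new RestTemplate();

        // WeaveDNS resolves this to whichever producers are alive right now
        private final String producerUrl = "http://weave-service.weave.local:8080/rest/hostname";

        public String fetchHostname(int maxAttempts) {
            RestClientException lastError = new RestClientException("no attempts made");
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    return restTemplate.getForObject(producerUrl, String.class);
                } catch (RestClientException e) {
                    lastError = e; // producer gone? DNS drops it, the next attempt hits the other one
                }
            }
            throw lastError;
        }
    }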

Weave'ing It

So the demo is actually a Vagrantfile that starts 3 VMs running Ubuntu with Docker.
This takes around 50-60 GB of disk (sorry!).

The GitHub page gives instructions on how to run them and how to build the Docker images on the VMs.

All the code is included, and I use the Gradle Wrapper so you don't need to download Gradle or Maven to build the services. (You only need Java 8; the services are built on your host, not on the VMs.)

One VM will have a Consumer and the other two will have Producers. The environment variable that the Producer uses is set when you create the container (the name, Producer 1 or Producer 2).



(Green shows the Weave connections between peers, and red shows the DNS.)

Conclusion:

A small and fun way to try out microservices with high availability using Docker and Weave.
Nothing in the Java code has anything to do with the high-availability part (just the retry, but you want that even if you only have one host to talk to).

So for Weave to work you only need to use the hostname; you don't need any other change.

Sorry for any misspelled things or typos.

Here are the links:


Better Code with fast Retry: https://github.com/Khazrak/WeaveDemoEX

Feedback:

I would love some feedback on these, or some ideas for improvements.


Happy Testing!


Saturday, May 2, 2015

Docker: Hop on now or get passed by

So right now I'm actually working full-time with Docker as a consultant. The IT division that I'm a part of has a plan to containerize its services, and I'm the only one in that division who's actually doing anything with Docker.

And I can say right now that NOW is the time to jump onto Docker and try it out, because soon many (though not everyone) will use it!


Scalability

I'm not pulling this prediction out of thin air. First of all, Docker makes scalability easier (not easy, but easier).
Docker Machine can spin up a new cloud instance with Docker installed on many popular cloud providers (Azure, Amazon, DigitalOcean, Google and more).

Docker Compose can spin up a whole stack of containers based on a config file (written in YAML).

Docker Swarm can connect the Docker machines into a cluster, and soon the whole cluster will look like one big host. They are getting there, but Swarm is only at 0.2 beta.

Windows is coming 

This is the client, so you still need a Linux host to connect to, but you can now actually control Docker from native Windows.

Microsoft (with Docker, Inc.) is creating Hyper-V Containers and Windows Server Containers to sit side by side with Linux containers through Docker.

They have also created a Docker image (a Linux container) with the ASP.NET 5 preview (https://registry.hub.docker.com/u/microsoft/aspnet/), and the move to open-source parts of the .NET platform means that some Windows apps will be able to run in Docker Linux containers.

Microsoft has also created an editor for cross-platform editing, Visual Studio Code: https://code.visualstudio.com/

These pieces mean you can code and run C# and Windows apps from Linux/OS X and use them with Docker's scalability and isolation.




Conclusion:

And all this is happening right now!
So download Docker, try it, tinker with it and be ready for the ever-developing world of IT.



Monday, March 23, 2015

More about Microservices

A long time since the last post.

I've now started a new job as a System Integrator, but I actually mostly code Java.

So I thought I should ramble a bit about microservices.

Pieces of a Microservice architecture:


Communication-technique

One of the big pros of microservices is that they can be language independent. This, however, creates a demand for a language-independent way to communicate.

REST:
One of the most common ways right now is to make REST services. Most languages have an HTTP client library to call services, usually with URLs built from strings. For load balancing you might need a service-orchestration application, maybe Consul/Eureka/ZooKeeper.

AMQP:
Another way is to use a messaging technology like AMQP. The AMQP servers (brokers) are very stable and have great clustering abilities. AMQP is a network protocol, which has its perks. This approach has less support, but you get almost automatic load balancing (if several clients listen to one broker, it uses a round-robin strategy for messages). Request-response interactions can be a bit tricky (usually a queue for the request, then listening on a response queue).
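A rough sketch of that request-response pattern with Spring AMQP (the queue names are hypothetical, and both queues are assumed to already be declared on the broker):

    import org.springframework.amqp.core.Message;
    import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
    import org.springframework.amqp.rabbit.core.RabbitTemplate;

    public class AmqpRequestReply {

        public static void main(String[] args) {
            CachingConnectionFactory cf = new CachingConnectionFactory("localhost");
            RabbitTemplate template = new RabbitTemplate(cf);

            // Publish the request on one queue...
            template.convertAndSend("request.queue", "give me data");

            // ...and poll a response queue (a real service would correlate by message id)
            Message reply = template.receive("response.queue");
            System.out.println(reply == null ? "no reply yet" : new String(reply.getBody()));

            cf.destroy();
        }
    }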

Summary:
What do these have in common? They are stateless. This means you have to plan for that. Microservices demand a lot of modular thinking.

Data Persistence:

How about databases? How do we save data?
The important question is, how consistent must the data be?

One idea is a polyglot solution with a datasource in every service, connected to each other. Maybe MongoDB's sharding?

Another is to connect to a classic cluster of 2 or more databases.

Just be sure not to lock yourself in, and don't forget that the individual microservices usually talk to the datasource, so they need to be able to reach it.

Another aspect: should the different services actually use any of the same data? If yes, then communication between the teams that code the services needs to be top notch.

So preferably the answer is No.

Is DevOps a prerequisite for Microservices?

Microservice architecture has gotten a fresh start with the agile way of developing. These small services are developed fast and change fast, so there has to be a way to quickly deploy a new version of a service.

Many automation tools have come together to solve the "pipeline": Jenkins, GoCD, etc.

This is where Docker has made a good impact. If the service can be contained in a single container (or two, if data is needed), it can move from development to test to production very easily.

But do we have time to teach an ops guy how to administer your service if it changes rapidly? What if we need to scrap it and swap the whole application stack for another?

The developers are the experts; if they can administer the service even in production, they have all the power to update, change and migrate it.

So even if DevOps technically isn't a prerequisite for microservices, it makes for a more agile way of developing.

Should I do Microservices?

The big question is: SHOULD you do microservices?
Today's development is moving more and more toward modular architecture, which is a step toward microservices. But microservices are not a hail-mary solution. They come with a price: complexity. You need orchestration, a communication layer and a way to deploy all the services. They will probably cost some performance too.

So what do we gain? Scalability and the option to quickly change parts of the system.
Good modularity can also help against spaghetti code.

So my 2 cents if you should do microservices:

If you KNOW that you will need to scale big some day, or that you will acquire more functionality, then microservices might be for you.

If you're only doing a website, then microservices might just be unnecessary complexity.


I will do a small microservice project and put it on GitHub as a more concrete example, to show what I'm talking about. But it is not this day.

Happy Coding!


Wednesday, December 31, 2014

Microservices: Shared libraries or big services

After some fun with microservices, a fork in the road has appeared.

Say we're developing an app (just like I am):

The web part doesn't exist yet; it will be the web GUI (Spring Boot MVC).

We have two small services:

  • UserRegistration
  • UserInfo

Since they use the same data, I chose to group them with the same database.

So when we register a new user, the info is sent to the UserRegistration service as an AMQP message via RabbitMQ.

The UserRegistration service puts the data in the database. To control the structure we make a POJO (JPA or other, depending on the database, e.g. MongoDB).

And there is the problem: the structure (the POJO) lives in the UserRegistration service's project!

So when the UserInfo service retrieves the data, you probably want the same structure you put in.
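To illustrate the problem, the shared structure is just an ordinary entity class (a sketch with hypothetical fields); whichever project it lives in, the other service needs the exact same class to read the data back:

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;

    @Entity
    public class User {

        @Id
        @GeneratedValue
        private Long id;

        private String username;
        private String email;

        protected User() { } // JPA needs a no-arg constructor

        public User(String username, String email) {
            this.username = username;
            this.email = email;
        }

        public Long getId() { return id; }
        public String getUsername() { return username; }
        public String getEmail() { return email; }
    }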

There are probably more solutions, but...

I see two obvious roads to take:

  1. Make the services bigger and have all the user-related stuff in one big service
  2. Have a shared library with the code that needs to be shared

Personally I think option 1 is too risky. The service might become a giant service, and the "micro" part might disappear.

So how do we do option 2? 
The code could either be a separate project distributed to an artifact repository, or be made part of one of the services (probably the one that puts the data in the database) with that piece of the code packaged somehow.

This adds another level of complexity: the need to share libraries means the services can't be 100% independent.

This is a big part of the microservice architecture: it is somewhat complex. There are a lot of benefits to reap, but you could probably solve this much more easily in a monolith.
On the other hand, there are a lot of benefits if you can get microservices right:
  • Fast development
  • Independent deployment of services
  • Scaling
  • Services can be in different programming languages

So now I think I will deploy a Sonatype Nexus repository as a Docker instance and share the User structure.

To be continued...

Quick update:
After reading a bit about others' experiences with microservices, another problem came to mind.
https://rclayton.silvrback.com/failing-at-microservices
If you have shared code between services, especially models (entity structures), it is most likely written in a specific programming language. This takes away one of the advantages of microservices.

So how do we solve this? It will take some thinking (I would like a good place to discuss these problems). My first thought is to settle on a JSON structure, because of the AMQP, and send all messages in JSON format (as byte arrays). Hopefully that makes it possible to use the messages from another language.
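A minimal sketch of that idea with Jackson (any JSON library in any language could read the result back; the class and field names are my own):

    import com.fasterxml.jackson.databind.ObjectMapper;
    import java.util.Collections;
    import java.util.Map;

    public class JsonMessages {

        private static final ObjectMapper MAPPER = new ObjectMapper();

        // What goes on the wire via AMQP: plain JSON as a byte array
        public static byte[] toJsonBytes(Object model) throws Exception {
            return MAPPER.writeValueAsBytes(model);
        }

        // Any consumer, in any language, can parse the same bytes back
        public static <T> T fromJsonBytes(byte[] body, Class<T> type) throws Exception {
            return MAPPER.readValue(body, type);
        }

        public static void main(String[] args) throws Exception {
            Map<String, String> user = Collections.singletonMap("username", "erik");
            byte[] wire = toJsonBytes(user);
            System.out.println(new String(wire, "UTF-8")); // {"username":"erik"}
        }
    }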

And Happy New Year!




Tuesday, December 23, 2014

Microservices continued: The messaging begins

So I've started a fun project: http://java-viking.blogspot.com/2014/12/mircoservices-with-java-spring-boot-and.html

So the first piece of the puzzle was to have an individual service running. This is easily solved with Spring Boot. Since most services don't even need to run as web services (in a servlet container), a service can be as small as a few MB!

So I wanted all the communication between the services to be handled by messaging.
The "normal" way is point-to-point. The problem is that you are bound by the API; sure, you can do good coding and use an interface so you can actually switch the implementation, but somewhere the implementation is bound. When you need to do a swap, that redeployment will disturb the things that use it.

Can you do a redeploy without disturbing the system?

Answer: Not sure, but I will try!

My thought is this: if you have services that use message queues, then you can theoretically have several instances of a service handling messages from the same queue (not the same messages, but the same type of messages).

If that works, then you could have an instance of the old version and an instance of a new version (where a bug is fixed) running side by side, remove the old instance, and the work would probably never be interrupted.

This is also the solution for scaling that I'm testing: can you have several instances of the same service?
So far so good; my fear is when users and sessions come along and mess things up.

AMQP

So instead of JMS (Java Message Service) I'm using AMQP (Advanced Message Queuing Protocol).
One of the reasons is that it has clients for several languages, so you can write some services in another language.

"AMQP mandates the behavior of the messaging provider and client to the extent that implementations from different vendors are truly interoperable, in the same way as SMTP, HTTP, FTP, etc. have created interoperable systems. Previous attempts to standardizemiddleware have happened at the API level (e.g. JMS) and thus did not ensure interoperability.[2] Unlike JMS, which merely defines an API, AMQP is a wire-level protocol. A wire-level protocol is a description of the format of the data that is sent across the network as a stream of octets. Consequently any tool that can create and interpret messages that conform to this data format can interoperate with any other compliant tool irrespective of implementation language." (Wikipedia)

Spring, of course, has a project for this: Spring AMQP. I'm using Spring Boot Starter AMQP (not fully sure yet what the "starter" is about)

Status:

So where am I right now (December 23)?
I've made a service that listens to a queue on RabbitMQ (I named the queue, and RabbitMQ is running in Docker, using the official dockerfile/rabbitmq image).

This service will handle every message in that queue: it will print what the message is and then change the message.

If the service that sends the message demands a response, that service will get the changed message back.

The other service just sends 10 messages (with Thread.sleep(1000) between each message) to the queue. This now works like a charm!
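Roughly what the two services do (a sketch; the queue name is my own, and the listener assumes a normal @EnableRabbit/Spring Boot AMQP setup):

    import org.springframework.amqp.rabbit.annotation.RabbitListener;
    import org.springframework.amqp.rabbit.core.RabbitTemplate;

    // Handler side: takes every message on the queue, prints it and returns a changed message.
    // The return value is only sent back if the sender asked for a response.
    class MessageHandler {

        @RabbitListener(queues = "demo.queue")
        public String handle(String message) {
            System.out.println("Received: " + message);
            return message + " [handled]";
        }
    }

    // Sender side: ten messages, one per second, each waiting for the reply.
    class MessageSender {

        private final RabbitTemplate template;

        MessageSender(RabbitTemplate template) {
            this.template = template;
        }

        public void sendTen() throws InterruptedException {
            for (int i = 1; i <= 10; i++) {
                Object reply = template.convertSendAndReceive("demo.queue", "message " + i);
                System.out.println("Reply: " + reply);
                Thread.sleep(1000);
            }
        }
    }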

But I had some big problems getting here! First off, I'm new to Spring Boot and messaging,
so I quickly put these services together from examples on the web.

Problems:

The problem was that this only worked with one instance at a time! No bueno!
I started the MessageHandler service: no problem.
I started message-sending-service instance 1: no problem.
I started message-sending-service instance 2: its messages got hijacked by instance 1!
And instance 1 saw them as messages sent after the timeout (no kidding, since it had actually already gotten a response before!)
The problem was that the MessageListenerContainer that was used was still running after the response demand had been fulfilled.
If I started instances 1 and 2 and then started the message handler (love queues!), they got their responses correctly.

So how do I fix this? (Many hours of fun.)

First came the oh-so-fun task of actually finding out what this pattern is called (send-reply/receive-reply and more).

It seems that most people use either a separate queue for each client or even for each request!
This seemed like a performance hog.
But RabbitMQ 3.4.0 introduced a pseudo-queue function for just these occasions.
It's called direct reply-to. We let the code and RabbitMQ handle the response routing (so that the right message comes back to the right sender).

The Docker RabbitMQ was version 3.4.2. Lucky!
Next problem: this is supported from Spring AMQP 1.4.1.

What version of Spring AMQP does Spring Boot 1.2.0.RELEASE pull in? 1.4.0... Oh joy...

I tried that version and it didn't work. I'm not sure if Spring Boot allows me to switch the version of one of its sub-projects; I didn't get it to work, but it might just be my incompetence.

So I took my chances with version 1.2.1.BUILD-SNAPSHOT!

And hey, what do you know, it worked! I deleted the response listener completely from the message sender; the direct reply-to function handles it for us.
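After the fix, the sender shrinks to a single blocking call. As far as I can tell, with Spring AMQP 1.4.1+ against RabbitMQ 3.4+, convertSendAndReceive() uses the direct reply-to pseudo-queue behind the scenes (a sketch; the queue name is my own):

    import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
    import org.springframework.amqp.rabbit.core.RabbitTemplate;

    public class DirectReplyToSender {

        public static void main(String[] args) {
            CachingConnectionFactory cf = new CachingConnectionFactory("localhost");
            RabbitTemplate template = new RabbitTemplate(cf);

            // Blocks until the handler's return value comes back via amq.rabbitmq.reply-to;
            // no reply queue and no reply listener to declare anywhere
            Object reply = template.convertSendAndReceive("demo.queue", "ping");
            System.out.println("Got: " + reply);

            cf.destroy();
        }
    }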


Minor setback:

The "pseudo-queue" for the Direct-to function https://www.rabbitmq.com/direct-reply-to.html#use
Have an official queue name " amq.rabbitmq.reply-to", it needs to be exactly this!

I stumbled upon this in a StackOverflow answer. The Spring documentation actually had a typo here: http://docs.spring.io/spring-amqp/reference/html/amqp.html#direct-reply-to

Under section 3.7.1 it said "amqp.rabbitmq.reply-to"; that "p" should not be there!
And I understand how easy a mistake this is. I assumed it was the other way around: why would it be "amq" instead of "amqp"?

But no problem: I submitted the typo to the Spring JIRA issue site (issue AMQP-458) and the awesome Gary Russell has already fixed it. It'll be in version 1.4.2: https://github.com/spring-projects/spring-amqp/commit/fca82ce3d54bb2024efa6275a2e661cd20829de6

You got to love Open Source!

I'm happy to help fix this, because you don't get any useful errors if you make this typo.
You just don't get any response. The message gets handled, but that's it. The message senders are left waiting for a postman who will never come.


So I hope this will help someone. I will keep you posted with my findings and results. In the end I will probably publish some code; just now it's not so pretty ^^ except the Gradle, that is pretty!

If you're trying to replicate my idea and get stuck, don't hesitate to comment and I will try to help you as best I can.

PS: These services will have their own Docker containers and run from there. They don't need to know about each other or their respective IP addresses. You've got to love messaging systems.

So code away!




Sunday, December 21, 2014

Microservices with Java, Spring Boot and Docker

Intro to microservices

One of the current buzz-words flying around is "Microservices".

Most big Java apps (web-based at least) are packaged as one big WAR file. The project is one big app, a monolith. This comes with some problems for development.

First the obvious: if you change anything, you need to redeploy the whole thing. Even the smallest thing.

Second is scaling. If you need to scale up then you scale the whole application.


So what are microservices? Instead of having a monolith, you make the services separate projects.
A service has just one area of responsibility, e.g. a login service.

Pros:

  • Fast development and deployments
  • Easier to write tests (small project, so you don't disturb other tests)
  • You can redeploy one service without redeploying everything
  • Scaling: just start another instance of the service (needs a good framework for this!)

Problems

So is it a hail mary-technique?

As always, this technique/architecture has its own flaws and problems.

The main con is the complexity of service handling. The services need some way of connecting to each other, and "hard-coding" the connections gives us a BIG risk: if you change one service, all the services that use its API might crash at runtime. Fun to be that admin.

And how do we know that a service is down?
Say the login service goes down. The positive is that the app might still work for the people who are already logged in, but no new logins can occur until the service is up again.
A good health-monitoring system, e.g. Nagios, can check whether the service is down; some can even fix the problem!

Solutions

So how do we solve these problems?

First, the decoupling from the API. If the services are decoupled and don't use each other's APIs one-to-one, we'll escape the hellish nightmare of the total system crash (in this case; there might be others).

So we can use some form of integration system, like an ESB (Enterprise Service Bus) or just a messaging system (e.g. RabbitMQ or ActiveMQ).
The sender puts the message on a queue; the service then grabs the message and handles it (the complexity gets higher).


Second

An awesome system would combine clustering over different locations, load balancing, service discovery and health monitoring.

Some frameworks can actually call the "mother ship" and say "hey, I'm the bla-bla service and I'm ready to rock! I'm at this location".
The service would have an adjacent health monitoring script.

Is there an actual solution for ALL of this? Answer: not that I know of, but there are some puzzle pieces.

My Experiment (the plan!)

I'm going to try to make a simple project using microservices.

I'm going to use Spring Boot for the services. A Spring Boot application is a small application with an embedded Jetty/Tomcat server, and it can be run either as a WAR or by itself.
I'm going to run it by itself. This gives an app of 12-20 MB per service.
Gradle will be used for the build and dependency scripts.
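Running it by itself really is this small; a standalone Spring Boot service boils down to one class plus your controllers (a minimal sketch):

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;

    @SpringBootApplication
    public class ServiceApplication {

        public static void main(String[] args) {
            // Starts the embedded Tomcat/Jetty; no external servlet container needed
            SpringApplication.run(ServiceApplication.class, args);
        }
    }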

After the service app is done, I will put it in a simple Docker image that is built on start.

Ain't it a problem to start all these docker containers (I will have 3-5 services)?
No, I will use fig (http://www.fig.sh/index.html) to start all the services at once, and to document all the start commands, links, port exposures and more. To sum it up: a Docker kick-starter.


Ok so Java: check
Containers: check


Regarding the health monitoring, I will try out Consul (https://consul.io/intro/index.html).
I hope I can rock it with Docker and use both the service discovery and the health monitoring.


Onwards

So right now I have made a test service (hello world in Spring Boot), made a Docker image of it and run it successfully.

I will keep you posted on my results. Hopefully I can put it all on GitHub when it's done, so you can get all the source code.

Another hope I have is that Docker Swarm takes off and either becomes powerful enough on its own or that some awesome frameworks get created that use it.

Sunday, October 26, 2014

Docker and Mule

So I've been looking into Mule ESB (Enterprise Service Bus) for my new job.

After testing it, it seemed like a lot of fun. Another of my projects was to learn Docker (scorching hot right now).

So I thought "Why not combine them?", little did I know that I was in for a world of Internet-searching, documentation exploring and more.

So first some more Info on Mule and Docker:

Mule ESB


Mule is kind of an adapter between services. It's message-based and has a lot of functionality ready out of the box.

Example: say you have a Java site that needs PayPal. Mule can sit in between, take the request over JMS, HTTP and more, pipe it to PayPal and then back again. Maybe you want a Facebook post for every 1000th visitor? No problem.

Why do this when you can code it yourself? One reason is that you can reuse much of the code, you can offload some of the work to the Mule server, or you can go microservice-style and have Mule be the glue between them all.

Docker (Docker.io)

Docker is a way to make Linux Containers (LXC) manageable and shareable. It's a technology for making small VM-like environments where you can isolate your app.

Why? Say that you're a developer. Your app works in your development environment, but that's not always the case in the production environment.
What if you could build up the environment during development and just ship the whole environment? That's what you do with Docker.

It's also a good fit if you want microservices: every service can have its own Docker container.

And it's scalable: you can have servers with CoreOS (made for running in HUGE clusters, we're talking Google-size) and run your Docker containers.

Because that's one of the best perks: your Docker container runs ANYWHERE there is a Docker Engine. You can develop on Windows and Mac with boot2docker via VirtualBox and then deploy to a real Linux server running Docker Engine.

And people share many of their Docker containers via build files (Dockerfiles). You can have your own private registry of images or use the central Docker Hub.

My Problem

So there are a couple of people who have made Docker containers for Mule Server.
And I could not get them to work!

I was a rookie at Docker so I blamed my lack of knowledge.
I followed the guide and it would not work.

The regular Mule Server and my test app worked perfectly.

The Docker version had the right ports open, but it would not give me a response.

I scratched my head...

Spinning up a WildFly server in Docker took 2 commands and worked perfectly.

So I compared the Dockerfiles, but nothing looked weird. Was there something special about Mule?
But why? The creator seemed to get it working without problems.

After that I found a solution!... Or so I thought. I was able to publish the port so I could run it and get a response... from localhost, but not from a remote computer.

But WildFly worked from the remote computer. What was the difference???

Frankly, I thought I was missing a puzzle piece in Docker, like a firewall, some config or something.

After 2 weeks of leaving it and coming back to it (Googling on breaks at work, coming up with new ideas about what it could be), I finally found the answer.

After being really frustrated, I started to really look at my test app and compare it to the Docker container creator's test app.

My app said that localhost should run the HTTP start-point; the other app said 0.0.0.0...

OF COURSE! I had been looking in the wrong place!
I thought I had made the simplest test app ever, a little hello world, but I succeeded in not doing it correctly.

After changing it to 0.0.0.0 it worked flawlessly. So simple that it's extra hard to find.
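The actual fix was in the Mule HTTP endpoint config, but the pitfall applies to any server you put in a container. In plain Java the difference looks like this (an illustrative sketch, not Mule code):

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;

    public class BindDemo {

        public static void main(String[] args) throws Exception {
            // Bound to "localhost", the server only answers requests from inside the container.
            // Bound to 0.0.0.0, it answers on all interfaces, including Docker's bridge.
            HttpServer server = HttpServer.create(new InetSocketAddress("0.0.0.0", 8080), 0);
            server.createContext("/", exchange -> {
                byte[] body = "hello".getBytes("UTF-8");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start();
        }
    }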

But on the upside, I learned A LOT about Docker and how to use it.


So I can recommend Docker and Mule. Docker is not as hard as some think; just know what you're putting in it before blaming Docker.

Code ahoy!