Just as deployment has reached new velocity, users also expect 100 percent availability. When was the last time you saw a “Site temporarily down for maintenance” page? Consumers and end users expect applications to be available 24×7.
Yet DevOps Has Underdelivered on Its Promise
DevOps was supposed to deliver nirvana, at least for greenfield projects: well-designed applications built from services that can be updated or replaced independently; delivery pipelines that automatically push code changes to quality assurance (QA) or, better yet, to production; and platforms that let you run applications almost anywhere without changing source code. If only the promise of DevOps were true. If we could develop and deploy apps this way today, responding to business demands would be a breeze!
The reality, however, is quite different: monolithic applications; duplicated functionality across several applications (built by different units or acquired); disparate technology stacks that make code sharing nearly impossible; risky manual deployments; and more. These challenges slow an organization down and prevent it from taking advantage of technological innovation.
It’s one big headache. Yet, the great thing about legacy apps is that they work! They deliver value and often know more about your business than you do.
But how do you modernize these monolithic apps?
Where’s the middle ground?
What are your options for reaching DevOps nirvana?
One solution is to rewrite everything from scratch. In some cases, it may be the only option (think COBOL running on mainframes), but rewrites are costly. We're talking a multi-million-dollar undertaking, a drain on resources, and a huge risk of failure. For these reasons, a rewrite is rarely the right option.
Don’t Rewrite, Refactor
A different approach to legacy application modernization is to refactor: split legacy apps into microservices and run those microservices on a single platform, such as Docker or Kubernetes. The platform provides everything you need to run microservices on the back end; the front end will look the same to users.
Refactoring legacy apps and developing microservices isn't something you want to rush into. Adopt a gradual approach. After each iteration (and we're not talking about development sprints here), you'll need a working application in play. It may not be in production, but it must be fully functional.
Here are the five principal steps of an iterative approach:
- Plan a controlled move: As a first step, prove that refactoring works. Take the easiest component, something you can easily extract from your apps, then refactor or rewrite it. Essentially, you're taking the version that is closest to the implementation you need and building on it.
- Limit disruption to your app pool: Meanwhile, ensure that the rest of your apps remain unchanged; you don't want to rewrite large amounts of code just to extract one service. For your remaining apps, use stubs that mimic the old behavior while talking to the new service in the background (see the sketch after this list).
- Choose your application delivery platform: Don't postpone selecting your application delivery platform. In our engagements, we've found that the earlier you choose it, the better. If adding a new microservice is a headache, your team will recoil from doing it. Make it as easy as possible, so everyone knows that as soon as they finish the code, the platform will take care of everything else. Once the first component is up and running and the platform is established, repeat the process.
- Prioritize your components: Prioritize components based on ease of extraction, your roadmap, risks, and so on. Mix refactoring with new feature delivery, and avoid focusing too heavily on refactoring, or you'll risk turning it into a full rewrite. And, of course, some functionality may never be refactored: if you know part of your front end will be rebuilt soon, say to support responsiveness or new mobile platforms, why spend time refactoring what will soon be obsolete?
- Stay away from data unification: Data unification projects rarely succeed, because they focus more on data structure than on application needs. Instead, create proxies that transform data from different databases into a format your new microservices understand. Each new service then defines what data it needs and how and where to store it.
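To make the stub idea (and the data-proxy idea from the last step) concrete, here is a minimal sketch in Python. The pricing service, its URL, and the field names are all illustrative assumptions, not a prescribed design: the legacy app keeps its old call signature, but the body now delegates to the extracted service and reshapes the response into the structure the old code expects.

```python
# A minimal sketch of a stub inside the legacy app. The function keeps the
# signature the legacy code has always called, but now delegates to the
# extracted service. The service URL, endpoint, and field names are
# illustrative assumptions.
import requests

PRICING_SERVICE_URL = "http://pricing-service:8080"  # hypothetical service


def get_price(product_code):
    """Same signature as the old in-process call."""
    response = requests.get(
        f"{PRICING_SERVICE_URL}/prices/{product_code}", timeout=5
    )
    response.raise_for_status()
    payload = response.json()
    # Reshape the new service's response into the dict the legacy code expects.
    return {
        "code": product_code,
        "price": payload["amount"],
        "currency": payload["currency"],
    }
```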
Putting It All Together in a Real-Life Use Case
How does this all come together in real life? Let’s look at the process of transitioning from a monolithic application to a microservice, using a concrete example.
Typically, when you hear the words monolithic, enterprise, or legacy, you imagine something complex and archaic, something you don't want to deal with. But think of it this way: that application was likely written by the same engineers working on your apps today; they just used a slightly different paradigm in the application architecture.
Look at these apps and you'll see that, despite their monolithic nature, they are also divided into modules. Also known as layers, these modules typically consist of the database; the object-relational mapping (ORM), the layer responsible for accessing the database; the business logic layer; and finally the layer that generates the user interface (UI). You may also identify a module responsible for integration with other systems; there are usually several of these, and they often stand alone.
The application may also have several UIs: one for the back office and another for the front office. The former is typically implemented with a technology that automatically generates UI forms, which makes it more difficult to separate. The front office UI, however, is usually built on some JavaScript technology and is much easier to separate.
One module that takes up significant real estate in the application is the authorization module, the part that stores user privilege data. Separating this part can be a challenge, since user rights may be stored in the database and be closely tied to that data (more on this later).
Finally, there may be several auxiliary modules in the application: auditing, storage and archiving of historical data, a module managing business process flows, reporting, and so on.
None of these modules is organized around business functionality, yet they are a fundamental part of nearly every legacy application.
With these parallels between legacy and modern in mind, it makes sense that modernizing these monolithic apps starts with selecting application modules, the elements the developers already divided more or less logically, and running them as distinct services in separate containers. These aren't true microservices yet, but we can now begin to manage their lifecycles independently. In the future, we'll be able to divide each service into several smaller services.
Here are the steps of execution:
Run Your Database in a Container
Using a foundation of Docker containers and the Kubernetes container orchestration engine, take your current database and run it in a container. If you use a commercial database, it's wise to consult the vendor first about whether running it in a container is supported. If you're using an open source database (such as MySQL or PostgreSQL), you can run it in a container, even in high-availability mode. You'll also need to choose the right storage solution for containers, especially for an on-premises installation, for which there are several commercial and open source options.
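As a minimal sketch of the idea, here is how you might spin up PostgreSQL in a container with the Docker SDK for Python (pip install docker). The image tag, credentials, and volume name are illustrative, and in a real deployment you would run the database through Kubernetes manifests rather than an ad hoc script:

```python
# A minimal sketch: running PostgreSQL in a container via the Docker SDK
# for Python. Assumes a local Docker daemon; the image tag, credentials,
# and volume name are illustrative.
import docker

client = docker.from_env()

db = client.containers.run(
    "postgres:15",
    name="legacy-db",
    detach=True,
    environment={"POSTGRES_PASSWORD": "change-me", "POSTGRES_DB": "legacy"},
    ports={"5432/tcp": 5432},
    # A named volume so the data survives container restarts.
    volumes={"legacy-pgdata": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
)
print(db.name, db.status)
```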
Pay Attention to the ORM Layer
Once the database is running, turn your attention to the ORM layer. This layer can be exposed as a set of web services at the CRUD level, giving other components a clean interface to the data. Depending on your programming language, a quick Internet search will lead you to a library that converts ORM methods into REST interfaces.
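As an illustration of what that conversion can look like, here's a minimal sketch using Flask and Flask-SQLAlchemy, one option among many. The Customer model, its fields, and the connection string are illustrative; map them onto your existing ORM classes:

```python
# A minimal sketch: exposing one ORM entity as CRUD-level REST endpoints
# with Flask and Flask-SQLAlchemy. The Customer model, its fields, and the
# connection string are illustrative.
from flask import Flask, jsonify, request
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "postgresql://app:change-me@db:5432/legacy"
db = SQLAlchemy(app)


class Customer(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(120), nullable=False)


@app.get("/customers/<int:customer_id>")
def read_customer(customer_id):
    customer = db.get_or_404(Customer, customer_id)
    return jsonify(id=customer.id, name=customer.name)


@app.post("/customers")
def create_customer():
    customer = Customer(name=request.json["name"])
    db.session.add(customer)
    db.session.commit()
    return jsonify(id=customer.id, name=customer.name), 201
```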
Separate the Modules Responsible for Integration
Next, you'll separate the modules responsible for integration. These are usually logically well separated, expose an interface, and have a connection to the database. The front office UI is often built with a JavaScript framework and can easily be packaged in a Docker container. Most auxiliary modules can also be separated relatively simply, since enterprise applications are rarely written entirely from scratch.
For example, an embedded framework may have been used for reporting or for orchestrating business flows. If your application uses such a framework for reporting, you can deploy its standalone edition and transfer the configuration, instead of manually separating the code. Study the architecture of your enterprise application and you will find a way to pull it apart.
Probably the most complex module is the one containing the business logic, which may also contain a UI portion. You can start by placing it all in one container and then, if necessary, gradually divide it into several smaller modules. If the business logic and the UI are tightly coupled, splitting this module into small microservices may not be easy. But remember: we have already separated several other modules, so at this stage we are no longer dealing with a monolithic application, just a large one.
One driver for the next step can be an analysis of the current technology stack, including the stack responsible for the UI. Technology evolves quickly, and it's worth asking how long yours will remain in use: will there be an expert who can support this product in ten years? A second driver is how satisfied users are with the current implementation of the UI.
Since we've already broken up the application, it's worth waiting for the next business feature request before deciding whether to build that feature as a new microservice or adjust the current application to fit. Perhaps this feature will become the first microservice that implements a business requirement.
During this decision process, you may find that changing existing code (for example, code written in an old language) can be costlier than creating a separate microservice. The key point is that you have already established the microservice platform by dividing the application into several services and installing the necessary infrastructure, so introducing a new service is essentially just writing code, with no extra implementation overhead. On the other hand, if the business logic module already keeps its logic and UI separate, you can go further and start dividing it into several microservices.
Consider Your Software Infrastructure
Now a few words about the software infrastructure around the application. To create your microservices architecture, use established microservice patterns (research material and guidance on these are widely available) and select a service mesh, which introduces a proxy alongside each service and manages those proxies centrally.
One such tool is Istio. A service mesh helps you design applications so that they contain little beyond business logic. You can write that business logic in any programming language, containerize it, and Istio takes care of far-from-insignificant concerns such as authentication, authorization, service discovery, intelligent routing and load balancing, traffic control, and security. In other words, all the plumbing a microservice environment needs is placed outside the scope of your application.
The next important infrastructure consideration is a unified approach to managing the API lifecycle. Each service must declare its API. If the application is large, or if there are several applications, it is worth implementing a fully fledged API management platform. This unifies the approach and makes communication between services, and between services and people, more transparent. Together, a service mesh and API management solve the challenging issues of authentication and authorization discussed above.
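To show what declaring an API can look like in code, here's a minimal sketch using FastAPI, one framework that generates an OpenAPI declaration directly from the code. The order-service name, Order model, and route are illustrative assumptions:

```python
# A minimal sketch of a service that declares its API up front. FastAPI
# serves the generated OpenAPI document at /openapi.json, which an API
# management platform can import. The service name, Order model, and route
# are illustrative.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="order-service", version="1.0.0")


class Order(BaseModel):
    id: int
    total: float


@app.get("/orders/{order_id}", response_model=Order)
def read_order(order_id: int) -> Order:
    # Placeholder lookup; a real service would query its own data store.
    return Order(id=order_id, total=0.0)
```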
Be Open and Portable
As a final point, modernizing any legacy application is best achieved with open source, portable tools. To deploy to Kubernetes, we normally establish a pipeline that uses Jenkins for continuous integration (CI) and Spinnaker for continuous deployment (CD). As for the image registry, every cloud provider offers this functionality today, and there are several products available for on-premises deployments; we most commonly use Nexus.
Tip: Don't overuse Jenkins. Organizations tend to build their own deployment mechanisms as Jenkins jobs, which is an inefficient use of resources. Tools like Spinnaker already implement most deployment scenarios, for example, blue/green deployments and canary releases.
We also recommend investing in unit and integration tests for your microservices. Having those tests will determine how quickly and efficiently you can deliver microservices, and will take pressure off your QA team.
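As a sketch of the kind of integration test we mean, using pytest and requests (the /customers endpoint and SERVICE_URL environment variable are illustrative; point them at your own service):

```python
# A minimal integration-test sketch for one extracted microservice, using
# pytest and requests. The /customers endpoint and SERVICE_URL environment
# variable are illustrative.
import os

import requests

BASE_URL = os.environ.get("SERVICE_URL", "http://localhost:8080")


def test_create_then_read_customer():
    # Create a record through the service's public API...
    created = requests.post(f"{BASE_URL}/customers", json={"name": "Acme"})
    assert created.status_code == 201
    customer_id = created.json()["id"]

    # ...then verify the service returns the same data on read.
    fetched = requests.get(f"{BASE_URL}/customers/{customer_id}")
    assert fetched.status_code == 200
    assert fetched.json()["name"] == "Acme"
```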
The Result: Less Risk, Immediate Benefits, and Modernization
Although refactoring and containerizing your apps is not necessarily faster or cheaper than a full rewrite, it is a much less risky approach to application modernization. Because the approach is iterative, you can manage your risk profile more effectively as you go. You'll also see immediate benefits: instead of waiting for a rewrite project to complete before seeing results, you deliver outcomes with every iteration. You can also mix refactoring well with the delivery of new features, building new development on top of your newly refactored components.
Use this approach to modernize your legacy applications: keep them, update them, and rewrite only what truly needs to be rewritten.