I discussed a successful use case of application migration in my previous blog. In this blog, I cover the other aspect of application migration – managing monoliths. Enhancing an application while managing IT systems is like walking a tightrope. It is a delicate balancing act – even the most experienced developers are cautious about changing or adding features for fear of disrupting operations. If, for some unforeseen reason, an application fails in production, the blame game starts – production personnel blame developers, developers blame operations, and eventually the project team blames deadlines and budgets. Managing monoliths is no different. Even a well-architected monolith can eventually become complex and unmanageable, limiting the capacity for continuous deployment.
One way to offset this domino effect is re-architecting a monolith into microservices. You could call microservices the MVP (Most Valuable Player) of the application world. They enable rapid scalability and agility – something that changing business requirements demand and that traditional applications could never provide.
What is a Microservice?
Think of it as a software architecture pattern. You decompose a monolithic application into smaller, manageable chunks of functionality, where each mini-application is autonomous and communicates over lightweight mechanisms such as HTTP APIs. The idea is to design your application so that each individual component can be updated and deployed at will without disrupting other logical flows. A scalable solution, microservices enable true modularity in terms of both language and data.
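To make the pattern concrete, here is a minimal sketch of a single microservice using only Python's standard library. The "pricing" capability, the route, and the in-memory data are illustrative assumptions, not part of any specific product: the point is simply that one service owns one narrow function and exposes it over a small HTTP API.

```python
# Minimal sketch of one autonomous microservice (illustrative only).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

PRICES = {"sku-1": 9.99, "sku-2": 24.50}  # hypothetical in-memory data store

class PricingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Route: GET /price/<sku> returns the price as JSON, 404 otherwise.
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "price" and parts[1] in PRICES:
            body = json.dumps({"sku": parts[1], "price": PRICES[parts[1]]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # silence default per-request logging for the sketch

def serve(port=8081):
    # Other services call this one over HTTP; none share its data store.
    HTTPServer(("127.0.0.1", port), PricingHandler).serve_forever()
```

Because the service owns its data and its interface, it can be redeployed on its own schedule without touching the rest of the system.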
Is it that simple?
Some of the best practices followed in building a monolithic architecture may not directly translate to a microservice-based architecture. Your approach should be to identify existing functionalities in the monolith that are non-critical and loosely coupled to the rest of the application. Then, replace that functionality while minimizing the impact of migration. It is a paradigm shift, and adoption should be an iterative process.
Identify application dependencies
The first step is to conduct a microanalysis of the application to identify code interdependencies. Many tools are available in the market for this. At Virtusa, our Cloud Code Analysis Tool (CCAT) identifies potential failures, discrepancies, and code risks. It also checks the application's code readiness for migration.
Component identification and planning
Transforming a monolith is not a one-size-fits-all exercise. Neither is it a big-bang effort. Designing and developing microservices in a phased manner is the best approach. The starting point could be to create a Request Handler that accepts API requests and redirects each one to a new microservice or to the legacy application, depending on the case. The next phase could be to modularize the monolith along application layers such as the presentation layer, business logic layer, and data access layer. Sub-modularizing these layers by business functionality can also be an option. The final phase could be to 'decompose' or break down the monolith into functionally independent modules, a.k.a. microservices. Finally, the Request Handler or a load balancer can direct requests to the new microservices.
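The Request Handler in the first phase can be sketched as a simple router that sends already-migrated paths to the new services and everything else to the monolith (the "strangler" pattern). The hostnames and the set of migrated routes below are hypothetical placeholders:

```python
# Hedged sketch of the Request Handler routing step; the hosts and
# the MIGRATED set are illustrative assumptions.
MIGRATED = {"/orders", "/inventory"}  # functionality already extracted

LEGACY_BASE = "http://legacy-app.internal"      # hypothetical hosts
MICROSERVICE_BASE = "http://services.internal"

def route(path: str) -> str:
    # Requests for migrated functionality go to the new services;
    # everything else still falls through to the monolith.
    prefix = "/" + path.strip("/").split("/")[0]
    base = MICROSERVICE_BASE if prefix in MIGRATED else LEGACY_BASE
    return base + path
```

As each phase extracts another module, its route prefix moves into the migrated set, so traffic shifts to the new services incrementally rather than in one cut-over.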
As fancy as it sounds, putting a microservice to work comes with the complexity of interactions between individual services. Code refactoring should account for challenges like asynchronous communication, cascading failures, data consistency problems, service discovery, and service authentication.
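One common defense against cascading failures is a circuit breaker: after repeated failures, callers stop hammering a sick downstream service and fail fast until a cool-down elapses. The sketch below is a minimal, assumption-laden illustration (thresholds and names are invented), not a substitute for a production library:

```python
# Illustrative circuit-breaker sketch; max_failures and reset_after
# are arbitrary example values.
import time

class CircuitBreaker:
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        # While open, fail fast instead of piling load onto a failing service.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # a success resets the failure count
        return result
```

Wrapping each remote call this way keeps one slow or failing service from dragging down every service that depends on it.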
Data model refactoring
A typical monolith has a comprehensive, unified data model. In the case of microservices, however, each service encapsulates a business function that includes its data layer as well. To achieve this, we can refactor the data model to use private tables, private schemas, or a database-per-service model. The goal should be to avoid more than one service writing to the same table. Another strategy could be to handle transactions in the application layer instead of the database layer.
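The database-per-service idea can be illustrated with two separate SQLite stores; the service names and schemas below are hypothetical. Each service writes only to its own database, and the other service learns about changes through an API call or event rather than a shared table:

```python
# Sketch of database-per-service isolation (illustrative schemas).
import sqlite3

def make_service_db(ddl):
    # Each microservice owns its own database; no other service writes to it.
    conn = sqlite3.connect(":memory:")
    conn.execute(ddl)
    return conn

orders_db = make_service_db(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, sku TEXT, qty INTEGER)")
billing_db = make_service_db(
    "CREATE TABLE invoices (id INTEGER PRIMARY KEY, order_id INTEGER, total REAL)")

# The orders service writes only to its own store ...
orders_db.execute("INSERT INTO orders (sku, qty) VALUES ('sku-1', 2)")
# ... and the billing service, notified via API or event, records its
# own view of the order in its own store.
billing_db.execute("INSERT INTO invoices (order_id, total) VALUES (1, 19.98)")
```

Because neither service can reach the other's tables, cross-service consistency has to be handled explicitly in the application layer, which is exactly the trade-off noted above.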
The co-existence of monoliths and microservices emphasizes the need for enterprise-wide support, especially cross-functional collaboration. Success largely depends on how we redefine the core elements of People, Process, and Technology/Tools in more agile, productive, and valuable ways.
Microservices and Cloud
Cloud technology is an alluring proposition for microservices due to capabilities such as on-demand resources and low-cost managed services (monitoring, logging, security) that deliver scalability while reducing operational complexity. Cloud hosting enables enterprises to automate provisioning and deployment to achieve continuous integration, deployment, and delivery.
AWS, GCP, and Azure offer cloud services to implement the different layers of a typical microservices architecture. Leveraging a container technology such as Docker, along with clustering and orchestration solutions such as Kubernetes, AWS EC2 Container Service (ECS), Docker Swarm, and Google Container Engine (GKE), enables companies to choose the right tooling for the job and adopt it quickly without complex licensing cycles.
Until a few years ago, monoliths were considered the best – one deployment unit that does it all. This spelt doom: monoliths, with their tightly interdependent components, limited agility and innovation, which prolonged time to market. Organizations looking to maximize productivity, improve agility, resilience, and scalability, reduce time to market, and enhance customer experience should therefore embrace microservices. Their modular architecture accelerates development, testing, and deployment, and helps organizations meet current and future business requirements.