Introduction to microservices
- By Arvind Chandaka and Ovais Mehboob Ahmed Khan
- 7/26/2021
Core fundamentals of microservices
A fundamental understanding of microservices is vital to understanding the examples used throughout this book, as well as the real-life case study.
First, it’s important to know that microservices are autonomous, independent, and loosely coupled services that each cover a business scenario and collaborate with one another to form an application. Each service has the back-end code needed to perform a particular operation. In addition, each service will most likely expose an endpoint through which it communicates with other services so that the application functions as a whole. Finally, these services also have a data access layer for maintaining service-specific data in their own individual databases.
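To make this concrete, the following is a minimal, hypothetical sketch of a single microservice in Python. Flask and SQLite are illustrative choices only (not prescribed by this book), and the "catalog" service name and its endpoint are invented for the example: one HTTP endpoint plus a service-specific data store that no other service touches directly.

```python
# A minimal, hypothetical "catalog" microservice: one HTTP endpoint
# plus its own service-specific data store (SQLite here for simplicity).
import sqlite3
from flask import Flask, jsonify

app = Flask(__name__)
DB_PATH = "catalog.db"  # each service owns its own database

def init_db():
    with sqlite3.connect(DB_PATH) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)"
        )
        conn.execute("INSERT INTO items (name) VALUES ('sample item')")

@app.route("/items")
def list_items():
    # Data access layer: only this service reads or writes its database.
    with sqlite3.connect(DB_PATH) as conn:
        rows = conn.execute("SELECT id, name FROM items").fetchall()
    return jsonify([{"id": r[0], "name": r[1]} for r in rows])

if __name__ == "__main__":
    init_db()
    app.run(port=5001)  # other services would call http://localhost:5001/items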
Benefits
We have seen some of the scenarios, and understood some of the pains, that enterprise architects, developers, and consultants face when it comes to application architecture. Based on our experiences, we have divided the benefits of microservices into three categories:
Agility
Scalability
Technology diversity
Agility
As shown in Figure 1-9, microservices represent individual functions within an overall application, and encapsulating business functionality into small, targeted services is one of the major sources of agility. A traditional monolithic application has three layers: the presentation layer, the business layer, and the data access layer. In that architecture, all the business logic is packed together in a single layer. With microservices, that logic is broken apart so that each microservice represents one component of the overall business logic and a single team can be responsible for building it out. In fact, this attribute alone is considered one of the major hallmarks of microservices.
This is a way to ensure a logical separation of concerns. When individual teams are aligned to individual microservices, development tends to lend itself to more agile methodologies, and sections of the app can ultimately be changed without worrying about issues stemming from tightly coupled functions.
The second agility benefit is that each service can evolve independently and be deployed frequently. This assumes, of course, that the service endpoints are well defined and agreed upon by the teams building and consuming the services. In the team setting mentioned earlier in this chapter, we noted that each service can be built out by a different team because the overall business logic of the application is further abstracted. There are no dependencies on the deployment side either: separate continuous integration and deployment pipelines can be set up so that each team can make changes to its respective service. Ultimately, one team’s work on a particular service will not affect another team’s work, and this independence reduces time to market.
The result is development teams that are more agile and can produce at a higher velocity because they can focus on their individual services. Teams are now insulated from the internal intricacies of other teams’ work, from both a technological and a process standpoint, which means they can be more productive overall.
Scalability
Microservices scale well in relation to the application as a whole. Each service can scale according to its own demand without affecting overall performance. This is possible because business functions are abstracted into separate services, and the supporting components of the architecture give each service the capacity to handle more traffic and requests while remaining highly available. In fact, scalability was the feature that drove the transition from SOA to microservices.
We touched on some of these components earlier. Granular services mean that requests are directed to particular areas within the architecture. When services must communicate with one another to complete an end-to-end scenario, well-defined communication patterns and messaging channels between them help streamline that logic. In a microservices architecture, patterns such as publish-subscribe make communication easier without throttling the performance of the application, as the sketch below illustrates.
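The following purely illustrative Python sketch (an in-process stand-in, not any particular broker or product) shows the essence of publish-subscribe: the publisher emits an event without knowing which services consume it, so subscribers can be added or scaled independently. The topic and service names are hypothetical.

```python
# Minimal in-process illustration of publish-subscribe: the publisher does not
# know its subscribers, so services stay loosely coupled and can scale independently.
from collections import defaultdict

subscribers = defaultdict(list)  # topic -> list of handler functions

def subscribe(topic, handler):
    subscribers[topic].append(handler)

def publish(topic, event):
    for handler in subscribers[topic]:
        handler(event)  # in a real system, a message broker delivers this asynchronously

# Two hypothetical services react to the same event without knowing about each other.
subscribe("order.created", lambda e: print("billing service charges order", e["order_id"]))
subscribe("order.created", lambda e: print("shipping service schedules order", e["order_id"]))
publish("order.created", {"order_id": 42})
```

With a real broker, each subscriber runs in its own process and can be scaled out independently; the publisher's code does not change.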
Also, microservices in enterprise scenarios tend to be containerized, so they can be deployed efficiently and frequently: the services are containerized with tools such as Docker, deployed at scale with Kubernetes, and released quickly through continuous integration and deployment pipelines.
Technology diversity
Balance and flexibility are often overlooked in development environments, which gives way to rigid architectures in which specific problems require specific answers.
Developers who work on applications that utilize microservices architectures can mix and match different programming platforms and storage technologies. This offers great flexibility for developers, architects, and consultants to design with almost limitless permutations for a final business solution. This flexibility to leverage numerous technologies is an added benefit that we see time and time again in enterprise environments.
Consequently, applications can be refactored, modernized, or experimented on with new languages and practices, one service at a time. Ordinarily, the introduction of new technology into longstanding enterprise environments creates new problems and headaches for incumbent IT professionals; it signals the start of large-scale refactoring and modernization projects, which equate to massive expenditures of time and money. Microservices change the way we think about these initiatives: instead of generating more problems, they provide solutions for IT managers.
To understand this better, suppose we started with a monolithic application a decade ago to develop a product. Over the last 10 years, the application code base has become huge, and the database has become monolithic, too. Today, we realize that the technology we used earlier is getting outdated and that modifying the existing application is difficult because one change can potentially break other pieces of the monolith. We want to provide a modern user experience to the consumer with new technologies and applications. This is where microservices solve these problems. With microservices, because the application is segregated into a set of different services, the front end could be a micro front-end or even a basic client containing some HTML pages that consume those services to present the application’s features. That segregation allows teams to independently make improvements to their own service with minimal impact on the rest of the application. Adding new functionality is not only easy, but the adoption of cutting-edge technologies is also faster and complementary.
Challenges
We’ve seen some of the fantastic benefits of using microservices architecture, many of which have themes related to componentization and flexibility. On the other hand, many of these same benefits pose new challenges for your IT professionals as well.
These challenges include:
Learning curve
Deployment
Interaction
Monitoring
Learning curve
In the modern enterprise environment, we have seen the shift from a traditional team setting to a DevOps-oriented team. A traditional company might have individualized teams of developers, QA engineers, an infrastructure team, an information security team, an operations team, and database admins—all in individual silos that have to work together in a waterfall-oriented project.
Today, agile projects are the norm, and teams in a DevOps setting are almost mandatory. That means teams are composed of a smattering of people from the different traditional teams, forming a cross-functional team that can iteratively build out and maintain a given function.
However, this means that everyone on the agile team assigned to building out a particular microservice must ramp up on the different kinds of technology selected for the final design.
The number of technologies here poses a problem: more options cause decision fatigue and can exacerbate the learning curve. From the developer perspective, a design choice could be made to use Java for the business logic in a particular microservice. If that isn’t the primary language the developers are used to building with, then ramping up on its syntax, its semantics, and even its ecosystem of available libraries could be a significant barrier, posing both time and fiscal challenges once training time and resources are taken into consideration.
This situation isn’t unique to developers; it can happen to any role within your agile team. Take an infrastructure team that is responsible for deploying the underlying components needed to build out the application. The decision about which tool to use for infrastructure as code (IaC), such as Terraform or ARM templates, and for build and deployment pipelines, such as Jenkins or Azure DevOps, could cause delays and unbudgeted training expenditures.
Building expertise in many different kinds of technology is difficult for teams in general, and it is even more so when working with a microservices architecture. That being said, good planning and an understanding of your team’s strengths and weaknesses enable you to make the right decisions, balancing time and money so that the team stays productive with microservices despite these challenges.
Deployment
Agility is one of the big benefits of microservices, particularly because builds and deployments are faster and more frequent when teams own smaller parts of an overall application. However, this benefit comes with difficulty: faster and more frequent deployments demand a strong focus on, and expertise in, automation.
A manual process for builds and deployments would be cumbersome and painstaking, so automation is required to reduce this overhead. The problem runs deeper, though, because automating continuous integration and continuous deployment pipelines requires specialized technologies and skills, and the more customized your environment is, the more complex those pipelines are to set up. Again, this can hinder the potential benefits of using microservices.
Let’s understand this better by walking through a scenario. Assuming the code base for your service is already complete, we need to decide where the code will be stored. Traditionally, tools such as GitHub provide repos and source code version control, which developers use to push code, generate builds, and make improvements to the service. The key integration is that a push to the repo kicks off the workflow for a build. There are many ways to do this with your tools, but you then need to create the build from the artifacts produced from the code and run it through tests to ensure it works before moving it over for automatic deployment to your various environments. Each layer and step of this is complex and requires a deep understanding of multiple roles to work. This is why demand for DevOps and automation engineers is so high. Furthermore, the tools you could use for this end-to-end workflow all vary in capability and have their own associated learning curves.
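As a deliberately simplified, hypothetical illustration of the stages a hosted CI/CD system automates on every push, consider the Python sketch below. In practice this work is done by tools such as Jenkins, GitHub Actions, or Azure DevOps rather than a hand-rolled script; the image name and the deploy step here are placeholders.

```python
# Hypothetical, simplified stand-in for what a CI/CD pipeline automates on each
# push: run tests, build an artifact, then hand it off for deployment.
import subprocess
import sys

def run(stage_name, command):
    print(f"--- {stage_name} ---")
    result = subprocess.run(command, shell=True)
    if result.returncode != 0:
        sys.exit(f"{stage_name} failed; stopping the pipeline")

run("test", "python -m pytest")                       # gate the build on passing tests
run("build", "docker build -t my-service:latest .")   # package the service as a container image
run("deploy", "echo 'placeholder: push the image and roll it out to an environment'")
```

Even this toy version hints at why real pipelines need expertise: each stage has its own failure modes, environments, and credentials to manage.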
That being said, many enterprise IT professionals are observing and reacting to this need by seeking out people to accomplish it in their environments, whether from incumbent talent, hired talent, or consultants brought in for these particular tasks. Microservices aside, to be successful in an agile IT world, we see this as a need you cannot ignore.
Interaction
Adding to the knowledge base needed to work with microservices are communication patterns and messaging channels. In a microservices architecture, although services are disparate and independent, the overall application still requires services to talk to one another to trigger actions and functions and form a cohesive end-to-end workflow. Therefore, knowing messaging patterns and the related technology is vital.

In a traditional monolithic application, all the layers are stacked on top of each other, allowing interfacing between all parts of the application. In microservices, each function is a miniaturized monolithic application of its own, but now it needs to communicate with several others to complete an end-to-end scenario and provide a rich user experience. Seamless communication and independent scaling can only be achieved if services can communicate with one another asynchronously. This gave rise to technologies such as service buses and queues to help delineate communication between services.
Let’s walk through an example that will reappear in detail in Chapter 6, which is how we implemented interaction with our OAS case study. The goal of this application is to provide a one-stop shop for people to auction off items, bid for items, pay once a bid is won, and subsequently receive the items they’ve purchased. This is a simplification of the scenario, but as you can see, with the moving parts of an application in place, several systems must talk to each other. We can see that the auction and bid services must communicate with each other, the front-end would have to communicate to all the back-end services, and the payment service must communicate with the bid services.
A service-oriented architecture has a single enterprise service bus and doesn’t account for all of these scenarios. Thus, a more loosely coupled architecture will require you to learn more patterns (such as request/response and pub/sub) and messaging technologies (such as Apache Kafka and RabbitMQ) to be productive, as the sketch below suggests. Again, this is a burden on the developer and infrastructure teams.
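To give a flavor of what this looks like in practice, here is a hedged Python sketch using the pika client for RabbitMQ (one of many possible choices); the exchange name, event payload, and service roles are hypothetical. A bid service publishes an event to a fanout exchange, and any interested service, such as the auction service, binds a queue and reacts to it.

```python
# Hypothetical pub/sub between services using RabbitMQ via the pika client.
# In a real deployment the publisher and subscriber run as separate processes;
# they are combined here only to keep the sketch self-contained.
import json
import pika

# Connect to a local RabbitMQ broker (assumed to be running on localhost).
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="bid-events", exchange_type="fanout")

# Subscriber side (e.g., the auction service): bind a temporary queue to the exchange.
queue = channel.queue_declare(queue="", exclusive=True).method.queue
channel.queue_bind(exchange="bid-events", queue=queue)

def on_bid(ch, method, properties, body):
    print("auction service received:", json.loads(body))

channel.basic_consume(queue=queue, on_message_callback=on_bid, auto_ack=True)

# Publisher side (e.g., the bid service announcing a new bid).
channel.basic_publish(
    exchange="bid-events",
    routing_key="",
    body=json.dumps({"auction_id": 7, "amount": 125.0}),
)

channel.start_consuming()  # blocks and delivers events as they arrive; Ctrl+C to stop
```

The publisher never names its consumers, which is exactly the loose coupling that lets the payment, auction, and bid services evolve and scale independently.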
However, there are standard ways of learning this information, so although there is a learning curve, the knowledge you gain will be useful in designing further architecture to handle various interactions.
Monitoring
Monitoring is a core component of many developers’ day-to-day jobs. It is vital for understanding the performance of your application, helping identify bottlenecks, and supporting overall troubleshooting, and it is something already ingrained in the mindset of most developers. However, monitoring gets more difficult, and becomes a strict requirement, when it comes to microservices.
There are several choices when it comes to monitoring, but we have found certain technologies and frameworks that work particularly well for microservices. One of the most useful things to do is to understand the native features available from the cloud provider you are using. Today, many businesses build modern, cloud-native applications, so there are likely cloud architects on your teams who are experts in the monitoring capabilities native to your cloud platform. It is extremely valuable to leverage their insights and skills in building this piece of the puzzle.
In the OAS, we leveraged Azure, so our application metrics and performance are recorded by tools such as Azure Monitor, Log Analytics, and Application Insights, among others. Whatever your platform might be, the key to addressing this potential challenge is being informed and intelligent about the different options available to you, whether they are cloud-native, third-party, or open-source tools, so that you can make good decisions for your monitoring framework.
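As one illustrative option among many (not the only way the OAS emits telemetry), application code can send logs to Application Insights through the OpenCensus Azure exporter for Python. The service name, log messages, and connection string below are placeholders you would replace with your own.

```python
# Illustrative only: send application logs to Azure Application Insights using
# the opencensus-ext-azure exporter. The connection string is a placeholder.
import logging
from opencensus.ext.azure.log_exporter import AzureLogHandler

logger = logging.getLogger("bid-service")  # hypothetical service name
logger.setLevel(logging.INFO)
logger.addHandler(
    AzureLogHandler(connection_string="InstrumentationKey=<your-key-here>")
)

# These records show up in Application Insights and can be queried from Log Analytics.
logger.info("bid accepted", extra={"custom_dimensions": {"auction_id": 7}})
logger.warning("payment service responded slowly")
```

Whichever tooling you choose, the point is the same: instrument each service consistently so that bottlenecks and failures can be traced across service boundaries.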