Playing with Service Mesh – Linkerd and Azure Kubernetes Service

As we know, Microsoft announced many innovations this year at KubeCon Barcelona. I think one of the greatest was SMI (Service Mesh Interface).

As far as I understand, SMI provides “interoperability” between service meshes, much as AMQP does for messaging. In essence, it offers a standard interface for service meshes on Kubernetes. In this way, it gives us an abstraction that lets us use whichever service mesh technology we want without falling into vendor lock-in.

After the announcement of SMI, I had the opportunity to review the Linkerd service mesh, which I had been thinking about for a long time, and I wanted to write something about it.

Service Mesh, huh?

First of all, I would like to briefly mention what a service mesh is and what it provides to us.

As we know, many organizations are trying to adapt to microservice architecture in order to keep up with today’s technology and market, and to capture a bigger share of that market.

The important point in this adaptation process is how the services that have been decoupled from each other will communicate with each other in a fast, resilient, and secure manner, along with how load balancing, traffic management, and health monitoring will be handled. We are already implementing many of these requirements via different tools. For example, we implement the resiliency concern using frameworks such as Polly, or via API gateways.

Well, what does the service mesh provide?

A service mesh, by managing service-to-service communication, allows us to decouple network concerns such as “resiliency”, “scalability”, “security”, and “monitoring” from our code instead of dealing with different solutions for each. It also provides these capabilities from a single point.

Well, Linkerd?

Linkerd is an open-source service mesh for Kubernetes that is hosted by the CNCF.

If we look at how it works, it takes its place next to each service as a transparent sidecar proxy instance. It encapsulates and handles the service-to-service communication complexities for us: instead of service A calling service B directly, the request flows through the services’ local proxies.

Here are some key features of Linkerd:

  • Intelligent Load Balancing (HTTP/HTTP2, gRPC): It uses an algorithm called EWMA (Exponentially Weighted Moving Average) to send requests to the fastest endpoint
  • Automatic Retries and Timeouts
  • Automatic mTLS
  • Powerful Telemetry and Monitoring features: One of the key features for Observability
  • Dashboard and Grafana

If you want, you can find more detailed information about its architecture here.

Prerequisites

Before starting the installation of Linkerd, we need a Kubernetes cluster. At this point, I will use Azure’s managed Kubernetes Service (AKS). If you don’t have an AKS cluster, you can create one here.

First, let’s log in to Azure with the following command.

az login

NOTE: The Azure CLI should be installed for these operations. If it isn’t, you can get it here.

Then we need to get the required credentials to access the cluster with the following command.

az aks get-credentials --resource-group {YOUR_AKS_RESOURCE_GROUP} --name {YOUR_AKS_NAME}

Now we are ready for the installation of Linkerd.

In order to proceed with the installation, we need to complete the first 3 steps here: installing the Linkerd CLI, validating the cluster, and installing the Linkerd control plane.
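
As a rough sketch, assuming the standard installation script from the getting-started guide, those steps look like this:

curl -sL https://run.linkerd.io/install | sh
export PATH=$PATH:$HOME/.linkerd2/bin
linkerd check --pre
linkerd install | kubectl apply -f -

After completing the steps, let’s use the following command in order to make sure everything went fine.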

linkerd check

Then we should see that all the checks have passed.

Let’s Play!

So we are ready to mesh. In order to perform a demo, I developed 3 basic APIs, each with Swagger enabled.

In order to get the full product response, the “Product.Gateway.API” will send requests to both “Product.API” and “Price.API” for us. After aggregating the relevant responses, it will return the full product response to us.

You can find the APIs here.

First, let’s look at the “ProductsController” of the “Product.Gateway.API”.

using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;
using Newtonsoft.Json;

namespace ProductGateway.API.Controllers
{
    [Route("api/products")]
    [ApiController]
    public class ProductsController : ControllerBase
    {
        private readonly IHttpClientFactory _clientFactory;
        private readonly IConfiguration _configuration;

        public ProductsController(IHttpClientFactory clientFactory, IConfiguration configuration)
        {
            _clientFactory = clientFactory;
            _configuration = configuration;
        }

        [HttpGet("{productId}")]
        public async Task<ActionResult<GetProductResponse>> Get([FromRoute]int productId)
        {
            var productDetail = GetProductDetailAsync(productId);
            var productPrice = GetProductPriceAsync(productId);

            await Task.WhenAll(productDetail, productPrice);

            return Ok(new GetProductResponse
            {
                ProductId = productDetail.Result.ProductId,
                Name = productDetail.Result.Name,
                Description = productDetail.Result.Description,
                Price = productPrice.Result.Price
            });
        }

        private async Task<GetProductDetailResponse> GetProductDetailAsync(int productId)
        {
            GetProductDetailResponse productDetailResponse = null;

            HttpClient client = _clientFactory.CreateClient();

            string productApiBaseUrl = _configuration.GetValue<string>("Product_API_Host");

            HttpResponseMessage response = await client.GetAsync(requestUri: $"{productApiBaseUrl}/api/products/{productId}");

            if (response.IsSuccessStatusCode)
            {
                productDetailResponse = JsonConvert.DeserializeObject<GetProductDetailResponse>(await response.Content.ReadAsStringAsync());
            }

            return productDetailResponse;
        }

        private async Task<GetPriceResponse> GetProductPriceAsync(int productId)
        {
            GetPriceResponse productPriceResponse = null;

            HttpClient client = _clientFactory.CreateClient();

            string priceApiBaseUrl = _configuration.GetValue<string>("Price_API_Host");

            HttpResponseMessage response = await client.GetAsync(requestUri: $"{priceApiBaseUrl}/api/prices?productId={productId}");

            if (response.IsSuccessStatusCode)
            {
                productPriceResponse = JsonConvert.DeserializeObject<GetPriceResponse>(await response.Content.ReadAsStringAsync());
            }

            return productPriceResponse;
        }
    }

    public class GetProductResponse
    {
        public int ProductId { get; set; }
        public string Name { get; set; }
        public string Description { get; set; }
        public double Price { get; set; }
    }

    public class GetProductDetailResponse
    {
        public int ProductId { get; set; }
        public string Name { get; set; }
        public string Description { get; set; }
    }

    public class GetPriceResponse
    {
        public int ProductId { get; set; }
        public double Price { get; set; }
    }
}

In the “Get” method, we are simply sending requests to the relevant APIs in order to get the product detail and price information. We are also reading the base URLs of the APIs from configuration. When we deploy the APIs on Kubernetes, we will set these API URLs as environment variables.

In order to dockerize the APIs, we need a Dockerfile for each of them.

Sample dockerfile for the “Product.Gateway.API“:

#Build Stage
FROM microsoft/dotnet:2.2-sdk AS build-env

WORKDIR /workdir

COPY ./src/ProductGateway.API ./src/ProductGateway.API/

RUN dotnet restore ./src/ProductGateway.API/ProductGateway.API.csproj
RUN dotnet publish ./src/ProductGateway.API/ProductGateway.API.csproj -c Release -o /publish

FROM microsoft/dotnet:2.2-aspnetcore-runtime
COPY --from=build-env /publish /publish
WORKDIR /publish
EXPOSE 5000
ENTRYPOINT ["dotnet", "ProductGateway.API.dll"]

I will use Azure Container Registry service as a container registry.

Let’s build the images and push them to the container registry with the following commands.

docker build -f ./*.Dockerfile . -t {YOUR_CONTAINER_REGISTRY}/*-api:dev

az acr login --name {YOUR_CONTAINER_REGISTRY_NAME}

docker push {IMAGE_NAME_WITH_TAG}
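
For instance, for the “Product.Gateway.API” (assuming the Dockerfile above is saved as ProductGateway.API.Dockerfile, and using the registry name that appears in the deployment manifest below), the commands would look like this:

docker build -f ./ProductGateway.API.Dockerfile . -t ggplayground.azurecr.io/product-gateway-api:dev

az acr login --name ggplayground

docker push ggplayground.azurecr.io/product-gateway-api:dev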

Also, we will use the following YAML files for the Kubernetes deployment operation.

Sample deployment and service file for the “Product.Gateway.API“:

---
apiVersion: v1
kind: Namespace
metadata:
  name: linkerd-test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-gateway-api-deploy
  namespace: linkerd-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: product-gateway-api
  template:
    metadata:
      labels:
        app: product-gateway-api
    spec:
      containers:
      - name: product-gateway-api
        image: ggplayground.azurecr.io/product-gateway-api:dev
        imagePullPolicy: Always
        ports:
        - containerPort: 5000
          name: http
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: Product_API_Host
          value: http://product-api-svc.linkerd-test:9090
        - name: Price_API_Host
          value: http://price-api-svc.linkerd-test:8080
---
apiVersion: v1
kind: Service
metadata:
  name: product-gateway-api-svc
  namespace: linkerd-test
spec:
  type: LoadBalancer
  selector:
    app: product-gateway-api
  ports:
  - port: 80
    targetPort: http

We will deploy the APIs to a namespace called “linkerd-test”. We said we would set the related API URLs as environment variables in the “Product.Gateway.API”. If you look carefully, we have set both the “Product.API” and “Price.API” service addresses under the “env” section.

Now let’s deploy these 3 APIs with the relevant YAML files.

kubectl apply -f price-api-deploy.yaml
kubectl apply -f product-api-deploy.yaml
kubectl apply -f product-gateway-api-deploy.yaml

Let’s check whether the deployments completed successfully, as below.

kubectl get deploy -n linkerd-test

Currently, everything seems fine.

Let’s also test the APIs. To do this, we need to get the service address where “Product.Gateway.API” is exposed to the internet.

kubectl get svc -n linkerd-test

NOTE: It may take several minutes to get an External IP address.

In order to perform the test, I have added a dummy product with id “1” to both the “Product.API” and the “Price.API”.

For testing purposes, let’s send a request to the “Product.Gateway.API” as follows.

http://{YOUR_EXTERNAL_IP}/api/products/1
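
For a quick check from the terminal, a simple curl against the same endpoint works as well; the response is the aggregated product with the productId, name, description, and price fields from the GetProductResponse class above:

curl http://{YOUR_EXTERNAL_IP}/api/products/1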

Great, APIs are working.

Now, all we need to do for meshing is to inject Linkerd into the APIs. To do this, we will use Linkerd’s CLI as follows.

kubectl get -n linkerd-test deploy -o yaml | linkerd inject - | kubectl apply -f -

With the above command line, we are injecting Linkerd into our applications under the “linkerd-test” namespace.
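
Under the hood, depending on the Linkerd version, inject either adds the proxy sidecar containers directly to the manifests or simply marks the pod template with an annotation roughly like the one below, letting Linkerd’s proxy injector add the sidecar at deployment time:

spec:
  template:
    metadata:
      annotations:
        linkerd.io/inject: enabled

Either way, if we list the pods in the “linkerd-test” namespace again, each pod should now be running an additional linkerd-proxy container.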

Now let’s see what is going on from the Linkerd dashboard.

To access the dashboard:

linkerd dashboard &

In the overview section, the dashboard welcomes us with a summary screen. Here we can see the deployments and pods. It is also possible to see which services we have meshed.

One of my favorite parts is the metrics. We can see information such as “Success Rate”, “RPS”, and “Latency” for each service.
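
If you prefer the terminal, roughly the same metrics can also be pulled with the CLI, for example with something like:

linkerd -n linkerd-test stat deploy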

Now, for the “Product.Gateway.API”, which is the first entry point, let’s see its details by clicking the “product-gateway-api-deploy” deployment under the “Deployments” section.

Then, to see some metrics, let’s send some requests to the “Product.Gateway.API” as follows. I will use ApacheBench for this operation.

ab -n 1000 http://{YOUR_EXTERNAL_IP}/api/products/1

Another of my favorite parts is that it provides an automatic service dependency map and live traffic information. If we look at the dependency map, we can see that the “product-gateway-api” depends on both the “price-api” and the “product-api”.

In addition, in the “LIVE CALLS” tab, we can see samples of the current calls.
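
The “LIVE CALLS” data can also be tapped from the CLI; a command roughly like the following should stream live request samples for the gateway:

linkerd -n linkerd-test tap deploy/product-gateway-api-deploy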

Well, there is one more great topic I want to mention: route-based runtime metrics and retries.

Service Profiles

One of the important topics in Linkerd is service profiles. A service profile is a custom Kubernetes resource that provides Linkerd with additional information about the relevant service.

By defining service profiles, we can enable Linkerd to give us “route-based runtime metrics” for each service. We can also enable features such as “retries” and “timeouts”.
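
To give an idea of the resource itself, a minimal service profile for the “Price.API” might look roughly like this (the route reflects the /api/prices endpoint used by the gateway; depending on your Linkerd version, the apiVersion may be v1alpha1 instead of v1alpha2):

apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  name: price-api-svc.linkerd-test.svc.cluster.local
  namespace: linkerd-test
spec:
  routes:
  - name: GET /api/prices
    condition:
      method: GET
      pathRegex: /api/prices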

There are several different ways of defining service profiles, such as “Swagger”, “Protobuf”, “Auto-Creation”, and “Template”. Since I implemented Swagger when developing the APIs, I will use the Swagger method to define the service profiles.

You can find the corresponding Swagger files of the APIs here.

Route-based Metrics

We will use the following commands to define the profiles.

linkerd -n linkerd-test profile --open-api ./price-api-swagger.json price-api-svc | kubectl -n linkerd-test apply -f -
linkerd -n linkerd-test profile --open-api ./product-api-swagger.json product-api-svc | kubectl -n linkerd-test apply -f -
linkerd -n linkerd-test profile --open-api ./product-gateway-api-swagger.json product-gateway-api-svc | kubectl -n linkerd-test apply -f -
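
We can quickly confirm that the profiles exist by using the “sp” short name, which we will also use below:

kubectl -n linkerd-test get sp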

Yes, the service profiles have been created; now we can see the route-based metrics.

We need to send some requests to the “Product.Gateway.API” again. This time, let’s look at the “ROUTE METRICS” tab of the “price-api-deploy” deployment on the deployments screen.

As we can see, the service profile gives us route-based metrics. Now let’s take a look at how we can configure retries.

Retries

For example, let’s assume some requests sent to the “Get” endpoint of the “Price.API” have failed and we want to enable the automatic retry feature.

In order to do this, we need to add the “isRetryable” field to the relevant route by editing the service profile that we created, as below.

kubectl -n linkerd-test edit sp/price-api-svc.linkerd-test.svc.cluster.local
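
After the edit, the relevant route entry should look roughly like this (a sketch based on the Price.API route above):

spec:
  routes:
  - name: GET /api/prices
    condition:
      method: GET
      pathRegex: /api/prices
    isRetryable: true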

That’s all.

You can also customize the retry behavior with the “Retry Budget” mechanism. If you want, you can find detailed information here. It is a great capability to be able to add functionality such as “retries” and “timeouts” without touching the APIs’ code.
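
As a sketch, the retry budget is configured in the same ServiceProfile spec; the values below simply mirror Linkerd’s documented defaults and are shown for illustration:

spec:
  retryBudget:
    retryRatio: 0.2
    minRetriesPerSecond: 10
    ttl: 10s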

And the last thing that I want to mention is Grafana support. Besides live metrics, past metrics can also be visualized thanks to the built-in Grafana and Prometheus support.

Conclusion

The service mesh is an important infrastructure layer for microservice architectures. By abstracting the network, it helps us handle the challenges of distributed architectures (reliability, security, monitoring, etc.) without increasing the complexity of our code. Linkerd2 is a good service mesh option, especially with its intelligent, low-latency load balancing, although it doesn’t yet offer every feature that some other service meshes do.

Demo app: https://github.com/GokGokalp/service-mesh-linkerd-sample

References

https://linkerd.io/2/getting-started/
https://www.zdnet.com/article/what-is-a-service-mesh-and-why-would-it-matter-so-much-now/

Gökhan Gökalp

View Comments

  • Hocam şöyle bir durumda nasıl bir yol izlememiz lazım. price ve product servislerinin authorize ile erişmemiz gerektiğini düşünürsek authorize işlemini ProductGateway üzerinde mi yapmamız lazım yoksa güvenlik açısından product ve price servislerinde ayrı ayrı yapmak mı daha doğru?

    • Selam, benim görüşüm ilgili API'lar ayrı ayrı authorization işlemini kendisi gerçekleştirmeli. Örneğin, Price API içerisinde kullanıcının fiyat bilgisini alabilme claim'i olabilir, fakat update etme claim'i olmayabilir. O yüzden ilgili API'ın kendisinin gerçekleştirmesi daha doğru olacaktır.

  • nacizane detayli bilgi meraklisina docker routing mesh loadbalancer algoritmasi ...

    Bu arada yalin bir anlatim olmus emegine saglik :)
