Containerization has made it easier to develop, deploy, and manage applications, which is a big part of why cloud deployment keeps gaining popularity. Azure offers a fast, managed way to run Kubernetes in the form of Azure Kubernetes Service (AKS).
To improve your application's performance in an Azure Kubernetes environment, you can deploy and use NCache inside the AKS cluster. NCache is an in-memory distributed caching solution that speeds up your application by keeping the cache close to it. And because NCache is distributed, you can add as many cache servers as you need to keep latency low, giving you a high degree of scalability in AKS.
NCache Deployment Architecture in Azure Kubernetes Service
The overall layout of NCache's deployment in Azure Kubernetes Service looks like this: your applications connect to a headless Cache Discovery service, which gives clients access to the cluster pods running the cache service. There is also a Gateway service that provides a load balancer and routes traffic to specific pods based on the client IP.
A pod is the basic building block of a service and guarantees that all of its containers run on the same host. A pod contains one or more containers that share resources such as RAM, CPU, and network, though it is generally better to run one container per pod.
The flow of requests and the structure of an AKS cluster with NCache deployed in it are shown in the diagram below.
To start using the many out-of-the-box features NCache provides in your Azure Kubernetes Service cluster, you first need to deploy NCache and the required services in AKS. The steps below will help you get started with deploying and using NCache in an Azure Kubernetes cluster.
Step 1: Create NCache Deployment
In Azure Kubernetes Service, whenever we talk about deploying an application or service, we need to create a YAML file. This YAML file contains all the information required to create a pod inside your AKS cluster. Let me show you what your YAML file should look like to successfully create a pod that contains the NCache service.
kind: Deployment
apiVersion: apps/v1beta1   # underlying Kubernetes version
metadata:
  name: ncache
  labels:
    app: ncache
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: ncache
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
        - name: ncache
          image: docker.io/alachisoft/ncache:enterprise-server-linux-5.0.2
          ports:
            - name: cache-mgmt-tcp    # for tcp communication
              containerPort: 8250
            - name: cache-mgmt-http   # for http communication
              containerPort: 8251
            ...                       # remaining necessary ports
For the cluster to understand that what you are creating is a deployment, you need to set "kind" to Deployment. What you need to be careful about here is the "apiVersion" tag: Kubernetes keeps revising these API versions, so make sure the value matches the API version supported by the Kubernetes version underlying your cluster.
The number of “replicas” here indicates the number of pods this deployment is going to have, which in this case is 2. You can change this value as per your requirement. Under the “containers” tag, you provide the path to the NCache Enterprise Server Docker image. You can find this path on Docker Hub.
The other requirement you need to know about when deploying NCache in the Azure Kubernetes cluster is port information. For your clients to communicate with the NCache servers, you need to specify the relevant container ports in your YAML file.
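As an illustration of what one of the remaining port entries might look like, the client connection port targeted by the discovery service later in this article (9800) would typically be exposed here as well. This is only a hedged fragment; the port name is illustrative and the full list is in the NCache AKS docs:

            - name: client-port    # NCache client connection port (9800, as targeted by the discovery service below); illustrative entry
              containerPort: 9800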
These are the basic requirements you need to understand to deploy NCache in an AKS cluster. Once this YAML file is created, you use it to create the pods in AKS.
Creating this YAML file is all you have to do to deploy NCache in an AKS cluster. Run the following command in Azure Cloud Shell and voilà: your NCache deployment is now a set of full-fledged running pods in Azure Kubernetes Service!
kubectl create -f [dir]/ncache.yaml
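As a quick sanity check, you can confirm that both replicas are up by listing the pods with the app=ncache label from the YAML above; both pods should show a Running status:

kubectl get pods -l app=ncache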
Step 2: Create NCache Discovery Service
Outside a Kubernetes cluster, cache clients naturally connect to cache servers using the servers' IP addresses, which are static and known to every client in the system. Put the same elements inside a Kubernetes environment, however, and the implementation changes: every deployment pod is assigned a dynamic IP address at runtime that the client applications do not know. This gets in the way of your client applications identifying the NCache servers they need for performance and scalability.
To counter this issue, Kubernetes lets you create a Service whose identity is fixed instead of dynamic. Using this, you create a headless discovery service through which your client applications can reach the pods running the NCache service. Defined in a YAML file, this service lets all the client applications connect to it, and it resolves every client connection request to a cache server, all while staying inside the AKS cluster.
So, without further ado, let us create the YAML file for this deployment.
kind: Service
apiVersion: v1   # underlying Kubernetes version
metadata:
  name: cacheserver
  labels:
    app: cacheserver
spec:
  clusterIP: None
  selector:
    app: ncache   # same label as provided in the ncache YAML file
  ports:
    - name: management-tcp
      port: 8250
      targetPort: 8250
    - name: client-port
      port: 9800
      targetPort: 9800
Your "kind" needs to be Service, with "apiVersion" set to the underlying version of Kubernetes. To make this a headless service, you set the "clusterIP" tag to None, which specifies that the discovery service will not be assigned an IP of its own. The rest are the ports required for NCache clients to communicate with the NCache servers.
From here, you go to Azure Cloud Shell and run the provided command to have a fully functional running headless discovery service inside your Kubernetes cluster.
kubectl create -f [dir]/discoveryservice.yaml
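To confirm the service was created as headless, you can list it and check that its CLUSTER-IP column shows None (a quick check using the cacheserver name from the YAML above):

kubectl get service cacheserver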
Step 3: Create NCache Gateway Service
Inside an AKS cluster, whatever happens stays confined to the cluster. For you to use NCache from your local machine, there needs to be a way to perform NCache management operations inside that cluster from the outside. This is exactly why we create a gateway service: a service that lets you access, manage, and monitor NCache from outside Azure Kubernetes Service.
Again, to use this functionality in the cluster, you need to define it in a YAML file containing all the necessary tags and values. So, let's write a YAML file that creates a gateway service for NCache management.
kind: Service
apiVersion: v1   # underlying Kubernetes version
metadata:
  name: gateway
spec:
  selector:
    app: ncache   # same label as provided in the ncache YAML file
  type: LoadBalancer
  sessionAffinity: ClientIP
  ports:
    - name: management-http
      port: 8251
      targetPort: 8251
Here, for your Azure Kubernetes Service cluster to know that this object acts as a service for a particular purpose rather than a deployment, you state the "kind" as Service. The file also lists the ports the gateway service needs to function without errors. Setting "type" to LoadBalancer makes this gateway an external load balancer that spreads client requests across multiple servers. The one thing you need to ensure is that "sessionAffinity" is set to ClientIP, so that a given client is redirected to the same server every time.
This is pretty much all the information you need to create a gateway service for your NCache deployment. All you need to do now is run the following create command from Azure Cloud Shell, and AKS will create and start the service for you.
kubectl create -f [dir]/gatewayservice.yaml
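Because the gateway is of type LoadBalancer, Azure assigns it an external IP once provisioning completes. You can retrieve that IP with the command below; assuming 8251 is the HTTP management port mapped in the YAML above, the NCache web management interface should then be reachable at http://<external-ip>:8251 from outside the cluster:

kubectl get service gateway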
Step 4: Create a Cache Cluster
What you have up to this point is working NCache servers, a gateway service, and a discovery service for NCache clients. What you need now to fully enjoy NCache in your Azure Kubernetes Service is a cache cluster inside your Kubernetes cluster, which is simple to create.
You can carry out this step using the NCache Web Manager that comes integrated with the NCache deployment. The steps to create a cluster and add server nodes to it are provided in the NCache docs on Create Clustered Cache. The only twist in this step is the server IPs you need: they have to be the same IPs that the Kubernetes cluster has assigned to your cache pods. You can get these IPs by running the get pods command in Azure Cloud Shell, as shown below.
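A minimal example, assuming the app=ncache label from the deployment above; the IP column of the output lists the pod IPs to use when adding server nodes:

kubectl get pods -l app=ncache -o wide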
Step 5: Create Application Deployments
To deploy and run client applications (be that .NET or Java) in your cluster, you need to create a YAML file. Your client deployment YAML file should be something like this:
kind: Deployment
apiVersion: apps/v1beta1   # Underlying Kubernetes version
metadata:
  name: client
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: client
    spec:
      imagePullSecrets:
        - name: client-private
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
        - name: client
          image:    # Your docker client image here
          ports:
            - name: port1
              containerPort: 8250
            - name: port2
              containerPort: 9800
The "nodeSelector" mentioned in the file could just as well be Windows, since Kubernetes supports both operating systems. You also have the option of deploying multiple client applications inside the same cluster, depending on your requirements. For each client application, you create a similar YAML file so that every application runs in a separate pod.
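Note that the client YAML above references an image pull secret named client-private, which must exist in the cluster before the pod can pull a private image. A minimal sketch of creating such a secret, with the registry server and credentials as placeholders you would replace:

kubectl create secret docker-registry client-private \
  --docker-server=<your-registry-server> \
  --docker-username=<your-username> \
  --docker-password=<your-password>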
Run the following command in the Cloud Shell provided by Microsoft Azure to successfully create and start your client application pod.
kubectl create -f [dir]/client.yaml
The provided NCache client is quite intelligent when it comes to creating connections within the cluster. All it needs is the name of the service it has to talk to, and it automatically discovers all the underlying NCache cluster nodes for a given cache inside your Azure Kubernetes cluster.
A major advantage of using NCache in AKS is that you do not need to provide the IP addresses of the cache pods for client connections. The headless discovery service you created earlier supplies the cache pod IP addresses to your client application at runtime.
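You can see this name-based discovery at work from inside the cluster: resolving the headless service name returns the cache pod IPs directly rather than a single service IP. A quick, optional check using a throwaway busybox pod (assuming the discovery service is named cacheserver, as in the YAML above):

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup cacheserver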
Step 6: Monitor NCache Cluster
Now that you have your services, servers, and application up and running, you need a way to monitor cache activity inside the cluster. For this reason, NCache comes packed with various tools to help you monitor your cache cluster. These tools give you a better idea of your cluster's health, performance, network glitches, and connectivity.
NCache provides a Web Monitor that graphically shows the real-time performance of your cache.
Similarly, you have a Cache Statistics option that provides a more detailed analysis of your cache activity.
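Alongside NCache's own monitoring tools, you can run basic health checks from the Kubernetes side. A couple of hedged examples, assuming the app=ncache label from the deployment above and that the AKS metrics server is available for kubectl top:

# CPU and memory usage of the cache pods
kubectl top pods -l app=ncache

# recent log output of one cache pod (replace the pod name with one from kubectl get pods)
kubectl logs <ncache-pod-name>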
Step 7: Scaling NCache Cluster
NCache, being a highly scalable distributed cache, allows you to add and remove server nodes at runtime to improve overall performance. While monitoring your cluster, if you see that the requests per second are far greater than what the available servers can comfortably handle, you can add one or more cache nodes to your deployment.
There are multiple ways to scale the NCache cluster in your AKS deployment. You can use the NCache Web Manager, the NCache PowerShell tool, or even the NCache YAML file. To learn more about how these methods add and remove nodes from the cluster, visit our documentation on Adding Cache Servers in an AKS Cluster and Removing Cache Servers from an AKS Cluster.
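On the Kubernetes side, one way to add cache pods (a sketch, not the complete procedure) is to raise the replica count of the ncache deployment; note that a newly created pod still has to be joined to the cache cluster itself using one of the NCache methods described in the documentation linked above:

kubectl scale deployment ncache --replicas=3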
What have we learned?
From what we have seen, we can conclude that Azure Kubernetes Service is a fully integrated, managed container orchestrator that automates upgrades and patching. To achieve scalability and high availability for the applications and resources that reside in an AKS cluster, you can deploy NCache inside it: a scalable in-memory distributed cache that delivers high performance right where your applications run.
To get a detailed step-by-step illustration of deploying NCache in AKS, refer to our documentation on Deploying NCache in Azure Kubernetes Service.