Container technology has made application development significantly easier. Kubernetes is an open-source platform that orchestrates your containers across multiple machines. Red Hat OpenShift is a container platform by Red Hat that provides an auto-scaling cloud application platform: it is built around application containers powered by Docker, with orchestration and management services provided by Kubernetes. With the growing need for cloud deployments, OpenShift is gaining a lot of popularity for offering a simple, container-centric architecture and an integrated deployment model for managing containers through Kubernetes orchestration services.
NCache fully supports deployment in Red Hat OpenShift. NCache is an in-memory distributed caching solution that ensures high performance and scalability. Caching your data with NCache reduces network trips and the load on your database, since your data resides in the cache, closer to your application. NCache is extremely fast and resolves performance bottlenecks by scaling your .NET and Java applications. In this article, I focus on the steps for deploying NCache in Red Hat OpenShift.
NCache Deployment Architecture in Red Hat OpenShift
With NCache, you get cloud orchestration in your OpenShift environment along with all of NCache's features, in an easy-to-manage containerized application.
Start with a single Kubernetes cluster hosting several application deployments alongside an NCache cluster deployment. The Docker-based containerized applications running in this environment are:
- Java Web application
- Java Web Services application
- ASP.NET Core application
These applications have the NCache client installed: the Java applications use the NCache Java client, while the ASP.NET Core application runs as a separate deployment from a Linux Docker image and uses the NCache .NET Core client for communication.
On the server side, the deployment uses the Linux-based NCache Docker image, which is available on Docker Hub.
The applications connect to a service called the Cache Discovery Service, a headless service within Kubernetes. It manages the routing and allocation of the resources that make up the NCache cluster. Similarly, a remote monitoring gateway connects to this service, allowing the cluster to be monitored from outside Kubernetes for operations such as NCache cache management.
A pod is a Kubernetes object that encapsulates the underlying container instance; it is a virtual layer on top of a container. So, in Kubernetes, an IP address is assigned to the pod rather than to the container. A single pod can hold multiple containers, but it is highly recommended that each pod contain a single container. In short, all resource allocation in a Kubernetes cluster happens at the pod level, not the container level.
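As a point of reference, a bare pod manifest shows this one-container-per-pod convention (names here are illustrative, not part of the deployment that follows):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ncache-pod        # the IP address is assigned to this pod...
spec:
  containers:
    - name: ncache        # ...not to the container it wraps
      image: alachisoft/ncache:enterprise-server-linux-5.0.2
```

In practice you rarely create pods directly; a Deployment, as used below, creates and manages them for you.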
The following diagram gives an overall depiction of the architecture flow of the deployment with NCache:
I will now go through the steps for the deployment of NCache in Red Hat OpenShift.
Step 1: Deploy NCache Servers
Deploying NCache servers in Red Hat OpenShift requires you to create a YAML file with your NCache configurations. These YAML deployments describe all of your application's components and are very easy to deploy. Make sure to tune these components according to your own application's requirements. Given below is a sample YAML file with the configurations:
```yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: ncache
  labels:
    app: ncache
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: ncache
    spec:
      containers:
        - name: ncache
          image: docker.io/alachisoft/ncache:enterprise-server-linux-5.0.2
          ports:
            - name: management-tcp
              containerPort: 8250
            - name: management-http
              containerPort: 8251
            - name: client-port
              containerPort: 9800
```
Kubernetes evolves very quickly: new features mature and become part of the main API, while a few remain outside it because of their experimental nature. The "apiVersion" needs to be set accordingly. The version used here, "apps/v1beta1", depends on the underlying Kubernetes version and has since been replaced by "apps/v1", so make sure you are not using an obsolete version.
The ports mentioned in the deployment file include:
- Port 8250: For TCP management.
- Port 8251: For HTTP management and monitoring.
- Port 9800: For communication between the client applications connecting to NCache.
The "kind" is set to Deployment here. Next is the number of "replicas", 2 in this case, which you can increase according to your needs; for further detail on replicas, refer to the Kubernetes documentation. Under "containers", you need to specify the Docker image by providing the path of the NCache Enterprise Server Linux Docker image, which is available on Docker Hub. The general command to pull this Docker image is:
docker pull alachisoft/ncache:enterprise-server-linux-5.0.2
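Before importing the manifest, it can be worth sanity-checking that it exposes everything NCache needs. The short Python sketch below mirrors the deployment YAML as a dictionary and encodes the constraints discussed above (the checks are illustrative, not an official validation tool):

```python
# Mirror of the NCache deployment manifest above, expressed as a Python dict
deployment = {
    "apiVersion": "apps/v1beta1",  # use apps/v1 on newer Kubernetes versions
    "kind": "Deployment",
    "metadata": {"name": "ncache", "labels": {"app": "ncache"}},
    "spec": {
        "replicas": 2,
        "template": {
            "metadata": {"labels": {"app": "ncache"}},
            "spec": {
                "containers": [{
                    "name": "ncache",
                    "image": "docker.io/alachisoft/ncache:enterprise-server-linux-5.0.2",
                    "ports": [
                        {"name": "management-tcp", "containerPort": 8250},
                        {"name": "management-http", "containerPort": 8251},
                        {"name": "client-port", "containerPort": 9800},
                    ],
                }],
            },
        },
    },
}

def check(dep):
    """Check the invariants the article relies on."""
    assert dep["kind"] == "Deployment"
    ports = {
        p["containerPort"]
        for c in dep["spec"]["template"]["spec"]["containers"]
        for p in c["ports"]
    }
    # NCache needs its management and client ports exposed on every pod
    assert {8250, 8251, 9800} <= ports
    # A cache cluster needs at least two server pods
    assert dep["spec"]["replicas"] >= 2
    return True

print(check(deployment))
```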
Once the YAML file is created with all the necessary configurations, you need to import the file using the OpenShift web console. Create a new project with a name of your choice and import the YAML file into the project containing the NCache deployments.
This can also be done using the OpenShift CLI (oc) tool, which also lets you check the status of the deployments.
Step 2: Create Cache Discovery Service
I briefly mentioned earlier that the Cache Discovery Service routes all NCache communication to the underlying pods. It is the main communication gateway between the client applications and the NCache cache clusters that are part of the Kubernetes cluster. Being a headless service, it is used to retrieve the IP addresses of the underlying NCache server pods in the Kubernetes cluster.
In order to create a Cache Discovery Service, another YAML is created. Given below is the sample YAML file with the configurations:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: cacheserver
  labels:
    app: cacheserver
spec:
  clusterIP: None
  sessionAffinity: ClientIP
  selector:
    app: ncache
  ports:
    - name: management-tcp
      port: 8250
      targetPort: 8250
    - name: client-port
      port: 9800
      targetPort: 9800
```
Here the service is named cacheserver; you can name it according to your own configuration.
In this case, the "kind" is Service. The file also lists the ports, with their names and numbers, needed for communication with the service. "sessionAffinity" is set to ClientIP to ensure that management and monitoring operations from outside the Kubernetes cluster stick to one of the pods at any given time.
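Two details of this manifest are easy to get wrong: "clusterIP: None" is what makes the service headless (no virtual IP is allocated, and DNS returns the pod IPs directly), and the "selector" must match the labels on the deployment's pod template, or the service will target nothing. A small illustrative check, again mirroring the YAML as a dictionary:

```python
# Mirror of the discovery service manifest above
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "cacheserver", "labels": {"app": "cacheserver"}},
    "spec": {
        "clusterIP": "None",          # headless: DNS resolves to pod IPs
        "sessionAffinity": "ClientIP",
        "selector": {"app": "ncache"},
        "ports": [
            {"name": "management-tcp", "port": 8250, "targetPort": 8250},
            {"name": "client-port", "port": 9800, "targetPort": 9800},
        ],
    },
}

# Labels from the NCache deployment's pod template in Step 1
ncache_pod_labels = {"app": "ncache"}

# Headless service: no cluster IP is allocated
assert service["spec"]["clusterIP"] == "None"
# The selector must match the pod template labels, or no pods are targeted
assert service["spec"]["selector"] == ncache_pod_labels
print("service", service["metadata"]["name"], "targets the NCache pods")
```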
After creating the YAML file, import this file through the wizard and it automatically creates your cache discovery service as shown in the image below.
Step 3: Create Management Gateway
This step is for management and monitoring operations from outside the Kubernetes cluster. Any management operation performed externally is routed through this gateway to the Cache Discovery Service, which in turn lets you manage and monitor all the underlying pods as well.
In order to create the management gateway:
- Go to the “Networking” section from the OpenShift portal.
- Select “Routes” from the drop-down menu.
- Create a route to the headless service a.k.a Cache Discovery Service.
- Provide a name for the route and select the service “cacheserver” created in the previous step. Along with that, provide the target port 8251 for management and monitoring outside the Kubernetes cluster.
- Once it is created, open the location path from the “Location” column, which redirects to NCache Web Manager on one of the cache server pods.
Step 4: Create a Cache Cluster
Now that we have successfully deployed NCache in Red Hat OpenShift, we can proceed to create a cache cluster using NCache Web Manager.
Create the cache cluster following the steps in the documentation, making sure that the IPs used are the IPs of your cache pods. To get the IPs of the cache pods, go to the “Pods” section in the OpenShift web console, or run “oc get pods -o wide” from the command line. Once the cache is created, start it using NCache Web Manager.
Step 5: Deploy Client Applications
You can now deploy and run your client applications by creating a YAML file containing the deployment for clients. This deployment file is also imported using the OpenShift portal. The client applications can either be .NET Core or Java as per your own needs.
```yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: clientapp
  labels:
    app: clientapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: clientapp
    spec:
      containers:
        - name: clientapp
          image: your-client-application-repo-path
          ports:
            - name: management-tcp
              containerPort: 8250
```
Here, we do not need the IP addresses of the cache pods to connect to the cache. The cache discovery service we created, “cacheserver”, provides the IP addresses of the cache pods to our client application at runtime.
The NCache client has built-in logic to talk to its named service and, through it, automatically discover all the underlying resources within the OpenShift Kubernetes platform. Hence, given just the name of the service, the NCache client is intelligent enough to connect to the fully connected cache cluster.
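In practice this means the client configuration points at the service name rather than at pod IPs. The fragment below is only a sketch of the idea; the cache name demoCache is hypothetical, and the full schema of NCache's client configuration file (client.ncconf) should be taken from the NCache documentation:

```xml
<configuration>
  <!-- "cacheserver" is the headless discovery service from Step 2;
       the NCache client resolves it to the cache pod IPs at runtime -->
  <cache id="demoCache">
    <server name="cacheserver"/>
  </cache>
</configuration>
```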
Step 6: Monitoring NCache Cluster
NCache comes with various tools to help you monitor your cache cluster. Monitoring your cache cluster gives you real-time information about cluster health, cache activity, the number of operations being performed, and much more. You can also monitor your cache cluster to take suitable measures against issues such as network disruptions and memory overheads.
NCache Web Manager is a management tool provided by NCache for configuring caches and then monitoring their performance. Similarly, NCache Web Monitor is a web-based tool that lets you monitor real-time cache performance.
Step 7: Scaling NCache Cluster
NCache is a distributed caching system with a very scalable architecture. So, to add capacity and throughput to NCache in your OpenShift environment, you can scale your NCache cluster by adding more pods to it. There are multiple ways to do this. Beginning with the OpenShift web portal:
- Go to “Deployments”.
- Click the “Edit Count” button.
- Increase the number of pods by clicking the “+” button.
Doing this automatically increases the replica count in your deployment file as per the number of pods added. It can also be done using OpenShift CLI (oc) tool.
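The same scale-out can also be expressed declaratively, by editing the replicas field in the deployment manifest from Step 1 and re-importing it (the new value here is illustrative):

```yaml
spec:
  replicas: 3   # increased from 2; OpenShift reconciles by starting one more NCache pod
```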
Note that adding pods through the above steps creates another pod, but it does not become part of the cache cluster on its own. To add the new cache server to a running cache cluster, go to the Server Nodes page in NCache Web Manager and add the server's IP to register that server node in the cluster. The server node joins your cache cluster at runtime, improving performance drastically, as NCache offers easy scaling.
Putting it all together, NCache deployment in Red Hat OpenShift is an easy-to-follow, step-wise procedure. Containerization has become a technological necessity in today's world because of the lightweight nature of containers. NCache is an extremely fast distributed caching solution, and with Red Hat OpenShift, your containerized Kubernetes cluster can be managed easily.
In this article, I have put together a detailed walkthrough of all the steps required to deploy NCache in your OpenShift environment. You can step into the NCache world with your .NET or Java applications running on a Kubernetes cluster in just a few easy steps.