In this show we’ll cover the basic concepts of NCache distributed caching, its architecture, and the different caching topologies that can be implemented for your .NET / .NET Core applications.
Here is what this show covers:
In today's webinar, we're going to be covering the basics of NCache's Distributed Caching Architecture and the different Caching Topologies, which you can use in your .NET and .NET Core applications. We'll cover an intro to NCache Live and NCache Cloud, the Architecture and Clustering details, Caching Topologies, Data Caching with NCache APIs, ASP.NET and ASP.NET Core web caching, such as Session State, View State, Response Caching, and SignalR, and any other important features that you decide to bring up for Q&A.
Without further ado, I'm going to hand it over to Ron. Like I mentioned before, if you have questions, just feel free to type them right into the questions tab, and we'll answer them throughout the presentation, as well as at the end in a separate Q&A.
The topic that we've chosen today is a very popular one. It covers NCache, the NCache Architecture, its different use cases, features, and topologies, and how you can integrate NCache into your applications. Then we will focus on two new offerings: NCache Live, where you can test NCache APIs, as well as some monitoring and management aspects, using the live environment that we offer, and NCache Cloud, which is also a new offering that I'm going to present to you.
In this particular webinar, I'll start with some basic details about Distributed Caching: what are the challenges that you see in a typical application deployment, in a web farm or an application farm, and then I'll talk about NCache as a solution. What is NCache? How is it deployed? Then I'll show you the product in action, and we'll talk about how NCache supports that Distributed Caching Architecture, its high availability aspect, its high performance and scalability aspect, and data reliability. We'll also, of course, talk about the different Caching Topologies that the NCache architecture supports and the different use cases you can apply within your environment.
So, first of all, let's quickly get started with the scalability problem. Let me define scalability: it is the ability of an application to maintain good performance as its load grows. When application load grows, applications tend to choke. But if your application performs really well under low user load and also performs really well under high user load, then we would categorize that application architecture as scalable. You can achieve this with the help of a web farm, where the application tier scales out by distributing the load across multiple app servers or web servers. Your single application is deployed on different servers. You put a load balancer in front of your app farm or web farm, and then you route requests evenly, or in a sticky load balancing manner, to all the web servers. So, your application tier here is already very scalable.
The problem lies with the backend database, and typically it's a relational database. It could be other databases as well, but most commonly it's your relational database that becomes a bottleneck. Although your application tier is very scalable, it has to talk to a backend database, and that is not very scalable, because it's usually a single server hosting all your data. It's very good for storage, but when it comes to handling a higher request load, extreme transactional processing, databases are not designed for that. Under peak load, they tend to choke, and if the database is slow, then regardless of how many servers you have in your app farm or web farm, that server farm is going to talk to a backend database which is slow. That forms a contention, and all your requests are going to slow down.
Requests start to queue up and you start seeing performance degradation, and that impacts your end user experience as well, which is very critical to your business. You want to make sure that your users are seeing very good responses from your application and are able to get their jobs done; that's how a typical system should be. But that's not possible when you have a data store which is a scalability and performance bottleneck.
NoSQL databases have somewhat resolved this, but NoSQL presents another challenge: your application now has to stop using your relational database. You need to leave the relational database and start using NoSQL, which stores unstructured data, and a lot of data requirements are not fulfilled with NoSQL. Re-architecting is a big issue, because you have to stop using the relational database and move to NoSQL. So, it's usually not the answer.
We also have a product called NosDB, which is a NoSQL database, but that's primarily for NoSQL application requirements, where you have unstructured data and a lot of it, and you want it to be very scalable in nature. We recommend using NoSQL with the understanding that you have to re-architect and adopt it as a data source option. So, it's not really a solution to the scalability problem you typically have with a relational database or other legacy data sources.
The solution is very simple: you use a Distributed Caching product like NCache. First of all, it's very fast, because it's in-memory; disk is slower in comparison to RAM. So, with in-memory access, you're going to get a dramatic performance improvement. That's the first benefit: you get super-fast data access from NCache as soon as you plug it into your application. Second, it's linearly scalable.
The scalability problem is resolved by using NCache as your data source for most of your requests. You keep using the backend database as well, but you keep your working set, or a lot of your data, in the cache, and you save expensive trips to the database. When you bring NCache into the architectural picture, NCache takes most of your application load. Those requests are handled by NCache, and it has a team of servers. It's not just a single server; there can be multiple servers hosting this cache cluster, hosting your data and serving all your client requests. The nice thing about NCache is that you use it in addition to a backend database.
You don't have to replace your relational database, unlike with NoSQL data stores. NCache has a lot of features which you can use in conjunction with databases, such as database dependencies, database synchronization, and read-through and write-through to sync your cache and database, applying operations on the cache and database, for reading as well as writing, in combination with one another.
So, it's not a replacement; it's something you use in addition to a database. It sits in between your application and database. The idea here is that, since it's super-fast because of in-memory access, and very scalable because it has a team of servers, a cluster of multiple servers working together, your overall application performance will increase and the overall user experience will be better, because now you're using a source which does not have scalability or performance issues, in addition to your backend database.
NCache is available in a very flexible model. You can use it on premise, on your physical or virtual machines, and it's available on the Azure and AWS marketplaces. So, we have a VM model available, where you can bring your own license. You download the software and install it anywhere; you only need the .NET or .NET Core framework. That's the conventional model, which works very well for on-premise or private hosting deployments, private cloud.
In public clouds, in Azure and AWS, we have AWS and Azure Marketplace images for NCache. So, you can get a pre-installed NCache VM image made available as soon as you plan on deploying NCache, and you can license NCache using the BYOL model. These are the conventional models, and in addition to this we have released SaaS for easy deployment.
Now, how would you choose between Azure/AWS VMs and NCache SaaS? It's very simple. If your requirement is to use NCache and pay as you use it, a pay-as-you-go model, you should definitely consider SaaS, because it gives you a yearly subscription, as well as monthly and hourly pay-as-you-go options.
The second factor is ease of use. A managed portal is provided, which allows you to configure your subscription. So, you don't need to get involved in any other processes; you can just get done with provisioning your subscription and the deployment using that portal. It allows you to subscribe, as well as provision your environments, where you would get NCache VM images made available already.
So, SaaS is easy to use for subscription and provisioning, and, as the first point, it's the option to pick if you're interested in pay-as-you-go. But all of this is again going to run in your own subscription; our SaaS deployment allows you to provision NCache VMs within your own subscription.
So, that's the benefit that NCache offers even in SaaS: it's not going to be hosted in another region or another network. You can choose your own virtual network, your own subscription, your own resource groups, and you can still deploy NCache in the same region where your applications are hosted. NCache is all about performance optimization, at the request level as well as the network level, where any application talking to NCache should not see any latency because of networking. That is covered with NCache SaaS as well, and it works with any cloud. The conventional download-and-install model is available across the board; that's something you can use anywhere.
These are some basic details. I wanted to introduce NCache SaaS. NCache SaaS portal is available, as we speak today. You can get started with it. You just need to log in. Subscribe one of the plans and get started with the provisioning.
This is how the deployment of NCache would look in a typical large production environment. We have a set of servers which are hosting the cache cluster. NCache is again very flexible on this front as well, where we have Windows as well as Linux server deployment options. So, with NCache you can choose to have a Windows cache cluster using .NET or .NET Core, and similarly a .NET Core server is available, so you can deploy NCache on Linux boxes as well.
Question: How fast will NCache work? What kind of speed are we looking at, in terms of bringing NCache?
Okay. So, I think that's the next slide. In a typical cluster deployment, if you start off with two server nodes, you can expand from there, and we recently conducted some tests in our AWS lab where we were able to generate 2 million requests per second with just 5 NCache servers, having started off with 2 servers.
So, this was the performance: close to 1 million requests per second with just 2 servers, then about 1.5 million with 3 to 4 servers, and 2 million and beyond with 4 to 5. So, if we round it off to 2 million requests per second, you just need 5 NCache servers. These were pretty high-end servers in AWS, but even with a smaller configuration box, say 16 vCPUs and 32 GB, which is the conventional box for an NCache server deployment (even 8 vCPUs are fine, but we recommend 16), you would see pretty good performance numbers out of NCache. While these tests were conducted, we made sure that latency was reviewed and monitored, and we saw sub-millisecond, microsecond-level latency per operation. This was not touch-and-go data; it was real-life application data simulated in our AWS lab. There's a video demonstration and a benchmark document available as well, if you're interested in reviewing those. I hope that answers your question.
So, as far as deployment is concerned, this is a typical large production environment with 5 NCache servers. With .NET and .NET Core you can choose Windows or Linux boxes to host the cache cluster. Docker images are also available; NCache can fully run in a containerized environment. You can host an NCache cluster using Kubernetes. We have video demonstrations available for using NCache in OpenShift, again backed by Kubernetes, and in AKS (Azure Kubernetes Service), EKS (Elastic Kubernetes Service), and Google Kubernetes Engine. All of these demos are available on our website for review.
Then your applications could be ASP.NET or ASP.NET Core web apps, again deployed on Windows or Linux environments, can connect to it in a Client Server model. You can install NCache and create a cache cluster and that sits in between your application and database and now you can start consuming it for object caching, for session caching and all of different use cases as needed.
So, it's very flexible. These are very regular, low-cost servers. It's a cluster of servers, very scalable in terms of adding and removing servers, which I'll cover in upcoming slides, but please let me know if there are any more questions around the deployment architecture. This cluster can be on premise, in a public cloud, in a private cloud, or on Kubernetes; it could be a SaaS-based deployment or a VM-based deployment, so it's entirely up to you. You can come and choose this or get recommendations from us, and we can work with you. I've already covered NCache scalability numbers, so I'm going to skip this now.
Next, I'm going to talk about different use cases, which NCache has to offer, so, this is something that I plan on covering using our sample application as well. Within NCache Live I’ll cover these three important aspects.
So, Data Caching is our first use case, and it's very common for NCache users. Most of our customers are using NCache for App Data Caching, and this makes use of the NCache Data Caching APIs. You make some code changes by introducing NCache APIs alongside your data access layer, and you cache almost everything that you normally fetch from the database and expect to read more than once.
So, anything in your working set, which could be all of your reference data, some of your transactional data, ideally anything that you plan on reading more than once, you can cache, depending on the memory resources you have, and also set up some invalidation, such as time-based expiry, on those items. We recommend caching as much data as possible to save expensive trips to the backend database; by doing this you improve your application performance and scalability. You can cache your domain objects, collections, data sets, images, any sort of application-related data, using our data caching model. Let me quickly give you a peek into our API.
This is how our API looks. You get connected to the cache by providing the cache name (I'll show you how you create this cache next), which returns a cache handle, and then you use that cache handle to work with items. Everything is stored as a key-value pair: the key is a string and the value is a permitted .NET object. You provide the key to retrieve items, you use cache.Add (and its AddAsync variant) to add items to the cache, cache.Insert works as an add as well as an update, and cache.Remove retrieves and removes the item, the employee in this case, from the cache.
So, that's a simple overview of our API, very basic, and then it expands on top of that, where you have time-based expiry and synchronization with relational and non-relational databases. Since data exists in two different places, you need to have some kind of synchronization or time-based expiry on the data. SQL and LINQ searches are available, parallel queries can be executed on the items, and then there are grouping and tagging features and server-side code.
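To make that concrete, here is a minimal sketch of the basic API calls just described. The cache name, the Employee type, and the key are illustrative, and the calls follow the NCache client API as I understand it, so treat this as a sketch rather than a definitive implementation:

```csharp
using System;
using Alachisoft.NCache.Client;
using Alachisoft.NCache.Runtime.Caching;

// Illustrative domain object; any serializable .NET type works.
[Serializable]
public class Employee
{
    public string Id { get; set; }
    public string Name { get; set; }
}

public class DataCachingSample
{
    public static void Main()
    {
        // Connect to a named cache ("demoCache" is an assumed name)
        // and get back the cache handle.
        ICache cache = CacheManager.GetCache("demoCache");

        var employee = new Employee { Id = "E1001", Name = "John Doe" };

        // Add a new item under a string key.
        cache.Add("Employee:E1001", employee);

        // Insert acts as an add as well as an update.
        employee.Name = "John A. Doe";
        cache.Insert("Employee:E1001", employee);

        // Optional: re-insert with a 5-minute absolute expiration,
        // one of the invalidation options mentioned above.
        var item = new CacheItem(employee)
        {
            Expiration = new Expiration(ExpirationType.Absolute,
                                        TimeSpan.FromMinutes(5))
        };
        cache.Insert("Employee:E1001", item);

        // Retrieve by key.
        Employee cached = cache.Get<Employee>("Employee:E1001");

        // Remove the item when it is no longer needed.
        cache.Remove("Employee:E1001");

        cache.Dispose();
    }
}
```

The same handle is then used for the richer features, such as expirations, dependencies, and queries.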
So, this is a very elaborate section within NCache, where you can use NCache for data caching, and we have a huge set of features available for this use case. I would say 70% to 80% of our customers are using NCache for data caching. Once you've saved trips to the database, your overall application performance improves; it's a very scalable architecture as far as your applications are concerned, it's very reliable, and it also has high availability built into it. You don't have to worry about anything.
Then we have a second use case, which is specific to ASP.NET and ASP.NET Core. For any ASP.NET or ASP.NET Core web application, you can start using NCache for session caching; that's the first use case here. Now, what are the typical options for sessions? You would use InProc sessions, a state service, or the database. With InProc, it's a single point of failure, and you also need to use sticky session load balancing, so that's limited. With the state server, it's a single server, so it's not scalable, and it's a single point of failure as well. With the database as a session manager, the database is slow and can become a source of contention; it's not scalable. With NCache, without any code changes, or with one line of code change in ASP.NET Core, you can start using NCache for session caching. All your session data goes into NCache, and NCache handles all session requests very effectively by distributing session data across the cache servers in a cluster.
So, it's very scalable and very fast. Session data is replicated across servers based on the topology, which we'll cover, so there is no data loss in case any server goes down, and you don't need to use sticky session load balancing anymore. All of this comes without any code changes, or with one line of code change in ASP.NET Core, when you plug in NCache as a provider. And not just for a single site: you can even have multi-site Active-Active session caching with NCache, where multiple applications use NCache in their own region, and for any overflow, a request being bounced from one data center to the other, or one site brought down for maintenance, all your sessions can be replicated to the other site on demand, using our multi-site session feature. ASP.NET Core works along the same lines.
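For ASP.NET Core, the "one line of code change" is essentially the provider registration at startup. Here is a sketch, assuming the NCache session-state package and a cache named demoCache (the cache name and app ID are illustrative, and exact option names may vary by NCache version):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using Alachisoft.NCache.Web.SessionState; // NCache session provider package

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Plug in NCache as the distributed session store.
        services.AddNCacheSession(configuration =>
        {
            configuration.CacheName = "demoCache";   // assumed cache name
            configuration.SessionAppId = "demoApp";  // illustrative app id
            configuration.EnableLogs = true;
        });
        services.AddControllersWithViews();
    }

    public void Configure(IApplicationBuilder app)
    {
        // Enable NCache-backed sessions in the request pipeline.
        app.UseNCacheSession();
    }
}
```

After this registration, the standard `HttpContext.Session` API works unchanged; only the backing store moves to the cache cluster.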
We also have IDistributedCache interface available, so, that's under object data caching but again it's specific to ASP.NET Core applications. View state for legacy ASP.NET web farms. You can store ViewState inside NCache and you don't have to send that ViewState back to the browser. So, you save up bandwidth and you also improve application performance by dealing with smaller payload. So, view state is not part of your request and response, if you plug in NCache as a ViewState provider.
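The IDistributedCache wiring is similar: you register NCache as the implementation once and the rest of the application stays provider-agnostic. A sketch, assuming the NCache extensions package for Microsoft.Extensions.Caching (cache name and key are illustrative):

```csharp
using System;
using System.Text;
using Microsoft.Extensions.Caching.Distributed;
using Microsoft.Extensions.DependencyInjection;
using Alachisoft.NCache.Caching.Distributed; // NCache IDistributedCache package

public static class DistributedCacheSetup
{
    public static void Register(IServiceCollection services)
    {
        // Register NCache as the IDistributedCache implementation.
        services.AddNCacheDistributedCache(configuration =>
        {
            configuration.CacheName = "demoCache"; // assumed cache name
            configuration.EnableLogs = true;
        });
    }

    // Any component can then depend on the standard interface.
    public static void UseCache(IDistributedCache cache)
    {
        cache.Set("greeting",
                  Encoding.UTF8.GetBytes("Hello from NCache"),
                  new DistributedCacheEntryOptions
                  {
                      SlidingExpiration = TimeSpan.FromMinutes(10)
                  });

        byte[] value = cache.Get("greeting");
    }
}
```

Because consumers only see `IDistributedCache`, swapping the provider later requires no changes to the calling code.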
So, for legacy application, this is another feature, which is going to give you performance benefits. ASP.NET output cache provider and response caching in ASP.NET Core, that's another feature, where you can plug in NCache. So, ASP.NET page responses or ASP.NET Core views responses can be cached within NCache and again this is a no code change option.
So, static page output content can be cached inside NCache, and the next time the request is generated, the cached content is made available automatically at runtime. Then we have a SignalR Backplane as well. If you're using SignalR in a web farm, you need a backplane as a must. A database, or a conventional event bus or message bus, is not very fast or very scalable. NCache can be used as the backplane because, first of all, it's written in .NET, so your .NET application uses it natively. Secondly, you can use it in a very scalable manner, where you can have multiple servers hosting the message load of your SignalR applications. So, these are some of the features specific to ASP.NET and ASP.NET Core applications.
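Wiring NCache in as the SignalR backplane is again a registration-level change. A sketch, assuming NCache's ASP.NET Core SignalR integration package (the namespace, option names, cache name, and application ID here are assumptions based on that integration and may differ by version):

```csharp
using Microsoft.Extensions.DependencyInjection;
using Alachisoft.NCache.AspNetCore.SignalR; // assumed namespace of the NCache SignalR package

public static class SignalRBackplaneSetup
{
    public static void Register(IServiceCollection services)
    {
        // Route SignalR messages through an NCache backplane so that
        // clients connected to different web servers in the farm
        // still receive each other's messages.
        services.AddSignalR().AddNCache(options =>
        {
            options.CacheName = "demoCache";    // assumed cache name
            options.ApplicationID = "chatApp";  // illustrative logical app id
        });
    }
}
```

Hubs, clients, and the rest of the SignalR code remain unchanged; only message distribution moves to the cache cluster.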
And finally, we have Pub/Sub Messaging. NCache can also be used as an event-driven message bus, a messaging platform. NCache can offer you the communication platform for Pub/Sub: multiple applications connect to it, and it's a loosely coupled Pub/Sub messaging platform. Applications don't need to know about other applications; they can publish and get on with their jobs, and subscriber applications receive events and the messages that are published to them.
Microservices can really benefit from this, where multiple microservices need to coordinate with one another. They represent one big application, but it is decomposed into smaller microservices. What if they need to talk to one another? If you build communication within a microservice, its performance would be compromised, because that service now has to wait for a response or acknowledgement, or it has to send messages to another microservice which could be busy with another task. With NCache as a Pub/Sub platform, one microservice can publish messages and get on with its tasks, and another microservice can consume them when it gets the event from NCache. NCache is a communication platform backed by multiple servers, so it's super-fast, very scalable, and based on an async, event-driven model. So, these are some of the use cases.
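The publish side and subscribe side described above can be sketched with NCache's messaging API roughly as follows (the cache name, topic name, and payload are illustrative, and the calls follow the NCache client API as I understand it):

```csharp
using System;
using Alachisoft.NCache.Client;
using Alachisoft.NCache.Runtime.Caching;

public class PubSubSample
{
    public static void Main()
    {
        ICache cache = CacheManager.GetCache("demoCache"); // assumed cache name

        // Get the topic, creating it if it does not exist yet.
        ITopic topic = cache.MessagingService.GetTopic("orderEvents")
                       ?? cache.MessagingService.CreateTopic("orderEvents");

        // Subscriber side (typically another microservice):
        // register a callback that fires when a message arrives.
        ITopicSubscription subscription = topic.CreateSubscription(
            (object sender, MessageEventArgs args) =>
            {
                Console.WriteLine($"Received: {args.Message.Payload}");
            });

        // Publisher side: fire a message and get on with other work;
        // NCache delivers it asynchronously to subscribers.
        topic.Publish(new Message("Order#42 placed"), DeliveryOption.All);

        // Clean up when done.
        subscription.UnSubscribe();
        cache.Dispose();
    }
}
```

The publisher never blocks on the subscriber, which is exactly the loose coupling the microservices scenario needs.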
We also have full-text search, which was introduced in one of our newer versions of NCache. Distributed Lucene is now part of NCache, so you can store your Lucene indices in NCache and run the standard Lucene API, Lucene.NET to be precise, using NCache as a full-text search platform for your application.
So, that's the fourth use case. I'll be more focused on the first three use cases and I'll show you actual examples, but please let me know if there are any questions at this point. One of the questions is: we are looking into Microservices / Kubernetes for upcoming projects; what kind of support does NCache have for this?
NCache is fully supported in Kubernetes environments. You have our Docker image available, and we have a sample implementation available on our website. Depending upon which platform you choose, whether you keep it on premise or use one of the managed environments, you can get in touch with us or use our online help documents and fully deploy an NCache Kubernetes cluster. We have reference YAML files which you can use.
We also cover how the cluster communicates and how applications connect. So, it's a fully supported option as far as your deployment is concerned. For Microservices, we just discussed Pub/Sub messaging, where that use case is specifically designed for microservices. Alongside that, if it's a web-based application, you can also use Session State, which is not very common but is an option, and Data Caching is another aspect: instead of a microservice relying on the backend database, which can slow it down, you can use NCache, which is extremely fast in comparison. So, yes, you should definitely consider NCache for microservices, and if the deployment architecture demands that it run in Kubernetes, NCache is fully supported there.
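As a rough sketch of the kind of reference YAML involved, the deployment boils down to an NCache container plus a service that exposes its client and management ports. The image tag, names, and replica count here are illustrative; use the YAML files from the NCache documentation for real deployments:

```yaml
# Illustrative NCache deployment sketch for Kubernetes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ncache-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ncache
  template:
    metadata:
      labels:
        app: ncache
    spec:
      containers:
        - name: ncache
          image: alachisoft/ncache:latest   # assumed image tag
          ports:
            - containerPort: 9800   # client-to-server port
            - containerPort: 8251   # management port
---
# Service so clients and cluster nodes can discover the pods.
apiVersion: v1
kind: Service
metadata:
  name: ncache-service
spec:
  selector:
    app: ncache
  ports:
    - name: client
      port: 9800
      targetPort: 9800
    - name: management
      port: 8251
      targetPort: 8251
```

Client applications then connect to the service name instead of fixed IPs, which fits the dynamic add/remove behavior of the cluster.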
All right, I'll cover one architectural topic and then move on to our hands-on portion, show you the actual product in action, and then we'll come back, show you how NCache supports those topologies and what the options are, and dig deep into the architectural side of things. So, the NCache cluster itself: if you noticed, we had a cluster. Let's talk about how this cluster gets formulated. First of all, it's a TCP/IP based cache clustering protocol. We call it a dynamic cache cluster.
It's our own implementation, a 100% peer-to-peer architected caching cluster. We're not using any third-party or Windows clustering for this; it's our own 100% peer-to-peer, proprietary protocol. There's no single point of failure. You can add or remove any server at runtime from a running cache; you don't have to stop the cache or the client applications connected to it. So, you can dynamically make changes to a running cache cluster, and when you do this there is no downtime or data loss on the application side. It's all built into the protocol and seamless to your applications. This ensures that NCache provides high availability and 100% uptime for your applications.
When a server goes down, for example, you lose a server, other servers still running in the cache cluster, detect that and then notify clients, which are in constant communication with the cluster and there's a connection failover support which is built into the client side of NCache, that allows clients to decide that they need to stop using the down server and start using the surviving nodes. So, they automatically failover and start using the surviving server. If that server also goes down clients further failover and starts using the last survivor. So, as long as there's one server up and running in the cache cluster, there's no data loss or downtime.
All of this is seamless to your applications, ensuring 100% uptime, a highly available cache scenario. You can even apply some configurations at runtime; you don't have to stop the cache for that. So, that's the best thing about NCache. In comparison, other products have very limited support here. Their cluster is not a 100% peer-to-peer architecture; there's a concept of master and slave, and if the master goes down, the slave cannot function independently in a 100% healthy manner. With their partitioning and sharding concepts, there's manual intervention involved; it's not 100% peer-to-peer. So, with NCache, we guarantee no downtime in case any server goes down. The other servers are fully capable of managing the cache cluster as long as they're up and running; with just one server you can manage everything. Let me show you a demonstration.
Let me demonstrate how you formulate a cache cluster and how applications connect to it. Let's quickly log on to our demo environment, and guys, please let me know if there are any questions; feel free to type them into the questions tab. All right, so this is our demo environment. I'm using the 107 box, and I also have the 108 box, which is my second box, deployed in this case for demonstration. First of all, this is our web management tool. It allows you to remotely manage NCache from any place, as long as you have network or internet access to it.
So I'm using localhost, I'm logged into the same box but I can access it from another box as well. So, I can just provide IP address of one of the servers and it would still connect to it. So, this web management, it's written in .NET Core, so, you can even manage Linux Servers if needed. So, same management tool allows you to create a cache, configure it, change settings, monitor it. So, all of this is built into our management and monitoring support.
So, let me go ahead and create a new clustered cache and walk you through how you would create a cache cluster. I've installed NCache on two boxes; that was the first step. The second step is to create a cache cluster and then use it in the application. So, I'm going to use ‘democache’ as my cache name.
The mode of serialization can be Binary or JSON. With binary, you need to decorate your objects with the Serializable attribute, and NCache will carry out binary serialization on those objects while you're adding them, and deserialize them using the same approach on the client end. Now, with the .NET 5.0 release, binary formatters are going end of life; there is a warning which comes up if you make use of the binary formatter.
So, with NCache you might see this, because NCache uses binary serialization. You can use JSON instead, and that's the future we're also predicting; that's the direction we would take, along with Microsoft, where JSON becomes more common in comparison to binary. I'll keep binary for now, but you can choose either as needed.
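For the binary option, the decoration just mentioned is the standard .NET attribute; the Employee type here is illustrative:

```csharp
using System;

// With binary serialization, any type you cache must be marked
// Serializable so it can be serialized on add and deserialized
// on the client end. With JSON serialization, this attribute
// is not required.
[Serializable]
public class Employee
{
    public string Id { get; set; }
    public string Name { get; set; }
}
```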
Selecting a caching topology: the next step is to select the right type of cache cluster. There are four options to choose from: Partitioned-Replica Cache, Partitioned Cache, Replicated Cache, and Mirrored Cache, which I'll explain based on these slides.
So, this is our Mirrored topology, a two-node Active-Passive. This is our Replicated topology, Active-Active with sync updates, with clients distributed across the servers. This is our Partitioned topology, where data is fully partitioned, and then we have Partitioned-Replica, where data is fully partitioned but with backups, and clients are connected to all the servers. I'll come back to these topologies and explain them in more detail, because that's what we plan on doing in this architecture webinar, but now that you have some information about them, I'm going to go ahead and choose Partitioned-Replica Cache, because it's very good for reads and writes, it's very reliable because it has backups, all clients are connected to all the servers, and it's very scalable: you can add more and more servers and it works really well. You just choose the caching topology, and the NCache cluster is automatically formulated after that and starts using that architecture automatically. You don't have to dictate that this becomes partition one, this becomes the active node, and so on and so forth. You just choose the caching topology; that's how simple it is to set up a cache cluster.
Next is the mode of replication between the active partition and its backup. For example, this is our Partitioned-Replica: the active partition backs up data on the passive partition, and this can be sync or async. With sync, the client updates the active and backup as one operation, a transactional operation. With async, the client updates the active partition and returns, and NCache updates the backup. So, async is of course faster, while sync is more reliable.
I'm going to go with async. For the size of the cache cluster, I can provide 2 gigs per server. In Partitioned-Replica, if you use 2 gigs, you need 2 gigs for the backup as well. If you look right here, we have 2 gigs for the active partition and 2 gigs for the backup, so you need a total of 4 gigs per server, but you only have 2 gigs of active cache size per server. With 2 servers, if I specify 107 as my first server and 108 as the second, I essentially have 4 gigs of active cache size from the cache cluster combined, but on each server I need 4 gigs anyway, because 2 gigs are for the active partition and 2 gigs are for the backup of the other server, which is going to get created automatically.
TCP-based parameters need to be set here. 7814 is the port it suggested; I'm just going to pick it, but you can come up with any port you need. Make sure that port is enabled on the firewall, if there is a firewall between server nodes. We typically don't recommend any firewall between NCache server-to-server or client-to-server communication, but if there is one, it's typically going to be between your application boxes and the NCache boxes; again, we don't recommend it.
We want you to keep NCache on the same tier where your application is and use it without any firewall. Firewall slows things down, right, so, we don't recommend any firewall but if there is one, make sure these ports are enabled and we can give you a list of ports which you need for management and monitoring as well. One is 8251, 7814 is the cluster port and then port 9800 is used between client application and NCache Servers.
If your cache becomes full, there are two options. One is that you don't have evictions (by default, evictions are turned on). If you don't have evictions, the cache becomes full and you get an error when adding more items to the cache. But if you have evictions turned on, NCache will automatically make room for new items by removing some items from the cache at runtime, using the priority algorithm, LFU (least frequently used), or LRU (least recently used), and only 5% of the items are removed, those which are older or chosen based on priority.
So, if it's sensitive data such as ASP.NET sessions, turn Evictions off; but if it's other data which you can always re-construct from the backend database, you need the most up-to-date data and you still want the cache not to become full, you should turn evictions on. Start this cache on finish. Auto-start this cache when your server starts up; in that case, if the server reboots, the cache cluster node would automatically start up and join the cache cluster. So, it's a very important feature to turn on, and that's it. It was a simple four or five step process, which allowed you to create a cache cluster.
All of this can be managed with the help of our PowerShell tools as well. So, we have PowerShell Cmdlets, our PowerShell module for NCache is very powerful. It allows you to manage all of this and you can script around it. A lot of our deployments are scripted, using our PowerShell tools. So, you can use them as well.
Performance Counters show the activity. At the moment no clients are connected, so we're good to go at this point, and then the monitoring tool gives us cluster health, which shows fully connected with zero clients connected. Requests per second, average microseconds per cache operation, Additions, Fetches, Updates, Deletes, Cache Size, a CPU/Memory graph, item Count in the cache, CPU Time, Processor Time, System Memory, and some Client Processes and Event Log entries are also there.
Similarly, we have Client-Side dashboards available. So, any application that you run and connect can also publish some counters, and this tool is going to capture those counters by default in the Client Dashboard. These are pre-configured dashboards to get you started, and then we have a Report dashboard as well, which gives the Server-Side and Client-Side performance counters in a consolidated view, and on top of that you can create your own dashboards, which I'll demonstrate. So, keeping everything simple, I think it's fully connected and ready. I'm going to run an application and start using it. For that, you can use the stress test tool, which comes installed with NCache, and run it with default parameters; you would notice that some activity gets simulated on my cache cluster.
One client is connected. Requests are coming in; 300 to 400 requests per second are being generated at the moment, and it's a mix of Additions, Fetches and Updates, with sub-millisecond latency being observed. So, without any performance degradation, it's able to handle 300 to 400 requests per second, managing less than 100 microseconds per operation.
Let me run it one more time and you will see an increase in the activity. Notice that my client application got connected to both servers automatically; it's obeying the topology that we've chosen. If you remember, in Partition of Replica a client opens connections to both servers, so that's exactly what it did, and about eight to nine hundred requests per second are being observed on each of my server nodes. So, the entire cache cluster is handling about 1600 to 1700 requests per second with a little less than 100 microseconds per cache operation.
CPU, Memory, everything is well within range. It's not very CPU or Memory intensive; memory is what it uses for the data it stores, and we're just using a one-kilobyte average object size. We have client processes here, and then we have client dashboards showing CPU, Memory Usage, an Average Item Size of a thousand bytes (one kilobyte), Read Operations per second and Write Operations per second. There are no Additions because we're just updating, so Updates per second are shown right here on the Client Side. Fetches and average microseconds per Fetch are shown right here, and then a report view of the Server Side and Client Side as well.
Now, this tool is using Perfmon Counters behind the scenes. If you're using a Linux Server, this would be based on NCache's own monitoring. We've implemented some performance counters for Linux which are not Perfmon-based, because Perfmon is not yet available on the .NET Core side, but you can still monitor your Linux boxes. On top of that, you can create your own dashboards as well. For example, I can create a custom dashboard; let me just name it ‘mydashboard’ and create it, and now you have a bunch more options.
For example, on the right pane, if you open the counters, you can see Messaging, Data Types, System Resources, Cache Health and Cache Resources. So, let's pick something, say API Logs, and this will now start logging the APIs, all the operations on the Server Side, and you can turn on logging for this. If I stop it for now, it shows Time Stamp, Server Node, Client Node, Process ID of the client, Execution Time and the Method. If there are any errors, those would be printed here as well.
So, this is a very effective way of monitoring. Similarly, you can come up with any kind of monitoring; for example, you can see the number of Clients here, or the Cache Size. You can drag and drop counters from the right side and use your own custom monitoring as needed. So, any questions so far? This was a quick peek into our Cache Creation and our Web Management and Monitoring tools. Next, I'll show you how to use it in the application and also talk about the architecture in detail. The last 20 minutes are going to be focused on NCache topologies, the different options to create the cache cluster, and the application use cases alongside.
This question's a fun one. It's just, ‘Can we get help in setting this up?’
Absolutely, absolutely, we work very closely with our customers. You just need to get in touch with our sales and support team. First of all, it's a fully working trial which is available, so you can download it and get started right away, and then if there are any requirements, any questions, any feedback needed from our side, or any hand-holding sessions needed, we would love to work with you. Yeah, I second that with Ron. We not only do live sessions, we do as many sessions as you need, as many calls as you need, as many engagements as you need. We will literally guide you through the entire process, if you'd like. So, anyway, Ron, back to you.
All right, very good. So, I'm going to show the architectural details of some topologies and then I'm going to bring back NCache Live, where I showcase the applications. All right, so, we already discussed that there are many options to choose from. I'm going to spend less time on Mirrored and Replicated and more time on Partitioned and Partition of Replica Cache, because these are our most popular ones.
Mirrored Cache is a 2-Node Active-Passive setup. All clients connect to the Active Server, and whatever data you have on the Active is asynchronously backed up on the Passive. So, it's an Active-Passive setup with async mirroring, for smaller configurations. The idea here is that smaller applications need good performance for Reads and Writes and some kind of reliability as well, where if Server 1 goes down, a backup is already there.
So, it's very good for reliable data storage as well, and it's recommended for smaller configurations. There is no scalability available, because you only have 1 active Server, so it's limited on that front.
Replicated Cache, again for smaller configurations, is Active-Active. Each server has a full copy of the cache, so whatever data you have on Server 1, a copy is on Server 2, and synchronous updates happen between Servers. Whatever data you update on Server 1 has to be updated on Server 2 and vice versa. However, for reading, clients are distributed: some client applications connect to Server 1 and some to Server 2, and this is done automatically as part of the protocol. They read from a server directly, so Read performance is fast; a Write has to be applied on all Servers.
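The read/write split of the replicated topology can be modeled in a few lines. This is a toy sketch with made-up names, not NCache code; it just shows why writes cost more (they touch every copy) while reads are cheap and load-balanced:

```python
class ReplicatedCache:
    """Toy model of a replicated topology: every server holds a full
    copy of the data; a write is applied to all servers, while a read
    is served by a single server chosen round-robin."""
    def __init__(self, n_servers):
        self.servers = [dict() for _ in range(n_servers)]
        self._next = 0

    def put(self, key, value):
        for server in self.servers:   # write must touch every replica
            server[key] = value

    def get(self, key):
        server = self.servers[self._next]           # round-robin read
        self._next = (self._next + 1) % len(self.servers)
        return server[key]                          # served locally

c = ReplicatedCache(2)
c.put("k", 42)
print(c.get("k"), c.get("k"))  # both reads return 42, from different servers
```

The loop in `put` is the write amplification the transcript mentions: with 2 or 3 servers it is negligible, but it is why this topology does not scale for write-heavy workloads.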
So, a Write operation would be slightly slower, but it works really well for smaller configurations. Up to 2 or 3 Servers, it doesn't give you performance issues even for Writes, and on top of that, it's very reliable: if any Server goes down, the clients fail over and start using the surviving node, with no data loss or downtime. So, it's for read-intensive applications and smaller configurations; not very scalable for Writes but very scalable for Reads, and also good for reliable transactions, since high availability and data reliability are part of this topology.
Partitioned Cache is for performance-centric scenarios, strictly for Read and Write performance and also for scalability, but with no backups. In this, we have partitioning of the data happening, so each Server has a partition of the data: some data goes on Server 1, some on Server 2. Clients are connected to all the Servers; this is within the design, clients automatically open connections. The data is fully distributed, so your request load also goes to all Servers, and there's a distribution algorithm working behind the scenes.
Clients know where the data exists. They are the ones distributing the data, and they calculate the data's location with the help of a hash map; a hash algorithm is working behind the scenes. It hashes a given key to a server, and that's how it pinpoints where that data exists. So, Read and Write performance is super fast with just 1 or 2 Servers, and if you add more Servers, you pool the memory resources together. More Servers work in combination with one another, pool their resources and help serve client requests. So, more Servers mean more request-handling capacity out of NCache; it's very scalable as well. It doesn't have any backup, so if you lose any server, you lose that partition and you get data loss; you need to reconstruct your data from the backend data source. Otherwise, for performance-centric scenarios, for high-performance Reads and Writes, and for scalability, this is a very good topology to have.
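The key-to-server routing described above can be sketched as a simple hash function. NCache's real distribution map is more sophisticated (it uses hash buckets that can be rebalanced when servers join or leave), but the core idea is that every client deterministically computes the same owning server for a given key; the function and IPs below are made up for illustration:

```python
import hashlib

def partition_for(key, servers):
    """Toy distribution map: hash the key and map it onto one of the
    servers, mimicking how a client pinpoints the partition that holds
    an item without asking the cluster first."""
    digest = hashlib.md5(key.encode()).digest()
    bucket = int.from_bytes(digest[:4], "big")
    return servers[bucket % len(servers)]

servers = ["10.0.0.107", "10.0.0.108"]
# Every client computes the same location for the same key, so reads
# and writes go straight to the owning partition in one network hop.
assert partition_for("Customer:1001", servers) == partition_for("Customer:1001", servers)
```

Because the mapping is deterministic, no central lookup service is needed, which is what keeps reads and writes at a single hop.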
And then we have Partition of Replica, which gives you all the benefits of Partitioned Cache but with backup support. We have the data fully partitioned, clients fully connected to all the Servers and the distribution map at work, but each Server maintains 2 partitions: an Active data partition and a Passive Replica partition of another Server. As you can see, Server 1's backup is on 2 and Server 2's backup is on 1, and again you have sync or async backup, as shown in the demonstration.
Clients read and write from the node where the data exists, so Read and Write performance is super fast, like Partitioned Cache. It works very well for smaller configurations, for 2 nodes, and if you add more Servers, it works even better by distributing the load, giving more request-handling capacity out of the cache. On top of that, it has data reliability support in the form of these backups. If you lose any Server, the backup detects it and that partition becomes activated at runtime, so without any data loss or downtime, the surviving node is fully capable of providing the data of the down Server. This also works with more Servers: in a 3-Server scenario, Server 1 is active with its backup on 2, Server 2 is active with its backup on 3, and Server 3 is active with its backup on 1. In case of any Server going down, let's say Server 1 goes down, the backup of Server 1 would merge into the surviving nodes and the cluster would heal itself. The cluster reformulates actives and backups using the current number of Servers, and while all of this is being done, there is no data loss or downtime. It's a background process, which is very optimized.
So, this topology gives you very good performance for Reads and very good performance for Writes. It works very well for smaller configurations, and with more Servers it works even better, with more and more scalability: you keep on adding Servers and get more request-handling capacity. On top of that, it gives you data reliability support, where if any Server goes down, there's no data loss or application downtime. So, this is our most recommended topology.
I would cover two more concepts. There's a Client Cache as well; that's not a cluster topology, but a configuration that you can set up on top of your cache. For example, if I want to turn on Client Cache on this, all I have to do is come right here, add a Client Node, which is 107, and then I have a Client Cache right here. I just specify, let's say, ‘myclientCache’. It has two synchronization modes and an isolation level of InProc or OutProc. I can keep it as a Local Cache, which connects to my Server Cache and holds a subset of the data, or I can keep it inside the application process as well. I give the size, since it's going to maintain a subset of the data, and I'm good. Finish, that's it.
So, notice I've just made some configuration changes. What this has done is create a Client Cache on my application box, and this Client Cache would get utilized automatically behind the scenes by my application. It keeps a subset of the data closer to the application on the same box, either in a local OutProc or InProc manner. It's connected with the Server Cache, so any changes that you make here are propagated to the Server Cache, and as a matter of fact, data gets into the Client Cache once you access it from the Server Cache. The first time, it would get the data from the clustered Server Cache, copy it into the Client Cache, keep a subset there, and keep using that data to save trips to the Clustered Cache. So, you're saving expensive network trips, and for any changes you make, it's a Synchronized Client Cache which NCache manages on your application's behalf. Without any code changes you can start using Client Cache, InProc or OutProc, and improve your application performance. Synchronization is managed by the cache, so no implementation is needed for that.
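The read path of a client cache (often called a near cache) can be modeled like this. The sketch only covers the "fetch once, then serve locally" behavior; NCache additionally keeps the local copy synchronized with the cluster, which this toy model with invented names deliberately omits:

```python
class NearCache:
    """Toy client cache: the first read fetches from the remote
    (clustered) cache and keeps a local in-process copy; subsequent
    reads for the same key skip the network entirely."""
    def __init__(self, remote):
        self.remote = remote      # stands in for the clustered cache
        self.local = {}           # in-process subset of the data
        self.remote_hits = 0      # counts expensive network trips

    def get(self, key):
        if key not in self.local:
            self.remote_hits += 1            # network trip to the cluster
            self.local[key] = self.remote[key]
        return self.local[key]               # served locally afterwards

remote = {"Product:505": "Bread"}
nc = NearCache(remote)
nc.get("Product:505")
nc.get("Product:505")
print(nc.remote_hits)  # → 1: the second read never left the process
```

This is also why the 70/30 read/write guideline below makes sense: the local copy only pays off when the same data is read repeatedly between writes.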
If your use case is more of a reference-data nature, if you have, let's say, more than 70% Reads versus Writes, we highly encourage you to start using Client Cache.
And then we have the final topology in this segment, which is our WAN Replication. If you have multiple datacenters, we have a topology for that. You can choose any caching topology, Partition of Replica probably, and then connect two different datacenters with one another. The caches can connect and transmit data from one datacenter to the other using our WAN Replication. We have Active-Passive and Active-Active topologies available. In Active-Passive, it's a one-way transfer of data. The bridge is another cache, which allows you to connect two caches together.
So, if you come to our management console, let me just bring it back, right. If you click on Bridge, it allows you to create a bridge.
Let's actually go ahead and do that. You provide the Server IPs; for example, on 107 I want to create the bridge. Please bear with me, I'm not able to move this. Okay! A bridge is already registered on this node, okay, so I can give a different name for the bridge; that name was already picked. There you go.
Okay, so choose ‘Next’ on this. I'll keep everything default for now, based on the time constraint, and then I can go ahead and add 2 caches to this bridge. There you go. So, I can add Cache 1, let's say, pick a cache from 107. Let's pick this cache, make it Active, and then I can go ahead and do the same for the 2nd cache. Just pick any, make it Active-Active or Active-Passive; it's up to you, and that's it.
If I now run the bridge, what this has done is construct a bridge between 2 caches. Let me just bring back the presentation: it has constructed a bridge between two datacenters, between caches across datacenters. Although in my case I was just using one box, assume that it's across datacenters. Cache 1 would transmit data to the bridge, which is essentially a queue. In Active-Passive, the bridge in turn transmits the data to the target cache, and in Active-Active it could be 2 caches transferring data to one another. And as I've shown you in the configuration, it's without any code change. You can link 2 datacenters together: Active-Passive for a DR scenario, for a maintenance use case, for east-to-west migration of data; or Active-Active where you have two different applications deployed in two different regions, or the same application deployed in different regions, and you want to share data between those two regions as far as the cache is concerned. So, that's our WAN Replication model.
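The "bridge is essentially a queue" idea can be sketched for the Active-Passive case. This toy model with invented names only shows the shape of one-way replication: the active cache enqueues changes on the bridge, which later applies them at the passive site:

```python
from collections import deque

class Bridge:
    """Toy one-way (Active-Passive) WAN replication bridge: writes to
    the active cache are queued on the bridge and asynchronously
    drained into the passive cache in the other datacenter."""
    def __init__(self, source, target):
        self.source, self.target = source, target
        self.queue = deque()             # the bridge's replication queue

    def record(self, key, value):
        self.source[key] = value         # local write completes first
        self.queue.append((key, value))  # change queued for replication

    def drain(self):
        while self.queue:
            key, value = self.queue.popleft()
            self.target[key] = value     # applied at the passive site

dc1, dc2 = {}, {}                        # two datacenters' caches
bridge = Bridge(dc1, dc2)
bridge.record("session:1", "data")
bridge.drain()
print(dc2)  # → {'session:1': 'data'}
```

Because the local write completes before replication, a WAN outage delays the passive site but never blocks the active application, which is the usual trade-off of asynchronous cross-datacenter replication.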
So, these are all the topologies. I would personally recommend that you use Partition of Replica Cache. If you have a reference-data scenario, use Client Cache, and if you have WAN Replication requirements, Active-Passive or Active-Active, consider our WAN Replication bridge feature.
Now coming back to our demo environment, if you remember we had an application running. This was a pre-configured application, using stress test tool. How about we use some applications to connect to our cache?
So, first of all, how to use NCache in a real-life application. For that, I'm going to use our samples. We have a basic operations sample, which comes installed with NCache. All you need to do is add the NCache NuGet package, add these namespaces, and then you can start using NCache in an object caching application, like this. You have Alachisoft.NCache.Runtime and Alachisoft.NCache.Client, and you construct a cache handle using ICache. This sample then initializes the cache, constructs a key, adds an item to the cache, retrieves it, updates it and removes it.
So, let's go inside this. If you look at the implementation, it's calling CacheManager.GetCache(cache); to actually get the cache handle and get connected to it.
It's using configurations which are made part of the application, so it knows where to connect. By default, it's using a Local Cache, but you can use a Remote Cache by pointing to that Server. Then you can call _cache.Add to add data, _cache.Get to retrieve that data back and, similarly, _cache.Insert to update that data.
So, that's how easy it is to set up. You just need some references in your application and you connect to a cache by using one of our sample applications.
Another interesting option, which I wanted to cover in this presentation, is NCache Live. On our website, on the right side, you see this ‘Try NCache Live’. What we've done is host a cache cluster for you, which you can use by registering with us. For example, I'm already registered, so I'm going to use my credentials, and based on that, it provides some runnable examples for you along with a 2-node hosted cache cluster.
So, on the right side, we have the NCache Web Manager and Monitor. You can click on this and play around with it: you can add caches, you can monitor things. For example, this is the monitoring view, and similarly, you can change the view by going to the Client Dashboard, Report Dashboard and all that good stuff. I'll just keep the Server Dashboard, and then you have runnable examples on the left side.
So, let's open CRUD Operations. From what I've shown you, let's pick ‘Add, Get, Update, Delete (together)’, because that's the most elaborate one among the CRUD Operations. Okay, so, in this, it's again getting a cache handle by calling CacheManager.GetCache. It's constructing some product objects. To be precise, it's adding one into NCache and then retrieving the same object back with Product ID 505. Let's see what the name is? “Bread”, UnitPrice is 1.5, so that's the product. We added it to the cache, retrieved it, updated it and removed it. With the update, we're just changing some parameters; I think the unit price goes from 1.5 to 3.5.
Let's run it, and you can change some details here; next time we'll just change it to another value. There you go: product added in the cache, retrieved, product updated, retrieved again, unit price is 3.5. You can make changes to this, by the way. Just to show you that it's runnable code, let me add it one more time. There you go. So, it's runnable code, which you can change at runtime to play around with NCache. That's one simple example. Let me show you one other feature within this and then move on to our session caching.
So, if I go to the home page... as a matter of fact, let me stick to NCache Live for now, because it's convenient instead of opening the sample. Right, so, with this we have SQL searching available. That's one of the features within object caching, where you can search the cache using SQL-like searches. LINQ queries can also be run, but for now I'll just focus on SQL search. The idea here is that you get connected to the cache, add a bunch of customers and then construct a SQL-like query, which you run on top of the objects that you have in NCache. So, we're saying ‘SELECT *’, select all properties, from ‘Entities.Customer’, the namespace of the object, where ‘OrdersCount’ is greater than our runtime parameter, ordered by CustomerID in ascending order. OrdersCount is 5; that's the runtime parameter. Then we're passing this query command to SearchService.ExecuteReader, and based on that, it's going to execute the query on the cache, construct a reader, and then you can iterate through the result set using the reader interface.
So, let me run this, and that would allow you to run SQL-like queries on your objects. You can run SELECT Products WHERE Product.Price > 10 AND < 100, you can select products based on a certain category, you can search orders based on customer ID and so on. So, it retrieved the list of customers directly, using the criteria that you specified.
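What the example query does can be reproduced over plain objects to show the semantics. The sample data below is made up; the point is the filter-then-sort behavior of `SELECT * FROM Entities.Customer WHERE OrdersCount > ? ORDER BY CustomerID ASC`:

```python
# In-memory stand-ins for the cached Customer objects.
customers = [
    {"CustomerID": "C3", "OrdersCount": 7},
    {"CustomerID": "C1", "OrdersCount": 2},
    {"CustomerID": "C2", "OrdersCount": 9},
]

orders_count = 5  # the runtime query parameter from the example

# WHERE OrdersCount > ? ... ORDER BY CustomerID ASC
result = sorted(
    (c for c in customers if c["OrdersCount"] > orders_count),
    key=lambda c: c["CustomerID"],
)
print([c["CustomerID"] for c in result])  # → ['C2', 'C3']
```

The difference in the real feature is that NCache evaluates the predicate server-side against indexed object attributes, so only the matching objects travel over the network.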
And finally, let me just come back here; let me see if I can show you Pub/Sub Messaging as well. I think I can. Okay, so Pub/Sub Messaging is available as well; that was the third use case, and then finally I'll show you session caching through the presentation. You can publish a message on a topic by running this code. Let me give you a quick peek into this and then take it from there. CacheManager.GetCache, then CreateTopic; you construct a name. There is a concept of a topic: similar messages go to a topic, so you construct a topic, a channel where you have similar messages. For example, for one application you can create a certain topic, or based on your object types, or based on your different Microservices and Modules, you can have different topics created. Then random customers were retrieved from the database, and you simply publish those customers as messages to NCache. At this point, this has published some messages onto NCache.
So, coming back here, let's subscribe and receive some messages. Your subscriber application code would look something like this; we have samples available as well, but I'll keep it brief due to the time constraint. Again, it constructed a topic with the same approach, created a subscription, and then it has a callback, which simply gets called. So, let's run it. There you go; it received the messages for the random customers which were published.
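The topic/subscription/callback shape described above can be reduced to a minimal sketch. NCache delivers these messages across the cache cluster between separate publisher and subscriber processes; this in-process toy model with invented names only shows the programming pattern:

```python
class Topic:
    """Toy pub/sub topic: publishers push messages onto a named
    channel; subscribers register callbacks that are invoked for
    each published message."""
    def __init__(self, name):
        self.name = name
        self.subscribers = []

    def create_subscription(self, callback):
        self.subscribers.append(callback)   # subscriber side

    def publish(self, message):
        for callback in self.subscribers:   # fan out to all subscribers
            callback(message)

received = []
topic = Topic("customers")
topic.create_subscription(received.append)  # subscriber registers a callback
topic.publish({"CustomerID": "C1"})         # publisher sends a message
print(received)  # → [{'CustomerID': 'C1'}]
```

The decoupling is the point: the publisher never references the subscribers, only the topic name, which is what lets separate microservices or modules communicate through the cache.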
And finally, if you're interested in Session Caching, we have samples which are available right here. If you go to samples, dotnetcore preferably, or dotnet, depending on your choice; I think dotnet is going to be easier for explaining. I think we have the sample available, right, there you go. Okay, so with GuessGame, within our web.config, all you have to do is open the web.config, add an assembly tag right here, this is it, and then you attach a session state tag, and here you specify the cache name which you've just configured. For example, if I run this, it will now use demoCache as my session manager. So, this covers our session caching use case.
The same goes for SignalR Backplane; we have samples available for SignalR, here it is. We have ViewState available, in ASP.NET Core we have ResponseCaching available, and an IDistributedCache sample is available as well.
So, all of these samples are available, and given the time constraint, I think I'll conclude the presentation at this point. I'll hand it over to you, Zack. This was more hands-on, so I think these are enough as far as this presentation is concerned.
Okay, we do have one more question and it is your favorite question. It is: we've looked at Redis as well and we like NCache as well; how do NCache and Redis compare?
First of all, it's the clustering. So, let's start with that: Redis is very basic in terms of cache clustering; it's not a 100% peer-to-peer architecture. I covered this aspect when we were discussing the dynamic clustering of NCache. NCache is a 100% peer-to-peer architecture; in comparison, Redis is not.
Then, the platform and language support. Redis is not a native .NET or Windows product. The open-source version ported to Windows is buggy and doesn't really run well, and the Redis documentation owns up to it; they recommend not to use it on Windows. The preferred platform for it is Linux. So, that's a big limitation if you are coming from a Windows background and need a product which runs on Windows, and since NCache runs on Windows and Linux, it's a clear winner on that front.
Then, from a .NET support standpoint, the support is not official. There are some libraries which are written in .NET, but again, those are third-party support options. Then the feature set: NCache comes with Server-Side code, Read-Through, Write-Through and Cache Loader. You can run computations on NCache directly; it allows you to run .NET and .NET Core code directly on the NCache Server Side. Redis completely lacks those features. The SQL search and the LINQ query support are not available with Redis. This list can go on, and Zack, you're right, this is one of the very common questions. So, I would recommend that you go to our comparisons; if you go to NCache comparisons, there are a lot of comparisons published, and they would give you feature-by-feature details of the differences between NCache and Redis, and you would be able to make a clear assessment of how NCache is a better product in comparison.
Okay, well, I think that pretty much concludes it for the time. Thank you everyone for coming out. We always say this and we'll say it again: please stay safe out there, stay sane out there, and you can always reach out to either firstname.lastname@example.org or email@example.com if you would like any assistance in getting NCache running or if you have any questions. We do have a two-month free trial of NCache Enterprise available on the website, so definitely go and download it if you haven't already, and we look forward to seeing you in our next webinar. Thank you so much for coming out, and all of you have a wonderful day. Thank you everybody. Thank you Zack. Thank you Ron. Bye.