NCache Architecture

Today, I’ll talk about NCache Architecture. NCache is an open source In-Memory Distributed Cache for .NET and .NET Core. NCache also supports Java and Node.js applications.

Five Common Uses of NCache

There are five common uses of NCache for .NET, .NET Core, Java and Node.js applications.

  1. Application Data Caching
  2. NoSQL Database (In-Memory with Persistence)
  3. Web App Caching
  4. Pub / Sub Messaging, CQ, and Events
  5. Distributed Lucene for .NET (Full Text Search)

Application Data Caching

Number one is application data caching. This is where you cache application data to reduce those expensive database trips. You do this because cache is much faster and more scalable than your database. It helps you improve your application performance and scalability. This is the most popular way people use NCache.
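As a concrete illustration, here is a minimal cache-aside sketch against the NCache .NET client API (CacheManager.GetCache, Get&lt;T&gt;, Insert); the cache name, key format and the LoadCustomerFromDb helper are illustrative assumptions, not part of NCache itself.

```csharp
// Cache-aside pattern: read from the cache first, fall back to the
// database on a miss, then populate the cache for subsequent reads.
using System;
using Alachisoft.NCache.Client;

public class CustomerRepository
{
    private readonly ICache _cache = CacheManager.GetCache("demoCache"); // assumed cache name

    public Customer GetCustomer(int customerId)
    {
        string key = $"Customer:{customerId}";

        // 1. Try the cache first (in-memory, no database trip).
        Customer customer = _cache.Get<Customer>(key);
        if (customer != null)
            return customer;

        // 2. Cache miss: fall back to the database.
        customer = LoadCustomerFromDb(customerId); // hypothetical data-access call

        // 3. Populate the cache so the next read skips the database.
        if (customer != null)
            _cache.Insert(key, new CacheItem(customer));

        return customer;
    }

    private Customer LoadCustomerFromDb(int id) { /* ... run the SQL query ... */ return null; }
}

[Serializable]
public class Customer { public int Id; public string Name; }
```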

NoSQL Database (In-Memory with Persistence)

The second use is as a NoSQL database: an in-memory NoSQL database, but with persistence. What that means is that the entire in-memory NoSQL database is also persisted, either to a native disk store provided by NCache or to a third-party database of your choice through a provider. The in-memory NoSQL database is kept synchronized with the persisted copy, so you know that your data is always correct.

Web App Caching

The third common use case of NCache is in web applications. For ASP.NET Core, legacy ASP.NET, Java web apps and Node.js, you can store your web sessions in NCache. NCache replicates them across multiple servers, so that there's no loss of session data if any server goes down. This is very fast and scalable storage for sessions, and it improves your application performance and scalability.

For ASP.NET Core, you can also use NCache as your Response Cache storage and as a backplane for SignalR. And you can also use the IDistributedCache interface provided with ASP.NET Core to plug in NCache without any code change.
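To sketch that last point: the application below codes only against the standard IDistributedCache abstraction from Microsoft.Extensions.Caching; NCache is then wired in at startup through its provider package, so the application code itself never references NCache. The Product type and LoadFromDbAsync helper are illustrative assumptions.

```csharp
// Standard ASP.NET Core IDistributedCache usage; the concrete cache
// (NCache, Redis, SQL Server, ...) is chosen purely by DI registration.
using System;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

public class ProductService
{
    private readonly IDistributedCache _cache;

    public ProductService(IDistributedCache cache) => _cache = cache;

    public async Task<Product?> GetProductAsync(string id)
    {
        // IDistributedCache works on byte[]; serialize/deserialize around it.
        byte[]? cached = await _cache.GetAsync($"product:{id}");
        if (cached != null)
            return JsonSerializer.Deserialize<Product>(cached);

        Product? product = await LoadFromDbAsync(id); // hypothetical data-access call
        if (product != null)
        {
            await _cache.SetAsync($"product:{id}",
                JsonSerializer.SerializeToUtf8Bytes(product),
                new DistributedCacheEntryOptions
                {
                    AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(10)
                });
        }
        return product;
    }

    private Task<Product?> LoadFromDbAsync(string id) => Task.FromResult<Product?>(null);
}

public record Product(string Id, string Name);
```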

For legacy ASP.NET applications, you can also store your View State in NCache, store the Output Cache (the page output cache), and use NCache as the SignalR backplane.

For sessions, in all of these applications, there's the single-site session, which is the default. But if your application is deployed across multiple geographical locations, NCache has a multi-site session feature, where sessions can move from one location to the other in a seamless fashion.

Pub / Sub Messaging, CQ, and Events

The fourth most common use of NCache is Pub/Sub messaging, Continuous Query and events. NCache provides an in-memory Pub/Sub messaging platform, which is extremely fast and scalable, and it allows your applications, whether they are .NET, .NET Core, Java or Node.js applications, to share data with other applications in a decoupled manner using the Pub/Sub model. NCache Pub/Sub features include topics, subscriptions, messages, etc.
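As a hedged sketch of the Pub/Sub model: the code below follows NCache's documented messaging API (MessagingService, ITopic, DeliveryOption), but the cache name, topic name, Order payload and exact namespaces are assumptions for illustration.

```csharp
// One process publishes to a named topic; any number of decoupled
// subscriber processes receive the message through the cache cluster.
using System;
using Alachisoft.NCache.Client;
using Alachisoft.NCache.Runtime.Caching;

ICache cache = CacheManager.GetCache("demoCache"); // assumed cache name

// Publisher side: create (or fetch) the topic and publish a message.
ITopic topic = cache.MessagingService.GetTopic("orders")
            ?? cache.MessagingService.CreateTopic("orders");
topic.Publish(new Message(new Order { Id = 42 }), DeliveryOption.All);

// Subscriber side: register a callback that fires for each message.
ITopicSubscription subscription = topic.CreateSubscription((sender, args) =>
{
    var order = args.Message.Payload as Order;
    Console.WriteLine($"Received order {order?.Id}");
});

[Serializable]
public class Order { public int Id { get; set; } }
```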

NCache also provides a Continuous Query feature, which allows you to define a data set in the cache based on an SQL query, and then monitors any changes in that data set. For example, you could say, give me all the important customers, and then if an important customer is added, updated or removed, you're going to be notified. It's a very powerful feature for a very specific use case, where you need to monitor changes to data sets.
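Here is a hedged sketch of the "important customers" example, following NCache's documented Continuous Query API (QueryCommand, ContinuousQuery, MessagingService.RegisterCQ); the fully-qualified Customer class name, the attribute names and the callback shape are assumptions.

```csharp
// Register a server-side query; NCache then notifies this client whenever
// an item enters, changes within, or leaves the matching data set.
using System;
using Alachisoft.NCache.Client;
using Alachisoft.NCache.Runtime.Events;

ICache cache = CacheManager.GetCache("demoCache"); // assumed cache name

var queryCommand = new QueryCommand(
    "SELECT $VALUE$ FROM FQN.Customer WHERE Category = ?"); // FQN = full class name
queryCommand.Parameters.Add("Category", "Important");

var continuousQuery = new ContinuousQuery(queryCommand);

continuousQuery.RegisterNotification(
    (key, arg) => Console.WriteLine($"{key}: {arg.EventType}"),
    EventType.ItemAdded | EventType.ItemUpdated | EventType.ItemRemoved,
    EventDataFilter.None);

cache.MessagingService.RegisterCQ(continuousQuery);
```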

Finally, there's the events feature: NCache provides you both item-level events and cache-level events.

Distributed Lucene for .NET (Full Text Search)

The fifth most common use case of NCache is full text searching. If your application does full text searching with the Lucene.NET API, NCache has implemented Lucene in a distributed fashion, so the Lucene index is actually distributed across multiple NCache servers for scalability. This is much faster and more scalable than using standalone Lucene. If you have an application that is already using Lucene, you can migrate it to NCache without any code change, because the same Lucene API works with NCache.
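The sketch below illustrates the "no code change" claim: this is ordinary Lucene.NET 4.8 indexing and search code where only the Directory is swapped for the distributed one. NCacheDirectory.Open and its arguments are an assumption based on NCache's Distributed Lucene documentation; everything else is the plain Lucene API.

```csharp
// Standard Lucene.NET code; only the Directory line is NCache-specific,
// placing the index in the cache cluster instead of the local file system.
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Documents;
using Lucene.Net.Index;
using Lucene.Net.QueryParsers.Classic;
using Lucene.Net.Search;
using Lucene.Net.Util;

var analyzer = new StandardAnalyzer(LuceneVersion.LUCENE_48);

// Assumed NCache API: open a distributed directory named "productIndex"
// in the cache "demoCache"; the index is partitioned across the servers.
var directory = NCacheDirectory.Open("demoCache", "productIndex");

using var writer = new IndexWriter(directory,
    new IndexWriterConfig(LuceneVersion.LUCENE_48, analyzer));
writer.AddDocument(new Document
{
    new TextField("name", "distributed full text search", Field.Store.YES)
});
writer.Commit();

using var reader = DirectoryReader.Open(directory);
var searcher = new IndexSearcher(reader);
var query = new QueryParser(LuceneVersion.LUCENE_48, "name", analyzer).Parse("search");
TopDocs hits = searcher.Search(query, 10);
```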

NCache in the Mission Critical Apps

So, let's talk about how NCache is used in mission-critical applications to achieve extreme performance and scalability. These applications are usually high-transaction applications. They're web applications, microservices, web APIs or other server applications that are accessing either relational databases like SQL Server and Oracle, a NoSQL database like Cosmos DB and MongoDB, or maybe legacy data on a mainframe.

In a typical deployment, NCache sits in between the application tier and the data tier as a separate caching tier of two or more servers, where NCache pools the memory, CPU and network card resources of all of these servers into one logical capacity. Then, if that capacity is maxed out, let's say you started with a two-node cache cluster and its capacity got maxed out, you can just add a third server at runtime and the whole capacity increases in a linear fashion. That is why NCache never becomes a bottleneck, the way your data storage does.

Now, the caching tier being a separate tier is the most popular way of using NCache, but it doesn't have to be that way; your application and NCache can sit on the same server, if that is your preferred way of deploying your application. For application data caching, NCache's goal is to cache so much data in memory that about 80% of the time you don't even go to your data store; you fetch the data from NCache. That reduces the traffic to your data storage or database by 80%.

NCache Architecture

Let's go into NCache architecture. The core of NCache architecture is its dynamic cache clustering.

Dynamic Cache Cluster

NCache has a self-healing dynamic cache cluster with a peer-to-peer architecture. Peer-to-peer means there's no master node and no slave node; every node is an equal peer. There is a cluster coordinator node, of course, to manage the cluster operations, but that is usually just the oldest node, and if that node ever goes down, the next oldest node is automatically picked as the coordinator in a very seamless fashion.

The cluster coordinator is responsible for managing the cluster membership. It also keeps the caching topology information, and it keeps a distribution map, which I will talk about in a bit. The whole goal is to have a very dynamic cluster, so that you can add or remove cache servers at runtime without stopping either the cache or the applications.

The cluster itself is a TCP-based cluster. When I use the word cluster, I do not mean Windows clustering; NCache has its own TCP-based clustering. By the way, the cache servers in this cluster can be either Windows or Linux boxes.

Within the cluster there is a connection failover capability. If any sockets break within the cluster, there's a retry mechanism: NCache keeps retrying for a certain time and only after that does it time out. There is also a heartbeat feature to keep detecting whether all the nodes are up or not.

Dynamic Clients & Dynamic Configuration

The next feature of NCache architecture is dynamic clients and dynamic configuration. All the client connections are dynamic. What that means is that the clients (and the client here is your application, running on the web server or the app server, accessing the cache cluster) don't have to hard-code any cache server information. As long as they can connect to any one server in the cluster, that server provides them the cluster membership information. Once they have the cluster membership, they know about all the servers in the cluster, and they can connect to all of them if they need to, depending on the caching topology that I will talk about in a bit.

The other part is that when you add or remove servers in the cache cluster, which, as I mentioned, you can do at runtime, the cluster membership is updated. The updated cluster membership is then propagated to all the clients. That way, all the clients know when a new cache server is added, or when an existing server is removed or drops out of the cluster for some reason. That's the dynamic cluster membership.

Secondly, the caching topology information is also dynamic. All the topology-related information is propagated to the clients by the cache cluster, so if any changes happen in that configuration, they get propagated too. And, just like within the cache cluster, there's a connection failover capability between the clients and the cluster. Actually, the likelihood of a client connection breaking with the cluster is higher than the likelihood of a connection breaking within the cluster, because most of the time every server in the cluster sits within the same subnet, very close to each other, whereas the clients may be sitting across multiple routers and firewalls, which increases the likelihood of sockets breaking. If that happens, there's resilience built into NCache: retries are done automatically, and in most situations those connections are restored in a seamless fashion, without the application even noticing that there was a connection breakup. Only when the retries fail does the connection permanently break.

Split Brain Detection & Recovery

There's another aspect of NCache dynamic clustering, which is called split brain detection and recovery. I am not going to go into the details of it, just give you a high level. A split brain essentially means that, due to some environment issue, maybe some socket breakage, the communication between the cluster servers breaks, and this results in multiple sub-clusters.

So, let's say in this case there are three splits: split 1, split 2, split 3. A split occurs because of some connection breakup that could not be resolved through the retries and timeouts. Up until this point, each sub-cluster assumes that the other sub-clusters are dead, so it keeps doing its own work. Then, after some time, the connectivity is restored: the sockets are up and working again. When that reconnection happens, because the servers knew about each other, they detect that a split occurred. Now that all of them can see each other, they need to merge and become the old cluster again. NCache has the ability to do that automatically. The merging happens iteratively, from the largest sub-cluster to the smallest. Some data loss occurs, because updates were made to multiple sub-clusters and one of them has to give up its data to join the other, but that's all part of the logic; it's expected. The nice thing is that, at the end of this, the cluster is again working in a healthy state, and all of it happens automatically without any human intervention. You can find out more about this on our website, in the article "Split-Brain Recovery in NCache: A Tale of Two Halves".

Caching Topologies

Let's go into the caching topologies part of NCache architecture. A caching topology is essentially a strategy for data partitioning, data storage, data replication and client connections.

Partitioned Cache

There are four different topologies. Number one is called Partitioned Cache. Partitioned Cache and Partition-Replica Cache are pretty much the same, except that in Partition-Replica there's a replica, which I'll talk about in a bit. So, let's talk about Partitioned Cache.

In Partitioned Cache, the entire cache is broken up into partitions, and every server has one partition. There are about a thousand buckets for the entire cluster, and every partition has one nth of the buckets, 'n' being the number of servers, which is also the number of partitions. These partitions are created at runtime.

What that means is that if you add a new server at runtime, a third partition will be created and all the buckets will be readjusted accordingly. And if, in a three-node cluster, one server goes down, the number of partitions goes from three to two. All of that is done at runtime; that's why they're called dynamic partitions. It makes your job very easy, because it's all done in a seamless fashion.

Once partitioning happens, there's a distribution map, which is essentially a map of which buckets (and hence which data) belong to which partition, and it is propagated to all the clients. In this topology, every client talks to all the servers in the cluster, so it can go directly to where the data is. For example, if a client wants item number one, it knows from the distribution map that item number one exists in partition one, so it goes to partition one; if it wants item number four, it knows to go to partition two. The distribution map does not change when data is added; it only changes when you add or remove partitions. So, it's not a map of actual keys, it's a map of the hash-bucket distribution.
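To illustrate the idea, here is a conceptual model (not NCache's actual code) of a distribution map: keys hash into a fixed set of about a thousand buckets, and the map assigns buckets to partitions. Adding or removing a server only reassigns buckets; the key-to-bucket hashing never changes.

```csharp
// Conceptual distribution map: hash(key) -> bucket -> owning partition.
public static class DistributionMapModel
{
    public const int BucketCount = 1000;

    // Map[bucket] = index of the partition that owns the bucket.
    // With two partitions: buckets 0-499 -> partition 0, 500-999 -> partition 1.
    private static readonly int[] Map = BuildMap(partitions: 2);

    private static int[] BuildMap(int partitions)
    {
        var map = new int[BucketCount];
        for (int b = 0; b < BucketCount; b++)
            map[b] = b * partitions / BucketCount; // even split of buckets
        return map;
    }

    private static int BucketOf(string key) =>
        (key.GetHashCode() & int.MaxValue) % BucketCount;

    // A client uses this to route each operation directly to the owning server.
    public static int PartitionFor(string key) => Map[BucketOf(key)];
}
```

Rebalancing or adding a partition then amounts to rewriting the map and propagating it to the clients; no key ever hashes to a different bucket.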

There's also another feature called dynamic data balancing. If one partition starts to get more data than another, because maybe the number of keys or the type of keys you're using tend to congregate more on partition two than partition one, then after a certain threshold NCache realizes there's too much data on one partition and not enough on the other, and it performs what it calls data balancing: it moves some of the full buckets from partition two to partition one, and moves some of the empty buckets from partition one to partition two. All of that ensures that the entire cache cluster, with all its partitions, works in a very balanced manner.

Partition-Replica Cache

Everything I've talked about so far is true for Partition-Replica as well. The only difference is that every partition has a replica on a different server. The replica is always on a different server, there's only one replica per partition, and it is created dynamically, meaning at runtime; just like partitions are created at runtime, so are the replicas. The replicas are passive: no client talks to a replica, clients only talk to the partitions, and every partition talks to its own replica.

That architecture is intentional, because it allows the partition to do bulk updates. By default, the replication between the partition and the replica is async: the updates are done and the clients move on, while the partition accumulates the updates that were made, queues them up, and does one bulk update to the replica. Doing a bulk update makes it super fast, which again improves performance and scalability.

But in some situations your data may be very sensitive. When you replicate asynchronously in bulk, there is a chance that if partition one goes down, some of the updates may not have made it to replica one yet, and that much data will be lost. In most cases that's perfectly okay, because the data you're caching has a backup in the database and it's not that time-sensitive; I'm talking time-sensitive in terms of milliseconds. But if your data is very sensitive, say in a financial services application, you really want to make sure that every update to the cache is replicated in a synchronous fashion.

So, there's a synchronous replication feature. What that means is that when a client updates item number one, that update operation does not complete until the replica is also updated with item number one. That way, the partition and the replica are always in sync. Obviously, it's not as fast, because you can't do bulk updates and every operation has to wait until both places are updated, but it achieves the goal of much stricter data integrity.
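Here is a conceptual model (not NCache code) contrasting the two modes: in async replication the client returns as soon as the partition is updated and queued changes are flushed to the replica in bulk, while in sync replication the operation completes only after the replica is updated too.

```csharp
// Conceptual async-vs-sync replication between a partition and its replica.
using System.Collections.Concurrent;
using System.Collections.Generic;

class Partition
{
    private readonly ConcurrentQueue<(string Key, object Value)> _pending = new();
    private readonly Replica _replica = new();

    public void InsertAsyncReplicated(string key, object value)
    {
        ApplyLocally(key, value);
        _pending.Enqueue((key, value)); // client returns immediately
    }

    public void FlushToReplica() // background thread: one bulk update
    {
        var batch = new List<(string, object)>();
        while (_pending.TryDequeue(out var op)) batch.Add(op);
        _replica.ApplyBulk(batch);
    }

    public void InsertSyncReplicated(string key, object value)
    {
        ApplyLocally(key, value);
        _replica.ApplyBulk(new[] { (key, value) }); // wait for the replica
    }

    private void ApplyLocally(string key, object value) { /* ... */ }
}

class Replica
{
    public void ApplyBulk(IEnumerable<(string, object)> ops) { /* ... */ }
}
```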

Partition / Replica Cache (Dynamic Partitioning)

Finally, just like Partitioned Cache, which has dynamic data balancing, there's also dynamic data balancing for replicas: when a partition is data balanced, its replica is automatically data balanced too, because the replica has to be a copy of the partition. Let's now go a little deeper into what dynamic partitioning really means, using Partition-Replica as the example.

Let's say you had a two-server cluster, where every server had three items, partition one is replicated on the second server, and partition two on the first. You want to add a third server, and you do that at runtime. As soon as you do, dynamic partitioning happens and a third partition is created. There were a thousand buckets, 500 on each server; now each partition gets one third of them and, as a result, some of the data from partition one and some from partition two moves to partition three. That's the first thing that happens, again all done automatically at runtime, without stopping anything; your application continues to read and write as if nothing is happening. With the third partition added, the next thing that happens is that replica 2 is moved to the new server and replica 3 is created. All of that is done automatically at runtime. It doesn't take very long, although that obviously depends on how much data you have, and it's done asynchronously in the background while your application keeps updating the cache.

The same works in reverse: if you had a three-node cluster and one of the nodes went down for some reason, you had six items on three nodes and now all six have to fit onto two nodes. When the third partition goes down, replica 3 becomes active, so for a brief period of time you have partition 1, partition 2 and replica 3. Replica 3 then merges itself into partition 1 and partition 2; once it has merged completely, it goes away, and a new replica 2 is created. As a result, you have three items on each of the two remaining nodes. So, it can go both ways: you can add a server or you can drop a server, and everything is done dynamically.

Partition-Replica Cache (Maintenance Mode)

Now, there's a situation where you're bringing a node down for maintenance purposes. Let's say you're applying an operating system patch or something else, and you have a lot of data, tens of gigabytes on each node. In that case you really don't want the re-partitioning I just described, because this node is going to be back up in a few minutes, maybe 5 or 10 minutes, whatever the patch takes. So, you had a live three-node cluster and you brought a node down, but in maintenance mode. That means you're telling NCache: please don't re-partition, because I'm going to be back up; it's okay to have two partitions served from the same node, and it's okay not to have a replica for all of them, because this is only for a brief period of time. So, NCache does that: you bring the node down and apply the patches, replica 3 becomes partition 3, and all the reads and writes keep happening.

Now, for the time that this node is down, you obviously don't have as much high availability; if any one of the remaining nodes goes down, you will lose some data. But that's okay, because it's a scheduled maintenance, and you usually do it at a low-activity time. Since there's a lot of data, you don't want to unnecessarily repartition and then repartition again. Once you're done, you bring the node back up. When it comes back up, partition 3 copies itself back to this node, the promoted copy reverts to being replica 3, and replica 2 is recreated from its partition. That much is done automatically at runtime, but it's much less work than repartitioning the entire cache twice.

This feature was introduced based on a lot of customer requests: during their maintenance operations, they were seeing a lot of unnecessary repartitioning and copying of data that they didn't want. So, this is a very powerful feature.

Replicated Cache

The next caching topology is called Replicated Cache. Replicated Cache is a topology where you can have two or more servers, and every server has the entire copy of the cache. All nodes are active, which means every node has clients connected to it. In this topology, a client connects to only one server, not all of them, but it knows about all of them. So, if it's connected to server 1 and that server goes down, it knows there's a server 2 or a server 3, and it can connect to the next one as appropriate.

The reason a client connects to only one server is that the entire cache exists on that node, so any reads it wants to do are super fast; it just gets the data right there, and each client reads from its own server. So far so good, and it's super fast: the more servers you have, the more reads you can do. The updates, however, are not as fast, because they have to be done synchronously, since all the nodes are active. For example, if a client wants to update item number 3 on server 1, server 1 notifies all the other servers that item number 3 is being updated; they all update it as well, and only once all of them have updated item number 3 does the update operation complete. That's why it's synchronous: the client has to wait for item number 3 to be updated on all the servers. That's the first part.

Second, since it's a distributed environment, all the servers have to apply updates in the same sequence, even while multiple clients are updating multiple items. That's why there's a sequence-based algorithm: the cluster coordinator maintains a unique sequence at the cluster level that every node has to obtain for each update. The updating server sends the sequence number to all the other servers, and those servers apply the update to item number 3 only when its sequence number comes up. This ensures data integrity.
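Conceptually (this is a model, not NCache's implementation), the ordering works like this: the coordinator hands out a cluster-wide sequence number per update, and every node applies updates strictly in sequence order, so all copies converge to the same state.

```csharp
// Conceptual sequence-based ordering for the replicated topology.
using System.Collections.Generic;
using System.Threading;

class ClusterCoordinator
{
    private long _sequence;
    public long NextSequence() => Interlocked.Increment(ref _sequence);
}

class ReplicatedNode
{
    private long _lastApplied;
    private readonly SortedDictionary<long, (string Key, object Value)> _pending = new();

    // Updates may arrive out of order; hold each one until its
    // predecessor has been applied.
    public void OnUpdate(long sequence, string key, object value)
    {
        _pending[sequence] = (key, value);
        while (_pending.TryGetValue(_lastApplied + 1, out var op))
        {
            Apply(op.Key, op.Value);
            _pending.Remove(++_lastApplied);
        }
    }

    private void Apply(string key, object value) { /* ... update local copy ... */ }
}
```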

So, Replicated Cache is very good for read-intensive workloads, or for updates in a two-node environment; as you add more servers, the updates become slower.

Mirrored Cache

Finally, the last topology is called Mirrored Cache. It's a two-node active-passive topology: there's only one active node, which the clients connect to, and the entire cache sits on that node, but an entire copy of the cache also sits on the passive node. Nobody talks to the passive node as long as the active node is up. All the reads and writes are done on the active node, and the updates are asynchronously backed up, or mirrored, to the passive node. If the active node ever goes down, the passive node automatically becomes active, all the clients automatically move to it, and operation continues. When the original node comes back up, you can reverse the behavior again.

This topology is super fast for both reads and writes because, just like in the partition-replica topology, all of the updates are done on one node and mirrored asynchronously. But it has a scalability limitation: the cache size cannot grow beyond what one server holds. It is faster for updates than Replicated Cache; for reads, a two-node Replicated Cache has more throughput than a two-node Mirrored Cache, so you can choose whichever fits. Both of them are for lower-end environments; for higher-end environments you should use either Partitioned or Partition-Replica. Partition-Replica, by the way, is our most popular topology.

Client Cache

Another feature of NCache is called client cache. This is a cache on top of a cache, and it gives you InProc speed even though you're using a distributed cache.

Let me explain how. Let's say you have a cache cluster of any number of servers, with your database behind it. You've already gotten a lot of performance improvement just by using the cache cluster itself, but this is still a distributed environment; you're going across the network to fetch everything. So, if you have a lot of read-intensive operations, wouldn't it be nice if your frequently used data sat right next to your application? That's what the client cache is.

It is a local cache that sits on the same box as your application or web server, and it can even be InProc. InProc is super fast because it's like an object on your heap; it actually keeps things in a deserialized, object form on your heap, so you don't even pay the serialize/deserialize cost of fetching the object, which you have to pay for any OutProc or remote access to the cache. InProc speed is just so much faster. However, if you have multiple processes running on the application server and you don't want to create multiple copies of the client cache (maybe the cache size is too big), then you can create one local OutProc cache, and all those processes access that OutProc cache. That's still much faster than going across the network to the cache cluster. On top of that, it reduces pressure on the caching tier, which in turn reduces pressure on your database. So, it's a multi-tier approach.

The client cache, while local, is not a standalone cache. It is connected to the caching tier, the cache cluster, and whatever data exists in the client cache also exists in the cluster. If any other client updates that data, the cache cluster notifies the client cache so it can update itself with that change. All of that happens asynchronously, but it's pretty fast. There are two types of synchronization that the client cache supports. One is optimistic synchronization, which is the default. It assumes it's okay if, in a very rare situation, you read a stale copy of the data. Stale means the client cache has an older copy: the copy on the cluster changed but, by the time that update made it to the client cache, you had already read the old one. In most cases that's perfectly okay, and it gives you super fast performance.

In some cases, like the synchronous replication I mentioned for Partition-Replica, you may have very sensitive data where you absolutely cannot afford to read stale data. For that, NCache provides a pessimistic synchronization option. What that does is that, when you try to fetch anything (again, all of this happens behind the scenes when you do a cache.Get), even though the client cache has that item, it doesn't serve it from there right away. It first checks whether the version on the cluster is newer than the one in the client cache. If it is, it fetches the newer version, updates the client cache, and gives the application that data. If the version on the cache cluster is the same as the client cache's, it serves the copy from the client cache.

Now, there is of course a trip to the cache cluster involved, so it's not as fast as optimistic synchronization; there is a noticeable performance difference. But it's not as slow as fetching the entire data from the cache cluster, especially if your items are larger. If you have object sizes of 1 KB or more, it's a lot faster to just check the version of that object on the cache cluster than to fetch the entire object. So, it's still faster than not having a client cache.
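The difference between the two modes can be modeled like this (a conceptual sketch, not NCache code): optimistic reads serve straight from the local copy, while pessimistic reads first compare the item's version with the cluster and fetch only if it changed.

```csharp
// Conceptual optimistic vs. pessimistic client-cache reads.
using System.Collections.Generic;

interface IClusterClient // hypothetical remote interface to the cache cluster
{
    long GetVersion(string key);                   // cheap version check
    (long Version, object? Value) Get(string key); // full fetch over the network
}

class ClientCache
{
    private readonly Dictionary<string, (long Version, object? Value)> _local = new();
    private readonly IClusterClient _cluster;

    public ClientCache(IClusterClient cluster) => _cluster = cluster;

    public object? GetOptimistic(string key) =>
        _local.TryGetValue(key, out var entry) ? entry.Value : FetchAndCache(key);

    public object? GetPessimistic(string key)
    {
        long clusterVersion = _cluster.GetVersion(key);
        if (_local.TryGetValue(key, out var entry) && entry.Version == clusterVersion)
            return entry.Value;    // local copy is current: serve it
        return FetchAndCache(key); // stale or missing: fetch a fresh copy
    }

    private object? FetchAndCache(string key)
    {
        var (version, value) = _cluster.Get(key);
        if (value != null) _local[key] = (version, value);
        return value;
    }
}
```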

Finally, the really nice thing about the client cache is that there's no programming on your end. You just make the standard NCache API calls, cache.Get, cache.Insert, cache.Update, cache.Add, cache.Delete, and the client cache is automatically plugged in. The client cache is good for situations where you're doing a lot more reads than writes, because the writes, as you can see, have to be applied both to the client cache and to the cluster, so updates are actually slower than not having a client cache. But if you're doing 5:1 or 10:1 reads versus writes, then the client cache gives you super-fast performance. I'll give you an example.

If you're using NCache for session storage, sessions are not a read-intensive use case, because for every web request there's one read and one write of the session; the number of reads and writes is equal. In that case a client cache actually degrades performance, so don't use it there. But for application data and a lot of other situations, it's a very powerful thing.

WAN Replication

Another aspect of NCache is WAN replication. This is very useful if your application is deployed across multiple geographical locations. You may have two locations, one active and one passive, or two locations that are both active. In an active-passive situation, NCache provides a bridge topology where you create a bridge cluster. Whatever updates are done to the clustered cache on your active site are asynchronously submitted to the bridge (the bridge itself is an active-passive two-node cluster, for high availability), and the bridge then applies the same changes to the passive site. The assumption is that the passive site is not doing any updates; maybe it's doing some reads, but most of the time it isn't doing anything.

So, active-passive is pretty straightforward. It's one-way traffic, making sure that the active site's updates are applied to the passive site. If the active site ever goes down, the passive site already has the latest data, minus perhaps a few milliseconds' worth: maybe a few tens of milliseconds of data didn't make it over yet, but the rest of the data is already on the passive site.

In an active-active situation, both sites are active, which means data is being updated in both places, and both are connected to the bridge, which again is a two-node active-passive cluster. Both sites submit their updates to the bridge, and the bridge propagates each update, whether an add, update or delete of an item, to the other site. So, there's no issue except when there's a conflict. What is a conflict? It means both sites submit an update to the same item. In that case, the bridge recognizes the conflict and resolves it in one of two ways. One is "last update wins", where it uses the timestamps and whichever site provided the update last, wins. Or, you can provide a conflict resolution handler to the bridge: the bridge calls your handler and gives it both copies of the update, so you can do content analysis to see which content is more appropriate and then tell the bridge which one should be applied, in which case the bridge makes sure both sites end up with that content. That's how conflict resolution is handled in the bridge.
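A hedged sketch of a custom conflict-resolution handler is below. The interface name and signature are hypothetical (NCache's actual bridge conflict-resolver contract lives in its bridge provider API), but the idea is the same: receive both copies, inspect their content, and pick a winner.

```csharp
// Hypothetical conflict resolver: prefer the copy whose order status is
// further along; fall back to the newer timestamp (last-update-wins).
using System;

public record EntryCopy(string Key, object Value, DateTime UpdatedUtc, string SiteId);
public enum ConflictResolution { KeepFirst, KeepSecond }

public interface IBridgeConflictResolver // hypothetical contract
{
    ConflictResolution Resolve(EntryCopy first, EntryCopy second);
}

public class NewestOrderWinsResolver : IBridgeConflictResolver
{
    public ConflictResolution Resolve(EntryCopy first, EntryCopy second)
    {
        // Content analysis: an order that is further along wins.
        if (first.Value is Order a && second.Value is Order b && a.Status != b.Status)
            return a.Status > b.Status ? ConflictResolution.KeepFirst
                                       : ConflictResolution.KeepSecond;

        // Otherwise fall back to last-update-wins on the timestamps.
        return first.UpdatedUtc >= second.UpdatedUtc
            ? ConflictResolution.KeepFirst : ConflictResolution.KeepSecond;
    }
}

public class Order { public int Id; public OrderStatus Status; }
public enum OrderStatus { Placed, Paid, Shipped }
```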

WAN Replication of NCache (3+ Active-Active)

You can also have 3+ active-active sites. For example, you may have three different geographical locations, or more than three, and it works exactly the same way: they're all connected to the bridge, and they all submit their updates to the bridge. When there's no conflict, the update is applied automatically to all the other sites. When there is a conflict, it's resolved with last-update-wins, or with your conflict resolution handler that can analyze the content. The bridge topology is a very powerful way to have multiple geographical sites where your application is deployed but the cache stays synchronized.

The synchronization is of course subject to WAN latency, so there may be a delay of a few tens of milliseconds, but not minutes or hours. That's still pretty good: it makes sure your application is always up, and even if one site goes down, you can route the traffic to the other site and your users will not see any difference.

NCache Benchmarks

Let's now go into NCache benchmarks. We've published benchmarks where we've done 2 million operations per second on a five-node cluster. We did these benchmarks on the AWS cloud, but you could do the same in Azure, in any other cloud, or even in your own data center. As you can see, it's a linear graph: with a five-node cluster we did a little more than two million operations per second, and the throughput comes down proportionally with fewer nodes.

You can just come to our website, go to the 'Resources' menu and then the 'Benchmarks' section, where you can download the whitepaper that explains all the details of the benchmarks, or you can watch the video. The video shows us actually running the benchmark, so it's evidence that this actually happened and we're not just publishing a document. And when you try this in your own environment, you will see it for yourself.

So, this is basically the end of my presentation. I strongly urge you to go and download a fully working copy of NCache. It's a 60-day trial with all the features. You can download either the Windows or the Linux edition, or you can go to the cloud and use it in Azure, AWS, or any other cloud that you're in.

