A distributed cache is essential for any application that must stay fast under extreme transaction loads. An in-memory distributed cache performs better than a database, and it can scale linearly with growing transaction loads because you can simply add more servers to the cache cluster, something a single database server cannot do.
Despite all these benefits, one problem remains. In most cases, a distributed cache is hosted on a set of dedicated cache servers across the network, so your application must make a network trip to fetch any data. That is not as fast as accessing data locally, especially from within the application process. This is where a client cache comes in handy.
In NCache, a client cache keeps a connection open to the distributed cache cluster and receives event notifications from the cluster whenever its data changes there. The cluster keeps track of which data items each client cache holds, so event notifications are sent only to the relevant client caches instead of being broadcast to all of them.
How Does Client Cache Work?
A client cache is nothing but a local cache on your web/application server, but one that is aware of, and stays connected to, the distributed cache. A client cache can run either in-process (inside your application's process) or out-of-process (as a separate process on the same machine). This lets a client cache deliver much faster read performance than even the distributed cache, while ensuring that its data is always kept synchronized with the distributed cache.
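The read path and the event-driven synchronization described above can be sketched in a few lines. This is a conceptual illustration, not NCache's actual API: `RemoteCache` stands in for the distributed cache cluster, and the subscription callback models the event notifications that invalidate stale local copies.

```python
# Conceptual sketch of a client cache: a local in-process dictionary sits
# in front of a remote distributed cache, and an invalidation callback
# keeps it synchronized. (Illustrative only; not NCache's API.)

class RemoteCache:
    """Stand-in for the distributed cache cluster."""
    def __init__(self):
        self._store = {}
        self._listeners = []

    def get(self, key):
        return self._store.get(key)

    def put(self, key, value):
        self._store[key] = value
        # Notify subscribed client caches that this key changed
        # (simplified here: notify every listener).
        for listener in self._listeners:
            listener(key)

    def subscribe(self, listener):
        self._listeners.append(listener)


class ClientCache:
    """Local cache that serves reads and invalidates on remote updates."""
    def __init__(self, remote):
        self._remote = remote
        self._local = {}
        remote.subscribe(self._on_remote_update)

    def _on_remote_update(self, key):
        # Drop the stale local copy; the next read refetches it.
        self._local.pop(key, None)

    def get(self, key):
        if key in self._local:            # fast local hit, no network trip
            return self._local[key]
        value = self._remote.get(key)     # miss: fetch and keep a local copy
        self._local[key] = value
        return value
```

The first read of a key goes to the remote cache and caches the result locally; subsequent reads are served locally until an update notification evicts the copy.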
The distributed cache notifies the client cache asynchronously, after it has successfully updated the data in the cluster. This means there is a small window of time (in milliseconds) during which some of the data in the client cache is older than the data in the distributed cache. In most cases this is perfectly acceptable, but some applications demand 100% data accuracy.
To handle such situations, NCache also provides a pessimistic synchronization model for the client cache. In this model, every time the application fetches an item from the client cache, the client cache first checks whether the distributed cache has a newer version of that item; if it does, the client cache fetches the newer version from the distributed cache. This extra trip to the distributed cache has a cost, but a version check is still much cheaper than fetching the entire item from the distributed cache every time.
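The pessimistic model can be sketched with a per-item version number. Again, this is an illustration of the technique rather than NCache's API: the lightweight `get_version` call models the cheap version check, while `get` models the expensive transfer of the full item.

```python
# Sketch of pessimistic synchronization: before serving a locally cached
# item, the client cache compares versions with the distributed cache and
# refetches only when its local copy is stale. (Illustrative, not NCache's API.)

class VersionedRemoteCache:
    """Stand-in for the distributed cache; tracks a version per item."""
    def __init__(self):
        self._store = {}  # key -> (version, value)

    def put(self, key, value):
        version = self._store.get(key, (0, None))[0] + 1
        self._store[key] = (version, value)

    def get_version(self, key):
        # Cheap call: only a version number crosses the network.
        return self._store.get(key, (0, None))[0]

    def get(self, key):
        # Expensive call: the full item crosses the network.
        return self._store.get(key, (0, None))


class PessimisticClientCache:
    def __init__(self, remote):
        self._remote = remote
        self._local = {}  # key -> (version, value)

    def get(self, key):
        remote_version = self._remote.get_version(key)
        cached = self._local.get(key)
        if cached is not None and cached[0] == remote_version:
            return cached[1]                       # local copy is current
        version, value = self._remote.get(key)     # stale or missing: refetch
        self._local[key] = (version, value)
        return value
```

Every read pays for one small version-check round trip, but the full item is transferred only when it actually changed, which is what makes this faster than always reading from the distributed cache.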
When to Use a Client Cache?
After reading this blog, the main question that comes to mind is when to use a client cache and when not to. The answer is straightforward: if your application performs far more reads than writes, use a client cache, especially if it reads the same items over and over again.
If your application performs many updates (or at least as many as reads), don't use a client cache, because updates are slower with one: every write now has to update two caches, the client cache and the distributed cache.
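The read/write trade-off can be made concrete with a back-of-the-envelope cost model. This is my own simplification (the cost constants are made up, not NCache benchmarks): read hits stay local and are nearly free, but every write pays for the network trip plus the local update.

```python
# Rough cost model for the client-cache trade-off (illustrative constants,
# not measured NCache numbers).

def cost_without_client_cache(reads, writes, network_cost=1.0):
    # Every operation is one network trip to the distributed cache.
    return (reads + writes) * network_cost

def cost_with_client_cache(reads, writes, hit_ratio,
                           network_cost=1.0, local_cost=0.01):
    # Read hits are served locally; misses go over the network.
    read_cost = reads * (hit_ratio * local_cost
                         + (1 - hit_ratio) * network_cost)
    # Every write updates the cluster AND the local copy.
    write_cost = writes * (network_cost + local_cost)
    return read_cost + write_cost
```

With a read-heavy workload and a high hit ratio the client cache wins easily; with a write-heavy workload it costs slightly more than the distributed cache alone, which is exactly the rule of thumb above.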
NCache makes it easy to combine a client cache with a distributed cache. Download a fully working 60-day trial of NCache Enterprise and try it out for yourself.