Many application developers don't cache any data, or at most cache some read-only static data. Some developers go a step further and cache data in "Session State" between user requests. However, all of these approaches miss the fundamental benefit of caching, which is to cache and share data among all the users of the application. And this type of caching must be intelligent enough to avoid data integrity problems in the cache caused by stale data.
If you're serious about improving your application's performance through intelligent caching, you face the obvious question of buy versus build. There are a number of disadvantages to building your own caching solution.
First and foremost, you're taking important development resources away from your business objective of developing your application and putting them into developing an infrastructure that you could easily buy. Second, developing a correct and intelligent caching solution requires a lot of effort, and your team may not have the right expertise to design and develop a high-performance cache that can handle complex concurrency and synchronization scenarios. And finally, even if you had the expertise, you would be putting a lot of effort into something that takes you away from your business focus.
Even if you decide to purchase a commercial caching solution, you must ensure that it provides all the caching and clustering features your application will require. Even if you're starting out with a single-server configuration today, you'll want a caching solution that can grow with your application into a distributed and clustered environment.
The most obvious benefit of caching is a dramatic improvement in your application's performance. Caching is the process of storing frequently used data (both read-only and transactional) close to the application. Typically this data is stored in memory (as objects), since retrieving data from memory is much more efficient than retrieving it from other locations, such as a database.
Most applications perform many more read operations than write operations (often a 70:30 or 80:20 ratio). Caching data in the application-server tier allows the application to avoid database trips for read operations, which dramatically reduces the load on the database server.
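To make this concrete, here is a minimal cache-aside sketch. It is not NCache's actual API: the cache map, the key format, and loadCustomerFromDb() are hypothetical stand-ins. It simply shows how a read first checks the in-memory cache and only falls back to the database on a miss.

```java
import java.util.concurrent.ConcurrentHashMap;

// Minimal cache-aside sketch; the cache map and loadCustomerFromDb()
// are hypothetical stand-ins, not NCache's actual API.
public class CacheAsideExample {
    // In-memory store shared by all requests on this application server.
    static final ConcurrentHashMap<String, Object> cache = new ConcurrentHashMap<>();

    static Object getCustomer(String customerId) {
        String key = "customer:" + customerId;
        Object customer = cache.get(key);              // 1. try the cache first
        if (customer == null) {
            customer = loadCustomerFromDb(customerId); // 2. miss: go to the database
            cache.put(key, customer);                  // 3. store for subsequent reads
        }
        return customer;                               // 4. hit: no database trip
    }

    static Object loadCustomerFromDb(String customerId) {
        // Placeholder for a real database query.
        return "Customer#" + customerId;
    }

    public static void main(String[] args) {
        System.out.println(getCustomer("ALFKI")); // first call hits the database
        System.out.println(getCustomer("ALFKI")); // second call is served from memory
    }
}
```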
This has two obvious benefits. First, the database server can now perform write operations much faster. Second, it can handle a much larger number of clients without requiring expensive hardware upgrades. The database server is usually the most expensive piece of hardware in an N-tier deployment, so the savings from avoiding such upgrades are usually very high.
Traditionally, web applications and web services either don't cache any data or use primitive mechanisms like "Session State" to cache read-only data. There are two problems with this approach. First, most of the data a mission-critical application uses is not read-only but transactional, so caching only read-only data does not go far enough in improving application performance. Second, even this read-only data is not shared among users, who in a real-life application run into the thousands or tens of thousands. As a result, most of these applications end up going to the database server for most of the data they need, hence the performance problems.
On the other hand, NCache lets applications keep both static and transactional data in the cache, and this cached data is available to all users in the server cluster. NCache then provides a number of mechanisms to ensure that cached data does not become stale and is always updated whenever the application updates the data in the database. NCache also allows you to handle situations where data must be updated from outside the application. This is achieved through the concept of "dependencies".
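As a rough illustration of keeping a shared cache fresh, the sketch below assumes a simple write-through-plus-expiration scheme and again uses hypothetical class and method names rather than NCache's API: every database write also refreshes the shared cache entry, and an expiration acts as a safety net against staleness.

```java
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: keeping a shared cache consistent with the database
// by refreshing the cache on every write and attaching an expiration as a safety net.
public class FreshCacheExample {
    static class Entry {
        final Object value;
        final long expiresAtMillis;
        Entry(Object value, long ttlMillis) {
            this.value = value;
            this.expiresAtMillis = System.currentTimeMillis() + ttlMillis;
        }
        boolean expired() { return System.currentTimeMillis() > expiresAtMillis; }
    }

    static final ConcurrentHashMap<String, Entry> cache = new ConcurrentHashMap<>();

    static void updateCustomer(String id, Object customer) {
        saveCustomerToDb(id, customer);                            // write to the database...
        cache.put("customer:" + id, new Entry(customer, 60_000));  // ...and refresh the shared cache
    }

    static Object getCustomer(String id) {
        Entry e = cache.get("customer:" + id);
        if (e == null || e.expired()) {
            Object customer = loadCustomerFromDb(id);
            cache.put("customer:" + id, new Entry(customer, 60_000));
            return customer;
        }
        return e.value;
    }

    static void saveCustomerToDb(String id, Object c) { /* placeholder for a real UPDATE */ }
    static Object loadCustomerFromDb(String id) { return "Customer#" + id; }

    public static void main(String[] args) {
        updateCustomer("ALFKI", "Customer#ALFKI(v2)");
        System.out.println(getCustomer("ALFKI")); // every user now sees the updated value
    }
}
```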
Most real-life applications deal with complex data that is not only transactional but also contains multi-layered relationships. This means that if you cache any data, you must also handle its relationships. The cache must know about these relationships so it can manage them during load, insert, update, or delete operations.
NCache manages relationships among objects so that changes to one object can trigger changes or invalidations in all the related objects. Similarly, even if your application loads one object first, puts it in the cache, and later loads its related objects, it can tell NCache about these relationships and NCache will manage them.
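The sketch below conveys the idea of relationship tracking in a cache: each entry can record which parent keys it depends on, and removing a parent cascades to the dependent entries. The key names and helper methods are made up for illustration; this is not how NCache implements it internally.

```java
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;

// Conceptual sketch (not NCache's API): tracking relationships between cached
// objects so that removing a parent also invalidates its dependent entries.
public class RelatedObjectsExample {
    static final ConcurrentHashMap<String, Object> cache = new ConcurrentHashMap<>();
    // parent key -> keys of objects that depend on it
    static final ConcurrentHashMap<String, Set<String>> dependents = new ConcurrentHashMap<>();

    static void put(String key, Object value, String... dependsOnKeys) {
        cache.put(key, value);
        for (String parent : dependsOnKeys) {
            dependents.computeIfAbsent(parent, k -> ConcurrentHashMap.newKeySet()).add(key);
        }
    }

    static void remove(String key) {
        cache.remove(key);
        Set<String> children = dependents.remove(key);
        if (children != null) {
            for (String child : children) {
                remove(child); // cascade the invalidation to related objects
            }
        }
    }

    public static void main(String[] args) {
        put("customer:ALFKI", "Customer#ALFKI");
        put("order:1001", "Order#1001", "customer:ALFKI");   // order depends on its customer
        remove("customer:ALFKI");                            // removing the customer...
        System.out.println(cache.containsKey("order:1001")); // ...also evicts the order: false
    }
}
```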
You can manage relationships not only between cached objects, but also between cached objects and outside resources. NCache provides key-based and file-based dependencies for this purpose, and these dependencies can be invoked remotely by using .NET Remoting. This helps keep your cache fresh at all times.
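Here is a minimal sketch of what a file-based dependency means conceptually: a cached entry remembers the last-modified time of the file it depends on and is treated as stale once that file changes. The file name and classes are hypothetical; NCache's actual dependency mechanism is configured through its own API.

```java
import java.io.File;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of a file-based dependency: a cached entry is considered
// stale once the file it depends on has been modified. Not NCache's actual API.
public class FileDependencyExample {
    static class Entry {
        final Object value;
        final File dependencyFile;
        final long lastModifiedAtInsert;
        Entry(Object value, File f) {
            this.value = value;
            this.dependencyFile = f;
            this.lastModifiedAtInsert = f.lastModified();
        }
        boolean stale() { return dependencyFile.lastModified() != lastModifiedAtInsert; }
    }

    static final ConcurrentHashMap<String, Entry> cache = new ConcurrentHashMap<>();

    static Object get(String key) {
        Entry e = cache.get(key);
        if (e == null || e.stale()) {   // file changed outside the application
            cache.remove(key);          // invalidate so the next read reloads fresh data
            return null;
        }
        return e.value;
    }

    public static void main(String[] args) {
        File priceList = new File("prices.xml"); // hypothetical external file
        cache.put("prices", new Entry("cached price list", priceList));
        System.out.println(get("prices"));       // valid until prices.xml changes
    }
}
```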
Many web applications and web services run in server-cluster or server-farm configurations in order to handle a large number of users. In these environments, if your cache is not clustered, any updates to it from one server won't be available to the other servers. As a result, your cached data becomes inconsistent and stale, leading to data integrity problems.
NCache is a powerful clustered cache that synchronizes all data changes throughout the cluster. It provides a rich set of clustering topologies to help you meet your specific requirements. You can choose from Mirrored Cache, Replicated Cache, Partitioned Cache, and Partitioned-Replica Cache topologies in your cluster. These are discussed in more detail in NCache Clustering Topologies.
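For intuition on a partitioned topology, the sketch below simply hashes each key to one of the cache servers so that the data set is spread across the cluster. The node names are invented, and NCache's real partitioning and replication logic is considerably more sophisticated than this.

```java
// Rough intuition for a partitioned topology: each key is hashed to exactly one
// cache server in the cluster. Node names are made up; NCache's real partitioning
// and replication logic is far more involved than this.
public class PartitionedCacheSketch {
    static final String[] nodes = { "cache-server-1", "cache-server-2", "cache-server-3" };

    static String nodeFor(String key) {
        int bucket = Math.floorMod(key.hashCode(), nodes.length);
        return nodes[bucket];
    }

    public static void main(String[] args) {
        for (String key : new String[] { "customer:ALFKI", "order:1001", "product:42" }) {
            System.out.println(key + " -> " + nodeFor(key));
        }
    }
}
```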
NCache ensures that concurrent updates to the cache are handled in a serialized manner so as to prevent data integrity problems. Additionally, it ensures that all changes to the cache are immediately available to all the nodes in the cluster. This allows your application to treat NCache as one logical cache throughout the cluster.
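To illustrate what serialized concurrent updates buy you, the following self-contained sketch applies concurrent increments to the same cached entry atomically, so no update is lost. It mimics the guarantee described above rather than showing NCache code.

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch of serialized updates: concurrent increments on the same cached entry
// are applied atomically, so no update is lost. This mimics the behavior the
// clustered cache provides; it is not NCache code.
public class SerializedUpdatesExample {
    static final ConcurrentHashMap<String, Integer> cache = new ConcurrentHashMap<>();

    public static void main(String[] args) throws InterruptedException {
        cache.put("page:hits", 0);
        Runnable increment = () -> {
            for (int i = 0; i < 10_000; i++) {
                // compute() serializes concurrent updates to the same key
                cache.compute("page:hits", (k, v) -> v + 1);
            }
        };
        Thread t1 = new Thread(increment);
        Thread t2 = new Thread(increment);
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println(cache.get("page:hits")); // always 20000, never a lost update
    }
}
```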