Scalability in Caching Topologies
Scalability, in general terms, is a system's ability to increase or decrease performance and cost as an application's demand changes. An application is deemed scalable if it performs the same under a load of 10 users, 1,000 users, or 10,000 users. Databases are usually hard to scale and hence become a hurdle in an application's overall scalability. NCache, being a distributed caching solution, offloads the database and offers a variety of ways to make applications much more scalable in terms of both transaction load and storage capacity.
Depending on the application's requirements, you can choose from a variety of caching topologies that NCache offers. An application that has a limited amount of data to cache but requires high availability should pick the Replicated topology. This topology offers high availability because every node in the cluster holds the same copy of the data. It provides read scalability and can survive multiple node failures (n-1 node failures in a cluster of n nodes) without losing any data. If the application's cached data grows constantly, however, the Replicated topology is not the answer: it can only store as much data as fits on a single node, regardless of the number of nodes in the cluster.
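The trade-off of the Replicated topology can be sketched with a toy model. This is an illustrative sketch only, not the NCache API: the class and method names (`ReplicatedCache`, `put`, `get`, `fail_node`) are hypothetical. Every node holds the full data set, so reads can be load-balanced across nodes (read scalability), writes must touch every node (no write or storage scalability), and any single surviving node still has all the data (n-1 failure tolerance).

```python
class ReplicatedCache:
    """Toy model of a replicated topology (hypothetical names, not NCache's API).

    Every node holds a complete copy of the cached data.
    """

    def __init__(self, node_count):
        self.nodes = [dict() for _ in range(node_count)]
        self._i = 0  # round-robin cursor for spreading reads across nodes

    def put(self, key, value):
        # A write is applied to every node, so write cost grows with node count.
        for node in self.nodes:
            node[key] = value

    def get(self, key):
        # Reads rotate across nodes, so read throughput scales with node count.
        self._i = (self._i + 1) % len(self.nodes)
        return self.nodes[self._i].get(key)

    def fail_node(self, index):
        # Losing a node loses nothing: every other node has a full copy.
        del self.nodes[index]


cluster = ReplicatedCache(3)
cluster.put("customer:42", {"name": "Alice"})
cluster.fail_node(2)
cluster.fail_node(1)
# Even with two of three nodes gone, the data is still served.
print(cluster.get("customer:42"))
```

Note that total capacity never exceeds one node's memory, which is exactly the limitation described above.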
Applications that need to cache growing amounts of data, but can tolerate data loss, should use the Partitioned topology. This topology scales not only in terms of reads and writes but also in terms of storage. However, it does not provide high availability: every node failure loses that node's share of the data, so use it only when your application can bear such a loss.
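The Partitioned topology's behavior can be sketched the same way. Again, this is a minimal hypothetical model, not how NCache internally partitions data: each key is hashed to exactly one owning node, so storage capacity grows with the cluster, but a failed node's partition is simply gone.

```python
import hashlib


class PartitionedCache:
    """Toy model of a partitioned topology (hypothetical, not NCache's scheme).

    Each key lives on exactly one node; there are no backups.
    """

    def __init__(self, node_count):
        self.nodes = [dict() for _ in range(node_count)]

    def _owner(self, key):
        # A deterministic hash maps each key to its single owning partition.
        digest = hashlib.md5(key.encode()).digest()
        return int.from_bytes(digest[:4], "big") % len(self.nodes)

    def put(self, key, value):
        self.nodes[self._owner(key)][key] = value

    def get(self, key):
        return self.nodes[self._owner(key)].get(key)

    def fail_node(self, index):
        # No replica exists, so the failed partition's entries are lost.
        lost = len(self.nodes[index])
        self.nodes[index].clear()
        return lost


cluster = PartitionedCache(3)
for i in range(20):
    cluster.put(f"key{i}", i)
# Entries are spread across all three nodes, so capacity is 3x one node.
lost = cluster.fail_node(0)
print(f"entries lost with node 0: {lost}")
```

Adding a node in a real system also triggers rebalancing of existing keys; the sketch omits that for brevity.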
If the requirement is to accommodate growing data needs as well as high availability, the Partition-Replica topology offers both. Even though it is not as highly available as the Replicated topology, it can still survive one node failure without losing any data. There is one backup of each partition, so each node hosts one partition plus a backup of another node's partition. Unless two nodes fail simultaneously, which is unlikely in most situations, the topology takes care of both scalability and high availability needs.
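The one-backup-per-partition idea can be sketched by extending the partitioned model. This is again a hypothetical illustration, not NCache's implementation: here node i owns partition i and also holds the backup of the previous node's partition, so every write lands on two nodes and a single node failure can be served from the backup.

```python
import hashlib


class PartitionReplicaCache:
    """Toy model of a partition-replica topology (hypothetical, not NCache's).

    Node i holds primary partition i plus a backup of partition i-1,
    so each partition has exactly one replica on a different node.
    """

    def __init__(self, node_count):
        assert node_count >= 2, "a backup needs at least two nodes"
        self.n = node_count
        self.primary = [dict() for _ in range(node_count)]
        # backup[i] mirrors primary[(i - 1) % n]
        self.backup = [dict() for _ in range(node_count)]
        self.alive = [True] * node_count

    def _owner(self, key):
        digest = hashlib.md5(key.encode()).digest()
        return int.from_bytes(digest[:4], "big") % self.n

    def put(self, key, value):
        p = self._owner(key)
        self.primary[p][key] = value
        # Each write also goes to the partition's backup on the next node.
        self.backup[(p + 1) % self.n][key] = value

    def get(self, key):
        p = self._owner(key)
        if self.alive[p]:
            return self.primary[p].get(key)
        # Primary down: the backup on the next node still has the data.
        return self.backup[(p + 1) % self.n].get(key)

    def fail_node(self, index):
        self.alive[index] = False
        self.primary[index].clear()
        self.backup[index].clear()


cluster = PartitionReplicaCache(3)
for i in range(10):
    cluster.put(f"k{i}", i)
cluster.fail_node(1)
# All keys remain readable: partition 1 is served from its backup on node 2.
print(all(cluster.get(f"k{i}") == i for i in range(10)))
```

Two simultaneous failures of a primary and the node holding its backup would still lose data, which matches the single-node-failure guarantee described above.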