Scalability in Caching Topologies
Scalability, in general terms, is a system's ability to increase or decrease performance and cost as an application's demand changes. An application is considered scalable if it performs the same under a load of 10 users, 1,000 users, or 10,000 users. Databases are usually hard to scale and hence become a hurdle in an application's overall scalability. NCache, being a distributed caching solution, offloads the database and offers a variety of ways to make applications much more scalable in terms of both transaction load and storage capacity.
Depending on your application's requirements, you can choose from the variety of caching topologies NCache offers. An application that has a limited amount of data to cache but requires high availability should pick the Replicated topology. This topology offers high availability because every node in the cluster holds the same copy of the data. It provides read scalability and can survive multiple node failures (n-1 node failures in a cluster of n nodes) without losing any data. If the application has ever-growing data to cache, however, the Replicated topology is not the answer: it can only store as much data as fits on a single node, regardless of how many nodes the cluster has.
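The trade-off in the Replicated topology can be sketched with a minimal simulation. This is a conceptual illustration only, not NCache's API: the `ReplicatedCluster` class and its per-node dictionaries are hypothetical stand-ins for cluster nodes.

```python
# Hypothetical sketch of a Replicated topology: every write is applied to
# all nodes, so any single node can serve any read.
class ReplicatedCluster:
    def __init__(self, node_count):
        # Every node keeps a full copy of the cache, so total capacity
        # is limited to what one node can hold.
        self.nodes = [dict() for _ in range(node_count)]

    def put(self, key, value):
        # Writes go to every node -- write cost grows with cluster size.
        for node in self.nodes:
            node[key] = value

    def get(self, key, node_index):
        # Any node answers reads independently -- this is why reads scale.
        return self.nodes[node_index].get(key)


cluster = ReplicatedCluster(node_count=3)
cluster.put("user:42", {"name": "Alice"})

# Every node returns the same value; losing n-1 nodes loses no data.
assert all(cluster.get("user:42", i) == {"name": "Alice"} for i in range(3))
```

The `assert` at the end makes the availability property concrete: since each node is a complete copy, any surviving node can serve the entire data set.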
Applications with growing data needs that can afford data loss can choose the Partitioned topology. This topology is scalable not only in terms of reads and writes but also in terms of storage, since the data is split across the cluster's nodes. However, it does not provide high availability: every node failure loses that node's partition of the data, so it should only be used when the application can bear such a loss.
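Partitioning can be sketched the same way. Again this is an illustrative simulation, not NCache's implementation: the modulo-hash `_owner` function is a simplified stand-in for whatever distribution map the cluster actually uses.

```python
# Hypothetical sketch of a Partitioned topology: each key lives on exactly
# one node, so storage capacity grows as nodes are added -- but there is
# no backup copy.
class PartitionedCluster:
    def __init__(self, node_count):
        self.nodes = [dict() for _ in range(node_count)]

    def _owner(self, key):
        # Simplified hash-based distribution: one owner node per key.
        return hash(key) % len(self.nodes)

    def put(self, key, value):
        self.nodes[self._owner(key)][key] = value

    def get(self, key):
        return self.nodes[self._owner(key)].get(key)

    def fail_node(self, index):
        # Without replicas, a node failure loses that partition outright.
        self.nodes[index].clear()


cluster = PartitionedCluster(node_count=3)
for i in range(9):
    cluster.put(f"key{i}", i)
assert all(cluster.get(f"key{i}") == i for i in range(9))

cluster.fail_node(0)
# Keys owned by node 0 are now gone; keys on other nodes survive.
```

The `fail_node` call shows the cost of this topology: capacity and throughput scale out, but each failure silently drops one partition's worth of cached data.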
If the requirement is to accommodate growing data needs as well as high availability, the Partition-Replica topology offers both. Although it is not as highly available as the Replicated topology, it can still survive one node failure without losing any data. Each partition has exactly one backup, so each node hosts one partition plus the backup of another node's partition. Unless multiple nodes fail simultaneously, which is unlikely in most situations, the topology takes care of both scalability and high availability needs.
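The partition-plus-backup arrangement can be sketched as below. This is a conceptual simulation under an assumed ring placement (each node backs up its neighbor's partition), not NCache's actual replica placement or API.

```python
# Hypothetical sketch of a Partition-Replica topology: each node holds its
# own partition plus a backup of a neighboring node's partition, so a
# single node failure loses no data.
class PartitionReplicaCluster:
    def __init__(self, node_count):
        self.partitions = [dict() for _ in range(node_count)]
        self.backups = [dict() for _ in range(node_count)]

    def _owner(self, key):
        return hash(key) % len(self.partitions)

    def put(self, key, value):
        owner = self._owner(key)
        # Assumed ring placement: the backup lives on the next node.
        backup = (owner + 1) % len(self.partitions)
        self.partitions[owner][key] = value
        self.backups[backup][key] = value

    def get(self, key):
        owner = self._owner(key)
        value = self.partitions[owner].get(key)
        if value is None:
            # Owner is down: serve the key from its backup copy.
            backup = (owner + 1) % len(self.partitions)
            value = self.backups[backup].get(key)
        return value

    def fail_node(self, index):
        # One failure loses neither the partition (its backup survives on
        # the next node) nor the backup it held (the primary still exists).
        self.partitions[index].clear()
        self.backups[index].clear()


cluster = PartitionReplicaCluster(node_count=3)
for i in range(9):
    cluster.put(f"key{i}", i)

cluster.fail_node(1)
# Every key is still readable after a single node failure.
assert all(cluster.get(f"key{i}") == i for i in range(9))
```

The final `assert` captures the guarantee described above: any one node can fail without data loss, because every key exists in two places, while storage still scales out as in the Partitioned topology.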