NCache 4.4 - Online Documentation

Partitioned-Replica Cache

 
Multiple Server Nodes: This topology has two or more servers. When a cache item is added to the cache, it is saved only on the server node that owns it, which means that every server node holds a unique set of data.
 
Data Distribution: Data is distributed/partitioned among all server nodes on the basis of the hash code of the cache key. A hash-based distribution map is generated and then distributed to every server node.
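 
To make the idea concrete, the following minimal sketch (written in Java purely for illustration, not NCache's internal code) shows how a hash-based distribution map can route keys to nodes; the bucket count, node names, and assignment rule are all assumptions.
 
    import java.util.HashMap;
    import java.util.Map;

    // Minimal sketch of hash-based key distribution; bucket count and node
    // names are illustrative assumptions, not NCache internals.
    public class DistributionMapSketch {

        static final int BUCKET_COUNT = 1000;
        static final Map<Integer, String> distributionMap = new HashMap<>();

        // Assign buckets to nodes evenly; the real map is built by the cluster
        // and shipped to every server node.
        static void buildMap(String... nodes) {
            for (int bucket = 0; bucket < BUCKET_COUNT; bucket++) {
                distributionMap.put(bucket, nodes[bucket % nodes.length]);
            }
        }

        // The owning node is derived from the hash code of the cache key.
        static String ownerOf(String key) {
            int bucket = (key.hashCode() & 0x7fffffff) % BUCKET_COUNT;
            return distributionMap.get(bucket);
        }

        public static void main(String[] args) {
            buildMap("Server1", "Server2", "Server3");
            System.out.println("Customer:1001 -> " + ownerOf("Customer:1001"));
            System.out.println("Order:42      -> " + ownerOf("Order:42"));
        }
    }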
 
Fault Tolerance: This topology provides fault tolerance through one replica per node. Each server node in the cluster hosts one active partition and keeps its passive/mirror replica on another node. Thus, when a node goes down, its data can be restored from the replica.
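 
As a rough illustration only, the sketch below assumes each node's passive replica is hosted on the next node in cluster order and shows how that replica node would be located if the active node fails; the placement rule and node names are assumptions made for the example, not NCache configuration.
 
    import java.util.Arrays;
    import java.util.List;

    // Sketch: replica placement and failover, assuming each node's passive
    // replica lives on the next node in cluster order (illustrative only).
    public class ReplicaPlacementSketch {

        static final List<String> nodes = Arrays.asList("Server1", "Server2", "Server3");

        // In this sketch, the replica of node i is hosted on node (i + 1) % n.
        static String replicaOf(String node) {
            int i = nodes.indexOf(node);
            return nodes.get((i + 1) % nodes.size());
        }

        public static void main(String[] args) {
            String active = "Server2";
            System.out.println("Active partition : " + active);
            System.out.println("Passive replica  : " + replicaOf(active));
            // If Server2 goes down, its data is recovered from the replica node.
        }
    }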
 
Double Memory Consumption: Each server node consumes twice the configured memory because it contains one active partition and one replica of another node's partition; both the active partition and the replica are allotted the size configured for the cache. For example, if the cache size is configured as 2 GB per node, each node needs roughly 4 GB of memory.
 
Client Connections: Cache clients connect to all server nodes in this topology. The data distribution map is also shared with cache clients, so they are aware of the actual data distribution. Using this map and the cache key, each operation is sent directly to the node that holds the data.
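 
Reusing the same bucket-to-node idea from the earlier sketch, the hypothetical snippet below illustrates why an operation is a single hop: the client consults its own copy of the distribution map and dispatches the request straight to the owning node. The sendTo helper and all names are made up for illustration and do not represent the NCache client API.
 
    import java.util.HashMap;
    import java.util.Map;

    // Sketch of client-side routing: the client holds a copy of the
    // distribution map and sends each operation straight to the owner node,
    // so reads and writes stay one network hop. All names are hypothetical.
    public class ClientRoutingSketch {

        static final int BUCKET_COUNT = 1000;
        static final Map<Integer, String> distributionMap = new HashMap<>();

        static String ownerOf(String key) {
            int bucket = (key.hashCode() & 0x7fffffff) % BUCKET_COUNT;
            return distributionMap.get(bucket);
        }

        // Hypothetical transport call; a real client maintains a connection per node.
        static void sendTo(String node, String operation, String key) {
            System.out.println(operation + " '" + key + "' -> sent directly to " + node);
        }

        public static void main(String[] args) {
            String[] nodes = { "Server1", "Server2", "Server3" };
            for (int b = 0; b < BUCKET_COUNT; b++) {
                distributionMap.put(b, nodes[b % nodes.length]);
            }
            sendTo(ownerOf("Customer:1001"), "ADD", "Customer:1001");
            sendTo(ownerOf("Customer:1001"), "GET", "Customer:1001");
        }
    }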
 
Runtime Scalability: Server nodes can be added or removed at runtime whenever needed.
 
Storage Scalability: When more nodes are added to a partitioned-replica cache, storage capacity increases because each partition in the cluster holds a unique set of data.
 
High Performance: Scaling the cache does not degrade the performance of the clustered cache for either reference or transactional data, because every read and write is a one-hop operation in this topology. Adding new nodes to the cache cluster distributes the load of the existing members, resulting in higher throughput.
 
State Transfer: When a node joins the cache cluster in the partitioned-replica topology, existing nodes sacrifice a portion of their data, according to the hash-based distribution, for the newly joined node. The newly joined node gets its share of data transferred from the existing nodes, which results in an even distribution of the cached data among all cluster nodes. Similarly, when a node leaves the cache cluster, the data from its backup node is distributed among the existing partitions. State transfer also occurs between active and passive (replica) nodes. State transfer runs in the background, so client cache operations do not suffer while it is in progress.
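 
The following sketch illustrates the rebalancing idea under simple assumptions (an invented bucket count and re-assignment rule): when a node joins, existing nodes hand over a share of their buckets, and only the data in those buckets needs to move.
 
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Sketch of state transfer: when a node joins, existing nodes give up a
    // share of their buckets to the newcomer, and only those buckets are moved.
    // Bucket count and the re-assignment rule are illustrative assumptions.
    public class StateTransferSketch {

        static final int BUCKET_COUNT = 1000;

        static Map<Integer, String> initialMap(List<String> nodes) {
            Map<Integer, String> map = new HashMap<>();
            for (int b = 0; b < BUCKET_COUNT; b++) {
                map.put(b, nodes.get(b % nodes.size()));
            }
            return map;
        }

        public static void main(String[] args) {
            Map<Integer, String> map = initialMap(List.of("Server1", "Server2"));

            // Server3 joins: hand it every third bucket so all nodes end up even.
            int moved = 0;
            for (int b = 0; b < BUCKET_COUNT; b += 3) {
                map.put(b, "Server3");   // this bucket's data is transferred in the background
                moved++;
            }
            System.out.println("Buckets transferred to Server3: " + moved + " of " + BUCKET_COUNT);
        }
    }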
 
Replication Strategy/Modes: The following data replication modes are provided in this topology.
 
  • Synchronous Replication
The cache client waits for replication to the backup partition as part of the cache operation. With synchronous replication it is guaranteed that if the client request completes successfully, the data has already been stored on the backup node.
 
  • Asynchronous Replication
The client request returns as soon as the operation is performed on the active partition node; the operation is then replicated asynchronously to the replica node. If a node goes down ungracefully, operations still waiting in the replicator queue can be lost. A sketch contrasting both modes follows this list.
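 
The sketch below contrasts the two modes under very simplified assumptions: the "replica" is just an in-process map, and a background thread stands in for the replicator queue. It is meant only to show where the waiting happens and why queued asynchronous operations can be lost, not how NCache implements replication.
 
    import java.util.Map;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.LinkedBlockingQueue;

    // Sketch contrasting the two replication modes; the store/queue layout is a
    // simplification for illustration, not NCache's internal design.
    public class ReplicationModesSketch {

        static final Map<String, String> activePartition = new ConcurrentHashMap<>();
        static final Map<String, String> replicaPartition = new ConcurrentHashMap<>();
        static final BlockingQueue<String[]> replicatorQueue = new LinkedBlockingQueue<>();

        // Synchronous: the client call returns only after the replica is updated.
        static void putSync(String key, String value) {
            activePartition.put(key, value);
            replicaPartition.put(key, value);   // stands in for a blocking call to the replica node
        }

        // Asynchronous: the call returns once the active partition is updated;
        // the operation is queued and replicated later by a background thread.
        // Operations still sitting in the queue are lost if the node dies abruptly.
        static void putAsync(String key, String value) {
            activePartition.put(key, value);
            replicatorQueue.offer(new String[] { key, value });
        }

        public static void main(String[] args) throws InterruptedException {
            // Background replicator draining the queue to the replica.
            Thread replicator = new Thread(() -> {
                try {
                    while (true) {
                        String[] op = replicatorQueue.take();
                        replicaPartition.put(op[0], op[1]);
                    }
                } catch (InterruptedException ignored) { }
            });
            replicator.setDaemon(true);
            replicator.start();

            putSync("Customer:1001", "Alice");
            putAsync("Order:42", "Pending");
            Thread.sleep(100);                  // give the replicator time to catch up
            System.out.println("Replica now holds: " + replicaPartition);
        }
    }
 
In practice the choice is a latency versus durability trade-off: synchronous replication costs an extra hop on every write, while asynchronous replication keeps writes fast but risks losing the last few queued operations if a node fails ungracefully.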
 
 
 
See Also