A common nightmare for developers and software architects is a sole web server or data source crashing, leaving thousands of connected clients and applications stranded and precious data lost. Introducing a distributed, load-balanced caching layer such as NCache makes your application tier highly scalable and highly available: you can add more servers as your transaction load increases, and because data and clients are distributed across the cluster, there is no single point of failure.
NCache is an in-memory, distributed data store that provides optimal performance for your applications. The NCache cluster is dynamic and self-healing: when a server node is added or removed, or clients join or leave, the remaining nodes automatically rebalance the load among themselves without user intervention.
This blog gives you a quick tour of how NCache offers scalability and performance while maintaining 100% uptime. To understand the NCache architecture in detail, check out this video:
Maintaining High Availability in NCache Cluster
NCache’s distributed and replicated architecture ensures 100% uptime even if a node goes down unexpectedly. This is made possible because of the peer-to-peer architecture of NCache and the runtime discovery of clusters and clients without any user intervention. Moreover, NCache provides intelligent failover support so the cluster remains available for all connected clients at all times.
NCache provides dynamic cache clustering with a peer-to-peer architecture where there is no single point of failure. A cache cluster consists of interconnected servers, one of which acts as coordinator (the senior-most server node) and manages cluster membership. If the coordinator ever goes down, this role passes to the next senior-most server in the cluster. This removes any single point of failure in cluster membership management.
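The seniority-based handover described above can be sketched as follows. This is a minimal, hypothetical model of the behavior (node names, the `Cluster` class, and its methods are illustrative assumptions, not NCache internals):

```python
# Hypothetical sketch: seniority-based coordinator selection, modeling the
# behavior described above. Not NCache's actual implementation.

class Cluster:
    def __init__(self):
        self.members = []  # ordered by join time: index 0 is the senior-most node

    def join(self, node):
        self.members.append(node)

    def coordinator(self):
        # The senior-most surviving member manages cluster membership.
        return self.members[0] if self.members else None

    def node_down(self, node):
        # If the coordinator leaves, the next senior-most server is
        # automatically first in line -- no election round is needed.
        self.members.remove(node)

cluster = Cluster()
for n in ("server-A", "server-B", "server-C"):
    cluster.join(n)

print(cluster.coordinator())   # server-A (senior-most)
cluster.node_down("server-A")
print(cluster.coordinator())   # server-B takes over the coordinator role
```

Because seniority is simply join order, the handover is deterministic and requires no negotiation among the surviving nodes.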
Runtime Discovery Within Cluster and Clients
When a server starts, it must know of at least one other server in the cluster. The server is configured with a list of cache servers and tries to connect to any one of them. Once connected, it asks that server for the cluster coordinator and requests the coordinator to add it to the cluster's membership list.
The coordinator adds this new server to the cluster at runtime and informs the other connected servers that a new server has joined the cluster. It also informs the new server about all the members of the cluster. The new server then establishes a TCP connection with all the servers in the cluster.
Once the client connects to a cache server, it receives the following information from that server at runtime:
- Cluster membership information
- Caching topology information
- Data distribution map
The client uses this information to help determine which cache servers to connect to and how to access the cache based on the caching topology.
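To illustrate how a distribution map enables one-hop access, here is a small hypothetical sketch: a key is hashed to a bucket, and the map tells the client which server owns that bucket. The bucket count, hash scheme, and server names are assumptions for illustration only, not NCache's actual wire format:

```python
# Hypothetical sketch of a client using a data distribution map to reach
# the right server directly. Details are illustrative, not NCache internals.
import hashlib

NUM_BUCKETS = 1024  # assumed bucket count for illustration

def bucket_of(key: str) -> int:
    # Hash the key to a stable bucket number.
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_BUCKETS

# Distribution map received from the cluster at connect time:
# each bucket range is owned by one server.
distribution_map = {
    range(0, 512): "server-A:9800",
    range(512, 1024): "server-B:9800",
}

def server_for(key: str) -> str:
    b = bucket_of(key)
    for bucket_range, server in distribution_map.items():
        if b in bucket_range:
            return server
    raise KeyError(key)

# The client contacts exactly this server -- one hop, no proxying.
print(server_for("customer:42"))
```

When the cluster topology changes, the servers push an updated map to the clients, so routing stays correct without client-side configuration.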
Because the NCache cluster is self-healing, it provides failover support both within the cluster and for the clients when a server is added or removed at runtime:
- Cluster failover support: The cluster automatically rearranges itself by updating its connections to the other servers, once a server is added or removed.
- Client failover support: If a client's connected server is removed, the client automatically connects to another server in the cluster; if a server is added, clients update their membership information and can opt to connect to the new server.
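The client-side half of this can be sketched as a simple retry-over-members loop. This is a hypothetical model of the behavior, not NCache's client library; the `CacheClient` class, `DOWN` set, and server names are made up for illustration:

```python
# Hypothetical sketch of client failover: try the connected server first,
# then fall back to the remaining known cluster members.

DOWN = {"server-A"}  # simulates a crashed node

class CacheClient:
    def __init__(self, servers):
        self.servers = list(servers)   # membership list received at connect time
        self.current = self.servers[0]

    def _send(self, server, op):
        # Stand-in for a real TCP request/response.
        if server in DOWN:
            raise ConnectionError(server)
        return f"{op} handled by {server}"

    def execute(self, op):
        try:
            return self._send(self.current, op)
        except ConnectionError:
            # Failover: transparently connect to another cluster member.
            for server in self.servers:
                if server == self.current:
                    continue
                try:
                    result = self._send(server, op)
                    self.current = server   # stick with the healthy server
                    return result
                except ConnectionError:
                    continue
            raise

client = CacheClient(["server-A", "server-B"])
print(client.execute("GET user:1"))  # transparently served by server-B
```

The application code never sees the failure; the operation simply completes against a surviving node.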
For more detail on high availability features, head over to our blog High Availability Promised with NCache.
Attaining Runtime Scalability of NCache Cluster
Since NCache stores your data while also providing advanced features like Pub/Sub messaging and query execution, you can expect to run into memory or computational limits if all your transactions hit a single server. This is why NCache provides seamless linear scaling to handle a growing number of requests per second and store more data.
NCache Web Manager makes scaling your environment as simple as clicking a few buttons: you get a dynamic cluster with additional nodes without stopping your clients. The following GIF shows how simple it is to scale your cluster dynamically in NCache:
NCache's dynamic cluster lets clients fetch the required data in a single hop, because each client routes an operation directly to the server that owns the data, without any user intervention. Moreover, bulk client operations are sent to and executed on all nodes in parallel, and the results from each node are merged into a single result, making the operations scalable while the parallelism enhances transaction performance.
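The fan-out-and-merge pattern described above can be sketched like this. The shard layout, `query_node` function, and key names are illustrative assumptions; the point is that per-node work runs in parallel and the partial results merge into one response:

```python
# Hypothetical sketch of fan-out/merge: a bulk operation executes on all
# nodes in parallel and the per-node results are merged into one result.
from concurrent.futures import ThreadPoolExecutor

# Illustrative partitions: each server holds a shard of the data.
shards = {
    "server-A": {"k1": 1, "k2": 2},
    "server-B": {"k3": 3},
}

def query_node(server, keys):
    # Stand-in for executing the operation on one cache server.
    return {k: shards[server][k] for k in keys if k in shards[server]}

def bulk_get(keys):
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(query_node, s, keys) for s in shards]
        merged = {}
        for f in futures:
            merged.update(f.result())   # merge per-node partial results
        return merged

print(bulk_get(["k1", "k3"]))  # {'k1': 1, 'k3': 3}
```

Adding a node adds another parallel worker to the fan-out, which is what makes the bulk operation scale with cluster size.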
With pipelining, NCache reduces network overhead by combining multiple client operations into a single TCP call to the server. Similarly, the responses to those operations are returned to the client in a single chunk in one call. This helps operations scale.
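The effect of pipelining can be shown with a toy connection that buffers operations and flushes them in one round trip. This is a conceptual sketch only; the `PipelinedConnection` class and response format are made up for illustration:

```python
# Hypothetical sketch of pipelining: buffer operations and send them in one
# network round trip instead of one call per operation.
class PipelinedConnection:
    def __init__(self):
        self.buffer = []
        self.round_trips = 0

    def enqueue(self, op):
        self.buffer.append(op)

    def flush(self):
        # One TCP call carries every buffered operation; the responses
        # come back together in a single chunk.
        self.round_trips += 1
        responses = [f"OK:{op}" for op in self.buffer]
        self.buffer.clear()
        return responses

conn = PipelinedConnection()
for i in range(100):
    conn.enqueue(f"SET key{i}")

responses = conn.flush()
print(len(responses), conn.round_trips)  # 100 operations, 1 round trip
```

Without pipelining, the same workload would cost 100 round trips; the per-call network latency, not the cache itself, would dominate.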
With object pooling, the NCache server pools objects and reuses them instead of invoking the garbage collector over and over. Garbage collection is a performance-intensive task, so reducing the need for it results in higher performance and scalability of your environment.
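A minimal object pool illustrating this idea is sketched below (the `BufferPool` class and sizes are assumptions for illustration, not NCache's implementation). The pool allocates a fixed set of buffers up front and hands them out repeatedly, so a request burst causes reuse rather than fresh allocations:

```python
# Hypothetical sketch of object pooling: reuse buffers instead of allocating
# one per request, reducing garbage-collection pressure.
class BufferPool:
    def __init__(self, size=4, buf_len=4096):
        self._free = [bytearray(buf_len) for _ in range(size)]
        self.allocations = size      # allocations happen once, up front

    def acquire(self):
        if self._free:
            return self._free.pop()  # reuse an existing buffer
        self.allocations += 1        # pool exhausted: fall back to allocating
        return bytearray(4096)

    def release(self, buf):
        self._free.append(buf)       # return the buffer for reuse

pool = BufferPool(size=2)
for _ in range(1000):                # 1000 requests...
    buf = pool.acquire()
    pool.release(buf)
print(pool.allocations)              # ...but still only 2 allocations
```

In a garbage-collected runtime like .NET, those 998 avoided allocations are 998 objects the GC never has to track or reclaim.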
NCache also offers a client cache: a local cache residing where the application resides. Since the client cache sits between the application and the clustered cache, it is automatically synced with the cluster and boosts performance, especially for read operations, by cutting down on network overhead.
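The read path through a client cache can be sketched as follows. This is a conceptual model only; the dictionaries and counter are stand-ins, and the synchronization protocol that keeps the local copy consistent with the cluster is abstracted away:

```python
# Hypothetical sketch of a client cache: reads are served locally when
# possible, falling back to the clustered cache only on a miss.
clustered_cache = {"user:1": "Alice"}   # stand-in for the remote cluster
local_cache = {}                         # client cache, in-process with the app
remote_reads = 0                         # counts trips over the network

def get(key):
    global remote_reads
    if key in local_cache:               # hit: no network trip at all
        return local_cache[key]
    remote_reads += 1                    # miss: one trip to the cluster
    value = clustered_cache[key]
    local_cache[key] = value             # keep a synced local copy
    return value

get("user:1")
get("user:1")
print(remote_reads)  # 1: the second read never left the process
```

For read-heavy workloads, repeated reads of the same keys are served at in-process memory speed, which is where most of the performance gain comes from.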
For more detail, you can check out the blog: Scalability Architecture in NCache – An Insight
NCache, being a .NET-native distributed caching solution, fits seamlessly into your application stack. It boosts performance tremendously through object pooling, parallel operations, and a client cache that sits right next to your application. Besides being scalable, it maintains 100% uptime at all times to ensure high availability of your data and clients.