NCache Features - Open Source .NET/.NET Core Distributed Cache

.NET Core

.NET Core is a cross-platform solution from Microsoft for creating web apps, microservices, libraries, and console applications. .NET Core runs on the popular Windows, Linux, macOS, and Windows Nano Server operating platforms. It is compatible with .NET Framework, Xamarin, and Mono via the .NET Standard Library. Furthermore, .NET Core is open source, released under the MIT and Apache 2 licenses.

NCache .NET Core Client NCache is compatible with applications built using .NET Core (as well as the traditional .NET Framework). The NCache .NET Core Client is included in the distributed cache cluster server software for the NCache Enterprise and Community Editions. Customers' .NET Core apps communicate with the NCache distributed cache cluster by connecting through the NCache .NET Core Client. The NCache distributed cache cluster runs on the .NET Framework, while the NCache .NET Core Client runs on .NET Core.
With NCache version 4.8 and later, a separate NCache Client license no longer exists. Rather, the .NET Core Client is included in the Enterprise or Community Edition server license(s).
The NCache .NET Core Client runs on Windows using the Enterprise or Community Edition (.msi installer), and on Linux using a separate .NET Core Client (.tar.gz installer).

Virtualization and Containerization

Docker Support NCache fully supports Docker for cache clients as well as cache servers. This enables .NET applications deployed in Docker to seamlessly include the NCache Client, and it allows all NCache servers in the caching tier to be deployed in Docker, making NCache deployment very easy.
NCache Server and NCache Client are both available in the Docker Hub to include in your Docker configuration.

Performance and Scalability

Performance is how fast cache operations are performed at a normal transaction load. Scalability is how well that speed is sustained as the transaction load grows higher and higher. NCache is extremely fast and scalable.
See NCache Benchmarks for more details.

Cache Performance NCache is an extremely fast distributed cache. It is much faster than going to the database to read data. It provides sub-millisecond response times to its clients.
Cache Scalability Unlike databases, NCache is able to scale out seamlessly and lets you keep growing your transaction load. On top of this, NCache provides linear scalability, which means that as you add cache servers, your transaction capacity increases proportionally.
Bulk Operations NCache provides bulk operations such as bulk Get, Add, Insert, and Remove. These cover most of the major cache operations and give a great performance boost.
Asynchronous Operations An asynchronous operation returns control to the application and performs the cache operation in the background. This is very useful in many cases and greatly improves application performance.
Compression You can enable compression in the configuration file, change it at runtime, and apply the change with "Hot Apply" without restarting the cache.
Compact Serialization NCache generates serialization code and compiles it in memory when your application connects to the cache. This code is used to serialize objects and it is almost 10 times faster than the regular .NET serialization (especially for larger objects). NCache also stores type-IDs instead of long string-based type names in the serialized object so it is much smaller (hence compact). The following enhancements have been made to compact serialization:
  1. Users can select and de-select the data members to be compact serialized.
  2. Byte arrays are no longer serialized.
  3. Compact types are hot-applicable.
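The space saving from type-IDs can be illustrated with a small conceptual sketch in Java (NCache also ships a Java client). The class and method names below are illustrative, not the NCache API: a registry hands each registered type a short numeric ID, so only 2 bytes travel in the serialized header instead of the full type name.

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Conceptual sketch: a type registry that writes a 2-byte type ID
// into the serialized stream instead of the full type name.
class CompactTypeRegistry {
    private final Map<Class<?>, Short> typeIds = new HashMap<>();
    private short nextId = 1;

    // Register a type once; afterwards only the short ID travels on the wire.
    public short register(Class<?> type) {
        return typeIds.computeIfAbsent(type, t -> nextId++);
    }

    // Header size if the full class name were embedded (regular serialization).
    public int fullNameHeaderBytes(Class<?> type) {
        return type.getName().getBytes(StandardCharsets.UTF_8).length;
    }

    // Header size with a compact 2-byte type ID.
    public int compactHeaderBytes() {
        return Short.BYTES; // 2 bytes regardless of type-name length
    }
}
```

For a type like java.util.ArrayList, the 2-byte ID replaces a name header that is an order of magnitude larger, which is where part of the "compact" saving comes from.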
Indexes NCache creates indexes inside the cache to speed up various data retrieval cache operations. You can use this with groups, tags, Named Tags, searching, and more. Expiration and eviction policies also use indexes.
Multiple Network Interface Cards You can assign two network interface cards to a cache server. One can be used for clients to talk to the cache server and the second for multiple cache servers in the cluster to talk to each other. It improves your data bandwidth scalability greatly.

Cache Elasticity (High Availability)

Cache elasticity refers to how flexible the cache is at runtime. Can you perform the following operations at runtime without stopping the cache or your application?

  • Add or remove cache servers at runtime without stopping the cache.
  • Make cache configuration changes without stopping the cache.
  • Add or remove web/application servers without stopping the cache.
  • Have failover support in case any server goes down (meaning cache clients are able to continue working seamlessly).

NCache provides self-healing dynamic cache clustering that makes NCache highly elastic.

See Dynamic Cache Clustering for details.

Recovery from Split-Brain Split-brain is a situation where a temporary network failure between cluster nodes results in multiple sub-clusters. Each sub-cluster, in this case, has its own coordinator node and does not know about the other sub-clusters. This can eventually result in inconsistent data.
With NCache 4.9 and above, users can enable cache clusters to recover from split-brain scenarios automatically.
Cache Client Keep-Alive Some firewalls break idle network connections, causing problems in cache client-to-cache server communication in NCache. The Cache Client Keep-Alive feature, if enabled on a client node, automatically sends a lightweight packet to the cache servers at a configurable interval (a sort of heartbeat).
These packets are sent only in case of "no activity" between clients and servers and therefore do not interfere with regular client/server traffic.
Dynamic Cache Cluster NCache is highly dynamic and lets you add or remove cache servers at runtime without any interruption to the cache or your application.
Peer-to-peer architecture This means there are no "master" or "slave" nodes in the cluster. There is a "primary coordinator" node, which is the senior-most node; if it goes down, the next senior-most node automatically becomes the primary coordinator.
Connection Failover Full support. When a cache server goes down, the NCache clients automatically continue working with the other servers in the cluster without any interruption.
Dynamic Configuration A lot of configuration can be changed at runtime without stopping the cache or your applications. NCache's "Hot Apply" feature makes this possible.
Unlimited Cache Clusters NCache allows you to create an unlimited number of cache clusters. The only limiting factor is the underlying hardware (RAM, CPU, NICs, etc.).
Persistence NCache provides a key-value distributed cache with a persistence store, so valuable data can be reliably retrieved when required. Persistence ensures high data availability, while the in-memory data continues to provide high performance. NCache maintains a copy of the cache data in the persistence store and loads the persisted data on cache restart (planned or unplanned).
Backup and Restore NCache allows flexible backups of the persisted data while the cache is still running. You can back up the cached data and restore it in case of a disaster, or take periodic backups of the persistence store for maintenance or other reasons.

Cache Topologies

Cache Topologies determine data storage and client connection strategy. There are different topologies for different uses.
See NCache Caching Topologies for details.

Local Cache You can use NCache as an InProc or OutProc local cache. InProc is much faster, but your memory consumption is higher if you have multiple application processes. OutProc is slightly slower but reduces memory consumption because there is only one cache copy per server.
Client Cache (Near Cache) Client Cache is simply a local InProc/OutProc cache on the client machine but one that stays connected and synchronized with the distributed cache cluster. This way, the application really benefits from this "closeness" without compromising on data integrity.
Mirrored Cache Mirrored Cache is a 2-node active-passive cache and data mirroring is done asynchronously.
Replicated Cache Replicated Cache is an active-active cache where the entire cache is copied to each cache server. Reads are super fast and writes are done as atomic operations within the cache cluster.
Partitioned Cache You can create a dynamic Partitioned Cache. All partitions are created and clients are made aware at runtime. This allows you to add or remove cache servers without any interruption.
Partitioned-Replica Cache Same as the Partitioned Cache, and fully dynamic, except that a "replica" of each partition is also kept on another cache server for reliability.

WAN Replication

WAN replication is an important feature for many applications deployed in multiple data centers either for disaster recovery purposes or for load balancing of regional traffic. The idea behind WAN replication is that it must not slow down the cache in each geographical location due to the high latency of WAN. NCache provides Bridge Topology to handle all of this.

See WAN Replication for details.

Active-Passive Bridge Topology (One Active, One Passive)
You can create a Bridge between an active and a passive site. The active site submits all updates to the Bridge, which then replicates them to the passive site.
Active-Active Bridge Topology (Both Active)
You can create a Bridge between two active sites. Both submit their updates to the Bridge, which resolves conflicts using a "last update wins" rule or a custom conflict resolution handler provided by you. The Bridge then ensures that both sites have the same update.
Switch between Active/Active and Active/Passive While adding caches to the Bridge, you can configure a cache to participate as an active or passive member of the Bridge. Even when the Bridge is up and running, you can turn a passive into an active and an active into a passive without losing any data.
The Bridge configuration experience reflects this: the Bridge topology can be switched between Active-Active and Active-Passive at any time.
Connect/Disconnect Caches Cache administrators can temporarily connect and disconnect caches from the Bridge while it is running. When a cache is disconnected, no data is transferred between the Bridge and the disconnected cache. Similarly, the cache on the other side of the Bridge stops queuing data because the disconnected cache is no longer receiving any data.
The cache can be reconnected at any time.

Cache Administration and Management

Cache administration is a very important aspect of any distributed cache. A good cache should provide the following:

  • GUI-based and command line tools for cache administration, including cache creation and editing/updates.
  • GUI-based tools for monitoring cache activity at runtime.
  • Cache statistics based on PerfMon (the standard for Windows).

NCache provides powerful support in all these areas.
See Admin Tools for details.

Thin NCache Manager Project Files The GUI-based NCache management tool (called NCache Manager) previously retained cache configuration information inside its project file. This caused data integrity issues if cache configuration modifications were made on different machines. To avoid this, NCache Manager no longer stores any cache configuration information inside its project files. Instead, all configuration information is kept on the cache servers, where it can be accessed from any location.
Total Cache Management Through PowerShell NCache has a rich set of command line tools (along with powerful GUI-based management tools). With NCache 4.8 and later, all NCache command-line cache management tools are implemented in PowerShell, allowing for very sophisticated cache management.
Visual Studio Integration NCache allows you to perform basic management and configuration operations within Visual Studio. With NCache 4.4 SP2 and later, the Developer installation comes with an 'NCache Manager' extension which helps developers manage NCache from Visual Studio. Visual Studio 2010/2012/2013/2015/2017 are supported by NCache.
Cache Admin GUI Tool NCache Manager is a powerful GUI tool for NCache. It gives you an explorer view and lets you quickly administer the cache including cache creation/editing and many other functions.
Cache Monitoring GUI Tool NCache Monitor lets you monitor NCache cluster-wide activity from a single location. It also lets you monitor all NCache clients from a single location. And, you can incorporate non-NCache PerfMon counters into it for real-time comparison with NCache statistics.
PerfMon Counters NCache provides a rich set of PerfMon counters that can be seen from NCache Manager, NCache Monitor, or any third-party tool that supports PerfMon monitoring.
Graceful Node Stop A node can now be gracefully stopped in a cluster. This action ensures that all client requests that have reached the node are executed on the cache before it comes to a complete stop. Similarly, all write-behind operations pending in the queue at that time are also executed on the data source. However, no new client requests are accepted by this node.
ReportView Control NCache Monitor also offers a report-view style dashboard containing two report controls: one for cache server nodes and one for client nodes.
Users can drop counters into these controls, and their values are shown in a report view similar to PerfMon.
Logging of Counters Counters added to the report view can also be configured for logging. Users can start and stop logging at any time, or schedule logging to start automatically by specifying start and stop times. The log files are generated in .csv format.
Command Line Admin Tools NCache provides a rich set of command line tools/utilities. You can create a cache, add remote clients to it, add server nodes to it, start/stop the cache, and much more.
SNMP SNMP (Simple Network Management Protocol) is a standard protocol through which different devices on a network communicate and share information. NCache supports monitoring through this protocol using its SNMP counters. You can read more about these SNMP counters in the NCache Administrators Guide.
Grafana Grafana is an open-source analytics and monitoring tool. NCache provides a Grafana Application Plugin that collects and displays NCache metrics data from your cluster on several feature-rich metrics dashboards using Prometheus as a data source.
Prometheus Prometheus is an open-source monitoring system that records real-time metrics in a time series database. NCache provides support for monitoring its performance counters through Prometheus. You can monitor Distributed Caches, Distributed Cache with Persistence, the Pub/Sub Message Store, Distributed Lucene, Clients, and Bridges through the extensive counters published by NCache.
NCache Log Viewer The NCache Log Viewer is an interactive GUI tool that displays logs in an organized manner. It lets you maintain logs by category, identify individual fields separately, and customize search entries in a way convenient for you.

Security & Encryption

Many applications deal with sensitive data or are mission critical and cannot allow the cache to be open to everybody. Therefore, a good distributed cache provides restricted access based on authentication and authorization to classify people into different groups of users. And, it should also allow data to be encrypted inside the client application process before it travels to the cache cluster.
NCache provides strong support in all of these areas.
See Security and Encryption Features for details.

Transport Level Security (TLS) 1.2 All communication from NCache clients to NCache servers can now be optionally secured through TLS 1.2 (a newer specification than SSL 3.0). TLS is the same protocol used by HTTPS to ensure a secure connection between the browser and a web server.
TLS 1.2 ensures all data traveling between NCache clients and NCache servers is fully encrypted and secured. Please note that encryption/decryption on NCache clients and NCache servers have a slight performance impact.
Active Directory/LDAP Authentication You can authenticate users against Active Directory or LDAP. If security is enabled, nobody can use the cache without authentication and authorization.
Authorization You can authorize users as either "users" or "admins". "Users" can only access the cache for read-write operations, while "admins" can administer the cache.
Encryption (3DES & AES) You can enable encryption, and NCache automatically encrypts all items inside the client process before sending them to the cache. Decryption also happens automatically and transparently. Currently, 3DES, AES-128, AES-192, AES-256, AES-FIPS 128, AES-FIPS 192, and AES-FIPS 256 encryption algorithms are provided, and more are being added.
When encryption is enabled, indexed data is also encrypted.
Support for HTTPS (NCache Manager) NCache enables the use of HTTPS for NCache Manager in Windows through TLS certificates.

Object Caching Features

These are the most basic operations without which a distributed cache becomes almost unusable. These by no means cover all the operations a good distributed cache should have.

Get, Add, Insert, Remove, Exists, Clear Cache NCache provides variations of these operations and therefore more control to the user.
Expirations Absolute expiration is good for data that comes from the database and must expire after a known time. Sliding expiration means the item expires after a period of inactivity, which is good for sessions and other temporary data that must be removed once used. NCache also supports default expirations that give you more flexibility in setting your data invalidation strategies.
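The difference between the two modes can be sketched in a few lines of Java (a conceptual illustration, not the NCache API): an absolute entry has a fixed deadline, while a sliding entry pushes its deadline forward on every access.

```java
// Conceptual sketch of the two expiration modes (not the NCache API):
// absolute expiration fixes a deadline up front; sliding expiration
// renews the deadline on every access.
class ExpiringEntry {
    private final long slidingMillis; // 0 means absolute mode
    private long expiresAt;

    private ExpiringEntry(long expiresAt, long slidingMillis) {
        this.expiresAt = expiresAt;
        this.slidingMillis = slidingMillis;
    }

    public static ExpiringEntry absolute(long now, long ttlMillis) {
        return new ExpiringEntry(now + ttlMillis, 0);
    }

    public static ExpiringEntry sliding(long now, long idleMillis) {
        return new ExpiringEntry(now + idleMillis, idleMillis);
    }

    // Returns true if still valid; a sliding entry is also renewed on access.
    public boolean touch(long now) {
        if (now >= expiresAt) return false;                     // expired
        if (slidingMillis > 0) expiresAt = now + slidingMillis; // renew
        return true;
    }
}
```

A sliding entry created with a 100 ms idle window survives indefinitely as long as it is touched more often than every 100 ms; an absolute entry expires at its deadline regardless of access.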
Lock & Unlock A lock is used to exclusively lock a cached item so nobody else can read or write it. This item stays locked until either the lock expires or it is unlocked. NCache also has incorporated "lock/unlock" features in "Get" and "Insert" calls where "Get" returns an item locked and "Insert" updates the item and also unlocks it.
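The lock-until-expiry behavior described above can be sketched as follows (a conceptual Java illustration with hypothetical names, not the NCache API): a lock request succeeds only if no unexpired lock is held.

```java
// Conceptual sketch of item locking with lock expiration (not the NCache API):
// an item can be locked exclusively, and the lock is honored only until it expires.
class LockableItem {
    private long lockedUntil = 0;

    // Acquire the lock for 'durationMillis' if it is free or has expired.
    public boolean lock(long now, long durationMillis) {
        if (now < lockedUntil) return false; // still locked by someone else
        lockedUntil = now + durationMillis;
        return true;
    }

    public void unlock() {
        lockedUntil = 0;
    }
}
```

The expiry acts as a safety valve: even if the locking client crashes without calling unlock, the item becomes lockable again once the lock duration elapses.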
Streaming API For large objects (e.g. movie or audio files), where you might want to stream them, NCache provides a streaming API. With this API, you can get chunks of data and pass one chunk upward at a time.

Managing Data Relationships

Since most data being cached comes from relational databases, it has relationships among various data items. So, a good cache should allow you to specify these relationships in the cache and then keep the data integrity. It should allow you to handle one-to-one, one-to-many, and many-to-many data relationships in the cache automatically without burdening your application with this task.

See more about Managing Data Relationships.

Key Based Dependency NCache provides full support for it. You can specify one cached item A depends on another cached item B which then depends on a third cached item C. Then, if C is ever updated or removed, B is automatically removed from the cache and that triggers the removal of A from the cache as well. And, all of this is done automatically by the cache.
With this feature, you can keep track of one-to-one, one-to-many, and many-to-many relationships in the cache and invalidate cached items if their related items are updated or removed.
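The A → B → C cascade described above can be sketched with a small dependency map (a conceptual Java illustration, not the NCache API): removing an item recursively removes everything that depends on it.

```java
import java.util.*;

// Conceptual sketch of key-based dependency (not the NCache API):
// each key may depend on another key; removing a key cascades to
// every key that (transitively) depends on it.
class DependencyCache {
    private final Map<String, Object> store = new HashMap<>();
    private final Map<String, Set<String>> dependents = new HashMap<>();

    public void put(String key, Object value) {
        store.put(key, value);
    }

    // Declare that 'key' depends on 'dependsOn'.
    public void put(String key, Object value, String dependsOn) {
        store.put(key, value);
        dependents.computeIfAbsent(dependsOn, k -> new HashSet<>()).add(key);
    }

    public Object get(String key) {
        return store.get(key);
    }

    // Removing C also removes B (which depends on C) and then A (which depends on B).
    public void remove(String key) {
        store.remove(key);
        for (String child : dependents.getOrDefault(key, Collections.emptySet())) {
            remove(child); // cascade the removal
        }
        dependents.remove(key);
    }
}
```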
Multi-Cache Key Dependency This is an extension of Key Based Dependency except it allows you to create this dependency across multiple caches.

Synchronization with Data Sources

Database synchronization is a very important feature for any good distributed cache. Since most data being cached is coming from a relational database, there are always situations where other applications or users might change the data and cause the cached data to become stale. To handle these situations, a good distributed cache should allow you to specify dependencies between cached items and data in the database. Then, whenever that data in the database changes, the cache becomes aware of it and either invalidates its data or reloads a new copy.

Additionally, a good distributed cache should allow you to synchronize the cache with non-relational data sources since real life is full of those situations as well. NCache provides a very powerful Database Synchronization Feature.

SQL Dependency NCache provides SqlDependency support for SQL Server. You can associate a cached item with a SQL statement-based dataset in SQL Server. Then whenever that dataset changes (addition, updates, or removal), SQL Server sends a .NET event to NCache and NCache invalidates this cached item.
This feature allows you to synchronize the cache with the SQL Server database. If you have a situation where some applications or users are directly updating data in the database, you can enable this feature to ensure that the cache stays fresh.
Oracle Dependency NCache provides OracleDependency support for Oracle. It works just like SqlDependency but for Oracle. Whenever data changes in the database, Oracle notifies NCache through Oracle event notification.
Just like SqlDependency, this feature allows you to synchronize the cache with the Oracle database.
OLEDB Database Dependency NCache provides support for you to synchronize the cache with any OLEDB database. This synchronization is based on polling. It is much more efficient because in one poll, NCache can synchronize thousands of cached items instead of receiving thousands of individual events in SqlDependency.
File Based Dependency NCache allows you to specify a dependency on an external file. Then NCache monitors this file for any updates and when that happens, NCache invalidates the corresponding cached item. This allows you to keep the cached item synchronized with a non-relational data source.
Custom Dependency NCache offers a Custom Dependency feature, using which you can implement your custom logic that defines when certain data becomes invalid. This feature enables greater flexibility and control over cache management. Custom Dependency can be used with any database, not just those supported by NCache.
Aggregate Dependency NCache also allows you to use different strategies in combination with the same cache data in the form of Aggregate Cache Dependency. It allows you to associate multiple dependencies of different types with a single cached item.

Runtime Data Sharing

Runtime data sharing has become an important use for distributed caches. More and more applications today need to share data with other applications at runtime in an asynchronous fashion. Previously, relational databases were used to share data among multiple applications but that requires constant polling by the applications wanting to consume data. Then, message queues became popular because of their asynchronous features and their capability to persist events. And although message queues are great, they lack the performance and scalability requirements of today's applications.

As a result, more and more applications are using distributed caches for event driven runtime data sharing. This data sharing should be between multiple .NET applications or between .NET and Java applications. NCache provides very powerful features to facilitate runtime data sharing.

See Run-time Data Sharing for details.

Publish/Subscribe (Pub/Sub) with Topic The Publish/Subscribe (Pub/Sub) messaging paradigm is provided where a publisher sends messages into channels, without knowing the subscribers (if any). And subscribers receive only messages of interest, without knowing who the publishers are.
NCache provides named Topic support, through which a publisher sends messages to all subscribers of a Topic or to any one among them. A subscriber subscribes to a named Topic, and its callback is called by NCache when a message arrives on that Topic.
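The Topic decoupling above can be sketched in a few lines of Java (a conceptual broker, not the NCache API): publishers only know the topic name, and every subscriber callback registered on that topic is invoked on publish.

```java
import java.util.*;
import java.util.function.Consumer;

// Conceptual sketch of topic-based Pub/Sub (not the NCache API):
// publishers send to a named topic without knowing the subscribers;
// each subscriber's callback runs when a message arrives on its topic.
class TopicBroker {
    private final Map<String, List<Consumer<String>>> topics = new HashMap<>();

    public void subscribe(String topic, Consumer<String> callback) {
        topics.computeIfAbsent(topic, t -> new ArrayList<>()).add(callback);
    }

    public void publish(String topic, String message) {
        for (Consumer<String> cb : topics.getOrDefault(topic, List.of())) {
            cb.accept(message); // deliver to every subscriber of this topic
        }
    }
}
```

A message published on a topic with no subscribers is simply dropped here; a production cache would typically queue or persist it instead.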
Events with Data You can specify whether to receive data with the event or only the event. In many situations, you may want the data along with the event to guarantee that you're getting the copy that was actually modified. This is because if you don't receive data with events and your application then fetches the data from the cache, somebody else might have changed this data by that time.
Cached Item Specific Events (onInsert/onRemove) NCache allows you to register interest in various cached items. Then, when these items are updated or removed, your callbacks are called. This is true even if you're connected to the cache remotely.
Cache Level Events (Add/Insert/Remove) NCache allows you to register to be notified whenever any cached item is added, updated, or removed. Your callback is called when this happens even if your application is remotely connected to the cache.
Custom Events (Fired by Apps) NCache allows your applications to fire custom events into the cache cluster. And, other applications can register to be notified of these events.
This feature allows you to coordinate a producer/consumer scenario where after the producer has produced data, it notifies all the consumers to consume it.
Continuous Query NCache provides a powerful Continuous Query feature. Continuous Query lets you specify a SQL-like query against which NCache monitors the cache for any additions, updates, or deletes. And, your application is notified whenever this happens.
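The mechanism can be sketched with a predicate standing in for the SQL-like criteria (a conceptual Java illustration, not the NCache API): every write is checked against the registered query, and the callback fires on a match.

```java
import java.util.*;
import java.util.function.BiConsumer;
import java.util.function.Predicate;

// Conceptual sketch of Continuous Query (not the NCache API): a registered
// predicate is evaluated on every cache write, and a callback fires when
// an item enters the matching result set.
class ContinuousQueryCache {
    private final Map<String, Integer> store = new HashMap<>();
    private Predicate<Integer> query;
    private BiConsumer<String, Integer> onMatch;

    public void registerQuery(Predicate<Integer> query, BiConsumer<String, Integer> onMatch) {
        this.query = query;
        this.onMatch = onMatch;
    }

    public void insert(String key, Integer value) {
        store.put(key, value);
        if (query != null && query.test(value)) {
            onMatch.accept(key, value); // item matches the continuous query
        }
    }
}
```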

Cache Data Query (SQL-Like)

Distributed cache is frequently used to cache objects that contain data coming from a relational database. This data may be individual objects or collections that are the result of some database query.
Either way, applications often want to fetch a subset of this data and if they have the ability to search the distributed cache with a SQL-like query language and specify object attributes as part of the criteria, it makes the distributed cache much more useful for them.
NCache provides powerful Object Query Language (OQL) for searching the cache with a SQL-like query.
See Object Query Language for details.

SQL Support NCache provides rich SQL support with which you can search the cache. Your search criteria can include object attributes (e.g., an attribute equal to 'New York'), and you can also include Tags and Named Tags in the query language.
Search Data with ExecuteReader NCache enables you to perform a search on the cache based on a query specified using ExecuteReader. It returns the key-value pairs that fulfill the query criteria in a data reader; each pair consists of a cache key and its respective value.
GROUP BY for Queries You can use GROUP BY queries to obtain a result set with the count of cached items grouped by attribute values.
ORDER BY for Queries NCache provides you with the ability to sort the data in an ascending or descending order, according to the given criteria, through SQL-like query format using the ORDER BY clause.
Delete Data with ExecuteNonQuery NCache lets you delete data from the cache based on the given criteria. The DELETE statement, executed using ExecuteNonQuery, returns the number of deleted rows.
LINQ Queries NCache allows you to search the cache with LINQ queries. LINQ is a popular object querying language in .NET and NCache has implemented a LINQ provider.

Data Grouping

A distributed cache should be much more than a Hashtable with a (key, value) pair interface. It needs to meet the needs of real life applications that expect to fetch and update data in groups and collections. In a relational database, SQL provides a very powerful way to do all of this.
We've already explained how to search a distributed cache through OQL and LINQ. Now let's discuss Groups, Tags, and Named Tags. These features allow you to keep track of collections of data easily and even modify them.

Groups NCache provides the ability for you to group cached items.
You can later fetch or remove all items belonging to a group. You can also fetch just the keys and then only fetch a subset of them.
Tags A Tag is a string that you can assign to one or more cached items. And one cached item can be assigned multiple Tags.
And, later, you can fetch items belonging to one or more Tags in order to manipulate them.
You can also include Tags in Object Query Language or LINQ search as part of the criteria.
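Under the hood, tag-based retrieval amounts to maintaining an index from each tag to the keys that carry it; a conceptual Java sketch (not the NCache API) looks like this:

```java
import java.util.*;

// Conceptual sketch of tag-based retrieval (not the NCache API):
// each tag maps to the set of keys carrying it, so all items for a tag
// can be fetched or removed in one call.
class TaggedCache {
    private final Map<String, Object> store = new HashMap<>();
    private final Map<String, Set<String>> byTag = new HashMap<>();

    // Store an item and index it under every given tag.
    public void put(String key, Object value, String... tags) {
        store.put(key, value);
        for (String tag : tags) {
            byTag.computeIfAbsent(tag, t -> new HashSet<>()).add(key);
        }
    }

    // Fetch all keys carrying a tag (empty set if the tag is unknown).
    public Set<String> keysByTag(String tag) {
        return byTag.getOrDefault(tag, Set.of());
    }
}
```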
Named Tags NCache's Named Tags feature lets you assign a "key" and a "tag" value to one or more cached items. A single cached item can be assigned multiple Named Tags.
Later, you can fetch items belonging to one or more Named Tags.

Read-Through & Write-Through

Many people use distributed cache as "cache on the side" where they fetch data directly from the database and put it in the cache. Another approach is "cache through" where your application just asks the cache for the data. And, if the data isn't there, the distributed cache gets it from your data source.
The same thing goes for Write-Through. Write-behind is nothing more than a Write-Through where the cache is updated immediately and the control returned to the client application. And, then the database or data source is updated asynchronously so the application doesn't have to wait for it. NCache provides powerful capabilities in this area.

See Read-Through & Write-Through for details.

Read-Through NCache allows you to implement multiple Read-Through handlers and register them with the cache as "named providers". Then, when the application tells NCache to use Read-Through upon a "cache miss", the appropriate Read-Through handler is called by the cache server to load the data from your database or data source.
Write-Through & Write-Behind NCache allows you to implement multiple Write-Through handlers and register them with NCache as "named providers". Then, whenever the application updates a cached item and tells NCache to also call Write-Through, the NCache server calls your Write-Through handler.
If you've enabled write-behind, then NCache updates the cache immediately and queues up the database update and a background thread processes it and calls your Write-Through handler.
Write-Through & Write-Behind batching options You can specify the following:
  1. Batch size - how many items to batch together when doing database updates.
  2. Batch interval - how long to wait before processing the next batch.
  3. Retries threshold - how many retries to attempt if a database update fails.
  4. Retry queue eviction - how many items to evict from the retry queue if it becomes full.
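The batch-size option can be illustrated with a minimal write-behind queue (a conceptual Java sketch, not the NCache implementation): pending updates accumulate, and a background flush drains at most one batch at a time.

```java
import java.util.*;

// Conceptual sketch of write-behind batching (not the NCache implementation):
// cache updates queue up, and a background flush sends them to the backing
// store in batches of at most 'batchSize' items.
class WriteBehindQueue {
    private final Deque<String> queue = new ArrayDeque<>();
    private final int batchSize;

    public WriteBehindQueue(int batchSize) {
        this.batchSize = batchSize;
    }

    // Called on every cache write; returns immediately to the application.
    public void enqueue(String update) {
        queue.addLast(update);
    }

    // Drain at most one batch; these updates would go to the database together.
    public List<String> drainBatch() {
        List<String> batch = new ArrayList<>();
        while (batch.size() < batchSize && !queue.isEmpty()) {
            batch.add(queue.pollFirst());
        }
        return batch;
    }

    public int pending() {
        return queue.size();
    }
}
```

The batch interval would control how often drainBatch runs, and a retry threshold would re-enqueue a failed batch a bounded number of times; both are left out here for brevity.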
Reload Items at Expiration & Database Synchronization NCache allows you to specify that whenever a cached item expires, instead of removing it from the cache, NCache should call your Read-Through handler to read a new copy of that object and update the cache with it.
You can specify the same behavior when database synchronization is enabled: when a row in the database is updated, instead of removing the corresponding cached item, NCache reloads it with the help of your Read-Through provider.

Cache Size Management (Evictions Policies)

A distributed cache always has less storage space than a relational database. So, by design, a distributed cache is supposed to cache a subset of the data which is really the "moving window" of a data set that the applications are currently interested in.
This means that a distributed cache should allow you to specify how much memory it should consume, and once it reaches that size, the cache should evict some of the cached items. However, please keep in mind that if you're caching something that does not exist in the database (e.g., ASP.NET Sessions), then you need to do proper capacity planning to ensure that these cached items (sessions in this case) are never evicted from the cache. Instead, they should be "expired" at an appropriate time based on their usage.

Specify Cache Size NCache lets you specify the upper limit of cache size in MB as per your needs. And you can "Hot-Apply" these changes.
LRU Evictions (Least Recently Used) Under the Least Recently Used policy, the items that have not been accessed by any client application for the longest time are evicted from the cache.
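A minimal model of LRU eviction, built on Python's insertion-ordered dictionary; this illustrates the policy only and has nothing to do with NCache's internals:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU eviction model: when the cache is full, the item
    that has gone unaccessed the longest is evicted first."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()

    def get(self, key):
        value = self._items.pop(key)  # raises KeyError on a miss
        self._items[key] = value      # re-insert as most recently used
        return value

    def put(self, key, value):
        self._items.pop(key, None)
        self._items[key] = value
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # evict least recently used
```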
LFU Evictions (Least Frequently Used) Under the Least Frequently Used policy, the items that have been accessed the fewest times, counted since the last eviction or since the cache was started (whichever is later), are evicted.
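LFU differs from LRU only in what it counts: access frequency rather than recency. A sketch of the policy (illustrative only; ties here are broken arbitrarily):

```python
from collections import Counter

class LFUCache:
    """Minimal LFU eviction model: the item accessed the fewest times
    since it entered the cache is evicted first."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._items = {}
        self._hits = Counter()  # access count per key

    def get(self, key):
        self._hits[key] += 1
        return self._items[key]

    def put(self, key, value):
        if key not in self._items and len(self._items) >= self.capacity:
            # evict the key with the lowest access count
            victim = min(self._items, key=lambda k: self._hits[k])
            self._items.pop(victim)
            self._hits.pop(victim, None)
        self._items[key] = value
```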
Priority Evictions NCache lets you specify different priority values when you add an item to the cache: high, above-normal, normal, below-normal, and low. When the cache is full and NCache needs to evict items, it starts with low-priority items and evicts them in FIFO (first-in-first-out) order.
Do not Evict NCache also lets you assign a "do not evict" priority to cached items so they are never evicted. Eviction can also be disabled at the cache level so that nothing in the entire cache is evicted. This is especially useful for ASP.NET Session storage.
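Priority eviction with a "do not evict" tier can be sketched as a set of FIFO queues scanned from lowest priority upward. This is an illustrative model only; the names and the exact tier behavior are assumptions, not NCache's implementation.

```python
from collections import deque

# Eviction scans priorities in this order; lowest is evicted first.
PRIORITIES = ["low", "below-normal", "normal", "above-normal", "high"]

class PriorityEvictionCache:
    """Toy model of priority eviction: items are evicted from the
    lowest priority upward, FIFO within each priority, and items
    marked 'not-removable' are never evicted."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._items = {}
        self._fifo = {p: deque() for p in PRIORITIES + ["not-removable"]}

    def put(self, key, value, priority="normal"):
        if len(self._items) >= self.capacity:
            self._evict_one()
        self._items[key] = value
        self._fifo[priority].append(key)

    def _evict_one(self):
        for priority in PRIORITIES:  # the "not-removable" queue is skipped
            if self._fifo[priority]:
                victim = self._fifo[priority].popleft()
                del self._items[victim]
                return
```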

ASP.NET Support

ASP.NET applications need three things from a good distributed cache. And, they are ASP.NET Session State storage, ASP.NET View State caching, and ASP.NET Output Cache.
An ASP.NET Session State store must allow session replication to ensure that no session is lost even if a cache server goes down. And it must be fast and scalable, making it a better option than the InProc, StateServer, and SqlServer options that Microsoft provides out of the box. NCache has implemented a powerful ASP.NET Session State provider.
See ASP.NET Session State for details.
ASP.NET View State caching allows you to cache heavy View State on the web server so it is not sent as a "hidden field" to the user browser for a round-trip. Instead, only a "key" is sent. This makes the payload much lighter, speeds up ASP.NET response time, and also reduces bandwidth pressure and cost for you. NCache provides a feature-rich View State cache.
See ASP.NET View State for details.
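The View State optimization described above is a key-for-payload substitution: the heavy blob stays server-side and only a short key round-trips to the browser. A minimal sketch of the idea (illustrative names, not NCache's API):

```python
import uuid

class ViewStateCache:
    """Toy model of View State caching: the heavy View State blob
    stays on the server and only a short key travels to the browser
    as the hidden field."""

    def __init__(self):
        self._store = {}

    def save(self, view_state_blob):
        key = str(uuid.uuid4())       # only this key is sent to the browser
        self._store[key] = view_state_blob
        return key

    def load(self, key):
        return self._store[key]      # blob resolved on postback
```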
Third is ASP.NET Output Cache. Starting with .NET 4.0, Microsoft changed the ASP.NET Output Cache architecture to allow third-party distributed caches to be plugged in. ASP.NET Output Cache saves the output of an ASP.NET page so the page doesn't have to execute the next time it is requested. You can cache either the entire page or portions of it. NCache has implemented a provider for ASP.NET Output Cache.

ASP.NET Core Response Caching NCache's implementation of IDistributedCache works with the Distributed Cache Tag Helper, which can dramatically improve the performance of your ASP.NET Core app by caching its responses.
ASP.NET Core Session Provider & IDistributedCache NCache provides full ASP.NET Core support, in addition to its previously available support for ASP.NET on the .NET Framework.
NCache includes a powerful ASP.NET Core Session Provider that has more features than the regular ASP.NET Session Provider. And, it supports the IDistributedCache interface in ASP.NET Core.
ASP.NET Session State Store NCache has implemented an ASP.NET Session State Provider (SSP) for .NET 2.0+. You can use it without any code changes. Just change web.config.
NCache provides intelligent session replication and is much faster than any database storage for sessions.
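Plugging in the Session State Provider is a web.config change along these lines. This fragment is a sketch only; the exact type name, assembly, and attributes (e.g. `cacheName`) vary by NCache version, so consult the NCache documentation for the authoritative configuration.

```xml
<!-- Illustrative web.config sketch; attribute and type names are
     version-specific assumptions, not verbatim NCache configuration. -->
<sessionState mode="Custom" customProvider="NCacheSessionProvider">
  <providers>
    <add name="NCacheSessionProvider"
         type="Alachisoft.NCache.Web.SessionState.NSessionStoreProvider"
         cacheName="mySessionCache" />
  </providers>
</sessionState>
```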
ASP.NET View State Cache NCache has an ASP.NET View State caching module. You can use it without any code changes to your application. Just modify web.config.
You can also associate ASP.NET View State to a user session so when that session expires, all of its View State is also removed.
ASP.NET Output Cache NCache has an ASP.NET Output Cache provider implemented. It allows you to cache ASP.NET page output in a distributed cache and share it in a web farm.
Users can also write their own code to modify cache items before they are inserted into NCache, changing the expiration, dependencies, etc. of output cache items through these hooks.
For this, users implement an interface provided with OutputCacheProvider and then register the assembly and class in web.config.
NCache Backplane for SignalR NCache offers support for SignalR through an extension to the SignalR backplane provider. All of the application's web servers are registered with the provider, while clients connect to their respective web servers. As soon as a client registers with a web server, two key NCache features come into play: Custom Events and CacheItem.itemVersion.
ASP.NET Core Sessions NCache architecture allows you to scale linearly to handle extreme transaction loads by allowing you to add more cache servers at runtime. NCache also provides intelligent cache replication to ensure zero ASP.NET Core Session data loss if a web or a cache server goes down.
ASP.NET Core Data Protection Provider NCache supports the ASP.NET Core Data Protection Providers, allowing you to protect your data using different encryption algorithms. ASP.NET Core Data Protection provides a cryptographic API to protect your data, including key management and rotation.
Using ASP.NET Core Data Protection, any data that should not be accessible to everyone in a shared environment can be protected.

Third-Party Integrations

NHibernate is a very powerful and popular object-relational mapping engine. And, fortunately, it also has a second-level cache provider architecture that allows you to plug in a third-party cache without making any code changes to the NHibernate application. NCache has implemented this NHibernate Second Level Cache provider.

See NHibernate Second Level Cache for details.

Similarly, Entity Framework from Microsoft is also a very popular object-relational mapping engine. And, although Entity Framework doesn't have a nice Second Level Cache Provider architecture like NHibernate, NCache has nonetheless implemented a Second-Level Cache Provider for Entity Framework.

See Entity Framework Second Level Cache for details.

Entity Framework Core (EF Core) 2.0 Extension Methods for NCache NCache offers easy-to-use EF Core 2.0 Extension Methods that cache application data fetched through EF Core 2.0. Although these Extension Methods require some minimal coding, it is a small effort that yields a lot of control over which data to cache and for how long.
NHibernate Second Level Cache NCache provides an NHibernate Second-Level Cache Provider that you can plug in through web.config or app.config changes.
NCache has also implemented a database synchronization feature in this provider, so you can specify which classes should be synchronized with the database. NCache lets you specify SqlDependency or any OLEDB-compliant database dependency for this.
Entity Framework Second Level Cache You can plug NCache into your Entity Framework application, run it in analysis mode, and quickly see all the queries being used by it. Then, you can decide which queries should be cached and which ones to skip. You can also specify which queries should be synchronized with the database through SqlDependency.
IdentityServer4 NCache integrates with IdentityServer4, and you can plug NCache into your existing IdentityServer4 applications. You only need to change the configuration files to take advantage of NCache's distributed caching.
AppFabric NCache provides an AppFabric wrapper that makes the migration from AppFabric caching to NCache seamless for you.

Please read more about NCache and also download a 60-Day free Install Key for NCache.


© Copyright Alachisoft 2002 - . All rights reserved. NCache is a registered trademark of Diyatech Corp.