.NET Core is a cross-platform, open-source platform from Microsoft for creating web apps, microservices, libraries, and console applications. .NET Core runs on Windows, Linux, macOS, and Windows Nano Server. It is compatible with .NET Framework, Xamarin, and Mono via the .NET Standard Library. Furthermore, .NET Core is open source, released under the MIT and Apache 2.0 licenses.
|NCache .NET Core Client||NCache is compatible with applications built using .NET Core (as well as the traditional .NET Framework). The NCache .NET Core Client is included in the distributed cache cluster server software for NCache Enterprise and Community Edition. Customers' .NET Core apps communicate with the NCache distributed cache cluster through the NCache .NET Core Client. The NCache distributed cache cluster operates in the .NET Framework, while the NCache .NET Core Client runs in .NET Core.
With NCache version 4.8 and later, a separate NCache Client license no longer exists. Rather, the .NET Core Client is included in the Enterprise or Community Edition server license(s).
NCache's .NET Core Clients can run on Windows using the Enterprise or Community Edition .msi installer, and on Linux using a separate .NET Core Client .tar.gz installer.
|Docker Support||NCache fully supports Docker for cache clients as well as cache servers. This lets .NET applications deployed in Docker seamlessly include the NCache Client, and it lets you deploy all NCache servers in the caching tier in Docker, making NCache deployment very easy.
NCache Server and NCache Client are both available in the Docker Hub to include in your Docker configuration.
Performance is defined as how fast cache operations are performed at a normal transaction load. Scalability is defined as how fast the same cache operations are performed under higher and higher transaction loads. NCache is extremely fast and scalable.
See NCache Benchmarks for more details.
|Cache Performance||NCache is an extremely fast distributed cache. It is much faster than going to the database to read data. It provides sub-millisecond response times to its clients.|
|Cache Scalability||Unlike databases, NCache is able to scale out seamlessly and allows you to keep growing your transaction load. On top of this, NCache provides linear scalability, which means that as you add cache servers, your transaction capacity grows proportionally.|
|Bulk Operations||Bulk Get, Add, Insert, and Remove. These cover most major cache operations and give a great performance boost.|
|Asynchronous Operations||An asynchronous operation returns control to the application and performs the cache operation in the background. Very useful in many cases and improves application performance greatly.|
|Compression||You can enable compression in the configuration file and change it at runtime through "Hot Apply".|
|Compact Serialization||NCache generates serialization code and compiles it in-memory when your application connects to the cache. This code is used to serialize objects and is almost 10 times faster than regular .NET serialization (especially for larger objects). NCache also stores type IDs instead of long string-based type names in the serialized object, so it is much smaller (hence compact).|
|Indexes||NCache creates indexes inside the cache to speed up various data retrieval cache operations. You can use this with groups, tags, named tags, searching, and more. Expiration and eviction policies also use indexes.|
|Multiple Network Interface Cards||You can assign two network interface cards to a cache server. One can be used for clients to talk to the cache server and second for multiple cache servers in the cluster to talk to each other. Improves your data bandwidth scalability greatly.|
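The type-ID idea behind Compact Serialization can be sketched in a few lines. This is a toy illustration, not NCache's actual generated code; the `CompactRegistry` class and `Customer` type are invented for the example.

```python
import pickle

# Toy sketch of the compact-serialization idea: replace long type names
# with small integer type IDs in the serialized payload. (Illustrative
# only -- NCache generates and compiles real serialization code.)
class CompactRegistry:
    def __init__(self):
        self._type_to_id = {}
        self._id_to_type = {}

    def register(self, cls):
        type_id = len(self._type_to_id) + 1
        self._type_to_id[cls] = type_id
        self._id_to_type[type_id] = cls

    def serialize(self, obj):
        # Store a small type ID plus the field values, not the full
        # "Namespace.TypeName, Assembly" string.
        type_id = self._type_to_id[type(obj)]
        return pickle.dumps((type_id, vars(obj)))

    def deserialize(self, data):
        type_id, fields = pickle.loads(data)
        obj = object.__new__(self._id_to_type[type_id])
        obj.__dict__.update(fields)
        return obj

class Customer:
    def __init__(self, name, city):
        self.name = name
        self.city = city

registry = CompactRegistry()
registry.register(Customer)
blob = registry.serialize(Customer("Alice", "New York"))
copy = registry.deserialize(blob)
```

The payload carries a one-integer type tag instead of a long type name, which is where the size saving in "compact" serialization comes from.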
Cache elasticity means how flexible the cache is at runtime. Are you able to perform the following operations at runtime without stopping the cache or your application?
NCache provides a self-healing dynamic cache clustering that makes NCache highly elastic.
See Dynamic Clustering for details.
|Recovery from Split-Brain||Split-Brain is a situation where temporary network failures between cluster nodes result in multiple sub-clusters. Each sub-cluster, in this case, has its own coordinator node and does not know about the other sub-clusters. This can eventually result in inconsistent data.
With NCache 4.9, users can enable the cache clusters to automatically recover from Split-Brain scenarios.
|Cache Client Keep Alive||Some firewalls break idle network connections, causing problems in cache client-to-cache server communication in NCache. The Cache Client Keep Alive feature, if enabled on a client node, automatically sends a lightweight packet to the cache servers at a configurable interval (a sort of heart-beat).
These packets are sent only in case of "no activity" between clients and servers and therefore do not interfere with regular client/server traffic.
|Dynamic Cache Cluster||NCache is highly dynamic and lets you add or remove cache servers at runtime without any interruption to the cache or your application.|
|Peer to peer architecture||This means there are no "master" or "slave" nodes in the cluster. There is a "primary coordinator" node, which is the senior-most node. If it goes down, the next senior-most node automatically becomes the primary coordinator.|
|Connection Failover||Full support. When a cache server goes down, the NCache clients automatically continue working with other servers in the cluster and no interruption is caused.|
|Dynamic Configuration||A lot of configuration can be changed at runtime without stopping the cache or your applications. NCache's "Hot Apply" feature makes this possible.|
|Unlimited Cache Clusters||NCache allows you to create an unlimited number of cache clusters. The only limiting factor is the underlying hardware (RAM, CPU, NICs, etc.)|
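The connection failover behavior described above can be sketched as a client that holds the full server list and transparently retries the next server when one is down. This is an illustrative toy in Python; `FailoverClient`, `FakeServer`, and `ServerDown` are invented names, and the real NCache client manages failover internally.

```python
# Minimal sketch of client-side connection failover: the client knows all
# servers in the cluster and falls through to the next one on failure.
class ServerDown(Exception):
    pass

class FailoverClient:
    def __init__(self, servers):
        self.servers = list(servers)

    def get(self, key):
        last_error = None
        for server in self.servers:
            try:
                return server.get(key)
            except ServerDown as e:
                last_error = e          # try the next server in the list
        raise last_error

class FakeServer:
    def __init__(self, data=None, alive=True):
        self.data = data or {}
        self.alive = alive

    def get(self, key):
        if not self.alive:
            raise ServerDown()
        return self.data.get(key)

down = FakeServer(alive=False)
up = FakeServer({"k": "v"})
client = FailoverClient([down, up])
value = client.get("k")   # first server is down; second one answers
```

The application code calls `get` once and never sees the failed server, which is the "no interruption" property the table row describes.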
Cache Topologies determine data storage and client connection strategy. There are different topologies for different types of use.
See NCache Caching Topologies for details.
|Local Cache||You can use NCache as InProc or OutProc local cache. InProc is much faster but your memory consumption is higher if you have multiple application processes. OutProc is slightly slower but saves you memory consumption because there is only one cache copy per server.|
|Client Cache (Near Cache)||Client Cache is simply a local InProc/OutProc cache on client machine but one that stays connected and synchronized with the distributed cache cluster. This way, the application really benefits from this "closeness" without compromising on data integrity.|
|Mirrored Cache||Mirrored Cache is a 2-node active-passive cache and data mirroring is done asynchronously.|
|Replicated Cache||Replicated Cache is active-active cache where the entire cache is copied to each cache server. Reads are super fast and writes are done as atomic operations within the cache cluster.|
|Partitioned Cache||You can create a dynamic Partitioned Cache. All partitions are created, and clients are made aware of them, at runtime. This allows you to add or remove cache servers without any interruption.|
|Partitioned-Replica Cache||Same as Partitioned Cache, and fully dynamic, except that a "replica" of each partition is kept on another cache server for reliability.|
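One common way to implement the partition-plus-replica placement described above is to hash each key to a partition, put the primary on one server, and put the replica on the next server in the ring. This is a conceptual sketch only; NCache computes its own distribution maps at runtime, and the function names here are invented.

```python
import zlib

# Sketch of partitioned-replica key placement: each key hashes to a
# partition; the primary lives on one server and the replica on the next
# server, so losing one server never loses a partition.
def partition_of(key, partition_count):
    # Stable hash (unlike Python's randomized hash()) -> partition index.
    return zlib.crc32(key.encode()) % partition_count

def placement(key, servers):
    p = partition_of(key, len(servers))
    primary = servers[p]
    replica = servers[(p + 1) % len(servers)]   # replica on the next node
    return primary, replica

servers = ["srv0", "srv1", "srv2"]
primary, replica = placement("customer:42", servers)
```

Because primary and replica are always different servers, a single node failure leaves a full copy of every partition available.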
WAN replication is an important feature because many applications are deployed in multiple data centers, either for disaster recovery purposes or for load balancing of regional traffic. The idea behind WAN replication is that it must not slow down the cache in each geographical location due to the high latency of the WAN. NCache provides a Bridge Topology to handle all of this. See WAN Replication for details.
|One Active - One Passive||Bridge Topology Active-Passive
You can create a Bridge between the active and passive sites. The active site submits all updates to the Bridge which then replicates them to the passive site.
|Both Active||Bridge Topology Active-Active
You can create a Bridge between two active sites. Both submit their updates to the Bridge, which resolves conflicts using a last-update-wins rule or a custom conflict resolution handler provided by you. The Bridge then ensures that both sites end up with the same update.
|Switch between Active/Active and Active/Passive||While adding caches to a bridge, you can configure a cache to participate as an active or a passive member of the bridge. Even while the bridge is up and running, you can turn a passive cache into an active one, and an active cache into a passive one, without losing any data.
The bridge configuration experience reflects this: bridge topologies can be switched between Active-Active and Active-Passive at any time.
|Connect/Disconnect Caches||Cache administrators can temporarily connect and disconnect caches from the bridge while the bridge is running. When a cache is disconnected, no data is transferred between the bridge and the disconnected cache. Similarly, the cache on the other side of the bridge stops queuing data to the bridge, as the disconnected cache is no longer receiving any data.
A cache can be reconnected at any time.
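The "last update wins" rule an active-active bridge can apply is easy to sketch: each update carries a timestamp, and the newer one is kept. This is an illustrative toy; the `resolve` function and logical timestamps are invented, and NCache also lets you plug in a custom conflict-resolution handler instead.

```python
# Sketch of last-update-wins conflict resolution for an active-active
# bridge: when both sites update the same key, the update with the newer
# timestamp is kept and replicated to both sites.
def resolve(update_a, update_b):
    # Each update is a (value, timestamp) pair; the newer timestamp wins.
    return update_a if update_a[1] >= update_b[1] else update_b

site_a = ("price=10", 1000)   # update from site A at logical time 1000
site_b = ("price=12", 1005)   # later update from site B
winner = resolve(site_a, site_b)
```

After resolution, the bridge pushes `winner` to both sites so they converge on the same value.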
Cache administration is a very important aspect of any distributed cache. A good cache should provide the following:
|Thin NCache Manager Project Files||The GUI-based NCache management tool (called NCache Manager) used to retain cache configuration information inside its project file. This caused data integrity issues if multiple cache configuration modifications were input on different machines. To avoid this, NCache Manager no longer stores any cache configuration information inside its project files. Instead, all configuration information is kept on common cache servers, accessed from any location.|
|Total Cache Management Through PowerShell||NCache has a rich set of command line tools (along with powerful GUI-based management tools). With NCache 4.8 and later, all NCache command-line cache management tools are implemented in PowerShell, allowing for very sophisticated cache management.|
|Visual Studio Integration||NCache allows you to perform basic management and configuration operations from within Visual Studio. With NCache 4.4 SP2 and later, the Developer installation comes with an 'NCache Manager' extension which helps developers manage NCache from Visual Studio. Visual Studio 2010/2012/2013/2015/2017 are supported by NCache.|
|Cache Admin GUI Tool||NCache Manager is a powerful GUI tool for NCache. It gives you an explorer view and lets you quickly administer the cache including cache creation/editing and many other functions.|
|Cache Monitoring GUI Tool||It lets you monitor NCache cluster wide activity from a single location. It also lets you monitor all of NCache clients from a single location. And, you can incorporate non-NCache PerfMon counters in it for real-time comparison with NCache stats.|
|PerfMon Counters||NCache provides a rich set of PerfMon counters that can be seen from NCache Manager, NCache Monitor, or any third party tool that supports PerfMon monitoring.|
|Graceful Node Stop||A node can now be gracefully stopped in a cluster. This action makes sure that all client requests that have already reached the node are executed on the cache before it comes to a complete stop. Similarly, all write-behind operations pending in the queue at that time are also executed on the data source. However, no new client requests are accepted by this node.|
|ReportView Control||There is another type of dashboard available in NCache Monitor that allows you to create a report-view style dashboard. This dashboard has two report controls: one for cache server nodes and the other for client nodes.
Users can drop counters onto these controls, and their values are shown in a report-view style as in PerfMon.
|Logging of Counters||Counters added in report view can also be configured to be logged. Users can start and stop logging at any time. They can also schedule the logging to start automatically by specifying the start and stop time. These log files are generated in .csv format.|
|Command Line Admin Tools||NCache provides a rich set of command line tools/utilities. You can create a cache, add remote clients to it, add server nodes to it, start/stop the cache, and much more.|
Many applications deal with sensitive data or are mission critical and cannot allow the cache to be open to everybody. Therefore, a good distributed cache provides restricted access based on authentication and authorization to classify people in different groups of users. And, it should also allow data to be encrypted inside the client application process before it travels to the cache cluster.
NCache provides strong support in all of these areas.
See Security and Encryption Features for details.
|Transport Level Security (TLS) 1.2||All communication from NCache clients to NCache servers can now be optionally secured through TLS 1.2 (a newer specification than SSL 3.0). TLS is the same protocol used by HTTPS to ensure a secured connection between the browser and a web server.
TLS 1.2 ensures all data traveling between NCache clients and NCache servers is fully encrypted and secured. Please note that encryption/decryption on NCache clients and NCache servers has a slight performance impact.
|Active Directory/LDAP Authentication||You can authenticate users against Active Directory or LDAP. If security is enabled, nobody can use the cache without authentication and authorization.|
|Authorization||You can authorize users as either "users" or "admins". Users can only access the cache for read-write operations, while "admins" can administer the cache.|
|Encryption (3DES & AES)||You can enable encryption, and NCache automatically encrypts all items inside the client process before sending them to the cache. Decryption also happens automatically and transparently. Currently, 3DES, AES-128, AES-192, and AES-256 encryption are provided, and more are being added.
When encryption is enabled, indexed data is also encrypted.
These are the most basic operations without which a distributed cache becomes almost unusable. These by no means cover all the operations a good distributed cache should have.
|Get, Add, Insert, Remove, Exists, Clear Cache||NCache provides variations of these operations and therefore more control to the user.|
|Expirations||Absolute expiration is good for data that is coming from the database and must be expired after a known time. Sliding expiration means expire after a period of inactivity and is good for session and other temporary data that must be removed once used.|
|Lock & Unlock||Lock is used to exclusively lock a cached item so nobody else can read or write it. The item stays locked until either the lock expires or it is unlocked. NCache has also incorporated "lock/unlock" semantics into "Get" and "Insert" calls, where "Get" can return an item locked and "Insert" can update an item and unlock it in one call.|
|Streaming API||For large objects (e.g. movie or audio files), where you might want to stream them, NCache provides a streaming API. With this API, you can get chunks of data and pass one chunk upward at a time.|
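The difference between absolute and sliding expiration described above comes down to how the deadline is maintained: absolute items expire at a fixed time, while sliding items have their deadline pushed forward on every access. This is a conceptual sketch in Python; the `ExpiringCache` class is invented for illustration, and NCache handles expiration server-side.

```python
import time

# Sketch of absolute vs. sliding expiration bookkeeping. Absolute items
# expire at a fixed deadline; sliding items expire only after a period of
# inactivity, so every read pushes the deadline forward.
class ExpiringCache:
    def __init__(self):
        self._items = {}   # key -> (value, deadline, sliding_interval)

    def add(self, key, value, absolute=None, sliding=None, now=None):
        now = time.time() if now is None else now
        deadline = now + (absolute if absolute is not None else sliding)
        self._items[key] = (value, deadline, sliding)

    def get(self, key, now=None):
        now = time.time() if now is None else now
        value, deadline, sliding = self._items[key]
        if now >= deadline:
            del self._items[key]            # expired: remove and miss
            return None
        if sliding is not None:             # reset the inactivity window
            self._items[key] = (value, now + sliding, sliding)
        return value

cache = ExpiringCache()
cache.add("session", "data", sliding=30, now=0)
cache.get("session", now=25)                 # activity extends deadline to 55
still_there = cache.get("session", now=50)   # alive, though 50 > original 30
```

An absolute-expiration item added the same way would be gone at its fixed deadline regardless of how often it was read, which is why it suits database-sourced data while sliding suits sessions.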
Since most data being cached comes from relational databases, it has relationships among various data items. So, a good cache should allow you to specify these relationships in the cache and then maintain data integrity. It should handle one-to-one, one-to-many, and many-to-many data relationships in the cache automatically without burdening your application with this task. See more at managing data relationships.
|Key Based Dependency||NCache provides full support for it. You can specify one cached item A depends on another cached item B which then depends on a third cached item C. Then, if C is ever updated or removed, B is automatically removed from the cache and that triggers the removal of A from the cache as well. And, all of this is done automatically by the cache.
With this feature, you can keep track of one-to-one, one-to-many, and many-to-many relationships in the cache and invalidate cached items if their related items are updated or removed.
|Multi-Cache Key Dependency||This is an extension of Key Based Dependency except it allows you to create this dependency across multiple caches.|
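The cascading invalidation in Key Based Dependency (A depends on B, B depends on C) can be sketched with a small dependency graph. This is an illustrative toy; `DependencyCache` is an invented class, and NCache tracks these dependencies inside the cluster itself.

```python
# Sketch of cascading key-based dependency: removing C invalidates B,
# which in turn invalidates A, all handled by the cache.
class DependencyCache:
    def __init__(self):
        self._data = {}
        self._dependents = {}   # key -> set of keys that depend on it

    def add(self, key, value, depends_on=None):
        self._data[key] = value
        if depends_on is not None:
            self._dependents.setdefault(depends_on, set()).add(key)

    def remove(self, key):
        self._data.pop(key, None)
        # Cascade: anything depending on this key is removed as well.
        for dependent in self._dependents.pop(key, set()):
            self.remove(dependent)

    def contains(self, key):
        return key in self._data

cache = DependencyCache()
cache.add("C", "order-lines")
cache.add("B", "order", depends_on="C")
cache.add("A", "invoice", depends_on="B")
cache.remove("C")   # invalidates B, which invalidates A
```

The application only removes `C`; the cache walks the dependency chain, which is how one-to-many and many-to-many relationships stay consistent.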
Database synchronization is a very important feature for any good distributed cache. Since most data being cached is coming from a relational database, there are always situations where other applications or users might change the data and cause the cached data to become stale. To handle these situations, a good distributed cache should allow you to specify dependencies between cached items and data in the database. Then, whenever that data in the database changes, the cache becomes aware of it and either invalidates its data or reloads a new copy. Additionally, a good distributed cache should allow you to synchronize the cache with non-relational data sources since real life is full of those situations as well. NCache provides a very powerful database synchronization feature.
|SQL Dependency||NCache provides SqlDependency support for SQL Server. You can associate a cached item with a SQL statement based dataset in SQL Server. Then whenever that dataset changes (addition, updates, or removal), SQL Server sends a .NET event to NCache and NCache invalidates this cached item.
This feature allows you to synchronize the cache with SQL Server database. If you have a situation where some applications or users are directly updating data in the database, you can enable this feature to ensure that the cache stays fresh.
|Oracle Dependency||NCache provides OracleDependency support for Oracle. It works just like SqlDependency but for Oracle. Whenever data changes in the database, Oracle notifies NCache through Oracle event notification.
Just like SqlDependency, this feature allows you to synchronize the cache with Oracle database.
|OLEDB Database Dependency||NCache provides support for you to synchronize the cache with any OLEDB database. This synchronization is based on polling, and it is much more efficient because in one poll NCache can synchronize thousands of cached items, instead of receiving thousands of individual events as with SqlDependency.|
|File Based Dependency||NCache allows you to specify a dependency on an external file. Then NCache monitors this file for any updates and when that happens, NCache invalidates the corresponding cached item. This allows you to keep the cached item synchronized with a non-relational data source.|
|Custom Dependency||NCache allows you to implement a custom dependency and register your code with the cache cluster. Then, NCache calls your code to monitor some custom data source for any changes. When changes happen, you fire a dependency update within NCache which causes the corresponding cached item to be removed from the cache.
This feature is good when you need to synchronize the cached item with a non-relational data source that cannot be captured by a flat file. So, custom dependency handles this case.
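The polling-based synchronization mentioned for OLEDB dependencies can be sketched as a single pass that compares cached row versions against the database and invalidates everything that changed. This is a conceptual toy; `poll_and_invalidate` and the version maps are invented for illustration.

```python
# Sketch of polling-based database synchronization: one poll compares row
# versions and can invalidate many cached items at once, instead of one
# event per change.
def poll_and_invalidate(cache, cached_versions, db_versions):
    invalidated = []
    for key, cached_ver in list(cached_versions.items()):
        if db_versions.get(key, -1) != cached_ver:   # row changed or gone
            cache.pop(key, None)
            del cached_versions[key]
            invalidated.append(key)
    return invalidated

cache = {"cust:1": "Alice", "cust:2": "Bob"}
cached_versions = {"cust:1": 7, "cust:2": 3}
db_versions = {"cust:1": 8, "cust:2": 3}   # cust:1 changed in the database
stale = poll_and_invalidate(cache, cached_versions, db_versions)
```

One poll handles any number of changed rows, which is the efficiency argument made above for polling versus per-item events.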
Runtime data sharing has become an important use for distributed caches. More and more applications today need to share data with other applications at runtime in an asynchronous fashion.
Previously, relational databases were used to share data among multiple applications but that requires constant polling by the applications wanting to consume data. Then, message queues became popular because of their asynchronous features and their persistence of events. And although message queues are great, they lack performance and scalability requirements of today's applications.
As a result, more and more applications are using distributed caches for event-driven runtime data sharing. This data sharing may be between multiple .NET applications or between .NET and Java applications. NCache provides very powerful features to facilitate runtime data sharing.
See run-time data sharing for details.
|Publish/Subscribe (Pub/Sub) with Topic||The Publish/Subscribe (Pub/Sub) messaging paradigm is provided where a publisher sends messages into channels, without knowing the subscribers (if any). And subscribers receive only messages of interest, without knowing who the publishers are.
NCache provides named Topic support through which a publisher sends messages to multiple subscribers or to any one among them. And, subscribers can subscribe to a named Topic, and its callback is called by NCache when a message arrives against this Topic.
|Events with Data||You can specify whether to receive data with the event or only the event. In many situations, you may want the data along with the event to guarantee that you're getting the copy that was actually modified. If you don't receive data with events and your application then fetches the data from the cache, somebody else might have changed this data by that time.|
|Cached Item Specific Events (onInsert/onRemove)||NCache allows you to register interest in various cached items. Then, when these items are updated or removed, your callbacks are called. This is true even if you're connected to cache remotely.|
|Cache Level Events (Add/Insert/Remove)||NCache allows you to register to be notified whenever any cached item is added, updated, or removed. Your callback is called when this happens even if your application is remotely connected to the cache.|
|Custom Events (Fired by Apps)||NCache allows your applications to fire custom events into the cache cluster. And, other applications can register to be notified for these events.
This feature allows you to coordinate a producer/consumer scenario where after the producer has produced data, it notifies all the consumers to consume it.
|Continuous Query||NCache provides a powerful Continuous Query feature. Continuous Query lets you specify a SQL-like query against which NCache monitors the cache for any additions, updates, or deletes. And, your application is notified whenever this happens.|
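The Pub/Sub decoupling described above, where publishers send to a named topic without knowing the subscribers, can be sketched with a tiny in-process broker. This is an illustrative toy; `TopicBroker` is an invented class, and NCache's Topic API handles delivery across the cluster.

```python
# Sketch of topic-based Pub/Sub: publishers and subscribers only know the
# topic name, never each other; each subscriber callback fires per message.
class TopicBroker:
    def __init__(self):
        self._subscribers = {}   # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers.get(topic, []):
            callback(message)

received = []
broker = TopicBroker()
broker.subscribe("orders", received.append)
broker.subscribe("orders", lambda m: received.append(m.upper()))
broker.publish("orders", "order-created")
```

Neither subscriber knows who published, and the publisher does not know how many subscribers exist, which is exactly the decoupling the paradigm provides.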
Distributed cache is frequently used to cache objects that contain data coming from a relational database. This data may be individual objects or collections that are the result of some database query.
Either way, applications often want to fetch a subset of this data and if they have the ability to search the distributed cache with a SQL-like query language and specify object attributes as part of the criteria, it makes the distributed cache much more useful for them.
NCache provides powerful Object Query Language (OQL) for searching the cache with a SQL-like query.
See Object Query Language for details.
|Object Query Language (OQL)||NCache provides a rich Object Query Language (OQL) with which you can search the cache. Your search criteria can now include object attributes (e.g. cust.city = 'New York') and you can also include Tags and Named Tags in the query language.|
|Group-by for Queries||You can issue GROUP BY queries and obtain a result set that includes counts of cached items grouped by attribute values.|
|Delete Statement in Queries||You can delete cached items by specifying attribute-based criteria.|
|LINQ Queries||NCache allows you to search the cache with LINQ queries. LINQ is a popular object querying language in .NET and NCache has implemented a LINQ provider.|
A distributed cache should be much more than a Hashtable with a (key, value) pair interface. It needs to meet the needs of real life applications that expect to fetch and update data in groups and collections. In a relational database, SQL provides a very powerful way to do all of this.
We've already explained how to search a distributed cache through OQL and LINQ. Now let's discuss Groups, Tags, and Named Tags. These features allow you to keep track of collections of data easily and even modify them.
|Groups/Subgroups||NCache provides the ability for you to group cached items in a group-subgroup combination (or just group with no subgroup).
You can later fetch or remove all items belonging to a group. You can also fetch just the keys and then fetch only a subset of them.
|Tags||A Tag is a string that you can assign to one or more cached items. And one cached item can be assigned multiple Tags.
And, later, you can fetch items belonging to one or more Tags in order to manipulate them.
You can also include Tags in Object Query Language or LINQ search as part of the criteria.
|Named Tags||NCache provides Named Tags feature where you can assign a "key" and "tag" to one or more cached items. And, a single cached item can get multiple Named Tags.
Later, you can fetch items belonging to one or more Named Tags.
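A tag lookup like the ones described above is typically backed by an index that maps each tag to the set of keys carrying it, so "fetch everything with tag X" is a single lookup. This is a conceptual sketch; `TaggedCache` is invented for illustration, and NCache maintains these indexes server-side.

```python
# Sketch of a tag index: one item can carry many tags, and fetching by
# tag is a direct index lookup rather than a scan of the whole cache.
class TaggedCache:
    def __init__(self):
        self._data = {}
        self._tag_index = {}   # tag -> set of keys carrying that tag

    def add(self, key, value, tags=()):
        self._data[key] = value
        for tag in tags:
            self._tag_index.setdefault(tag, set()).add(key)

    def get_by_tag(self, tag):
        return {k: self._data[k] for k in self._tag_index.get(tag, set())}

cache = TaggedCache()
cache.add("cust:1", "Alice", tags=["east-coast", "premium"])
cache.add("cust:2", "Bob", tags=["east-coast"])
east = cache.get_by_tag("east-coast")
```

Named Tags extend the same idea with a key alongside each tag value, so queries can match on the tag's name and value together.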
Many people use distributed cache as "cache on the side" where they fetch data directly from the database and put it in the cache. Another approach is "cache through" where your application just asks the cache for the data. And, if the data isn't there, the distributed cache gets it from your data source.
The same thing goes for write-through. Write-behind is nothing more than a write-through where the cache is updated immediately and control is returned to the client application. Then, the database or data source is updated asynchronously so the application doesn't have to wait for it. NCache provides powerful capabilities in this area.
See Read-through & Write-through for details.
|Read-through||NCache allows you to implement multiple read-through handlers and register with the cache as "named providers". Then, when the applications tell NCache to use read-through upon a "cache miss", an appropriate read-through is called by the cache server to load the data from your database or data source.|
|Write-through & Write-behind||NCache allows you to implement multiple write-through handlers and register them with NCache as "named providers". Then, whenever the application updates a cached item and tells NCache to also call write-through, the NCache server calls your write-through handler.
If you've enabled write-behind, NCache updates the cache immediately, queues up the database update, and a background thread processes the queue and calls your write-through handler.
|Write-through & Write-behind batching options||You can specify the following:
|Reload Items at Expiration & Database Synchronization||NCache allows you to specify that whenever a cached item expires, instead of removing it from the cache, NCache should call your read-through handler to read a new copy of that object and update the cache with it.
You can specify the same when database synchronization is enabled and a row in the database is updated and a corresponding cached item would have been removed from the cache. These items are reloaded with the help of your read-through provider.
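The write-behind flow described above can be sketched as: update the cache right away, queue the database write, and let a background step drain the queue. This is an illustrative toy; `WriteBehindCache` is invented, and the queue is drained synchronously here to keep the example deterministic (a real cache uses a background thread).

```python
from collections import deque

# Sketch of write-behind: the caller never waits on the database; the
# cache is updated immediately and the database write is deferred.
class WriteBehindCache:
    def __init__(self, write_handler):
        self._data = {}
        self._queue = deque()
        self._write_handler = write_handler   # your write-through provider

    def insert(self, key, value):
        self._data[key] = value            # cache updated right away
        self._queue.append((key, value))   # database update deferred

    def drain(self):
        # A background thread would run this continuously in a real cache.
        while self._queue:
            self._write_handler(*self._queue.popleft())

database = {}
cache = WriteBehindCache(lambda k, v: database.__setitem__(k, v))
cache.insert("cust:1", "Alice")
db_has_key_before_drain = "cust:1" in database   # still queued
cache.drain()
```

The gap between `insert` returning and `drain` running is exactly the asynchrony that makes write-behind faster than write-through for the caller.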
A distributed cache always has less storage space than a relational database. So, by design, a distributed cache is supposed to cache a subset of the data which is really the "moving window" of a data set that the applications are currently interested in.
This means that a distributed cache should allow you to specify how much memory it should consume and once it reaches that size, the cache should evict some of the cached items. However, please keep in mind that if you're caching something that does not exist in the database (e.g. ASP.NET Sessions) then you need to do proper capacity planning to ensure that these cached items (sessions in this case) are never evicted from the cache. Instead, they should be "expired" at appropriate time based on their usage.
|Specify Cache Size||NCache lets you specify the upper limit of cache size in MB as per your needs. And you can "Hot-Apply" these changes.|
|LRU Evictions (Least Recently Used)||Least recently used is one of the eviction policies and it means those items that have not been accessed by any client application in the longest time will be evicted from the cache.|
|LFU Evictions (Least Frequently Used)||Least frequently used is one of the eviction policies, and it means those items that have been accessed the least number of times since the last eviction or since the start of the cache, whichever is later, would be evicted.|
|Priority Evictions||NCache lets you specify different priority values when you add an item to the cache. These include high, above-normal, normal, below-normal, and low. When the cache is full and NCache wants to evict items, it starts from priority low and starts evicting all the items in a FIFO (first-in-first-out) manner.|
|Do not Evict||NCache also lets you specify a "do not evict" priority for some cached items so they are never evicted. Eviction can also be disabled at the cache level so nothing gets evicted from the entire cache. This is especially useful in the case of ASP.NET Session storage.|
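The interplay of LRU eviction with a "do not evict" priority can be sketched with an ordered map: when the cache is over capacity, the least-recently-used *evictable* item is dropped, and pinned items are skipped. This is a conceptual toy; `LruCache` is invented, and NCache's real eviction works per configurable policy and priority band.

```python
from collections import OrderedDict

# Sketch of LRU eviction honoring a do-not-evict flag: oldest evictable
# item goes first; pinned items (e.g. sessions) are never dropped.
class LruCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()   # key -> (value, do_not_evict)

    def __contains__(self, key):
        return key in self._items

    def get(self, key):
        self._items.move_to_end(key)  # mark as most recently used
        return self._items[key][0]

    def put(self, key, value, do_not_evict=False):
        self._items[key] = (value, do_not_evict)
        self._items.move_to_end(key)
        if len(self._items) > self.capacity:
            self._evict_one()

    def _evict_one(self):
        for key, (_, pinned) in self._items.items():   # oldest first
            if not pinned:
                del self._items[key]
                return

cache = LruCache(capacity=2)
cache.put("session:1", "user-a", do_not_evict=True)
cache.put("page:1", "html")
cache.get("page:1")
cache.put("page:2", "html2")   # over capacity: evicts the only evictable item
```

Note the pinned session survives even though it is the least recently used entry, which is why items that exist nowhere else (like sessions) should be pinned or capacity-planned rather than left to eviction.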
ASP.NET applications need three things from a good distributed cache. And, they are ASP.NET Session State storage, ASP.NET View State caching, and ASP.NET Output Cache.
ASP.NET Session State store must allow session replication in order to ensure that no session is lost even if a cache server goes down. And, it must be fast and scalable so it is a better option than InProc, StateServer, and SqlServer options that Microsoft provides out of the box. NCache has implemented a powerful ASP.NET Session State provider.
See ASP.NET Session State for details.
ASP.NET View State caching allows you to cache heavy View State on the web server so it is not sent as a "hidden field" to the user browser for a round-trip. Instead, only a "key" is sent. This makes the payload much lighter, speeds up ASP.NET response time, and also reduces bandwidth pressure and cost for you. NCache provides a feature-rich View State cache.
See ASP.NET View State for details.
Third is ASP.NET Output Cache. For .NET 4.0, Microsoft has changed the ASP.NET Output Cache architecture and now allows third-party distributed caches to be plugged-in. ASP.NET Output Cache saves the output of an ASP.NET page so the page doesn't have to execute next time. And, you can either cache the entire page or portions of the page. NCache has implemented a provider for ASP.NET Output Cache.
|ASP.NET Core Response Caching||NCache's implementation of IDistributedCache utilizes Distributed Cache Tag Helper that provides the ability to dramatically improve the performance of your ASP.NET Core app by caching its responses.|
|ASP.NET Core Session Provider & IDistributedCache||NCache provides full ASP.NET Core support, in addition to the previously available ASP.NET in the .NET Framework.
NCache includes a powerful ASP.NET Core Session Provider that has more features than the regular ASP.NET Session Provider. And, it supports the IDistributedCache interface in ASP.NET Core.
|ASP.NET Session State Store||NCache has implemented an ASP.NET Session State Provider (SSP) for .NET 2.0+. You can use it without any code changes. Just change web.config.
NCache provides intelligent session replication and is much faster than any database storage for sessions.
|ASP.NET View State Cache||NCache has an ASP.NET View State caching module. You can use it without any code changes to your application. Just modify web.config.
You can also associate ASP.NET View State to a user session so when that session expires, all of its View State is also removed.
|ASP.NET Output Cache||NCache has an ASP.NET Output Cache provider implemented. It allows you to cache ASP.NET page output in a distributed cache and share it in a web farm.
Users can now write their own code to modify the cache items before they are inserted in NCache. Users can change the expiration, dependencies, etc. of output cache items by writing these hooks.
For this, users have to implement an interface provided with
|NCache Backplane for SignalR||NCache offers support for SignalR through an extension to the SignalR provider. All concerned web servers for the application are registered against the provider. Meanwhile, the clients are connected to their respective web servers. In NCache, as soon as a client registers itself against the webserver, two key features of NCache come into play: Custom Events and CacheItem.itemVersion.|
|ASP.NET Core Sessions||NCache architecture allows you to scale linearly to handle extreme transaction load by allowing you to add more cache servers at runtime. NCache also provides intelligent cache replication to ensure zero ASP.NET Core Session data loss if a web or a cache server goes down.|
Memcached is an open-source in-memory distributed caching solution which helps speed up web applications by taking pressure off the database. Memcached is used by many of the internet's biggest websites and has been integrated with many other technologies.
NCache implements the Memcached protocol to enable users with existing Memcached implementations to easily migrate to NCache. No code change is required for this.
See Memcached Wrapper for details.
NHibernate is a very powerful and popular object-relational mapping engine. And, fortunately, it also has a second level cache provider architecture that allows you to plug-in a third-party cache without making any code changes to the NHibernate application. NCache has implemented this NHibernate Second Level Cache provider.
See NHibernate Second Level Cache for details.
Similarly, Entity Framework from Microsoft is also a very popular object-relational mapping engine. And, although Entity Framework doesn't have a nice second level cache provider architecture like NHibernate, NCache has nonetheless implemented a second level cache for Entity Framework.
See Entity Framework Second Level Cache for details.
|Entity Framework Core (EF Core) 2.0 Extension Methods for NCache||NCache has very easy to use EF Core 2.0 Extension Methods that cache application data fetched through EF Core 2.0. Although Extension Methods require some minimal coding, it is a small effort and it yields a lot of control over which data to cache and for how long.|
|Memcached Wrapper||Memcached Wrapper for NCache offers you a no-code-change way of migrating your Memcached applications to a powerful elastic distributed cache. You can use Memcached Wrapper for NCache in two ways:
|NHibernate Second Level Cache||NCache provides an NHibernate second level cache provider that you can plug-in through web.config or app.config changes.
NCache has also implemented its database synchronization feature here, so you can specify which classes should be synchronized with the database. NCache lets you specify SqlDependency or any OLEDB-compliant database dependency for this.
|Entity Framework Second Level Cache||You can plug-in NCache to your Entity Framework application, run it in analysis mode, and quickly see all the queries being used by it. Then, you can decide which queries should be cached and which ones to be skipped.
You can also specify which queries should be synchronized with the database through SqlDependency.
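The second-level-cache idea behind both the NHibernate and Entity Framework providers is that query results are cached under a key derived from the query itself, so a repeated query skips the database. This is a conceptual sketch; `run_query` is a hypothetical stand-in for the ORM's database call, not an NCache or EF API.

```python
# Sketch of query-result (second-level) caching: identical queries hit
# the cache; only the first execution reaches the database.
query_cache = {}
db_calls = []

def run_query(sql):
    db_calls.append(sql)              # pretend this hits the database
    return [("cust:1", "Alice")]

def cached_query(sql):
    if sql not in query_cache:
        query_cache[sql] = run_query(sql)
    return query_cache[sql]

first = cached_query("SELECT * FROM Customers WHERE City = 'New York'")
second = cached_query("SELECT * FROM Customers WHERE City = 'New York'")
```

The database-synchronization options above (e.g. SqlDependency) then decide when a cached result like this is invalidated because its underlying rows changed.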