
NCache Features


Feature Area

Description

Performance & Scalability

Performance optimizations provided as options: bulk operations, custom compact serialization, asynchronous operations, and much more.

Cache Elasticity (High Availability)

Highly elastic with 100% uptime. Peer to peer cache cluster architecture. Connection failover support and dynamic configuration.

Cache Topologies

Rich options: Mirrored, Replicated, Partitioned, Partitioned-Replica, and Client Caches. Highly dynamic, with intelligent replication provided.

WAN Replication

Supports WAN replication across sites in active-passive and active-active configurations, with no performance drop.

Cache Administration

Very feature-rich GUI tools. NCache Manager, NCache Monitor, PerfMon counters, and command line tools provided.

Security & Encryption

Active Directory/LDAP authentication, authorization, and 3DES/AES-256 encryption. No programming needed.

Object Caching Features

Very rich operations. Get, Add, Update, Remove, Exists, & Clear Cache with many options. Both absolute & sliding expirations. Lock/unlock, item versioning, and streaming API provided.

Managing Data Relationships

Key based Cache Dependency allows you to handle one-to-one, one-to-many, and many-to-many relationships in the cache automatically.

Synchronization with Data Sources

SqlDependency, OracleDependency, DbDependency, & CLR Stored Procedures for database synchronization. File based and Custom dependency for non-relational data sources.

Runtime Data Sharing

Events, Continuous Query, .NET/Java portable binary data, and more. Use NCache for publisher/consumer data sharing between .NET/.NET or .NET/Java apps.

Search Cache (SQL-Like)

Object Query Language (OQL) and LINQ. Search cache on object attributes, Tags, and Named Tags with SQL-like query.

Data Grouping

Group/sub-group, Tags, and Named Tags. Group, fetch, update, and manipulate data intelligently.

Read-through & Write-through

Multiple Read-through and Write-through providers. Use the cache to fetch data from your database and simplify your apps. Also auto-reload cached items when they expire or when database synchronization is needed.

Cache Size Management

Cache level management.
LRU, LFU, and Priority evictions.
You can designate cached items to not be evicted with priority eviction.

ASP.NET Support

ASP.NET Session State, ASP.NET View State, ASP.NET Output Cache.
Replication for sessions, View State, and page output. Link view state with sessions for auto expiry and much more.

Third Party Integrations

NHibernate Second Level Cache, Entity Framework Cache, and Memcached Wrapper, each with extra features. Use them all without any programming.

Performance and Scalability

Performance is defined as how fast cache operations are performed at a normal transaction load. Scalability is defined as how fast the same cache operations are performed under higher and higher transaction loads. NCache is extremely fast and scalable.

See NCache Benchmarks for more details.

Feature

Description

Cache Performance NCache is an extremely fast in-memory distributed cache. It is much faster than going to the database to read data. It provides sub-millisecond response times to its clients.
Cache Scalability Unlike databases, NCache is able to scale out seamlessly and lets you keep growing your transaction load. On top of this, NCache provides linear scalability, which means that as you add cache servers, your transaction capacity grows proportionally.
Bulk Operations Bulk Get, Add, Insert, and Remove. This covers most major cache operations and gives a great performance boost.
Asynchronous Operations An asynchronous operation returns control to the application and performs the cache operation in the background. Very useful in many cases and improves application performance greatly.
Compression You can enable compression in the config file and change it at runtime through a “Hot Apply”.
Compact Serialization NCache generates serialization code and compiles it in-memory when your application connects to the cache. This code is used to serialize objects and is almost 10 times faster than regular .NET serialization (especially for larger objects). NCache also stores type IDs instead of long string-based type names in the serialized object, so it is much smaller (hence compact). The following enhancements have been made to compact serialization:
  1. Users can select and de-select the data members to be compact serialized.
  2. Byte arrays are no longer serialized.
  3. Compact types can be hot-applied.
Indexes NCache creates indexes inside the cache to speed up various data retrieval cache operations. You can use this with groups, tags, named tags, searching, and more. Expiration and eviction policies also use indexes.
Multiple Network Interface Cards You can assign two network interface cards to a cache server. One can be used for clients to talk to the cache server and the second for the cache servers in the cluster to talk to each other. This greatly improves your data bandwidth scalability.
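To make the compact-serialization idea above concrete, here is a minimal, self-contained sketch in Python (NCache itself is a .NET/Java product and generates this code automatically; all names here are hypothetical). The point it illustrates: the payload carries a 2-byte numeric type ID instead of a long string-based type name, which is what makes the serialized form compact.

```python
import struct
import json

TYPE_REGISTRY = {}   # type -> small integer ID
TYPE_LOOKUP = {}     # ID -> type

def register_compact(type_id):
    """Assign a small numeric ID to a class (done once at startup)."""
    def wrap(cls):
        TYPE_REGISTRY[cls] = type_id
        TYPE_LOOKUP[type_id] = cls
        return cls
    return wrap

@register_compact(1)
class Customer:
    def __init__(self, name, city):
        self.name, self.city = name, city

def compact_dumps(obj):
    # 2-byte type-ID header instead of a long type-name string
    body = json.dumps(vars(obj)).encode()
    return struct.pack(">H", TYPE_REGISTRY[type(obj)]) + body

def compact_loads(data):
    (type_id,) = struct.unpack(">H", data[:2])
    cls = TYPE_LOOKUP[type_id]
    obj = cls.__new__(cls)
    obj.__dict__.update(json.loads(data[2:].decode()))
    return obj
```

A full implementation would also emit per-field serialization code at connect time; the fixed-width type-ID header is the part responsible for the "compact" size win.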


Cache Elasticity (High Availability)

Cache elasticity means how flexible the cache is at runtime. Are you able to perform the following operations at runtime without stopping the cache or your application?

  • Add or remove cache servers at runtime without stopping the cache.
  • Make cache config changes without stopping the cache.
  • Add or remove web/application servers without stopping the cache.
  • Have failover support in case any server goes down (meaning cache clients are able to continue working seamlessly).

NCache provides self-healing dynamic cache clustering that makes it highly elastic.

See Dynamic Clustering for details.

Feature

Description

Dynamic Cache Cluster NCache is highly dynamic and lets you add or remove cache servers at runtime without any interruption to the cache or your application.
Peer to peer architecture This means there are no “master” or “slave” nodes in the cluster. There is a “primary coordinator” node, which is the senior-most node; if it goes down, the next senior-most node automatically becomes the primary coordinator.
Connection Failover Full support. When a cache server goes down, the NCache clients automatically continue working with other servers in the cluster and no interruption is caused.
Dynamic Configuration A lot of configuration can be changed at runtime without stopping the cache or your applications. NCache has a “Hot Apply” feature that does this.
Unlimited Cache Clusters NCache allows you to create an unlimited number of cache clusters. The only limiting factor is the underlying hardware (RAM, CPU, NICs, etc.).
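The connection-failover behavior described above can be sketched as follows (a minimal Python illustration, not the NCache client API; `ServerDown`, `FailoverClient`, and the addresses are hypothetical): the client knows the full server list and transparently retries the next server when one fails.

```python
class ServerDown(Exception):
    pass

class FailoverClient:
    def __init__(self, servers):
        self.servers = list(servers)

    def execute(self, op):
        last_err = None
        for server in self.servers:          # try servers in order
            try:
                return op(server)
            except ServerDown as err:        # server unreachable: fail over
                last_err = err
        raise last_err                       # every server failed

def demo_op(server):
    if server == "10.0.0.1":                 # pretend the first node is down
        raise ServerDown(server)
    return f"ok from {server}"

client = FailoverClient(["10.0.0.1", "10.0.0.2"])
```

In a real dynamic cluster, the server list itself is refreshed at runtime as nodes join or leave, so the client never needs a restart.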


Cache Topologies

Cache Topologies determine data storage and client connection strategy. There are different topologies for different types of use.

See NCache Caching Topologies for details.

Feature

Description

Local Cache You can use NCache as an InProc or OutProc local cache. InProc is much faster, but memory consumption is higher if you have multiple application processes. OutProc is slightly slower but reduces memory consumption because there is only one cache copy per server.
Client Cache (Near Cache) Client Cache is simply a local InProc/OutProc cache on the client machine, but one that stays connected and synchronized with the distributed cache cluster. This way, the application benefits from this “closeness” without compromising data integrity.
Mirrored Cache Mirrored Cache is a 2-node active-passive cache and data mirroring is done asynchronously.
Replicated Cache Replicated Cache is active-active cache where the entire cache is copied to each cache server. Reads are super fast and writes are done as atomic operations within the cache cluster.
Partitioned Cache You can create a dynamic Partitioned Cache. All partitions are created, and clients are made aware of them, at runtime. This allows you to add or remove cache servers without any interruption.
Partitioned-Replica Cache Same as Partitioned Cache and fully dynamic, except that a “replica” of each partition is kept on another cache server for reliability.
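The key mechanism behind a partitioned topology is a distribution map: keys hash into a fixed bucket space, and buckets are assigned to servers so the map can be rebalanced when servers join or leave. A minimal Python sketch of the idea (bucket count, hashing scheme, and names are illustrative assumptions, not NCache's actual internals):

```python
import hashlib

BUCKETS = 1000

def bucket_of(key):
    # hash the key into a fixed, server-count-independent bucket space
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % BUCKETS

def build_distribution_map(servers):
    # assign buckets round-robin across the current server list;
    # on membership change only the map is rebuilt, not the key hashing
    return {b: servers[b % len(servers)] for b in range(BUCKETS)}

servers = ["srv1", "srv2", "srv3"]
dist_map = build_distribution_map(servers)

def server_for(key):
    return dist_map[bucket_of(key)]
```

Because clients hold a copy of the distribution map, they can go directly to the server that owns a key, and the cluster only needs to push out a new map when partitions move.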


WAN Replication

WAN replication is an important feature for the many applications that are deployed in multiple data centers, either for disaster recovery or for load balancing of regional traffic.

The idea behind WAN replication is that it must not slow down the cache in each geographical location due to the high latency of WAN. NCache provides Bridge Topology to handle all of this.

See WAN Replication for details.

Feature

Description

One Active - One Passive Bridge Topology (Active-Passive)
You can create a Bridge between the active and passive sites. The active site submits all updates to the Bridge which then replicates them to the passive site.
Both Active Bridge Topology (Active-Active)
You can create a Bridge between two active sites. Both submit their updates to the Bridge, which resolves conflicts using a last-update-wins rule or a custom conflict resolution handler provided by you. The Bridge then ensures that both sites have the same update.
Switch between Active/Active and Active/Passive While adding caches to a bridge, you can configure each cache to participate as an active or a passive member. Even while the bridge is up and running, you can turn a passive cache into an active one, and an active into a passive, without losing any data.

The bridge topology can thus be switched between Active-Active and Active-Passive at any time.
Connect/Disconnect Caches Cache administrators can temporarily connect and disconnect caches from the bridge while it is running. When a cache is disconnected, no data is transferred between the bridge and the disconnected cache. Similarly, the cache on the other side of the bridge stops queuing data for it, since the disconnected cache is no longer receiving any data.

A cache can be reconnected at any time.
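The last-update-wins rule used in the active-active topology can be sketched in a few lines of Python (an illustration of the rule only, not the Bridge implementation; the `Update` record and timestamps are hypothetical): when both sites update the same key, the update with the newer timestamp is kept, unless a custom resolver is supplied.

```python
from dataclasses import dataclass

@dataclass
class Update:
    key: str
    value: str
    timestamp: float     # site-local update time

def resolve(update_a, update_b, custom_resolver=None):
    if custom_resolver:
        return custom_resolver(update_a, update_b)
    # default rule: last update wins
    return update_a if update_a.timestamp >= update_b.timestamp else update_b

a = Update("cust:1", "from-site-A", timestamp=100.0)
b = Update("cust:1", "from-site-B", timestamp=105.0)
winner = resolve(a, b)
```

A custom resolver would be the place to apply domain rules (e.g., always prefer the billing site's copy) when timestamps alone are not the right arbiter.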


Cache Administration and Management

Cache administration is a very important aspect of any distributed cache. A good cache should provide the following:

  • GUI-based and command line tools for cache administration, including cache creation and editing/updates.
  • GUI-based tools for monitoring cache activity at runtime.
  • Cache statistics based on PerfMon (since PerfMon is the standard on Windows).

NCache provides powerful support in all these areas.

See Admin Tools for details.

Feature

Description

Cache Admin GUI Tool NCache Manager is a powerful GUI tool for NCache. It gives you an explorer view and lets you quickly administer the cache including cache creation/editing and many other functions.
Cache Monitoring GUI Tool NCache Monitor lets you monitor NCache cluster-wide activity from a single location. It also lets you monitor all NCache clients from a single location. And you can incorporate non-NCache PerfMon counters in it for real-time comparison with NCache stats.
PerfMon Counters NCache provides a rich set of PerfMon counters that can be seen from NCache Manager, NCache Monitor, or any third party tool that supports PerfMon monitoring.
Graceful Node Stop A node can be gracefully stopped in a cluster. This ensures that all client requests that have reached the node are executed on the cache before it comes to a complete stop. Similarly, all write-behind operations pending in the queue at that time are also executed on the data source. However, no new client requests are accepted by this node.
ReportView Control Another type of dashboard available in NCache Monitor allows you to create a report-view-style dashboard. This dashboard has two report controls: one for cache server nodes and one for client nodes.

Users can drop counters into these controls and their values are shown in a report view, as in PerfMon.
Logging of Counters Counters added in report view can also be configured to be logged. Users can start and stop logging at any time. They can also schedule the logging to start automatically by specifying the start and stop time. These log files are generated in .csv format.
Command Line Admin Tools NCache provides a rich set of command line tools/utilities. You can create a cache, add remote clients to it, add server nodes to it, start/stop the cache, and much more.


Security & Encryption

Many applications deal with sensitive data or are mission critical and cannot allow the cache to be open to everybody. Therefore, a good distributed cache provides restricted access based on authentication and authorization to classify people in different groups of users. And, it should also allow data to be encrypted inside the client application process before it travels to the cache cluster.

NCache provides strong support in all of these areas.

See Security and Encryption Features for details.

Feature

Description

Active Directory/LDAP Authentication You can authenticate users against Active Directory or LDAP. If security is enabled, nobody can use the cache without authentication and authorization.
Authorization You can authorize users as either “users” or “admins”. Users can only access the cache for read-write operations, while “admins” can administer the cache.
Encryption (3DES & AES) You can enable encryption and NCache automatically encrypts all items inside the client process before sending them to the cache. Decryption also happens automatically and transparently. Currently, 3DES, AES-128, AES-192, and AES-256 encryption are provided, and more algorithms are being added.

When encryption is enabled, indexed data is also encrypted.


Object Caching Features

These are the most basic operations without which a distributed cache becomes almost unusable. These by no means cover all the operations a good distributed cache should have.

Feature

Description

Get, Add, Insert, Remove, Exists, Clear Cache NCache provides variations of these operations and therefore more control to the user.
Expirations Absolute expiration is good for data that is coming from the database and must be expired after a known time. Sliding expiration means expire after a period of inactivity and is good for session and other temporary data that must be removed once used.
Lock & Unlock Lock is used to exclusively lock a cached item so nobody else can read or write it. The item stays locked until either the lock expires or it is unlocked. NCache has also incorporated “lock/unlock” semantics in the “Get” and “Insert” calls, where “Get” returns an item locked and “Insert” updates the item and also unlocks it.
Streaming API For large objects (e.g. movie or audio files), where you might want to stream them, NCache provides a streaming API. With this API, you can get chunks of data and pass one chunk upward at a time.
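The difference between absolute and sliding expiration described above can be sketched as follows (a minimal Python illustration of the two policies, not the NCache API; the `ExpiringEntry` class and explicit `now` parameter are assumptions for testability). Absolute: the item dies at a fixed wall-clock time regardless of use. Sliding: each access pushes the expiry forward by the idle interval.

```python
import time

class ExpiringEntry:
    def __init__(self, value, absolute_at=None, sliding=None, now=None):
        now = now if now is not None else time.time()
        self.value = value
        self.sliding = sliding                   # idle interval, or None
        self.expires_at = now + sliding if sliding else absolute_at

    def get(self, now=None):
        now = now if now is not None else time.time()
        if self.expires_at is not None and now >= self.expires_at:
            return None                          # expired
        if self.sliding:
            self.expires_at = now + self.sliding # slide forward on access
        return self.value
```

This is why sliding expiration suits sessions and other temporary data (kept alive while in use, gone soon after), while absolute expiration suits database-sourced data that must be refreshed at a known time.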


Managing Data Relationships

Since most data being cached comes from relational databases, it has relationships among various data items. So, a good cache should allow you to specify these relationships in the cache and then keep the data integrity. It should allow you to handle one-to-one, one-to-many, and many-to-many data relationships in the cache automatically without burdening your application with this task.

See more at managing data relationships.

Feature

Description

Key Based Dependency NCache provides full support for it. You can specify that one cached item A depends on another cached item B, which in turn depends on a third cached item C. Then, if C is ever updated or removed, B is automatically removed from the cache, and that triggers the removal of A as well. All of this is done automatically by the cache.

With this feature, you can keep track of one-to-one, one-to-many, and many-to-many relationships in the cache and invalidate cached items if their related items are updated or removed.
Multi-Cache Key Dependency This is an extension of Key Based Dependency except it allows you to create this dependency across multiple caches.
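The cascading invalidation described above can be sketched as a dependency graph over cache keys (a minimal Python illustration of the mechanism, not the NCache API; `DependencyCache` and its method names are hypothetical): removing an item recursively removes everything that depends on it.

```python
class DependencyCache:
    def __init__(self):
        self.store = {}
        self.dependents = {}      # key -> set of keys that depend on it

    def add(self, key, value, depends_on=None):
        self.store[key] = value
        for dep in depends_on or []:
            self.dependents.setdefault(dep, set()).add(key)

    def remove(self, key):
        self.store.pop(key, None)
        # cascade: anything depending on 'key' is removed too
        for child in self.dependents.pop(key, set()):
            self.remove(child)

cache = DependencyCache()
cache.add("C", "customer")
cache.add("B", "order", depends_on=["C"])
cache.add("A", "order-line", depends_on=["B"])
cache.remove("C")    # B, and then A, are removed automatically as well
```

Mapping this onto relationships: an order-line (A) depending on its order (B) depending on its customer (C) is exactly the one-to-many chain the feature keeps consistent for you.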


Synchronization with Data Sources

Database synchronization is a very important feature for any good distributed cache. Since most data being cached is coming from a relational database, there are always situations where other applications or users might change the data and cause the cached data to become stale.

To handle these situations, a good distributed cache should allow you to specify dependencies between cached items and data in the database. Then, whenever that data in the database changes, the cache becomes aware of it and either invalidates its data or reloads a new copy.

Additionally, a good distributed cache should allow you to synchronize the cache with non-relational data sources since real life is full of those situations as well.

NCache provides a very powerful database synchronization feature.

Feature

Description

SQL Dependency NCache provides SqlDependency support for SQL Server. You can associate a cached item with a SQL-statement-based dataset in SQL Server. Then, whenever that dataset changes (additions, updates, or removals), SQL Server fires an event notification to NCache and NCache invalidates this cached item.

This feature allows you to synchronize the cache with SQL Server database. If you have a situation where some applications or users are directly updating data in the database, you can enable this feature to ensure that the cache stays fresh.
Oracle Dependency NCache provides OracleDependency support for Oracle. It works just like SqlDependency but for Oracle. Whenever data changes in the database, Oracle notifies NCache through Oracle event notification.

Just like SqlDependency, this feature allows you to synchronize the cache with Oracle database.
OLEDB Database Dependency NCache provides support for you to synchronize the cache with any OLEDB database. This synchronization is based on polling, and it is much more efficient than per-item events because in one poll NCache can synchronize thousands of cached items instead of receiving thousands of individual events as in SqlDependency.
File Based Dependency NCache allows you to specify a dependency on an external file. Then NCache monitors this file for any updates and when that happens, NCache invalidates the corresponding cached item. This allows you to keep the cached item synchronized with a non-relational data source.
Custom Dependency NCache allows you to implement a custom dependency and register your code with the cache cluster. Then, NCache calls your code to monitor some custom data source for any changes. When changes happen, you fire a dependency update within NCache which causes the corresponding cached item to be removed from the cache.

This feature is good when you need to synchronize the cached item with a non-relational data source that cannot be captured by a flat file. So, custom dependency handles this case.
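The polling-based synchronization described for the OLEDB dependency can be sketched like this (a minimal Python illustration, not the NCache implementation; `PollingSyncCache`, the version column, and the callback are assumptions): one bulk poll compares data-source row versions against the versions recorded at caching time and invalidates every stale item in a single pass.

```python
class PollingSyncCache:
    def __init__(self, fetch_versions):
        self.store = {}                        # key -> (value, version)
        self.fetch_versions = fetch_versions   # bulk callback into the source

    def add(self, key, value, version):
        self.store[key] = (value, version)

    def poll(self):
        current = self.fetch_versions(list(self.store))  # one bulk query
        stale = [k for k, (_, v) in self.store.items()
                 if current.get(k) != v]
        for k in stale:
            del self.store[k]                  # invalidate stale entries
        return stale

db_versions = {"row:1": 1, "row:2": 2}
cache = PollingSyncCache(lambda keys: {k: db_versions.get(k) for k in keys})
cache.add("row:1", "alice", version=1)
cache.add("row:2", "bob", version=2)
db_versions["row:2"] = 3          # someone updates the database directly
```

This is why one poll covering thousands of rows can be cheaper than thousands of individual change events.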


Runtime Data Sharing

Runtime data sharing has become an important use for distributed caches. More and more applications today need to share data with other applications at runtime in an asynchronous fashion.
Previously, relational databases were used to share data among multiple applications, but that requires constant polling by the applications wanting to consume data. Then message queues became popular because of their asynchronous features and their persistence of events. And although message queues are great, they lack the performance and scalability required by today’s applications.
As a result, more and more applications are using in-memory distributed caches for event-driven runtime data sharing. This data sharing may be between multiple .NET applications or between .NET and Java applications. NCache provides very powerful features to facilitate runtime data sharing.

See run-time data sharing for details.

Feature

Description

Events with Data You can specify whether to receive data with the event or only the event. In many situations, you may want the data along with the event to guarantee that you’re getting the copy that was actually modified. If you don’t receive data with events and your application then fetches the data from the cache, somebody else might have changed it by that time.
Cached Item Specific Events (onInsert/onRemove) NCache allows you to register interest in various cached items. Then, when these items are updated or removed, your callbacks are called. This is true even if you’re connected to cache remotely.
Cache Level Events (Add/Insert/Remove) NCache allows you to register to be notified whenever any cached item is added, updated, or removed. Your callback is called when this happens even if your application is remotely connected to the cache.
Custom Events (Fired by Apps) NCache allows your applications to fire custom events into the cache cluster. And, other applications can register to be notified for these events.

This feature allows you to coordinate a producer/consumer scenario where after the producer has produced data, it notifies all the consumers to consume it.
Continuous Query NCache provides a powerful Continuous Query feature. Continuous Query lets you specify a SQL-like query against which NCache monitors the cache for any additions, updates, or deletes. And, your application is notified whenever this happens.
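The item-level and cache-level events above follow a simple register-and-callback pattern, sketched here in Python (an illustration of the pattern, not the NCache API; `EventedCache` and its callback signature are hypothetical). Note how the new value travels with the event, which is the "data with event" option described above.

```python
class EventedCache:
    def __init__(self):
        self.store = {}
        self.callbacks = {}       # key -> list of callables

    def register(self, key, callback):
        self.callbacks.setdefault(key, []).append(callback)

    def _fire(self, key, kind, value=None):
        for cb in self.callbacks.get(key, []):
            cb(key, kind, value)  # value is the "data with event"

    def insert(self, key, value):
        existed = key in self.store
        self.store[key] = value
        self._fire(key, "update" if existed else "add", value)

    def remove(self, key):
        self.store.pop(key, None)
        self._fire(key, "remove")

events = []
cache = EventedCache()
cache.register("cust:1", lambda k, kind, v: events.append((kind, v)))
cache.insert("cust:1", "alice")
cache.insert("cust:1", "alice-v2")
cache.remove("cust:1")
```

In the distributed case the callback runs in a remote client process, but the producer/consumer flow is the same: the producer inserts, and every registered consumer is notified.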


Cache Search (SQL-Like)

Distributed cache is frequently used to cache objects that contain data coming from a relational database. This data may be individual objects or collections that are the result of some database query.

Either way, applications often want to fetch a subset of this data and if they have the ability to search the distributed cache with a SQL-like query language and specify object attributes as part of the criteria, it makes the distributed cache much more useful for them.

NCache provides powerful Object Query Language (OQL) for searching the cache with a SQL-like query.

See Object Query Language for details.

Feature

Description

Object Query Language
(OQL)
NCache provides a rich Object Query Language (OQL) with which you can search the cache. Your search criteria can now include object attributes (e.g. cust.city = ‘New York’) and you can also include Tags and Named Tags in the query language.
Group-by for Queries You can issue GROUP BY queries and obtain a result set that includes counts of cached items grouped by attribute values.
Delete Statement in Queries You can delete cached items by specifying attribute-based criteria.
LINQ Queries NCache allows you to search the cache with LINQ queries. LINQ is a popular object querying language in .NET and NCache has implemented a LINQ provider.
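The essence of searching a cache by object attributes (as OQL's `cust.city = 'New York'` does) can be sketched as follows (a minimal Python illustration, not OQL or the NCache query engine; `SearchableCache` and the predicate style are assumptions). A real implementation evaluates the query against attribute indexes rather than scanning, but the result is the same: objects selected by attribute, not by key.

```python
class Customer:
    def __init__(self, name, city):
        self.name, self.city = name, city

class SearchableCache:
    def __init__(self):
        self.store = {}

    def insert(self, key, obj):
        self.store[key] = obj

    def search(self, cls, predicate):
        # real engines consult attribute indexes; this sketch scans
        return [o for o in self.store.values()
                if isinstance(o, cls) and predicate(o)]

cache = SearchableCache()
cache.insert("c1", Customer("Alice", "New York"))
cache.insert("c2", Customer("Bob", "Boston"))
hits = cache.search(Customer, lambda c: c.city == "New York")
```

The predicate plays the role of the SQL-like WHERE clause; in OQL or LINQ you would express the same criterion declaratively instead of as a lambda.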


Data Grouping

A distributed cache should be much more than a Hashtable with a (key, value) pair interface. It needs to meet the needs of real life applications that expect to fetch and update data in groups and collections. In a relational database, SQL provides a very powerful way to do all of this.

We’ve already explained how to search a distributed cache through OQL and LINQ. Now let’s discuss Groups, Tags, and Named Tags. These features allow you to keep track of collections of data easily and even modify them.

Feature

Description

Groups/Subgroups NCache provides the ability for you to group cached items in a group-subgroup combination (or just group with no subgroup).

You can later fetch or remove all items belonging to a group. You can also fetch just the keys and then fetch only a subset of them.
Tags A Tag is a string that you can assign to one or more cached items. And one cached item can be assigned multiple Tags.

And, later, you can fetch items belonging to one or more Tags in order to manipulate them.

You can also include Tags in Object Query Language or LINQ search as part of the criteria.
Named Tags NCache provides a Named Tags feature where you can assign a “key” and a “tag” to one or more cached items. A single cached item can be assigned multiple Named Tags.

Later, you can fetch items belonging to one or more Named Tags.
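Under the hood, Tags amount to a reverse index from tag to keys, which is what makes fetch-by-tag and remove-by-tag single calls rather than scans. A minimal Python sketch of the idea (not the NCache API; `TaggedCache` and its methods are hypothetical):

```python
class TaggedCache:
    def __init__(self):
        self.store = {}
        self.tag_index = {}       # tag -> set of keys

    def insert(self, key, value, tags=()):
        self.store[key] = value
        for tag in tags:
            self.tag_index.setdefault(tag, set()).add(key)

    def get_by_tag(self, tag):
        return {k: self.store[k]
                for k in self.tag_index.get(tag, set()) if k in self.store}

    def remove_by_tag(self, tag):
        for k in self.tag_index.pop(tag, set()):
            self.store.pop(k, None)

cache = TaggedCache()
cache.insert("o1", "order-1", tags=["east-coast", "vip"])
cache.insert("o2", "order-2", tags=["east-coast"])
```

Named Tags extend the same idea with key-value pairs instead of plain strings, which also lets them appear as criteria in SQL-like queries.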


Read-through & Write-through

Many people use distributed cache as “cache on the side” where they fetch data directly from the database and put it in the cache. Another approach is “cache through” where your application just asks the cache for the data. And, if the data isn’t there, the distributed cache gets it from your data source.

The same thing goes for write-through. Write-behind is nothing more than a write-through where the cache is updated immediately and the control returned to the client application. And, then the database or data source is updated asynchronously so the application doesn’t have to wait for it. NCache provides powerful capabilities in this area.

See Read-through & Write-through for details.

Feature

Description

Read-through NCache allows you to implement multiple read-through handlers and register them with the cache as “named providers”. Then, when the application tells NCache to use read-through upon a “cache miss”, the appropriate read-through handler is called by the cache server to load the data from your database or data source.
Write-through & Write-behind NCache allows you to implement multiple write-through handlers and register them with NCache as “named providers”. Then, whenever the application updates a cached item and tells NCache to also call write-through, the NCache server calls your write-through handler.

If you’ve enabled write-behind, then NCache updates the cache immediately and queues up the database update and a background thread processes it and calls your write-through handler.
Write-through & Write-behind batching options You can specify the following:
  1. Batch size – how many items to batch together when doing database updates.
  2. Batch interval – How long to wait before processing the next batch.
  3. Retries threshold - How many retries to do if a database update fails.
  4. Retry queue eviction - How many items to evict from retry-queue if it becomes full.
Reload Items at Expiration & Database Synchronization NCache allows you to specify that whenever a cached item expires, instead of removing it from the cache, NCache should call your read-through handler to read a new copy of that object and update the cache with it.

You can specify the same when database synchronization is enabled and a row in the database is updated and a corresponding cached item would have been removed from the cache. These items are reloaded with the help of your read-through provider.
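The read-through and write-behind flows described above can be sketched together (a minimal Python illustration of the pattern, not the NCache provider API; `ThroughCache` and its callbacks are hypothetical, and the write-behind queue is drained synchronously here instead of by a background thread):

```python
import queue

class ThroughCache:
    def __init__(self, load, save):
        self.store = {}
        self.load = load                  # read-through handler
        self.save = save                  # write-through handler
        self.write_queue = queue.Queue()  # pending write-behind ops

    def get(self, key):
        if key not in self.store:         # cache miss -> read-through
            self.store[key] = self.load(key)
        return self.store[key]

    def put(self, key, value):
        self.store[key] = value           # cache updated immediately
        self.write_queue.put((key, value))

    def drain_write_behind(self):
        # normally a background thread; drained synchronously here
        while not self.write_queue.empty():
            self.save(*self.write_queue.get())

db = {"k1": "v1"}
cache = ThroughCache(load=db.get, save=db.__setitem__)
```

Batching options (batch size, interval, retries) then simply govern how the background worker drains that queue against the data source.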


Cache Size Management (Evictions Policies)

An in-memory distributed cache always has less storage space than a relational database. So, by design, a distributed cache is supposed to cache a subset of the data which is really the “moving window” of a data set that the applications are currently interested in.

This means that a distributed cache should allow you to specify how much memory it should consume and once it reaches that size, the cache should evict some of the cached items. However, please keep in mind that if you’re caching something that does not exist in the database (e.g. ASP.NET Sessions) then you need to do proper capacity planning to ensure that these cached items (sessions in this case) are never evicted from the cache. Instead, they should be “expired” at appropriate time based on their usage.

Feature

Description

Specify Cache Size NCache lets you specify the upper limit of cache size in MB as per your needs. And you can “Hot-Apply” these changes.
LRU Evictions
(Least Recently Used)
Least recently used is one of the eviction policies and it means those items that have not been accessed by any client application in the longest time will be evicted from the cache.
LFU Evictions
(Least Frequently Used)
Least frequently used is one of the eviction policies and it means those items that have been accessed the least number of times since the last eviction or since the starting of the cache, whichever is later, would be evicted.
Priority Evictions NCache lets you specify different priority values when you add an item to the cache: high, above-normal, normal, below-normal, and low. When the cache is full and NCache needs to evict items, it starts with low-priority items and evicts them in a FIFO (first-in-first-out) manner.
Do not Evict NCache also lets you specify a “do not evict” priority for some cached items, and then they are not evicted. Eviction can also be disabled at the cache level so nothing gets evicted from the entire cache. This is especially useful in the case of ASP.NET Session storage.
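How the eviction policies above interact can be sketched in a small Python example (an illustration combining LRU order with priorities and a do-not-evict marker; not the NCache implementation, and all names are hypothetical):

```python
from collections import OrderedDict

DO_NOT_EVICT = 0
LOW, NORMAL, HIGH = 1, 2, 3

class EvictingCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()        # key -> (value, priority)

    def get(self, key):
        value, _ = self.items[key]
        self.items.move_to_end(key)       # mark as most recently used
        return value

    def put(self, key, value, priority=NORMAL):
        self.items[key] = (value, priority)
        self.items.move_to_end(key)
        while len(self.items) > self.capacity:
            self._evict_one()

    def _evict_one(self):
        # evict the least recently used item with the lowest priority,
        # never touching DO_NOT_EVICT items
        for prio in (LOW, NORMAL, HIGH):
            for key, (_, p) in self.items.items():  # LRU order first
                if p == prio:
                    del self.items[key]
                    return
        raise RuntimeError("cache full of do-not-evict items")

cache = EvictingCache(capacity=2)
cache.put("session:1", "s", priority=DO_NOT_EVICT)
cache.put("a", 1, priority=LOW)
cache.put("b", 2, priority=NORMAL)       # capacity exceeded -> "a" evicted
```

The final error branch illustrates the capacity-planning point above: if everything is marked do-not-evict (e.g., sessions), the cache must be sized so that limit is never reached.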


ASP.NET Support

ASP.NET applications need three things from a good distributed cache. And, they are ASP.NET Session State storage, ASP.NET View State caching, and ASP.NET Output Cache.
ASP.NET Session State store must allow session replication in order to ensure that no session is lost even if a cache server goes down. And, it must be fast and scalable so it is a better option than InProc, StateServer, and SqlServer options that Microsoft provides out of the box. NCache has implemented a powerful ASP.NET Session State provider.

See ASP.NET Session State for details.

ASP.NET View State caching allows you to cache heavy View State on the web server so it is not sent as a “hidden field” to the user browser for a round-trip. Instead, only a “key” is sent. This makes the payload much lighter, speeds up ASP.NET response time, and also reduces bandwidth pressure and cost for you. NCache provides a feature-rich View State cache.

See ASP.NET View State for details.

Third is ASP.NET Output Cache. In .NET 4.0, Microsoft changed the ASP.NET Output Cache architecture to allow third-party distributed caches to be plugged in. ASP.NET Output Cache saves the output of an ASP.NET page so the page doesn’t have to execute the next time. You can cache either the entire page or portions of it. NCache has implemented a provider for ASP.NET Output Cache.

Feature

Description

ASP.NET Session State Store NCache has implemented an ASP.NET Session State Provider (SSP) for .NET 2.0+. You can use it without any code changes. Just change web.config.

NCache provides intelligent session replication and is much faster than any database storage for sessions.
ASP.NET View State Cache NCache has an ASP.NET View State caching module. You can use it without any code changes to your application. Just modify web.config.
You can also associate ASP.NET View State to a user session so when that session expires, all of its View State is also removed.
ASP.NET Output Cache NCache has an ASP.NET Output Cache provider implemented. It allows you to cache ASP.NET page output in a distributed cache and share it in a web farm.

Users can now write their own code to modify the cache items before they are inserted in NCache. Users can change the expiration, dependencies, etc. of output cache items by writing these hooks.

For this, users have to implement an interface provided with OutputCacheProvider and then register this assembly and class in web.config.


Third Party Integrations

Memcached is an open-source in-memory distributed caching solution which helps speed up web applications by taking pressure off the database. Memcached is used by many of the internet’s biggest websites and has been merged with other technologies.

NCache implements the Memcached protocol to enable users with existing Memcached implementations to easily migrate to NCache. No code changes are required.

See Memcached Wrapper for details.

NHibernate is a very powerful and popular object-relational mapping engine. And, fortunately, it also has a second level cache provider architecture that allows you to plug-in a third-party cache without making any code changes to the NHibernate application. NCache has implemented this NHibernate Second Level Cache provider.

See NHibernate Second Level Cache for details.

Similarly, Entity Framework from Microsoft is also a very popular object-relational mapping engine. Although Entity Framework doesn’t have a second level cache provider architecture like NHibernate’s, NCache has nonetheless implemented a second level cache for Entity Framework.

See Entity Framework Second Level Cache for details.

Feature

Description

Memcached Wrapper Memcached Wrapper for NCache offers you a no-code-change way of migrating your Memcached applications to a powerful elastic distributed cache. You can use Memcached Wrapper for NCache in the following two ways:

  1. Memcached Plug-In (.NET apps)
    Memcached Plug-In option is for .NET applications. Alachisoft has taken all the popular Open Source Memcached client libraries and implemented them for NCache. These libraries for .NET are:
    • Enyim
    • BeIT
    • More...
  2. Memcached Gateway
    Memcached Gateway implements Memcached Protocol and supports all types of Memcached applications. You direct your application to this Memcached Gateway and your application starts using NCache behind the scenes.
NHibernate Second Level Cache NCache provides an NHibernate second level cache provider that you can plug-in through web.config or app.config changes.

NCache has also implemented a database synchronization feature in this provider, so you can specify which classes should be synchronized with the database. NCache lets you specify SqlDependency or any OLEDB-compliant database dependency for this.
Entity Framework Second Level Cache You can plug NCache into your Entity Framework application, run it in analysis mode, and quickly see all the queries being used. Then you can decide which queries should be cached and which ones skipped.

You can also specify which queries should be synchronized with the database through SqlDependency.

Please read more about NCache and also feel free to download a fully working 60-day trial of NCache.
