Write-Through Caching

NCache supports write-through caching, which propagates write operations to the back-end data source so that the cache and the master data source stay synchronized. In write-through caching, NCache updates the cache store first and then applies the same operation to the configured data source. For example, if a client application updates an entry in the cache and write-through is enabled, NCache also updates the configured data source.

To use write-through, you need to implement the IWriteThruProvider interface. NCache uses this custom provider internally to perform write operations on the back-end data source, so this is where you place your custom write logic. NCache calls your provider whenever a write API call (Add, Insert, Remove/Delete) is issued with write-thru; a minimal provider sketch follows the note below. NCache currently provides two modes for write-through caching:

  • Write-Through (updates the data source synchronously)
  • Write-Behind (updates the data source asynchronously)
Note

NCache provides a performance counter for Write-Through operations per sec.
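
The shape of a write-through provider is easiest to see in code. The following is a minimal sketch of such a provider; the IWriteThruProvider member signatures and the WriteOperation/OperationResult types are assumed from this page's description of the provider model and may differ by NCache version, and the SQL Server usage and the "connectionString" parameter name are purely illustrative.

```csharp
using System;
using System.Collections;
using System.Data.SqlClient;
using Alachisoft.NCache.Runtime.DatasourceProviders;

// Minimal sketch of a write-through provider (assumed interface shape).
public class SqlWriteThruProvider : IWriteThruProvider
{
    private string _connectionString;

    // Called once when the cache starts; parameters come from the provider
    // configuration ("connectionString" is a hypothetical parameter name).
    public void Init(IDictionary parameters, string cacheId)
    {
        _connectionString = parameters["connectionString"] as string;
    }

    // Called for write operations (Add, Insert, Remove) issued with write-thru
    // or queued for write-behind.
    public OperationResult WriteToDataSource(WriteOperation operation)
    {
        try
        {
            using (var connection = new SqlConnection(_connectionString))
            {
                connection.Open();
                // ... apply the INSERT/UPDATE/DELETE for operation.Key here ...
            }
            return new OperationResult(operation, OperationResult.Status.Success);
        }
        catch (Exception)
        {
            // Report failure; NCache acts on the status as described later on
            // this page under "Write-Through Caching Operation Result".
            return new OperationResult(operation, OperationResult.Status.Failure);
        }
    }

    // Bulk overload used when write-behind applies operations in batches.
    public OperationResult[] WriteToDataSource(WriteOperation[] operations)
    {
        var results = new OperationResult[operations.Length];
        for (int i = 0; i < operations.Length; i++)
            results[i] = WriteToDataSource(operations[i]);
        return results;
    }

    public void Dispose() { }
}
```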

Write-Through

In write-through caching, an operation is first applied to the cache store and then synchronously applied to the configured data source. The operation completes only after NCache has applied it to the back-end data source. Use write-through caching when immediate database updates are critical and the data source must be updated as soon as the cache is.
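
As an illustration, a client can opt into write-through on a per-operation basis. The snippet below is a minimal sketch assuming the 4.x-era client API, where Insert accepts a DSWriteOption argument; the cache name, the Product type, and LoadProductFromDb are placeholders.

```csharp
using Alachisoft.NCache.Web.Caching;

Cache cache = NCache.InitializeCache("demoCache"); // placeholder cache name

Product product = LoadProductFromDb(1001); // hypothetical helper
var cacheItem = new CacheItem(product);

// WriteThru: the cache store is updated first, then the configured provider
// is invoked synchronously; the call returns only after the data source
// operation has completed.
cache.Insert("Product:1001", cacheItem, DSWriteOption.WriteThru, null);
```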

Write-Behind

In write-through, because data source operations are synchronous, the rate of operations on the data source equals the rate of user operations on the cache. For applications with high user traffic, the rate of operations on the cache can be very high, which can overwhelm the data source. Synchronous data source operations can also increase the response time of user operations.

To overcome these problems, write-behind can be used instead of write-through. In write-behind, data source operations are performed asynchronously after NCache applies the operations to the cache store: once the cache store is updated, the operations are queued and later applied to the configured data source asynchronously, which improves the response time of cache operations. NCache provides several configuration settings in write-behind to control the flow of operations to the data source; for instance, throttling lets you specify the rate at which NCache applies write-behind operations to the data source.
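
Continuing the write-through snippet above, a hedged sketch of the same insert issued in write-behind mode (again assuming the Insert overload that takes a DSWriteOption):

```csharp
// WriteBehind: the call returns as soon as the cache store is updated; the
// data source operation is queued and applied asynchronously, subject to the
// configured throttling and batching settings.
cache.Insert("Product:1001", cacheItem, DSWriteOption.WriteBehind, null);
```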

Throttling

Throttling specifies the number of operations applied to the data source per second. The default value is 500 ops/sec. You can change this value through the Backing Source settings in NCache Manager.

Note

NCache provides a performance counter for Write-Behind operations performed per second.

Write-Behind Modes

NCache allows you to apply write-behind operations individually or in batches. A write-behind queue is maintained for these operations: all write-behind operations are enqueued and later applied to the data source according to the configured mode, batch or non-batch. The two modes are explained below:

  • Non-Batch Mode

Non-batch mode is the default for write-behind operations. In this mode, operations in the write-behind queue are applied to the data source one by one according to the configured throttling rate. For example, if the throttling rate is 500 operations per second, NCache applies write-behind operations one at a time, and the rate never exceeds 500 operations per second.

  • Batch Mode

In batch mode, you can configure an operation delay for write-behind operations, which is the time in milliseconds that each operation must wait in the write-behind queue before it is applied to the data source. The default value is zero. In this mode, a batch of operations is selected based on their operation delay: a dedicated thread periodically collects all operations whose delay interval has elapsed, at a configurable interval called the batch interval. In short, at every batch interval NCache checks the write-behind queue and selects the operations whose operation delay has expired.

For example, if the operation delay is configured as 1000 ms and the batch interval as 5 seconds, NCache checks the write-behind queue every 5 seconds (the batch interval) and selects all operations whose operation delay has expired, i.e., all operations that have been in the queue for at least 1000 milliseconds.

Once a batch of operations has been selected, the operations are applied to the data source according to the configured throttling rate. For example, if a batch of 1000 operations is selected from the write-behind queue and the throttling rate is 500 ops/sec, the operations are applied to the data source in batches of 500, since the number of operations applied to the data source per second cannot exceed the throttling value.

You can specify an operation delay ranging from seconds to days and months, which lets you postpone operations on the data source by a configurable amount of time. NCache also provides performance counters for the write-behind queue, operation count, and current batch operation count. The current batch operation count shows the number of operations selected in the current batch interval for execution.

For write-behind with batching enabled, operations that are ready to be executed on the data source are dequeued from the write-behind queue; the number of operations dequeued in the current batch interval is displayed by the current batch operation count counter.

Write-Through Caching Operation Result

NCache gives you the flexibility to synchronize write-through operations in the cache based on the result of the data source operation. After applying an operation (Add/Insert) to the data source, your provider reports an operation status, and NCache synchronizes the cache store accordingly. For example, if the data source operation fails, you can decide whether to remove the item from the cache, keep it, or retry the operation on the data source. To do so, you return Success, Failure, FailureRetry, or FailureDontRemove as the DSOperationStatus of the OperationResult; a sketch follows the list below. This is available in both modes of write-through caching, i.e., write-thru and write-behind. The data source operation statuses and the corresponding actions taken by NCache are described below:

  • Success: The data source operation succeeded and the item was added to the data source, so NCache keeps it in the cache as well.

  • Failure: The data source operation failed and the item could not be added to the database, so NCache removes it from the cache as well.

  • FailureDontRemove: The data source operation failed and the item could not be added to the database, but NCache keeps it in the cache.

  • FailureRetry: The data source operation failed and the item could not be added to the database, so NCache keeps the item in the cache and retries the operation. Retries are performed as write-behind operations.
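
A hedged sketch of how a provider might report these statuses from WriteToDataSource follows; the DSOperationStatus property name is taken from this page, and ApplyToDatabase is a hypothetical helper. The method is meant as an alternative body for the provider sketched earlier on this page.

```csharp
// Sketch: mapping data source outcomes to the statuses listed above.
public OperationResult WriteToDataSource(WriteOperation operation)
{
    var result = new OperationResult(operation, OperationResult.Status.Success);
    try
    {
        ApplyToDatabase(operation); // hypothetical helper that writes the row
    }
    catch (TimeoutException)
    {
        // Transient failure: keep the item in the cache and let NCache retry
        // the operation later from the write-behind queue.
        result.DSOperationStatus = OperationResult.Status.FailureRetry;
    }
    catch (Exception)
    {
        // Permanent failure: NCache removes the item from the cache as well.
        result.DSOperationStatus = OperationResult.Status.Failure;
    }
    return result;
}
```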

Retrying Failed Operations

NCache allows you to retry write-through/write-behind operations that fail on the data source. If you enable operation retrying and your provider marks an operation for retry, NCache retries that operation on the data source. In both write-through and write-behind, all retry operations are re-queued in the write-behind queue, which means a failed write-thru operation is retried asynchronously as a write-behind operation.

Note

NCache also provides a performance counter for data source failed operations/sec. It counts, per second, the write operations performed on the data source that return Failure, FailureRetry, or FailureDontRemove as the DSOperationStatus of the OperationResult.

NCache allows you to limit the number of failed operations to be retried. To do so, you specify the "Failed operation queue limit" through NCache Manager; if that limit is exceeded, failed operations are evicted according to a configurable eviction ratio, with NCache evicting the most-retried operations when the retry queue is full. Each operation has an associated RetryCount property, which is incremented each time the operation is retried on the data source.

For monitoring this, NCache provides performance counters for the write-behind failure retry count and write-behind eviction/sec. The write-behind failure retry counter shows the number of operations re-queued for retry, i.e., data source write operations returning FailureRetry as the status in the OperationResult, whereas the write-behind eviction/sec counter shows the number of retry operations evicted per second.

Updating Cache after Data Source Operation

As stated earlier, in write-through caching the operation is performed first on the cache store and then on the data source. In some scenarios the item's value is modified by the data source operation itself, for example when the data source assigns an identity column; the cache and the data source can then become inconsistent. To handle this, NCache lets you specify whether to update the data in the cache after the data source operation. You set the "UpdateInNCache" flag to perform the operation (Add/Insert) again on the cache store so that it stays synchronized with the data source.

If specified, NCache applies these cache updates synchronously or asynchronously depending on the write-through caching mode: synchronously for write-thru and asynchronously for write-behind.
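
For example, a provider whose table uses an identity column can write the generated key back into the item and set the flag so NCache re-applies the item to the cache. This is a hedged sketch: the UpdateInNCache flag name follows this page, while the ProviderItem/Value accessors, the Product type, and InsertAndReturnIdentity are assumptions for illustration.

```csharp
// Sketch: the database generates an identity value, so the cached copy is
// refreshed to stay consistent with the data source.
public OperationResult WriteToDataSource(WriteOperation operation)
{
    // Insert the row and capture the generated identity (hypothetical helper).
    int generatedId = InsertAndReturnIdentity(operation);

    // Write the generated id back into the item carried by the operation.
    var product = (Product)operation.ProviderItem.Value;
    product.Id = generatedId;

    var result = new OperationResult(operation, OperationResult.Status.Success);
    // Ask NCache to re-apply this item to the cache store after the data
    // source operation (synchronously for write-thru, asynchronously for
    // write-behind).
    result.UpdateInNCache = true;
    return result;
}
```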

Note

NCache also provides a performance counter for data source updates/sec. It displays the number of update operations performed on the cache per second after data source write operations, i.e., data source write operations whose OperationResult has the UpdateInNCache flag set to true and which are therefore re-applied to the cache.

Hot Apply Support for Write-Behind Configuration

NCache supports hot apply for write-behind settings, which allows you to change write-behind configurations at runtime without stopping the cache. You can change almost all configurable write-behind attributes through NCache Manager, and NCache incorporates those changes dynamically.

With hot apply, you can switch the write-behind mode from batch to non-batch and vice versa. For instance, if you change from batch to non-batch mode, NCache ignores the operation delay value and starts executing operations individually. You can also change the throttling rate at runtime as needed; similarly, the operation delay, batch interval, failed operation queue limit, and eviction ratio can all be changed at runtime.

Warning

You can only increase the "failed operation queue limit" at runtime; otherwise, NCache will use its default value for further operations.

Write Behind in Clustered Environment

Since a write-behind queue is maintained for write-behind operations, a separate dedicated thread executes the queued operations. The topology-level details for write-behind are listed below:

  • In the replicated cache topology, the write-behind queue is maintained on all nodes, but the write-behind async processor runs only on the coordinator node. All write-behind operations are therefore performed through this node and replicated to the queues on the other nodes cluster-wide, so if the coordinator goes down, the next coordinator performs all of the remaining write-behind operations.

  • In the partitioned-replicated topology, the write-behind queue is maintained on each active node and also replicated to its corresponding replica. Each node is responsible for its own write-behind operations on the data source.

  • In the mirrored topology, the write-behind queue is maintained on both the active and the passive node, but only the active node performs write-behind operations. If the active node goes down, the passive node becomes active and performs all of the remaining write-behind operations.

  • In the partitioned topology, the write-behind queue is maintained on each partition, and every node is responsible for its own write-behind operations on the data source.

In This Section

Configure Write-Through Provider
Explains the IWriteThruProvider interface and provides a sample implementation for the interface.

Using Write-Through with Basic Operations
Provides samples to use Write-Through with basic operations in NCache.

Using Write-Behind with Basic Operations
Provides samples to use Write-Behind with basic operations in NCache.

Using Write-Behind with Bulk Operations
Provides samples to use Write-Behind with bulk operations in NCache.

Using Write-Behind with Asynchronous Operations
Provides samples to use Write-Behind with asynchronous operations in NCache.

Monitor Write-Through Counters
Describes the performance counters provided by NCache to monitor Write-Through Caching.
