NCache 4.4 - Online Documentation

Write-Through Caching

 
NCache supports write-through caching, which allows write operations on the cache to be applied to the backend data source. In this way, the cache and the master data source stay synchronized. In write-through caching, NCache updates the cache store first and then applies the same operation to the configured data source. For example, if a client application updates an entry in the cache and write-through is enabled, NCache also updates the configured data source.
 
Similarly, the IWriteThruProvider interface needs to be implemented for write-through. NCache internally uses this custom provider to perform write operations on the backend data source, so the provider contains your custom logic for those operations. NCache calls the provider whenever a write operation (Add, Insert, Remove/Delete) API call is made with write-through. NCache currently provides two modes for write-through caching (a minimal provider sketch follows the list below):
 
  • Write-Through (Updates data source synchronously)
  • Write-Behind (Updates data source asynchronously)
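
A minimal sketch of such a provider is shown below. It assumes the interface shape described in this topic (Init, WriteToDataSource, Dispose); the namespace, the exact method signatures, the "connstring" parameter name, and the SQL mapping are illustrative assumptions rather than a drop-in implementation.

// Minimal write-through provider sketch (assumed namespace and signatures).
using System.Collections;
using System.Data.SqlClient;
using Alachisoft.NCache.Runtime.DatasourceProviders;

public class SqlWriteThruProvider : IWriteThruProvider
{
    private string _connectionString;

    // Called once at cache startup; provider parameters are configured through NCache Manager.
    public void Init(IDictionary parameters, string cacheId)
    {
        _connectionString = parameters["connstring"] as string;
    }

    // Called for each write operation (Add, Insert, Remove/Delete) issued with write-through.
    public OperationResult WriteToDataSource(WriteOperation operation)
    {
        using (var connection = new SqlConnection(_connectionString))
        {
            connection.Open();
            // ... map the operation (key, value, operation type) to SQL and execute it here ...
        }
        // Report the outcome back to NCache (statuses are described later in this topic).
        return new OperationResult(operation, OperationResult.Status.Success);
    }

    // The bulk WriteToDataSource(WriteOperation[]) overload is sketched under Batch Mode below.

    public void Dispose() { }
}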
 
 
NCache provides a performance counter for write-through operations performed per second.
 
 
Write-Through
 
In write-through caching, an operation is first applied to the cache store and then the configured data source is updated synchronously. The operation completes only after NCache has applied it to the backend data source. Write-through caching is suitable when immediate database updates are critical and the data source must be updated as soon as the cache is updated.
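
From the client side, write-through is triggered by passing a data source write option along with a normal cache write. The snippet below is a minimal sketch that assumes a cache named "myCache" and the DSWriteOption.WriteThru overload of Insert; verify the exact overloads against your installed NCache API.

// Sketch of a synchronous write-through insert (cache name and overload are assumptions).
using Alachisoft.NCache.Web.Caching;

Cache cache = NCache.InitializeCache("myCache");

// Returns only after both the cache store and the configured data source
// have been updated through the write-through provider.
// The last argument is an optional data-source-updated callback.
cache.Insert("Product:1001", new CacheItem("Chai"), DSWriteOption.WriteThru, null);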
 
Write-Behind
 
In write-through, because operations on the data source are synchronous, the rate of operations on the data source is the same as the rate of user operations on the cache. For applications with high user traffic, the rate of user operations on the cache can be very high, which can overwhelm the data source. Synchronous data source operations can also increase the response time of user operations.
 
To overcome these problems, write-behind can be used instead of write-through. In write-behind, data source operations are performed asynchronously after NCache applies the operation to the cache store. After the cache store is updated, these operations are queued and later applied to the configured data source asynchronously. Write-behind mode therefore improves the response time of operations. NCache provides various configuration settings for write-behind to control the flow of operations to the data source. For instance, through throttling, the rate at which NCache applies write-behind operations to the data source can be specified; it indicates the number of operations applied to the data source per second. The default throttling value is 500 ops/sec. This value can be changed through the Backing Source settings in NCache Manager.
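
On the client side, only the write option changes for write-behind. The sketch below again assumes a cache named "myCache" and a DSWriteOption.WriteBehind overload of Insert.

// Sketch of an asynchronous write-behind insert (cache name and overload are assumptions).
using Alachisoft.NCache.Web.Caching;

Cache cache = NCache.InitializeCache("myCache");

// Returns as soon as the cache store is updated; the data source write is queued
// in the write-behind queue and applied later at the configured throttling rate.
cache.Insert("Product:1002", new CacheItem("Coffee"), DSWriteOption.WriteBehind, null);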
 
 
NCache provides a performance counter for write-behind operations performed per second.
 
 
Write-Behind Modes
 
NCache allows write-behind operations to be applied either individually or in batches. A write-behind queue is maintained for write-behind operations. All write-behind operations are enqueued in this queue and later applied to the data source according to the configured batch or non-batch mode. These two modes are explained below:
 
  • Non-Batch Mode
By default, non-batch mode is configured for write-behind operations. In this mode, operations in the write-behind queue are applied to the data source one by one according to the configured throttling rate. For example, if the throttling rate is 500 operations per second, NCache applies write-behind operations one at a time and does not exceed 500 operations per second.
 
  • Batch Mode
In batch mode, an operation delay can be configured for write-behind operations, which indicates the time in milliseconds that each operation must wait in the write-behind queue before being applied to the data source. By default, its value is zero. In this mode, a batch (bulk) of operations is selected according to their operation delay: a dedicated thread periodically collects all operations that have completed their delay interval, at a configurable interval called the batch interval. In other words, the batch interval is the interval at which NCache checks the write-behind queue for operations whose operation delay has expired. In short, a bulk of ready operations (those that have completed their delay interval) is selected at every batch interval.
 
For example, if the operation delay is configured as 1000 ms and the batch interval as 5 s, NCache checks the write-behind queue every 5 s (the batch interval) and selects all operations whose operation delay has expired (all operations that have been in the queue for at least 1000 milliseconds).
 
After a bulk of operations is selected, these operations are applied to the data source according to the configured throttling rate. For example, if a bulk of 1000 operations is selected from the write-behind queue and the throttling rate is 500 ops/sec, these operations are applied to the data source in batches of 500, since the number of operations applied to the data source per second cannot exceed the throttling value.
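
When batching is enabled, the selected bulk can be handed to the provider through a bulk overload of WriteToDataSource so that the whole batch is applied in one round trip. The sketch below assumes such an array overload on the provider outlined earlier; it simply delegates to the single-operation method, whereas a real provider would typically combine the batch into one bulk statement or transaction.

// Assumed bulk overload, added to the provider class sketched earlier.
public OperationResult[] WriteToDataSource(WriteOperation[] operations)
{
    var results = new OperationResult[operations.Length];
    for (int i = 0; i < operations.Length; i++)
    {
        // Apply each queued operation; a production provider would usually batch
        // these into a single bulk SQL command or transaction instead.
        results[i] = WriteToDataSource(operations[i]);
    }
    return results;
}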
 
 
The operation delay can range from seconds to days or even months. In this way, operations on the data source can be deferred by a configurable amount of time.
 
 
 
NCache also provides performance counters for the write-behind queue count and the current batch operations count. The current batch operations count displays the number of operations selected for execution in the current batch interval. For write-behind with batching enabled, operations ready to be executed on the data source are dequeued from the write-behind queue; the number of operations dequeued in the current batch interval is displayed by the current batch operations count counter.
 
 
Write-Through Caching Operation Result
 
NCache provides a flexible way to synchronize the cache with the result of a write-through operation. After an operation (Add/Insert) is applied to the data source, the provider can report an operation status on the basis of which NCache synchronizes the cache store. For example, in case of a data source operation failure, the item can either be removed from the cache or kept in it, and the operation can even be retried on the data source. For this, Success, Failure, FailureRetry, or FailureDontRemove has to be specified as the DSOperationStatus of the OperationResult. This is supported in both modes of write-through caching, i.e., write-through and write-behind. Data source operation statuses and the corresponding actions taken by NCache are described below:
 
  • Success: The data source operation succeeded and the item was added to the data source, so NCache keeps it in the cache as well.
  • Failure: The data source operation failed and the item could not be added to the database, so NCache removes it from the cache as well.
  • FailureDontRemove: The data source operation failed and the item could not be added to the database, but NCache keeps it in the cache.
  • FailureRetry: The data source operation failed and the item could not be added to the database, so NCache keeps the item in the cache and retries the operation. Retries are performed as write-behind operations.
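
The sketch below shows how WriteToDataSource in the provider outlined earlier might map data source outcomes to these statuses. The exception types and the ApplyToDatabase helper are illustrative assumptions, not part of the NCache API.

// Illustrative mapping of data source outcomes to DSOperationStatus values.
public OperationResult WriteToDataSource(WriteOperation operation)
{
    var result = new OperationResult(operation, OperationResult.Status.Success);
    try
    {
        ApplyToDatabase(operation); // hypothetical helper that executes the SQL
    }
    catch (TimeoutException)
    {
        // Transient failure: keep the item in cache and retry via the write-behind queue.
        result.DSOperationStatus = OperationResult.Status.FailureRetry;
    }
    catch (InvalidOperationException)
    {
        // Failure that should not invalidate the cached item: keep it in the cache.
        result.DSOperationStatus = OperationResult.Status.FailureDontRemove;
    }
    catch (Exception)
    {
        // Any other failure: NCache removes the item from the cache.
        result.DSOperationStatus = OperationResult.Status.Failure;
    }
    return result;
}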
 
Retrying Failed Operations
 
NCache allows write-through/write-behind operations to be retried if they fail on the data source. If retrying is enabled and the provider marks an operation for retry, NCache retries that operation on the data source. For both write-through and write-behind, all retry operations are re-queued to the write-behind queue; this means that a failed write-through operation is retried asynchronously as a write-behind operation.
 
 
NCache also provides a performance counter for failed data source operations per second. Write operations performed on the data source that return Failure, FailureRetry, or FailureDontRemove as the DSOperationStatus of the OperationResult are counted per second by this counter.
 
 
 
NCache allows limiting the number of failed operations to be retried. For this, a failed operation queue limit should be specified through NCache Manager; if that limit is exceeded, failed operations are evicted according to a configurable eviction ratio. NCache evicts the most-retried operations when the retry queue is full. Each operation has an associated RetryCount property which is incremented on each retry of the operation on the data source.
 
 
NCache provides performance counters for the write-behind failure retry count and write-behind evictions per second. The write-behind failure retry counter shows the number of operations re-queued for retry; data source write operations returning FailureRetry as the status in the OperationResult are re-queued for retry. The write-behind evictions/sec counter displays the number of retry operations evicted per second.
 
 
Updating Cache after Data Source Operation
 
As stated earlier, in write-through caching an operation is first performed on the cache store and then on the data source. There are scenarios in which an item's value is modified by the data source operation itself; for example, in the case of identity columns, the value is modified by the data source operation. In such a situation, the data in the cache and the data source can become inconsistent. To handle this, NCache allows specifying whether to update the data in the cache after the data source operation. The UpdateInCache flag can be set so that the operation (Add/Insert) is performed again on the cache store to synchronize it with the data source.
 
If specified, NCache applies this update to the cache store synchronously or asynchronously depending on the write-through caching mode: synchronously for write-through and asynchronously for write-behind.
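
The sketch below shows one way the provider could request this re-synchronization after the database assigns an identity value. The InsertAndReturnIdentity helper is hypothetical, and the exact way the cached value is exposed on the operation should be verified against your NCache version.

// Sketch: push the database-assigned value back into the cache via UpdateInCache.
public OperationResult WriteToDataSource(WriteOperation operation)
{
    var result = new OperationResult(operation, OperationResult.Status.Success);

    // Hypothetical helper: inserts the row and returns the database-generated identity.
    int identity = InsertAndReturnIdentity(operation);

    // ... copy the generated identity into the cached value carried by the operation
    //     (for example via the operation's cache item) so the cache matches the database ...

    // Ask NCache to re-apply the (now corrected) item to the cache store.
    result.UpdateInCache = true;
    return result;
}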
 
 
NCache also provides a performance counter for data source updates per second. Data source write operations with the UpdateInCache flag set to true in the OperationResult are re-applied to the cache; this counter displays the number of such update operations performed on the cache per second.
 
 
 
 
Monitor Write-Through Caching Using Counters
 
For write-through caching, NCache provides the following counters to monitor cache activity on the data source:
 
  • Write-through/sec: Displays the number of write-through operations per second.
  • Write-behind/sec: Displays the number of write-behind operations per second.
  • Write-behind queue count: Displays the number of operations in the write-behind queue.
  • Write-behind failure retry count: Displays the number of failed operations re-queued for retry. Data source write operations returning FailureRetry are re-queued; this counter displays the re-queued operation count.
  • Average Write-Behind Updates: Displays the average time, in milliseconds, taken to complete one cache update after a data source write operation. Data source write operations with the UpdateInCache flag set to true in the OperationResult are re-applied to the cache; this counter displays the average time taken by these update operations.
  • Write-behind evictions/sec: Displays the number of retried operations evicted per second. Data source write operations returning FailureRetry are re-queued for retry; this counter displays the number of such operations evicted per second.
  • Data source updates/sec: Displays the number of update operations performed on the cache per second after data source write operations. Data source write operations with the UpdateInCache flag set to true in the OperationResult are re-applied to the cache; this counter displays the number of these update operations per second.
  • Data source failed operations/sec: Displays the number of data source write operations that fail per second. Write operations returning Failure, FailureRetry, or FailureDontRemove as the DSOperationStatus of the OperationResult are counted by this counter.
  • Current batch operations count: Displays the number of operations selected for execution in the current batch interval. For write-behind with batching enabled, operations ready to be executed on the data source are dequeued from the write-behind queue; the number of operations dequeued in the current batch interval is displayed by this counter.
 
Hot Apply Support for Write-Behind Configuration
 
NCache supports hot apply for write-behind settings, which allows the write-behind configuration to be changed at runtime without stopping the cache. Almost all configurable write-behind attributes can be changed through NCache Manager, and NCache incorporates those changes dynamically.
 
With hot apply, the write-behind mode can be changed from batch to non-batch and vice versa. For instance, if batch mode is changed to non-batch, NCache ignores the operation delay value and starts executing operations individually. The throttling rate can also be changed at runtime, as can the operation delay, batch interval, failed operation queue limit, and eviction ratio.
 
 
The failed operation queue limit can only be increased; otherwise NCache will use its default value for subsequent operations.
 
 
Write-Behind in Clustered Environment
 
A write-behind queue is maintained for write-behind operations, and a separate dedicated thread monitors and executes these operations. Topology-level details for write-behind are given below:
 
  • In the Replicated Cache topology, the write-behind queue is maintained on all nodes, but the write-behind async processor is present on the coordinator node only. This means that all write-behind operations are performed through this node and replicated to the queues of the other nodes cluster-wide. In this way, if the coordinator node goes down, the next coordinator performs the remaining write-behind operations.
  • In the Partitioned-Replica topology, the write-behind queue is maintained on each active node and also replicated to its corresponding replica. Each node is responsible for its own write-behind operations on the data source.
  • In the Mirrored Cache topology, the write-behind queue is maintained on both the active and the passive node, but only the active node performs write-behind operations. If the active node goes down, the passive node becomes active and performs the remaining write-behind operations.
  • In the Partitioned Cache topology, the write-behind queue is maintained on each partition, and every node is responsible for its own write-behind operations on the data source.
 
 
See Also