ASP.NET web applications, .NET web service applications, and other .NET server applications need to handle extreme transaction loads without slowing down. While the application tier can scale linearly, the data storage and database tier often cannot, creating bottlenecks that hamper overall application scalability.
To address this scalability problem, simple in-memory distributed key-value stores like Memcached and Redis were introduced on Unix/Linux platforms. They quickly became quite popular, mainly because they provided linear scalability just like the application tier, effectively eliminating the database bottleneck. But as first attempts, they were not as comprehensive as the subsequent .NET distributed caches, as we'll explore in this blog.
Limitations in Key Value Stores
Despite their popularity, these solutions were simple and basic, failing to address real-life application challenges. They exhibited significant weaknesses in several areas:
- Lack of high availability
- Stale cache data
- Inability to run SQL queries against cached data
- Lack of server-side caching code (e.g., Read-through, Write-through, Cache Loader & Refresher, etc.)
For example, Memcached’s availability was so poor that third parties started developing high-availability add-ons for it. Unfortunately, the underlying architecture was not designed for high availability, limiting the effectiveness of these solutions. Redis faced similar availability issues but later re-architected its product to incorporate some high-availability features, such as failover support. Despite these improvements, both Memcached and Redis still have shortcomings. This is where advanced distributed cache solutions step in to provide the features and reliability that key-value stores lack.
.NET Distributed Cache as a 2nd Generation Key-Value Store
.NET distributed caches like NCache were designed from scratch to address all the limitations mentioned above. As a second-generation solution, NCache goes beyond the capabilities of key-value stores like Memcached and Redis. With over a decade of proven reliability, NCache has become a popular and robust distributed cache for .NET applications, offering advanced features and the performance necessary to meet the demands of modern, high-transaction applications, as detailed in this blog.
Dynamic Cache Cluster & Data Replication
NCache has a self-healing dynamic cache cluster that pools the CPU and memory resources of all cache servers in the cluster. At the same time, NCache provides a variety of caching topologies with different data distribution and replication strategies. This allows NCache to scale linearly without compromising availability. Even if a cache server goes down, the cache cluster continues running without any data loss, and all the applications using the cache continue without interruption.
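From the application's perspective, this clustering is transparent. The sketch below, assuming the NCache 5.x client API and a hypothetical clustered cache named "demoCache", connects to the cluster and performs a basic put/get; distribution and replication happen behind the scenes:

```csharp
using System;
using Alachisoft.NCache.Client;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ClusterDemo
{
    public static void Main()
    {
        // "demoCache" is a hypothetical cache name configured on the cluster
        ICache cache = CacheManager.GetCache("demoCache");

        // The cluster transparently distributes and replicates this item
        cache.Insert("Product:1001", new Product { Id = 1001, Name = "Chai" });

        // Reads keep working even if an individual cache server goes down
        Product product = cache.Get<Product>("Product:1001");
        Console.WriteLine(product.Name);
    }
}
```

Because the cache handle points at the cluster rather than any single server, the application code does not change as servers are added or removed.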
Keeping Cache Fresh
Another area where distributed caches like NCache shine is keeping the cached data fresh and consistent with the database. NCache does this through a variety of features including Expiration, event-driven SqlDependency, CLR procedures for relational databases, and much more. Expirations work just as they do in key-value stores, but SqlDependency allows NCache to synchronize the cache with any changes to the related data in the database. And CLR stored procedures allow you to update the cache directly from your SQL Server database when the corresponding data changes.
This means that even if a third-party application changes data in the database, NCache immediately updates the relevant cached data accordingly. The benefit is that you can cache virtually all of your application data, not just read-only data, which greatly enhances application performance.
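As a rough sketch of how this looks in code (assuming the NCache 5.x client API, an existing `cache` handle and `product` object, and a hypothetical Northwind connection string), a cached item can be tied to a SQL Server row with SqlCacheDependency, with an absolute expiration as a safety net:

```csharp
using System;
using Alachisoft.NCache.Client;
using Alachisoft.NCache.Runtime.Dependencies;

// Hypothetical connection string; adjust for your environment
string connectionString = "Data Source=.;Database=Northwind;Integrated Security=true;";

// The cached item is invalidated as soon as this row changes in SQL Server
var dependency = new SqlCacheDependency(connectionString,
    "SELECT ProductID, UnitPrice FROM dbo.Products WHERE ProductID = 1001");

var cacheItem = new CacheItem(product)
{
    Dependency = dependency,
    // Absolute expiration puts an upper bound on staleness as a fallback
    Expiration = new Expiration(ExpirationType.Absolute, TimeSpan.FromMinutes(30))
};

cache.Insert("Product:1001", cacheItem);
```

With the dependency in place, the cache server (not the application) watches the database for changes, so every application sharing the cache sees fresh data.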
Querying Cache with SQL and LINQ
When you're able to cache almost all of your data using these "keep the cache fresh" features, relying solely on key-based lookups makes data retrieval challenging. But if you can search data by object attributes, a distributed cache like NCache becomes as easy to search as a database. For this purpose, NCache provides SQL and LINQ querying capabilities, making data retrieval efficient and straightforward.
In addition to querying on object attributes, you can assign Groups, Tags, and Named Tags to cached items and include them in your queries. Below is an example of a SQL query that counts the products priced below a given UnitPrice, grouped by Category.
```csharp
string query = "SELECT Category, COUNT(*) FROM FQN.Product WHERE UnitPrice < ? GROUP BY Category";

// Use QueryCommand for query execution
var queryCommand = new QueryCommand(query);
queryCommand.Parameters.Add("UnitPrice", 100.0);

// Executing QueryCommand through ICacheReader
ICacheReader reader = cache.SearchService.ExecuteReader(queryCommand);

// Check if the result set is not empty
if (reader.FieldCount > 0)
{
    while (reader.Read())
    {
        // Read each row of the result set
        string category = reader.GetValue<string>("Category");
        int count = reader.GetValue<int>("COUNT()");

        Console.WriteLine($"Category '{category}' has '{count}' affordable products.");
    }
}
else
{
    Console.WriteLine("No category contains affordable products.");
}
```
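Tags can also be used directly, without SQL. The snippet below is a sketch against the NCache 5.x client API (assuming an existing `cache` handle and `product` object, and the `GetByTag` method of the cache's SearchService): it tags items at insert time and then retrieves everything carrying a given tag in one call.

```csharp
using System.Collections.Generic;
using Alachisoft.NCache.Client;
using Alachisoft.NCache.Runtime.Caching;

// Tag the item as it is cached; an item can carry multiple tags
var taggedItem = new CacheItem(product)
{
    Tags = new[] { new Tag("Beverages"), new Tag("Discounted") }
};
cache.Insert("Product:1001", taggedItem);

// Later, fetch every cached item carrying the tag in a single call
IDictionary<string, Product> discounted =
    cache.SearchService.GetByTag<Product>(new Tag("Discounted"));
```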
Server-Side Code
Finally, server-side code such as Read-through, Write-through, Custom Dependency, and Cache Loader & Refresher is invaluable. This code is written by you but executed by the distributed cache within the cache cluster, enabling you to simplify your applications by moving frequently used logic to the caching tier.
For example, NCache calls your Read-through handler when your application requests data that is not in the cache. The Read-through handler retrieves the data and loads it into the cache. Similarly, by combining Read-through with expirations and database synchronizations, you can auto-reload the cached item instead of removing it from the cache.
Write-through works in the same fashion as Read-through but handles updates. When your application updates the cache, Write-through ensures that the database is updated as well. Alternatively, Write-behind can update the database asynchronously while the cache is updated synchronously. Lastly, the Cache Loader is called when the cache starts, allowing you to pre-load it with the desired data so that frequently accessed information is readily available. The following code shows a sample implementation of a Read-through caching provider.
```csharp
public class SampleReadThruProvider : IReadThruProvider
{
    private SqlConnection _connection;

    // Perform tasks like allocating resources or acquiring connections
    public void Init(IDictionary parameters, string cacheId)
    {
        // Create the SQL connection and perform other server-side initializations here
    }

    // Responsible for loading an item from the external data source
    public ProviderCacheItem LoadFromSource(string key)
    {
        // LoadFromDataSource is a user-defined helper that reads from the data source
        object value = LoadFromDataSource(key);
        var cacheItem = new ProviderCacheItem(value);
        return cacheItem;
    }

    // Responsible for loading a bulk of items from the external data source
    public IDictionary<string, ProviderCacheItem> LoadFromSource(ICollection<string> keys)
    {
        var dictionary = new Dictionary<string, ProviderCacheItem>();
        foreach (string key in keys)
        {
            // LoadFromDataSource loads each item from the data source
            dictionary.Add(key, new ProviderCacheItem(LoadFromDataSource(key)));
        }
        return dictionary;
    }

    // Loads a ProviderDataTypeItem holding an enumerable data type
    public ProviderDataTypeItem<IEnumerable> LoadDataTypeFromSource(string key, DistributedDataType dataType)
    {
        IEnumerable value = null;
        ProviderDataTypeItem<IEnumerable> dataTypeItem = null;

        switch (dataType)
        {
            case DistributedDataType.List:
                value = new List<object>() { LoadFromDataSource(key) };
                dataTypeItem = new ProviderDataTypeItem<IEnumerable>(value);
                break;
            case DistributedDataType.Dictionary:
                value = new Dictionary<string, object>() { { key, LoadFromDataSource(key) } };
                dataTypeItem = new ProviderDataTypeItem<IEnumerable>(value);
                break;
        }
        return dataTypeItem;
    }

    // Perform tasks associated with freeing, releasing, or resetting resources
    public void Dispose()
    {
        if (_connection != null)
        {
            _connection.Close();
        }
    }
}
```
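For completeness, a Write-through provider follows the same pattern. The minimal sketch below assumes NCache's IWriteThruProvider interface, with the actual database persistence elided as comments:

```csharp
using System.Collections;
using System.Collections.Generic;
using Alachisoft.NCache.Runtime.DatasourceProviders;

public class SampleWriteThruProvider : IWriteThruProvider
{
    public void Init(IDictionary parameters, string cacheId)
    {
        // Acquire database connections here
    }

    // Called by the cache server whenever the application writes to the cache
    public OperationResult WriteToDataSource(WriteOperation operation)
    {
        // Persist operation.ProviderItem to the database here,
        // then report the outcome back to the cache server
        return new OperationResult(operation, OperationResult.Status.Success);
    }

    // Bulk counterpart, used for batched Write-behind operations
    public ICollection<OperationResult> WriteToDataSource(ICollection<WriteOperation> operations)
    {
        var results = new List<OperationResult>();
        foreach (WriteOperation operation in operations)
        {
            results.Add(WriteToDataSource(operation));
        }
        return results;
    }

    // Counterpart for distributed data types (lists, dictionaries, etc.)
    public ICollection<OperationResult> WriteToDataSource(ICollection<DataTypeWriteOperation> operations)
    {
        var results = new List<OperationResult>();
        // Persist each data-type operation here and record its result
        return results;
    }

    public void Dispose()
    {
        // Release database connections here
    }
}
```

Once registered with the cache, the same provider serves both Write-through (synchronous) and Write-behind (asynchronous) update modes, so the choice between the two becomes configuration rather than code.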
Conclusion
As you can see, NCache, an open-source .NET distributed cache, offers far greater functionality and power compared to key-value stores like Redis or Memcached. The detailed comparisons between Redis and NCache, as well as Memcached and NCache, clearly illustrate how NCache enhances caching performance and capabilities, making it a superior choice for handling complex and high-transaction applications.