Businesses today are developing high-traffic ASP.NET web applications that serve tens of thousands of concurrent users. To handle this kind of load, multiple application servers are deployed in a load-balanced environment. Under such highly concurrent conditions, multiple users often try to access and modify the same data, triggering race conditions.
A race condition occurs when two or more users access and modify the same shared data concurrently, and the final result depends on the order in which their operations happen to interleave. This creates a high risk of losing data integrity and consistency. With in-memory, scalable caching solutions like NCache providing distributed locking mechanisms, enterprises can maintain strong data consistency.
Distributed Locking for Data Consistency
NCache provides you with a mechanism for distributed locking in .NET that lets you lock selected cache items during concurrent updates. To maintain data consistency in such cases, NCache acts as a distributed lock manager and provides two types of locking:
- Optimistic Locking (Item Versions)
- Pessimistic Locking (Exclusive Locking)
We will discuss both in detail later in this blog. For now, consider the following scenario to understand how, without a distributed locking service, data integrity can be violated.
Two users simultaneously access the same bank account with a balance of 30,000. One user withdraws 15,000 while the other deposits 5,000. If handled correctly, the end balance should be 20,000. If, however, a race condition occurs and is not handled, the balance ends up as either 15,000 or 35,000, depending on whose update lands last.
Here is how this race condition occurs:
- Time t1: User 1 fetches Bank Account with balance = 30,000
- Time t2: User 2 fetches Bank Account with balance = 30,000
- Time t3: User 1 withdraws 15,000 and updates Bank Account balance = 15,000
- Time t4: User 2 deposits 5,000 and updates Bank Account balance = 35,000
In either case, code that does not account for concurrent access could be disastrous for the bank, as the sketch below illustrates. In the subsequent sections, let's see how NCache's locking mechanisms ensure that your application logic remains thread-safe.
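To make the problem concrete, here is a minimal sketch of the kind of unguarded read-modify-write code that produces this race. The BankAccount class, the cache key, and the UpdateBalance helper are hypothetical names for illustration only:

using Alachisoft.NCache.Client;

// Hypothetical account type; NCache serializes it into the cache.
public class BankAccount
{
    public decimal Balance { get; set; }
}

// Both users run this same logic concurrently against the same key.
public void UpdateBalance(ICache cache, decimal amount)
{
    string key = "BankAccount:1234"; // hypothetical key

    // t1 / t2: both users read balance = 30,000
    BankAccount account = cache.Get<BankAccount>(key);

    // t3 / t4: each computes the new balance on its own stale copy
    account.Balance += amount;

    // Last writer wins: one of the two updates is silently lost
    cache.Insert(key, account);
}

With user 1 passing amount = -15,000 and user 2 passing amount = 5,000, whichever Insert runs last overwrites the other, yielding 15,000 or 35,000 instead of 20,000.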
Optimistic Locking (Item Versions)
In optimistic locking, NCache uses cache item versioning. On the server side, every cached object has a version number associated with it, which is incremented on every update to the item. NCache then checks whether you are working with the latest version; if not, it rejects your update. This way, only one user's update succeeds while the others fail.
Take a look at the following code explaining this:
// Pre-condition: Cache is already connected
// An item is added in the cache with itemVersion

// Specify the key of the cacheItem
string key = "Product:1001";
CacheItemVersion itemVersion = null;

// Get the cacheItem previously added in the cache along with its version
CacheItem cacheItem = cache.GetCacheItem(key, ref itemVersion);

if (cacheItem != null)
{
    var prod = cacheItem.GetValue<Product>();
    prod.UnitsInStock++;
    cacheItem.SetValue(prod);

    // Succeeds only if itemVersion is still the latest version in the cache
    itemVersion = cache.Insert(key, cacheItem);
}
else
{
    // Item could not be retrieved due to an outdated CacheItemVersion
}
In the above example, if your cache item version is the latest, the NCache operation succeeds. If not, an operation failed exception is thrown with a detailed message; in that case, you should re-fetch the latest version and redo your operation.
With optimistic locking, NCache ensures that every write to the distributed cache is consistent with the version each application holds. For a more detailed code example, please refer to our official NCache Documentation.
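Since a conflicting writer can invalidate your version at any moment, a common pattern is to wrap the read-modify-write cycle in a small retry loop. The following is a minimal sketch of that pattern, assuming a Product class with a UnitsInStock property; the UpdateStockOptimistically helper and its maxRetries policy are illustrative choices, not part of the NCache API:

using System;
using Alachisoft.NCache.Client;
using Alachisoft.NCache.Runtime.Exceptions;

// Retry the optimistic update until our version is the latest,
// giving up after maxRetries conflicting updates.
public bool UpdateStockOptimistically(ICache cache, string key, int maxRetries = 3)
{
    for (int attempt = 0; attempt < maxRetries; attempt++)
    {
        try
        {
            CacheItemVersion itemVersion = null;

            // Fetch the latest copy of the item along with its version
            CacheItem cacheItem = cache.GetCacheItem(key, ref itemVersion);
            if (cacheItem == null)
                return false; // item is not in the cache

            var prod = cacheItem.GetValue<Product>();
            prod.UnitsInStock++;
            cacheItem.SetValue(prod);

            // Succeeds only if no one has updated the item since our fetch
            cache.Insert(key, cacheItem);
            return true;
        }
        catch (OperationFailedException)
        {
            // Another writer got in first; loop to re-fetch the latest version
        }
    }
    return false;
}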
Pessimistic Locking (Exclusive Locking)
The other way to ensure data consistency is to acquire an exclusive lock on the cached data. This mechanism is called pessimistic locking. It is essentially a writer lock: while one user holds it, all other users are blocked from reading or writing the locked item.
To clarify it further, take a look at the following code:
// Pre-condition: Cache is already connected
// An item is added in the cache

// Specify the key of the cacheItem
string key = "Product:1001";
LockHandle lockHandle = null;

// Try to fetch the item and acquire a lock on it for 10 seconds.
// CacheItem is returned as null if the item does not exist in the cache
// or is already locked. If the item exists but is locked,
// lockHandle.LockId will not be null.
CacheItem cacheItem = cache.GetCacheItem(key, true, TimeSpan.FromSeconds(10), ref lockHandle);

if (cacheItem != null)
{
    var prod = cacheItem.GetValue<Product>();
    prod.UnitsInStock++;
    cacheItem.SetValue(prod);

    // Update the item and release the lock in a single call
    cache.Insert(key, cacheItem, lockHandle, true);
}
Here, we first try to acquire an exclusive lock on the cache item. If successful, we get the object along with the lock handle. If another application has already acquired the lock, the call returns null and lockHandle.LockId identifies the existing lock; in that case, you should retry fetching the item after a small delay.
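Here is a minimal sketch of that retry, polling a few times with a short back-off before giving up; the TryGetLocked helper, maxAttempts, and the 100 ms delay are illustrative assumptions, not part of the NCache API:

using System;
using System.Threading;
using Alachisoft.NCache.Client;

// Try to fetch the item with an exclusive lock, backing off briefly
// whenever another application currently holds the lock.
public CacheItem TryGetLocked(ICache cache, string key, ref LockHandle lockHandle,
    int maxAttempts = 5)
{
    for (int attempt = 0; attempt < maxAttempts; attempt++)
    {
        CacheItem cacheItem = cache.GetCacheItem(key, true,
            TimeSpan.FromSeconds(10), ref lockHandle);

        if (cacheItem != null)
            return cacheItem; // lock acquired

        if (lockHandle == null || lockHandle.LockId == null)
            return null; // item does not exist in the cache at all

        Thread.Sleep(100); // item is locked by someone else; wait and retry
    }
    return null; // could not acquire the lock within maxAttempts tries
}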
If you are looking for detailed code samples, please head over to the NCache Docs on Pessimistic locking.
Upon successfully acquiring the lock while fetching the item, your application can safely perform its operations, knowing that no other application can fetch or update this item for as long as the lock is held. To update the data and release the lock, call the Insert API with the same lock handle. Doing so inserts the data into the cache and releases the lock, all in one call. Once the lock is released, the cached data becomes available to all other applications.
Just remember that you should always acquire locks with a timeout. If no timeout is specified, NCache locks the item for an indefinite amount of time, and if the application crashes without releasing the lock, the item remains locked forever. As a workaround you can forcibly release such a lock, but this practice is ill-advised.
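If your application decides to back out without updating the item, it can also release the lock explicitly instead of waiting for the lock timeout to expire. Here is a minimal sketch, assuming NCache's Unlock API as described in its documentation, where calling Unlock without a lock handle performs the forced release mentioned above:

using Alachisoft.NCache.Client;

// Normal case: release the lock we own, using the handle we acquired.
public void AbortUpdate(ICache cache, string key, LockHandle lockHandle)
{
    cache.Unlock(key, lockHandle);
}

// Last resort (ill-advised): forcibly release a lock without its handle,
// e.g. after the owning application crashed while holding it.
public void ForceRelease(ICache cache, string key)
{
    cache.Unlock(key);
}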
Failover Support in Distributed Locking
Since NCache is an in-memory distributed cache, it also provides complete failover support so that there is no data loss. In case of a server failure, your client applications keep working seamlessly. Likewise, the locks in the distributed system are replicated and maintained on the replica nodes. If a node fails while one of your applications holds a lock, the lock is automatically propagated to a new node along with its properties, e.g. its lock expiration.
Conclusion
So, which locking mechanism is best for you, optimistic or pessimistic? It depends on your use case and what you want to achieve. Optimistic locking offers a performance advantage over pessimistic locking, especially when your applications are read-intensive, whereas pessimistic locking is safer from a data-consistency standpoint. Choose your locking mechanism carefully. For more details, head over to the NCache website. In case of any questions, contact us and let our experts help you out!