Caching is a commonly used technique for storing frequently accessed data in memory. In this article, we will discuss in-memory caching, one type of caching. In-memory caching is a widely used method of storing data in RAM to provide faster access than retrieving it from slower storage systems like hard drives. Keep in mind that data stored in memory is volatile: all of it can be lost if the power goes off or the system crashes. The amount of data that can be stored is also limited by the RAM capacity of the host running the application.
```mermaid
graph TD
A[Application] -- 1. Read data from cache if it exists --> B(Cache)
B -- 2. If it doesn't exist, retrieve the actual data from the database --> C[Database]
C -- 3. Write the retrieved data back to the cache --> B
```
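As a rough illustration of this flow, here is a minimal sketch using the built-in IMemoryCache from Microsoft.Extensions.Caching.Memory; the ProductService, the Product type, and GetProductFromDatabaseAsync are hypothetical placeholders for your own data access code, not part of the library or the repository.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public record Product(int Id, string Name);

public class ProductService
{
    private readonly IMemoryCache _cache;

    public ProductService(IMemoryCache cache) => _cache = cache;

    public async Task<Product?> GetProductAsync(int id)
    {
        var cacheKey = $"product:{id}";

        // 1. Read from the cache if the entry exists.
        if (_cache.TryGetValue(cacheKey, out Product? cached))
            return cached;

        // 2. On a cache miss, retrieve the actual data from the database.
        var product = await GetProductFromDatabaseAsync(id);

        // 3. Write the retrieved data back to the cache for later reads.
        if (product is not null)
            _cache.Set(cacheKey, product, TimeSpan.FromMinutes(10));

        return product;
    }

    // Hypothetical stand-in for a real database query.
    private Task<Product?> GetProductFromDatabaseAsync(int id) =>
        Task.FromResult<Product?>(new Product(id, "Sample"));
}
```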
Memory (RAM) is more expensive than hard disks, so scaling our cache system and deciding which data to cache play crucial roles in caching. Data that is frequently accessed and changes infrequently is a good candidate for caching. More static data, such as lookup tables, fixed dropdowns, user preferences, and profile settings, can be cached.
Data that changes very frequently, sensitive data like personal identification or passwords, and very large objects should not be cached.
Caching is a vital component in enhancing the performance and efficiency of applications. Various caching techniques are used to optimize data storage and access, and each suits different scenarios and requirements.
On-demand caching stores only the data that is actively used or expected to be used in the near future, rather than storing all available data in the cache.
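With the built-in IMemoryCache, this on-demand behavior can be sketched with GetOrCreateAsync, which only runs the data-loading factory (and stores the result) when the key is actually requested. CountryService and LoadCountriesFromDatabaseAsync below are hypothetical placeholders.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public class CountryService
{
    private readonly IMemoryCache _cache;

    public CountryService(IMemoryCache cache) => _cache = cache;

    public async Task<List<string>> GetCountriesAsync()
    {
        // The factory below only executes on a cache miss; otherwise the cached list is returned.
        var countries = await _cache.GetOrCreateAsync("countries", async entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(1);
            return await LoadCountriesFromDatabaseAsync();
        });

        return countries!;
    }

    // Hypothetical stand-in for a real database or API call.
    private Task<List<string>> LoadCountriesFromDatabaseAsync() =>
        Task.FromResult(new List<string> { "Turkey", "Germany", "United States" });
}
```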
Pre-population caching is an approach where specific data is loaded into the cache before it is actually requested. It is useful when you have fixed lookups or reference fields.
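A possible sketch of pre-population, assuming a hypothetical initializer that runs once at startup to load fixed lookup data into the cache:

```csharp
using Microsoft.Extensions.Caching.Memory;

public class LookupCacheInitializer
{
    private readonly IMemoryCache _cache;

    public LookupCacheInitializer(IMemoryCache cache) => _cache = cache;

    // Called once during application startup, before any request needs the data.
    public void Prepopulate()
    {
        // Fixed lookups and dropdown values rarely change, so loading them eagerly is cheap and safe.
        _cache.Set("currency-codes", new[] { "USD", "EUR", "TRY" });
        _cache.Set("countries", new[] { "Turkey", "Germany", "United States" });
    }
}
```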
Two commonly used expiration methods are fundamental for determining how long cached items live. Understanding these concepts plays a crucial role in efficient cache management.
With absolute time expiration, a cache entry has a known and fixed (absolute) lifetime. Example: if a cache entry is set with an absolute time limit of 24 hours, it will be removed from the cache 24 hours after it was added.
Sliding time expiration is suitable for cache entries that don't have a fixed expiration but should be kept in the cache as long as they are frequently accessed. Example: if a cache entry has a sliding expiration of 40 minutes and it is accessed 5 minutes after being added to the cache, its expiration time will be reset to 40 minutes from that point. If it is not accessed again within those 40 minutes, it will then expire.
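Both strategies can be expressed with MemoryCacheEntryOptions when using the built-in MemoryCache; the keys and values below are only illustrative.

```csharp
using System;
using Microsoft.Extensions.Caching.Memory;

var cache = new MemoryCache(new MemoryCacheOptions());

// Absolute expiration: removed 24 hours after being added, no matter how often it is read.
cache.Set("daily-report", "report-content",
    new MemoryCacheEntryOptions { AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(24) });

// Sliding expiration: removed only if it is not accessed for 40 minutes;
// every read pushes the expiration another 40 minutes into the future.
cache.Set("user-session", "session-data",
    new MemoryCacheEntryOptions { SlidingExpiration = TimeSpan.FromMinutes(40) });
```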
There is already a built-in memory cache in the .NET environment, but I want to show the background as well. To see the full example, you can check the MemCache class or browse the minimal API endpoints and the rest of the repository.
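As a rough idea of how the built-in IMemoryCache plugs into a minimal API (assuming an ASP.NET Core project; this is only a sketch, not the exact endpoints from the repository):

```csharp
using System;
using Microsoft.Extensions.Caching.Memory;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddMemoryCache(); // registers the built-in IMemoryCache
var app = builder.Build();

// Read a value from the cache, or report a miss.
app.MapGet("/cache/{key}", (string key, IMemoryCache cache) =>
    cache.TryGetValue(key, out string? value) ? Results.Ok(value) : Results.NotFound());

// Write a value into the cache with a 40-minute sliding expiration.
app.MapPost("/cache/{key}/{value}", (string key, string value, IMemoryCache cache) =>
{
    cache.Set(key, value, new MemoryCacheEntryOptions { SlidingExpiration = TimeSpan.FromMinutes(40) });
    return Results.Ok();
});

app.Run();
```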
The ConcurrentDictionary is used in cache implementations to ensure thread-safe operations, maintain high performance under concurrent access, and offer scalability and ease of use for developers. These qualities are essential for an efficient caching system, particularly in server-side and multi-threaded applications.
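To illustrate the idea, here is a heavily simplified sketch of a ConcurrentDictionary-backed cache. It is not the actual MemCache implementation from the repository, just an outline of the approach:

```csharp
using System;
using System.Collections.Concurrent;

public class SimpleMemCache
{
    // Each entry stores the value together with its absolute expiration time.
    private readonly ConcurrentDictionary<string, (object Value, DateTimeOffset ExpiresAt)> _entries = new();

    public void Set(string key, object value, TimeSpan lifetime) =>
        _entries[key] = (value, DateTimeOffset.UtcNow.Add(lifetime));

    public bool TryGet<T>(string key, out T? value)
    {
        if (_entries.TryGetValue(key, out var entry) && entry.ExpiresAt > DateTimeOffset.UtcNow)
        {
            value = (T)entry.Value;
            return true;
        }

        // Expired or missing: remove the stale entry (if any) and report a miss.
        _entries.TryRemove(key, out _);
        value = default;
        return false;
    }
}
```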
The MemoryCache primarily uses managed resources (like a ConcurrentDictionary). Managed resources are handled by the .NET Garbage Collector (GC), which automatically frees memory when objects are no longer in use. Therefore, there's no need for cleanup.
In conclusion, caching is a vital component in the landscape of modern software development, especially for enhancing the performance and scalability of applications. In this repo, I have attempted to provide a more detailed analysis of in-memory caching.
Happy coding!