We intend to implement existing cache-optimization algorithms that use ML techniques.
There are two mechanisms by which cache contents change:
- Eviction (removing a block to make room for a new one)
- Prefetching (loading a block into the cache before it is requested)
The LeCaR paper proposes a general framework that uses the ML technique of regret minimization to choose between eviction policies, showing that ML can indeed improve cache replacement. The paper shows that LeCaR, using only two fundamental eviction policies (LRU and LFU), outperforms ARC by more than 18x when the cache size is small relative to the size of the working set.
We implemented it in C with the following parameters:
- Cache size = 12 blocks
- History size = 4 entries
- Learning rate = 0.45
- Discount rate = 0.64