LRU Cache
Two Ways to Implement LRU Cache
1️⃣ Easy Way (using LinkedHashMap)
- In the code we just wrote with LinkedHashMap, we don’t manually write a doubly linked list.
- That’s because LinkedHashMap internally already uses a doubly linked list to maintain order (either insertion order or access order).
- So when we write: new LinkedHashMap<>(capacity, 0.75f, true)
That true tells it to maintain the doubly linked list in access order.
- We don’t see it in code because it’s hidden inside the JDK implementation.
👉 This is why the LinkedHashMap solution is the easy shortcut.
2️⃣ Full Custom Way (HashMap + Doubly Linked List)
- In FAANG-style interviews, sometimes they say: “Don’t use LinkedHashMap, implement it yourself.”
- In that case, you must build your own doubly linked list:
  - HashMap<key, Node> → fast lookup
  - Node → custom class with key, value, prev, next
  - head → most recently used
  - tail → least recently used
👉 In this approach, you see the doubly linked list explicitly in your code (like the longer version I gave earlier).
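The building blocks listed above can be sketched like this (a minimal sketch; the class and field names are illustrative, not from the original):

```java
import java.util.HashMap;
import java.util.Map;

// Node: one entry of the doubly linked list
class Node {
    int key, value;   // key is stored so we can remove it from the map on eviction
    Node prev, next;  // doubly linked list pointers

    Node(int key, int value) {
        this.key = key;
        this.value = value;
    }
}

// Skeleton: the pieces the custom LRU cache is built from
class Skeleton {
    Map<Integer, Node> map = new HashMap<>(); // O(1) lookup: key -> node
    Node head; // most recently used end
    Node tail; // least recently used end
}
```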
Quick Analogy
- LinkedHashMap way = buying a ready-made engine (internally it already has gears, pistons → i.e., a doubly linked list).
- Custom way = building your own engine piece by piece (you write HashMap + your own doubly linked list).
Both are valid. ✅
Way 1 – Using LinkedHashMap (Shortcut)
Concept
- LinkedHashMap internally uses a doubly linked list to keep track of element order.
- We can set it to insertion order (default) or access order (true).
- With access order, whenever you get or put a key, that entry is moved to the end (most recent).
- The method removeEldestEntry() allows us to evict the oldest (least recently used) entry when size exceeds capacity.
1️⃣ Start with Definition & Use Case
👉 "LRU stands for Least Recently Used.
It’s a cache mechanism that keeps only a fixed number of items.
When it’s full and a new item comes, it removes the least recently used item.
This is useful in browser cache, DB cache, OS paging where memory is limited."
2️⃣ State the Requirements
👉 "We need two operations with O(1) time complexity:
- get(key) → return value if present, else -1.
- put(key, value) → insert or update value, and if capacity is full, remove the least recently used entry."
3️⃣ Identify the Problem with Naive Approach
👉 "If I just use an array or LinkedList, removing least recently used will take O(n). That’s too slow.
So, we need a combination of fast lookup + fast update of order."
4️⃣ Propose the Data Structures
👉 "I’ll use:
- HashMap<Key, Node> → for O(1) lookup of cache items.
- Doubly Linked List → to maintain order of usage.
  - Head → Most Recently Used
  - Tail → Least Recently Used"
5️⃣ Explain Operations
- Get(key):
  - If not found → return -1.
  - If found → move that node to head (most recent).
- Put(key, value):
  - If key exists → update value & move node to head.
  - If key doesn’t exist:
    - If cache full → remove from tail (LRU).
    - Insert new node at head.
👉 This ensures both operations run in O(1).
6️⃣ Show Example Walkthrough
👉 "Say capacity = 3.
- Insert (1, A), (2, B), (3, C) → Cache = [3:C, 2:B, 1:A]
- Access key 2 → move 2 to head → [2:B, 3:C, 1:A]
- Insert (4, D) → remove least recent (1:A) → [4:D, 2:B, 3:C]"
(If you can draw arrows in the notebook or on whiteboard → even better.)
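The same walkthrough can be reproduced in code with an access-ordered LinkedHashMap (a sketch; note that LinkedHashMap prints the least recently used entry first, the reverse of the head-first notation above):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class WalkthroughDemo {
    public static void main(String[] args) {
        // Anonymous subclass: accessOrder=true plus an eviction rule for capacity 3
        Map<Integer, String> cache = new LinkedHashMap<Integer, String>(3, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, String> eldest) {
                return size() > 3; // capacity = 3
            }
        };
        cache.put(1, "A");
        cache.put(2, "B");
        cache.put(3, "C");
        System.out.println(cache); // {1=A, 2=B, 3=C}
        cache.get(2);              // access 2 -> moves to most-recent end
        System.out.println(cache); // {1=A, 3=C, 2=B}
        cache.put(4, "D");         // full -> evicts least recent (1=A)
        System.out.println(cache); // {3=C, 2=B, 4=D}
    }
}
```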
7️⃣ Mention Alternatives
👉 "In Java, I can also use LinkedHashMap with accessOrder=true.
It automatically maintains access order and removes the eldest entry when capacity is exceeded."
8️⃣ Wrap Up
👉 "So, the LRU cache is implemented using HashMap + Doubly Linked List for O(1) performance,
and it’s used widely in real systems where recent data is more valuable than older data."
✅ This step-by-step explanation is short, logical, and shows depth – perfect for interviews.
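The interview steps above can be turned into a full custom implementation (a sketch of the HashMap + doubly linked list approach; method and field names are my own, using dummy head/tail sentinels to simplify pointer handling):

```java
import java.util.HashMap;
import java.util.Map;

class CustomLRUCache {
    private static class Node {
        int key, value;
        Node prev, next;
        Node(int key, int value) { this.key = key; this.value = value; }
    }

    private final int capacity;
    private final Map<Integer, Node> map = new HashMap<>();
    private final Node head = new Node(0, 0); // dummy: most recently used side
    private final Node tail = new Node(0, 0); // dummy: least recently used side

    CustomLRUCache(int capacity) {
        this.capacity = capacity;
        head.next = tail;
        tail.prev = head;
    }

    // Unlink a node from the list: O(1)
    private void remove(Node n) {
        n.prev.next = n.next;
        n.next.prev = n.prev;
    }

    // Insert a node right after the dummy head: O(1)
    private void addToHead(Node n) {
        n.next = head.next;
        n.prev = head;
        head.next.prev = n;
        head.next = n;
    }

    public int get(int key) {
        Node n = map.get(key);
        if (n == null) return -1;
        remove(n);       // accessing a key makes it most recent
        addToHead(n);
        return n.value;
    }

    public void put(int key, int value) {
        Node n = map.get(key);
        if (n != null) { // existing key: update value, move to head
            n.value = value;
            remove(n);
            addToHead(n);
            return;
        }
        if (map.size() == capacity) { // full: evict LRU from the tail
            Node lru = tail.prev;
            remove(lru);
            map.remove(lru.key);
        }
        Node fresh = new Node(key, value);
        map.put(key, fresh);
        addToHead(fresh);
    }
}
```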
1️⃣ Normal LinkedHashMap (Insertion Order)
By default, LinkedHashMap keeps elements in the order they were inserted.
Example:
Map<Integer, String> map = new LinkedHashMap<>();
map.put(1, "A");
map.put(2, "B");
map.put(3, "C");
System.out.println(map); // {1=A, 2=B, 3=C}
➡️ Even if you call get(2), the order does not change.
It always stays {1=A, 2=B, 3=C}.
2️⃣ LinkedHashMap with Access Order (true)
If you create it like this:
Map<Integer, String> map = new LinkedHashMap<>(3, 0.75f, true);
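A small sketch of what access order changes:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {
    public static void main(String[] args) {
        // third argument true -> maintain access order instead of insertion order
        Map<Integer, String> map = new LinkedHashMap<>(3, 0.75f, true);
        map.put(1, "A");
        map.put(2, "B");
        map.put(3, "C");
        map.get(1);              // accessing 1 moves it to the end (most recent)
        System.out.println(map); // {2=B, 3=C, 1=A}
    }
}
```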
Why Important for LRU Cache?
- In an LRU Cache, we need to know the least recently used item.
- With access order, the first element in the LinkedHashMap is always the least recently used, and the last element is the most recently used.
- This allows us to easily remove the eldest entry when the cache is full.
👉 "Access order means that in a LinkedHashMap, whenever we get or put an entry, that entry is moved to the end of the map. This way, the map keeps track of usage order, which is exactly what we need to implement an LRU cache."
In new LinkedHashMap<>(capacity, 0.75f, true), the 0.75f is the load factor.
What is load factor?
- Definition: The fraction at which the map resizes (rehashes).
- Threshold: threshold = current_capacity × load_factor
- With 0.75f, when the number of entries exceeds 75% of the current bucket capacity, the map grows (typically doubles) and rehashes its entries.
Why f?
- 0.75f is a float literal (the f makes it a float).
- Without f (i.e., 0.75), it’s a double. The constructor expects a float, so we write 0.75f.
Defaults and why 0.75?
- The default load factor in Java maps is 0.75f, a good balance between:
  - Space (not too many empty buckets)
  - Speed (keeps collisions low → O(1) average operations)
Quick example
If the internal capacity is 16:
- Threshold = 16 × 0.75 = 12
- When you insert the 13th entry, the map resizes and rehashes.
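The arithmetic above can be checked directly (a trivial sketch):

```java
public class LoadFactorDemo {
    public static void main(String[] args) {
        int capacity = 16;
        float loadFactor = 0.75f;
        // threshold = capacity x load factor; the resize happens on the 13th insert
        int threshold = (int) (capacity * loadFactor);
        System.out.println(threshold); // 12
    }
}
```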
In your LRU cache
- The first parameter (capacity) in super(capacity, 0.75f, true) is just the initial internal capacity, not the LRU limit.
- The LRU limit is enforced by removeEldestEntry(...) returning true when size() > capacity.
When would you change it?
- Higher load factor (e.g., 0.9f): saves memory but may slow lookups (more collisions).
- Lower load factor (e.g., 0.5f): faster lookups but uses more memory.
- For almost all cases (including LRU), keep 0.75f.
👉 “My cache should only store 3 entries at a time. If I add a 4th one, the least recently used entry should be removed.”
Way 1 – Using LinkedHashMap (Shortcut): Implementation
import java.util.*;

class LRUCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LRUCache(int capacity) {
        // initialCapacity, loadFactor, accessOrder
        super(capacity, 0.75f, true);
        this.capacity = capacity;
    }

    // Called after every put
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // remove eldest if capacity exceeded
    }

    public static void main(String[] args) {
        LRUCache<Integer, String> cache = new LRUCache<>(3);
        cache.put(1, "A");
        cache.put(2, "B");
        cache.put(3, "C");
        System.out.println(cache); // {1=A, 2=B, 3=C}
        cache.get(2);              // Access 2 -> becomes most recent
        System.out.println(cache); // {1=A, 3=C, 2=B}
        cache.put(4, "D");         // Exceeds capacity -> remove 1
        System.out.println(cache); // {3=C, 2=B, 4=D}
    }
}
Explanation (Easy to Say in Interview)
- "I used LinkedHashMap with accessOrder=true.
- This internally maintains a doubly linked list of entries ordered by recent access.
- On each get() or put(), the key is moved to the end (most recent).
- When the cache exceeds capacity, removeEldestEntry() automatically removes the oldest one.
- Both get and put are O(1)."
import java.util.*;

class LRUCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LRUCache(int capacity) {
        // initialCapacity, loadFactor, accessOrder
        super(capacity, 0.75f, true);
        this.capacity = capacity;
    }

    // Called after every put
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // remove eldest if capacity exceeded
    }

    public void displayLRU() {
        System.out.println("cache " + this);
        if (!isEmpty()) {
            K lru = keySet().iterator().next(); // first key = least recently used
            System.out.println(lru);
        } else {
            System.out.println("Cache is empty");
        }
    }

    public void printMRU() {
        if (!isEmpty()) {
            K mru = null;
            for (K k : keySet()) { // iterate to the last key = most recently used
                mru = k;
            }
            System.out.println("MRU = " + mru);
        } else {
            System.out.println("Cache is empty");
        }
    }

    public static void main(String[] args) {
        LRUCache<Integer, String> cache = new LRUCache<>(3);
        cache.put(1, "A");
        cache.put(2, "B");
        cache.put(3, "C");
        System.out.println(cache); // {1=A, 2=B, 3=C}
        cache.get(2);              // Access 2 -> becomes most recent
        System.out.println(cache); // {1=A, 3=C, 2=B}
        cache.put(4, "D");         // Exceeds capacity -> remove 1
        System.out.println(cache); // {3=C, 2=B, 4=D}
        cache.displayLRU();
        cache.printMRU();
    }
}
import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCache extends LinkedHashMap<Integer, Integer> {
    int capacity;
    Map.Entry<Integer, Integer> removeLru; // last evicted entry

    public LRUCache(int capacity) {
        super(capacity, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    public boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
        boolean shouldRemove = size() > capacity;
        if (shouldRemove) {
            removeLru = eldest; // remember what was evicted
        }
        return shouldRemove;
    }

    Map.Entry<Integer, Integer> getRemoveLru() {
        return removeLru;
    }

    Map.Entry<Integer, Integer> getMru() {
        Map.Entry<Integer, Integer> mru = null;
        for (Map.Entry<Integer, Integer> entry : this.entrySet()) {
            mru = entry; // last entry in access order = most recently used
        }
        return mru;
    }
}
Usage (e.g., inside a main method):
LRUCache cache1 = new LRUCache(2);
cache1.put(1, 100);
cache1.put(2, 101);
cache1.put(3, 103); // exceeds capacity -> evicts key 1

Map.Entry<Integer, Integer> removeLast = cache1.getRemoveLru();
if (removeLast != null) {
    System.out.println("Removed: " + removeLast.getKey() + " -> " + removeLast.getValue());
}

Map.Entry<Integer, Integer> mru = cache1.getMru();
System.out.println("Current Cache1: " + cache1);
System.out.println("Current MRU: " + mru.getKey() + " -> " + mru.getValue());