146. LRU Cache (Medium)
Design a data structure that follows the constraints of a Least Recently Used (LRU) cache.
Implement the LRUCache class:
LRUCache(int capacity): Initialize the LRU cache with positive size capacity.
int get(int key): Return the value of the key if the key exists, otherwise return -1.
void put(int key, int value): Update the value of the key if the key exists. Otherwise, add the key-value pair to the cache. If the number of keys exceeds the capacity from this operation, evict the least recently used key.
The functions get and put must each run in O(1) average time complexity.
Example 1:
Input
["LRUCache", "put", "put", "get", "put", "get", "put", "get", "get", "get"]
[[2], [1, 1], [2, 2], [1], [3, 3], [2], [4, 4], [1], [3], [4]]
Output
[null, null, null, 1, null, -1, null, -1, 3, 4]
Explanation
LRUCache lRUCache = new LRUCache(2);
lRUCache.put(1, 1); // cache is {1=1}
lRUCache.put(2, 2); // cache is {1=1, 2=2}
lRUCache.get(1); // return 1
lRUCache.put(3, 3); // LRU key was 2, evicts key 2, cache is {1=1, 3=3}
lRUCache.get(2); // returns -1 (not found)
lRUCache.put(4, 4); // LRU key was 1, evicts key 1, cache is {4=4, 3=3}
lRUCache.get(1); // return -1 (not found)
lRUCache.get(3); // return 3
lRUCache.get(4); // return 4
Approach
Task: Design and implement a data structure that satisfies the LRU (Least Recently Used) cache constraints, with get and put each running in O(1) average time.
This problem belongs to the Hash Table + Doubly Linked List category.
We use a hash table to record each added key, but instead of storing the raw value, each entry points to a doubly linked list node that wraps the value; the list order tracks recency.
We use two dummy nodes, head and tail, sitting just outside the LRU and MRU ends respectively:
Hash table: cache[key] -> Node Object
Doubly linked list:
head <-> LRU <-> ... <-> MRU <-> tail
The LRU node can be found via head.next; the MRU node can be found via tail.prev.
First, we design a doubly linked list node Node.
Then, implement a remove function to detach a given node. The advantage of a doubly linked list is direct access to both the previous and next nodes: cross-link them to each other, and the current node drops out in O(1).
Next, implement an insert function to add a node at the MRU (Most Recently Used) position. Through the dummy tail node we can reach the current MRU node (tail.prev) and splice the new node in just before tail.
For the get function, we first check if a node with the same key exists in the hash table. If it does, we remove the current node and add it to the MRU, then return the value. If not, return -1.
For the put function, we first check if a node with the same key exists in the hash table. If it does, simply remove it. After that, create a new key-value node and add it to the MRU. Finally, determine if the maximum capacity has been exceeded. If so, locate the LRU node and remove it.
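The detach/splice steps described above can be sketched in isolation before committing to the full class. The names _Node, detach, and attach_mru below are illustrative, not part of the final implementation:

```python
# Minimal sketch of the linked-list primitives with dummy head/tail nodes.
class _Node:
    def __init__(self, key, val):
        self.key, self.val = key, val
        self.prev = self.next = None

head, tail = _Node(0, 0), _Node(0, 0)
head.next, tail.prev = tail, head   # empty list: head <-> tail

def detach(node):
    # Cross-link the neighbors; the node drops out in O(1)
    node.prev.next = node.next
    node.next.prev = node.prev

def attach_mru(node):
    # Splice the node in just before tail (the MRU slot)
    prev = tail.prev
    prev.next = node
    node.prev = prev
    node.next = tail
    tail.prev = node

a, b = _Node(1, 10), _Node(2, 20)
attach_mru(a)          # head <-> a <-> tail
attach_mru(b)          # head <-> a <-> b <-> tail
print(head.next.key)   # 1  (a is the LRU)
print(tail.prev.key)   # 2  (b is the MRU)
detach(a)              # head <-> b <-> tail
print(head.next.key)   # 2
```

Because both pointers of every neighbor are reachable directly, neither operation needs a traversal, which is what makes the overall O(1) bound possible.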
Implementation
# Define the doubly linked list Node structure
class Node:
    def __init__(self, key, val):
        self.prev = None   # Points to previous node
        self.next = None   # Points to next node
        self.key = key     # Kept so the node can be deleted from the dictionary
        self.val = val     # The cached value

class LRUCache:
    def __init__(self, capacity: int):
        self.cap = capacity   # Max capacity
        self.cache = {}       # Hash map: key -> Node
        # Create dummy head and tail nodes to simplify insertion and deletion
        self.head = Node(0, 0)
        self.tail = Node(0, 0)
        self.head.next = self.tail
        self.tail.prev = self.head

    # Remove a node from the linked list
    def remove(self, node):
        prev, nxt = node.prev, node.next
        prev.next = nxt
        nxt.prev = prev

    # Insert a node just before tail (the MRU position)
    def insert(self, node):
        prev, nxt = self.tail.prev, self.tail
        prev.next = node
        node.prev = prev
        node.next = nxt
        nxt.prev = node

    # Get value: if the key exists, move its node to MRU and return the value
    def get(self, key: int) -> int:
        if key in self.cache:
            self.remove(self.cache[key])   # Remove from current position
            self.insert(self.cache[key])   # Re-insert at MRU
            return self.cache[key].val     # Return value
        return -1                          # Return -1 if not found

    # Put value: update and move to MRU; evict LRU if capacity exceeded
    def put(self, key: int, value: int) -> None:
        if key in self.cache:
            self.remove(self.cache[key])   # Delete the old node if it exists
        # Create a new node and add it at MRU
        self.cache[key] = Node(key, value)
        self.insert(self.cache[key])
        # Over capacity: remove LRU (first node after head)
        if len(self.cache) > self.cap:
            lru = self.head.next           # The oldest node
            self.remove(lru)               # Remove from linked list
            del self.cache[lru.key]        # Delete from hash map

// ES6 Map inherently maintains insertion order
// For get and put, re-adding to the Map moves it to the end (MRU)
/**
 * @param {number} capacity
 */
var LRUCache = function(capacity) {
    this.cache = new Map();
    this.capacity = capacity;
};

/**
 * @param {number} key
 * @return {number}
 */
LRUCache.prototype.get = function(key) {
    if (!this.cache.has(key)) return -1;
    const value = this.cache.get(key);
    this.cache.delete(key);       // Delete and re-insert to refresh recency
    this.cache.set(key, value);
    return value;
};

/**
 * @param {number} key
 * @param {number} value
 * @return {void}
 */
LRUCache.prototype.put = function(key, value) {
    if (this.cache.has(key)) {    // has(), not get(): a stored value of 0 is falsy
        this.cache.delete(key);
    } else if (this.cache.size >= this.capacity) {
        const oldestKey = this.cache.keys().next().value;   // First key = LRU
        this.cache.delete(oldestKey);
    }
    this.cache.set(key, value);
};

Complexity Analysis
- Time Complexity: O(1) for both get and put
- Space Complexity: O(capacity) to store the cache
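The insertion-order trick used by the ES6 Map version has a direct Python analogue: collections.OrderedDict keeps insertion order and offers O(1) move_to_end and popitem(last=False). A minimal sketch (the class name LRUCacheOD is illustrative), replaying the worked example from the problem statement:

```python
from collections import OrderedDict

class LRUCacheOD:
    def __init__(self, capacity: int):
        self.cap = capacity
        self.od = OrderedDict()   # front = LRU, back = MRU

    def get(self, key: int) -> int:
        if key not in self.od:
            return -1
        self.od.move_to_end(key)  # mark as MRU
        return self.od[key]

    def put(self, key: int, value: int) -> None:
        if key in self.od:
            self.od.move_to_end(key)
        self.od[key] = value
        if len(self.od) > self.cap:
            self.od.popitem(last=False)   # evict LRU from the front

c = LRUCacheOD(2)
c.put(1, 1); c.put(2, 2)
print(c.get(1))    # 1
c.put(3, 3)        # evicts key 2
print(c.get(2))    # -1
c.put(4, 4)        # evicts key 1
print(c.get(1), c.get(3), c.get(4))   # -1 3 4
```

The behavior and complexity match the hand-rolled version: an OrderedDict is itself a hash map layered over a doubly linked list, so this is the same design with the bookkeeping delegated to the standard library.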