[Interactive visualization: memoized computation of fib(8), showing a cache for fib(0)…fib(8), a total-call counter, a cache-hit counter, and the share of work saved. Without memoization, fib(n) recomputes subproblems exponentially; with memoization, each unique value is solved exactly once.]

Memoization

Cache results to avoid recomputing the same subproblem twice

Visual by thisgirltech

Intermediate · Concept

Memoization is simple: before computing a result, check if you've computed it before. If yes, return the cached value instantly. If no, compute it, store it, return it. That's the entire technique.
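That check-compute-store loop can be sketched in a few lines (a minimal TypeScript sketch; the `cache`/`fib` names are illustrative):

```typescript
// Sketch of the pattern: check the cache, compute on a miss, store, return.
const cache = new Map<number, number>();

function fib(n: number): number {
  if (cache.has(n)) return cache.get(n)!;             // computed before? return instantly
  const result = n < 2 ? n : fib(n - 1) + fib(n - 2); // otherwise compute it
  cache.set(n, result);                               // store it
  return result;                                      // return it
}
```

Calling `fib(8)` returns 21 and leaves nine entries, fib(0) through fib(8), in the cache.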

It transforms recursive algorithms with overlapping subproblems from exponential to polynomial time. Fibonacci goes from O(2ⁿ) to O(n). Coin change goes from exponential to O(n·k), where n is the target amount and k the number of coin denominations. It's the gateway concept to dynamic programming.
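For concreteness, here is a hedged sketch of the coin-change case (the `minCoins` name and signature are illustrative, not a specific library's API). Memoizing on the remaining amount means each of the n amounts is solved once, trying each of the k coins, which gives the O(n·k) bound:

```typescript
// Fewest coins summing to `amount`, memoized on the remaining amount.
function minCoins(
  coins: number[],
  amount: number,
  memo: Map<number, number> = new Map()
): number {
  if (amount === 0) return 0;       // nothing left to make
  if (amount < 0) return Infinity;  // dead end: overshot the target
  if (memo.has(amount)) return memo.get(amount)!;
  let best = Infinity;
  for (const c of coins) {          // k choices per amount
    best = Math.min(best, 1 + minCoins(coins, amount - c, memo));
  }
  memo.set(amount, best);           // each amount solved exactly once
  return best;
}
```

For example, `minCoins([1, 5, 10], 12)` is 3 (one 10 and two 1s); an unreachable amount comes back as `Infinity`.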

Store once, reuse many

Every unique input is computed exactly once. All subsequent calls with that input return the cached result in O(1). No redundant work.
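That "computed once, O(1) thereafter" contract can be packaged as a reusable wrapper. A sketch, assuming the arguments serialize stably via JSON:

```typescript
// Wrap any function so repeat calls with the same arguments hit the cache.
function memoize<A extends unknown[], R>(fn: (...args: A) => R): (...args: A) => R {
  const cache = new Map<string, R>();
  return (...args: A): R => {
    const key = JSON.stringify(args);                 // cache key from the inputs
    if (!cache.has(key)) cache.set(key, fn(...args)); // compute exactly once
    return cache.get(key)!;                           // later calls: one O(1) lookup
  };
}
```

One design caveat: wrapping a recursive function this way only caches top-level calls; the inner recursion must also go through the wrapped reference for the cache to help.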

Overlapping subproblems

Memoization only helps when the same subproblem recurs. In a naive fib(5), fib(2) is computed 3 times — the same subproblem, solved from scratch on each call. Without a cache, all 3 run fully.

Top-down DP

Memoization is top-down dynamic programming — start from the top (fib(n)) and recurse down, caching as you go. Bottom-up DP fills a table from the smallest case upward.
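The bottom-up counterpart, for comparison: it computes the same table as the memoized version, but fills it iteratively from the smallest case (a sketch; `fibBottomUp` is an illustrative name):

```typescript
// Bottom-up DP: no recursion and no cache misses; fill the table in order.
function fibBottomUp(n: number): number {
  if (n < 2) return n;
  const table: number[] = [0, 1];           // table[i] = fib(i)
  for (let i = 2; i <= n; i++) {
    table[i] = table[i - 1] + table[i - 2]; // subproblems already in the table
  }
  return table[n];
}
```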

Time-space trade-off

Memoization pays O(n) space (the cache) to save exponential time. Always ask: what does the cache store, and how large can it get?

TIP

Watch the call counter in the naive run. For fib(7) it reaches 41. In the memoized run it stops at 8. Same answer, same algorithm structure — the only difference is one Map lookup at the start of each call.
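Those numbers can be reproduced with two instrumented versions (a sketch; it assumes, as the counter seems to, that the memoized count includes only actual computations, not cache hits):

```typescript
let naiveCalls = 0;
function fibNaive(n: number): number {
  naiveCalls++;                      // every invocation counts
  return n < 2 ? n : fibNaive(n - 1) + fibNaive(n - 2);
}

let memoCalls = 0;
const memoTable = new Map<number, number>();
function fibMemo(n: number): number {
  if (memoTable.has(n)) return memoTable.get(n)!; // cache hit: no computation
  memoCalls++;                       // counts only real computations
  const r = n < 2 ? n : fibMemo(n - 1) + fibMemo(n - 2);
  memoTable.set(n, r);
  return r;
}
```

Running `fibNaive(7)` drives `naiveCalls` to 41; `fibMemo(7)` stops `memoCalls` at 8, one per unique input fib(0)…fib(7).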

COLOUR LEGEND

Computing — not in cache
Cache hit — instant return ⚡
Storing result in cache