A hash table stores key-value pairs and retrieves them in O(1) average time — regardless of how many entries are stored. It achieves this by using a hash function to convert a key into an array index, then storing the value there.
The magic is in the math: a good hash function distributes keys evenly across buckets so lookups, insertions, and deletions are all constant time on average. No searching, no sorting — just compute the index and go directly there.
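The key-to-index step can be sketched in a few lines. This is a minimal illustration, not any particular implementation; the bucket count of 8 and the name `bucket_index` are assumptions for the example.

```python
NUM_BUCKETS = 8  # assumed table size for illustration

def bucket_index(key):
    # hash() turns the key into an integer; modulo maps that
    # integer into the valid range of array indices [0, NUM_BUCKETS).
    return hash(key) % NUM_BUCKETS
```

Note that in Python, string hashes are randomized per process, so the same string key can land in a different bucket from run to run, but it always lands in the same bucket within one run, which is all a hash table needs.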
get, set, and delete all start the same way: hash the key to a bucket index in O(1) and go straight there. As long as the load factor stays low and the hash function distributes keys evenly, every operation is constant time on average.
Two different keys can hash to the same bucket — this is a collision. It's handled via chaining (linked list per bucket) or open addressing. Collisions don't break correctness, but they slow things down.
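Chaining can be sketched as a table whose buckets are lists of (key, value) pairs, so colliding keys simply share a bucket. This is a simplified sketch (no resizing, fixed bucket count) with assumed names, not a production implementation:

```python
class ChainedHashTable:
    def __init__(self, num_buckets=8):
        # Each bucket is a list (the "chain") of (key, value) pairs.
        self.buckets = [[] for _ in range(num_buckets)]

    def _index(self, key):
        return hash(key) % len(self.buckets)

    def set(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                  # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))       # new key: append to the chain

    def get(self, key):
        # Colliding keys live in the same bucket, so we scan the chain.
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        raise KeyError(key)

    def delete(self, key):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                del bucket[i]
                return
        raise KeyError(key)
```

The chain scan is why collisions cost time: get, set, and delete each walk the colliding bucket, so a bucket with many entries degrades toward a linear search.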
Load factor = entries / buckets. As it rises toward 1.0, collision probability increases and performance degrades. Most implementations resize (rehash) when load factor exceeds 0.7–0.75.
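The resize trigger can be sketched on top of a chained table. The 0.75 threshold and doubling strategy below are assumptions chosen to match the typical range the text mentions; real implementations vary:

```python
class ResizingTable:
    LOAD_FACTOR_LIMIT = 0.75  # assumed threshold for this sketch

    def __init__(self):
        self.buckets = [[] for _ in range(8)]
        self.size = 0  # number of stored entries

    def set(self, key, value):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)
                return
        bucket.append((key, value))
        self.size += 1
        if self.size / len(self.buckets) > self.LOAD_FACTOR_LIMIT:
            self._resize()

    def _resize(self):
        # Double the bucket count, then re-insert every entry:
        # each index depends on the bucket count, so every key may move.
        old = self.buckets
        self.buckets = [[] for _ in range(2 * len(old))]
        for bucket in old:
            for key, value in bucket:
                self.buckets[hash(key) % len(self.buckets)].append((key, value))

    def get(self, key):
        for k, v in self.buckets[hash(key) % len(self.buckets)]:
            if k == key:
                return v
        raise KeyError(key)
```

Rehashing is O(n), but because it doubles capacity each time it happens rarely, so insertion stays O(1) amortized.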
Hash tables need keys that can be consistently converted to a number — strings, numbers, and tuples work. Mutable objects like arrays are risky as keys because their hash can change.
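Python's built-in dict makes this rule concrete: immutable keys like strings, numbers, and tuples are accepted, while a mutable list is rejected outright rather than risking a key whose hash drifts after insertion.

```python
table = {}
table["name"] = "ok"        # strings and numbers are hashable
table[("x", 1)] = "ok"      # a tuple of immutables is hashable too

try:
    table[[1, 2]] = "oops"  # lists are mutable, hence unhashable
except TypeError:
    print("lists cannot be dict keys")
```

Rejecting mutable keys up front is safer than the alternative: if a key's hash changed after insertion, the entry would sit in the wrong bucket and lookups for it would silently fail.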
TIP: Try adding entries with different keys and watch which bucket each lands in. Then try adding two keys that hash to the same bucket — you'll see the collision warning appear. Notice how the load factor changes as you add more entries.