An array stores elements in contiguous memory — one block of addresses, side by side. This layout is what makes arrays so fast: to access any element, the CPU computes its address as base + (index × element_size) and jumps directly there in one step. No searching, no traversal.
Arrays are the most fundamental data structure in computing. Almost every other structure — stacks, queues, heaps, hash tables — is built on top of arrays or directly compared against them. Understanding array trade-offs is the foundation of data structure reasoning.
Any element can be read or written in constant time using its index. The same address formula, base + (i × element_size), makes this work. This is the defining superpower of arrays and why they're used everywhere.
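A minimal C sketch of that arithmetic (the array contents and the index are arbitrary illustration values): computing the address by hand on a byte pointer yields the same pointer the compiler produces for arr[i], which is why access time does not depend on i.

```c
#include <stdio.h>

int main(void) {
    int arr[5] = {10, 20, 30, 40, 50};
    size_t i = 3;

    /* arr[i] is defined as *(arr + i); in bytes, that is base + i * sizeof(int). */
    char *base     = (char *)arr;
    int  *computed = (int *)(base + i * sizeof(int));

    printf("&arr[%zu] = %p\n", i, (void *)&arr[i]);
    printf("computed  = %p\n", (void *)computed);   /* same address */
    printf("value     = %d\n", *computed);          /* prints 40 */
    return 0;
}
```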
Array elements are stored in adjacent memory addresses, so a single cache line holds several neighboring elements and the hardware prefetcher can pull in the next lines ahead of time. Reading arr[0], arr[1], arr[2] in sequence is therefore extremely cache-friendly and fast in practice.
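A rough C sketch of that effect (the array size and stride are arbitrary choices, and actual timings vary by machine): both passes sum exactly the same elements, but the sequential pass streams through cache lines in order while the strided pass wastes most of each line it loads.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N ((size_t)1 << 24)   /* ~16M ints, large enough to exceed typical caches */
#define STRIDE 16             /* 16 * 4 bytes = 64 bytes, roughly one cache line */

/* Sum every element, visiting them with the given stride. */
static long long sum_with_stride(const int *a, size_t n, size_t stride) {
    long long total = 0;
    for (size_t start = 0; start < stride; start++)
        for (size_t i = start; i < n; i += stride)
            total += a[i];
    return total;
}

int main(void) {
    int *a = malloc(N * sizeof *a);
    if (!a) return 1;
    for (size_t i = 0; i < N; i++) a[i] = 1;

    clock_t t0 = clock();
    long long s1 = sum_with_stride(a, N, 1);        /* sequential: cache-friendly */
    clock_t t1 = clock();
    long long s2 = sum_with_stride(a, N, STRIDE);   /* strided: same work, more cache misses */
    clock_t t2 = clock();

    printf("sequential: %lld in %.3fs\n", s1, (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("strided:    %lld in %.3fs\n", s2, (double)(t2 - t1) / CLOCKS_PER_SEC);
    free(a);
    return 0;
}
```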
Inserting at index i requires shifting every element from index i onward one slot to the right; deleting at index i requires shifting everything after it one slot to the left. For large arrays this is an O(n) operation. Only appending to the end, where nothing has to move, is O(1).
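A minimal C sketch of insertion into a fixed-size array (insert_at is a hypothetical helper, and the caller is assumed to have spare capacity): the memmove call is the O(n) shift made explicit.

```c
#include <stdio.h>
#include <string.h>

/* Insert value at index i, shifting a[i..len-1] one slot to the right.
   Assumes the caller guarantees capacity > len. */
static void insert_at(int *a, size_t len, size_t i, int value) {
    memmove(&a[i + 1], &a[i], (len - i) * sizeof a[0]);  /* O(n - i) copy */
    a[i] = value;
}

int main(void) {
    int a[6] = {10, 20, 30, 40, 50};   /* capacity 6, length 5 */
    insert_at(a, 5, 0, 5);             /* worst case: every element moves */

    for (size_t i = 0; i < 6; i++) printf("%d ", a[i]);
    printf("\n");                      /* 5 10 20 30 40 50 */
    return 0;
}
```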
Low-level arrays (C arrays, Java arrays) have a fixed size set at creation. Dynamic arrays (Python lists, JavaScript arrays) resize automatically, typically doubling capacity when full, which gives O(1) amortized append.
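A minimal sketch of that doubling strategy in C (the Vec type and vec_push are made-up names for illustration; real dynamic-array implementations differ in details such as the growth factor):

```c
#include <stdio.h>
#include <stdlib.h>

/* A minimal growable int array: capacity doubles when full,
   so a long run of pushes costs O(1) per push on average (amortized). */
typedef struct {
    int    *data;
    size_t  len;
    size_t  cap;
} Vec;

static void vec_push(Vec *v, int value) {
    if (v->len == v->cap) {
        size_t new_cap = v->cap ? v->cap * 2 : 4;               /* double, starting at 4 */
        int *p = realloc(v->data, new_cap * sizeof *v->data);   /* copies existing elements */
        if (!p) { perror("realloc"); exit(1); }
        v->data = p;
        v->cap  = new_cap;
    }
    v->data[v->len++] = value;
}

int main(void) {
    Vec v = {0};
    for (int i = 0; i < 100; i++) vec_push(&v, i);
    printf("len = %zu, cap = %zu, last = %d\n", v.len, v.cap, v.data[v.len - 1]);
    free(v.data);
    return 0;
}
```

Doubling keeps the total number of element copies across n pushes proportional to n, which is where the amortized O(1) figure comes from.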
TIP: Use the index input to access specific positions — notice how it highlights instantly with no traversal. Then try inserting at index 0 and watch every element shift right. That shift is the O(n) cost made visible.