What Are Dynamic Arrays? How Do They Differ From Traditional Arrays?
This article is a continuation of "Introduction to the array data structure" and covers sketches, memory diagrams, and the strengths and weaknesses of dynamic arrays, with examples.
Introduction
Arrays are fixed in size: you need to specify the number of elements your array will hold ahead of time.
Converting a fixed-size array into one that resizes itself is the idea behind complex data structures like ArrayList and other growable arrays. Dynamic arrays also go by other names: mutable arrays, resizable arrays, and so on.
A dynamic array resizes, or expands, when you add more elements than the capacity of the array.
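Java's ArrayList, mentioned above, is a good example of this behavior: the initial capacity is only a starting point, and the list grows on its own as you add elements. A quick sketch:

```java
import java.util.ArrayList;
import java.util.List;

public class DynamicArrayDemo {
    public static void main(String[] args) {
        // Capacity hint of 5; the backing array is resized
        // automatically as more elements arrive.
        List<Integer> numbers = new ArrayList<>(5);

        for (int i = 0; i < 20; i++) {
            numbers.add(i); // no manual resizing needed
        }

        System.out.println(numbers.size()); // prints 20
    }
}
```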
Strengths
Fast lookups
Retrieving the element at a given index takes O(1) time, regardless of the array's length.
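This is because a lookup is a single address computation: base address plus index times element size. A tiny illustration in Java:

```java
public class LookupDemo {
    public static void main(String[] args) {
        int[] small = new int[10];
        int[] large = new int[1_000_000];

        // Both lookups are one address computation
        // (base address + index * element size), so the
        // array's length plays no role in the cost.
        System.out.println(small[7] + " " + large[999_999]);
    }
}
```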
Array size
You can add as many elements as you need; there is no fixed capacity limit. Dynamic arrays expand to hold them.
Cache friendly
Just like arrays, dynamic arrays place items right next to each other in memory, making efficient use of caches.
Weaknesses
Size-doubling appends
Let us consider an array with a capacity of 5 elements. If we want to store more elements than that, the array has to double its capacity: create a new array, copy over the old elements, and then add the new ones. That copy takes O(n) time.
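Here is a minimal sketch of that grow step in Java, assuming a doubling growth factor (real implementations may grow by a different factor):

```java
import java.util.Arrays;

public class GrowDemo {
    // Allocate a new array twice as large and copy the old
    // elements into it -- the O(n) step described above.
    static int[] grow(int[] old) {
        return Arrays.copyOf(old, old.length * 2);
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4, 5}; // capacity 5, completely full
        data = grow(data);            // capacity 10, old elements copied
        System.out.println(data.length); // prints 10
    }
}
```

The upside is that doublings happen less and less often as the array grows, so the copying cost averages out to O(1) per append (amortized), which is why the append row in the table below is O(1).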

Costly inserts
Inserting an element at the end of the array takes O(1) time. But inserting an element at the start or middle of the array takes O(n) time. Why?
If we want to insert something into an array, first, we have to make space by "scooting over" everything starting at the index we're inserting into, as shown in the image. In the worst case, we're inserting into the 0th index in the array (prepending), so we have to "scoot over" everything in the array.
That's O(n) time.

Inserting an element at the 2nd index and right-shifting the rest of the elements by one; the resultant array becomes { A, B, C, D, E }.
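Below is a minimal sketch of such an insert in Java, assuming the backing array still has one free slot at the end; the helper name insertAt is illustrative:

```java
public class InsertDemo {
    // Shift elements [index, size) one slot to the right,
    // then place the new value -- O(n) in the worst case.
    static void insertAt(char[] arr, int size, int index, char value) {
        for (int i = size; i > index; i--) {
            arr[i] = arr[i - 1]; // scoot over
        }
        arr[index] = value;
    }

    public static void main(String[] args) {
        char[] arr = {'A', 'B', 'D', 'E', '\0'}; // 4 elements, capacity 5
        insertAt(arr, 4, 2, 'C');                // insert 'C' at the 2nd index
        System.out.println(new String(arr));     // prints ABCDE
    }
}
```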
I recommend reading "Array insertions and shifting algorithms" (link below), which explains with code snippets and sketches why inserts at the start and middle are expensive.
Costly deletes
Deleting an element at the end of the array takes O(1) time, which is the best case. In computer science, we usually care most about the worst-case scenario when analyzing algorithms. But when we remove an element from the middle or start of the array, we have to fill the gap by scooting over all the elements after it. This is O(n) if we consider the case of deleting the element at the 0th index.

Deleting an element at the 3rd index and filling the gap by left-shifting the rest of the elements; the resultant array becomes { A, B, C, D, E }.
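A minimal sketch of such a delete in Java; the helper name deleteAt is illustrative, and the logical size is tracked separately from the array's capacity:

```java
public class DeleteDemo {
    // Shift elements after index one slot to the left to fill
    // the gap -- O(n) when deleting at the start, O(1) at the end.
    static int deleteAt(char[] arr, int size, int index) {
        for (int i = index; i < size - 1; i++) {
            arr[i] = arr[i + 1]; // fill the gap
        }
        return size - 1; // new logical size
    }

    public static void main(String[] args) {
        char[] arr = {'A', 'B', 'C', 'X', 'D', 'E'};
        int size = deleteAt(arr, 6, 3);               // delete 'X' at the 3rd index
        System.out.println(new String(arr, 0, size)); // prints ABCDE
    }
}
```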
Big-O worst-case time complexities
| Operation | Worst-Case Time Complexity |
|---|---|
| Lookup/access a value at a given index | O(1) |
| Update a value at a given index | O(1) |
| Insert at the beginning/middle | O(N) |
| Insert at the end of a dynamic array (resize needed) | O(N) |
| Insert at the end of a static array (full; needs a new array and copy) | O(N) |
| Append at the end (no resize needed; amortized) | O(1) |
| Delete at the beginning/middle | O(N) |
| Delete at the end | O(1) |
| Copying the array | O(N) |