UNIT 09 · Prerequisite: Units 02 + 08 · 120 min read
Sorting is one of the most studied problems in all of computer science. Given a list of items in random order, arrange them in a defined sequence — ascending, descending, alphabetical. Simple to describe, endlessly interesting to optimise.
In this unit we cover six sorting algorithms — from the naive O(n²) ones that every beginner learns, to the powerful O(n log n) ones that real systems use. Every algorithm gets a full explanation, a step-by-step trace, complete C code, and honest complexity analysis.
// Section 1
Why Does Sorting Matter?
Sorting is not just a textbook exercise. It is the foundation that makes almost every other operation faster. Consider:
🔍Binary search requires sorted data
The O(log n) search algorithm we cover in Unit 10 only works if the array is sorted. Searching 1 million items takes 20 steps instead of 1 million — but only if sorted.
📊Databases sort constantly
Every ORDER BY in SQL triggers a sort. Every index in PostgreSQL or MySQL is a sorted structure. Database performance depends heavily on efficient sorting.
🤝Merge operations need sorted input
Combining two datasets, removing duplicates, finding common elements — all of these are trivial with sorted data and expensive without.
📱Leaderboards, rankings, feeds
Every time you see a leaderboard, search results, or a ranked feed — a sorting algorithm ran to produce that view.
One term to know before we start — Stability
A sorting algorithm is stable if it preserves the original relative order of elements that have equal keys.
Stable example
Sort students by marks. Two students both scored 85. A stable sort keeps them in their original name order (Alice before Bob if Alice was listed first).
Unstable — may reorder equal elements
An unstable sort might put Bob before Alice even though Alice was originally first — their relative order is not guaranteed.
// Algorithm 1
01 · Bubble Sort — The simplest, and the slowest
Bubble sort repeatedly steps through the array, compares adjacent elements, and swaps them if they are in the wrong order. After each pass, the largest unsorted element "bubbles up" to its correct position at the end — just like air bubbles rising in water. Repeat n-1 times and the array is sorted.
// Tracing bubble sort on [5, 3, 8, 1, 2] — Pass 1
Start — compare arr[0]=5 and arr[1]=3
5 > 3 → swap. Compare arr[1]=5 and arr[2]=8
5 < 8 → no swap. Compare arr[2]=8 and arr[3]=1
8 > 1 → swap. Compare arr[3]=8 and arr[4]=2
8 > 2 → swap. Pass 1 done — 8 is in final position
#include <stdio.h>

void bubbleSort(int arr[], int n) {
    int i, j, temp;
    int swapped;
    for (i = 0; i < n - 1; i++) {
        swapped = 0; /* track if any swap happened this pass */
        for (j = 0; j < n - i - 1; j++) { /* inner loop shrinks each pass */
            if (arr[j] > arr[j + 1]) {
                /* swap adjacent elements */
                temp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
                swapped = 1;
            }
        }
        /* optimisation: if no swaps happened, array is already sorted */
        if (swapped == 0) break;
    }
}

void printArray(int arr[], int n) {
    int i;
    for (i = 0; i < n; i++) printf("%d ", arr[i]);
    printf("\n");
}

int main() {
    int arr[] = {5, 3, 8, 1, 2};
    int n = 5;
    printf("Before: "); printArray(arr, n);
    bubbleSort(arr, n);
    printf("After: "); printArray(arr, n);
    /* After: 1 2 3 5 8 */
    return 0;
}
🎯 Pro Tip
The swapped optimisation makes best case O(n) — if the array is already sorted, the first pass makes zero swaps and the loop exits immediately. Without this, even a sorted array takes O(n²) passes. Always include it.
// Algorithm 2
02 · Selection Sort — Find the minimum, place it, repeat
Selection sort divides the array into two parts: sorted (left) and unsorted (right). In each pass, find the minimum element in the unsorted part and swap it into its correct position at the start of the unsorted section. After n-1 passes, sorted. Simple and predictable — always exactly O(n²) regardless of input.
// Tracing selection sort on [5, 3, 8, 1, 2]
Pass 1: min=1 at index 3 → swap with index 0
Pass 2: min=2 at index 4 → swap with index 1
Pass 3: min=3 at index 4 → swap with index 2
Pass 4: min=5 at index 3 → already in place ✓
#include <stdio.h>

void selectionSort(int arr[], int n) {
    int i, j, minIdx, temp;
    for (i = 0; i < n - 1; i++) {
        minIdx = i; /* assume current position holds the minimum */
        /* find the actual minimum in the unsorted portion */
        for (j = i + 1; j < n; j++) {
            if (arr[j] < arr[minIdx]) {
                minIdx = j; /* found a smaller element */
            }
        }
        /* swap the found minimum with the first unsorted element */
        if (minIdx != i) {
            temp = arr[minIdx];
            arr[minIdx] = arr[i];
            arr[i] = temp;
        }
    }
}

int main() {
    int arr[] = {5, 3, 8, 1, 2};
    int n = 5;
    selectionSort(arr, n);
    int i;
    for (i = 0; i < n; i++) printf("%d ", arr[i]);
    /* Output: 1 2 3 5 8 */
    return 0;
}
💡 Note
Selection sort makes at most n-1 swaps — at most one per pass, and none when the minimum is already in place. This makes it useful when swap cost is very high (like writing to flash memory). It does more comparisons than bubble sort's best case but far fewer swaps.
// Algorithm 3
03 · Insertion Sort — Like sorting cards in your hand
Imagine holding a hand of cards and picking them up one at a time. Each new card gets inserted into the correct position among the cards you already hold. Insertion sort works the same way — it takes one element at a time and inserts it into its correct position in the already-sorted portion on the left. Excellent for nearly-sorted data.
// Tracing insertion sort on [5, 3, 8, 1, 2]
key=3: 3 < 5 → shift 5 right, insert 3
key=8: 8 > 5 → already in place
key=1: shift 8,5,3 right → insert 1 at front
key=2: shift 8,5,3 right → insert 2 after 1
#include <stdio.h>

void insertionSort(int arr[], int n) {
    int i, j, key;
    for (i = 1; i < n; i++) {
        key = arr[i]; /* element to be inserted into sorted portion */
        j = i - 1;
        /* shift elements of sorted portion that are greater than key */
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j]; /* move element one position right */
            j--;
        }
        arr[j + 1] = key; /* insert key at its correct position */
    }
}

int main() {
    int arr[] = {5, 3, 8, 1, 2};
    int n = 5;
    insertionSort(arr, n);
    int i;
    for (i = 0; i < n; i++) printf("%d ", arr[i]);
    /* Output: 1 2 3 5 8 */
    return 0;
}
🎯 Pro Tip
Insertion sort is the best of the O(n²) algorithms for nearly sorted data. If the array is almost sorted — only a few elements out of place — it runs in close to O(n). Real systems like Timsort (used in Python and Java) use insertion sort for small subarrays because of this property.
// Algorithm 4
04 · Merge Sort — Divide and conquer, guaranteed O(n log n)
Merge sort uses the divide-and-conquer strategy and recursion from Unit 08. Split the array in half, recursively sort each half, then merge the two sorted halves back together. The merge step is where the real work happens — and it is linear. This guarantees O(n log n) in all cases — best, average, and worst.
// Merge sort on [5, 3, 8, 1] — divide phase (going down), then merge (coming back up)
[5, 3, 8, 1] → split → [5, 3] | [8, 1]
[5, 3] → split → [5] | [3]    [8, 1] → split → [8] | [1]
↑ merge: [5] + [3] → [3, 5]    [8] + [1] → [1, 8]
↑ merge: [3, 5] + [1, 8] → [1, 3, 5, 8]
sorted ✓
#include <stdio.h>

/* merge two sorted halves into one sorted array */
void merge(int arr[], int left, int mid, int right) {
    int n1 = mid - left + 1; /* size of left half */
    int n2 = right - mid;    /* size of right half */
    /* temporary arrays to hold the two halves */
    int L[n1], R[n2];
    int i, j, k;
    /* copy data into temp arrays */
    for (i = 0; i < n1; i++) L[i] = arr[left + i];
    for (j = 0; j < n2; j++) R[j] = arr[mid + 1 + j];
    /* merge the temp arrays back — comparing front elements */
    i = 0; j = 0; k = left;
    while (i < n1 && j < n2) {
        if (L[i] <= R[j]) {
            arr[k] = L[i];
            i++;
        } else {
            arr[k] = R[j];
            j++;
        }
        k++;
    }
    /* copy remaining elements if any */
    while (i < n1) { arr[k] = L[i]; i++; k++; }
    while (j < n2) { arr[k] = R[j]; j++; k++; }
}

/* recursively split and sort */
void mergeSort(int arr[], int left, int right) {
    if (left >= right) return; /* base case: one or zero elements */
    int mid = left + (right - left) / 2; /* find midpoint */
    mergeSort(arr, left, mid);           /* sort left half */
    mergeSort(arr, mid + 1, right);      /* sort right half */
    merge(arr, left, mid, right);        /* merge both halves */
}

int main() {
    int arr[] = {5, 3, 8, 1, 2, 9, 4};
    int n = 7;
    mergeSort(arr, 0, n - 1);
    int i;
    for (i = 0; i < n; i++) printf("%d ", arr[i]);
    /* Output: 1 2 3 4 5 8 9 */
    return 0;
}
💡 Note
Why O(n log n)? The array splits log n times (halving at each level). At each level, merging all the pieces touches every element once, costing O(n). So the total is n × log n = O(n log n). The tradeoff: it needs O(n) extra space for the temporary arrays during merge. If memory is tight, quick sort is usually preferred.
// Algorithm 5
05 · Quick Sort — The fastest in practice, pivot-based partitioning
Quick sort picks a pivot element and partitions the array so that all elements smaller than the pivot go to its left and all larger go to its right. The pivot is now in its final correct position. Recursively apply the same to the left and right parts. Despite O(n²) worst case, quick sort is the fastest in practice on average due to excellent cache behaviour.
// Partition step on [3, 6, 8, 10, 1, 2, 1] — pivot = last element (1)
pivot = 1 (last). i = -1. Walk j from left to right
arr[0]=3, arr[1]=6, arr[2]=8, arr[3]=10 — all > pivot → skip
arr[4]=1 ≤ pivot → i becomes 0, swap with arr[0] → [1, 6, 8, 10, 3, 2, 1]
arr[5]=2 > pivot → skip. Partition done — swap pivot into arr[i+1]
Pivot 1 is now in its FINAL position at index 1 ✓
#include <stdio.h>

/* swap helper */
void swap(int *a, int *b) {
    int temp = *a; *a = *b; *b = temp;
}

/*
 * partition: choose last element as pivot
 * place all smaller elements to its left
 * returns the final position of the pivot
 */
int partition(int arr[], int low, int high) {
    int pivot = arr[high]; /* last element is pivot */
    int i = low - 1;       /* i tracks the boundary of smaller elements */
    int j;
    for (j = low; j < high; j++) {
        if (arr[j] <= pivot) {
            i++;
            swap(&arr[i], &arr[j]); /* move element to left side */
        }
    }
    /* place pivot in its correct final position */
    swap(&arr[i + 1], &arr[high]);
    return i + 1; /* return pivot's final index */
}

void quickSort(int arr[], int low, int high) {
    if (low >= high) return; /* base case: one or zero elements */
    int pi = partition(arr, low, high); /* partition and get pivot index */
    quickSort(arr, low, pi - 1);        /* sort left of pivot */
    quickSort(arr, pi + 1, high);       /* sort right of pivot */
}

int main() {
    int arr[] = {5, 3, 8, 1, 2, 9, 4};
    int n = 7;
    quickSort(arr, 0, n - 1);
    int i;
    for (i = 0; i < n; i++) printf("%d ", arr[i]);
    /* Output: 1 2 3 4 5 8 9 */
    return 0;
}
⚠️ Important
Quick sort's worst case is O(n²) — it happens when the pivot is always the smallest or largest element (e.g. already sorted array with last element as pivot). The fix: use a random pivot or the median-of-three strategy. In practice with good pivot selection, quick sort is faster than merge sort because it sorts in-place and has better cache locality.
// Algorithm 6
06 · Counting Sort — Not comparison-based, O(n+k) when values are bounded
All previous algorithms compare elements against each other. Counting sort does not compare at all — it counts how many times each value appears, then reconstructs the sorted array from those counts. k = the range of values (max - min). When k is small relative to n, this is blazing fast. Useless when values are unbounded (like real numbers or large integers).
#include <stdio.h>
#include <string.h>

void countingSort(int arr[], int n) {
    /* find the maximum value to know the count array size */
    int max = arr[0];
    int i;
    for (i = 1; i < n; i++) {
        if (arr[i] > max) max = arr[i];
    }
    /* count array — one slot per possible value (0 to max) */
    int count[max + 1];
    memset(count, 0, sizeof(count)); /* initialise all counts to 0 */
    /* step 1: count occurrences of each value */
    for (i = 0; i < n; i++) {
        count[arr[i]]++;
    }
    /* step 2: reconstruct sorted array from counts */
    int idx = 0;
    for (i = 0; i <= max; i++) {
        while (count[i] > 0) {
            arr[idx] = i; /* place value i into the output */
            idx++;
            count[i]--;
        }
    }
}

int main() {
    int arr[] = {4, 2, 2, 8, 3, 3, 1};
    int n = 7;
    printf("Before: ");
    int i;
    for (i = 0; i < n; i++) printf("%d ", arr[i]);
    countingSort(arr, n);
    printf("\nAfter: ");
    for (i = 0; i < n; i++) printf("%d ", arr[i]);
    /* After: 1 2 2 3 3 4 8 */
    return 0;
}
Tracing counting sort on [4, 2, 2, 8, 3, 3, 1]:
count[0]=0, count[1]=1, count[2]=2, count[3]=2, count[4]=1, ... count[8]=1
Reconstruct: 1×1, 2×2, 3×2, 4×1, 8×1 → 1 2 2 3 3 4 8 ✓
🎯 Pro Tip
When to use counting sort: When you know the values are non-negative integers in a small bounded range. Sorting 1 million exam scores from 0–100? Counting sort is perfect — k=100 is tiny. Sorting 1 million phone numbers? k would be 10 billion — terrible choice. Know your data before choosing.
// The Full Picture
All Six Algorithms — Side by Side
| Algorithm | Best | Average | Worst | Space | Stable | When to use |
|---|---|---|---|---|---|---|
| Bubble Sort | O(n) | O(n²) | O(n²) | O(1) | ✓ | Learning, nearly sorted small data |
| Selection Sort | O(n²) | O(n²) | O(n²) | O(1) | ✗ | When swap count must be minimized |
| Insertion Sort | O(n) | O(n²) | O(n²) | O(1) | ✓ | Small or nearly sorted arrays |
| Merge Sort | O(n log n) | O(n log n) | O(n log n) | O(n) | ✓ | Large data, stability required, linked lists |
| Quick Sort | O(n log n) | O(n log n) | O(n²) | O(log n) | ✗ | General purpose, in-place, cache-friendly |
| Counting Sort | O(n+k) | O(n+k) | O(n+k) | O(k) | ✓ | Small range integers (marks, ages, ratings) |
Real numbers — sorting 1 million elements (1,000,000):
- Bubble / Selection / Insertion: ~1,000,000,000,000 ops (1 trillion) — takes minutes
- Merge Sort / Quick Sort: ~20,000,000 ops (20 million) — takes milliseconds
- Counting Sort (k=100): ~1,000,100 ops — barely over 1 million, near instant
📌 Real World Example
Interview answer pattern: When asked "which sorting algorithm would you use?" — never say just "quick sort." Say: "It depends on the constraints. For general purpose in-place sorting I'd use quick sort. If stability is required I'd use merge sort. If the values are bounded integers I'd use counting sort. For small or nearly-sorted data, insertion sort." That answer shows real engineering judgment.
// What's Next
You Are Ready for Unit 10
You now know six sorting algorithms inside out — their logic, their code, their complexity, and when to pick each one. This unit alone covers a significant portion of what most coding interviews test.
In Unit 10 we cover Searching Algorithms — linear search for unsorted data, and the elegant binary search that cuts the problem in half each step to achieve O(log n). Sorting and searching always go together — sorted data makes searching dramatically faster.
UP NEXT → UNIT 10
Searching Algorithms — Find It Fast
Linear search, binary search, variations — with full C code and complexity.
Coming Soon →
🎯 Key Takeaways
- ✓Sorting enables binary search, database indexing, merge operations, and ranked displays — it is foundational
- ✓Bubble sort: compare adjacent pairs, bubble largest to end each pass — O(n²) but O(n) if already sorted
- ✓Selection sort: find minimum, place it — always exactly O(n²), minimizes swap count
- ✓Insertion sort: pick one element, insert into correct position in sorted portion — best for nearly sorted data
- ✓Merge sort: split in half, sort each, merge back — guaranteed O(n log n), needs O(n) extra space
- ✓Quick sort: pick pivot, partition smaller left larger right — O(n log n) average, O(n²) worst, fastest in practice
- ✓Counting sort: count occurrences, reconstruct — O(n+k), only works for bounded non-negative integers
- ✓Stable sort preserves relative order of equal elements — Bubble, Insertion, Merge, Counting are stable
- ✓Interview rule: pick quick sort for general use, merge sort for stability, counting sort for bounded integers
- ✓O(n²) sorts handle ~10,000 elements fine. O(n log n) sorts handle millions. Never use O(n²) on large datasets