
Unit 01 — Complexity

How to measure how fast your code runs and how much memory it uses — the skill that separates good code from great code.

UNIT 01 · 60 min read · March 2026 · Prerequisite: Unit 00

Two programs can both solve the same problem and produce the same correct answer — yet one can be a million times faster than the other. How do you know which one is better? How do you even measure it?

That is exactly what this unit answers. Complexity is the scoreboard for code. It tells you — before you even run the program — how it will behave when the data gets big. This is what every interviewer means when they ask "what is the time complexity of your solution?"

// Section 1

What is Time Complexity?

Time complexity does not measure how many seconds your code takes to run. If it did, a slow computer would make every program look bad, and a fast computer would make every program look good. That is not useful.

Instead, time complexity measures how the number of steps grows as the input size grows. It is about the relationship between input and effort — not the actual clock time.

A story to make this stick forever

Imagine a teacher has a stack of exam papers to return. There are 30 students.

👩‍🏫
Teacher A
Calls each student's name one by one and hands the paper. 30 students = 30 calls. 100 students = 100 calls. The work grows directly with the number of students.
📌
Teacher B
Puts all papers on a notice board sorted alphabetically. Every student walks up and finds their own in seconds. 30 students or 3000 students — it doesn't matter. The board is ready instantly.

Teacher A's method grows with input size. Teacher B's method stays constant. That difference — that relationship between input and steps — is time complexity.
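The two teachers can be sketched in code. This is an illustrative sketch with hypothetical function names — the point is only that one function's step count grows with its input while the other's does not:

```c
/* Teacher A: one call per student -> steps grow with the input (linear). */
int callsNeededOneByOne(int students) {
    int calls = 0;
    int i;
    for (i = 0; i < students; i++) {
        calls++;              /* one name called per student */
    }
    return calls;             /* always equals students: n steps */
}

/* Teacher B: pin everything to the board once -> steps stay fixed (constant). */
int callsNeededNoticeBoard(int students) {
    (void)students;           /* the input size is irrelevant here */
    return 1;                 /* one trip to the notice board */
}
```

Run both with 30 students and then 3,000: the first function's work grows a hundredfold, the second's does not change at all.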

💡 Note
Key definition: Time complexity describes how the number of operations in your code scales as the input size (n) increases. We write it using Big O notation — which we will cover in Section 3.
// Section 2

What is Space Complexity?

While time complexity is about speed, space complexity is about memory. It measures how much extra memory your program uses as the input size grows.

The moving house analogy

High space usage
You make a full photocopy of every document before packing. 200 documents → 400 pieces of paper. Your memory doubles.
Low space usage
You just write a list of what is in each box. 200 documents → one small list. Memory stays minimal regardless of document count.

In code, this means: does your program create extra variables, arrays, or data structures that grow with the input? Or does it work with a fixed small amount of memory no matter how large the input is?
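Here is that contrast in code — a sketch with hypothetical function names. The first function allocates a copy that grows with n (like photocopying every document); the second uses one extra variable no matter how large n is (like keeping a single list):

```c
#include <stdlib.h>

/* O(n) extra space: builds a full copy whose size grows with the input. */
int *copyAll(const int arr[], int n) {
    int *copy = malloc(n * sizeof(int));  /* n extra slots of memory */
    int i;
    for (i = 0; i < n; i++) {
        copy[i] = arr[i];
    }
    return copy;                          /* caller must free() this */
}

/* O(1) extra space: one accumulator, regardless of how large n is. */
long sumAll(const int arr[], int n) {
    long total = 0;                       /* the only extra variable */
    int i;
    for (i = 0; i < n; i++) {
        total += arr[i];
    }
    return total;
}
```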

🎯 Pro Tip
Time vs Space tradeoff: In real engineering, you often trade one for the other. A faster algorithm sometimes uses more memory. A memory-efficient one is sometimes slower. Knowing complexity lets you make that tradeoff consciously instead of accidentally.
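One classic example of this tradeoff is duplicate detection. The sketch below uses hypothetical function names and assumes, for simplicity, that all values lie in 0..999; the slow version uses no extra memory but quadratic time, the fast one buys linear time with extra memory:

```c
#include <stdbool.h>
#include <string.h>

/* Memory-light but slow: compare every pair -> O(n²) time, O(1) extra space. */
bool hasDuplicateSlow(const int arr[], int n) {
    int i, j;
    for (i = 0; i < n; i++)
        for (j = i + 1; j < n; j++)
            if (arr[i] == arr[j]) return true;
    return false;
}

/* Fast but memory-hungry: mark each value as seen -> O(n) time,
   but an extra table of flags (here assuming values fit in 0..999). */
bool hasDuplicateFast(const int arr[], int n) {
    bool seen[1000];
    int i;
    memset(seen, 0, sizeof seen);     /* clear the flag table */
    for (i = 0; i < n; i++) {
        if (seen[arr[i]]) return true;
        seen[arr[i]] = true;
    }
    return false;
}
```

Neither version is "correct" in the abstract — the right choice depends on whether time or memory is scarcer in your system.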
// Section 3

Big O Notation — The Universal Language

Big O notation is how engineers all over the world describe complexity in a standard way. When someone says "this algorithm is O(n²)" — every engineer on earth knows exactly what that means.

The letter n always represents the size of the input. If you have an array of 100 elements, n = 100. If you have 1 million users, n = 1,000,000. Big O describes what happens to the number of steps as n grows.

O(1) — Constant Time
Real life: You know your locker number. You walk straight to it. 100 lockers or 10,000 — same number of steps: one.
Example: Accessing arr[5] in an array

O(log n) — Logarithmic Time
Real life: Finding a word in a dictionary by opening to the middle, then the middle of the half, and so on. Each step cuts the remaining work in half.
Example: Binary search in a sorted array

O(n) — Linear Time
Real life: Checking every seat in a cinema to find your friend. 100 seats = 100 checks. 500 seats = 500 checks. Grows directly with input.
Example: Looping through every element once

O(n log n) — Linearithmic Time
Real life: Sorting a pack of cards by repeatedly splitting the deck in half and merging — slightly worse than linear but much better than quadratic.
Example: Merge Sort; Quick Sort (on average)

O(n²) — Quadratic Time
Real life: Comparing every student in class with every other student. 10 students ≈ 100 comparisons. 100 students ≈ 10,000. Gets slow very fast.
Example: Nested loops — bubble sort

O(2ⁿ) — Exponential Time
Real life: Every time you add one more item, the work doubles. With 30 items you already have over a billion operations. Avoid this at all costs.
Example: Naive recursive Fibonacci
// From fastest to slowest — best to worst
O(1) → O(log n) → O(n) → O(n log n) → O(n²) → O(2ⁿ)
← FAST (want this) · SLOW (avoid this) →
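The O(log n) entry above deserves a concrete look, since halving is the least intuitive pattern. Here is a standard binary search in C — a sketch of the classic algorithm, assuming the array is sorted in ascending order:

```c
/* Binary search: each comparison halves the remaining range, so a sorted
   array of n elements needs at most about log2(n) comparisons. */
int binarySearch(const int arr[], int n, int target) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;         /* midpoint, written to avoid overflow */
        if (arr[mid] == target) return mid;   /* found it */
        if (arr[mid] < target) lo = mid + 1;  /* discard the lower half */
        else                   hi = mid - 1;  /* discard the upper half */
    }
    return -1;  /* not found */
}
```

For n = 1,000,000 this loop runs at most about 20 times — exactly the dictionary trick from the card above.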
💡 Note
Big O ignores constants and small terms. If an algorithm takes 3n + 5 steps, Big O calls it O(n) — we drop the 3 and the 5 because for large inputs they don't change the overall shape of the growth. We only care about the dominant term.
// Section 4

Best, Average, and Worst Case

The same algorithm can behave very differently depending on what data you give it. This is why we talk about three cases.

Suppose you are searching for a person in a list of 1000 names, going one by one:

🟢 Best Case
O(1) — constant
The person you are looking for is the very first name on the list. You find them in 1 step. This is the best possible situation.
🟡 Average Case
O(n) — linear
The person is somewhere in the middle — around position 500. On average, you do about n/2 checks. Still grows with n.
🔴 Worst Case
O(n) — linear
The person is the very last name, or not in the list at all. You check all 1000 names before you know. This is the worst possible situation.
⚠️ Important
In interviews and real systems, always think in worst case. A system that works well most of the time but crashes under bad input is not reliable. When engineers say "this is O(n)", they almost always mean worst case. Building for worst case means your system never surprises you.
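The name-list search above can be written directly in C. This sketch (hypothetical function name) also counts comparisons through an out-parameter, so you can see the best and worst cases happen:

```c
/* Linear search: best case O(1) (target is first), worst case O(n)
   (target is last or absent). *steps reports how many checks were made. */
int linearSearch(const int arr[], int n, int target, int *steps) {
    int i;
    *steps = 0;
    for (i = 0; i < n; i++) {
        (*steps)++;                     /* one comparison per name checked */
        if (arr[i] == target) return i; /* found: stop early */
    }
    return -1;                          /* checked all n names and gave up */
}
```

Searching for the first element costs 1 step; searching for a missing one costs all n — same code, very different behaviour depending on the data.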
// Section 5

How to Analyse Any Code — Step by Step

Now let us actually apply all of this. Here is the method — four simple rules you can apply to any piece of code to figure out its Big O complexity.

Rule 1A single statement = O(1)
Any single operation — assigning a variable, adding two numbers, accessing an array index — takes constant time. It does not matter what n is.
Rule 2A single loop over n items = O(n)
If you loop through all elements once, the number of steps grows directly with n. Double the input, double the steps.
Rule 3A loop inside a loop = O(n²)
If the outer loop runs n times and the inner loop also runs n times for each outer iteration, total steps = n × n = n². This gets expensive fast.
Rule 4Halving the input each step = O(log n)
If each step cuts the remaining work in half — like binary search — the total steps grow as log₂(n). For n = 1,000,000, log₂(n) is only about 20 steps.

Example 1 — O(1) Constant Time

No matter what value n has, this function always does exactly the same number of steps.

c
int getFirstElement(int arr[], int n) {
    return arr[0]; /* always 1 step — doesn't matter if n is 10 or 10 million */
}
Time Complexity: O(1)

Example 2 — O(n) Linear Time

The loop runs once for every element. More elements = more steps, directly.

c
void printAll(int arr[], int n) {
    int i;
    for (i = 0; i < n; i++) {   /* this loop runs n times */
        printf("%d\n", arr[i]); /* one step each time */
    }
    /* total steps = n → O(n) */
}
Time Complexity: O(n)

Example 3 — O(n²) Quadratic Time

A loop inside a loop. The inner loop runs n times for every iteration of the outer loop. Total steps = n × n.

c
void printPairs(int arr[], int n) {
    int i, j;
    for (i = 0; i < n; i++) {     /* outer loop — runs n times */
        for (j = 0; j < n; j++) { /* inner loop — runs n times for EACH outer step */
            printf("(%d, %d)\n", arr[i], arr[j]);
        }
    }
    /* total steps = n × n = n² → O(n²) */
}
Time Complexity: O(n²)

Example 4 — Mixed: What is the complexity here?

When you have multiple parts, identify the dominant one.

c
void mixedExample(int arr[], int n) {
    int x = arr[0];               /* O(1) — single statement */
    int i;
    for (i = 0; i < n; i++) {     /* O(n) — single loop */
        printf("%d\n", arr[i]);
    }
    int j, k;
    for (j = 0; j < n; j++) {     /* O(n²) — nested loops */
        for (k = 0; k < n; k++) {
            printf("%d %d\n", arr[j], arr[k]);
        }
    }
    /* Total = O(1) + O(n) + O(n²) */
    /* We keep only the dominant term → O(n²) */
}
How to simplify: drop everything except the largest term
O(1) + O(n) + O(n²) → O(n²)   (n² dominates for large n)
O(n) + O(n) + O(n) → O(n)     (3n simplifies to n — drop constants)
O(n²) + O(n log n) → O(n²)    (n² grows faster than n log n)
🎯 Pro Tip
The golden rule: When you have a mix of complexities, keep only the one that grows the fastest as n gets very large. Everything else becomes insignificant and gets dropped.
// Section 6

What These Numbers Actually Mean

Let us make this real. Here is how many operations each complexity class produces for different input sizes. Assume your computer does 1 billion operations per second.

n (input size) | O(1) | O(log n) | O(n)      | O(n²)     | O(2ⁿ)
10             | 1    | 3        | 10        | 100       | 1,024
100            | 1    | 7        | 100       | 10,000    | 1.3 × 10³⁰
1,000          | 1    | 10       | 1,000     | 1,000,000 | > universe
1,000,000      | 1    | 20       | 1,000,000 | 10¹²      | > universe
⚠️ Important
Look at O(2ⁿ) for n = 100: about 1.3 × 10³⁰ operations. A computer running at 1 billion operations per second would need roughly 4 × 10¹³ years to finish — thousands of times the current age of the universe. This is why exponential algorithms are never used on real inputs of any size in real software — and why understanding complexity is not optional.
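The naive recursive Fibonacci mentioned in Section 3 is the textbook source of that O(2ⁿ) row. Here is a minimal sketch of it — fine for tiny n, hopeless for large n, because every call spawns two more:

```c
/* Naive recursive Fibonacci: each call makes two recursive calls, so the
   total call count roughly doubles as n grows -> O(2ⁿ) time. */
long fib(int n) {
    if (n <= 1) return n;           /* base cases: fib(0) = 0, fib(1) = 1 */
    return fib(n - 1) + fib(n - 2); /* two recursive calls per step */
}
```

fib(30) already triggers over a million calls; fib(100) is the "> universe" cell in the table above. (Later units cover how memoization collapses this to O(n) — a time-for-space trade exactly like the one in Section 2.)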
// What's Next

You Are Ready for Unit 02

You now have the single most important mental tool in all of DSA — the ability to look at any piece of code and judge whether it is efficient or not. Every unit from here uses this language. When we say "arrays give O(1) access" or "bubble sort is O(n²)", you now know exactly what that means.

In Unit 02 we dive into Arrays — the very first and most fundamental data structure. We will build them from scratch in C, understand exactly how they sit in memory, and write the core operations: insert, delete, search, and traverse.

UP NEXT → UNIT 02
Arrays — The Foundation of Everything
Memory layout, insert, delete, search, 2D arrays — in C.
Coming Soon →

🎯 Key Takeaways

  • Time complexity measures how the number of steps grows as input size (n) grows — not actual clock time
  • Space complexity measures how much extra memory your program uses as n grows
  • Big O notation is the standard language: O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(2ⁿ)
  • O(1) = constant — same steps no matter what. O(n) = linear — steps grow with n. O(n²) = nested loops
  • Always analyse for worst case — a system that fails under bad input is not reliable
  • When combining complexities, keep only the dominant term — O(n² + n) simplifies to O(n²)
  • Exponential O(2ⁿ) algorithms are never used in real systems — they become impossibly slow instantly