
Unit 17 — Greedy Algorithms

Always pick the locally best option at each step. Sometimes that is enough to reach the global optimum — and it is much faster than DP. Activity selection, fractional knapsack, Huffman coding, and when greedy fails.

UNIT 17 · Prerequisite: Unit 09 — Sorting · 75 min read · March 2026

In the previous unit, Dynamic Programming tried every possibility and remembered the results. Greedy algorithms take a completely different approach — they never look back. At each step, make the best available choice right now, and commit to it. No exploring alternatives. No going back. Just always grab the best option in front of you.

When greedy works, it is dramatically simpler and faster than DP. When it fails — and it does fail for many problems — you need DP. Understanding the difference is one of the most important skills in algorithm design.

// Section 1

What is a Greedy Algorithm?

A greedy algorithm makes the locally optimal choice at each step with the hope that these local choices lead to a globally optimal solution. It never reconsiders a choice once made.

💰 The coin change analogy — where greedy works perfectly

You need to give someone ₹41 change using Indian coins (₹25, ₹10, ₹5, ₹1). A greedy cashier always picks the largest coin that fits:

41 remaining → take ₹25 → 16 remaining
16 remaining → take ₹10 → 6 remaining
6 remaining → take ₹5 → 1 remaining
1 remaining → take ₹1 → 0 remaining ✓
4 coins total — this IS the optimal answer for these denominations ✓
Condition 1 — Greedy Choice Property
A globally optimal solution can always be built by making locally optimal (greedy) choices. The best choice now is always part of some optimal solution.
Condition 2 — Optimal Substructure
After making the greedy choice, the remaining subproblem has the same structure as the original. Its optimal solution combines with the greedy choice to give the global optimum.
⚠️ Important
Greedy does NOT always work. If the greedy choice property does not hold, the algorithm will silently produce a suboptimal answer — no error, no warning. This is why you must prove a greedy approach is correct before trusting it — or use DP to be safe.
// Problem 1
Problem 01 · Activity Selection — Maximum Non-Overlapping Events
Time: O(n log n) · Space: O(1)

You have n activities, each with a start time and finish time. Only one activity can run at a time. Select the maximum number of non-overlapping activities you can attend. This is the classic greedy problem — and the greedy solution is provably optimal.

// 7 activities — which ones to pick?
Activity | Start | Finish | Selected?
A1       | 1     | 4      | ✓ YES
A2       | 3     | 5      |
A3       | 0     | 6      |
A4       | 5     | 7      | ✓ YES
A5       | 3     | 9      |
A6       | 6     | 10     |
A7       | 8     | 11     | ✓ YES
Selected: A1 (1-4), A4 (5-7), A7 (8-11) → 3 activities maximum
The greedy insight — always pick the activity that finishes earliest:
1. Sort all activities by their finish time (earliest finish first)
2. Select the first activity (it finishes earliest — leaves maximum time for others)
3. For each remaining activity: if its start time ≥ finish time of last selected → select it
4. Skip activities that overlap with the last selected one
c
#include <stdio.h>
#include <stdlib.h>

struct Activity {
    int start;
    int finish;
    char name[3];
};
typedef struct Activity Activity;

/* compare by finish time — for sorting */
int compareFinish(const void *a, const void *b) {
    return ((Activity*)a)->finish - ((Activity*)b)->finish;
}

int activitySelection(Activity acts[], int n) {
    /* step 1: sort by finish time */
    qsort(acts, n, sizeof(Activity), compareFinish);
    printf("Selected activities:\n");
    /* step 2: always pick the first activity */
    int count = 1;
    int lastFinish = acts[0].finish;
    printf("  %s (start=%d, finish=%d)\n", acts[0].name, acts[0].start, acts[0].finish);
    /* step 3: pick next non-overlapping activity */
    int i;
    for (i = 1; i < n; i++) {
        if (acts[i].start >= lastFinish) {
            /* this activity starts after last one ends — no overlap */
            printf("  %s (start=%d, finish=%d)\n", acts[i].name, acts[i].start, acts[i].finish);
            lastFinish = acts[i].finish;
            count++;
        }
    }
    return count;
}

int main() {
    Activity acts[] = {
        {1, 4, "A1"},
        {3, 5, "A2"},
        {0, 6, "A3"},
        {5, 7, "A4"},
        {3, 9, "A5"},
        {6, 10, "A6"},
        {8, 11, "A7"},
    };
    int n = 7;
    int total = activitySelection(acts, n);
    printf("Maximum activities: %d\n", total); /* 3 */
    return 0;
}
🎯 Pro Tip
Why earliest finish — not earliest start, not shortest duration? Picking the earliest-finishing activity leaves the maximum remaining time for future activities. This is provably optimal. An activity that finishes later "blocks" more of the future timeline. Always finishing early = always leaving the most options open.
// Problem 2
Problem 02 · Fractional Knapsack — Take Fractions of Items
Time: O(n log n) · Space: O(1)

Same as 0/1 knapsack — items with weights and values, bag with capacity W — but now you can take fractions of items. Take 60% of an item, get 60% of its value. This small difference means greedy works perfectly here, whereas 0/1 knapsack required DP.

The greedy insight — sort by value-per-kg, take the most valuable first:
1. Calculate value/weight ratio for each item
2. Sort items by ratio — highest ratio first
3. Take as much of the most valuable item as fits
4. Move to the next item. If it fully fits — take all. If not — take the fraction that fits.
// 4 items, capacity W = 50 — sorted by value/weight ratio
Item   | Weight | Value | Value/Weight  | Taken           | Value added
Item 2 | 10 kg  | ₹60   | 6.0 ← highest | All 10 kg       | ₹60
Item 4 | 20 kg  | ₹100  | 5.0           | All 20 kg       | ₹100
Item 1 | 20 kg  | ₹80   | 4.0           | 20 kg (full)    | ₹80
Item 3 | 30 kg  | ₹120  | 4.0           | 0 kg (bag full) | ₹0
Total value: ₹60 + ₹100 + ₹80 = ₹240 with exactly 50 kg used
c
#include <stdio.h>
#include <stdlib.h>

struct Item {
    int weight;
    int value;
    double ratio; /* value per unit weight */
};
typedef struct Item Item;

int compareRatio(const void *a, const void *b) {
    double diff = ((Item*)b)->ratio - ((Item*)a)->ratio;
    return (diff > 0) - (diff < 0); /* sort descending by ratio */
}

double fractionalKnapsack(Item items[], int n, int capacity) {
    int i;
    /* compute ratio for each item */
    for (i = 0; i < n; i++) {
        items[i].ratio = (double)items[i].value / items[i].weight;
    }
    /* sort by value/weight ratio — highest first */
    qsort(items, n, sizeof(Item), compareRatio);
    double totalValue = 0.0;
    int remaining = capacity;
    for (i = 0; i < n && remaining > 0; i++) {
        if (items[i].weight <= remaining) {
            /* take the whole item */
            totalValue += items[i].value;
            remaining -= items[i].weight;
            printf("Took all of item %d (weight=%d, value=%d)\n",
                   i+1, items[i].weight, items[i].value);
        } else {
            /* take a fraction — whatever fits */
            double fraction = (double)remaining / items[i].weight;
            totalValue += items[i].value * fraction;
            printf("Took %.1f%% of item %d (value=%.2f)\n",
                   fraction * 100, i+1, items[i].value * fraction);
            remaining = 0;
        }
    }
    return totalValue;
}

int main() {
    Item items[] = {
        {20, 80, 0},  /* item 1 */
        {10, 60, 0},  /* item 2 */
        {30, 120, 0}, /* item 3 */
        {20, 100, 0}, /* item 4 */
    };
    int n = 4, capacity = 50;
    double maxValue = fractionalKnapsack(items, n, capacity);
    printf("Maximum value: %.2f\n", maxValue); /* 240.00 */
    return 0;
}
Why this does NOT work for 0/1 knapsack:
With 0/1 knapsack (no fractions), capacity=10, items: A(weight=6,value=6), B(weight=5,value=5), C(weight=5,value=5). All three have ratio 1.0, so suppose greedy breaks the tie by taking A first.
Greedy: take A, then neither B nor C fits (6+5=11 > 10) → value=6.
Optimal: B+C = weight 10, value 10 → greedy missed the better answer.
Fractional knapsack: take all of A + 4/5 of B → 6 + 4 = 10 ✓ (greedy works because fractions are allowed).
// Problem 3
Problem 03 · Huffman Coding — Optimal Data Compression
Time: O(n log n) · Space: O(n)

Huffman coding assigns variable-length binary codes to characters — shorter codes for more frequent characters, longer codes for rare ones. This is how ZIP files, JPEG images, and MP3 audio compress data. It is provably the optimal prefix-free encoding.

The idea — assign shorter codes to more frequent characters:
Fixed length (naive)
A → 000 (freq: 45)
B → 001 (freq: 13)
C → 010 (freq: 12)
D → 011 (freq: 16)
E → 100 (freq: 9)
F → 101 (freq: 5)
3 bits per char always
Huffman (optimal)
A → 0   (1 bit — most frequent)
B → 101 (3 bits)
C → 100 (3 bits)
D → 111 (3 bits)
E → 1101 (4 bits)
F → 1100 (4 bits)
Avg ~2.24 bits per char ✓
The greedy algorithm — always merge the two least frequent nodes:
1. Put all characters in a min-heap ordered by frequency
2. Extract the two nodes with lowest frequency
3. Create a new internal node with frequency = sum of the two. Make them its left and right children
4. Insert the new node back into the heap
5. Repeat until only one node remains — that is the root of the Huffman tree
6. Assign 0 to every left edge and 1 to every right edge. Read path from root to leaf = code for that character
c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct HNode {
    char ch;
    int freq;
    struct HNode *left, *right;
};
typedef struct HNode HNode;

HNode* newNode(char ch, int freq) {
    HNode *n = malloc(sizeof(HNode));
    n->ch = ch; n->freq = freq;
    n->left = n->right = NULL;
    return n;
}

/* simple min-heap for Huffman (array-based priority queue) */
#define MAXNODES 256
HNode* heap[MAXNODES];
int heapSize = 0;

void heapSwap(int i, int j) {
    HNode *t = heap[i]; heap[i] = heap[j]; heap[j] = t;
}

void heapPush(HNode *node) {
    heap[heapSize] = node;
    int i = heapSize++;
    /* bubble up */
    while (i > 0 && heap[(i-1)/2]->freq > heap[i]->freq) {
        heapSwap(i, (i-1)/2);
        i = (i-1)/2;
    }
}

HNode* heapPop() {
    HNode *top = heap[0];
    heap[0] = heap[--heapSize];
    /* heapify down */
    int i = 0;
    while (1) {
        int smallest = i;
        int l = 2*i+1, r = 2*i+2;
        if (l < heapSize && heap[l]->freq < heap[smallest]->freq) smallest = l;
        if (r < heapSize && heap[r]->freq < heap[smallest]->freq) smallest = r;
        if (smallest == i) break;
        heapSwap(i, smallest);
        i = smallest;
    }
    return top;
}

/* print codes — traverse tree, track path */
void printCodes(HNode *root, char *code, int depth) {
    if (root->left == NULL && root->right == NULL) {
        code[depth] = '\0';
        printf("  '%c' (freq=%d) → %s\n", root->ch, root->freq, code);
        return;
    }
    if (root->left)  { code[depth] = '0'; printCodes(root->left, code, depth+1); }
    if (root->right) { code[depth] = '1'; printCodes(root->right, code, depth+1); }
}

HNode* buildHuffman(char chars[], int freqs[], int n) {
    int i;
    /* step 1: insert all characters into min-heap */
    for (i = 0; i < n; i++) heapPush(newNode(chars[i], freqs[i]));
    /* steps 2-4: keep merging two smallest until one tree remains */
    while (heapSize > 1) {
        HNode *left = heapPop();  /* smallest frequency */
        HNode *right = heapPop(); /* second smallest */
        /* create parent node with combined frequency */
        HNode *parent = newNode('$', left->freq + right->freq);
        parent->left = left;
        parent->right = right;
        heapPush(parent); /* push back into heap */
    }
    return heapPop(); /* the root of the Huffman tree */
}

int main() {
    char chars[] = {'A', 'B', 'C', 'D', 'E', 'F'};
    int freqs[] = {45, 13, 12, 16, 9, 5};
    int n = 6;
    HNode *root = buildHuffman(chars, freqs, n);
    char code[50];
    printf("Huffman codes:\n");
    printCodes(root, code, 0);
    return 0;
}
💡 Note
Huffman coding is used everywhere. ZIP and GZIP use it for lossless compression. JPEG uses a variant for image compression. MP3 uses it for audio. HTTP/2 uses HPACK which is Huffman-based for header compression. Every time you download a file faster than raw bytes would allow — Huffman coding is likely involved.
// Problem 4
Problem 04 · Job Sequencing with Deadlines — Maximise Profit
Time: O(n²) · Space: O(n)

You have n jobs. Each takes exactly 1 unit of time. Each has a deadline and a profit (earned only if completed before/by the deadline). Only one job can run at a time. Select jobs to maximise total profit. Greedy: always pick the highest profit job first, schedule it as late as possible before its deadline.

c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct Job {
    char id;
    int deadline;
    int profit;
};
typedef struct Job Job;

/* sort by profit descending */
int compareProfit(const void *a, const void *b) {
    return ((Job*)b)->profit - ((Job*)a)->profit;
}

int jobSequencing(Job jobs[], int n) {
    /* sort jobs by profit — highest first */
    qsort(jobs, n, sizeof(Job), compareProfit);
    /* find max deadline — tells us how many time slots we need */
    int maxDeadline = 0, i;
    for (i = 0; i < n; i++) {
        if (jobs[i].deadline > maxDeadline) maxDeadline = jobs[i].deadline;
    }
    /* slots array: slots[t] = job id scheduled at time t (0 = free).
       We use 0 as the "free" marker: plain char may be unsigned, so a
       slot memset to -1 would never compare equal to -1 on some platforms. */
    char slots[maxDeadline + 1];
    memset(slots, 0, sizeof(slots));
    int totalProfit = 0, count = 0;
    for (i = 0; i < n; i++) {
        /* try to schedule this job as LATE as possible before its deadline */
        int t;
        for (t = jobs[i].deadline; t >= 1; t--) {
            if (slots[t] == 0) { /* slot is free */
                slots[t] = jobs[i].id;
                totalProfit += jobs[i].profit;
                count++;
                break;
            }
        }
    }
    printf("Scheduled jobs: ");
    for (i = 1; i <= maxDeadline; i++) {
        if (slots[i] != 0) printf("%c ", slots[i]);
    }
    printf("\nTotal profit: %d\n", totalProfit);
    return totalProfit;
}

int main() {
    Job jobs[] = {
        {'A', 2, 100},
        {'B', 1, 19},
        {'C', 2, 27},
        {'D', 1, 25},
        {'E', 3, 15},
    };
    jobSequencing(jobs, 5);
    /* Scheduled (in time order): C A E → profit 27+100+15 = 142 */
    return 0;
}
// Section 6

When Greedy Fails — And Why

Greedy is not a universal solution. It fails whenever a locally optimal choice leads to a globally suboptimal result. Here are the three most important examples — understand them and you will never misapply greedy again.

✗ Failure 1 — 0/1 Knapsack
Items: X(weight=1, value=1), Y(weight=10, value=9). Capacity=10.
Ratios: X = 1.0, Y = 0.9 — so greedy by ratio picks X first.
Greedy: take X (1 kg, ₹1), then Y (10 kg) no longer fits → total ₹1.
Optimal: take Y alone → ₹9.
One tiny "best-ratio" item blocked a far more valuable one — exactly why 0/1 knapsack needs DP.
✗ Failure 2 — Coin Change with unusual denominations
Coins: [1, 3, 4]. Amount: 6.
Greedy: 4 + 1 + 1 = 3 coins
Optimal (DP): 3 + 3 = 2 coins
Greedy grabs 4 (the largest) but misses the fact that two 3s are better.
✗ Failure 3 — Shortest path without Dijkstra
Simply picking the shortest edge at each step (pure greedy) does not find the shortest path in a graph. A short first step might lead to a very long path overall. This is why Dijkstra needs the priority queue — it is a more careful greedy that considers total distance, not just the current edge.
// Section 7

Greedy vs DP — The Decision Framework

Aspect          | Greedy                                       | Dynamic Programming
Approach        | One pass — pick best now, never look back    | Explore all choices, store results, pick optimal
Speed           | Fast — usually O(n log n)                    | Slower — usually O(n²) or O(n × W)
Correctness     | Only correct if greedy choice property holds | Always correct if recurrence is right
Code complexity | Simple — sort + one loop                     | More complex — 2D tables, careful indexing
Works on        | Activity selection, fractional knapsack, Huffman | 0/1 knapsack, LCS, coin change, edit distance
Safety          | Can silently give wrong answer               | Gives optimal answer by construction
Quick decision rule for interviews:
Q: Can you take fractions of items?
Yes → Fractional knapsack → Greedy. No → 0/1 knapsack → DP
Q: Is it about scheduling maximum non-overlapping events?
Sort by finish time → Greedy works perfectly
Q: Is it minimum coins / minimum steps / minimum operations?
Almost always DP — greedy fails with non-standard denominations
Q: Does making the greedy choice now ever make a future choice worse?
If yes → DP. If provably no → Greedy
// What's Next

You Are Ready for Unit 18

You now know greedy algorithms — the idea, four classic problems, when greedy is provably correct, and the critical cases where it fails. The greedy vs DP decision framework will save you hours in interviews.

In Unit 18 we cover Backtracking — try a path, hit a dead end, undo, try another. The technique behind N-Queens, Sudoku solvers, and maze problems. Backtracking is brute force made smart — it prunes paths that cannot possibly lead to a solution.

UP NEXT → UNIT 18
Backtracking — Try, Fail, Undo, Try Again
N-Queens, Rat in a Maze, Sudoku Solver, Subset Sum — all in C.
Coming Soon →

🎯 Key Takeaways

  • Greedy = at each step pick the locally best option and never reconsider it
  • Two conditions for greedy to work: greedy choice property (local best is part of global best) + optimal substructure
  • Activity selection: sort by finish time, always pick the earliest-finishing non-overlapping activity — provably optimal
  • Fractional knapsack: sort by value/weight ratio, take highest ratio items first, take fractions when needed — O(n log n)
  • Huffman coding: always merge the two lowest-frequency nodes — builds optimal prefix-free binary codes
  • Job sequencing: sort by profit descending, schedule each job as late as possible before its deadline
  • Greedy FAILS for 0/1 knapsack, coin change with unusual denominations, and shortest path without careful tracking
  • Greedy vs DP: if you can take fractions → greedy. If you cannot → DP. If minimum ops/coins → almost always DP
  • Greedy silently gives wrong answers when the greedy choice property does not hold — always verify correctness
  • Greedy is usually O(n log n) due to sorting. DP is usually O(n²) or higher. When greedy is correct, always prefer it