Limiting Results — LIMIT / OFFSET
Control exactly how many rows come back, implement pagination, and understand the performance implications of LIMIT on large tables
// Part 01
Why You Almost Never Want All the Rows
Every query you have written so far returns every row that satisfies the WHERE condition. On the FreshMart database with 20 customers and 30 orders, that is fine. In a real production database at Swiggy, Flipkart, or Razorpay, the equivalent tables have tens of millions of rows. Returning all of them in a single query is not just slow — it can crash your application, exhaust database memory, and bring down services for other users.
LIMIT is the clause that says: "Stop after you have returned N rows." It is one of the most important performance tools in SQL. Combined with ORDER BY from Module 08, LIMIT lets you efficiently answer questions like:
- Who are the top 10 customers by total spend?
- What are the 5 most recent orders?
- Which 3 products are the most expensive?
- What is the single largest order ever placed?
- Which 20 products belong on page 1 of the catalogue?
All five of these require LIMIT. Without it, answering "the top 10" means sorting all 10 million rows and reading them all — with LIMIT 10, the database stops as soon as it has found 10 qualifying rows.
// Part 02
LIMIT — The Syntax and What It Does
LIMIT takes a single integer — the maximum number of rows to return. It is always the last clause in a query, after ORDER BY.
LIMIT does not guarantee which rows you get — ORDER BY does
This is the most important thing to understand about LIMIT. LIMIT without ORDER BY returns an arbitrary set of rows — whichever rows the database happens to encounter first in its internal scan. Two identical queries can return different rows if the storage layout changes. LIMIT alone answers "give me any N rows." ORDER BY + LIMIT answers "give me the top N rows by this criterion." Always combine them.
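A runnable sketch of the difference. The playground uses DuckDB, but Python's built-in sqlite3 accepts the same LIMIT syntax, so the example runs anywhere; the products table and its rows are made up for illustration.

```python
import sqlite3

# Hypothetical mini catalogue standing in for the FreshMart products table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (product_name TEXT, unit_price REAL)")
conn.executemany(
    "INSERT INTO products VALUES (?, ?)",
    [("Rice", 80), ("Ghee", 550), ("Milk", 60), ("Honey", 320), ("Tea", 240)],
)

# LIMIT alone: "give me any 3 rows" -- which 3 you get is not guaranteed.
any_three = conn.execute("SELECT product_name FROM products LIMIT 3").fetchall()
print(len(any_three))  # 3

# ORDER BY + LIMIT: "give me the top 2 by price" -- specific and reproducible.
top_two = conn.execute(
    "SELECT product_name, unit_price FROM products "
    "ORDER BY unit_price DESC LIMIT 2"
).fetchall()
print(top_two)  # [('Ghee', 550.0), ('Honey', 320.0)]
```

The first query happens to look stable on a tiny in-memory table, but nothing in SQL promises that; only the second query's result is defined by the query itself.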
// Part 03
OFFSET — Skip N Rows Before Returning
OFFSET tells the database to skip a specified number of rows before starting to return results. OFFSET is almost always paired with LIMIT — LIMIT says how many rows to return, OFFSET says how many to skip first.
The pattern: for page number P with page size N, the OFFSET is (P - 1) × N. With a page size of 5: page 1 → OFFSET 0, page 2 → OFFSET 5, page 3 → OFFSET 10, page 4 → OFFSET 15. This is how every paginated list in every application works at the SQL level.
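The formula as a runnable sketch, using sqlite3 with a throwaway 20-row orders table (ids 1 to 20 are invented data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO orders VALUES (?)", [(i,) for i in range(1, 21)])

PAGE_SIZE = 5

def fetch_page(page_number):
    # OFFSET = (P - 1) * N, exactly the formula above.
    offset = (page_number - 1) * PAGE_SIZE
    rows = conn.execute(
        "SELECT order_id FROM orders ORDER BY order_id LIMIT ? OFFSET ?",
        (PAGE_SIZE, offset),
    ).fetchall()
    return [r[0] for r in rows]

print(fetch_page(1))  # [1, 2, 3, 4, 5]
print(fetch_page(3))  # [11, 12, 13, 14, 15]
```

Note the ORDER BY: without it, "page 3" would be five arbitrary rows, and rows could repeat or vanish between pages.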
// Part 04
Pagination — Building Page-by-Page Navigation
Pagination is how applications display large datasets in manageable chunks — 10 results per page, 20 per page, 50 per page. The SQL behind every paginated list is LIMIT + OFFSET + ORDER BY.
The pagination formula
How many pages are there? — COUNT with LIMIT
To know how many total pages exist, you need to know the total row count. Applications typically run two queries: one for the total count (to calculate total pages), and one with LIMIT + OFFSET for the actual data.
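A sketch of the two-query pattern. The table, the status column, and the data are hypothetical; the key point is that both queries share the same WHERE clause.

```python
import math
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(i, "delivered" if i % 2 else "pending") for i in range(1, 24)],
)

page, page_size = 2, 5

# Query 1: total count, with the SAME filter, for the pagination controls.
total = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE status = 'delivered'"
).fetchone()[0]
total_pages = math.ceil(total / page_size)

# Query 2: the actual page of data.
rows = conn.execute(
    "SELECT order_id FROM orders WHERE status = 'delivered' "
    "ORDER BY order_id LIMIT ? OFFSET ?",
    (page_size, (page - 1) * page_size),
).fetchall()

print(total, total_pages)    # 12 delivered orders -> 3 pages of 5
print([r[0] for r in rows])  # page 2: [11, 13, 15, 17, 19]
```

If the two queries used different WHERE clauses, the page count and the pages themselves would disagree, which is a common pagination bug.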
// Part 05
Syntax Variations Across Databases
LIMIT and OFFSET are supported in PostgreSQL, MySQL, SQLite, and DuckDB (the playground). SQL Server and Oracle use different syntax — important to know when working in enterprise environments.
| Database | Syntax | Notes |
|---|---|---|
| PostgreSQL / MySQL / SQLite / DuckDB | LIMIT n OFFSET m | Most common. OFFSET is optional — omit it for no skipping. |
| PostgreSQL (alternative) | FETCH FIRST n ROWS ONLY | PostgreSQL also accepts the SQL-standard FETCH syntax alongside LIMIT/OFFSET. |
| SQL Server | ORDER BY col OFFSET m ROWS FETCH NEXT n ROWS ONLY | FETCH requires a preceding OFFSET clause, and ORDER BY is mandatory. |
| Oracle (12c+) | FETCH FIRST n ROWS ONLY or OFFSET m ROWS FETCH NEXT n ROWS ONLY | Modern Oracle uses FETCH. Older Oracle used ROWNUM workarounds. |
| MySQL (alternative) | LIMIT m, n | Reversed order: skip m rows, return n. Confusing — avoid this form. |
// Part 06
LIMIT and Performance — How the Database Executes It
Understanding what happens inside the database when you add LIMIT explains why it can dramatically speed up queries — and also why it sometimes does not help as much as you expect.
How LIMIT helps — early termination
When the database executes ORDER BY column DESC LIMIT 10, it does not necessarily sort all rows first. If the sort column has an index, the database can read the index in reverse order, take the first 10 entries, and stop — never reading the rest of the table. This is called early termination and makes LIMIT with an indexed ORDER BY column essentially free regardless of table size.
Even without an index, the database uses a top-N heap sort optimisation — it maintains a sorted buffer of only N rows as it scans the table, discarding any row that would not make the top N. This is far cheaper than sorting all rows and then taking the first N.
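The top-N heap idea can be illustrated outside the database. This Python sketch scans a list once while keeping a buffer of only N entries, the same shape of work the engine does; the prices are random made-up data.

```python
import heapq
import random

random.seed(42)
prices = [random.randint(1, 1_000_000) for _ in range(100_000)]

N = 10
heap = []  # min-heap of at most N items; heap[0] is the current cutoff
for price in prices:
    if len(heap) < N:
        heapq.heappush(heap, price)
    elif price > heap[0]:
        # This row makes the top N: evict the current smallest.
        heapq.heapreplace(heap, price)
    # Otherwise the row is discarded immediately -- nothing is fully sorted.

top_n = sorted(heap, reverse=True)
# Same answer as sorting everything, at a fraction of the memory and work.
assert top_n == sorted(prices, reverse=True)[:N]
```

The buffer never holds more than 10 items, no matter how many rows are scanned, which is why ORDER BY ... LIMIT 10 is so much cheaper than ORDER BY alone.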
When LIMIT does NOT help
LIMIT does not help when it is combined with a large OFFSET. To execute LIMIT 10 OFFSET 990000 (page 99,001 of a 10-per-page list), the database must still process 990,010 rows to find which 10 to return — it skips 990,000 but it has to count through all of them first. Deep pagination with large OFFSETs is expensive regardless of how small the LIMIT is. This is called the deep pagination problem and is why applications like Twitter and Instagram use cursor-based pagination rather than OFFSET-based pagination for very large datasets.
LIMIT is not a substitute for WHERE
A common beginner mistake: using LIMIT to make a query "fast" without adding a WHERE clause. LIMIT 10 on a 500-million-row table without WHERE still starts a full table scan — the database reads pages from the beginning until it has found 10 rows, which is quick only when qualifying rows happen to sit near the start. And if the ORDER BY requires a sort (no supporting index), all 500 million rows must be processed before LIMIT can take effect. WHERE filters early. LIMIT cuts late. They serve different purposes and are most powerful together.
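A small sketch of WHERE and LIMIT working together. The orders table, the city column, and the data are invented; the point is that WHERE defines which rows qualify, ORDER BY ranks them, and LIMIT caps the output.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (order_id INTEGER PRIMARY KEY, city TEXT, amount REAL)"
)
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(i, "Pune" if i % 3 == 0 else "Delhi", i * 10.0) for i in range(1, 31)],
)

# WHERE narrows to Pune orders, ORDER BY ranks them, LIMIT caps the output.
rows = conn.execute(
    "SELECT order_id, amount FROM orders "
    "WHERE city = 'Pune' ORDER BY amount DESC LIMIT 3"
).fetchall()
print(rows)  # [(30, 300.0), (27, 270.0), (24, 240.0)]
```

Dropping the WHERE would not make the query wrong, just wasteful: the engine would rank every city's orders only to throw most of them away.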
// Part 07
Cursor-Based Pagination — The Production Alternative
OFFSET-based pagination has a fundamental problem at scale: as the offset grows, performance degrades. Page 1 is fast. Page 10,000 is slow. There is also a data consistency issue — if new rows are inserted while a user is paginating, rows can shift, causing duplicates or skipped items across pages.
The production alternative for large-scale pagination is cursor-based pagination (also called keyset pagination). Instead of "skip N rows," you say "give me the next N rows after this specific ID." Because the primary key is always indexed, the WHERE clause jumps directly to the right position regardless of how deep into the dataset you are.
Cursor-based pagination runs in effectively constant time — the same speed on page 1 as on page 1,000,000 — because the WHERE condition on the primary key is a single index lookup. OFFSET-based pagination is O(n), proportional to how deep you are in the dataset. For applications serving millions of users with infinite scroll (Twitter, Instagram, Flipkart product listings), cursor-based pagination is the only viable approach.
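A minimal keyset-pagination sketch. The 100-row orders table is made up, and `next_page` is a hypothetical helper name; the essential line is the WHERE order_id > cursor seek instead of an OFFSET.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO orders VALUES (?)", [(i,) for i in range(1, 101)])

def next_page(after_id, page_size=10):
    # Keyset pagination: seek past the cursor via the primary-key index
    # instead of counting through OFFSET rows.
    rows = conn.execute(
        "SELECT order_id FROM orders WHERE order_id > ? "
        "ORDER BY order_id ASC LIMIT ?",
        (after_id, page_size),
    ).fetchall()
    ids = [r[0] for r in rows]
    next_cursor = ids[-1] if ids else None  # client sends this back next time
    return ids, next_cursor

ids, cursor = next_page(after_id=0)
print(ids)  # [1, ..., 10]
ids, cursor = next_page(after_id=cursor)
print(ids)  # [11, ..., 20]
```

The client never sees a page number, only an opaque cursor; that is exactly why arbitrary page jumps are not possible with this scheme.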
// Part 08
Common LIMIT Patterns in Production
These are the patterns you will write most often in real work.
Top N — leaderboards and rankings
Most recent N records
Single record lookup — the top 1 pattern
Sampling — get a random representative subset
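The four patterns above, sketched against a small invented orders table. The sqlite3 syntax shown (including ORDER BY RANDOM() for sampling) also works in PostgreSQL and DuckDB, though other engines spell the random function differently.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (order_id INTEGER PRIMARY KEY, customer TEXT, amount REAL)"
)
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(i, f"cust_{i % 4}", i * 7.0) for i in range(1, 13)],
)
q = conn.execute

# Top N: biggest orders first (leaderboard shape).
top3 = q("SELECT order_id FROM orders ORDER BY amount DESC LIMIT 3").fetchall()

# Most recent N: assuming order_id grows over time, newest first.
recent = q("SELECT order_id FROM orders ORDER BY order_id DESC LIMIT 5").fetchall()

# Top 1: single-record lookup -- the single largest order.
biggest = q("SELECT order_id FROM orders ORDER BY amount DESC LIMIT 1").fetchone()

# Sampling: any N rows in random order.
sample = q("SELECT order_id FROM orders ORDER BY RANDOM() LIMIT 2").fetchall()

print([r[0] for r in top3], biggest[0], len(sample))
```

Note that every pattern except sampling pairs LIMIT with a deterministic ORDER BY; sampling is the one legitimate case where arbitrary rows are the goal.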
// Part 09
What This Looks Like at Work
You are a backend engineer at Meesho building the product listing API. The product manager asks you to implement the product catalogue endpoint — it needs to support pagination, filtering by category, and sorting by price. This is a classic LIMIT + OFFSET API implementation.
🎯 Pro Tip
In any API that serves paginated data, always return metadata alongside the data: total_count, total_pages, current_page, and has_next_page. The frontend needs total_count to render pagination controls. has_next_page is cheaper to compute than total_count for cursor-based pagination and is sufficient for infinite scroll. Never make the frontend calculate pagination state from the raw data.
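A sketch of the metadata calculation. The field names mirror the ones above but are conventions, not a fixed standard; `pagination_meta` is a hypothetical helper.

```python
import math

def pagination_meta(total_count, page, page_size):
    # Metadata a paginated API returns alongside the rows, so the frontend
    # never has to derive pagination state from the raw data.
    total_pages = math.ceil(total_count / page_size) if total_count else 0
    return {
        "total_count": total_count,
        "total_pages": total_pages,
        "current_page": page,
        "has_next_page": page < total_pages,
    }

print(pagination_meta(total_count=47, page=2, page_size=10))
# {'total_count': 47, 'total_pages': 5, 'current_page': 2, 'has_next_page': True}
```

For cursor-based endpoints you would drop total_count and total_pages and return only the cursor plus has_next_page, which can be computed by fetching page_size + 1 rows and checking whether the extra row exists.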
// Part 10
Interview Prep — 5 Questions With Complete Answers
Question 1 — What does LIMIT do, and when is it evaluated?
LIMIT restricts the number of rows returned by a query to at most the specified number. If the query would normally return 10,000 rows, LIMIT 10 returns only 10. LIMIT is evaluated last in the SQL execution order — after FROM, WHERE, GROUP BY, HAVING, SELECT, and ORDER BY — so it cuts the final result set after all other processing is complete.
ORDER BY is critical when using LIMIT because without it, LIMIT returns an arbitrary set of rows — whichever rows the database encounters first in its internal storage scan. This order is non-deterministic and can vary between executions as storage layouts change. LIMIT 10 without ORDER BY answers "give me any 10 rows," which is useful only for sampling. LIMIT 10 with ORDER BY unit_price DESC answers "give me the 10 most expensive rows" — a specific, reproducible, meaningful result.
The combination ORDER BY column DESC LIMIT n is also a performance pattern: if the sort column has an index, the database reads the index in order and stops after n rows — never reading the rest of the table. Without an index, the database uses a top-N heap sort that is far cheaper than a full sort followed by truncation. In both cases, LIMIT + ORDER BY is much more efficient than returning all rows and discarding most of them in application code.
Question 2 — What is OFFSET, and how do you implement pagination with it?
OFFSET tells the database to skip a specified number of rows before starting to return results. LIMIT 10 OFFSET 20 skips the first 20 rows and returns the next 10 — rows 21 through 30. OFFSET is almost always used with LIMIT. Without LIMIT, OFFSET alone means "skip these rows and return everything remaining" — useful occasionally but not the standard use case.
The pagination formula for page-based navigation: OFFSET = (page_number - 1) × page_size. For a 10-items-per-page list: page 1 → OFFSET 0, page 2 → OFFSET 10, page 3 → OFFSET 20, page N → OFFSET (N-1)×10. The query structure is always: SELECT ... FROM ... WHERE ... ORDER BY ... LIMIT page_size OFFSET (page_number - 1) * page_size.
To know how many total pages exist: run a separate SELECT COUNT(*) with the same WHERE clause, then calculate total_pages = CEIL(total_count / page_size). Applications typically run both queries per page request — the count query for the pagination controls and the data query for the actual rows. This two-query pattern is standard in REST APIs, GraphQL resolvers, and database-backed dashboards.
Question 3 — What is the deep pagination problem, and how is it solved?
The deep pagination problem is the performance degradation that occurs with large OFFSET values. To execute LIMIT 10 OFFSET 990000, the database must process 990,010 rows — skip 990,000 and return 10. Even though only 10 rows are returned, the work of scanning through 990,000 rows cannot be avoided. Performance degrades linearly with OFFSET size: if page 1 takes 1ms, page 100,000 takes on the order of 100,000ms. On tables with billions of rows, deep pagination with large OFFSETs becomes functionally unusable.
The solution is cursor-based pagination (also called keyset pagination). Instead of "skip N rows," use "give me rows after this specific cursor value." For example, instead of LIMIT 10 OFFSET 990000, store the last order_id from the previous page (say, order_id = 991000) and run: WHERE order_id > 991000 ORDER BY order_id ASC LIMIT 10. Because order_id is the primary key and always indexed, this WHERE clause does a direct index lookup regardless of how deep into the table it is — O(1) performance on page 1 and page 1,000,000 alike.
The trade-off: cursor-based pagination cannot jump to an arbitrary page number (you cannot say "go to page 5000" without knowing the cursor for page 4999). It only supports "next page" and sometimes "previous page." This is why Instagram, Twitter, and product catalogues with infinite scroll use cursor-based pagination — users always go next, never jump to page 5000. Admin dashboards and reports where users need to jump to specific pages sometimes still use OFFSET despite the performance cost, accepting the trade-off for the UX benefit.
Question 4 — How is LIMIT different from WHERE? Both reduce the rows returned.
WHERE and LIMIT both reduce the number of rows returned, but they operate at fundamentally different points in the execution pipeline and serve entirely different purposes. WHERE is a filter — it specifies which rows qualify to be in the result at all, based on their values. It runs early in the execution order (after FROM, before GROUP BY, SELECT, ORDER BY) and can use indexes to avoid reading rows that do not match. WHERE changes which rows are in the result set.
LIMIT is a cap — it stops returning rows after the specified count regardless of how many qualifying rows remain. It runs last in the execution order, after all other processing including ORDER BY. LIMIT does not filter based on values — it simply truncates the result at N rows. LIMIT changes how many rows from the result set are returned to the caller, not which rows qualify.
Performance implication: WHERE on an indexed column can reduce the rows the database processes from billions to hundreds — a massive speedup applied early. LIMIT reduces the rows returned to the caller but the database may still process many rows to find them, especially without an index or with a large OFFSET. A query with no WHERE and LIMIT 10 on a billion-row table without an appropriate index still scans the full table (or sorts it). Always use WHERE to filter the relevant data, then LIMIT to cap the output size. They are complementary, not interchangeable.
Question 5 — How does LIMIT/OFFSET syntax vary across databases?
LIMIT n OFFSET m is the most common syntax and works in PostgreSQL, MySQL, SQLite, DuckDB, and MariaDB. OFFSET is optional — LIMIT n alone returns the first n rows with no skipping. This syntax is clean and widely understood, making it the de facto standard for most modern development.
SQL Server uses a different syntax that is part of the SQL standard: ORDER BY column OFFSET m ROWS FETCH NEXT n ROWS ONLY. Important: SQL Server requires ORDER BY when using OFFSET/FETCH — you cannot use it without specifying a sort order. Oracle 12c and later uses the same FETCH NEXT syntax. Older Oracle versions (pre-12c) required a ROWNUM workaround: SELECT * FROM (SELECT t.*, ROWNUM rn FROM table t WHERE ROWNUM <= n + m) WHERE rn > m — cumbersome and now rarely needed.
MySQL has a shorthand form LIMIT m, n where the first argument is the offset and the second is the count — the reverse of LIMIT n OFFSET m. This form is valid MySQL but widely considered confusing because the argument order is counterintuitive. Avoid it in shared codebases. PostgreSQL supports both LIMIT/OFFSET and FETCH NEXT, making it compatible with both MySQL-style and SQL Server-style code. When writing SQL that must run on multiple database types, use FETCH FIRST n ROWS ONLY (no offset) or OFFSET m ROWS FETCH NEXT n ROWS ONLY — these are part of the SQL standard and work on PostgreSQL, SQL Server, Oracle, and DB2.
// Part 11
Errors You Will Hit — And Exactly Why They Happen
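Two of the most common LIMIT mistakes, reproduced in a sqlite3 sketch with a throwaway table. The first is a hard syntax error; the second is worse because it fails silently.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(5)])

# Error 1: clause order. LIMIT must come AFTER ORDER BY, never before it.
try:
    conn.execute("SELECT x FROM t LIMIT 2 ORDER BY x")
except sqlite3.OperationalError as e:
    print("clause order:", e)  # syntax error near "ORDER"

# Error 2: OFFSET past the end is NOT an error. It silently returns zero
# rows, which surfaces in applications as a mysteriously empty page.
rows = conn.execute("SELECT x FROM t ORDER BY x LIMIT 2 OFFSET 100").fetchall()
print(rows)  # []
```

The empty-page behaviour is why paginated UIs should clamp the requested page number against total_pages before building the query.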
🎯 Key Takeaways
- ✓LIMIT n returns at most n rows from the result. It is always the last clause in a query, executed after ORDER BY.
- ✓LIMIT without ORDER BY returns arbitrary rows — non-deterministic and changes between runs. Always combine LIMIT with ORDER BY to get a specific, reproducible set of rows.
- ✓OFFSET m skips the first m rows before returning results. Used with LIMIT for pagination: LIMIT page_size OFFSET (page_number - 1) * page_size.
- ✓Pagination formula: OFFSET = (page_number - 1) × page_size. Page 1 → OFFSET 0. Page 2 → OFFSET page_size. Page N → OFFSET (N-1) × page_size.
- ✓Always add the primary key as the final ORDER BY tiebreaker for paginated queries — guarantees no duplicate or skipped rows across pages.
- ✓Deep pagination with large OFFSET values is slow — the database must count through all skipped rows. LIMIT 10 OFFSET 1000000 processes 1,000,010 rows.
- ✓Cursor-based pagination (WHERE id > last_cursor ORDER BY id LIMIT n) solves deep pagination — O(1) performance regardless of depth because the WHERE on the primary key uses an index lookup.
- ✓LIMIT is not a substitute for WHERE. LIMIT cuts after processing. WHERE filters before processing. Use WHERE to reduce the dataset, then LIMIT to cap the output.
- ✓Syntax varies by database: LIMIT n OFFSET m (PostgreSQL/MySQL/SQLite/DuckDB), OFFSET m ROWS FETCH NEXT n ROWS ONLY (SQL Server/Oracle 12c+).
- ✓For paginated APIs, always return metadata: total_count, total_pages, current_page, has_next_page. The frontend needs this to render pagination controls.
What comes next
In Module 10, you learn DISTINCT — how to remove duplicate rows from your results, when to use it, and the performance cost it carries on large tables.
Module 10 → Removing Duplicates — DISTINCT