9 Dec 2025

3x Faster TID Range Scans - Postgres 19

If you've ever had to scrub very large tables, whether updating older records or deleting expired ones, you know the pain of trying to do it efficiently without locking the entire table. One common trick is to iterate over the ctid (the physical location of the tuple) to process the table in chunks. Until now, this was strictly a single-threaded affair. But with the latest commit to Postgres 19, TID Range Scans can now be parallelized, allowing you to harness multiple cores to rip through these maintenance tasks significantly faster.

In my tests, up to 3x faster!

The Problem with Linear Scanning

When you perform a batched operation using ctid (e.g., WHERE ctid >= '(0,0)' AND ctid < '(1000,0)'), Postgres uses a Tid Range Scan. Previously, this scan type didn't support parallelism. If you had a 1 TB table and wanted to process ranges of it, a single CPU core would have to do all the heavy lifting for that range, fetching blocks one by one. Even if you had 64 cores gathering dust, they couldn't help with that specific scan node.
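
To make that concrete, here is a minimal sketch of such a chunked cleanup loop. The table my_big_table, the expired_at predicate, and the 1,000-block batch size are all hypothetical; the ctid range predicate is the part that produces a Tid Range Scan.

DO $$
DECLARE
    -- Total blocks in the table, using the server's block_size setting
    max_block bigint := pg_relation_size('my_big_table') / current_setting('block_size')::int;
    step      int    := 1000;   -- blocks per batch
    lo        bigint := 0;
BEGIN
    WHILE lo <= max_block LOOP
        DELETE FROM my_big_table   -- hypothetical table and column
        WHERE ctid >= format('(%s,0)', lo)::tid
          AND ctid <  format('(%s,0)', lo + step)::tid
          AND expired_at < now() - interval '90 days';
        lo := lo + step;
        COMMIT;  -- one transaction per batch (PG 11+, when run outside a transaction block)
    END LOOP;
END $$;

Each batch only touches a narrow slice of the heap, so locks stay short-lived and vacuum can keep up between batches.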

The Solution: Parallel TID Range Scan

Commit 0ca3b169, developed by Cary Huang and committed by David Rowley, introduces infrastructure to allow Tid Range Scan to participate in parallel query plans. The logic effectively splits the block range among the available parallel workers. Instead of one process sweeping from block 0 to N, workers can grab chunks of blocks concurrently.

I was following the discussion on the mailing list, where Cary and David collaborated closely to refine the parallel-safety logic and ensure workers handle scan limits correctly, leading to the robust implementation that landed in the master branch.

NOTE
This feature is currently in the master branch (Postgres 19devel). As with all unreleased features, it could theoretically be rolled back or modified before the final release.

Benchmark: Before vs. After

To see this in action, I spun up two Postgres instances: one built just before the commit (Postgres 18) and one after (Postgres 19). I created a table bench_tid_range with 10 million rows and ran a count(*) query over the first 50% of the table using a ctid range condition.

Test Setup:

  • Table Size: 10 Million rows
  • Query: SELECT count(*) FROM bench_tid_range WHERE ctid >= '(0,0)' AND ctid < '(41667,0)'

| Environment    | Workers | Execution Time (Median) | Speedup |
|----------------|---------|-------------------------|---------|
| Before (Pg 18) | 0       | 448 ms                  | 1.00x   |
| After (Pg 19)  | 0       | 435 ms                  | 1.03x   |
| After (Pg 19)  | 1       | 238 ms                  | 1.88x   |
| After (Pg 19)  | 2       | 174 ms                  | 2.58x   |
| After (Pg 19)  | 3       | 151 ms                  | 2.97x   |
| After (Pg 19)  | 4       | 150 ms                  | 2.98x   |
| After (Pg 19)  | 5       | 147 ms                  | 3.05x   |
| After (Pg 19)  | 6       | 143 ms                  | 3.14x   |
| After (Pg 19)  | 7       | 147 ms                  | 3.04x   |
| After (Pg 19)  | 8       | 147 ms                  | 3.04x   |

*(Each execution time is the median of 9 runs)*

The table above illustrates the performance scaling. Note that Postgres 19 with 0 workers (i.e., serial execution) is almost identical in speed to Postgres 18; the efficiency gains appear only when parallelism is enabled.

We see a massive drop in execution time just by enabling 1 worker (which effectively gives us 2 processes scanning: the leader + 1 worker). The gains continue robustly up to 3 workers, where execution time settles around 150 ms. Beyond that, we hit diminishing returns as the overhead of managing parallel workers and aggregating results begins to dominate the raw scanning speed. The "sweet spot" for this specific workload appears to be around 2-3 workers.
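
To reproduce a specific row of the table, you need to control the worker count per run. One way to do that (a sketch; the post doesn't show the exact commands used) is to combine the table's parallel_workers storage parameter with the session limit:

-- Ask the planner to plan for exactly 3 workers on this table
ALTER TABLE bench_tid_range SET (parallel_workers = 3);
-- Allow the executor to actually launch them
SET max_parallel_workers_per_gather = 3;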

Under the Hood: Deep Dive

Let's look at the actual EXPLAIN (ANALYZE, BUFFERS) output to see *why* it's faster.

Before (Postgres 18)

The legacy behavior forces a serial scan. The single process reads all 41,667 pages.


 Aggregate  (cost=104171.87..104171.88 rows=1 width=8) (actual time=427.723..427.724 rows=1.00 loops=1)
   Buffers: shared read=41667
   ->  Tid Range Scan on bench_tid_range  (cost=0.01..91671.70 rows=5000069 width=0) (actual time=0.130..243.502 rows=5000040.00 loops=1)
         TID Cond: ((ctid >= '(0,0)'::tid) AND (ctid < '(41667,0)'::tid))
         Buffers: shared read=41667

After (Postgres 19)

The new Parallel Tid Range Scan node appears. Notice the Gather node launching 2 workers.


 Finalize Aggregate  (cost=68713.25..68713.26 rows=1 width=8) (actual time=166.873..169.987 rows=1.00 loops=1)
   Buffers: shared read=41667
   ->  Gather  (cost=68713.03..68713.24 rows=2 width=8) (actual time=166.238..169.978 rows=3.00 loops=1)
         Workers Planned: 2
         Workers Launched: 2
         Buffers: shared read=41667
         ->  Partial Aggregate  (cost=67713.03..67713.04 rows=1 width=8) (actual time=164.518..164.519 rows=1.00 loops=3)
               Buffers: shared read=41667
               ->  Parallel Tid Range Scan on bench_tid_range  (cost=0.01..62504.63 rows=2083362 width=0)
                     (actual time=0.095..97.492 rows=1666680.00 loops=3)
                     TID Cond: ((ctid >= '(0,0)'::tid) AND (ctid < '(41667,0)'::tid))
                     Buffers: shared read=41667

Key Improvements:

1. Distributed Cost: The cost estimate dropped from 104k to 68k, indicating the planner correctly anticipates the benefit of parallelism.

2. Shared Load: While the total buffers read (41,667) is the same, the work is shared across 3 processes (1 leader + 2 workers), reducing the wall-clock time for the scan itself from ~243 ms to ~97 ms per worker.

Counter-Intuitive Alternative: Forced Parallel Seq Scan?

You might ask: *"Why didn't Postgres v18 just choose a Parallel Sequential Scan? Wouldn't scanning the whole table with 4 workers be faster than scanning half the table with 1?"*

I tested exactly that on the v18 instance by setting enable_tidscan = off and allowing 4 parallel workers.
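
The session setup for that experiment looks roughly like this (a sketch based on the settings named above):

SET enable_tidscan = off;                 -- rule out the (serial) Tid Range Scan
SET max_parallel_workers_per_gather = 4;  -- allow up to 4 workers

EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM bench_tid_range
WHERE ctid >= '(0,0)' AND ctid < '(41667,0)';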

  • Execution Time: ~230 ms.

 Finalize Aggregate  (cost=124959.76..124959.77 rows=1 width=8) (actual time=227.917..229.476 rows=1.00 loops=1)
   Buffers: shared hit=15791 read=67543
   ->  Gather  (cost=124959.34..124959.75 rows=4 width=8) (actual time=227.660..229.469 rows=5.00 loops=1)
         Workers Planned: 4
         Workers Launched: 4
         Buffers: shared hit=15791 read=67543
         ->  Partial Aggregate  (cost=123959.34..123959.35 rows=1 width=8) (actual time=224.154..224.155 rows=1.00 loops=5)
               Buffers: shared hit=15791 read=67543
               ->  Parallel Seq Scan on bench_tid_range  (cost=0.00..120834.30 rows=1250017 width=0) 
                     (actual time=0.150..169.772 rows=1000008.00 loops=5)
                     Filter: ((ctid >= '(0,0)'::tid) AND (ctid < '(41667,0)'::tid))
                     Rows Removed by Filter: 999992
                     Buffers: shared hit=15791 read=67543
 Planning:
   Buffers: shared hit=25
 Planning Time: 0.217 ms
 Execution Time: 229.584 ms

It was faster than the serial TID scan (~448 ms)! However, it had to visit all ~83k pages (15,791 hit + 67,543 read) instead of just the ~41k pages we cared about.

The new Parallel TID Range Scan (~150 ms) is still ~35% faster than the brute-force Parallel Seq Scan *and* it generates half the I/O load. It is the best of both worlds: fast execution time (parallelism) and efficient resource usage (index-like scoping).

Use Case: The Hotel Analogy

To understand why this matters, let's step away from databases and imagine a massive hotel with 1 million rooms. You need to verify all guests whose names start with a letter before 'S'.

The Logical Approach (Offset Problem)

You tell the front desk: *"Give me a list of the first 100 guests with names A-R."* The clerk scans the registry and hands you 100 people named 'Aaron', 'Abby', etc.

Then you ask: *"Give me the next 100 guests."*

To answer this, the clerk has to skip through the 'A's again just to know where to start counting the 'B's. As you get deeper into the alphabet, the clerk spends more and more time skipping known names just to find the new ones. By the time you are asking for names starting with 'R', they are re-reading 90% of the guest list just to find your batch.
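
In SQL terms, this is the classic LIMIT/OFFSET batching antipattern (the guests table here is purely illustrative):

-- Batch 1: cheap; the clerk reads the first 100 names
SELECT * FROM guests WHERE name < 'S' ORDER BY name LIMIT 100 OFFSET 0;

-- Batch 500: the clerk must re-read and discard 49,900 names first
SELECT * FROM guests WHERE name < 'S' ORDER BY name LIMIT 100 OFFSET 49900;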

The Physical Approach (TID Scan)

This is the TID Range Scan. You stop caring about the names (the logical value) and organize by Room Number (the physical location).

  • Worker 1: "Go check Rooms 1 to 100."
  • Worker 2: "Go check Rooms 101 to 200."

Worker 2 doesn't care who is in Room 1. They go straight to Room 101. If the person inside is named "Zach", they skip them. If it's "Alice", they verify them.

  • Zero Rescanning: No one re-reads the list of 'A's.
  • Parallelism: Because the rooms are physically distinct, you can send 8 workers to different floors simultaneously without them needing to coordinate or skip over each other's results.

This approach transforms an O(N^2) maintenance nightmare into a linear, parallelizable task.

Reproduce this Test

Want to try it yourself on Postgres 19devel? Here is the SQL to set up the test table and run the benchmark.


-- 1. Create the table
DROP TABLE IF EXISTS bench_tid_range;
CREATE TABLE bench_tid_range (id int, payload text);

-- 2. Insert 10M rows to generate ~41k pages
INSERT INTO bench_tid_range 
SELECT x, 'payload_' || x 
FROM generate_series(1, 10000000) x;

-- 3. Vacuum to set visibility map and freeze (important for stable benchmarks)
VACUUM (ANALYZE, FREEZE) bench_tid_range;

-- 4. Enable Parallelism for the Session
SET max_parallel_workers_per_gather = 4; -- Try 2, 4, 8
SET min_parallel_table_scan_size = 0;    -- Force parallel scan even for smaller tables

-- 5. Run the Query
EXPLAIN (ANALYZE, BUFFERS) 
SELECT count(*) 
FROM bench_tid_range 
WHERE ctid >= '(0,0)' AND ctid < '(41667,0)';
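
The '(41667,0)' boundary is simply the halfway block of the ~83k-page table. A quick way to derive it for your own table (using the server's block_size setting):

-- 10M rows occupy ~83,334 pages here, so half is ~41,667
SELECT (pg_relation_size('bench_tid_range')
        / current_setting('block_size')::int) / 2 AS half_block;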

Conclusion

This is a welcome "plumbing" improvement. It might not change your daily ad-hoc queries, but for DBAs and developers building custom data maintenance scripts, the ability to parallelize TID based scans is a powerful new tool in the optimization toolkit.


3 Dec 2025

Speed up JOIN Planning - up to 16x Faster!

The hidden cost of knowing too much. That's one way to describe what happens when your data is skewed, Postgres statistics targets are set high, and the planner tries to estimate a join.

For over 20 years, Postgres used a simple O(N^2) loop to compare Most Common Values (MCVs) during equi-join estimation. That worked fine when statistics targets were small (default_statistics_target defaults to 100). But in the modern era, Postgres best practices often recommend cranking that up, and customers are known to use higher values (1,000 and sometimes more) to handle complex data distributions. Throw a 10-table JOIN into the mix, and this "dumb loop" can easily become a silent performance killer during planning.

That changes in Postgres 19.

The Problem: It's Quadratic!

When you join two tables, the planner needs to estimate how many rows will match. If both columns have MCV lists (lists of the most frequent values), eqjoinsel() tries to match them up to get a precise estimate.

Historically, it did this by comparing every item in list A with every item in list B.

  • If you have 100 MCVs, that's 10,000 comparisons. Fast.
  • If you have 10,000 MCVs (max stats target), that's 100,000,000 comparisons. Not so fast.

This meant that simply *planning* a query could take significantly longer than executing it, especially for simple OLTP queries.

The Solution: Hash It Out

The fix is elegant and effective.

Instead of a nested loop, the planner now:

1. Checks if the total number of MCVs is greater than 200 (100 each side).

2. If so, it builds a Hash Table of the MCVs from one (smaller) side.

3. It then probes this hash table with the MCVs from the other side.

The threshold of 200 was chosen because hashing has a startup cost (allocating memory, computing hashes). For smaller lists, the simple loop is actually faster. But once you cross that threshold, the hash table wins.

This transforms the complexity from O(N^2) to O(N), making the estimation step virtually instantaneous even with the largest statistics targets.

Let's Benchmark

I wanted to verify this myself, so I set up a worst-case scenario: two tables with 3 million rows each but only 10,000 distinct values, and I cranked the statistics target to the maximum (10,000) to force a massive MCV list.

Setup


CREATE TABLE t1 (x int);
CREATE TABLE t2 (x int);

-- Scenario A: 3M rows, Stats 10000
INSERT INTO t1 SELECT x % 10000 FROM generate_series(1, 3000000) x;
INSERT INTO t2 SELECT x % 10000 FROM generate_series(1, 3000000) x;

-- Maximize statistics target
ALTER TABLE t1 ALTER COLUMN x SET STATISTICS 10000;
ALTER TABLE t2 ALTER COLUMN x SET STATISTICS 10000;

ANALYZE t1;
ANALYZE t2;
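
To confirm that ANALYZE actually produced full MCV lists (and therefore that we are well past the 200-entry threshold), you can count the entries in pg_stats; the double cast is a common trick for turning the anyarray column into a countable text[]:

SELECT tablename,
       cardinality(most_common_vals::text::text[]) AS mcv_count
FROM pg_stats
WHERE tablename IN ('t1', 't2')
  AND attname = 'x';
-- Expect mcv_count = 10000 for this setup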

Results

I ran the following query 10 times on both versions, interleaving the runs to ensure fairness, and took the median of the results:


EXPLAIN (ANALYZE, TIMING OFF) SELECT count(*) FROM t1 JOIN t2 ON t1.x = t2.x;

Scenario A: High Stats (10k MCVs)

  • Rows: 3 Million
  • Statistics Target: 10,000
  • MCV Count: 10,000
  • Before (Postgres 18): ~27.8 ms
  • After (Postgres 19): ~1.75 ms
  • Speedup: ~16x

Scenario B: Medium Stats (1k MCVs)

  • Rows: 300,000
  • Statistics Target: 1,000
  • MCV Count: 1,000
  • Before (Postgres 18): ~0.85 ms
  • After (Postgres 19): ~0.60 ms
  • Speedup: ~1.4x (40% speed-up)

Scenario C: Default Stats (100 MCVs)

  • Rows: 30,000
  • Statistics Target: 100 (Default)
  • MCV Count: 100
  • Before (Postgres 18): ~0.40 ms
  • After (Postgres 19): ~0.43 ms
  • Speedup: None (Optimization correctly skipped)

Since the total MCV count in Scenario C (100 + 100 = 200) did not exceed the 200 threshold, Postgres 19 correctly chose the simpler loop, avoiding the overhead of building a hash table. This confirms that the patch is smart enough to only kick in when it matters.

The "Quadratic Curve" in Action

To visualize the O(N^2) vs O(N) difference, I ran a benchmark across a wide range of statistics targets (from 10 to 10,000). In each test, the number of rows was set to 300x the statistics target to ensure the MCV list was fully populated and relevant.
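
The per-target setup follows the same pattern as Scenario A. A sketch for a single data point (target T = 1000 shown; the 300x row count and T distinct values mirror the description above):

ALTER TABLE t1 ALTER COLUMN x SET STATISTICS 1000;
ALTER TABLE t2 ALTER COLUMN x SET STATISTICS 1000;
TRUNCATE t1, t2;
INSERT INTO t1 SELECT x % 1000 FROM generate_series(1, 300000) x;
INSERT INTO t2 SELECT x % 1000 FROM generate_series(1, 300000) x;
ANALYZE t1;
ANALYZE t2;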

| Stats Target | PG 18 (ms) | PG 19 (ms) | Speedup |
|--------------|------------|------------|---------|
| 10           | 0.40       | 0.38       | -       |
| 100          | 0.40       | 0.45       | -       |
| 200          | 0.45       | 0.43       | -       |
| 500          | 0.53       | 0.52       | -       |
| 1000         | 0.85       | 0.63       | 1.3x    |
| 2000         | 1.73       | 0.68       | 2.5x    |
| 5000         | 7.54       | 1.14       | 6.6x    |
| 8000         | 17.97      | 1.57       | 11.4x   |
| 10000        | 27.56      | 1.92       | 14.3x   |

As you can see, up to a target of 500, the difference is negligible (and the optimization might not even kick in). But as the target grows, the quadratic cost of the old method explodes, while the new hash-based method scales linearly and remains extremely fast.

This patch is a classic example of modernizing legacy assumptions. The code written 20 years ago assumed MCV lists would be short (in all fairness the default was 100 for a long time). Today's hardware and data requirements have pushed those boundaries, and Postgres is evolving to meet them.

Thanks to Ilia Evdokimov for the patch, David Geier for co-authoring, and Tom Lane for the review.

Note: This feature is currently in the master branch for Postgres 19. As with all development features, it is subject to change before the final release.
