29 Nov 2025

Teaching the Query Planner to See Inside C Functions

The Postgres planner just got a new superpower: X-ray vision for your black-box functions.

For years, Postgres has been able to "inline" simple SQL functions. If you wrote CREATE FUNCTION ... LANGUAGE SQL AS 'SELECT ...', the planner would often rip out the function call and paste the raw SQL directly into the main query. This was magic. It meant that WHERE clauses could be pushed down, indexes could be used, and performance was excellent.
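For example, here is a minimal sketch of the long-standing behavior (the table and function names are mine, purely illustrative):


-- A simple SQL function over a hypothetical "users" table:
CREATE FUNCTION active_users() RETURNS SETOF users
LANGUAGE sql STABLE
AS $$ SELECT * FROM users WHERE active $$;

-- Because the body is plain SQL, the planner can inline it, so the call
-- below behaves like "SELECT * FROM users WHERE active AND id = 42"
-- and can use an index on users(id):
SELECT * FROM active_users() WHERE id = 42;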

But if you stepped outside the safety of simple SQL functions—say, into the world of C extensions or complex dynamic SQL—you were often stuck with a "black box." The planner saw a function, executed it blindly, and only *then* applied your filters.

That changes in Postgres 19.

The "Black Box" Problem

Imagine you have a function that returns a large dataset, but you only want a few rows:


SELECT * FROM my_complex_function() WHERE id = 42;

Before Postgres 19, if my_complex_function wasn't a simple SQL function, the database would:

1. Run my_complex_function() to completion (generating, say, 1 million rows).

2. Filter the result to find id = 42.

This is painfully inefficient. You wanted an Index Scan, but you got a full execution followed by a filter.

The Solution: SupportRequestInlineInFrom

With this feature, Postgres now allows function authors (specifically in C extensions) to provide a "support function" that tells the planner: *"Hey, if anyone calls me in a FROM clause, here is the raw SQL query tree that represents what I'm doing."*

The planner can then take that query tree and inline it, just like it does for standard SQL functions.

The Impact

This is a game-changer for:

  • Extensions: Many extensions provide table-like interfaces (e.g., foreign data wrappers, set-returning functions). They can now expose their internal logic to the planner.
  • Dynamic SQL: Functions that generate SQL strings and execute them can now potentially participate in optimization.

Before vs. After

Scenario: A PL/pgSQL function foo_from_bar that executes a dynamic query on text_tbl (containing 1 million rows).


CREATE TABLE text_tbl (f1 text);
INSERT INTO text_tbl SELECT 'row_' || generate_series(1, 1000000);
INSERT INTO text_tbl SELECT 'common_row' FROM generate_series(1, 100);
CREATE INDEX text_tbl_idx ON text_tbl(f1);
ANALYZE text_tbl;

Here is the PL/pgSQL function I used for testing (designed to return 100 rows). Note that it uses dynamic SQL, which is normally opaque to the planner. I then linked it to the C support function test_inline_in_from_support_func using ALTER FUNCTION.


CREATE OR REPLACE FUNCTION foo_from_bar(colname TEXT, tablename TEXT, filter TEXT)
RETURNS SETOF TEXT
LANGUAGE plpgsql
AS $function$
DECLARE
  sql TEXT;
BEGIN
  sql := format('SELECT %I::text FROM %I', colname, tablename);
  IF filter IS NOT NULL THEN
    sql := CONCAT(sql, format(' WHERE %I::text = $1', colname));
  END IF;
  RETURN QUERY EXECUTE sql USING filter;
END;
$function$ STABLE;

ALTER FUNCTION foo_from_bar(TEXT, TEXT, TEXT)
  SUPPORT test_inline_in_from_support_func;

And here is the query I ran:


SELECT * FROM foo_from_bar('f1', 'text_tbl', 'common_row');

Before (PG 18 and earlier)


->  Function Scan on public.foo_from_bar  (cost=0.25..10.25 rows=1000 width=32)
      Output: foo_from_bar
      Function Call: foo_from_bar('f1'::text, 'text_tbl'::text, 'common_row'::text)
      Buffers: shared hit=32 read=4
      Execution Time: 0.353 ms
The function is treated as a black box. The planner estimates 1000 rows (default) and scans the function output.

After (PG 19)


->  Index Only Scan using text_tbl_idx on public.text_tbl  (cost=0.42..8.44 rows=1 width=10)
      Output: text_tbl.f1
      Index Cond: (text_tbl.f1 = 'common_row'::text)
      Buffers: shared hit=1 read=3
      Execution Time: 0.064 ms
The planner "sees through" the function, realizes it's just a query on text_tbl, and uses the index directly. The execution time drops significantly (5.5x faster in this case).

I also ran a pgbench test (1000 iterations) to measure the end-to-end latency improvement:

  • Before: 0.107 ms average latency
  • After: 0.087 ms average latency
  • Improvement: ~19% faster end-to-end (including network overhead)

Bonus: Outer WHERE Pushdown - Orders of magnitude faster!

Next, I ran a test where NULL was passed to the function (returning all 1 million rows) and applied an external WHERE clause:


SELECT * FROM foo_from_bar('f1', 'text_tbl', NULL) WHERE foo_from_bar = 'common_row';

Before (PG 18):


->  Function Scan on public.foo_from_bar  (cost=0.25..12.75 rows=5 width=32)
      Output: foo_from_bar
      Function Call: foo_from_bar('f1'::text, 'text_tbl'::text, NULL::text)
      Filter: (foo_from_bar.foo_from_bar = 'common_row'::text)
      Rows Removed by Filter: 1000000
      Execution Time: 136.267 ms
The database scanned all 1 million rows from the function, then filtered them down to 100.

After (PG 19):


->  Index Only Scan using text_tbl_idx on public.text_tbl  (cost=0.42..8.44 rows=1 width=10)
      Output: text_tbl.f1
      Index Cond: (text_tbl.f1 = 'common_row'::text)
      Execution Time: 0.055 ms
The planner pushed the WHERE clause *inside* the function logic, triggering an index lookup (~2400x speedup).

Similarly, I tested adding a LIMIT 5 clause to the query (see the sketch after the list below) and saw a ~20x speedup!

  • Before: The database scanned the function output until it got 5 rows (~1.091 ms).
  • After: The planner pushed the limit down into the index scan, stopping immediately after finding 5 rows (~0.051 ms).
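The exact statement isn't shown in the plans above, but the LIMIT variant I tested was along these lines (same function and arguments as the pushdown test):


SELECT * FROM foo_from_bar('f1', 'text_tbl', NULL) LIMIT 5;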

Testing Methodology

The benchmarks above were conducted using pgbench with the following parameters:

  • Iterations: 1000 transactions per client.
  • Concurrency: Single client (1 thread).
  • Data Volume: The target table text_tbl contained 1,000,100 rows.
  • Return Size: The query returned 100 rows per execution.
  • Metric: Average latency reported by pgbench (excluding connection overhead).
  • Commits tested: c0bc9af (before) and b140c8d (after).

The C support function (test_inline_in_from_support_func) used in this example is defined in src/test/regress/regress.c, which is part of the regression tests. To learn more about support functions, see the PostgreSQL documentation on Function Optimization Information.

Discussion & Credits

Coincidentally, a LinkedIn thread mentioned that Set-Returning Functions (SRFs) began their journey in Postgres 7.3 - more than two decades ago! With this patch, the planner should be able to see through such functions and develop much better execution plans around them. Thanks to Paul A. Jungwirth (for initiating the idea with a working patch) and Tom Lane (for the in-depth review).

Note: This feature is currently in the master branch for Postgres 19. As with all development features, it is subject to change before the final release.

27 Nov 2025

Settling the COUNT(*) vs COUNT(1) debate in Postgres 19

A recent commit to the PostgreSQL master branch brings a nice quality-of-life optimization for a very common SQL pattern - in my tests it eliminated the ~64% overhead that SELECT COUNT(h), where h is a NOT NULL column, previously carried compared to COUNT(*).

If you've ever wondered whether you should use COUNT(*) or COUNT(1), or if you've been dutifully using COUNT(id) on a non-null column, this change is for you.

Note: This feature is currently on the PostgreSQL master branch (committed in November 2025). As with any commit on master, it is subject to change or even rollback before the final release, although this is relatively rare for committed features. If all goes well, this change will be part of the PostgreSQL 19 major version release.

The Optimization

Historically in PostgreSQL, COUNT(*) has been faster than COUNT(1) or COUNT(column).

  • COUNT(*): The planner knows it just needs to count rows. It doesn't need to check if any specific column is null.
  • COUNT(column): The executor has to fetch the value of the column for each row and check if it is NULL before counting it. This involves "deforming" the tuple (extracting the data), which adds overhead.
  • COUNT(1): Often used as a "faster" alternative by users, but treated similarly to COUNT(column) where the "column" is the constant 1.

The new commit, "Have the planner replace COUNT(ANY) with COUNT(*), when possible", changes this behavior.

The planner can now detect when you are counting a generic expression (COUNT(ANY)) that:

1. Cannot be NULL: For example, a constant like 1 or a column defined as NOT NULL.

2. Has no extra clauses: No DISTINCT or ORDER BY inside the aggregate.

In these cases, the planner will automatically transform the query to use COUNT(*).
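To make the rule concrete, here is a small sketch (the table and column names are illustrative, not taken from the patch):


CREATE TABLE orders (id bigint NOT NULL, note text);

-- Can now be planned as COUNT(*): id is declared NOT NULL and the
-- aggregate has no DISTINCT or ORDER BY.
SELECT count(id) FROM orders;
SELECT count(1)  FROM orders;   -- a constant can never be NULL

-- Not rewritten: note may contain NULLs, and DISTINCT / ORDER BY change
-- what the aggregate has to do.
SELECT count(note) FROM orders;
SELECT count(DISTINCT id) FROM orders;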

Performance Impact

This optimization eliminates the overhead of tuple deforming and null checking for these cases.

I ran a benchmark on a local PostgreSQL instance to verify the impact, comparing COUNT(h) (not null column), COUNT(1) (constant), and COUNT(*) (rows) on a table with 10 million rows.

Before Patch:

  • SELECT count(h) FROM t: ~195 ms
  • SELECT count(1) FROM t: ~126 ms
  • SELECT count(*) FROM t: ~119 ms
  • *Observation:* COUNT(h) is significantly slower (~64% overhead vs count(*)). COUNT(1) is faster than COUNT(h) but still lags behind COUNT(*).

After Patch:

  • SELECT count(h) FROM t: ~117 ms
  • SELECT count(1) FROM t: ~116 ms
  • SELECT count(*) FROM t: ~114 ms
  • *Observation:* COUNT(h) sees the major win, improving dramatically to match COUNT(*). COUNT(1) sees a minor speedup, also converging to the optimal COUNT(*) performance.

This means users who prefer COUNT(1) for stylistic reasons, or who count specific non-null columns, will now get the optimal performance of COUNT(*) automatically.

(The patch author, David Rowley, demonstrated similar speedups in the initial discussion).

What about Nullable Columns?

I also tested COUNT(nullable_col) to see if the optimization applies. As expected, it does not.

  • Before Patch: COUNT(nullable_col): ~162 ms
  • After Patch: COUNT(nullable_col): ~158 ms (No significant change)

Since the planner cannot guarantee the column is free of NULLs, it must still scan and check every value, so the performance remains unchanged. The optimization strictly targets columns where the planner *knows* the value cannot be NULL.
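One practical corollary (my own note, not from the patch discussion): if a column is in fact never NULL, declaring that explicitly gives the planner the guarantee it needs, and the count should then qualify for the rewrite:


-- Once the constraint is declared, count(nullable_col) can be treated
-- like count(*), because the planner knows no NULLs can appear.
ALTER TABLE t ALTER COLUMN nullable_col SET NOT NULL;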

What about DISTINCT or ORDER BY?

I also tested COUNT(DISTINCT h) and COUNT(h ORDER BY h) to see if the optimization applies. As expected, it does not.

  • Before Patch: COUNT(DISTINCT h): ~2663 ms (Median of 10 runs)
  • After Patch: COUNT(DISTINCT h): ~2708 ms (Median of 10 runs)

Similarly for ORDER BY, the executor still needs to process the values to sort them, so no optimization is possible.

Test Methodology

  • Versions:
      • Before: Tested on commit dbdc717ac6743074c3a55fc5c380638c91d24afd.
      • After: Tested on commit 42473b3b31238b15cc3c030b4416b2ee79508d8c.

  • Data: Table t with 10 million rows, populated via INSERT INTO t (h) SELECT 1 FROM generate_series(1, 10000000); (full setup sketched below).
  • Procedure: Each query was executed 10 times in an interleaved pattern (A, B, C, A, B, C...) to ensure fairness and mitigate caching bias. The reported time is the median of those 10 runs.
  • System: Intel Core i7-4770 CPU @ 3.40GHz (4 cores/8 threads), 16GB RAM.
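The post doesn't show the full DDL, but a setup consistent with the methodology above would look roughly like this (assuming h is declared NOT NULL so the rewrite can apply):


CREATE TABLE t (h int NOT NULL);
INSERT INTO t (h) SELECT 1 FROM generate_series(1, 10000000);
VACUUM ANALYZE t;

-- The three benchmarked queries:
SELECT count(h) FROM t;
SELECT count(1) FROM t;
SELECT count(*) FROM t;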

Detailed Analysis

To understand *why* the performance improved, we can look at the EXPLAIN plans and table statistics.

  • Table Size: The table t (10 million rows) is approximately 346 MB.
  • Disk vs. Memory: The benchmark results (median of 10 runs) indicate mostly in-memory / hot-cache performance (~3 GB/s throughput). However, even with some disk I/O involved (as seen in single-run EXPLAIN (ANALYZE, BUFFERS) tests showing ~28k buffer reads), the CPU overhead of deforming the tuple was the dominant differentiator.

The Key Difference:

In the Before scenario, COUNT(h) required the scanner to fetch the column data (width=4), whereas COUNT(*) did not (width=0).


-- Before Patch: COUNT(h)
->  Parallel Seq Scan on t  (... rows=4166657 width=4) ...

In the After scenario, the planner transforms COUNT(h) into COUNT(*), allowing it to skip fetching the column data entirely (width=0).


-- After Patch: COUNT(h)
->  Parallel Seq Scan on t  (... rows=4166657 width=0) ...

This reduction in "width" means the executor doesn't need to extract the integer value from the row, saving significant CPU cycles.
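To see this yourself, compare the width reported for the scan node under both queries; this is a reproduction sketch assuming the table t from the methodology section:


EXPLAIN (ANALYZE, BUFFERS) SELECT count(h) FROM t;
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM t;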

Community Discussion & Trivia

Thanks to David Rowley for working on the patch and to Corey Huinker and Matheus Alcantara for reviewing it. The discussion debated the trade-offs between which form is fastest and which of these is an "anti-pattern", but it is great to see that all three are now equivalently optimized.

Future Possibilities:

The infrastructure added (SupportRequestSimplifyAggref) opens the door for further optimizations. For instance, COUNT(NULL) could theoretically be optimized to instantly return 0 without scanning any rows - although that specific change wasn't included in this commit to keep the scope focused.


23 Nov 2025

PostgreSQL Buildfarm Members: A status update

The PostgreSQL Buildfarm is a global network of machines that continuously test PostgreSQL across a wide range of operating systems, architectures, compilers, and branches. Over the past few years, I have created and maintained several buildfarm members, each with its own quirks and strengths. In this post, I’ll share a status update on the animals I work on: alligator, dodo, woodpecker, leafhopper, massasauga, parula, snakefly, and the newest addition, ovenbird.

What is the Buildfarm?

The Buildfarm is essential for PostgreSQL development. It helps catch platform-specific bugs early, ensures code quality, and provides confidence that new changes work everywhere. Each member reports results for multiple branches (like master, REL_18_STABLE, etc.), using different OSes, compilers, and hardware.

About the Architectures

The Open Hardware Frontier: RISC-V

RISC-V is an open-standard instruction set architecture (ISA). Unlike most other ISAs, it is provided under open-source licenses that do not require fees to use.

  • ovenbird is my first foray into this architecture, running on a VisionFive 2 board.
  • It (hopefully) represents the future of open hardware, and ensuring PostgreSQL compiles and runs correctly on it is a long-term investment in the open-source ecosystem.

Bridging Windows and Linux: WSL2

Windows Subsystem for Linux (WSL) lets developers run a GNU/Linux environment -- including most command-line tools, utilities, and applications -- directly on Windows, unmodified, without the overhead of a traditional virtual machine or dual-boot setup.

  • woodpecker runs inside a Debian container on WSL2.
  • This setup is crucial for verifying that PostgreSQL behaves correctly in this increasingly popular development environment, which bridges the gap between Windows and Linux.

Small Scale, Big Impact: Raspberry Pi

Raspberry Pi revolutionized low-cost computing and is a fantastic platform for edge cases (pun intended) :)

  • dodo runs on a Raspberry Pi 4 Model B.
  • It helps identify performance regressions and race conditions that might be masked by faster hardware. It also ensures PostgreSQL remains viable for low-powered, IoT and edge computing use cases.

The Rise of ARM in the Cloud: Graviton

Several of the buildfarm animals I’ve created run on the Graviton processors. Graviton is Amazon’s custom ARM-based CPU family, designed for high performance and energy efficiency in AWS cloud environments.

  • Graviton1 (first generation) was introduced in 2018, bringing ARM64 to AWS EC2.
  • Graviton2 (second generation) launched in 2020, offering major improvements in performance and scalability.
  • Graviton3 (third generation) arrived in 2022, further boosting compute, memory bandwidth, and energy efficiency—making it ideal for demanding workloads like database regression testing.
  • Graviton4 (fourth generation) is the latest, offering even greater performance and efficiency for cloud-native workloads. The buildfarm animal 'leafhopper' is one of the first to test PostgreSQL on Graviton4.

Testing PostgreSQL on these platforms helps ensure the database runs smoothly on modern cloud hardware and takes advantage of ARM’s growing ecosystem.

Disclosure: The Graviton machines are provided by my employer. All other machines (including the WSL2, RISC-V, and Raspberry Pi instances) are my personal machines.

Meet the Buildfarm Animals

Here’s a quick overview of the machines I have created and recently worked on:

alligator

  • OS: Ubuntu 24.04 LTS
  • Arch: x86_64
  • Compiler: gcc experimental (nightly build)
  • Branches: master, REL_18_STABLE, REL_17_STABLE, REL_16_STABLE, REL_15_STABLE, REL_14_STABLE, REL_13_STABLE
  • Notes: Tracks the latest GCC changes, often finds compiler regressions before anyone else.

dodo

  • OS: Raspbian GNU/Linux 10
  • Arch: armv7l
  • Compiler: gcc experimental (nightly build)
  • Branches: master, REL_18_STABLE, REL_17_STABLE, REL_16_STABLE, REL_15_STABLE, REL_14_STABLE, REL_13_STABLE
  • Notes: ARM platform, useful for catching issues on lower-powered hardware.

woodpecker

  • OS: Debian 12 (bookworm) on WSL2 (Windows 11)
  • Arch: x86_64
  • Compiler: gcc 12.2.0
  • Branches: master, REL_18_STABLE, REL_17_STABLE, REL_16_STABLE, REL_15_STABLE, REL_14_STABLE, REL_13_STABLE
  • Notes: Runs inside WSL2 on Windows 11, great for testing integration with Windows environments.

leafhopper

  • OS: Amazon Linux 2023
  • Arch: aarch64/graviton4/r8g.2xl
  • Compiler: gcc experimental (hourly build)
  • Branches: master, REL_18_STABLE, REL_17_STABLE, REL_16_STABLE, REL_15_STABLE, REL_14_STABLE, REL_13_STABLE
  • Notes: Created and managed in a work-based environment; leafhopper is one of the first buildfarm animals testing PostgreSQL on Graviton4 hardware.

massasauga

  • OS: Amazon Linux 2
  • Arch: aarch64 (Graviton1)
  • Compiler: gcc experimental (nightly build)
  • Branches: master, REL_18_STABLE, REL_17_STABLE, REL_16_STABLE, REL_15_STABLE, REL_14_STABLE, REL_13_STABLE
  • Notes: Created and managed in a work-based environment; Graviton1 machine—one of the earliest ARM64 regression testers in the buildfarm, still running reliably after several years.

parula

  • OS: Amazon Linux 2 (AL2) / Graviton3
  • Arch: aarch64/Graviton3/c7g.2xl
  • Compiler: gcc experimental (nightly build)
  • Branches: master, REL_18_STABLE, REL_17_STABLE, REL_16_STABLE, REL_15_STABLE, REL_14_STABLE, REL_13_STABLE
  • Notes: Created and managed in a work-based environment; focuses on the third generation of AWS Graviton hardware, useful for performance and compatibility.

snakefly

  • OS: Amazon Linux 2 (AL2)
  • Arch: aarch64 (Graviton2)
  • Compiler: gcc experimental (nightly build)
  • Branches: master, REL_18_STABLE, REL_17_STABLE, REL_16_STABLE, REL_15_STABLE, REL_14_STABLE, REL_13_STABLE
  • Notes: Created and managed in a work-based environment; Graviton2-based member, helps ensure ARM64 stability across AWS generations.

ovenbird (newest member)

  • OS: Ubuntu 24.04.3 LTS
  • Arch: riscv64
  • Compiler: gcc 13.3.0
  • Branches: master, REL_18_STABLE, REL_17_STABLE, REL_16_STABLE, REL_15_STABLE, REL_14_STABLE, REL_13_STABLE
  • Notes: The newest addition to the family, ovenbird brings riscv64 architecture to the buildfarm, helping ensure PostgreSQL is tested on cutting-edge open hardware.

Challenges and Rewards

Managing these buildfarm animals means keeping up with OS upgrades, compiler changes, hardware failures, and PostgreSQL branch updates. Some of these machines are especially aggressive about GCC: they check for updates from the GCC git repository every few hours, recompile a fresh GCC, and use it for the next buildfarm run. This helps catch compiler regressions and compatibility issues very early.

If you want to read more about how these GCC compiles work and see the open source repository, check out my blog post: Compiling Latest GCC to Test More.

Some of these machines have been running for 3-4 years, and their logs are a treasure trove for debugging tricky platform-specific issues. The diversity of hardware and software helps the PostgreSQL community maintain its reputation for reliability and portability.

Testing with the latest GCC is especially rewarding: it ensures that upstream GCC changes stay in step with the expectations of the PostgreSQL community, and that PostgreSQL continues to compile and pass tests without surprises. A good example is an upstream GCC bug that was found, reported, and fixed—making sure that no GCC changes adversely affect PostgreSQL in the long run. Read more about this incident here: PostgreSQL mailing list discussion of a GCC bug.

Here's another email thread that exemplifies why testing gcc experimental is helpful in ensuring that PostgreSQL compiles and tests stay green: PostgreSQL mailing list - GCC experimental thread.

However, it is also important to note that aggressive testing of GCC HEAD needs to be balanced against the time of PostgreSQL developers. The current buildfarm system does not explicitly distinguish between "production" and "bleeding edge" machines, meaning failures on experimental setups can sometimes be distracting. As discussed in this mailing list thread, there is an ongoing conversation about how to best handle these "platform not believed stable" scenarios to ensure that transient failures on experimental toolchains don't unnecessarily burden the community.

Speaking of new architectures, a few months back I wrote about [Testing PostgreSQL on Debian/Hurd](https://www.thatguyfromdelhi.com/2025/08/testing-postgresql-on-debianhurd.html) and planned to add a Hurd machine to the buildfarm. It looks like I've been beaten to the punch! A new member, [fruitcrow](https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=fruitcrow&br=master), is already up and running to test PostgreSQL on GNU/Hurd. This is fantastic news—having "competition" in adding diverse buildfarm members is exactly what we want. It shows that more people recognize that a wide array of test environments leads to a more stable PostgreSQL.

Final Thoughts

If you’re interested in contributing to PostgreSQL, running a buildfarm animal is a great way to help. It’s a hands-on way to learn about PostgreSQL internals, compilers, and operating systems, and it’s rewarding to see your machine’s name in the global test results.
