<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Citus Data Blog</title>
  <author>
    <name>Citus Data</name>
  </author>
  <subtitle>Scaling data and analytics with Postgres</subtitle>
  <id>https://www.citusdata.com/blog/</id>
  <link href="https://www.citusdata.com/blog/"/>
  <link href="https://www.citusdata.com/blog/feed.xml" rel="self"/>
  <updated>2026-02-17T17:00:00+00:00</updated>
  <entry>
    <title>Distribute PostgreSQL 18 with Citus 14</title>
    <link rel="alternate" href="https://www.citusdata.com/blog/2026/02/17/distribute-postgresql-18-with-citus-14/"/>
    <id>https://www.citusdata.com/blog/2026/02/17/distribute-postgresql-18-with-citus-14/</id>
    <published>2026-02-17T17:00:00+00:00</published>
    <updated>2026-02-17T17:00:00+00:00</updated>
    <author>
      <name>Mehmet Yilmaz</name>
    </author>
    <content type="html">&lt;p&gt;The Citus 14.0 release is out and includes PostgreSQL 18 support! We know you&amp;#39;ve been waiting, and we&amp;#39;ve been hard at work adding features we believe will take your experience to the next level, focusing on bringing the &lt;a href="https://www.postgresql.org/docs/18/release-18.html"&gt;Postgres 18 exciting improvements&lt;/a&gt; to you at distributed scale.&lt;/p&gt;

&lt;p&gt;The Citus database is an &lt;a href="https://github.com/citusdata/citus"&gt;open-source extension of Postgres&lt;/a&gt; that brings the power of Postgres to any scale, from a single node to a distributed database cluster. Since Citus is an extension, using Citus means you&amp;#39;re also using Postgres, giving you direct access to all Postgres features. The latest of these arrived with the Postgres 18 release!&lt;/p&gt;

&lt;p&gt;PostgreSQL 18 is a substantial release: asynchronous I/O (AIO), skip-scan for multicolumn B-tree indexes, &lt;code&gt;uuidv7()&lt;/code&gt;, virtual generated columns by default, OAuth authentication, &lt;code&gt;RETURNING OLD/NEW&lt;/code&gt;, and temporal constraints. If you&amp;#39;re interested in upgrading to Postgres 18 and scaling these new features, Citus 14.0 is ready for you!&lt;/p&gt;

&lt;p&gt;Let&amp;#39;s take a closer look at what&amp;#39;s new in Citus 14.0: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="#pg18-support"&gt;Postgres 18 support in Citus 14.0&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#pg18-highlights"&gt;PostgreSQL 18 highlights that benefit Citus clusters&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#citus14-compatibility"&gt;What Citus 14 adds for PostgreSQL 18 compatibility&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="pg18-support"&gt;Postgres 18 support in Citus 14.0&lt;/h2&gt;

&lt;p&gt;Citus 14.0 introduces support for PostgreSQL 18. This means that simply by running Citus 14.0 on PG18, Postgres&amp;#39;s query performance improvements apply directly to Citus distributed queries, and several optimizer improvements benefit Citus out of the box! Among the many new features in PG 18, the following capabilities enabled in Citus 14.0 are especially noteworthy for Citus users:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="#aio"&gt;Faster scans and maintenance via AIO&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#skip-scan"&gt;Better index usage with skip-scan&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#uuidv7"&gt;&lt;code&gt;uuidv7()&lt;/code&gt; for time-ordered UUIDs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#oauth"&gt;OAuth authentication support&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#json-table"&gt;JSON_TABLE() COLUMNS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#temporal-constraints"&gt;Temporal constraints&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#foreign-table-like"&gt;CREATE FOREIGN TABLE ... LIKE&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#generated-columns"&gt;Generated columns (Virtual by Default)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#vacuum-analyze-only"&gt;VACUUM/ANALYZE ONLY semantics&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#not-enforced"&gt;Constraints: NOT ENFORCED + partitioned-table additions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#returning-old-new"&gt;DML: RETURNING OLD/NEW&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#copy-expansions"&gt;COPY expansions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#min-max-aggregates"&gt;MIN()/MAX() on arrays and composite types&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#nondeterministic-collations"&gt;Nondeterministic collations&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#sslkeylogfile"&gt;sslkeylogfile connection parameter&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#self-join-elimination"&gt;Planner fix: enable_self_join_elimination&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#utility-ops"&gt;Utility/Ops plumbing and observability&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To learn more about how you can use Citus 14.0 + PostgreSQL 18, as well as currently unsupported features and future work, you can consult the &lt;a href="/updates/v14-0/"&gt;Citus 14.0 Updates page&lt;/a&gt;, which gives you detailed release notes.&lt;/p&gt;

&lt;h2 id="pg18-highlights"&gt;PostgreSQL 18 highlights that benefit Citus clusters&lt;/h2&gt;

&lt;p&gt;Because Citus is implemented as a Postgres extension, the following PG18 improvements benefit your distributed cluster &lt;strong&gt;automatically&lt;/strong&gt;—no Citus-specific changes needed.&lt;/p&gt;

&lt;h3 id="aio"&gt;Faster scans and maintenance via AIO&lt;/h3&gt;

&lt;p&gt;Postgres 18 adds an &lt;strong&gt;asynchronous I/O subsystem&lt;/strong&gt; that can improve sequential scans, bitmap heap scans, and vacuuming—workloads that show up constantly in shard-heavy distributed clusters. This means your Citus cluster can benefit from faster table scans and more efficient maintenance operations without any code changes.&lt;/p&gt;

&lt;p&gt;You can control the I/O method via the new &lt;code&gt;io_method&lt;/code&gt; GUC:&lt;/p&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- Check the current I/O method&lt;/span&gt;
&lt;span class="k"&gt;SHOW&lt;/span&gt; &lt;span class="n"&gt;io_method&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="-- Check the current I/O method
SHOW io_method;
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;

&lt;h3 id="skip-scan"&gt;Better index usage with skip-scan&lt;/h3&gt;

&lt;p&gt;Postgres 18 expands when &lt;strong&gt;multicolumn B-tree indexes&lt;/strong&gt; can be used via &lt;strong&gt;skip scan&lt;/strong&gt;, helping common multi-tenant schemas where predicates don&amp;#39;t always constrain the leading index column. This is particularly valuable for Citus users with multi-tenant applications where queries often filter by non-leading columns.&lt;/p&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- Multi-tenant index: (tenant_id, created_at)&lt;/span&gt;
&lt;span class="c1"&gt;-- PG18 skip-scan lets this query use the index even without tenant_id&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;events&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;created_at&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;now&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;interval&lt;/span&gt; &lt;span class="s1"&gt;'1 day'&lt;/span&gt;
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;created_at&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt;
&lt;span class="k"&gt;LIMIT&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="-- Multi-tenant index: (tenant_id, created_at)
-- PG18 skip-scan lets this query use the index even without tenant_id
SELECT * FROM events
WHERE created_at &amp;gt; now() - interval &amp;#39;1 day&amp;#39;
ORDER BY created_at DESC
LIMIT 100;
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;

&lt;h3 id="uuidv7"&gt;uuidv7() for time-ordered UUIDs&lt;/h3&gt;

&lt;p&gt;Time-ordered UUIDs can reduce index churn and improve locality; Postgres 18 adds &lt;code&gt;uuidv7()&lt;/code&gt;. This is especially useful for distributed tables where you want predictable ordering and better index performance across shards.&lt;/p&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- Use uuidv7() as a time-ordered primary key&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;events&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="n"&gt;uuid&lt;/span&gt; &lt;span class="k"&gt;DEFAULT&lt;/span&gt; &lt;span class="n"&gt;uuidv7&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;PRIMARY&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;tenant_id&lt;/span&gt; &lt;span class="nb"&gt;bigint&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;payload&lt;/span&gt; &lt;span class="n"&gt;jsonb&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;create_distributed_table&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'events'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'tenant_id'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="-- Use uuidv7() as a time-ordered primary key
CREATE TABLE events (
  id uuid DEFAULT uuidv7() PRIMARY KEY,
  tenant_id bigint,
  payload jsonb
);

SELECT create_distributed_table(&amp;#39;events&amp;#39;, &amp;#39;tenant_id&amp;#39;);
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;

&lt;h3 id="oauth"&gt;OAuth authentication support&lt;/h3&gt;

&lt;p&gt;Postgres 18 adds &lt;strong&gt;OAuth authentication&lt;/strong&gt;, making it easier to plug database auth into modern SSO flows—often a practical requirement in multi-node deployments. This simplifies authentication management across your Citus coordinator and worker nodes.&lt;/p&gt;
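
&lt;p&gt;As a minimal sketch (the validator library name and issuer URL below are placeholders for your own environment), enabling OAuth involves loading a token validator module in &lt;code&gt;postgresql.conf&lt;/code&gt; and adding an &lt;code&gt;oauth&lt;/code&gt; entry to &lt;code&gt;pg_hba.conf&lt;/code&gt;:&lt;/p&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight"&gt;&lt;code&gt;# postgresql.conf: load an OAuth token validator module
oauth_validator_libraries = 'my_oauth_validator'

# pg_hba.conf: authenticate clients with OAuth bearer tokens
host all all 0.0.0.0/0 oauth issuer="https://issuer.example.com" scope="openid"
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="# postgresql.conf: load an OAuth token validator module
oauth_validator_libraries = &amp;#39;my_oauth_validator&amp;#39;

# pg_hba.conf: authenticate clients with OAuth bearer tokens
host all all 0.0.0.0/0 oauth issuer=&amp;quot;https://issuer.example.com&amp;quot; scope=&amp;quot;openid&amp;quot;
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;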

&lt;h2 id="citus14-compatibility"&gt;What Citus 14 adds for PostgreSQL 18 compatibility&lt;/h2&gt;

&lt;p&gt;While the highlights above work out of the box, PG18 also introduces &lt;strong&gt;new SQL syntax and behavior changes&lt;/strong&gt; that require Citus-specific work—parsing/deparsing, DDL propagation across coordinator + workers, and distributed execution correctness. Here&amp;#39;s what we built to make these work end-to-end.&lt;/p&gt;

&lt;h3 id="json-table"&gt;JSON_TABLE() COLUMNS&lt;/h3&gt;

&lt;p&gt;PG18 expands SQL/JSON &lt;code&gt;JSON_TABLE()&lt;/code&gt; with a richer &lt;code&gt;COLUMNS&lt;/code&gt; clause, making it easy to extract multiple fields from JSON documents in a single, typed table expression. Citus 14 ensures the syntax can be parsed/deparsed and executed consistently in distributed queries.&lt;/p&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;pg18_json_test&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="nb"&gt;serial&lt;/span&gt; &lt;span class="k"&gt;PRIMARY&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;data&lt;/span&gt; &lt;span class="n"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;jt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;jt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;age&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;pg18_json_test&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="n"&gt;JSON_TABLE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
       &lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="s1"&gt;'$.user'&lt;/span&gt;
       &lt;span class="n"&gt;COLUMNS&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
         &lt;span class="n"&gt;age&lt;/span&gt;  &lt;span class="nb"&gt;INT&lt;/span&gt;  &lt;span class="n"&gt;PATH&lt;/span&gt; &lt;span class="s1"&gt;'$.age'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
         &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="nb"&gt;TEXT&lt;/span&gt; &lt;span class="n"&gt;PATH&lt;/span&gt; &lt;span class="s1"&gt;'$.name'&lt;/span&gt;
       &lt;span class="p"&gt;)&lt;/span&gt;
     &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;jt&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;jt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;age&lt;/span&gt; &lt;span class="k"&gt;BETWEEN&lt;/span&gt; &lt;span class="mi"&gt;25&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="mi"&gt;35&lt;/span&gt;
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;jt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;age&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;jt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="CREATE TABLE pg18_json_test (id serial PRIMARY KEY, data JSON);

SELECT jt.name, jt.age
FROM pg18_json_test,
     JSON_TABLE(
       data,
       &amp;#39;$.user&amp;#39;
       COLUMNS (
         age  INT  PATH &amp;#39;$.age&amp;#39;,
         name TEXT PATH &amp;#39;$.name&amp;#39;
       )
     ) AS jt
WHERE jt.age BETWEEN 25 AND 35
ORDER BY jt.age, jt.name;
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;

&lt;h3 id="temporal-constraints"&gt;Temporal constraints&lt;/h3&gt;

&lt;p&gt;Postgres 18 adds temporal constraint syntax that Citus must propagate and preserve correctly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;WITHOUT OVERLAPS&lt;/code&gt; for &lt;code&gt;PRIMARY KEY&lt;/code&gt; / &lt;code&gt;UNIQUE&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;PERIOD&lt;/code&gt; for &lt;code&gt;FOREIGN KEY&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;temporal_rng&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="n"&gt;int4range&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;valid_at&lt;/span&gt; &lt;span class="n"&gt;daterange&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="k"&gt;CONSTRAINT&lt;/span&gt; &lt;span class="n"&gt;temporal_rng_pk&lt;/span&gt; &lt;span class="k"&gt;PRIMARY&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;valid_at&lt;/span&gt; &lt;span class="k"&gt;WITHOUT&lt;/span&gt; &lt;span class="k"&gt;OVERLAPS&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;create_distributed_table&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'temporal_rng'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'id'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="CREATE TABLE temporal_rng (
  id int4range,
  valid_at daterange,
  CONSTRAINT temporal_rng_pk PRIMARY KEY (id, valid_at WITHOUT OVERLAPS)
);

SELECT create_distributed_table(&amp;#39;temporal_rng&amp;#39;, &amp;#39;id&amp;#39;);
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;

&lt;h3 id="foreign-table-like"&gt;CREATE FOREIGN TABLE ... LIKE&lt;/h3&gt;

&lt;p&gt;Postgres 18 supports &lt;code&gt;CREATE FOREIGN TABLE ... LIKE&lt;/code&gt;, letting you define a foreign table by copying the column layout (and optionally defaults/constraints/indexes) from an existing table. Citus 14 includes coverage so FDW workflows remain compatible in distributed environments.&lt;/p&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- Copy column layout from an existing table&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;FOREIGN&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;my_ft&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;LIKE&lt;/span&gt; &lt;span class="n"&gt;my_local_table&lt;/span&gt; &lt;span class="k"&gt;EXCLUDING&lt;/span&gt; &lt;span class="k"&gt;ALL&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;SERVER&lt;/span&gt; &lt;span class="n"&gt;foreign_server&lt;/span&gt;
  &lt;span class="k"&gt;OPTIONS&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;schema_name&lt;/span&gt; &lt;span class="s1"&gt;'public'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;table_name&lt;/span&gt; &lt;span class="s1"&gt;'my_local_table'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="-- Copy column layout from an existing table
CREATE FOREIGN TABLE my_ft (LIKE my_local_table EXCLUDING ALL)
  SERVER foreign_server
  OPTIONS (schema_name &amp;#39;public&amp;#39;, table_name &amp;#39;my_local_table&amp;#39;);
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;

&lt;h3 id="generated-columns"&gt;Generated columns (Virtual by Default)&lt;/h3&gt;

&lt;p&gt;PostgreSQL 18 changes generated column behavior significantly:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Virtual by default:&lt;/strong&gt; Generated columns are now computed on read rather than stored, reducing write amplification&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Logical replication support:&lt;/strong&gt; New &lt;code&gt;publish_generated_columns&lt;/code&gt; publication option for replicating generated values&lt;/li&gt;
&lt;/ol&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;events&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="nb"&gt;bigint&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;payload&lt;/span&gt; &lt;span class="n"&gt;jsonb&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;payload_hash&lt;/span&gt; &lt;span class="nb"&gt;text&lt;/span&gt; &lt;span class="k"&gt;GENERATED&lt;/span&gt; &lt;span class="n"&gt;ALWAYS&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;md5&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nb"&gt;text&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="n"&gt;VIRTUAL&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;create_distributed_table&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'events'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'id'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="CREATE TABLE events (
  id bigint,
  payload jsonb,
  payload_hash text GENERATED ALWAYS AS (md5(payload::text)) VIRTUAL
);

SELECT create_distributed_table(&amp;#39;events&amp;#39;, &amp;#39;id&amp;#39;);
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;

&lt;h3 id="vacuum-analyze-only"&gt;VACUUM/ANALYZE ONLY semantics&lt;/h3&gt;

&lt;p&gt;Postgres 18 introduces &lt;code&gt;ONLY&lt;/code&gt; for &lt;code&gt;VACUUM&lt;/code&gt; and &lt;code&gt;ANALYZE&lt;/code&gt; so you can explicitly &lt;strong&gt;target only the parent&lt;/strong&gt; of a partitioned/inheritance tree without automatically processing children. Citus 14 adapts distributed utility-command behavior so &lt;code&gt;ONLY&lt;/code&gt; works as intended.&lt;/p&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- Parent-only: do not recurse into partitions/children&lt;/span&gt;
&lt;span class="k"&gt;VACUUM&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;ANALYZE&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;ONLY&lt;/span&gt; &lt;span class="n"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;ANALYZE&lt;/span&gt; &lt;span class="k"&gt;ONLY&lt;/span&gt; &lt;span class="n"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="-- Parent-only: do not recurse into partitions/children
VACUUM (ANALYZE) ONLY metrics;
ANALYZE ONLY metrics;
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;

&lt;h3 id="not-enforced"&gt;Constraints: NOT ENFORCED + partitioned-table additions&lt;/h3&gt;

&lt;p&gt;Postgres 18 expands constraint syntax in several ways that Citus must parse/deparse and propagate across coordinator + workers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;CHECK&lt;/code&gt; constraints can be marked &lt;code&gt;NOT ENFORCED&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;FOREIGN KEY&lt;/code&gt; constraints can be marked &lt;code&gt;NOT ENFORCED&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;NOT VALID&lt;/code&gt; foreign keys on partitioned tables&lt;/li&gt;
&lt;li&gt;&lt;code&gt;DROP CONSTRAINT ONLY&lt;/code&gt; on partitioned tables&lt;/li&gt;
&lt;/ul&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;orders&lt;/span&gt;
  &lt;span class="k"&gt;ADD&lt;/span&gt; &lt;span class="k"&gt;CONSTRAINT&lt;/span&gt; &lt;span class="n"&gt;orders_amount_positive&lt;/span&gt; &lt;span class="k"&gt;CHECK&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;amount&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="n"&gt;ENFORCED&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;orders&lt;/span&gt;
  &lt;span class="k"&gt;ADD&lt;/span&gt; &lt;span class="k"&gt;CONSTRAINT&lt;/span&gt; &lt;span class="n"&gt;orders_customer_fk&lt;/span&gt;
  &lt;span class="k"&gt;FOREIGN&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;customer_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;REFERENCES&lt;/span&gt; &lt;span class="n"&gt;customers&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="n"&gt;ENFORCED&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="ALTER TABLE orders
  ADD CONSTRAINT orders_amount_positive CHECK (amount &amp;gt; 0) NOT ENFORCED;

ALTER TABLE orders
  ADD CONSTRAINT orders_customer_fk
  FOREIGN KEY (customer_id) REFERENCES customers(id)
  NOT ENFORCED;
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;

&lt;h3 id="returning-old-new"&gt;DML: RETURNING OLD/NEW&lt;/h3&gt;

&lt;p&gt;Postgres 18 lets &lt;code&gt;RETURNING&lt;/code&gt; reference both the &lt;strong&gt;previous&lt;/strong&gt; (&lt;code&gt;old&lt;/code&gt;) and &lt;strong&gt;new&lt;/strong&gt; (&lt;code&gt;new&lt;/code&gt;) row values in &lt;code&gt;INSERT/UPDATE/DELETE/MERGE&lt;/code&gt;. Citus 14 preserves these semantics in distributed execution.&lt;/p&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt;
&lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;v&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;v&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;42&lt;/span&gt;
&lt;span class="n"&gt;RETURNING&lt;/span&gt; &lt;span class="k"&gt;old&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;v&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;old_v&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;v&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;new_v&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="UPDATE t
SET v = v + 1
WHERE id = 42
RETURNING old.v AS old_v, new.v AS new_v;
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;

&lt;h3 id="copy-expansions"&gt;COPY expansions&lt;/h3&gt;

&lt;p&gt;PG18 adds two useful COPY improvements that Citus 14 supports in distributed queries:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;COPY ... REJECT_LIMIT&lt;/code&gt;&lt;/strong&gt;: set a threshold for how many rows can be rejected before the COPY fails, useful for resilient bulk loading into sharded tables&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;COPY table TO&lt;/code&gt; from materialized views&lt;/strong&gt;: export data directly from materialized views&lt;/li&gt;
&lt;/ul&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- Tolerate up to 10 bad rows during bulk load&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt; &lt;span class="n"&gt;my_distributed_table&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="s1"&gt;'/data/import.csv'&lt;/span&gt;
  &lt;span class="k"&gt;WITH&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;FORMAT&lt;/span&gt; &lt;span class="n"&gt;csv&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;REJECT_LIMIT&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="-- Tolerate up to 10 bad rows during bulk load
COPY my_distributed_table FROM &amp;#39;/data/import.csv&amp;#39;
  WITH (FORMAT csv, REJECT_LIMIT 10);
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;
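
&lt;p&gt;The second improvement is equally direct; as a sketch (&lt;code&gt;my_matview&lt;/code&gt; is a placeholder for one of your materialized views):&lt;/p&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight sql"&gt;&lt;code&gt;-- PG18: COPY ... TO works directly on materialized views
COPY my_matview TO STDOUT WITH (FORMAT csv);
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="-- PG18: COPY ... TO works directly on materialized views
COPY my_matview TO STDOUT WITH (FORMAT csv);
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;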

&lt;h3 id="min-max-aggregates"&gt;MIN()/MAX() on arrays and composite types&lt;/h3&gt;

&lt;p&gt;PG18 extends &lt;code&gt;MIN()&lt;/code&gt; and &lt;code&gt;MAX()&lt;/code&gt; aggregates to work on arrays and composite types. Citus 14 ensures these aggregates work correctly in distributed queries.&lt;/p&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;sensor_data&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="n"&gt;tenant_id&lt;/span&gt; &lt;span class="nb"&gt;bigint&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;readings&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;create_distributed_table&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'sensor_data'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'tenant_id'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- Now works with array columns&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="k"&gt;MIN&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;readings&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="k"&gt;MAX&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;readings&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;sensor_data&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="CREATE TABLE sensor_data (
  tenant_id bigint,
  readings int[]
);
SELECT create_distributed_table(&amp;#39;sensor_data&amp;#39;, &amp;#39;tenant_id&amp;#39;);

-- Now works with array columns
SELECT MIN(readings), MAX(readings) FROM sensor_data;
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;

&lt;h3 id="nondeterministic-collations"&gt;Nondeterministic collations&lt;/h3&gt;

&lt;p&gt;PG18 extends &lt;code&gt;LIKE&lt;/code&gt; and text-position search functions to work with nondeterministic collations. Citus 14 verifies these work correctly across distributed queries.&lt;/p&gt;
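
&lt;p&gt;For example, a case-insensitive ICU collation can now be used with &lt;code&gt;LIKE&lt;/code&gt; (a sketch; the table and collation names are illustrative):&lt;/p&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight sql"&gt;&lt;code&gt;-- Nondeterministic, case-insensitive collation
CREATE COLLATION case_insensitive (
  provider = icu, locale = 'und-u-ks-level2', deterministic = false
);

CREATE TABLE users (
  tenant_id bigint,
  email text COLLATE case_insensitive
);
SELECT create_distributed_table('users', 'tenant_id');

-- PG18: LIKE now works under nondeterministic collations
SELECT * FROM users WHERE email LIKE 'Alice@%';
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="-- Nondeterministic, case-insensitive collation
CREATE COLLATION case_insensitive (
  provider = icu, locale = &amp;#39;und-u-ks-level2&amp;#39;, deterministic = false
);

CREATE TABLE users (
  tenant_id bigint,
  email text COLLATE case_insensitive
);
SELECT create_distributed_table(&amp;#39;users&amp;#39;, &amp;#39;tenant_id&amp;#39;);

-- PG18: LIKE now works under nondeterministic collations
SELECT * FROM users WHERE email LIKE &amp;#39;Alice@%&amp;#39;;
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;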

&lt;h3 id="sslkeylogfile"&gt;sslkeylogfile connection parameter&lt;/h3&gt;

&lt;p&gt;PG18 adds the &lt;code&gt;sslkeylogfile&lt;/code&gt; libpq connection parameter for dumping SSL key material, useful for debugging encrypted connections. Citus 14 allows configuring this via &lt;code&gt;citus.node_conninfo&lt;/code&gt; so it works across inter-node connections.&lt;/p&gt;
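
&lt;p&gt;For example (a sketch; the log path is a placeholder, and key logging should only be enabled while debugging):&lt;/p&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight sql"&gt;&lt;code&gt;-- Apply sslkeylogfile to all inter-node connections
ALTER SYSTEM SET citus.node_conninfo = 'sslmode=require sslkeylogfile=/tmp/tls-keys.log';
SELECT pg_reload_conf();
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="-- Apply sslkeylogfile to all inter-node connections
ALTER SYSTEM SET citus.node_conninfo = &amp;#39;sslmode=require sslkeylogfile=/tmp/tls-keys.log&amp;#39;;
SELECT pg_reload_conf();
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;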

&lt;h3 id="self-join-elimination"&gt;Planner fix: enable_self_join_elimination&lt;/h3&gt;

&lt;p&gt;PG18 introduces the &lt;code&gt;enable_self_join_elimination&lt;/code&gt; planner optimization. Citus 14 ensures it works correctly for joins between distributed and local tables, preventing the incorrect results that surfaced during early PG18 integration.&lt;/p&gt;
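
&lt;p&gt;You can check or toggle the optimization via its GUC; as a sketch (&lt;code&gt;my_table&lt;/code&gt; is a placeholder and needs a unique key on &lt;code&gt;id&lt;/code&gt; for the self-join to be eliminated):&lt;/p&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight sql"&gt;&lt;code&gt;-- On by default in PG18
SHOW enable_self_join_elimination;

-- With a unique key on id, the planner can collapse this self-join
SELECT t1.id
FROM my_table t1
JOIN my_table t2 USING (id);
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="-- On by default in PG18
SHOW enable_self_join_elimination;

-- With a unique key on id, the planner can collapse this self-join
SELECT t1.id
FROM my_table t1
JOIN my_table t2 USING (id);
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;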

&lt;h3 id="utility-ops"&gt;Utility/Ops plumbing and observability&lt;/h3&gt;

&lt;p&gt;Citus 14 adapts to PG18 interface/output changes that affect tooling and extension plumbing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;New GUC &lt;code&gt;file_copy_method&lt;/code&gt; for &lt;code&gt;CREATE DATABASE ... STRATEGY=FILE_COPY&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;EXPLAIN (WAL)&lt;/code&gt; adds a &amp;quot;WAL buffers full&amp;quot; field; Citus propagates it through distributed EXPLAIN output&lt;/li&gt;
&lt;li&gt;New extension macro &lt;code&gt;PG_MODULE_MAGIC_EXT&lt;/code&gt; so extensions can report name/version metadata&lt;/li&gt;
&lt;li&gt;New libpq parameter &lt;code&gt;sslkeylogfile&lt;/code&gt; support via &lt;code&gt;citus.node_conninfo&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
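
&lt;p&gt;For instance, the new WAL counter appears in distributed &lt;code&gt;EXPLAIN&lt;/code&gt; output; as a sketch (&lt;code&gt;my_distributed_table&lt;/code&gt; is a placeholder):&lt;/p&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight sql"&gt;&lt;code&gt;-- PG18 adds a WAL buffers full field to EXPLAIN (WAL);
-- Citus propagates it through distributed EXPLAIN output
EXPLAIN (ANALYZE, WAL) UPDATE my_distributed_table SET v = v + 1;
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="-- PG18 adds a WAL buffers full field to EXPLAIN (WAL);
-- Citus propagates it through distributed EXPLAIN output
EXPLAIN (ANALYZE, WAL) UPDATE my_distributed_table SET v = v + 1;
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;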

&lt;h2&gt;Diving deeper into Citus 14.0 and distributed Postgres&lt;/h2&gt;

&lt;p&gt;To learn more about Citus 14.0, you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check out the &lt;a href="/updates/v14-0"&gt;14.0 Updates page&lt;/a&gt; to get the detailed release notes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can also stay connected on the &lt;a href="https://slack.citusdata.com/"&gt;Citus Slack&lt;/a&gt; and visit the &lt;a href="https://github.com/citusdata/citus"&gt;Citus open source GitHub repo&lt;/a&gt; to see recent developments as well. If there&amp;#39;s something you&amp;#39;d like to see next in Citus, feel free to also open a feature request issue :)&lt;/p&gt;
&lt;p&gt;&lt;em&gt;This article was originally published on &lt;a href='https://www.citusdata.com/blog/2026/02/17/distribute-postgresql-18-with-citus-14/'&gt;citusdata.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>Bits of wisdom from a year of Talking Postgres</title>
    <link rel="alternate" href="https://www.citusdata.com/blog/2025/07/22/bits-of-wisdom-from-a-year-of-talking-postgres/"/>
    <id>https://www.citusdata.com/blog/2025/07/22/bits-of-wisdom-from-a-year-of-talking-postgres/</id>
    <published>2025-07-22T16:21:00+00:00</published>
    <updated>2025-07-22T16:21:00+00:00</updated>
    <author>Claire Giordano</author>
    <content type="html">&lt;p&gt;A year ago we &lt;a href="https://techcommunity.microsoft.com/blog/adforpostgresql/say-hello-to-the-talking-postgres-podcast/4186111"&gt;renamed&lt;/a&gt; the &lt;a href="https://talkingpostgres.com"&gt;Talking Postgres podcast&lt;/a&gt;&amp;mdash;and just published our 29th episode. Since it’s a monthly thing, that means 13 new conversations in the past year. So this feels like a good moment to pause, reflect, and share a few highlights. &lt;/p&gt;

&lt;p&gt;If you’re already a listener, this post might help you find an episode you missed. If you’re new to the podcast, think of this as a sampler: a few favorite quotes, some backstory, and quick-hit summaries of each episode from the past year.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is the Talking Postgres podcast?&lt;/strong&gt; It&amp;#39;s a long-form conversation&amp;mdash;usually an hour, often more&amp;mdash;with people in the Postgres world. We talk about what they do with Postgres, why they do it, and how they got started. (I’m a bit obsessed with origin stories.)  &lt;/p&gt;

&lt;p&gt;The show isn’t about features or how-to’s. It’s not about speeds and feeds. And it’s definitely not trying to sell you anything. It’s about the people behind Postgres—and what we can learn from each other. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In this post you’ll get highlights from the past 13 episodes&lt;/strong&gt; of &lt;a href="https://talkingpostgres.com/"&gt;Talking Postgres&lt;/a&gt;, shared in reverse chronological order. These summaries can’t do the full episodes justice, but maybe they’ll inspire you to listen in. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;And if you’re not a podcast person, maybe it&amp;#39;s time to give it a try?&lt;/strong&gt; Podcast listening in the US has grown from 170 million hours a week in 2015 to over 770 million hours per week in 2025. (Source: &lt;a href="https://podnews.net/article/time-spent-podcasting-weekly-us"&gt;Share of Ear survey from Edison Research, via Podnews.net&lt;/a&gt;).&lt;/p&gt;

&lt;figure&gt;
&lt;picture&gt;
&lt;source srcset="https://cdn.citusdata.com/images/blog/talkingpostgres-grid-speaker-img-1200x675.webp" type="image/webp"&gt;
&lt;img src="https://cdn.citusdata.com/images/blog/talkingpostgres-grid-speaker-img-1200x675.jpg" alt="Talking Postgres speakers grid" loading="lazy" width="850" height="478" /&gt;
&lt;/picture&gt;
&lt;figcaption&gt;&lt;strong&gt;Figure 1:&lt;/strong&gt; As of July 2025 we’ve had 41 amazing guests on the monthly Talking Postgres podcast (originally named Path To Citus Con, prior to the rename in 2024). The photos above are not in any particular order! Left-to-right, starting in the top row: Simon Willison, Floor Drees, Andres Freund, Melanie Plageman, Derk van Veen, Claire Giordano (that&amp;rsquo;s me), Paul Ramsey, Samay Sharma, Affan Dar, Regina Obe, Álvaro Herrera, Chelsea Dole, Heikki Linnakangas, Marco Slot, Lukas Fittl, Rob Treat, Jelte Fennema-Nio, Pino de Candia, Dimitri Fontaine, Grant Fritchey, Thomas Munro, Abdullah Ustuner, Vik Fearing, Teresa Giacomini, Tom Lane, Arda Aytekin, Ryan Booz, Chris Ellis, Daniel Gustafsson, Andrew Atkinson, David Rowley, Burak Yucesoy, Boriss Mejías, Michael Christofides, Aaron Wislang, Robert Haas, Dawn Wages, Bruce Momjian, Peter Farkas, Peter Cooper, and Shireesh Thota.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;h2&gt;A listener’s take: Talking Postgres as “oral history”&lt;/h2&gt;

&lt;p&gt;Last October, I got a text from &lt;strong&gt;Thomas Munro&lt;/strong&gt;&amp;mdash;one of the Postgres committers, and someone I deeply respect. He said:&lt;/p&gt;

&lt;div class="normal-quote" aria-hidden="true"&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;“btw, random comment: i think talking postgres is very cool, going very well, and i love what you are doing”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Naturally, I wanted to know more. So I asked. Thomas&amp;#39;s reply:&lt;/p&gt;

&lt;div class="normal-quote" aria-hidden="true"&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;“it&amp;#39;s just a great podcast, you have a relaxed way to get people talking, and people are at ease with you. it’s a perfect format, and it feels quite natural. i don’t know, i just like it. the name is also working well. i was happy to see robert [haas] mentioning it the other day. it&amp;#39;s a really nice part of the ecosystem and brings a positive human dimension to it. it&amp;#39;s also a kind of documentary, a kind of oral history for the postgresql community.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As Bruce Momjian likes to say, it costs nothing to show gratitude. The note from Thomas made my day that day&amp;mdash;and I’m sharing it here in case it nudges someone else to give the podcast a listen.&lt;/p&gt;

&lt;h2&gt;From dreaming of driving a bus to leading database engineering at Microsoft&lt;/h2&gt;

&lt;p&gt;&lt;figure class="float-right thumbnail"&gt;&lt;picture&gt;&lt;source srcset="https://cdn.citusdata.com/images/blog/talking-postgres-podcast-ep29-shireesh-thota-thumbnail-1000x1000.webp" type="image/webp"&gt;&lt;img src="https://cdn.citusdata.com/images/blog/talking-postgres-podcast-ep29-shireesh-thota-thumbnail-1000x1000.jpg" alt="Thumbnail for Ep29 with Shireesh Thota" loading="lazy" width="175" height="175" /&gt;&lt;/picture&gt;&lt;figcaption&gt;&lt;a href="https://talkingpostgres.com/episodes/how-i-got-started-leading-database-teams-with-shireesh-thota"&gt;Listen to Episode&amp;nbsp;29&lt;/a&gt;&lt;/figcaption&gt;&lt;/figure&gt; &lt;strong&gt;Shireesh Thota&lt;/strong&gt; joined me to unpack his journey from early BASIC programming to leading all of database engineering at Microsoft Azure. (Fun fact: his childhood dream was to be a bus driver.)&lt;/p&gt;

&lt;p&gt;I&amp;rsquo;ll admit, I thought it might be intimidating to interview my boss&amp;rsquo;s boss. But it wasn&amp;rsquo;t. Shireesh was down-to-earth, thoughtful, and full of stories&amp;mdash;especially about the shift from IC to manager. One of his early lessons? You can&amp;rsquo;t manage people in the same way you write code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Highlights of &amp;quot;How I got started leading database teams with Shireesh Thota&amp;quot;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="green-check-bullets" aria-hidden="true"&gt;&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;The shift from developer to manager (if only people came with documentation)&lt;/li&gt;
&lt;li&gt;Why it&amp;rsquo;s important for Microsoft to contribute to the PostgreSQL open source project&lt;/li&gt;
&lt;li&gt;Whether Shireesh has a favorite database (hint: I want it to be Postgres)&lt;/li&gt;
&lt;li&gt;Are there any plans to open source the new VS Code extension for Postgres?&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;What drives someone to publish 600+ issues of a Postgres newsletter?&lt;/h2&gt;

&lt;p&gt;&lt;figure class="float-right thumbnail"&gt;&lt;picture&gt;&lt;source srcset="https://cdn.citusdata.com/images/blog/talking-postgres-podcast-ep28-peter-cooper-thumbnail-1000x1000.webp" type="image/webp"&gt;&lt;img src="https://cdn.citusdata.com/images/blog/talking-postgres-podcast-ep28-peter-cooper-thumbnail-1000x1000.jpg" alt="Thumbnail for Ep28 with Peter Cooper" loading="lazy" width="175" height="175" /&gt;&lt;/picture&gt;&lt;figcaption&gt;&lt;a href="https://talkingpostgres.com/episodes/12-years-of-postgres-weekly-with-peter-cooper"&gt;Listen to Episode&amp;nbsp;28&lt;/a&gt;&lt;/figcaption&gt;&lt;/figure&gt; I&amp;rsquo;d been wanting to get &lt;strong&gt;Peter Cooper&lt;/strong&gt; on the podcast for a while. Peter is the creator of &lt;a href="https://postgresweekly.com/"&gt;Postgres Weekly&lt;/a&gt;&amp;mdash;my favorite developer newsletter&amp;mdash;and the person behind a whole collection of dev newsletters that reach nearly half a million people each week.&lt;/p&gt;

&lt;p&gt;But how to get Peter to a yes? I was still pondering that when Michael Christofides (founder of pgMustard, &lt;a href="https://youtu.be/mlti_9eD3w0?si=_BMgkeiW-wmvWNUk"&gt;past guest on Talking Postgres&lt;/a&gt;, and co-host of Postgres FM) had the same idea: &amp;ldquo;You should invite Peter Cooper.&amp;rdquo;&lt;/p&gt;

&lt;p&gt;As luck would have it, Peter&amp;mdash;who lives in the UK&amp;mdash;was visiting the San Francisco Bay Area. We met for coffee (no coffee was actually consumed), I pitched the idea, and he said yes. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Highlights of &amp;quot;12 years of Postgres Weekly with Peter Cooper&amp;quot;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="green-check-bullets" aria-hidden="true"&gt;&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;How Peter built a newsletter empire that reaches nearly half a million developers each week&lt;/li&gt;
&lt;li&gt;Trends in Postgres he&amp;rsquo;s seen from the editor&amp;rsquo;s seat&lt;/li&gt;
&lt;li&gt;How the BBC&amp;#39;s &amp;ldquo;big tent&amp;rdquo; shapes Peter&amp;rsquo;s editorial (&amp;amp; opinionated) voice&lt;/li&gt;
&lt;li&gt;Where he finds the good stuff (all the useful Postgres links)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;How does a trek to K2 base camp spark the idea for a database startup?&lt;/h2&gt;

&lt;p&gt;&lt;figure class="float-right thumbnail"&gt;&lt;picture&gt;&lt;source srcset="https://cdn.citusdata.com/images/blog/talking-postgres-podcast-ep27-peter-farkas-thumbnail-1000x1000.webp" type="image/webp"&gt;&lt;img src="https://cdn.citusdata.com/images/blog/talking-postgres-podcast-ep27-peter-farkas-thumbnail-1000x1000.jpg" alt="Thumbnail for Ep27 with Peter Farkas" loading="lazy" width="175" height="175" /&gt;&lt;/picture&gt;&lt;figcaption&gt;&lt;a href="https://talkingpostgres.com/episodes/how-i-got-started-with-ferretdb-why-we-chose-postgres-with-peter-farkas"&gt;Listen to Episode&amp;nbsp;27&lt;/a&gt;&lt;/figcaption&gt;&lt;/figure&gt; In this episode, &lt;strong&gt;Peter Farkas&lt;/strong&gt; shared the origin story of FerretDB, an open source MongoDB alternative that first became an idea at 16,000 feet at K2 base camp in the Karakoram. (Spoiler: “Ferret” wasn’t the original name.) We also talked about why Postgres was the obvious choice for FerretDB&amp;mdash;and how Peter’s background in customer support has shaped his world view.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Highlights of &amp;quot;How I got started with FerretDB &amp;amp; why we chose Postgres with Peter Farkas&amp;quot;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="green-check-bullets" aria-hidden="true"&gt;&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;What “true open source” means to Peter and FerretDB &lt;/li&gt;
&lt;li&gt;How FerretDB now runs on the new, open-sourced DocumentDB extension&lt;/li&gt;
&lt;li&gt;How Hungarian Trappist cheese 🧀 deserves a footnote in database history&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;What does it take to lead a global open source project like Postgres?&lt;/h2&gt;

&lt;p&gt;&lt;figure class="float-right thumbnail"&gt;&lt;picture&gt;&lt;source srcset="https://cdn.citusdata.com/images/blog/talking-postgres-podcast-ep26-bruce-momjian-thumbnail-1000x1000.webp" type="image/webp"&gt;&lt;img src="https://cdn.citusdata.com/images/blog/talking-postgres-podcast-ep26-bruce-momjian-thumbnail-1000x1000.jpg" alt="Thumbnail for Ep26 with Bruce Momjian" loading="lazy" width="175" height="175" /&gt;&lt;/picture&gt;&lt;figcaption&gt;&lt;a href="https://talkingpostgres.com/episodes/open-source-leadership-with-bruce-momjian"&gt;Listen to Episode&amp;nbsp;26&lt;/a&gt;&lt;/figcaption&gt;&lt;/figure&gt; &lt;strong&gt;Bruce Momjian&lt;/strong&gt; and I were both at &lt;a href="https://pgconf.in/conferences/pgconfin2025"&gt;PGConf India&lt;/a&gt; in Bangalore when he said yes to joining the podcast. Bruce has been part of Postgres since 1996, and he’s spent decades giving talks, writing, and teaching others about the database. So we had too many good topics to choose from. &lt;/p&gt;

&lt;p&gt;We landed on one that felt especially relevant: what it means to lead in open source. Bruce shared stories, lessons, and a perspective that only comes from being in the trenches for nearly 30 years. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Highlights of &amp;quot;Open Source Leadership with Bruce Momjian&amp;quot;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="green-check-bullets" aria-hidden="true"&gt;&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Why servant leadership is such a good fit for open source&lt;/li&gt;
&lt;li&gt;How being open to feedback can lead to a much higher quality result&lt;/li&gt;
&lt;li&gt;Tips on becoming a good conference speaker&lt;/li&gt;
&lt;li&gt;How it doesn&amp;rsquo;t cost you anything to show appreciation to people&lt;/li&gt;
&lt;li&gt;Bruce&amp;rsquo;s early days on the project, co-founding the PostgreSQL Global Development Group&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Why so many Python developers have an affinity for Postgres&lt;/h2&gt;

&lt;p&gt;&lt;figure class="float-right thumbnail"&gt;&lt;picture&gt;&lt;source srcset="https://cdn.citusdata.com/images/blog/talking-postgres-podcast-ep25-dawn-wages-thumbnail-1000x1000.webp" type="image/webp"&gt;&lt;img src="https://cdn.citusdata.com/images/blog/talking-postgres-podcast-ep25-dawn-wages-thumbnail-1000x1000.jpg" alt="Thumbnail for Ep25 with Dawn Wages" loading="lazy" width="175" height="175" /&gt;&lt;/picture&gt;&lt;figcaption&gt;&lt;a href="https://talkingpostgres.com/episodes/why-python-developers-just-use-postgres-with-dawn-wages"&gt;Listen to Episode&amp;nbsp;25&lt;/a&gt;&lt;/figcaption&gt;&lt;/figure&gt; When I first met &lt;strong&gt;Dawn Wages&lt;/strong&gt;, we somehow got onto the topic of her book-in-progress, &lt;em&gt;Domain-Driven Django&lt;/em&gt;. One chapter title jumped out at me: &amp;ldquo;Just Use Postgres.&amp;rdquo; Of course, I had questions. &lt;/p&gt;

&lt;p&gt;At the time, Dawn was President of the Python Software Foundation&amp;mdash;and I was curious to hear how a Django developer thinks about Postgres. Why do so many Python and Django devs seem to have such affection for it? This one&amp;rsquo;s part origin story, part practical wisdom, and part homage to Postgres.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Highlights of &amp;quot;Why Python developers just use Postgres with Dawn Wages&amp;quot;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="green-check-bullets" aria-hidden="true"&gt;&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Why does a book for Django developers deserve a chapter on Postgres?&lt;/li&gt;
&lt;li&gt;Why &amp;ldquo;free as in puppies&amp;rdquo; beats &amp;ldquo;free as in cake&amp;rdquo;&lt;/li&gt;
&lt;li&gt;How making really smart friends helps you in work (and in life)&lt;/li&gt;
&lt;li&gt;Is Python the second-best language for everything?&lt;/li&gt;
&lt;li&gt;Fun fact: Did you know PostgreSQL.org is built with Django?&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Why mentor? Because nobody works on an open-source project forever—eventually people move on&lt;/h2&gt;

&lt;p&gt;&lt;figure class="float-right thumbnail"&gt;&lt;picture&gt;&lt;source srcset="https://cdn.citusdata.com/images/blog/talking-postgres-podcast-ep24-robert-haas-thumbnail-1000x1000.webp" type="image/webp"&gt;&lt;img src="https://cdn.citusdata.com/images/blog/talking-postgres-podcast-ep24-robert-haas-thumbnail-1000x1000.jpg" alt="Thumbnail for Ep24 with Robert Haas" loading="lazy" width="175" height="175" /&gt;&lt;/picture&gt;&lt;figcaption&gt;&lt;a href="https://talkingpostgres.com/episodes/why-mentor-postgres-developers-with-robert-haas"&gt;Listen to Episode&amp;nbsp;24&lt;/a&gt;&lt;/figcaption&gt;&lt;/figure&gt; Quote from &lt;strong&gt;Robert Haas&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="normal-quote" aria-hidden="true"&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;I think we&amp;#39;re all hoping that there will be more people coming along after us who want to pick up the torch and continue to make PostgreSQL amazing. But I think also we&amp;#39;d like to not just sustain the community but to really see it grow. We&amp;#39;d like PostgreSQL to have more developers in the future than it does today and to have a greater pace of development than it does today. And I think that mentoring is a way that we can try to help that happen.&amp;rdquo;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Highlights of &amp;quot;Why mentor Postgres developers with Robert Haas&amp;quot;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="green-check-bullets" aria-hidden="true"&gt;&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Why did Robert kickstart the new PostgreSQL Hackers mentoring program?&lt;/li&gt;
&lt;li&gt;How Robert channeled “what would Tom Lane do?” during patch reviews&lt;/li&gt;
&lt;li&gt;How being willing to admit you don&amp;#39;t know how to do something properly (yet) is an important part of getting better at it&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;How walking into a Postgres session at a Linux conference changed everything&lt;/h2&gt;

&lt;p&gt;&lt;figure class="float-right thumbnail"&gt;&lt;picture&gt;&lt;source srcset="https://cdn.citusdata.com/images/blog/talking-postgres-podcast-ep23-daniel-gustafsson-thumbnail-1000x1000.webp" type="image/webp"&gt;&lt;img src="https://cdn.citusdata.com/images/blog/talking-postgres-podcast-ep23-daniel-gustafsson-thumbnail-1000x1000.jpg" alt="Thumbnail for Ep23 with Daniel Gustafsson" loading="lazy" width="175" height="175" /&gt;&lt;/picture&gt;&lt;figcaption&gt;&lt;a href="https://talkingpostgres.com/episodes/how-i-got-started-as-a-developer-in-postgres-with-daniel-gustafsson"&gt;Listen to Episode&amp;nbsp;23&lt;/a&gt;&lt;/figcaption&gt;&lt;/figure&gt; Quote from &lt;strong&gt;Daniel Gustafsson&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="normal-quote" aria-hidden="true"&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;And I was sitting there, being completely blown away, realizing that this database has everything that I need and want. It has PL/Perl. I can write everything as a stored procedure. It was the programming environment that I was looking for, but I didn&amp;#39;t know existed. And so from that day and time, I have not used any other database.&amp;rdquo;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Highlights of &amp;quot;How I got started as a developer &amp;amp; in Postgres with Daniel Gustafsson&amp;quot;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="green-check-bullets" aria-hidden="true"&gt;&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;The story of one of Daniel’s earliest memories: a Datasaab M10 computer&amp;mdash;or as Daniel described it: “a giant steel box in the family study”&lt;/li&gt;
&lt;li&gt;The exact date, time, &amp;amp; location of the chance LinuxForum encounter with Bruce Momjian that changed the course of Daniel&amp;#39;s career&lt;/li&gt;
&lt;li&gt;Daniel&amp;#39;s take on the magic of conferences in the Postgres community&amp;mdash;including Nordic PGDay, PGconfdev, &amp;amp; POSETTE&lt;/li&gt;
&lt;li&gt;Advice for Daniel’s younger self: “don’t be afraid to ask for help” &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;What’s it like to lead Postgres engineering at a cloud giant like Microsoft?&lt;/h2&gt;

&lt;p&gt;&lt;figure class="float-right thumbnail"&gt;&lt;picture&gt;&lt;source srcset="https://cdn.citusdata.com/images/blog/talking-postgres-podcast-ep22-affan-dar-thumbnail-1000x1000.webp" type="image/webp"&gt;&lt;img src="https://cdn.citusdata.com/images/blog/talking-postgres-podcast-ep22-affan-dar-thumbnail-1000x1000.jpg" alt="Thumbnail for Ep22 with Affan Dar" loading="lazy" width="175" height="175" /&gt;&lt;/picture&gt;&lt;figcaption&gt;&lt;a href="https://talkingpostgres.com/episodes/leading-engineering-for-postgres-on-azure-with-affan-dar"&gt;Listen to Episode&amp;nbsp;22&lt;/a&gt;&lt;/figcaption&gt;&lt;/figure&gt; This episode with &lt;strong&gt;Affan Dar&lt;/strong&gt;, who heads up engineering for Postgres at Microsoft, was a fun one. Many of the guests on Talking Postgres come from the open source community, but this time we switched it up and spoke with the Azure engineering leader who’s steering all of the Postgres work at Microsoft (including the open source contributors team).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Highlights of &amp;quot;Leading engineering for Postgres on Azure with Affan Dar&amp;quot;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="green-check-bullets" aria-hidden="true"&gt;&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;What it&amp;#39;s like to lead the engineering team behind Azure Database for PostgreSQL&lt;/li&gt;
&lt;li&gt;The strategy behind Microsoft&amp;#39;s investments into Postgres&lt;/li&gt;
&lt;li&gt;Is Postgres extensibility a double-edged sword?&lt;/li&gt;
&lt;li&gt;Affan’s experiences switching between individual contributor &amp;amp; management roles (&amp;amp; back)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Have you ever achieved something remarkable because someone planted an idea in your mind?&lt;/h2&gt;

&lt;p&gt;&lt;figure class="float-right thumbnail"&gt;&lt;picture&gt;&lt;source srcset="https://cdn.citusdata.com/images/blog/talking-postgres-podcast-ep21-andrew-atkinson-thumbnail-1000x1000.webp" type="image/webp"&gt;&lt;img src="https://cdn.citusdata.com/images/blog/talking-postgres-podcast-ep21-andrew-atkinson-thumbnail-1000x1000.jpg" alt="Thumbnail for Ep21 with Andrew Atkinson" loading="lazy" width="175" height="175" /&gt;&lt;/picture&gt;&lt;figcaption&gt;&lt;a href="https://talkingpostgres.com/episodes/helping-rails-developers-learn-postgres-with-andrew-atkinson"&gt;Listen to Episode&amp;nbsp;21&lt;/a&gt;&lt;/figcaption&gt;&lt;/figure&gt; &lt;strong&gt;Andrew Atkinson&lt;/strong&gt; is a software developer who got deeper into Postgres as he and his team tackled scale and reliability challenges. But what makes Andrew’s story stand out is what he did next: he wrote a book to share what he learned&amp;mdash;specifically for Rails developers.&lt;/p&gt;

&lt;p&gt;But the idea for the book didn’t come out of nowhere. It was planted by someone else. And that spark turned into a popular new book about PostgreSQL.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Highlights of &amp;quot;Helping Rails developers learn Postgres with Andrew Atkinson&amp;quot;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="green-check-bullets" aria-hidden="true"&gt;&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Tackling scalability challenges in Rails&lt;/li&gt;
&lt;li&gt;The story behind Andrew’s book, &lt;em&gt;High Performance PostgreSQL for Rails&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;The idea behind &amp;quot;writing is thinking&amp;quot;&lt;/li&gt;
&lt;li&gt;The connection between Andrew&amp;#39;s Postgres book and swimming to Antarctica&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;It was not Tom Lane’s plan to become a computer person&lt;/h2&gt;

&lt;p&gt;&lt;figure class="float-right thumbnail"&gt;&lt;picture&gt;&lt;source srcset="https://cdn.citusdata.com/images/blog/talking-postgres-podcast-ep20-tom-lane-thumbnail-1000x1000.webp" type="image/webp"&gt;&lt;img src="https://cdn.citusdata.com/images/blog/talking-postgres-podcast-ep20-tom-lane-thumbnail-1000x1000.jpg" alt="Thumbnail for Ep20 with Tom Lane" loading="lazy" width="175" height="175" /&gt;&lt;/picture&gt;&lt;figcaption&gt;&lt;a href="https://talkingpostgres.com/episodes/how-i-got-started-as-a-developer-in-postgres-with-tom-lane"&gt;Listen to Episode&amp;nbsp;20&lt;/a&gt;&lt;/figcaption&gt;&lt;/figure&gt; &lt;strong&gt;Tom Lane&lt;/strong&gt; is a legend in the PostgreSQL contributor community and it was an honor to have him on the podcast. His origin story? Not what you’d expect. He originally wanted to be a pinball machine designer&amp;mdash;and only later found his way into databases. Thank goodness he eventually went down the rabbit hole and discovered how interesting Postgres is. Because once he did, he never looked back.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Highlights of &amp;quot;How I got started as a developer &amp;amp; in Postgres with Tom Lane&amp;quot;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="green-check-bullets" aria-hidden="true"&gt;&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Getting nudged by a professor to apply to be a research assistant&lt;/li&gt;
&lt;li&gt;Going down a bug-fixing rabbithole, starting with bugs in a port to HP-UX&lt;/li&gt;
&lt;li&gt;The importance of adopting a mindset that embraces feedback&lt;/li&gt;
&lt;li&gt;One of the things about working in open source: your failures are inherently extremely public&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;If you could work on anything, would you quit your job to pursue it?&lt;/h2&gt;

&lt;p&gt;&lt;figure class="float-right thumbnail"&gt;&lt;picture&gt;&lt;source srcset="https://cdn.citusdata.com/images/blog/talking-postgres-podcast-ep19-melanie-plageman-thumbnail-1000x1000.webp" type="image/webp"&gt;&lt;img src="https://cdn.citusdata.com/images/blog/talking-postgres-podcast-ep19-melanie-plageman-thumbnail-1000x1000.jpg" alt="Thumbnail for Ep19 with Melanie Plageman" loading="lazy" width="175" height="175" /&gt;&lt;/picture&gt;&lt;figcaption&gt;&lt;a href="https://talkingpostgres.com/episodes/becoming-a-postgres-committer-with-melanie-plageman"&gt;Listen to Episode&amp;nbsp;19&lt;/a&gt;&lt;/figcaption&gt;&lt;/figure&gt; &lt;strong&gt;Melanie Plageman&lt;/strong&gt; did exactly that. She left her job to go deep on Postgres—writing an extension, watching every Andy Pavlo video ever, and eventually becoming a Postgres committer. So many of us learn from Melanie and this episode is full of her hard-won insights. Also this, one of my favorite quotes from the podcast:&lt;/p&gt;

&lt;div class="normal-quote" aria-hidden="true"&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;“When you&amp;#39;re getting feedback, it&amp;#39;s not necessarily about whether or not you&amp;#39;re good at engineering—or your patch is a good patch. It&amp;#39;s about whether or not it&amp;#39;s the right thing for Postgres. Not taking things personally—and thinking about it more from the perspective of ‘how can I make my work a better fit for Postgres’ makes it hurt a little bit less.”&lt;/p&gt;

&lt;p&gt;&amp;ndash;Melanie Plageman &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Highlights of &amp;quot;Becoming a Postgres committer with Melanie Plageman&amp;quot;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="green-check-bullets" aria-hidden="true"&gt;&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;How Melanie quit her job and proceeded to learn everything she could about Postgres&lt;/li&gt;
&lt;li&gt;Writing a Postgres extension as a way to learn about Postgres (and land a job!)&lt;/li&gt;
&lt;li&gt;The value of mentorship, karma, and paying it forward&lt;/li&gt;
&lt;li&gt;What’s it like to get so deep in code you lose the ability to relate to people?&lt;/li&gt;
&lt;li&gt;The enormous weight of the Postgres committer responsibility&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;How driving a forklift at a cheese factory led to a career in databases&lt;/h2&gt;

&lt;p&gt;&lt;figure class="float-right thumbnail"&gt;&lt;picture&gt;&lt;source srcset="https://cdn.citusdata.com/images/blog/talking-postgres-podcast-ep18-david-rowley-thumbnail-1000x1000.webp" type="image/webp"&gt;&lt;img src="https://cdn.citusdata.com/images/blog/talking-postgres-podcast-ep18-david-rowley-thumbnail-1000x1000.jpg" alt="Thumbnail for Ep18 with David Rowley" loading="lazy" width="175" height="175" /&gt;&lt;/picture&gt;&lt;figcaption&gt;&lt;a href="https://talkingpostgres.com/episodes/how-i-got-started-as-a-developer-in-postgres-with-david-rowley"&gt;Listen to Episode&amp;nbsp;18&lt;/a&gt;&lt;/figcaption&gt;&lt;/figure&gt; &lt;strong&gt;David Rowley&lt;/strong&gt;’s origin story is full of surprises. He started out driving a forklift in a cheese factory&amp;mdash;and ended up becoming a go-to expert on Postgres performance.&lt;/p&gt;

&lt;p&gt;This episode is a fan favorite (and it even inspired Daniel Gustafsson to come on the show a few months later). One of the most useful takeaways? David’s advice on how to make your work stand out: do the research, read the archives, and show that you’ve done your homework.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Highlights of &amp;quot;How I got started as a developer &amp;amp; in Postgres with David Rowley&amp;quot;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="green-check-bullets" aria-hidden="true"&gt;&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;David’s unusual origin story: from driving a forklift in a cheese factory to writing software&lt;/li&gt;
&lt;li&gt;Reading the manual for Postgres 9.1 in a remote part of Australia (because there wasn’t a great deal else to do in the evenings)&lt;/li&gt;
&lt;li&gt;How speeding up Postgres gives a similar sort of buzz to tuning motorbike performance&lt;/li&gt;
&lt;li&gt;Appreciating the very high standards in the Postgres contributor community&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Have you ever eavesdropped on other people’s conversations?&lt;/h2&gt;

&lt;p&gt;&lt;figure class="float-right thumbnail"&gt;&lt;picture&gt;&lt;source srcset="https://cdn.citusdata.com/images/blog/talking-postgres-podcast-ep17-pino-de-candia-thumbnail-1000x1000.webp" type="image/webp"&gt;&lt;img src="https://cdn.citusdata.com/images/blog/talking-postgres-podcast-ep17-pino-de-candia-thumbnail-1000x1000.jpg" alt="Thumbnail for Ep17 with Pino de Candia" loading="lazy" width="175" height="175" /&gt;&lt;/picture&gt;&lt;figcaption&gt;&lt;a href="https://talkingpostgres.com/episodes/podcasting-about-postgres-with-pino-de-candia"&gt;Listen to Episode&amp;nbsp;17&lt;/a&gt;&lt;/figcaption&gt;&lt;/figure&gt; &lt;strong&gt;Pino de Candia&lt;/strong&gt; co-hosted the first year of this podcast with me—back when it was still called Path To Citus Con, before we renamed the show. In this episode, he came back for a meta-conversation about what we’ve learned from podcasting about Postgres, and what it’s like to record these conversations live. &lt;/p&gt;

&lt;p&gt;We also took a walk down memory lane, revisiting some of our favorite bits from the first 16 episodes—and the guests who shared their Postgres stories with us.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Highlights of &amp;quot;Podcasting about Postgres with Pino de Candia&amp;quot;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="green-check-bullets" aria-hidden="true"&gt;&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Is listening to a podcast the next best thing to being in the hallway track at a conference?&lt;/li&gt;
&lt;li&gt;Why it’s worth making the investment to understand Postgres not just at a technical level but also to get to know the community and the project’s history&lt;/li&gt;
&lt;li&gt;What it’s like to record the podcasts live on Discord, with a parallel live text chat&lt;/li&gt;
&lt;li&gt;Shout-outs to other useful Postgres podcasts, including Postgres FM and Scaling Postgres&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Help more people discover Talking Postgres&lt;/h2&gt;

&lt;p&gt;You can find all the episodes—plus high quality transcripts—on &lt;a href="https://talkingpostgres.com/"&gt;TalkingPostgres.com&lt;/a&gt;, or wherever you get your podcasts. 
If you’ve enjoyed the podcast, there are a few easy ways to help more Postgres folks discover it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Leave a review or rating (5 stars is always appreciated)&lt;/li&gt;
&lt;li&gt;Tell a friend or teammate&lt;/li&gt;
&lt;li&gt;&lt;a href="https://talkingpostgres.com/subscribe"&gt;Subscribe&lt;/a&gt; so you don’t miss future episodes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Word of mouth is still one of the best ways to help new listeners find the show.&lt;/p&gt;

&lt;h3&gt;We record the podcast LIVE on Discord (you’re invited)&lt;/h3&gt;

&lt;p&gt;If you didn’t know, we record &lt;em&gt;Talking Postgres&lt;/em&gt; live on the Microsoft Open Source Discord, with a parallel live text chat. It usually happens at 10am PT on the first or second Wednesday of the month.&lt;/p&gt;

&lt;p&gt;Want to listen in live? You can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Follow the &lt;a href="https://aka.ms/talkingpostgres-cal"&gt;Talking Postgres live recording calendar&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Or join the &lt;a href="https://aka.ms/open-source-discord"&gt;Microsoft Open Source Discord&lt;/a&gt; and mark yourself as “interested” in upcoming events. That way, you’ll get notified via Discord right before we go live.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Thank you to the people who make this podcast possible&lt;/h3&gt;

&lt;p&gt;I’m humbled by every guest who’s joined the show—starting with the &lt;a href="https://talkingpostgres.com/episodes/working-in-public-on-open-source"&gt;very first episode about Working in public on open source&lt;/a&gt; with Simon Willison and Marco Slot. Every guest has brought something different: stories, insights, and their unique journeys with Postgres. &lt;/p&gt;

&lt;p&gt;Huge thanks to Aaron Wislang of Microsoft, who’s been co-producing the podcast since day one. And to the cohort of Postgres friends who keep encouraging me to do this: Daniel Gustafsson, Melanie Plageman, Thomas Munro, and Boriss Mejías. 
Also a big shout-out to Pino de Candia, my co-host for the first year. And to my boss Charles Feddersen for supporting this podcast and all of my &lt;a href="https://www.postgresql.org/community/contributors/"&gt;Postgres open source contributor&lt;/a&gt; work.  &lt;/p&gt;

&lt;div&gt;&lt;figure&gt;&lt;img src="https://cdn.citusdata.com/images/blog/talking-postgres-subscribe.png" alt="Talking Postgres Subscribe page screenshot" loading="lazy" width="850" height="605"&gt;&lt;figcaption&gt;&lt;strong&gt;Figure 2:&lt;/strong&gt; Screenshot of the TalkingPostgres.com "Subscribe" tab, showing all the many platforms where you can listen (and subscribe) to the Talking Postgres podcast—as well as the RSS feed URL.&lt;/figcaption&gt;&lt;/figure&gt;&lt;/div&gt;

&lt;hr&gt;

&lt;style&gt;.thumbnail{margin: 0 0 1em 1em;width:175px;max-width:40%}.thumbnail figcaption{font-weight:inherit;font-size:inherit;}.inline-newsletter-signup{display:none}&lt;/style&gt;
&lt;p&gt;&lt;em&gt;This article was originally published on &lt;a href='https://www.citusdata.com/blog/2025/07/22/bits-of-wisdom-from-a-year-of-talking-postgres/'&gt;citusdata.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>Ultimate Guide to POSETTE: An Event for Postgres, 2025 edition</title>
    <link rel="alternate" href="https://www.citusdata.com/blog/2025/06/04/ultimate-guide-to-posette-2025/"/>
    <id>https://www.citusdata.com/blog/2025/06/04/ultimate-guide-to-posette-2025/</id>
    <published>2025-06-04T15:26:00+00:00</published>
    <updated>2025-06-04T15:26:00+00:00</updated>
    <author>Claire Giordano</author>
    <content type="html">&lt;p&gt;&lt;a href="https://posetteconf.com/2025/"&gt;POSETTE: An Event for Postgres 2025&lt;/a&gt; is back for its 4&lt;sup&gt;th&lt;/sup&gt; year—free, virtual, and packed with deep expertise. No travel needed, just your laptop, internet, and curiosity.&lt;/p&gt;

&lt;p&gt;This year’s 45 speakers are smart, capable Postgres practitioners—core contributors, performance experts, application developers, Azure engineers, extension maintainers—and their talks are as interesting as they are useful.&lt;/p&gt;

&lt;p&gt;The four livestreams (42 talks total) run from June 10-12, 2025. Every talk will be posted to YouTube afterward (un-gated, of course). But if you can join live, I hope you do! On the virtual hallway track on Discord, you’ll be able to chat with POSETTE speakers—as well as other attendees. And yes, there will be swag.&lt;/p&gt;

&lt;p&gt;This “ultimate guide” blog post is your shortcut to navigating POSETTE 2025. In this post you’ll get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="#by-the-numbers"&gt;A “by the numbers” summary&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#keynotes"&gt;2 Amazing Keynotes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#postgres-core"&gt;18 Postgres core talks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#postgres-ecosystem"&gt;12 Postgres ecosystem talks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#azure"&gt;10 Azure Database for PostgreSQL talks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#schedule"&gt;Where to find the POSETTE Schedule&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#discord"&gt;How to watch &amp;amp; how to participate on Discord&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#whats-new"&gt;What’s new in POSETTE 2025?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#speakers"&gt;A big thank you to our 45 amazing speakers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#join-us"&gt;Join us for POSETTE 2025 &amp;amp; mark your calendars&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="by-the-numbers"&gt;“By the numbers” summary for POSETTE 2025&lt;/h2&gt;

&lt;p&gt;Here’s a quick snapshot of what you need to know about POSETTE:&lt;/p&gt;

&lt;table style="table-layout:fixed;width:100%;"&gt;
  &lt;colgroup&gt;
    &lt;col style="width:34%"&gt;
    &lt;col style="width:66%"&gt;
  &lt;/colgroup&gt;
   &lt;thead&gt;
      &lt;tr&gt;
         &lt;th colspan="2" style="text-align:center"&gt;
            About POSETTE: An Event for Postgres 2025
         &lt;/th&gt;
      &lt;/tr&gt;
   &lt;/thead&gt;
   &lt;tbody&gt;
      &lt;tr&gt;
         &lt;th style="text-align:right"&gt;
            3 days
         &lt;/th&gt;
         &lt;td&gt;
            June 10-12, 2025
         &lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
         &lt;th style="text-align:right"&gt;
            4 livestreams
         &lt;/th&gt;
         &lt;td&gt;
            In Americas &amp;amp; EMEA time zones (but of course you can watch from anywhere)
         &lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
         &lt;th style="text-align:right"&gt;
            42 talks
         &lt;/th&gt;
         &lt;td&gt;
            All free, all virtual
         &lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
         &lt;th style="text-align:right"&gt;
            2 keynotes
         &lt;/th&gt;
         &lt;td&gt;
            From Bruce Momjian &amp;amp; Charles Feddersen
         &lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
         &lt;th style="text-align:right"&gt;
            45 speakers
         &lt;/th&gt;
         &lt;td&gt;
            PG contributors, users, application developers, community members, &amp;amp; Azure engineers
         &lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
         &lt;th style="text-align:right"&gt;
            17.4% CFP acceptance rate
         &lt;/th&gt;
         &lt;td&gt;
            40 talks selected from 230 submissions
         &lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
         &lt;th style="text-align:right"&gt;
            26% Azure-focused talks
         &lt;/th&gt;
         &lt;td&gt;
            11 talks out of 42 feature Azure Database for PostgreSQL
         &lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
         &lt;th style="text-align:right"&gt;
            74% general Postgres talks
         &lt;/th&gt;
         &lt;td&gt;
            31 talks are not cloud-specific at all
         &lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
         &lt;th style="text-align:right"&gt;
            16 languages
         &lt;/th&gt;
         &lt;td&gt;
            Published videos will have captions available in 16 languages: English, Czech, Dutch, French, German, Italian, Japanese, Korean, Polish, Portuguese, Russian, Spanish, Turkish, Ukrainian, Chinese Simplified, and Chinese Traditional
         &lt;/td&gt;
      &lt;/tr&gt;
   &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;To give you a feel for the high-level categories and detailed “tags” that help you navigate all 42 talks, this diagram may help.&lt;/p&gt;

&lt;figure&gt;
&lt;picture&gt;
&lt;source srcset="https://cdn.citusdata.com/images/blog/ultimate-guide-posette-2025-figure1-by-the-numbers.webp" type="image/webp"&gt;
&lt;img src="https://cdn.citusdata.com/images/blog/ultimate-guide-posette-2025-figure1-by-the-numbers.jpg" alt="POSETE 2025 by the numbers" width="800" height="450" loading="lazy" /&gt;
&lt;/picture&gt;
&lt;figcaption&gt;&lt;strong&gt;Figure 1:&lt;/strong&gt; Ultimate Guide for POSETTE 2025, with high-level categories and detailed tags for all 42 talks.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;h2 id="keynotes"&gt;&lt;strong style="font-weight:900"&gt;2&lt;/strong&gt; Amazing Keynotes&lt;/h2&gt;

&lt;p&gt;If you&amp;rsquo;re interested in what Microsoft is building for Postgres these days, then Charles Feddersen&amp;rsquo;s keynote is a must-watch. And in spite of all the hype about AI, you&amp;rsquo;re guaranteed to enjoy Bruce Momjian&amp;rsquo;s keynote about databases in the AI trenches.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/what-microsoft-is-building-for-postgres-in-2025/"&gt;KEYNOTE: What Microsoft is Building for Postgres in 2025&lt;/a&gt;, by Charles Feddersen in Livestream 1 and Livestream 4 &lt;small&gt;(Azure, Postgres 18, Async IO, AI, RAG, VS Code, community, livestream1, livestream4)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/databases-in-the-ai-trenches/"&gt;KEYNOTE: Databases in the AI Trenches&lt;/a&gt;, by Bruce Momjian in Livestream 2 and Livestream 3 &lt;small&gt;(AI, semantic search, vector search, RAG, livestream2, livestream3)&lt;/small&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="postgres-core"&gt;&lt;strong style="font-weight:900"&gt;18&lt;/strong&gt; Postgres Core talks&lt;/h2&gt;

&lt;h3&gt;Performance&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/best-practices-for-tuning-slow-postgres-queries/"&gt;Best Practices for Tuning Slow Postgres Queries&lt;/a&gt;, by Lukas Fittl &lt;small&gt;(EXPLAIN, optimizing queries, performance, startup, livestream1)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/fun-with-uuids/"&gt;Fun With UUIDs&lt;/a&gt;, by Chris Ellis &lt;small&gt;(UUIDs, indexes, performance, livestream3)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/hacking-postgres-executor-for-performance/"&gt;Hacking Postgres Executor For Performance&lt;/a&gt;, by Amit Langote &lt;small&gt;(PG internals, executor, performance, livestream4)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/leveraging-table-partitioning-for-query-performance-and-data-archiving/"&gt;Leveraging table partitioning for query performance and data archiving&lt;/a&gt;, by Derk van Veen &lt;small&gt;(partitioning, performance, livestream4)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/performance-archaeology-20-years-of-improvements/"&gt;Performance Archaeology - 20 years of improvements&lt;/a&gt;, by Tomas Vondra &lt;small&gt;(performance, benchmarks, livestream1)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/what-s-new-in-the-postgres-18-query-planner-optimizer/"&gt;What’s new in the Postgres 18 query planner / optimizer&lt;/a&gt;, by David Rowley &lt;small&gt;(PG internals, planner, livestream2)&lt;/small&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Postgres internals&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/designing-a-monitoring-feature-in-postgresql/"&gt;Designing a monitoring feature in PostgreSQL&lt;/a&gt;, by Rahila Syed &lt;small&gt;(monitoring, PG internals, livestream4)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/postgresql-and-linux-kernel-interactions/"&gt;PostgreSQL and Linux Kernel interactions&lt;/a&gt;, by Krishnakumar &amp;quot;KK&amp;quot; Ravi &lt;small&gt;(Async IO, PG internals, Linux, troubleshooting, livestream3)&lt;/small&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Replication&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/logical-replication-theory-and-concepts/"&gt;Logical replication theory and concepts&lt;/a&gt;, by Ashutosh Bapat &lt;small&gt;(replication, wal, livestream2)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/myths-and-truths-about-synchronous-replication-in-postgresql/"&gt;Myths and Truths about Synchronous Replication in PostgreSQL&lt;/a&gt;, by Alexander Kukushkin &lt;small&gt;(HA, replication, livestream3)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/securing-postgres-with-streaming-replication/"&gt;Securing Postgres with streaming replication&lt;/a&gt;, by Jan Karremans &lt;small&gt;(replication, disaster recovery, WAL, livestream4)&lt;/small&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Community&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/setting-max_connection-building-postgresql-user-groups-that-bring-people-together/"&gt;Setting max_connection: Building PostgreSQL user groups that bring people together&lt;/a&gt;, by Ellyne Phneah &lt;small&gt;(community, startup, user groups, livestream4)&lt;/small&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Fun&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/debugging-data-corruption-in-postgresql-a-systematic-approach/"&gt;Debugging Data Corruption in PostgreSQL: A Systematic Approach&lt;/a&gt;, by Palak Chaturvedi &amp;amp; Nitin Jadhav &lt;small&gt;(debugging, Linux, livestream2)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/implementing-strict-serializability-with-pg_xact/"&gt;Implementing Strict Serializability with pg_xact&lt;/a&gt;, by Jimmy Zelinskie &lt;small&gt;(strict serializability, pg_xact, livestream3)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/incremental-backup-in-postgresql/"&gt;Incremental Backup in PostgreSQL&lt;/a&gt;, by Robert Haas &lt;small&gt;(backup, incremental backup, livestream1)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/managing-postgres-at-scale-challenges-tools-techniques/"&gt;Managing Postgres at scale: Challenges, Tools &amp;amp; Techniques&lt;/a&gt;, by Karen Jex &lt;small&gt;(backups, HA, monitoring, partitioning, scalability, sharding, livestream3)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/neon-how-we-made-postgresql-serverless/"&gt;Neon: How we made PostgreSQL serverless&lt;/a&gt;, by Heikki Linnakangas &lt;small&gt;(serverless, Neon, startup, livestream2)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/postgres-storytelling-cunning-schema-design-with-creative-data-modeling/"&gt;Postgres Storytelling: Cunning Schema Design with Creative Data Modeling&lt;/a&gt;, by Boriss Mejías &amp;amp; Sarah Conway &lt;small&gt;(data modeling, livestream1)&lt;/small&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="postgres-ecosystem"&gt;&lt;strong style="font-weight:900"&gt;12&lt;/strong&gt; Postgres Ecosystem talks&lt;/h2&gt;

&lt;h3&gt;Analytics&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/building-a-postgresql-data-warehouse/"&gt;Building a PostgreSQL data warehouse&lt;/a&gt;, by Marco Slot &lt;small&gt;(extensions, analytics, data warehouse, livestream4)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/pg_duckdb-ducking-awesome-analytics-in-postgres/"&gt;pg_duckdb: Ducking awesome analytics in Postgres&lt;/a&gt;, by Jelte Fennema-Nio &lt;small&gt;(extensions, analytics, DuckDB, startup, livestream2)&lt;/small&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;App dev&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/building-modern-python-web-apps-with-postgresql/"&gt;Building modern Python web apps with PostgreSQL&lt;/a&gt;, by Pamela Fox &lt;small&gt;(app dev, Python, livestream1)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/exploring-net-and-postgresql-on-linux-as-your-oss-app-dev-stack/"&gt;Exploring .NET and PostgreSQL on Linux as your OSS app dev stack&lt;/a&gt;, by Silvano Coriani &lt;small&gt;(app dev, .NET, Linux, livestream2)&lt;/small&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Extensions&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/can-we-use-rust-to-develop-extensions-for-postgresql/"&gt;Can We Use Rust to Develop Extensions for PostgreSQL?&lt;/a&gt;, by Shinya Kato &lt;small&gt;(extensions, pgrx, Rust, livestream2)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/designing-for-document-databases-in-postgresql/"&gt;Designing for Document Databases in PostgreSQL&lt;/a&gt;, by Vinod Sridharan &lt;small&gt;(extensions, DocumentDB, livestream1)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/elasticsearch-quality-full-text-search-in-postgres-via-tantivy/"&gt;Elasticsearch-Quality Full-Text Search in Postgres via Tantivy&lt;/a&gt;, by Philippe Noël &lt;small&gt;(Full text search, extensions, pg_search, startup, livestream1)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/from-mongodb-to-postgres-building-an-open-standard-for-document-databases/"&gt;From MongoDB to Postgres: Building an Open Source Standard for Document Databases&lt;/a&gt;, by Peter Farkas &lt;small&gt;(FerretDB, extensions, DocumentDB, MongoDB, startup, livestream4)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/hitchhiker-s-guide-to-row-level-security-in-citus/"&gt;Hitchhiker&amp;#39;s Guide to Row-Level Security in Citus&lt;/a&gt;, by Adam Wølk &lt;small&gt;(security, Citus, livestream4)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/resource-control-admission-i-have-a-date-with-my-psi/"&gt;Resource Control Admission - I have a date with my PSI&lt;/a&gt;, by Cédric Villemain &lt;small&gt;(extensions, pg_psi, monitoring, Linux, PG Internals, livestream4)&lt;/small&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Patroni&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/what-is-patroni-really/"&gt;What is Patroni, really?&lt;/a&gt;, by Polina Bungina &lt;small&gt;(HA, Patroni, livestream2)&lt;/small&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;VS Code&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/introducing-microsoft-s-vs-code-extension-for-postgresql/"&gt;Introducing Microsoft’s VS Code Extension for PostgreSQL&lt;/a&gt;, by Matt McFarland &lt;small&gt;(VS Code, IDE, livestream1)&lt;/small&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="azure"&gt;&lt;strong style="font-weight:900"&gt;10&lt;/strong&gt; Azure Database for PostgreSQL talks&lt;/h2&gt;

&lt;h3&gt;AI-related talks&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/building-enterprise-rag-with-azure-database-for-postgresql-and-pgvector/"&gt;Building Enterprise RAG with Azure Database for PostgreSQL and pgvector&lt;/a&gt;, by Michael John Pena &lt;small&gt;(AI, Azure, RAG, livestream4)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/building-intelligent-applications-with-graph-based-rag-on-postgresql/"&gt;Building Intelligent Applications with Graph-Based RAG on PostgreSQL&lt;/a&gt;, by Abe Omorogbe &lt;small&gt;(AI, AzureDBPostgres, RAG, livestream3)&lt;/small&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Customer talks&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/fortifying-azure-database-for-postgresql-stop-intrusions-in-their-tracks/"&gt;Fortifying Azure Database for PostgreSQL: Stop Intrusions in Their Tracks&lt;/a&gt;, by Johannes Schuetzner &lt;small&gt;(Azure, authentication, customer, Mercedes, networking, security, livestream2)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/scaling-postgres-to-the-next-level-at-openai/"&gt;Scaling Postgres to the next level at OpenAI&lt;/a&gt;, by Bohan Zhang &lt;small&gt;(AI, Azure, customer, OpenAI, livestream1)&lt;/small&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Flexible Server talks&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/azure-database-for-postgresql-15-essential-standards-for-compliance-and-security/"&gt;Azure Database for PostgreSQL: 15 Essential Standards for Compliance and Security&lt;/a&gt;, by Taiob Ali &lt;small&gt;(Azure, authentication, backup, compliance, security, livestream3)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/boosting-azure-database-for-postgresql-performance-with-azure-advisor/"&gt;Boosting Azure PostgreSQL Performance with Azure Advisor&lt;/a&gt;, by Gayathri Paderla &lt;small&gt;(Azure, Azure Advisor, performance, livestream1)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/dear-azure-database-for-postgresql-can-you-automate-my-index/"&gt;Dear Azure Database for PostgreSQL, can you automate my index?&lt;/a&gt;, by Nacho Alonso Portillo &lt;small&gt;(Azure, indexes, livestream2)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/overcoming-performance-hurdles-in-postgres-partitioned-tables/"&gt;Overcoming Performance Hurdles in Postgres Partitioned Tables&lt;/a&gt;, by Sarat Balijepalli &lt;small&gt;(Azure, partitioning, performance, livestream3)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/what-s-new-with-azure-database-for-postgresql-flexible-server-in-2025/"&gt;What’s New with Azure Postgres Flexible Server in 2025 🆕&lt;/a&gt;, by Varun Dhawan &lt;small&gt;(Azure, what&amp;#39;s new, flexible server, livestream3)&lt;/small&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Oracle to Postgres talks&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://posetteconf.com/2025/talks/migrating-from-oracle-to-azure-database-for-postgresql-seamlessly/"&gt;Migrating from Oracle to Azure Database for PostgreSQL, Seamlessly&lt;/a&gt;, by Neeta Goel &amp;amp; Sandeep Rajeev &lt;small&gt;(Azure, Oracle to Postgres, migration, livestream3)&lt;/small&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="schedule"&gt;Where to find the POSETTE Schedule&lt;/h2&gt;

&lt;p&gt;You may be thinking, &amp;ldquo;I know how to use a website, Claire.&amp;rdquo; Fair. But hear me out: the POSETTE 2025 Schedule page has 4 tabs&amp;mdash;one for each livestream&amp;mdash;and it always opens to Livestream 1 by default.&lt;/p&gt;

&lt;p&gt;So if you&amp;rsquo;re looking for talks in Livestreams 2, 3, or 4:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Head to the &lt;a href="https://posetteconf.com/2025/schedule/"&gt;POSETTE Schedule page&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Click the tab for the livestream you want&lt;/li&gt;
&lt;li&gt;Voila&amp;mdash;talks for that stream&lt;/li&gt;
&lt;/ul&gt;

&lt;figure&gt;
&lt;picture&gt;
&lt;source srcset="https://cdn.citusdata.com/images/blog/ultimate-guide-posette-2025-figure2-schedule-tabs.webp" type="image/webp"&gt;
&lt;img src="https://cdn.citusdata.com/images/blog/ultimate-guide-posette-2025-figure2-schedule-tabs.jpg" alt="POSETE 2025 schedule page" width="800" height="450" loading="lazy" /&gt;
&lt;/picture&gt;
&lt;figcaption&gt;&lt;strong&gt;Figure 2:&lt;/strong&gt; Screenshot of the POSETTE 2025 Schedule with separate tabs for the 4 livestreams&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;h2 id="discord"&gt;How to watch &amp;amp; how to participate on Discord&lt;/h2&gt;

&lt;p&gt;Here&amp;rsquo;s how to tune in&amp;mdash;and how to participate in the conference.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to watch the livestreams&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All 4 livestreams will be watchable on the &lt;a href="https://posetteconf.com/2025/"&gt;PosetteConf 2025 home page&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Pro tip:&lt;/strong&gt; If you&amp;rsquo;ve left the page open since the last stream, &lt;strong&gt;refresh your browser&lt;/strong&gt; to see the next livestream.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How to join the virtual hallway track&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Head to the &lt;a href="https://aka.ms/open-source-discord"&gt;#posetteconf channel on Discord&lt;/a&gt; (on the Microsoft Open Source Discord)&lt;/li&gt;
&lt;li&gt;That&amp;rsquo;s where speakers and attendees hang out during the livestreams&amp;mdash;it’s where you can ask questions, share reactions, and just say hi.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="whats-new"&gt;What’s new in POSETTE 2025&lt;/h2&gt;

&lt;p&gt;If you attended POSETTE last year (or back when it was called &lt;em&gt;Citus Con&lt;/em&gt;), you might be wondering, what&amp;rsquo;s different this year?&lt;/p&gt;

&lt;p&gt;In many ways, the POSETTE playbook is the same: useful and delightful Postgres talks in a virtual, accessible format. But here&amp;rsquo;s what&amp;rsquo;s new:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;New website&lt;/strong&gt;: and a new domain too: PosetteConf.com&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Only 2 keynotes&lt;/strong&gt;: down from 4 keynotes last year. We&amp;rsquo;re honored that Bruce Momjian &amp;amp; Charles Feddersen accepted the invitation to be keynote speakers. Each keynote will be presented twice.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;58% speakers new to POSETTE&lt;/strong&gt;: 26 out of 45 speakers (58%) are brand new to POSETTE&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;New livestream hosts&lt;/strong&gt;: 3 of the 7 livestream hosts are brand new to hosting POSETTE livestreams: welcome to Adam Wølk, Derk van Veen, &amp;amp; Thomas Munro&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Same name:&lt;/strong&gt; The POSETTE: An Event for Postgres name is here to stay&amp;mdash;and we still love the name&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="speakers"&gt;Big thank you to our 45 amazing speakers&lt;/h2&gt;

&lt;p&gt;Every great event starts with great talks&amp;mdash;and great talks start with great speakers. Want to learn more about the people behind these talks?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Visit the &lt;a href="https://posetteconf.com/2025/speakers/"&gt;POSETTE 2025 Speaker page&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Click a speaker&amp;rsquo;s bio to see their interview (if available)&lt;/li&gt;
&lt;li&gt;If a speaker has been a guest on the &lt;a href="https://talkingpostgres.com/"&gt;Talking Postgres podcast&lt;/a&gt; in the past, then you&amp;rsquo;ll find a link to their episode there, too&lt;/li&gt;
&lt;/ul&gt;

&lt;figure&gt;
&lt;picture&gt;
&lt;source srcset="https://cdn.citusdata.com/images/blog/ultimate-guide-posette-2025-figure3-speakers.webp" type="image/webp"&gt;
&lt;img src="https://cdn.citusdata.com/images/blog/ultimate-guide-posette-2025-figure3-speakers.jpg" alt="POSETE 2025 speakers" width="800" height="450" loading="lazy" /&gt;
&lt;/picture&gt;
&lt;figcaption&gt;&lt;strong&gt;Figure 3:&lt;/strong&gt; Bio pics for all 45 speakers in POSETTE: An Event for Postgres 2025, along with our gratitude.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;h2 id="join-us"&gt;Join us for POSETTE 2025! Mark your calendars&lt;/h2&gt;

&lt;p&gt;I hope you join us for POSETTE 2025. Consider yourself officially invited. As part of the talk selection team, I’m definitely biased&amp;mdash;but I truly believe these speakers and talks are worth your time.&lt;/p&gt;

&lt;p&gt;I&amp;rsquo;ll be hosting &lt;strong&gt;Livestream 1&lt;/strong&gt; and &lt;strong&gt;Livestream 2&lt;/strong&gt; and you&amp;rsquo;ll find me in the #posetteconf Discord chat. I hope to see you there.&lt;/p&gt;

&lt;p&gt;And please&amp;mdash;&lt;strong&gt;tell your Postgres friends&lt;/strong&gt;, so they don’t miss out!&lt;/p&gt;

&lt;p&gt;🗓️ &lt;strong&gt;Add the livestreams to your calendar&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.addevent.com/event/Dl23540657"&gt;Livestream 1&lt;/a&gt;: Tue June 10&lt;sup&gt;th&lt;/sup&gt; 8am-2pm PDT (UTC-7)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.addevent.com/event/sP23540668"&gt;Livestream 2&lt;/a&gt;: Wed June 11&lt;sup&gt;th&lt;/sup&gt; 8am-2pm CEST (UTC+2)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.addevent.com/event/Uq23540679"&gt;Livestream 3&lt;/a&gt;: Wed June 11&lt;sup&gt;th&lt;/sup&gt; 8am-2pm PDT (UTC-7)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.addevent.com/event/mr23540675"&gt;Livestream 4&lt;/a&gt;: Thu June 12&lt;sup&gt;th&lt;/sup&gt; 8am-2pm CEST (UTC+2)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Watch last year&amp;rsquo;s talks in advance&lt;/strong&gt;: And if you want to get ready, check out the &lt;a href="https://aka.ms/posette-playlist-2024"&gt;POSETTE 2024 playlist on YouTube&lt;/a&gt;. Lots of gems in there.&lt;/p&gt;

&lt;h2 id=""&gt;Acknowledgements &amp; Gratitude&lt;/h2&gt;

&lt;p&gt;I&amp;rsquo;ve already thanked the amazing speakers above. In addition, thanks go to Daniel Gustafsson, Teresa Giacomini, and My Nguyen for reviewing parts of this post before publication. And of course, big thank you to the &lt;a href="https://posetteconf.com/2025/about/#organizing-team"&gt;POSETTE 2025 organizing team&lt;/a&gt; and &lt;a href="https://posetteconf.com/2025/about/#talk-selection-team"&gt;POSETTE talk selection team&lt;/a&gt;&amp;mdash;without you, there would be no POSETTE!&lt;/p&gt;

&lt;figure&gt;
&lt;picture&gt;
&lt;source srcset="https://cdn.citusdata.com/images/blog/ultimate-guide-posette-2025-figure4-discord.webp" type="image/webp"&gt;
&lt;img src="https://cdn.citusdata.com/images/blog/ultimate-guide-posette-2025-figure4-discord.jpg" alt="Join the virtual hallway track on Discord" width="800" height="450" loading="lazy" /&gt;
&lt;/picture&gt;
&lt;figcaption&gt;&lt;strong&gt;Figure 4:&lt;/strong&gt; Visual invitation to join the virtual hallway track for POSETTE 2025 on the Microsoft Open Source Discord, where you can chat with the speakers &amp;amp; others in the Postgres community.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;style&gt;.blog-article-content__text li small{color:#767676}.inline-newsletter-signup{display:none}&lt;/style&gt;
&lt;p&gt;&lt;em&gt;This article was originally published on &lt;a href='https://www.citusdata.com/blog/2025/06/04/ultimate-guide-to-posette-2025/'&gt;citusdata.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>Distribute PostgreSQL 17 with Citus 13</title>
    <link rel="alternate" href="https://www.citusdata.com/blog/2025/02/06/distribute-postgresql-17-with-citus-13/"/>
    <id>https://www.citusdata.com/blog/2025/02/06/distribute-postgresql-17-with-citus-13/</id>
    <published>2025-02-06T18:45:00+00:00</published>
    <updated>2025-02-06T18:45:00+00:00</updated>
    <author>Naisila Puka</author>
    <content type="html">&lt;p&gt;The Citus 13.0 release is out and includes PostgreSQL 17.2 support! We know you’ve been waiting, and we’ve been hard at work adding features we believe will take your experience to the next level, focusing on bringing the &lt;a href="https://www.postgresql.org/docs/17/release-17.html"&gt;Postgres 17 exciting improvements&lt;/a&gt; to you at distributed scale.&lt;/p&gt;

&lt;p&gt;The Citus database is an &lt;a href="https://github.com/citusdata/citus"&gt;open-source extension of Postgres&lt;/a&gt; that brings the power of Postgres to any scale, from a single node to a distributed database cluster. Since Citus is an extension, using Citus means you&amp;#39;re also using Postgres, giving you direct access to Postgres features. And the latest of these features came with the Postgres 17 release! In addition, Citus 13 will be made available on the &lt;a href="https://techcommunity.microsoft.com/blog/adforpostgresql/postgres-horizontal-scaling-with-elastic-clusters-on-azure-database-for-postgres/4303508"&gt;elastic clusters (preview) feature&lt;/a&gt; on Azure Database for PostgreSQL - Flexible Server, along with PostgreSQL 17 support, in the near future.&lt;/p&gt;

&lt;p&gt;PostgreSQL 17 highlights include performance improvements in query execution for indexes, a revamped memory management system for vacuum, new monitoring and analysis features, expanded functionality for managing data in partitions, optimizer improvements, and enhancements for high-concurrency workloads. PostgreSQL 17 also expands on SQL syntax that benefits both new workloads and mission-critical systems, such as the addition of the SQL/JSON &lt;code&gt;JSON_TABLE()&lt;/code&gt; command for developers, and the expansion of the &lt;code&gt;MERGE&lt;/code&gt; command. For those of you who are interested in upgrading to Postgres 17 and scaling these new features of Postgres: you can upgrade to Citus 13.0!&lt;/p&gt;

&lt;p&gt;Along with Postgres 17 support, Citus 13.0 also fixes important bugs, and we are happy to say that we had many community contributions here as well. These bugfixes focus on data integrity and correctness, crash and fault tolerance, and cluster management, all of which are critical for ensuring reliable operations and user confidence in a distributed PostgreSQL environment.&lt;/p&gt;

&lt;p&gt;Let&amp;#39;s take a closer look at what&amp;#39;s new in Citus 13.0: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="#pg17-support"&gt;Postgres 17 support in Citus 13.0&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#optimizer-improvements"&gt;Leveraging optimizer improvements with Citus in Postgres 17&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#important-bugfixes"&gt;Important bugfixes, including community contributions into Citus 13.0&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="pg17-support"&gt;Postgres 17 support in Citus 13.0&lt;/h2&gt;

&lt;p&gt;Citus 13.0 introduces support for PostgreSQL 17. This means that just by running Citus 13.0 on PG 17.2, all the query performance improvements are directly reflected in Citus distributed queries, and several optimizer improvements benefit queries in Citus out of the box! Among the many new features in PG 17, the following capabilities enabled in Citus 13.0 are especially noteworthy for Citus users:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="#json-table"&gt;JSON_TABLE() support in distributed queries&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#merge"&gt;Propagate &amp;quot;MERGE ... WHEN NOT MATCHED BY SOURCE&amp;quot; syntax&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#partitioned-tables"&gt;Expanded functionality on distributed partitioned tables&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#explain"&gt;Propagate new EXPLAIN options: MEMORY and SERIALIZE&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To learn more about how you can use Citus 13.0 + PostgreSQL 17.2, as well as currently unsupported features and future work, you can consult the &lt;a href="/updates/v13-0/"&gt;Citus 13.0 Updates page&lt;/a&gt;, which gives you detailed release notes.&lt;/p&gt;

&lt;h3 id="json-table"&gt;JSON_TABLE() support in distributed queries&lt;/h3&gt;

&lt;p&gt;One of the most discussed features in Postgres 17 is the enhanced management of JSON data. The key addition here is the &lt;code&gt;JSON_TABLE()&lt;/code&gt; function. &lt;code&gt;JSON_TABLE()&lt;/code&gt; converts JSON data into a standard PostgreSQL table. This means that developers can leverage the full capabilities of SQL on data that was originally in JSON format, by extracting data from complex JSON documents into a simpler relational view. &lt;a href="https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=de3600452"&gt;The commit that adds basic JSON_TABLE() functionality&lt;/a&gt; explains that through the &lt;code&gt;JSON_TABLE()&lt;/code&gt; function, &lt;code&gt;JSON&lt;/code&gt; data can be used like other tabular data, with capabilities such as sorting, filtering, joining with regular Postgres tables, and usage in a FROM clause. Joining is especially interesting for Citus: how can we perform a join between the &lt;code&gt;JSON_TABLE()&lt;/code&gt; function and a Citus distributed table?&lt;/p&gt;

&lt;p&gt;As you may know, Citus uses sharding to distribute tables: each table is divided into smaller chunks called shards. Citus 13.0 supports &lt;code&gt;JSON_TABLE()&lt;/code&gt; in a distributed query by treating it as a recurring tuple, i.e. an expression that gives the same set of results across multiple shards. This is useful for executing a JOIN between a distributed table and the table produced by the &lt;code&gt;JSON_TABLE()&lt;/code&gt; function. The main point is that recurring tuples &amp;quot;recur&amp;quot; for each shard in a multi-shard query. For more technical details on this, check out the &lt;a href="https://github.com/citusdata/citus/blob/main/src/backend/distributed/README.md#recurring-tuples"&gt;Citus Technical Documentation on recurring tuples&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let’s look at some example distributed queries that combine the &lt;code&gt;JSON_TABLE()&lt;/code&gt; function with Citus distributed tables:&lt;/p&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- create, distribute and populate the table with a jsonb column&lt;/span&gt;
&lt;span class="c1"&gt;-- we will use that column as a regular postgres table through JSON_TABLE()&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;my_favorite_books&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;book_collection_id&lt;/span&gt; &lt;span class="n"&gt;bigserial&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;jsonb_column&lt;/span&gt; &lt;span class="n"&gt;jsonb&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;create_distributed_table&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;' my_favorite_books '&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'book_collection_id'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- these books will be inserted with the automatic book_collection_id of 1&lt;/span&gt;
&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;my_favorite_books&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;jsonb_column&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
&lt;span class="s1"&gt;'{ "favorites" : [
   { "kind" : "mystery", "books" : [
     { "title" : "The Count of Monte Cristo", "author" : "Alexandre Dumas"},
     { "title" : "Crime and Punishment", "author" : "Fyodor Dostoevsky" } ] },
   { "kind" : "drama", "books" : [
     { "title" : "Anna Karenina", "author" : "Leo Tolstoy" } ] },
   { "kind" : "poetry", "books" : [
     { "title" : "Masnavi", "author" : "Jalal al-Din Muhammad Rumi" } ] },
   { "kind" : "autobiography", "books" : [
     { "title" : "The Autobiography of Malcolm X", "author" : "Alex Haley" } ] }
  ] }'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- these books will be inserted with the automatic book_collection_id of 2&lt;/span&gt;
&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;my_favorite_books&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;jsonb_column&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
&lt;span class="s1"&gt;'{ "favorites" : [
   { "kind" : "mystery", "books" : [
     { "title" : "To Kill a Mockingbird", "author" : "Harper Lee"},
     { "title" : "Our Mutual Friend", "author" : "Charles Dickens" } ] },
   { "kind" : "drama", "books" : [
     { "title" : "Pride and Prejudice", "author" : "Jane Austen" } ] },
   { "kind" : "poetry", "books" : [
     { "title" : "The Odyssey", "author" : "Homer" } ] },
   { "kind" : "autobiography", "books" : [
     { "title" : "The Diary of a Young Girl", "author" : "Anne Frank" } ] }
  ] }'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- a simple router query, that outputs all the books under book_collection_id = 1&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;json_table_output&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt;
&lt;span class="n"&gt;my_favorite_books&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="n"&gt;JSON_TABLE&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt; &lt;span class="n"&gt;jsonb_column&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'$.favorites[*]'&lt;/span&gt; &lt;span class="n"&gt;COLUMNS&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
   &lt;span class="k"&gt;key&lt;/span&gt; &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;ORDINALITY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;kind&lt;/span&gt; &lt;span class="nb"&gt;text&lt;/span&gt; &lt;span class="n"&gt;PATH&lt;/span&gt; &lt;span class="s1"&gt;'$.kind'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
   &lt;span class="n"&gt;NESTED&lt;/span&gt; &lt;span class="n"&gt;PATH&lt;/span&gt; &lt;span class="s1"&gt;'$.books[*]'&lt;/span&gt; &lt;span class="n"&gt;COLUMNS&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
     &lt;span class="n"&gt;title&lt;/span&gt; &lt;span class="nb"&gt;text&lt;/span&gt; &lt;span class="n"&gt;PATH&lt;/span&gt; &lt;span class="s1"&gt;'$.title'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;author&lt;/span&gt; &lt;span class="nb"&gt;text&lt;/span&gt; &lt;span class="n"&gt;PATH&lt;/span&gt; &lt;span class="s1"&gt;'$.author'&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;json_table_output&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;my_favorite_books&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt; &lt;span class="n"&gt;book_collection_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

 &lt;span class="k"&gt;key&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;     &lt;span class="n"&gt;kind&lt;/span&gt;      &lt;span class="o"&gt;|&lt;/span&gt;             &lt;span class="n"&gt;title&lt;/span&gt;              &lt;span class="o"&gt;|&lt;/span&gt;           &lt;span class="n"&gt;author&lt;/span&gt;
&lt;span class="c1"&gt;-----+---------------+--------------------------------+----------------------------&lt;/span&gt;
   &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;mystery&lt;/span&gt;       &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;The&lt;/span&gt; &lt;span class="k"&gt;Count&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="n"&gt;Monte&lt;/span&gt; &lt;span class="n"&gt;Cristo&lt;/span&gt;      &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Alexandre&lt;/span&gt; &lt;span class="n"&gt;Dumas&lt;/span&gt;
   &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;mystery&lt;/span&gt;       &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Crime&lt;/span&gt; &lt;span class="k"&gt;and&lt;/span&gt; &lt;span class="n"&gt;Punishment&lt;/span&gt;           &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Fyodor&lt;/span&gt; &lt;span class="n"&gt;Dostoevsky&lt;/span&gt;
   &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;drama&lt;/span&gt;         &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Anna&lt;/span&gt; &lt;span class="n"&gt;Karenina&lt;/span&gt;                  &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Leo&lt;/span&gt; &lt;span class="n"&gt;Tolstoy&lt;/span&gt;
   &lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;poetry&lt;/span&gt;        &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Masnavi&lt;/span&gt;                        &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Jalal&lt;/span&gt; &lt;span class="n"&gt;al&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;Din&lt;/span&gt; &lt;span class="n"&gt;Muhammad&lt;/span&gt; &lt;span class="n"&gt;Rumi&lt;/span&gt;
   &lt;span class="mi"&gt;4&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;autobiography&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;The&lt;/span&gt; &lt;span class="n"&gt;Autobiography&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="n"&gt;Malcolm&lt;/span&gt; &lt;span class="n"&gt;X&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Alex&lt;/span&gt; &lt;span class="n"&gt;Haley&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;-- a simple multi-shard query, where we want to see all the books&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;json_table_output&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt;
&lt;span class="n"&gt;my_favorite_books&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="n"&gt;JSON_TABLE&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt; &lt;span class="n"&gt;jsonb_column&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'$.favorites[*]'&lt;/span&gt; &lt;span class="n"&gt;COLUMNS&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
   &lt;span class="k"&gt;key&lt;/span&gt; &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;ORDINALITY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;kind&lt;/span&gt; &lt;span class="nb"&gt;text&lt;/span&gt; &lt;span class="n"&gt;PATH&lt;/span&gt; &lt;span class="s1"&gt;'$.kind'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
   &lt;span class="n"&gt;NESTED&lt;/span&gt; &lt;span class="n"&gt;PATH&lt;/span&gt; &lt;span class="s1"&gt;'$.books[*]'&lt;/span&gt; &lt;span class="n"&gt;COLUMNS&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
     &lt;span class="n"&gt;title&lt;/span&gt; &lt;span class="nb"&gt;text&lt;/span&gt; &lt;span class="n"&gt;PATH&lt;/span&gt; &lt;span class="s1"&gt;'$.title'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;author&lt;/span&gt; &lt;span class="nb"&gt;text&lt;/span&gt; &lt;span class="n"&gt;PATH&lt;/span&gt; &lt;span class="s1"&gt;'$.author'&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;json_table_output&lt;/span&gt;
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

 &lt;span class="k"&gt;key&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;     &lt;span class="n"&gt;kind&lt;/span&gt;      &lt;span class="o"&gt;|&lt;/span&gt;             &lt;span class="n"&gt;title&lt;/span&gt;              &lt;span class="o"&gt;|&lt;/span&gt;           &lt;span class="n"&gt;author&lt;/span&gt;
&lt;span class="c1"&gt;-----+---------------+--------------------------------+----------------------------&lt;/span&gt;
   &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;mystery&lt;/span&gt;       &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;The&lt;/span&gt; &lt;span class="k"&gt;Count&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="n"&gt;Monte&lt;/span&gt; &lt;span class="n"&gt;Cristo&lt;/span&gt;      &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Alexandre&lt;/span&gt; &lt;span class="n"&gt;Dumas&lt;/span&gt;
   &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;mystery&lt;/span&gt;       &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Crime&lt;/span&gt; &lt;span class="k"&gt;and&lt;/span&gt; &lt;span class="n"&gt;Punishment&lt;/span&gt;           &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Fyodor&lt;/span&gt; &lt;span class="n"&gt;Dostoevsky&lt;/span&gt;
   &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;mystery&lt;/span&gt;       &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Our&lt;/span&gt; &lt;span class="n"&gt;Mutual&lt;/span&gt; &lt;span class="n"&gt;Friend&lt;/span&gt;              &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Charles&lt;/span&gt; &lt;span class="n"&gt;Dickens&lt;/span&gt;
   &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;mystery&lt;/span&gt;       &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;To&lt;/span&gt; &lt;span class="n"&gt;Kill&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="n"&gt;Mockingbird&lt;/span&gt;          &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Harper&lt;/span&gt; &lt;span class="n"&gt;Lee&lt;/span&gt;
   &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;drama&lt;/span&gt;         &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Anna&lt;/span&gt; &lt;span class="n"&gt;Karenina&lt;/span&gt;                  &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Leo&lt;/span&gt; &lt;span class="n"&gt;Tolstoy&lt;/span&gt;
   &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;drama&lt;/span&gt;         &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Pride&lt;/span&gt; &lt;span class="k"&gt;and&lt;/span&gt; &lt;span class="n"&gt;Prejudice&lt;/span&gt;            &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Jane&lt;/span&gt; &lt;span class="n"&gt;Austen&lt;/span&gt;
   &lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;poetry&lt;/span&gt;        &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Masnavi&lt;/span&gt;                        &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Jalal&lt;/span&gt; &lt;span class="n"&gt;al&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;Din&lt;/span&gt; &lt;span class="n"&gt;Muhammad&lt;/span&gt; &lt;span class="n"&gt;Rumi&lt;/span&gt;
   &lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;poetry&lt;/span&gt;        &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;The&lt;/span&gt; &lt;span class="n"&gt;Odyssey&lt;/span&gt;                    &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Homer&lt;/span&gt;
   &lt;span class="mi"&gt;4&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;autobiography&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;The&lt;/span&gt; &lt;span class="n"&gt;Autobiography&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="n"&gt;Malcolm&lt;/span&gt; &lt;span class="n"&gt;X&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Alex&lt;/span&gt; &lt;span class="n"&gt;Haley&lt;/span&gt;
   &lt;span class="mi"&gt;4&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;autobiography&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;The&lt;/span&gt; &lt;span class="n"&gt;Diary&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="n"&gt;Young&lt;/span&gt; &lt;span class="n"&gt;Girl&lt;/span&gt;      &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Anne&lt;/span&gt; &lt;span class="n"&gt;Frank&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;-- more complex router query involving LATERAL and LIMIT&lt;/span&gt;
&lt;span class="c1"&gt;-- select two books under book_collection_id = 2&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;sub&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;my_favorite_books&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="k"&gt;lateral&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;JSON_TABLE&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;jsonb_column&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'$.favorites[*]'&lt;/span&gt;
    &lt;span class="n"&gt;COLUMNS&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;key&lt;/span&gt; &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;ORDINALITY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;kind&lt;/span&gt; &lt;span class="nb"&gt;text&lt;/span&gt; &lt;span class="n"&gt;PATH&lt;/span&gt; &lt;span class="s1"&gt;'$.kind'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;NESTED&lt;/span&gt; &lt;span class="n"&gt;PATH&lt;/span&gt; &lt;span class="s1"&gt;'$.books[*]'&lt;/span&gt; &lt;span class="n"&gt;COLUMNS&lt;/span&gt;
            &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt; &lt;span class="nb"&gt;text&lt;/span&gt; &lt;span class="n"&gt;PATH&lt;/span&gt; &lt;span class="s1"&gt;'$.title'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;author&lt;/span&gt; &lt;span class="nb"&gt;text&lt;/span&gt; &lt;span class="n"&gt;PATH&lt;/span&gt; &lt;span class="s1"&gt;'$.author'&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;
    &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;json_table_output&lt;/span&gt; &lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="k"&gt;key&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt; &lt;span class="k"&gt;LIMIT&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;sub&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;my_favorite_books&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;book_collection_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

 &lt;span class="k"&gt;key&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;     &lt;span class="n"&gt;kind&lt;/span&gt;      &lt;span class="o"&gt;|&lt;/span&gt;           &lt;span class="n"&gt;title&lt;/span&gt;           &lt;span class="o"&gt;|&lt;/span&gt;   &lt;span class="n"&gt;author&lt;/span&gt;
&lt;span class="c1"&gt;-----+---------------+---------------------------+------------&lt;/span&gt;
   &lt;span class="mi"&gt;4&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;autobiography&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;The&lt;/span&gt; &lt;span class="n"&gt;Diary&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="n"&gt;Young&lt;/span&gt; &lt;span class="n"&gt;Girl&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Anne&lt;/span&gt; &lt;span class="n"&gt;Frank&lt;/span&gt;
   &lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;poetry&lt;/span&gt;        &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;The&lt;/span&gt; &lt;span class="n"&gt;Odyssey&lt;/span&gt;               &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Homer&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="-- create, distribute and populate the table with a jsonb column
-- we will use that column as a regular postgres table through JSON_TABLE()
CREATE TABLE my_favorite_books(book_collection_id bigserial, jsonb_column jsonb);
SELECT create_distributed_table(&amp;#39;my_favorite_books&amp;#39;, &amp;#39;book_collection_id&amp;#39;);

-- these books will be inserted with the automatic book_collection_id of 1
INSERT INTO my_favorite_books (jsonb_column) VALUES (
&amp;#39;{ &amp;quot;favorites&amp;quot; : [
   { &amp;quot;kind&amp;quot; : &amp;quot;mystery&amp;quot;, &amp;quot;books&amp;quot; : [
     { &amp;quot;title&amp;quot; : &amp;quot;The Count of Monte Cristo&amp;quot;, &amp;quot;author&amp;quot; : &amp;quot;Alexandre Dumas&amp;quot;},
     { &amp;quot;title&amp;quot; : &amp;quot;Crime and Punishment&amp;quot;, &amp;quot;author&amp;quot; : &amp;quot;Fyodor Dostoevsky&amp;quot; } ] },
   { &amp;quot;kind&amp;quot; : &amp;quot;drama&amp;quot;, &amp;quot;books&amp;quot; : [
     { &amp;quot;title&amp;quot; : &amp;quot;Anna Karenina&amp;quot;, &amp;quot;author&amp;quot; : &amp;quot;Leo Tolstoy&amp;quot; } ] },
   { &amp;quot;kind&amp;quot; : &amp;quot;poetry&amp;quot;, &amp;quot;books&amp;quot; : [
     { &amp;quot;title&amp;quot; : &amp;quot;Masnavi&amp;quot;, &amp;quot;author&amp;quot; : &amp;quot;Jalal al-Din Muhammad Rumi&amp;quot; } ] },
   { &amp;quot;kind&amp;quot; : &amp;quot;autobiography&amp;quot;, &amp;quot;books&amp;quot; : [
     { &amp;quot;title&amp;quot; : &amp;quot;The Autobiography of Malcolm X&amp;quot;, &amp;quot;author&amp;quot; : &amp;quot;Alex Haley&amp;quot; } ] }
  ] }&amp;#39;);

-- these books will be inserted with the automatic book_collection_id of 2
INSERT INTO my_favorite_books (jsonb_column) VALUES (
&amp;#39;{ &amp;quot;favorites&amp;quot; : [
   { &amp;quot;kind&amp;quot; : &amp;quot;mystery&amp;quot;, &amp;quot;books&amp;quot; : [
     { &amp;quot;title&amp;quot; : &amp;quot;To Kill a Mockingbird&amp;quot;, &amp;quot;author&amp;quot; : &amp;quot;Harper Lee&amp;quot;},
     { &amp;quot;title&amp;quot; : &amp;quot;Our Mutual Friend&amp;quot;, &amp;quot;author&amp;quot; : &amp;quot;Charles Dickens&amp;quot; } ] },
   { &amp;quot;kind&amp;quot; : &amp;quot;drama&amp;quot;, &amp;quot;books&amp;quot; : [
     { &amp;quot;title&amp;quot; : &amp;quot;Pride and Prejudice&amp;quot;, &amp;quot;author&amp;quot; : &amp;quot;Jane Austen&amp;quot; } ] },
   { &amp;quot;kind&amp;quot; : &amp;quot;poetry&amp;quot;, &amp;quot;books&amp;quot; : [
     { &amp;quot;title&amp;quot; : &amp;quot;The Odyssey&amp;quot;, &amp;quot;author&amp;quot; : &amp;quot;Homer&amp;quot; } ] },
   { &amp;quot;kind&amp;quot; : &amp;quot;autobiography&amp;quot;, &amp;quot;books&amp;quot; : [
     { &amp;quot;title&amp;quot; : &amp;quot;The Diary of a Young Girl&amp;quot;, &amp;quot;author&amp;quot; : &amp;quot;Anne Frank&amp;quot; } ] }
  ] }&amp;#39;);

-- a simple router query, that outputs all the books under book_collection_id = 1
SELECT json_table_output.* FROM
my_favorite_books,
JSON_TABLE ( jsonb_column, &amp;#39;$.favorites[*]&amp;#39; COLUMNS (
   key FOR ORDINALITY, kind text PATH &amp;#39;$.kind&amp;#39;,
   NESTED PATH &amp;#39;$.books[*]&amp;#39; COLUMNS (
     title text PATH &amp;#39;$.title&amp;#39;, author text PATH &amp;#39;$.author&amp;#39;))) AS json_table_output
WHERE my_favorite_books.book_collection_id = 1
ORDER BY 1, 2, 3, 4;

 key |     kind      |             title              |           author
-----+---------------+--------------------------------+----------------------------
   1 | mystery       | The Count of Monte Cristo      | Alexandre Dumas
   1 | mystery       | Crime and Punishment           | Fyodor Dostoevsky
   2 | drama         | Anna Karenina                  | Leo Tolstoy
   3 | poetry        | Masnavi                        | Jalal al-Din Muhammad Rumi
   4 | autobiography | The Autobiography of Malcolm X | Alex Haley
(5 rows)

-- a simple multi-shard query, where we want to see all the books
SELECT json_table_output.* FROM
my_favorite_books,
JSON_TABLE ( jsonb_column, &amp;#39;$.favorites[*]&amp;#39; COLUMNS (
   key FOR ORDINALITY, kind text PATH &amp;#39;$.kind&amp;#39;,
   NESTED PATH &amp;#39;$.books[*]&amp;#39; COLUMNS (
     title text PATH &amp;#39;$.title&amp;#39;, author text PATH &amp;#39;$.author&amp;#39;))) AS json_table_output
ORDER BY 1, 2, 3, 4;

 key |     kind      |             title              |           author
-----+---------------+--------------------------------+----------------------------
   1 | mystery       | The Count of Monte Cristo      | Alexandre Dumas
   1 | mystery       | Crime and Punishment           | Fyodor Dostoevsky
   1 | mystery       | Our Mutual Friend              | Charles Dickens
   1 | mystery       | To Kill a Mockingbird          | Harper Lee
   2 | drama         | Anna Karenina                  | Leo Tolstoy
   2 | drama         | Pride and Prejudice            | Jane Austen
   3 | poetry        | Masnavi                        | Jalal al-Din Muhammad Rumi
   3 | poetry        | The Odyssey                    | Homer
   4 | autobiography | The Autobiography of Malcolm X | Alex Haley
   4 | autobiography | The Diary of a Young Girl      | Anne Frank
(10 rows)

-- more complex router query involving LATERAL and LIMIT
-- select two books under book_collection_id = 2
SELECT sub.*
FROM my_favorite_books,
lateral(SELECT * FROM JSON_TABLE (jsonb_column, &amp;#39;$.favorites[*]&amp;#39;
    COLUMNS (key FOR ORDINALITY, kind text PATH &amp;#39;$.kind&amp;#39;,
        NESTED PATH &amp;#39;$.books[*]&amp;#39; COLUMNS
            (title text PATH &amp;#39;$.title&amp;#39;, author text PATH &amp;#39;$.author&amp;#39;)))
    AS json_table_output ORDER BY key DESC LIMIT 2) as sub
WHERE my_favorite_books.book_collection_id = 2;

 key |     kind      |           title           |   author
-----+---------------+---------------------------+------------
   4 | autobiography | The Diary of a Young Girl | Anne Frank
   3 | poetry        | The Odyssey               | Homer
(2 rows)
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;

&lt;p&gt;Furthermore, &lt;code&gt;JSON_TABLE()&lt;/code&gt; can appear on the inner side of an outer join, as well as on the outer side of a join, as long as the query involves at least one distributed table. The limitations of using &lt;code&gt;JSON_TABLE()&lt;/code&gt; in distributed queries are the same as the general limitations on recurring tuples in distributed queries. For more technical examples of &lt;code&gt;JSON_TABLE()&lt;/code&gt; in distributed queries, as well as the limitations, you can check out the &lt;a href="/updates/v13-0/"&gt;Updates page&lt;/a&gt;.&lt;/p&gt;
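
&lt;p&gt;As a quick sketch of the first case, here is what an outer join with &lt;code&gt;JSON_TABLE()&lt;/code&gt; on the inner side could look like, reusing the &lt;code&gt;my_favorite_books&lt;/code&gt; table from above (the projected columns are only illustrative):&lt;/p&gt;

    &lt;div class="highlight"&gt;
      &lt;pre class="highlight sql"&gt;&lt;code&gt;-- JSON_TABLE() on the inner side of an outer join;
-- the distributed table my_favorite_books is on the outer side
SELECT f.book_collection_id, jt.kind
FROM my_favorite_books f
LEFT JOIN LATERAL JSON_TABLE (
    f.jsonb_column, '$.favorites[*]'
    COLUMNS (kind text PATH '$.kind')
) AS jt ON true
ORDER BY 1, 2;
&lt;/code&gt;&lt;/pre&gt;
    &lt;/div&gt;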

&lt;h3 id="merge"&gt;Propagate "MERGE ... WHEN NOT MATCHED BY SOURCE" syntax&lt;/h3&gt;

&lt;p&gt;As you may know, the &lt;code&gt;MERGE&lt;/code&gt; statement in SQL is used to perform &lt;code&gt;INSERT&lt;/code&gt;, &lt;code&gt;UPDATE&lt;/code&gt;, and &lt;code&gt;DELETE&lt;/code&gt; operations on a target table based on the results of a join with a source table. This allows for efficient data synchronization between the target and source tables because it combines multiple operations into one.&lt;/p&gt;

&lt;p&gt;PG15 added support for &lt;code&gt;MERGE&lt;/code&gt;, with the syntax originally only allowing actions for rows that exist in the data source but not in the target relation, i.e. &lt;code&gt;WHEN NOT MATCHED BY TARGET&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;As of PG17, you can use the &lt;code&gt;MERGE&lt;/code&gt; command to operate on rows that exist in the target relation but not in the data source, by using &lt;code&gt;WHEN NOT MATCHED BY SOURCE&lt;/code&gt;. This is a fantastic addition that will greatly simplify various data loading and updating processes: if a row in the target table being merged doesn’t exist in the source table, you can now perform any necessary action on that row. Citus extended its existing strategies for handling &lt;code&gt;MERGE&lt;/code&gt; in a distributed environment to cover this syntax as well. For more details, you can look at &lt;a href="https://www.citusdata.com/blog/2023/07/27/how-citus-12-supports-postgres-merge/"&gt;how Citus 12 supports MERGE&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let’s see a simple example, similar to the tests in Postgres, on how to make use of &lt;code&gt;MERGE ... WHEN NOT MATCHED BY SOURCE&lt;/code&gt; with Citus managed tables:&lt;/p&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- create and distribute the target and source tables&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;target_table&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tid&lt;/span&gt; &lt;span class="nb"&gt;integer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;balance&lt;/span&gt; &lt;span class="nb"&gt;float&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;val&lt;/span&gt; &lt;span class="nb"&gt;text&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;source_table&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sid&lt;/span&gt; &lt;span class="nb"&gt;integer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;delta&lt;/span&gt; &lt;span class="nb"&gt;float&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;create_distributed_table&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'target_table'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'tid'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;create_distributed_table&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'source_table'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'sid'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- populate the tables&lt;/span&gt;
&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;target_table&lt;/span&gt; &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'initial'&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;generate_series&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;source_table&lt;/span&gt; &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;generate_series&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;-- Use WHEN NOT MATCHED BY SOURCE&lt;/span&gt;
&lt;span class="n"&gt;MERGE&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;target_table&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt;
    &lt;span class="k"&gt;USING&lt;/span&gt; &lt;span class="n"&gt;source_table&lt;/span&gt; &lt;span class="n"&gt;s&lt;/span&gt;
    &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tid&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sid&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;tid&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
    &lt;span class="k"&gt;WHEN&lt;/span&gt; &lt;span class="n"&gt;MATCHED&lt;/span&gt; &lt;span class="k"&gt;THEN&lt;/span&gt;
        &lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;balance&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;balance&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;delta&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;val&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;val&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="s1"&gt;' updated by merge'&lt;/span&gt;
    &lt;span class="k"&gt;WHEN&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="n"&gt;MATCHED&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;TARGET&lt;/span&gt; &lt;span class="k"&gt;THEN&lt;/span&gt;
        &lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sid&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;delta&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'inserted by merge'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;WHEN&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="n"&gt;MATCHED&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="k"&gt;SOURCE&lt;/span&gt; &lt;span class="k"&gt;THEN&lt;/span&gt;
        &lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;val&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;val&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="s1"&gt;' not matched by source'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;-- see the updated distributed target table&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;target_table&lt;/span&gt; &lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;tid&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

 &lt;span class="n"&gt;tid&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;balance&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;              &lt;span class="n"&gt;val&lt;/span&gt;
&lt;span class="c1"&gt;-----+---------+-------------------------------&lt;/span&gt;
   &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;     &lt;span class="mi"&gt;110&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;initial&lt;/span&gt; &lt;span class="n"&gt;updated&lt;/span&gt; &lt;span class="k"&gt;by&lt;/span&gt; &lt;span class="n"&gt;merge&lt;/span&gt;
   &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;      &lt;span class="mi"&gt;20&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;inserted&lt;/span&gt; &lt;span class="k"&gt;by&lt;/span&gt; &lt;span class="n"&gt;merge&lt;/span&gt;
   &lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;      &lt;span class="mi"&gt;30&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;inserted&lt;/span&gt; &lt;span class="k"&gt;by&lt;/span&gt; &lt;span class="n"&gt;merge&lt;/span&gt;
   &lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;     &lt;span class="mi"&gt;300&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;initial&lt;/span&gt; &lt;span class="k"&gt;not&lt;/span&gt; &lt;span class="n"&gt;matched&lt;/span&gt; &lt;span class="k"&gt;by&lt;/span&gt; &lt;span class="k"&gt;source&lt;/span&gt;
   &lt;span class="mi"&gt;4&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;      &lt;span class="mi"&gt;40&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;inserted&lt;/span&gt; &lt;span class="k"&gt;by&lt;/span&gt; &lt;span class="n"&gt;merge&lt;/span&gt;
   &lt;span class="mi"&gt;5&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;     &lt;span class="mi"&gt;500&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;initial&lt;/span&gt; &lt;span class="k"&gt;not&lt;/span&gt; &lt;span class="n"&gt;matched&lt;/span&gt; &lt;span class="k"&gt;by&lt;/span&gt; &lt;span class="k"&gt;source&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="-- create and distribute the target and source tables
CREATE TABLE target_table (tid integer, balance float, val text);
CREATE TABLE source_table (sid integer, delta float);
SELECT create_distributed_table(&amp;#39;target_table&amp;#39;, &amp;#39;tid&amp;#39;);
SELECT create_distributed_table(&amp;#39;source_table&amp;#39;, &amp;#39;sid&amp;#39;);

-- populate the tables
INSERT INTO target_table SELECT id, id * 100, &amp;#39;initial&amp;#39; FROM generate_series(1,5,2) AS id;
INSERT INTO source_table SELECT id, id * 10 FROM generate_series(1,4) AS id;

-- Use WHEN NOT MATCHED BY SOURCE
MERGE INTO target_table t
    USING source_table s
    ON t.tid = s.sid AND tid = 1
    WHEN MATCHED THEN
        UPDATE SET balance = balance + delta, val = val || &amp;#39; updated by merge&amp;#39;
    WHEN NOT MATCHED BY TARGET THEN
        INSERT VALUES (sid, delta, &amp;#39;inserted by merge&amp;#39;)
    WHEN NOT MATCHED BY SOURCE THEN
        UPDATE SET val = val || &amp;#39; not matched by source&amp;#39;;

-- see the updated distributed target table
SELECT * FROM target_table ORDER BY tid;

 tid | balance |              val
-----+---------+-------------------------------
   1 |     110 | initial updated by merge
   2 |      20 | inserted by merge
   3 |      30 | inserted by merge
   3 |     300 | initial not matched by source
   4 |      40 | inserted by merge
   5 |     500 | initial not matched by source
(6 rows)
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;

&lt;h3 id="partitioned-tables"&gt;Expanded functionality on distributed partitioned tables&lt;/h3&gt;

&lt;p&gt;PG17 expanded the functionality for managing data in partitions: you can now specify an access method for partitioned tables, add exclusion constraints on partitioned tables, and use identity columns in partitioned tables. Citus has extended its distributed table capabilities to cover these three features for distributed partitioned tables as well! Let’s dive into a bit more detail below:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Citus 13.0 allows specifying an access method for distributed partitioned tables&lt;/strong&gt;: After specifying a table access method via &lt;code&gt;CREATE TABLE ... USING&lt;/code&gt; for a partitioned table, you can distribute it through the Citus signature function &lt;code&gt;create_distributed_table()&lt;/code&gt;. From that point forward, the table is managed by Citus with the specified access method. Citus also propagates &lt;code&gt;ALTER TABLE ... SET ACCESS METHOD&lt;/code&gt; to all the nodes in the cluster, allowing you not only to specify the access method for a distributed partitioned table, but also to modify it later.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Adds support for identity columns in distributed partitioned tables&lt;/strong&gt;: Citus on Postgres 17 allows generated identity columns in distributed partitioned tables, preserving the generated identity logic while propagating the table DDL to all the cluster nodes. For more details, check out how &lt;a href="https://www.citusdata.com/blog/2023/02/08/whats-new-in-citus-11-2-patroni-ha-support/#identity-columns"&gt;Citus 11.2 introduced identity column support&lt;/a&gt; for Citus managed tables.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Allows exclusion constraints on distributed partitioned tables&lt;/strong&gt;: Similarly, Citus now allows adding exclusion constraints to distributed partitioned tables by seamlessly propagating the &lt;code&gt;ALTER TABLE distributed_partitioned_table ADD CONSTRAINT ...&lt;/code&gt; SQL command to all the nodes in the cluster.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let’s demonstrate all of the above with examples below:&lt;/p&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- let's say we are at node 0&lt;/span&gt;
&lt;span class="c1"&gt;-- create a partitioned table&lt;/span&gt;
&lt;span class="c1"&gt;-- specify access method as columnar, use generated identity column&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;dist_partitioned_table&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt; &lt;span class="n"&gt;id_test&lt;/span&gt; &lt;span class="nb"&gt;bigint&lt;/span&gt; &lt;span class="k"&gt;GENERATED&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="k"&gt;DEFAULT&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="k"&gt;IDENTITY&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;START&lt;/span&gt; &lt;span class="k"&gt;WITH&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt; &lt;span class="k"&gt;INCREMENT&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="n"&gt;n&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt; &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;PARTITION&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;RANGE&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;USING&lt;/span&gt; &lt;span class="n"&gt;columnar&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;-- create a partition for the table&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;pt_1&lt;/span&gt; &lt;span class="n"&gt;PARTITION&lt;/span&gt; &lt;span class="k"&gt;OF&lt;/span&gt; &lt;span class="n"&gt;dist_partitioned_table&lt;/span&gt; &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- distribute the table, making it a distributed partitioned table&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;create_distributed_table&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'dist_partitioned_table'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'id_test'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- create another partition of the table, it will be automatically distributed&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;pt_2&lt;/span&gt; &lt;span class="n"&gt;PARTITION&lt;/span&gt; &lt;span class="k"&gt;OF&lt;/span&gt; &lt;span class="n"&gt;dist_partitioned_table&lt;/span&gt; &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- Altering an access method for a partitioned table lets the value be used&lt;/span&gt;
&lt;span class="c1"&gt;-- for all future partitions created under it.&lt;/span&gt;
&lt;span class="c1"&gt;-- Existing partitions are not modified.&lt;/span&gt;
&lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;dist_partitioned_table&lt;/span&gt; &lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="k"&gt;ACCESS&lt;/span&gt; &lt;span class="k"&gt;METHOD&lt;/span&gt; &lt;span class="n"&gt;heap&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;-- Add an exclusion constraint, which will be part of current and future partitions as well&lt;/span&gt;
&lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;dist_partitioned_table&lt;/span&gt; &lt;span class="k"&gt;ADD&lt;/span&gt; &lt;span class="n"&gt;EXCLUDE&lt;/span&gt; &lt;span class="k"&gt;USING&lt;/span&gt; &lt;span class="n"&gt;btree&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;id_test&lt;/span&gt; &lt;span class="k"&gt;WITH&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;n&lt;/span&gt; &lt;span class="k"&gt;WITH&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- Attaching a partition inherits the identity column from the parent table&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;pt_3&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;id_test&lt;/span&gt; &lt;span class="nb"&gt;bigint&lt;/span&gt; &lt;span class="k"&gt;not&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;n&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;dist_partitioned_table&lt;/span&gt; &lt;span class="n"&gt;ATTACH&lt;/span&gt; &lt;span class="n"&gt;PARTITION&lt;/span&gt; &lt;span class="n"&gt;pt_3&lt;/span&gt; &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2000&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- verify that the identity column is inherited in all children&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;attrelid&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;regclass&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;attname&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;attidentity&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;pg_attribute&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;attname&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'id_test'&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;attidentity&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'d'&lt;/span&gt; &lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

        &lt;span class="n"&gt;attrelid&lt;/span&gt;        &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;attname&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;attidentity&lt;/span&gt;
&lt;span class="c1"&gt;------------------------+---------+-------------&lt;/span&gt;
 &lt;span class="n"&gt;dist_partitioned_table&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;id_test&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;d&lt;/span&gt;
 &lt;span class="n"&gt;pt_1&lt;/span&gt;                   &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;id_test&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;d&lt;/span&gt;
 &lt;span class="n"&gt;pt_2&lt;/span&gt;                   &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;id_test&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;d&lt;/span&gt;
 &lt;span class="n"&gt;pt_3&lt;/span&gt;                   &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;id_test&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;d&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;-- the parent table and the new partition have the altered access method "heap"&lt;/span&gt;
&lt;span class="c1"&gt;-- whereas the old two partitions have the original access method "columnar"&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;relname&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;amname&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;pg_class&lt;/span&gt; &lt;span class="k"&gt;c&lt;/span&gt; &lt;span class="k"&gt;LEFT&lt;/span&gt; &lt;span class="k"&gt;JOIN&lt;/span&gt; &lt;span class="n"&gt;pg_am&lt;/span&gt; &lt;span class="n"&gt;am&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;relam&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;am&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;oid&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;relname&lt;/span&gt; &lt;span class="k"&gt;IN&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'dist_partitioned_table'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'pt_1'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'pt_2'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'pt_3'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;relname&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

        &lt;span class="n"&gt;relname&lt;/span&gt;         &lt;span class="o"&gt;|&lt;/span&gt;  &lt;span class="n"&gt;amname&lt;/span&gt;
&lt;span class="c1"&gt;------------------------+----------&lt;/span&gt;
 &lt;span class="n"&gt;dist_partitioned_table&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;heap&lt;/span&gt;
 &lt;span class="n"&gt;pt_1&lt;/span&gt;                   &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;columnar&lt;/span&gt;
 &lt;span class="n"&gt;pt_2&lt;/span&gt;                   &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;columnar&lt;/span&gt;
 &lt;span class="n"&gt;pt_3&lt;/span&gt;                   &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;heap&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;-- verify that the distributed partitioned table and its distributed partitions&lt;/span&gt;
&lt;span class="c1"&gt;-- have exclude constraints&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;conname&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;pg_constraint&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;conname&lt;/span&gt; &lt;span class="k"&gt;LIKE&lt;/span&gt; &lt;span class="s1"&gt;'%id_test%'&lt;/span&gt; &lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

           &lt;span class="n"&gt;conname&lt;/span&gt;
&lt;span class="c1"&gt;------------------------&lt;/span&gt;
&lt;span class="n"&gt;dist_partitioned_table_id_test_n_excl&lt;/span&gt;
 &lt;span class="n"&gt;pt_1_id_test_n_excl&lt;/span&gt;
 &lt;span class="n"&gt;pt_2_id_test_n_excl&lt;/span&gt;
 &lt;span class="n"&gt;pt_3_id_test_n_excl&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;-- now, verify correct propagation to all the nodes in the cluster&lt;/span&gt;
&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="k"&gt;c&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;node_1_port&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;attrelid&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;regclass&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;attname&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;attidentity&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;pg_attribute&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;attname&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'id_test'&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;attidentity&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'d'&lt;/span&gt; &lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

        &lt;span class="n"&gt;attrelid&lt;/span&gt;        &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;attname&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;attidentity&lt;/span&gt;
&lt;span class="c1"&gt;------------------------+---------+-------------&lt;/span&gt;
 &lt;span class="n"&gt;dist_partitioned_table&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;id_test&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;d&lt;/span&gt;
 &lt;span class="n"&gt;pt_1&lt;/span&gt;                   &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;id_test&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;d&lt;/span&gt;
 &lt;span class="n"&gt;pt_2&lt;/span&gt;                   &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;id_test&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;d&lt;/span&gt;
 &lt;span class="n"&gt;pt_3&lt;/span&gt;                   &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;id_test&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;d&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;relname&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;amname&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;pg_class&lt;/span&gt; &lt;span class="k"&gt;c&lt;/span&gt; &lt;span class="k"&gt;LEFT&lt;/span&gt; &lt;span class="k"&gt;JOIN&lt;/span&gt; &lt;span class="n"&gt;pg_am&lt;/span&gt; &lt;span class="n"&gt;am&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;relam&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;am&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;oid&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;relname&lt;/span&gt; &lt;span class="k"&gt;IN&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'dist_partitioned_table'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'pt_1'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'pt_2'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'pt_3'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;relname&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

        &lt;span class="n"&gt;relname&lt;/span&gt;         &lt;span class="o"&gt;|&lt;/span&gt;  &lt;span class="n"&gt;amname&lt;/span&gt;
&lt;span class="c1"&gt;------------------------+----------&lt;/span&gt;
 &lt;span class="n"&gt;dist_partitioned_table&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;heap&lt;/span&gt;
 &lt;span class="n"&gt;pt_1&lt;/span&gt;                   &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;columnar&lt;/span&gt;
 &lt;span class="n"&gt;pt_2&lt;/span&gt;                   &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;columnar&lt;/span&gt;
 &lt;span class="n"&gt;pt_3&lt;/span&gt;                   &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;heap&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;-- this node is not a coordinator&lt;/span&gt;
&lt;span class="c1"&gt;-- so we can also see the exclusion constraints on the shards&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;conname&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;pg_constraint&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;conname&lt;/span&gt; &lt;span class="k"&gt;LIKE&lt;/span&gt; &lt;span class="s1"&gt;'%id_test%'&lt;/span&gt; &lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

           &lt;span class="n"&gt;conname&lt;/span&gt;
&lt;span class="c1"&gt;------------------------&lt;/span&gt;
&lt;span class="n"&gt;dist_partitioned_table_id_test_n_excl&lt;/span&gt;
&lt;span class="n"&gt;dist_partitioned_table_id_test_n_excl_102008&lt;/span&gt;
&lt;span class="p"&gt;....&lt;/span&gt;
 &lt;span class="n"&gt;pt_1_id_test_n_excl&lt;/span&gt;
 &lt;span class="n"&gt;pt_1_id_test_n_excl_102040&lt;/span&gt;
&lt;span class="p"&gt;...&lt;/span&gt;
 &lt;span class="n"&gt;pt_2_id_test_n_excl&lt;/span&gt;
 &lt;span class="n"&gt;pt_2_id_test_n_excl_102072&lt;/span&gt;
&lt;span class="p"&gt;...&lt;/span&gt;
 &lt;span class="n"&gt;pt_3_id_test_n_excl&lt;/span&gt;
 &lt;span class="n"&gt;pt_3_id_test_n_excl_102104&lt;/span&gt;
&lt;span class="p"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="-- let&amp;#39;s say we are at node 0
-- create a partitioned table
-- specify access method as columnar, use generated identity column
CREATE TABLE dist_partitioned_table
( id_test bigint GENERATED BY DEFAULT AS IDENTITY (START WITH 10 INCREMENT BY 10),
n int )
PARTITION BY RANGE (n)
USING columnar;

-- create a partition for the table
CREATE TABLE pt_1 PARTITION OF dist_partitioned_table FOR VALUES FROM (1) TO (50);

-- distribute the table, making it a distributed partitioned table
SELECT create_distributed_table(&amp;#39;dist_partitioned_table&amp;#39;, &amp;#39;id_test&amp;#39;);

-- create another partition of the table, it will be automatically distributed
CREATE TABLE pt_2 PARTITION OF dist_partitioned_table FOR VALUES FROM (50) TO (1000);

-- Altering an access method for a partitioned table lets the value be used
-- for all future partitions created under it.
-- Existing partitions are not modified.
ALTER TABLE dist_partitioned_table SET ACCESS METHOD heap;

-- Add an exclusion constraint, which will be part of current and future partitions as well
ALTER TABLE dist_partitioned_table ADD EXCLUDE USING btree (id_test WITH =, n WITH =);

-- Attaching a partition inherits the identity column from the parent table
CREATE TABLE pt_3 (id_test bigint not null, n int);
ALTER TABLE dist_partitioned_table ATTACH PARTITION pt_3 FOR VALUES FROM (1000) TO (2000);

-- verify that the identity column is inherited in all children
SELECT attrelid::regclass, attname, attidentity FROM pg_attribute
WHERE attname = &amp;#39;id_test&amp;#39; AND attidentity = &amp;#39;d&amp;#39; ORDER BY 1;

        attrelid        | attname | attidentity
------------------------+---------+-------------
 dist_partitioned_table | id_test | d
 pt_1                   | id_test | d
 pt_2                   | id_test | d
 pt_3                   | id_test | d
(4 rows)

-- the parent table and the new partition have the altered access method &amp;quot;heap&amp;quot;
-- whereas the old two partitions have the original access method &amp;quot;columnar&amp;quot;
SELECT relname, amname FROM pg_class c LEFT JOIN pg_am am ON (c.relam = am.oid)
WHERE relname IN (&amp;#39;dist_partitioned_table&amp;#39;, &amp;#39;pt_1&amp;#39;, &amp;#39;pt_2&amp;#39;, &amp;#39;pt_3&amp;#39;) ORDER BY relname;

        relname         |  amname
------------------------+----------
 dist_partitioned_table | heap
 pt_1                   | columnar
 pt_2                   | columnar
 pt_3                   | heap
(4 rows)

-- verify that the distributed partitioned table and its distributed partitions
-- have exclude constraints
SELECT conname FROM pg_constraint
WHERE conname LIKE &amp;#39;%id_test%&amp;#39; ORDER BY 1;

           conname
------------------------
dist_partitioned_table_id_test_n_excl
 pt_1_id_test_n_excl
 pt_2_id_test_n_excl
 pt_3_id_test_n_excl
(4 rows)

-- now, verify correct propagation to all the nodes in the cluster
\c - - - :node_1_port
SELECT attrelid::regclass, attname, attidentity FROM pg_attribute
WHERE attname = &amp;#39;id_test&amp;#39; AND attidentity = &amp;#39;d&amp;#39; ORDER BY 1;

        attrelid        | attname | attidentity
------------------------+---------+-------------
 dist_partitioned_table | id_test | d
 pt_1                   | id_test | d
 pt_2                   | id_test | d
 pt_3                   | id_test | d
(4 rows)

SELECT relname, amname FROM pg_class c LEFT JOIN pg_am am ON (c.relam = am.oid)
WHERE relname IN (&amp;#39;dist_partitioned_table&amp;#39;, &amp;#39;pt_1&amp;#39;, &amp;#39;pt_2&amp;#39;, &amp;#39;pt_3&amp;#39;) ORDER BY relname;

        relname         |  amname
------------------------+----------
 dist_partitioned_table | heap
 pt_1                   | columnar
 pt_2                   | columnar
 pt_3                   | heap
(4 rows)

-- this node is not a coordinator
-- so we can also see the exclusion constraints on the shards
SELECT conname FROM pg_constraint
WHERE conname LIKE &amp;#39;%id_test%&amp;#39; ORDER BY 1;

           conname
------------------------
dist_partitioned_table_id_test_n_excl
dist_partitioned_table_id_test_n_excl_102008
....
 pt_1_id_test_n_excl
 pt_1_id_test_n_excl_102040
...
 pt_2_id_test_n_excl
 pt_2_id_test_n_excl_102072
...
 pt_3_id_test_n_excl
 pt_3_id_test_n_excl_102104
...
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;

&lt;h3 id="explain"&gt;Propagate new EXPLAIN options: MEMORY and SERIALIZE&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;EXPLAIN&lt;/code&gt; in PG17 includes two new options: &lt;code&gt;SERIALIZE&lt;/code&gt; and &lt;code&gt;MEMORY&lt;/code&gt;. The &lt;code&gt;SERIALIZE&lt;/code&gt; option measures the actual cost of converting the query&amp;#39;s output data into displayable form and of sending that data to the client, whereas the &lt;code&gt;MEMORY&lt;/code&gt; option reports planner memory consumption. Citus 13.0 supports both options when explaining a distributed query.&lt;/p&gt;

&lt;p&gt;As you may know, Citus distributes a query to the nodes that hold the shards the query references; each per-shard query fragment is called a task. Each task returns its &lt;code&gt;EXPLAIN&lt;/code&gt; output to the node that runs the &lt;code&gt;EXPLAIN&lt;/code&gt; query. The &lt;code&gt;MEMORY&lt;/code&gt; option is especially useful for queries parallelized across shards, because it shows the amount of memory used by an individual task. The &lt;code&gt;SERIALIZE&lt;/code&gt; option is most meaningful on the collecting node, since serialization time can only be measured properly after the full result set has been retrieved. Let’s see a simple example below:&lt;/p&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- create, distribute and populate a simple table&lt;/span&gt;
&lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;citus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;shard_count&lt;/span&gt; &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;dist_table&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;create_distributed_table&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'dist_table'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'a'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;dist_table&lt;/span&gt; &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="k"&gt;c&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;c&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;10000&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;generate_series&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="k"&gt;c&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="c1"&gt;-- explain a simple multi-shard query on the table using memory and serialize options&lt;/span&gt;
&lt;span class="k"&gt;EXPLAIN&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;costs&lt;/span&gt; &lt;span class="k"&gt;off&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;analyze&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;serialize&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;memory&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;dist_table&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

                                      &lt;span class="n"&gt;QUERY&lt;/span&gt; &lt;span class="n"&gt;PLAN&lt;/span&gt;
&lt;span class="c1"&gt;--------------------------------------------------------------------------------------------&lt;/span&gt;
&lt;span class="n"&gt;Custom&lt;/span&gt; &lt;span class="n"&gt;Scan&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Citus&lt;/span&gt; &lt;span class="n"&gt;Adaptive&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;actual&lt;/span&gt; &lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;18&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;490&lt;/span&gt;&lt;span class="p"&gt;..&lt;/span&gt;&lt;span class="mi"&gt;18&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;519&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1001&lt;/span&gt; &lt;span class="n"&gt;loops&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;Task&lt;/span&gt; &lt;span class="k"&gt;Count&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;32&lt;/span&gt;
&lt;span class="n"&gt;Tuple&lt;/span&gt; &lt;span class="k"&gt;data&lt;/span&gt; &lt;span class="n"&gt;received&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="n"&gt;nodes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;8008&lt;/span&gt; &lt;span class="n"&gt;bytes&lt;/span&gt;
&lt;span class="n"&gt;Tasks&lt;/span&gt; &lt;span class="n"&gt;Shown&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;One&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="mi"&gt;32&lt;/span&gt;
&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Task&lt;/span&gt;
    &lt;span class="n"&gt;Tuple&lt;/span&gt; &lt;span class="k"&gt;data&lt;/span&gt; &lt;span class="n"&gt;received&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;272&lt;/span&gt; &lt;span class="n"&gt;bytes&lt;/span&gt;
     &lt;span class="n"&gt;Node&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;localhost&lt;/span&gt; &lt;span class="n"&gt;port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;9702&lt;/span&gt; &lt;span class="n"&gt;dbname&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;Naisila&lt;/span&gt;
     &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;  &lt;span class="n"&gt;Seq&lt;/span&gt; &lt;span class="n"&gt;Scan&lt;/span&gt; &lt;span class="k"&gt;on&lt;/span&gt; &lt;span class="n"&gt;dist_table_102141&lt;/span&gt; &lt;span class="n"&gt;dist_table&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;actual&lt;/span&gt; &lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;013&lt;/span&gt;&lt;span class="p"&gt;..&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;016&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;34&lt;/span&gt; &lt;span class="n"&gt;loops&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
         &lt;span class="n"&gt;Planning&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
           &lt;span class="n"&gt;Memory&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;used&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;7&lt;/span&gt;&lt;span class="n"&gt;kB&lt;/span&gt;  &lt;span class="n"&gt;allocated&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="n"&gt;kB&lt;/span&gt;
         &lt;span class="n"&gt;Planning&lt;/span&gt; &lt;span class="nb"&gt;Time&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;024&lt;/span&gt; &lt;span class="n"&gt;ms&lt;/span&gt;
         &lt;span class="n"&gt;Serialization&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;000&lt;/span&gt; &lt;span class="n"&gt;ms&lt;/span&gt;  &lt;span class="k"&gt;output&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="n"&gt;kB&lt;/span&gt;  &lt;span class="n"&gt;format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;text&lt;/span&gt;
         &lt;span class="n"&gt;Execution&lt;/span&gt; &lt;span class="nb"&gt;Time&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;031&lt;/span&gt; &lt;span class="n"&gt;ms&lt;/span&gt;
&lt;span class="n"&gt;Planning&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="n"&gt;Memory&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;used&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;359&lt;/span&gt;&lt;span class="n"&gt;kB&lt;/span&gt; &lt;span class="n"&gt;allocated&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;512&lt;/span&gt;&lt;span class="n"&gt;kB&lt;/span&gt;
&lt;span class="n"&gt;Planning&lt;/span&gt; &lt;span class="nb"&gt;Time&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;287&lt;/span&gt; &lt;span class="n"&gt;ms&lt;/span&gt;
&lt;span class="n"&gt;Serialization&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;097&lt;/span&gt; &lt;span class="n"&gt;ms&lt;/span&gt; &lt;span class="k"&gt;output&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="n"&gt;kB&lt;/span&gt; &lt;span class="n"&gt;format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;text&lt;/span&gt;
&lt;span class="n"&gt;Execution&lt;/span&gt; &lt;span class="nb"&gt;Time&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;18&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;902&lt;/span&gt; &lt;span class="n"&gt;ms&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;18&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="-- create, distribute and populate a simple table
SET citus.shard_count TO 32;
CREATE TABLE dist_table(a int, b int);
SELECT create_distributed_table(&amp;#39;dist_table&amp;#39;, &amp;#39;a&amp;#39;);
INSERT INTO dist_table SELECT c, c * 10000 FROM generate_series(0, 1000) AS c;
-- explain a simple multi-shard query on the table using memory and serialize options
EXPLAIN (costs off, analyze, serialize, memory) SELECT * FROM dist_table;

                                      QUERY PLAN
--------------------------------------------------------------------------------------------
Custom Scan (Citus Adaptive) (actual time=18.490..18.519 rows=1001 loops=1)
Task Count: 32
Tuple data received from nodes: 8008 bytes
Tasks Shown: One of 32
-&amp;gt; Task
    Tuple data received from node: 272 bytes
     Node: host=localhost port=9702 dbname=Naisila
     -&amp;gt;  Seq Scan on dist_table_102141 dist_table (actual time=0.013..0.016 rows=34 loops=1)
         Planning:
           Memory: used=7kB  allocated=8kB
         Planning Time: 0.024 ms
         Serialization: time=0.000 ms  output=0kB  format=text
         Execution Time: 0.031 ms
Planning:
Memory: used=359kB allocated=512kB
Planning Time: 0.287 ms
Serialization: time=0.097 ms output=20kB format=text
Execution Time: 18.902 ms
(18 rows)
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;

&lt;p&gt;This &lt;code&gt;EXPLAIN&lt;/code&gt; output shows one of 32 tasks; tasks correspond to shards. For each task, we can see the memory consumed on the node where it was executed. In the example above, the coordinator node consumes more memory because its Custom Scan node coalesces the results from all the tasks. For now, the serialization figures are reported only for the collected results.&lt;/p&gt;

&lt;h2 id="optimizer-improvements"&gt;Leveraging optimizer improvements with Citus in Postgres 17&lt;/h2&gt;

&lt;p&gt;As soon as PG17 is enabled in Citus, query and optimizer improvements apply without any further action needed. That’s because such improvements take effect directly on Citus table shards, which are essentially regular Postgres tables. Thanks to PG17 allowing correlated subqueries to be pulled up into joins, Citus can use this feature in its distributed planning phase and run even more types of distributed queries that were not supported with previous PG versions.&lt;/p&gt;

&lt;p&gt;PG17 has several commits that bring significant optimizer improvements. One commit in particular, &lt;a href="https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=9f1337639"&gt;Allow correlated IN subqueries to be transformed into joins&lt;/a&gt;, is worth calling out because it enables Citus 13.0 to plan and execute a query with a correlated IN subquery using query pushdown, a query that pre-PG17 Citus struggled to plan. Let’s look at the kind of new query we can run with Citus 13.0 on PG17:&lt;/p&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- create, distribute and populate two simple tables&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;customer&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="nb"&gt;text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;contact&lt;/span&gt; &lt;span class="nb"&gt;text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;category&lt;/span&gt; &lt;span class="nb"&gt;text&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;orders&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt; &lt;span class="n"&gt;customer_id&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;category&lt;/span&gt; &lt;span class="nb"&gt;text&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;create_distributed_table&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'customer'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'id'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;create_distributed_table&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'orders'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'customer_id'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;customer&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'Beana'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'beana1234@gmail.com'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'books'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
                            &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'Erida'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'erida1234@gmail.com'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'notebooks'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
                            &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'Redi'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'redi1234@gmail.com'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'pens'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;orders&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'books'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'notebooks'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'hats'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- with Citus 13.0 in PG17 we are able to run a query&lt;/span&gt;
&lt;span class="c1"&gt;-- on the customer table that has a correlated subquery!&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="k"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;contact&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;customer&lt;/span&gt; &lt;span class="k"&gt;c&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="k"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;customer_id&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;orders&lt;/span&gt; &lt;span class="n"&gt;o&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;o&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;category&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;category&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="n"&gt;name&lt;/span&gt;  &lt;span class="o"&gt;|&lt;/span&gt;       &lt;span class="n"&gt;contact&lt;/span&gt;
&lt;span class="c1"&gt;------+---------------------&lt;/span&gt;
&lt;span class="n"&gt;Beana&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;beana1234&lt;/span&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;gmail&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;com&lt;/span&gt;
&lt;span class="n"&gt;Erida&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;erida1234&lt;/span&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;gmail&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;com&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;-- pre Citus 13 or pre PG17 this query would fail with the following&lt;/span&gt;
&lt;span class="n"&gt;ERROR&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;  &lt;span class="n"&gt;complex&lt;/span&gt; &lt;span class="n"&gt;joins&lt;/span&gt; &lt;span class="k"&gt;are&lt;/span&gt; &lt;span class="k"&gt;only&lt;/span&gt; &lt;span class="n"&gt;supported&lt;/span&gt; &lt;span class="k"&gt;when&lt;/span&gt; &lt;span class="k"&gt;all&lt;/span&gt; &lt;span class="n"&gt;distributed&lt;/span&gt; &lt;span class="n"&gt;tables&lt;/span&gt;
        &lt;span class="k"&gt;are&lt;/span&gt; &lt;span class="n"&gt;co&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;located&lt;/span&gt; &lt;span class="k"&gt;and&lt;/span&gt; &lt;span class="n"&gt;joined&lt;/span&gt; &lt;span class="k"&gt;on&lt;/span&gt; &lt;span class="n"&gt;their&lt;/span&gt; &lt;span class="n"&gt;distribution&lt;/span&gt; &lt;span class="n"&gt;columns&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="-- create, distribute and populate two simple tables
CREATE TABLE customer ( id int, name text, contact text, category text);
CREATE TABLE orders ( customer_id int, category text);
SELECT create_distributed_table(&amp;#39;customer&amp;#39;, &amp;#39;id&amp;#39;);
SELECT create_distributed_table(&amp;#39;orders&amp;#39;, &amp;#39;customer_id&amp;#39;);
INSERT INTO customer VALUES (1, &amp;#39;Beana&amp;#39;, &amp;#39;beana1234@gmail.com&amp;#39;, &amp;#39;books&amp;#39;),
                            (2, &amp;#39;Erida&amp;#39;, &amp;#39;erida1234@gmail.com&amp;#39;, &amp;#39;notebooks&amp;#39;),
                            (3, &amp;#39;Redi&amp;#39;, &amp;#39;redi1234@gmail.com&amp;#39;, &amp;#39;pens&amp;#39;);
INSERT INTO orders VALUES (1, &amp;#39;books&amp;#39;), (2, &amp;#39;notebooks&amp;#39;), (3, &amp;#39;hats&amp;#39;);

-- with Citus 13.0 in PG17 we are able to run a query
-- on the customer table that has a correlated subquery!
SELECT c.name, c.contact
FROM customer c
WHERE c.id in (SELECT customer_id FROM orders o WHERE o.category = c.category);

name  |       contact
------+---------------------
Beana | beana1234@gmail.com
Erida | erida1234@gmail.com
(2 rows)

-- pre Citus 13 or pre PG17 this query would fail with the following
ERROR:  complex joins are only supported when all distributed tables
        are co-located and joined on their distribution columns
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;

&lt;p&gt;Let’s look in more detail at how Citus 13.0 leverages the PG17 optimizer improvements to plan and execute this query:&lt;/p&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;EXPLAIN&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;costs&lt;/span&gt; &lt;span class="k"&gt;off&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="k"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;contact&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;customer&lt;/span&gt; &lt;span class="k"&gt;c&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="k"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;customer_id&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;orders&lt;/span&gt; &lt;span class="n"&gt;o&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;o&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;category&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;category&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="EXPLAIN (costs off)
SELECT c.name, c.contact
FROM customer c
WHERE c.id in (SELECT customer_id FROM orders o WHERE o.category = c.category);
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;

&lt;p&gt;With Citus 13.0 the plan for this query is:&lt;/p&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight sql"&gt;&lt;code&gt;                              &lt;span class="n"&gt;QUERY&lt;/span&gt; &lt;span class="n"&gt;PLAN&lt;/span&gt;
&lt;span class="c1"&gt;--------------------------------------------------------------------------&lt;/span&gt;
&lt;span class="n"&gt;Custom&lt;/span&gt; &lt;span class="n"&gt;Scan&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Citus&lt;/span&gt; &lt;span class="n"&gt;Adaptive&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="n"&gt;Task&lt;/span&gt; &lt;span class="k"&gt;Count&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;32&lt;/span&gt;
   &lt;span class="n"&gt;Tasks&lt;/span&gt; &lt;span class="n"&gt;Shown&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;One&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="mi"&gt;32&lt;/span&gt;
   &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Task&lt;/span&gt;
    &lt;span class="n"&gt;Node&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;localhost&lt;/span&gt; &lt;span class="n"&gt;port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;9701&lt;/span&gt; &lt;span class="n"&gt;dbname&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;citus&lt;/span&gt;
    &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Hash&lt;/span&gt; &lt;span class="k"&gt;Join&lt;/span&gt;
        &lt;span class="n"&gt;Hash&lt;/span&gt; &lt;span class="n"&gt;Cond&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="k"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;category&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;o&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;category&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;o&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;customer_id&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
        &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Seq&lt;/span&gt; &lt;span class="n"&gt;Scan&lt;/span&gt; &lt;span class="k"&gt;on&lt;/span&gt; &lt;span class="n"&gt;customer_105861&lt;/span&gt; &lt;span class="k"&gt;c&lt;/span&gt;
        &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Hash&lt;/span&gt;
            &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;HashAggregate&lt;/span&gt;
        &lt;span class="k"&gt;Group&lt;/span&gt; &lt;span class="k"&gt;Key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;o&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;category&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;o&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;customer_id&lt;/span&gt;
            &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Seq&lt;/span&gt; &lt;span class="n"&gt;Scan&lt;/span&gt; &lt;span class="k"&gt;on&lt;/span&gt; &lt;span class="n"&gt;orders_105893&lt;/span&gt; &lt;span class="n"&gt;o&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="                              QUERY PLAN
--------------------------------------------------------------------------
Custom Scan (Citus Adaptive)
   Task Count: 32
   Tasks Shown: One of 32
   -&amp;gt; Task
    Node: host=localhost port=9701 dbname=citus
    -&amp;gt; Hash Join
        Hash Cond: ((c.category = o.category) AND (c.id = o.customer_id))
        -&amp;gt; Seq Scan on customer_105861 c
        -&amp;gt; Hash
            -&amp;gt; HashAggregate
        Group Key: o.category, o.customer_id
            -&amp;gt; Seq Scan on orders_105893 o
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;

&lt;p&gt;The Postgres 17 planner converts the IN subquery into a join between the &lt;code&gt;customer&lt;/code&gt; and &lt;code&gt;orders&lt;/code&gt; tables (technically a semi-join). Citus can then push the query down to all worker nodes, because the join includes an equality on the distribution columns of both tables. In contrast, the same query hits a planning limitation in previous versions of Citus:&lt;/p&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- Pre-13.0 Citus:&lt;/span&gt;
&lt;span class="k"&gt;EXPLAIN&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;costs&lt;/span&gt; &lt;span class="k"&gt;off&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="k"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;contact&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;customer&lt;/span&gt; &lt;span class="k"&gt;c&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="k"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;customer_id&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;orders&lt;/span&gt;  &lt;span class="n"&gt;o&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;o&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;category&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;category&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;DEBUG&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;  &lt;span class="n"&gt;skipping&lt;/span&gt; &lt;span class="k"&gt;recursive&lt;/span&gt; &lt;span class="n"&gt;planning&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;subquery&lt;/span&gt; &lt;span class="n"&gt;since&lt;/span&gt; &lt;span class="n"&gt;it&lt;/span&gt; &lt;span class="k"&gt;contains&lt;/span&gt;
        &lt;span class="k"&gt;references&lt;/span&gt; &lt;span class="k"&gt;to&lt;/span&gt; &lt;span class="k"&gt;outer&lt;/span&gt; &lt;span class="n"&gt;queries&lt;/span&gt;
&lt;span class="n"&gt;ERROR&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;  &lt;span class="n"&gt;complex&lt;/span&gt; &lt;span class="n"&gt;joins&lt;/span&gt; &lt;span class="k"&gt;are&lt;/span&gt; &lt;span class="k"&gt;only&lt;/span&gt; &lt;span class="n"&gt;supported&lt;/span&gt; &lt;span class="k"&gt;when&lt;/span&gt; &lt;span class="k"&gt;all&lt;/span&gt; &lt;span class="n"&gt;distributed&lt;/span&gt; &lt;span class="n"&gt;tables&lt;/span&gt;
        &lt;span class="k"&gt;are&lt;/span&gt; &lt;span class="n"&gt;co&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;located&lt;/span&gt; &lt;span class="k"&gt;and&lt;/span&gt; &lt;span class="n"&gt;joined&lt;/span&gt; &lt;span class="k"&gt;on&lt;/span&gt; &lt;span class="n"&gt;their&lt;/span&gt; &lt;span class="n"&gt;distribution&lt;/span&gt; &lt;span class="n"&gt;columns&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="-- Pre-13.0 Citus:
EXPLAIN (costs off)
SELECT c.name, c.contact
FROM customer c
WHERE c.id in (SELECT customer_id FROM orders  o WHERE o.category = c.category);
DEBUG:  skipping recursive planning for the subquery since it contains
        references to outer queries
ERROR:  complex joins are only supported when all distributed tables
        are co-located and joined on their distribution columns
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;

&lt;p&gt;Prior to version 17, Postgres planned the subquery as a correlated SubPlan and applied it as a filter on the &lt;code&gt;customer&lt;/code&gt; table. Citus cannot push down queries that contain correlated SubPlans. With Postgres 17, the subquery is planned as a join, the query plan contains no correlated SubPlans, and Citus can naturally push down the join!&lt;/p&gt;
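
&lt;p&gt;If you want to try this kind of pushdown yourself, the key prerequisite is that both tables are distributed and co-located on the join columns. Here is a minimal sketch of such a setup (the schema is illustrative, not necessarily the exact one used for the plans above):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;CREATE TABLE customer (id bigint, category text, name text, contact text);
CREATE TABLE orders (customer_id bigint, category text);

-- distribute both tables so that c.id and o.customer_id hash to the same shards
SELECT create_distributed_table('customer', 'id');
SELECT create_distributed_table('orders', 'customer_id', colocate_with =&gt; 'customer');
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;With this setup, the &lt;code&gt;c.id = o.customer_id&lt;/code&gt; equality in the semi-join matches the distribution columns of two co-located tables, which is what allows Citus to push the entire join down to the workers.&lt;/p&gt;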

&lt;h2 id="important-bugfixes"&gt;Important bugfixes, including community contributions into Citus 13.0&lt;/h2&gt;

&lt;p&gt;Citus 13.0 includes bug fixes that address crashes caused by unsafe catalog access and segmentation faults in distributed procedures. It also resolves issues with role synchronization across nodes, fixes server crashes in specific cluster configurations, and improves handling of shard placement when new nodes are added without the required reference tables.&lt;/p&gt;

&lt;p&gt;Beyond the work of Citus engineers, we have seen significant community contributions to Citus, which we always love to see. We are grateful for all the contributions to the Citus open-source repository on GitHub, both pull requests and issues. We would like to thank:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/c2main"&gt;Cédric Villemain&lt;/a&gt; for his contributions in fault tolerance by &lt;a href="https://github.com/citusdata/citus/pull/7288"&gt;fixing a segfault when calling distributed procedure with a parameterized distribution argument&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/Green-Chan"&gt;Karina&lt;/a&gt; for her contributions in crash tolerance by &lt;a href="https://github.com/citusdata/citus/pull/7552"&gt;fixing a server crash when trying to execute activate_node_snapshot() on a single-node cluster&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/crabhi"&gt;Filip Sedlák&lt;/a&gt; for his contributions in cluster management &amp;amp; coordination by &lt;a href="https://github.com/citusdata/citus/pull/7467"&gt;improving citus_move_shard_placement() to fail early if there is a new node without reference tables yet&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For more details on these community contributions, and more notable fixes, you can check the &lt;a href="/updates/v13-0"&gt;Citus 13.0 Updates page&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;Diving deeper into Citus 13.0 and distributed Postgres&lt;/h2&gt;

&lt;p&gt;To learn more about Citus 13.0, you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check out the &lt;a href="/updates/v13-0"&gt;13.0 Updates page&lt;/a&gt; to get the detailed release notes.&lt;/li&gt;
&lt;li&gt;Watch the replay of the &lt;a href="https://www.youtube.com/live/DrCu8SX2nDM"&gt;Citus 13.0 Release Party livestream&lt;/a&gt; to see demos and learn more about how Citus 13.0 distributes PostgreSQL 17, as well as exciting Citus team updates.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can also stay connected on the &lt;a href="https://slack.citusdata.com/"&gt;Citus Slack&lt;/a&gt; and visit the &lt;a href="https://github.com/citusdata/citus"&gt;Citus open source GitHub repo&lt;/a&gt; to see recent developments as well. If there’s something you’d like to see next in Citus, feel free to also open a feature request issue :)&lt;/p&gt;
&lt;p&gt;&lt;em&gt;This article was originally published on &lt;a href='https://www.citusdata.com/blog/2025/02/06/distribute-postgresql-17-with-citus-13/'&gt;citusdata.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>CFP talk proposal ideas for POSETTE: An Event for Postgres 2025</title>
    <link rel="alternate" href="https://www.citusdata.com/blog/2025/02/04/cfp-talk-proposal-ideas-for-posette-2025/"/>
    <id>https://www.citusdata.com/blog/2025/02/04/cfp-talk-proposal-ideas-for-posette-2025/</id>
    <published>2025-02-04T21:02:01+00:00</published>
    <updated>2025-02-04T21:02:01+00:00</updated>
    <author>Claire Giordano</author>
    <content type="html">&lt;p&gt;Some of you have been asking for advice about what to submit to the &lt;a href="https://posetteconf.com/2025/cfp/"&gt;CFP for POSETTE: An Event for Postgres 2025&lt;/a&gt;. So this post aims to give you ideas that might help you submit a talk proposal (or 2, or 3) before the upcoming CFP deadline.&lt;/p&gt;

&lt;p&gt;If you’re not yet familiar with this conference, &lt;a href="https://posetteconf.com/2025/"&gt;POSETTE: An Event for Postgres 2025&lt;/a&gt; is a free &amp;amp; virtual developer event now in its 4th year, organized by the Postgres team at Microsoft.&lt;/p&gt;

&lt;p&gt;I love the virtual aspect of POSETTE because the conference talks are so accessible&amp;mdash;for both speakers and attendees. If you’re a speaker, you don’t need travel budget $$&amp;mdash;and you don’t have to leave home. Also, the talk you’ve poured all that energy into is not limited to the people in the room, and has the potential to reach so many more people. If you’re an attendee, well, all you need is an internet connection.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The CFP for POSETTE: An Event for Postgres will be open until Sunday Feb 9th at 11:59pm PST&lt;/strong&gt;. So as of the publication date of this blog post, you still have time to submit a CFP proposal (or 2, or 3, or 4)&amp;mdash;and to remind your Postgres teammates and friends of the speaking opportunity. &lt;/p&gt;

&lt;p&gt;If you have a Postgres experience, success story, failure, best practice, “how-to”, collection of tips, lesson about something that&amp;#39;s new, or deep dive to share—not just about the core of Postgres, but about anything in the Postgres ecosystem, including extensions, and tooling, and monitoring—maybe you should consider submitting a &lt;a href="https://posetteconf.com/2025/cfp/"&gt;talk proposal to the CFP for POSETTE&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you’re not sure about whether to give a conference talk, &lt;a href="https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/why-give-a-conference-talk-or-why-give-a-postgres-talk/ba-p/3056872"&gt;there are a boatload of reasons why you should&lt;/a&gt;. And there’s also a &lt;a href="https://talkingpostgres.com/episodes/why-giving-talks-at-postgres-conferences-matters"&gt;podcast episode&lt;/a&gt; with Álvaro Herrera, Boriss Mejías, and Pino de Candia that makes the case for why giving conference talks matters. For inspiration, you can also take a look at the &lt;a href="https://aka.ms/posette-playlist-2024"&gt;playlist of POSETTE 2024 talks&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;And if you’re looking for even more CFP ideas, you’ve come to the right place! Read on…&lt;/p&gt;

&lt;h2&gt;Ideas for talks you might propose in the POSETTE CFP&lt;/h2&gt;

&lt;p&gt;On the CFP page there is a list of possible talk titles (screenshot below) you might submit&amp;mdash;these are good ideas, although the list is by no means exhaustive, and we welcome talk proposals that are not on this list.&lt;/p&gt;

&lt;figure&gt;
&lt;picture&gt;
&lt;source srcset="https://cdn.citusdata.com/images/blog/posette2025-cfp-topics.webp" type="image/webp"&gt;
&lt;img src="https://cdn.citusdata.com/images/blog/posette2025-cfp-topics.jpg" alt="topic ideas" loading="lazy" width="800" height="478" /&gt;
&lt;/picture&gt;
&lt;figcaption&gt;&lt;strong&gt;Figure 1:&lt;/strong&gt; POSETTE CFP talk topics taken from the CFP page on &lt;a href="https://posetteconf.com" rel="noopener" target="_blank"&gt;PosetteConf.com&lt;/a&gt;&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;On Telegram the other day, when answering the question “Do you have any ideas of what I should submit?”, I found myself suggesting different TYPES of talks. Not specific ideas and talk titles, but rather I framed the different categories. So I decided to share these different &amp;ldquo;types&amp;rdquo; and &amp;ldquo;classes&amp;rdquo; of talks with all of you, in the hopes this might give you a good talk proposal idea.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;First you need to pick your audience&lt;/strong&gt;: Before you think about what type of talk to give, remember that the POSETTE team is focused on serving the needs of both the USER community and the Postgres contributor &amp;amp; hacker communities.&lt;/p&gt;

&lt;p&gt;That means first you need to decide on your audience. Are you giving a talk for &lt;a href="https://www.postgresql.org/"&gt;PostgreSQL&lt;/a&gt; users, or &lt;a href="https://learn.microsoft.com/azure/postgresql/"&gt;Azure Database for PostgreSQL&lt;/a&gt; customers, or the PostgreSQL contributor community? All are good choices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Then you need to decide: what do you want to accomplish with your talk?&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Do you want to skill up the Postgres hacker community?&lt;/strong&gt;: If you want to help skill-up the developer/contributor community, maybe pick a part of Postgres that new contributors often ask a lot of questions about, get stuck on, need help with, etc&amp;mdash;and give a &amp;ldquo;tour&amp;rdquo; of its mechanics, starting with the basics.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Do you want to help grow the Postgres community?&lt;/strong&gt;: If you want to help grow the Postgres community of contributors and developers, you could propose a talk that would motivate tomorrow&amp;#39;s developers/contributors to get involved in the project. Imagine you were going to a university to give a talk about &amp;quot;why work on Postgres&amp;quot;… what would you say? And how would you entice people to work on Postgres?&lt;/p&gt;

&lt;p&gt;What pain points would you challenge them with?&lt;/p&gt;

&lt;p&gt;What benefits would you share from your own Postgres experience that might inspire these developers to think seriously about Postgres as a career path?&lt;/p&gt;

&lt;p&gt;You could also shine a light on the different ways people can (and do!) contribute to the Postgres community: from mentoring to translations to organizing conferences to podcasts to speaking at conferences to publishing PostgreSQL Person of the Week.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Do you want to share your expertise with Postgres users?&lt;/strong&gt;: If you want your talk to benefit users, maybe pick an area that you are already expert in (or want an excuse to dig into and learn about?) and create a Beginners Guide for it? Or Advanced Tips for it? Or Surprising Benefits of? Or Things People Might Not Know?&lt;/p&gt;

&lt;p&gt;Especially if there is a part of Postgres you feel like people sometimes mis-use, or don&amp;#39;t take enough advantage of....&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Do you want to share your customer experiences with Azure Database for PostgreSQL, or Postgres more generally?&lt;/strong&gt;: Maybe you have a wild success story you think others will benefit from. Or you want to share a problem you had and how you used Postgres to solve it? People love customer stories.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Do you want to shine a light on the broader Postgres ecosystem?&lt;/strong&gt;: If you want to target users with your talk, don’t limit yourself to the Postgres core. There is a rich ecosystem that surrounds Postgres and people need to understand the ecosystem, too. So maybe there are tools or Postgres extensions or forks or startups that you can give a useful talk about?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Do you want to help experts in other database technologies learn about Postgres?&lt;/strong&gt;: If you have expertise in other databases as well as Postgres, maybe you can help people who are skilled in running workloads on other databases and are looking to skill up on Postgres&amp;mdash;by helping them understand what’s similar, and what’s different. As if you’re giving them a dictionary to translate from their familiar database to Postgres, and vice versa.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;There are so many more possibilities&lt;/strong&gt;: Often I look at the schedule from previous years to look for inspiration (and to make sure that my talk proposal is not a duplicate of a talk that&amp;rsquo;s already been given.) And I think about pain points, things people get confused about, or questions that come up a lot. Another thing to keep in mind: how can you help your story to &amp;quot;stick&amp;quot;? Can you make it entertaining? How do you share your story in a way that keeps people watching (versus looking at their phone instead?)&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;Key things to know about POSETTE: An Event for Postgres 2025&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;CFP deadline&lt;/strong&gt;: The &lt;a href="https://posetteconf.com/2025/cfp/"&gt;CFP for POSETTE&lt;/a&gt; will close on Sunday, Feb 9th 2025 @ 11:59pm Pacific Time (PST)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;No travel required&lt;/strong&gt;: free &amp;amp; virtual developer event&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Length of talks&lt;/strong&gt;: 25 minutes/session&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Language&lt;/strong&gt;: All talks will be in English&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Talks will be pre-recorded&lt;/strong&gt;: All talks will be pre-recorded by the POSETTE team during the weeks of Apr 28th and May 5th (with accepted speakers presenting remotely)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;When is the event?&lt;/strong&gt;: Jun 10-12, 2025&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Format of the virtual event&lt;/strong&gt;: All pre-recorded talks will be livestreamed in one of 4 unique livestreams on Jun 10-12, 2025—all with parallel live text chats on Discord. Two of the livestreams will be in Americas-friendly times of day (8:00am-2:00pm PDT) and two of the livestreams will be in EMEA-friendly times of day (8:00am-2:00pm CEST). All talks will be published online after the event is over.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;More info about the CFP&lt;/strong&gt;: All the details, including key dates and how to submit on Sessionize, are spelled out on the &lt;a href="https://posetteconf.com/2025/cfp/"&gt;CFP page for POSETTE 2025&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Code-of-conduct&lt;/strong&gt;: You can find the &lt;a href="https://posetteconf.com/2025/conduct/"&gt;Code of Conduct for POSETTE&lt;/a&gt; online. Please help us to provide a respectful, friendly, and professional experience for everybody involved in this virtual conference.&lt;/li&gt;
&lt;/ul&gt;

&lt;figure&gt;
&lt;picture&gt;
&lt;source srcset="https://cdn.citusdata.com/images/blog/youre-invited-submit-cfp-2025-social.webp" type="image/webp"&gt;
&lt;img src="https://cdn.citusdata.com/images/blog/youre-invited-submit-cfp-2025-social.jpg" alt="You're invited to submit to the CFP" loading="lazy" width="800" height="478" /&gt;
&lt;/picture&gt;
&lt;figcaption&gt;&lt;strong&gt;Figure 2:&lt;/strong&gt; the CFP is open for POSETTE: An Event for Postgres 2025 until Sunday Feb 9th at 11:59pm PST. What Postgres story do you want to share?&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;&lt;em&gt;This article was originally published on &lt;a href='https://www.citusdata.com/blog/2025/02/04/cfp-talk-proposal-ideas-for-posette-2025/'&gt;citusdata.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>Say hello to the Talking Postgres podcast</title>
    <link rel="alternate" href="https://www.citusdata.com/blog/2024/07/09/say-hello-to-the-talking-postgres-podcast/"/>
    <id>https://www.citusdata.com/blog/2024/07/09/say-hello-to-the-talking-postgres-podcast/</id>
    <published>2024-07-09T15:15:01+00:00</published>
    <updated>2024-07-09T15:15:01+00:00</updated>
    <author>Claire Giordano</author>
    <content type="html">&lt;p&gt;The TL;DR of this blog post is simple: the &amp;ldquo;Path To Citus Con&amp;rdquo; podcast for developers who love Postgres has been renamed&amp;mdash;and the new name is &lt;a href="https://talkingpostgres.com"&gt;Talking Postgres&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;And if you&amp;rsquo;re just hearing about the Talking Postgres podcast for the first time, it is a monthly podcast for developers who love Postgres, with amazing guests from the Postgres world who talk about the human side of Postgres, databases, and open source.&lt;/p&gt;

&lt;div class="normal-quote" aria-hidden="true"&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;Listening to the Talking Postgres podcast is the next best thing to being in the hallway at a Postgres conference, eavesdropping on other people&amp;rsquo;s conversations and learning from the experiences of experts. As Floor Drees says, it&amp;rsquo;s as if you&amp;rsquo;re sharing a coffee with them.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Past podcast guests include (in order of appearance) some amazing Postgres, database, and open source people such as:  Simon Willison, Marco Slot, Abdullah Ustuner, Burak Yucesoy, Melanie Plageman, Samay Sharma, Álvaro Herrera, Boriss Mejías, Thomas Munro, Grant Fritchey, Ryan Booz, Chelsea Dole, Floor Drees, Paul Ramsey, Regina Obe, Andres Freund, Heikki Linnakangas, Dimitri Fontaine, Vik Fearing, Lukas Fittl, Rob Treat, Jelte Fennema-Nio, Derk van Veen, Arda Aytekin, Chris Ellis, Michael Christofides, Aaron Wislang, and Teresa Giacomini. The podcast is produced by the Postgres team at Microsoft&amp;mdash;and I have the privilege of being your host.&lt;/p&gt;

&lt;p&gt;So whether you&amp;rsquo;re an existing listener or new to this podcast, we hope you enjoy the Talking Postgres podcast&amp;mdash;and help to spread the word about the new name.&lt;/p&gt;

&lt;figure&gt;
&lt;picture&gt;
&lt;source srcset="https://cdn.citusdata.com/images/blog/citus-website-podcast-page-1200x675.webp" type="image/webp"&gt;
&lt;img src="https://cdn.citusdata.com/images/blog/citus-website-podcast-page-1200x675.jpg" alt="Talking Postgres logo" width="850" height="478" loading="lazy" /&gt;
&lt;/picture&gt;
&lt;figcaption&gt;&lt;strong&gt;Figure 1:&lt;/strong&gt; The new “Talking Postgres with Claire Giordano” podcast name (formerly called Path To Citus Con) is depicted here with the same elephant mascot we’ve always used.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;h2&gt;Some key things to know about the Talking Postgres podcast&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Why did we rename the podcast?&lt;/strong&gt;: Guests &amp;amp; friends repeatedly&amp;mdash;and I mean repeatedly&amp;mdash;nudged us to rename the podcast to a name that makes it more clear what the podcast is about. And at the end of the day it&amp;rsquo;s about Postgres things! So while the podcast was born in March 2023 as a pre-event to last year&amp;rsquo;s Citus Con&amp;mdash;hence the original name, &amp;ldquo;Path To Citus Con&amp;rdquo;&amp;mdash;&lt;a href="https://talkingpostgres.com"&gt;Talking Postgres&lt;/a&gt; has grown into its own monthly podcast that has everything to do with Postgres and little to do with Citus Con (now called POSETTE.)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Where can you catch up on past podcast episodes?&lt;/strong&gt;: All 16 of the past episodes of Path To Citus Con can now be found on the &lt;a href="https://talkingpostgres.com"&gt;talkingpostgres.com&lt;/a&gt; site&amp;mdash;as well as on the Talking Postgres &lt;a href="https://aka.ms/TalkingPostgres-playlist"&gt;playlist on YouTube&lt;/a&gt;, and wherever you get your podcasts.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;If you&amp;rsquo;re already subscribed to the podcast, are you still subscribed?&lt;/strong&gt;: We renamed the previous podcast, so if you were already subscribed, you should still be subscribed. Same thing for the RSS feed, it should just work! If you have any problems, please let us know via the &lt;code&gt;#talkingpostgres&lt;/code&gt; channel in the &lt;a href="https://aka.ms/open-source-discord"&gt;Microsoft Open Source Discord&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Can you still attend the LIVE recording of the podcast on Discord each month?&lt;/strong&gt;: Yes! Inspired by the Oxide and Friends podcast that is hosted by Bryan Cantrill and Adam Leventhal&amp;mdash;two of my former teammates from the kernel group at Sun Microsystems&amp;mdash;we also record Talking Postgres (formerly Path To Citus Con) each month on Discord&amp;mdash;with a parallel live text chat that is quite fun to be part of.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;When are the future LIVE podcast recordings&lt;/strong&gt;: If you&amp;rsquo;ve never participated in this type of live podcast recording, you might want to give it a try. It&amp;rsquo;s easy to &lt;a href="https://aka.ms/TalkingPostgres-cal"&gt;subscribe to the Talking Postgres calendar&lt;/a&gt; of future LIVE podcast recordings: we usually record on Wed mornings Pacific Time (PT) on the 1&lt;sup&gt;st&lt;/sup&gt; or 2&lt;sup&gt;nd&lt;/sup&gt; Wednesday of the month.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Episode 17 of Talking Postgres will be recorded LIVE this Wed July 10, 2024&lt;/strong&gt;: This July, our guest is Pino de Candia&amp;mdash;the former co-host of the podcast&amp;mdash;and the topic will be a bit &amp;ldquo;meta&amp;rdquo; this time! We&amp;rsquo;ll explore &amp;ldquo;Podcasting about Postgres&amp;rdquo; and we&amp;rsquo;ll look back at some of our greatest hits, talk about some of the other wonderful Postgres podcasts we listen to, and of course we&amp;rsquo;ll spend a few minutes reflecting on the podcast rename (why why why!) &lt;a href="https://aka.ms/TalkingPostgres-Ep17-cal"&gt;This Ep17 calendar invite&lt;/a&gt; should give you all the instructions you need to join us live on Discord this Wed Jul 10th at 10:00am PDT.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;And David Rowley is scheduled to be the guest on the August episode&lt;/strong&gt;: Since David Rowley&amp;mdash;a Postgres committer&amp;mdash;is based in New Zealand, &lt;a href="https://aka.ms/TalkingPostgres-Ep18-cal"&gt;Ep18 in August with David Rowley&lt;/a&gt; will be recorded at an unusual time for us, on a Tuesday, specifically at 4:00pm PDT on Tue Aug 6th. The topic will be &amp;ldquo;How I got started as a developer &amp;amp; in Postgres&amp;rdquo; and we hope you can join us on the parallel live text chat! David is brilliant&amp;mdash;and I&amp;rsquo;m definitely going to have to do my homework on Postgres performance topics to prep for the conversation, since that is one of David&amp;rsquo;s specialties.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Let us know what you think of the podcast, be sure to use hashtag #TalkingPostgres&lt;/h2&gt;

&lt;p&gt;The new hashtag for the new podcast name is #TalkingPostgres and as soon as we see some interesting tweets, toots, and threads about the podcast using the new name, perhaps we’ll add some of them to the &lt;a href="https://talkingpostgres.com"&gt;talkingpostgres.com&lt;/a&gt; website.&lt;/p&gt;

&lt;p&gt;For now, here is a screenshot highlighting guests and topics for some of the most recent podcast episodes. You can of course &lt;a href="https://talkingpostgres.com/subscribe"&gt;subscribe to Talking Postgres&lt;/a&gt; and listen from anywhere, wherever you get your podcasts from. Enjoy!&lt;/p&gt;

&lt;figure&gt;
&lt;picture&gt;
&lt;source srcset="https://cdn.citusdata.com/images/blog/screencapture-talkingpostgres-2024-07-08.webp" type="image/webp"&gt;
&lt;img src="https://cdn.citusdata.com/images/blog/screencapture-talkingpostgres-2024-07-08.jpg" alt="screenshot of Talking Postgres website" width="800" height="1646" loading="lazy" /&gt;
&lt;/picture&gt;
&lt;figcaption&gt;&lt;strong&gt;Figure 2:&lt;/strong&gt; Screenshot of the Talking Postgres web page at &lt;code&gt;talkingpostgres.com&lt;/code&gt;, showing the most recent 5 episodes.&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;&lt;em&gt;This article was originally published on &lt;a href='https://www.citusdata.com/blog/2024/07/09/say-hello-to-the-talking-postgres-podcast/'&gt;citusdata.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;</content>
  </entry>
</feed>
