<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Citus Data Blog - Articles by Alexander Kukushkin</title>
  <author>
    <name>Alexander Kukushkin</name>
  </author>
  <subtitle>Scaling data and analytics with Postgres</subtitle>
  <id>https://www.citusdata.com/blog/</id>
  <link href="https://www.citusdata.com/blog/"/>
  <link href="https://www.citusdata.com/blog/feed/alexander-kukushkin.xml" rel="self"/>
  <updated>2023-03-06T16:51:00+00:00</updated>
  <entry>
    <title>Patroni 3.0 &amp; Citus: Scalable, Highly Available Postgres</title>
    <link rel="alternate" href="https://www.citusdata.com/blog/2023/03/06/patroni-3-0-and-citus-scalable-ha-postgres/"/>
    <id>https://www.citusdata.com/blog/2023/03/06/patroni-3-0-and-citus-scalable-ha-postgres/</id>
    <published>2023-03-06T16:51:00+00:00</published>
    <updated>2023-03-06T16:51:00+00:00</updated>
    <author><name>Alexander Kukushkin</name></author>
    <content type="html">&lt;p&gt;Citus is a PostgreSQL extension that makes PostgreSQL scalable by transparently distributing and/or replicating tables across one or more PostgreSQL nodes. Citus could be used either on Azure cloud, or since the &lt;a href="https://github.com/citusdata/citus"&gt;Citus database extension&lt;/a&gt; is fully open source, you can &lt;a href="/download/"&gt;download&lt;/a&gt; and install Citus anywhere you like.&lt;/p&gt;

&lt;p&gt;A typical Citus cluster consists of a special node called the coordinator and a few worker nodes. Applications usually send their queries to the Citus coordinator node, which relays them to worker nodes and accumulates the results. (Unless of course you&amp;rsquo;re using the Citus &lt;a href="/blog/2022/06/17/citus-11-goes-fully-open-source/#any-node"&gt;query from any node&lt;/a&gt; feature, an optional feature introduced in Citus 11, in which case the queries can be routed to any of the nodes in the cluster.)&lt;/p&gt;

&lt;p&gt;Anyway, one of the most frequently asked questions is: &amp;ldquo;How does Citus handle failures of the coordinator or worker nodes? What&amp;rsquo;s the HA story?&amp;rdquo;&lt;/p&gt;

&lt;p&gt;And with the exception of when you&amp;rsquo;re &lt;a href="https://learn.microsoft.com/azure/cosmos-db/postgresql/concepts-high-availability"&gt;running Citus in a managed service&lt;/a&gt; in the cloud, the answer so far was not great&amp;mdash;just use PostgreSQL streaming replication to run the coordinator and workers with HA, and it was up to you to handle failover.&lt;/p&gt;

&lt;p&gt;In this blog post, you&amp;rsquo;ll learn how Patroni 3.0+ can be used to deploy a highly available Citus database cluster&amp;mdash;just by adding a few lines to the Patroni configuration file.  Let&amp;rsquo;s take a walk through these topics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="#patroni"&gt;What is Patroni?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#citus-support"&gt;Introducing Citus support in Patroni 3.0&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#distributed-cluster"&gt;Our first distributed Citus cluster with Patroni&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#ha-switchover"&gt;Our first HA switchover with Patroni and Citus&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#future-plans"&gt;Future plans &amp;amp; possible improvements&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#conclusion"&gt;Conclusion: Patroni and Citus together for distributed PostgreSQL HA&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="meanings-of-cluster"&gt;Terminology clarification: the many competing meanings of &amp;ldquo;cluster&amp;rdquo;&lt;/h2&gt;

&lt;p&gt;In the Postgres world, the word &amp;ldquo;cluster&amp;rdquo; is used in many different contexts, and so it is easy to get confused. Here&amp;rsquo;s how we&amp;rsquo;re using the term:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Database cluster&lt;/strong&gt; (the SQL standard calls it the &lt;code&gt;catalog cluster&lt;/code&gt;): a collection of databases that is managed by a single instance of a running database server.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;PostgreSQL cluster&lt;/strong&gt; (or Patroni cluster): multiple database instances, a primary with a few standby nodes, usually connected via streaming replication.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Citus cluster&lt;/strong&gt;: a distributed set of database nodes, a formation of one or more PostgreSQL clusters logically connected using the Citus extension to Postgres.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Kubernetes cluster&lt;/strong&gt;: a set of node machines for running containerized applications. Kubernetes can be used to deploy Citus or PostgreSQL clusters at scale.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In this blog post we will mostly be talking about distributed Citus clusters and PostgreSQL clusters managed by Patroni (or Patroni clusters).&lt;/p&gt;

&lt;h2 id="patroni"&gt;What is Patroni? (skip this if you already know)&lt;/h2&gt;

&lt;p&gt;Patroni is an open-source tool that helps deploy, manage, and monitor highly available PostgreSQL clusters using physical streaming replication. The Patroni daemon runs on all nodes of the PostgreSQL cluster, monitors the state of the Postgres process(es), and publishes the state to a Distributed Key-Value Store.&lt;/p&gt;

&lt;p&gt;There are a few properties required from the Distributed Key-Value (Configuration) Store (DCS):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It must implement a consensus algorithm, such as Raft, Paxos, or Zab&lt;/li&gt;
&lt;li&gt;It must support Compare-And-Set operations&lt;/li&gt;
&lt;li&gt;It should have Sessions/Lease/TTL mechanisms to expire keys&lt;/li&gt;
&lt;li&gt;It is nice if it provides a WATCH API to subscribe to and receive changes of certain keys&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The last two properties are nice to have, but Patroni can still work without them, while the first two are mandatory.&lt;/p&gt;
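&lt;p&gt;To make the Compare-And-Set requirement concrete, here is a minimal sketch in plain Python; a dict stands in for the DCS, and a real store such as etcd performs this check atomically on the server side. This is an illustration, not Patroni&amp;rsquo;s actual code:&lt;/p&gt;

```python
# Minimal illustration of Compare-And-Set (CAS) semantics. The DCS
# performs this test atomically on the server, so two nodes can never
# both succeed in creating the same key.

store = {}

def compare_and_set(key, expected, new_value):
    """Write new_value only if the key currently holds expected.

    expected=None means "create the key only if it does not exist yet",
    which is exactly how a leader key would be acquired.
    """
    if store.get(key) != expected:
        return False  # someone else changed (or created) the key first
    store[key] = new_value
    return True

# Two nodes race to create the same key: only the first CAS succeeds.
won_a = compare_and_set("/leader", None, "node-a")
won_b = compare_and_set("/leader", None, "node-b")
```

&lt;p&gt;Without this atomic precondition, two standbys could both believe they created the key and both promote, which is exactly the split-brain scenario the DCS is there to prevent.&lt;/p&gt;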

&lt;p&gt;Patroni supports the following DCS: etcd, Consul, ZooKeeper, and Kubernetes API:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Consul and etcd implement Raft protocol&lt;/li&gt;
&lt;li&gt;ZooKeeper implements Zab&lt;/li&gt;
&lt;li&gt;Kubernetes API is backed by etcd&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every node of the Patroni/PostgreSQL cluster maintains a &lt;code&gt;member&lt;/code&gt; key in the DCS with its own name. The value of the &lt;code&gt;member&lt;/code&gt; key contains the address of the node (host and port) and the state of PostgreSQL, such as its role (&lt;code&gt;primary&lt;/code&gt; or &lt;code&gt;standby&lt;/code&gt;), current Postgres &lt;a href="https://www.postgresql.org/docs/current/datatype-pg-lsn.html"&gt;LSN&lt;/a&gt;, &lt;a href="https://patroni.readthedocs.io/en/latest/SETTINGS.html#tags"&gt;tags&lt;/a&gt;, and so on. Member keys allow automatic discovery of all nodes of the given Patroni/PostgreSQL cluster.&lt;/p&gt;
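&lt;p&gt;For illustration, the value of a &lt;code&gt;member&lt;/code&gt; key is a small JSON document along these lines (the field names below are a plausible sketch, not the exact Patroni payload):&lt;/p&gt;

```python
import json

# A sketch of the kind of JSON a Patroni node publishes under its
# member key in the DCS. Field names here are illustrative; the exact
# payload is an implementation detail of Patroni.
member = {
    "conn_url": "postgres://172.19.0.8:5432/postgres",  # how to reach Postgres
    "api_url": "http://172.19.0.8:8008/patroni",        # Patroni REST API
    "state": "running",
    "role": "replica",
    "timeline": 1,
    "tags": {"nofailover": False},
}

# The value is stored as a JSON string and parsed back by other nodes
# during member discovery.
payload = json.dumps(member)
discovered = json.loads(payload)
```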

&lt;p&gt;Patroni running next to the Postgres primary also maintains the &lt;code&gt;/leader&lt;/code&gt; key in DCS.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;/leader&lt;/code&gt; key has a limited TTL and expires if it doesn&amp;rsquo;t receive regular updates.&lt;/li&gt;
&lt;li&gt;If the &lt;code&gt;/leader&lt;/code&gt; key is missing, standby nodes start a leader race, each trying to create the new &lt;code&gt;/leader&lt;/code&gt; key.&lt;/li&gt;
&lt;li&gt;Patroni on the node that created the new &lt;code&gt;/leader&lt;/code&gt; key promotes Postgres to primary, while Patroni on the remaining standby nodes reconfigures Postgres to stream from the new primary.&lt;/li&gt;
&lt;li&gt;Importantly, all operations on the &lt;code&gt;/leader&lt;/code&gt; key are protected with Compare-And-Set.&lt;/li&gt;
&lt;/ul&gt;
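&lt;p&gt;The &lt;code&gt;/leader&lt;/code&gt; key life cycle described above can be sketched as a toy leader race in plain Python (an in-memory store with simulated TTL expiry; a real DCS enforces TTL and Compare-And-Set on the server side):&lt;/p&gt;

```python
# Toy model of the /leader key life cycle: a TTL-bound key acquired via
# Compare-And-Set. A real DCS expires keys and checks preconditions on
# the server side; here both are simulated in-process.

store = {}   # key -> (holder, expires_at)
TTL = 30.0   # seconds; the leader must refresh the key before this runs out

def leader_alive(now):
    entry = store.get("/leader")
    return entry is not None and entry[1] > now

def try_acquire_leader(node, now):
    """Leader race: create /leader only if it is missing or expired."""
    if leader_alive(now):
        return False
    store["/leader"] = (node, now + TTL)
    return True

def refresh_leader(node, now):
    """Only the current holder may extend the key's TTL."""
    entry = store.get("/leader")
    if entry is None or entry[0] != node:
        return False
    store["/leader"] = (node, now + TTL)
    return True

first = try_acquire_leader("coord1", 0.0)               # wins the race
second = try_acquire_leader("coord2", 0.0)              # loses: key is held
refreshed = refresh_leader("coord1", 10.0)              # leader keeps the key alive
expired = try_acquire_leader("coord2", 10.0 + TTL + 1)  # old key has expired
```

&lt;p&gt;In the real system, the holder of &lt;code&gt;/leader&lt;/code&gt; is the Patroni instance next to the primary, and losing the key (for example, after a crash) is what triggers a new race.&lt;/p&gt;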

&lt;p&gt;Patroni on standby nodes uses the &lt;code&gt;/leader&lt;/code&gt; and &lt;code&gt;member&lt;/code&gt; keys to figure out which node is the primary and configures the Postgres instance it governs to replicate from the primary. &lt;strong&gt;Besides automatic failover for HA, Patroni helps automate all kinds of management operations:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Initializes new nodes using pg_basebackup or 3rd-party backup tools like pgBackRest, WAL-G/WAL-E, Barman, and so on&lt;/li&gt;
&lt;li&gt;Handles synchronous replication requirements&lt;/li&gt;
&lt;li&gt;Supports running &lt;a href="https://www.postgresql.org/docs/current/app-pgrewind.html"&gt;pg_rewind&lt;/a&gt; after failover to join the old primary to the Postgres cluster as a standby&lt;/li&gt;
&lt;li&gt;Helps with PITR by initializing new PostgreSQL clusters from backup instead of using initdb&lt;/li&gt;
&lt;li&gt;And many more&lt;/li&gt;
&lt;/ul&gt;

&lt;figure&gt;
&lt;img src="https://cdn.citusdata.com/images/blog/fig1-diagram-typical-patroni-architecture.png" alt="Figure 1: typical Patroni architecture" loading="lazy" width="850" height="448" style="box-shadow:0 0 12px rgba(0,0,0,.2)" /&gt;
&lt;figcaption&gt;&lt;strong&gt;Figure 1:&lt;/strong&gt; A typical deployment of PostgreSQL HA cluster managed by Patroni with etcd as Distributed Key-Value Store and HAProxy to provide a single endpoint for client connections to the primary node.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;h2 id="citus-support"&gt;Introducing Citus support in Patroni 3.0&lt;/h2&gt;

&lt;p&gt;Patroni 3.0 brings official Citus support to Patroni. While it was already possible to run Patroni with Citus before Patroni 3.0 (thanks to the flexibility and extensibility of Patroni!), the 3.0 release makes the integration with Citus for HA much better and easier to use.&lt;/p&gt;

&lt;p&gt;Patroni relies on the DCS to discover nodes of the PostgreSQL cluster and configure streaming replication. As already explained in the &amp;ldquo;Terminology clarification&amp;rdquo; section, a Citus cluster is just a set of PostgreSQL clusters that are logically connected using the Citus extension to Postgres. Hence, it was logical to extend Patroni so it could discover not only the nodes of the given Patroni/PostgreSQL cluster, but also the nodes of a Citus cluster, such as when a new Citus worker node is added. As Citus nodes are discovered, they are added to the &lt;a href="https://docs.citusdata.com/en/latest/develop/api_metadata.html"&gt;Citus coordinator pg_dist_node metadata&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;There are only a few simple rules one should follow to enable Citus support in Patroni:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Scope (cluster name)&lt;/strong&gt;: The &lt;a href="https://patroni.readthedocs.io/en/latest/SETTINGS.html#global-universal"&gt;scope&lt;/a&gt; must be the same for all Citus nodes&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Superuser username/password&lt;/strong&gt;: The superuser username/password should preferably be the same on coordinator and worker nodes, or you should configure superuser &lt;a href="https://docs.citusdata.com/en/latest/admin_guide/cluster_management.html#connection-management"&gt;connections&lt;/a&gt; between nodes using client certificates. Of course, &lt;a href="https://www.postgresql.org/docs/current/auth-pg-hba-conf.html"&gt;pg_hba.conf&lt;/a&gt; should allow superuser connections across all nodes.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;REST API access&lt;/strong&gt;: Patroni &lt;a href="https://patroni.readthedocs.io/en/latest/SETTINGS.html#rest-api"&gt;REST API&lt;/a&gt; access should be allowed from worker nodes to the coordinator. E.g., credentials should be the same and if configured, client certificates from worker nodes must be accepted by the coordinator.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Adding Citus to the Patroni configuration file&lt;/strong&gt;: Add the following section to the &lt;code&gt;patroni.yaml&lt;/code&gt;. The full example of the Patroni configuration file is available on &lt;a href="https://github.com/zalando/patroni/blob/master/postgres0.yml"&gt;GitHub&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;citus&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;group&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;X&lt;/span&gt;  &lt;span class="c1"&gt;# 0 for coordinator and 1, 2, 3, etc for workers&lt;/span&gt;
  &lt;span class="na"&gt;database&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;citus&lt;/span&gt;  &lt;span class="c1"&gt;# must be the same on all nodes&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="citus:
  group: X  # 0 for coordinator and 1, 2, 3, etc for workers
  database: citus  # must be the same on all nodes
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;

&lt;p&gt;That&amp;rsquo;s it! Now you can start Patroni and enjoy Citus integration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Patroni will handle all of the following:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The Citus extension will be automatically added to &lt;a href="https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-SHARED-PRELOAD-LIBRARIES"&gt;shared_preload_libraries&lt;/a&gt; (to the first place in the list!)&lt;/li&gt;
&lt;li&gt;If &lt;a href="https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-MAX-PREPARED-TRANSACTIONS"&gt;max_prepared_transactions&lt;/a&gt; is not explicitly set in the &lt;a href="https://patroni.readthedocs.io/en/latest/dynamic_configuration.html"&gt;global dynamic configuration&lt;/a&gt; Patroni will automatically set it to 2*&lt;a href="https://www.postgresql.org/docs/15/runtime-config-connection.html#GUC-MAX-CONNECTIONS"&gt;max_connections&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;citus.database&lt;/code&gt; will be automatically created followed by &lt;code&gt;CREATE EXTENSION citus;&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Current superuser credentials (from patroni.yaml) will be added to the &lt;a href="https://docs.citusdata.com/en/latest/develop/api_metadata.html#connection-credentials-table"&gt;pg_dist_authinfo&lt;/a&gt; table to allow cross-node communication. Do not forget to update them if later you decide to change superuser &lt;a href="https://patroni.readthedocs.io/en/latest/SETTINGS.html#postgresql"&gt;username/password/sslcert/sslkey&lt;/a&gt;!&lt;/li&gt;
&lt;li&gt;The coordinator primary node will automatically discover worker primary nodes and add them to the &lt;a href="https://docs.citusdata.com/en/latest/develop/api_metadata.html#worker-node-table"&gt;pg_dist_node&lt;/a&gt; table using the &lt;a href="https://docs.citusdata.com/en/latest/develop/api_udf.html#citus-add-node"&gt;citus_add_node()&lt;/a&gt; function.&lt;/li&gt;
&lt;li&gt;Patroni will also maintain &lt;a href="https://docs.citusdata.com/en/latest/develop/api_metadata.html#worker-node-table"&gt;pg_dist_node&lt;/a&gt; in case failover/switchover on the coordinator or worker clusters occurs.&lt;/li&gt;
&lt;li&gt;Last but not least, Patroni will pause client connections on the coordinator primary when a controlled switchover is executed on a worker cluster, so that clients will not get any visible errors.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The diagram below is an example of a Citus HA deployment with Patroni 3.0.0.&lt;/p&gt;

&lt;figure&gt;
&lt;picture&gt;
&lt;source srcset="https://cdn.citusdata.com/images/blog/fig2-diagram-new-patroni-architecture.webp" type="image/webp"&gt;
&lt;img src="https://cdn.citusdata.com/images/blog/fig2-diagram-new-patroni-architecture.png" alt="Figure 2: new Patroni architecture" loading="lazy" width="850" height="671" style="box-shadow:0 0 12px rgba(0,0,0,.2)" /&gt;
&lt;/picture&gt;
&lt;figcaption&gt;&lt;strong&gt;Figure 2:&lt;/strong&gt; Patroni on the coordinator node automatically discovers and registers Citus worker nodes in cluster metadata. All connections across distributed Citus nodes work without middleware like HAProxy, thereby reducing complexity and infrastructure maintenance cost. The second HAproxy instance (on the right side) is offered for the scenario where your application is taking advantage of the optional &amp;ldquo;query from any node&amp;rdquo; feature in Citus, sometimes used to increase parallelization and throughput.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;h2 id="distributed-cluster"&gt;Our first distributed Citus cluster with Patroni&lt;/h2&gt;

&lt;p&gt;To deploy our test cluster locally we will use &lt;a href="https://www.docker.com/"&gt;docker&lt;/a&gt; and &lt;a href="https://pypi.org/project/docker-compose/"&gt;docker-compose&lt;/a&gt;. The &lt;a href="https://github.com/zalando/patroni/blob/master/Dockerfile.citus"&gt;Dockerfile.citus&lt;/a&gt; is in the &lt;a href="https://github.com/zalando/patroni"&gt;Patroni repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;First, we need to clone the Patroni repo and build the &lt;code&gt;patroni-citus&lt;/code&gt; Docker image:&lt;/p&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight bash"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;git clone https://github.com/zalando/patroni.git
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;patroni
&lt;span class="nv"&gt;$ &lt;/span&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; patroni-citus &lt;span class="nt"&gt;-f&lt;/span&gt; Dockerfile.citus &lt;span class="nb"&gt;.&lt;/span&gt;
Sending build context to Docker daemon  573.6MB
Step 1/36 : ARG &lt;span class="nv"&gt;PG_MAJOR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;15
… skip intermediate logs
Step 36/36 : ENTRYPOINT &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"/bin/sh"&lt;/span&gt;, &lt;span class="s2"&gt;"/entrypoint.sh"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="nt"&gt;---&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Running &lt;span class="k"&gt;in &lt;/span&gt;1933967fcb58
Removing intermediate container 1933967fcb58
&lt;span class="nt"&gt;---&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; 0eea66f3c4c7
Successfully built 0eea66f3c4c7
Successfully tagged patroni-citus:latest
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="$ git clone https://github.com/zalando/patroni.git
$ cd patroni
$ docker build -t patroni-citus -f Dockerfile.citus .
Sending build context to Docker daemon  573.6MB
Step 1/36 : ARG PG_MAJOR=15
… skip intermediate logs
Step 36/36 : ENTRYPOINT [&amp;quot;/bin/sh&amp;quot;, &amp;quot;/entrypoint.sh&amp;quot;]
---&amp;gt; Running in 1933967fcb58
Removing intermediate container 1933967fcb58
---&amp;gt; 0eea66f3c4c7
Successfully built 0eea66f3c4c7
Successfully tagged patroni-citus:latest
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;

&lt;p&gt;Once the image is ready, we will deploy the stack with:&lt;/p&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight bash"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker-compose &lt;span class="nt"&gt;-f&lt;/span&gt; docker-compose-citus.yml up &lt;span class="nt"&gt;-d&lt;/span&gt;
Creating demo-etcd1   ... &lt;span class="k"&gt;done
&lt;/span&gt;Creating demo-work1-2 ... &lt;span class="k"&gt;done
&lt;/span&gt;Creating demo-coord2  ... &lt;span class="k"&gt;done
&lt;/span&gt;Creating demo-coord3  ... &lt;span class="k"&gt;done
&lt;/span&gt;Creating demo-work1-1 ... &lt;span class="k"&gt;done
&lt;/span&gt;Creating demo-etcd2   ... &lt;span class="k"&gt;done
&lt;/span&gt;Creating demo-work2-2 ... &lt;span class="k"&gt;done
&lt;/span&gt;Creating demo-coord1  ... &lt;span class="k"&gt;done
&lt;/span&gt;Creating demo-work2-1 ... &lt;span class="k"&gt;done
&lt;/span&gt;Creating demo-haproxy ... &lt;span class="k"&gt;done
&lt;/span&gt;Creating demo-etcd3   ... &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="$ docker-compose -f docker-compose-citus.yml up -d
Creating demo-etcd1   ... done
Creating demo-work1-2 ... done
Creating demo-coord2  ... done
Creating demo-coord3  ... done
Creating demo-work1-1 ... done
Creating demo-etcd2   ... done
Creating demo-work2-2 ... done
Creating demo-coord1  ... done
Creating demo-work2-1 ... done
Creating demo-haproxy ... done
Creating demo-etcd3   ... done
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;

&lt;p&gt;Now we can verify that containers are up and running:&lt;/p&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight bash"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker ps
CONTAINER ID   IMAGE            COMMAND                  CREATED              STATUS              PORTS                              NAMES
e7740f00796d   patroni-citus    &lt;span class="s2"&gt;"/bin/sh /entrypoint…"&lt;/span&gt;   About a minute ago   Up About a minute                                      demo-etcd2
8a3903ca40a7   patroni-citus    &lt;span class="s2"&gt;"/bin/sh /entrypoint…"&lt;/span&gt;   About a minute ago   Up About a minute                                      demo-etcd3
3d384bf74315   patroni-citus    &lt;span class="s2"&gt;"/bin/sh /entrypoint…"&lt;/span&gt;   About a minute ago   Up About a minute   0.0.0.0:5000-5001-&amp;gt;5000-5001/tcp   demo-haproxy
2f6c9e4c63b8   patroni-citus    &lt;span class="s2"&gt;"/bin/sh /entrypoint…"&lt;/span&gt;   About a minute ago   Up About a minute                                      demo-work2-1
4bd35bfdba58   patroni-citus    &lt;span class="s2"&gt;"/bin/sh /entrypoint…"&lt;/span&gt;   About a minute ago   Up About a minute                                      demo-coord1
8dce43a4f499   patroni-citus    &lt;span class="s2"&gt;"/bin/sh /entrypoint…"&lt;/span&gt;   About a minute ago   Up About a minute                                      demo-work1-1
e76372163464   patroni-citus    &lt;span class="s2"&gt;"/bin/sh /entrypoint…"&lt;/span&gt;   About a minute ago   Up About a minute                                      demo-work2-2
0de7bf5044fd   patroni-citus    &lt;span class="s2"&gt;"/bin/sh /entrypoint…"&lt;/span&gt;   About a minute ago   Up About a minute                                      demo-coord3
633f9700e86f   patroni-citus    &lt;span class="s2"&gt;"/bin/sh /entrypoint…"&lt;/span&gt;   About a minute ago   Up About a minute                                      demo-coord2
f50bb1e1d6e7   patroni-citus    &lt;span class="s2"&gt;"/bin/sh /entrypoint…"&lt;/span&gt;   About a minute ago   Up About a minute                                      demo-etcd1
03bd34403ac2   patroni-citus    &lt;span class="s2"&gt;"/bin/sh /entrypoint…"&lt;/span&gt;   About a minute ago   Up About a minute                                      demo-work1-2
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="$ docker ps
CONTAINER ID   IMAGE            COMMAND                  CREATED              STATUS              PORTS                              NAMES
e7740f00796d   patroni-citus    &amp;quot;/bin/sh /entrypoint…&amp;quot;   About a minute ago   Up About a minute                                      demo-etcd2
8a3903ca40a7   patroni-citus    &amp;quot;/bin/sh /entrypoint…&amp;quot;   About a minute ago   Up About a minute                                      demo-etcd3
3d384bf74315   patroni-citus    &amp;quot;/bin/sh /entrypoint…&amp;quot;   About a minute ago   Up About a minute   0.0.0.0:5000-5001-&amp;gt;5000-5001/tcp   demo-haproxy
2f6c9e4c63b8   patroni-citus    &amp;quot;/bin/sh /entrypoint…&amp;quot;   About a minute ago   Up About a minute                                      demo-work2-1
4bd35bfdba58   patroni-citus    &amp;quot;/bin/sh /entrypoint…&amp;quot;   About a minute ago   Up About a minute                                      demo-coord1
8dce43a4f499   patroni-citus    &amp;quot;/bin/sh /entrypoint…&amp;quot;   About a minute ago   Up About a minute                                      demo-work1-1
e76372163464   patroni-citus    &amp;quot;/bin/sh /entrypoint…&amp;quot;   About a minute ago   Up About a minute                                      demo-work2-2
0de7bf5044fd   patroni-citus    &amp;quot;/bin/sh /entrypoint…&amp;quot;   About a minute ago   Up About a minute                                      demo-coord3
633f9700e86f   patroni-citus    &amp;quot;/bin/sh /entrypoint…&amp;quot;   About a minute ago   Up About a minute                                      demo-coord2
f50bb1e1d6e7   patroni-citus    &amp;quot;/bin/sh /entrypoint…&amp;quot;   About a minute ago   Up About a minute                                      demo-etcd1
03bd34403ac2   patroni-citus    &amp;quot;/bin/sh /entrypoint…&amp;quot;   About a minute ago   Up About a minute                                      demo-work1-2
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;

&lt;p&gt;In total we have 11 containers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;three containers with etcd (forming a three-node etcd cluster),&lt;/li&gt;
&lt;li&gt;seven containers with Patroni+PostgreSQL+Citus (three coordinator nodes, and two worker clusters with two nodes each), and&lt;/li&gt;
&lt;li&gt;one container with HAProxy.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The HAProxy listens on ports 5000 (which connects to the Citus coordinator primary) and 5001 (which load-balances between worker primary nodes).&lt;/p&gt;

&lt;p&gt;In a few seconds, our Citus cluster will be up and running. We can verify it using the &lt;code&gt;patronictl&lt;/code&gt; tool from the &lt;code&gt;demo-haproxy&lt;/code&gt; container:&lt;/p&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight bash"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-ti&lt;/span&gt; demo-haproxy bash
postgres@haproxy:~&lt;span class="nv"&gt;$ &lt;/span&gt;patronictl list
+ Citus cluster: demo &lt;span class="nt"&gt;---------&lt;/span&gt;+--------------+---------+----+-----------+
| Group | Member  | Host       | Role         | State   | TL | Lag &lt;span class="k"&gt;in &lt;/span&gt;MB |
+-------+---------+------------+--------------+---------+----+-----------+
|     0 | coord1  | 172.19.0.8 | Sync Standby | running |  1 |         0 |
|     0 | coord2  | 172.19.0.7 | Leader       | running |  1 |           |
|     0 | coord3  | 172.19.0.6 | Replica      | running |  1 |         0 |
|     1 | work1-1 | 172.19.0.5 | Sync Standby | running |  1 |         0 |
|     1 | work1-2 | 172.19.0.2 | Leader       | running |  1 |           |
|     2 | work2-1 | 172.19.0.9 | Sync Standby | running |  1 |         0 |
|     2 | work2-2 | 172.19.0.4 | Leader       | running |  1 |           |
+-------+---------+------------+--------------+---------+----+-----------+
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="$ docker exec -ti demo-haproxy bash
postgres@haproxy:~$ patronictl list
+ Citus cluster: demo ---------+--------------+---------+----+-----------+
| Group | Member  | Host       | Role         | State   | TL | Lag in MB |
+-------+---------+------------+--------------+---------+----+-----------+
|     0 | coord1  | 172.19.0.8 | Sync Standby | running |  1 |         0 |
|     0 | coord2  | 172.19.0.7 | Leader       | running |  1 |           |
|     0 | coord3  | 172.19.0.6 | Replica      | running |  1 |         0 |
|     1 | work1-1 | 172.19.0.5 | Sync Standby | running |  1 |         0 |
|     1 | work1-2 | 172.19.0.2 | Leader       | running |  1 |           |
|     2 | work2-1 | 172.19.0.9 | Sync Standby | running |  1 |         0 |
|     2 | work2-2 | 172.19.0.4 | Leader       | running |  1 |           |
+-------+---------+------------+--------------+---------+----+-----------+
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;

&lt;p&gt;Now, let&amp;#39;s connect to the coordinator primary via &lt;code&gt;HAProxy&lt;/code&gt; and verify that the Citus extension was created and that the worker nodes are registered in the coordinator metadata:&lt;/p&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="n"&gt;postgres&lt;/span&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;haproxy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;~&lt;/span&gt;&lt;span class="err"&gt;$&lt;/span&gt; &lt;span class="n"&gt;psql&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;h&lt;/span&gt; &lt;span class="n"&gt;localhost&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="mi"&gt;5000&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;U&lt;/span&gt; &lt;span class="n"&gt;postgres&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;d&lt;/span&gt; &lt;span class="n"&gt;citus&lt;/span&gt;
&lt;span class="n"&gt;Password&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="k"&gt;user&lt;/span&gt; &lt;span class="n"&gt;postgres&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;postgres&lt;/span&gt;
&lt;span class="n"&gt;psql&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Debian&lt;/span&gt; &lt;span class="mi"&gt;15&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pgdg110&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="n"&gt;SSL&lt;/span&gt; &lt;span class="k"&gt;connection&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;protocol&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;TLSv1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cipher&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;TLS_AES_256_GCM_SHA384&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;compression&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;off&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;Type&lt;/span&gt; &lt;span class="nv"&gt;"help"&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;help&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;

&lt;span class="n"&gt;citus&lt;/span&gt;&lt;span class="o"&gt;=#&lt;/span&gt; &lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="n"&gt;dx&lt;/span&gt;
                    &lt;span class="n"&gt;List&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="n"&gt;installed&lt;/span&gt; &lt;span class="n"&gt;extensions&lt;/span&gt;
     &lt;span class="n"&gt;Name&lt;/span&gt;      &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;Version&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;   &lt;span class="k"&gt;Schema&lt;/span&gt;   &lt;span class="o"&gt;|&lt;/span&gt;         &lt;span class="n"&gt;Description&lt;/span&gt;
&lt;span class="c1"&gt;---------------+---------+------------+------------------------------&lt;/span&gt;
&lt;span class="n"&gt;citus&lt;/span&gt;          &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;11&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;  &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;pg_catalog&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Citus&lt;/span&gt; &lt;span class="n"&gt;distributed&lt;/span&gt; &lt;span class="k"&gt;database&lt;/span&gt;
&lt;span class="n"&gt;citus_columnar&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;11&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;  &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;pg_catalog&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Citus&lt;/span&gt; &lt;span class="n"&gt;Columnar&lt;/span&gt; &lt;span class="n"&gt;extension&lt;/span&gt;
&lt;span class="n"&gt;plpgsql&lt;/span&gt;        &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;     &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;pg_catalog&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;PL&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;pgSQL&lt;/span&gt; &lt;span class="k"&gt;procedural&lt;/span&gt; &lt;span class="k"&gt;language&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;citus&lt;/span&gt;&lt;span class="o"&gt;=#&lt;/span&gt; &lt;span class="k"&gt;select&lt;/span&gt; &lt;span class="n"&gt;nodeid&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;groupid&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;nodename&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;nodeport&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;noderole&lt;/span&gt;
&lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pg_dist_node&lt;/span&gt; &lt;span class="k"&gt;order&lt;/span&gt; &lt;span class="k"&gt;by&lt;/span&gt; &lt;span class="n"&gt;groupid&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="n"&gt;nodeid&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;groupid&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;  &lt;span class="n"&gt;nodename&lt;/span&gt;  &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;nodeport&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;noderole&lt;/span&gt;
&lt;span class="c1"&gt;-------+---------+------------+----------+----------&lt;/span&gt;
     &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;       &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;172&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;19&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;7&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;     &lt;span class="mi"&gt;5432&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;primary&lt;/span&gt;
     &lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;       &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;172&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;19&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;     &lt;span class="mi"&gt;5432&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;primary&lt;/span&gt;
     &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;       &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;172&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;19&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;     &lt;span class="mi"&gt;5432&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;primary&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="postgres@haproxy:~$ psql -h localhost -p 5000 -U postgres -d citus
Password for user postgres: postgres
psql (15.1 (Debian 15.1-1.pgdg110+1))
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)
Type &amp;quot;help&amp;quot; for help.

citus=# \dx
                    List of installed extensions
     Name      | Version |   Schema   |         Description
---------------+---------+------------+------------------------------
citus          | 11.2-1  | pg_catalog | Citus distributed database
citus_columnar | 11.2-1  | pg_catalog | Citus Columnar extension
plpgsql        | 1.0     | pg_catalog | PL/pgSQL procedural language
(3 rows)

citus=# select nodeid, groupid, nodename, nodeport, noderole
from pg_dist_node order by groupid;
nodeid | groupid |  nodename  | nodeport | noderole
-------+---------+------------+----------+----------
     1 |       0 | 172.19.0.7 |     5432 | primary
     3 |       1 | 172.19.0.2 |     5432 | primary
     2 |       2 | 172.19.0.4 |     5432 | primary
(3 rows)
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;

&lt;p&gt;So far, so good. :)&lt;/p&gt;

&lt;p&gt;In this specific setup, Patroni is configured to use client certificates in addition to passwords for superuser connections between nodes. Since Citus actively uses superuser connections to communicate between nodes, Patroni also took care of configuring authentication parameters via &lt;a href="https://docs.citusdata.com/en/latest/develop/api_metadata.html#connection-credentials-table"&gt;pg_dist_authinfo&lt;/a&gt;:&lt;/p&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="n"&gt;citus&lt;/span&gt;&lt;span class="o"&gt;=#&lt;/span&gt; &lt;span class="k"&gt;select&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pg_dist_authinfo&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="n"&gt;nodeid&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;rolename&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;                                                   &lt;span class="n"&gt;authinfo&lt;/span&gt;
&lt;span class="c1"&gt;-------+----------+--------------------------------------------------------------------------------------------------------------&lt;/span&gt;
     &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;postgres&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;postgres&lt;/span&gt; &lt;span class="n"&gt;sslcert&lt;/span&gt;&lt;span class="o"&gt;=/&lt;/span&gt;&lt;span class="n"&gt;etc&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;ssl&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;certs&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;ssl&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;cert&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;snakeoil&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pem&lt;/span&gt; &lt;span class="n"&gt;sslkey&lt;/span&gt;&lt;span class="o"&gt;=/&lt;/span&gt;&lt;span class="n"&gt;etc&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;ssl&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;private&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;ssl&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;cert&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;snakeoil&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;key&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="k"&gt;row&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="citus=# select * from pg_dist_authinfo;
nodeid | rolename |                                                   authinfo
-------+----------+--------------------------------------------------------------------------------------------------------------
     0 | postgres | password=postgres sslcert=/etc/ssl/certs/ssl-cert-snakeoil.pem sslkey=/etc/ssl/private/ssl-cert-snakeoil.key
(1 row)
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;

&lt;p&gt;Don&amp;rsquo;t be scared by the password you see in the &lt;code&gt;authinfo&lt;/code&gt; field. Why? Because first of all, access to &lt;a href="https://docs.citusdata.com/en/latest/develop/api_metadata.html#connection-credentials-table"&gt;pg_dist_authinfo&lt;/a&gt; is restricted to superusers. Secondly, it is possible to set up &lt;a href="https://docs.citusdata.com/en/latest/admin_guide/cluster_management.html#connection-management"&gt;authentication&lt;/a&gt; using only client certificates, which is actually the recommended way.&lt;/p&gt;
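&lt;p&gt;As a sketch of what such a certificate-only setup could look like (the file paths here are hypothetical examples, not taken from the demo above), a &lt;code&gt;pg_dist_authinfo&lt;/code&gt; row without a &lt;code&gt;password&lt;/code&gt; field might be created like this:&lt;/p&gt;

```sql
-- Hypothetical example: certificate-only authentication, no password.
-- A nodeid of 0 acts as a wildcard matching all nodes.
insert into pg_dist_authinfo (nodeid, rolename, authinfo)
values (0, 'postgres',
        'sslcert=/etc/ssl/certs/client.crt sslkey=/etc/ssl/private/client.key');
```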

&lt;h2 id="ha-switchover"&gt;Our first HA switchover with Patroni and Citus&lt;/h2&gt;

&lt;p&gt;In Postgres HA terminology, and in Patroni terminology, a &amp;ldquo;switchover&amp;rdquo; is an intentional failover. It&amp;rsquo;s something that you do when you have planned maintenance and you need to trigger a failover yourself for some reason.&lt;/p&gt;

&lt;p&gt;Before doing a switchover with Patroni, let&amp;rsquo;s first create a Citus distributed table and start writing some data to it using the &lt;code&gt;\watch&lt;/code&gt; &lt;a href="https://www.postgresql.org/docs/current/app-psql.html"&gt;psql&lt;/a&gt; command:&lt;/p&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="n"&gt;citus&lt;/span&gt;&lt;span class="o"&gt;=#&lt;/span&gt; &lt;span class="k"&gt;create&lt;/span&gt; &lt;span class="k"&gt;table&lt;/span&gt; &lt;span class="n"&gt;my_distributed_table&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="nb"&gt;bigint&lt;/span&gt; &lt;span class="k"&gt;not&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt; &lt;span class="k"&gt;generated&lt;/span&gt; &lt;span class="n"&gt;always&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="k"&gt;identity&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt; &lt;span class="nb"&gt;double&lt;/span&gt; &lt;span class="nb"&gt;precision&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt;
&lt;span class="n"&gt;citus&lt;/span&gt;&lt;span class="o"&gt;=#&lt;/span&gt; &lt;span class="k"&gt;select&lt;/span&gt; &lt;span class="n"&gt;create_distributed_table&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'my_distributed_table'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'id'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
 &lt;span class="n"&gt;create_distributed_table&lt;/span&gt;
&lt;span class="c1"&gt;--------------------------&lt;/span&gt;

&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="k"&gt;row&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;citus&lt;/span&gt;&lt;span class="o"&gt;=#&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;inserted&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="k"&gt;insert&lt;/span&gt; &lt;span class="k"&gt;into&lt;/span&gt; &lt;span class="n"&gt;my_distributed_table&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
     &lt;span class="k"&gt;values&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;random&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="n"&gt;RETURNING&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;now&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="n"&gt;inserted&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="n"&gt;watch&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="citus=# create table my_distributed_table(id bigint not null generated always as identity, value double precision);
CREATE TABLE
citus=# select create_distributed_table(&amp;#39;my_distributed_table&amp;#39;, &amp;#39;id&amp;#39;);
 create_distributed_table
--------------------------

(1 row)

citus=# with inserted as (
    insert into my_distributed_table(value)
     values(random()) RETURNING id
) SELECT now(), id from inserted\watch 0.01
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;

&lt;p&gt;The &lt;code&gt;\watch 0.01&lt;/code&gt; command executes the given query every 10ms, and the query returns the inserted &lt;code&gt;id&lt;/code&gt; plus the current time with microsecond precision, so that we can see how the switchover affects it.&lt;/p&gt;

&lt;p&gt;Meanwhile, in a different terminal, we will initiate a switchover on one of the worker nodes:&lt;/p&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight bash"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-ti&lt;/span&gt; demo-haproxy bash

postgres@haproxy:~&lt;span class="nv"&gt;$ &lt;/span&gt;patronictl switchover
Current cluster topology
+ Citus cluster: demo &lt;span class="nt"&gt;---------&lt;/span&gt;+--------------+---------+----+-----------+
| Group | Member  | Host       | Role         | State   | TL | Lag &lt;span class="k"&gt;in &lt;/span&gt;MB |
+-------+---------+------------+--------------+---------+----+-----------+
|     0 | coord1  | 172.19.0.8 | Sync Standby | running |  1 |         0 |
|     0 | coord2  | 172.19.0.7 | Leader       | running |  1 |           |
|     0 | coord3  | 172.19.0.6 | Replica      | running |  1 |         0 |
|     1 | work1-1 | 172.19.0.5 | Sync Standby | running |  1 |           |
|     1 | work1-2 | 172.19.0.2 | Leader       | running |  1 |         0 |
|     2 | work2-1 | 172.19.0.9 | Sync Standby | running |  1 |         0 |
|     2 | work2-2 | 172.19.0.4 | Leader       | running |  1 |           |
+-------+---------+------------+--------------+---------+----+-----------+
Citus group: 2
Primary &lt;span class="o"&gt;[&lt;/span&gt;work2-2]:
Candidate &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'work2-1'&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;[]&lt;/span&gt;:
When should the switchover take place &lt;span class="o"&gt;(&lt;/span&gt;e.g. 2023-02-06T14:27 &lt;span class="o"&gt;)&lt;/span&gt;  &lt;span class="o"&gt;[&lt;/span&gt;now]:
Are you sure you want to switchover cluster demo, demoting current leader work2-2? &lt;span class="o"&gt;[&lt;/span&gt;y/N]: y
2023-02-06 13:27:56.00644 Successfully switched over to &lt;span class="s2"&gt;"work2-1"&lt;/span&gt;
+ Citus cluster: demo &lt;span class="o"&gt;(&lt;/span&gt;group: 2, 7197024670041272347&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="nt"&gt;------&lt;/span&gt;+
| Member  | Host       | Role    | State   | TL | Lag &lt;span class="k"&gt;in &lt;/span&gt;MB |
+---------+------------+---------+---------+----+-----------+
| work2-1 | 172.19.0.9 | Leader  | running |  1 |           |
| work2-2 | 172.19.0.4 | Replica | stopped |    |   unknown |
+---------+------------+---------+---------+----+-----------+
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="$ docker exec -ti demo-haproxy bash

postgres@haproxy:~$ patronictl switchover
Current cluster topology
+ Citus cluster: demo ---------+--------------+---------+----+-----------+
| Group | Member  | Host       | Role         | State   | TL | Lag in MB |
+-------+---------+------------+--------------+---------+----+-----------+
|     0 | coord1  | 172.19.0.8 | Sync Standby | running |  1 |         0 |
|     0 | coord2  | 172.19.0.7 | Leader       | running |  1 |           |
|     0 | coord3  | 172.19.0.6 | Replica      | running |  1 |         0 |
|     1 | work1-1 | 172.19.0.5 | Sync Standby | running |  1 |           |
|     1 | work1-2 | 172.19.0.2 | Leader       | running |  1 |         0 |
|     2 | work2-1 | 172.19.0.9 | Sync Standby | running |  1 |         0 |
|     2 | work2-2 | 172.19.0.4 | Leader       | running |  1 |           |
+-------+---------+------------+--------------+---------+----+-----------+
Citus group: 2
Primary [work2-2]:
Candidate [&amp;#39;work2-1&amp;#39;] []:
When should the switchover take place (e.g. 2023-02-06T14:27 )  [now]:
Are you sure you want to switchover cluster demo, demoting current leader work2-2? [y/N]: y
2023-02-06 13:27:56.00644 Successfully switched over to &amp;quot;work2-1&amp;quot;
+ Citus cluster: demo (group: 2, 7197024670041272347) ------+
| Member  | Host       | Role    | State   | TL | Lag in MB |
+---------+------------+---------+---------+----+-----------+
| work2-1 | 172.19.0.9 | Leader  | running |  1 |           |
| work2-2 | 172.19.0.4 | Replica | stopped |    |   unknown |
+---------+------------+---------+---------+----+-----------+
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;

&lt;p&gt;Finally, after the switchover is completed, let&amp;rsquo;s check the logs in the first terminal:&lt;/p&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight bash"&gt;&lt;code&gt;Mon Feb  6 13:27:54 2023 &lt;span class="o"&gt;(&lt;/span&gt;every 0.01s&lt;span class="o"&gt;)&lt;/span&gt;

             now              |  &lt;span class="nb"&gt;id&lt;/span&gt;
&lt;span class="nt"&gt;------------------------------&lt;/span&gt;+------
2023-02-06 13:27:54.441635+00 | 1172
&lt;span class="o"&gt;(&lt;/span&gt;1 row&lt;span class="o"&gt;)&lt;/span&gt;

Mon Feb  6 13:27:54 2023 &lt;span class="o"&gt;(&lt;/span&gt;every 0.01s&lt;span class="o"&gt;)&lt;/span&gt;

            now              |  &lt;span class="nb"&gt;id&lt;/span&gt;
&lt;span class="nt"&gt;-----------------------------&lt;/span&gt;+------
2023-02-06 13:27:54.45187+00 | 1173
&lt;span class="o"&gt;(&lt;/span&gt;1 row&lt;span class="o"&gt;)&lt;/span&gt;

Mon Feb  6 13:27:57 2023 &lt;span class="o"&gt;(&lt;/span&gt;every 0.01s&lt;span class="o"&gt;)&lt;/span&gt;

             now              |  &lt;span class="nb"&gt;id&lt;/span&gt;
&lt;span class="nt"&gt;------------------------------&lt;/span&gt;+------
2023-02-06 13:27:57.345054+00 | 1174
&lt;span class="o"&gt;(&lt;/span&gt;1 row&lt;span class="o"&gt;)&lt;/span&gt;

Mon Feb  6 13:27:57 2023 &lt;span class="o"&gt;(&lt;/span&gt;every 0.01s&lt;span class="o"&gt;)&lt;/span&gt;

             now              |  &lt;span class="nb"&gt;id&lt;/span&gt;
&lt;span class="nt"&gt;------------------------------&lt;/span&gt;+------
2023-02-06 13:27:57.351412+00 | 1175
&lt;span class="o"&gt;(&lt;/span&gt;1 row&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="Mon Feb  6 13:27:54 2023 (every 0.01s)

             now              |  id
------------------------------+------
2023-02-06 13:27:54.441635+00 | 1172
(1 row)

Mon Feb  6 13:27:54 2023 (every 0.01s)

            now              |  id
-----------------------------+------
2023-02-06 13:27:54.45187+00 | 1173
(1 row)

Mon Feb  6 13:27:57 2023 (every 0.01s)

             now              |  id
------------------------------+------
2023-02-06 13:27:57.345054+00 | 1174
(1 row)

Mon Feb  6 13:27:57 2023 (every 0.01s)

             now              |  id
------------------------------+------
2023-02-06 13:27:57.351412+00 | 1175
(1 row)
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;

&lt;p&gt;As you may see, before the switchover happened, queries were consistently running every 10ms. Between ids &lt;code&gt;1173&lt;/code&gt; and &lt;code&gt;1174&lt;/code&gt; you may notice a short latency spike of 2893ms (less than 3 seconds). This is how the controlled switchover manifested itself, producing no client errors!&lt;/p&gt;
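&lt;p&gt;The 2893ms figure can be verified directly from the two timestamps in the output above. A quick sanity check using plain Postgres interval arithmetic (independent of Citus):&lt;/p&gt;

```sql
-- Subtracting the timestamps around the switchover gives the pause length.
select timestamptz '2023-02-06 13:27:57.345054+00'
     - timestamptz '2023-02-06 13:27:54.45187+00' as switchover_pause;
--  switchover_pause
-- ------------------
--  00:00:02.893184
```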

&lt;p&gt;After the switchover has finished, we can again check &lt;a href="https://docs.citusdata.com/en/latest/develop/api_metadata.html#worker-node-table"&gt;pg_dist_node&lt;/a&gt;:&lt;/p&gt;
    &lt;div class="highlight"&gt;
      &lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="n"&gt;citus&lt;/span&gt;&lt;span class="o"&gt;=#&lt;/span&gt; &lt;span class="k"&gt;select&lt;/span&gt; &lt;span class="n"&gt;nodeid&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;groupid&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;nodename&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;nodeport&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;noderole&lt;/span&gt;
&lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pg_dist_node&lt;/span&gt; &lt;span class="k"&gt;order&lt;/span&gt; &lt;span class="k"&gt;by&lt;/span&gt; &lt;span class="n"&gt;groupid&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="n"&gt;nodeid&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;groupid&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;  &lt;span class="n"&gt;nodename&lt;/span&gt;  &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;nodeport&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;noderole&lt;/span&gt;
&lt;span class="c1"&gt;-------+---------+------------+----------+----------&lt;/span&gt;
     &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;       &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;172&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;19&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;7&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;     &lt;span class="mi"&gt;5432&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;primary&lt;/span&gt;
     &lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;       &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;172&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;19&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;     &lt;span class="mi"&gt;5432&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;primary&lt;/span&gt;
     &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;       &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;172&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;19&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;9&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;     &lt;span class="mi"&gt;5432&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;primary&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
      &lt;button class="copy-button" data-clipboard-action="copy" data-clipboard-text="citus=# select nodeid, groupid, nodename, nodeport, noderole
from pg_dist_node order by groupid;
nodeid | groupid |  nodename  | nodeport | noderole
-------+---------+------------+----------+----------
     1 |       0 | 172.19.0.7 |     5432 | primary
     3 |       1 | 172.19.0.2 |     5432 | primary
     2 |       2 | 172.19.0.9 |     5432 | primary
(3 rows)
"&gt;Copy&lt;/button&gt;
    &lt;/div&gt;

&lt;p&gt;As you may see, the &lt;code&gt;nodename&lt;/code&gt; for the primary in group 2 was automatically changed by Patroni from &lt;code&gt;172.19.0.4&lt;/code&gt; to &lt;code&gt;172.19.0.9&lt;/code&gt;.&lt;/p&gt;

&lt;h2 id="future-plans"&gt;Future plans &amp; possible improvements&lt;/h2&gt;

&lt;p&gt;The article would not be complete without explaining what further work on Patroni &amp;amp; Citus integration is possible. And there are quite a few options:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Read scaling&lt;/strong&gt;: We could register worker standby nodes in &lt;a href="https://docs.citusdata.com/en/latest/develop/api_metadata.html#worker-node-table"&gt;pg_dist_node&lt;/a&gt; so they could be used for scaling read-only queries.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Connection pooling&lt;/strong&gt;: When communicating between nodes, Citus has an option to use connection pooling. To facilitate this, &lt;a href="https://docs.citusdata.com/en/latest/develop/api_metadata.html#connection-pooling-credentials"&gt;pg_dist_poolinfo&lt;/a&gt; should be automatically populated and kept up to date on failover/switchover.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Multiple databases&lt;/strong&gt;: Currently Patroni only supports clusters with a single Citus-enabled database, but there are users that have more than one.&lt;/li&gt;
&lt;/ol&gt;
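&lt;p&gt;For the connection pooling idea above, the shape of the data involved is simple. A hypothetical &lt;code&gt;pg_dist_poolinfo&lt;/code&gt; entry (the host and port here are made-up examples for a pooler such as PgBouncer sitting in front of a worker) would look like:&lt;/p&gt;

```sql
-- Hypothetical example: route traffic for worker node 2 through a pooler.
insert into pg_dist_poolinfo (nodeid, poolinfo)
values (2, 'host=172.19.0.10 port=6432');
```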

&lt;h2 id="conclusion"&gt;Together, Patroni and Citus give distributed PostgreSQL users a good HA solution&lt;/h2&gt;

&lt;p&gt;Patroni opens a route to automated, fully-declarative, open source Postgres deployments of Citus distributed database clusters with high availability (HA)&amp;mdash;on any imaginable platform. In our examples we have used &lt;code&gt;docker&lt;/code&gt; and &lt;code&gt;docker-compose&lt;/code&gt;, but a real production deployment doesn&amp;rsquo;t require containers.&lt;/p&gt;

&lt;p&gt;Even though Patroni 3.0 supports Citus all the way back to Citus version 10.0, we recommend using the latest versions of &lt;a href="/download"&gt;Citus&lt;/a&gt; and &lt;a href="https://www.postgresql.org/"&gt;PostgreSQL 15&lt;/a&gt; to fully benefit from transparent switchovers and/or restarts of worker nodes. From the &lt;a href="/updates/v11-2/#patroni_support"&gt;Citus 11.2 Updates page&lt;/a&gt;, a.k.a. the release notes page, you can see that:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;The main thing that we improved [for HA in Citus 11.2], is that we now transparently reconnect when we detect that a cached connection to a worker got disconnected while we were not using it.&amp;rdquo;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To get started with Citus and Patroni there are some great docs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.citusdata.com/en"&gt;Citus docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://patroni.readthedocs.io/"&gt;Patroni docs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For Kubernetes lovers we also have good news: please check &lt;a href="https://github.com/zalando/patroni/tree/master/kubernetes#citus-on-k8s"&gt;Citus on Kubernetes&lt;/a&gt; in the Patroni repository. Please keep in mind that this is just an example and not meant for production usage. For real production use, we recommend waiting until Postgres Operators from Crunchy, Zalando, or OnGres start to support Citus.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;This article was originally published on &lt;a href='https://www.citusdata.com/blog/2023/03/06/patroni-3-0-and-citus-scalable-ha-postgres/'&gt;citusdata.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;</content>
  </entry>
</feed>
