Written by Claire Giordano
November 14, 2020
When I gave the kickoff talk in the Postgres devroom at FOSDEM this year, one of the Q&A questions was: “what’s happening with the Citus open source extension to Postgres?” The answer is, a lot. Since FOSDEM, Marco Slot and I have blogged about how Citus 9.2 speeds up large-scale HTAP workloads on Postgres, the Citus 9.3 release notes, and what’s new in Citus 9.4.
Now it’s time to walk through everything new in the Citus 9.5 open source release.
Side-note: If you’re new to Citus and are wondering what it is, the short answer is that Citus is an extension to Postgres that transforms Postgres into a distributed database—distributing your data and your queries across multiple nodes. Citus is available as open source and in the cloud as a managed service on Azure. [Update in October 2022: The Citus managed service on Azure is now known as Azure Cosmos DB for PostgreSQL.]
If you like bullets, there is a much longer bulleted list of 47 fixes and improvements in the CHANGELOG for Citus v9.5.0 in the Citus open source GitHub repo.
A high-level overview of what’s new in Citus 9.5 encompasses these 8 buckets:
1. Support for Postgres 13
2. Adaptive connection management for COPY commands
3. A new undistribute_table function to turn a distributed table back into a local table
4. The ability to load-balance stored procedure calls across reference table replicas
5. More useful EXPLAIN ANALYZE output for distributed queries
6. A round of bug fixes from our hardened release process
7. Removal of the task tracker executor
8. A preview of running a Citus cluster on a single node
Citus 9.5 is the first version of Citus to come out after the Postgres 13 release, so we added support for PostgreSQL 13 into Citus 9.5, which means you can take advantage of all the latest Postgres features.
Some of the notable features in Postgres 13 include the many query performance improvements; index performance improvements (think: faster and smaller indexes); parallel vacuuming of indexes; disk-based hash aggregation, which means fewer out-of-memory errors; and FETCH FIRST 10 WITH TIES. There are also some sweet performance gains in Postgres 13 due to improvements in the Postgres query planner and to partitioning.
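To give a quick taste of one of these, here is what the new WITH TIES clause looks like in Postgres 13 (a minimal sketch, assuming a hypothetical scores table):

-- return the top 10 scores, plus any additional rows tied with the 10th
CREATE TABLE scores (player text, score int);
SELECT player, score
FROM scores
ORDER BY score DESC
FETCH FIRST 10 ROWS WITH TIES;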
Citus 9.5 also supports PostgreSQL 11 and 12.
You can find updated installation instructions on our Citus download page that now include the Citus 9.5 and Postgres 13 packages.
Citus parallelizes commands by opening multiple Postgres connections per Citus worker node and operating on all the shards simultaneously. By using multiple connections, Citus efficiently supports parallelism for any type of Postgres command. The COPY command benefits from the parallelism of Citus in a big way: enabling you to utilize the compute, memory, and network bandwidth of all of the Citus worker nodes, thereby improving COPY performance significantly.
A downside of this approach is that connections in Postgres are a scarce resource—and when your application sends many commands to the Citus distributed database, this can lead to a very large number of connections to the Citus worker nodes.
Citus 9.3 addressed this using a technique called "adaptive connection management", which automatically limits the total number of connections per worker node for distributed Postgres queries.
In Citus 9.5, we have expanded this logic to also work for COPY commands on distributed Citus tables. That means you can now run hundreds of COPY commands simultaneously without the risk of running into connection errors or incurring significant performance penalties.
The even better news is that one of the Postgres committers on our team, Andres Freund, has delivered substantial connection scalability improvements into Postgres 14. However, since those improvements won’t be available until Postgres 14 ships in autumn of 2021, the new adaptive connection management for COPY in Citus 9.5 should prove quite useful to you.
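As a rough sketch of what this looks like in practice (the events table, its columns, and the file path are all hypothetical; citus.max_shared_pool_size is the setting that caps the connections Citus opens per worker):

-- hypothetical example: ingesting into a distributed table with COPY
CREATE TABLE events (event_id bigserial, tenant_id bigint, payload jsonb);
SELECT create_distributed_table('events', 'tenant_id');

-- many of these can run from separate sessions at the same time; Citus 9.5
-- keeps the total connections per worker node within the shared pool limit
COPY events (tenant_id, payload) FROM '/tmp/events_batch_001.csv' WITH (FORMAT csv);

-- the per-worker connection cap that adaptive connection management enforces
SHOW citus.max_shared_pool_size;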
Many of you have requested a way to undistribute a table, and we’re so happy to roll out the new undistribute_table function in Citus 9.5.
When distributing your Postgres database with Citus, one of the important early decisions you need to make is: “what will your distribution column be?” (Some people also refer to the “distribution column” as a “distribution key” or a “sharding key.” If you’re still learning about sharding and still deciding what your distribution column should be, the documentation on choosing the distribution column is a great resource.)
The distribution column is an important choice you need to make when you run create_distributed_table, and yet, when you first get started, it's not always immediately obvious what the best distribution column is.
And because you don’t necessarily need to distribute all of your Postgres tables (you might choose to leave a few of the tables as local Postgres tables on the Citus coordinator node), well, sometimes it’s not immediately obvious whether to distribute a table at all.
So many of you have asked for a Citus function to turn a distributed table back into a local Postgres table (and keep all the data, of course). Sort of like an easy Ctrl-Z for create_distributed_table. As of Citus 9.5, we have now added a function called undistribute_table that turns a distributed table back into a local table.
For instance, you can now change a Citus distribution column (aka the distribution key) in two simple steps:
-- create a table and distribute it by the key column
CREATE TABLE items (user_id bigint, key text, value jsonb not null, primary key (user_id, key));
SELECT create_distributed_table('items', 'key');
-- actually, it's better to distribute by user_id so we can efficiently join with other tables distributed by user_id
SELECT undistribute_table('items');
SELECT create_distributed_table('items', 'user_id');
Stored procedure call delegation is a powerful technique for speeding up stored procedures in multi-tenant applications, because it allows Citus to delegate the whole procedure call to a worker node in a single network round trip.
Our solutions architects discovered it is quite common for stored procedures to operate only on Citus reference tables. And they told us it would be useful to be able to load-balance those procedure calls across all the copies of the reference table(s) on all the worker nodes in a Citus database cluster—which effectively represents a new way of scaling the Citus database.
In the future, we'd like to be able to delegate and load-balance procedure calls automatically, but for now we enabled load balancing by allowing you to co-locate your distributed function with a reference table and set citus.task_assignment_policy to 'round-robin'.
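For the snippet below to run, my_procedure and the reference table need to exist first. Here is a hypothetical definition, written as a SQL function so it can be invoked with SELECT as shown below; the table name, columns, and body are illustrative assumptions, not from the release itself:

-- hypothetical reference table and routine used in the snippet below
CREATE TABLE reference_table (a int, b int);
SELECT create_reference_table('reference_table');

CREATE OR REPLACE FUNCTION my_procedure(x int, y int)
RETURNS bigint
LANGUAGE sql STABLE AS $$
  -- reads only from the reference table, so the call can be load-balanced
  -- across all worker nodes that hold a copy of it
  SELECT count(*) FROM reference_table WHERE a = x AND b = y;
$$;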
SELECT create_distributed_function('my_procedure(int,int)', colocate_with := 'reference_table');
SET citus.task_assignment_policy TO 'round-robin';
SELECT my_procedure(3,4);
With this technique, you can horizontally scale the compute work in the stored procedure as long as the stored procedure only reads from reference tables. The scalability characteristics of this technique are similar to that of read replicas in Postgres, but without all the downsides (e.g. you can still write, and you can still get read-your-writes consistency).
We expect most of you who use Citus will be happy about this new feature.
When you run EXPLAIN ANALYZE for a distributed query, Citus shows the EXPLAIN ANALYZE output it gets when querying one of the shards on the worker node. It could happen that the query is much slower on one of the shards than on others, in which case the EXPLAIN ANALYZE output did not explain much at all in the past.
As of Citus 9.5, we always show the slowest task instead. When you set citus.explain_all_tasks to on, we also sort the tasks by execution time.
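For example, reusing the items table from the undistribute_table example above (a hypothetical query; the actual plan output depends on your data and cluster):

-- show plans for all tasks, sorted by execution time
SET citus.explain_all_tasks TO on;
EXPLAIN ANALYZE
SELECT user_id, count(*)
FROM items
GROUP BY user_id;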
Over the past 6 months, our Citus open source team has significantly improved our procedures for catching bugs early. (1) We formalized our release process based on observations that many bugs were found in the weeks immediately after merging a new feature. (2) We started making better use of automated tools such as SQLancer. And (3) we repurposed existing automated tests to apply to new scenarios. As a result of these efforts—combined with learnings from new Postgres customers using Citus on Azure who were exercising Citus in new ways—we discovered some new and subtle bugs. So we took a pause from our usual release cycle and focused on fixing as many bugs as we could. In the end, we fixed over 25 issues.
The Citus task tracker (executor) is dead, long live repartition joins!
In Citus 9.5, we decided to tackle a refactoring project—and we reduced the size of our code base by removing a component called the task tracker from Citus. The task tracker was based on TaskTracker in Hadoop and was a task scheduler that ran on each worker node. The task tracker executor scheduled complex re-partition JOINs via these task schedulers.
The task tracker approach has some theoretical benefits for handling intermittent failure and avoiding some network round trips. However, the implementation was not well-optimized. And Marco (who has the history on this item) tells me we did not improve it because of an architectural shortcoming: Like Hadoop, task tracker is not suitable for distributed transactions.
In early 2020 in the Citus 9.2 release, we changed our re-partition join logic to start using the new Citus adaptive executor, which encapsulates much of what we had learned over the years about running parallel, distributed transactions across Postgres servers. Hence, the Hadoop-inspired task tracker had become obsolete and it was time to say goodbye!
Removing the task tracker code meant we were able to remove ~7k lines of code from the ~140k lines of C code that make up the Citus open source engine. As a result we have less code to maintain, fewer bugs, faster test execution, a simplified codebase for new developers to learn about when they are onboarding, and faster debugging cycles too.
In the Citus 9.3 blog post, we mentioned that as a side-effect of another feature, Citus distributed tables can now sit entirely on the Citus coordinator node. This means all the shards can sit on the Citus coordinator node and your Postgres queries will just work.
Citus 9.5 fixes a few bugs and adds more automated testing for this feature. We think this ability to run a distributed Citus cluster on a single node could be super useful for testing purposes, and we’d love your feedback about whether we should make this a fully supported feature in the future.
If you want to get a sneak preview, you can run the following SQL commands. After you run these commands, you can create distributed tables on a single server.
-- add coordinator to the metadata with group 0
SELECT master_add_node('coordinator.hostname', 5432, groupid := 0);
-- allow shards to be placed on the coordinator node
SELECT master_set_node_property('coordinator.hostname', 5432, 'shouldhaveshards', true);
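After that, create_distributed_table works as usual on the single node. For instance, with a hypothetical test table:

-- hypothetical example: a distributed table whose shards all live on the coordinator
CREATE TABLE test (key text, value jsonb);
SELECT create_distributed_table('test', 'key');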
Our Citus open source team (aka the “Citus engine team” internally) continues to wow with their ongoing work to keep improving the user experience, the performance, and the reliability of Citus. And there are even more exciting things to come in the next release of Citus.
If you have a data-intensive application (or a multi-tenant SaaS application that is growing fast) and are looking to scale out Postgres, the Citus open source documentation is a really good place to get started and includes tutorials for real-time analytics apps, multi-tenant applications, and time series workloads. And because Citus is open source, it’s easy to download Citus packages to try things out, too. Now with Postgres 13 support!
And be sure to join our Citus Slack to participate in the community discussion about Citus.