Citus Blog

Articles tagged: distributed databases

Citus 10.1 is out! In this latest release to the Citus extension to Postgres, our team focused on improving your user experience. Some of the 10.1 fixes are operational improvements—such as with the shard rebalancer, or with citus_update_node. Some are performance improvements—such as for multi-row INSERTs or with citus_shards. And some are fixes you’ll appreciate if you use Citus with lots of Postgres partitions.

Given that the previous Citus 10 release included a bevy of new features—columnar storage, Citus on a single node, an open-sourced shard rebalancer, new UDFs so you can alter distributed table properties, and the ability to combine Postgres and Citus tables via support for JOINs between local and distributed tables and foreign keys between local and reference tables—we felt that Citus 10.1 needed to prioritize some of our backlog items, the kinds of things that can make your life easier.

This post is your guide to what’s new in Citus 10.1. And if you want to catch up on all the new things in past Citus releases, check out the release notes posts about Citus 10, Citus 9.5, Citus 9.4, Citus 9.3, and Citus 9.2.

Keep reading

If you have a large PostgreSQL database that runs on a single node, eventually the single node’s resources—such as memory, CPU, and disk—may become a bottleneck, and your query responses may become too slow. That is when you may want to use the Citus extension to Postgres to distribute your tables across a cluster of Postgres nodes.

In your large database, Citus will shine for large tables, since the distributed Citus tables will benefit from the memory across all of the nodes in the cluster. But what if your Postgres database also contains some small tables which easily fit into a single node’s memory? You might be wondering: do you need to distribute these smaller tables, even though there wouldn’t be much performance gain from distributing them?

Fortunately, as of the Citus 10 release, you do not have to choose: you can distribute your large tables across a Citus cluster and continue using your smaller tables as local Postgres tables on the Citus coordinator.
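
In practice, that hybrid setup can be as simple as the following sketch (the table and column names here are hypothetical):

    -- Distribute the large table across the cluster,
    -- sharding it on a column of your choice:
    SELECT create_distributed_table('measurements', 'device_id');

    -- The small lookup table, e.g. device_types, stays a regular
    -- Postgres table on the Citus coordinator; nothing to do.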

One of the new features in Citus 10 that enables you to use a hybrid “local+distributed” Postgres database is that you can now JOIN local tables and distributed tables. (The other new Citus 10 feature has to do with foreign keys between local and reference tables.)
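
With that setup, a query like the one below now just works, with Citus handling the JOIN between the local and the distributed table behind the scenes (again, the names are hypothetical):

    -- device_types is a local Postgres table on the coordinator,
    -- measurements is a distributed Citus table:
    SELECT t.name, count(*)
    FROM measurements m
    JOIN device_types t ON m.device_type_id = t.id
    GROUP BY t.name;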

Keep reading

One of the main reasons people use Citus to transform Postgres into a distributed database is that with Citus, you can scale out horizontally while still enjoying PostgreSQL’s great RDBMS features. Whether you’re already a Postgres expert or are new to Postgres, you probably know one of the benefits of using a relational database is to have relations between your tables. And one of the ways you can relate your tables is of course to use foreign keys.

A foreign key ensures referential integrity, which can help you to avoid bugs in applications. For example, a foreign key can be used to ensure that a table of “orders” can only reference customer IDs that exist in the “customers” table.
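
In plain Postgres, that constraint looks something like this (a minimal sketch; the table definitions are made up):

    CREATE TABLE customers (
        id   bigserial PRIMARY KEY,
        name text NOT NULL
    );

    CREATE TABLE orders (
        id          bigserial PRIMARY KEY,
        customer_id bigint NOT NULL REFERENCES customers (id),
        total       numeric
    );

    -- This INSERT is rejected unless customer 42 exists:
    INSERT INTO orders (customer_id, total) VALUES (42, 99.95);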

If you have already heard about Citus 10, you know that Citus 10 gives you more support for hybrid data models, which means that you can easily combine regular Postgres tables with distributed Citus tables to get the best of the single node and distributed Postgres worlds.

This post will walk you through one of the new features in Citus 10: support for foreign keys between local Postgres tables and Citus reference tables.
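
Concretely, here is a sketch of what the feature enables, continuing the hypothetical customers/orders example above:

    -- Turn customers into a Citus reference table, which is
    -- replicated to every node in the cluster:
    SELECT create_reference_table('customers');

    -- orders can remain a local Postgres table on the coordinator,
    -- and as of Citus 10 its foreign key to customers keeps working.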

Keep reading

Citus is an extension to Postgres that lets you distribute your application’s workload across multiple nodes. Whether you are using Citus open source or using Citus as part of a managed Postgres service in the cloud, one of the first things you do when you start using Citus is to distribute your tables. While distributing your Postgres tables, you need to decide on some properties such as the distribution column, shard count, and colocation. And even before you decide on your distribution column (sometimes called a distribution key, or a sharding key), when you create a Postgres table, your table is created with an access method.

Previously you had to decide on these table properties up front and then go with your decision; if you really wanted to change it, you needed to start over. The good news is that in Citus 10, we introduced two new user-defined functions (UDFs) to make it easier for you to make changes to your distributed Postgres tables.
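
As a minimal sketch (the table names are hypothetical), the new UDFs look like this:

    -- Change the shard count of an existing distributed table:
    SELECT alter_distributed_table('events', shard_count := 64);

    -- Or re-shard it on a different distribution column:
    SELECT alter_distributed_table('events', distribution_column := 'tenant_id');

    -- Switch a table to the columnar access method introduced in Citus 10:
    SELECT alter_table_set_access_method('events_2020', 'columnar');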

Keep reading
Marco Slot

Citus Talk at CMU: Distributed PostgreSQL as an Extension

By Marco Slot | April 10, 2021

Last month we released Citus 10, and we've received an overwhelming amount of positive feedback on the new columnar compression and single-node Citus features, as well as the news that we’ve open sourced the shard rebalancer.

The new and exciting Citus 10 features are bringing in lots of new users of Citus open source and the Citus database service on Azure. And many of you are asking:

Keep reading

One of the main reasons people use the Citus extension for Postgres is to distribute the data in Postgres tables across multiple nodes. Citus does this by splitting the original Postgres table into multiple smaller tables and putting these smaller tables on different nodes. The process of splitting bigger tables into smaller ones is called sharding—and these smaller Postgres tables are called “shards”. Citus then allows you to query the shards as if they were still a single Postgres table.
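
If you’re curious where your shards ended up, the citus_shards view shows each shard, the node it lives on, and its size (a sketch, assuming a distributed table named measurements):

    -- List the shards behind a distributed table and their placements:
    SELECT shardid, nodename, shard_size
    FROM citus_shards
    WHERE table_name = 'measurements'::regclass;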

One of the big changes in Citus 10—in addition to adding columnar storage, and the new ability to shard Postgres on a single Citus node—is that we open sourced the shard rebalancer.

Yes, that’s right, we have open sourced the shard rebalancer! The Citus 10 shard rebalancer gives you an easy way to rebalance shards across your cluster and helps you avoid data hotspots over time. Let’s dig into the what and the how.
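
In its simplest form, rebalancing is a single function call, and you can preview the planned shard moves before running them:

    -- See which shard moves the rebalancer would perform:
    SELECT * FROM get_rebalance_table_shards_plan();

    -- Move shards to even out the data across the cluster:
    SELECT rebalance_table_shards();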

Keep reading

Once you start using the Citus extension to distribute your Postgres database, you may never want to go back. But what if you just want to experiment with Citus and want the comfort of knowing you can go back? Well, as of Citus 9.5, there is a new undistribute_table() function that makes it easy for you to, well, revert a distributed table back to being a regular Postgres table.
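
Using it is a one-liner (a sketch, assuming a distributed table named events):

    -- Pull the data back to the coordinator and turn events
    -- back into a regular, local Postgres table:
    SELECT undistribute_table('events');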

If you are familiar with Citus, you know that Citus is an open source extension to Postgres that distributes your data (and queries) to multiple machines in a cluster—thereby parallelizing your workload and scaling your Postgres database horizontally. When you start using Citus—whether you’re using Citus open source or whether you’re using Citus as part of a managed service in the cloud—usually the first thing you need to do is distribute your Postgres tables across the cluster.

Keep reading
Marco Slot

Making Postgres stored procedures 9X faster in Citus

By Marco Slot | November 21, 2020

Stored procedures are widely used in commercial relational databases. You write most of your application logic in PL/SQL and achieve notable performance gains by pushing this logic into the database. As a result, customers who are looking to migrate from other databases to PostgreSQL usually make heavy use of stored procedures.

When migrating from a large database, using the Citus extension to distribute your database can be an attractive option, because you will always have enough hardware capacity to power your workload. The Hyperscale (Citus) option in Azure Database for PostgreSQL makes it easy to get a managed Citus cluster in minutes.

In the past, customers who migrated stored procedures to Citus often reported poor performance because each statement in the procedure involved an extra network round trip between the Citus coordinator node and the worker nodes. We also observed this ourselves when we evaluated Citus performance using the TPC-C-based workload in HammerDB (TPROC-C), which is implemented using stored procedures.
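
One way Citus addresses this is to delegate each procedure call to the worker node that stores the relevant data, so the whole procedure runs in a single network round trip. Here is a hedged sketch (the procedure and tables are made up; create_distributed_function is the Citus UDF used for the delegation):

    -- A hypothetical procedure that touches one warehouse's data,
    -- assuming orders is distributed on warehouse_id:
    CREATE PROCEDURE process_order(warehouse_id int, order_id int)
    LANGUAGE plpgsql AS $$
    BEGIN
        UPDATE orders
        SET processed = true
        WHERE orders.warehouse_id = process_order.warehouse_id
          AND orders.id = process_order.order_id;
    END;
    $$;

    -- Delegate calls to the worker that holds that warehouse's shard:
    SELECT create_distributed_function(
        'process_order(int,int)',
        distribution_arg_name := '$1',
        colocate_with := 'orders'
    );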

Keep reading

GPS has become part of our daily life. GPS is in cars for navigation and in smartphones helping us find places, and more recently GPS has even been helping us avoid getting infected by COVID-19. Managing and analyzing mobility tracks is the core of my work. My group at Université libre de Bruxelles specializes in mobility data management. We build an open source database system for spatiotemporal trajectories, called MobilityDB. MobilityDB adds support for temporal and spatiotemporal objects to the Postgres database and its spatial extension, PostGIS. If you're not yet familiar with spatiotemporal trajectories, not to worry, we'll walk through some movement trajectories for a public transport bus in just a bit.

One of my team's projects is to develop a distributed version of MobilityDB. This is where we came in touch with the Citus extension to Postgres and the Citus engineering team. This post presents issues and solutions for distributed query processing of movement trajectory data. GPS is the most common source of trajectory data, but the ideas in this post also apply to movement trajectories collected by other location tracking sensors, such as radar systems for aircraft, and AIS systems for sea vessels.

Keep reading

Around 10 years ago I joined Amazon Web Services and that’s where I first saw the importance of trade-offs in distributed systems. In university I had already learned about the trade-offs between consistency and availability (the CAP theorem), but in practice the spectrum goes a lot deeper than that. Any design decision may involve trade-offs between latency, concurrency, scalability, durability, maintainability, functionality, operational simplicity, and other aspects of the system—and those trade-offs have meaningful impact on the features and user experience of the application, and even on the effectiveness of the business itself.

Perhaps the most challenging problem in distributed systems, in which the need for trade-offs is most apparent, is building a distributed database. When applications began to require databases that could scale across many servers, database developers began to make extreme trade-offs. In order to achieve scalability over many nodes, distributed key-value stores (NoSQL) put aside the rich feature set offered by the traditional relational database management systems (RDBMS), including SQL, joins, foreign keys, and ACID guarantees. Since everyone wants scalability, it would only be a matter of time before the RDBMS would disappear, right? Actually, relational databases have continued to dominate the database landscape. And here's why:

Keep reading
