ZFS is an open source file system with the option to store data on disk in compressed form. ZFS itself supports a number of compression algorithms, giving you flexibility to optimize both performance and how much you store on disk. Compressing your data on disk offers two pretty straightforward advantages:
- Reduce the amount of storage you need—thus reducing costs
- When reading from disk, less data needs to be scanned, improving performance
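To make the second advantage concrete, here is a toy sketch in Python. It uses zlib as a stand-in for ZFS's compressors (ZFS typically uses LZ4 or gzip, not zlib directly), and the sample row format is purely hypothetical, but the arithmetic is the same: the fewer bytes stored, the fewer bytes a scan has to pull off disk.

```python
import zlib

# Hypothetical sample: repetitive time-series rows, which compress well.
row = b"2017-01-01 00:00:00,sensor_42,temperature,21.5\n"
data = row * 10_000  # ~470 KB of uncompressed "rows"

compressed = zlib.compress(data)
ratio = len(data) / len(compressed)

print(f"uncompressed: {len(data)} bytes")
print(f"compressed:   {len(compressed)} bytes")
print(f"ratio:        {ratio:.1f}x")

# A full scan of the compressed copy reads len(compressed) bytes from
# disk instead of len(data) bytes -- the decompression happens in memory.
```

Real-world ratios depend heavily on the data; highly repetitive data like the above compresses far better than the 2x-3x we typically see in practice.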
To date, we have run Citus Cloud—our fully-managed database as a service that scales out Postgres horizontally—in production on EXT4. Today, we’re excited to announce a limited beta program of ZFS support for our Citus Cloud database. ZFS makes Citus Cloud even more powerful for certain use cases. If you are interested in access to the beta, contact us to get more info, or continue reading to learn more about the use cases where ZFS, Citus, and Postgres can help.
Lots of storage needed in your database cluster
If you have a massive amount of older historical data mixed with more recent data in your Postgres database, then the ZFS support in Citus Cloud could be useful for you. By leveraging Citus and pg_partman for time partitioning, you can retain a large amount of time-series data, with each bucket of data kept in a separate table.
This means you could easily store a lot of historical data while the most recent data is kept fresh in cache. We’ve commonly seen compression rates on disk of 2x-3x, which means on a standard 2 TB disk on Citus Cloud (that’s 2 TB per node, for each node in your Citus database cluster) you’re able to pack up to 6 TB of data per node. This can reduce the number of nodes you need in your Citus database cluster, when your biggest need is to retain more data.
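The node-count math above can be sketched in a few lines. The 24 TB dataset size is a hypothetical example; the 2 TB per-node disk and 3x ratio come from the post.

```python
disk_per_node_tb = 2       # standard Citus Cloud disk per node
compression_ratio = 3      # upper end of the 2x-3x range we commonly see
effective_tb_per_node = disk_per_node_tb * compression_ratio  # 6 TB

# Hypothetical: how many nodes to retain 24 TB of raw time-series data?
raw_data_tb = 24
nodes_without_zfs = raw_data_tb / disk_per_node_tb       # 12 nodes
nodes_with_zfs = raw_data_tb / effective_tb_per_node     # 4 nodes

print(f"without compression: {nodes_without_zfs:.0f} nodes")
print(f"with 3x compression: {nodes_with_zfs:.0f} nodes")
```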
Note: We do have custom storage options beyond 2 TB as well. If you have needs larger than 2 TB per node, please get in touch with us.
Scanning lots of data in Postgres doesn’t have to be so painful
Beyond allowing you to store lots of data, certain workloads will see a big improvement in performance as well. If your workload is more analytics or data warehousing related, meaning you frequently have to scan all your data to get a result, then ZFS can give your sequential scans a boost. Since the data is compressed on disk, when you have to read it you’re reading less overall data from disk. As your data is read from disk, the data is decompressed in memory before it’s returned to you—and since reading from disk can be a slow operation you can see nice performance gains here.
How’s it all work?
ZFS compresses the data as it comes in and goes out, then writes the compressed format directly to disk. By default, ZFS stores data on disk in blocks of 128 KB. Any operation on the data needs to be done on a whole block: if ZFS needs to bring some data in from disk, it brings the whole block. Similarly, if you are using compression, then when you read or write some data, ZFS needs to decompress or compress the whole block. This has some important performance impacts:
- If your application does random reads, ZFS decompresses a whole block and uses only a small part of it.
- If your application writes to disk in small chunks, ZFS still needs to decompress the whole block, apply your write, and compress it back.
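The read amplification described above can be sketched with simple ceiling arithmetic. The helper below is illustrative, not ZFS code; it assumes the worst case where requests are record-aligned and every touched record must be handled whole.

```python
RECORD_SIZE = 128 * 1024  # default ZFS recordsize: 128 KB

def bytes_touched(request_bytes: int, record_size: int = RECORD_SIZE) -> int:
    """Bytes ZFS must read (and decompress) to serve a request covering
    `request_bytes` of logical data, since records are handled whole."""
    records = -(-request_bytes // record_size)  # ceiling division
    return records * record_size

# A random 8 KB read (one Postgres page) still touches a full 128 KB record:
small_read = bytes_touched(8 * 1024)
print(f"8 KB read touches {small_read // 1024} KB -> "
      f"{small_read // (8 * 1024)}x amplification")

# A 1 MB sequential read is an exact multiple of the record size: no waste.
big_read = bytes_touched(1024 * 1024)
print(f"1 MB read touches {big_read // 1024} KB -> no amplification")
```

This is why compressed ZFS shines for sequential scans but can hurt workloads dominated by small random reads and writes.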
Regarding compression and decompression on writes, ZFS has some mechanisms to reduce write amplification, such as not compressing data immediately but keeping it for a while in its ZIL (a log mechanism similar to the WAL in PostgreSQL).
Apart from the performance impact, a bigger record size means better compression ratios, so there is a trade-off between compression ratio and performance. ZFS does more for us here: it can also detect when certain data is not a good candidate for compression (as is the case with very random data) and, in that case, skip compression.
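Both effects are easy to demonstrate. The sketch below again uses zlib as a stand-in for ZFS's compressors: compressing in small independent chunks (a small record size) yields a worse total than compressing one large block, and random bytes barely compress at all, which is why skipping compression for such data is a win.

```python
import os
import zlib

repetitive = b"event=click,user=1001\n" * 5_000  # compressible sample data
random_ish = os.urandom(len(repetitive))          # incompressible data

# Bigger "record size" -> better ratio: compress whole vs. in 4 KB chunks.
whole = len(zlib.compress(repetitive))
chunked = sum(len(zlib.compress(repetitive[i:i + 4096]))
              for i in range(0, len(repetitive), 4096))
print(f"one big block:  {whole} bytes")
print(f"4 KB chunks:    {chunked} bytes (worse)")

# Random data doesn't compress -- attempting it only burns CPU.
rand_compressed = len(zlib.compress(random_ish))
print(f"random data:    {len(random_ish)} -> {rand_compressed} bytes")
```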
Getting started with ZFS and the Citus Cloud database
ZFS is enabled at the Citus Cloud formation level, in other words, on a per-database-cluster basis (so enabling ZFS enables it on all nodes in a Citus Cloud database cluster).
If you’re interested in participating in the beta, all you need to do is contact us. Once you’re within the beta, you have two options for testing ZFS.
The first option is when provisioning a new cluster. At provision time, you’ll have the option to enable ZFS at the same time you specify the region of your formation.
If you’re already using Citus Cloud in production, don’t fear, we have a solution for you as well. From your production database cluster, you can spin up a fork or follower which has ZFS enabled. This will allow you to gradually test the compression and its performance impact, and if all looks good we can perform a high availability failover to it for you.
We’re excited at the performance boosts we’ve seen in use cases with ZFS. Give it a try today, or let us know if you have questions.