Written by Hadi Moshayedi
April 3, 2014
We are excited to open source our columnar store extension for PostgreSQL, cstore_fdw, and share it with the community! Columnar stores bring notable benefits for analytic workloads, where data is loaded in batches.
This columnar store extension uses the Optimized Row Columnar (ORC) format for its data layout. ORC improves upon the RCFile format developed at Facebook, bringing benefits such as better compression and the ability to read only the columns relevant to a query.
Further, we built this extension on the Postgres foreign data wrapper APIs and type representations. As a result, columnar tables use PostgreSQL's own data types and work with familiar commands such as COPY and ANALYZE.
It's worth noting that the columnar store extension is self-contained. If you're a PostgreSQL user, you can get the entire source code and build it using the instructions on our GitHub page. You can even join columnar store tables and regular Postgres tables in the same SQL query.
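As an illustration of mixing the two, a single query could join a columnar store table with a regular heap table. Here's a minimal sketch; the page_views (cstore) and users (regular) tables are hypothetical:
-- page_views is a cstore foreign table, users is a regular Postgres table
SELECT u.country, count(*)
FROM page_views p
JOIN users u ON u.user_id = p.user_id
GROUP BY u.country;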
Now, let's see how all this fits together with an example. For this, we start by downloading customer review data from Amazon for the year 1998. In this table, each review represents an event. We find that event (fact) tables, or tables that have a large number of columns, are a good fit for the columnar store.
wget http://examples.citusdata.com/customer_reviews_1998.csv.gz
gzip -d customer_reviews_1998.csv.gz
Next, let's connect to the PostgreSQL database and create a foreign table.
CREATE EXTENSION cstore_fdw;
CREATE SERVER cstore_server FOREIGN DATA WRAPPER cstore_fdw;
CREATE FOREIGN TABLE customer_reviews
(
    customer_id TEXT,
    review_date DATE,
    review_rating INTEGER,
    review_votes INTEGER,
    review_helpful_votes INTEGER,
    product_id CHAR(10),
    product_title TEXT,
    product_sales_rank BIGINT,
    product_group TEXT,
    product_category TEXT,
    product_subcategory TEXT,
    similar_product_ids CHAR(10)[]
)
SERVER cstore_server
OPTIONS(filename '/opt/citusdb/3.0/cstore/customer_reviews.cstore',
        compression 'pglz');
Note that we specified pglz as the compression method; this LZ-class algorithm is already built into PostgreSQL. The cstore extension will use this algorithm to compress column values in groups of 10K rows, and this group size is also configurable.
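For example, a table could be declared with a different group size. This is a sketch only; it assumes the block_row_count option name from the cstore_fdw documentation, and the table and file names here are hypothetical:
-- hypothetical table; block_row_count is assumed to match the cstore_fdw docs
CREATE FOREIGN TABLE events_sample
(
    event_id BIGINT,
    payload TEXT
)
SERVER cstore_server
OPTIONS(filename '/opt/citusdb/3.0/cstore/events_sample.cstore',
        compression 'pglz',
        block_row_count '20000');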
Next, we can use PostgreSQL's COPY command to load data into the columnar store.
COPY customer_reviews FROM '/home/user/customer_reviews_1998.csv' WITH CSV;
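As a quick sanity check on the load, the foreign table can be queried like any other table (the exact row count depends on the downloaded file):
SELECT count(*) FROM customer_reviews;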
Finally, let's run an example SQL query on the columnar store table.
-- Is there a correlation between the length of a book's title and its review ratings?
SELECT
width_bucket(length(product_title), 1, 50, 5) title_length_bucket,
round(avg(review_rating), 2) AS review_average,
count(*)
FROM
customer_reviews
WHERE
product_group = 'Book'
GROUP BY
title_length_bucket
ORDER BY
title_length_bucket;
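If you're curious how the planner treats the table, EXPLAIN shows cstore tables as a Foreign Scan node; the exact plan output will vary with your PostgreSQL version:
EXPLAIN SELECT count(*) FROM customer_reviews WHERE product_group = 'Book';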
As these examples show, the columnar store extension works nicely with your PostgreSQL database. The natural follow-up question is: what are the benefits? To quantify them, we ran some preliminary numbers using the industry-standard TPC-H benchmark.
We started by measuring compression ratios for the tables in this benchmark and found that they compressed by 3.5x with pglz. This means that you can now fit 3.5x as much of your working set into memory. Also, when going to disk, you read proportionally less data. Finally, if you're using SSDs, you save notably on storage costs.
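If you'd like to check the on-disk footprint yourself, one rough approach is to compare the cstore file size against a regular table holding the same data. This is a sketch; pg_stat_file typically requires superuser privileges, and customer_reviews_row is a hypothetical row-store copy of the same data:
-- size of the cstore data file created above
SELECT pg_size_pretty((pg_stat_file('/opt/citusdb/3.0/cstore/customer_reviews.cstore')).size);
-- size of a hypothetical regular (row-store) table loaded from the same CSV
SELECT pg_size_pretty(pg_total_relation_size('customer_reviews_row'));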
After looking at compression ratios, we next measured query run times on an m1.xlarge instance with rotational disks. We also flushed the page cache before each test to see the impact on disk I/O. Further, we ran ANALYZE on each foreign table so that PostgreSQL has the statistics it needs to choose the best query plan.
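For reference, collecting statistics on the foreign table uses the standard command:
ANALYZE customer_reviews;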
The first diagram shows four representative queries from the TPC-H benchmark, and highlights query run times improving by about 2x. It's worth noting that the tables in this benchmark have 10-15 columns each; we see these benefits grow as the number of columns increases into the hundreds.
The second diagram focuses on disk I/O, and shows the total volume of data read from disk with regular Postgres tables, cstore tables, and cstore tables with pglz compression. The benefits of reading only the relevant columns are more apparent here. With pglz, cstore reduces the data read from disk by more than 10x.
Besides all this, we are really excited about how we can further improve this extension! Some features that we're thinking about include new compression methods, better query cost estimation, improved read/write performance, and checksums. We outline these features on our GitHub page and welcome your opinions.
Finally, if you start using the columnar store and need Postgres to scale out, the open source Citus extension to Postgres might be able to help. Citus extends Postgres to provide distributed queries across a cluster, and we're actively working on improving it. We also have a few cool projects in the works around cstore and more. So please do get in touch with us on Twitter or on our Citus public Slack to tell us what you think!
Got comments? Join the discussion on Hacker News.