
Combined row-based and columnar storage in PostgreSQL 9.5

Written by Marco Slot
October 5, 2015

One of the upcoming features in PostgreSQL 9.5 is foreign table inheritance, which opens up a range of new possibilities for users of foreign data wrappers. An exciting one is that you will be able to create a single table in which recent data is stored in a row-based format for fast appends and look-ups, while older data is stored in a columnar format for compression and fast analytics using cstore_fdw.

An example use case might be a news organization that frequently looks up recent data (e.g., to generate news feeds), while older data is used for analytics (e.g., finding correlations between events). As a reference, we use the GDELT dataset, which contains hundreds of millions of news events. This use case benefits greatly from row-based storage for recent data and columnar storage for older data, so let's see how we will be able to do this in PostgreSQL 9.5. The current release of cstore_fdw is not yet 9.5-compatible, but a preview is available on the 9.5-compatible branch.

To get started, we can create the events table and the row-based events_current table that inherits from it (see full source). When querying the events table, PostgreSQL automatically includes rows from the inheriting tables in the results.

CREATE TABLE events (
    event_date int,
    ...
);
CREATE TABLE events_current () INHERITS(events);

We also set up a trigger that routes INSERTs on the events table to the events_current table.

CREATE OR REPLACE FUNCTION events_insert_trigger() RETURNS TRIGGER
AS $$
BEGIN
    INSERT INTO events_current VALUES (NEW.*);
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER insert_events BEFORE INSERT ON events
FOR EACH ROW EXECUTE PROCEDURE events_insert_trigger();
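To sanity-check the trigger, we can insert a row through the parent table and confirm that it lands in events_current rather than in events itself (the single-column INSERT is illustrative; a real row would supply all columns):

```sql
-- Insert through the parent table; the trigger routes the row to the child
INSERT INTO events (event_date) VALUES (20150102);

-- ONLY restricts the scan to the parent table itself, so this returns 0:
-- the parent stores no rows of its own
SELECT count(*) FROM ONLY events;

-- The routed row lives in the child table
SELECT count(*) FROM events_current;
```

Note that the trigger returns NULL, which cancels the original INSERT on the parent, so rows are never stored in events directly.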

We can now start adding new data to our events table, which will use row-based storage. The GDELT dataset comprises over 100GB of news events in easy-to-digest CSV format. It can be copied into the events table using:

\COPY events FROM '20150101.export.CSV' WITH (NULL '')

After a while we can migrate older data to a cstore_fdw partition. We first create a new cstore_fdw table that inherits from the events table, with a CHECK constraint on event_date so that queries on recent data can skip the partition entirely. We then delete the old data from the events_current table and copy it into the cstore_fdw table, all within a single transaction.

BEGIN;
-- Create cstore_fdw partition for January 2015
CREATE FOREIGN TABLE events_201501 (
      CHECK(20150101 <= event_date AND event_date <= 20150131)
) INHERITS(events) SERVER cstore_server OPTIONS(compression 'pglz');

-- Migrate data into cstore_fdw partition
WITH migrate AS (DELETE FROM events_current WHERE event_date <= 20150131 RETURNING *)
INSERT INTO events_201501 SELECT * FROM migrate;
END;

After this transaction, the storage footprint of the January data is reduced by roughly 90%, and analytical queries over that time period typically run 5-10x faster.
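The CHECK constraint also pays off at query time: with PostgreSQL's default constraint_exclusion = partition setting, the planner can prove that a query on recent dates cannot match events_201501 and leaves the columnar partition out of the plan. One way to verify this (exact plan output will vary by setup):

```sql
-- Filter contradicts the partition's CHECK (event_date <= 20150131),
-- so the plan should not scan events_201501 at all
EXPLAIN SELECT count(*) FROM events WHERE event_date >= 20150201;

-- Filter overlaps January, so the plan includes a foreign scan
-- on the cstore_fdw partition
EXPLAIN SELECT count(*) FROM events
WHERE event_date BETWEEN 20150101 AND 20150131;
```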

It is important to always create a new partition and migrate data in a single transaction. Once INSERT into the cstore_fdw is completed, the data in the table is visible to subsequent queries, even if the outer transaction has not committed yet. If we were to migrate data to a cstore_fdw table that was already in use, this could briefly cause the data to appear twice. We avoid this by always creating a new partition that is not visible to other transactions until the transaction commits.
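The same pattern repeats for each subsequent month. For example, migrating February in a single transaction (the table name and date range are illustrative):

```sql
BEGIN;
-- Create a fresh cstore_fdw partition for February 2015; it remains
-- invisible to other transactions until COMMIT
CREATE FOREIGN TABLE events_201502 (
    CHECK(20150201 <= event_date AND event_date <= 20150228)
) INHERITS(events) SERVER cstore_server OPTIONS(compression 'pglz');

-- Move February's rows out of the row-based table into the new partition
WITH migrate AS (
    DELETE FROM events_current
    WHERE event_date BETWEEN 20150201 AND 20150228
    RETURNING *
)
INSERT INTO events_201502 SELECT * FROM migrate;
COMMIT;
```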

For more information, check out our recent webinar on cstore_fdw, in which we performed a live demo of this approach. For dealing with large time-series data sets in PostgreSQL, you might also want to look into sharding and distributed querying using CitusDB.