In an earlier blog post I wrote about how breaking problems down into a MapReduce-style approach can give you much better performance. We've seen that Citus is orders of magnitude faster than single-node databases when we're able to parallelize the workload across all the cores in a cluster. And while `count(*)` and `avg` are easy to break into smaller parts, I immediately got the question: what about count distinct, or the top items from a list, or the median?
Exact distinct count is admittedly harder to tackle in a large distributed setup, because it requires a lot of data shuffling between nodes. Count distinct is indeed supported within Citus, but can at times be slow on especially large datasets. Median across any moderate-to-large dataset can become completely prohibitive for end users. Fortunately, for nearly all of these there are approximation algorithms which provide close-enough answers and do so with impressive performance characteristics.
In certain categories of applications, such as web analytics, IoT (Internet of Things), and advertising, counting the distinct number of times something has occurred is a common goal. HyperLogLog is a PostgreSQL data type extension which lets you take raw data and compress it into a value representing how many uniques exist for some period of time.
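As a sketch of what this looks like in practice, assuming the `hll` extension is installed: the `page_views` table and its `visitor_id` / `visited_at` columns here are hypothetical stand-ins for your raw event data, rolled up into the `daily_uniques` table queried below.

```sql
CREATE EXTENSION hll;

-- One row per day, holding a compressed sketch of that day's unique visitors
CREATE TABLE daily_uniques (
    day   date,
    users hll
);

-- Roll the raw events up into per-day HyperLogLog values
INSERT INTO daily_uniques
SELECT date_trunc('day', visited_at),
       hll_add_agg(hll_hash_bigint(visitor_id))
FROM page_views
GROUP BY 1;
```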
The result of saving data into the HLL datatype is that you would have a value of 25 uniques for Monday and 20 uniques for Tuesday. This compresses down much further than the raw data. But where it really shines is that you can then combine these buckets: by unioning two HyperLogLog values you can get back that there were 35 uniques across Monday and Tuesday combined, because on Tuesday you had 10 repeat visitors:
```sql
SELECT hll_union_agg(users) AS unique_visitors
FROM daily_uniques;

 unique_visitors
-----------------
              35
(1 row)
```
Because HyperLogLog can be split up and composed in this way, it also parallelizes well across all of the nodes within a Citus cluster.
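For instance, because the per-day sketches can be unioned, you can re-aggregate them to any coarser granularity without ever touching the raw events again. A sketch, reusing the `daily_uniques` table from above:

```sql
-- Weekly uniques, computed by unioning the daily HyperLogLog sketches
SELECT date_trunc('week', day) AS week,
       hll_cardinality(hll_union_agg(users)) AS unique_visitors
FROM daily_uniques
GROUP BY 1
ORDER BY 1;
```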
Another form of counting that we commonly find in web analytics, advertising applications, and security/log event applications is wanting to know the top set of actions or events that have occurred. This could be the top page views you see within Google Analytics, or the top errors that occurred in your event logs.
TopN leverages an underlying JSONB datatype to store all of its data. It maintains a list of which items are on top, along with various data about those items. As the ordering reshuffles, it purges old data, allowing it to avoid maintaining a full list of all of the raw data.
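Getting started looks something like the following sketch. The `topn.number_of_counters` setting shown here is, to my understanding, the knob that controls how many items each TopN sketch tracks internally; check the extension's documentation for the exact name and default in your version.

```sql
CREATE EXTENSION topn;

-- More counters means more accuracy, at the cost of a larger JSONB value
SET topn.number_of_counters = 1000;
```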
In order to use it you’ll insert into it in a similar fashion to HyperLogLog:
```sql
# create table aggregated_topns (day date, topn jsonb);
CREATE TABLE
Time: 9.593 ms

# insert into aggregated_topns
    select date_trunc('day', created_at),
           topn_add_agg((repo::json)->>'name') as topn
    from github_events
    group by 1;
INSERT 0 7
Time: 34904.259 ms (00:34.904)
```
And when querying you can easily get the top ten list for your data:
```sql
SELECT (topn(topn_union_agg(topn), 10)).*
FROM aggregated_topns
WHERE day IN ('2018-01-02', '2018-01-03');

                      item                      | frequency
------------------------------------------------+-----------
 dipper-github-fra-sin-syd-nrt/test-ruby-sample |     12489
 wangshub/wechat_jump_game                      |      6402
...
```
We mentioned earlier that an operation like median can be much harder. And while an extension may not exist yet, there is a future where one could support these operations. For median, multiple approximation algorithms and approaches exist; two interesting ones that could be applied to Postgres are t-digest and HDR (High Dynamic Range) histograms.
Does an answer that is quite close, but not perfectly exact, meet your needs if it gives you sub-second responses across terabytes of data? In my experience, the answer is often yes.
So, the next time you think something isn't possible in a distributed setup, explore a bit to see what approximation algorithms exist.