work_mem is perhaps the most confusing setting within Postgres.
work_mem is a configuration within Postgres that determines how much memory can be used during certain operations. At its surface, the work_mem setting seems simple: after all, work_mem just specifies the amount of memory available to be used by internal sort operations and hash tables before writing data to disk. And yet, leaving work_mem unconfigured can bring on a host of issues. What is perhaps more troubling, though, is when you receive an out of memory error on your database and you jump in to tune work_mem, only for it to behave in an unintuitive manner.
Setting your default memory
The work_mem value defaults to 4 MB in Postgres, and that's likely a bit low. This means that each Postgres activity (each join, some sorts, etc.) can consume 4 MB before it starts spilling to disk. When Postgres starts writing temp files to disk, obviously things will be much slower than in memory. You can find out if you're spilling to disk by searching for temporary file within your PostgreSQL logs when you have log_temp_files enabled. If you see temporary file, it can be worth increasing your work_mem.
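If you want to check for spills yourself, here's a minimal sketch, assuming you have permission to change settings and want to log every temporary file regardless of size:

```sql
-- Log every temporary file Postgres writes.
-- 0 logs all temp files; a positive value (in kB) logs only files larger than that.
ALTER SYSTEM SET log_temp_files = 0;

-- Reload the configuration so the new setting takes effect without a restart.
SELECT pg_reload_conf();

-- Entries in the Postgres log will then look roughly like:
--   LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp12345.0", size 104857600
```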
On Citus Cloud (our fully-managed database as a service that scales out Postgres horizontally), we automatically tune work_mem based on the overall memory available to the box. Our tuning is based on years of experience with what we've seen work for a variety of production Postgres workloads, coupled with statistics to compute variations based on cluster sizing.
It's tough to get the work_mem value perfect, but often a sane default can be something like 64 MB, if you're looking for a one-size-fits-all answer.
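As a sketch, if you decide 64 MB is a reasonable default for your box, you could set it cluster-wide like this (the value itself is just an illustrative starting point, not a rule):

```sql
-- Set the cluster-wide default; 64MB here is an assumed starting point.
ALTER SYSTEM SET work_mem = '64MB';
SELECT pg_reload_conf();

-- Confirm the running value.
SHOW work_mem;
```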
It’s not just about the memory for queries
Let's use an example to explore how to think about optimizing your work_mem setting. Say you have a certain amount of memory, say 10 GB. If you have 100 running Postgres queries, and each of those queries has a 10 MB connection overhead, then 100 * 10 MB (1 GB) of memory is taken up by the 100 connections, which leaves you with 9 GB of memory.
With 9 GB of memory remaining, say you give 90 MB to work_mem for the 100 running queries. But wait, it's not that simple. Why? Well, work_mem isn't allocated once per query; rather, each sort and hash operation within a query can use up to that much memory. But how many sorts, hashes, and joins happen per query? Now that is a complicated question. A complicated question made more complicated if you have other processes that also consume memory, such as autovacuum.
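One way to see how many sorts and hashes a particular query performs, and whether they fit within work_mem, is EXPLAIN ANALYZE. The table and query below are hypothetical; what matters is the Sort Method line in the plan output:

```sql
-- Hypothetical query against a hypothetical "events" table.
EXPLAIN ANALYZE
SELECT user_id, count(*)
FROM events
GROUP BY user_id
ORDER BY count(*) DESC;

-- Each sort node in the plan reports how it ran, roughly:
--   Sort Method: quicksort  Memory: 4012kB        <- fit within work_mem
--   Sort Method: external merge  Disk: 105224kB   <- spilled to a temp file
```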
So let's reserve a little for maintenance tasks and for vacuum, and then we'll be okay as long as we limit our connections, right? Not so fast, my friend.
Postgres now has parallel queries. If you're using Citus for parallelism you've had this for a while, but now you have it on single-node Postgres as well. What this means is that a single query can have multiple processes running and performing work. This can result in some significant improvements in query speed, but each of those running processes can consume the specified amount of work_mem. With our 64 MB default and 100 connections, each of those queries could now be running multiple workers across your cores, consuming far more memory than we anticipated.
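To see how much parallelism can multiply your memory use, you can check (and, if needed, cap) the per-query worker count. A rough sketch:

```sql
-- How many parallel workers a single query can launch per Gather node.
SHOW max_parallel_workers_per_gather;

-- Each worker process can use up to work_mem per sort/hash operation, so with,
-- say, 4 workers plus the leader and work_mem = '64MB', one parallel sort
-- could use well over 256MB. If that's too much for your box, cap it:
ALTER SYSTEM SET max_parallel_workers_per_gather = 2;
SELECT pg_reload_conf();
```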
More work_mem, more problems
So we can see that getting it perfect is a little more work than ideal.
Let's go back a little and try this more simply… we can start work_mem small, at say 16 MB, and gradually increase it when we see temporary file in the logs. But why not give each query as much memory as it would like? If we were to just say each process could consume up to 1 GB of memory, what's the harm? Well, the other extreme is that queries begin consuming too much memory, more than you have available on your box. If you have 100 queries that each have 5 different sort operations and a few hash joins in them, it's in fact very possible to exhaust all the memory available to your database.
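Here's a sketch of that incremental approach, along with the back-of-envelope arithmetic (using the numbers from the example above) for why "just give everything 1 GB" backfires:

```sql
-- Start conservatively and raise work_mem only when the logs show temp file spills.
ALTER SYSTEM SET work_mem = '16MB';
SELECT pg_reload_conf();

-- Why not 1GB? Worst-case arithmetic from the earlier example:
--   100 queries x 5 sorts/hashes each x 1GB = up to ~500GB of memory demand,
-- far beyond the 10GB box we started with.
```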
When you consume more memory than is available on your machine, you can start to see out of memory errors within your Postgres logs, or in worse cases the OOM killer can start to randomly kill running processes to free up memory. An out of memory error in Postgres simply errors on the query you're running, whereas the OOM killer in Linux begins killing running processes, which in some cases might even include Postgres itself.
When you see an out of memory error, you either want to increase the overall RAM on the machine itself by upgrading to a larger instance, OR you want to decrease the amount of memory that work_mem uses. Yes, you read that right: when you're running out of memory, it's often better to decrease work_mem rather than increase it, since that is the amount of memory each operation can consume, and too many operations are leveraging up to that much memory.
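If lowering the global default feels too blunt for the one heavy query that genuinely needs more memory, you can grant extra memory only where it's needed instead of raising it everywhere. A sketch, with a hypothetical query:

```sql
-- Raise work_mem only for this transaction instead of globally.
BEGIN;
SET LOCAL work_mem = '256MB';   -- applies only until this transaction ends
SELECT user_id, count(*) FROM events GROUP BY user_id ORDER BY count(*) DESC;
COMMIT;
```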
General guidance for work_mem
While you can continually tune and tweak work_mem, a couple of broad guidelines for pairing it to your workload can generally get you into a good spot. If you have a number of short-running queries that run very frequently and perform simple lookups and joins, then maintaining a lower work_mem is ideal; in this case you get diminishing returns by allowing it to be significantly higher, because it's simply unused. If your workload is relatively few active queries at a time that are doing very complex sorts and joins, then granting more memory to prevent things from spilling can give you great returns.
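One practical way to apply both guidelines on a single cluster is to set work_mem per role: keep it low for the high-frequency transactional role and higher for the low-concurrency reporting role. The role names and values here are illustrative assumptions:

```sql
-- Hypothetical roles: a busy OLTP application user and a reporting user.
ALTER ROLE app_oltp      SET work_mem = '16MB';
ALTER ROLE app_reporting SET work_mem = '256MB';

-- New sessions for each role pick up their own default automatically.
```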
Happy database tuning
Postgres' powerful feature set and flexibility means you have a lot of knobs you can turn and levers you can pull when tuning it. Postgres is often used for embedded systems, for time series data, and for OLTP and OLAP as well. This flexibility can often mean an overwhelming set of options when tuning. On Citus Cloud we've configured this to be suitable for most workloads we see; think of it as one size fits most, and then when you need to, you're able to customize. If you're not running on Citus Cloud, consider leveraging pgtune to help you get to a good starting point.