Testing your Django app with Citus

Written by Louise Grandjonc
July 5, 2019

Recently, I started working on the django-multitenant application. The main reason we created it was to help Django developers use Citus in their apps. While I was working on it, I wrote unit tests. And to be able to reproduce a customer's production environment, I wanted the tests to run against Citus and not a single-node Postgres. If you are using Citus as your production database, we encourage you to run it in your development and staging environments as well, to minimise the gap between dev and production. To better understand the importance of dev/prod parity, I recommend reading the Twelve-Factor App, which will give you ideas for lowering the chances of last-minute surprises when deploying to prod.

The goal of this article is to explain how I set this up, because if you are using Citus in production, it is better to have your tests running against Citus too.

Setting up docker

I will assume that you already have Docker installed; otherwise I recommend taking a look at our article.

Once you have Docker running, create a docker-compose.yml in your app directory containing:

version: '2.1'

services:
  master:
    image: 'citusdata/citus:8.2.1'
    ports: ['5600:5432']
    labels: ['com.citusdata.role=Master']
    volumes: ['/var/run/postgresql']
  manager:
    container_name: "${COMPOSE_PROJECT_NAME:-citus}_manager"
    image: 'citusdata/membership-manager:0.1.0'
    volumes: ['/var/run/docker.sock:/var/run/docker.sock']
    depends_on: { master: { condition: service_healthy } }
  worker1:
    image: 'citusdata/citus:8.2.1'
    ports: ['5601:5432']
    labels: ['com.citusdata.role=Worker']
    depends_on: { manager: { condition: service_healthy } }
  worker2:
    image: 'citusdata/citus:8.2.1'
    ports: ['5602:5432']
    labels: ['com.citusdata.role=Worker']
    depends_on: { manager: { condition: service_healthy } }
  healthcheck:
    image: busybox
    depends_on:
      worker1: { condition: service_healthy }
      worker2: { condition: service_healthy }

You can now bring the cluster up with:

docker-compose up -d

The -d flag runs the containers in detached mode, so they keep running in the background.
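To check that the cluster came up correctly, you can query the coordinator for its registered workers. The port and user match the compose file above; master_get_active_worker_nodes is the Citus function that lists the active workers:

```shell
# All containers should eventually report "Up (healthy)"
docker-compose ps

# Ask the coordinator which workers it knows about
psql -h localhost -p 5600 -U postgres \
  -c "SELECT * FROM master_get_active_worker_nodes();"
```

You should see worker1 and worker2 listed; if not, give the membership manager a few more seconds and try again.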

Changing your test settings

Django works with settings files. When you first start a Django app, you simply have a settings.py. I usually work with a different file for each use case in a settings directory. The reason I do this is to avoid accidentally committing my production settings… Some of you might be working with environment variables, which is a good approach too, as it can be a way to make sure your settings between prod and dev are the same (while still not having passwords / access keys in a public file).

In this case, I used the following structure:
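A settings directory of this kind typically looks something like the following (file names here are illustrative):

```
mysite/
    settings/
        __init__.py
        base.py      # settings shared by every environment
        dev.py       # local development overrides
        tests.py     # used when running the test suite
        prod.py      # production; kept out of version control
```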


In tests.py, the only thing I changed was the database settings:

    "default": {
        'ENGINE': 'django_multitenant.backends.postgresql',
        "NAME": "postgres",
        "USER": "postgres",
        "PASSWORD": "",
        "HOST": "localhost",
        "PORT": 5600,
        "TEST": {
            "NAME": "postgres",
            "SERIALIZE": False

This way, your tests will run against the coordinator's Docker container.

Running tests

If you now try to run your tests using:

python manage.py test mysite.tests

You will get the following errors:

Got an error creating the test database: database "postgres" already exists

Got an error recreating the test database: cannot drop the currently open database

The Citus Docker image is configured with the citus extension installed on the postgres database, and the nodes are set up for that database, which is why it would be tricky to change DATABASES['default']['NAME'] in the settings.

The idea is to reuse the existing database instead of dropping and recreating it at the beginning of each test run. You can do that by running your tests with:

python manage.py test mysite.tests --keepdb

If you are using pytest (with pytest-django), I recommend adding the following to your pytest.ini file:

[pytest]
addopts = --reuse-db

And run

py.test -s mysite/tests
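One more note: for these tests to actually exercise Citus's distributed behavior, your models' tables need to be distributed, just as they are in production. A sketch of what that looks like, typically run once from a migration or psql session (the table and shard-key names below are illustrative, not from this app):

```sql
-- Distribute the tenant tables across the workers,
-- sharding on the tenant column (names are assumptions)
SELECT create_distributed_table('myapp_account', 'id');
SELECT create_distributed_table('myapp_project', 'account_id');
```

Because --keepdb / --reuse-db preserves the database between runs, the tables only need to be distributed once.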


I hope this article helped you set up an automated test suite running against Citus!

If you are running tests on a CI service like CircleCI or Travis, you can configure it to create the Docker cluster at the beginning of each test run, and to drop the containers once it's over.
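As a sketch, a CI configuration along those lines could look like this (CircleCI shown; the job layout, requirements file, and test path are assumptions):

```yaml
version: 2
jobs:
  build:
    machine: true                       # docker-compose needs a VM-style executor
    steps:
      - checkout
      - run: docker-compose up -d       # start the Citus cluster
      - run: pip install -r requirements.txt
      - run: python manage.py test mysite.tests --keepdb
      - run: docker-compose down        # drop the containers once it's over
```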

Louise Grandjonc

Former Postgres and Citus solutions engineer at Microsoft. Loves working on database performance and tuning SQL queries. Speaker at PGConf.EU, PyCon, DjangoCon, PyBay, PyCaribbean, & more. Bass viol player. Avid reader.