As of early 2016, the Docker toolset is not quite there yet to provide a Heroku-like experience when deploying to production. Many of the pieces exist, but each has a few quirks that need to be worked around.
Because of that, the deployment strategy for djangopackages.org differs a bit from what you read in the typical getting-started-with-Docker tutorials.
First, we don't use docker-machine. It has not proven reliable, and it has no team of maintainers handling security releases the way distributions like Debian, Ubuntu, or RHEL do.
Second, there's no built-in way to daemonize the docker-compose process. When the underlying VM is restarted, the stack won't start again automatically.
The current strategy is:
- Use a virtual machine with a well-patched OS (Debian, Ubuntu, RHEL); djangopackages.org uses Ubuntu 14.04
- Install Docker, docker-compose, git, and supervisord
- Clone the code on the server
- Let supervisord run it
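Because docker-compose cannot daemonize itself (the second quirk above), supervisord keeps it running and restarts it after a reboot. A minimal sketch of such a supervisord program entry, assuming the code is cloned to `/code/djangopackages` (the path and program name here are illustrative, not the project's actual config):

```ini
[program:djangopackages]
; Run docker-compose in the foreground so supervisord can manage it
command=/usr/local/bin/docker-compose -f /code/djangopackages/docker-compose.yml up
directory=/code/djangopackages
autostart=true
autorestart=true
; docker-compose shuts the stack down cleanly on SIGINT
stopsignal=INT
```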
The configuration in
docker-compose.yml defines the following services:
- `postgres`, which powers the database
- `django-b`, which runs the WSGI server and serves the app through gunicorn
- `caddy`, which proxies incoming requests to the gunicorn server
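A trimmed sketch of what such a docker-compose.yml could look like — image names, ports, and commands here are illustrative assumptions, not the project's actual file:

```yaml
version: "2"

services:
  postgres:
    image: postgres:9.4
    volumes:
      - pgdata:/var/lib/postgresql/data

  django-b:
    build: .
    # gunicorn serves the WSGI app inside the container
    command: gunicorn wsgi:application --bind 0.0.0.0:8000
    depends_on:
      - postgres

  caddy:
    image: abiosoft/caddy
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - django-b

volumes:
  pgdata:
```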
Deploy code changes
Website releases are managed through Fabric.
When the `deploy` command is run, Fabric SSHes to our production server, pulls the latest changes from our GitHub repository, builds a new Docker image, and then performs a blue/green deploy with the new container image.
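Under the hood, the steps Fabric runs on the server can be sketched roughly as follows — the path, branch, and service name are illustrative assumptions:

```shell
# Rough sketch of the steps Fabric performs over SSH; names are illustrative
cd /code/djangopackages
git pull                          # fetch the latest code from GitHub
docker-compose build django-b     # build a new image for the app service
docker-compose up -d django-b     # swap in the new container (blue/green)
```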
To create a backup, run:
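The exact command was not preserved above; a plausible equivalent using `pg_dump` inside the postgres container — the database name and user are assumptions — would be:

```shell
# Hypothetical backup command; database name and user are assumptions
docker-compose exec -T postgres \
  pg_dump -U postgres djangopackages \
  > /data/djangopackages/backups/backup-$(date +%Y-%m-%d).sql
```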
To list backups, run:
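Since the backups are plain SQL files on disk, listing them amounts to a directory listing:

```shell
# Backups are plain SQL files, so a directory listing shows them all
ls -lh /data/djangopackages/backups
```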
To restore a backup, run:
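Again, the exact command was not preserved; a plausible restore using `psql` inside the postgres container — database name, user, and the backup filename are assumptions — would be:

```shell
# Hypothetical restore; database name, user, and filename are assumptions
docker-compose exec -T postgres \
  psql -U postgres djangopackages \
  < /data/djangopackages/backups/backup-2016-01-01.sql
```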
Backups are located at
`/data/djangopackages/backups` as plain SQL files.
Clear our Media Cache
Our static media files are behind a CDN. We occasionally need to purge cached files. To purge the cache:
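The purge command itself is not shown here, and the CDN in use is not named. Purely as an illustration, if the CDN were Cloudflare, a purge-everything call against its v4 API would look like this — the zone ID and API token are placeholders:

```shell
# Illustration only: assumes the CDN is Cloudflare; ZONE_ID and API_TOKEN
# are placeholders for real credentials
curl -X POST \
  "https://api.cloudflare.com/client/v4/zones/ZONE_ID/purge_cache" \
  -H "Authorization: Bearer API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"purge_everything":true}'
```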
Alternatively, you can use
When Things Go Wrong
- Is Docker running?
- Are supervisord and both daemonized processes running?
- Are all services running?
- Check the logs for all services
- Check the logs for individual services
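The checks above map to standard commands; a sketch, run from the directory containing docker-compose.yml (the `django-b` service name comes from the compose file described earlier):

```shell
# Is Docker running? (Ubuntu 14.04 uses upstart)
sudo service docker status
# Are supervisord and its managed processes running?
sudo supervisorctl status
# Are all compose services up?
docker-compose ps
# Logs for all services
docker-compose logs
# Logs for an individual service, e.g. django-b
docker-compose logs django-b
```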