Database tuning is often considered unnecessary, and many people leave it for the very end of development or skip it entirely. I'll be covering logging, backups, indexing, and the role of the ORM, to give you some insight into different database-related tasks.
Logging
One thing many people don't get right when configuring PostgreSQL properly is logging.
Logging connections, disconnections, lock waits, temp files, and checkpoints not only gives us an overview of database activity, but can also help debug other configuration parameters, for instance those that rely on memory.
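A configuration along these lines enables all of the above; the values are illustrative, adjust them to your workload:

```
# postgresql.conf -- illustrative logging settings
logging_collector = on
log_directory = 'pg_log'
log_filename = 'postgresql-%a.log'   # one file per weekday
log_rotation_age = 1d                # rotate daily
log_truncate_on_rotation = on
log_line_prefix = '%t [%p]: user=%u,db=%d,app=%a,client=%h '
log_connections = on
log_disconnections = on
log_lock_waits = on
log_temp_files = 0                   # log every temp file, with its size
log_checkpoints = on
log_min_duration_statement = 250     # log statements slower than 250 ms
```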
You can rely on that logging configuration for most of your projects - having logs rotated daily with reasonable metadata will surely help you analyse the problems and bottlenecks of your DB setup.
Logging - pgBadger
Logging itself is a useful little feature, but we can do better :) We can have our logs analyzed by pgBadger, which will not only aggregate useful information like CRUD statistics, but also provide a nice graphical representation of what's going on in our database.
Typical setup involves:
Downloading & installing pgbadger according to official documentation.
Setting up a cronjob that uses bash script which handles log files from /var/log/postgresql on a daily basis, creates pgbadger report and stores/uploads it somehwere.
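A minimal sketch of such a script; the report directory and the upload target are assumptions, adjust to your setup:

```bash
#!/usr/bin/env bash
# daily_pgbadger.sh -- hypothetical daily pgBadger report
set -euo pipefail

LOG_DIR=/var/log/postgresql          # where PostgreSQL writes its logs
REPORT_DIR=/var/reports/pgbadger     # assumed output directory
TODAY=$(date +%F)

mkdir -p "$REPORT_DIR"
pgbadger --outfile "$REPORT_DIR/report-$TODAY.html" "$LOG_DIR"/postgresql-*.log

# optionally push the report off the server, e.g.:
# aws s3 cp "$REPORT_DIR/report-$TODAY.html" s3://my-bucket/pgbadger/
```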
Backups
There is no golden rule here; the backup setup depends on the project's business value and a couple of other factors. For a small project, a small bash script that performs pg_dump and then pushes the output file to a remote destination should be enough.
Create a file for the script and add the following to it.
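The original script isn't reproduced here; a minimal sketch, in which the database name, paths, and remote target are all assumptions:

```bash
#!/usr/bin/env bash
# backup_db.sh -- hypothetical daily pg_dump backup
set -euo pipefail

DB_NAME=mydb                           # assumed database name
BACKUP_DIR=/var/backups/postgresql     # local staging directory
STAMP=$(date +%F)

mkdir -p "$BACKUP_DIR"
pg_dump --format=custom --file="$BACKUP_DIR/$DB_NAME-$STAMP.dump" "$DB_NAME"

# push the dump off the server (see the note below)
scp "$BACKUP_DIR/$DB_NAME-$STAMP.dump" backup@backup-host:/backups/
```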
Save the file and create a cron entry along these lines (the schedule and path are illustrative):
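```
# run the backup script daily at 02:00
0 2 * * * /usr/local/bin/backup_db.sh
```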
It's an anti-pattern to store backups on the same server, so either connect to your database from a remote location, or go the other way and have the script push your backups to another destination.
Backups - WAL-E
WAL-E is a tool created by Heroku that provides continuous archiving of WAL segments. To put it simply: if your app can't afford to lose transactions, it's the way to go.
All your WAL files will be stored on S3 as backups, in case you need them. WAL (the write-ahead log) is the mechanism PostgreSQL uses to record database changes before they're applied to the data files. Archiving WAL segments with WAL-E will allow you to restore your database to the state from just before a crash.
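On the PostgreSQL side, enabling archiving boils down to a few settings. This follows the envdir-based invocation from the WAL-E README; the paths are typical, adjust them to your install:

```
# postgresql.conf -- illustrative WAL archiving setup for WAL-E
wal_level = replica     # 'archive' or 'hot_standby' on older versions
archive_mode = on
archive_command = 'envdir /etc/wal-e.d/env /usr/local/bin/wal-e wal-push %p'
archive_timeout = 60    # force a segment switch at least once a minute
```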
It's not exactly part of this tutorial, but since I've mentioned creating cronjobs, I think it's a good time to introduce you to flock. Most Linux distributions have a command called flock, which will run a command only if it can get a lock on a certain file.
So we change our crontab entry to something along these lines (the lock-file path is an assumption):
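```
# same job as before, but wrapped with flock; -n makes it fail fast
# instead of queueing when the previous run still holds the lock
0 2 * * * flock -n /tmp/backup_db.lock /usr/local/bin/backup_db.sh
```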
This prevents duplicate cronjobs from running, which not only slows the server down but may lead to hard-to-track errors. That's how you can easily take care of the problems your cronjobs may cause. Personally, I wrap all crontab entries with flock, and I recommend you do the same.
Indexing
No golden rule here. First, play with EXPLAIN ANALYZE to find slow queries. If that's not an option, at some point use pgBadger and the logs to find queries that may be suffering without indexes.
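For example (the table and filter are hypothetical):

```sql
-- show the actual execution plan and timing of a suspicious query
EXPLAIN ANALYZE
SELECT id, name FROM players WHERE team_id = 42;
```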
Use partial indexes (also known as filtered indexes) for stuff like:
querying against a column which is NULL for most of the rows
querying against some business value (a salary, a point in time, a status or kind field)
This will index only a subset of rows, which keeps the index significantly smaller than it would be if you indexed the whole table.
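Two hypothetical examples (the table and column names are made up):

```sql
-- index only the rows where the column is actually set
CREATE INDEX players_retired_at_idx ON players (retired_at)
  WHERE retired_at IS NOT NULL;

-- index only the rows matching a business condition
CREATE INDEX orders_pending_idx ON orders (created_at)
  WHERE status = 'pending';
```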
Use composite indexes (also known as multicolumn indexes) for queries that constantly rely on the same filtering conditions, like:
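For instance (again, a hypothetical example):

```sql
-- supports queries that always filter on the same pair of columns, e.g.
--   SELECT ... FROM players WHERE team_id = ? AND position = ?;
CREATE INDEX players_team_position_idx ON players (team_id, position);
```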
Generally, indexing foreign keys can be considered good practice (some SQL databases do it automatically). For that, you may find the following query useful (shamelessly borrowed from here):
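The borrowed query isn't reproduced here; a version along these lines - a sketch modeled on widely circulated catalog queries, so the linked original may differ - does the job:

```sql
-- list foreign keys that have no index covering their columns
SELECT c.conrelid::regclass AS table_name,
       c.conname            AS fk_name,
       string_agg(a.attname, ', ' ORDER BY x.n) AS fk_columns
FROM pg_constraint c
CROSS JOIN LATERAL unnest(c.conkey) WITH ORDINALITY AS x(attnum, n)
JOIN pg_attribute a
  ON a.attrelid = c.conrelid AND a.attnum = x.attnum
WHERE c.contype = 'f'
  AND NOT EXISTS (
        SELECT 1
        FROM pg_index i
        WHERE i.indrelid = c.conrelid
          AND i.indpred IS NULL   -- skip partial indexes
          -- the FK columns must be the leading columns of the index
          AND (i.indkey::smallint[])[0:cardinality(c.conkey)-1] @> c.conkey
      )
GROUP BY c.conrelid, c.conname;
```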
To put it simply: this query finds foreign keys that are not indexed :)
Know Your ORM
At the very end - I don't intend to post yet another ORM showdown here. What I want to outline is that it's good to really know your ORM. Consider the following:
We will be playing with a database of a couple of thousand rows and the following SQLAlchemy models. Note that the foreign keys are already indexed.
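The original models aren't reproduced here; a minimal sketch consistent with the examples below, in which the Team/Player names and the DSN are assumptions:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Team(Base):
    __tablename__ = 'teams'
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Player(Base):
    __tablename__ = 'players'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    # the foreign key is indexed already, as noted above
    team_id = Column(Integer, ForeignKey('teams.id'), index=True)

engine = create_engine('postgresql://localhost/tuning_demo')  # assumed DSN
Session = sessionmaker(bind=engine)
session = Session()
```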
First, let's start simple: fetch all player ids, names, and their team_ids.
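The original snippet isn't shown; in plain ORM form it would be something like:

```python
# full ORM objects: every row becomes a Player instance
players = session.query(Player).all()
rows = [(p.id, p.name, p.team_id) for p in players]
```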
OK, let's see how fast SQLAlchemy will deal with that query.
3.5 seconds. No surprise here - we have to create objects, allocate memory and generally process everything. But can we do better?
Let's try fetching particular columns first:
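Again, a sketch of what that could look like:

```python
# only the needed columns: the ORM returns lightweight named tuples
rows = session.query(Player.id, Player.name, Player.team_id).all()
```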
0.6 seconds. Not bad. But wait, we can drop declarative and try using Core:
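Something along these lines:

```python
from sqlalchemy import select

# SQLAlchemy Core: skip ORM object construction entirely
players = Player.__table__
stmt = select(players.c.id, players.c.name, players.c.team_id)
with engine.connect() as conn:
    rows = conn.execute(stmt).fetchall()
```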
0.4 seconds. Now we're talking!
Now, let's do some joining.
So, using all those three techniques (the original snippets aren't shown; with the models above, they would look roughly like this):
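```python
from sqlalchemy import select

# 1. full ORM objects
q1 = session.query(Player, Team).join(Team, Player.team_id == Team.id).all()

# 2. particular columns through the ORM
q2 = (session.query(Player.name, Team.name)
      .join(Team, Player.team_id == Team.id)
      .all())

# 3. Core
players, teams = Player.__table__, Team.__table__
stmt = (select(players.c.name, teams.c.name)
        .select_from(players.join(teams, players.c.team_id == teams.c.id)))
with engine.connect() as conn:
    q3 = conn.execute(stmt).fetchall()
```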
These take, respectively, 9.5, 1.3, and 0.95 seconds.
Apart from the powerful Core that lets you build low-level queries, SQLAlchemy also comes with yield_per, Bundles, a powerful join system, and from_statement, which come in really handy when queries need to perform a bit better.