Inspired by Brett Cannon's post: http://www.snarky.ca/my-new-years-programming-resolutions and bearing in mind that 2016 started almost two months ago, here is my list:
I'm going to share my personal view on a few cases connected, more or less, with Python exception handling.
1. Worst Python anti-pattern revisited
Everyone knows this is harmful:
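The snippet in question is presumably the classic silence-everything block, something like:

```python
try:
    do_something()
except:    # bare except swallows *everything*, even SystemExit and KeyboardInterrupt
    pass
```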
But this (not so harmful in someone's eyes) snippet is even worse, imho:
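Presumably something along these lines - log-and-swallow:

```python
import logging

try:
    do_something()
except Exception as e:
    logging.error(e)  # dutifully logged... and silently swallowed anyway
```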
We may assume that someone was calling a complex function whose internals they weren't that familiar with, so they decided to log any exception that occurs. And unless it was a quick script or snippet, that should tell us at least three things about this person:
a) they probably didn't bother writing unit tests
b) they didn't care enough to identify the function's internals and the special cases that may occur (ValueError, KeyError, etc.)
c) they know about the first anti-pattern (except/pass), yet were lazy enough to reproduce it, only slightly improving on quality
And unless you're using Sentry or similar software, rather than relying purely on log files, I'm almost 100% sure that most of those exceptions will be lost in the abyss.
2. Ignoring with ignored
A useful pattern for silencing unwanted exceptions, introduced (to me) by Raymond Hettinger - use with caution:
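A minimal sketch of that pattern (essentially what later landed in the standard library as contextlib.suppress in Python 3.4):

```python
import os
from contextlib import contextmanager

@contextmanager
def ignored(*exceptions):
    try:
        yield
    except exceptions:
        pass

# e.g. removing a file that may or may not exist:
with ignored(OSError):
    os.remove('somefile.tmp')
```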
3. Using what's already been done
Unless you're writing a big or very generic library, or some fancy project, I'd suggest you rely on the exceptions already defined in the standard library:
a) Raise TypeError when someone tries to play with your business logic in a harmful way - e.g. providing car instances to a function that expects bank accounts in order to do some money operations.
b) Raise AttributeError instead of the usual custom ConfigurationExceptions that are thrown whenever something is not present in configuration/settings files.
c) Raise KeyError whenever a handler is missing for a specific action - see the dispatch sketch after this list.
d) Raise ValueError anytime the data provided is invalid and doesn't match expectations, for instance when you expect a number between 1 and 100 and someone enters 1000.
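The KeyError case from point c) could look like this (the handlers and action names are hypothetical):

```python
def handle_create(payload):
    ...

def handle_delete(payload):
    ...

HANDLERS = {
    'create': handle_create,
    'delete': handle_delete,
}

def dispatch(action, payload):
    try:
        handler = HANDLERS[action]
    except KeyError:
        raise KeyError('no handler registered for action %r' % action)
    return handler(payload)
```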
This little gang of four should keep you going for a while ;-)
4. Don't raise NotImplementedError
I literally hate snippets like this:
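Presumably something like this "poor man's interface" (a hypothetical reconstruction):

```python
class PaymentBackend(object):
    """An 'interface' where errors surface only when a method is actually called."""

    def pay(self, amount):
        raise NotImplementedError

    def refund(self, amount):
        raise NotImplementedError
```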
It's time the abc module became more popular - it's a much better and more flexible way of creating interfaces.
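A minimal abc-based sketch of the same idea (the names are made up):

```python
from abc import ABCMeta, abstractmethod

class PaymentBackend(metaclass=ABCMeta):
    @abstractmethod
    def pay(self, amount):
        """Charge the given amount."""

# PaymentBackend() now fails at instantiation time with a TypeError,
# instead of blowing up later with NotImplementedError at call time.
```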
Putting pass in each function is perhaps not the best idea either, but this pattern is extremely useful for application factories: http://flask.pocoo.org/docs/0.10/patterns/appfactories/, which may be used to set up different objects for later use on app startup.
5. Drop try/except/else/finally statements and favour context managers
Instead of this:
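A hypothetical before-picture, with the acquire/release ceremony spelled out inline (all names are made up):

```python
conn = acquire_connection()
try:
    data = conn.fetch()
except ConnectionError:
    log.exception('fetch failed')
    raise
finally:
    conn.close()
```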
put it in a context manager and let your API be neat and beautiful:
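And the after-picture - the same cleanup captured once in a context manager:

```python
from contextlib import contextmanager

@contextmanager
def connection():
    conn = acquire_connection()
    try:
        yield conn
    finally:
        conn.close()

with connection() as conn:
    process(conn.fetch())
```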
6. Know your library
Django, SQLAlchemy, requests - they all come with a great deal of predefined exceptions: non-existing rows, HTTP errors, timeouts, validation errors - it's all there for you.
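For instance, requests alone covers the usual failure modes out of the box:

```python
import requests

try:
    response = requests.get('https://example.com/api', timeout=5)
    response.raise_for_status()  # turn 4xx/5xx responses into exceptions
except requests.Timeout:
    print('service took too long to answer')
except requests.HTTPError as exc:
    print('got a bad status code: %s' % exc)
```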
In case you've ever wanted to build your personal "must reads" of Python-related stuff, or to search for something in articles published a while ago - here is a simple approach to the problem:
Crawl reddit's r/Python:
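A sketch of such a spider (the CSS selectors are assumptions based on the old reddit markup and may need adjusting):

```python
import scrapy

class RedditPythonSpider(scrapy.Spider):
    name = 'reddit_python'
    start_urls = ['https://www.reddit.com/r/Python/']

    def parse(self, response):
        # each post on old reddit is a div.thing; grab its title link
        for post in response.css('div.thing'):
            url = post.css('a.title::attr(href)').extract_first()
            title = post.css('a.title::text').extract_first()
            # keep only links leading outside reddit (external blogs/services)
            if url and url.startswith('http') and 'reddit.com' not in url:
                yield {'title': title, 'url': url}
```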
That's a very basic scrapy spider, which you can invoke for instance like this:
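(assuming the spider above was saved as reddit_python.py):

```bash
scrapy runspider reddit_python.py -o articles.json
```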
and store all links with titles that lead to external blogs/services. That way you won't miss a single article :-). Hook this script up to some database, add a published_at field, and get notified only when there is new stuff.
Coding style is important and in a way defines every programmer. Every day we come across well-tested code, code that works but is not generic, overthought code and of course crappy code :-). Personally, I had a tendency to value code that is practical (does the job well and is straightforward) over generic solutions, which I considered to be 'too much for now'. But as time passed I've learned that there are a couple of Python tools that provide a very good balance between something generic and practical, and are still pythonic (that's what cool kids aim for).
The idea behind this article is to show different approaches to typical programmer struggles :-). We'll be designing a very simple metrics system (basically a hit counter) for our views or business logic parts.
(Keep in mind that a few lines are actually 'pseudo-code'.)
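A plausible reconstruction of that naive starting point (the view, key name and render are stand-ins):

```python
import redis

def homepage_view(request):
    conn = redis.StrictRedis()
    hits = conn.get('homepage_hits') or 0
    conn.set('homepage_hits', int(hits) + 1)  # get/set round-trip is not atomic!
    return render('homepage.html')            # pseudo-code: render as in your framework
```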
So we've decided to use redis to store our metrics - that's a plus. But apart from that, the overall quality is rather poor.
Improvements, improvements ...
First: we did not use INCR so we are not guarding against race conditions. Let's fix that (all imports are skipped for convenience):
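The fixed version might read:

```python
def homepage_view(request):
    conn = redis.StrictRedis()
    conn.incr('homepage_hits')  # INCR is atomic, so concurrent hits don't race
    return render('homepage.html')
```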
Shorter and better. But something does not feel right. In this case connecting to redis is simple because we rely on defaults, but it may require specifying host, port and db, so our code may slightly grow. Apart from that, the key name is somewhat hardcoded. So each time we'd like to 'install' this piece of code in another view, we'd have to duplicate everything. It's a job for a decorator!
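One way such a decorator could look (the parameters are my guesses at the original API):

```python
from functools import wraps

import redis

def metric(key, enabled=True, host='localhost', port=6379, db=0):
    """Bump a redis counter every time the decorated callable runs."""
    conn = redis.StrictRedis(host=host, port=port, db=db)

    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            if enabled:
                conn.incr(key)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@metric('homepage_hits')
def homepage_view(request):
    ...
```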
That feels great. Not only is it reusable and lets us switch everything on and off - it looks pythonic (meaning we're getting there). But still, something is missing. What if I'd like to use this code in a 'regular' class or function, not as a decorator? We need something better:
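A sketch of such a class - usable as a decorator, as a context manager, or called directly (the API is my reconstruction, not the original listing):

```python
from functools import wraps

import redis

class Metric(object):
    """Hit counter usable as a decorator, a context manager, or directly."""

    def __init__(self, key=None, enabled=True, host='localhost', port=6379, db=0):
        self.key = key
        self.enabled = enabled
        self.conn = redis.StrictRedis(host=host, port=port, db=db)

    def __call__(self, func):
        # decorator usage: default the key to the function's name
        if self.key is None:
            self.key = func.__name__

        @wraps(func)
        def wrapper(*args, **kwargs):
            with self:
                return func(*args, **kwargs)
        return wrapper

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        if self.enabled and exc_type is None:
            self.conn.incr(self.key)
        return False  # never swallow exceptions

    def bump(self, key=None, amount=1):
        # manual usage, independent of the decorator/context protocols
        if self.enabled:
            self.conn.incr(key or self.key, amount)
```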
Uff, that's pretty long. But boy, it was worth it. As I believe Raymond Hettinger said - context managers are one of the most underused features of Python, which is strange since they're a good choice when it comes to implementing the acquire & release pattern (here it's not that obvious, but we could move the redis connection into __enter__ and possibly provide better exception handling; nevertheless, a context manager is a protocol which we can freely use).
We can decorate our regular views (function names will be used as keys):
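With the sketch above:

```python
@Metric()
def homepage_view(request):
    ...  # bumps the 'homepage_view' key on every call
```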
We can use a custom metric around one block of code:
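(key and business logic names are made up):

```python
with Metric('checkout_attempts'):
    process_checkout(cart)
```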
Or let ourselves bump a couple of metrics manually:
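For example:

```python
metrics = Metric()
metrics.bump('payments_started')
metrics.bump('payments_completed', amount=2)
```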
Last but not least - since it's a class, we can support other backends:
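For example, a hypothetical in-memory backend (handy in tests) only needs to override the storage bits:

```python
class InMemoryMetric(Metric):
    counters = {}

    def __init__(self, key=None, enabled=True):
        self.key = key
        self.enabled = enabled

    def __exit__(self, exc_type, exc_value, traceback):
        if self.enabled and exc_type is None:
            self.counters[self.key] = self.counters.get(self.key, 0) + 1
        return False

    def bump(self, key=None, amount=1):
        k = key or self.key
        self.counters[k] = self.counters.get(k, 0) + amount
```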
That was a long road - and there is still room for improvement. As you can see, with some tweaking we can improve the quality of working code and bring it to another level. Here, we fixed a race condition, improved reusability by rewriting the component as a decorator, and allowed another, more raw usage from any class or function.
Really short entry, but I hope someone will benefit from it :)
Strace is a Linux tool that can attach to any process and monitor the system calls it makes.
It's simple & complex at the same time and requires a little reading on the topic, but it's totally worth it.
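A taste of it (these are standard strace flags; the PID is yours to fill in):

```bash
# attach to a running process, follow its children (-f),
# and trace only network-related syscalls
strace -p <pid> -f -e trace=network
```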
Recently I had strange problems with postgresql (well, not with postgresql itself, but with a certain postgresql configuration on a certain machine :)) which were pretty hard to debug using "standard" ways - monitoring software, logs, etc. Luckily, I remembered a small tool called strace, which to be honest I hadn't used for like 2 or 3 years. With that utility it was pretty easy to find & fix the connection problems.
The point of this article: strace follows the unix philosophy of doing one small thing pretty well, but in the world of many tools & open source software it's rather forgotten (at least it was to me), and it's a shame because it can be really helpful. Try to play with it and maybe one day you'll find it handy like I did :)
After all the problems, I came across an article which gives a more thorough explanation of this topic: Debugging obscure postgres problems
locust.io is a modern Python tool for load testing. I have a couple of small services working on a development server and running on nanomsg; although nanomsg is still in beta, I was tempted to move them to a production environment (basically because most of them did one thing and did it well). But before deployment I wanted to test my architecture, and I needed benchmarks for that.
It turns out that locust, which by default plays nicely with HTTP-based services, also provides a neat way to hook custom clients into its core features. So it's fairly easy to test xml-rpc, zeromq, rabbitmq or nanomsg based apps.
The code at the bottom is pretty straightforward - it's a simple nanomsg client serving as an example, into which we hook locust events in order to collect locust metrics. 20-30 lines of simple magic and we're done!
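A sketch of the idea against the locust 0.7-era events API (the address, task names and the nanomsg bindings are assumptions):

```python
import time

from locust import Locust, TaskSet, task, events
from nanomsg import Socket, REQ  # the `nanomsg` python bindings


class NanomsgClient(object):
    """REQ/REP client that reports request timings back to locust."""

    def __init__(self, address):
        self.socket = Socket(REQ)
        self.socket.connect(address)

    def send(self, payload):
        start = time.time()
        try:
            self.socket.send(payload)
            response = self.socket.recv()
        except Exception as exc:
            events.request_failure.fire(
                request_type='nanomsg', name='send',
                response_time=(time.time() - start) * 1000, exception=exc)
        else:
            events.request_success.fire(
                request_type='nanomsg', name='send',
                response_time=(time.time() - start) * 1000,
                response_length=len(response))
            return response


class NanomsgLocust(Locust):
    def __init__(self, *args, **kwargs):
        super(NanomsgLocust, self).__init__(*args, **kwargs)
        self.client = NanomsgClient('tcp://127.0.0.1:5555')


class ServiceTasks(TaskSet):
    @task
    def ping(self):
        self.client.send(b'ping')


class ServiceUser(NanomsgLocust):
    task_set = ServiceTasks
    min_wait = 100
    max_wait = 1000
```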
We can now run:
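(assuming the file above was saved as nanomsg_locustfile.py; the web UI will then be at http://localhost:8089):

```bash
locust -f nanomsg_locustfile.py
```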
and enjoy :-)
Whenever you need to test a custom architecture, find single points of failure or simply experiment with your stack - use locust.io to put your code under pressure :-)
Database tuning is often considered unnecessary, and many people leave it for the very end of development or skip it completely. I'll be covering logging, backups, indexing and the ORM's role, in order to give you some insight into different database-related tasks.
The thing that many people don't get right when it comes to configuring postgresql properly is logging.
Logging connections, disconnections, lock waits, temp files and checkpoints not only gives us an overview, but may also help debug other configuration parameters that rely, for instance, on memory.
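A baseline postgresql.conf logging section along those lines might be (the exact values are my suggestions, not gospel):

```
logging_collector = on
log_directory = 'pg_log'
log_rotation_age = 1d
log_min_duration_statement = 200    # log statements slower than 200 ms
log_connections = on
log_disconnections = on
log_lock_waits = on
log_temp_files = 0                  # log every temp file created
log_checkpoints = on
log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d '
```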
You can rely on that logging configuration for most of your projects - having logs rotated daily with reasonable metadata will surely help you analyse the problems and bottlenecks of your DB setup.
Logging - PgBadger
Logging itself is a useful little feature, but we can do better :) We can have our logs analyzed by pgBadger, which will not only aggregate useful information like CRUD statistics, but also provide a nice graphical representation of what's being done in our database.
Typical setup involves:
- Downloading & installing pgbadger according to the official documentation.
- Setting up a cronjob with a bash script which handles log files from /var/log/postgresql on a daily basis, creates a pgbadger report and stores/uploads it somewhere (a sketch follows).
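The cron half of that setup can be as small as this (paths and schedule are assumptions):

```
# crontab: build a report from the rotated logs every morning at 4am
0 4 * * * pgbadger /var/log/postgresql/postgresql-*.log -o /var/reports/pgbadger-$(date +\%Y\%m\%d).html
```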
There are many tutorials and guides on how to set up pgbadger and tie it to cron: http://www.antelink.com/blog/using-pgbadger-monitor-your-postgresql-activity.html. In the end you'll be able to get a visual representation of your database queries.
There is no golden rule here; the backup setup depends on the project's business value and a couple of other factors. For a small project, a small bash script which performs pg_dump and later pushes the output file to a remote destination should be enough.
Create a file for the script (I'll assume /usr/local/bin/pg_backup.sh as its path) and add something like the following to it:
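A minimal sketch (the database name, paths and S3 bucket are placeholders):

```bash
#!/bin/bash
# nightly dump, compressed and shipped off the machine
FILENAME="backup-$(date +%Y%m%d).sql.gz"
pg_dump -U postgres mydatabase | gzip > "/var/backups/postgres/$FILENAME"
# keep backups off the database server itself
aws s3 cp "/var/backups/postgres/$FILENAME" "s3://my-backups/postgres/$FILENAME"
```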
Save the file, make it executable and create a cron entry:
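(running nightly at 2am - the schedule is arbitrary):

```
0 2 * * * /usr/local/bin/pg_backup.sh
```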
It's an anti-pattern to store backups on the same server, so either connect to your database from a remote location, or go the other way round and have a script push your backups to another destination.
Backups - WAL-e
WAL-E is a tool created by Heroku that provides continuous archiving of WAL segments. To put it simply: if your app can't afford to lose transactions, it's the way to go.
All your WAL files will be stored on S3 as backups, in case you need them. WAL (the write-ahead log) is the mechanism postgresql uses to record database changes before they're applied. Archiving WAL segments with WAL-E will allow you to restore your database to the state from just before a crash.
It's not exactly part of this tutorial, but since I've mentioned creating cronjobs I think it's a good time to introduce you to flock. Most Linux distributions have a command called flock, which will run a command only if it can get a lock on a certain file.
So changing our entry in crontab to:
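(the lock file path is arbitrary; -n tells flock to give up immediately instead of queueing):

```
0 2 * * * flock -n /tmp/pg_backup.lock /usr/local/bin/pg_backup.sh
```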
will prevent us from running duplicate cronjobs, which not only slow the server down but may lead to hard-to-track errors. That's how you can easily take care of the problems your cronjobs may cause. Personally, I wrap all crontab entries with flock, and I recommend you do the same.
No golden rule here. First, play with EXPLAIN ANALYZE to find slow queries. If that's not an option at some point, use pgbadger and the logs to find queries that may suffer without indexes.
Use partial indexes (also known as filtered indexes) for stuff like the sketch after this list:
- querying the database against a column which is NULL for most of the rows
- querying the database against some business value (salary, a point in time, a status or kind field)
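In SQL, with hypothetical table and column names:

```sql
-- index only the rows still waiting to be processed
CREATE INDEX idx_orders_unprocessed ON orders (created_at)
WHERE processed_at IS NULL;
```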
This will index only a subset of rows, which keeps the index significantly smaller compared to indexing the whole table.
Use composite indexes (also known as multicolumn indexes) for queries that constantly rely on the same filtering conditions, like:
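(again, hypothetical names):

```sql
-- one index covering both columns that always appear together in WHERE
CREATE INDEX idx_players_team_status ON players (team_id, status);
```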
Generally, indexing foreign keys can be considered good practice (some SQL databases do that automatically). For that you may find the following query useful (shamelessly borrowed from here):
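The borrowed query isn't reproduced here verbatim; a catalog query in the same spirit looks roughly like this (treat it as a sketch, not a drop-in):

```sql
-- list foreign key constraints whose columns are not covered by any index
SELECT conrelid::regclass AS table_name,
       conname            AS constraint_name
FROM pg_constraint
WHERE contype = 'f'
  AND NOT EXISTS (
      SELECT 1
      FROM pg_index
      WHERE indrelid = conrelid
        AND (string_to_array(indkey::text, ' ')::smallint[])[1:array_length(conkey, 1)] @> conkey
  );
```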
To put it simply: this query finds foreign keys that are not indexed :)
Know Your ORM
At the very end - I don't intend to post yet another ORM showdown here. What I want to outline is that it's good to learn your ORM. Consider:
We will be playing with a database with a couple of thousand rows and the following sqlalchemy models. Note that the foreign keys are already indexed.
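A plausible reconstruction of the models, consistent with the queries below (names and DSN are made up):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Team(Base):
    __tablename__ = 'teams'
    id = Column(Integer, primary_key=True)
    name = Column(String(100))

class Player(Base):
    __tablename__ = 'players'
    id = Column(Integer, primary_key=True)
    name = Column(String(100))
    team_id = Column(Integer, ForeignKey('teams.id'), index=True)

engine = create_engine('postgresql:///benchmarks')  # hypothetical DSN
Session = sessionmaker(bind=engine)
session = Session()
```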
First, let's start simple: let's fetch all player ids, names and their team_ids.
OK, let's see how fast sqlalchemy deals with that query:
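Something like (using the session from the models sketch above):

```python
# full ORM: every row is materialized as a Player instance
rows = [(p.id, p.name, p.team_id) for p in session.query(Player).all()]
```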
3.5 seconds. No surprise here - we have to create objects, allocate memory and generally process everything. But can we do better?
Let's try fetching particular columns first:
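Again a guess at the original listing:

```python
# column-limited query: rows come back as lightweight tuples, not Player objects
rows = session.query(Player.id, Player.name, Player.team_id).all()
```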
0.6 seconds. Not bad. But wait, we can drop declarative and try using Core:
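For instance (using the 1.x-era Core API):

```python
from sqlalchemy import select

# sqlalchemy core: plain tuples straight from the driver, no ORM layer at all
players = Player.__table__
rows = engine.execute(
    select([players.c.id, players.c.name, players.c.team_id])
).fetchall()
```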
0.4 seconds. Now we're talking.
Now, let's do some joining
So, using all those three techniques:
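Roughly - the join mirrors the three variants above (reusing players and select from the Core snippet):

```python
# 1. full ORM objects
session.query(Player).join(Team).all()

# 2. particular columns, still through the ORM
session.query(Player.name, Team.name).join(Team).all()

# 3. core
teams = Team.__table__
engine.execute(
    select([players.c.name, teams.c.name])
    .select_from(players.join(teams))
).fetchall()
```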
takes respectively: 9.5, 1.3 and 0.95 seconds.
Apart from the powerful Core that allows you to build low-level queries, sqlalchemy also comes with yield_per, bundles, a powerful join system and from_statement, which are really handy when queries need to perform slightly better.
Stay tuned for part 3!
The aim of this guide (I plan to create a couple of separate parts for different aspects of running a web application in production) is to introduce you to key concepts and problems that may occur on a live server. I'll touch on the server itself, the database, the application server and deployment, and provide you with the configuration files and flow that I use and rely on. Note that it's extremely subjective, but I think this configuration may be considered at least a "reasonable default". Let's dig in, shall we?
Most of the posts refer to the stack I work on, which is:
By Linux I refer to Ubuntu, since that's the distribution I use for most of my projects. Playing with Linux configuration, we can mostly affect three things:
Max open files
By default set to 1024. Since sockets are used for communication between different tools, this limit affects the number of concurrent connections our stack is able to handle. If we expect high traffic, it's good practice to tune that setting:
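One common way is via /etc/security/limits.conf (the value is a typical suggestion, not a magic number):

```
*    soft    nofile    65536
*    hard    nofile    65536
```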
Now reload the changes:
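Limits from limits.conf apply at login, so log out and back in - then verify (my assumption of what the original snippet showed):

```bash
ulimit -n
```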
Kernel queue for accepting new connections
By default set to 128, it represents the size of the kernel queue for accepting new connections.
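That's the net.core.somaxconn knob; in /etc/sysctl.conf (the value is a common suggestion for busy servers):

```
net.core.somaxconn = 4096
# then reload with: sudo sysctl -p
```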
By default set to 32768-61000, it represents the range of local ports our system can use. The size of that range caps the number of concurrent open connections.
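Widening it in /etc/sysctl.conf (an often-suggested value):

```
net.ipv4.ip_local_port_range = 10000 65000
```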
DO NOT ENABLE IT!
A common misconception is to enable fast recycling (most tuning guides give such advice) so sockets do not stay in TIME_WAIT, like this:
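The sysctl line you'll often see recommended - and should NOT follow:

```
net.ipv4.tcp_tw_recycle = 1
```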
However, as explained in the linked article, it's highly discouraged (among other things, it breaks connections from clients behind NAT).
In order to improve I/O we can tell Linux not to store information about the last file access or read time (which it keeps by default). To change that, modify the configuration of the partition your files reside on:
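An example /etc/fstab entry (device, mountpoint and filesystem are placeholders - adjust to your setup):

```
/dev/sda1  /  ext4  defaults,noatime,nodiratime  0  1
```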
noatime affects files, and nodiratime affects directories, respectively.
In memory filesystem for /tmp
Adding a tmpfs line to your /etc/fstab file replaces the filesystem for the /tmp directory with an in-memory filesystem. This will highly increase I/O performance on file uploads. Note that it may become a bottleneck when the uploaded files are large or if you are short on RAM.
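Such a line could look like this (the size cap is an assumption - adjust to your RAM):

```
tmpfs  /tmp  tmpfs  defaults,noatime,size=1G  0  0
```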
At the very end, mount the new filesystem:
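```bash
sudo mount /tmp    # or simply: sudo mount -a
```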
Getting swap right
When you're forced to add some swap to your system, be sure to put these two lines in your sysctl.conf:
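(the exact values are common recommendations, not magic numbers):

```
vm.swappiness = 10
vm.vfs_cache_pressure = 50
```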
which respectively tell our system not to swap data out of RAM that eagerly (swappiness), and to keep filesystem metadata (dentry/inode) caches around longer so they're not looked up repeatedly (vfs_cache_pressure).
The three things I want you to remember after this part are:
- The default configuration of your system is good, but may not be properly tuned for high loads and for getting the maximum out of the tools in your stack (nginx, haproxy).
- No one knows all that stuff by heart (at least I don't), so if you find this article useful, save it somewhere so you can look it up later when it comes to configuration, or create an ansible playbook that deals with all that stuff :-).
- Stay tuned for part 2 !