Some of the tricks we used to speed up calls to our analytics API written in Python: played with asyncio, messed with SQLAlchemy, hacked deep in asyncpg, rewrote parts in Cython, found better data structures, replaced some pandas with pure numpy.
Python code optimization may seem easy or hard depending on the performance target. If the target is “best effort”, carefully choosing the algorithm and applying well-known common practices is usually enough. If the target is dictated by the UX, you have to go down a few abstraction layers and hack the system sometimes. Or rewrite the underlying libraries. Or change the language, really.
This post is about our experience in Python code optimizations when whatever you do is not fast enough. I personally had much fun undertaking the challenges and squeezing API response times under one second. So much fun, in fact, that we’ve got enough notes for a series of blog posts.
By writing “we” I mean the engineers working at Athenian. Athenian offers a SaaS to help engineering leaders build a continuous improvement software development culture. To translate from landing-pagish, we mirror GitHub and JIRA metadata to our own DB, analyze it, and display metrics and charts in the SPA. For now. The plan is to conquer the world, of course.
The API request processing typically balances between two poles: CPU and IO wait. There is no clear separation between them; like yin and yang, they dance together hand in hand with sophisticated relationships. If one profiles a request, they will see a messy DAG of function calls that can project to CPU and IO occupation axes. Let’s consider this simplified code:
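A runnable sketch of the setup (timings scaled down ~100x; `execute_sql` is a stand-in for the real DB driver call):

```python
import asyncio

async def execute_sql(delay: float) -> None:
    # Stand-in for the DB round trip: pure IO wait, no CPU work.
    await asyncio.sleep(delay)

async def query_sql_a() -> str:
    # (1) build and send the query - CPU
    await execute_sql(0.05)  # (2) wait for PostgreSQL - IO
    # (3) post-process the response - CPU
    return "a"

async def query_sql_b() -> str:
    await execute_sql(0.03)
    return "b"

async def query_sql_c() -> str:
    await execute_sql(0.01)
    return "c"

async def main() -> list:
    # gather() starts the coroutines in argument order
    return await asyncio.gather(query_sql_a(), query_sql_b(), query_sql_c())

results = asyncio.run(main())
```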
We launch three coroutines that request data from a SQL DB. Athenian uses PostgreSQL, so let’s imagine that we work with PostgreSQL. Each coroutine passes through three stages:

1. Prepare and send the SQL query (CPU).
2. Wait for PostgreSQL to compute and return the response (IO wait).
3. Post-process the fetched rows (CPU).
Let’s suppose that (1) and (3) each take one second for every coroutine, and that PostgreSQL is infinitely powerful and always requires 5 seconds to compute the response for query_sql_a, 3 seconds for query_sql_b, and 1 second for query_sql_c. This doesn’t mean that, for example, query_sql_a will always spend 5 seconds in IO wait (2), because Python can only execute one of the three coroutines at any moment of time.
asyncio.gather launches coroutines in the order of passed arguments. That’s not written in the documentation and must be an implementation detail, but it’s important to consider: we will first execute (1) of query_sql_a, then (1) of query_sql_b, then (1) of query_sql_c, then wait one second while PostgreSQL is busy, then execute (3) in the opposite order.
According to the execution plan, we bottleneck in the CPU: the CPU was doing useful work during 86% of the OS thread time. Now consider a different order of launching the coroutines:
The second execution plan demonstrates how bad things go if we don’t guess the optimal order in which to launch the coroutines. The wall time increases from 7s to 10s, that is, by 43%. We no longer heavily bottleneck in the CPU (60% vs. 86%). query_sql_c’s stage 3 competes with query_sql_a’s stage 1 and wins or loses depending on the event loop internals.
I am writing about Python code optimizations here, so I will not cover such outstanding topics as SQL performance and reducing the individual IO wait-s. Hence my advice would be:
Try to pass coroutines in asyncio.gather() ordered by the expected IO wait time descending. That is, the first argument should be the coroutine with the highest expected IO wait, and so on.
Real example: we have a spot in the code where we gather ~10 coroutines. When I ordered them according to the mentioned heuristic, the average overall execution time halved.
Would it make sense to order the arguments of a hypothetical thread_gather that launches and joins threads instead of coroutines? Of course not. Would it be faster to launch threads instead of coroutines in my example? Actually, coroutines perform better given the GIL:
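A sketch of the comparison (names are illustrative; with pure sleeps the two finish in similar time, while in the real workload the CPU-bound stages of threads also contend for the GIL and lose to coroutines):

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

def blocking_query(delay: float) -> float:
    time.sleep(delay)  # the GIL is released while sleeping
    return delay

async def async_query(delay: float) -> float:
    await asyncio.sleep(delay)
    return delay

def thread_gather(delays):
    # hypothetical thread-based analog of asyncio.gather()
    with ThreadPoolExecutor(max_workers=len(delays)) as pool:
        return list(pool.map(blocking_query, delays))

def coroutine_gather(delays):
    async def main():
        return await asyncio.gather(*(async_query(d) for d in delays))
    return asyncio.run(main())

delays = [0.05, 0.03, 0.01]
from_threads = thread_gather(delays)
from_coroutines = coroutine_gather(delays)
```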
Our API uses SQLAlchemy Core to generate SQL (no ORM). There are quite a few places where some conditions in WHERE repeat. One example is the replacement of OR with UNION ALL: instead of
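something along these lines (a sketch with an illustrative table; the actual schema differs):

```python
import sqlalchemy as sa

t = sa.table("t", sa.column("a"), sa.column("b"), sa.column("c"))

# WHERE (a = 1 OR b = 2) AND c = 3
query = sa.select(t).where(
    sa.or_(t.c.a == 1, t.c.b == 2),
    t.c.c == 3,
)
```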
Why UNION ALL is usually better in such a scenario is a topic of another blog post. Let’s focus on how UNION ALL looks in SQLAlchemy:
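A sketch (same illustrative table as above; note how the shared condition `c = 3` has to be constructed for each branch):

```python
import sqlalchemy as sa

t = sa.table("t", sa.column("a"), sa.column("b"), sa.column("c"))

# the same rows as the OR variant, but PostgreSQL plans each branch independently
query = sa.union_all(
    sa.select(t).where(t.c.a == 1, t.c.c == 3),
    sa.select(t).where(t.c.b == 2, t.c.c == 3),
)
```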
Imagine that instead of c = 3 there is a big expression, with variadic IN, etc. — constructing such an object twice is expensive. Instead, we can write:
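A sketch: build the heavy condition a single time and reference the very same ClauseElement object in every branch (SQLAlchemy is happy to reuse it):

```python
import sqlalchemy as sa

t = sa.table("t", sa.column("a"), sa.column("b"), sa.column("c"))

# construct the expensive expression once...
shared = t.c.c.in_([3, 4, 5])  # imagine a big variadic IN here

# ...and reuse the same object in both branches
query = sa.union_all(
    sa.select(t).where(t.c.a == 1, shared),
    sa.select(t).where(t.c.b == 2, shared),
)
```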
This won’t work with every SQLAlchemy dialect, and with SQLite in particular, because SQLAlchemy generates ?, ?, ? parameter placeholders there instead of indexed references $1, $2, $3. Nevertheless, together with the upgrade from SQLAlchemy 1.3 to 1.4, which improved the treatment of big IN-s, we got 1.5-2x faster SQL compilation.
We query PostgreSQL through asyncpg. asyncpg fetches return rows, like nearly any other relational DB driver’s. However, our analytics API needs to build pd.DataFrame-s, which are columnar: the values of each returned column are stored together. Moreover, before pandas 2.0, several columns of the same dtype are stored together in the same numpy array, aka the block.
Naively constructing a DataFrame via DataFrame.from_records() is extremely inefficient. Suppose that PostgreSQL knocks at asyncpg’s door. Here is what comes next:

1. asyncpg parses pgproto and creates a Python object for every fetched value.
2. Those objects are packed into asyncpg.Record-s.
3. DataFrame.from_records() iterates the Record-s and copies every object into per-column lists.
4. pandas guesses the dtype of each column and converts the objects accordingly.
5. pandas consolidates columns of the same dtype into blocks, copying the values once more.
Given pure object columns (e.g., with SQL nulls), we touch their reference counters 4 times: in (1), (3), (4), and (5). asyncpg.Record is used as an auxiliary container and can be excluded. Moreover, we don’t have to perform (4) because we already know the correct dtypes from the SQL query. The end-to-end transition from pgproto to a ready DataFrame therefore takes over 500 ms with ~20 object and ~20 typed columns and 300,000 rows on a modern x86 CPU.
The ideal pipeline would be:

1. Parse pgproto and write every value directly into the proper typed numpy array; the dtypes are already known from the SQL query.
2. Reference those arrays from a DataFrame without any further copies.
Pure object values would increment the reference counters only once. The whole thing would take ~5 ms by rough estimation. Unfortunately, however, parsing pgproto and constructing asyncpg.Record-s reside deep inside the Cython and even C code of asyncpg, so building the ideal pipeline means forking the project. We will surely fork it before conquering the world, but we have had to find a compromise in the meantime.
Our current compromise pipeline:

1. asyncpg parses pgproto and builds asyncpg.Record-s as usual.
2. We iterate the Record-s in Cython and write the values straight into preallocated numpy arrays with the dtypes dictated by the SQL query.
3. Object values are copied into object arrays by reference.
4. We assemble the DataFrame from the ready columns, avoiding both dtype inference and consolidation.
Now pure object values increment the refcounters twice: in (1) and (3). We no longer try to guess the types. The memory copy bloat is significantly reduced. Our measurements indicate at least 10x faster conversion times, around ~50ms.
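The shape of the trick can be sketched in pure Python (our real code runs the inner loop in Cython; plain tuples stand in for asyncpg.Record-s here):

```python
import numpy as np
import pandas as pd

def rows_to_df(rows, columns, dtypes):
    """Convert row-oriented DB output to a DataFrame without dtype guessing."""
    n = len(rows)
    arrays = {}
    for j, (name, dtype) in enumerate(zip(columns, dtypes)):
        arr = np.empty(n, dtype=dtype)  # dtype known from the SQL query
        for i, row in enumerate(rows):
            arr[i] = row[j]
        arrays[name] = arr
    return pd.DataFrame(arrays)

rows = [(1, 2.5), (2, 3.5), (3, 4.5)]
df = rows_to_df(rows, ["id", "value"], [np.int64, np.float64])
```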
The actual source code can be found in the repository where our API lives: athenianco/athenian-api-open. It’s not universal and there is not enough will to make it a proper open-source library. Feel free to adapt it to your needs! We distribute those files under the MIT license.
Let me finish this section by giving generic advice.
Avoid object columns in pandas DataFrame-s whenever possible. Operations with them are much slower than with properly typed ones.
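A quick illustration of the gap (same values, different dtypes; operations on the object column fall back to per-element Python calls):

```python
import numpy as np
import pandas as pd

typed = pd.Series(np.arange(100_000, dtype=np.int64))
boxed = typed.astype(object)  # every element becomes a heap-allocated PyObject

# both compute the same sum, but `boxed` is many times slower
total_typed = typed.sum()
total_boxed = boxed.sum()
```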
A very specific objective: optimize the iteration over raw asyncpg.Record-s. It is indeed possible to work with them directly with GIL released. Cython code follows:
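A sketch of the idea (function and file names are illustrative; error handling is omitted, and declaring PyLong_AsLong as nogil is only safe on the happy path):

```cython
# cython: language_level=3
from cpython cimport PyObject

cdef extern from "asyncpg_recordobj.h":
    PyObject *ApgRecord_GET_ITEM(PyObject *, Py_ssize_t) nogil

cdef extern from "Python.h":
    PyObject *PyList_GET_ITEM(PyObject *, Py_ssize_t) nogil
    long PyLong_AsLong(PyObject *) nogil

def sum_first_field(rows: list) -> int:
    """Sum the integer first field of each asyncpg.Record with the GIL released."""
    cdef:
        Py_ssize_t i, n = len(rows)
        long total = 0
    with nogil:
        for i in range(n):
            total += PyLong_AsLong(
                ApgRecord_GET_ITEM(PyList_GET_ITEM(<PyObject *>rows, i), 0))
    return total
```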
asyncpg_recordobj.h is a simplification of the real recordobj.h in asyncpg:
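It may look roughly like this (a sketch: the field order and layout must mirror the real recordobj.h of the installed asyncpg version, otherwise the offsets are wrong):

```c
#pragma once
#include "Python.h"

typedef struct {
    PyObject_VAR_HEAD

    /* the real struct keeps the record descriptor and a cached hash here;
       for our purposes they only need to occupy the right number of bytes */
    PyObject *desc;
    Py_hash_t self_hash;

    /* the values, laid out exactly like in PyTupleObject */
    PyObject *ob_item[1];
} ApgRecordObject;

#define ApgRecord_GET_ITEM(op, i) (((ApgRecordObject *)(op))->ob_item[i])
```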
Depending on the type of value, the nogil hack may be handy or may appear useless. For example, if value is a string and your CPython stores Unicode strings in UTF-8 internally, <const char *>PyUnicode_DATA(value) will work. If value is an integer, PyLong_AsLong(value) will work, too. But working with complex classes will require taking the GIL.
The speedup should be ~10x.
In case we work with tuples instead of asyncpg.Record-s, we can slightly change the code above to remain functional:
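For tuples it is enough to swap the accessor (PyTuple_GET_ITEM has the same borrowed-reference semantics):

```cython
cdef extern from "Python.h":
    # drop-in replacement for ApgRecord_GET_ITEM when the rows are tuples
    PyObject *PyTuple_GET_ITEM(PyObject *, Py_ssize_t) nogil
```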
Be careful not to mix up the indexing of asyncpg.Record-s and tuples, because otherwise you’ll immediately catch a dragon in native code.
We currently store various precomputed data in PostgreSQL. We fetch it according to many filters coming from the application logic. The collected profile and traces in Sentry explicitly showed that we sometimes spent too much time in data serialization during INSERT INTO … VALUES and deserialization — the creation of Python objects while parsing pgproto that I mentioned in one of the previous sections.
We were able to optimize that hot spot by employing a special, limited, immutable data structure based on structured numpy arrays. In a nutshell, it is an array wrapper around bytes. That’s the only item in __slots__, really.
When we want to extract some field "foobar" from the structure, we execute:
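roughly the following (a sketch; in the real code such accessors are generated by the decorator):

```python
import numpy as np

dtype = np.dtype([("foobar", int), ("baz", "datetime64[ns]")])

# pretend these bytes came from the DB
buffer = np.array([(42, np.datetime64("2023-01-01"))], dtype=dtype).tobytes()

# extracting "foobar": reinterpret the bytes, no parsing involved
foobar = np.frombuffer(buffer, dtype=dtype)["foobar"][0]
```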
Our serialization is zero copy:
And the deserialization is nothing, too:
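Both directions are trivial because the wire format and the in-memory format coincide (a sketch with illustrative method names):

```python
import numpy as np

class Struct:
    __slots__ = ("data",)
    dtype = np.dtype([("foobar", int), ("baz", "datetime64[ns]")])

    def __init__(self, data: bytes):
        self.data = data  # deserialization is just keeping the bytes

    def serialize(self) -> bytes:
        return self.data  # zero copy: the bytes already are the payload

s = Struct(np.array([(7, np.datetime64("2020-05-01"))], dtype=Struct.dtype).tobytes())
restored = Struct(s.serialize())
```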
dtype looks like np.dtype([("foobar", int), ("baz", "datetime64[ns]")])
The secret weapon of our structure is very efficient conversion to Pandas DataFrame:
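The gist, as a sketch: concatenate the raw bytes of many structs, reinterpret them as one structured array, and hand each field to pandas as a ready column:

```python
import numpy as np
import pandas as pd

dtype = np.dtype([("foobar", int), ("baz", "datetime64[ns]")])

blobs = [
    np.array([(1, np.datetime64("2023-01-01"))], dtype=dtype).tobytes(),
    np.array([(2, np.datetime64("2023-01-02"))], dtype=dtype).tobytes(),
]

# one O(1)-ish reinterpretation instead of per-value parsing
table = np.frombuffer(b"".join(blobs), dtype=dtype)
df = pd.DataFrame({name: table[name] for name in dtype.names})
```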
The concatenation of bytes can be further optimized with nogil in Cython.
The actual implementation is more complex. It supports:
This is an example:
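The real @numpy_struct lives in athenianco/athenian-api-open; here is a toy reimplementation of the core idea (a hypothetical API with many features omitted):

```python
import numpy as np

def numpy_struct(cls):
    """Toy decorator: a bytes-backed struct with typed field properties."""
    dtype = np.dtype(list(cls.__annotations__.items()))

    class Wrapped:
        __slots__ = ("data",)

        def __init__(self, data: bytes):
            self.data = data

        def serialize(self) -> bytes:
            return self.data  # zero copy

        @classmethod
        def from_fields(cls_, **kwargs):
            arr = np.zeros(1, dtype=dtype)
            for name, value in kwargs.items():
                arr[name] = value
            return cls_(arr.tobytes())

    for name in dtype.names:
        # read a field straight from the bytes, no deserialization step
        setattr(Wrapped, name, property(
            lambda self, name=name: np.frombuffer(self.data, dtype=dtype)[name][0]))
    Wrapped.dtype = dtype
    Wrapped.__name__ = cls.__name__
    return Wrapped

@numpy_struct
class Facts:
    foobar: int
    baz: "datetime64[ns]"
```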
It’s hard to be faster than zero copy and O(1). @numpy_struct gave us at least 10–50x performance improvement compared to pickle and storing fields in individual SQL table columns.
There are drawbacks, however:
So @numpy_struct is not a universal solution to all the problems.
Pandas 1.x is a microperformance dumpster fire. That’s official. At the same time pandas is very convenient and overall is a great, well-documented tool.
We had to rewrite certain parts of the API code in favor of low-level numpy array manipulation. Let me give a few examples. The first is the trivial extraction of a sub-dataframe by some condition on the columns.
We do it in a more verbose but more efficient way:
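For example (illustrative data; df.take() works with positions instead of index labels):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 1, 3], "b": [10.0, 20.0, 30.0, 40.0]})

# the usual way: a boolean mask resolved through the index machinery
sub_slow = df[df["a"] == 1]

# our way: positions computed on the raw numpy values, taken directly
sub_fast = df.take(np.flatnonzero(df["a"].values == 1))
```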
If we call this in a loop with a few hundred repetitions, and the dataframe size is less than a hundred rows, our extraction runs an order of magnitude faster. It happens because df[...] selects by index values and therefore performs unnecessary index lookups, and also because we simply do not execute a lot of underlying glue code.
The second example is executing some function on the values of column “b” grouped by the values of column “a”.
This is an alternative, much faster way to do the same:
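Both variants side by side (illustrative data and aggregation; the per-group function can be anything):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [2, 1, 2, 1, 3], "b": [1.0, 2.0, 3.0, 4.0, 5.0]})

# the pandas way
slow = df.groupby("a")["b"].sum()

# the np.unique way: sort once, then slice the groups by their sizes
arr_a = df["a"].values
arr_b = df["b"].values
keys, counts = np.unique(arr_a, return_counts=True)
order = np.argsort(arr_a, kind="stable")
fast = {}
pos = 0
for key, count in zip(keys, counts):
    fast[key] = arr_b[order[pos:pos + count]].sum()
    pos += count
```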
This code snippet leverages the power of np.unique that can efficiently count unique values (return_counts=True) in an array, and also find first encounters (return_index=True) or map each value to a unique index (return_inverse=True). We sort the elements of arr_a and iterate the groups knowing the size of each group.
Pandas uses a hash table technique for groupby under the hood and thus has a better big-O than sorting; however, the high level of abstraction and poor microperformance add a huge linear penalty. The actual speedup depends on the size of the dataframe and the nature of columns “a” and “b”. In our production, the typical boost is 20 to 50x.
It is possible to similarly replace many other operations on top of groupby such as idxmin() or count() and even account for missing values via NaN-s and NaT-s.
We used to follow another approach in the past:
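That older variant looked roughly like this (a sketch): build an index array per group up front, then aggregate each one.

```python
import numpy as np

arr_a = np.array([2, 1, 2, 1, 3])
arr_b = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

keys = np.unique(arr_a)
# one variable-length index array per group - all materialized in memory
group_indexes = [np.flatnonzero(arr_a == key) for key in keys]
result = {key: arr_b[idx].sum() for key, idx in zip(keys, group_indexes)}
```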
The np.unique way avoids materializing the whole list of variable-length array indexes for each group, hence is faster.
Instead of comparing performance optimization with shaving yaks, I will compare it with training to run a marathon. You start in a completely awful shape, then slowly progress, week by week, yielding slightly better results every time. Until one day you meet the physical requirements and run a marathon. Each kilometer of the race will remind you of the things you went through to be able to run forward.
Athenian API processes hundreds of thousands of items filtered by ten different properties, logically joining several software development activities in one giant queryable DAG, in milliseconds. We started with a really slow MVP codebase two years ago, only 4 months after the company emerged. I feel ashamed of that code, and that’s a good thing: we didn’t over-engineer it. Two years later, the same API queries execute ~1000x faster. I have barely scratched the surface of what we did to reach 1000x, and we are by no means finished! The following blog post should summarize our PostgreSQL query optimization experience. Considering the Python code performance alone, it has improved ~100x.
I have considered a few tricks with Python code that helped us improve the analytics backend performance. They were:

- passing coroutines to asyncio.gather() in the order of descending expected IO wait time;
- constructing heavy SQLAlchemy expressions once and reusing them, e.g. when replacing OR with UNION ALL;
- converting asyncpg.Record-s to pd.DataFrame-s without guessing the dtypes;
- iterating asyncpg.Record-s and tuples in Cython with the GIL released;
- serializing precomputed data via @numpy_struct, a zero-copy wrapper around structured numpy arrays;
- replacing pandas filtering and groupby with pure numpy manipulations.
Those tricks gave us two orders of magnitude performance improvement on our workload. The sample source code is on GitHub. I wrote a follow-up post about how we decided what to optimize: Continuous Performance Improvement of HTTP API. And a similar one about PostgreSQL optimization: How we optimized PostgreSQL queries 100x.