Let’s say we have several tables in an SQL database referencing each other with foreign key constraints. If we need to delete a row in one of these tables as well as all other rows that reference it, then we have two options. The first option is to execute an explicit
DELETE statement for each table that contains referencing rows and then finish by deleting the referenced row (this order is important if we don’t want to violate any foreign key constraints). The other option is to declare our foreign keys as
ON DELETE CASCADE. This clause tells the database to automatically delete the referencing row if the row it references is deleted.
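To make the two options concrete, here is a minimal sketch using SQLite through Python's sqlite3 module (the table and column names are made up for illustration; note that SQLite only enforces foreign keys when the corresponding pragma is enabled):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled

# One referenced table and two referencing tables: one with a plain
# foreign key, one declared ON DELETE CASCADE.
con.executescript("""
CREATE TABLE parent (id INTEGER PRIMARY KEY);
CREATE TABLE child_plain (
  id INTEGER PRIMARY KEY,
  parent_id INTEGER REFERENCES parent (id));
CREATE TABLE child_cascade (
  id INTEGER PRIMARY KEY,
  parent_id INTEGER REFERENCES parent (id) ON DELETE CASCADE);
INSERT INTO parent VALUES (1);
INSERT INTO child_plain VALUES (10, 1);
INSERT INTO child_cascade VALUES (20, 1);
""")

# Option 1: delete the referencing rows explicitly, children first.
con.execute("DELETE FROM child_plain WHERE parent_id = 1")

# Option 2: deleting the parent removes the child_cascade row automatically.
con.execute("DELETE FROM parent WHERE id = 1")

print(con.execute("SELECT COUNT(*) FROM child_cascade").fetchone()[0])  # 0
```

Note the order in option 1: the referencing row has to go before the referenced one, exactly as described above.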
One important limitation of the first approach is that you need to know the primary key of the referenced row (here I assume that foreign keys reference primary keys in the target tables) in order to be able to delete all the referencing rows. Oftentimes this limitation makes the second approach the only practical option. For example, if we want to delete several rows matching certain non-primary-key criteria, then to use the first approach we would first have to execute a
SELECT statement that returns all the primary keys matching these criteria and then iterate over these keys, executing a set of
DELETE statements for each of them. The same can be achieved with
ON DELETE CASCADE by executing just one DELETE statement.
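The difference is easy to see in a small sketch (again SQLite via Python's sqlite3 module, with hypothetical person/phone tables): with the cascade in place, a single statement on a non-key criterion replaces the whole SELECT-then-iterate dance.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.executescript("""
CREATE TABLE person (id INTEGER PRIMARY KEY, age INTEGER);
CREATE TABLE phone (
  id INTEGER PRIMARY KEY,
  person_id INTEGER REFERENCES person (id) ON DELETE CASCADE);
INSERT INTO person VALUES (1, 25), (2, 40), (3, 28);
INSERT INTO phone VALUES (10, 1), (11, 1), (12, 2), (13, 3);
""")

# Without the cascade we would first have to collect the matching ids:
#   ids = con.execute("SELECT id FROM person WHERE age < 30").fetchall()
# and then issue a pair of DELETEs for each. With ON DELETE CASCADE one
# statement on the non-key criterion does all of it:
con.execute("DELETE FROM person WHERE age < 30")

# Only person 2 and their phone remain.
print(con.execute("SELECT COUNT(*) FROM phone").fetchone()[0])  # 1
```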
The question that I got the other day was this: if both approaches can be used, which one is better? But before trying to define what is better and answering this question, let me first give you some background on what triggered it. This came up during my work on ODB, an object-relational mapping (ORM) system for C++. ODB allows you to persist C++ objects to a number of relational databases without having to deal with tables, columns, or SQL, and without manually writing any of the mapping code.
In ODB, as in other ORMs, certain OOP constructs are mapped to additional tables. The two most notable ones are containers inside objects as well as object inheritance hierarchies. These additional tables are “linked” to the primary table (i.e., the object table in case of a container and the root object table in case of an inheritance hierarchy) with foreign key constraints. If you request ODB to generate the database schema for you, then ODB will add the
ON DELETE CASCADE clause to such foreign keys.
At the same time, ODB provides two ways to erase an object's state from the database. If we know the object id (primary key), then we can use the
erase() function. Alternatively, to erase an object or multiple objects matching certain criteria, we can call
erase_query(). As you may have guessed,
erase_query() is exactly the case where relying on the
ON DELETE CASCADE clause is the only practical option. For the
erase() case, however, either approach can be used. And we are back to the original question: which one is better?
First, let’s define better. One fairly natural definition would be: faster is better. That is, whichever approach deletes objects faster is better. However, after some consideration, we may realize that other criteria, such as server load, can be equally or even more important. That is, whichever approach results in less server load is better. Server load itself can be defined in many ways, for example, as the CPU, memory, or disk resources needed to complete the task. In our case, speed and CPU resources used seemed like the two most relevant metrics, so that’s what I measured.
I used ODB to quickly create a benchmark that I could run against various databases. The resulting database schema consists of three tables, with the second and third referencing the first. For each row in the first table, the second and third tables each contain five rows. The benchmark first populates the database with 20,000 rows in the first table (100,000 rows in each of the second and third tables). It then proceeds to delete all the rows in a randomized order using either the three explicit
DELETE statements or a single statement that relies on the
ON DELETE CASCADE mechanism.
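The cascade variant of this setup can be sketched, at a much smaller scale, with SQLite (table names t1/t2/t3 and the reduced row count are my own choices for illustration, not the actual benchmark code):

```python
import random
import sqlite3

N = 200  # scaled down from the benchmark's 20,000 rows to keep this quick

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.executescript("""
CREATE TABLE t1 (id INTEGER PRIMARY KEY);
CREATE TABLE t2 (id INTEGER PRIMARY KEY,
                 t1_id INTEGER REFERENCES t1 (id) ON DELETE CASCADE);
CREATE TABLE t3 (id INTEGER PRIMARY KEY,
                 t1_id INTEGER REFERENCES t1 (id) ON DELETE CASCADE);
""")

# Populate: five rows in t2 and t3 for each row in t1.
for i in range(N):
    con.execute("INSERT INTO t1 VALUES (?)", (i,))
    for j in range(5):
        con.execute("INSERT INTO t2 VALUES (?, ?)", (i * 5 + j, i))
        con.execute("INSERT INTO t3 VALUES (?, ?)", (i * 5 + j, i))

# Delete all rows in randomized order; each DELETE on t1 cascades to t2/t3.
ids = list(range(N))
random.shuffle(ids)
stmt = "DELETE FROM t1 WHERE id = ?"  # one statement, reused for every row
for i in ids:
    con.execute(stmt, (i,))

print(con.execute("SELECT COUNT(*) FROM t2").fetchone()[0])  # 0
```

The explicit-DELETE variant would issue three such statements per id (t2, t3, then t1) instead of one.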
Note also that ODB uses low-level C APIs to access each database, so there is no “wrapper overhead” involved. All statements are prepared once and then reused. All the databases except SQLite are accessed remotely and run on a server machine connected to the client machine via gigabit ethernet (see the ODB Benchmark Results post for more information on the setup).
For all the databases except SQLite, the
ON DELETE CASCADE approach is faster than explicit
DELETE. Due to licensing restrictions, I cannot publish the actual times (which would have also allowed you to compare the different databases). Instead, the following table shows the speedup (or slowdown, in the case of SQLite) and the decrease (or increase, in the case of SQLite) in consumed CPU resources for each database:
My interpretation of these results is as follows: while the
ON DELETE CASCADE approach is not necessarily faster intrinsically (it requires extra work by the database server), the overhead of executing a separate statement over a network connection dwarfs this extra work. One confirmation of this is the result for SQLite, where the statement execution overhead is just that of a function call. The other is the result of this benchmark when both the database and the client are on the same machine (only some of the databases were tested):
| Database | Speedup remote | Speedup local |
| -------- | -------------- | ------------- |
As you can see, while for some databases the ON DELETE CASCADE approach is still faster, for Oracle, for example, there is almost no difference compared to explicit DELETE.