Updated ODB benchmark results
In the release announcement for ODB 1.8.0 I mentioned some performance numbers for using ODB with SQL Server. If you read that post, you probably remember that, to put it mildly, the SQL Server numbers didn’t look good compared to the other databases, especially on the API overhead benchmark.
In fact, the numbers were so bad that they made me suspect something else was going on, not just poor ODBC, Native Client, or SQL Server performance. One major difference between the SQL Server test setup and that of the other databases was the use of virtual machines: while all the other databases and tests ran on real hardware, SQL Server ran in a KVM virtual machine. So, to make the benchmark results more accurate, I decided to re-run all the tests on real, identical hardware.
High-end database hardware doesn’t normally lie around unused, so I had to settle for a dual-CPU, quad-core AMD Opteron 265 1.8 GHz machine with 4GB of RAM and U320 15K Seagate Cheetah SCSI drives. While this is the right kind of hardware for a database server, it is a very entry-level specification by today’s standards. Keep that in mind when looking at the numbers below; we are not after absolute values here but rather a comparison between different database implementations, their client APIs, and the ODB runtimes for these databases.
The above machine dual-boots to either Debian GNU/Linux with Linux kernel 2.6.32 or Windows Server 2008 R2 SP1 Datacenter Edition. MySQL 5.5.17, PostgreSQL 9.1.2, and SQLite 3.7.9 run on Debian, while SQL Server 2008 R2 runs on Windows Server. The tests were built with g++ 4.6.2 on GNU/Linux and VC++ 10 on Windows. Some benchmarks were run from remote client machines, all of which are faster than the database server. The server and clients were connected via gigabit switched Ethernet.
The first benchmark that we normally run is the one from the Performance of ODB vs C# ORMs post. Essentially, we measure how fast we can load an object with a couple of dozen members from the database. In other words, the main purpose of this test is to measure the overhead incurred by all the intermediate layers between the object in the application’s memory and its database state, rather than the database server performance itself. Specifically, the layers in question are the ODB runtime, the database access API, and the transport layer.
Since the transport layer can vary from application to application, we ran this benchmark in two configurations: remote and local (except for SQLite, which is an embedded database). In the remote configuration the benchmark application and the database server are on different machines connected via gigabit Ethernet using TCP. In the local configuration the benchmark and the database are on the same machine and the database API uses the most efficient communication medium available (UNIX sockets, shared memory, etc.).
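To give you a better idea of what this benchmark actually measures, here is a minimal sketch of its core loop. The `person` class, its generated `person-odb.hxx` mapping, and the database credentials are all assumptions for illustration; the real test is in the benchmark source code linked below.

```cpp
// Minimal sketch of the load benchmark loop. The person class and its
// ODB-generated mapping (person-odb.hxx) are hypothetical stand-ins
// for the actual benchmark object.

#include <chrono>
#include <memory>
#include <iostream>

#include <odb/database.hxx>
#include <odb/transaction.hxx>
#include <odb/mysql/database.hxx> // Or pgsql, mssql, sqlite equivalents.

#include "person.hxx"     // Hypothetical persistent class.
#include "person-odb.hxx" // Hypothetical generated mapping.

int
main ()
{
  odb::mysql::database db ("user", "password", "benchmark_db");

  const unsigned long n (100000);
  auto start (std::chrono::steady_clock::now ());

  for (unsigned long i (0); i != n; ++i)
  {
    odb::transaction t (db.begin ());

    // Load the same object over and over: the work done here is almost
    // entirely API and transport overhead, not server-side query cost.
    //
    std::unique_ptr<person> p (db.load<person> (1));
    t.commit ();
  }

  auto end (std::chrono::steady_clock::now ());

  std::cout << "average load time: "
            << std::chrono::duration_cast<std::chrono::microseconds> (
                 end - start).count () / n
            << "us" << std::endl;
}
```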
The following table shows the average time it takes to load an object, in microseconds. For SQL Server there are two results for the remote configuration: one with the client running on Windows and the other with the client running on GNU/Linux.
Database | Remote | Local |
---|---|---|
MySQL | 260μs | 110μs |
PostgreSQL | 410μs | 160μs |
SQL Server/Windows Client | 310μs | 130μs |
SQL Server/Linux Client | 240μs | — |
SQLite | — | 30μs |
For comparison, the following table lists the local configuration results for some of the databases when tested on more modern hardware (a dual-CPU, 8-core 2.27 GHz Xeon E5520 machine):
Database | Local |
---|---|
MySQL | 55μs |
PostgreSQL | 65μs |
SQLite | 17μs |
If you would like to run the benchmark on your setup, feel free to download the benchmark source code and give it a try. The accompanying README file has more information on how to build and run the test.
Now, let’s look at concurrent access performance. To measure it we use an update-heavy, highly contentious multi-threaded test from the ODB test suite, the kind you run to make sure things work properly in multi-threaded applications (see odb-tests/common/threads if you are interested in the details). To give you an idea of the amount of work done by this test, it performs 19,200 inserts, 6,400 updates, 19,200 deletes, and 134,400 selects concurrently from 32 threads, all on the same table. This test normally pushes the database server CPU utilization to 100% on all cores. For all the databases except SQLite we ran this test in the remote configuration to make sure that each database had exactly the same resources available.
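While the exact operation counts come from the test’s configuration, the following sketch shows the shape of the workload: many threads hammering the same table with insert/update/delete cycles. The `object` class and its mapping are hypothetical; see odb-tests/common/threads for the real test.

```cpp
// Rough sketch of the workload shape: 32 threads concurrently
// inserting, updating, erasing, and loading rows in the same table.
// The object class and its mapping are hypothetical, and the
// operation counts do not match the real test exactly.

#include <thread>
#include <vector>
#include <memory>
#include <functional> // std::ref

#include <odb/database.hxx>
#include <odb/transaction.hxx>
#include <odb/pgsql/database.hxx> // Any of the tested databases works.

#include "object.hxx"     // Hypothetical persistent class with an
#include "object-odb.hxx" // assignable id and a string data member.

static void
worker (odb::database& db, unsigned long base)
{
  for (unsigned long i (0); i != 600; ++i)
  {
    unsigned long id (base + i);

    // Insert.
    //
    {
      object o (id);
      odb::transaction t (db.begin ());
      db.persist (o);
      t.commit ();
    }

    // Select then update.
    //
    {
      odb::transaction t (db.begin ());
      std::unique_ptr<object> p (db.load<object> (id));
      p->data += " updated";
      db.update (*p);
      t.commit ();
    }

    // Delete.
    //
    {
      odb::transaction t (db.begin ());
      db.erase<object> (id);
      t.commit ();
    }
  }
}

int
main ()
{
  odb::pgsql::database db ("user", "password", "benchmark_db");

  std::vector<std::thread> threads;
  for (unsigned long i (0); i != 32; ++i)
    threads.emplace_back (worker, std::ref (db), i * 1000000);

  for (auto& t: threads)
    t.join ();
}
```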
The following table shows the time it took each database to complete this test, in seconds.
Database | Time |
---|---|
MySQL | 98s |
PostgreSQL | 92s |
SQL Server | 102s |
SQLite | 154s |
You may have noticed that the above tables are missing an entry for Oracle. Unfortunately, Oracle Corporation doesn’t allow anyone to publish hard performance numbers for its database. To give you some general indication, however, let me say that Oracle 11.2 Enterprise Edition performed better than any of the other databases listed above in all the tests except the first benchmark in the local configuration, where it came very close to the top client-server performer (MySQL). In particular, in the second benchmark Oracle performed significantly better than all the other databases tested.
Let me also note that these numbers should be taken as indications only. When it comes to databases, it is futile to try to extrapolate someone else’s benchmark results to your specific application. The only reliable approach is to create a custom test that mimics your application’s data, concurrency, and access patterns. Luckily, with ODB, creating such a test is easy, and you can use the above-mentioned benchmark source code as a starting point.
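As a very rough illustration, a custom test can be as simple as timing a transaction that performs your application’s typical operations. Everything in this skeleton (the `order` class, its `total` member, the object id, the query condition) is a hypothetical placeholder for your own data model and access pattern.

```cpp
// Skeleton for a custom benchmark: time a transaction that mimics the
// application's typical access pattern. The order class, its total
// member, and the object id are hypothetical placeholders.

#include <chrono>
#include <memory>
#include <iostream>

#include <odb/database.hxx>
#include <odb/transaction.hxx>
#include <odb/pgsql/database.hxx>

#include "order.hxx"     // Hypothetical persistent class.
#include "order-odb.hxx" // Hypothetical generated mapping.

int
main ()
{
  odb::pgsql::database db ("user", "password", "app_db");

  const unsigned long n (10000);
  auto start (std::chrono::steady_clock::now ());

  for (unsigned long i (0); i != n; ++i)
  {
    odb::transaction t (db.begin ());

    // Replace the body below with your application's actual pattern.
    //
    std::unique_ptr<order> o (db.load<order> (1));
    o->total += 1;
    db.update (*o);

    odb::result<order> r (db.query<order> (odb::query<order>::total > 100));
    for (odb::result<order>::iterator j (r.begin ()); j != r.end (); ++j)
      ; // Touch each result the way the application would.

    t.commit ();
  }

  auto end (std::chrono::steady_clock::now ());

  std::cout << "average transaction time: "
            << std::chrono::duration_cast<std::chrono::microseconds> (
                 end - start).count () / n
            << "us" << std::endl;
}
```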