From shunsmailbox at gmail.com Tue Apr 1 03:29:17 2014 From: shunsmailbox at gmail.com (Shun Yu) Date: Tue Apr 1 03:29:25 2014 Subject: [odb-users] Multi-Threaded Environment Message-ID: Hi, I have a question, what is the proper usage of odb::database in a multi-threaded environment? Do I need multiple instances of odb::database for simultaneous threads or can I just use one instance of odb::database and execute queries using that one instance across threads? From boris at codesynthesis.com Tue Apr 1 06:11:27 2014 From: boris at codesynthesis.com (Boris Kolpackov) Date: Tue Apr 1 06:14:48 2014 Subject: [odb-users] Multi-Threaded Environment In-Reply-To: References: Message-ID: Hi Shun, Shun Yu writes: > Hi, I have a question, what is the proper usage of odb::database in a > multi-threaded environment? > Do I need multiple instances of odb::database for simultaneous threads or > can I just use one instance of odb::database and execute queries using that > one instance across threads? Sharing a single instance of odb::database between multiple threads is the recommended way. By default, it will use the connection pool and re-use connections for different threads, as required. Boris From finjulhich at gmail.com Wed Apr 2 03:17:13 2014 From: finjulhich at gmail.com (MM) Date: Wed Apr 2 03:17:21 2014 Subject: [odb-users] odb and cmake Message-ID: Hello, My project uses vs2010 on win32 and gcc4.8 on linux64, and cmake 2.8. I would like to integrate odb and I am looking at the various solutions. 1. To detect odb (and sqlite) headers and libraries as part of the regular initial cmake run, I see this FindOdb.cmake https://gist.github.com/davideanastasia/4952948, though it looks specific to linux paths. But I see also https://github.com/peredin/scaling-octo-batman/tree/master/odb 2. 
To run the odb compiler as a pre build step, I see this (for both vstudio and gcc): http://stackoverflow.com/questions/18427877/add-custom-build-step-in-cmake However, with the above odb.cmake in step 2., there seems to be a macro odb_compile. Would appreciate advice on how to go about it, and if there is a plan to include such cmake files to next cmake 3. Thanks MM From shunsmailbox at gmail.com Wed Apr 2 12:19:05 2014 From: shunsmailbox at gmail.com (Shun Yu) Date: Wed Apr 2 12:19:39 2014 Subject: [odb-users] Multi-Threaded Environment In-Reply-To: References: Message-ID: Thanks! On Tue, Apr 1, 2014 at 3:11 AM, Boris Kolpackov wrote: > Hi Shun, > > Shun Yu writes: > > > Hi, I have a question, what is the proper usage of odb::database in a > > multi-threaded environment? > > Do I need multiple instances of odb::database for simultaneous threads or > > can I just use one instance of odb::database and execute queries using > that > > one instance across threads? > > Sharing a single instance of odb::database between multiple threads > is the recommended way. By default, it will use the connection pool > and re-use connections for different threads, as required. > > Boris > From steven.cote at gmail.com Thu Apr 3 09:29:33 2014 From: steven.cote at gmail.com (=?UTF-8?B?U3RldmVuIEPDtHTDqQ==?=) Date: Thu Apr 3 09:31:47 2014 Subject: [odb-users] Best Practice deleting persistent objects Message-ID: I've been finding my way using odb lately and I've run into a situation that I'm not quite sure how to handle. At least not "correctly". I have a persistent class representing a "user" and another persistent class representing a "user group". The user group is basically just a name and a list of "user" objects. Running through the schema generator, this gives me three tables; user, group and group_member. The third table just contains the mapping between users and groups. All good so far. The question came when I tried to delete a user that was a member of a group. 
This threw a "FOREIGN KEY constraint failed" exception because now there was an entry in the group_member table pointing to a user that didn't exist.

So, my question is how is this best handled using odb? Obviously, we have to remove all references to the user about to be deleted from the group_member table. My first thought was to explicitly remove references to the user from the mapping table using erase_query(), but since the mapping table isn't one of my "persistent classes", that didn't seem right. Using just raw SQL also seems a bit heavy-handed.

Traditionally, this is the sort of thing I'd just handle with ON DELETE CASCADE in the schema, but since odb doesn't put those in, I wanted to see if there's another way. Any thoughts/suggestions?

From boris at codesynthesis.com Thu Apr 3 11:03:07 2014
From: boris at codesynthesis.com (Boris Kolpackov)
Date: Thu Apr 3 11:06:27 2014
Subject: [odb-users] Best Practice deleting persistent objects
In-Reply-To: References: Message-ID:

Hi Steven,

Steven Côté writes:

> I have a persistent class representing a "user" and another persistent
> class representing a "user group". The user group is basically just a name
> and a list of "user" objects.
>
> Running through the schema generator, this gives me three tables; user,
> group and group_member. The third table just contains the mapping between
> users and groups.

So you have something like this:

  #pragma db object
  class user
  {
    ...
  };

  #pragma db object
  class group
  {
    ...
    std::vector<user*> member;
  };

> The question came when I tried to delete a user that was a member of a
> group. This threw a "FOREIGN KEY constraint failed" exception because now
> there was an entry in the group_member table pointing to a user that didn't
> exist.
>
> So, my question is how is this best handled using odb?

Ok, let me discuss all the possible ways to handle this:

0. Naturally, the best approach is not to have this problem in the first
   place ;-). The idea is to put the pointer (aka foreign key) into the
   user and not the group class. We can still have the container of
   pointers in group using the inverse relationship (see the manual for
   details on the inverse relationships):

     class group;

     #pragma db object
     class user
     {
       group* belongs;
     };

     #pragma db object
     class group
     {
       ...
       #pragma db inverse(belongs)
       std::vector<user*> member;
     };

   Now when we delete the user object, the foreign key gets deleted as
   well.

   The main shortcoming of this approach is that we now have the same
   problem if we try to delete the group. This, however, we can handle
   using one of the next methods. To put it another way, you would want
   to use this method for objects that you delete most often (most likely
   user in your case) since it doesn't cost anything.

1. The most obvious way to handle this is to delete the user entries
   from the affected group(s) before deleting the user:

     user& u = ...          // User to delete.
     group& g = *u.belongs; // Group to which it belongs.

     // Find the user in g.member and delete it.
     //
     for (...)
     {
     }

     db.update (g); // Update the group.
     db.erase (u);  // Delete the user.

   As you might have noticed, here we assume that there is a way to
   get from the user object to its group (inverse pointer).

   The for loop might not be the best way to do it if this kind of
   operation is performed a lot in your application. In this case a
   container like Boost multi-index might be a better option so that
   you could delete an entry given the user id.

2. Finally, for the next release of ODB (2.4.0) we have added the
   ON DELETE CASCADE support. Now you will be able to do:

     #pragma db object
     class group
     {
       ...
       #pragma db on_delete(cascade)
       std::vector<user*> member;
     };

   See section 14.4.15 in the (2.4.0) manual for more information on
   this feature. The pre-release for 2.4.0 is available here:

   http://codesynthesis.com/~boris/tmp/odb/pre-release/

Let me know if something doesn't make sense.
Boris From info at peredin.com Thu Apr 3 11:13:17 2014 From: info at peredin.com (Per Edin) Date: Thu Apr 3 11:13:25 2014 Subject: [odb-users] Best Practice deleting persistent objects In-Reply-To: References: Message-ID: Solution 0 limits the number of groups a user can be in to at most 1. Perhaps the first issue is to determine if a user shall be able to be in 1 or more groups? :) On Thu, Apr 3, 2014 at 5:03 PM, Boris Kolpackov wrote: > Hi Steven, > > Steven C?t? writes: > >> I have a persistent class representing a "user" and another persistent >> class representing a "user group". The user group is basically just a name >> and a list of "user" objects. >> >> Running through the schema generator, this gives me three tables; user, >> group and group_member. The third table just contains the mapping between >> users and groups. > > So you have something like this: > > #pragma db object > class user > { > ... > }; > > #pragma db object > class group > { > ... > > std::vector member; > }; > > >> The question came when I tried to delete a user that was a member of a >> group. This threw a "FOREIGN KEY constraint failed" exception because now >> there was an entry in the group_member table to a user that didn't exist. >> >> So, my question is how is this best handled using odb? > > Ok, let me discuss all the possible ways to handle this: > > 0. Naturally, the best approach is not to have this problem in the first > place ;-). The idea is to put the pointer (aka foreign key) into the > user and not the group class. We can still have the container of > pointers in group using the inverse relationship (see the manual for > details on the inverse relationships): > > class group; > > #pragma db object > class user > { > group* belongs; > }; > > #pragma db object > class group > { > ... > > #pragma db inverse(belongs) > std::vector member; > }; > > Now when we delete the user object, the foreign key gets deleted as > well. 
> > The main shortcoming of this approach is that we now have the same > problem if we try to delete the group. This, however, we can handle > using one of the next methods. To put it another way, you would want > to use this method for objects that you delete most often (most likely > user in your case) since it doesn't cost anything. > > 1. The most obvious way to handle this is to delete the user entries > from the affected group(s) before deleting the user: > > user& u = ... // User to delete. > group& g = *u.belongs; // Group to which it belongs. > > // Find the user in g.members and delete it. > // > for (...) > { > } > > db.update (g); // Update the group. > db.erase (u); // Delete the user. > > As you might have noticed, here we assume that there is a way to > get from the user object to its group (inverse pointer). > > The for loop might not be the best way to do it if this kind of > operation is performed a lot in your application. In this case > a container like Boost multi-index might be a better option so > that you could delete an entry given the user id. > > 2. Finally, for the next release of ODB (2.4.0) we have added the > ON DELETE CASCADE support. Now you will be able to do: > > #pragma db object > class group > { > ... > > #pragma db on_delete(cascade) > std::vector member; > }; > > See section 14.4.15 in the (2.4.0) manual for more information > on this feature. > > The pre-release for 2.4.0 is available here: > > http://codesynthesis.com/~boris/tmp/odb/pre-release/ > > Let me know if something doesn't make sense. > > Boris > From info at peredin.com Thu Apr 3 11:43:02 2014 From: info at peredin.com (Per Edin) Date: Thu Apr 3 11:43:09 2014 Subject: [odb-users] Best Practice deleting persistent objects In-Reply-To: References: Message-ID: I would advice against ON DELETE CASCADE in this particular case though. 1. 
It can lead to real headaches if someone removes a group by mistake, either by removing a group from within C++ code or by an accidental DELETE FROM group WHERE... the cascade doesn't give any warnings at all. 2. Removing a group doesn't necessarily mean the users shall be removed, since a user can be in 0 groups or already is a member of other groups. I think the best solution would be to have a list of users in the group and an inverse list of groups in the user. This seems to be the most logical approach since a group contains users instead of users pointing to their parents. Instead of relying on ON DELETE CASCADE you could implement void detach_all_users() in the group class which very clearly states what it does. On Thu, Apr 3, 2014 at 5:26 PM, Steven C?t? wrote: >> Solution 0 limits the number of groups a user can be in to at most 1. >> >> Perhaps the first issue is to determine if a user shall be able to be >> in 1 or more groups? :) > > > It is true. In this particular case, a user can be in 0 to * groups. So > option 0 as written won't work for me. Judging by the rest of the answers, > it sounds like the real answer is to link the group back to the user. So in > my case it would be something like: > > class group; > > #pragma db object > class user > { > ... > > std::vector belongs; > }; > > #pragma db object > class group > { > ... > > #pragma db inverse(belongs) > std::vector member; > }; > > Is it possible to describe a relationship like that in the pragma language? > I'll have a flip through the manual in the morning to see if there's any > mention of that. Otherwise I guess I'm waiting for version 2.4.0. From steven.cote at gmail.com Thu Apr 3 11:26:17 2014 From: steven.cote at gmail.com (=?UTF-8?B?U3RldmVuIEPDtHTDqQ==?=) Date: Thu Apr 3 13:19:44 2014 Subject: [odb-users] Best Practice deleting persistent objects In-Reply-To: References: Message-ID: > > Solution 0 limits the number of groups a user can be in to at most 1. 
> > Perhaps the first issue is to determine if a user shall be able to be
> > in 1 or more groups? :)

It is true. In this particular case, a user can be in 0 to * groups. So option 0 as written won't work for me. Judging by the rest of the answers, it sounds like the real answer is to link the group back to the user. So in my case it would be something like:

  class group;

  #pragma db object
  class user
  {
    ...
    std::vector<group*> belongs;
  };

  #pragma db object
  class group
  {
    ...
    #pragma db inverse(belongs)
    std::vector<user*> member;
  };

Is it possible to describe a relationship like that in the pragma language? I'll have a flip through the manual in the morning to see if there's any mention of that. Otherwise I guess I'm waiting for version 2.4.0.

From boris at codesynthesis.com Thu Apr 3 13:34:16 2014
From: boris at codesynthesis.com (Boris Kolpackov)
Date: Thu Apr 3 13:37:37 2014
Subject: [odb-users] Best Practice deleting persistent objects
In-Reply-To: References: Message-ID:

Hi Steven,

Steven Côté writes:

> So in my case it would be something like:
>
>   class group;
>
>   #pragma db object
>   class user
>   {
>     ...
>     std::vector<group*> belongs;
>   };
>
>   #pragma db object
>   class group
>   {
>     ...
>     #pragma db inverse(belongs)
>     std::vector<user*> member;
>   };
>
> Is it possible to describe a relationship like that in the pragma language?

You just did. This is a many-to-many relationship with one side inverse. There is a section in the manual on this kind of relationship.

> Otherwise I guess I'm waiting for version 2.4.0.

You only need to wait for 2.4.0 if you want to use the on_delete clause.

Boris

From boris at codesynthesis.com Thu Apr 3 13:39:48 2014
From: boris at codesynthesis.com (Boris Kolpackov)
Date: Thu Apr 3 13:43:08 2014
Subject: [odb-users] Best Practice deleting persistent objects
In-Reply-To: References: Message-ID:

Hi Per,

Per Edin writes:

> I would advice against ON DELETE CASCADE in this particular case
> though. 1.
It can lead to real headaches if someone removes a group by > mistake, either by removing a group from within C++ code or by an > accidental DELETE FROM group WHERE... the cascade doesn't give any > warnings at all. I agree, ON DELETE can bring trouble. And, IMO, the biggest issue is that there is no automatic way to synchronize objects that were loaded into memory with the database state once ON DELETE has done its job. But it seems people like this functionality and I think for certain kind of workflows it can work, if one is careful. > 2. Removing a group doesn't necessarily mean the users shall be > removed, since a user can be in 0 groups or already is a member > of other groups. Well, this one you can solve by using on_delete(set_null) instead of cascade. Boris From steven.cote at gmail.com Fri Apr 4 05:53:30 2014 From: steven.cote at gmail.com (=?UTF-8?B?U3RldmVuIEPDtHTDqQ==?=) Date: Fri Apr 4 06:11:25 2014 Subject: [odb-users] Best Practice deleting persistent objects In-Reply-To: References: Message-ID: > > You just did. This is a many-to-many relationship with one side > inverse. There is a section in manual on this kind of relationships. Sounds good when I read it, but I just gave it a try and it's still not working the way I would have expected. So, currently the classes look like this: class user; #pragma db object class group { ... #pragma db value_not_null unordered std::vector > Members_; }; #pragma db object class user { ... #pragma db value_not_null inverse(Members_) std::vector > Groups_; }; This creates the many-to-many relationship in the schema that I was expecting, so that's all good. Then I create a user and add it to a group. I then subsequently call erase() on that user. Now, I'm still getting the "FOREIGN KEY constraint failed" exception that I was getting before. 
I had thought that by adding the relationship to group from the user class, it would have auto-magically removed the corresponding entry from the group_Members table when erasing a user. Is that assumption wrong? From boris at codesynthesis.com Fri Apr 4 06:14:56 2014 From: boris at codesynthesis.com (Boris Kolpackov) Date: Fri Apr 4 06:18:17 2014 Subject: [odb-users] Best Practice deleting persistent objects In-Reply-To: References: Message-ID: Hi Steven, Steven C?t? writes: > #pragma db object > class group > { > ... > #pragma db value_not_null unordered > std::vector > Members_; > }; > > #pragma db object > class user > { > ... > #pragma db value_not_null inverse(Members_) > std::vector > Groups_; > }; > > Then I create a user and add it to a group. I then subsequently call > erase() on that user. Now, I'm still getting the "FOREIGN KEY constraint > failed" exception that I was getting before. I had thought that by adding > the relationship to group from the user class, it would have auto-magically > removed the corresponding entry from the group_Members table when erasing a > user. If you want to be able to erase user without having to clean any references, then you need to put the non-inverse side of the relationship in the user class. Think about it this way: the non-inverse side is the only relationship that actually exists in the database. So if erasing an object must also take down the relationship, then the non-inverse side should be part of that object. Boris From steven.cote at gmail.com Fri Apr 4 08:06:21 2014 From: steven.cote at gmail.com (=?UTF-8?B?U3RldmVuIEPDtHTDqQ==?=) Date: Mon Apr 7 03:42:13 2014 Subject: [odb-users] Best Practice deleting persistent objects In-Reply-To: References: Message-ID: > > > Then I create a user and add it to a group. I then subsequently call > > erase() on that user. Now, I'm still getting the "FOREIGN KEY constraint > > failed" exception that I was getting before. 
I had thought that by adding > > the relationship to group from the user class, it would have > auto-magically > > removed the corresponding entry from the group_Members table when > erasing a > > user. > > If you want to be able to erase user without having to clean any > references, then you need to put the non-inverse side of the > relationship in the user class. > > Think about it this way: the non-inverse side is the only relationship > that actually exists in the database. So if erasing an object must also > take down the relationship, then the non-inverse side should be part of > that object. > Ah ok, that makes sense. And sure enough, switching which class had the inverse statement makes all my unit tests pass. Thanks for the help guys! From dhinson at netrogenblue.com Thu Apr 3 13:50:27 2014 From: dhinson at netrogenblue.com (David Hinson) Date: Mon Apr 7 07:43:54 2014 Subject: [odb-users] Best Practice deleting persistent objects In-Reply-To: References: Message-ID: <001e01cf4f65$335d08a0$9a1719e0$@netrogenblue.com> At the risk of pushing the discussion beyond ODB support, another alternative would be to separate the concern of relationship ownership away from both the user and group classes into its own class that represents a relationship and is persisted directly. You could then make a relationship manager object that serves as both factory and lookup for those relationships. That would allow you to easily decouple relationship policies away from the principal objects which might be important if the relationships grow in complexity. For instance, if you were to start applying non-trivial criteria for group membership that pulled in other data model elements then it might become very undesirable to couple their implementations to either the user or group implementations. Of course that would be more of an enterprise level solution. If you're making something simpler like a device level ACL then you may not be able to justify those extra components. 
-----Original Message-----
From: odb-users-bounces@codesynthesis.com [mailto:odb-users-bounces@codesynthesis.com] On Behalf Of Steven Côté
Sent: Thursday, April 03, 2014 11:26 AM
To: ODB Users Mailing List
Subject: Re: [odb-users] Best Practice deleting persistent objects

> > Solution 0 limits the number of groups a user can be in to at most 1.
> >
> > Perhaps the first issue is to determine if a user shall be able to be
> > in 1 or more groups? :)

It is true. In this particular case, a user can be in 0 to * groups. So option 0 as written won't work for me. Judging by the rest of the answers, it sounds like the real answer is to link the group back to the user. So in my case it would be something like:

  class group;

  #pragma db object
  class user
  {
    ...
    std::vector<group*> belongs;
  };

  #pragma db object
  class group
  {
    ...
    #pragma db inverse(belongs)
    std::vector<user*> member;
  };

Is it possible to describe a relationship like that in the pragma language? I'll have a flip through the manual in the morning to see if there's any mention of that. Otherwise I guess I'm waiting for version 2.4.0.

From christian.lichtenberger at etm.at Tue Apr 15 05:01:26 2014
From: christian.lichtenberger at etm.at (Lichtenberger, Christian)
Date: Tue Apr 15 05:49:00 2014
Subject: [odb-users] Erase/Remove Performance comparison to native SQL
Message-ID: <6C48D395FE34B94FA716D062240DAB8A1AC331E4@ATNETS9912TMSX.ww300.siemens.net>

Hi

We are currently comparing ODB with native SQL using a SQLite database. The comparison tests several scenarios, using the same object model (and db schema).

In the scenario "persist", ODB is equal to (or faster than) native SQL. In our case both need about 5 seconds for 10000 (entries) x 2 (tables) x 3 (transactions) = 60000 (entries). ODB tends to be a little bit faster. --> Yay!

In the scenario "delete by ids" ODB is much slower.
In this scenario we remove 10000 (entries) x 2 (tables in 2 transactions) by entering the id. In ODB we use "db->erase(id)" and in SQL we use "sqlite3_mprintf("delete from 'Object' where ID = '%d';", id)". With native SQL we need 0,3 seconds and with ODB 18,3 seconds.

In the scenario "delete by query" ODB is also much slower. In this scenario we remove the same amount of data as before but use a query instead of a list of ids. In ODB we use "db->erase_query( query::id >= fromId && query::id <= toId )" and in SQL we use "sqlite3_mprintf("delete from 'Object' where id >= %d and id <= %d;", fromId, toId)". With native SQL we need 0,06 seconds and with ODB 3,4 seconds.

In the scenario "delete object" ODB also seems to be slow. At the moment we do not have the corresponding native SQL code, but with ODB we use "db->erase(object)". In this scenario we remove 10000 (entries) x 2 (tables) in one transaction. ODB needs 37,5 seconds.

Further test scenarios will come in the future.

Why is there such a difference? ODB is about 60 times slower than native SQL for deleting data. Did we do anything wrong?

The ODB compiler is started with the following options:

  odb.exe -I..\..\\Qt4\include --std c++11 --database sqlite --generate-schema --generate-query --generate-session --profile qt --hxx-prologue "#include \"Global.hxx\"" --export-symbol MY_ODB_EXPORT

Thanks,
Christian

From boris at codesynthesis.com Tue Apr 15 07:27:47 2014
From: boris at codesynthesis.com (Boris Kolpackov)
Date: Tue Apr 15 07:31:06 2014
Subject: [odb-users] Erase/Remove Performance comparison to native SQL
In-Reply-To: <6C48D395FE34B94FA716D062240DAB8A1AC331E4@ATNETS9912TMSX.ww300.siemens.net>
References: <6C48D395FE34B94FA716D062240DAB8A1AC331E4@ATNETS9912TMSX.ww300.siemens.net>
Message-ID:

Hi Christian,

Lichtenberger, Christian writes:

> In the scenario "delete by ids" ODB is much slower. In this scenario
> we remove 10000 (entries) x 2 (tables in 2 transactions) by entering
> the id.
> In ODB we use "db->erase(id)" and in SQL we use
> "sqlite3_mprintf("delete from 'Object' where ID = '%d';", id)". With
> native SQL we need 0,3 seconds and with ODB 18,3 seconds.

Generally, if you do the same things in ODB and native SQL, ODB should be at least as fast and often faster because of various reuse/caching mechanisms.

It is hard to say why there is the difference without seeing the code, including the object model/schema (e.g., do you use containers)? Can you show the relevant transactions for each test (ODB case)?

Also, can you enable statement tracing for each transaction:

  t.tracer (odb::stderr_tracer);

And see which statements actually get executed by ODB under the hood? Do they match your native SQL? I would be interested to hear what you find.

Boris

From christian.lichtenberger at etm.at Tue Apr 15 12:32:06 2014
From: christian.lichtenberger at etm.at (Lichtenberger, Christian)
Date: Tue Apr 15 15:56:04 2014
Subject: AW: [odb-users] Erase/Remove Performance comparison to native SQL
In-Reply-To: References: <6C48D395FE34B94FA716D062240DAB8A1AC331E4@ATNETS9912TMSX.ww300.siemens.net>
Message-ID: <6C48D395FE34B94FA716D062240DAB8A1AC33381@ATNETS9912TMSX.ww300.siemens.net>

Hi Boris

I did some further tests and implemented some additional testcases. Below you can find the output of my test application for native SQL and ODB with a large data set (10000), and a further output with trace information for only 2 objects (to keep it readable).

Persist and Update are faster or equal with ODB, but Select, Find and Delete are much faster with native SQL. As you can see in the SQL statements, they are very similar in both. Only the testcase "Select Object IDs by ID-Range" is different, because with ODB it is not possible to query ids directly; the objects must always be loaded. But that is not the problem, because we could define a view in this case.

The used test object model is very simple; see below. We use QSharedPointer for relations. QList we use only for inverse members.
What did we do wrong? Our class model: #pragma db object class ObjectBase { public: public: > structures_; #pragma db not_null QSharedPointer objectType_; #pragma db null QSharedPointer type_; }; #pragma db object class Structure { public: public: source_; #pragma db not_null QSharedPointer object_; #pragma db not_null QSharedPointer relationType_; }; #pragma db object class EnumType { public: public: > TestSelect(int fromId, int toId) { typedef odb::query query; typedef odb::result result; QList> structures; transaction t (db_->begin ()); if (this->doSqlTrace) t.tracer(odb::stderr_tracer); result r (db_->query(query::id >= fromId && query::id <= toId)); for (result::iterator i (r.begin ()); i != r.end (); ++i) { structures.append(i.load()); } t.commit(); return structures; } And e.g. following in case of Testcase: Delete Objects by Query (ID-Range) void RemoveAllByQuery(int fromId, int toId) { typedef odb::query query; typedef odb::result result; transaction t (db_->begin ()); if (this->doSqlTrace) t.tracer(odb::stderr_tracer); db_->erase_query( query::id >= fromId && query::id <= toId ); t.commit(); } This is the output of my test application with 10000 ( x 2 Tables) for SQL native: =================================================== Testcase: Persist Objects generating content... for transaction 1 with (10000 elements) elapsed time for saving data to database: 1.88 generating content... for transaction 2 with (10000 elements) elapsed time for saving data to database: 1.265 generating content... 
for transaction 3 with (10000 elements) elapsed time for saving data to database: 1.423 elapsed global time for saving data to database: 4.568, average per transaction 1.52267 ==================================================== Testcase: Select Objects by ID-Range elapsed time for selecting 10000 data classes from database: 0.593 ==================================================== Testcase: Update Objects elapsed time for updateing 10000 data classes from database: 1.518 ==================================================== Testcase: Delete Objects elapsed time for deleting 10000 data classes from database: 0.275 ==================================================== Testcase: Select Object IDs by ID-Range elapsed time for selecting 10000 data by id range from database: 0.019 ==================================================== Testcase: Delete Objects by IDs elapsed time for deleting 10000 data by ids from database: 0.242 ==================================================== Testcase: Find Objects by IDs elapsed time for finding and loading 10000 data by ids from database: 1.61 ==================================================== Testcase: Delete Objects by Query (ID-Range) elapsed time for deleting 10000 data by id range query from database: 0.023 ==================================================== Testcase: Find non existing Objects by IDs elapsed time for finding (of non existing) and loading of 10000 data from database: 0.482 The same for ODB: ==================================================== Testcase: Persist Objects generating content... for transaction 1 with (10000 elements) elapsed time for saving data to database: 0.946 generating content... for transaction 2 with (10000 elements) elapsed time for saving data to database: 1.357 generating content... 
for transaction 3 with (10000 elements) elapsed time for saving data to database: 1.31 elapsed global time for saving data to database: 3.613, average per transaction 1.20433 ==================================================== Testcase: Select Objects by ID-Range elapsed time for selecting 10000 data classes from database: 26.539 ==================================================== Testcase: Update Objects elapsed time for updateing 10000 data classes from database: 1.878 ==================================================== Testcase: Delete Objects elapsed time for deleting 10000 data classes from database: 43.545 ==================================================== Testcase: Select Object IDs by ID-Range elapsed time for selecting 10000 data by id range from database: 20.864 ==================================================== Testcase: Delete Objects by IDs elapsed time for deleting 10000 data by ids from database: 21.701 ==================================================== Testcase: Find Objects by IDs elapsed time for finding and loading 10000 data by ids from database: 9.822 ==================================================== Testcase: Delete Objects by Query (ID-Range) elapsed time for deleting 10000 data by id range query from database: 3.309 ==================================================== Testcase: Find non existing Objects by IDs elapsed time for finding (of non existing) and loading of 10000 data from database: 0.008 And now the output inclusive trace information for SQL-Native but with only 2 instead of 10000 objects: ==================================================== Testcase: Persist Objects generating content... 
for transaction 1 with (2 elements)
INSERT INTO 'wbeObjectBase' (extId, name, objectType, type) VALUES ('1', 'Test1', '3', '1');
INSERT INTO 'wbeStructure' (source, object, relationType) VALUES ( '1', '1', '5');
INSERT INTO 'wbeObjectBase' (extId, name, objectType, type) VALUES ('2', 'Test2', '3', '1');
INSERT INTO 'wbeStructure' (source, object, relationType) VALUES ( '1', '2', '5');
elapsed time for saving data to database: 0.1
generating content...
for transaction 2 with (2 elements)
INSERT INTO 'wbeObjectBase' (extId, name, objectType, type) VALUES ('2', 'Test2', '3', '1');
INSERT INTO 'wbeStructure' (source, object, relationType) VALUES ( '1', '3', '5');
INSERT INTO 'wbeObjectBase' (extId, name, objectType, type) VALUES ('4', 'Test4', '3', '1');
INSERT INTO 'wbeStructure' (source, object, relationType) VALUES ( '1', '4', '5');
elapsed time for saving data to database: 0.032
generating content...
for transaction 3 with (2 elements)
INSERT INTO 'wbeObjectBase' (extId, name, objectType, type) VALUES ('3', 'Test3', '3', '1');
INSERT INTO 'wbeStructure' (source, object, relationType) VALUES ( '1', '5', '5');
INSERT INTO 'wbeObjectBase' (extId, name, objectType, type) VALUES ('6', 'Test6', '3', '1');
INSERT INTO 'wbeStructure' (source, object, relationType) VALUES ( '1', '6', '5');
elapsed time for saving data to database: 0.024
elapsed global time for saving data to database: 0.156, average per transaction 0.052
====================================================
Testcase: Select Objects by ID-Range
SELECT id, source, object, relationType FROM 'wbeStructure' WHERE id >= 1 and id <= 2;
SELECT id, discriminator, value, name, flags FROM 'wbeEnumType' WHERE id = 5;
SELECT id, extId, name, objectType, type FROM 'wbeObjectBase' WHERE id = 1;
SELECT id, discriminator, value, name, flags FROM 'wbeEnumType' WHERE id = 1;
SELECT id, discriminator, value, name, flags FROM 'wbeEnumType' WHERE id = 3;
SELECT id, source, object, relationType FROM 'wbeStructure' WHERE id = 1;
SELECT id, extId, name, objectType, type FROM 'wbeObjectBase' WHERE id = 2;
elapsed time for selecting 2 data classes from database: 0.004
====================================================
Testcase: Update Objects
UPDATE 'wbeObjectBase' SET extId = '1', name = 'AAAA_Test1', objectType = '3', type = '1' WHERE id = '1';
UPDATE 'wbeStructure' SET source = '1', object = '1', relationType = '5' WHERE id = '1';
UPDATE 'wbeObjectBase' SET extId = '2', name = 'AAAA_Test2', objectType = '3', type = '1' WHERE id = '2';
UPDATE 'wbeStructure' SET source = '1', object = '2', relationType = '5' WHERE id = '2';
elapsed time for updateing 2 data classes from database: 0.023
====================================================
Testcase: Delete Objects
DELETE FROM 'wbeObjectBase' WHERE ID = '1';
DELETE FROM 'wbeStructure' WHERE ID = '1';
DELETE FROM 'wbeObjectBase' WHERE ID = '2';
DELETE FROM 'wbeStructure' WHERE ID = '2';
elapsed time for deleting 2 data classes from database: 0.018
====================================================
Testcase: Select Objects by IDs
SELECT id, object FROM 'wbeStructure' WHERE id >= 3 and id <= 4;
elapsed time for selecting 2 data by id range from database: 0.001
====================================================
Testcase: Delete Objects by IDs
DELETE FROM 'wbeStructure' WHERE ID = '3';
DELETE FROM 'wbeStructure' WHERE ID = '4';
DELETE FROM 'wbeObjectBase' WHERE ID = '3';
DELETE FROM 'wbeObjectBase' WHERE ID = '4';
elapsed time for deleting 2 data by ids from database: 0.024
====================================================
Testcase: Find Objects by IDs
SELECT id, source, object, relationType FROM 'wbeStructure' WHERE id = 5;
SELECT id, source, object, relationType FROM 'wbeStructure' WHERE id = 1;
SELECT id, discriminator, value, name, flags FROM 'wbeEnumType' WHERE id = 5;
SELECT id, extId, name, objectType, type FROM 'wbeObjectBase' WHERE id = 5;
SELECT id, discriminator, value, name, flags FROM 'wbeEnumType' WHERE id = 1;
SELECT id, discriminator, value, name, flags FROM 'wbeEnumType' WHERE id = 3;
SELECT id, source, object, relationType FROM 'wbeStructure' WHERE id = 6;
SELECT id, source, object, relationType FROM 'wbeStructure' WHERE id = 1;
SELECT id, extId, name, objectType, type FROM 'wbeObjectBase' WHERE id = 6;
elapsed time for finding and loading 2 data by ids from database: 0.006
====================================================
Testcase: Delete Objects by Query (ID-Range)
DELETE FROM 'wbeStructure' WHERE id >= 5 and id <= 6;
elapsed time for deleting 2 data by id range query from database: 0.017
====================================================
Testcase: Find non existing Objects by IDs
SELECT id, source, object, relationType FROM 'wbeStructure' WHERE id = 7;
SELECT id, source, object, relationType FROM 'wbeStructure' WHERE id = 8;
elapsed time for finding (of non existing) and loading of 2 data from database: 0.002

And the same for ODB:

====================================================
Testcase: Persist Objects
generating content...
for transaction 1 with (2 elements)
INSERT INTO "wbeObjectBase" ("id", "extid", "name", "objectType", "type") VALUES (?, ?, ?, ?, ?)
INSERT INTO "wbeStructure" ("id", "source", "object", "relationType") VALUES (?, ?, ?, ?)
INSERT INTO "wbeObjectBase" ("id", "extid", "name", "objectType", "type") VALUES (?, ?, ?, ?, ?)
INSERT INTO "wbeStructure" ("id", "source", "object", "relationType") VALUES (?, ?, ?, ?)
elapsed time for saving data to database: 0.033
generating content...
for transaction 2 with (2 elements)
INSERT INTO "wbeObjectBase" ("id", "extid", "name", "objectType", "type") VALUES (?, ?, ?, ?, ?)
INSERT INTO "wbeStructure" ("id", "source", "object", "relationType") VALUES (?, ?, ?, ?)
INSERT INTO "wbeObjectBase" ("id", "extid", "name", "objectType", "type") VALUES (?, ?, ?, ?, ?)
INSERT INTO "wbeStructure" ("id", "source", "object", "relationType") VALUES (?, ?, ?, ?)
elapsed time for saving data to database: 0.026
generating content...
for transaction 3 with (2 elements)
INSERT INTO "wbeObjectBase" ("id", "extid", "name", "objectType", "type") VALUES (?, ?, ?, ?, ?)
INSERT INTO "wbeStructure" ("id", "source", "object", "relationType") VALUES (?, ?, ?, ?)
INSERT INTO "wbeObjectBase" ("id", "extid", "name", "objectType", "type") VALUES (?, ?, ?, ?, ?)
INSERT INTO "wbeStructure" ("id", "source", "object", "relationType") VALUES (?, ?, ?, ?)
elapsed time for saving data to database: 0.032
elapsed global time for saving data to database: 0.091, average per transaction 0.0303333
====================================================
Testcase: Select Objects by ID-Range
SELECT "wbeStructure"."id", "wbeStructure"."source", "wbeStructure"."object", "wbeStructure"."relationType" FROM "wbeStructure" WHERE ("wbeStructure"."id" >= ?) AND ("wbeStructure"."id" <= ?)
SELECT "wbeObjectBase"."id", "wbeObjectBase"."extid", "wbeObjectBase"."name", "wbeObjectBase"."objectType", "wbeObjectBase"."type" FROM "wbeObjectBase" WHERE "wbeObjectBase"."id"=?
SELECT "wbeEnumType"."id", "wbeEnumType"."discriminator", "wbeEnumType"."value", "wbeEnumType"."name", "wbeEnumType"."flags" FROM "wbeEnumType" WHERE "wbeEnumType"."id"=?
SELECT "wbeStructure"."id" FROM "wbeStructure" WHERE "wbeStructure"."object"=?
SELECT "wbeEnumType"."id", "wbeEnumType"."discriminator", "wbeEnumType"."value", "wbeEnumType"."name", "wbeEnumType"."flags" FROM "wbeEnumType" WHERE "wbeEnumType"."id"=?
SELECT "wbeObjectBase"."id", "wbeObjectBase"."extid", "wbeObjectBase"."name", "wbeObjectBase"."objectType", "wbeObjectBase"."type" FROM "wbeObjectBase" WHERE "wbeObjectBase"."id"=?
SELECT "wbeEnumType"."id", "wbeEnumType"."discriminator", "wbeEnumType"."value", "wbeEnumType"."name", "wbeEnumType"."flags" FROM "wbeEnumType" WHERE "wbeEnumType"."id"=?
SELECT "wbeStructure"."id" FROM "wbeStructure" WHERE "wbeStructure"."object"=?
SELECT "wbeEnumType"."id", "wbeEnumType"."discriminator", "wbeEnumType"."value", "wbeEnumType"."name", "wbeEnumType"."flags" FROM "wbeEnumType" WHERE "wbeEnumType"."id"=?
elapsed time for selecting 2 data classes from database: 0.064
====================================================
Testcase: Update Objects
UPDATE "wbeObjectBase" SET "extid"=?, "name"=?, "objectType"=?, "type"=? WHERE "id"=?
UPDATE "wbeStructure" SET "source"=?, "object"=?, "relationType"=? WHERE "id"=?
UPDATE "wbeObjectBase" SET "extid"=?, "name"=?, "objectType"=?, "type"=? WHERE "id"=?
UPDATE "wbeStructure" SET "source"=?, "object"=?, "relationType"=? WHERE "id"=?
elapsed time for updateing 2 data classes from database: 0.03
====================================================
Testcase: Delete Objects
DELETE FROM "wbeObjectBase" WHERE "id"=?
DELETE FROM "wbeStructure" WHERE "id"=?
DELETE FROM "wbeObjectBase" WHERE "id"=?
DELETE FROM "wbeStructure" WHERE "id"=?
elapsed time for deleting 2 data classes from database: 0.021
====================================================
Testcase: Select Objects by IDs
SELECT "wbeStructure"."id", "wbeStructure"."source", "wbeStructure"."object", "wbeStructure"."relationType" FROM "wbeStructure" WHERE ("wbeStructure"."id" >= ?) AND ("wbeStructure"."id" <= ?)
SELECT "wbeObjectBase"."id", "wbeObjectBase"."extid", "wbeObjectBase"."name", "wbeObjectBase"."objectType", "wbeObjectBase"."type" FROM "wbeObjectBase" WHERE "wbeObjectBase"."id"=?
SELECT "wbeEnumType"."id", "wbeEnumType"."discriminator", "wbeEnumType"."value", "wbeEnumType"."name", "wbeEnumType"."flags" FROM "wbeEnumType" WHERE "wbeEnumType"."id"=?
SELECT "wbeStructure"."id" FROM "wbeStructure" WHERE "wbeStructure"."object"=?
SELECT "wbeEnumType"."id", "wbeEnumType"."discriminator", "wbeEnumType"."value", "wbeEnumType"."name", "wbeEnumType"."flags" FROM "wbeEnumType" WHERE "wbeEnumType"."id"=?
SELECT "wbeObjectBase"."id", "wbeObjectBase"."extid", "wbeObjectBase"."name", "wbeObjectBase"."objectType", "wbeObjectBase"."type" FROM "wbeObjectBase" WHERE "wbeObjectBase"."id"=?
SELECT "wbeEnumType"."id", "wbeEnumType"."discriminator", "wbeEnumType"."value", "wbeEnumType"."name", "wbeEnumType"."flags" FROM "wbeEnumType" WHERE "wbeEnumType"."id"=?
SELECT "wbeStructure"."id" FROM "wbeStructure" WHERE "wbeStructure"."object"=?
SELECT "wbeEnumType"."id", "wbeEnumType"."discriminator", "wbeEnumType"."value", "wbeEnumType"."name", "wbeEnumType"."flags" FROM "wbeEnumType" WHERE "wbeEnumType"."id"=?
elapsed time for selecting 2 data by id range from database: 0.068
====================================================
Testcase: Delete Objects by IDs
DELETE FROM "wbeStructure" WHERE "id"=?
DELETE FROM "wbeStructure" WHERE "id"=?
DELETE FROM "wbeObjectBase" WHERE "id"=?
DELETE FROM "wbeObjectBase" WHERE "id"=?
elapsed time for deleting 2 data by ids from database: 0.027
====================================================
Testcase: Find Objects by IDs
SELECT "wbeStructure"."id", "wbeStructure"."source", "wbeStructure"."object", "wbeStructure"."relationType" FROM "wbeStructure" WHERE "wbeStructure"."id"=?
SELECT "wbeObjectBase"."id", "wbeObjectBase"."extid", "wbeObjectBase"."name", "wbeObjectBase"."objectType", "wbeObjectBase"."type" FROM "wbeObjectBase" WHERE "wbeObjectBase"."id"=?
SELECT "wbeEnumType"."id", "wbeEnumType"."discriminator", "wbeEnumType"."value", "wbeEnumType"."name", "wbeEnumType"."flags" FROM "wbeEnumType" WHERE "wbeEnumType"."id"=?
SELECT "wbeStructure"."id" FROM "wbeStructure" WHERE "wbeStructure"."object"=?
SELECT "wbeEnumType"."id", "wbeEnumType"."discriminator", "wbeEnumType"."value", "wbeEnumType"."name", "wbeEnumType"."flags" FROM "wbeEnumType" WHERE "wbeEnumType"."id"=?
SELECT "wbeStructure"."id", "wbeStructure"."source", "wbeStructure"."object", "wbeStructure"."relationType" FROM "wbeStructure" WHERE "wbeStructure"."id"=?
SELECT "wbeObjectBase"."id", "wbeObjectBase"."extid", "wbeObjectBase"."name", "wbeObjectBase"."objectType", "wbeObjectBase"."type" FROM "wbeObjectBase" WHERE "wbeObjectBase"."id"=?
SELECT "wbeEnumType"."id", "wbeEnumType"."discriminator", "wbeEnumType"."value", "wbeEnumType"."name", "wbeEnumType"."flags" FROM "wbeEnumType" WHERE "wbeEnumType"."id"=?
SELECT "wbeStructure"."id" FROM "wbeStructure" WHERE "wbeStructure"."object"=?
SELECT "wbeEnumType"."id", "wbeEnumType"."discriminator", "wbeEnumType"."value", "wbeEnumType"."name", "wbeEnumType"."flags" FROM "wbeEnumType" WHERE "wbeEnumType"."id"=?
elapsed time for finding and loading 2 data by ids from database: 0.072
====================================================
Testcase: Delete Objects by Query (ID-Range)
DELETE FROM "wbeStructure" WHERE ("wbeStructure"."id" >= ?) AND ("wbeStructure"."id" <= ?)
elapsed time for deleting 2 data by id range query from database: 0.013
====================================================
Testcase: Find non existing Objects by IDs
SELECT "wbeStructure"."id", "wbeStructure"."source", "wbeStructure"."object", "wbeStructure"."relationType" FROM "wbeStructure" WHERE "wbeStructure"."id"=?
SELECT "wbeStructure"."id", "wbeStructure"."source", "wbeStructure"."object", "wbeStructure"."relationType" FROM "wbeStructure" WHERE "wbeStructure"."id"=?
elapsed time for finding (of non existing) and loading of 2 data from database: 0.015

Christian

-----Original Message-----
From: Boris Kolpackov [mailto:boris@codesynthesis.com]
Sent: Tuesday, 15 April 2014 13:28
To: Lichtenberger, Christian
Cc: odb-users@codesynthesis.com
Subject: Re: [odb-users] Erase/Remove Performance comparison to native SQL

Hi Christian,

Lichtenberger, Christian writes:

> In the scenario "delete by ids" ODB is much slower. In this scenario
> we remove 10000 (entries) x 2 (tables in 2 transactions) by entering
> the id.
> In ODB we use "db->erase(id)" and in SQL we use
> "sqlite3_mprintf("delete from 'Object' where ID = '%d';", id)". With
> native SQL we need 0.3 seconds and with ODB 18.3 seconds.

Generally, if you do the same things in ODB and native SQL, ODB should be at least as fast and often faster because of various reuse/caching mechanisms.

It is hard to say why there is a difference without seeing the code, including the object model/schema (e.g., do you use containers?). Can you show the relevant transactions for each test (ODB case)?

Also, can you enable statement tracing for each transaction:

t.tracer (odb::stderr_tracer);

And see which statements actually get executed by ODB under the hood? Do they match your native SQL?

I would be interested to hear what you will find.

Boris

From dstocking at extensionhealthcare.com Tue Apr 15 10:58:50 2014
From: dstocking at extensionhealthcare.com (David Stocking)
Date: Tue Apr 15 16:13:52 2014
Subject: [odb-users] Emulating a sql union in ODB
Message-ID:

I am working on an application where I basically need to do a union, but I'll explain the data first. The classes I'm working with are

#pragma db object polymorphic
class RosterEntry
{
  ...
};

#pragma db object
class Contact : public RosterEntry
{
  ...
};

#pragma db object
class Group : public RosterEntry
{
  ...
};

Basically my old native query goes like this:

SELECT Contact.name AS name, ... all the columns here
FROM Contact
WHERE some conditions here
UNION
SELECT Group.name AS name, ... all the columns here
FROM (
  SELECT Group.*,
    -- Get a count of the number of members in this group
    (SELECT COUNT(*)
     FROM ContactGroup AS cg
     LEFT JOIN Contact ON cg.contact_id = Contact.id
     WHERE cg.group_id = Group.id) AS members
  FROM Group
  WHERE some other conditions here
)
ORDER BY name ASC

I thought I could pull this off with views and polymorphic types, but I haven't been able to figure out how I could. Is this possible?
Should I even bother trying to get this query in a non-native form, or should I just make this huge query of doom a native SQL query view? Any enlightenment on the subject would be greatly appreciated.

David Stocking
Software Engineer, Windows Desktop
Extension Healthcare
General: 877-207-3753

From boris at codesynthesis.com Tue Apr 15 16:22:51 2014
From: boris at codesynthesis.com (Boris Kolpackov)
Date: Tue Apr 15 16:26:09 2014
Subject: [odb-users] Emulating a sql union in ODB
In-Reply-To:
References:
Message-ID:

Hi David,

David Stocking writes:

> I thought I could pull this off with views and polymorphic types,
> but I haven't been able to figure out how I could. Is this possible?

I doubt it.

> Should I even bother trying to get this query in a non-native form, or
> should I just make this huge query of doom a native SQL query view?

Views work in terms of JOINs (non-native views, that is). So if you can re-implement the same logic using JOINs, then you could use (non-native) views.

Also, splitting the query into multiple result sets and then combining them at the application level could be a way to simplify things. But it might affect performance (either way, in fact), so test first.

Otherwise, just use a native view.
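For readers unfamiliar with native views: a rough sketch of what Boris suggests might look like the following. The view name, the result member, and the simplified query text are hypothetical, not from David's actual schema; his real UNION query (with its conditions and extra columns) would go into the query string.

```cpp
// Hypothetical native view: the raw SQL is passed through to the
// database as-is, and result columns are mapped to the struct's data
// members in order.
#pragma db view query("SELECT name FROM Contact "      \
                      "UNION "                         \
                      "SELECT name FROM \"Group\" "    \
                      "ORDER BY name ASC")
struct entry_name
{
  std::string name;
};
```

Iterating over `db.query<entry_name> ()` inside a transaction would then return one row per name, without ODB trying to parse or modify the UNION.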
Boris

From boris at codesynthesis.com Wed Apr 16 06:41:54 2014
From: boris at codesynthesis.com (Boris Kolpackov)
Date: Wed Apr 16 06:45:13 2014
Subject: [odb-users] Erase/Remove Performance comparison to native SQL
In-Reply-To: <6C48D395FE34B94FA716D062240DAB8A1AC33381@ATNETS9912TMSX.ww300.siemens.net>
References: <6C48D395FE34B94FA716D062240DAB8A1AC331E4@ATNETS9912TMSX.ww300.siemens.net> <6C48D395FE34B94FA716D062240DAB8A1AC33381@ATNETS9912TMSX.ww300.siemens.net>
Message-ID:

Hi Christian,

Ok, I have some observations (below), but ideally I would need to be able to study/run the whole test. Can you send it to me? You can send it off-list if you don't want to make the code public.

1. The load test: you can see that a different set of statements is executed for the native test vs ODB. My guess is that, because of relationships, ODB loads the pointed-to objects. You should also probably use a session since you have potentially cyclical relationships. I am also not sure what your native test does in this case. Does it actually load the data from the returned columns into variables? Again, I could have answered all these questions by taking a look at the actual code.

2. The delete tests: if you look at the results with tracing enabled, ODB is actually as fast as or faster than the native test, while when you run it on 10000 objects, it is dramatically slower. So something fishy is going on here. Again, I would need to be able to run the test myself to figure it out.
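The two suggestions made so far in this thread, a session for the cyclical relationships and per-transaction statement tracing, can be combined in a sketch like this. The `object` class and the id range are placeholders standing in for Christian's actual test code, which is not shown on the list:

```cpp
#include <memory>

#include <odb/database.hxx>
#include <odb/session.hxx>
#include <odb/transaction.hxx>
#include <odb/tracer.hxx>

void
load_range (odb::database& db)
{
  odb::session s;                  // Object cache: each object loaded once.
  odb::transaction t (db.begin ());
  t.tracer (odb::stderr_tracer);   // Print every SQL statement to stderr.

  // Placeholder loop: 'object' stands for the test's persistent class.
  for (unsigned long id (1); id <= 10; ++id)
    std::shared_ptr<object> o (db.load<object> (id));

  t.commit ();
}
```

With the tracer installed, the statements ODB executes inside this transaction can be compared directly against the native test's SQL, which is exactly the comparison carried out later in the thread.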
Boris

From boris at codesynthesis.com Thu Apr 17 08:04:36 2014
From: boris at codesynthesis.com (Boris Kolpackov)
Date: Thu Apr 17 08:07:56 2014
Subject: [odb-users] Erase/Remove Performance comparison to native SQL
In-Reply-To: <6C48D395FE34B94FA716D062240DAB8A1AC37BB0@ATNETS9912TMSX.ww300.siemens.net>
References: <6C48D395FE34B94FA716D062240DAB8A1AC331E4@ATNETS9912TMSX.ww300.siemens.net> <6C48D395FE34B94FA716D062240DAB8A1AC33381@ATNETS9912TMSX.ww300.siemens.net> <6C48D395FE34B94FA716D062240DAB8A1AC37BB0@ATNETS9912TMSX.ww300.siemens.net>
Message-ID:

Hi Christian,

[CC'ed odb-users back in.]

Thanks for the test. It took me several hours to wade through it trying to figure out what's going on.

The overall summary (for those who don't feel like reading through the gory details) is that you are either comparing different things (e.g., your native test does one thing while the ODB test does another), configuring SQLite differently for the two tests (e.g., foreign key support), or both. In every test case where I addressed these, ODB is faster, often significantly.

Ok, now for the details.

I started with the select test. By comparing the number of statements executed for each test, you immediately see a difference, which tells us that your ODB and native tests perform a different amount of work for some reason. After creating a session inside the transaction, the number of statements in the ODB test was reduced quite a bit, which tells us that before, some objects were loaded multiple times, while now they are loaded only once.

Next, I looked at the inverse list in your ObjectBase class. In the native test you populate this in an ad-hoc manner. That is, you know that you are going to load all the structures for each object, so you populate the list by hand. ODB doesn't do that since it has no such application-specific knowledge.
Instead, ODB has to run a SELECT query to load each inverse relationship (you can still use your ad-hoc approach with ODB by making the list transient and populating it manually, just like you did in your native test). To make the comparison apples-to-apples, I marked the list transient, which means ODB won't be loading it. With this change, ODB is quite a bit faster than your native test, and the number of statements executed by ODB is actually smaller, which tells me that your native test is probably unnecessarily re-loading the same objects multiple times (ODB avoids this with the help of a session). So the SELECT test was sorted.

Next was the UPDATE test. This one took me some time to figure out. Long story short, you create the SQLite connection for your native test with foreign keys disabled, while ODB, by default, enables them. Once I disabled foreign keys in ODB (pass 'false' as the third argument to odb::sqlite::database and also comment out the "PRAGMA foreign_keys ..." statements in database.hxx), the ODB test was again faster than your native test.

With these changes, the rest of the tests also fell in line. Below is my output for both cases. The only test where native is still faster than ODB is "Select Object IDs by ID-Range", but, as you said, for a fair comparison you should use views rather than loading the whole objects (including their relationships).

Native:

./driver -mode 1 -count 10000 -countTrans 3
Database connection test.s3db established, successfully!
====================================================
Testcase: Persist Objects
generating content...
for transaction 1 with (10000 elements)
elapsed time for saving data to database: 0.39
generating content...
for transaction 2 with (10000 elements)
elapsed time for saving data to database: 0.4
generating content...
for transaction 3 with (10000 elements)
elapsed time for saving data to database: 0.39
elapsed global time for saving data to database: 1.18, average per transaction 0.393333
====================================================
Testcase: Select Objects by ID-Range
elapsed time for selecting 10000 data classes from database: 0.48
====================================================
Testcase: Update Objects
10000
elapsed time for updateing 10000 data classes from database: 0.29
====================================================
Testcase: Delete Objects
10000
elapsed time for deleting 10000 data classes from database: 0.25
====================================================
Testcase: Select Object IDs by ID-Range
elapsed time for selecting 10000 data by id range from database: 0.01
====================================================
Testcase: Delete Objects by IDs
elapsed time for deleting 10000 data by ids from database: 0.24
====================================================
Testcase: Find Objects by IDs
elapsed time for finding and loading 10000 data by ids from database: 1.23
====================================================
Testcase: Delete Objects by Query (ID-Range)
elapsed time for deleting 10000 data by id range query from database: 0.01
====================================================
Testcase: Find non existing Objects by IDs
elapsed time for finding (of non existing) and loading of 10000 data from database: 0.33

ODB:

./driver -mode 2 -count 10000 -countTrans 3
====================================================
Testcase: Persist Objects
generating content...
for transaction 1 with (10000 elements)
elapsed time for saving data to database: 0.13
generating content...
for transaction 2 with (10000 elements)
elapsed time for saving data to database: 0.13
generating content...
for transaction 3 with (10000 elements)
elapsed time for saving data to database: 0.12
elapsed global time for saving data to database: 0.38, average per transaction 0.126667
====================================================
Testcase: Select Objects by ID-Range
elapsed time for selecting 10000 data classes from database: 0.15
====================================================
Testcase: Update Objects
10000
elapsed time for updateing 10000 data classes from database: 0.1
====================================================
Testcase: Delete Objects
10000
elapsed time for deleting 10000 data classes from database: 0.07
====================================================
Testcase: Select Object IDs by ID-Range
elapsed time for selecting 10000 data by id range from database: 0.19
====================================================
Testcase: Delete Objects by IDs
elapsed time for deleting 10000 data by ids from database: 0.08
====================================================
Testcase: Find Objects by IDs
elapsed time for finding and loading 10000 data by ids from database: 0.2
====================================================
Testcase: Delete Objects by Query (ID-Range)
elapsed time for deleting 10000 data by id range query from database: 0
====================================================
Testcase: Find non existing Objects by IDs
elapsed time for finding (of non existing) and loading of 10000 data from database: 0.02

Boris

From boris at codesynthesis.com Thu Apr 17 09:49:02 2014
From: boris at codesynthesis.com (Boris Kolpackov)
Date: Thu Apr 17 09:52:20 2014
Subject: [odb-users] Erase/Remove Performance comparison to native SQL
In-Reply-To: <6C48D395FE34B94FA716D062240DAB8A1AC38146@ATNETS9912TMSX.ww300.siemens.net>
References: <6C48D395FE34B94FA716D062240DAB8A1AC331E4@ATNETS9912TMSX.ww300.siemens.net> <6C48D395FE34B94FA716D062240DAB8A1AC33381@ATNETS9912TMSX.ww300.siemens.net> <6C48D395FE34B94FA716D062240DAB8A1AC37BB0@ATNETS9912TMSX.ww300.siemens.net> <6C48D395FE34B94FA716D062240DAB8A1AC38146@ATNETS9912TMSX.ww300.siemens.net>
Message-ID:

Hi Christian,

Lichtenberger, Christian writes:

> Is a session required to enable caching? Or why did the number of
> statements get reduced? I thought a cache exists per connection! But
> now I read in the manual that a session is primarily a cache.

There are different kinds of caches in ODB. There is a statement cache per connection, but that one you don't see. A session is an object cache. That is, if a session instance is created, then every object loaded is added to the session. And if an object already exists in the cache, then it is returned directly instead of being loaded from the database. The manual explains all this in detail.

> If an attribute is inverse and defined as QLazySharedPointer, is a
> SELECT statement also performed?

Yes, that's correct.

> I thought that lazy means that it is loaded later.

The objects are loaded later, not the list of objects. However, you can get this behavior (a lazy list of lazy pointers) using a lazy-loaded section (Chapter 9, "Sections" in the ODB manual).

> Or does lazy mean that a SELECT for the ids is performed and the
> objects behind them are loaded later!?

Exactly.

> The foreign key problem I must figure out later.

There is not much to figure out, really. If you enable foreign key checking, SQLite runs slower since it now has to check them. So this boils down to whether you are willing to sacrifice some performance for extra checks or not.
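The foreign-key change described earlier in the thread (passing 'false' as the third argument to odb::sqlite::database) can be sketched as follows. The file name test.s3db is taken from the benchmark output above; the open flags shown are the documented defaults:

```cpp
#include <odb/sqlite/database.hxx>

// Open the benchmark database with foreign-key enforcement turned off,
// matching the native test's SQLite configuration. The third constructor
// argument controls whether ODB issues "PRAGMA foreign_keys=ON" on each
// connection.
odb::sqlite::database db (
  "test.s3db",
  SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, // default open flags
  false);                                     // foreign_keys = false
```

Whether to keep enforcement on is the trade-off Boris describes: integrity checks on every INSERT/UPDATE/DELETE versus the extra per-statement cost.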
Boris From christian.lichtenberger at etm.at Thu Apr 17 08:45:31 2014 From: christian.lichtenberger at etm.at (Lichtenberger, Christian) Date: Mon Apr 21 03:07:29 2014 Subject: AW: [odb-users] Erase/Remove Performance comparison to native SQL In-Reply-To: References: <6C48D395FE34B94FA716D062240DAB8A1AC331E4@ATNETS9912TMSX.ww300.siemens.net> <6C48D395FE34B94FA716D062240DAB8A1AC33381@ATNETS9912TMSX.ww300.siemens.net> <6C48D395FE34B94FA716D062240DAB8A1AC37BB0@ATNETS9912TMSX.ww300.siemens.net> Message-ID: <6C48D395FE34B94FA716D062240DAB8A1AC38146@ATNETS9912TMSX.ww300.siemens.net> Hi Boris Thanks for the analyses. Now it looks very good! Could you send (off-list) the test-code inclusive changes back to me. To see directly what you have changed. Than I have some further questions: Is a session required to enable caching? Or why the number of statements got reduced? I thought a cache exists per connection! But now I read in the manual that a session is primary a cache. If a attribute is inverse and defined as QLazySharedPointer also a SELECT statement is performed? I thought that the lazy means that it is loaded later. Or means lazy that a SELECT for the ids is performed and the objects behind are loaded later!? The foreign key problem I must figure out later. Thanks!! Christian -----Urspr?ngliche Nachricht----- Von: Boris Kolpackov [mailto:boris@codesynthesis.com] Gesendet: Donnerstag, 17. April 2014 14:05 An: Lichtenberger, Christian Cc: odb-users@codesynthesis.com Betreff: Re: [odb-users] Erase/Remove Performance comparison to native SQL Hi Christian, [CC'ed odb-users back in.] Thanks for the test. It took me several hours to wade through it trying to figure out what's going on. 
The overall summary (for those who don't feel like reading through the gory details) is that you are either comparing different things (e.g., your native test does one thing while the ODB test does another), you configure SQLite differently for the two tests (e.g., foreign key support), or both. In every test case where I addressed these, ODB is faster, often significantly. Ok, now for the details. I started with the select test. By comparing the number of statements executed for each test, you immediately see the difference which tells us that your ODB and native tests perform different amount of work for some reason. After creating the session inside the transaction, the number of statements in the ODB test got reduced quite a bit, which tells us that before some objects were loaded multiple time while now they are loaded only once. Next, I looked at the inverse list in your ObjectBase class. In the native test you populate this in an ad-hoc manner. That is, you know that you are going to load all the structures for each object, so you populate the list by hand. ODB doesn't do that since it has no such application-specific knowledge. Instead, ODB has to run a SELECT query to load each inverse relationship (you can still use your ad-hoc approach with ODB by making the list transient and populating it manually, just like you did in your native test). To make the comparison apples-to-apples, I marked the list transient, which means ODB won't be loading it. With this change, ODB is quite a bit faster than your native test and the number of statements executed by ODB is actually fewer which tells me that your native test is probably unnecessarily re-loading the same objects multiple times (ODB avoids this with the help of a session). So the SELECT test was sorted. Next was the UPDATE test. This one took me some time to figure out. Long story short, you create the SQLite connection for your native test with foreign keys disabled while ODB, by default, enables them. 
Once I disabled foreign keys in ODB (pass 'false' as the third argument to odb::sqlite::database and also comment out the "PRAGMA foreign_keys ..." statements in database.hxx), ODB test was again faster than your native test. With these changes, the rest of the tests also fell in line. Below is my output for both cases. The only test where native is still faster than ODB is "Select Object IDs by ID-Range", but, as you said, to be a fair comparison you should use views rather than loading the whole objects (including their relationships). Native: ./driver -mode 1 -count 10000 -countTrans 3 Database connection test.s3db established, successfully! ==================================================== Testcase: Persist Objects generating content... for transaction 1 with (10000 elements) elapsed time for saving data to database: 0.39 generating content... for transaction 2 with (10000 elements) elapsed time for saving data to database: 0.4 generating content... for transaction 3 with (10000 elements) elapsed time for saving data to database: 0.39 elapsed global time for saving data to database: 1.18, average per transaction 0.393333 ==================================================== Testcase: Select Objects by ID-Range elapsed time for selecting 10000 data classes from database: 0.48 ==================================================== Testcase: Update Objects 10000 elapsed time for updateing 10000 data classes from database: 0.29 ==================================================== Testcase: Delete Objects 10000 elapsed time for deleting 10000 data classes from database: 0.25 ==================================================== Testcase: Select Object IDs by ID-Range elapsed time for selecting 10000 data by id range from database: 0.01 ==================================================== Testcase: Delete Objects by IDs elapsed time for deleting 10000 data by ids from database: 0.24 ==================================================== Testcase: Find Objects by IDs 
elapsed time for finding and loading 10000 data by ids from database: 1.23
====================================================
Testcase: Delete Objects by Query (ID-Range)
elapsed time for deleting 10000 data by id range query from database: 0.01
====================================================
Testcase: Find non existing Objects by IDs
elapsed time for finding (of non existing) and loading of 10000 data from database: 0.33

ODB:

./driver -mode 2 -count 10000 -countTrans 3
====================================================
Testcase: Persist Objects
generating content... for transaction 1 with (10000 elements)
elapsed time for saving data to database: 0.13
generating content... for transaction 2 with (10000 elements)
elapsed time for saving data to database: 0.13
generating content... for transaction 3 with (10000 elements)
elapsed time for saving data to database: 0.12
elapsed global time for saving data to database: 0.38, average per transaction 0.126667
====================================================
Testcase: Select Objects by ID-Range
elapsed time for selecting 10000 data classes from database: 0.15
====================================================
Testcase: Update Objects 10000
elapsed time for updateing 10000 data classes from database: 0.1
====================================================
Testcase: Delete Objects 10000
elapsed time for deleting 10000 data classes from database: 0.07
====================================================
Testcase: Select Object IDs by ID-Range
elapsed time for selecting 10000 data by id range from database: 0.19
====================================================
Testcase: Delete Objects by IDs
elapsed time for deleting 10000 data by ids from database: 0.08
====================================================
Testcase: Find Objects by IDs
elapsed time for finding and loading 10000 data by ids from database: 0.2
====================================================
Testcase: Delete Objects by Query (ID-Range)
elapsed time for deleting 10000 data by id range query from database: 0
====================================================
Testcase: Find non existing Objects by IDs
elapsed time for finding (of non existing) and loading of 10000 data from database: 0.02

Boris

From marascio at gmail.com Tue Apr 29 16:39:49 2014
From: marascio at gmail.com (Louis Marascio)
Date: Tue Apr 29 16:40:16 2014
Subject: [odb-users] Problem with bi-directional one-to-many relationship
Message-ID:

Hi folks,

I've run into an issue that I'm unable to solve myself. I have a simple bi-directional one-to-many relationship between two tables. The tables look like this:

CREATE TABLE locations (
  id SERIAL PRIMARY KEY,
  name TEXT NOT NULL
);

CREATE TABLE servers (
  id SERIAL PRIMARY KEY,
  hostname TEXT NOT NULL,
  location_id INTEGER REFERENCES locations(id) NOT NULL
);

FWIW, this database schema is NOT managed by ODB. My application requires ODB to map to an existing schema. The mappings are equally simplistic, as you can imagine. The fields of interest are mapped as follows (partial classes shown, of course):

class Location
{
private:
  #pragma db id auto
  unsigned long id_;

  #pragma db value_not_null inverse(location_)
  Servers_type servers_;
};

class Server
{
private:
  #pragma db id auto
  unsigned long id_;

  #pragma db not_null column("location_id")
  Location_ptr location_;
};

This is, I believe, nearly identical to the example shown in the documentation. Location_ptr and Server_ptr are both std::shared_ptr typedefs for their respective classes. I have attempted with both std::weak_ptr and odb::lazy_weak_ptr with no change to the outcome.

The error I'm running into is simple: I can persist these objects without issue. I have verified that the database contains the correct rows with correct IDs. When I try to load() a Location, I receive an odb::object_not_persistent error.
In examining the PostgreSQL log files, I see the following:

execute Location_find: SELECT "locations"."id", "locations"."name" FROM "locations" WHERE "locations"."id"=$1
parameters: $1 = '1'
execute Location_servers_select: SELECT "servers"."id" FROM "servers" WHERE "servers"."location_id"=$1
parameters: $1 = '1'
execute Server_find: SELECT "servers"."id", "servers"."hostname", "servers"."location_id" FROM "servers" WHERE "servers"."id"=$1
parameters: $1 = '4294967296'

The ID is correct for the first two SELECT statements, but is garbage for the SELECT that loads the Servers for the Location. The garbage ID is suspiciously 0xFFFFFFFF + 1. A pattern exists as well: if I re-run the test program, the failing ID is 8589934592. Run it a third time, and it is 12884901888. Each failed run increments the garbage ID by 0xFFFFFFFF + 1.

The load() works if I use odb::lazy_weak_ptr in the vector of Servers within Location. However, when I attempt to .load() the lazy_weak_ptr I hit the same issue.

I have attempted to debug this, but I'm afraid my understanding of the ODB internals is lacking. I don't think it's a bug on my side, but would be relieved to have a trivial error in my own code pointed out to me. I have created a reproducible test case that can be downloaded here:

https://dl.dropboxusercontent.com/u/140772/lrm_odbtest.tar.gz

Inside the archive you'll find the simple test driver, database models, SQL script to make the database, and a Makefile. You'll also find the PostgreSQL statement log that I captured while running the test program. I have annotated it, pointing out the important pieces, as far as I can tell.

ODB 2.3.0
libodb 2.3.0
libodb-pgsql 2.3.1
PostgreSQL 9.3.2
g++ 4.6.4
Linux 3.12.9 x86_64 (Arch linux, if that matters)

The ODB compiler was not built from source. I am using the compiler binaries as provided on the downloads section of the website. libodb and libodb-pgsql were, of course, built from source.
Thank you, and please let me know if there is any additional data or detail that you need.

Louis

---
Louis R. Marascio
512-964-4569

From marascio at gmail.com Tue Apr 29 18:30:02 2014
From: marascio at gmail.com (Louis Marascio)
Date: Tue Apr 29 18:30:30 2014
Subject: [odb-users] Problem with bi-directional one-to-many relationship
Message-ID:

Ok, I've found my problem. The root cause is embarrassingly simple. I had my id column defined as an unsigned long, but the PostgreSQL data type mappings will map a PostgreSQL INTEGER to a C++ int. By redefining the C++ id type to be unsigned int, the problem is resolved. I'm not sure why the subtle overflow is happening, but regardless, that has solved the problem for me.

Thanks,
Louis

---
Louis R. Marascio
512-964-4569

From boris at codesynthesis.com Wed Apr 30 11:21:02 2014
From: boris at codesynthesis.com (Boris Kolpackov)
Date: Wed Apr 30 11:24:17 2014
Subject: [odb-users] Problem with bi-directional one-to-many relationship
In-Reply-To:
References:
Message-ID:

Hi Louis,

Louis Marascio writes:

> I had my id column defined as an unsigned long, but the PostgreSQL
> data type mappings will map a PostgreSQL INTEGER to C++ int. By
> redefining the C++ id type to be unsigned int the problem is resolved.
>
> I'm not sure why the subtle overflow is happening [...].

What happens is this: by default, ODB maps the long C++ type to the BIGINT PG data type. But in your custom schema, the column corresponding to this long member has the INTEGER type. The PG client library (libpq) works in such a way that it returns the data using the type that's in the database. So what happens is that ODB expects to get an 8-byte BIGINT, but PG returns a 4-byte buffer since the returned value is an INTEGER. I've added a TODO item to see if we can detect such cases somehow.

One way to make sure that your schema matches what ODB "thinks" your schema looks like is to actually make ODB generate the database schema.
You can then compare the two to see if there are any type mismatches, etc.

Note also that you can override the database type for any member using the type pragma:

#pragma db id type("INTEGER")
unsigned long id;

Boris