From rudy.depena at gmail.com Thu Jan 7 02:06:49 2016 From: rudy.depena at gmail.com (Rudy Depena) Date: Thu Jan 7 02:06:56 2016 Subject: [odb-users] Question about type conversion in "Hello" example Message-ID: Hi, I noticed that ~\odb-examples-2.4.0\hello\driver.cxx file has some lines for persistence of a person object ... *unsigned long john_id, joe_id;* *...* *john_id = db->persist(john);* In database.ixx we can see that the persist() function is defined as *template * * inline typename object_traits::id_type database::* * persist (T& obj)* * {* * return persist_ (obj);* * }* I can see that while the return type for persist() is *typename object_traits::id_type*, the type of john_id is unsigned long. How is this not a compiler error such as the following? "Error: No suitable conversion function from "odb::access::object_traits::id_type" to "unsigned long" exists" What is being done to implicitly convert the id_type to an unsigned long? Thanks, Rudy P.S. - I am looking at the hello example in examples-sqlite-vc12 solution for Windows. From boris at codesynthesis.com Thu Jan 7 10:26:20 2016 From: boris at codesynthesis.com (Boris Kolpackov) Date: Thu Jan 7 10:26:00 2016 Subject: [odb-users] Question about type conversion in "Hello" example In-Reply-To: References: Message-ID: Hi Rudy, Rudy Depena writes: > I can see that while the return type for persist() is typename > object_traits::id_type, the type of john_id is unsigned long. How is > this not a compiler error such as the following? object_traits::id_type is the type alias (typedef) for the object's id member type. Since the id type for person in this example is unsigned long, all is good. Boris From albert.gu at ringcentral.com Fri Jan 8 11:41:32 2016 From: albert.gu at ringcentral.com (Albert (Jinku) Gu) Date: Sat Jan 9 10:57:15 2016 Subject: [odb-users] Failed: when building and running examples Message-ID: Dear engineers, I am trying to use the ODB. And tried to build and run the examples but failed. The failure info is as following: LMXMN006:odb-examples-2.4.0 albert.gu$ ./configure --with-database sqlite configure: WARNING: you should use --build, --host, --target checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for a thread-safe mkdir -p... config/install-sh -c -d checking for gawk... no checking for mawk... no checking for nawk... no checking for awk... awk checking whether make sets $(MAKE)... yes checking how to create a ustar tar archive... none checking for style of include used by make... GNU checking for sqlite-gcc... no checking for gcc... gcc checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking dependency style of gcc... gcc3 checking for sqlite-ar... no checking for sqlite-lib... no checking for sqlite-link... no checking for ar... ar checking the archiver (ar) interface... ar checking build system type... Invalid configuration `sqlite': machine `sqlite' not recognized configure: error: /bin/sh config/config.sub sqlite failed I am using a MacBook Pro. And installed the odb complier, libodb, odb-sqlite. Even the sqlite tool. Could you help do me a favour? Any help would be appreciated! Thanks in advance! 
Regards, Albert From dieter.govaerts at bricsys.com Sat Jan 9 07:02:40 2016 From: dieter.govaerts at bricsys.com (dieter.govaerts@bricsys.com) Date: Sat Jan 9 10:57:15 2016 Subject: [odb-users] Manual schema migration for SQLite Message-ID: <1452340960.581627555@apps.rackspace.com> Hello, Is there a supported way to implement a manual SQLite database schema migration (not data migration) to work around odb migration limitations? Basically I'd like suppress the generation of a schema_catalog_migrate_entry en define it myself. I normally work within odbs limitations, but now I'd like to perform a major update including clean-up of obsolete columns etc. I'd like to implement it using temporary tables, replacing old tables with new tables, etc. stuff beyond the scope of odbs migration. Best regards, Dieter Govaerts From jnw at xs4all.nl Sat Jan 9 11:15:56 2016 From: jnw at xs4all.nl (Jeroen N. Witmond) Date: Sat Jan 9 11:16:06 2016 Subject: [odb-users] Failed: when building and running examples In-Reply-To: References: Message-ID: My guess is that your problem is a missing equal sign; that is, your command should be ./configure --with-database=sqlite On 2016-01-08 17:41, Albert (Jinku) Gu wrote: > Dear engineers, > > I am trying to use the ODB. And tried to build and run the examples but > failed. > > The failure info is as following: > > LMXMN006:odb-examples-2.4.0 albert.gu$ ./configure --with-database > sqlite > configure: WARNING: you should use --build, --host, --target > checking for a BSD-compatible install... /usr/bin/install -c > checking whether build environment is sane... yes > checking for a thread-safe mkdir -p... config/install-sh -c -d > checking for gawk... no > checking for mawk... no > checking for nawk... no > checking for awk... awk > checking whether make sets $(MAKE)... yes > checking how to create a ustar tar archive... none > checking for style of include used by make... GNU > checking for sqlite-gcc... no > checking for gcc... gcc > checking whether the C compiler works... yes > checking for C compiler default output file name... a.out > checking for suffix of executables... > checking whether we are cross compiling... no > checking for suffix of object files... o > checking whether we are using the GNU C compiler... yes > checking whether gcc accepts -g... yes > checking for gcc option to accept ISO C89... none needed > checking dependency style of gcc... gcc3 > checking for sqlite-ar... no > checking for sqlite-lib... no > checking for sqlite-link... no > checking for ar... ar > checking the archiver (ar) interface... ar > checking build system type... Invalid configuration `sqlite': machine > `sqlite' not recognized > configure: error: /bin/sh config/config.sub sqlite failed > > I am using a MacBook Pro. And installed the odb complier, libodb, > odb-sqlite. Even the sqlite tool. > > Could you help do me a favour? Any help would be appreciated! > > Thanks in advance! > > Regards, > Albert From abv150ci at gmail.com Sat Jan 9 13:22:52 2016 From: abv150ci at gmail.com (=?UTF-8?Q?Aar=C3=B3n_Bueno_Villares?=) Date: Sat Jan 9 13:23:40 2016 Subject: [odb-users] Accesor/modifier/column built-in regex Message-ID: I would like to know what is the built-in regexes for accesors and modifiers searching, because the documentation don't give specific details about that. 
Specifically, my "private member declaration" pattern is: - a one, two or three characters identifying the type of the object - an underscore - the data member For example: u_age : age is a private (_) unsigned (u). `age` as column name. str_name : name is a private (_) string (str). `name` as column name. And its accessor would be age(), age(const unsigned&) and name()/name(const string&). Would the builtin rules work in my case? Best regards, Peregring-lk From abv150ci at gmail.com Sat Jan 9 16:15:41 2016 From: abv150ci at gmail.com (=?UTF-8?Q?Aar=C3=B3n_Bueno_Villares?=) Date: Sat Jan 9 16:16:29 2016 Subject: [odb-users] Const members Message-ID: I'm a little confused about const-members, because, though they cannot be updated (which has sense, because there are presumed to don't be updated), how are these members loaded? The documentation says nothing about the semantics of const members for other database operations like persist, load or find. The most obvious thing is they are not loaded at all because you know, how could ODB assigns values to it? But I don't know if ODB performs some kind of const_cast to their members through reference accessors or whatever.... In other words, what are the semantics and what methods are required for an object class, for: - const members - non-const but read-only members ? Best regards, Peregring-lk From mne at qosmotec.com Mon Jan 11 04:12:48 2016 From: mne at qosmotec.com (Marcel Nehring) Date: Mon Jan 11 04:13:47 2016 Subject: [odb-users] Warnings in Visual Studio 2015 Message-ID: <1dce6c31d90643aa82378cfdca3c21ce@QEX.qosmotec.com> Hi, when compiling code that uses ODB 2.4.0 with Visual Studio 2015 one gets many warnings like: C4275 non - DLL-interface classkey class "std::exception" used as base for DLL-interface classkey struct "odb::exception" \odb\exception.hxx 19 Furthermore when linking everything together I get the warning LNK4006 __NULL_IMPORT_DESCRIPTOR already defined in "odb-d.lib(odb-d.dll); second definition ignored. odb-oracle-d.lib(odb-oracle-d.dll) 1 So far we don't experience any problems based on these warnings, however. Regards, Marcel From albert.gu at ringcentral.com Sat Jan 9 20:57:42 2016 From: albert.gu at ringcentral.com (Albert (Jinku) Gu) Date: Mon Jan 11 08:49:33 2016 Subject: [odb-users] Failed: when building and running examples In-Reply-To: References: Message-ID: <94308C4A-2F10-4BE1-9ED1-E211CCED0724@ringcentral.com> Hi Jeroen, Great, it works now! Thank you! By the way, the command in web page seems easy to cause misunderstanding. There is no equal sign. http://www.codesynthesis.com/products/odb/doc/install-unix.xhtml Building and Running the Examples If you would like to build and run the ODB examples, download the odb-examples package and use the standard autotools build system to compile it on your machine. Normally, the following commands are sufficient: ./configure --with-database make Could we update it? Regards, Albert On Jan 10, 2016, at 12:15 AM, Jeroen N. Witmond > wrote: My guess is that your problem is a missing equal sign; that is, your command should be ./configure --with-database=sqlite On 2016-01-08 17:41, Albert (Jinku) Gu wrote: Dear engineers, I am trying to use the ODB. And tried to build and run the examples but failed. The failure info is as following: LMXMN006:odb-examples-2.4.0 albert.gu$ ./configure --with-database sqlite configure: WARNING: you should use --build, --host, --target checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... 
yes checking for a thread-safe mkdir -p... config/install-sh -c -d checking for gawk... no checking for mawk... no checking for nawk... no checking for awk... awk checking whether make sets $(MAKE)... yes checking how to create a ustar tar archive... none checking for style of include used by make... GNU checking for sqlite-gcc... no checking for gcc... gcc checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking dependency style of gcc... gcc3 checking for sqlite-ar... no checking for sqlite-lib... no checking for sqlite-link... no checking for ar... ar checking the archiver (ar) interface... ar checking build system type... Invalid configuration `sqlite': machine `sqlite' not recognized configure: error: /bin/sh config/config.sub sqlite failed I am using a MacBook Pro. And installed the odb complier, libodb, odb-sqlite. Even the sqlite tool. Could you help do me a favour? Any help would be appreciated! Thanks in advance! Regards, Albert From boris at codesynthesis.com Mon Jan 11 08:53:43 2016 From: boris at codesynthesis.com (Boris Kolpackov) Date: Mon Jan 11 08:53:21 2016 Subject: [odb-users] Failed: when building and running examples In-Reply-To: <94308C4A-2F10-4BE1-9ED1-E211CCED0724@ringcentral.com> References: <94308C4A-2F10-4BE1-9ED1-E211CCED0724@ringcentral.com> Message-ID: Hi Albert, Albert (Jinku) Gu writes: > ./configure --with-database Fixed, thanks for letting us know! Boris From boris at codesynthesis.com Mon Jan 11 09:28:57 2016 From: boris at codesynthesis.com (Boris Kolpackov) Date: Mon Jan 11 09:28:34 2016 Subject: [odb-users] Accesor/modifier/column built-in regex In-Reply-To: References: Message-ID: Hi Aar?n, Aar?n Bueno Villares writes: > I would like to know what is the built-in regexes for accesors and > modifiers searching, because the documentation don't give specific details > about that. We don't list them because they are quite hairy and may change in the future. But you can always check the source code (odb/context.cxx): data_->accessor_regex_.push_back (regexsub ("/(.+)/get_$1/")); // get_foo data_->accessor_regex_.push_back (regexsub ("/(.+)/get\\u$1/")); // getFoo data_->accessor_regex_.push_back (regexsub ("/(.+)/get$1/")); // getfoo data_->accessor_regex_.push_back (regexsub ("/(.+)/$1/")); // foo data_->modifier_regex_.push_back (regexsub ("/(.+)/set_$1/")); // set_foo data_->modifier_regex_.push_back (regexsub ("/(.+)/set\\u$1/")); // setFoo data_->modifier_regex_.push_back (regexsub ("/(.+)/set$1/")); // setfoo data_->modifier_regex_.push_back (regexsub ("/(.+)/$1/")); // foo Also, you can always see what gets tried and what matched and in which order with --accessor-regex-trace. Boris From boris at codesynthesis.com Mon Jan 11 09:33:09 2016 From: boris at codesynthesis.com (Boris Kolpackov) Date: Mon Jan 11 09:32:44 2016 Subject: [odb-users] Const members In-Reply-To: References: Message-ID: Hi Aar?n, Aar?n Bueno Villares writes: > The most obvious thing is they are not loaded at all because you know, how > could ODB assigns values to it? But I don't know if ODB performs some kind > of const_cast to their members through reference accessors or whatever.... 
They are loaded and, yes, ODB uses const_cast to obtain a non-const reference. The rest of the semantics of const/readonly member is described in Section 14.4.12, "readonly". Boris From boris at codesynthesis.com Mon Jan 11 09:53:57 2016 From: boris at codesynthesis.com (Boris Kolpackov) Date: Mon Jan 11 09:53:33 2016 Subject: [odb-users] Manual schema migration for SQLite In-Reply-To: <1452340960.581627555@apps.rackspace.com> References: <1452340960.581627555@apps.rackspace.com> Message-ID: Hi Dieter dieter.govaerts@bricsys.com writes: > Is there a supported way to implement a manual SQLite database schema > migration (not data migration) to work around odb migration limitations? > Basically I'd like suppress the generation of a schema_catalog_migrate_entry > en define it myself. While there is no way to suppress it, you can always skip executing this step by implementing the "migration loop" yourself instead of calling schema_catalog::migrate (). See the example at the end of Chapter 13.3.1, "Immediate Data Migration". Note also that even if we wanted to suppress the generation of a migration step, I don't think we could, at least no easily. This is because ODB needs to know the state of the object model after this step in order to continue with further migrations. There could also be complications with the "generate but skip" approach since ODB does not support certain object model changes (e.g., removal of the object id). In such cases it will suggest that you reimplement your changes in terms of creating new persistent objects, etc. Which might actually not be a bad idea: you are planning to re-build tables, etc, so why not make ODB drop old tables and create new ones for you? To achieve this you, for example, could change the table names of your persistent classes (#pragma db table). To ODB this will seem as if you just deleted one class and added another, which it will translate to dropping the old table and creating a new one. The only potential drawback of this approach is that you cannot "reuse" the old table names. But perhaps this is small price to pay? Let me know if none of this works for you or if you run into some issues. Also, it would be great if you could share the approach you used in the end and how it worked out. Boris From b.noushin7 at yahoo.com Wed Jan 13 03:09:04 2016 From: b.noushin7 at yahoo.com (Noushin B) Date: Wed Jan 13 03:12:11 2016 Subject: [odb-users] ODB support for calling oracle stored procedures References: <2061506012.3541911.1452672544795.JavaMail.yahoo.ref@mail.yahoo.com> Message-ID: <2061506012.3541911.1452672544795.JavaMail.yahoo@mail.yahoo.com> Hello,I want to know if odb supports oracle stored procedures?If yes, would you please giuide me how to do it? I have searched the manual, but couldn't find any thing about oracle stored procedures.Thanks From mne at qosmotec.com Wed Jan 13 10:11:09 2016 From: mne at qosmotec.com (Marcel Nehring) Date: Wed Jan 13 10:12:10 2016 Subject: AW: [odb-users] Storing files in an oracle database In-Reply-To: References: Message-ID: Hi ODB users, Hi Boris, I am picking up on this issue since I was not yet able to solve it. > Skimming through the OCI docs, BFILE appears to be special in that on INSERT or UPDATE you specify the file name, not its data. Not sure what SELECT returns... It returns a locator similar to a SELECT on a BLOB column. > I wonder if there is a way to cast BFILE to BLOB? 
If this were possible then you could map BFILE to BLOB and use the BFILENAME function for INSERT and UPDATE (the 'to' expression) To me it seems that it is not possible to cast BFILE to BLOB. Although both data types behave very similar when reading from the database, a simple cast doesn't seem to help. I am getting an ORA-00932 error. My basic idea this time was to split my internal data members into two. One for the filename and one for the file contents. Since you mentioned that ODB can be smart when handling LOBs the usage of virtual data members didn't seem to be required anymore. In principle it should therefore work to map BFILE to BLOB (what turned out to not work) when reading from the database and use the second member containing the filename when inserting/updating the database: //... #pragma db map type("BFILE") as("BLOB") to("BFILENAME('ODB_DATA_DIR', (?))") from("(?)")) PRAGMA_DB(column("FILE") type("BFILE") get(std::vector(cbegin(this.m_filename), cend(this.m_filename)))) std::vector m_contents; PRAGMA_DB(transient) std::string m_filename; //... However this crashes when trying to persist a new object since in the param_callback() method of the std::vector and BLOB value traits front() is called on an empty vector. So in the end I was neither able to read my class from the database (after manual insertion) nor to persist it. Are there any other ODB mechanisms I could try to solve the problem? Thanks in advance. Regards, Marcel From boris at codesynthesis.com Wed Jan 13 10:41:17 2016 From: boris at codesynthesis.com (Boris Kolpackov) Date: Wed Jan 13 10:41:21 2016 Subject: [odb-users] ODB support for calling oracle stored procedures In-Reply-To: <2061506012.3541911.1452672544795.JavaMail.yahoo@mail.yahoo.com> References: <2061506012.3541911.1452672544795.JavaMail.yahoo.ref@mail.yahoo.com> <2061506012.3541911.1452672544795.JavaMail.yahoo@mail.yahoo.com> Message-ID: Hi, Noushin B writes: > Hello, I want to know if odb supports oracle stored procedures? If yes, > would you please giuide me how to do it? What kind of Oracle procedure do you want to call, and, in particular, what does it return (and how)? A concrete example of the procedure and what you would expect in C++ as the result of the call would be helpful. Boris From abv150ci at gmail.com Wed Jan 13 20:54:26 2016 From: abv150ci at gmail.com (=?UTF-8?Q?Aar=C3=B3n_Bueno_Villares?=) Date: Wed Jan 13 20:55:13 2016 Subject: [odb-users] Configure MySQL client connection In-Reply-To: References: Message-ID: Thanks for you reply. It's pretty clear. There is, however, a possibility that the (network) connection to the > database will be lost at some point. In > this case, the pool will try to automatically re-connect by creating a new > connection. > In the case of disconnection... for example, if in a constructor of a class (a main class which lifecycle is the application's one), I create a transaction, and call transaction::connection() to asign the returned reference to a member of my object (for example, to be used/check later or whatever), and the connection is lost, will that connection object be destroyed under any circunstance? My doubt comes from that sentence from the docs: To obtain a connection we call the database::connection() function. The > connection is returned as odb::connection_ptr, which is an > implementation-specific smart pointer with the shared pointer semantics. > (...). 
Once the last instance of connection_ptr pointing to the same > connection is destroyed, the connection is returned to the database > instance. > So since as you said, after a disconnection the database will try to get a new connection, it means that it won't reuse the previous connection object, and consequently, it is useless and can be a target of deletion if "there is no more instances of connection_ptr pointing to the same connection" (if there's no more instances of connection_ptr, the database thinks it's the only owner of the object, and thus, it can delete it if it is useless). But, since I have got the connection as a reference, and saved that connection, it could point to a destroyed object because the database thought nobody had access to it. Is it that scene likely, or the internal behaviour of ODB works differently as I've depicted? From boris at codesynthesis.com Thu Jan 14 11:56:19 2016 From: boris at codesynthesis.com (Boris Kolpackov) Date: Thu Jan 14 11:56:24 2016 Subject: [odb-users] Configure MySQL client connection In-Reply-To: References: Message-ID: Hi Aar?n, Aar?n Bueno Villares writes: > Is it that scene likely, or the internal behaviour of ODB works differently > as I've depicted? Here is how ODB does it: When one of the (low-level) MySQL operations on a connection fails, before throwing a corresponding exception, ODB checks whether the error code indicates the connection is no loner usable and, if so, marks it as failed. You can check if a connection has been marked as failed with connection::failed() predicate. When the connection is returned to the connection_pool (i.e., there are no more connection_ptr instances pointing to it), the pool checks if it is failed. If it is still ok, then it is added back to the pool to be reused. If it is failed, then it is simply destroyed. For MySQL, there is also support for pinging the connection which the pool does by default just before returning the connection to the requester (see mysql::connection::ping()). So, in your situation, if you have a connection_ptr instance pointing to a failed connection and you keep trying to perform transactions on this connection, you will keep getting the exception. Boris From steffen at boast.nl Thu Jan 14 11:08:16 2016 From: steffen at boast.nl (steffen@boast.nl) Date: Thu Jan 14 12:38:16 2016 Subject: [odb-users] Delayed initialization of polymorphic_entry_for_X and create_schema_entry_ Message-ID: Hi, I ran into a problem which I think is related to delayed initialization of the variable polymorphic_entry_for_X. I have a polymorphic object structure that is peristed to ODB, say A : B. I now try to read all objects using database.query(). Entities are stored using perist(QSharedPointer); Thus, odb is never (compile time) made aware of the existence of type A in this case. I have two applications with the same code; one works, one throws a "no type information" exception? By adding a reading database.query() to the code, it works in both applications. As the code is shared between the two application, the only cause I can think of is a delayed initialization of polymorphic_entry_for_A. Is that correct? Also, can the same problem occur on create_schema_entry_? I have more situations where I expect ODB to have knowledge of all my classes while they are not explicitly used. To my current understanding of C++ you can not guarantee this right? Should I make a (compile time) reference to all the odb generated code to be sure, or am I missing something? 
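In code, the scenario described above is roughly the following (a sketch only; the class names A and B, the factory function and the surrounding transaction are illustrations of the setup, not code from the actual application, and db is assumed to be the odb database instance):

    // B is the polymorphic base, A derives from B; both are persistent.
    // Only B is named here at compile time, so nothing in this
    // translation unit references A's generated database support code.
    QSharedPointer<B> obj = makeObject ();   // may actually point to an A

    odb::transaction t (db.begin ());
    db.persist (obj);   // A's statements/type information have to be
                        // found at run time through its polymorphic_entry
    t.commit ();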
Greetings, Steffen From boris at codesynthesis.com Thu Jan 14 12:59:21 2016 From: boris at codesynthesis.com (Boris Kolpackov) Date: Thu Jan 14 12:59:26 2016 Subject: [odb-users] Delayed initialization of polymorphic_entry_for_X and create_schema_entry_ In-Reply-To: References: Message-ID: Hi Steffen, steffen@boast.nl writes: > I ran into a problem which I think is related to delayed > initialization of the variable polymorphic_entry_for_X. > > I have a polymorphic object structure that is peristed to ODB, say A > : B. I now try to read all objects using database.query(). > Entities are stored using perist(QSharedPointer); Thus, odb is > never (compile time) made aware of the existence of type A in this > case. I have two applications with the same code; one works, one > throws a "no type information" exception? > > By adding a reading database.query() to the code, it works in > both applications. As the code is shared between the two > application, the only cause I can think of is a delayed > initialization of polymorphic_entry_for_A. Is that correct? The only situation where you can have delayed static global (as opposed to function-local) object initialization is if you delay loading the shared library that contains it. I've never heard of delayed initialization of objects that are already loaded into the process' memory. Maybe with RTLD_LAZY? Or, perhaps, you are using a static library and since nobody references A, the corresponding object file doesn't get linked. I think this must be it. What exactly is your setup (i.e., is A in a static/shared library and if so how is it loaded) and platform? Boris From abv150ci at gmail.com Thu Jan 14 15:22:19 2016 From: abv150ci at gmail.com (=?UTF-8?Q?Aar=C3=B3n_Bueno_Villares?=) Date: Thu Jan 14 15:23:06 2016 Subject: [odb-users] Pthreads issue Message-ID: Trying to compile libodb-boost in my platform (Ubuntu 14.04), I have discovered the following issue: If I call ./configure withouth params for example it works without problems, but if I pass the flag option -Werror, it says that it cannot find the pthread library: ./configure --disable-static CXX=g++4.8 CXXFLAGS='-O3 -Werror -std=c++11' Output: checking for the pthreads library -lpthreads... no checking for the pthreads library -lpthread... no checking whether pthreads work without any flags... no checking whether pthreads work with -Kthread... no checking whether pthreads work with -kthread... no checking for the pthreads library -llthread... no checking whether pthreads work with -pthread... no checking whether pthreads work with -pthreads... no checking whether pthreads work with -mthreads... no checking whether pthreads work with --thread-safe... no checking whether pthreads work with -mt... no checking for pthread-config... no And it seems that is the only compiler option which makes ./configure doesn't work properly: ./configure --disable-static CXX=g++4.8 CXXFLAGS='-O3 -Wall -pedantic -pedantic-errors -Wextra -std=c++11' Output: // ... checking for the pthreads library -lpthreads... no checking for the pthreads library -lpthread... yes checking if more special flags are required for pthreads... -D_REENTRANT checking for __thread keyword... yes // .... But as soon I pass the -Werror compiler flag, it crash. 
From steffen at boast.nl Fri Jan 15 05:42:51 2016 From: steffen at boast.nl (steffen@boast.nl) Date: Sun Jan 17 03:48:38 2016 Subject: [odb-users] Delayed initialization of polymorphic_entry_for_X and create_schema_entry_ In-Reply-To: References: Message-ID: <60025628d0c3dbef625a67266f73e76c@boast.nl> On 14-01-2016 18:59, Boris Kolpackov wrote: > Hi Steffen, > > steffen@boast.nl writes: > >> I ran into a problem which I think is related to delayed >> initialization of the variable polymorphic_entry_for_X. >> >> I have a polymorphic object structure that is peristed to ODB, say A >> : B. I now try to read all objects using database.query(). >> Entities are stored using perist(QSharedPointer); Thus, odb is >> never (compile time) made aware of the existence of type A in this >> case. I have two applications with the same code; one works, one >> throws a "no type information" exception? >> >> By adding a reading database.query() to the code, it works in >> both applications. As the code is shared between the two >> application, the only cause I can think of is a delayed >> initialization of polymorphic_entry_for_A. Is that correct? > > The only situation where you can have delayed static global (as opposed > to > function-local) object initialization is if you delay loading the > shared > library that contains it. I've never heard of delayed initialization of > objects that are already loaded into the process' memory. Maybe with > RTLD_LAZY? Or, perhaps, you are using a static library and since nobody > references A, the corresponding object file doesn't get linked. I think > this must be it. > > What exactly is your setup (i.e., is A in a static/shared library and > if > so how is it loaded) and platform? > > Boris Hi Boris, Sorry, forgot to mention my setup. I also did some more research in the mean time. We're using qt's .pro files as a build system. All the entities + generated code are part of the application. However, the .pro files are organized in modules, and they are all compiled separately into libs and then statically linked into a single executable. So that's probably where the problem is. The application that did not have the problem consists of a single .pro file. The C++ standard explicitly allows a compiler to postpone initialization until something from the compilation unit is called. So it probably is allowed for the linker to ignore the compilation unit, since it was never explicitly used. Thanks for the help! Steffen From Robert.Seymour at arris.com Sat Jan 16 22:10:05 2016 From: Robert.Seymour at arris.com (Seymour, Robert) Date: Sun Jan 17 03:48:38 2016 Subject: [odb-users] Same obect in multiple schemas with different table names Message-ID: <9622780823C872458FCE0EA26134045FD5564547@SDCEXMBX1.ARRS.ARRISI.com> Hi, I'm trying to use the same class in two schemas within the same application. I'm building for SQLITE (I have a database file per schema) When I do the odb::schema_catalog::create_schema() call I give the corresponding schema name for each database. This all looks correct, when I use sqlite command line I see the schema in each database as expected. However I notice that when performing transactions if the table names are not the same in both schemas for the given class I get an exception about no such table existing and it gives the name from the other schema. The code is segregated so there is no overlap in the header files. Is it possible to have the same class in multiple schemas (-schema-name) that have different table names for sqlite? 
Thanks, -Rob From boris at codesynthesis.com Sun Jan 17 03:53:27 2016 From: boris at codesynthesis.com (Boris Kolpackov) Date: Sun Jan 17 03:53:30 2016 Subject: [odb-users] Delayed initialization of polymorphic_entry_for_X and create_schema_entry_ In-Reply-To: <60025628d0c3dbef625a67266f73e76c@boast.nl> References: <60025628d0c3dbef625a67266f73e76c@boast.nl> Message-ID: Hi Steffen, steffen@boast.nl writes: > However, the .pro files are organized in modules, and they are all > compiled separately into libs and then statically linked into a single > executable. So that's probably where the problem is. Yes, this is definitely your problem. This post explains the potential issues with static libraries and polymorphism: http://www.codesynthesis.com/pipermail/odb-users/2013-May/001286.html While this post explains how to fix it in VC++: http://www.codesynthesis.com/pipermail/odb-users/2013-May/001289.html Boris From boris at codesynthesis.com Sun Jan 17 03:57:07 2016 From: boris at codesynthesis.com (Boris Kolpackov) Date: Sun Jan 17 03:57:10 2016 Subject: [odb-users] Pthreads issue In-Reply-To: References: Message-ID: Hi Aar?n, Aar?n Bueno Villares writes: > If I call ./configure withouth params for example it works without > problems, but if I pass the flag option -Werror, it says that it cannot > find the pthread library: The test that checks for pthread probably issues a warning. Since you passed -Werror, now it is treated as an error, the test fails, and configure assumes there is no pthread support. Check config.log for details on which warning it is; maybe we can fix it. Boris From boris at codesynthesis.com Sun Jan 17 04:06:08 2016 From: boris at codesynthesis.com (Boris Kolpackov) Date: Sun Jan 17 04:06:11 2016 Subject: [odb-users] Same obect in multiple schemas with different table names In-Reply-To: <9622780823C872458FCE0EA26134045FD5564547@SDCEXMBX1.ARRS.ARRISI.com> References: <9622780823C872458FCE0EA26134045FD5564547@SDCEXMBX1.ARRS.ARRISI.com> Message-ID: Hi Robert, Seymour, Robert writes: > Is it possible to have the same class in multiple schemas (-schema-name) > that have different table names for sqlite? No, each persistent C++ class in ODB gets a single set of SQL statements and they have the table name hard-wired in them. If you really have to have this, one way to do it would be via templates, something along these lines: template class object { ... }; struct schema1_tag {}; struct schema2_tag {}; using schema1_object = object; using schema2_object = object; #pragma db object(schema1_object) table("name1") #pragma db object(schema2_object) table("name2") The biggest issue with this approach is that if you have common code that must work with "both" objects, you will have to templatize it, for example: template void print (const object& o) { ... } But if all you want is to avoid duplicating the classes, then this approach could work. 
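A minimal usage sketch of this template-based approach, adapted to the SQLite file-per-schema setup described above (the file names, table names and object types are illustrative, not taken from the original setup; assumes the usual <odb/transaction.hxx>, <odb/schema-catalog.hxx> and <odb/sqlite/database.hxx> includes):

    odb::sqlite::database db1 ("one.db", SQLITE_OPEN_RDWR | SQLITE_OPEN_CREATE);
    odb::sqlite::database db2 ("two.db", SQLITE_OPEN_RDWR | SQLITE_OPEN_CREATE);

    {
      odb::transaction t (db1.begin ());
      odb::schema_catalog::create_schema (db1);  // or pass a schema name,
      t.commit ();                               // as in the original setup
    }
    // ... likewise for db2 ...

    schema1_object o1;   // instantiation mapped to table "name1"
    schema2_object o2;   // instantiation mapped to table "name2"

    {
      odb::transaction t (db1.begin ());
      db1.persist (o1);
      t.commit ();
    }

    {
      odb::transaction t (db2.begin ());
      db2.persist (o2);
      t.commit ();
    }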
Boris From boris at codesynthesis.com Sun Jan 17 09:18:06 2016 From: boris at codesynthesis.com (Boris Kolpackov) Date: Sun Jan 17 09:18:08 2016 Subject: [odb-users] Warnings in Visual Studio 2015 In-Reply-To: <1dce6c31d90643aa82378cfdca3c21ce@QEX.qosmotec.com> References: <1dce6c31d90643aa82378cfdca3c21ce@QEX.qosmotec.com> Message-ID: Hi Marcel, Marcel Nehring writes: > when compiling code that uses ODB 2.4.0 with Visual Studio 2015 one gets > many warnings like: > > C4275 non - DLL-interface classkey class "std::exception" used as base > for DLL-interface classkey struct "odb::exception" \odb\exception.hxx 19 I did some research[1] and apparently in VS 2015 std::exception is no longer exported. Can you add the following two lines at the end of libodb/odb/compilers/vc/pre.hxx and see if that helps: #pragma warning (disable:4275) // "C4251 is essentially noise and can be // silenced" - Stephan T. Lavavej > Furthermore when linking everything together I get the warning > > LNK4006 __NULL_IMPORT_DESCRIPTOR already defined in "odb-d.lib(odb-d.dll); > second definition ignored. odb-oracle-d.lib(odb-oracle-d.dll) 1 This one is strange. Apparently[2][3], it is issued if you link a static library twice though I don't see how this can happen here (there are no static libraries involved). Also your error looks quite a bit different compared to theirs. Can you send the complete linker command line (from Projects->...->Linker Command Line) as well as the complete diagnostics output? I don't even know what is being linked here: odb-oracle-d.dll, your application, something else..? [1] http://stackoverflow.com/questions/24511376/how-to-dllexport-a-class-derived-from-stdruntime-error [2] http://stackoverflow.com/questions/24103488/lnk4006-lnk4221-warnings-when-using-static-library-that-includes-another-static [3] https://social.msdn.microsoft.com/forums/windowsapps/en-us/5d79a108-6516-42d9-9626-05c622d2a007/want-to-fix-a-linker-warning Boris From boris at codesynthesis.com Sun Jan 17 09:50:02 2016 From: boris at codesynthesis.com (Boris Kolpackov) Date: Sun Jan 17 09:50:03 2016 Subject: [odb-users] Storing files in an oracle database In-Reply-To: References: Message-ID: Hi Marcel, Marcel Nehring writes: > > Skimming through the OCI docs, BFILE appears to be special in that on > > INSERT or UPDATE you specify the file name, not its data. Not sure what > > SELECT returns... > > It returns a locator similar to a SELECT on a BLOB column. Ok, so it should be possible to provide proper support for BFILE in libodb-oracle. The dual mapping might be tricky though. > To me it seems that it is not possible to cast BFILE to BLOB. Although > both data types behave very similar when reading from the database, a > simple cast doesn't seem to help. I am getting an ORA-00932 error. It seems one could create a function that does the conversion: http://stackoverflow.com/questions/12263816/function-in-pl-sql-for-reading-bfile-into-blob-dont-show-the-result And you can use such functions in 'db map' pragmas as shown in the oracle/custom test in the odb-tests package. It's possible this defeats the whole purpose of using BFILE in the first place. But it would be interesting to know if it actually works. Could you give it a try? > My basic idea this time was to split my internal data members into > two. One for the filename and one for the file contents. You could probably almost do it with sections except that ODB will always persist both members. What could works is encoding the file name as BLOB. 
Then you would write another function, blob_to_bfile(), except instead of the binary data blob will contain the file name. Given these two functions: #pragma db map type("BFILE") as("BLOB") \ to("blob_to_bfile((?))") \ from("bfile_to_blob((?))") Boris From Robert.Seymour at arris.com Sun Jan 17 13:27:04 2016 From: Robert.Seymour at arris.com (Seymour, Robert) Date: Mon Jan 18 08:28:42 2016 Subject: [odb-users] Same obect in multiple schemas with different table names In-Reply-To: References: <9622780823C872458FCE0EA26134045FD5564547@SDCEXMBX1.ARRS.ARRISI.com> Message-ID: <9622780823C872458FCE0EA26134045FD55645D2@SDCEXMBX1.ARRS.ARRISI.com> Thanks Boris. Using the same name is not a problem, appreciate the quick response. - Rob -----Original Message----- From: Boris Kolpackov [mailto:boris@codesynthesis.com] Sent: Sunday, January 17, 2016 1:06 AM To: Seymour, Robert Cc: odb-users@codesynthesis.com Subject: Re: [odb-users] Same obect in multiple schemas with different table names Hi Robert, Seymour, Robert writes: > Is it possible to have the same class in multiple schemas > (-schema-name) that have different table names for sqlite? No, each persistent C++ class in ODB gets a single set of SQL statements and they have the table name hard-wired in them. If you really have to have this, one way to do it would be via templates, something along these lines: template class object { ... }; struct schema1_tag {}; struct schema2_tag {}; using schema1_object = object; using schema2_object = object; #pragma db object(schema1_object) table("name1") #pragma db object(schema2_object) table("name2") The biggest issue with this approach is that if you have common code that must work with "both" objects, you will have to templatize it, for example: template void print (const object& o) { ... } But if all you want is to avoid duplicating the classes, then this approach could work. Boris From mne at qosmotec.com Mon Jan 18 10:14:09 2016 From: mne at qosmotec.com (Marcel Nehring) Date: Mon Jan 18 10:14:45 2016 Subject: AW: [odb-users] Storing files in an oracle database In-Reply-To: References: Message-ID: Hi Boris, as always your thoughts were very helpful and brought us closer to a working solution. Thanks for that! > Ok, so it should be possible to provide proper support for BFILE in libodb-oracle. The dual mapping might be tricky though. Would be cool to see that in a future version of ODB. > And you can use such functions in 'db map' pragmas as shown in the oracle/custom test in the odb-tests package. It's possible this defeats the whole purpose of using BFILE in the first place. But it would be interesting to know if it actually works. Could you give it a try? > What could works is encoding the file name as BLOB. Then you would write another function, blob_to_bfile(), except instead of the binary data blob will contain the file name. I wasn't aware of the fact that it is possible to use custom functions with ODB. I tried that and got it working in principle. An initial insert of a new row and reading that row back again works. However, the problem are updates. Once the row is loaded and the corresponding class member contains binary data ODB uses these binary data in the update clause. That is why I tried to separate filename and binary data into two separate members. I would then need to use the filename member in INSERT and UPDATE queries and the binary member for SELECT. Since my tries with the get-pragma did not work you mentioned one could achieve this with sections. 
Unfortunately I don't see how, could you elaborate on this? Regards, Marcel From mne at qosmotec.com Tue Jan 19 10:10:04 2016 From: mne at qosmotec.com (Marcel Nehring) Date: Tue Jan 19 10:10:41 2016 Subject: AW: [odb-users] Warnings in Visual Studio 2015 In-Reply-To: References: <1dce6c31d90643aa82378cfdca3c21ce@QEX.qosmotec.com> Message-ID: <181996fd87b94ac5ad1cf81bc6a97d00@QEX.qosmotec.com> Hi Boris, > I did some research[1] and apparently in VS 2015 std::exception is no longer exported. Can you add the following two lines at the end of libodb/odb/compilers/vc/pre.hxx and see if that helps I used your pragma to disable the warning and it is gone. The comment, however, looks a bit odd since it mentions a different warning. Regarding the linker warning I now think it was my bad. I was building a static library which was linking to both odb.lib and odb-oracle.lib. That's what caused the problem. I now postpone specifying against what to link until actually linking together the final binary and the warning is gone. Thanks for your hints and sorry for the inconvenience. Regards, Marcel From abv150ci at gmail.com Tue Jan 19 20:28:50 2016 From: abv150ci at gmail.com (=?UTF-8?Q?Aar=C3=B3n_Bueno_Villares?=) Date: Tue Jan 19 20:29:37 2016 Subject: [odb-users] Iterate twice a result object Message-ID: If I execute a query and get a result object, can I make a double pass of the "underlying" stream? auto result(db,query()); for (auto& o : result) // sth with o // later for (auto& o : result) // sth with o A related question: when is the cached result of the query deleted? when all copies of the result object are deleted (shared_ptr semantics)? or when the first pass reaches to an end? Best regards, From abodrin at gmail.com Tue Jan 19 13:50:54 2016 From: abodrin at gmail.com (=?UTF-8?B?0JDRgNGC0ZHQvCDQkdC+0LTRgNC40L0=?=) Date: Wed Jan 20 10:42:39 2016 Subject: [odb-users] Bug in qt/basic/pgsql/quuid-traits.hxx Message-ID: Hello, developers 8-) I guess there is a bug in qt/basic/pgsql/quuid-traits.hxx:45: std::memcpy( i, &v.data1, 16 ); as a result i contains bytes in a host byteorder (littleendian, x86-64), so this piece of code QUuid description_id = QUuid( "02797688-2916-4cfb-ad2a-8379c9fb523a" ); result res( m_db->query< protobuf_descriptions >( "id = " + query::_val< odb::pgsql::id_uuid >( description_id ) ) ); results to SQL statement on the backend (PostgreSQL log lines): SELECT ""id"", ""class_id"", ""protobuf_description"" FROM ""objects"".""protobuf_descriptions"" WHERE id = $1","parameters: $1 = '88767902-1629-fb4c-ad2a-8379c9fb523a' If we fix data type for source of memcpy (2-nd parameter became const char* ), then it is all ok: std::memcpy( i, v.toRfc4122().constData(), 16 ); PostgreSQL log lines: SELECT ""id"", ""class_id"", ""protobuf_description"" FROM ""objects"".""protobuf_descriptions"" WHERE id = $1","?????????: $1 = '02797688-2916-4cfb-ad2a-8379c9fb523a' PS: 1) protobuf_descriptions defined as follows: struct protobuf_descriptions { QUuid id; QUuid class_id; QString protobuf_description; }; #ifdef ODB_COMPILER #pragma db view( protobuf_descriptions ) \ table ( "objects.protobuf_descriptions" ) #pragma db member ( protobuf_descriptions::id ) \ column( "id" ) type( "UUID" ) #pragma db member ( protobuf_descriptions::class_id ) \ column( "class_id" ) type( "UUID" ) #pragma db member ( protobuf_descriptions::protobuf_description ) \ column( "protobuf_description" ) type( "TEXT" ) #endif 2) odb libraries family version 2.4.0 3) gcc --version gcc (Ubuntu 4.8.2-19ubuntu1) 4.8.2 4) odb 
is awesome! 8-)) Regards, Bodrin Artem. From boris at codesynthesis.com Wed Jan 20 12:05:17 2016 From: boris at codesynthesis.com (Boris Kolpackov) Date: Wed Jan 20 12:05:17 2016 Subject: [odb-users] Bug in qt/basic/pgsql/quuid-traits.hxx In-Reply-To: References: Message-ID: Hi ?????, ????? ?????? writes: > std::memcpy( i, v.toRfc4122().constData(), 16 ); Thanks for the bug report and the suggested fix! As you have discovered, it appears PostgreSQL's binary UUID representation is big-endian in the RFC4122 layout. I've committed the fix that also addresses the receiving part: http://scm.codesynthesis.com/?p=odb/libodb-qt.git;a=commit;h=ad72d3a438129df5158b3baf91623d3ab3e21b49 BTW, for those wondering if the boost::uuid mapping has the same bug, the answer is no, since boost::uuid stores the data in big-endian/RFC4122. > 4) odb is awesome! 8-)) Thanks, I am glad you are enjoying it ;-). Boris From boris at codesynthesis.com Wed Jan 20 12:09:53 2016 From: boris at codesynthesis.com (Boris Kolpackov) Date: Wed Jan 20 12:09:53 2016 Subject: [odb-users] Warnings in Visual Studio 2015 In-Reply-To: <181996fd87b94ac5ad1cf81bc6a97d00@QEX.qosmotec.com> References: <1dce6c31d90643aa82378cfdca3c21ce@QEX.qosmotec.com> <181996fd87b94ac5ad1cf81bc6a97d00@QEX.qosmotec.com> Message-ID: Hi Marcel, Marcel Nehring writes: > I used your pragma to disable the warning and it is gone. The comment, > however, looks a bit odd since it mentions a different warning. Yes, C4275 is essentially the same thing. But I've added a note to clarify. > I now think it was my bad. Good. I wasn't looking forward to chasing that one down ;-). Boris From boris at codesynthesis.com Wed Jan 20 12:16:09 2016 From: boris at codesynthesis.com (Boris Kolpackov) Date: Wed Jan 20 12:16:09 2016 Subject: [odb-users] Iterate twice a result object In-Reply-To: References: Message-ID: Hi Aar?n, Aar?n Bueno Villares writes: > If I execute a query and get a result object, can I make a double pass > of the "underlying" stream? No, it is an input iterator, as specified in the manual. > A related question: when is the cached result of the query deleted? when > all copies of the result object are deleted (shared_ptr semantics)? or when > the first pass reaches to an end? The cache you are referring to (if there is one; currently it only really exists for MySQL) is the internal, binary representation of the returned objects, not the objects themselves. For MySQL it is freed as soon as you reach the end of the result stream (or if you destroy the result before reaching the end). Boris From boris at codesynthesis.com Wed Jan 20 12:36:32 2016 From: boris at codesynthesis.com (Boris Kolpackov) Date: Wed Jan 20 12:36:32 2016 Subject: [odb-users] Storing files in an oracle database In-Reply-To: References: Message-ID: Hi Marcel, Marcel Nehring writes: > as always your thoughts were very helpful and brought us closer to > a working solution. Thanks for that! Glad I could help. If, at the end, you could post your functions that implement the mapping for BFILE, that would be much appreciated. > Would be cool to see that in a future version of ODB. It is not clear how widely used this type is (you are the first person interested in using BFILE with ODB) as well as how difficult it will be to support this dual mapping. In fact, we did something similar for SQLite not long ago (Incremental BLOB/TEXT I/O support) and it turned out to be tricky, to put it mildly. 
If you would like to investigate what it would take to add this support to ODB, I would be happy to help/guide. > I wasn't aware of the fact that it is possible to use custom functions with > ODB. I tried that and got it working in principle. An initial insert of a new > row and reading that row back again works. However, the problem are > updates. Once the row is loaded and the corresponding class member contains > binary data ODB uses these binary data in the update clause. That is why I > tried to separate filename and binary data into two separate members. I would > then need to use the filename member in INSERT and UPDATE queries and the > binary member for SELECT. Since my tries with the get-pragma did not work you > mentioned one could achieve this with sections. Unfortunately I don't see > how, could you elaborate on this? Here is how you could do it (an outline; you will also need the db map pragma): struct object { string file_name; vector file_data; #pragma db member(file_name) transient #pragma db member(file_data) transient #pragma db member(file) virtual(vector) type("BFILE") \ get(to_bfile) set(file_data) vector to_bfile () const { return vector (file_name.c_str (), file_name.c_str () + file_name.size ()); } }; Once this is working, you could create an encapsulated BFILE mapping: struct bfile { string file_name; vector file_data; }; #pragma db value(bfile) type("BFILE") To complete this, you would either need to provide the value_traits specialization that maps bfile to BLOB or, with the upcoming version of ODB (or a pre-release ;-)), you will be able to simply map bfile to, say, vector by providing a pair of conversion functions (very similar to db map except for C++ types rather than database types). Boris From eugescha at yandex.ru Fri Jan 22 03:32:19 2016 From: eugescha at yandex.ru (=?UTF-8?B?0JXQstCz0LXQvdC40Lkg0JDQudC00LDRgNC+0LI=?=) Date: Fri Jan 22 04:38:25 2016 Subject: [odb-users] Error using odb.exe with Multi-Database Support and QT Message-ID: <56A1E913.5060902@yandex.ru> Hello! I'm using VS 2008 While trying to compile: *.hxx file I'm trying to run it with following parametrs odb.exe -m dynamic -d common -d mssql -d pgsql -I "c:\Qt\5.4.0.vs2008\include" --profile qt .\DOL\abonent.hxx And I've got following errors odb abonent.hxx 1>.\DOL\abonent.hxx:19:4: error: 'QString' does not name a type 1> QString srv_ip; 1> ^ 1>.\DOL\abonent.hxx:21:4: error: 'QString' does not name a type 1> QString virtual_name; 1> ^ 1>.\DOL\abonent.hxx:29:4: error: 'QString' does not name a type 1> QString classif_violit_code; While I run them separatly everything is fine: odb.exe -m dynamic -d mssql -d pgsql -I "c:\Qt\5.4.0.vs2008\include" --profile qt .\DOL\abonent.hxx odb.exe -I $(QTDIR)\include -d pgsql --generate-query --generate-schema --cxx-prologue "#include \"stdafx.h\" " --output-dir .\DOL --profile qt .\DOL\abonent.hxx Please help me! -- AidarovEugene From abodrin at gmail.com Fri Jan 22 07:17:42 2016 From: abodrin at gmail.com (=?UTF-8?B?0JDRgNGC0ZHQvCDQkdC+0LTRgNC40L0=?=) Date: Fri Jan 22 09:38:34 2016 Subject: [odb-users] Typos in documentation Message-ID: Greetings, developers! This time it is about some typos found in official documentation (PDF format). I don't know, if this is a proper mailing list for such messages... If it is not, please tell me about the right one. To the point: 1) 4.5 Prepared Queries, page 73: "... While the query(), query_one(), and query_one() database operations..." I think one of "query_one()" must be replaced with "query_value()"? 
2) 10.3 Table Views, page 162: "Both the asociated table names and the column names can be qualified..." Misspelled word "associated" Regards, Artem Bodrin. From boris at codesynthesis.com Fri Jan 22 09:50:20 2016 From: boris at codesynthesis.com (Boris Kolpackov) Date: Fri Jan 22 09:50:18 2016 Subject: [odb-users] Typos in documentation In-Reply-To: References: Message-ID: Hi ?????, ????? ?????? writes: > I don't know, if this is a proper mailing list for such messages... No, this is the right place, thanks for the bug reports! > 1) 4.5 Prepared Queries, page 73: > "... While the query(), query_one(), and query_one() database operations..." > I think one of "query_one()" must be replaced with "query_value()"? > > 2) 10.3 Table Views, page 162: > "Both the asociated table names and the column names can be qualified..." > Misspelled word "associated" Fixed both, thanks: http://scm.codesynthesis.com/?p=odb/odb.git;a=commit;h=f12b9121475a2eb80c61a9aa5d76a7bdb0885214 Boris From boris at codesynthesis.com Fri Jan 22 09:56:17 2016 From: boris at codesynthesis.com (Boris Kolpackov) Date: Fri Jan 22 09:56:15 2016 Subject: [odb-users] Error using odb.exe with Multi-Database Support and QT In-Reply-To: <56A1E913.5060902@yandex.ru> References: <56A1E913.5060902@yandex.ru> Message-ID: Hi ???????, ??????? ??????? writes: > odb.exe -m dynamic -d common -d mssql -d pgsql -I > "c:\Qt\5.4.0.vs2008\include" --profile qt .\DOL\abonent.hxx > > odb abonent.hxx > 1>.\DOL\abonent.hxx:19:4: error: 'QString' does not name a type > 1> QString srv_ip; > > While I run them separatly everything is fine: > odb.exe -m dynamic -d mssql -d pgsql -I > "c:\Qt\5.4.0.vs2008\include" --profile qt .\DOL\abonent.hxx Hm, it seems the difference is the '-d common' option. Do you actually include the Qt header in abonent.hxx? I.e., do you have something like: #include If that doesn't help, try to run this command line and send its output: odb.exe -v -m dynamic -d common -I "c:\Qt\5.4.0.vs2008\include" --profile qt .\DOL\abonent.hxx Boris From mne at qosmotec.com Fri Jan 22 10:17:30 2016 From: mne at qosmotec.com (Marcel Nehring) Date: Fri Jan 22 10:17:55 2016 Subject: AW: [odb-users] Storing files in an oracle database In-Reply-To: References: Message-ID: Hi Boris, > It is not clear how widely used this type is (you are the first person interested in using BFILE with ODB) as well as how difficult it will be to support this dual mapping. In fact, we did something similar for SQLite not long ago (Incremental BLOB/TEXT I/O support) and it turned out to be tricky, to put it mildly. The hard problems are the interesting ones, aren't they? ;-) > Here is how you could do it (an outline; you will also need the db map pragma) I had to modify your example code in a few ways, but it seems to be working now. INSERT, SELECT, UPDATE all succeed and the results look good. * In the virtual member declaration I had to replace vector with a corresponding typedef * Accessing the filename member via a special get method didn't work since ODB requires a method returning by const-ref. I ended up having the member of type vector with get/set methods converting it to string. Not the prettiest solution but at least this implementation detail is hidden. 
So my implementation looks similar to this: struct object { using Binary = std::vector; #pragma db map type("BFILE") as("BLOB") to("blob_to_bfile((?))") from("bfile_to_blob((?))") #pragma db member(filename) virtual(std::string) #pragma db transient vector m_filename; #pragma db member(file) virtual(Binary) type("BFILE") get(m_filename) set(m_file) #pragma db transient vector m_file; }; The corresponding functions in my Oracle database are as follows -- Found via Google at http://technologydribble.info/2009/08/18/loading-a-file-into-a-blob-object-in-oracle/ CREATE OR REPLACE FUNCTION ODB.bfile_to_blob(file BFILE) RETURN BLOB AS dest_loc BLOB := empty_blob(); src_loc BFILE := file; BEGIN -- Open source binary file from OS DBMS_LOB.OPEN(src_loc, DBMS_LOB.LOB_READONLY); -- Create temporary LOB object DBMS_LOB.CREATETEMPORARY( lob_loc => dest_loc , cache => true , dur => dbms_lob.session ); -- Open temporary lob DBMS_LOB.OPEN(dest_loc, DBMS_LOB.LOB_READWRITE); -- Load binary file into temporary LOB DBMS_LOB.LOADFROMFILE( dest_lob => dest_loc , src_lob => src_loc , amount => DBMS_LOB.getLength(src_loc)); -- Close lob objects DBMS_LOB.CLOSE(dest_loc); DBMS_LOB.CLOSE(src_loc); -- Return temporary LOB object RETURN dest_loc; END bfile_to_blob; CREATE OR REPLACE FUNCTION ODB.blob_to_bfile(filename BLOB) RETURN BFILE AS src_loc BFILE := BFILENAME('ODB_DATA_DIR', UTL_RAW.CAST_TO_VARCHAR2(filename)); BEGIN RETURN src_loc; END blob_to_bfile; > with the upcoming version of ODB (or a pre-release ;-)), you will be able to simply map bfile to, say, vector by providing a pair of conversion functions (very similar to db map except for C++ types rather than database types) This feature sounds interesting, looking forward to the release :-) Thanks again for your kind support. Regards, Marcel From eugescha at yandex.ru Fri Jan 22 10:02:02 2016 From: eugescha at yandex.ru (=?UTF-8?B?0JXQstCz0LXQvdC40Lkg0JDQudC00LDRgNC+0LI=?=) Date: Fri Jan 22 10:20:50 2016 Subject: [odb-users] Error using odb.exe with Multi-Database Support and QT In-Reply-To: References: <56A1E913.5060902@yandex.ru> Message-ID: <56A2446A.5050801@yandex.ru> Hi Boris. You are right. I added qt headers and it worked. Thank you. 22.01.2016 17:56, Boris Kolpackov ?????: > Hi ???????, > > ??????? ??????? writes: > >> odb.exe -m dynamic -d common -d mssql -d pgsql -I >> "c:\Qt\5.4.0.vs2008\include" --profile qt .\DOL\abonent.hxx >> >> odb abonent.hxx >> 1>.\DOL\abonent.hxx:19:4: error: 'QString' does not name a type >> 1> QString srv_ip; >> >> While I run them separatly everything is fine: >> odb.exe -m dynamic -d mssql -d pgsql -I >> "c:\Qt\5.4.0.vs2008\include" --profile qt .\DOL\abonent.hxx > Hm, it seems the difference is the '-d common' option. Do you actually > include the Qt header in abonent.hxx? I.e., do you have something like: > > #include > > If that doesn't help, try to run this command line and send its output: > > odb.exe -v -m dynamic -d common -I "c:\Qt\5.4.0.vs2008\include" > --profile qt .\DOL\abonent.hxx > > Boris -- ??????? ??????? From boris at codesynthesis.com Fri Jan 22 10:30:50 2016 From: boris at codesynthesis.com (Boris Kolpackov) Date: Fri Jan 22 10:30:48 2016 Subject: [odb-users] Storing files in an oracle database In-Reply-To: References: Message-ID: Hi Marcel, Marcel Nehring writes: > I had to modify your example code in a few ways, but it seems to be > working now. INSERT, SELECT, UPDATE all succeed and the results look > good. Glad to hear it is working now and thanks for sharing your code, much appreciated! 
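A side note on the multi-database thread above: the fix amounts to including the Qt headers directly in the header that declares the persistent class, along these lines (a sketch; only the member names come from the reported errors, the class and pragma layout is assumed):

    // abonent.hxx
    #include <QString>   // make QString known when the ODB compiler
                         // parses this header

    #pragma db object
    class abonent
    {
      // ...
      QString srv_ip;
      QString virtual_name;
      QString classif_violit_code;
    };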
Boris From mne at qosmotec.com Fri Jan 29 10:58:22 2016 From: mne at qosmotec.com (Marcel Nehring) Date: Fri Jan 29 11:03:53 2016 Subject: [odb-users] SQL Epilogue Message-ID: Hi, I noticed some small things regarding the --sql-epilogue-file command line option of the ODB compiler. * Providing this option more than once is silently ignored and only the first file specified will be appended. If it were possible to use this (and the similar options) more than once it could be helpful. If this is on purpose, maybe adding a warning message for clarification would be nice. * If the provided file uses both carriage return and line feed to start a new line, this results in each new line being doubled (i.e. empty lines) in the resulting database schema file. Regards, Marcel From albert.gu at ringcentral.com Sun Jan 31 20:45:57 2016 From: albert.gu at ringcentral.com (Albert (Jinku) Gu) Date: Mon Feb 1 14:23:32 2016 Subject: [odb-users] ODB Crashed in a multi-threaded environment Message-ID: <1514EAEB-F0D1-4D3D-805A-9A17436A26B5@ringcentral.com> Hi guys, According to these two links: * http://www.codesynthesis.com/pipermail/odb-users/2011-June/000124.html * http://www.codesynthesis.com/pipermail/odb-users/2014-November/002242.html we know that odb::database is thread-safe. With the connection pool, ODB will assign a different odb::sqlite::connection instance to each thread. In our app, there are two threads with a shared odb::sqlite::database instance; one thread is responsible for writing and reading, the other one is for reading only. Each of them will obtain a connection before executing a transaction. Sometimes, these two threads will access the same table, maybe even the same row, in the database. At this point ODB crashes, and the exception message is "transaction already in progress in this thread". Am I doing something wrong? How can I resolve this issue? Any help will be appreciated! Regards, Albert
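For reference on this last question: ODB allows only one active transaction at a time per thread, and "transaction already in progress in this thread" is the message of odb::already_in_transaction, which is thrown when a thread starts a second transaction before the first one is committed or rolled back. A minimal sketch of the per-thread pattern (the function, names and operations are illustrative; db is the shared database instance):

    // Each thread obtains its own connection from the pool implicitly by
    // starting a transaction, and must finalize that transaction before
    // beginning another one in the same thread.
    void reader (odb::sqlite::database& db)
    {
      odb::transaction t (db.begin ());
      // ... db.load() / db.query() calls ...
      t.commit ();   // a second db.begin() in this thread before this
                     // point would throw odb::already_in_transaction
    }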