From ratkaisut at gmail.com  Sat May  2 11:50:54 2020
From: ratkaisut at gmail.com (Sten Kultakangas)
Date: Sat May  2 12:00:48 2020
Subject: [odb-users] custom type mapping using value_traits specialization and memory leak prevention
Message-ID:

Hello

I am implementing an nvarchar <-> UTF-8 encoded std::string database type
mapping. Everything seems to be easy for nvarchar fields not exceeding a
certain length limit:

#include
#include
#include "core/db_types.h"

using Poco::UnicodeConverter;
using namespace std;

namespace odb {
namespace mssql {

void value_traits<string, id_nstring>::set_value(string &value,
  const ucs2_char *buffer, size_t buffer_size, bool is_null)
{
  if(is_null) value = "";
  else UnicodeConverter::convert(buffer, buffer_size, value);
}

void value_traits<string, id_nstring>::set_image(ucs2_char *buffer,
  size_t buffer_size, size_t &actual_size, bool &is_null, const string &value)
{
  Poco::UTF16String utf16;
  UnicodeConverter::convert(value, utf16);

  is_null = false;
  actual_size = utf16.size();
  if(actual_size > buffer_size) actual_size = buffer_size;
  memcpy(buffer, utf16.data(), actual_size * sizeof(ucs2_char));
}

}
}

However, I would like to implement the specialization for the
id_long_nstring type as well, so I can work with nvarchar fields exceeding
the length limit. The main concern is whether the callback is called with
chunk_type=chunk_last even in the case of an exception thrown due to an I/O
error. I could not find any destructor to prove that the callback will be
called in such a scenario. If the callback is not called in the case of an
I/O error, the non-trivially destructible object referenced by the "user
context" parameter will not be destroyed and a memory leak will occur. If
there is such a limitation, then I must design a trivially destructible
state machine object and place it in the provided buffer to prevent memory
leaks.

Can you confirm my concern that the callback will not be called in the case
of an exception thrown during the set_value/set_image operation for the
id_long_nstring type?

Best regards,
Sten Kultakangas

From ratkaisut at gmail.com  Sat May  2 16:00:02 2020
From: ratkaisut at gmail.com (Sten Kultakangas)
Date: Sat May  2 16:09:58 2020
Subject: [odb-users] Re: custom type mapping using value_traits specialization and memory leak prevention
In-Reply-To:
References:
Message-ID:

Hello

Answering my own question. I analyzed the source code of
mssql/statement.cxx and came to the conclusion that everything we need is
already provided, so no additional state machine information is necessary.

Here is the source code of read_callback() registered by
value_traits<string, id_long_nstring>::set_value():

using Poco::UTF8Encoding;
using Poco::UTF16Encoding;

/* SQLRETURN SQLGetData(
     SQLHSTMT     StatementHandle,
     SQLUSMALLINT Col_or_Param_Num,
     SQLSMALLINT  TargetType,
     SQLPOINTER   TargetValuePtr,
     SQLLEN       BufferLength,
     SQLLEN *     StrLen_or_IndPtr);

   After SQLGetData() is called with BufferLength=0, the callback function
   is called for the first time with the 'chunk' parameter set to any of
   the following values:
   - chunk_null, if 'StrLen_or_IndPtr' == SQL_NULL_DATA;
   - chunk_one, if 'StrLen_or_IndPtr' == 0;
   - chunk_first, otherwise.

   Since BufferLength=0, the buffer does not contain any data yet. Unless
   the 'chunk' was chunk_null, chunk_one or chunk_last, the callback
   function must set the variables pointed to by the 'buffer' and 'size'
   parameters. SQLGetData() will store at most 'size' bytes in the buffer.
*/
void value_traits<string, id_long_nstring>::read_callback(
  void *context,        // User context.
  size_t *position,     // Position context.
                        //   An implementation is free to use this to track
                        //   position information. It is initialized to zero
                        //   before the first call.
  void **buffer,        // [in/out] Buffer to copy the data to. On the first
                        //   call it contains a pointer to the long_callback
                        //   struct (used for redirections).
  size_t *size,         // [in/out] In: amount of data copied into the
                        //   buffer after the previous call. Out: capacity
                        //   of the buffer.
  chunk_type chunk,     // The position of this chunk; chunk_first means
                        //   this is the first call, chunk_last means there
                        //   is no more data, chunk_null means this value is
                        //   NULL, and chunk_one means the value is empty.
  size_t size_left,     // Contains the amount of data left or 0 if this
                        //   information is not available.
  void *tmp_buffer,     // A temporary buffer that may be used by the
                        //   implementation.
  size_t tmp_capacity   // Capacity of the temporary buffer.
)
{
  string &result(*static_cast<string *>(context));

  if(chunk == chunk_null || chunk == chunk_one)
  {
    result.clear();
    return;
  }

  if(chunk == chunk_first)
  {
    *buffer = tmp_buffer;
    *size = tmp_capacity;
    return;
  }

  /* Convert at most
   *   (char *)(*buffer) + *size - (char *)tmp_buffer
   * bytes containing the UTF-16 character sequence from 'tmp_buffer' and
   * append the resulting UTF-8 characters to 'result'. If the sequence
   * is truncated, move the unconverted bytes to the beginning of
   * 'tmp_buffer', increase the pointer stored in the variable pointed to
   * by 'buffer' by the number of bytes containing the unconverted UTF-16
   * characters and decrease the variable pointed to by 'size' by the same
   * number.
   */
  UTF8Encoding utf8;
  UTF16Encoding utf16(UTF16Encoding::LITTLE_ENDIAN_BYTE_ORDER);

  size_t chunk_left = (char *)(*buffer) + *size - (char *)tmp_buffer;
  auto chunk_p = (unsigned char *)(tmp_buffer);

  while(chunk_left != 0)
  {
    auto char_size = utf16.sequenceLength(chunk_p, chunk_left);
    if(char_size <= 0) break;
    if((size_t)char_size > chunk_left) break;

    auto ch = utf16.queryConvert(chunk_p, chunk_left);

    unsigned char utf8_char[6];
    auto utf8_char_size = utf8.convert(ch, utf8_char, sizeof(utf8_char));
    result.append((char *)utf8_char, utf8_char_size);

    chunk_p += char_size;
    chunk_left -= char_size;
  }

  if(chunk == chunk_last)
  {
    if(chunk_left == 0) return;
    Scope;
    Error << "nvarchar contains truncated data, bytes left " << chunk_left;
    return;
  }

  if(chunk_left != 0) memmove(tmp_buffer, chunk_p, chunk_left);
  *buffer = (char *)tmp_buffer + chunk_left;
  *size = tmp_capacity - chunk_left;
}

On Sat, May 2, 2020 at 6:50 PM Sten Kultakangas wrote:

> Hello
>
> I am implementing nvarchar <-> utf8-encoded std::string database type
> mapping.
> [...]
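(As a side note for readers following along: below is a minimal, untested
sketch of where such specializations plug in; the class and member names are
illustrative and not from the thread. The db type pragma selects the NVARCHAR
database type for a std::string member, which routes the image conversion
through value_traits<std::string, id_nstring> or, for NVARCHAR(max),
value_traits<std::string, id_long_nstring>.)

#include <string>
#include <odb/core.hxx>

#pragma db object
class note
{
public:
  note () {}

private:
  friend class odb::access;

  #pragma db id auto
  unsigned long id_;

  #pragma db type("NVARCHAR(255)")
  std::string title_;   // Handled by value_traits<std::string, id_nstring>.

  #pragma db type("NVARCHAR(max)")
  std::string body_;    // Handled by value_traits<std::string, id_long_nstring>.
};

// Build note: the traits header ("core/db_types.h" in this thread) has to be
// visible to the ODB-generated code, for example via
// odb --hxx-prologue '#include "core/db_types.h"' ...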
From ratkaisut at gmail.com  Sat May  2 18:32:25 2020
From: ratkaisut at gmail.com (Sten Kultakangas)
Date: Sat May  2 18:42:21 2020
Subject: [odb-users] nvarchar to std::string mapping
In-Reply-To:
References:
Message-ID:

Hello

If anyone else wants nvarchar / UTF-8 std::string mapping, here's my
implementation. Feel free to use it in your projects.
#ifndef CORE_DB_TYPES_H_
#define CORE_DB_TYPES_H_

#include
#include

namespace odb {
namespace mssql {

template <>
struct value_traits<std::string, id_nstring>
{
  typedef std::string value_type;
  typedef std::string query_type;
  typedef details::buffer image_type;

  static void set_value(std::string &value, const ucs2_char *buffer,
    std::size_t buffer_size, bool is_null);

  static void set_image(ucs2_char *buffer, std::size_t buffer_size,
    std::size_t &actual_size, bool &is_null, const std::string &value);
};

template <>
struct value_traits<std::string, id_long_nstring>
{
  typedef std::string value_type;
  typedef std::string query_type;
  typedef long_callback image_type;

  static void set_value(std::string &v, result_callback_type &cb,
    void *&context);

  static void set_image(param_callback_type &cb, const void *&context,
    bool &is_null, const std::string &v);

  static void write_callback(const void *context, std::size_t *position,
    const void **buffer, std::size_t *size, chunk_type *chunk,
    void *tmp_buffer, std::size_t tmp_capacity);

  static void read_callback(void *context, std::size_t *position,
    void **buffer, std::size_t *size, chunk_type chunk,
    std::size_t size_left, void *tmp_buffer, std::size_t tmp_capacity);
};

}
}

#endif /* CORE_DB_TYPES_H_ */

#include
#include
#include
#include
#include "core/db_types.h"
#include "core/log.h"

using Poco::UTF16String;
using Poco::UTF8Encoding;
using Poco::UTF16Encoding;
using Poco::UnicodeConverter;
using namespace std;

namespace odb {
namespace mssql {

void value_traits<string, id_nstring>::set_value(string &value,
  const ucs2_char *buffer, size_t buffer_size, bool is_null)
{
  if(is_null) value = "";
  else UnicodeConverter::convert(buffer, buffer_size, value);
}

void value_traits<string, id_nstring>::set_image(ucs2_char *buffer,
  size_t buffer_size, size_t &actual_size, bool &is_null, const string &value)
{
  UTF16String utf16_string;
  UnicodeConverter::convert(value, utf16_string);

  is_null = false;
  actual_size = utf16_string.size();
  if(actual_size > buffer_size) actual_size = buffer_size;
  memcpy(buffer, utf16_string.data(), actual_size * sizeof(ucs2_char));
}

void value_traits<string, id_long_nstring>::set_value(string &v,
  result_callback_type &cb, void *&context)
{
  cb = &read_callback;
  context = &v;
}

void value_traits<string, id_long_nstring>::set_image(param_callback_type &cb,
  const void *&context, bool &is_null, const string &v)
{
  is_null = false;
  cb = &write_callback;
  context = &v;
}

/*
 * The callback function is called before calling SQLPutData(). The variables
 * pointed to by the 'buffer' and 'size' parameters are passed to
 * SQLPutData(). If the callback function sets the variable pointed to by
 * 'chunk' to the 'chunk_next' value, the operation is repeated.
 */
void value_traits<string, id_long_nstring>::write_callback(
  const void *context,  // User context.
  size_t *position,     // Position context. An implementation is free
                        //   to use this to track position information. It
                        //   is initialized to zero before the first call.
  const void **buffer,  // [in/out] Buffer containing the data. On the first
                        //   call it contains a pointer to the long_callback
                        //   struct (used for redirections).
  size_t *size,         // [out] Data size.
  chunk_type *chunk,    // [out] The position of this chunk of data.
  void *tmp_buffer,     // A temporary buffer that may be used by the
                        //   implementation.
  size_t tmp_capacity   // Capacity of the temporary buffer.
)
{
  UTF8Encoding utf8;
  UTF16Encoding utf16(UTF16Encoding::LITTLE_ENDIAN_BYTE_ORDER);

  const string &value(*static_cast<const string *>(context));

  *buffer = tmp_buffer;
  *size = 0;

  auto tmp_p = (unsigned char *)tmp_buffer;
  auto value_p = (const unsigned char *)value.data() + *position;
  auto value_left = value.size() - *position;

  while(value_left != 0)
  {
    auto utf8_char_size = utf8.sequenceLength(value_p, value_left);
    if(utf8_char_size <= 0) break;
    if((size_t)utf8_char_size > value_left) break;

    auto ch = utf8.queryConvert(value_p, value_left);
    auto utf16_char_size = utf16.convert(ch, tmp_p, tmp_capacity);
    if((size_t)utf16_char_size > tmp_capacity) break;

    *position += utf8_char_size;
    *size += utf16_char_size;

    tmp_p += utf16_char_size;
    tmp_capacity -= utf16_char_size;

    value_p += utf8_char_size;
    value_left -= utf8_char_size;
  }

  if(value_left != 0)
  {
    if(tmp_capacity < 4)
    {
      *chunk = chunk_next;
      return;
    }

    Scope;
    Error << "Truncated UTF-8 data, bytes left " << value_left;
  }

  *chunk = chunk_last;
}

/* SQLRETURN SQLGetData(
     SQLHSTMT     StatementHandle,
     SQLUSMALLINT Col_or_Param_Num,
     SQLSMALLINT  TargetType,
     SQLPOINTER   TargetValuePtr,
     SQLLEN       BufferLength,
     SQLLEN *     StrLen_or_IndPtr);

   After SQLGetData() is called with BufferLength=0, the callback function
   is called for the first time with the 'chunk' parameter set to any of
   the following values:
   - chunk_null, if 'StrLen_or_IndPtr' == SQL_NULL_DATA;
   - chunk_one, if 'StrLen_or_IndPtr' == 0;
   - chunk_first, otherwise.

   Since BufferLength=0, the buffer does not contain any data yet. Unless
   the 'chunk' was chunk_null, chunk_one or chunk_last, the callback
   function must set the variables pointed to by the 'buffer' and 'size'
   parameters. SQLGetData() will store at most 'size' bytes in the buffer.
*/
void value_traits<string, id_long_nstring>::read_callback(
  void *context,        // User context.
  size_t *position,     // Position context. An implementation is free
                        //   to use this to track position information. It
                        //   is initialized to zero before the first call.
  void **buffer,        // [in/out] Buffer to copy the data to. On the first
                        //   call it contains a pointer to the long_callback
                        //   struct (used for redirections).
  size_t *size,         // [in/out] In: amount of data copied into the
                        //   buffer after the previous call. Out: capacity
                        //   of the buffer.
  chunk_type chunk,     // The position of this chunk; chunk_first means
                        //   this is the first call, chunk_last means there
                        //   is no more data, chunk_null means this value is
                        //   NULL, and chunk_one means the value is empty.
  size_t size_left,     // Contains the amount of data left or 0 if this
                        //   information is not available.
  void *tmp_buffer,     // A temporary buffer that may be used by the
                        //   implementation.
  size_t tmp_capacity   // Capacity of the temporary buffer.
)
{
  string &result(*static_cast<string *>(context));

  if(chunk == chunk_null || chunk == chunk_one)
  {
    result.clear();
    return;
  }

  if(chunk == chunk_first)
  {
    *buffer = tmp_buffer;
    *size = tmp_capacity;
    return;
  }

  /* Convert at most
   *   (char *)(*buffer) + *size - (char *)tmp_buffer
   * bytes containing the UTF-16 character sequence from 'tmp_buffer' and
   * append the resulting UTF-8 characters to 'result'. If the sequence
   * is truncated, move the unconverted bytes to the beginning of
   * 'tmp_buffer', increase the pointer stored in the variable pointed to
   * by 'buffer' by the number of bytes containing the unconverted UTF-16
   * characters and decrease the variable pointed to by 'size' by the same
   * number.
   */
  UTF8Encoding utf8;
  UTF16Encoding utf16(UTF16Encoding::LITTLE_ENDIAN_BYTE_ORDER);

  size_t chunk_left = (char *)(*buffer) + *size - (char *)tmp_buffer;
  auto chunk_p = (unsigned char *)(tmp_buffer);

  while(chunk_left != 0)
  {
    auto char_size = utf16.sequenceLength(chunk_p, chunk_left);
    if(char_size <= 0) break;
    if((size_t)char_size > chunk_left) break;

    auto ch = utf16.queryConvert(chunk_p, chunk_left);

    unsigned char utf8_char[6];
    auto utf8_char_size = utf8.convert(ch, utf8_char, sizeof(utf8_char));
    result.append((char *)utf8_char, utf8_char_size);

    chunk_p += char_size;
    chunk_left -= char_size;
  }

  if(chunk == chunk_last)
  {
    if(chunk_left == 0) return;
    Scope;
    Error << "nvarchar contains truncated data, bytes left " << chunk_left;
    return;
  }

  if(chunk_left != 0) memmove(tmp_buffer, chunk_p, chunk_left);
  *buffer = (char *)tmp_buffer + chunk_left;
  *size = tmp_capacity - chunk_left;
}

}
}

From sean.clarke at sec-consulting.co.uk  Sun May  3 10:00:08 2020
From: sean.clarke at sec-consulting.co.uk (Sean Clarke)
Date: Sun May  3 10:11:00 2020
Subject: [odb-users] Internal compiler error linked to spdlog
Message-ID:

Hi,
a few may recall I have had some previous issues with ODB and compiler
seg faults when using Debian 10, or odb 2.5 on anything other than Debian 9
(stretch). I tried compiling the latest version and in the end gave up and
removed ODB from that workflow.

Having had some time, I have revisited and have identified some sort of
incompatibility between a logging library and its use of fmt.

It is a bit complex and I have had to change a few things to get it down to
a small test set - but essentially, if I add some logging in one of the
ODB-compiled classes I get an ODB compiler seg fault; remove the logging and
it is all fine (that statement also means the application runs fine, not
just compiles).

The log line is in a simple accessor:

(include):
//#include

and accessor statement:

std::shared_ptr
 const& address() const {
//    if (m_short_code == "PRIVATE") {
//        auto log = spdlog::get("main");
//        log->error("Customer::address : Customer address of PRIVATE customer cannot be accessed");
//    }
    return m_address;
}

The logging library is spdlog:
libspdlog-dev - Very fast, header only, C++ logging library

Interestingly, the problem was more prominent with the later version that is
packaged in Debian 10 (Buster): the Debian 10 version is 1:1.3.1-1, the
Debian 9 version is 1:0.11.0-2.

I do not believe (I may be wrong) that the issue is directly related to the
logging library; I believe this because I have had a similar seg fault when
using an anonymous function inline in a database class that did nothing more
than string manipulation with the standard library.

Stack trace from the above code looks like:

[  1%] Generating src/types/odb_gen/MyClass-odb.cxx
*** WARNING *** there are active plugins, do not report this as a bug unless
you can reproduce it without enabling any plugins.
Event                 | Plugins
PLUGIN_START_UNIT     | odb
PLUGIN_PRAGMAS        | odb
PLUGIN_OVERRIDE_GATE  | odb
In file included from /usr/include/spdlog/fmt/bundled/format.h:3545:0,
                 from /usr/include/spdlog/fmt/fmt.h:21,
                 from /usr/include/spdlog/common.h:28,
                 from /usr/include/spdlog/spdlog.h:12,
                 from src/types/More_of_my_classes.hpp:3,
                 from src/types/Another_one_of_my_classes.hpp:4,
                 from src/types/MyClass.hpp:3:
/usr/include/spdlog/fmt/bundled/format-inl.h: In function 'int
fmt::v5::{anonymous}::safe_strerror(int, char*&, std::size_t)':
/usr/include/spdlog/fmt/bundled/format-inl.h:99:5: internal compiler error:
Segmentation fault
 int safe_strerror(
     ^~~~~~~~~~~~~
Please submit a full bug report,
with preprocessed source if appropriate.
See for instructions.

Has anyone else seen anything similar?

Regards
Sean Clarke

From ratkaisut at gmail.com  Sun May  3 11:15:59 2020
From: ratkaisut at gmail.com (Sten Kultakangas)
Date: Sun May  3 11:25:55 2020
Subject: [odb-users] Internal compiler error linked to spdlog
In-Reply-To:
References:
Message-ID:

Hi.

How did you build ODB? Try building it using the commands below, and then
make sure to remove any existing ODB installation from /usr/local or a
similar location. It's not a good idea to install ODB to /usr/local or to
the same directory as build2. Also, it's not a good idea to build ODB when
you have junk in the CPATH, LIBRARY_PATH or LD_LIBRARY_PATH environment
variables. We maintain strict isolation between ODB, build2,
distribution-provided development packages and non-distribution packages,
and there are no issues with ABI compatibility.

mkdir build2-build
cd build2-build
wget https://download.build2.org/0.12.0/build2-install-0.12.0.sh
chmod +x build2-install-0.12.0.sh
./build2-install-0.12.0.sh /opt/build2
cd ..
export PATH=/opt/build2/bin:$PATH

bpkg create -d odb-gcc-8 cc config.cxx=g++ config.cc.coptions=-O3 config.install.root=/opt/odb
cd odb-gcc-8
bpkg build odb@https://pkg.cppget.org/1/beta
bpkg test odb
bpkg install odb
export PATH=/opt/odb/bin:$PATH
cd ..
bpkg create -d libodb-gcc-8 config.cxx=g++ config.cc.coptions=-O3 config.install.root=/opt/odb
cd libodb-gcc-8
bpkg add https://pkg.cppget.org/1/beta
bpkg fetch
bpkg build libodb
bpkg build libodb-mysql
bpkg build libodb-mssql
bpkg build libodb-boost
bpkg install --all --recursive

After that, make sure your development environment has the following
environment variables:

export PATH=/opt/odb/bin:$PATH
export CPATH=/opt/odb/include:$CPATH
export LD_LIBRARY_PATH=/opt/odb/lib:$LD_LIBRARY_PATH
export LIBRARY_PATH=/opt/odb/lib:$LIBRARY_PATH

On Sun, May 3, 2020 at 5:00 PM Sean Clarke wrote:

> Hi,
> [...]
From boris at codesynthesis.com  Mon May  4 06:06:13 2020
From: boris at codesynthesis.com (Boris Kolpackov)
Date: Mon May  4 06:16:47 2020
Subject: [odb-users] nvarchar to std::string mapping
In-Reply-To:
References:
Message-ID:

Sten Kultakangas writes:

> If anyone else wants nvarchar / UTF-8 std::string mapping, here's my
> implementation. Feel free to use it in your projects.

Thanks for sharing! While your implementation depends on some third-party
libraries (such as Poco) which not all projects may be open to, it can
nevertheless serve as a nice reference implementation.

From boris at codesynthesis.com  Mon May  4 10:11:21 2020
From: boris at codesynthesis.com (Boris Kolpackov)
Date: Mon May  4 10:21:58 2020
Subject: [odb-users] Internal compiler error linked to spdlog
In-Reply-To:
References:
Message-ID:

Sean Clarke writes:

> Has anyone else seen anything similar?

I was able to reproduce and fix this based on your description (details
for those interested below). I've also staged the fix so if you would like
to give it a try, just follow the build2-based installation instructions[1]
but use this repository:

https://stage.build2.org/1

Instead of:

https://pkg.cppget.org/1/beta

To get ODB. (I believe you should be able to use the 0.12.0 build2
toolchain but if not, upgrade to staged[2].)

Now, if anyone is interested, the problem was caused by this code in
spdlog:

namespace spdlog
{
  class logger
  {
    log_err_handler err_handler_{[this](const std::string &msg) {
      this->default_err_handler_(msg);
    }};
  };
}

It appears that in GCC's AST this is represented as a class type that
implements the lambda, injected into the spdlog namespace.
It turns out such a type cannot be treated as an ordinary class, which is
what ODB tried to do. The fix is simply to ignore such lambda classes since
there is nothing interesting about them from ODB's POV.

Here is the commit that fixes this:

https://git.codesynthesis.com/cgit/odb/odb/commit/?id=ba69cf5f0d916c4fdc943f2171691e074417f2e8

[1] https://codesynthesis.com/products/odb/doc/install-build2.xhtml
[2] https://build2.org/community.xhtml#stage

From PStath at jmawireless.com  Mon May  4 12:28:41 2020
From: PStath at jmawireless.com (Paul Stath)
Date: Mon May  4 12:39:26 2020
Subject: [odb-users] Internal compiler error linked to spdlog
In-Reply-To:
References:
Message-ID:

Hi Boris,

Thanks for the explanation and the fix in the ODB 2.5.0 beta. I have been
investigating changing our logging layer, and spdlog is one of the
solutions to be evaluated.

It was not absolutely clear from the example, but I'm going to assume the
accessor statement was inlined in the header file, since the ODB compiler
was having issues. Is this how you were able to recreate the issue?

If so, a simple work-around would be to simply declare the accessor in the
persistent class header, and implement the accessor in the cxx file. Right?

For those stuck on ODB 2.4.0, and unwilling to keep spdlog out of the
header files, it should be possible to wrap the spdlog code bits in
#ifndef ODB_COMPILER sections. Right?

---
Paul

-----Original Message-----
From: Boris Kolpackov
Sent: Monday, May 4, 2020 10:11 AM
To: Sean Clarke
Cc: odb-users@codesynthesis.com
Subject: Re: [odb-users] Internal compiler error linked to spdlog

[...]

From boris at codesynthesis.com  Tue May  5 07:02:37 2020
From: boris at codesynthesis.com (Boris Kolpackov)
Date: Tue May  5 07:13:15 2020
Subject: [odb-users] Internal compiler error linked to spdlog
In-Reply-To:
References:
Message-ID:

Paul Stath writes:

> It was not absolutely clear from the example, but I'm going to assume
> the accessor statement was inline-ed in the header file, since the ODB
> compiler was having issues. Is this how you were able to recreate the
> issue?

Yes, just including the header is sufficient to trigger this bug.
> If so, a simple work-around would be to simply declare the accessor in
> the persistent class header, and implement the accessor in the cxx file.
> Right?

Correct.

> [...] and unwilling to keep spdlog out of the header files, it should
> be possible to wrap the spdlog code bits in #ifndef ODB_COMPILER
> sections. Right?

Correct, you can do something along these lines:

#ifndef ODB_COMPILER
#  include
#endif

#pragma db object
struct object
{
  std::shared_ptr
 const& address() const {
#ifndef ODB_COMPILER
    if (m_short_code == "PRIVATE") {
      auto log = spdlog::get("main");
      log->error("Customer::address : Customer address of PRIVATE customer cannot be accessed");
    }
#endif
    return m_address;
  }
};

> For those stuck on ODB 2.4.0 [...]

I am sure there are good reasons for this, but this case illustrates one of
the major benefits of switching to the 2.5.0 beta/build2-based build: I was
able to fix the bug, stage the fix, have it tested on all the supported
platforms[1], and make it available to anyone interested (again, regardless
of their platform of choice) all in a few hours. There is no way I would
have been able to do (nor, to be honest, interested in doing) the same if
it required building and testing binary packages/installers for a bunch of
platforms.

[1] https://stage.build2.org/?builds=odb

From filyobendeguz at gmail.com  Wed May  6 12:04:35 2020
From: filyobendeguz at gmail.com (Bendegúz Filyó)
Date: Wed May  6 12:15:33 2020
Subject: [odb-users] Ubuntu repository
Message-ID:

Hi!

I'm curious when or if version 2.5 of ODB is expected to replace version
2.4 in the official Ubuntu repositories.

Thanks,
Bendegúz Filyó

From boris at codesynthesis.com  Thu May  7 06:29:55 2020
From: boris at codesynthesis.com (Boris Kolpackov)
Date: Thu May  7 06:40:38 2020
Subject: [odb-users] Ubuntu repository
In-Reply-To:
References:
Message-ID:

Bendegúz Filyó writes:

> I'm curious when or if version 2.5 of odb is expected to replace version
> 2.4 in the official ubuntu repositories.

That's hard to say. First we would have to make the final release on our
side (we are making progress but are still not there yet). Then it will be
up to Canonical (first, probably Debian) to pick up the release and package
it.

I think if you need ODB in the official Ubuntu repositories, then your best
bet is to stick with 2.4.0 and try to get Canonical to fix[1] their package
(for example, by investigating the issue and coming up with a solution).

[1] https://bugs.launchpad.net/bugs/1871095

From javier.gutierrez at web.de  Fri May  8 13:19:56 2020
From: javier.gutierrez at web.de (Javier Gutierrez)
Date: Fri May  8 13:31:06 2020
Subject: [odb-users] Understanding odb::recoverable exceptions
Message-ID: <034801d6255c$ee20bc30$ca623490$@web.de>

Hi Boris,

I am implementing the piece of code you propose in the "Error Handling and
Recovery" section of the ODB manual
(https://www.codesynthesis.com/products/odb/doc/manual.xhtml#3.7).

I did my tests with MySQL. It seems that when the database is not
available, opening a new transaction does not throw an odb::recoverable
but rather a "Can't connect to MySQL server on ..." (odb::exception?).

If the database connection is lost after opening the transaction, then an
odb::recoverable is thrown. In this case your piece of code attempts a
retry by running the transaction again. But as said, a failure to connect
does not throw an odb::recoverable, so it goes immediately out of the loop,
never reaching max_retries.

Am I missing something?

I modified the code slightly as below, which works for me, but I still
wonder if I am missing something...

const unsigned short max_retries = 5;
bool is_recoverable = false;

for (unsigned short retry_count (0); ; retry_count++)
{
  try
  {
    transaction t (db.begin ());
    is_recoverable = false;
    ...
    t.commit ();
    break;
  }
  catch (const odb::recoverable& e)
  {
    is_recoverable = true;
    continue;
  }
  catch (const odb::exception& e)
  {
    if (is_recoverable)
    {
      if (retry_count > max_retries) throw;
      else continue;
    }
    else throw;
  }
}

Thanks a lot.
Best regards,
Javier

From boris at codesynthesis.com  Mon May 11 07:17:19 2020
From: boris at codesynthesis.com (Boris Kolpackov)
Date: Mon May 11 07:28:15 2020
Subject: [odb-users] Understanding odb::recoverable exceptions
In-Reply-To: <034801d6255c$ee20bc30$ca623490$@web.de>
References: <034801d6255c$ee20bc30$ca623490$@web.de>
Message-ID:

Javier Gutierrez writes:

> I did my tests with MySQL.
> It seems that when the database is not available, opening a new transaction
> does not throw a odb::recoverable but rather a "Can't connect to MySQL
> server on ..." (odb::exception?).
> If the database connection is lost after opening the transaction, then a
> odb::recoverable is thrown. In this case your piece of code attempts a retry
> by running the transaction again. But as said, this does not throw a
> odb::recoverable so it goes immediately out of the loop never reaching the
> max_retries.
>
> Am I missing something ?

These situations are not exactly/always the same, right? The idea here is
that if the connection is lost mid-transaction, you will probably want to
retry it at least once (because things were working just a moment ago). On
the other hand, if you could not connect to the database in the first
place, this is as likely a permanent error as a transient one.

Also, keep in mind that all these exceptions are mapped from error codes
returned by the MySQL client library. So to understand the exact semantics
for a particular database, don't be afraid to look at the source code. For
MySQL, the relevant code is in error.cxx:

switch (e)
{
case CR_OUT_OF_MEMORY:
  {
    throw bad_alloc ();
  }
case ER_LOCK_DEADLOCK:
  {
    throw deadlock ();
  }
case CR_SERVER_LOST:
case CR_SERVER_GONE_ERROR:
  {
    c.mark_failed ();
    throw connection_lost ();
  }
case CR_UNKNOWN_ERROR:
  {
    c.mark_failed ();
  }
  // Fall through.
default:
  {
    ...
    throw database_exception (e, sqlstate, msg);
  }
}

From alcatania at gmail.com  Mon May 25 10:28:41 2020
From: alcatania at gmail.com (Alessandro Catania)
Date: Tue May 26 07:31:19 2020
Subject: [odb-users] Get results from sqlite pragma query
Message-ID:

Hi Boris,

I work with ODB version 2.3.0. In SQLite, some pragmas return results, for
example:

pragma foreign_key_list(tableName);
or
pragma table_info(tableName);

Is it possible to get these results with the ODB ORM? If yes, how do I do
it?

Thanks.
Al

From boris at codesynthesis.com  Tue May 26 08:07:55 2020
From: boris at codesynthesis.com (Boris Kolpackov)
Date: Tue May 26 08:19:38 2020
Subject: [odb-users] Get results from sqlite pragma query
In-Reply-To:
References:
Message-ID:

Alessandro Catania writes:

> In sqlite some pragma return results, for example:
>
> pragma foreign_key_list(tableName);
> or
> pragma table_info(tableName);
>
> Is it possible get results with orm odb? If yes how do I?

I would use a native view for that:

https://codesynthesis.com/products/odb/doc/manual.xhtml#10.6
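(For anyone looking for a concrete starting point, here is a minimal,
untested sketch of such a native view for "pragma table_info"; the view
and table names are illustrative, and the members simply follow the columns
SQLite documents for table_info, in declaration order.)

#include <string>

#include <odb/core.hxx>
#include <odb/nullable.hxx>

// Native view: the query text is executed as-is and its result columns
// are associated with the data members in the order they are declared.
#pragma db view query("PRAGMA table_info(person)")
struct person_column
{
  int cid;                                // Column index.
  std::string name;                       // Column name.
  std::string type;                       // Declared column type.
  int notnull;                            // 1 if declared NOT NULL.
  odb::nullable<std::string> dflt_value;  // Default value, if any.
  int pk;                                 // 1-based position in the primary
                                          // key, 0 otherwise.
};

// Usage sketch (inside a transaction):
//
//   for (const person_column& c: db.query<person_column> ())
//     std::cout << c.name << " " << c.type << std::endl;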
From ravil.nugmanov at gmail.com  Thu May 28 23:51:47 2020
From: ravil.nugmanov at gmail.com (ravil.nugmanov@gmail.com)
Date: Fri May 29 00:03:49 2020
Subject: [odb-users] Cannot connect to Microsoft SQL server in Ubuntu 18.04
Message-ID: <010b01d6356c$7ab54b10$701fe130$@gmail.com>

Hi,

I previously sent this e-mail by mistake from a different address, which is
not subscribed to the mailing list; sorry.

First of all, thank you for this great library!

Meanwhile, the issue described below was solved using an older version of
unixODBC (2.3.1) and the MS SQL Server native client library, following the
instructions at
https://www.codesynthesis.com/~boris/blog/2011/12/02/microsoft-sql-server-odbc-driver-linux/

The original issue was:

I am using ODB version 2.5.0 on Ubuntu 18.04, trying to connect to MS SQL
Server, but getting an "invalid handle" error. The ODBC driver from
Microsoft is installed following the steps described at
https://docs.microsoft.com/en-us/sql/connect/odbc/linux-mac/installing-the-microsoft-odbc-driver-for-sql-server?view=sql-server-ver15#ubuntu17

Note that with the sqlcmd command I can connect to the database
successfully. In the C++ code I am using the constructor of
odb::mssql::database which takes a connection string as argument:

Server=tcp:192.168.1.80,1433;Driver=ODBC Driver 17 for SQL Server;Connection Timeout=10;Uid=sa;Pwd=password;Encrypt=no;Database=master;

I wonder what can cause this connection problem? Should I do something with
the files /etc/odbc.ini or /etc/odbcinst.ini? Or maybe the Driver option is
not correct? There is an entry with the driver name in /etc/odbcinst.ini
though:

[ODBC Driver 17 for SQL Server]
Description=Microsoft ODBC Driver 17 for SQL Server
Driver=/opt/microsoft/msodbcsql17/lib64/libmsodbcsql-17.5.so.2.1
UsageCount=1

I tried to pass a connection string with Driver= set to the .so file name
too, no luck:

Server=tcp:192.168.1.80,1433;Driver=/opt/microsoft/msodbcsql17/lib64/libmsodbcsql-17.5.so.2.1;UID=sa;PWD=password;Connection Timeout=10;Encrypt=no;Database=master;

The same code works in the Windows build.

Thank you,
Ravil
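(For reference, a minimal, untested sketch assembling the pieces from this
message, using the odb::mssql::database constructor that takes a connection
string, as referred to above; the server address, credentials and driver
name are the poster's examples.)

#include <string>
#include <odb/mssql/database.hxx>

int main ()
{
  // Connection string as in the message above; the driver name must match
  // an entry in /etc/odbcinst.ini.
  std::string cs (
    "Driver=ODBC Driver 17 for SQL Server;"
    "Server=tcp:192.168.1.80,1433;"
    "Database=master;Uid=sa;Pwd=password;"
    "Encrypt=no;Connection Timeout=10;");

  odb::mssql::database db (cs);

  // Transactions, queries, etc., would follow here.
}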