Thursday Nov 28, 2013

JSON UDF functions version 0.2.1 have been released.

Today a new version of the JSON UDF functions, 0.2.1, was released. This is a maintenance release which adds no new functionality and only contains bug fixes. However, it also includes improvements to the build and test procedures. As usual, you can download source and binary packages at MySQL Labs. Binary packages were built for MySQL server 5.6.14. If you want to use another version of the server, you need to recompile the functions.

What was changed? Let me quote the ChangeLog.

Functionality added or changed:

Added the cmake option WITH_PCRE, which allows you to specify whether the system or the bundled version of PCRE should be used. The bundled version is the default on Windows. To compile with the bundled version, run: "cmake . -DMYSQL_DIR=/path/to/mysql/dir -DWITH_PCRE=bundled"; to turn the bundled version off on Windows, run: "cmake . -DMYSQL_DIR=/path/to/mysql/dir -DWITH_PCRE=system"

This means you no longer need to type additional commands and move files from one directory to another to build JSON UDFs on Windows.

Added the possibility to run tests using the command make test. By default, the tests run without valgrind. If you need to run the tests under valgrind, add the option VALGRIND=1 to the cmake command: "cmake . -DMYSQL_DIR=/path/to/mysql/dir -DVALGRIND=1"

You can run the tests on both UNIX and Windows. To test on UNIX, simply run make test. On Windows: open the solution my_json_udf.sln in Visual Studio, then choose the target ALL_TESTS and run it. If you know how to run a custom project of a solution from the command line, please let me know too =)

Added the cmake target build_source_dist, which creates a source tarball. Usage: "make build_source_dist".

This addition is mostly for developers and is needed to create packages for MySQL Labs.

The minimum supported cmake version changed to 2.8. This is necessary to run the tests and to build PCRE.

Bugs Fixed:

70392/17491678 MySQL JSON UDFs binary is called but DDL uses

70573/17583505 Typo in README for JSON_MERGE

70605/17596660 Include all references to documentation in the README file

70569/17583335 JSON_VALID allows mixed case in the keyword names

70568/17583310 JSON_VALID treats invalid values as valid

70570/17583367 JSON_VALID allows \u two-hex-digit while standard allows only \u four-hex-digit

70574/17583525 JSON_MERGE treats document without opening bracket as valid

70486/17548362 When using json_replace(), the '}' at the end disappears

70839/17751860 JSON_VALID allows to have two elements with the same key
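To give the validity fixes above some context, here is a small illustration of what a standards-conforming validator rejects, using Python's standard json module rather than the UDFs themselves (my own sketch, not the UDF code; the helper names are made up for the example):

```python
import json

def is_valid(doc):
    """Return True if doc parses as standard JSON, else False."""
    try:
        json.loads(doc)
        return True
    except ValueError:
        return False

print(is_valid('{"a": tru}'))   # False: "tru" is not a valid JSON value
print(is_valid('"\\u41"'))      # False: \u must be followed by four hex digits
print(is_valid('"\\u0041"'))    # True: four-hex-digit escape for "A"
print(is_valid('"a": 1}'))      # False: document without an opening bracket

# Most parsers quietly accept duplicate keys (the last value wins), so
# rejecting them, as the fixed JSON_VALID does, needs an explicit check:
def valid_no_duplicates(doc):
    def check(pairs):
        keys = [k for k, v in pairs]
        if len(keys) != len(set(keys)):
            raise ValueError("duplicate key")
        return dict(pairs)
    try:
        json.loads(doc, object_pairs_hook=check)
        return True
    except ValueError:
        return False

print(valid_no_duplicates('{"a": 1, "a": 2}'))  # False
```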

I hope this version will be more stable than the first one, and I hope you enjoy it.

Wednesday Nov 27, 2013

To be safe or to be fast?

When I designed the first version of JSON UDFs, which was reviewed only internally, I had all functions validate their input and output JSON. But my colleagues told me to remove this functionality, because it makes functions such as json_search, json_replace, or json_contains_key deadly slow if they find the occurrence at the beginning of a long document. So the first published version of JSON UDFs, 0.2.0, does not have this functionality. We expected that users would call json_valid if they wanted to be 100% sure a document is valid.

But I was not surprised that some users expect the JSON functions to work as they did in the first version: validate first, then process. For example, Ammon Sutherland writes: "json_set - according to the documentation a sort of 'INSERT... ON DUPLICATE KEY UPDATE' function which checks and parses the JSON (but not the whole document in this version for some reason)." Ulf Wendel also writes: "Taken from the README. For most functions manipulating or generating JSON documents is true: Warning! This version does not check whole document for validity. Hey, it is a pre-production release."

So it is certain that at least some users want the behavior which was rejected. But since I believe others still want better speed, I decided to plan a second set of functions: safe JSON functions.
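To make the tradeoff concrete, here is a small Python sketch of the two behaviors (my own illustration, not the UDF code; the function names are invented for the example). The "fast" variant can return as soon as it finds a textual match near the start of the document, while the "safe" variant must parse the whole document first and therefore pays the full cost even when the match is at the very beginning:

```python
import json

def fast_contains_key(doc, key):
    # Returns on the first textual match; never looks at the rest of
    # the document, so a broken tail goes unnoticed. (A naive substring
    # search, just to illustrate the early-exit behavior.)
    return '"%s"' % key in doc

def safe_contains_key(doc, key):
    # Validates the entire document before searching; raises
    # ValueError if any part of it is malformed.
    json.loads(doc)
    return fast_contains_key(doc, key)

broken = '{"name": "x", "tail": oops'      # invalid JSON after the match
print(fast_contains_key(broken, "name"))   # True: match found, tail ignored
try:
    safe_contains_key(broken, "name")
except ValueError:
    print("safe variant rejects the document")
```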

The first, and most logical, candidate for this implementation is json_safe_merge, because currently json_merge is mostly not usable: if you have already checked that the documents are valid, you can easily split them yourself. Therefore I created a separate feature request about this function.
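The validate-first pattern users can emulate today looks roughly like this Python sketch (my own illustration of the idea, not the UDF algorithm; it assumes both documents are JSON objects with outer braces and simply splices their members textually after validating each one):

```python
import json

def manual_safe_merge(a, b):
    # Validate both documents fully first; either call raises
    # ValueError if its document is malformed.
    json.loads(a)
    json.loads(b)
    # Strip the outer braces and join the members textually.
    return '{' + a.strip()[1:-1] + ', ' + b.strip()[1:-1] + '}'

print(manual_safe_merge('{"a": 1}', '{"b": 2}'))  # {"a": 1, "b": 2}
```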

But I am not sure about the other functions. This is why I created one more public feature request: "Implement SAFE versions for all JSON functions". Whether I implement this or not will depend on your opinions. So, if you want me to do so, please vote in the bug report by clicking the "Affects Me!" button, and comment if you want this feature implemented only for a few functions (or sooner for a particular function).

Tuesday Nov 05, 2013

Late feedback

The MySQL Community team asked me to write about Devconf 2013 a few months ago. The conference was in June 2013, but I remembered this promise of mine only now: a month after my participation in MySQL Connect and the Expert Troubleshooting seminar (change the country to United Kingdom if you see a blank page). I think it is too late for feedback, but I still have a few thoughts which I want to record.

DevConf (formerly PHPConf) has always been a place where I try out new topics. At first because I know the audience there very well, and they will be bored if I repeat a story I told last year, but also because it is much easier to get feedback in my own native language. In recent years, though, my habit seems to have started to change, and I presented an improved version of my 2012 MySQL Connect talk about MySQL backups. Of course, I also had a seminar on a unique topic, made for this conference for the first time: Troubleshooting MySQL Performance with EXPLAIN and Using Performance Schema to Troubleshoot MySQL. These topics, improved, were later presented at the expert seminar. It is interesting how my habits change and how one public speaking activity influences the next.

What is good about DevConf is that it forces you to come up with new ideas and present them really well, because the audience is not forgiving at all, so they catch everything you miss or did not prepare well enough. This can be bad if you want to give a marketing-style talk for free, but it allows you to present technical features in really great detail: all these spontaneous discussions really help.

In 2013, Oracle had a booth at the conference and was represented by a group of people. Dmitry Lenev presented the topic "New features of replication in MySQL 5.6" and Victoria Reznichenko worked at the booth.

What was new at the conference this year is a greater interest in NoSQL, scaling, and rapid development solutions. This, unfortunately, means less interest in MySQL than there was earlier. However, at the same time, the "Common" track was really a MySQL track: not only Oracle, but also people from other companies presented about it.


Working blog of Sveta Smirnova - MySQL Senior Principal Support Engineer working in Bugs Analysis Support Group

