Tuesday Dec 31, 2013

JSON UDF functions version 0.3.0 have been released.

Today a new version of the JSON UDF functions, 0.3.0, was released. This is a major release which contains new functionality. You can download the functions from the MySQL Labs website.

What was added?

The functions JSON_EXTRACT, JSON_CONTAINS_KEY, JSON_APPEND, JSON_REMOVE, JSON_REPLACE, and JSON_SET now accept both string and integer arguments as array indexes. (Bug #70393/17491709)

The syntax JSON_EXTRACT(doc, 'key', 1) is now accepted.
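For example, a call like this (just a sketch, with a literal document in place of a column) now works with a plain integer as the array index:

SELECT json_extract('{"colors": ["red", "green", "blue"]}', 'colors', 1);

Previously the index had to be given as the string '1'.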

The new function JSON_SAFE_MERGE checks JSON documents for validity before merging them. In addition, JSON_MERGE now merges valid documents that have an array as the root element. (Bug #70575/17583568)
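A quick sketch of how I expect these to be used (the documents here are made up; see the user manual for the exact return values):

SELECT json_merge('[1, 2]', '[3, 4]');
SELECT json_safe_merge('{"a": 1}', 'this is not JSON');

The first call relies on the new support for arrays as root elements; in the second call json_safe_merge checks both arguments for validity instead of blindly merging the broken one.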

JSON_VALID now accepts an array as the root element of a JSON document. (Bug #70567/17583282)
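For example, a document whose root is a plain array, which previously was rejected, is now considered valid:

SELECT json_valid('[1, 2, 3]');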

This release also contains the following bug fixes:

71050/17882710 json_extract returning the same column twice if key names are not fully distinct. (Fixed for JSON_EXTRACT, JSON_CONTAINS_KEY, JSON_APPEND, JSON_REMOVE, JSON_REPLACE, JSON_SET)

71180/17975709 JSON_CONTAINS_KEY finds a non-existent key in an array element

70579/17584678 Add ability to append last element to array in JSON_APPEND

And, of course, this release contains all the changes from version 0.2.1.

Tuesday Dec 17, 2013

JSON UDFs: is it hard to type keys?

Currently, if you want to search for a key using the JSON UDFs, you have to specify each part of its path as a separate argument to the function: JSON_CONTAINS_KEY(doc, 'root', 'child', 'child of child', 'child of child of child', ...). This way of passing parameters is easy for the developer and less error-prone, but it is not very pretty.
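To make the current syntax concrete, checking whether a key exists three levels deep looks like this (the document and key names are made up):

SELECT json_contains_key('{"root": {"child": {"grandchild": 1}}}', 'root', 'child', 'grandchild');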

Some users have suggested changing it to '/parent/child/child-of-child/...' or to 'parent:child:child-of-child:...'. There are probably other possibilities as well. Which do you like best? How do you feel about the current style with a separate argument for each key element? Should we change or extend this syntax?

Wednesday Dec 11, 2013

New cmake options for JSON UDFs.

Two weeks ago I announced a new maintenance release of the JSON UDFs: 0.2.1. It not only contains bug fixes, as you would expect from a maintenance release, but also improvements in the build and test scripts.

The first improvement is an easier way to build the JSON UDFs on Windows. In the first version, building on Windows was a pain: you had to build the PCRE library, copy four files to directories where Visual Studio could find them, and only then build the JSON functions themselves. You can still build this way, but only if you really want to.

By default, cmake will create appropriate makefiles for the bundled PCRE sources and for the JSON functions. The only command you need to prepare the sources is:

"C:\Program Files (x86)\CMake 2.8\bin\cmake.exe" -G "Visual Studio 11 Win64" . -DMYSQL_DIR="/path/to/mysql/installation"   

And then you can build the functions:

devenv my_json_udf.sln /build Release

This is done by introducing a new cmake option, WITH_PCRE, which can take two values: system and bundled. On Windows the default is bundled, which means the bundled version of PCRE will be used. The default on UNIX is system, because you usually have a regular expression library installed and there is no need to build it statically into the JSON functions. But you can override the default on either system by passing the option -DWITH_PCRE=system|bundled to the cmake command.
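For example, to build against the bundled PCRE on UNIX (the MYSQL_DIR path below is only a placeholder), you would run something like:

cmake . -DMYSQL_DIR="/path/to/mysql/installation" -DWITH_PCRE=bundled
make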

Another improvement is test automation. In the first available version you had to use the strange file Makefile.unix and follow weird instructions to run the tests. In 0.2.1 this file is not needed anymore: simply run `make test` on UNIX, or open the solution my_json_udf.sln in Visual Studio and execute the target ALL_TESTS. Here I have a question for experienced Windows users: do you know how to run this from the command line? But command line or not, this also means you don't need Cygwin to run the tests on Windows. Perl, however, is still required and must be in the path, because it is needed by the MySQL Test Framework.

Of course, the MySQL server directory should contain binaries. In other words, if you point to a source directory when building the JSON functions, you need to compile the MySQL server before you can run the tests.

On Linux, as well as on other OSes which can run Valgrind, you can run the tests under Valgrind too. The instructions are a bit unusual: you need to specify the option -DVALGRIND=1 when building the functions in order to run the tests under Valgrind. The command you use to run the tests is still the same: make test
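So a full Valgrind run looks roughly like this (the path is again a placeholder):

cmake . -DMYSQL_DIR="/path/to/mysql/installation" -DVALGRIND=1
make test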

And, finally, the last improvement is the possibility to create a source package by running the command make build_source_dist. This option may not look very useful from the user's point of view, but it really helps when producing packages for MySQL Labs out of the internal source repository. If you run make build_source_dist, a file named mysql-json-udfs-0.2.1-labs-json-udfs-src.tar.gz, containing all the source code, will be produced. On Windows, again, you need to open the my_json_udf.sln solution and build this target.

Wednesday Dec 04, 2013

Last element for JSON array: what do you think?

After releasing the maintenance version of the JSON UDFs last week, it is time to think about which features I should implement in the upcoming major version.

Many users have asked me about the possibility to explicitly specify that they want to append an element to the end of a JSON array. This feature could be added to two functions: json_append and json_set.

I have four ideas for how to implement this. All have pros and cons.



  1. Create a new function called json_append_last which works exactly like json_append, but adds the element to the end of the array. I.e., for the JSON document {"colors": ["red", "green", "blue"]}, json_append_last(doc, 'colors', 'orange') returns {"colors": ["red", "green", "blue", "orange"]}


    1. Pros:


      1. Such code is easy to understand


    2. Cons:


      1. There is no sense in implementing an analogous function json_set_last, so this possibility is for json_append only

      2. You have to branch your code explicitly depending on whether you want to add an element at a known position in the array or append one to the end



  2. Use a string keyword, for example 'LAST'. I.e., for the JSON document {"colors": ["red", "green", "blue"]}, json_append(doc, 'colors', 'LAST', 'orange') returns {"colors": ["red", "green", "blue", "orange"]}


    1. Pros:


      1. It is easy to implement the same thing for json_set

      2. You do not have to branch your code when you need both to add an element at a known position and to append to the end of the array


    2. Cons:


      1. It is easy to misuse. For example, if you mistakenly pass the JSON document {"colors": {"LAST": "transparent"}} to json_append(doc, 'colors', 'LAST', 'orange'), it returns {"colors": {"LAST": "orange"}} instead of an error



  3. Use NULL instead of a string keyword. I.e., for the JSON document {"colors": ["red", "green", "blue"]}, json_append(doc, 'colors', NULL, 'orange') returns {"colors": ["red", "green", "blue", "orange"]}


    1. Pros:


      1. It is easy to implement the same thing for json_set

      2. You do not have to branch your code when you need both to add an element at a known position and to append to the end of the array

      3. No way to misuse it: the JSON functions don't take NULL as an argument anywhere else.


    2. Cons:


      1. It looks like a hack: NULL does not naturally denote the last element

      2. It is hard to guess what the function does without consulting the user manual



  4. Use a negative index, so that -1 means the last element, -2 the one before last, and so on. I.e., for the JSON document {"colors": ["red", "green", "blue"]}, json_append(doc, 'colors', -1, 'orange') returns {"colors": ["red", "green", "blue", "orange"]} and json_set(doc, 'colors', -2, 'orange') returns {"colors": ["red", "green", "orange"]}. json_append returns the original document in the case of -2.


    1. Pros:


      1. Works for both json_append and json_set

      2. You do not have to branch your code when you need both to add an element at a known position and to append to the end of the array

      3. No way to misuse it: the JSON functions don't take negative numbers as arguments anywhere else.


    2. Cons:


      1. Confusing: while users of programming languages which allow negative array indexes can guess what these functions do, you still need to look in the manual to find out how -1, -2, etc. are interpreted

      2. Error-prone: it is easy to mix up such numbers





In conclusion, I think solutions 1 and 3 are less error-prone, while solution 4 introduces useful side effects. I believe it is better to implement either 1 or 3, but certainly not both.

But what do you think? Please comment either here or at bugs.mysql.com.

About

Working blog of Sveta Smirnova, MySQL Senior Principal Support Engineer in the Bugs Analysis Support Group
