Thursday Jan 23, 2014

MySQL and PostgreSQL JSON functions: do they differ much?

As the author of the MySQL JSON functions, I am also interested in how development goes at other parties. The JSON functions and operators in PostgreSQL indeed have great features. Some of them, such as the operators, I cannot implement using UDFs alone. But let's see how interchangeable these functions are.

Note: all PostgreSQL examples were taken from PostgreSQL documentation.

The first topic is syntactic sugar.

 PostgreSQL
MySQL

Operator ->

postgres=# select '[1,2,3]'::json->2;
 ?column?
----------
 3
(1 row)

postgres=# select '{"a":1,"b":2}'::json->'b';
 ?column?
----------
 2
(1 row)


JSON_EXTRACT

mysql> select json_extract('[1,2,3]', 2);
+----------------------------+
| json_extract('[1,2,3]', 2) |
+----------------------------+
| 3                          |
+----------------------------+
1 row in set (0.00 sec)

mysql> select json_extract('{"a":1,"b":2}', 'b');
+------------------------------------+
| json_extract('{"a":1,"b":2}', 'b') |
+------------------------------------+
| 2                                  |
+------------------------------------+
1 row in set (0.00 sec)

Operator ->>

postgres=# select '[1,2,3]'::json->>2;
 ?column?
----------
 3
(1 row)

postgres=# select '{"a":1,"b":2}'::json->>'b';
 ?column?
----------
 2
(1 row)


JSON_EXTRACT

mysql> select json_extract('[1,2,3]', 2);
+----------------------------+
| json_extract('[1,2,3]', 2) |
+----------------------------+
| 3                          |
+----------------------------+
1 row in set (0.00 sec)

mysql> select json_extract('{"a":1,"b":2}', 'b');
+------------------------------------+
| json_extract('{"a":1,"b":2}', 'b') |
+------------------------------------+
| 2                                  |
+------------------------------------+
1 row in set (0.00 sec)

Operator #>

postgres=# select '{"a":[1,2,3],"b":[4,5,6]}'::json#>'{a,2}';
 ?column?
----------
 3
(1 row)


JSON_EXTRACT

mysql> select json_extract('{"a":[1,2,3],"b":[4,5,6]}', 'a', 2);
+---------------------------------------------------+
| json_extract('{"a":[1,2,3],"b":[4,5,6]}', 'a', 2) |
+---------------------------------------------------+
| 3                                                 |
+---------------------------------------------------+
1 row in set (0.00 sec)

Operator #>>

postgres=# select '{"a":[1,2,3],"b":[4,5,6]}'::json#>>'{a,2}';
 ?column?
----------
 3
(1 row)

JSON_EXTRACT

mysql> select json_extract('{"a":[1,2,3],"b":[4,5,6]}', 'a', 2);
+---------------------------------------------------+
| json_extract('{"a":[1,2,3],"b":[4,5,6]}', 'a', 2) |
+---------------------------------------------------+
| 3                                                 |
+---------------------------------------------------+
1 row in set (0.00 sec)


Next, the PostgreSQL JSON functions.

 PostgreSQL
 MySQL
array_to_json(anyarray [, pretty_bool])  Not supported
row_to_json(record [, pretty_bool])  Not supported
to_json(anyelement)  Not supported
json_array_length(json)  Not supported, planned as bug #70580
json_each(json)  Not supported, cannot be implemented using UDFs only
json_each_text(from_json json)  Not supported, cannot be implemented using UDFs only

json_extract_path(from_json json, VARIADIC path_elems text[])


postgres=# select json_extract_path('{"f2":{"f3":1},"f4":{"f5":99,"f6":"foo"}}','f4');
  json_extract_path   
----------------------
 {"f5":99,"f6":"foo"}
(1 row)

JSON_EXTRACT

mysql> select json_extract('{"f2":{"f3":1},"f4":{"f5":99,"f6":"foo"}}','f4');
+----------------------------------------------------------------+
| json_extract('{"f2":{"f3":1},"f4":{"f5":99,"f6":"foo"}}','f4') |
+----------------------------------------------------------------+
| {"f5":99,"f6":"foo"}                                           |
+----------------------------------------------------------------+
1 row in set (0.00 sec)

json_extract_path_text(from_json json, VARIADIC path_elems text[])


postgres=# select json_extract_path_text('{"f2":{"f3":1},"f4":{"f5":99,"f6":"foo"}}','f4', 'f6');
 json_extract_path_text
------------------------
 foo
(1 row)

 JSON_EXTRACT

mysql> select json_extract('{"f2":{"f3":1},"f4":{"f5":99,"f6":"foo"}}','f4', 'f6');
+----------------------------------------------------------------------+
| json_extract('{"f2":{"f3":1},"f4":{"f5":99,"f6":"foo"}}','f4', 'f6') |
+----------------------------------------------------------------------+
| foo                                                                  |
+----------------------------------------------------------------------+
1 row in set (0.01 sec)

json_object_keys(json)  Not supported, cannot be implemented using UDFs only
json_populate_record(base anyelement, from_json json [, use_json_as_text bool=false])  Not supported, cannot be implemented using UDFs only
json_populate_recordset(base anyelement, from_json json [, use_json_as_text bool=false])  Not supported, cannot be implemented using UDFs only
json_array_elements(json)  Not supported, cannot be implemented using UDFs only

And, finally, MySQL functions

 MySQL
 PostgreSQL
json_append  Not supported, but user can convert JSON value into various formats, then create new JSON document from it
json_contains_key  Not supported; however, PostgreSQL has the function json_object_keys(json)
json_extract  Supported in numerous ways, see above
json_merge   Not supported, but user can convert JSON value into various formats, then create new JSON document from it
json_remove   Not supported, but user can convert JSON value into various formats, then create new JSON document from it
json_replace   Not supported, but user can convert JSON value into various formats, then create new JSON document from it
json_search  Not supported
json_set   Not supported, but user can convert JSON value into various formats, then create new JSON document from it
json_test_parser   Not supported
json_valid  Not supported as a separate function; however, PostgreSQL has a JSON type and checks every value for validity


As a conclusion I can say that the MySQL and PostgreSQL implementations mostly intersect in extracting elements, while they solve element validation and manipulation tasks in different ways.

Tuesday Dec 17, 2013

JSON UDFs: is it hard to type keys?

Currently, if you want to search for a key using the JSON UDFs, you have to specify each part of its path as a separate argument to the function: JSON_CONTAINS_KEY(doc, 'root', 'child', 'child of child', 'child of child of child', etc.....). This way of passing parameters is easy for the developer and less error-prone, but it is not very pretty.

Some users suggested changing it to '/parent/child/child-of-child/...' or to 'parent:child:child-of-child:...'. There are probably other possibilities as well. Which do you like best? How do you feel about the current style, with a separate argument for each key element? Should we change or extend this syntax?
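To make the trade-off concrete, here is a small Python sketch (purely hypothetical helpers, not the UDF implementation) contrasting the current one-argument-per-key convention with a single delimited path string. Note that a key which itself contains the delimiter is reachable with separate arguments, but breaks the path style:

```python
# Hypothetical illustration of the two calling conventions, using Python
# dicts in place of JSON documents; none of this is the actual UDF code.

def contains_key_args(doc, *keys):
    """Current style: every path element is a separate argument."""
    node = doc
    for key in keys:
        if not isinstance(node, dict) or key not in node:
            return False
        node = node[key]
    return True

def contains_key_path(doc, path, sep='/'):
    """Proposed style: one delimited string such as '/parent/child'."""
    return contains_key_args(doc, *path.strip(sep).split(sep))

doc = {"root": {"child": {"child of child": 1}}}
contains_key_args(doc, "root", "child", "child of child")  # found
contains_key_path(doc, "/root/child/child of child")       # found too
# but a key containing '/' is only reachable with separate arguments:
contains_key_args({"a/b": 1}, "a/b")     # found
contains_key_path({"a/b": 1}, "/a/b")    # not found: split into 'a', 'b'
```

This ambiguity is exactly why the separate-argument style is less error-prone, even if it is more verbose.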

Wednesday Dec 11, 2013

New cmake options for JSON UDFs.

Two weeks ago I announced a new maintenance release of the JSON UDFs: 0.2.1. It not only contains bug fixes, as you would expect from a maintenance release, but also improvements to the build and test scripts.

The first improvement is an easier way to build the JSON UDFs on Windows. In the first version, building on Windows was a pain: you had to build the PCRE library, copy four files into directories where Visual Studio could find them, and only then build the JSON functions themselves. You can still build this way, but only if you really wish to.

By default, cmake will create appropriate makefiles for the bundled PCRE sources and for the JSON functions. The only command you need to prepare the sources is:

"C:\Program Files (x86)\CMake 2.8\bin\cmake.exe" -G "Visual Studio 11 Win64" . -DMYSQL_DIR="/path/to/mysql/installation"   

And then you can build functions:

devenv my_json_udf.sln /build Release

This is done by introducing a new cmake option, WITH_PCRE, which can take two values: system and bundled. On Windows the default is bundled, which means the bundled version of PCRE will be used. The default on UNIX is system, because you usually have a regular expression library installed and there is no need to build it statically into the JSON functions. But you can override the default on both systems by specifying the option -DWITH_PCRE=system|bundled on the cmake command line.
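For reference, the two invocations look like this on UNIX (the paths are placeholders for your own MySQL installation):

```shell
# use the PCRE library installed on the system (default on UNIX)
cmake . -DMYSQL_DIR=/path/to/mysql/dir -DWITH_PCRE=system

# use the PCRE sources bundled with the UDFs (default on Windows)
cmake . -DMYSQL_DIR=/path/to/mysql/dir -DWITH_PCRE=bundled
```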

Another improvement is test automation. In the first available version you had to use the strange file Makefile.unix and weird instructions to run the tests. In 0.2.1 this file is not needed anymore: simply run `make test` on UNIX, or open the solution my_json_udf.sln in Visual Studio and build the target ALL_TESTS. Here I have a question for experienced Windows users: do you know how to run this from the command line? Either way, this also means you don't need Cygwin to run the tests on Windows. Perl, however, is still required and must be in the path, because the MySQL Test Framework needs it.

Of course, the MySQL server directory should contain binaries. In other words: if you build the JSON functions against a source directory, you need to compile the MySQL server first to be able to run the tests.

On Linux, as well as on other OSes which can run Valgrind, you can run the tests under valgrind too. The instructions are a bit unusual: you need to specify the option -DVALGRIND=1 when building the functions in order to run the tests under valgrind. The command you use to run the tests is still the same: make test

And, finally, the last improvement is the possibility to create a source package by running the command make build_source_dist. This option may not look very useful from the user's point of view, but it really helps in producing packages for MySQL Labs out of the internal source repository. If you run make build_source_dist, a file named mysql-json-udfs-0.2.1-labs-json-udfs-src.tar.gz, containing all the source code, will be produced. On Windows, again, you need to open the my_json_udf.sln solution and build this target.

Wednesday Dec 04, 2013

Last element for JSON array: what do you think?

After releasing the maintenance version of the JSON UDFs last week, it is time to think about which features I should implement in the upcoming major version.

Many users asked me about the possibility to explicitly specify that they want to append an element to the end of a JSON array. This feature makes sense for two functions: json_append and json_set.

I have four ideas for how to implement this. All have pros and contras.



  1. Create new function called json_append_last which will work exactly like json_append, but it will add the element to the end of array. I.e., for JSON document {"colors": ["red", "green", "blue"]} json_append_last(doc, 'colors', 'orange') returns {"colors": ["red", "green", "blue", "orange"]}


    1. Pros:


      1. Such a code is easy to understand


    2. Contras:


      1. There is no sense in implementing an analogous function json_set_last, therefore this possibility is for json_append only

      2. You have to explicitly branch your code depending on whether you want to add an element at a known position in the array or append a last element



  2. Use string keyword. For example, 'LAST'. I.e., for JSON document {"colors": ["red", "green", "blue"]} json_append(doc, 'colors', 'LAST', 'orange') returns {"colors": ["red", "green", "blue", "orange"]} 


    1. Pros:


      1. It is easy to implement same thing for json_set

      2. You don't have to branch your code when you need both to add an element at a known position and to append to the end of the array


    2. Contras:


      1. It is easy to misuse. For example, if you mistakenly pass the JSON document {"colors": {"LAST": "transparent"}} to json_append(doc, 'colors', 'LAST', 'orange'), it returns {"colors": {"LAST": "orange"}} instead of an error



  3. Use NULL instead of string keyword. I.e., for JSON document {"colors": ["red", "green", "blue"]} json_append(doc, 'colors', NULL, 'orange') returns {"colors": ["red", "green", "blue", "orange"]} 


    1. Pros:


      1. It is easy to implement same thing for json_set

      2. You don't have to branch your code when you need both to add an element at a known position and to append to the end of the array

      3. No way to misuse: JSON functions don't take NULL as an argument anywhere else.


    2. Contras:


      1. Looks like a hack: NULL is not a natural way to say "last element"

      2. It is hard to guess what this function is doing without looking into the user manual



  4. Use a negative index. So, -1 means the last element, -2 the one before last, and so on. I.e., for the JSON document {"colors": ["red", "green", "blue"]}, json_append(doc, 'colors', -1, 'orange') returns {"colors": ["red", "green", "blue", "orange"]} and json_set(doc, 'colors', -2, 'orange') returns {"colors": ["red", "green", "orange"]}. json_append returns the original document in case of -2.


    1. Pros:


      1. Works for both json_append and json_set

      2. You don't have to branch your code when you need both to add an element at a known position and to append to the end of the array

      3. No way to misuse: JSON functions don't take negative numbers as arguments anywhere else.


    2. Contras:


      1. Confusing: while users of programming languages with negative array indexes can guess what these functions do, one still needs to look in the manual to find out how -1, -2, etc. are interpreted

      2. Error-prone: it is easy to mix up such numbers
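The semantics sketched in option 4 can be modeled in a few lines of Python (a hypothetical model inferred from the examples above; the real functions take a JSON document and a key path, not a bare Python list):

```python
# Hypothetical model of the negative-index idea; lists stand in for
# JSON arrays and none of this is the actual UDF implementation.

def json_append(arr, idx, value):
    if idx == -1:                 # -1: insert after the last element
        return arr + [value]
    if idx < -1:                  # no such insertion point for append:
        return list(arr)          # return the original document
    return arr[:idx + 1] + [value] + arr[idx + 1:]

def json_set(arr, idx, value):
    out = list(arr)
    # counted this way, -2 lands on the last element, -3 on the one
    # before it, and so on
    out[idx + 1 if idx < 0 else idx] = value
    return out

colors = ["red", "green", "blue"]
json_append(colors, -1, "orange")  # ["red", "green", "blue", "orange"]
json_set(colors, -2, "orange")     # ["red", "green", "orange"]
```

The off-by-one feel of -1 for json_append versus -2 for json_set in this model is exactly the "confusing" contra listed above.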





As a conclusion, I can say that I think solutions 1 and 3 are the least error-prone, while solution 4 introduces useful side effects. I think it is better to implement either 1 or 3, but certainly not both.

But what do you think about it? Please comment either here or at bugs.mysql.com

Thursday Nov 28, 2013

JSON UDF functions version 0.2.1 have been released.

Today a new version of the JSON UDF functions, 0.2.1, was released. This is a maintenance release which adds no new functionality and only contains bug fixes. However, it also includes improvements to the build and test procedures. As usual, you can download source and binary packages at MySQL Labs. Binary packages were built for MySQL server 5.6.14. If you want to use another version of the server, you need to recompile the functions.

What was changed? Let me quote the ChangeLog.

Functionality added or changed:

Added cmake option WITH_PCRE which allows to specify whether the system or the bundled version of PCRE should be used. Bundled is the default on Windows. To compile with the bundled version, run: "cmake . -DMYSQL_DIR=/path/to/mysql/dir -DWITH_PCRE=bundled"; to turn the bundled version off on Windows, run: "cmake . -DMYSQL_DIR=/path/to/mysql/dir -DWITH_PCRE=system"


This means you don't need to type additional commands and move files from one directory to another to build the JSON UDFs on Windows.

Added the possibility to run tests using the command make test. By default the tests run without valgrind. If you need to run tests under valgrind, add the option VALGRIND=1 to the cmake command: "cmake . -DMYSQL_DIR=/path/to/mysql/dir -DVALGRIND=1"


You can run the tests automatically on both UNIX and Windows. To test on UNIX, simply run make test. On Windows: open the solution my_json_udf.sln in Visual Studio, then choose the target ALL_TESTS and run it. If you know how to build a particular project of a solution from the command line, please let me know too =)

Added cmake target build_source_dist which creates source tarball. Usage: "make build_source_dist".

This addition is mostly for developers and is needed to create packages for MySQL Labs.

The minimum supported cmake version has been changed to 2.8. This is necessary to run the tests and build PCRE.

Bugs Fixed:

70392/17491678 MySQL JSON UDFs binary is called libmy_json_udf.so but DDL uses libmy_json.so

70573/17583505 Typo in README for JSON_MERGE

70605/17596660 Include all references to documentaiton to README file

70569/17583335 JSON_VALID allows mixed case in the keyword names

70568/17583310 JSON_VALID treats invalid values as valid

70570/17583367 JSON_VALID allows \u two-hex-digit while standard allows only \u four-hex-digit

70574/17583525 JSON_MERGE treats document without opening bracket as valid

70486/17548362 When using json_replace(), '}' of the end disappear.

70839/17751860 JSON_VALID allows to have two elements with the same key


I hope this version will be more stable than the first one and I hope you enjoy it.

Wednesday Nov 27, 2013

To be safe or to be fast?

When I designed the first version of the JSON UDFs, which was reviewed only internally, I let all functions validate their input and output JSON. But my colleagues told me to remove this functionality, because it makes functions such as json_search, json_replace or json_contains_key deadly slow if they find the occurrence at the beginning of a long document. So the first published version of the JSON UDFs, 0.2.0, does not have this functionality. What we expected is that users would call json_valid if they want to be 100% sure the document is valid.
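Until a safe set of functions exists, that validate-first pattern can be spelled out manually (a sketch, not a recommendation; it assumes the 0.2.x calling convention json_replace(doc, key, value) and pays the cost of parsing the document twice):

```sql
-- guard the manipulation with an explicit validity check;
-- drop the json_valid() call when speed matters more than safety
SELECT IF(json_valid(@doc),
          json_replace(@doc, 'key', '"new value"'),
          NULL) AS safe_replace;
```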

So I was not surprised that some users expect the JSON functions to work as they did in the first version: validate first, then process. For example, Ammon Sutherland writes: "json_set - according to the documentation a sort of 'INSERT... ON DUPLICATE KEY UPDATE' function which checks and parses the JSON (but not the whole document in this version for some reason)." Ulf Wendel also writes: "Taken from the README. For most functions manipulating or generating JSON documents is true: Warning! This version does not check whole document for validity. Hey, it is a labs.mysql.com pre-production release."

So it is certain that at least some users want the behavior which was rejected. But since I believe others still want better speed, I decided to put the implementation of a second set of functions into my plans: safe JSON functions.

The first, and most logical, candidate for this implementation is json_safe_merge, because currently json_merge is mostly unusable: if you have already checked that the documents are valid, you can easily splice them together yourself. Therefore I created a separate feature request about this function: http://bugs.mysql.com/bug.php?id=70575

But I am not sure regarding the other functions. This is why I created one more public feature request: "Implement SAFE versions for all JSON functions". Whether I implement this or not will depend on your opinions. Therefore, if you want me to do so, please vote in the bug report by clicking the "Affects Me!" button, and comment if you want this feature implemented only for a few functions (or sooner for a particular function).

Thursday Oct 31, 2013

JSON UDFs have their own bugs.mysql.com category

The JSON UDFs got their own category in the MySQL Bugs Database: "Server: JSON UDF"

Use this category to post new bug reports and to vote for existing ones.

Saturday Apr 13, 2013

Yet another UDF tutorial

Some time ago I wrote a blog post describing a way I use to verify MySQL Server bugs. But my job consists not only of bugs which can be verified just by passing SQL queries to the server.

One such example is UDF bugs.

The MySQL User Reference Manual is a good source of information for those who want to write UDFs, as is the book "MySQL 5.1 Plugin Development" by Sergei Golubchik and Andrew Hutchings. But while the book describes in detail how to write UDFs, it was created when the current MySQL version was 5.1 and does not describe how to build a UDF nowadays. The User Reference Manual has this information, of course, but it misses details, such as how to build a UDF with a custom library. And, last but not least, I need a layout which allows me to test my UDF quickly with any server version I need.

So here is a brief overview of how I do it.

Let's take a MySQL bug report which I just created as an example: "Bug #68946     UDF *_init and *_deinit functions called only once for multiple-row select".

All the code necessary to repeat the issue is attached to the bug report, so you can download it and try it yourself, or simply read this description.

After unpacking the archive we can see the following layout:


-rwxrwx--- 1 sveta sveta  1070 Apr 13 11:48 CMakeLists.txt
-rwxrwx--- 1 sveta sveta   180 Apr 13 12:16 initid_bug.cc
-rwxrwx--- 1 sveta sveta   146 Apr 13 12:16 initid_bug.h
drwxrwx--- 1 sveta sveta     0 Apr 13 11:22 initid_bug_udf
-rwxrwx--- 1 sveta sveta   715 Apr 13 12:18 initid_bug_udf.cc
-rwxrwx--- 1 sveta sveta    76 Apr 13 11:48 initid_bug_udf.def
-rwxrwx--- 1 sveta sveta   484 Apr 13 12:08 initid_bug_udf.h
-rwxrwx--- 1 sveta sveta  6281 Apr 13 13:07 Makefile
-rwxrwx--- 1 sveta sveta   299 Apr 13 12:28 Makefile.unix


Let's start with the code.


$ cat initid_bug_udf.def 
LIBRARY         initid_bug_udf
VERSION         0.1
EXPORTS
  initid_bug

This is a common *.def file, which contains the version of the UDF and the function names. I am going to use a single function that shows the issue with the initid->ptr pointer.

initid_bug.h and initid_bug.cc contain the declaration and definition of a helper function needed to demonstrate the problem. For this particular bug report I didn't need to create this function, but I had a future blog post in mind when I created it, so I also have an example of an external library which should be linked with the UDF:


$ cat initid_bug.h
/* MySQL */
#pragma once

#ifndef _INITID_BUG
#define _INITID_BUG

#define MAXRES 10

char* multiply_by_ten(int value);

#endif /* _INITID_BUG */

$ cat initid_bug.cc 
/* MySQL */
#include <stdio.h>

#include "initid_bug.h"

char* multiply_by_ten(int value)
{
    char* result= new char[MAXRES];
    sprintf(result, "%d", value * 10);

    return result;
}

The files initid_bug_udf.h and initid_bug_udf.cc contain the code of the UDF itself.

initid_bug_udf.h is a simple header file:


$ cat initid_bug_udf.h
/* MySQL */
#include <my_global.h>
#include <my_sys.h>
#include <mysql.h>
#include <string.h>

#include "initid_bug.h"

#ifdef __WIN__
#define WINEXPORT __declspec(dllexport)
//#define strcpy strcpy_s
#else
#define WINEXPORT
#endif

extern "C" {
WINEXPORT long long initid_bug(UDF_INIT *initid, UDF_ARGS *args, char *is_null, char *error);
WINEXPORT my_bool initid_bug_init(UDF_INIT *initid, UDF_ARGS *args, char *message);
WINEXPORT void initid_bug_deinit(UDF_INIT *initid);
}

And initid_bug_udf.cc contains the code demonstrating the issue:


$ cat initid_bug_udf.cc
/* MySQL */
#include "initid_bug_udf.h"

long long initid_bug(UDF_INIT *initid, UDF_ARGS *args, char *is_null, char *error)
{
    int result= atoi(initid->ptr);
    char* bug= multiply_by_ten(result);
    memcpy(initid->ptr, bug, strlen(bug));
    delete[] bug;
    return result;
}

my_bool initid_bug_init(UDF_INIT *initid, UDF_ARGS *args, char *message)
{
  if (!(initid->ptr= (char*)malloc(MAXRES)))
  {
    strcpy(message,"Couldn't allocate memory for result buffer");
    return 1;
  }
  memset(initid->ptr, '\0', MAXRES);

  memcpy(initid->ptr, "1", strlen("1"));
  initid->maybe_null= 1;
  initid->const_item= 0;

  return 0;
}

void initid_bug_deinit(UDF_INIT *initid)
{
  if (initid->ptr)
    free(initid->ptr);
}

So far so good. However there is nothing interesting yet.

And now comes the part I am writing this tutorial for: how to build and test it.

The user manual contains build instructions for versions 5.5 and up at http://dev.mysql.com/doc/refman/5.6/en/udf-compiling.html

But since I want to test my UDF with several MySQL installations, I need an easy way to pass the value of MySQL's basedir to my build and test scripts. For this purpose I introduced the variable MYSQL_DIR in CMakeLists.txt:


$ cat CMakeLists.txt 
CMAKE_MINIMUM_REQUIRED(VERSION 2.6)
# Avoid warnings in higher versions
if("${CMAKE_MAJOR_VERSION}.${CMAKE_MINOR_VERSION}" GREATER 2.6)
 CMAKE_POLICY(VERSION 2.8)
endif()

PROJECT(initid_bug_udf)

# The version number.
set (initid_bug_udf_VERSION_MAJOR 0)
set (initid_bug_udf_VERSION_MINOR 1)

# Path for MySQL include directory
SET(MYSQL_DIR_NAME_DOCSTRING "Path to MySQL directory")
IF(DEFINED MYSQL_DIR)
  SET(MYSQL_DIR ${MYSQL_DIR} CACHE STRING ${MYSQL_DIR_NAME_DOCSTRING} FORCE)
ELSE()
  MESSAGE(WARNING "${MYSQL_DIR_NAME_DOCSTRING} was not specified. If something goes wrong re-run with option -DMYSQL_DIR")
ENDIF()

INCLUDE_DIRECTORIES("${MYSQL_DIR}/include")

 


I also added the library here:

ADD_LIBRARY(initid_bug initid_bug.cc)
ADD_DEFINITIONS("-DMYSQL_DYNAMIC_PLUGIN")
ADD_DEFINITIONS("-fPIC")
ADD_DEFINITIONS("-g")
ADD_LIBRARY(initid_bug_udf MODULE initid_bug_udf.cc initid_bug_udf.def)

And linked it:

IF(${CMAKE_SYSTEM_NAME} MATCHES "Windows")
TARGET_LINK_LIBRARIES(initid_bug_udf initid_bug wsock32)
ELSE()
TARGET_LINK_LIBRARIES(initid_bug_udf initid_bug)
ENDIF()


In other respects this CMakeLists.txt is the same as described in the user manual.

Now it is easy to build the UDF for any server installed on the same machine as the UDF sources.

On Linux/Solaris/Mac:

cmake . -DMYSQL_DIR=/home/sveta/src/mysql-5.6
make

On some Mac machines creating 64-bit binaries fails. You can build universal binaries instead by providing the option -DCMAKE_OSX_ARCHITECTURES="x86_64;i386;ppc"

On Windows:

You need Visual Studio (I did not test with Express, but I hope it works) and cmake (cmake.org). If you want to run the automatic tests, you also need Perl.

To create makefiles run:


"C:\Program Files (x86)\CMake 2.8\bin\cmake.exe" -G "Visual Studio 11 Win64" . -DMYSQL_DIR="D:/build/mysql-5.5"


As you can see I have Visual Studio 11. If you have another version, change accordingly, for example: "Visual Studio 10 Win64"


Then compile:


devenv initid_bug_udf.sln /build Release


In all cases change MYSQL_DIR value so it points to basedir of your MySQL installation. It does not matter if you compiled MySQL server yourself or have pre-compiled binaries.

After compiling the library, you can install it or test it. Since I mostly care about tests, I did not create an install script, but I created tests and a makefile to run them.

Tests are located in the initid_bug_udf directory:


$ ls -l initid_bug_udf

total 4

drwxrwx--- 1 sveta sveta 4096 Apr 13 13:08 include

drwxrwx--- 1 sveta sveta    0 Apr 13 11:31 plugin

drwxrwx--- 1 sveta sveta    0 Apr 13 11:59 r

drwxrwx--- 1 sveta sveta    0 Apr 13 11:21 t


They are in MTR format. I put all installation-related and commonly used scripts into the include directory and copy the UDF binary into the plugin directory.

The test case itself is easy:


$ cat initid_bug_udf/t/initid_bug.test 
--source suite/initid_bug_udf/include/initid_bug_udf_install.inc
--source suite/initid_bug_udf/include/initid_bug_udf_testdata.inc

select initid_bug() from t1;

--source suite/initid_bug_udf/include/initid_bug_udf_cleanup.inc


As well as initialization and cleanup files:


$ cat initid_bug_udf/include/initid_bug_udf_install.inc 
--disable_query_log
let $win=`select @@version_compile_os like 'win%'`;
if($win==1)
{
--source suite/initid_bug_udf/include/initid_bug_udf_install_win.inc
}
if($win==0)
{
--source suite/initid_bug_udf/include/initid_bug_udf_install_unix.inc
}
--enable_result_log
--enable_query_log

$ cat initid_bug_udf/include/initid_bug_udf_install_unix.inc 
--disable_query_log
--disable_warnings
drop function if exists initid_bug;
--enable_warnings
create function initid_bug returns integer soname 'libinitid_bug_udf.so';
--enable_query_log

$ cat initid_bug_udf/include/initid_bug_udf_install_win.inc 
--disable_query_log
--disable_warnings
drop function if exists initid_bug;
--enable_warnings
create function initid_bug returns integer soname 'initid_bug_udf.dll';
--enable_query_log

However, you can use the *install* scripts as templates for installation automation if you care more about usage than about testing.



Note, I did not test this particular UDF on Windows, so Windows-related code can be buggy.

Also, if you use versions prior to 5.5, you cannot use the initid_bug_udf/include/initid_bug_udf_install.inc script, but have to distinguish the Windows and UNIX installation scripts in a different way.


$ cat initid_bug_udf/include/initid_bug_udf_testdata.inc 
--disable_warnings
drop table if exists t1;
--enable_warnings

create table t1(f1 int);
insert into t1 values(1);
insert into t1 select f1 from t1;
insert into t1 select f1 from t1;
insert into t1 select f1 from t1;


And, finally, cleanup:


$ cat initid_bug_udf/include/initid_bug_udf_cleanup.inc 
--disable_query_log
--disable_warnings
drop table if exists t1;
drop function if exists initid_bug;
--enable_warnings
--enable_query_log


To run the tests easily I created the file Makefile.unix. I leave the task of creating a similar file for Windows to you, because it depends on your Windows Perl installation: scripts for Cygwin, ActiveState Perl or whichever Perl you have can vary in slight details.


$ cat Makefile.unix 
#MYSQL_DIR - path to source dir
test_initid_bug:
	cp libinitid_bug_udf.so initid_bug_udf/plugin
	cp -R initid_bug_udf  $(MYSQL_DIR)/mysql-test/suite
	cd $(MYSQL_DIR)/mysql-test; \
		perl mtr  --suite=initid_bug_udf --mysqld=--plugin-dir=$(MYSQL_DIR)/mysql-test/suite/initid_bug_udf/plugin


Finally, to run the tests you simply need to call the following command:


$ MYSQL_DIR=/home/sveta/src/mysql-5.6 make -f Makefile.unix test_initid_bug


It will copy the test directory under the mysql-test/suite directory of the MySQL installation pointed to by the MYSQL_DIR variable, and then run the test.

If you did not create a result file, you will see a test failure and the output of the queries. You can create the result file from that output and put it into the r directory. The ZIP archive attached to the bug report contains the result file.
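One way to create the result file (a sketch; --record is the MySQL Test Framework switch that records the actual output into the r directory):

```shell
cd $MYSQL_DIR/mysql-test
perl mtr --record --suite=initid_bug_udf \
    --mysqld=--plugin-dir=$MYSQL_DIR/mysql-test/suite/initid_bug_udf/plugin
```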

About

Working blog of Sveta Smirnova - MySQL Senior Principal Support Engineer working in the Bugs Analysis Support Group
