The leading edge of scripting languages and Oracle Database brought to you by the Data Access Development team

Recent Posts

Oracle Instant Client Downloadable Without Login - Breaking News

Stop the presses! Downloads of Oracle Instant Client 19.3 packages for Linux and Windows no longer require login or a click-through. You can now automate downloads using your favorite tool such as wget or curl. URLs can be found on the platform pages:

- Windows x64 64-bit
- Windows 32-bit
- Linux x86-64 64-bit
- Linux x86 32-bit

Update: 19c packages for these platforms are now also accessible without click-through:

- Solaris SPARC 64-bit
- Solaris SPARC 32-bit
- Linux on Power Little Endian (64-bit)
- z/Linux

Update: 19c for macOS is now available without click-through:

- macOS

Many people have wanted the click-throughs removed. We're very pleased to have worked through all the processes and approvals to get to this point. I'm continuing to work with the website and development teams on other platforms, and also on creating permanent URLs which will always get the latest packages when we release new versions.

Oracle Instant Client enables applications to connect to a local or remote Oracle Database for development and production deployment. The Instant Client libraries provide the necessary network connectivity, as well as basic and high end data features, to make full use of Oracle Database. Instant Client underlies the Oracle APIs of popular languages and environments including Node.js, Python and PHP, as well as providing access for OCI, OCCI, JDBC, ODBC and Pro*C applications. Tools included in Instant Client, such as SQL*Plus and Oracle Data Pump, provide quick and convenient data access.

Spread the news!

PS: Instant Client Linux RPM packages from yum.oracle.com have been installable without click-through for some time. For example, you can install on Oracle Linux with:

yum -y install oracle-release-el7
yum-config-manager --enable ol7_oracle_instantclient
yum -y install oracle-instantclient19.3-basic

A Dockerfile is also available.

Node.js

Oracle DB Named Objects and Advanced Queuing Support New in node-oracledb 4.0

Release announcement: A new release of node-oracledb, the Node.js module for accessing Oracle Database, is available from npm. Top features in node-oracledb 4.0: querying and binding of named Oracle types; an Oracle Advanced Queuing API.

The team has been working hard on node-oracledb 4.0 and is very pleased to release it to you. You can install node-oracledb 4.0 from npm. It requires Node.js 8 or higher. With Node.js 8 the minimum patch level is 8.16. With Node.js 10 the minimum patch level is 10.16.

REFACTORED IMPLEMENTATION

Internally there were significant code changes to node-oracledb. While the module is still a JavaScript wrapper over a binary layer that calls Oracle client libraries, the binary layer now uses Node.js's N-API interface instead of NAN. NAN was the original Node.js C++ wrapper to provide modules with portability when the V8 engine was updated. Using the newer N-API interface brings a number of benefits:

- N-API has its own API versioning with a cumulative approach, allowing us to build one node-oracledb binary (per operating system architecture) that can work with current and future Node.js versions (and flavors of Node.js). As a result, when a new Node.js version arrives:
  - You can upgrade Node.js without needing to update node-oracledb.
  - You don't have to wait for us to build a new node-oracledb binary module, or work out how to build the module yourself (not that building is difficult).
  - We don't have to wait for a NAN update which supports any Node.js API or V8 changes.
  - We don't have to update our usage of any deprecated NAN or Node.js functionality.
- The npm download package is a quarter of the size of the node-oracledb 3.1 package, since the package doesn't need to contain a binary for each Node.js version.
- Node-oracledb code is now pure C instead of C++, so the compiler needed to build the module no longer needs C++11 support. This can make it easier to build in some older environments.

There was also some modernization of the JavaScript layer. For example, class methods are now configurable, e.g. via Object.defineProperty.

As noted above, node-oracledb 4.0 requires Node.js 8 or higher, with minimum patch levels of 8.16 and 10.16 respectively, because an important N-API performance fix landed in those releases.

While the refactoring is fun for us as maintainers, and let us simplify and standardize a lot of code, there are features with a bigger impact on application developers.

NAMED ORACLE OBJECT QUERY AND BIND SUPPORT

The new support for fetching and binding of named Oracle Database types works very smoothly indeed. This makes Node.js a great environment for working with SQL or PL/SQL user-defined types, or when working with pre-created types like those used by Oracle's Spatial features. For example, let's create some types and a table:

CREATE TYPE dbharvesttype AS VARRAY(10) OF VARCHAR2(20)
/

CREATE TYPE dbfarmtype AS OBJECT (
  farmername VARCHAR2(20),
  harvest    dbharvesttype)
/

CREATE TABLE farmtab (id NUMBER, farm dbfarmtype);

In node-oracledb 4.0 you can map JavaScript objects for insertion into the column farm:

// A JavaScript object automatically mapped to the DBFARMTYPE object.
// The case of the properties matters.
const newFarm = { FARMERNAME: 'McDonald', HARVEST: [ 'corn', 'wheat', 'barley' ] };

await connection.execute(
  `INSERT INTO farmtab (id, farm) VALUES (:id, :f)`,
  { id: 1,
    f: {
      type: 'DBFARMTYPE',   // name of the top level DB type, case sensitive
      val: newFarm
    }
  }
);

Querying FARMTAB returns a DbObject, which is new in node-oracledb 4.0:

const result = await connection.execute(`SELECT farm FROM farmtab WHERE id = 1`);

// a DbObject for the named Oracle type:
const farm = result.rows[0][0];

// The whole object:
// [MYUSER.DBFARMTYPE] { FARMERNAME: 'McDonald', HARVEST: [ 'corn', 'wheat', 'barley' ] }
console.log(farm);

// Objects can be stringified:
// {"FARMERNAME":"McDonald","HARVEST":["corn","wheat","barley"]}
console.log(JSON.stringify(farm));

console.log(farm.FARMERNAME);    // "McDonald"

// Iterate over the collection:
for (const crop of farm.HARVEST) {
  console.log(crop);             // "corn", "wheat", "barley"
}

Node-oracledb settings such as fetchAsString do not affect DbObjects.

There are examples in selectobject.js and selectgeometry.js, the latter showing use of SDO_GEOMETRY. Working with PL/SQL objects and collections is similar to the farm sample above. Look at runnable examples such as selectvarray.js and plsqlvarrayrecord.js.

When inserting a number of objects, use executeMany(), which will be more efficient than repeated calls to execute(). See plsqlrecord.js for a runnable example.
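To give a flavour of bulk insertion, here is a hedged sketch (not the plsqlrecord.js example itself) that inserts a couple of extra farm objects with executeMany(). It assumes the connection, the FARMTAB table and the DBFARMTYPE type from above:

// Hedged sketch: bulk-insert several rows containing named object values.
// 'connection' and 'oracledb' are assumed to come from the surrounding setup.
const FarmType = await connection.getDbObjectClass('DBFARMTYPE');

const rows = [
  { id: 2, f: new FarmType({ FARMERNAME: 'Brown', HARVEST: ['oats'] }) },
  { id: 3, f: new FarmType({ FARMERNAME: 'Jones', HARVEST: ['rye', 'hops'] }) }
];

await connection.executeMany(
  `INSERT INTO farmtab (id, farm) VALUES (:id, :f)`,
  rows,
  { bindDefs: { id: { type: oracledb.NUMBER }, f: { type: FarmType } } }
);
await connection.commit();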
There is some overhead getting type information from the database. Node-oracledb tries to minimize this by using a cache. The cache is keyed on the fully qualified name, so make sure you use the FQN when binding multiple times:

. . .
  type: 'MYUSER.DBFARMTYPE',   // bind type is the name of the top level DB type, case sensitive
. . .

Alternatively, explicitly call getDbObjectClass() and bind the prototype object class it returns:

const FarmType = await connection.getDbObjectClass("DBFARMTYPE");
. . .
  type: FarmType,              // bind type is the prototype object from getDbObjectClass()
. . .

ORACLE ADVANCED QUEUING (AQ)

Oracle Advanced Queuing is a highly configurable and scalable messaging feature of Oracle Database. It has interfaces in various languages, letting you integrate multiple tools in your architecture to send and receive messages.

Node-oracledb 4.0 has an Advanced Queuing API with support for 'RAW' and object queues. You can send and receive String, Buffer or object messages. For example, if a 'RAW' queue has been configured in the database, you can send a simple string:

const queue = await connection.getQueue('MYQUEUE');
const messageString = 'This is my message';
await queue.enqOne(messageString);
await connection.commit();

To receive it, another application can execute:

const queue = await connection.getQueue('MYQUEUE');
const msg = await queue.deqOne();
await connection.commit();
console.log(msg.payload.toString());

Messages are received as AqMessage objects. In these, string messages are encoded as UTF-8 buffers, so toString() is used to extract the message text.

AQ has a lot of options that can be set on queues and on messages. For example, you might want to send messages without committing the whole transaction on the connection:

const queue = await connection.getQueue('MYQUEUE');
queue.enqOptions.visibility = oracledb.AQ_VISIBILITY_IMMEDIATE;
await queue.enqOne('This is my message');

Or you may want to wait no longer than five seconds if no messages are in the queue:

const queue = await connection.getQueue('MYQUEUE');
Object.assign(queue.deqOptions, { wait: 5 });
const msg = await queue.deqOne();
await connection.commit();
if (msg) console.log(msg.payload.toString());

Note the direct assignment to the visibility property, or the use of Object.assign for setting the queue options.

At the message level you could expire messages from the queue if they have not been dequeued within a certain time, for example ten seconds:

const queue = await connection.getQueue('MYQUEUE');
const message = {
  payload: 'This is my other message',
  expiration: 10
};
await queue.enqOne(message);
await connection.commit();

These examples show enqOne() takes either a string message directly, or a JavaScript object with a payload property that is the string message, like the very last example above. The same goes for Buffers and DbObjects: they are either passed directly to enqOne() or as the value of a payload property in a JavaScript object argument. See aqraw.js, aqoptions.js, and aqobject.js for runnable examples.

Node-oracledb 4.0 also has batch enqueue and dequeue methods, see aqmulti.js.
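As a rough sketch of the batch methods (see aqmulti.js for the real, runnable example), messages can be enqueued and dequeued in groups; the queue name here is the same assumed 'MYQUEUE' as above:

// Hedged sketch of batch enqueue and dequeue.
const queue = await connection.getQueue('MYQUEUE');

// Enqueue several messages at once
await queue.enqMany(['Message 1', 'Message 2', 'Message 3']);
await connection.commit();

// Dequeue up to five messages; fewer may be returned
const messages = await queue.deqMany(5);
for (const msg of messages) {
  console.log(msg.payload.toString());
}
await connection.commit();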
IMPLICIT RESULTS

Support for Implicit Results has arrived in node-oracledb 4.0. This makes it even easier to return query result sets from PL/SQL to node-oracledb. With Implicit Results you don't need bind variables. The DBMS_SQL.RETURN_RESULT interface lets a cursor be returned via node-oracledb's new results.implicitResults property:

result = await connection.execute(
  `DECLARE
     c1 SYS_REFCURSOR;
     c2 SYS_REFCURSOR;
   BEGIN
     OPEN c1 FOR SELECT city, postal_code FROM locations;
     DBMS_SQL.RETURN_RESULT(c1);
     OPEN c2 FOR SELECT employee_id, last_name FROM employees;
     DBMS_SQL.RETURN_RESULT(c2);
   END;`,
  [],                   // no binds needed
  { resultSet: true }
);

// Iterate over all the ResultSets
for (const rs of result.implicitResults) {
  while ((row = await rs.getRow()))
    console.log(row);
  await rs.close();
}

See impres.js for a runnable example.

A FEW MORE CHANGES

One important change is that the Type Constants and Oracle Database Type Constants were renumbered to allow for future bind enhancements. Make sure you use the constant names in applications, not their values.

There is a new connection.currentSchema property for setting the schema qualifier to be used when a qualifier is omitted from SQL statements. This is an efficient alternative to ALTER SESSION SET CURRENT_SCHEMA.

The relative performance of fetching query row output as JavaScript objects vs arrays via the outFormat setting has improved. It is still faster to fetch each row as an array, so that remains the default mode. We introduced new constants oracledb.OUT_FORMAT_ARRAY and oracledb.OUT_FORMAT_OBJECT to replace the slightly ambiguously named oracledb.ARRAY and oracledb.OBJECT, which are still usable but now deprecated.

The outFormat setting continues to affect the row representation, not the representation of each column item. Look at the results of querying the ID and FARM columns from the FARMTAB table of the first Oracle object example above. With an execute() query option { outFormat: oracledb.OUT_FORMAT_OBJECT }, each queried row is like:

{ ID: 1,
  FARM: [MYUSER.DBFARMTYPE] { FARMERNAME: 'McDonald', HARVEST: [ 'corn', 'wheat', 'barley' ] } }

With the default oracledb.OUT_FORMAT_ARRAY, each queried row is an array of values, one per column:

[ 1,
  [MYUSER.DBFARMTYPE] { FARMERNAME: 'McDonald', HARVEST: [ 'corn', 'wheat', 'barley' ] } ]

In both cases the FARM column is a DbObject type. The only difference is how you access the column from the execute() row results, either as an object reference result.rows[i].FARM or as an array reference result.rows[i][1].

If you missed the recent announcement: the existing SODA APIs for NoSQL-style document access are out of preview status and are now supported with Oracle Client 18.5 and Oracle Client 19.3, or later, when using Oracle Database 18 and above. This is also true for SODA in node-oracledb 3.1.2. SODA bulk insert methods were added in 4.0. These are in preview status.

We updated examples to use the async/await style of programming, which is generally cleaner and easier to follow. I still see people using multiple programming styles in their code, presumably from trying to integrate examples from different sources or times. This gets mighty confusing for them. Where possible, start moving to async/await - but don't forget to use 'await' where it's needed.

See the CHANGELOG and Migrating from node-oracledb 3.1 for other changes and more information about node-oracledb 4.0.

CONCLUSION

Node-oracledb 4.0 is a big release with some significant features, making it even easier to build sophisticated applications in Node.js. A special shout out to our testers who have been putting in extra effort. Along with their other projects, they not only test the parts of node-oracledb you see, but also make sure that future Oracle Database features and versions have great Node.js support.

RESOURCES

Node-oracledb installation instructions are here.
Node-oracledb documentation is here.
Node-oracledb change log is here.
Issues and questions about node-oracledb can be posted on GitHub or Slack (link to join Slack).
Follow us on Twitter or Facebook.
Finally, contributions to node-oracledb are more than welcome, see CONTRIBUTING.

Python

Improved Oracle Advanced Queuing in cx_Oracle 7.2

cx_Oracle 7.2, the extremely popular Oracle Database interface for Python, is now Production on PyPI. cx_Oracle is an open source package for the Python Database API specification with many additions to support advanced Oracle Database features.

It's time for a tidy batch of enhancements and fixes to cx_Oracle. The top changes:

- Improved Oracle Advanced Queuing support:
  - Added support for enqueue and dequeue of "RAW" payloads (adding to the previous support for object AQ messages). This lets you use strings and buffers for messages.
  - Added bulk enqueue and dequeue of messages. This can significantly improve performance when dealing with multiple messages.
  - To align the new and existing AQ support, a new method Connection.queue() was introduced. This returns a queue, which has methods and properties for AQ functionality. The previous AQ methods Connection.deq(), Connection.deqOptions(), Connection.enq(), and Connection.enqOptions() are deprecated and will be removed in cx_Oracle 8.
- Added support for SODA bulk insert, available when cx_Oracle uses Oracle client libraries 18.5 and higher. This is currently in 'preview' status.
- Added support for setting the LOB prefetch length indicator in order to reduce the number of round trips when processing LOB values.

Thanks are deserved, as always, to everyone who has contributed directly and indirectly.

Installing or Upgrading cx_Oracle

You can install or upgrade cx_Oracle by running:

python -m pip install cx_Oracle --upgrade

See cx_Oracle 7 Installation for details. If the upgrade seems to have no effect, add the pip install option --force-reinstall. Linux packages will be available soon from Oracle Linux for Python Developers.

cx_Oracle's new AQ interface

Here's an example of the new interface to AQ. I use a RAW queue to send simple strings. First we need a RAW queue itself, which I'll call DEMO_RAW_QUEUE. This has to be configured in the database, for example in SQL*Plus:

BEGIN
  dbms_aqadm.create_queue_table('MY_QUEUE_TABLE', 'RAW');
  dbms_aqadm.create_queue('DEMO_RAW_QUEUE', 'MY_QUEUE_TABLE');
  dbms_aqadm.start_queue('DEMO_RAW_QUEUE');
END;
/

To send messages in Python you connect and get a 'queue' object. The queue can be used for enqueuing, dequeuing, or both. Refer to the cx_Oracle and Oracle Advanced Queuing documentation for details on enqueue and dequeue options. Here I use ENQ_IMMEDIATE so that a commit on the connection is not needed to commit messages:

import cx_Oracle

connection = cx_Oracle.connect(. . .)
queue = connection.queue("DEMO_RAW_QUEUE")
queue.enqOptions.visibility = cx_Oracle.ENQ_IMMEDIATE

Now messages can be queued. Here I'm sending three messages, setting an expiration of 60 seconds on them. This means that if no dequeue operation occurs in 60 seconds then the messages will be dropped from the queue:

PAYLOAD_DATA = [
    "The first message",
    "The second message",
    "The third message"
]
for data in PAYLOAD_DATA:
    queue.enqOne(connection.msgproperties(payload=data, expiration=60))

Dequeuing is similar. Strings are encoded as buffers, so a decode() is used to get the text:

queue = connection.queue("DEMO_RAW_QUEUE")
queue.deqOptions.wait = cx_Oracle.DEQ_NO_WAIT
queue.deqOptions.navigation = cx_Oracle.DEQ_FIRST_MSG
while True:
    props = queue.deqOne()
    if not props:
        break
    print(props.payload.decode())
connection.commit()

There are examples in the samples directory. Readers with a good memory will recall I mentioned bulk AQ operations but the example above used a loop for each string. You can convert the examples easily.
The bulk enqueue operation simply takes an array of messages. The bulk dequeue method returns an array of messages; the array length is up to a specified size.

cx_Oracle References

Home page: oracle.github.io/python-cx_Oracle/index.html
Installation instructions: cx-oracle.readthedocs.io/en/latest/installation.html
Documentation: cx-oracle.readthedocs.io/en/latest/index.html
Release Notes: cx-oracle.readthedocs.io/en/latest/releasenotes.html
Source Code Repository: github.com/oracle/python-cx_Oracle
Facebook group: www.facebook.com/groups/oraclescripting
Questions: github.com/oracle/python-cx_Oracle/issues

General

ODPI-C 3.2 has been released on GitHub

Release 3.2 of Oracle Database Programming Interface for C (ODPI-C) is now available on GitHub. ODPI-C is an open source library of C code that simplifies access to Oracle Database for applications written in C or C++.

Top feature: Oracle Advanced Queuing support for 'RAW' queues

Excellent features and improvements have landed in ODPI-C 3.2. The highlights are:

- Advanced Queuing (AQ) support was enhanced with support for 'RAW' queue payloads. (This adds to the previous support for object queues.) In practice RAW support means strings and binary text can be used directly for AQ messages.
- AQ was also enhanced with the addition of bulk enqueue and dequeue operations, improving performance when dealing with multiple messages.
- SODA bulk insertion methods were added, which can significantly improve the performance of document loads. These methods are in preview mode.
- Reduced the number of round-trips required between the application and the database when streaming LOB columns.

Various bug fixes also landed. Check the release notes for all the changes.

We're making great use of ODPI-C for our Python cx_Oracle and Node.js node-oracledb interfaces. Other users have based language drivers on it and have given it well deserved kudos.

ODPI-C References

Home page: oracle.github.io/odpi/
Code: github.com/oracle/odpi
Documentation: oracle.github.io/odpi/doc/index.html
Release Notes: oracle.github.io/odpi/doc/releasenotes.html
Installation Instructions: oracle.github.io/odpi/doc/installation.html
Report issues and discuss: github.com/oracle/odpi/issues

Docker

Dockerfiles for node-oracledb are Easy and Simple

Things change fast and it's time for an update on running Node.js in Docker. Let's dive right in.

Here's a Dockerfile that pulls the lightweight Oracle Linux "slim" image, installs the latest Node.js runtime, and installs the Oracle Client libraries needed for the node-oracledb module. Note no Docker store login or manual downloads are required:

FROM oraclelinux:7-slim

RUN yum -y install oracle-release-el7 oracle-nodejs-release-el7 && \
    yum-config-manager --disable ol7_developer_EPEL && \
    yum -y install oracle-instantclient19.3-basiclite nodejs && \
    rm -rf /var/cache/yum

WORKDIR /myapp
ADD package.json /myapp/
ADD index.js /myapp/
RUN npm install

CMD exec node index.js

If you have Oracle Net configuration files such as a tnsnames.ora file, or wallet files, you can add an extra COPY after installing Instant Client to move the files to the default Oracle location, for example:

COPY wallet/* /usr/lib/oracle/19.3/client64/lib/network/admin/

Put your package.json and application index.js in the directory with your Dockerfile. The package.json file should include the node-oracledb module as a normal dependency:

{
  "name": "cjtest",
  "version": "1.0.0",
  "private": true,
  "description": "Test application",
  "scripts": {
    "start": "node index.js"
  },
  "keywords": [ "myapp" ],
  "dependencies": {
    "oracledb" : "^3.1"
  },
  "author": "CJ",
  "license": "MIT"
}

The app index.js is, for the sake of a quick demonstration, a very simple app:

const oracledb = require('oracledb');
console.log(oracledb.versionString);

Then build your image:

sudo docker build --pull -t cjn .

This installs all required dependencies, which, in this example, is just the node-oracledb module. Now you can run it:

sudo docker run cjn

The output is the expected node-oracledb version number:

3.1.2

There you go. No mess, no fuss. You have deployed Node.js in a Docker container on a supported operating system with a simple, clean Dockerfile setup.

One final tip: If you want to install other Instant Client packages check the Instant Client 19c Dockerfile.

Update #1: if you are using your own containers to deploy in Oracle Functions, don't forget to add this to your Dockerfile otherwise database connections will fail with 'ORA-12560: TNS:protocol adapter error':

RUN groupadd --gid 1000 fn && \
    adduser --uid 1000 --gid fn fn

See Permissions Granted to Containers Running Functions.

Update #2: For lots of detail, check out Blaine Carter's post How To Connect Your Node.js App In Docker To Your Oracle Database.

Update #3: if you prefer using Node.js images, then you can try a Dockerfile something like:

FROM node:12.9.1-buster-slim

WORKDIR /tmp
RUN apt-get update && apt-get -y upgrade && apt-get -y dist-upgrade && apt-get install -y alien libaio1 wget
RUN wget http://yum.oracle.com/repo/OracleLinux/OL7/oracle/instantclient/x86_64/getPackage/oracle-instantclient19.3-basiclite-19.3.0.0.0-1.x86_64.rpm
RUN alien -i --scripts oracle-instantclient*.rpm
RUN rm -f oracle-instantclient19.3*.rpm && apt-get -y autoremove && apt-get -y clean

WORKDIR /myapp
ADD package.json /myapp/
ADD index.js /myapp/
RUN npm install

CMD exec node index.js
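If you want the container to do more than print a version number, a minimal sketch of an index.js that opens a connection might look like the following. The environment variable names are assumptions for this sketch, not part of the Dockerfiles above - pass them to the container with docker run -e and adjust to your environment:

// index.js - hedged sketch; credentials come from environment variables
// that you supply to the container at run time.
const oracledb = require('oracledb');

async function run() {
  let connection;
  try {
    connection = await oracledb.getConnection({
      user: process.env.NODE_ORACLEDB_USER,
      password: process.env.NODE_ORACLEDB_PASSWORD,
      connectString: process.env.NODE_ORACLEDB_CONNECTSTRING
    });
    const result = await connection.execute(`SELECT sysdate FROM dual`);
    console.log(result.rows);
  } finally {
    if (connection) {
      await connection.close();
    }
  }
}

run();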

Node.js

Demo: GraphQL with Oracle Database SODA and node-oracledb

This is a GraphQL demo that stores data in an Oracle Database SODA collection. GraphQL is a query language for services. Oracle Database SODA is a NoSQL-like API for storing documents in Oracle Database. The demo follows on from my previous "Demo: GraphQL with Oracle Database and node-oracledb" which shows the same demonstration using relational tables and SQL queries. Read that post to learn about GraphQL and GraphQL queries. Also see the associated SODA presentation Getting Started with GraphQL APIs on Oracle Database with Node.js which shows some SODA features. Demo prerequisites: Oracle Database 18.3, or later Oracle Client libraries 18.5 or 19.5, or later Steps to run the demo: Download the source code from here and extract it with tar -xf graphql_oracle_soda.tgz Change to the graphql_oracle_soda directory, edit dbconfig.js and set your database schema credentials. Run npm install. This will install dependencies and run setup.js to create the SODA schema. Run npm start to start the GraphQL server. Load the URL http://localhost:3000/graphql In the browser, execute GraphQL requests shown in SAMPLE_QUERIES.txt Since my previous blog discussed the behavior of the GraphQL inputs and outputs, let's look at the SODA implementation. In setup.js a collection is created: const sodaCollectionName = "myBlogs"; let collection = await soda.openCollection(sodaCollectionName); This method will actually open the collection if it already exists, so the next commands drop the collection so the demo can recreate it and run with a known, clean collection: collection.drop(); await conn.execute(`begin execute immediate 'drop sequence "mySequence"'; exception when others then if sqlcode <> -2289 then raise; end if; end;`); The cleanup is in two parts because the collection that will be created uses a sequence for the document keys. While SODA will auto-create the sequence, the collection.drop() method doesn't automatically drop the created sequence. The collection is then created using custom metadata: const metaData = { "schemaName": config.user.toUpperCase(), "tableName": "myBlogs", "keyColumn": { "name": "ID", "assignmentMethod": "SEQUENCE", "sequenceName": "mySequence" }, "contentColumn": { "name": "JSON_DOCUMENT", "sqlType": "BLOB", "compress": "NONE", "cache": true, "encrypt": "NONE", "validation": "STANDARD" }, "versionColumn": { "name": "VERSION", "method": "SHA256" }, "lastModifiedColumn": { "name": "LAST_MODIFIED" }, "creationTimeColumn": { "name": "CREATED_ON" }, "readOnly": false}; collection = await soda.createCollection(sodaCollectionName, {metaData: metaData}); I only needed to specify the metadata because I wanted to change the default key generation from a SHA hash to be a sequence, to align with GraphQL's use of integers for identifiers. The final part of the setup is creation of a couple of documents in the collection: await collection.insertOne({"title": "Blog Title 1", "content": "This is blog 1"}); await collection.insertOne({"title": "Blog Title 2", "content": "This is blog 2"}); I could have done all the setup in a SQL file, but decided to be more Node.js-centric in this particular demo. Let's look at the GraphQL code in graphql_oracle_soda.js. The GraphQL typedefs and resolvers are the same as the previous SQL example - as is the output from running the demo. The updated helpers that interact with the database are the interesting parts. 
In the new SODA example, the helper to get one document aka blog is:

async function getOneBlogHelper(id) {
  let conn = await oracledb.getConnection();
  let soda = conn.getSodaDatabase();
  let collection = await soda.openCollection(sodaCollectionName);
  let document = await collection.find().key(id.toString()).getOne();
  let j = null;
  if (document) {
    let c = document.getContent();
    j = {id: id, title: c.title, content: c.content};
  }
  await conn.close();
  return j;
}

The SODA filter find() is used to find the SODA document with the requested id. The document content is extracted with getContent() and the document values are mapped to a JavaScript object returned back up via the GraphQL resolver. The other helpers are similarly straightforward.

This simple demo shows how to use some of Oracle's exciting SODA features. SODA APIs are available in a number of languages including Java and Python, letting you access your stored data from familiar environments. If you do interesting things with SODA, let us know.
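As an illustration of "similarly straightforward", here is a hedged sketch of a helper that returns all blog documents. It is not taken from the demo source, so the real helpers may differ in details such as key handling:

// Hedged sketch of a 'get all blogs' helper in the same style as above.
async function getAllBlogsHelper() {
  let conn = await oracledb.getConnection();
  let soda = conn.getSodaDatabase();
  let collection = await soda.openCollection(sodaCollectionName);
  let documents = await collection.find().getDocuments();
  let blogs = documents.map(doc => {
    let c = doc.getContent();
    // The collection uses a sequence for keys, so the key string maps to the GraphQL integer id
    return { id: Number(doc.key), title: c.title, content: c.content };
  });
  await conn.close();
  return blogs;
}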

Instant Client

Oracle Instant Client 19c for Linux is Available

Good news - Oracle Instant Client 19.3 is available for Linux on OTN and yum. A Docker file is also available on GitHub. Oracle Instant Client comprises tools, libraries and SDKs for building and connecting applications to Oracle Database. It comes in a number of packages: 'Basic' and 'Basic Light' packages provide a cornerstone, and additional packages can be installed to get extra functionality. Typical uses of Instant Client are for Oracle Database APIs like Python cx_Oracle and Node.js node-oracledb. Instant Client 19c can connect to Oracle Database 11.2 (note some tools may have extra limitations), so I'd encourage you to upgrade. As the first yearly release (following on from 18c) in the new Oracle Database numbering system, the 19c development cycle seemed short. However there have been useful changes and fixes. Check out the documentation for the areas you are interested in. One thing I like is the enhanced Easy Connect Plus syntax, which removes the need to have a sqlnet.ora file for some commonly used connection options. Another key change is that Instant Client 19c (along with Oracle Database 19c) requires glibc 2.14 and won't run on Oracle Linux 6 or other, similarly old, Linux distributions.  For RPM users, there are two things to note about Instant Client 19c package installation: The global system library search path is now automatically set during installation of the (requisite) Basic and Basic Light packages. You no longer need to use ldconfig or set LD_LIBRARY_PATH. You can only have one version of the packages installed by default. For example, the 19c RPMs will conflict with the 18c RPMs. This is to avoid surprises and precedence issues when packages might otherwise attempt to update the library search path.  This default behavior can be circumvented by choosing the right install options (e.g.  'rpm -i --nodeps --replacefiles'), then manually setting the desired search path in '/etc/ld.so.conf.d/oracle-instantclient.conf' and finally running ldconfig. These two notes do not apply to the ZIP files: you still have to manually configure the library search path to suit the directory you installed into. And you can have as many ZIP versions installed concurrently as you like. Some old news, but worth repeating is that the RPM packages on yum.oracle.com do not require a login or click through to download. If you want to install the packages you can simply do: yum -y install oracle-release-el7 && yum-config-manager --enable ol7_oracle_instantclient and then install the desired packages like: yum -y install oracle-instantclient19.3-basic Update: on one network, I started seeing my apps fail to connect with the error "ORA-12637: Packet receive failed". This is under diagnosis. The solution is to create a sqlnet.ora file on the machine my apps are on containing "DISABLE_OOB=ON". On Linux and macOS, you can put this line in $HOME/.sqlnet.ora (note the leading period).
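Returning to the Easy Connect Plus syntax mentioned above, here is a hedged sketch of how such a connect string might be used from node-oracledb. The host, port, service name and the particular options are placeholders for illustration, not a recommendation; check the 19c Net Services documentation for the parameters your client version supports:

// Hedged sketch: an Easy Connect Plus string with options, so no sqlnet.ora is needed.
const oracledb = require('oracledb');

async function demo() {
  const connection = await oracledb.getConnection({
    user: 'hr',
    password: process.env.HR_PASSWORD,   // assumption: password supplied via an environment variable
    connectString: 'dbhost.example.com:1521/orclpdb1?connect_timeout=15&retry_count=3'
  });
  console.log(await connection.execute(`SELECT user FROM dual`));
  await connection.close();
}

demo();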

Python

Python cx_Oracle 7.1's Connection Fix-up Callback Improves Application Scalability

cx_Oracle 7.1, the extremely popular Oracle Database interface for Python, is now Production on PyPI.

Another great release of cx_Oracle is available from PyPI, this time with a focus on session pooling. There were also a number of incremental improvements and fixes, all detailed in the release notes.

Session Pooling

When applications use a lot of connections for short periods, Oracle recommends using a session pool for efficiency. The session pool is a pool of connections to Oracle Database. (For all practical purposes, a 'session' is the same as a 'connection'.)

Many applications set some kind of state in connections (e.g. using ALTER SESSION commands to set the date format, or a time zone) before executing the 'real' application SQL. Pooled connections will retain this state after they have been released back to the pool with conn.close() or pool.release(), and the next user of the connection will see the same state. However, because the number of connections in a pool can vary over time, or connections in the pool can be recreated, there is no guarantee a subsequent pool.acquire() call will return a database connection that has any particular state.

In previous versions of cx_Oracle, any ALTER SESSION commands had to be run after each and every pool.acquire() call. This added load and reduced system efficiency. In cx_Oracle 7.1, a new cx_Oracle.SessionPool() option sessionCallback reduces configuration overhead, as featured in the three scenarios shown below. Further details on session callbacks can be found in my post about the equivalent feature set in node-oracledb.

Scenario 1: All Connections Should Have the Same State

When all connections in a pool should have exactly the same state, you can set sessionCallback to a Python function:

def InitSession(conn, requestedTag):
    cursor = conn.cursor()
    cursor.execute("alter session ....")

pool = cx_Oracle.SessionPool(un, pw, connectstring,
                             sessionCallback=InitSession, threaded=True)
. . .

The function InitSession will be called whenever a pool.acquire() call selects a newly created database connection in the pool that has not been used before. It will not be called if the connection in the pool was previously used by the application. It is called before pool.acquire() returns. The big advantage is that it saves the cost of altering session state if a previous user of the connection has already set it. Also the current caller of pool.acquire() can always assume the correct state is set.

If you need to execute more than one SQL statement in the callback, use a PL/SQL block to reduce round-trips between Python and the database:

def InitSession(conn, requestedTag):
    cursor = conn.cursor()
    cursor.execute("""
        begin
            execute immediate
                'alter session set nls_date_format = ''YYYY-MM-DD''
                                   nls_language = AMERICAN';
            -- other SQL statements could be put here
        end;""")

The requestedTag parameter is shown in the next section.

Scenario 2: Connections Need Different State

When callers of pool.acquire() need different session states, for example if they need different time zones, then session tagging can be used in conjunction with sessionCallback. See SessionCallback.py for a runnable example.

A tag is a semi-arbitrary string that you assign to connections before you release them to the pool. Typically a tag represents the session state you have set in the connection. Note that when cx_Oracle is using Oracle Client 12.2 (or later) libraries then tags are multi-property and must be in the form of one or more "name=value" pairs, separated by a semi-colon.
You can choose the property names and values. Subsequent pool.acquire() calls may request a connection be returned that has a particular tag already set, for example:

conn = pool.acquire(tag="NLS_DATE_FORMAT=SIMPLE")

This will do one of the following:

- Select an existing connection in the pool that has the requested tag. In this case, the sessionCallback function is NOT called.
- Select a new, previously unused connection in the pool (which will have no tag) and call the sessionCallback function.
- Select a previously used connection with a different tag. The existing session and tag are cleared, and the sessionCallback function is called.

An optional matchanytag parameter can be used:

conn = pool.acquire(tag="TIME_ZONE=MST", matchanytag=True)

In this case, a connection that has a different tag may be selected from the pool and the sessionCallback function will be invoked.

When the callback is executed, it can compare the requested tag with the tag that the connection currently has. It can then set the desired connection state and update the connection's tag to represent that state. The connection is then returned to the application by the pool.acquire() call:

def InitSession(conn, requestedTag):
    # Display the requested and actual tags
    print("InitSession(): requested tag=%r, actual tag=%r" % (requestedTag, conn.tag))

    # Compare the requested and actual tags and set some state
    . . .
    cursor = conn.cursor()
    cursor.execute("alter session ....")

    # Assign the requested tag to the connection so that when the connection
    # is closed, it will automatically be retagged
    conn.tag = requestedTag

When it is invoked, the sessionCallback function is called before pool.acquire() returns. The underlying Oracle Session Pool tries to optimally select a connection from the pool. Overall, a pool.acquire() call will try to return a connection which has the requested tag string or tag properties, therefore avoiding invoking the sessionCallback function.

Scenario 3: Using Database Resident Connection Pooling (DRCP)

When using Oracle client libraries 12.2 (or later) the sessionCallback can alternatively be a PL/SQL procedure. Instead of setting sessionCallback to a Python function, set it to a string containing the name of a PL/SQL procedure, for example:

pool = cx_Oracle.SessionPool(un, pw, connectstring,
                             sessionCallback="myPlsqlCallback", threaded=True)

The procedure has the declaration:

PROCEDURE myPlsqlCallback (
    requestedTag IN VARCHAR2,
    actualTag    IN VARCHAR2
);

For an example PL/SQL callback, see SessionCallbackPLSQL.py.

The PL/SQL procedure is called only when the properties in the requested connection tag do not match the properties in the actual tag of the connection that was selected from the pool. The callback can then change the state before pool.acquire() returns to the application.

When DRCP connections are being used, invoking the PL/SQL callback procedure does not need round-trips between Python and the database. In comparison, a complex (or badly coded) Python callback function could require lots of round-trips, depending on how many ALTER SESSION or other SQL statements it executes. A PL/SQL callback can also be used without DRCP; in this case invoking the callback requires just one round-trip.

Summary

cx_Oracle 7.1 is a solid release which should particularly please session pool users.
cx_Oracle References

Home page: oracle.github.io/python-cx_Oracle/index.html
Installation instructions: cx-oracle.readthedocs.io/en/latest/installation.html
Documentation: cx-oracle.readthedocs.io/en/latest/index.html
Release Notes: cx-oracle.readthedocs.io/en/latest/releasenotes.html
Source Code Repository: github.com/oracle/python-cx_Oracle
Facebook group: https://www.facebook.com/groups/418337538611212/
Questions: github.com/oracle/python-cx_Oracle/issues

Node.js

Use node-oracledb 3.1's connection tagging to efficiently set session state

Release announcement: The latest version of node-oracledb, the Node.js module for accessing Oracle Database, is on npm. Top features: Connection tagging; New-connection callback; Explicit session drop; Multi-binary install; Deferred Oracle Client library initialization.     This release of node-oracledb has features to make installation and configuration easier, and for working with pooled connections more efficiently. The highlights are below, but check the CHANGELOG for all improvements and bug fixes. The available pre-built node-oracledb binaries are now bundled into the node-oracledb npm package, making installation easier and allowing applications to be more portable. Loading of the Oracle Client libraries is now deferred until the first use of oracledb.getConnection(), oracledb.createPool(), oracledb.oracleClientVersion, or oracledb.oracleClientVersionString. Pooled connections can now be tagged with a text string when releasing them to the connection pool. This can be used to indicate what 'session state' (e.g. ALTER SESSION settings or PL/SQL package state) the connection has. The application can later call pool.getConnection() and request a connection which has been tagged. This lets the application use a connection with a known session state. A Node.js callback function can be invoked when a pooled connection is newly created and has never been acquired from the pool before, or when requested and actual tags mismatch. Alternatively a PL/SQL procedure can be invoked when requested and actual tags mismatch. This lets applications set session state before the connection is used. Connections in the connection pool can now be forcefully dropped so that the next pool.getConnection() that causes the pool to grow will re-establish a new connection to the database. Install Changes The node-oracledb module necessarily has a binary layer that links with Oracle Client libraries to communicate to Oracle Database. (Technically this makes it an 'add-on', not a module.) From node-oracledb 3.1 the pre-built binaries we provide are now bundled into the node-oracledb npm package. The bootstrap install script no longer does a separate request to download a binary from GitHub - this change may help anyone with network restrictions. After installation, the correct binary (if one is available for your operating system and Node.js version) is loaded by require('oracledb') at run-time. This means that you can upgrade Node.js, or copy the node_modules/oracledb directory to another operating system without always needing to re-install node-oracledb, making it easier to deploy some classes of application. If you are deploying to a fixed environment, such as a Docker container, and really want to minimize the install footprint, you can remove binaries for other Node.js versions by running 'npm run prune' in the node_modules/oracledb directory. This saves a few megabytes. Node-oracledb source code remains available at GitHub or git://oss.oracle.com/git/oracle/node-oracledb.git/ which you can use to build node-oracledb binaries on platforms that do not have pre-built binaries. Deferred Loading Connections to Oracle are often known as 'sessions'. Each connection can have what is called 'session state' controlling behavior. State can be set in various way such as via environment variables or with ALTER SESSION statements. 
For node-oracledb, a commonly set environment variable is ORA_SDTZ which controls the default session time zone: $ export ORA_SDTZ=UTC $ node myapp.js Instead of setting the value before starting Node, some developers prefer setting it inside the application under the impression there is no possibility of mis-configuration: process.env.ORA_SDTZ = 'UTC'; const oracledb = require('oracledb'); But a number of people set environment variables too late, or discover insurmountable operating system differences (e.g. between PATH on Windows and LD_LIBRARY_PATH on Linux, which has to be set before a process starts), and basically get confused by trying to set the environment inside applications. Overall I don't recommend this usage because I'm the person who keeps having to help these users! Luckily for people who ignore my advice, node-oracledb 3.1's new deferred library loading change makes node-oracledb more tolerant of Oracle environment setting placement. Initialization of the Oracle Client libraries is now deferred until the first use of oracledb.getConnection(), oracledb.createPool(), oracledb.oracleClientVersion, or oracledb.oracleClientVersionString. If the Oracle Client cannot be loaded, getConnection() and createPool() will return an error via the callback. Accessing oracledb.oracleClientVersion or oracledb.oracleClientVersionString will throw an error. Previously require('oracledb') would throw an error. Since Oracle client libraries are not loaded until a connection is used, you can now do: const oracledb = require('oracledb'); . . . // Some time later process.env.ORA_SDTZ = 'UTC'; let pool = oracledb.createPool( . . . ) More practically, this change makes require('oracledb') always succeed (if the module is installed!), allowing node-oracledb constants and other attributes to be accessed even if the Oracle Client is not available. This makes it easier to include node-oracledb in multi-database applications where not all users will be accessing Oracle Database. It allows code generation tools to access node-oracledb constants without needing Oracle Client installed on the computer that generates Node.js code. Session State Getting back to altering session state, some applications do this with explicit ALTER SESSION commands. Pooled connections will retain this session state after they have been released back to the pool with connection.close() and the next user of the connection will see the same state. However, because the number of connections in a pool can vary over time, or connections in the pool can be recreated, there is no guarantee a subsequent getConnection() call will return a database connection that has any particular state. So in previous versions of node-oracledb, any ALTER SESSION commands had to be run after each and every pool.getConnection() call. This added load and reduced system efficiency: // In the past with node-oracledb 3.0 connection = await pool.getConnection(); await connection.execute(`alter session set time_zone='UTC'`); await connection.execute(`select ...`); // actually use the connection In node-oracledb 3.1, the new createPool() option sessionCallback can be used to set session state efficiently. You can set sessionCallback to a Node.js function that will be called whenever pool.getConnection() will return a newly created database connection that has not been used before. It is also called when connection tagging is being used and the requested tag does not match the actual tag of the connection selected from the pool. 
It is called before getConnection() returns in these two cases. It will not be called in other cases. The big advantage is that it saves the cost of altering session state if a previous user of the connection has already set it. Also the current caller of getConnection() can always assume the correct state is set.

The following example prints "In initSession" just once - for the first getConnection() call. The second getConnection() call returns the previously used connection (because poolMax is 1) which has already been initialized. The function initSession is not called a second time:

// New in node-oracledb 3.1
const oracledb = require('oracledb');

function initSession(connection, requestedTag, cb) {
  console.log("In initSession");
  cb();
}

(async function() {
  try {
    let pool = await oracledb.createPool({
      user: 'hr',
      password: 'welcome',
      connectString: 'localhost/xe',
      sessionCallback: initSession,
      poolMin: 1,
      poolMax: 1,
      poolIncrement: 0
    });

    let connection;

    connection = await pool.getConnection();
    await connection.close();

    connection = await pool.getConnection();
    await connection.close();
  } catch (err) {
    console.error(err.message);
  }
})();

The initSession() parameter requestedTag is only set when tagging is being used, see later.

Although a sessionCallback function can be used for logging, it is more commonly used to set session state. This state should be set in the function before it invokes its own callback cb. The following example sets two NLS settings in each pooled connection. They are only set the very first time connections are established to the database. Subsequent pool.getConnection() calls that return a previously used connection will not cause initSession to be called:

function initSession(connection, requestedTag, cb) {
  connection.execute(
    `alter session set nls_date_format = 'YYYY-MM-DD' nls_language = AMERICAN`,
    cb);
}

If you need to execute multiple SQL statements in the callback, use an anonymous PL/SQL block to avoid repeated execute() calls and save round-trips between node-oracledb and Oracle Database:

function initSession(connection, requestedTag, cb) {
  connection.execute(
    `begin
       execute immediate
         'alter session set nls_date_format = ''YYYY-MM-DD'' nls_language = AMERICAN';
       -- other SQL statements could be put here
     end;`,
    cb);
}

Let's say that you implement initSession in a micro-service. The service simply connects, does one query, and then disconnects. Compare it with node-oracledb 3.0 where you would have to execute ALTER SESSION each time you call getConnection(). The new version will only execute ALTER SESSION once for each connection in the pool. When poolMax is 4 and you call the service 1000 times, the number of statement executions required for each version can be compared:

Node-oracledb version | Service called | poolMax | ALTER SESSION calls | SELECT calls | Statement executions
3.0                   | 1000           | 4       | 1000                | 1000         | 2000
3.1                   | 1000           | 4       | 4                   | 1000         | 1004

Using sessionCallback in node-oracledb 3.1 significantly reduces the number of statement executions and therefore reduces the round-trips between node-oracledb and Oracle Database. In one basic comparison of an application like this, the average response time went down from 0.35 ms to 0.27 ms by using a sessionCallback function.

Tagging

Using a simple sessionCallback is great when you want every connection to have the same session settings. If some connections need different state to others, then you can set the desired state in a connection (e.g.
with ALTER SESSION) and set an application-specific string 'tag' on the connection to record the session state. Later, other pool.getConnection() calls can request a connection with a specified tag, i.e. a connection that has a known session state: // Get initial connection connection = await pool.getConnection(); // Change the session state and set the tag to record that state. await connection.execute(`ALTER SESSION SET TIME_ZONE='UTC'`); connection.tag = "TZ=UTC"; // With 12.2 client libs the tag must be like "k=v" . . . // Use connection // Closing saves the current connection.tag value for that connection await connection.close(); . . . // Ask to be given a connection which has a specific tag. If no // suitable connection exists in the pool, a brand new connection may // be returned; this won't have a tag set. connection = await pool.getConnection({tag: "TZ=UTC"}); console.log('Connection tag is ' + connection.tag); if (connection.tag !== "TZ=UTC") { await connection.execute(`ALTER SESSION SET TIME_ZONE='UTC'`); connection.tag = "TZ=UTC"; } // Else got an initialized session so don't need to do ALTER SESSION. // The aim is to avoid doing ALTER SESSION unless necessary. . . . // Use connection You would use tagging where the aim is to get connections which have one of several different states. Do not use tagging if all connections should have the same state - you can simply use sessionCallback as shown earlier. Also there is little point using tagging with huge numbers of different tags since the chances of getConnection() returning an already initialized connection are low, so most connections will need some kind of session reset, and the cost of tag management will add overhead to the situation. This is also true if the pool frequently grows and shrinks. As you can guess, sessionCallback can be used with tagging to set state. When the requested tag doesn't match the connection's actual tag, the sessionCallback function will be called before pool.getConnection() returns. The function can check the requestedTag parameter against connection.tag, and then adjust the session state and connection.tag value as desired. // The pool sessionCallback function function initSession(connection, requestedTag, cb) { console.log(`initSession() requested tag: ${requestedTag}, actual tag: ${connection.tag}`); // requestedTag and connection.tag can be parsed to decide what state to set . . . connection.execute(`ALTER SESSION SET . . .`, (err) => { connection.tag = ". . ."; // Update connection.tag to match the new state cb(err); // Call cb() after setting any state } ); } In initSession(), the value of requestedTag will be different to connection.tag by definition because the callback is not called if the tags are the same. The edge-case exception is if the getConnection() call asked for a tag of "" (empty string) and the connection is new and hasn't had a tag set. However, to use tagging efficiently, you should request initialized sessions, and not mix getConnection() calls that request tags with calls that request no tags. A best-practice recommendation is to set connection.tag in the callback where you update the session state, however it can be set anytime prior to the connection being closed if needed. The underlying Oracle Session Pool has various modes and heuristics to optimally select the connection to return from the pool. Overall, a pool.getConnection() call will try to select a connection which has the requested tag, therefore avoiding invoking the sessionCallback function. 
If no available connection with that tag exists in the pool, an untagged connection or a connection with a new session will be selected, and thus invoke the sessionCallback function; in this case the connection.tag property at the time the sessionCallback is invoked will be an empty string. If the optional getConnection() attribute 'matchAnyTag' is true, then a connection that has a different tag may be selected and connection.tag in the sessionCallback function will have that different tag. In summary, Oracle will try its best to reuse connections, but it is up to the sessionCallback function to check the actual and requested tags and make necessary session state changes. When node-oracledb is using Oracle Client libraries 12.2 or later, node-oracledb always uses Oracle's multi-property tagging mode and the tag string must be of the form of one or more "name=value" pairs, separated by a semi-colon, for example "loc=uk;lang=cy" or "TIME_ZONE=UTC;NLS_DATE_FORMAT=YYYY-MM-DD" (otherwise you'll get an error like 'ORA-24487: Invalid properties or values provided for OCISessionGet'). The property keys and values in a multi-property tag are case sensitive. The order of properties influences which connection is returned by the connection pool, so put the most important properties early in the tag. You can use a multi-property tag to record various session settings so your sessionCallback function can parse the tag and decide which specific settings should be reset. Note that the underlying Oracle session pool parses tag properties so "a=b;c=d" is considered the same as "c=d;a=b" when choosing which connection in the pool should be used. However the Node.js callback is invoked after a simple string comparison of the full tag strings and would treat these examples as different. You can make the callback as complex or as simple as needed, depending on the business requirements and what tag properties are expected. This example assumes that tags have the format USER_TZ=X, where X is a valid Oracle timezone: // See examples/sessiontagging1.js // The connection callback function function initSession(connection, requestedTag, cb) { console.log(`In initSession. requested tag: ${requestedTag}, actual tag: ${connection.tag}`); const tagParts = requestedTag.split('='); if (tagParts[0] != 'USER_TZ') { cb(new Error('Error: Only property USER_TZ is supported')); return; } // Execute the session state change. connection.execute( `ALTER SESSION SET TIME_ZONE = '${tagParts[1]}'`, (err) => { connection.tag = requestedTag; // Record the new state of the connection cb(err); } ); } // Start a connection pool try { await oracledb.createPool({ user: 'hr', password: 'welcome', connectString: 'localhost/XE', sessionCallback: initSession }); } catch (err) { console.error('init() error: ' + err.message); } // . . . Later use the pool // Request a connection from the pool cache with a given tag, but // accept any tag being returned. let connection = await oracledb.getConnection( {poolAlias: 'default', tag: "USER_TZ=UTC", matchAnyTag: true}); // What happens: // (i) If a brand new connection was created in the pool, initSession() is // called to set the state and set connection.tag. // (ii) If a matching connection was found in the pool, connection.tag // will equal "USER_TZ=UTC". In this case initSession() is not called. // (iii) If another connection is returned, it will have no tag (if it // is a new or recreated connection) or a different tag (because // matchAnyTag was true) that another user has set. 
In this case, // initSession() is called to set the state and update connection.tag. // Use the connection, which is in the UTC timezone let result = await connection.execute(. . .); // Closing the connection retains the value of connection.tag await connection.close(); . . . } A more complete example of a callback is shown in examples/sessiontagging2.js. When using Oracle Client 12.2, a PL/SQL session state fix-up procedure can alternatively be called instead of the Node.js function. The PL/SQL procedure is called only when the properties in the requested connection tag do not match the properties in the actual tag. Instead of setting sessionCallback to a Node.js function, set it to a string containing the name of your PL/SQL procedure: try { let pool = await oracledb.createPool({ user: 'hr', password: 'welcome', connectString: 'localhost/XE', sessionCallback: "myPackage.myPlsqlCallback" }); . . . let connection = await pool.getConnection({tag: "DF=YYYY-MM-DD"}); . . . // Use connection // The session state will be whatever myPlsqlCallback sets it to. // connection.tag will always be the requested tag "DF=YYYY-MM-DD" await connection.close(); } The PL/SQL callback declaration is: PROCEDURE myPlsqlCallback ( requestedTag IN VARCHAR2, actualTag IN VARCHAR2 ); When DRCP connections are being used, a PL/SQL callback procedure does not need any round-trips between Node.js and the database. In comparison, a complex (or badly coded) Node.js function could require lots of round-trips. A PL/SQL callback can also be specified if DRCP is not being used; in this case invoking the PL/SQL callback requires just one round-trip. An example PL/SQL procedure is shown in the documentation. Since DRCP sessions are a database-server resource that can be shared between a number of applications, multi-property tagging can be particularly useful to let your callback procedure decide which parts of the session state are relevant for your application, and which parts can be ignored. Dropping Pooled Connections Sometimes, instead of releasing a connection back to a pool for reuse, you just want to get rid of it. Perhaps you have set some special session state and it's easier to drop the connection now so that the sessionCallback function can reset the normal state when the connection is recreated. In node-oracledb 3.1 you can now force connections to be dropped from the pool: await connection.close({drop: true}); Remember that normally you don't want to do this, because it means a new connection will have to be created and initialized the next time one is needed. For non-pooled connections, using a simple connection.close() continues to completely terminate a connection. Summary Overall, the node-oracledb 3.1 release brings some welcome usability and performance benefits, giving you greater control over your applications in dynamic environments. Resources Node-oracledb installation instructions are here. Node-oracledb documentation is here. Node-oracledb change log is here. Issues and questions about node-oracledb can be posted on GitHub or Slack (link to join Slack). Follow us on Twitter or Facebook. Finally, contributions to node-oracledb are more than welcome, see CONTRIBUTING.


General

ODPI-C 3.1 is Available

Release 3.1 of Oracle Database Programming Interface for C (ODPI-C) is now available on GitHub. ODPI-C is an open source library of C code that simplifies access to Oracle Database for applications written in C or C++. Top features: Multi-property Session Tags. The ODPI-C 3.1 release introduces some small enhancements and corrects a number of small issues that were discovered over the past few months. The main change is support for Oracle Call Interface 12.2's multi-property session tagging, allowing connections in a session pool to be assigned a semi-arbitrary string tag that you can use to represent the session state (e.g. ALTER SESSION values) of each connection. With multi-property tagging you can assign a PL/SQL procedure to 'fix-up' the session state, if necessary, before a connection is returned to the application from the pool. This is an efficient way to make sure connections have a required state. See the release notes for all the changes. ODPI-C References Home page: https://oracle.github.io/odpi/ Code: https://github.com/oracle/odpi Documentation: https://oracle.github.io/odpi/doc/index.html Release Notes: https://oracle.github.io/odpi/doc/releasenotes.html Installation Instructions: oracle.github.io/odpi/doc/installation.html Report issues and discuss: https://github.com/oracle/odpi/issues


Node.js

Node-oracledb Connection Samples: Proxies and External Authentication

We recently reviewed 'external authentication' and 'proxy connections' support in node-oracledb and did a few tweaks that will appear in the upcoming node-oracledb 3.1. You can use 'external authentication' to connect without requiring a password being stored in your Node.js applications. This is useful, for example, to authenticate via LDAP or use an Oracle Wallet. The idea of a proxy connection is to create a schema in one database user name. Privilege is granted on that schema to other database users so they can access the schema and manipulate its data. This aids three-tier applications where one user owns the schema while multiple end-users access the data. You can use external authentication and proxy connections together or separately. In the 'so I can find it again' category, here are some behaviors of node-oracledb connections. For ease of testing, my external authentication was via Operating System authentication using an 'OPS$' schema. Since my database was remote to the node-oracledb application I used the INSECURE setting 'alter system set remote_os_authent=true scope=spfile;'. Do not use this in real life! SQL: create user mynormaluser identified by mynormaluserpw; grant create session to mynormaluser; create user myproxyuser identified by myproxyuserpw; grant create session to myproxyuser; create user mysessionuser1 identified by doesnotmatter; grant create session to mysessionuser1; alter user mysessionuser1 grant connect through myproxyuser; create user mysessionuser2 identified by doesnotmatter; grant create session to mysessionuser2; -- I logged into my computer as the 'oracle' OS user: create user ops$oracle identified externally; grant connect, resource to ops$oracle; alter user ops$oracle default tablespace users; alter user mysessionuser2 grant connect through ops$oracle; JavaScript: const oracledb = require('oracledb'); async function ShowUserInfo(conn) { let result = await conn.execute(` select sys_context('USERENV', 'PROXY_USER'), sys_context('USERENV', 'SESSION_USER') from dual`); console.log(" Proxy User:", result.rows[0][0]); console.log(" Session User:", result.rows[0][1]); console.log(); }; (async function() { let conn, pool, config, testdesc; // -------------------- STANDALONE CONNECTIONS -------------------- console.log("(1) Standalone: Basic Auth"); // Gives: // Proxy User: null // Session User: MYNORMALUSER try { config = { connectString: "localhost/orclpdb", user: "mynormaluser", password: "mynormaluserpw" }; conn = await oracledb.getConnection(config); await ShowUserInfo(conn); } catch (e) { console.log(e.message + "\n"); } console.log("(2) Standalone: External Auth"); // Gives: // Proxy User: null // Session User: OPS$ORACLE try { config = { connectString: "localhost/orclpdb", externalAuth: true }; conn = await oracledb.getConnection(config); await ShowUserInfo(conn); } catch (e) { console.log(e.message + "\n"); } console.log("(3) Standalone: Basic Auth with proxy"); // Gives: // Proxy User: MYPROXYUSER // Session User: MYSESSIONUSER1 try { config = { connectString: "localhost/orclpdb", user: "myproxyuser[mysessionuser1]", password: "myproxyuserpw" }; conn = await oracledb.getConnection(config); await ShowUserInfo(conn); } catch (e) { console.log(e.message + "\n"); } console.log("(4) Standalone: External Auth with proxy in brackets"); // With node-oracledb 3.0 gives: // DPI-1032: user name and password cannot be set when using external authentication // With node-oracledb 3.1 gives: // Proxy User: OPS$ORACLE // Session User: MYSESSIONUSER2 try { config = { connectString: "localhost/orclpdb", user: "[mysessionuser2]", externalAuth: true }; conn = 
await oracledb.getConnection(config); await ShowUserInfo(conn); } catch (e) { console.log(e.message + "\n"); } console.log("(5) Standalone: External Auth with proxy"); // With node-oracledb 3.0 gives: // DPI-1032: user name and password cannot be set when using external authentication // With node-oracledb 3.1 gives: // DPI-1069: proxy user name must be enclosed in [] when using external authentication try { config = { connectString: "localhost/orclpdb", user: "mysessionuser2", externalAuth: true }; conn = await oracledb.getConnection(config); await ShowUserInfo(conn); } catch (e) { console.log(e.message + "\n"); } // -------------------- POOLED CONNECTIONS -------------------- console.log("(6) Pooled: Basic Auth"); // Gives: // Proxy User: null // Session User: MYNORMALUSER try { config = { connectString: "localhost/orclpdb", user: "mynormaluser", password: "mynormaluserpw" }; pool = await oracledb.createPool(config); conn = await pool.getConnection(); await ShowUserInfo(conn); } catch (e) { console.log(e.message + "\n"); } console.log("(7) Pooled: External Auth"); // Gives: // Proxy User: null // Session User: OPS$ORACLE try { config = { connectString: "localhost/orclpdb", externalAuth: true }; pool = await oracledb.createPool(config); conn = await pool.getConnection(); await ShowUserInfo(conn); } catch (e) { console.log(e.message + "\n"); } console.log("(8) Pooled: Basic Auth with proxy in pool creation"); // Gives: // Proxy User: MYPROXYUSER // Session User: MYSESSIONUSER1 try { config = { connectString: "localhost/orclpdb", user: "myproxyuser[mysessionuser1]", password: "myproxyuserpw" }; pool = await oracledb.createPool(config); conn = await pool.getConnection(); await ShowUserInfo(conn); } catch (e) { console.log(e.message + "\n"); } console.log("(9) Pooled: Basic Auth with proxy in connection"); // Gives: // Proxy User: MYPROXYUSER // Session User: MYSESSIONUSER1 try { config = { connectString: "localhost/orclpdb", user: "myproxyuser", password: "myproxyuserpw", homogeneous: false }; pool = await oracledb.createPool(config); conn = await pool.getConnection({ "user": "mysessionuser1" }); await ShowUserInfo(conn); } catch (e) { console.log(e.message + "\n"); } console.log("(10) Pooled: Basic Auth with proxy in brackets in connection"); // Gives: // ORA-00987 missing or invalid username(s) try { config = { connectString: "localhost/orclpdb", user: "myproxyuser", password: "myproxyuserpw", homogeneous: false }; pool = await oracledb.createPool(config); conn = await pool.getConnection({ "user": "[mysessionuser1]" }); await ShowUserInfo(conn); } catch (e) { console.log(e.message + "\n"); } console.log("(11) Pooled: External Auth with proxy in brackets in pool creation"); // Gives: // DPI-1032 user name and password cannot be set when using external authentication try { config = { connectString: "localhost/orclpdb", user: "[mysessionuser2]", externalAuth: true }; pool = await oracledb.createPool(config); conn = await pool.getConnection(); await ShowUserInfo(conn); } catch (e) { console.log(e.message + "\n"); } console.log("(12) Pooled: External Auth with proxy in pool creation"); // Gives: // DPI-1032: user name and password cannot be set when using external authentication try { config = { connectString: "localhost/orclpdb", user: "mysessionuser2", externalAuth: true }; pool = await oracledb.createPool(config); conn = await pool.getConnection(); await ShowUserInfo(conn); } catch (e) { console.log(e.message + "\n"); } console.log("(13) Pooled: External Auth with proxy in brackets in 
connection"); // Using Oracle 12.2 client libraries gives // Proxy User: null // Session User: OPS$ORACLE // Using Oracle 18.3 client libraries gives // Proxy User: OPS$ORACLE // Session User: MYSESSIONUSER2 // This was an enhancement in Oracle 18.3 try { config = { connectString: "localhost/orclpdb", externalAuth: true }; pool = await oracledb.createPool(config); conn = await pool.getConnection({ user: "[mysessionuser2]" }); await ShowUserInfo(conn); } catch (e) { console.log(e.message + "\n"); } console.log("(14) Pooled: External Auth with proxy in connection"); // With node-oracledb 3.0 gives: // Proxy User: null // Session User: OPS$ORACLE // With node-oracledb 3.1 gives: // DPI-1069: proxy user name must be enclosed in [] when using external authentication // This change in node-oracledb 3.1 prevents connecting with an unexpected session user try { config = { connectString: "localhost/orclpdb", externalAuth: true }; pool = await oracledb.createPool(config); conn = await pool.getConnection({ user: "mysessionuser2" }); await ShowUserInfo(conn); } catch (e) { console.log(e.message + "\n"); } })(); Summary: (4) "Standalone: External Auth with proxy": Fixed in node-oracledb 3.1 (5) "Standalone: External Auth with proxy no brackets": Error message was improved in node-oracledb 3.1 (13) "Pooled: External Auth with proxy in brackets in connection": Works with Oracle Client 18.3+ (14) "Pooled: External Auth with proxy in connection": In node-oracledb 3.1 connection fails due to improved validation. In node-oracledb 3.0 connection succeeded despite the options being inconsistent. We've had some discussions about the use of "[]" brackets - which is a standard Oracle syntax passed through to the Oracle client libraries. We could have done some manipulation in node-oracledb for consistency between pooled and standalone connections, but we decided not to make node-oracledb behave differently than other Oracle language interfaces. Finally, if you're interested in tracking users or in mid-tier authentication, don't forget to check out the clientId attribute. 


General

Historical Links for Oracle Net Services aka SQL*Net

Sometimes you just have to do a cleanup but it seems a waste not to keep information that may still be valuable for the historical record. This is one such time. Here are some of the older resources about Oracle Net Services, aka SQL*Net, taken from a web page that is about to be revamped. Of course, if you want current information, visit the Oracle Net Services page, the Net Services Reference, or the latest Net Services Administrator's Guide. Overview Oracle Net Services provides a scalable, secure, and easy-to-use high-availability network infrastructure for the Oracle environment. It eases the complexities of network configuration and management, maximizes performance, and improves network security and diagnostic capabilities as summarized hereafter. Connectivity: Oracle Net Services enables a network session from a client application to an Oracle database server. Once a network session is established, Oracle Net acts as the data courier for both the client application and the database server. It is responsible for establishing and maintaining the connection between the client application and database server, as well as exchanging messages between them. Manageability: it encompasses location transparency, centralized configuration and management, and quick installation and configuration. The Location Transparency service allows a database client to identify the target database server; to achieve this goal, several naming methods are available today: Oracle Net Directory naming, Local naming (TNSNAMES.ORA), Host naming, and External naming. Centralized Configuration and Management: allows administrators, in large network environments, to easily access a centralized repository (i.e., an LDAP-compliant directory server such as Oracle Internet Directory) to specify and modify the network configuration. Quick Installation and Configuration: networking components for the Oracle database server and clients are preconfigured for most environments. The Oracle database service is resolved using various naming methods. As a result, clients and servers are ready to connect immediately when installed. Performance and Scalability: features such as Database Resident Connection Pool (connection pooling), Shared Server (session multiplexing), and scalable event models (polling) enable performance and high scalability. Network Security: Oracle Net Services enables database access control using features of firewall access control and protocol access control. Diagnosability: a diagnostic and performance analysis tool, Trace Assistant, provides detailed information about the source and context of problems as they arise. What's New in Oracle Database 12c The new features for Net Services in Oracle Database 12c include: Internet Protocol Version 6 (IPv6) Addressing and Connectivity support over Public VIPs to Real Application Clusters (please refer to the IPv6 Statement of Direction below for more information) Networking support for new architectures, such as Database Consolidation and Global Data Services Large Session Data Unit (SDU) sizes, with a new upper limit of 2 MB. The larger SDU size can be used to achieve better utilization of available bandwidth in networks that have high bandwidth delay products and host resources, according to application characteristics. Advanced Network Compression. This feature can be used to reduce the amount of data transmitted over the network. New parameters for the sqlnet.ora file enable compression and selection of the preferred compression level. 
Please refer to the Advanced Network Compression white paper below for more information. Dead Connection Detection has been enhanced to reduce the amount of time taken to detect terminated connections. The SQLNET.EXPIRE_TIME parameter must be set in the sqlnet.ora file to detect terminated clients. If the system supports TCP keepalive tuning, then Oracle Net Services automatically uses the enhanced detection model, and tunes the TCP keepalive parameters at a per-connection level. The Intelligent Client Connection establishment feature reduces the priority of connection attempts to unresponsive nodes in the address string of connect descriptors. No configuration changes are required to use this feature. Incident Generation for Process Failures has been added for the following Oracle Net Services components: Oracle Net Listener Listener control utility (LSNRCTL) Connection Manager Administration (CMADMIN) Oracle Connection Manager gateway (CMGW) What's New in Oracle Database 11gR1 & 11gR2 The new features for Net Services in Oracle Database 11g Release 2 include: Internet Protocol Version 6 (IPv6) Addressing and Connectivity for single-instance databases (please refer to the IPv6 Statement of Direction below for more information) Oracle Restart in a single-instance environment for restarting the listener, the database, and other Oracle components following hardware/software failure or host restart. Support for CONNECT_TIMEOUT and TRANSPORT_CONNECT_TIMEOUT for each IP address CIDR notation and wildcard support for valid node checking The new features for Net Services in Oracle Database 11g Release 1 include: Support for authenticated LDAP access for Net Naming Performance Improvements Fastpath for common usage scenarios, enforced transparently (no configuration required) Efficient network support for bulk/large data transfers (such as SecureFile LOBs) Efficient event dispatch mechanisms for PHP and process-based systems (enforced transparently, no configuration required) Fast Reconnects for High Availability: efficient detection of terminated instances and connect-time failover Support for Database Resident Connection Pool Enhancements to the Easy Connect Naming method For more details see the Oracle Database Net Services Administrator's Guide. Technical Information Oracle OpenWorld Oracle Net Services 12c: Best Practices for Database Performance and Scalability Oracle Database 12c Oracle Database and IPv6 Statement of Direction Advanced Network Compression Dead Connection Detection Oracle Database 11gR2 Application Failover in Oracle Database 11g Oracle Database 11gR1 Authenticated LDAP Name Lookup (PDF) Configuring Microsoft Active Directory 2003 for Net Naming (PDF) Oracle Net Listener Connection Rate Limiter (PDF) Oracle Easy Connect Naming (PDF) Oracle Database 10g Oracle Net Services Overview for Oracle10g (PDF) Oracle Net High-Speed Interconnect Support (PDF) Oracle Connection Manager Overview (HTML) Oracle Database 9i Oracle Net Services New Features Overview (HTML) Directory Naming Migration: New Features and a Case Study (PDF) Migrating from Oracle Names and Local Naming to Directory Naming (PDF)
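As a footnote to the sqlnet.ora parameters mentioned in the Oracle Database 12c section above, here is a hedged configuration sketch; the values shown are illustrative assumptions only, so confirm the exact parameter names and limits in the Net Services Reference for your release:

# Illustrative sqlnet.ora settings (example values, not recommendations)
SQLNET.EXPIRE_TIME = 10                  # dead connection detection probe interval, in minutes
DEFAULT_SDU_SIZE = 1048576               # larger session data unit, in bytes
SQLNET.COMPRESSION = on                  # enable Advanced Network Compression
SQLNET.COMPRESSION_LEVELS = (low, high)  # preferred compression levels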


General

ODPI-C: A Light Weight Driver for Oracle Database

This is a guest post by Oracle’s Vaddeti Karthik Raju, a contributor to the ODPI-C project. What is ODPI-C ? Oracle Database Programming Interface for C (ODPI-C) is an open source library of C code that simplifies the use of common Oracle Call Interface (OCI) features for Oracle Database drivers and user applications which are written in C or C++. It is a wrapper over Oracle Call Interface (OCI) that makes applications and language interfaces easier to develop. For complete details of ODPI-C, visit the homepage. Here you can find a list of all its powerful features, links to the source code, the documentation, and some samples. Why ODPI-C ? Since ODPI-C is a wrapper on top of OCI and exposes common OCI functionality, why use ODPI-C? Though OCI is highly efficient and flexible, it requires a lot of code and technical skill to use well. ODPI-C reduces the amount of code and the skill level required, which is particularly useful for language driver creators. Thus it reduces the amount of time required for a developer to implement new Oracle features in a particular language or application. User applications and language drivers have been written in ODPI-C. Drivers currently using ODPI-C include: Oracle Drivers cx_Oracle Python interface node-oracledb Node.js module Third-party Drivers go-goracle Go Driver mirmir Rust Bindings odpic-raw Haskell Raw Bindings ruby-ODPI Ruby Interface rust-oracle Driver for Rust Common setup Let us go through few example programs which demonstrates how to perform operations in ODPI-C. For installation of ODPI-C, visit the installation instructions. In summary, the supplied Makefile or Makefile.win32 can be used to build a shared library, which can be linked with samples and tests. Once the library has been built, locate the directory it is in and add the directory name to your system library search path, e.g. PATH on Windows or LD_LIBRARY_PATH on Linux. You'll also need some Oracle client libraries, such as from the Oracle Instant Client "Basic" package. Before going further, create the database objects using the below code. Here I am using database credentials scott/tiger@localhost/orclpdb: $ sqlplus scott/tiger@localhost/orclpdb create table TestTable ( IntCol number(9) not null, StringCol varchar2(100) ); Include the below headers, defines and printError( ) function for all the programs shown in this blog post: this code is common for all programs. Make sure that you connect to the same schema where you created the table. #include <dpi.h> #include <stdlib.h> #include <stdio.h> #include <string.h> #define USER "scott" #define PASSWORD "tiger" #define CONNECT_STRING "localhost/orclpdb" static dpiContext *gContext = NULL; static dpiErrorInfo gErrorInfo; //----------------------------------------------------------------------------- // printError() // Prints the error message. The error is first fetched // from the global DPI context. //----------------------------------------------------------------------------- int printError(void) { if (gContext) dpiContext_getError(gContext, &gErrorInfo); fprintf(stderr, " [FAILED]\n"); fprintf(stderr, " FN: %s\n", gErrorInfo.fnName); fprintf(stderr, " ACTION: %s\n", gErrorInfo.action); fprintf(stderr, " MSG: %.*s\n", gErrorInfo.messageLength, gErrorInfo.message); fflush(stderr); return DPI_FAILURE; } Inserting The program insert.c shows how to perform simple insert operation that adds couple of records to the table. 
// insert.c int main() { const char *insertSql = "insert into TestTable values (:1, :2)"; dpiData intColValue, stringColValue; dpiConn *conn; dpiStmt *stmt; // create context if (dpiContext_create(DPI_MAJOR_VERSION, DPI_MINOR_VERSION, &gContext, &gErrorInfo) < 0) return printError(); // create connection if (dpiConn_create(gContext, USER, strlen(USER), PASSWORD, strlen(PASSWORD), CONNECT_STRING, strlen(CONNECT_STRING), NULL, NULL, &conn) < 0) return printError(); // prepare insert statement for execution if (dpiConn_prepareStmt(conn, 0, insertSql, strlen(insertSql), NULL, 0, &stmt) < 0) return printError(); // create first row dpiData_setInt64(&intColValue, 1); if (dpiStmt_bindValueByPos(stmt, 1, DPI_NATIVE_TYPE_INT64, &intColValue) < 0) return printError(); dpiData_setBytes(&stringColValue, "Test data 1", strlen("Test data 1")); if (dpiStmt_bindValueByPos(stmt, 2, DPI_NATIVE_TYPE_BYTES, &stringColValue) < 0) return printError(); if (dpiStmt_execute(stmt, DPI_MODE_EXEC_DEFAULT, NULL) < 0) return printError(); // create second row dpiData_setInt64(&intColValue, 2); if (dpiStmt_bindValueByPos(stmt, 1, DPI_NATIVE_TYPE_INT64, &intColValue) < 0) return printError(); dpiData_setBytes(&stringColValue, "Test data 2", strlen("Test data 2")); if (dpiStmt_bindValueByPos(stmt, 2, DPI_NATIVE_TYPE_BYTES, &stringColValue) < 0) return printError(); if (dpiStmt_execute(stmt, DPI_MODE_EXEC_DEFAULT, NULL) < 0) return printError(); // commit changes if (dpiConn_commit(conn) < 0) return printError(); // cleanup dpiStmt_release(stmt); dpiConn_release(conn); return DPI_SUCCESS; } Remember to add the headers, defines and printError() to insert.c before compiling it. I recommend using the Makefile or Makefile.win32 from the samples directory to help you compile. Let us walk through each function of the code: Create the ODPI-C context which is the first function that must be called in all ODPI-C applications (dpiContext_create()). Create a standalone connection to the database (dpiConn_create()). Prepare a statement for execution (dpiConn_prepareStmt()). Populate data in structures and bind it to the statement (dpiData_setInt64(), dpiData_setBytes(), dpiStmt_bindValueByPos()). Execute the statement (dpiStmt_execute()). Repeat steps 4 and 5 for the second row. Commit the data (dpiConn_commit()). Clean up (dpiStmt_release(), dpiConn_release()). Inserting multiple records In insert.c we saw how to insert single record at a time. If you want to insert multiple records into a table you can use the dpiStmt_executeMany() function. In insertmany.c we will add multiple records to the table more efficiently by using an array bind operation. 
// insertmany.c int main() { const char *insertSql = "insert into TestTable values (:1, :2)"; dpiData *intData, *strData; uint32_t numRows = 5, i; dpiVar *intVar, *strVar; char buffer[100]; dpiConn *conn; dpiStmt *stmt; // create context if (dpiContext_create(DPI_MAJOR_VERSION, DPI_MINOR_VERSION, &gContext, &gErrorInfo) < 0) return printError(); // create connection if (dpiConn_create(gContext, USER, strlen(USER), PASSWORD, strlen(PASSWORD), CONNECT_STRING, strlen(CONNECT_STRING), NULL, NULL, &conn) < 0) return printError(); // prepare and bind insert statement if (dpiConn_prepareStmt(conn, 0, insertSql, strlen(insertSql), NULL, 0, &stmt) < 0) return printError(); if (dpiConn_newVar(conn, DPI_ORACLE_TYPE_NUMBER, DPI_NATIVE_TYPE_INT64, numRows, 0, 0, 0, NULL, &intVar, &intData) < 0) return printError(); if (dpiStmt_bindByPos(stmt, 1, intVar) < 0) return printError(); if (dpiConn_newVar(conn, DPI_ORACLE_TYPE_VARCHAR, DPI_NATIVE_TYPE_BYTES, numRows, 100, 1, 0, NULL, &strVar, &strData) < 0) return printError(); if (dpiStmt_bindByPos(stmt, 2, strVar) < 0) return printError(); // populate some dummy data for (i = 0; i < numRows; i++) { dpiData_setInt64(&intData[i], i + 1); sprintf(buffer, "Test data %d", i + 1); if (dpiVar_setFromBytes(strVar, i, buffer, strlen(buffer)) < 0) return printError(); } // perform execute many if (dpiStmt_executeMany(stmt, DPI_MODE_EXEC_DEFAULT, numRows) < 0) return printError(); // commit changes if (dpiConn_commit(conn) < 0) return printError(); // cleanup if (dpiVar_release(intVar) < 0) return printError(); if (dpiVar_release(strVar) < 0) return printError(); dpiStmt_release(stmt); dpiConn_release(conn); return DPI_SUCCESS; } Let us walk through each function of the code: Create ODPI-C context which is the first function that must be called in all ODPI-C applications (dpiContext_create()). Create standalone connection to the database (dpiConn_create()). Prepare statement for execution (dpiConn_prepareStmt()). Create variables and bind them to the statement by position. (dpiConn_newVar(), dpiStmt_bindByPos()). Populate data into bound variables (dpiData_setInt64(), dpiVar_setFromBytes()). Execute the statement by specifying the number of times (dpiStmt_executeMany()). Commit the data (dpiConn_commit()). Clean up (dpiVar_release(), dpiStmt_release(), dpiConn_release()). Fetching So far we have seen how to insert data into a table. In the following program we will see how to fetch records from the table. In order to make sure some records exist in the table run insertmany before running fetch. 
// fetch.c int main() { const char *selectSql = "select IntCol, StringCol from TestTable"; dpiData *intColValue, *stringColValue; dpiNativeTypeNum nativeTypeNum; uint32_t bufferRowIndex; dpiConn *conn; dpiStmt *stmt; int found; // create context if (dpiContext_create(DPI_MAJOR_VERSION, DPI_MINOR_VERSION, &gContext, &gErrorInfo) < 0) return printError(); // create connection if (dpiConn_create(gContext, USER, strlen(USER), PASSWORD, strlen(PASSWORD), CONNECT_STRING, strlen(CONNECT_STRING), NULL, NULL, &conn) < 0) return printError(); // prepare select statement if (dpiConn_prepareStmt(conn, 0, selectSql, strlen(selectSql), NULL, 0, &stmt) < 0) return printError(); if (dpiStmt_execute(stmt, DPI_MODE_EXEC_DEFAULT, NULL) < 0) return printError(); // fetch rows while (1) { if (dpiStmt_fetch(stmt, &found, &bufferRowIndex) < 0) return printError(); if (!found) break; if (dpiStmt_getQueryValue(stmt, 1, &nativeTypeNum, &intColValue) < 0) return printError(); if (dpiStmt_getQueryValue(stmt, 2, &nativeTypeNum, &stringColValue) < 0) return printError(); printf("Int = %ld String = '%.*s'\n", intColValue->value.asInt64, stringColValue->value.asBytes.length, stringColValue->value.asBytes.ptr); } // cleanup dpiStmt_release(stmt); dpiConn_release(conn); return DPI_SUCCESS; } When I run the program, the output is: Int = 1 String = 'Test data 1' Int = 2 String = 'Test data 2' Int = 3 String = 'Test data 3' Int = 4 String = 'Test data 4' Int = 5 String = 'Test data 5' Let us walk through each function of the code: Create ODPI-C context which is the first function that must be called in all ODPI-C applications (dpiContext_create()). Create standalone connection to the database (dpiConn_create()). Prepare statement for execution (dpiConn_prepareStmt()). Execute the statement (dpiStmt_execute()). Fetch the row and get column values (dpiStmt_fetch(), dpiStmt_getQueryValue()). Clean up (dpiStmt_release(), dpiConn_release()). Scrolling In the above example rows are retrieved in order from the statement until the rows are exhausted. Now let us see an example on how to scroll between rows using the dpiStmt_scroll() function. To make sure some records exist in the table, run insertmany before running this program. 
// scroll.c int main() { const char *sql = "select IntCol from TestTable order by IntCol"; dpiNativeTypeNum nativeTypeNum; uint32_t numQueryColumns; uint32_t bufferRowIndex; dpiData *data; dpiStmt *stmt; dpiConn *conn; int found; // create context if (dpiContext_create(DPI_MAJOR_VERSION, DPI_MINOR_VERSION, &gContext, &gErrorInfo) < 0) return printError(); // create connection if (dpiConn_create(gContext, USER, strlen(USER), PASSWORD, strlen(PASSWORD), CONNECT_STRING, strlen(CONNECT_STRING), NULL, NULL, &conn) < 0) return printError(); // prepare select statement if (dpiConn_prepareStmt(conn, 1, sql, strlen(sql), NULL, 0, &stmt) < 0) return printError(); // number of rows fetched from the database if (dpiStmt_setFetchArraySize(stmt, 5) < 0) return printError(); if (dpiStmt_execute(stmt, DPI_MODE_EXEC_DEFAULT, &numQueryColumns) < 0) return printError(); // set the relative position to scroll if (dpiStmt_scroll(stmt, DPI_MODE_FETCH_RELATIVE, 4, 0) < 0) return printError(); // fetch the row if (dpiStmt_fetch(stmt, &found, &bufferRowIndex) < 0) return printError(); if (dpiStmt_getQueryValue(stmt, 1, &nativeTypeNum, &data) < 0) return printError(); printf("\nIntCol = %ld\n", data->value.asInt64); // cleanup dpiStmt_release(stmt); dpiConn_release(conn); return DPI_SUCCESS; } When I run scroll the output is: IntCol = 5 Let us walk through each function of the code: Create ODPI-C context which is the first function that must be called in all ODPI-C applications (dpiContext_create()). Create standalone connection to the database (dpiConn_create()). Prepare statement for execution (dpiConn_prepareStmt()). Set the fetch array size (dpiStmt_setFetchArraySize()). Execute the statement (dpiStmt_execute()). Scroll the statement to the specified position (dpiStmt_scroll()). Fetch the row and get column value (dpiStmt_fetch(), dpiStmt_getQueryValue()). Clean up (dpiStmt_release(), dpiConn_release()). Wrap up ODPI-C has a number of advanced features letting you create powerful applications. For more ODPI-C tests and samples visit the tests and samples directories. For complete documentation details visit the ODPI-C documentation page. If you have questions, feel free to contact us on GitHub. Vaddeti Karthik Raju is a Senior Member Technical Staff for Oracle Bengaluru. He contributes to a number of projects, including ODPI-C.


What are you doing at Oracle OpenWorld and Code One 2018?

The time for Oracle OpenWorld and CodeOne conferences is fast approaching.  These two conferences run concurrently in San Francisco over October 22 - 25.  If you are flying in, you will have already sorted out your conference tickets, but if you are local there are options from the free Discover pass (search the page for 'Discover') on upwards that you should take advantage of. There are plenty of things to attend and do - you can get your hands dirty in any number of ways! There are so many sessions running that you need to keep a clear head.  If you want to see sessions that are related to the areas this blog covers check out our 'Focus On" document Application Development with Node.js, Python, PHP, Ruby, R, C and C++.  There is a short URL: https://tinyurl.com/AppDevOOW18 which will also take you there.  This document has the abstracts and will be up to date if there are any changes, but for ease of reference here is the current list of wonderful events: HOL (Hands-on Lab) Sessions Python and Oracle Database: Scripting for the Future - BYOL [HOL5052] Monday, Oct 22, 9:00 a.m. - 11:00 a.m. | Moscone West - Overlook 2A (HOL) Python and Oracle Database 18c: Scripting for the Future [HOL6329] Tuesday, Oct 23, 2:15 p.m. - 3:15 p.m. | Marriott Marquis (Yerba Buena Level) - Salon 3/4 Developer Sessions Getting Started with R and ROracle for Data Science with Oracle Database [DEV5049] Monday, Oct 22, 11:30 a.m. - 12:15 p.m. | Moscone West - Room 2016 The Go Language: Principles and Practices for Oracle Database [DEV5047] Monday, Oct 22, 12:30 p.m. - 1:15 p.m. | Moscone West - Room 2001 How to Build Geospatial Analytics with Python and Oracle Database [DEV5185] Monday, Oct 22, 1:30 p.m. - 2:15 p.m. | Moscone West - Room 2003 How Does Ruby on Rails Work with MySQL, Oracle Database, and Other Databases [DEV4948] Monday, Oct 22, 2:30 p.m. - 3:15 p.m. | Moscone West - Room 2014 Getting Started with GraphQL APIs on Oracle Database with Node.js [DEV4879] Tuesday, Oct 23, 11:30 a.m. - 12:15 p.m. | Moscone West - Room 2012 Product Training Sessions Oracle Net Services: Best Practices for Database Performance and High Availability [TRN4073] Monday, Oct 22, 3:45 p.m. - 4:30 p.m. | Moscone West - Room 3009 A Database Proxy for Transparent High Availability, Performance, Routing, and Security [TRN4070] Wednesday, Oct 24, 11:15 a.m. - 12:00 p.m. | Moscone West - Room 3009 Application High Availability Best Practices and New Features [TRN4078] Thursday, Oct 25, 10:00 a.m. - 10:45 a.m. | Moscone West - Room 3009 Using Location in Cloud Applications with Python, Node.js, and More [TRN4089] Thursday, Oct 25, 11:00 a.m. - 11:45 a.m. | Moscone West - Room 3001 Building Real-Time Data in Web Applications with Node.js [TRN4081] Thursday, Oct 25, 1:00 p.m. - 1:45 p.m. | Moscone West - Room 3009 IoT for Oracle Database: Soldering, Python, and a Little PL/SQL [TRN4077] Thursday, Oct 25, 2:00 p.m. - 2:45 p.m. | Moscone West - Room 3009 Tips and Tricks Sessions Python and Oracle Database on the Table [TIP4076] Tuesday, Oct 23, 12:30 p.m. - 1:15 p.m. | Moscone West - Room 2007 Node.js: Async Data In and Data Out with Oracle Database [TIP4080] Thursday, Oct 25, 11:00 a.m. - 11:45 a.m. | Moscone West - Room 3009 Performance and Scalability Techniques for Oracle Database Applications [TIP4075] Thursday, Oct 25, 12:00 p.m. - 12:45 p.m. | Moscone West - Room 3009 Meet the Experts Node.js, Python, PHP, and Go with Oracle Database [MTE6765] Wednesday, Oct 24, 3:00 p.m. - 3:50 p.m. 
| Moscone West - The Hub - Lounge B Demos We'll also have a demo booth in the "Exchange Hall" in Moscone South. Python, Node.js, Go, C, and C++ Application Development for Oracle Database [APD-A03] Other Happenings Some other 'Focus On' Documents from my wider group are: SQL Developer, Data Modeler, ORDS, SQLcl, SQLDev Web and Migrations .NET Development for Oracle Database Oracle Database on Windows Java Applications Development using ADBA, AoJ, JDBC, UCP, and OJVM And our link again: Application Development with Node.js, Python, PHP, Ruby, R, C and C++


Node.js

node-oracledb 3.0 Introduces SODA Document Storage

    Release announcement: Node-oracledb 3.0.0, the Node.js module for accessing Oracle Database, is on npm. Top features: Simple Oracle Document Access (SODA) preview, Connection Pool draining, Call timeouts.   Node-oracledb 3 has been released, which is very exciting. This release adds support for some Oracle Database and Oracle Client 18c features. As with earlier versions, node-oracledb 3 is usable with Oracle Client libraries 11.2 onwards. This allows it to connect to Oracle Database 9.2 or later, depending on the client version. But only when using Oracle Client 18.3 libraries and connected to Oracle Database 18.3 will you get the latest and greatest set of Oracle features, which we are pleased to bring you. Here are the highlights of thise node-oracledb 3.0 release: Oracle Simple Document Access (SODA) support - an exciting addition to the standard relational access model is the new set of APIs for Oracle SODA. See the section lower down for examples. A new drainTime argument to pool.close() allows pools to be force-closed after a specified number of seconds. This feature was a contribution from Danilo Silva. Thanks Danilo! When a pool is closed with a given drainTime, any subsequent getConnection() calls will fail, but connections currently in use will continue working. This allows code to complete and outstanding transactions to be committed. When no connections are in use, or at the end of the drain time (whichever occurs first), the pool and all its connections will be forcibly closed. Although the words 'forcibly closed' seem harsh, the drain time actually allows more graceful shutdown of applications, so that users will get clear messages that the pool is closing (or has closed), and letting the database-side sessions be cleanly freed without waiting for them to timeout. The drainTime can also be zero, forcing the immediate close of all sessions - this is handy when you want to kill an app but be nice to the database. Installation of the pre-built node-oracledb binaries got a bit easier with basic proxy authentication support. Also the 'npm config' proxy value is now used when installing if there are no proxy environment variables set. These changes were a contribution from Cemre Mengu. Thank you Cemre! Added a connection.callTimeout property to interrupt long running database calls. This is available when node-oracledb 3 is using Oracle Client libraries version 18.1, or later, regardless of Oracle Database version. The call timeout is a useful backstop to prevent out-of-control SQL and PL/SQL statement execution. For background, the main code layer beneath node-oracledb's implementation is Oracle Call Interface. This API handles all the network connectivity to Oracle Database. For each OCI function executed by node-oracledb, zero or more 'round-trips' to the database can occur - calling the database and getting a response back. The callTimeout value applies to each round-trip individually, not to the sum of all round-trips. Time spent processing in node-oracledb before or after the completion of each round-trip is not counted. If the time from the start of any one round-trip to the completion of that same round-trip exceeds callTimeout milliseconds, then the operation is halted and an error is returned. In the case where a node-oracledb operation requires more than one round-trip and each round-trip takes less than callTimeout milliseconds, then no timeout will occur, even if the sum of all round-trip calls exceeds callTimeout. 
If no round-trip is required, the operation will never be interrupted. When callTimeout is exceeded, node-oracledb attempts to clean up the internal connection state. The cleanup is allowed to take another callTimeout milliseconds. If the cleanup was successful, a "DPI-1067: call timeout of N ms exceeded with ORA-XXX" error will be returned and the application can continue to use the connection. For small values of callTimeout, the connection cleanup may not complete successfully within the additional callTimeout period. In this case an ORA-3114 is returned and the connection will no longer be usable. It should be closed. On Windows, node-oracledb will now attempt to load the Oracle Client libraries from the 'node_modules\oracledb\build\Release' directory before doing the standard Windows library directory search i.e. of the PATH directories. This new feature could be useful if you are bundling up applications on Windows and want to include the Oracle Instant Client. By putting the client libraries in the 'node_modules\oracledb\build\Release' directory there is no need to set PATH, no need to worry about users changing PATH, and no need to worry about having multiple versions of Oracle Client libraries in PATH. poolPingInterval functionality has been 're-enabled' when using the connection pool with Oracle Client libraries 12.2, or later. Previously it was deliberately not enabled with these versions, since an internal check in those Oracle clients is very efficient for seeing if the network has dropped out. However users were unhappy that the internal check does not identify connections that are unusable because they have exceeded database session resource limits (e.g. return ORA-02396 when used), or have been explicitly closed by DBAs (e.g. return ORA-00028). This is where poolPingInterval helps. This change can make your apps seem more highly available but there is a drawback: your apps may be silently reconnecting more than is optimal, and you might not be aware of connection storms if a large pool needs to be re-established. You should monitor AWR reports to see if connections are occurring too frequently, and then work with your network and DBA administrators to prevent idle sessions being killed. These are just the highlights. For other changes and improvements see the CHANGELOG. But read on to hear more about SODA . . . . Simple Oracle Document Access (SODA) in node-oracledb Oracle Simple Document Access (SODA) is a set of NoSQL-style APIs that let you create and store collections of documents (in particular JSON) in Oracle Database, retrieve them, and query them, without needing to know SQL. SODA support is now available in node-oracledb 3 when using Oracle Database 18.3 and Oracle Client 18.3, or higher. SODA APIs are also available for Python, C, Java, PL/SQL and via REST, so it is widely accessible and bound to be a useful tool in your data toolkit. We are currently labelling node-oracledb 3 SODA APIs as a 'preview' but, with a future version of the Oracle Client libraries, this will change. The class diagram of node-oracledb shows the separation of the relational and SODA worlds: In reality, SODA is backed by Oracle Database tables, providing a well known, secure, and efficient storage solution. You could access those tables via SQL but this would rarely be needed, perhaps for some advanced Oracle Database functionality such as analytics for reporting. Instead you will almost certainly just use the new classes and methods. 
Once a DBA has granted you the SODA_APP privilege, you can simply create collections and store documents (e.g JSON) in them. Some basic examples are: // Create the parent object for SODA. soda = await connection.getSodaDatabase(); // Create a new SODA collection, if it doesn't exist. // This will open an existing collection, if the name is already in use. collection = await soda.createCollection("mycollection"); // Insert a document. // A system generated key is created by default. content = {name: "Matilda", address: {city: "Melbourne"}}; doc = await collection.insertOneAndGet(content); key = doc.key; console.log("The key of the new SODA document is: ", key); You can then get documents back via a key look up, or by a search. A key lookup is straightforward: // Fetch the document back doc = await collection.find().key(key).getOne(); // A SodaDocument content = doc.getContent(); // A JavaScript object console.log('Retrieved SODA document as an object:'); console.log(content); For documents that can be converted to JSON you can alternatively get them as a string: content = doc.getContentAsString(); // A JSON string console.log('Retrieved SODA document as a string:'); console.log(content); The find() method is an operation builder, with methods that allow progressively limiting criteria to be set, reducing the set of documents that are then operated on by a terminal method such as getOne(), getDocuments() or count(), amongst others. With JSON documents, a complete filtering specification language can be used for query-by-example (QBE) to find documents. A brief example is: // Find all documents with city names starting with 'S' documents = await collection.find() .filter({"address.city": {"$like": "S%"}}) .getDocuments(); for (let i = 0; i < documents.length; i++) { content = documents[i].getContent(); console.log(' city is: ', content.address.city); } A runnable example is in soda1.js Check out the node-oracledb SODA manual section and the Introduction to Simple Oracle Document Access (SODA) manual to see its power and simplicity. Update: the SODA team's recent 'Office Hours' walks through a node-oracledb SODA demo in this video. You don't have Oracle Database 18.3 yet? Get it from here. Or you may be interested in using JSON with older versions of Oracle Database. Summary We are continuing to introduce important features to node-oracledb to make your development experience better. We have a long wishlist and will continue our work. Contributions from the community are always welcome, and we thank the people who have contributed to this and previous releases of node-oracledb for helping to make it better. Resources Node-oracledb installation instructions are here. Node-oracledb documentation is here. Node-oracledb change log is here. Issues and questions about node-oracledb can be posted on GitHub or Slack (link to join Slack). Finally, contributions to node-oracledb are more than welcome, see CONTRIBUTING. Follow us on Twitter or Facebook.
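As a small addendum to the pool draining and call timeout features highlighted at the top of this announcement, here is a minimal hedged sketch; the pool, the query, and the timeout values are illustrative assumptions rather than code from the release itself:

// Hedged sketch of node-oracledb 3 call timeouts and pool draining.
const connection = await pool.getConnection();
try {
  connection.callTimeout = 5000;    // milliseconds allowed per database round-trip
  await connection.execute(`select count(*) from all_objects`);
} catch (err) {
  console.error(err.message);       // e.g. a DPI-1067 error if the timeout was exceeded
} finally {
  await connection.close();
}
// Give in-use connections up to 10 seconds to finish, then force-close the pool.
await pool.close(10);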


General

ODPI-C 3.0 Introduces SODA Document Storage

Release 3.0 of Oracle Database Programming Interface for C (ODPI-C) is now available on GitHub.  ODPI-C is an open source library of C code that simplifies access to Oracle Database for applications written in C or C++.     This release introduces support for Simple Oracle Document Access (SODA) when using Oracle Database 18c. SODA provides a non-SQL API for storing and accessing documents. Commonly documents are JSON, but other types can also be used. In this release the SODA API is a Preview. More details about SODA are in the companion release announcement for Python cx_Oracle 7.0. Also introduced is a call timeout feature for Oracle Client 18c users. This allows applications direct control over how long database operations are allowed to run, making it easier for applications to control outcomes and keep control of user interaction. Again, details are in the cx_Oracle announcement. If you're creating Windows applications for distribution, a change to how Oracle client libraries are located will be helpful. ODPI-C will try to load the Oracle client from the same directory as the ODPI-C binary, before defaulting to the standard search, i.e. using PATH. This means you can bundle a specific version of Instant Client with your application and know that it will be used in preference to any other Oracle libraries on the system. There are a raft of other tweaks and improvements which can be found in the release notes. ODPI-C References Home page: https://oracle.github.io/odpi/ Code: https://github.com/oracle/odpi Documentation: https://oracle.github.io/odpi/doc/index.html Release Notes: https://oracle.github.io/odpi/doc/releasenotes.html Installation Instructions: oracle.github.io/odpi/doc/installation.html Report issues and discuss: https://github.com/oracle/odpi/issues


Python

Python cx_Oracle 7 Introduces SODA Document Storage

cx_Oracle 7.0, the extremely popular Oracle Database interface for Python, is now Production on PyPI. cx_Oracle is an open source package that covers the Python Database API specification with many additions to support Oracle advanced features.     Anthony Tuininga has just released cx_Oracle 7.0. This release brings some key technologies and new features to the Python developer: Oracle Simple Document Access (SODA) support - an exciting addition to the standard relational access model is the new set of APIs for Oracle SODA. See below. Added Connection.callTimeout to support call timeouts when cx_Oracle is using Oracle Client 18.1 and higher. This is a useful backstop to prevent out-of-control SQL and PL/SQL statement execution. The main code layer beneath cx_Oracle's implementation is Oracle Call Interface. This API handles all the network connectivity to Oracle Database. For each OCI function executed by cx_Oracle, zero or more 'round-trips' to the database can occur - calling the database and getting a response back. The callTimeout value applies to each round-trip individually, not to the sum of all round-trips. Time spent processing in cx_Oracle before or after the completion of each round-trip is not counted. If the time from the start of any one round-trip to the completion of that same round-trip exceeds callTimeout milliseconds, then the operation is halted and an error is returned. In the case where a cx_Oracle operation requires more than one round-trip and each round-trip takes less than callTimeout milliseconds, then no timeout will occur, even if the sum of all round-trip calls exceeds callTimeout. If no round-trip is required, the operation will never be interrupted. After a timeout is triggered, cx_Oracle attempts to clean up the internal connection state. The cleanup is allowed to take another callTimeout milliseconds. If the cleanup was successful, a DPI-1067 error will be returned and the application can continue to use the connection. For small values of callTimeout, the connection cleanup may not complete successfully within the additional callTimeout period. In this case an ORA-3114 is returned and the connection will no longer be usable. It should be closed. Added support for closing a session pool via the function SessionPool.close(). This is useful for being 'nice' to the database and making sure that database sessions are not left dangling until the database cleans them up. In particular the optional 'force' argument is handy when you need to suddenly halt a Python application and immediately free all the sessions in the database. Added support for getting the contents of a SQL collection object as a dictionary, where the keys are the indices of the collection and the values are the elements of the collection. See function Object.asdict(). On Windows, cx_Oracle will now attempt to load the Oracle client libraries from the same directory as the cx_Oracle module before doing the standard Windows library location search, e.g. in the directories in the PATH environment variable. This new feature could be useful if you are bundling up applications and want to include the Oracle Instant Client. By putting the client in the same directory as the cx_Oracle library there is no need to set PATH, no need to worry about users changing PATH, and no need to worry about having multiple versions of Oracle client libraries in PATH. 
A change in cx_Oracle 7 is that when a DML RETURNING statement is executed, variables bound to it will return an array when calling Variable.getvalue(). Attempts to set cx_Oracle.__future__.dml_ret_array_val are now ignored. When a connection is used as a context manager, the connection is now closed when the block ends. Attempts to set cx_Oracle.__future__.ctx_mgr_close are now ignored. The full release notes show the other new features and changes. Review this list before you upgrade: python -m pip install cx_Oracle --upgrade SODA in Python cx_Oracle Oracle Simple Document Access (SODA) support was originally introduced in Java and recently exposed to C. Python support for SODA is now available in cx_Oracle 7 when using Oracle client 18.3 libraries and connecting to Oracle Database 18.1 or higher. SODA is all hot and new and under rapid development. For this cx_Oracle release we're labelling SODA support as a 'preview'. With a future version of the Oracle Client libraries this will change. SODA is typically used to store JSON documents in Oracle Database, but has flexibility to let you store other types of content. Once a DBA has granted you the SODA_APP privilege, you can simply create collections and store documents in them. Some basic examples are: # Create the parent object for SODA soda = connection.getSodaDatabase() # Create a new SODA collection # This will open an existing collection, if the name is already in use. collection = soda.createCollection("mycollection") # Insert a document # A system generated key is created by default. content = {'name': 'Matilda', 'address': {'city': 'Melbourne'}} doc = collection.insertOneAndGet(content) key = doc.key print('The key of the new SODA document is: ', key) You can then get documents back via a key look up, or by a search. A key lookup is straightforward: # Fetch the document back doc = collection.find().key(key).getOne() # A SodaDocument content = doc.getContent() # A Python dictionary print('Retrieved SODA document dictionary is:') print(content) For documents that can be converted to JSON you can alternatively get them as a string: content = doc.getContentAsString() # A JSON string print('Retrieved SODA document string is:') print(content) The find() method is an operation builder, with methods that allow progressive filtering criteria to be set, limiting the set of documents that are then operated on by a terminal method such as getOne(), getDocuments() and count(). With JSON documents, a complete filtering specification language can be used to pattern match documents. A brief example is: # Find all documents with names like 'Ma%' print("Names matching 'Ma%'") documents = collection.find().filter({'name': {'$like': 'Ma%'}}).getDocuments() for d in documents: content = d.getContent() print(content["name"]) A runnable example is in SodaBasic.py. Check out the cx_Oracle SODA manual and the Introduction to Simple Oracle Document Access (SODA) manual to see its power and simplicity. Check it out! PS the photo is one I took last weekend on a beach in Australia, which is coming into Spring. I thought you'd like it better than corporate clip art. cx_Oracle References Home page: oracle.github.io/python-cx_Oracle/index.html Installation instructions: cx-oracle.readthedocs.io/en/latest/installation.html Documentation: cx-oracle.readthedocs.io/en/latest/index.html Release Notes: cx-oracle.readthedocs.io/en/latest/releasenotes.html Source Code Repository: github.com/oracle/python-cx_Oracle


Node.js

So you want to use JSON in Oracle Database with Node.js?

The JavaScript JSON.parse() and JSON.stringify() methods make it easy to work with JavaScript objects in Node.js and store them in Oracle Database using the node-oracledb module. I'll start with some examples showing a simple, naive implementation which you can use with all versions of Oracle Database. Then I'll go on to show some of the great JSON functionality introduced in Oracle Database 12.1.0.2. The examples below use the async/await syntax available from Node.js 7.6, but they can be rewritten to use promises or callbacks, if you have an older version of Node.js. Storing JSON as character data in Oracle Database 11.2 At its simplest, you can store JSON as character strings, such as in the column C of MYTAB: CREATE TABLE mytab (k NUMBER, c CLOB); Using a CLOB means we don't need to worry about the length restrictions of a VARCHAR2. A JavaScript object like myContent can easily be inserted into Oracle Database with the node-oracledb module by stringifying it: const oracledb = require('oracledb'); let connection, myContent, json, result; async function run() { try { connection = await oracledb.getConnection( {user: "hr", password: "welcome", connectString: "localhost/orclpdb"}); myContent = {name: "Sally", address: {city: "Melbourne"}}; json = JSON.stringify(myContent); result = await connection.execute( 'insert into mytab (k, c) values (:kbv, :cbv)', { kbv: 1, cbv: json } ); console.log('Rows inserted: ' + result.rowsAffected); } catch (err) { console.error(err); } finally { if (connection) { try { await connection.close(); } catch (err) { console.error(err); } } } } run(); If you are just inserting one record you may want to autocommit, but make sure you don't unnecessarily commit, or break transactional consistency by committing a partial set of data: myContent = {name: "Sally", address: {city: "Melbourne"}}; json = JSON.stringify(myContent); result = await connection.execute( 'insert into mytab (k, c) values (:kbv, :cbv)', { kbv: 1, cbv: json }, { autoCommit: true} ); console.log('Rows inserted: ' + result.rowsAffected); The output is: Rows inserted: 1 To retrieve the JSON content you have to use a SQL query. This is fine when you only need to look up records by their keys: result = await connection.execute( 'select c from mytab where k = :kbv', { kbv: 1 }, // the key to find { fetchInfo: {"C": {type: oracledb.STRING } }}); if (result.rows.length) { js = JSON.parse(result.rows[0]); console.log('Name is: ' + js.name); console.log('City is: ' + js.address.city); } else { console.log('No rows fetched'); } The fetchInfo clause is used to return the CLOB as a string. This is simpler and generally faster than the default, streamed access method for LOBs. (Streaming is great for huge data streams such as videos.) The JSON.parse() call converts the JSON string into a JavaScript object so fields can be accessed like 'js.address.city'. Output is: Name is: Sally City is: Melbourne Code gets trickier if you need to match JSON keys in the query.
You need to write your own matching functionality using LOB methods like dbms_lob.instr(): result = await connection.execute( 'select c from mytab where dbms_lob.instr(c, \'"name":"\' || :cbv ||\'"\') > 0', { cbv: 'Sally' }, { fetchInfo: {"C": {type: oracledb.STRING } }}); if (result.rows.length) { js = JSON.parse(result.rows[0]); console.log('Name is: ' + js.name); console.log('City is: ' + js.address.city); } else { console.log('No rows fetched'); } You can see this could be slow to execute, error prone to do correctly, and very hard to work with when the JSON is highly nested. But there is a solution . . . Oracle Database 12c JSON With Oracle 12.1.0.2 onward you can take advantage of Oracle's JSON functionality. Data is stored as VARCHAR2 or LOB so the node-oracledb code is similar to the naive storage solution above. However, in the database, extensive JSON functionality provides tools for data validation, indexing and matching, for working with GeoJSON, and even for working with relational data. Check the JSON Developer's Guide for more information. You may also be interested in some of the JSON team's blog posts. To start with, when you create a table, you can specify that a column should be validated so it can contain only JSON: c CLOB CHECK (c IS JSON)) LOB (c) STORE AS (CACHE) In this example I also take advantage of Oracle 12c's 'autoincrement' feature called 'identity columns'. This automatically creates a monotonically increasing sequence number for the key. The complete CREATE TABLE statement used for the following examples is: CREATE TABLE myjsontab (k NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY(START WITH 1), c CLOB CHECK (c IS JSON)) LOB (c) STORE AS (CACHE); Strictly speaking, since I know my application will insert valid JSON, I could have improved database performance by creating the table without the CHECK (c IS JSON) clause. However, if you don't know where your data is coming from, letting the database do validation is wise. Inserting JavaScript object data uses the same stringification as in the previous section. Since we don't need to supply a key now, we can use a DML RETURNING clause to get the new key's autoincremented value: myContent = {name: "Sally", address: {city: "Melbourne"}}; json = JSON.stringify(myContent); result = await connection.execute( 'insert into myjsontab (c) values (:cbv) returning k into :kbv', { cbv: json, kbv: { type: oracledb.NUMBER, dir: oracledb.BIND_OUT } }, { autoCommit: true} ); console.log('Data key is: ' + result.outBinds.kbv); This inserts the data and returns the key of the new record. The output is: Data key is: 1 To extract data by the key, a standard SQL query can be used, identical to the naive CLOB implementation previously shown. Oracle Database's JSON functionality really comes into play when you need to match attributes of the JSON string. You may even decide not to have a key column. Using Oracle 12.2's 'dotted' query notation you can do things like: result = await connection.execute( 'select c from myjsontab t where t.c.name = :cbv', { cbv: 'Sally' }, { fetchInfo: {"C": {type: oracledb.STRING } }}); if (result.rows.length) { js = JSON.parse(result.rows[0]); console.log('Name is: ' + js.name); console.log('City is: ' + js.address.city); } else { console.log('No rows fetched'); } Output is: Name is: Sally City is: Melbourne (If you use Oracle Database 12.1.0.2, then the dotted notation used in the example needs to be replaced with a path expression; see the JSON manual for the syntax).
Other JSON functionality is usable, for example to find any records that have an 'address.city' field: select c FROM myjsontab where json_exists(c, '$.address.city') If you have relational tables, Oracle Database 12.2 has a JSON_OBJECT function that is a great way to convert relational table data to JSON: result = await connection.execute( `select json_object('deptId' is d.department_id, 'name' is d.department_name) department from departments d where department_id < :did`, { did: 50 }, { fetchInfo: {"DEPARTMENT": {type: oracledb.STRING } }}); if (result.rows.length) { for (var i = 0; i < result.rows.length; i++) { console.log("Department: " + result.rows[i][0]); js = JSON.parse(result.rows[i][0]); console.log('Department Name is: ' + js.name); } } else { console.log('No rows fetched'); } Output is: Department: {"deptId":10,"name":"Administration"} Department Name is: Administration Department: {"deptId":20,"name":"Marketing"} Department Name is: Marketing Department: {"deptId":30,"name":"Purchasing"} Department Name is: Purchasing Department: {"deptId":40,"name":"Human Resources"} Department Name is: Human Resources If you are working with JSON tables that use BLOB storage instead of CLOB, for example: CREATE TABLE myjsonblobtab (k NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY(START WITH 1), c BLOB CHECK (c IS JSON)) LOB (c) STORE AS (CACHE); Then you need to bind a Buffer for insert: myContent = {name: "Sally", address: {city: "Melbourne"}}; json = JSON.stringify(myContent); b = Buffer.from(json, 'utf8'); result = await connection.execute( 'insert into myjsonblobtab (k, c) values (:kbv, :cbv)', { kbv: 1, cbv: b }, { autoCommit: true} ); console.log('Rows inserted: ' + result.rowsAffected); Querying needs to return a Buffer too: result = await connection.execute( 'select c from myjsonblobtab t where t.c.name = :cbv', { cbv: 'Sally' }, { fetchInfo: {"C": {type: oracledb.BUFFER } }}); if (result.rows.length) { js = JSON.parse(result.rows[0].toString('utf8')); console.log('Name is: ' + js.name); console.log('City is: ' + js.address.city); } else { console.log('No rows fetched'); } A final JSON tip One final tip is to avoid JSON.parse() if you don't need it. An example is where you need to pass a JSON string to a web service or browser. You may be able to pass the JSON string returned from a query directly. In some cases the JSON string may need its own key, in which case simple string concatenation may be effective. In this example, the Oracle Locator method returns a GeoJSON string: result = await connection.execute( `select sdo_util.to_geojson( sdo_geometry(2001, 8307, sdo_point_type(-90, 20, null), null, null)) as c from dual`, { }, // no binds { fetchInfo: {"C": {type: oracledb.STRING } }}); json = '{"geometry":' + result.rows[0][0] + '}'; console.log(json); The concatenation above avoids the overhead of a parse and re-stringification: js = JSON.parse(result.rows[0][0]); jstmp = {geometry: js}; json = JSON.stringify(jstmp); Summary The JavaScript JSON.parse() and JSON.stringify() methods make it easy to work with JSON in Node.js and Oracle Database. Combined with node-oracledb's ability to work with LOBs as Node.js Strings, database access is very efficient. Oracle Database 12.1.0.2's JSON features make JSON operations in the database simple. Advances in Oracle Database 12.2 and 18c further improve the functionality and usability. Resources Node-oracledb installation instructions are here. Node-oracledb documentation is here. Issues and questions about node-oracledb can be posted on GitHub.
The Oracle JSON Developer's Guide is here.


Instant Client

Some New Features of Oracle Instant Client 18.3

We released Oracle Database 18.3 for Linux last week. It (and the "full" Oracle Client) can be downloaded from here. Read this to find out about some of the new database features. Many of the readers of my blog have an equal interest in the "client side". You'll be happy to hear that Oracle Instant Client 18.3 for Linux 64-bit and 32-bit is also available. Instant Client is just a rebundling of the Oracle client libraries and some tools. They are the same ones available with an Oracle Database installation or the "full" Oracle Client installation but installation is much simpler: you just unzip a file, or install an RPM package on Linux, and use them to connect your applications to Oracle Database. The "Oracle Client", in whatever install footprint you choose, covers a number of technologies and provides a lot of language APIs. The Instant Client packages contain these APIs and selected tools like SQL*Plus and Data Pump. I'll let those teams blow their own trumpets about the new release. Here I'll talk about some of the Oracle Client functionality that benefits the Oracle Call Interface (OCI) API for C programs, and all the scripting languages that use OCI: My wider group's most exciting project in 18.3 is the Connection Manager (CMAN) Traffic Director mode, whose sub-location in the Oracle manual is a sign of how transparent the feature is, and not indicative of the huge engineering effort that went into it. CMAN in Traffic Director Mode is a proxy between the database clients and the database instances. Supported OCI clients from Oracle Database 11g Release 2 (11.2) and later can connect to CMAN to get improved high availability (HA) for planned and unplanned database server outages, connection multiplexing support, and load balancing. Cherry picking some notable Oracle Client 18c features that are available via OCI: You probably know that Oracle Database 18c is really just a re-badged 12.2.0.2. Due to the major version number change and the new release strategy, there is a new OCIServerRelease2() call to get the database version number. The old OCIServerRelease() function will give just the base release information, so use the new function to get the actual DB patch level. Why? Let's just say there were robust discussions about the upgrade and release cycles, and about handling the "accelerated" version change across the whole database product suite and how things like upgrade tools were affected. Extracting Instant Client 18.3 ZIP files now pre-creates symbolic links for the C and C++ client libraries on relevant operating systems. Yay! One fewer install step. Instant Client now also pre-creates a network/admin sub-directory to show where you can put any optional network and other configuration files such as tnsnames.ora, sqlnet.ora, ldap.ora, and oraaccess.xml. This directory will be used by default for any application that loads the related Oracle Client libraries. Support for Client Result Caching with dynamic binds where descriptors are not involved and the bind length is less than 32768. Since scripting languages tend to use dynamic binds for character data this could be a big performance win for your lookup table queries. Unique ID generation improvements. One little old gotcha, particularly in some hosted or cloud environments, was an error when Oracle applications tried to generate a unique key for your client. This manifested itself as an Oracle error when you tried to start a program. Workarounds included adding a hostname to /etc/hosts.
There were improvements in Oracle Client 18c for unique key generation so the problem should be less common. A new call timeout parameter can be enabled for C applications. This applies to post-connection round-trips to the database, making it easier to interrupt long-running calls and satisfy application quality of service requirements. After you connect, each OCI call may make one or more round-trips to Oracle Database: If the time from the start of any one round-trip to the completion of that same round-trip exceeds the call timeout milliseconds, then the operation is halted and an Oracle error is returned. In the case where an OCI call requires more than one round-trip and each round-trip takes less than the specified number of milliseconds, then no timeout will occur, even if the sum of all round-trip calls exceeds the call timeout value. If no round-trip is required, the operation will never be interrupted. After a timeout has occurred, the connection must be cleaned up. This is allowed to run for the same amount of time as specified for the original timeout. For very small timeouts, if the cleanup fails, then an ORA-3114 is returned and the connection must be released. However, if the cleanup is successful, then an ORA-3136 is returned and the application can continue using the connection. You can see this will be most useful for interrupting SQL statements whose "execute" phase may take some time. The OCI Session pool underlies many application connection pools (and if it doesn't underlie yours, then it should - ask me why). Improvements in 18c session pooling include some usability "do-what-I-mean" parameter size check tweaks, internal lock improvements, and a new attribute OCI_ATTR_SPOOL_MAX_USE_SESSION. One other change that was much debated during development is the OCISessionGet() behavior of OCI_SPOOL_ATTRVAL_NOWAIT mode when a pool has to grow. Prior to 18c, even though it was a 'no wait' operation, getting a connection would actually wait for the pool to grow. Some users didn't like this. Since creating connections could take a few moments, they had no way to control the quality of service. Now in 18c the mode doesn't wait - if there's no free connection immediately available, then control is returned to the application with an error. If you are impacted by the new behavior, then look at using alternative session acquire modes like OCI_SPOOL_ATTRVAL_TIMEDWAIT. Or better, keep your pool a constant size so it doesn't need to grow, which is what is recommended by Oracle's Real World Performance Group. SODA support. Simple Oracle Document Access (SODA), which was previously only available via JDBC, is now available in OCI. Yum. Let's see what we can do with this now that it's in C. More on this later. I hope this has given you a taste of some Oracle Client 18c changes and given you links to explore more. Don't forget that much new database functionality is available to clients transparently or via SQL and PL/SQL. Finally, remember that Oracle has client-server version interoperability so 18c OCI programs can connect to Oracle Database 11.2 or later. It's time to upgrade your client!


Node.js

Demo: GraphQL with Oracle Database and node-oracledb

Some of our node-oracledb users recently commented that they have moved from REST to GraphQL so I thought I'd take a look at what it is all about. I can requote the GraphQL talking points with the best of them, but things like "Declarative Data Fetching" and "a schema with a defined type system is the contract between client and server" are easier to understand with examples. In brief, GraphQL: Provides a single endpoint that responds to queries. No need to create multiple endpoints to satisfy varying client requirements. Has more flexibility and efficiency than REST. Being a query language, you can adjust which fields are returned by queries, so less data needs to be transferred. You can parameterize the queries, for example to alter the number of records returned - all without changing the API or needing new endpoints. Let's look at the payload of a GraphQL query. This query with the root field 'blog' asks for the blog with id of 2. Specifically, it asks for the id, the title and the content of that blog to be returned: { blog(id: 2) { id title content } } The response from the server would contain the three requested fields, for example: { "data": { "blog": { "id": 2, "title": "Blog Title 2", "content": "This is blog 2" } } } Compare that result with this query that does not ask for the title: { blog(id: 2) { id content } } With the same data, this would give: { "data": { "blog": { "id": 2, "content": "This is blog 2" } } } So, unlike REST, we can choose what data needs to be transferred. This makes clients more flexible to develop. Let's look at some code. I came across this nice intro blog post today which shows a basic GraphQL server in Node.js. For simplicity its data store is an in-memory JavaScript object. I changed it to use an Oracle Database backend. The heart of GraphQL is the type system. For the blog example, a type 'Blog' is created in our Node.js application with three obvious values and types: type Blog { id: Int!, title: String!, content: String! } The exclamation mark means a field is required. The part of the GraphQL Schema to query a blog post by id is specified in the root type 'Query': type Query { blog(id: Int): Blog } This defines a capability to query a single blog post and return the Blog type we defined above. We may also want to get all blog posts, so we add a "blogs" field to the Query type: type Query { blog(id: Int): Blog blogs: [Blog], } The square brackets indicate that a list of Blogs is returned. A query to get all blogs would look like: { blogs { id title content } } You can see that the queries include the 'blog' or 'blogs' field. We can pass all queries to the one endpoint and that endpoint will determine how to handle each. There is no need for multiple endpoints. To manipulate data requires some 'mutations', typically making up the CUD of CRUD: input BlogEntry { title: String!, content: String! } type Mutation { createBlog(input: BlogEntry): Blog!, updateBlog(id: Int, input: BlogEntry): Blog!, deleteBlog(id: Int): Blog! } To start with, the "input" type allows us to define input parameters that will be supplied by a client. Here a BlogEntry contains just a title and content. There is no id, since that will be automatically created when a new blog post is inserted into the database. In the mutations, you can see a BlogEntry type is in the argument lists for the createBlog and updateBlog fields. The deleteBlog field just needs to know the id to delete. The mutations all return a Blog. An example of using createBlog is shown later.
Combined, we represent the schema in Node.js like: const typeDefs = ` type Blog { id: Int!, title: String!, content: String! } type Query { blogs: [Blog], blog(id: Int): Blog } input BlogEntry { title: String!, content: String! } type Mutation { createBlog(input: BlogEntry): Blog!, updateBlog(id: Int, input: BlogEntry): Blog!, deleteBlog(id: Int): Blog! }`; This is the contract, defining the data types and available operations. In the backend, I decided to use Oracle Database 12c's JSON features. There's no need to say that using JSON gives developers power to modify and improve the schema during the life of an application: CREATE TABLE blogtable (blog CLOB CHECK (blog IS JSON)); INSERT INTO blogtable VALUES ( '{"id": 1, "title": "Blog Title 1", "content": "This is blog 1"}'); INSERT INTO blogtable VALUES ( '{"id": 2, "title": "Blog Title 2", "content": "This is blog 2"}'); COMMIT; CREATE UNIQUE INDEX blog_idx ON blogtable b (b.blog.id); CREATE SEQUENCE blog_seq START WITH 3; Each field of the JSON strings corresponds to the values of the GraphQL Blog type. (The 'dotted' notation syntax I'm using in this post requires Oracle DB 12.2, but can be rewritten for 12.1.0.2.) The Node.js ecosystem has some powerful modules for GraphQL. The package.json is: { "name": "graphql-oracle", "version": "1.0.0", "description": "Basic demo of GraphQL with Oracle DB", "main": "graphql_oracle.js", "keywords": [], "author": "christopher.jones@oracle.com", "license": "MIT", "dependencies": { "oracledb": "^2.3.0", "express": "^4.16.3", "express-graphql": "^0.6.12", "graphql": "^0.13.2", "graphql-tools": "^3.0.2" } } If you want to see the full graphql_oracle.js file it is here. Digging into it, the application has some 'Resolvers' to handle the client calls. From Dhaval Nagar's demo, I modified these resolvers to invoke new helper functions that I created: const resolvers = { Query: { blogs(root, args, context, info) { return getAllBlogsHelper(); }, blog(root, {id}, context, info) { return getOneBlogHelper(id); } }, [ . . . ] }; To conclude the GraphQL part of the sample, the GraphQL and Express modules hook up the schema type definition from above with the resolvers, and start an Express app: const schema = graphqlTools.makeExecutableSchema({typeDefs, resolvers}); app.use('/graphql', graphql({ graphiql: true, schema })); app.listen(port, function() { console.log('Listening on http://localhost:' + port + '/graphql'); }) On the Oracle side, we want to use a connection pool, so the first thing the app does is start one: await oracledb.createPool(dbConfig); The helper functions can get a connection from the pool. For example, the helper to get one blog is: async function getOneBlogHelper(id) { let sql = 'SELECT b.blog FROM blogtable b WHERE b.blog.id = :id'; let binds = [id]; let conn = await oracledb.getConnection(); let result = await conn.execute(sql, binds); await conn.close(); return JSON.parse(result.rows[0][0]); } The JSON.parse() call nicely converts the JSON string that is stored in the database into the JavaScript object to be returned. Starting the app and loading the endpoint in a browser gives a GraphiQL IDE. After entering the query on the left and clicking the 'play' button, the middle pane shows the returned data. The right hand pane gives the API documentation: To insert a new blog, the createBlog mutation can be used: If you want to play around more, I've put the full set of demo-quality files for you to hack on here. [Update: There is also a more fully explored demo here from Stephen Black]. 
You may want to look at the GraphQL introductory videos, such as this comparison with REST. To finish, GraphQL has the concept of real-time updates with subscriptions, something that ties in well with the Continuous Query Notification feature of node-oracledb 2.3. Yay - something else to play with! But that will have to wait for another day. Let me know if you beat me to it. Update: an implementation using SODA instead of SQL is discussed in a more recent demonstration blog post.


Node.js

Node-oracledb 2.3 with Continuous Query Notifications is on npm

Release announcement: Node-oracledb 2.3.0, the Node.js module for accessing Oracle Database, is on npm. Top features: Continuous Query Notifications. Heterogeneous Connection Pools.     Our 2.x release series continues with some interesting improvements: Node-oracledb 2.3 is now available for your pleasure. Binaries for the usual platforms are available for Node.js 6, 8, and 10; source code is available on GitHub. We are not planning on releasing binaries for Node.js 4 or 9 due to the end of life of Node.js 4, and the release of Node.js 10. The main new features in node-oracledb 2.3 are: Support for Oracle Database Continuous Query Notifications, allowing JavaScript methods to be called when database changes are committed. This is a cool feature useful when applications want to be notified that some data in the database has been changed by anyone. I recently posted a demo showing CQN and Socket.IO keeping a notification area of a web page updated. Check it out. The new node-oracledb connection.subscribe() method is used to register a Node.js callback method, and the SQL query that you want to monitor. It has two main modes: for object-level changes, and for query-level changes. These allow you to get notifications whenever an object changes, or when the result set from the registered query would be changed, respectively. There are also a bunch of configuration options for the quality-of-service and other behaviors. It's worth noting that CQN requires the database to establish a connection back to your node-oracledb machine. Commonly this means that your node-oracledb machine needs a fixed IP address, but it all depends on your network setup. Oracle Database CQN was designed for infrequently modified tables, so make sure you test your system scalability. Support for heterogeneous connection pooling and for proxy support in connection pools. This allows each connection in the pool to use different database credentials. Some users migrating to node-oracledb had schema architectures that made use of this connection style for data encapsulation and auditing. Note that making use of the existing clientId feature may be a better fit for new code, or code that does mid-tier authentication. A Pull Request from Danilo Silva landed, making it possible for Windows users to build binaries for self-hosting. Thanks Danilo! Previously this was only possible on Linux and macOS. Support for 'fetchAsString' and 'fetchInfo' to allow fetching RAW columns as hex-encoded strings. See the CHANGELOG for the bug fixes and other changes. Resources Node-oracledb installation instructions are here. Node-oracledb documentation is here. Node-oracledb change log is here. Issues and questions about node-oracledb can be posted on GitHub. Finally, contributions to node-oracledb are more than welcome, see CONTRIBUTING.


General

ODPI-C 2.4 has been released

Release 2.4 of Oracle Database Programming Interface for C (ODPI-C) is now available on GitHub. ODPI-C is an open source library of C code that simplifies access to Oracle Database for applications written in C or C++. Top features: Better database notification support. New pool timeout support.   I'll keep this brief. See the Release Notes for all changes. Support for Oracle Continuous Query Notification and Advanced Queuing notifications was improved. Notably replacement subscribe and unsubscribe methods were introduced to make use more flexible. Support for handling AQ notifications was added, so now you can get notified there is a message to dequeue. And settings for the listening IP address, for notification grouping, and to let you check the registration status are now available. Some additional timeout options for connection pools were exposed. Some build improvements were made: the SONAME is set in the shared library on *ix platforms. There is also a new Makefile 'install' target that installs using a standard *ix footprint. ODPI-C References Home page: https://oracle.github.io/odpi/ Code: https://github.com/oracle/odpi Documentation: https://oracle.github.io/odpi/doc/index.html Release Notes: https://oracle.github.io/odpi/doc/releasenotes.html Installation Instructions: oracle.github.io/odpi/doc/installation.html Report issues and discuss: https://github.com/oracle/odpi/issues


Node.js

Demo: Oracle Database Continuous Query Notification in Node.js

Native Oracle Database Continuous Query Notification (CQN) code has landed in the node-oracledb master branch on GitHub. If you want to play with it, but don't want to wait for the next binary node-oracledb release, you can compile node-oracledb yourself and play with this demo. [Update: node-oracledb 2.3 has been released and binaries are now available.]     Some of you may already be using CQN via its PL/SQL APIs. The new, native support in node-oracledb makes it all so much nicer. Check out the development documentation for connection.subscribe() and the 'user manual'. There are a couple of examples cqn1.js and cqn2.js available, too. CQN allows JavaScript methods to be called when database changes are committed by any transaction. You enable it in your node-oracledb app by registering a SQL query. CQN has two main modes: object-level and query-level. The former sends notifications (i.e. calls your nominated JavaScript method) when changes are made to database objects used in your registered query. The query-level mode only sends notifications when database changes are made that would impact the result set of the query, e.g. the WHERE clause is respected. If you're not using CQN, then you might wonder when you would. For infrequently updated tables you can get CQN to generate notifications on any data or table change. I can see how query-level mode might be useful for proactive auditing to send alerts when an unexpected, but valid, value is inserted or deleted from a table. For tables with medium levels of updates, CQN allows grouping of notifications by time, which is a way of reducing load by preventing too many notifications being generated in too short a time span. But, as my colleague Dan McGhan points out, if you know the table is subject to a lot of change, then your apps will be better off simply polling the table and avoiding any CQN overhead. Note that CQN was designed to be used for relatively infrequently updated tables. DEMO APP I've thrown together a little app that uses CQN and Socket.IO to refresh a message notification area on a web page. It's really just a simple smush of the Socket.IO intro example and the node-oracledb CQN examples. There is a link to all the code in the next section of this post; I'll just show snippets inline here. I'm sure Dan will update his polished 'Real-time Data' example soon, but until then here is my hack code. It uses Node.js 8's async/await style - you can rewrite it if you have an older Node.js version. One thing about CQN is that the node-oracledb computer must be resolvable by the Database computer; typically this means having a fixed IP address which may be an issue with laptops and DHCP. Luckily plenty of other cases work too. For example, I replaced my Docker web service app with a CQN example and didn't need to do anything with ports or identifying IP addresses. I'll leave you to decide how to run it in your environment. There are CQN options to set the IP address and port to listen on, which may be handy. The demo premise is a web page with a message notification area that always shows the five most recent messages in a database table. The messages are being inserted into that table by other apps (I'll just use SQL*Plus to do these inserts) and the web page needs to be updated with them only when there is a change. 
I'm just using dummy data and random strings. To see how it fits together, consider the four components: SQL*Plus, the database, the browser and the Node.js app. The app serves the index page to the browser. It connects to the DB and uses CQN to register interest in msgtable. Any data change in the table from SQL*Plus triggers a CQN notification from the database to the application, and the callback is invoked. This callback queries the table and uses Socket.IO to send the latest records to the browser, which updates the index.html DOM. The first thing is to get your DBA (i.e. log in as the SYSTEM user) to give you permission to get notifications: GRANT CHANGE NOTIFICATION TO cj; We then need a table that our app will get notifications about, and then query to get the latest messages: CREATE TABLE cj.msgtable ( k NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY(START WITH 1), message VARCHAR(100) ); The column K is an Oracle Database 12c identity column that will automatically get a unique number inserted whenever a new message is inserted. In older database versions you would create a sequence and trigger to do the same. The little SQL script I use to insert data (and trigger notifications) is: INSERT INTO msgtable (message) VALUES (DBMS_RANDOM.STRING('A', 10)); COMMIT; The Node.js app code is more interesting, but not complex. Here is the code that registers the query: conn = await oracledb.getConnection(); await conn.subscribe('mysub', { callback: myCallback, sql: "SELECT * FROM msgtable" }); console.log("CQN subscription created"); Although CQN has various options to control its behavior, here I keep it simple - I just want to get notifications when any data change to msgtable is committed. When the database sends a notification, the method 'myCallback' will get a message, the contents of which will vary depending on the subscription options. Since I know the callback is invoked when any table data has changed, I ignore the message contents and go ahead and query the table.
The rows are then stringified and, by the magic of Socket.IO, sent to the web page: async function myCallback(message) { let rows = await getData(); // query the msgtable io.emit('message', JSON.stringify(rows)); // update the web page } The helper function to query the table is obvious: async function getData() { let sql = `SELECT k, message FROM msgtable ORDER BY k DESC FETCH NEXT :rowcount ROWS ONLY`; let binds = [5]; // get 5 most recent messages let options = { outFormat: oracledb.OBJECT }; let conn = await oracledb.getConnection(); let result = await conn.execute(sql, binds, options); await conn.close(); return result.rows; } At the front end, the HTML for the web page contains a 'messages' element that is populated by JQuery code when a message is received by Socket.IO: <ul id="messages"></ul> <script src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/2.1.1/socket.io.js"></script> <script src="https://code.jquery.com/jquery-3.3.1.js"></script> <script> $(function () { var socket = io(); socket.on('message', function(msg){ $('#messages').empty(); $.each(JSON.parse(msg), function(idx, obj) { $('#messages').append($('<li>').text(obj.K + ' ' + obj.MESSAGE)); }); }); }); </script> You can see that the JSON string received from the app server is parsed and the K and MESSAGE fields of each row object (corresponding to the table columns of the same names) are inserted into the DOM in an unordered list. That's it. DEMO IN ACTION To see it in action, extract the code and install the dependencies: cjones@mdt:~/n/cqn-sockets$ npm install npm WARN CQN-Socket-Demo@0.0.1 No repository field. npm WARN CQN-Socket-Demo@0.0.1 No license field. added 86 packages in 2.065s I cheated a bit there and didn't show node-oracledb compiling. Once a production release of node-oracledb is made, you should edit the package.json dependency to use its pre-built binaries. Until then, node-oracledb code will be downloaded and compiled - check the instructions for compiling. Edit server.js and set your database credentials - or set the referenced environment variables: let dbConfig = { user: process.env.NODE_ORACLEDB_USER, password: process.env.NODE_ORACLEDB_PASSWORD, connectString: process.env.NODE_ORACLEDB_CONNECTIONSTRING, events: true // CQN needs events mode } Then start the app server: cjones@mdt:~/n/cqn-sockets$ npm start > CQN-Socket-Demo@0.0.1 start /home/cjones/n/cqn-sockets > node server.js CQN subscription created Listening on http://localhost:3000 Then load http://localhost:3000/ in a browser. Initially the message pane is blank - I left bootstrapping it as an exercise for the reader. Start SQL*Plus in a terminal window and create a message: SQL> INSERT INTO msgtable (message) VALUES (DBMS_RANDOM.STRING('A', 10)); SQL> COMMIT; Every time data is committed to msgtable, the message list on the web page is automatically updated: If you don't see messages, review Troubleshooting CQN Registrations. The common problems will be network related: the node-oracledb machine must be resolvable, the port must be open etc. Try it out and let us know how you go. Remember you are using development code that just landed, so there may be a few rough edges.


Docker

A node-oracledb Web Service in Docker

This post shows how to run a node-oracledb application in a Docker Container. For bonus points, the application connects to an Oracle Database running in a second container. The steps are the 'show notes' from a recent talk at Oracle Code. The demo app is a simple Express web service that accepts REST calls.   DOCKER Oracle Docker images are available from https://store.docker.com/ and also mirrored on https://container-registry.oracle.com If you're not familiar with Docker, it helps to know basic terminology: Images: Collection of software to be run as a container. Images are immutable. Changes to an image require a new image build. Registry: Place to store and download images. Container: A lightweight standalone, executable piece of software that includes everything required to run it on a host. Containers are spun up from images. Containers are non-persistent. Once a container is deleted, all files inside that container are gone. Docker engine: The software engine running containers. Volumes: Place to persist data outside the container. CONFIGURE A DOCKER HOST For my host, I used Oracle Linux 7, which has the ol7_latest and ol7_uekr4  channels already enabled. Install the Docker engine as the root user by running 'sudo su -', or prefix each command with 'sudo': # yum-config-manager --enable ol7_addons # yum install docker-engine # systemctl enable docker # systemctl start docker DOWNLOAD INSTANT CLIENT AND DATABASE DOCKER IMAGES Sign in to the container registry https://container-registry.oracle.com/ with your (free) Oracle "single sign-on" (SSO) credentials. Accept the license on the container registry. On your OL7 Docker host, log in to the registry. Remember to run Docker commands as 'root': # docker login container-registry.oracle.com This prompts for your Oracle SSO credentials. Get the Oracle Database and Oracle Instant Client images: # docker pull container-registry.oracle.com/database/enterprise:12.2.0.1 # docker pull container-registry.oracle.com/database/instantclient:12.2.0.1 This can take a while. For testing, you may want to pull the smaller, 'slim' version of the database. [Update: Oracle Instant Client 18.3 can be downloaded without clickthrough meaning you can build images easily with this Dockerfile. Since it is based on the Oracle Linux 'slim' image, the container size is considerably smaller.] View the installed images with: # docker images REPOSITORY TAG IMAGE ID CREATED SIZE container-registry.oracle.com/database/enterprise 12.2.0.1 12a359cd0528 7 months ago 3.44GB container-registry.oracle.com/database/instantclient 12.2.0.1 fda46de41de3 7 months ago 407MB CREATE A DATABASE CONTAINER FROM THE DATABASE IMAGE Start the database container: # docker run -d --name demodb -P container-registry.oracle.com/database/enterprise:12.2.0.1 The '-P' option maps the ports used, allowing access to the database from outside the container. Check for its health and wait until it shows 'healthy' # docker ps CONTAINER ID IMAGE COMMAND STATUS PORTS NAMES 9596bc2345d3 [...]/database/[...] "/bin/sh -c '/bin/..." Up 3 hours (healthy) ...->1521/tcp, ...->5500/tcp demodb Find the database container's IP address: # docker inspect -f "{{ .NetworkSettings.IPAddress }}" demodb You will use this IP in database connect strings in your applications. You can stop and start the container as desired: # docker stop demodb # docker start demodb The data is persistent as long as the container exists. Use 'docker ps --all' to show all containers, running or not. 
CREATE A NEW SCHEMA Create a SQL file called createschema.sql: SET ECHO ON ALTER SESSION SET CONTAINER=orclpdb1; DROP USER scott CASCADE; CREATE USER scott IDENTIFIED BY tiger; GRANT CONNECT, RESOURCE TO scott; ALTER USER scott QUOTA UNLIMITED ON USERS; DROP TABLE scott.bananas; CREATE TABLE scott.bananas (shipment VARCHAR2(4000) CHECK (shipment IS JSON)); INSERT INTO scott.bananas VALUES ('{ "farmer": "Gita", "ripeness": "All Green", "kilograms": 100 }'); INSERT INTO scott.bananas VALUES ('{ "farmer": "Ravi", "ripeness": "Full Yellow", "kilograms": 90 }'); INSERT INTO scott.bananas VALUES ('{ "farmer": "Mindy", "ripeness": "More Yellow than Green", "kilograms": 92 }'); COMMIT; EXIT For this demo, you can see I used the Oracle Database 12.1.0.2 JSON data type. Execute createschema.sql in your favorite tool, such as SQL*Plus. In my case, I actually ran SQL*Plus on my Docker host machine. Cheating a bit on giving details here, I had downloaded the Instant Client Basic and SQL*Plus packages and unzipped them as shown in the Instant Client instructions. I then set my shell to use the SQL*Plus binary: # export LD_LIBRARY_PATH=/home/cjones/instantclient_12_2 # export PATH=/home/cjones/instantclient_12_2:$PATH Using the database IP address shown earlier, you can now run the script in SQL*Plus against the container database. In my environment the database IP was 172.17.0.2: # sqlplus -l sys/Oradoc_db1@172.17.0.2/orclpdb1.localdomain as sysdba @createschema.sql The database password and service name shown are the defaults in the image. CREATE A NODE.JS IMAGE Let's add Node.js to the Instant Client image. Create a sub-directory nodejs-scripts: # mkdir nodejs-scripts Create a new file 'nodejs-scripts/Dockerfile'. This is the 'recipe' for building a Docker image. Here Node.js is added to the Instant Client image to create a new image usable by any Node.js application. The Node.js 8 package for Oracle Linux is handy. The Dockerfile should contain: FROM container-registry.oracle.com/database/instantclient:12.2.0.1 ADD ol7_developer_nodejs8.repo /etc/yum.repos.d/ol7_developer_nodejs8.repo RUN echo proxy=http://my-proxy.example.com:80 >> /etc/yum.conf RUN yum -y update && \ rm -rf /var/cache/yum && \ yum -y install nodejs The FROM line shows that we base our new image on the Instant Client image. If you are not behind a proxy, you can omit the proxy line. Or change the line to use your proxy. For quick testing, you may want to omit the 'yum -y update' command. The Dockerfile ADD command copies 'ol7_developer_nodejs8.repo' from the host file system into the image's file system. Create 'nodejs-scripts/ol7_developer_nodejs8.repo' containing: [ol7_developer_nodejs8] name=Oracle Linux $releasever Node.js 8 Packages for Development and test ($basearch) baseurl=https://yum.oracle.com/repo/OracleLinux/OL7/developer_nodejs8/$basearch/ gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle gpgcheck=1 enabled=1 Now the new image with Oracle Instant Client and Node.js 8 can be built using this Dockerfile: docker build -t cjones/nodejs-image nodejs-scripts The 'cjones/nodejs-image' is the image name, not a directory path.
You can see the new image has been created: # docker images REPOSITORY TAG IMAGE ID CREATED SIZE cjones/nodejs-image latest e048b739bb63 29 minutes ago 1.51GB container-registry.oracle.com/database/enterprise 12.2.0.1 12a359cd0528 7 months ago 3.44GB container-registry.oracle.com/database/instantclient 12.2.0.1 fda46de41de3 7 months ago 407MB CREATE A NODE.JS DEMO IMAGE The new Node.js image is refined by installing our demo application. This creates another new image that we can later run whenever we want to use the application. Create a sub-directory 'ws-demo-scripts': # mkdir ws-demo-scripts Create a new file 'ws-demo-scripts/Dockerfile' containing: FROM cjones/nodejs-image ENV https_proxy=http://my-proxy.example.com:80 WORKDIR workdir COPY package.json package.json COPY server.js server.js RUN npm install CMD ["npm", "start"] The first line shows the new image should be based on the Node.js image 'cjones/nodejs-image' created in the section above. Again, adjust the proxy line as needed by your network. You can see the Dockerfile copies two files from our host file system into the image. These files are shown below. When the image is created, the RUN command will install the Node.js dependencies from package.json. When a container starts, the CMD action is taken, which runs 'npm start', in turn invoking the 'main' target in package.json. Looking below to the package.json content, you can see this means 'node server.js' is run. Create a file 'ws-demo-scripts/package.json' containing: { "name": "banana-farmer", "version": "1.0.0", "description": "RESTful API using Node.js Express Oracle DB", "main": "server.js", "author": "Oracle", "license": "Apache", "dependencies": { "body-parser": "^1.18.2", "express": "^4.16.0", "oracledb": "^2.2.0" } } As obvious, this application installs the body-parser module, the node-oracledb module, and also express. This demo is an Express web service application. And yes, it is a Banana Farmer web service. The default run target of package.json is the application file 'server.js'. Create the application file 'ws-demo-scripts/server.js' containing the contents from here. The demo application is just this one file. Build the demo image: # docker build -t cjones/ws-demo ws-demo-scripts We now have our fourth image which contains our runnable application: # docker images REPOSITORY TAG IMAGE ID CREATED SIZE cjones/ws-demo latest 31cbe6d2ea4e 21 seconds ago 1.51GB cjones/nodejs-image latest e048b739bb63 29 minutes ago 1.51GB container-registry.oracle.com/database/enterprise 12.2.0.1 12a359cd0528 7 months ago 3.44GB container-registry.oracle.com/database/instantclient 12.2.0.1 fda46de41de3 7 months ago 407MB DEMO APPLICATION OVERVIEW The Banana Farmer scenario is that shipments of bananas from farmers are recorded. They can have a farmer name, ripeness, and weight. Shipments can be inserted, queried, updated or deleted. Let's look at a couple of snippets from ws-demo-scripts/server.js. A connection helper creates a pool of database connections: oracledb.createPool({  user: process.env.NODE_ORACLEDB_USER, password: process.env.NODE_ORACLEDB_PASSWORD, connectString: process.env.NODE_ORACLEDB_CONNECTIONSTRING }, . . . The credentials are taken from environment variables. When we run the app container we will pass value for those environment variables into the container. The application has Express routes for REST GET, POST, PUT and DELETE calls. 
The code to handle a GET request looks like: // HTTP method: GET // URI : /bananas/FARMER // Get the banana shipment for FARMER app.get('/bananas/:FARMER', function (req, res) { doGetConnection(res, function(err, connection) { if (err) return; connection.execute( "SELECT b.shipment FROM bananas b WHERE b.shipment.farmer = :f", { f: req.params.FARMER }, function (err, result) { if (err) { res.set('Content-Type', 'application/json'); res.status(500).send(JSON.stringify({ status: 500, message: "Error getting the farmer's profile", detailed_message: err.message })); } else if (result.rows.length < 1) { res.set('Content-Type', 'application/json'); res.status(404).send(JSON.stringify({ status: 404, message: "Farmer doesn't exist", detailed_message: "" })); } else { res.contentType('application/json'); res.status(200).send(JSON.stringify(result.rows)); } doRelease(connection, "GET /bananas/" + req.params.FARMER); }); }); }); Express makes it easy. It handles the routing to this code when a GET request with the URL '/bananas/<name>' e.g. '/bananas/Gita' is called. This simply binds the URL route parameter containing the farmer’s name into the SELECT statement. Binding is important for security and scalability, as you know. The SQL syntax used is the JSON 'dot' notation of Oracle Database 12.2 but it could be rewritten to work with 12.1.0.2. The bulk of the code is error handling, looking after the cases where there was a processing error or no rows returned. It sends back HTTP status codes 500 or 404, respectively. The success code path sends back the query output 'result.rows' as a JSON string, with the HTTP success status code 200. START THE DEMO CONTAINER Let's run the application. Create a file 'ws-demo-scripts/envfile.list' with the credentials for the application. Use the IP address of your database container found with the 'docker inspect' command shown previously. In my environment, the database IP address was '172.17.0.2' NODE_ORACLEDB_USER=scott NODE_ORACLEDB_PASSWORD=tiger NODE_ORACLEDB_CONNECTIONSTRING=172.17.0.2/orclpdb1.localdomain Start the Node.js web service container # docker run -d --name nodejs -P --env-file ws-demo-scripts/envfile.list cjones/ws-demo STATUS CHECK To recap what's happened, the Docker images are: # docker images REPOSITORY TAG IMAGE ID CREATED SIZE cjones/ws-demo latest 25caede29b17 12 minutes ago 1.51GB cjones/nodejs-image latest 138f2b76ffe7 13 minutes ago 1.51GB container-registry.oracle.com/database/enterprise 12.2.0.1 12a359cd0528 7 months ago 3.44GB container-registry.oracle.com/database/instantclient 12.2.0.1 fda46de41de3 7 months ago 407MB Two base images were downloaded, An image with Node.js was created from the Instant Client image. Finally a fourth image 'cjones/ws-demo' with the Node.js, Instant Client and the application code was created. We have started database ('demodb') and application containers ('nodejs'): # docker ps CONTAINER ID IMAGE COMMAND STATUS PORTS NAMES 2924e1225290 cjones/ws-demo ”npm start" Up 3 hours nodejs 9596bc2345d3 [...]/database/[...] "/bin/sh -c '/bin/..." Up 3 hours (healthy) ...->1521/tcp, ...->5500/tcp demodb We found the IP address of the database container, and knew (by reading the container registry documentation) the default credentials of the SYS user. We created a schema SCOTT on the database, with a table containing some JSON data. An application container was started, with the database application credentials and connection string specified in an environment file outside the container. 
SUBMIT REST REQUESTS Now we can call our application, and it will access the database. Install the browser extension HttpRequester (in Firefox) or Postman (in Chrome). Find the IP of the demo web service container: # docker inspect -f "{{ .NetworkSettings.IPAddress }}" nodejs In my environment, it was '172.17.0.3'. Use this with the port (3000) and various endpoints (e.g. '/bananas/<farmer>') defined in server.js for REST requests. In the HttpRequester or Postman extensions you can make various REST calls. Get all shipments: GET http://172.17.0.3:3000/bananas Get one farmer's shipment(s): GET http://172.17.0.3:3000/bananas/Gita New data: POST http://172.17.0.3:3000/bananas { "farmer" : "CJ", "ripeness" : "Light Green", "kilograms" : 50 } Update data: PUT http://172.17.0.3:3000/bananas/CJ { "farmer" : "CJ", "ripeness" : "50% Green, 50% Yellow", "kilograms" : 45 } Here's a screenshot of HttpRequester in action doing a GET request to get all banana shipments. On the left, the red boxes show the URL for the '/bananas' endpoint was executed as a GET request. On the right, the response shows the success HTTP status code of 200 and the returned data from the request: When you are finished with the containers you can stop them: # docker stop demodb # docker stop nodejs If you haven't tried Docker yet, now is the perfect time! Containers make deployment and development easy. Oracle's Docker images let you get started with Oracle products very quickly. If you want to learn more about Node.js and REST, check out the series Creating a REST API with Node.js and Oracle Database. And finally, if you've read this far, you can download all the files used from here.


Python

Efficient and Scalable Batch Statement Execution in Python cx_Oracle

  Today's guest post is by Oracle's Anthony Tuininga, creator and lead maintainer of cx_Oracle, the extremely popular Oracle Database interface for Python. It shows how to use a feature of cx_Oracle that improves performance of large INSERT and UPDATE operations. This is very useful for loading data into Oracle Database, for example from CSV files.   Introduction This article shows how batch statement execution in the Python cx_Oracle interface for Oracle Database can significantly improve performance and make working with large data sets easy. In many cx_Oracle applications, executing SQL and PL/SQL statements using the method cursor.execute() is perfect. But if you intend to execute the same statement repeatedly for a large set of data, your application can incur significant overhead, particularly if the database is on a remote network. The method cursor.executemany() gives you the ability to reduce network transfer costs and database load, and can significantly outperform repeated calls to cursor.execute(). SQL To help demonstrate batch execution, the following tables and data will be used: create table ParentTable ( ParentId number(9) not null, Description varchar2(60) not null, constraint ParentTable_pk primary key (ParentId) ); create table ChildTable ( ChildId number(9) not null, ParentId number(9) not null, Description varchar2(60) not null, constraint ChildTable_pk primary key (ChildId), constraint ChildTable_fk foreign key (ParentId) references ParentTable ); insert into ParentTable values (10, 'Parent 10'); insert into ParentTable values (20, 'Parent 20'); insert into ParentTable values (30, 'Parent 30'); insert into ParentTable values (40, 'Parent 40'); insert into ParentTable values (50, 'Parent 00'); insert into ChildTable values (1001, 10, 'Child A of Parent 10'); insert into ChildTable values (1002, 20, 'Child A of Parent 20'); insert into ChildTable values (1003, 20, 'Child B of Parent 20'); insert into ChildTable values (1004, 20, 'Child C of Parent 20'); insert into ChildTable values (1005, 30, 'Child A of Parent 30'); insert into ChildTable values (1006, 30, 'Child B of Parent 30'); insert into ChildTable values (1007, 40, 'Child A of Parent 40'); insert into ChildTable values (1008, 40, 'Child B of Parent 40'); insert into ChildTable values (1009, 40, 'Child C of Parent 40'); insert into ChildTable values (1010, 40, 'Child D of Parent 40'); insert into ChildTable values (1011, 40, 'Child E of Parent 40'); insert into ChildTable values (1012, 50, 'Child A of Parent 50'); insert into ChildTable values (1013, 50, 'Child B of Parent 50'); insert into ChildTable values (1014, 50, 'Child C of Parent 50'); insert into ChildTable values (1015, 50, 'Child D of Parent 50'); commit; Simple Execution To insert a number of rows into the parent table, the following naive Python script could be used: data = [ (60, "Parent 60"), (70, "Parent 70"), (80, "Parent 80"), (90, "Parent 90"), (100, "Parent 100") ] for row in data: cursor.execute(""" insert into ParentTable (ParentId, Description) values (:1, :2)""", row) This works as expected and five rows are inserted into the table. Each execution, however, requires a "round-trip" to the database. A round-trip is defined as the client (i.e. the Python script) making a request to the database and the database sending back its response to the client. In this case, five round-trips are required. As the number of executions increases, the cost increases in a linear fashion based on the average round-trip cost. 
This cost is dependent on the configuration of the network between the client (Python script) and the database server, as well as on the capability of the database server. Batch Execution Performing the same inserts using executemany() would be done as follows: data = [ (60, "Parent 60"), (70, "Parent 70"), (80, "Parent 80"), (90, "Parent 90"), (100, "Parent 100") ] cursor.executemany(""" insert into ParentTable (ParentId, Description) values (:1, :2)""", data) In this case there is only one round-trip to the database, not five. In fact, no matter how many rows are processed at the same time there will always be just one round-trip. As the number of rows processed increases, the performance advantage of cursor.executemany() skyrockets. For example, on my machine inserting 1,000 rows into the same table in a database on the local network using cursor.execute() takes 410 ms, whereas using cursor.executemany() requires only 20 ms. Increasing the number to 10,000 rows requires 4,000 ms for cursor.execute() but only 60 ms for cursor.executemany()! For really huge data sets there may be external buffer or network limits to how many rows can be processed at one time. These limits are based on both the number of rows being processed as well as the "size" of each row that is being processed. The sweet spot can be found by tuning your application. Repeated calls to executemany() are still better than repeated calls to execute(). As mentioned earlier, execution of PL/SQL statements is also possible. Here is a brief example demonstrating how to do so: data = [[2], [6], [4]] var = cursor.var(str, arraysize = len(data)) data[0].append(var) # OUT bind variable ':2' cursor.executemany(""" declare t_Num number := :1; t_OutValue varchar2(100); begin for i in 1..t_Num loop t_OutValue := t_OutValue || 'X'; end loop; :2 := t_OutValue; end;""", data) print("Result:", var.values) This results in the following output: Result: ['XX', 'XXXXXX', 'XXXX'] Using executemany() With the significant performance advantages that can be seen by performing batch execution of a single statement it would seem obvious to use cursor.executemany() whenever possible. Let's look at some of the other features of executemany() useful for common data handling scenarios. Scenario 1: Getting Affected Row Counts One scenario that may arise is the need to determine how many rows are affected by each row of data that is passed to cursor.executemany(). Consider this example: for parentId in (10, 20, 30): cursor.execute("delete from ChildTable where ParentId = :1", [parentId]) print("Rows deleted for parent id", parentId, "are", cursor.rowcount) This results in the following output: Rows deleted for parent id 10 are 1 Rows deleted for parent id 20 are 3 Rows deleted for parent id 30 are 2 Since each delete is performed independently, determining how many rows are affected by each delete is easy to do. But what happens if we use cursor.executemany() in order to improve performance as in the following rewrite? data = [[10], [20], [30]] cursor.executemany("delete from ChildTable where ParentId = :1", data) print("Rows deleted:", cursor.rowcount) This results in the following output: Rows deleted: 6 You'll note this is the sum of all of the rows that were deleted in the prior example, but the information on how many rows were deleted for each parent id is missing. 
Fortunately, that can be determined by enabling the Array DML Row Counts feature, available in Oracle Database 12.1 and higher: data = [[10], [20], [30]] cursor.executemany("delete from ChildTable where ParentId = :1", data, arraydmlrowcounts = True) for ix, rowsDeleted in enumerate(cursor.getarraydmlrowcounts()): print("Rows deleted for parent id", data[ix][0], "are", rowsDeleted) This results in the same output as was shown for the simple cursor.execute(): Rows deleted for parent id 10 are 1 Rows deleted for parent id 20 are 3 Rows deleted for parent id 30 are 2 Scenario 2: Handling Bad Data Another scenario is handling bad data. When processing large amounts of data some of that data may not fit the constraints imposed by the database. Using cursor.execute() such processing may look like this: data = [ (1016, 10, 'Child B of Parent 10'), (1017, 10, 'Child C of Parent 10'), (1018, 20, 'Child D of Parent 20'), (1018, 20, 'Child D of Parent 20'), # duplicate key (1019, 30, 'Child C of Parent 30'), (1020, 30, 'Child D of Parent 40'), (1021, 600, 'Child A of Parent 600'), # parent does not exist (1022, 40, 'Child F of Parent 40'), ] for ix, row in enumerate(data): try: cursor.execute(""" insert into ChildTable (ChildId, ParentId, Description) values (:1, :2, :3)""", row) except cx_Oracle.DatabaseError as e: print("Row", ix, "has error", e) This results in the following output: Row 3 has error ORA-00001: unique constraint (EMDEMO.CHILDTABLE_PK) violated Row 6 has error ORA-02291: integrity constraint (EMDEMO.CHILDTABLE_FK) violated - parent key not found If you make use of cursor.executemany(), however, execution stops at the first error that is encountered: data = [ (1016, 10, 'Child B of Parent 10'), (1017, 10, 'Child C of Parent 10'), (1018, 20, 'Child D of Parent 20'), (1018, 20, 'Child D of Parent 20'), # duplicate key (1019, 30, 'Child C of Parent 30'), (1020, 30, 'Child D of Parent 40'), (1021, 600, 'Child A of Parent 600'), # parent does not exist (1022, 40, 'Child F of Parent 40'), ] try: cursor.executemany(""" insert into ChildTable (ChildId, ParentId, Description) values (:1, :2, :3)""", data) except cx_Oracle.DatabaseError as e: errorObj, = e.args print("Row", cursor.rowcount, "has error", errorObj.message) This results in the following output: Row 3 has error ORA-00001: unique constraint (EMDEMO.CHILDTABLE_PK) violated Fortunately there is an option to help here as well, using the Batch Errors feature available in Oracle Database 12.1 and higher. 
This can be seen using the following code: data = [ (1016, 10, 'Child B of Parent 10'), (1017, 10, 'Child C of Parent 10'), (1018, 20, 'Child D of Parent 20'), (1018, 20, 'Child D of Parent 20'), # duplicate key (1019, 30, 'Child C of Parent 30'), (1020, 30, 'Child D of Parent 40'), (1021, 600, 'Child A of Parent 600'), # parent does not exist (1022, 40, 'Child F of Parent 40'), ] cursor.executemany(""" insert into ChildTable (ChildId, ParentId, Description) values (:1, :2, :3)""", data, batcherrors = True) for errorObj in cursor.getbatcherrors(): print("Row", errorObj.offset, "has error", errorObj.message) This results in the following output, which is identical to the example that used cursor.execute(): Row 3 has error ORA-00001: unique constraint (EMDEMO.CHILDTABLE_PK) violated Row 6 has error ORA-02291: integrity constraint (EMDEMO.CHILDTABLE_FK) violated - parent key not found In both the execute() and executemany() cases, rows that were inserted successfully open a transaction which will need to be either committed or rolled back with connection.commit() or connection.rollback(), depending on the needs of your application. Note that if you use autocommit mode, the transaction is committed only when no errors are returned; otherwise, a transaction is left open and will need to be explicitly committed or rolled back. Scenario 3: DML RETURNING Statements The third scenario that I will consider is that of DML RETURNING statements. These statements allow you to bundle a DML statement (such as INSERT, UPDATE, DELETE and MERGE statements) along with a query to return some data at the same time. With cursor.execute() this is done easily enough using something like the following code: childIdVar = cursor.var(int) cursor.setinputsizes(None, childIdVar) for parentId in (10, 20, 30): cursor.execute(""" delete from ChildTable where ParentId = :1 returning ChildId into :2""", [parentId]) print("Child ids deleted for parent id", parentId, "are", childIdVar.values) This produces the following output: Child ids deleted for parent id 10 are [1001] Child ids deleted for parent id 20 are [1002, 1003, 1004] Child ids deleted for parent id 30 are [1005, 1006] Support for DML RETURNING in cursor.executemany() was introduced in cx_Oracle 6.3. Because it was supported only in execute() prior to cx_Oracle 6.3, the cx_Oracle.__future__ object must have the attribute "dml_ret_array_val" set to True to allow multiple values to be returned by executemany(). Failing to set this to True when calling executemany() will result in an error. Finally, the variable created to accept the returned values must have an array size large enough to accept the rows that are returned (one array of output data is returned for each of the input records that are provided). 
The following code shows the new executemany() support in cx_Oracle 6.3: cx_Oracle.__future__.dml_ret_array_val = True data = [[10], [20], [30]] childIdVar = cursor.var(int, arraysize = len(data)) cursor.setinputsizes(None, childIdVar) cursor.executemany(""" delete from ChildTable where ParentId = :1 returning ChildId into :2""", data) for ix, inputRow in enumerate(data): print("Child ids deleted for parent id", inputRow[0], "are", childIdVar.getvalue(ix)) This results in the same output as was seen with cursor.execute(): Child ids deleted for parent id 10 are [1001] Child ids deleted for parent id 20 are [1002, 1003, 1004] Child ids deleted for parent id 30 are [1005, 1006] Note that using "dml_ret_array_val" set to True with execute() causes arrays to be returned for each bind record. In a future cx_Oracle 7 release this will become the only behavior available. Scenario 4: Variable Data Lengths When multiple rows of data are being processed there is the possibility that the data is not uniform in type and size. cx_Oracle makes some effort to accommodate such differences. For example, type determination is deferred until a value that is not None is found in the data. If all values in a particular column are None, then cx_Oracle assumes the type is a string and has a length of 1. cx_Oracle will also adjust the size of the buffers used to store strings and bytes when a longer value is encountered in the data. These sorts of operations, however, will incur overhead as cx_Oracle has to reallocate memory and copy all of the data that has been processed thus far. To eliminate this overhead, the method cursor.setinputsizes() should be used to tell cx_Oracle about the type and size of the data that is going to be used. For example: data = [ (110, "Parent 110"), (2000, "Parent 2000"), (30000, "Parent 30000"), (400000, "Parent 400000"), (5000000, "Parent 5000000") ] cursor.setinputsizes(None, 20) cursor.executemany(""" insert into ParentTable (ParentId, Description) values (:1, :2)""", data) In this example, without the call to cursor.setinputsizes(), cx_Oracle would perform five allocations of increasing size as it discovered each new, larger string. The value 20, however, tells cx_Oracle that the maximum size of the strings that will be processed is 20 characters. Since cx_Oracle allocates memory for each row based on this value it is best not to oversize it. Note that if the type and size are uniform (like they are for the first column in the data being inserted), the type does not need to be specified and None can be provided, indicating that the default type (in this case cx_Oracle.NUMBER) should be used. Conclusion As can be seen by the preceding examples, cursor.executemany() lets you manage data easily and enjoy high performance at the same time!
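To tie the scenarios together, here is a minimal sketch (not part of the original article) of one way to bulk-load a CSV file with executemany() in fixed-size batches. The file name parents.csv, the credentials and the batch size are illustrative assumptions; the ParentTable columns match the examples above.

import csv
import cx_Oracle

connection = cx_Oracle.connect("user", "password", "localhost/orclpdb")
cursor = connection.cursor()

BATCH_SIZE = 10000   # tune for your data volume and network

def insert_batch(rows):
    # ParentId uses the default type (cx_Oracle.NUMBER); Description is at most 60 characters
    cursor.setinputsizes(None, 60)
    cursor.executemany(
        "insert into ParentTable (ParentId, Description) values (:1, :2)", rows)

with open("parents.csv") as f:
    batch = []
    for record in csv.reader(f):
        batch.append((int(record[0]), record[1]))
        if len(batch) == BATCH_SIZE:
            insert_batch(batch)
            batch = []
    if batch:   # flush the final partial batch
        insert_batch(batch)

connection.commit()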


Python

Python cx_Oracle 6.3 Supports DML RETURNING for Batch Statement Execution

cx_Oracle 6.3, the extremely popular Oracle Database interface for Python, is now Production on PyPI. cx_Oracle is an open source package that covers the Python Database API specification with many additions to support Oracle advanced features. Top Feature: Cursor.executemany() support for OUT bind variables in DML RETURNING statements.   This release contains a number of bug fixes and useful improvements. For the full list, see the Release Notes, but I wanted to highlight a few features: Support for binding integers and floats as cx_Oracle.NATIVE_FLOAT. Support for true heterogeneous session pools that use different username/password combinations for each session acquired from the pool. All exceptions raised by cx_Oracle now produce a cx_Oracle._Error object. Support for getting the OUT values of bind variables bound to a DML RETURNING statement when calling Cursor.executemany(). For technical reasons, this requires setting a new attribute in cx_Oracle.__future__. As an example: cx_Oracle.__future__.dml_ret_array_val = True data = [[10], [20], [30]] childIdVar = cursor.var(int, arraysize = len(data)) cursor.setinputsizes(None, childIdVar) cursor.executemany(""" delete from ChildTable where ParentId = :1 returning ChildId into :2""", data) for ix, inputRow in enumerate(data): print("Child ids deleted for parent id", inputRow[0], "are", childIdVar.getvalue(ix)) Want to know what this displays? Stay tuned to this blog site for an upcoming post on using executemany() in cx_Oracle! cx_Oracle References Home page: oracle.github.io/python-cx_Oracle/index.html Installation instructions: cx-oracle.readthedocs.io/en/latest/installation.html Documentation: cx-oracle.readthedocs.io/en/latest/index.html Release Notes: cx-oracle.readthedocs.io/en/latest/releasenotes.html Source Code Repository: github.com/oracle/python-cx_Oracle
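As a hedged sketch of the new heterogeneous pool support (the credentials, connect string and pool sizes below are illustrative assumptions, not from the release notes): create the pool with homogeneous=False and pass per-user credentials to each acquire() call.

import cx_Oracle

# Heterogeneous pool: sessions in the pool can belong to different users
pool = cx_Oracle.SessionPool("hr", "hr_password", "localhost/orclpdb",
                             min=1, max=4, increment=1,
                             homogeneous=False)

# Each acquire() can supply its own username and password
conn_hr = pool.acquire(user="hr", password="hr_password")
conn_scott = pool.acquire(user="scott", password="tiger")

for conn in (conn_hr, conn_scott):
    cur = conn.cursor()
    cur.execute("select user from dual")   # show which user each session belongs to
    print(cur.fetchone()[0])

pool.release(conn_hr)
pool.release(conn_scott)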


General

ODPI-C 2.3 is now on GitHub

Release 2.3 of Oracle Database Programming Interface for C (ODPI-C) is now available on GitHub.  ODPI-C is an open source library of C code that simplifies access to Oracle Database for applications written in C or C++. Top features: Improved Batch Statement Execution   ODPI-C 2.3 improves support for Batch Statement execution with dpiStmt_executeMany(). To support DML RETURNING producing multiple rows for each iteration, a new function dpiVar_getReturnedData() was added, replacing the function dpiVar_getData() which will be deprecated in a future release. A fix for binding LONG data in dpiStmt_executeMany() also landed. If you haven't heard of Batch Statement Execution (sometimes referred to as Array DML), check out this Python cx_Oracle example or this Node.js node-oracledb example. A number of other issues were addressed in ODPI-C 2.3. See the release notes for more information. ODPI-C References Home page: https://oracle.github.io/odpi/ Code: https://github.com/oracle/odpi Documentation: https://oracle.github.io/odpi/doc/index.html Release Notes: https://oracle.github.io/odpi/doc/releasenotes.html Installation Instructions: oracle.github.io/odpi/doc/installation.html Report issues and discuss: https://github.com/oracle/odpi/issues


Node.js

Node-oracledb 2.2 with Batch Statement Execution (and more) is out on npm

Release announcement: Node-oracledb 2.2, the Node.js module for accessing Oracle Database, is on npm. Top features: Batch Statement Execution In the six-or-so weeks since 2.1 was released, a bunch of new functionality landed in node-oracledb 2.2. This shows how much engineering went into the refactored lower abstraction layer we introduced in 2.0, just to make it easy to expose Oracle features to languages like Node.js. The top features in node-oracledb 2.2 are: Added oracledb.edition to support Edition-Based Redefinition (EBR). The EBR feature of Oracle Database allows multiple versions of views, synonyms, PL/SQL objects and SQL Translation profiles to be used concurrently. This lets database logic be updated and tested while production users are still accessing the original version. The new edition property can be set at the global level, when creating a pool, or when creating a standalone connection. This removes the need to use an ALTER SESSION command or ORA_EDITION environment variable. Added oracledb.events to allow the Oracle client library to receive Oracle Database service events, such as for Fast Application Notification (FAN) and Runtime Load Balancing (RLB). The new events property can be set at the global level, when creating a pool, or when creating a standalone connection. This removes the need to use an oraaccess.xml file to enable event handling, making it easier to use Oracle high availability features, and makes it available for the first time to users who are linking node-oracledb with version 11.2 Oracle client libraries. Added connection.changePassword() for changing passwords. Passwords can also be changed when calling oracledb.getConnection(), which is the only way to connect when a password has expired. Added connection.executeMany() for efficient batch execution of DML (e.g. INSERT, UPDATE and DELETE) and PL/SQL execution with multiple records. See the example below. Added connection.getStatementInfo() to find information about a SQL statement without executing it. This is most useful for finding column types of queries and for finding bind variable names. It does require a 'round-trip' to the database, so don't use it without reason. Also there are one or two quirks because the library underneath that provides the implementation has some 'historic' behavior. Check the manual for details. Added connection.ping() to support system health checks. This verifies that a connection is usable and that the database service or network has not gone down. This requires a round-trip to the database so you wouldn't use it without reason. Although it doesn't replace error handling in execute(), sometimes you don't want to be running a SQL statement just to check the connection status, so it is useful in the arsenal of features for keeping systems running reliably. See the CHANGELOG for all changes. One infrastructure change we recently made was to move the canonical home for documentation to GitHub 'pages'. This will be kept in sync with the current production version of node-oracledb. If you update your bookmarks to the new locations, it will allow us to update the source code repository documentation mid-release without confusing anyone about available functionality.
Home page: https://oracle.github.io/node-oracledb Installation: https://oracle.github.io/node-oracledb/INSTALL.html Documentation: https://oracle.github.io/node-oracledb/doc/api.html Batch Statement Execution The new connection.executeMany() method allows many sets of data values to be bound to one DML or PL/SQL statement for execution. It is like calling connection.execute() multiple times for one statement but requires fewer round-trips overall. This is an efficient way to handle batch changes, for example when inserting or updating multiple rows, because the reduced cost of round-trips has a significant effect on performance and scalability. Depending on the number of records, their sizes, and on the network speed to the database, the performance of executeMany() can be significantly faster than the equivalent use of execute(). In one little test I did between Node.js on my laptop and a database running on my adjacent desktop, I saw that executeMany() took 16 milliseconds whereas execute() took 2.3 seconds to insert 1000 rows, each consisting of a number and a very short string. With larger data sizes and slower (or faster!) networks the performance characteristics will vary, but the overall benefit is widespread. The executeMany() method supports IN, IN OUT and OUT variables. Binds from RETURNING INTO clauses are supported, making it easy to insert a number of rows and find, for example, the ROWIDs of each. With an optional batchErrors mode, you can insert 'noisy' data easily. Batch Errors allows valid rows to be inserted and invalid rows to be rejected. A transaction will be started but not committed, even if autocommit mode is enabled. The application can examine the errors, find the bad data, take action, and explicitly commit or rollback as desired. To give one example, let's look at the use of batchErrors when inserting data: var sql = "INSERT INTO childtab VALUES (:1, :2, :3)"; // There are three values in each nested array since there are // three bind variables in the SQL statement. // Each nested array will be inserted as a new row. var binds = [ [1016, 10, "apples"], [1017, 10, "bananas"], [1018, 20, "cherries"], [1018, 20, "damson plums"], // duplicate key [1019, 30, "elderberry"], [1020, 40, "fig"], [1021, 75, "golden kiwifruit"], // parent does not exist [1022, 40, "honeydew melon"] ]; var options = { autoCommit: true, // autocommit if there are no batch errors batchErrors: true, // identify invalid records; start a transaction for valid ones bindDefs: [ // describes the data in 'binds' { type: oracledb.NUMBER }, { type: oracledb.NUMBER }, { type: oracledb.STRING, maxSize: 16 } // size of the largest string, or as close as possible ] }; connection.executeMany(sql, binds, options, function (err, result) { if (err) console.error(err); else { console.log("Result is:", result); } }); Assuming appropriate data exists in the parent table, the output might be like: Result is: { rowsAffected: 6, batchErrors: [ { Error: ORA-00001: unique constraint (CJ.CHILDTAB_PK) violated errorNum: 1, offset: 3 }, { Error: ORA-02291: integrity constraint (CJ.CHILDTAB_FK) violated - parent key not found errorNum: 2291, offset: 6 } ] } This shows that 6 records were inserted but the records at offset 3 and 6 (using a 0-based index into the 'binds' variable array) were problematic. Because of these batch errors, the other records were not committed, despite autoCommit being true. However they were inserted and the transaction could be committed or rolled back.
We know some users are inserting very large data sets so executeMany() will be very welcome. For extremely large data sets you may want to call executeMany() with batches of data to avoid size limitations in various layers of the Oracle and operating system stack. Your own testing will determine the best approach. See Batch Execution in the manual for more information about the modes of executeMany() and how to use it in various cases. There are runnable examples in the GitHub examples directory. Look for the files prefixed 'em_'. There are two variants of each sample: one uses callback style, and the other uses the Async/Await interface available with Node.js 8. Resources Node-oracledb installation instructions are here. Node-oracledb documentation is here. Node-oracledb change log is here. Issues and questions about node-oracledb can be posted on GitHub. Finally, contributions to node-oracledb are more than welcome, see CONTRIBUTING.


Python

Python cx_Oracle 6.2 is out on PyPI

cx_Oracle 6.2, the extremely popular Oracle Database interface for Python, is now Production on PyPI. cx_Oracle is an open source package that covers the Python Database API specification with many additions to support Oracle advanced features. This release: Adds support for creating temporary CLOBs, BLOBs or NCLOBs via a new method Connection.createlob(). Adds support for binding a LOB value directly to a cursor. Adds support for closing the connection when reaching the end of a 'with' code block controlled by the connection as a context manager. See cx_Oracle.__future__ for more information. Was internally updated to the newest ODPI-C data access layer, which brings numerous stability fixes and code improvements including: Open statements and LOBs are tracked and automatically closed when the related connection is closed; this eliminates the need for users of cx_Oracle to track them, and removes the error "DPI-1054: connection cannot be closed when open statements or LOBs exist". Errors during implicit rollback at connection close are ignored - but if an error does occur, ensure the connection is dropped from the connection pool. This reduces app errors in cases like where a DBA has killed a session. Avoids an unnecessary round trip to the database when a connection is released back to the pool by preventing a rollback from being called when no transaction is in progress. There was also an internal code restructure to simplify maintenance and consolidate transformations to/from Python objects. See the Release Notes for all the fixes. To upgrade to cx_Oracle 6.2 most users will be able to run: python -m pip install cx_Oracle --upgrade Spread the word! cx_Oracle References Home page: oracle.github.io/python-cx_Oracle/index.html Installation instructions: cx-oracle.readthedocs.io/en/latest/installation.html Documentation: cx-oracle.readthedocs.io/en/latest/index.html Release Notes: cx-oracle.readthedocs.io/en/latest/releasenotes.html Source Code Repository: github.com/oracle/python-cx_Oracle
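A short, hedged sketch of two of these additions working together (the connect string and the lob_tab table are illustrative assumptions): opt in to the context-manager close behavior via cx_Oracle.__future__, create a temporary CLOB with Connection.createlob(), and bind that LOB object directly to a statement.

import cx_Oracle

# Opt in to the new behavior where leaving the 'with' block closes the connection
cx_Oracle.__future__.ctx_mgr_close = True

with cx_Oracle.connect("hr", "welcome", "localhost/orclpdb") as connection:
    cursor = connection.cursor()

    # Create a temporary CLOB, write some data to it, then bind it directly
    temp_clob = connection.createlob(cx_Oracle.CLOB)
    temp_clob.write("some text held in a temporary LOB")
    cursor.execute("insert into lob_tab (id, c) values (:1, :2)", [1, temp_clob])
    connection.commit()
# the connection has been closed here by the context manager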


General

ODPI-C 2.2 Release: Powering Oracle Database Access

ODPI-C 2.2.1 has been tagged for release. Oracle Database Programming Interface for C (ODPI-C) is an open source library of C code that simplifies the use of common Oracle Call Interface (OCI) features for Oracle Database drivers and user applications. The ODPI-C project is open source and maintained by Oracle Corp. ODPI-C is used as a data access layer in drivers for Node.js, Python, Ruby, Go, Rust, Haskell and more. Changes in ODPI-C 2.2 from 2.1 include: Open statements and LOBs are tracked and automatically closed when the related connection is closed; this eliminates the need for users of the driver to do so and removes the error "DPI-1054: connection cannot be closed when open statements or LOBs exist". Errors during implicit rollback at connection close are ignored - but if an error does occur, ensure the connection is dropped from the connection pool. This reduces app errors in cases like where a DBA has killed a session. Avoid a round trip to the database when a connection is released back to the pool by preventing a rollback from being called when there is no transaction in progress. A new, optional, way of including the source code in your projects: embed/dpi.c was added. This simply includes all other source files. You can reliably link with just dpi.c and not have to update your projects if, and when, new ODPI-C versions have new source files. Many stability fixes, code improvements, new tests, and documentation updates. See the release notes for all changes. In my opinion, the stability fixes justify upgrading immediately. The eagle-eyed will note that today is a 2.2.1 release but we actually tagged 2.2.0 a few weeks ago. ODPI-C 2.2.0 was tagged solely to give an identifiable base for node-oracledb 2.2 to use. However Anthony had some ODPI-C fixes queued up in areas of code not used by node-oracledb, hence today's "official" ODPI-C 2.2.1 announcement. ODPI-C References Home page: oracle.github.io/odpi Code: github.com/oracle/odpi Documentation: oracle.github.io/odpi/doc/index.html Release Notes: oracle.github.io/odpi/doc/releasenotes.html Report issues and discuss: github.com/oracle/odpi/issues Installation Instructions: oracle.github.io/odpi/doc/installation.html.


General

Installing the Oracle ODBC Driver on macOS

A bonus for today is a guest post by my colleague Senthil Dhamotharan. He shares the steps to install the Oracle Instant Client ODBC driver and the unixODBC Driver Manager on macOS. ODBC is an open specification for accessing databases. The Oracle ODBC driver for Oracle Database enables ODBC applications to connect to Oracle Database. In addition to standard ODBC functions, users can leverage Oracle specific features for high performance data access. Install the unixODBC Driver Manager Download unixODBC from ftp.unixodbc.org/pub/unixODBC. I used unixODBC-2.3.1.tar.gz. Open a Terminal and extract the package: tar -zxvf unixODBC-2.3.1.tar.gz Configure unixODBC: cd unixODBC-2.3.1 ./configure Note if you use the configure option "--prefix" to install into locations other than the default directory (/usr/local) then macOS's SIP features may prevent the unixODBC libraries being located correctly by the ODBC driver. Build and install unixODBC: make sudo make install Install the Oracle ODBC Driver Download the Oracle 12.2 Instant Client Basic and ODBC packages from Instant Client Downloads for macOS (Intel x86). To reduce the installation size, the Basic Light package can be used instead of Basic, if its character sets and languages are sufficient. Extract both ZIP files: unzip instantclient-basic-macos.x64-12.2.0.1.0-2.zip unzip instantclient-odbc-macos.x64-12.2.0.1.0-2.zip This will create a subdirectory instantclient_12_2. The Oracle Instant Client libraries need to be in the macOS library search path, generally either in /usr/local/lib or in your home directory under ~/lib. I did: mkdir ~/lib cd instantclient_12_2 ln -s $(pwd)/libclntsh.dylib.12.1 $(pwd)/libclntshcore.dylib.12.1 ~/lib With version 12.2, a small patch to the driver name in instantclient_12_2/odbc_update_ini.sh is required on macOS. I changed line 101 from: SO_NAME=libsqora.so.12.1 to SO_NAME=libsqora.dylib.12.1 Run the configuration script: cd instantclient_12_2 sudo odbc_update_ini.sh /usr/local sudo chown $USER ~/.odbc.ini This creates a default DSN of "OracleODBC-12c". Edit the new ~/.odbc.ini configuration file and add the Oracle Database connection string. My database is running on the same machine as ODBC (inside a VirtualBox VM) and has a service name of 'orclpdb', so my connection string is 'localhost/orclpdb'. I changed: ServerName = to ServerName = localhost/orclpdb Verify the installation Run the isql utility to verify installation. Pass in the DSN name, and an existing database username and password: $ isql OracleODBC-12c scott tiger +---------------------------------------+ | Connected! | | | | sql-statement | | help [tablename] | | quit | | | +---------------------------------------+ SQL> You can execute SQL statements and quit when you are done. Test Program To test a program that makes ODBC calls, download odbcdemo.c. Edit odbcdemo.c and set the USERNAME and PASSWORD constants to the database credentials. Build it: gcc -o odbcdemo -g -lodbc odbcdemo.c Run it: ./odbcdemo The output will be like: Connecting to the DB .. Done Executing SQL ==> SELECT SYSDATE FROM DUAL Result ==> 2018-02-21 02:53:47 Summary ODBC is a popular API for accessing databases. The Oracle ODBC Driver is the best way to access Oracle Database. Resources Using the Oracle ODBC Driver. Oracle ODBC Drivers Discussion Forum Oracle Instant Client ODBC Release Notes Instant Client Downloads


PHP

Installing XAMPP on Windows for PHP and Oracle Database

Today's guest post comes from Tianfang Yang who's been working with the Oracle Database extensions for PHP. This post shows how to install XAMPP on Windows to run PHP applications that connect to a remote Oracle Database. XAMPP is an open source package that contains Apache, PHP and many PHP 'extensions'. One of these extensions is PHP OCI8, which connects to Oracle Database. To install XAMPP: Download "XAMPP for Windows" and follow the installer wizard. I installed into my D: drive. Start the Apache server via the XAMPP control panel. Visit http://localhost/dashboard/phpinfo.php via your browser to see the architecture and thread safety mode of the installed PHP. Please note this is the architecture of the installed PHP and not the architecture of your machine. It’s possible to run an x86 PHP on an x64 machine. [Optional] Oracle OCI8 is pre-installed in XAMPP but if you need a newer version you can download an updated OCI8 PECL package from pecl.php.net. Pick an OCI8 release and select the DLL according to the architecture and thread safety mode. For example, if PHP is x86 and thread safety enabled, download "7.2 Thread Safe (TS) x86". Then replace "D:\xampp\php\ext\php_oci8_12c.dll" with the new "php_oci8_12c.dll" from the OCI8 PECL package. Edit "D:\xampp\php\php.ini" and uncomment the line "extension=oci8_12c". Make sure "extension_dir" is set to the directory containing the PHP extension DLLs. For example, extension=oci8_12c extension_dir="D:\xampp\php\ext" Download the Oracle Instant Client Basic package from OTN. Select the correct architecture to align with PHP's. For Windows x86 download "instantclient-basic-nt-12.2.0.1.0.zip" from the Windows 32-bit page. Extract the file in a directory such as "D:\Oracle". A subdirectory "D:\Oracle\instantclient_12_2" will be created. Add this subdirectory to the PATH environment variable. You can update PATH in Control Panel -> System -> Advanced System Settings -> Advanced -> Environment Variables -> System Variables -> PATH. In my example I set it to "D:\Oracle\instantclient_12_2". Restart the Apache server and check the phpinfo.php page again. It shows the OCI8 extension is loaded successfully. If you also run PHP from a terminal window, make sure to close and reopen the terminal to get the updated PATH value. To run your first OCI8 application, create a new file in the XAMPP document root "D:\xampp\htdocs\test.php". It should contain: <?php error_reporting(E_ALL); ini_set('display_errors', 'On'); $username = "hr"; // Use your username $password = "welcome"; // and your password $database = "localhost/orclpdb"; // and the connect string to connect to your database $query = "select * from dual"; $c = oci_connect($username, $password, $database); if (!$c) { $m = oci_error(); trigger_error('Could not connect to database: '. $m['message'], E_USER_ERROR); } $s = oci_parse($c, $query); if (!$s) { $m = oci_error($c); trigger_error('Could not parse statement: '. $m['message'], E_USER_ERROR); } $r = oci_execute($s); if (!$r) { $m = oci_error($s); trigger_error('Could not execute statement: '. 
$m['message'], E_USER_ERROR); } echo "<table border='1'>\n"; $ncols = oci_num_fields($s); echo "<tr>\n"; for ($i = 1; $i <= $ncols; ++$i) { $colname = oci_field_name($s, $i); echo " <th><b>".htmlspecialchars($colname,ENT_QUOTES|ENT_SUBSTITUTE)."</b></th>\n"; } echo "</tr>\n"; while (($row = oci_fetch_array($s, OCI_ASSOC+OCI_RETURN_NULLS)) != false) { echo "<tr>\n"; foreach ($row as $item) { echo "<td>"; echo $item!==null?htmlspecialchars($item, ENT_QUOTES|ENT_SUBSTITUTE):"&nbsp;"; echo "</td>\n"; } echo "</tr>\n"; } echo "</table>\n"; ?> You need to edit this file and set your database username, password and connect string. If you are using Oracle Database XE, then the connect string should be "localhost/XE". The SQL query can also be changed. Currently it queries the special DUAL table, which every user has. Load the test program in a browser using http://localhost/test.php. The output will be the single value "X" in the column called "DUMMY". You can read more about PHP OCI8 in the PHP manual, and in the free Underground PHP and Oracle Manual from Oracle. Enjoy your coding with OCI8!


Node.js

node-oracledb 2.1 is now available from npm

Release announcement: Node-oracledb 2.1.0, the Node.js module for accessing Oracle Database, is on npm. Top features: Privileged connections such as SYSDBA. A new 'queryStream()' Stream 'destroy()' method After the big refactoring of node-oracledb 2.0, we're pleased to push node-oracledb forward with the release of 2.1.0.   Changes in node-oracledb 2.1.0 include: Support for SYSDBA, SYSOPER, SYSASM, SYSBACKUP, SYSDG, SYSKM, and SYSRAC privileges in standalone connections. You can now connect like: oracledb.getConnection( { user: 'sys', password: 'secret', connectString: 'localhost/orclpdb', privilege: oracledb.SYSDBA }, function(err, connection) { if (err) console.error(err); else console.log('I have power'); } // . . . ); The internal implementations of 'direct fetches' and 'queryStream()' have been simplified by reusing the ResultSet changes of 2.0. You might notice speedups in some cases. The 'queryStream()' result Stream now supports Node 8's destroy() method, which obsoletes the previous experimental '_close()' method of node-oracledb. If you are using '_close()' in Node 8 or later, you should: Change the method name from '_close()' to 'destroy()'. Stop passing a callback. Optionally pass an error. Improved the Error object with new 'errorNum' and 'offset' properties. The properties will be included when you get errors from Oracle Database such as this: connection.execute( "select wrong_col from departments", function(err, result) { if (err) console.error(err); else console.log(result.rows); }); The output is: { Error: ORA-00904: "WRONG_COL": invalid identifier errorNum: 904, offset: 7 } The offset is relative to the start of the SQL statement. For non-SQL errors, the offset will be 0. New 'versionSuffix' and 'versionString' properties were added to the oracledb object to aid showing the release status and version. The 'versionSuffix' attribute will be an empty string for production releases, but may be something like '-dev' or '-beta.1' for other builds. The existing 'version' attribute is great for runtime comparisons, but not as handy as the new 'versionString' attribute for displaying to users. With a code contribution from Sagie Gur-Ari an annoying little gotcha was fixed. Now 'connectionString' is an alias for 'connectString'. oracledb.getConnection( { user: 'hr', password: 'welcome', connectionString: 'localhost/orclpdb', }, function(err, connection) { if (err) console.error(err); else console.log('I am HR'); } ); The underlying data access layer ODPI-C had some improvements which flow through to node-oracledb. Connections can now be closed when ResultSets and Lobs are open. This removes the DPI-1054 error you might have seen if you closed resources in the 'wrong' order. At connection close there is no longer an unconditional rollback sent to the database. Instead ODPI-C makes use of some internal state to avoid the overhead of the 'round trip' to the database when it is known that there is no open transaction. Node-oracledb 2.1 no longer compiles with the long-obsolete Node 0.10 or 0.12 versions. We'd stopped testing and claiming support for these versions a while back, but technically compilation had been possible until now. Updates to NAN API usage and other cleanups have led to this change. Our testing and documentation improved, as always. See the CHANGELOG for other changes in node-oracledb 2.1. Finally, you may be interested to know about these cool new resources: The yum.oracle.com Node.js RPM Packages for Oracle Linux The Oracle mirror of github.com/oracle.
Resources Node-oracledb installation instructions are here. Node-oracledb documentation is here. Node-oracledb change log is here. For information on migrating see Migrating from node-oracledb 2.0 to node-oracledb 2.1. Issues and questions about node-oracledb can be posted on GitHub. Finally, contributions to node-oracledb are more than welcome, see CONTRIBUTING.


Node.js

Node-oracledb v2 Query Methods and Fetch Tuning

  For our Node.js node-oracledb v2 add-on for Oracle Database we revamped its lower data access layer and made it into a standalone project ODPI-C, which is being reused by several other language APIs. ODPI-C gave us a reason and opportunity to simplify some of the internal query handling code in node-oracledb. To recap, node-oracledb has four ways to execute queries against an Oracle Database. These are the same in version 1 and version 2: Direct Fetches - these are non-ResultSet, non-queryStream() fetches. All rows are returned in one big array, limited to maxRows (v2 allows an unlimited array size). ResultSet getRow() - return one row on each call until all rows are returned. ResultSet getRows(numRows) - return batches of rows in each call until all rows are returned. queryStream() - stream rows until all rows are returned. The changes in node-oracledb v2 are: Enhanced direct fetches to allow an unlimited number of rows to be fetched, and made this the default. This occurs when maxRows = 0. Replaced prefetchRows (previously used for internal fetch buffering and tuning) with a new property fetchArraySize; the default size is 100. fetchArraySize affects direct fetches, ResultSet getRow() and queryStream(). getRows(numRows,...) internal fetch buffering is now only tuned by the numRows value. Previously prefetchRows could also affect internal buffering. queryStream() now uses fetchArraySize for internal buffer sizing. Implemented getRow() in JavaScript for better performance; it uses fetchArraySize as the way to tune internal buffer sizes. The change in v2 that I want to discuss is how a 'direct fetch' does its internal buffering. To make fetching an unlimited number of rows feasible for direct fetches, data is now internally fetched from Oracle Database in batches of size fetchArraySize and then concatenated to form one big array of results that is returned in the callback. This lets you use fetchArraySize to tune fetch performance. In node-oracledb v1, one big array of size maxRows was allocated before the data was fetched from the database. (With node-oracledb v2 you can have the same behavior, if you really want it, by setting fetchArraySize = maxRows, where maxRows > 0). Let's look at two different scenarios that fetch 50000 rows with fetchArraySize of 100 (the default) and 1000. The code is at the end of the post. Direct fetch: rows: 50000, batch size: 100, seconds: 4.054 Direct fetch: rows: 50000, batch size: 1000, seconds: 1.643 You can see in this case (with a local database) that increasing fetchArraySize improved performance. There may be various factors for this, but the common one is reduced network cost and reduced load on the database because there were fewer 'round trips' between Node.js and the database to get batches of records. Each query and environment will be different, and require its own tuning. The benefits of using fetchArraySize for direct fetches are: Performance of batching and network transfer of data can be tuned. Memory can incrementally grow when the number of query rows is unknown, or varies from execution to execution. A single large chunk of memory (based on maxRows in v1) doesn't need to be pre-allocated to handle the 'worst case' of a large number of rows. There are two drawbacks with direct fetches: One big array of results is needed. This is the same in v1 and v2. The concatenation of batches of records can use more memory than the final array requires, and can cause fragmentation. Let's look at timings for all query methods. 
This is one run; there was expected variability each time I ran the scripts. The 'batch size' number is numRows for getRows(numRows) calls, or fetchArraySize for the other fetch methods. Direct fetch: rows: 50000, batch size: 100, seconds: 4.054 ResultSet getRow(): rows: 50000, batch size: 100, seconds: 1.625 ResultSet getRows(): rows: 50000, batch size: 100, seconds: 1.586 queryStream(): rows: 50000, batch size: 100, seconds: 1.691 Direct fetch: rows: 50000, batch size: 1000, seconds: 1.643 ResultSet getRow(): rows: 50000, batch size: 1000, seconds: 1.471 ResultSet getRows(): rows: 50000, batch size: 1000, seconds: 1.327 queryStream(): rows: 50000, batch size: 1000, seconds: 1.415 The ResultSet and queryStream() methods don't have to store all rows in memory at once so there is less memory management involved. The outlier is obviously the first result: the cost of concatenating many small chunks of memory is significant. We have some ideas on what we can do inside node-oracledb to improve this a bit, but that is a future project to be investigated, and can't eliminate all costs of concatenation, and can't prevent all rows having to be held in memory together. [Update: improvements were made to direct fetches in node-oracledb 2.1, and this particular case shows significantly improved performance]. The conclusion is to use ResultSets or streaming for large numbers of rows. This is the same recommendation we gave for v1. For small numbers of rows, the various query methods perform pretty much the same. The timings are so short that you can treat the differences in the one run shown below as noise. Here each query only returned 1 row: Direct fetch: rows: 1, batch size: 100, seconds: 0.011 ResultSet getRow(): rows: 1, batch size: 100, seconds: 0.012 ResultSet getRows(): rows: 1, batch size: 100, seconds: 0.013 queryStream(): rows: 1, batch size: 100, seconds: 0.013 Direct fetch: rows: 1, batch size: 1, seconds: 0.012 ResultSet getRow(): rows: 1, batch size: 1, seconds: 0.012 ResultSet getRows(): rows: 1, batch size: 1, seconds: 0.013 queryStream(): rows: 1, batch size: 1, seconds: 0.013 Although the timings are small, I suggest using a small fetchArraySize or numRows if you are querying a small handful of rows, particularly if the number of rows is known (such as 1). This reduces the amount of memory that needs to be allocated throughout node-oracledb, the Oracle client libraries, and also in the database. References Node-oracledb documentation is here. If you are currently using node-oracledb v1, you may be interested in the documentation on Migrating from node-oracledb 1.13 to node-oracledb 2.0. Code Here are the rudimentary test scripts I used. The ResultSet code originated in the v1 examples from https://jsao.io/2015/07/an-overview-of-result-sets-in-the-nodejs-driver/ The config.js file is at the end. The dbconfig.js file is the same as in the examples. The timings include statement execution in the DB, though this is not controlled by node-oracledb. With direct fetches there isn't a way in JavaScript to distinguish the query execute cost from the data fetch costs that fetchArraySize controls. 
Direct Fetch // direct fetch var oracledb = require('oracledb'); var dbConfig = require('./dbconfig.js'); var config = require('./config.js'); oracledb.getConnection( dbConfig, function(err, connection) { if (err) throw err; var rowsProcessed = 0; var startTime = Date.now(); connection.execute( 'select * from all_objects where rownum <= :mr', [ config.maxRows ], { fetchArraySize: config.batchSize }, function(err, results) { if (err) throw err; rowsProcessed = results.rows.length; // do work on the rows here var t = ((Date.now() - startTime)/1000); console.log('Direct fetch: rows: ' + rowsProcessed + ', batch size: ' + config.batchSize + ', seconds: ' + t); connection.release(function(err) { if (err) console.error(err.message); }); }); }); ResultSet getRow() // ResultSet getRow() var oracledb = require('oracledb'); var dbConfig = require('./dbconfig.js'); var config = require('./config.js'); oracledb.getConnection( dbConfig, function(err, connection) { if (err) throw err; var rowsProcessed = 0; var startTime = Date.now(); connection.execute( 'select * from all_objects where rownum <= :mr', [ config.maxRows ], { resultSet: true, fetchArraySize: config.batchSize }, function(err, results) { if (err) throw err; function processResultSet() { results.resultSet.getRow(function(err, row) { if (err) throw err; if (row) { rowsProcessed++; // do work on the row here processResultSet(); // try to get another row from the result set return; // exit recursive function prior to closing result set } var t = ((Date.now() - startTime)/1000); console.log('ResultSet getRow(): rows: ' + rowsProcessed + ', batch size: ' + config.batchSize + ', seconds: ' + t); results.resultSet.close(function(err) { if (err) console.error(err.message); connection.release(function(err) { if (err) console.error(err.message); }); }); }); } processResultSet(); } ); } ); ResultSet getRows() // ResultSet getRows() var oracledb = require('oracledb'); var dbConfig = require('./dbconfig.js'); var config = require('./config.js'); oracledb.getConnection( dbConfig, function(err, connection) { if (err) throw err; var rowsProcessed = 0; var startTime = Date.now(); oracledb.fetchArraySize = 1; connection.execute( 'select * from all_objects where rownum <= :mr', [ config.maxRows ], { resultSet: true }, function(err, results) { var rowsProcessed = 0; if (err) throw err; function processResultSet() { results.resultSet.getRows(config.batchSize, function(err, rows) { if (err) throw err; if (rows.length) { rows.forEach(function(row) { rowsProcessed++; // do work on the row here }); processResultSet(); // try to get more rows from the result set return; // exit recursive function prior to closing result set } var t = ((Date.now() - startTime)/1000); console.log('ResultSet getRows(): rows: ' + rowsProcessed + ', batch size: ' + config.batchSize + ', seconds: ' + t); results.resultSet.close(function(err) { if (err) console.error(err.message); connection.release(function(err) { if (err) console.error(err.message); }); }); }); } processResultSet(); }); }); queryStream() // queryStream() var oracledb = require('oracledb'); var dbConfig = require('./dbconfig.js'); var config = require('./config.js'); oracledb.getConnection( dbConfig, function(err, connection) { if (err) throw err; var rowsProcessed = 0; var startTime = Date.now(); var stream = connection.queryStream( 'select * from all_objects where rownum <= :mr', [ config.maxRows ], { fetchArraySize: config.batchSize } ); stream.on('data', function (data) { // do work on the row here rowsProcessed++; }); 
stream.on('end', function () { var t = ((Date.now() - startTime)/1000); console.log('queryStream(): rows: ' + rowsProcessed + ', batch size: ' + config.batchSize + ', seconds: ' + t); connection.close( function(err) { if (err) { console.error(err.message); } }); }); }); The Configuration File // config.js var maxRows; // number of rows to query var batchSize; // batch size for fetching rows maxRows = 50000; batchSize = 1000 exports.maxRows = maxRows; exports.batchSize = batchSize;
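For readers coming from the Python side of this blog, a hedged aside: the roughly analogous tuning knob in cx_Oracle is Cursor.arraysize, which controls how many rows each internal fetch from the database returns. The sketch below is illustrative only (the connect string is an assumption, and the timing approach mirrors the Node.js scripts above rather than any official benchmark).

import time
import cx_Oracle

connection = cx_Oracle.connect("hr", "welcome", "localhost/orclpdb")
cursor = connection.cursor()
cursor.arraysize = 1000   # rows fetched from the database per internal round-trip

start = time.time()
cursor.execute("select * from all_objects where rownum <= 50000")
rows = cursor.fetchall()
print(len(rows), "rows in", round(time.time() - start, 3), "seconds")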


Node.js

node-oracledb 2.0 with pre-built binaries is on npm

Release announcement: Node-oracledb 2.0.15, the Node.js add-on for Oracle Database, is now on npm for general use. Top features: Pre-built binaries, Query fetch improvements It's time to shake things up. Node-oracledb version 1 has been stable for a while. Our tight, hard-working team is now proud to deliver node-oracledb 2 to you. With improvements throughout the code and documentation, this release is looking great. There are now over 3000 functional tests, as well as solid stress tests we run in various environments under Oracle's internal testing infrastructure. Review the CHANGELOG for all changes. For information on migrating see Migrating from node-oracledb 1.13 to node-oracledb 2.0. node-oracledb v2 highlights node-oracledb 2.0 is the first release to have pre-built binaries. These are provided for convenience and will make life a lot easier, particularly for Windows users. Binaries for Node 4, 6, 8 and 9 are available for Windows 64-bit, macOS 64-bit, and Linux 64-bit (built on Oracle Linux 6). Simply add oracledb to your package.json dependencies or manually install with: npm install oracledb (Yes, Oracle Client libraries are still required separately - these do all the heavy lifting.) We've got great error messages when a binary isn't available, and improved the messages when require('oracledb') fails. However, Linux users with older glibc libraries may get Linux runtime errors - you should continue building node-oracledb from source in this case, see below. There is support for hosting binary packages on your own internal web server, which will be great for users with a large number of deployments. See package/README. This is the first release to use the ODPI-C abstraction layer which is also used by Python's cx_Oracle 6.x API, as well as 3rd party APIs for other languages. Using ODPI-C is the main change that allowed node-oracledb 2.0 binaries to be distributed. As another consequence of ODPI-C, any node-oracledb 2 binary will run with any of the Oracle client 11.2, 12.1 or 12.2 libraries without needing recompilation. This improves portability when node-oracledb builds are copied between machines. Since the available Oracle functionality will vary with different Oracle Client (and Oracle Database!) versions, it's important to test your applications using your intended deployment environment. The driver can still be built from source code if you need, or want, to do so. Compiling is now simpler than in version 1, since you no longer need Oracle header files, and no longer need OCI_INC_DIR or OCI_LIB_DIR environment variables. To build from source you need to pull the code from a GitHub branch or tag - generally the most recent release tag is what you want. Make sure you have Python 2.7, the 'git' utility, and a compiler, and add oracle/node-oracledb.git#v2.0.15 to your package.json dependencies. Or manually run the install: npm install oracle/node-oracledb.git#v2.0.15 Users without 'git', or with older versions of Node that won't pull the ODPI submodule code, can use the source package: npm install https://github.com/oracle/node-oracledb/releases/download/v2.0.15/oracledb-src-2.0.15.tgz I've noticed GitHub can be a little slow to download the source before compilation can begin, so bear with it. Improved query handling: Enhanced direct fetches to allow an unlimited number of rows to be fetched, and made an unlimited fetch the default for this method. 
The already existing ResultSet and Streaming methods are still recommended for large numbers of rows. Since ODPI-C internally uses 'array fetches' instead of 'prefetching' (both are underlying methods for buffering/tuning differing only in where the buffering takes place - both are transparent to applications), we've replaced prefetchRows with a new, almost equivalent property fetchArraySize. We've moved the buffering of rows for getRow() into JavaScript for better performance. It no longer needs to call down into lower layers as often. Check my blog post Node-oracledb v2 Query Methods and Fetch Tuning for some tips. We tightened up some resource handling to make sure applications don't leak resources. If you inadvertently try to close a connection when a LOB or ResultSet is open, you will see a new error DPI-1054. The node-oracledb size restrictions on LOB fetchAsString and fetchAsBuffer queries, and also on LOB binds, have been removed. In node-oracledb 1 these were particularly low when Oracle 11gR2 client libraries were used, so this should be welcome for people who have not updated the Oracle client. Node.js and V8 will still limit sizes, so continue to use the Stream interface for large LOBs. Added support for ROWID and UROWID. Data is fetched as a String. Added support for fetching column types LONG (as String) and LONG RAW (as Buffer). Added support for TIMESTAMP WITH TIME ZONE date type. These are mapped to a Date object in node-oracledb using LOCAL TIME ZONE. The TIME ZONE component is not available in the Date object. Added query support for NCHAR, NVARCHAR2 and NCLOB columns. Binding for DML may not insert data correctly, depending on the database character set and the database national character set. Plans for Version 1 Our stated plan was to cease formal support for version 1 when Node 4 LTS maintenance ended in April 2018. We're pleased 1.13.1 has been stable for some time, and we are not anticipating needing any further node-oracledb 1 releases, unless exceptional circumstances occur. Plans for Version 2 We are now testing with Node 4, 6, 8 and 9. This list will, of course, change as new Node versions are released. The pre-built binaries will also change and availability is not guaranteed. ODPI-C forms a solid base to extend node-oracledb. Users of Python cx_Oracle 6, which is also based on ODPI-C, are appreciating all the advanced Oracle features that are available. Many of these features have also been requested by node-oracledb users. As with any open source project, there are no hard and fast guarantees for node-oracledb, but you can see the general direction we'd like to head in. Pull Requests are welcome. One little thing you might be unaware of is that along the way we have been testing (and helping create) the next major release of Oracle Database, so sometimes we've had to take detours from direct node-oracledb work in order to move the whole of Oracle along. Whether we work on the "client" or the "server", we look forward to bringing you great things in future releases. Resources Node-oracledb installation instructions are here. Node-oracledb documentation is here. Node-oracledb change log is here. For information on migrating see Migrating from node-oracledb 1.13 to node-oracledb 2.0. Issues and questions about node-oracledb can be posted on GitHub. Finally, contributions to node-oracledb are more than welcome, see CONTRIBUTING.


Python

cx_Oracle 6.1 with Oracle Database Sharding Support is now Production on PyPI

cx_Oracle 6.1, the extremely popular Oracle Database interface for Python, is now Production on PyPI. cx_Oracle is an open source package that covers the Python Database API specification with many additions to support Oracle advanced features. In the words of the creator and maintainer, Anthony Tuininga: The cx_Oracle 6.1 release has a number of enhancements building upon the release of 6.0 made a few months ago. Topping the list is support for accessing sharded databases via new shardingkey and supershardingkey parameters for connections and session pools. Support for creating connections using the SYSBACKUP, SYSDG, SYSKM and SYSRAC roles was also added, as was support for identifying the id of the transaction which spawned a subscription message. For those on Windows, improved error messages were created for when an Oracle Client of the wrong architecture is in the PATH environment variable. Improvements were also made to the debugging infrastructure and a number of bugs were squashed. The test suite has also been expanded. See the full release notes for more information.

cx_Oracle References Home page: https://oracle.github.io/python-cx_Oracle/index.html Installation instructions: http://cx-oracle.readthedocs.io/en/latest/installation.html Documentation: http://cx-oracle.readthedocs.io/en/latest/index.html Release Notes: http://cx-oracle.readthedocs.io/en/latest/releasenotes.html Source Code Repository: https://github.com/oracle/python-cx_Oracle
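If you want to try the headline sharding feature, here is a minimal sketch of what a sharding-aware connection could look like. It is an illustration only: it assumes a sharded database is already configured, that a customers table is sharded by a customer-name key, and that the user, password and connect string are placeholders; only the shardingkey parameter name comes from the announcement above.

import cx_Oracle

# Passing a sharding key at connect time lets Oracle route the session to
# the shard that stores the rows for that key. All names below are placeholders.
connection = cx_Oracle.connect("app_user", "app_password",
                               "dbhost.example.com/sharded_service",
                               shardingkey=["SCOTT"])

cursor = connection.cursor()
cursor.execute("SELECT cust_name, cust_email FROM customers WHERE cust_name = :n",
               n="SCOTT")
for row in cursor:
    print(row)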


General

ODPI-C 2.1 is now available for all your Oracle Database Access Needs

ODPI-C Anthony Tuininga has just released version 2.1 of the Oracle Database Programming Interface for C (ODPI-C) on GitHub. ODPI-C is an open source library of C code that simplifies access to Oracle Database for applications written in C or C++. Key new feature: support for accessing sharded databases. In the words of Anthony: This release has a number of small enhancements intended to build upon the release of 2.0 made a few months ago. Topping the list is support for accessing sharded databases, a new feature in Oracle Database 12.2. Support for creating connections using the SYSBACKUP, SYSDG, SYSKM and SYSRAC roles was also added, as was support for identifying the id of the transaction which spawned a subscription message. For those on Windows, improved error messages were created for when an Oracle Client of the wrong architecture is in the PATH environment variable. Improvements were also made to the debugging infrastructure. The test suite has also been expanded considerably. A number of bugs were squashed to improve ODPI-C and in preparation for the upcoming releases of cx_Oracle 6.1 and node-oracledb 2.0, both of which use ODPI-C. See the full release notes for more information.

ODPI-C References Home page: https://oracle.github.io/odpi/ Code: https://github.com/oracle/odpi Documentation: https://oracle.github.io/odpi/doc/index.html Release Notes: https://oracle.github.io/odpi/doc/releasenotes.html Report issues and discuss: https://github.com/oracle/odpi/issues


General

What's on at Oracle OpenWorld for Developers using Python, Node.js, PHP, R, C and C++

Oracle OpenWorld in San Francisco is approaching and it's time to plan your schedule. The sessions that I'm tracking are all listed on the official schedule page Application Development with Node.js, Python, PHP, R, C and C++. Bookmark that link! And make sure to visit our Demo booth ("SOA-042") in Moscone West.

Conference Sessions

Oracle Net Services 12c: Best Practices for Database Performance and Scalability [CON6718] Monday, Oct 02, 4:45 p.m. - 5:30 p.m. | Moscone West - Room 3011 This session is always a gold mine of useful information about network configuration and tuning. The tips can help improve performance and scalability of applications.

Best Practices for Application Development with Oracle Database Cloud Services [CON6717] Monday, Oct 02, 5:45 p.m. - 6:30 p.m. | Moscone West - Room 3008 Configuration, deployment, and optimization of database applications for Oracle Cloud is covered by the VP most intimately involved with making it possible for programs and applications to connect to (and from) Oracle Database cloud services.

Building Python Messaging Apps with Oracle Database [CON7344] Tuesday, Oct 03, 12:15 p.m. - 1:00 p.m. | Moscone West - Room 2002 This is a practical, code-heavy session at the Oracle Code part of JavaOne. Oracle Database Advanced Queuing features usable directly in Python are explained via an application.

Node.js: JavaScript Application Development for Oracle Database [CON6712] Tuesday, Oct 03, 5:45 p.m. - 6:30 p.m. | Moscone West - Room 3008 See what you can do with Node.js and Oracle Database and find out how the node-oracledb driver is evolving with Node.js.

Building Real-Time Data in Web Applications with Node.js [CON6738] Wednesday, Oct 04, 2:00 p.m. - 2:45 p.m. | Moscone West - Room 3014 (Repeated as [CON1240] 2:45 p.m. - 3:30 p.m. | Moscone West - Room 2003) Dan McGhan is giving his popular talk on Node.js and the node-oracledb driver. It's so popular that he's giving it twice - back to back!

Best Practices for Application High Availability [CON6711] Wednesday, Oct 04, 3:30 p.m. - 4:15 p.m. | Moscone West - Room 3012 High Availability is a topic everyone is interested in. Learn when and where to use Oracle Database HA features to make applications resilient during planned and unplanned database outages.

Python and Oracle Database: Tips, Tricks, and the Best New Features [CON6714] Wednesday, Oct 04, 4:30 p.m. - 5:15 p.m. | Moscone West - Room 3012 This session is packed full of tips on how to connect Python applications to Oracle Database.

Performance and Scalability Techniques for Oracle Database Applications [CON6710] Wednesday, Oct 04, 5:30 p.m. - 6:15 p.m. | Moscone West - Room 3012 This session from the development group responsible for Oracle Database client APIs is great for anyone trying to write code that connects to Oracle Database. There are some great tips here.

Hands-on Lab Session

Python and Oracle Database 12c: Scripting for the Future [HOL7605] Wednesday, Oct 04, 9:45 a.m. - 10:45 a.m. | Hilton San Francisco Union Square (Ballroom Level) - Continental Ballroom 6 Sit down, and start typing: we've got machines set up ready for you to learn how to use Python to connect to Oracle Database.

DevLive Interviews

Come and hear these informal chats at the DevLive Stage, Moscone West Level 1. The discussion is bound to be broad reaching and lively.

Node.js: JavaScript Application Development for Oracle Database Monday, Oct 02, 3:00 p.m. - 3:30 p.m. | Developer Lounge, Moscone West

Building Python Messaging Apps with Oracle DB12c Tuesday, Oct 03, 3:00 p.m. - 3:30 p.m. | Developer Lounge, Moscone West

What Else?

Check out the best Java sessions and .NET sessions too!


General

Ruby and Ruby on Rails Recognized by Oracle Technology Network

I'm really pleased that two key contributors to the Ruby and Ruby on Rails communities for Oracle Database have been honored. Takehiro Kubo and Yasuo Honda have both become Oracle ACEs. The Oracle Technology Network's Oracle ACE program describes itself as recognizing Oracle evangelists and enthusiasts. Both Takehiro and Yasuo fit this description exactly. To me, Takehiro Kubo is most visible in his work on the ruby-oci8 gem. Ruby-oci8 is a Ruby interface for Oracle Database. Takehiro created ruby-oci8 and is actively maintaining it and helping users. He also willingly contributes his knowledge to Oracle Database interfaces for other programming languages, helping both maintainers and users. An eager adopter of new technology, he is currently developing ruby-odpi, a rewrite of the interface that is based on the strengths of Oracle's new ODPI-C wrapper. Most Oracle Ruby and JRuby developers use the Ruby on Rails web application framework. Here Yasuo Honda is the key person. He has been the lead maintainer of the Ruby on Rails Oracle ActiveRecord Adapter for some years now. He has nurtured an active community of users and contributors, keeping up with both framework and library improvements. He has contributed enormously to its status as a very popular development environment. He freely contributes his knowledge and expertise. Both Takehiro and Yasuo personify the ideal open source maintainers. They are able to create useful, beautiful software components that other people want to use. They take their roles seriously, and have shown long term commitment to their projects' successes. Congratulations!


Instant Client

Oracle Instant Client 12.2 now has SQL*Loader and Data Pump

This is a guest post by Jim Stenoish, Senior Director of Software Development, whose products include all tools used for the high speed movement, transformation, and loading of data and metadata for Oracle Database. Oracle Instant Client 12.2 now includes SQL*Loader as well as the Data Pump command line utilities expdp and impdp, and the traditional Export/Import utilities exp and imp. Previously, these tools were only available with a 'full' client installation, or in a database installation. Being in Instant Client allows users to access these tools with a smaller footprint on disk and with a much quicker install than for a full client. The OTN license is also more convenient for many users. Now that these tools are part of Instant Client, it's easy to load data from a file on a system that doesn't already have Oracle software installed.

Installing the Tools

You can install the tools by unzipping the Oracle Instant Client 'Basic' (or 'Basic Light') package and the 'Tools' package to the same location. See the Instant Client home page for links to download for each platform. Follow the platform-specific instructions given on the download page. For example, on Linux set LD_LIBRARY_PATH and PATH to the Instant Client directory. On Windows set PATH to the Instant Client directory and also install the Microsoft Visual Studio 2013 Redistributable.

Using the Tools

SQL*Loader allows you to load data from files, such as comma-separated value (CSV) files, into database tables. The expdp and impdp utilities allow you to move metadata and data between Oracle databases. The expdp utility unloads metadata and data from the database into Data Pump files on the database server. The impdp utility recreates metadata defined in a Data Pump file and loads table data stored in the files on the database server. It also provides a way to extract metadata and data over a DB Link (no files involved) from one database to another database. Documentation and examples for the tools are in the Database Utilities manual. If you have questions about SQL*Loader or Data Pump you can ask them on OTN.


Python

Python cx_Oracle 6 Production Release: Hitting A Six

cx_Oracle 6.0, the extremely popular Oracle Database interface for Python, is now Production on PyPI. cx_Oracle is an open source package that covers the Python Database API specification with many additions to support Oracle advanced features. It was begun by Anthony Tuininga in 1998 in the era of Oracle Database 8i and Python 1.5. Since then there have been over 30 releases covering up to Oracle Database 12c and Python 3. Anthony joined Oracle relatively recently. In his time with us, he has been hard at work in the language API space, leading cx_Oracle changes as well as creating the ODPI-C interface, and also making contributions to node-oracledb. cx_Oracle 6.0 is the latest and greatest release from our group. It can connect to both on-premise databases and to Oracle Database Cloud Services. It can also be run in the cloud itself. The use cases for cx_Oracle are endless. Both small and big projects make use of cx_Oracle for all kinds of data manipulation and application development projects. cx_Oracle has extensive API documentation, solid samples, and a clean test suite. (We also run stress tests for cx_Oracle in the Oracle Database development environment). To complete the link dump, the home page is here and the source code is hosted on GitHub. cx_Oracle 6 Features cx_Oracle 6's support for Oracle Database is excellent. Easily installed from PyPI with Wheels. Support for Python 2 and 3. Support for Oracle Client 11.2, 12.1 and 12.2. Oracle's standard cross-version interoperability allows easy upgrades and connectivity to different Oracle Database versions. Connect to Oracle Database 9.2, 10, 11 or 12 (depending on the Oracle Client version used). SQL and PL/SQL Execution. The underlying Oracle Client libraries have significant optimizations including compressed fetch, pre-fetching, client and server result set caching, and statement caching with auto-tuning. Full use of Oracle Network Service infrastructure, including encrypted network traffic, authentication, and security features. Extensive Oracle data type support, including large object support (CLOB and BLOB). Direct binding to SQL objects. One great use case is binding Python objects to Oracle Spatial SDO objects. Array operations for efficient INSERT and UPDATEs. Array row counts and batch error handling for array operations. Fetching of large result sets. REF CURSOR support. Support for scrollable cursors. Go back and forth through your query results. Fetch PL/SQL Implicit Results. Easily return query results from PL/SQL. Row Prefetching. Efficient use of the network. Client Result Caching. Improve performance of frequently executed look-up statements. Support for Advanced Queuing. Use database notifications to build micro-service applications. Continuous Query Notification. Get notified when data changes. Support for Edition Based Redefinition. Easily switch applications to use updated PL/SQL logic. Support for setting application context during the creation of a connection, making application metadata more accessible to the database, including in LOGON triggers. End-to-end monitoring and tracing. Transaction Management. Session Pooling. Database Resident Connection Pooling (DRCP). Privileged Connections. External Authentication. Database startup and shutdown. Oracle Database High Availability Features, such as FAN notifications and Transaction Guard support. Changes since cx_Oracle 5.3 The main changes in cx_Oracle 6 are: Re-implemented to use our new ODPI-C abstraction layer for Oracle Database. 
The cx_Oracle API is unchanged. The cx_Oracle design, build and linking process has improved because of ODPI-C. Python Wheels are now available for install. This is made possible by the ODPI-C architecture. Less code is run while holding Python's Global Interpreter Lock, giving better scalability. Support for DML RETURNING of multiple rows. Support for Universal ROWIDs. LOB locators are now associated with LOB objects so they are not overwritten on database round trips. As you can see, the use of ODPI-C was a significant change, leading to code refactoring and simplification. It gives us a springboard to make future improvements to cx_Oracle using a code base that has multiple consumers (and hence testers) including node-oracledb 2. See What's New in the cx_Oracle documentation for the full change list. With the release of cx_Oracle 6, no new features are planned for cx_Oracle 5.

Installing cx_Oracle

With the new cx_Oracle 6 Wheels, installing is as straightforward as: python -m pip install cx_Oracle --upgrade If a binary is not available, the cx_Oracle source will be compiled. Oracle Client 11.2, 12.1 or 12.2 libraries are needed in your system library path (such as PATH on Windows, or LD_LIBRARY_PATH on Linux). These libraries provide a lot of features such as connection management, query and statement cache management, as well as high availability features. Libraries can be easily installed from the free Oracle Instant Client, an Oracle Database installation, or a full Oracle Client installation. Oracle's standard client-server interoperability applies, e.g. Oracle Client 12.2 can connect to Oracle Database 11.2 onward. Thanks to ODPI-C, you do not need to update cx_Oracle if you decide to upgrade your Oracle Client from 11.2 or 12.1 to a newer version. You can simply change your library loading path, e.g. PATH or LD_LIBRARY_PATH, to the new version of the Oracle Client libraries.

Once you have cx_Oracle installed, you can connect to your database. For example:

from __future__ import print_function
import cx_Oracle

connection = cx_Oracle.connect("hr", "welcome", "localhost/orclpdb")
cursor = connection.cursor()
cursor.execute("""
    SELECT first_name, last_name
    FROM employees
    WHERE department_id = :did AND employee_id > :eid""",
    did = 50, eid = 190)
for fname, lname in cursor:
    print("Values:", fname, lname)

cx_Oracle References Home page: https://oracle.github.io/python-cx_Oracle/index.html Installation instructions: http://cx-oracle.readthedocs.io/en/latest/installation.html Documentation: http://cx-oracle.readthedocs.io/en/latest/index.html Source Code Repository: https://github.com/oracle/python-cx_Oracle

Last Words

The cx_Oracle interface is the most popular scripting language interface for Oracle Database. It has a long history, is widely used, and exposes features for building all kinds of database applications. The release of cx_Oracle 6 brings a refactored implementation that is more scalable and makes it easier to bring you great, future Oracle functionality. Thank you to all our users and testers who are part of a very big & happy cx_Oracle community. Keep in touch!
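To complement the query example above, here is a minimal sketch of the array operations mentioned in the feature list, using executemany() to insert several rows in one call. It is an illustration only: the ptab table and its columns are hypothetical, and the credentials are the same placeholders as in the connect example.

from __future__ import print_function
import cx_Oracle

# Hypothetical table for this sketch:
#   CREATE TABLE ptab (id NUMBER, descr VARCHAR2(100))
connection = cx_Oracle.connect("hr", "welcome", "localhost/orclpdb")
cursor = connection.cursor()

rows = [(1, "First"), (2, "Second"), (3, "Third"), (4, "Fourth")]

# A single executemany() call binds and sends all of the rows together,
# instead of one execute() per row.
cursor.executemany("INSERT INTO ptab (id, descr) VALUES (:1, :2)", rows)
connection.commit()

print(cursor.rowcount, "rows inserted")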


General

Open Source Library Drives Powerful Oracle Database Applications

ODPI-C The production release of Oracle Database Programming Interface for C (ODPI-C) is now available on GitHub.  ODPI-C is an open source library of C code that simplifies access to Oracle Database for applications written in C or C++.   ODPI-C: exposes a C API in a readily consumable way to C and C++ developers writing applications for Oracle Database. is aimed at language interfaces and applications where speed of development is important, and where special-case Oracle features are not needed. is already being used for Python, Node.js, Go and Rust interfaces, as well as for custom applications. allows faster application implementation with less code. makes memory and resource management simpler, particularly for 'binding' and 'defining' data. A reference counting mechanism adds resiliency by stopping applications destroying in-use resources. is available as source code on GitHub under the Apache 2.0 and/or the Oracle UPL licenses, for direct inclusion into your own code bases. ODPI-C is a wrapper over Oracle Call Interface. OCI is Oracle Database's main C API and is widely used by our products and user applications so it is stable, fast, scalable and supported. OCI's API is extremely flexible and gives fine-grained control to the developer for a very wide range of use cases. ODPI-C is also flexible but is aimed primarily at language interface creators. These creators are programming within the confines of a scripting language's type system and semantics. The languages often expose simplified data access to users through cross-platform, common-denominator APIs. Therefore it makes sense for ODPI-C to provide easy to use functionality for common data access, while still allowing the power of Oracle Database to be used. Of course ODPI-C isn't just restricted to language interface usage. Since it provides a simple programming experience, if ODPI-C has the functionality you need for accessing Oracle Database, then you can add it to your own projects. ODPI-C is developed by the Oracle Database Data Access team, who also look after OCI and other language APIs for Oracle Database. Anthony Tuininga has been leading the ODPI-C effort. He has made full use of his extensive knowledge as creator and maintainer of the extremely popular, and full featured, Python cx_Oracle interface. A big thank you to all the users who have given feedback on ODPI-C pre-releases. ODPI-C Features The ODPI-C feature list currently includes all the normal calls you'd expect to manage connections and to execute SQL and PL/SQL efficiently. It also has such gems as SQL and PL/SQL object support, scrollable cursors, Advanced Queuing, and Continuous Query Notification. ODPI-C does extra validation and has a reference counting mechanism for resiliency to help stop applications destroying in-use OCI resources. Writing threaded mode applications is easier. 
The Oracle feature list in this initial release, in no particular order, is:
11.2, 12.1 and 12.2 Oracle Client support
9.2 and higher Oracle Database support (depending on Oracle Client version)
SQL and PL/SQL execution
Character data types (CHAR, VARCHAR2, NCHAR, NVARCHAR2, CLOB, NCLOB, LONG)
Numeric data types (NUMBER, BINARY_FLOAT, BINARY_DOUBLE)
Dates, Timestamps, Intervals
Binary types (BLOB, BFILE, RAW, LONG RAW)
PL/SQL data types (PLS_INTEGER, BOOLEAN, Collections, Records)
JSON
User Defined Types
REF CURSOR, Nested cursors, Implicit Result Sets
Array fetch
Array bind/execute
Session pools, with connection validation
Standalone connections
Database Resident Connection Pooling (DRCP)
External authentication
Statement caching
End-to-end tracing, mid-tier authentication and auditing
Edition Based Redefinition
Scrollable cursors
DML RETURNING
Privileged connection support (SYSDBA, SYSOPER, SYSASM, PRELIM_AUTH)
Database Startup/Shutdown
Session Tagging
Proxy authentication
Batch Errors
Array DML Row Counts
Query Result Caching
Application Continuity
Query Metadata
Password Change
Two Phase Commit
Continuous Query Notification
Advanced Queuing

In case you want to access other OCI functionality without having to modify ODPI-C code, there is a call to get the underlying OCI Service Context handle. ODPI-C applications can take advantage of OCI features which don't require API access, such as the oraaccess.xml configuration for enabling statement cache auto-tuning. Similarly, Oracle Database features controlled by SQL and PL/SQL, such as partitioning, can be used in applications, as you would expect. Since communication to the database is handled by Oracle's Network Services, features such as encrypted communication and LDAP can easily be configured.

Using ODPI-C

See ODPI-C Installation for detailed instructions. ODPI-C code can be included in your C or C++ applications and compiled like any OCI application. Or, if you want to use ODPI-C as a shared library, use the provided example Makefile. To try out ODPI-C, build it as a shared library and then compile the sample programs. These show a number of ODPI-C features. You can also view the test suite. ODPI-C makes it easy to build and distribute application code or binaries because it is open source, and OCI headers and libraries are not needed at compilation. Applications need standard Oracle client libraries only at run time. Oracle client libraries must be installed separately. Versions 11.2, 12.1 or 12.2 are required. These allow applications to connect to Oracle Database 9.2 or later (depending on the client version). ODPI-C uses the shared library loading mechanism available on each supported platform to load the Oracle Client library at run time. This allows code using ODPI-C to be built only once, and then run using available Oracle Client libraries. Oracle client libraries are available in the free, easily installed Oracle Instant Client "Basic" and "Basic Light" packages. Client libraries are also available in any Oracle Database installation, or in the full Oracle Client installation. ODPI-C has been tested on Windows, macOS and Linux. Compilation on other platforms such as AIX and Solaris x64 is possible, but these have not been fully tested.

Wrap Up

ODPI-C provides an API to Oracle Database that enables functional, flexible application and scripting language API development. ODPI-C is being used by us for the Python cx_Oracle 6 interface, and in node-oracledb 2 for Node.js.
Third party developers have also started Go and Rust interfaces. These code bases are handy, bigger examples of how to use ODPI-C. It's also pleasing to have gotten positive feedback from internal Oracle projects that have adopted ODPI-C for a variety of other application development purposes. ODPI-C will continue to be enhanced, bringing great Oracle Database technologies to developers. Scripting language drivers will similarly see enhancements and get the benefits of the common ODPI-C code base. I think you'll be pleased with our direction and plans for developers and scripting languages in 2017.

ODPI-C References Home page: https://oracle.github.io/odpi/ Code: https://github.com/oracle/odpi Documentation: https://oracle.github.io/odpi/doc/index.html Report issues and discuss: https://github.com/oracle/odpi/issues

Footnote Trivia

Why is this first production release called version 2? Because ODPI-C is a significantly updated, refactored, and standalone release of code which had its origins in a node-oracledb 1.x abstraction layer.


General

Python cx_Oracle 6.0 RC 2 is on PyPI

Python cx_Oracle is the Python interface for Oracle Database. Anthony Tuininga has released the second (and probably final) Release Candidate of Python cx_Oracle 6.0 on PyPI. It's now full steam ahead towards the production release, so keep on testing. Issues can be reported on GitHub or the mailing list. To take a look, use the '--pre' option to pip to install this pre-release: python -m pip install cx_Oracle --upgrade --pre You will also need Oracle client libraries, such as from the free Oracle Instant Client. I want to highlight a few of the changes, focusing, as regular readers of my blog will know I favor, on usability. This release picks up the latest ODPI-C Oracle DB abstraction layer, which provides some nice fixes. In particular, one fix resolved a confusing 'UnicodeDecodeError' message that was displayed on Windows when cx_Oracle was imported. Now the actual underlying Windows error message is displayed, allowing you to see what the root problem is. The installation notes have been tidied up and made into a new Installation Chapter of the documentation, complete with troubleshooting tips. Some more introductory samples have been added, and the sample and test schema creation scripts improved. The scripts now reference a common file to set credentials, making it easier to play with them without needing to edit every single one. The full cx_Oracle release notes are here. Let us know what you find.
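If you do install the Release Candidate, a quick sanity check confirms which cx_Oracle build pip picked up and which Oracle client libraries are being loaded. This is only a small sketch using standard cx_Oracle module attributes; no database connection is needed.

import cx_Oracle

# Show the installed cx_Oracle release and the version of the Oracle client
# libraries found in the library search path.
print("cx_Oracle version:", cx_Oracle.version)
print("Oracle client version:", cx_Oracle.clientversion())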


General

Ever Evolving SQL*Plus 12.2.0.1 Adds New Performance Features

This is a guest post by Luan Nim, Senior Development Manager at Oracle. SQL*Plus 12.2.0.1 has introduced a number of features to improve performance and ease of use in general. These features can be enabled with SET commands, or via the command line. New Oracle SQL*Plus 12.2.0.1 features include:

SET MARKUP CSV

This option lets you generate output in CSV format. It also lets you choose the delimiter character to use and enable quotes ON or OFF around data. The benefit of using CSV format is that it is fast. This option improves performance when querying large amounts of data where formatted output is not needed.

Syntax: SET MARKUP CSV ON [DELIMI[TER] character] [QUOTE {ON|OFF}]

Example:

SQL> set markup csv on
SQL> select * from emp;
"EMPNO","ENAME","JOB","MGR","HIREDATE","SAL","COMM","DEPTNO"
7369,"SMITH","CLERK",7902,"17-DEC-80",800,,20
7499,"ALLEN","SALESMAN",7698,"20-FEB-81",1600,300,30
7521,"WARD","SALESMAN",7698,"22-FEB-81",1250,500,30
7566,"JONES","MANAGER",7839,"02-APR-81",2975,,20
7654,"MARTIN","SALESMAN",7698,"28-SEP-81",1250,1400,30
7698,"BLAKE","MANAGER",7839,"01-MAY-81",2850,,30
7782,"CLARK","MANAGER",7839,"09-JUN-81",2450,,10
7788,"SCOTT","ANALYST",7566,"19-APR-87",3000,,20
7839,"KING","PRESIDENT",,"17-NOV-81",5000,,10
7844,"TURNER","SALESMAN",7698,"08-SEP-81",1500,0,30
7876,"ADAMS","CLERK",7788,"23-MAY-87",1100,,20
7900,"JAMES","CLERK",7698,"03-DEC-81",950,,30
7902,"FORD","ANALYST",7566,"03-DEC-81",3000,,20
7934,"MILLER","CLERK",7782,"23-JAN-82",1300,,10

14 rows selected.

This option is also available from the command line with the "-m csv" argument.

$ sqlplus -m "csv on" scott/tiger @emp.sql

SQL*Plus: Release 12.2.0.2.0 Development on Wed Jul 5 23:12:14 2017
Copyright (c) 1982, 2017, Oracle. All rights reserved.
Last Successful login time: Wed Jul 05 2017 23:11:46 -07:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.2.0 - 64bit Development

"EMPNO","ENAME","JOB","MGR","HIREDATE","SAL","COMM","DEPTNO"
7369,"SMITH","CLERK",7902,"17-DEC-80",800,,20
7499,"ALLEN","SALESMAN",7698,"20-FEB-81",1600,300,30
7521,"WARD","SALESMAN",7698,"22-FEB-81",1250,500,30
7566,"JONES","MANAGER",7839,"02-APR-81",2975,,20
7654,"MARTIN","SALESMAN",7698,"28-SEP-81",1250,1400,30
7698,"BLAKE","MANAGER",7839,"01-MAY-81",2850,,30
7782,"CLARK","MANAGER",7839,"09-JUN-81",2450,,10
7788,"SCOTT","ANALYST",7566,"19-APR-87",3000,,20
7839,"KING","PRESIDENT",,"17-NOV-81",5000,,10
7844,"TURNER","SALESMAN",7698,"08-SEP-81",1500,0,30
7876,"ADAMS","CLERK",7788,"23-MAY-87",1100,,20
7900,"JAMES","CLERK",7698,"03-DEC-81",950,,30
7902,"FORD","ANALYST",7566,"03-DEC-81",3000,,20
7934,"MILLER","CLERK",7782,"23-JAN-82",1300,,10

14 rows selected.

SET FEEDBACK ONLY

The new ONLY option to SET FEEDBACK displays the number of rows selected without displaying the data. This is useful for users who are interested in measuring the time taken to fetch data from the database, without actually displaying that data.

Example:

SQL> set feedback only
SQL> select * from emp;

14 rows selected.

SET STATEMENTCACHE

This option caches executed statements in the current session. The benefit of this setting is that it reduces unnecessary parsing time for the same query. Therefore it improves performance when repeatedly executing a query in a session.

Example:

SQL> set statementcache 20
SQL> select * from emp;
SQL> select * from emp;

SET LOBPREFETCH

This option improves access to smaller LOBs, where LOB data is prefetched and cached.
The benefit of this setting is to reduce the number of network round trips to the server, allowing LOB data to be fetched in one round trip when the LOB data is within the defined LOBPREFETCH size.

Example:

SQL> set lobprefetch 2000
SQL> select * from lob_tab;

SET ROWPREFETCH

This option minimizes server round trips in a query. Rows are prefetched into the result set when a query is executed. The number of rows to prefetch is set with SET ROWPREFETCH. This option can reduce round trips by allowing Oracle to transfer query results on return from its internal OCI execute call, removing the need for the subsequent internal OCI fetch call to make another round trip to the DB.

Example:

SQL> set rowprefetch 20
SQL> select * from emp;

If, for example, you expect only a single row returned, set ROWPREFETCH to 2, which allows Oracle to get the row efficiently and to confirm no other rows need fetching.

Command line -FAST option

This command line option improves performance in general. When this option is used, it changes the following SET options to new values:
ARRAYSIZE 100
LOBPREFETCH 16384
PAGESIZE 50000
ROWPREFETCH 2
STATEMENTCACHE 20
Once logged in, these settings can also be changed manually.

Syntax: $ sqlplus -f @emp.sql

I hope the new features described above are helpful to you. For more information, please refer to the SQL*Plus Users Guide and Reference. If you have questions about SQL, PL/SQL or SQL*Plus, post them in the appropriate OTN space.


php

Updated PHP 7.2 PDO_OCI install 'configure' syntax

Frank Yang at Oracle has updated the PHP 7.2 'configure' option for PDO_OCI and added some of the heuristics previously only in the PHP OCI8 extension configure option. This has allowed the two extension option syntaxes to be aligned. PDO_OCI is PHP's PDO driver for Oracle Database. PHP 7.2 is in Alpha status. In common with most other database PDO drivers, the PDO_OCI driver is part of the PHP bundle and is not on PECL. The new PHP 7.2 PDO_OCI 'configure' syntax is like: --with-pdo-oci[=DIR] PDO: Oracle OCI support. DIR defaults to $ORACLE_HOME. Use --with-pdo-oci=instantclient,/path/to/instant/client/lib for an Oracle Instant Client installation. So now, for example, you could use: ./configure --with-pdo-oci=instantclient,/usr/lib/oracle/12.2/client64/lib \ --with-oci8=instantclient,/usr/lib/oracle/12.2/client64/lib Prior to PHP 7.2, configuring PDO_OCI with the Oracle Instant Client libraries required something like: ./configure --with-pdo-oci=instantclient,/usr,12.1 The version number on the end of the option was only partially validated. In many cases it was possible to use an 'incorrect' number and still get the desired libraries to be used. (Also each new Oracle Database version required PDO_OCI code changes to update a white list). When building with PHP 7.1 (and earlier) and using Instant Client, this old syntax is still needed. If you know PHP OCI8 'configure' options well, you may wonder why the install simplification heuristic on Linux to automatically use the highest installed version of Instant Client RPMs wasn't ported to PDO_OCI's 'configure'. Well, I still dream that bigger improvements to PDO_OCI will occur. The current incarnation of this dream is that PDO_OCI could one day be rewritten to use ODPI-C, like cx_Oracle 6 and node-oracledb 2. If this dream ever became reality, the configure option would become simply '--with-pdo-oci' since ODPI-C doesn't need Oracle headers or libraries at build time. ODPI-C only needs Oracle client libraries in LD_LIBRARY_PATH (or PATH on Windows) at run time. But I've had the dream of various PDO_OCI improvements for at least 10 years.... For the moment Oracle still recommends using PHP OCI8 in preference to PDO_OCI because OCI8 has superior features and can also take advantage of various Oracle client library connection and caching functionality.


Node.js

Node-oracledb 2.0.13-Development is now on GitHub

Node-oracledb 2.0.13-Development is now on GitHub. Node-oracledb is the Node.js interface for Oracle Database. Top features: Version 2 is based on the new ODPI-C abstraction layer. Additional data type support. The full Changelog is here. The node-oracledb 2.0.13-Development documentation is here. I'd recommend testing and reporting any issues as early as possible during the 2.0 Development release cycle. This is a development release so we are aware of some rough edges. I'll start a GitHub issue to track them. Full installation instructions are here but note that node-oracledb 2.0 is not yet on npm so you need to install from GitHub with: npm install oracle/node-oracledb.git#dev-2.0 All you then need are Oracle client 12.2, 12.1 or 11.2 libraries (e.g. the Oracle Instant Client 'Basic' or 'Basic Light' package) in your PATH or LD_LIBRARY_PATH or equivalent. Users of macOS must put the Oracle client libraries in ~/lib or /usr/local/lib. The use of ODPI-C makes installation a bit easier. Oracle header files are no longer needed. The OCI_LIB_DIR and OCI_INC_DIR environment variables are not needed. A compiler with C++11 support, and Python 2.7 are still needed, but a single node-oracledb binary now works with any of the Oracle client 11.2, 12.1 or 12.2 releases, improving portability when node-oracledb builds are copied between machines. You will get run time errors if you try to use a new Oracle Database feature that isn't supported by older client libraries, so make sure you test in an environment that resembles your deployment one. Other changes in this release are: Lob.close() now marks LOBs invalid immediately rather than during the asynchronous portion of the close() method, so that all other attempts are no-ops. Incorrect application logic in version 1 that attempted to close a connection while certain LOB, ResultSet or other database operations were still occurring gave an NJS-030, NJS-031 or NJS-032 "connection cannot be released" error. Now in version 2 the connection will be closed but any operation that relied on the connection being open will fail. Some NJS and DPI error messages and numbers have changed. This is particularly true of DPI errors due to the use of ODPI-C. Stated compatibility is now for Node.js 4, 6 and 8. Added support for fetching columns types LONG (as String) and LONG RAW (as Buffer). There is no support for streaming these types, so the value stored in the DB may not be able to be completely fetched if Node.js and V8 memory limits are reached. You should convert applications to use LOBs, which can be streamed. Added support for TIMESTAMP WITH TIME ZONE date type. These are mapped to a Date object in node-oracledb using LOCAL TIME ZONE. The TIME ZONE component is not available in Node.js's Date object. Added support for ROWID without needing an explicit fetchAsString. Data is now fetched as a String by default. Added support for UROWID. Data is fetched as a String. Added query support for NCHAR and NVARCHAR2 columns. Binding for DML may not insert data correctly, depending on the database character set and the database national character set. Added query support for NCLOB columns. NCLOB data can be streamed or fetched as String. Binding for DML may not insert data correctly, depending on the database character set and the database national character set. Removed node-oracledb size restrictions on LOB fetchAsString and fetchAsBuffer queries, and also on LOB binds. 
Node.js and V8 memory restrictions will still prevent large LOBs being manipulated in single chunks. The v1 limits really only affected users who linked node-oracledb with 11.2 client libraries. Statements that generate errors are now dropped from the statement cache. Applications running while table definitions change will no longer end up with unusable SQL statements due to stale cache entries. Applications may still get one error, but that will trigger the now invalid cache entry to be dropped so subsequent executions will succeed. ODPI-C has some extra smarts in there to make it even better than I describe. I can bore you with them if you ask - or you can check the ODPI-C source code. Note that Oracle best-practice is never to change table definitions while applications are executing. I know some test frameworks do it, but .... All these improvements are courtesy of ODPI-C's underlying handling. The use of ODPI-C is a great springboard for future features since it already has support for a number of things we can expose to Node.js. The ODPI-C project was an expansion of the previous DPI layer used solely by node-oracledb. Now ODPI-C is in use in Python cx_Oracle 6, and is being used in various other projects. For example Tamás Gulácsi has been working on a Go driver using ODPI-C. (Check out his branch here). Kubo Takehiro started an Oracle Rust driver too - before he decided that he preferred programming languages other than Rust! Our stated plan for node-oracledb is that maintenance of node-oracledb 1.x will end on 1st April 2018, coinciding with the end-of-life of Node 4, so start testing node-oracledb 2.0.13-Development now.


General

ODPI-C 2.0.0-beta.4 released: More & Better-er

ODPI-C 2.0.0-beta.4 has been released to GitHub. ODPI-C is an abstraction layer for Oracle Call Interface OCI. This release has incremental improvements, based on user feedback and as a result of continued testing. There was some discussion around best practice number handling, and how to make use of ODPI-C 'variable' structures without requiring a DB connection. These have resulted in a few tweaks. See the release notes for all changes. The highlights in this release are: Added support for getting/setting attributes of objects or element values in collections that contain LOBs, BINARY_FLOAT values, BINARY_DOUBLE values and NCHAR and NVARCHAR2 values Enabled temporary LOB caching in order to avoid disk I/O. Changed default native type to DPI_ORACLE_TYPE_INT64 if the column metadata indicates that the values are able to fit inside a 64-bit integer. Added function dpiStmt_defineValue(), which gives the application the opportunity to specify the data type to use for fetching without having to create a variable. In addition, a new macro DPI_DEBUG_LEVEL can be enabled during compilation to display logging and trace information. Personally I'd set it to DPI_DEBUG_LEVEL_FREES in this initial Beta phase to check whether we've made any mistakes with resource handling. We'd like to wrap up the Beta phase soon, so please continue pounding on ODPI-C and send us feedback. If you want to see ODPI-C in a bigger project, check out Python cx_Oracle 6.0b2, which has been updated to use this new release of ODPI-C.


General

ODPI-C 2.0.0 Beta 2 is out with runtime client linking

Anthony Tuininga just released ODPI-C 2.0.0-beta.2 on GitHub. The release notes list all changes. ODPI-C is an open source library of C code that simplifies the use of common Oracle Call Interface (OCI) features for Oracle Database drivers and user applications. It sits on top of OCI and requires Oracle client libraries. The big change in this release is the addition of runtime dynamic linking of the Oracle client libraries. It was initiated by a comment from Vincent Rogier, author of OCILIB, when he took a look at ODPI-C. Once you have downloaded the ODPI-C source (and written something using its API, or copied the samples) you can compile your code without needing the Oracle header files (e.g. you don't need the Oracle Instant Client SDK). All you need to have are the Oracle client libraries from Oracle Database 11.2, 12.1 or 12.2 in your path when you run your application. These client libraries can be found in the Instant Client Basic or Basic Light packages, in a database ORACLE_HOME, or in a full Oracle Client install. Your ODPI-C-based application will use the first Oracle client libraries in your load path. More details about configuring the Oracle client are in the documentation. In summary, you only need to build your application once and it will run using Oracle 11.2, 12.1 or 12.2 client libraries. If you want to see ODPI-C in action in a bigger 'example', look at the master branch of cx_Oracle.


Node.js

Node-oracledb 1.13.0 is available on npm

Node-oracledb 1.13.0, the Node.js add-on for Oracle Database, is on npm. Top node-oracledb feature: a fetchAsBuffer mode for fetching BLOBs. A side note: Oracle Database 12.2 is also here! And Oracle Instant Client 12.2 is available too. Node-oracledb works with 12.2 client libraries (as well as 11.2 and 12.1). With a 12.2 client, you can connect to 11.2, 12.1 and 12.2 databases, using Oracle's cross-version interoperability which allows Oracle clients to connect to older (or newer) databases. One feature I like in the 12.2 client is its internal Session Pool connection checking, which transparently improves application connection reliability in unstable network environments.

BLOB Enhancements in node-oracledb

The final big piece of LOB support has been added to node-oracledb 1.13 with the addition of a new fetchAsBuffer option. This allows queries to return small BLOBs directly as Node.js Buffers. The new option is ideal for the many applications where BLOBs are kilobyte-sized or, at most, a few megabytes. Fetching as a Buffer is an alternative to the previous method of streaming, which should continue to be used for larger BLOBs that can't (or shouldn't) be one chunk of Node.js memory. There is an example of fetchAsBuffer in examples/lobselect.js:

oracledb.fetchAsBuffer = [ oracledb.BLOB ];
connection.execute(
  "SELECT b FROM mylobs WHERE id = :idbv",
  [2],
  function(err, result) {
    if (err) { . . . }
    if (result.rows.length === 0) { . . . }  // no results
    var blob = result.rows[0][0];  // This is a Buffer
    fs.writeFile('output.jpg', blob, "binary", function(err) {
      return cb(err, connection);
    });
  });

An alternative to the global oracledb.fetchAsBuffer is to use a fetchInfo option for the column at execute() time:

{ fetchInfo: {"B": {type: oracledb.BUFFER}} }, [ . . . ]

Pull Requests

A small improvement was made to PL/SQL Index-by array binding error messages in 1.13. This was based on PR #470 submitted by Hariprasad Kulkarni. In an array bind situation, if a value with an unexpected data type is used, then messages NJS-037 and NJS-052 now give the bind variable name (or position), and the index of the unexpected data in the input data array. This makes it easier to find and fix the problem. The PR is much appreciated.

Bug fixes

Some bugs were happily squashed in node-oracledb 1.13: Fixed several crashes and a memory leak using CLOBs with fetchAsString. Fixed several issues including a crash using NULLs and empty strings for LOB BIND_INOUT binds. Automatically clean up sessions in the connection pool when they become unusable after an ORA-56600 occurs. Overall, this is a good release for users working with CLOBs and BLOBs.

Plans for node-oracledb version 2

We are now going to work on a version 2 branch that incorporates the new ODPI-C layer. The code is mostly ready, though testing will take some time. We'll push a development release to GitHub soonish so you can check it out and comment during the stabilization phase. Initially node-oracledb 2.x will have the same functionality as 1.x, with a few small additions made possible by ODPI-C. The plan is for node-oracledb 1.x to go into maintenance mode. Maintenance of node-oracledb 1.x will end on 1st April 2018, coinciding with the end-of-life of Node 4. Some more details are in the earlier announcement.

Resources

Issues and questions about node-oracledb can be posted on GitHub. We value your input to help prioritize work on the add-on. Drop us a line! node-oracledb installation instructions are here. Node-oracledb documentation is here.
Finally, contributions to node-oracledb are more than welcome, see CONTRIBUTING.


Python

Python cx_Oracle 5.3 for Oracle Database Released

Today we are pleased to announce the release of cx_Oracle 5.3, the extremely popular Python interface for Oracle Database. Binary and source bundles can be installed from PyPI, as normal. cx_Oracle 5.3 works with Python version 2.7, and with versions 3.4 and higher. You can use cx_Oracle with Oracle 11.2, 12.1 and 12.2 client libraries, allowing connection to multiple Oracle Database versions. Oracle's standard client-server version interoperability allows connection to both older and newer databases. For example Oracle 11.2 client libraries can connect to Oracle Database 10.2 or later. This release marks the move of the source code repository and home page to GitHub. User documentation remains on readthedocs. The cx_Oracle mail list also continues to operate, but you may find it easier to transition to GitHub for questions and issues.

The New cx_Oracle 5.3

Ever since Anthony Tuininga joined Oracle, he has been hard at work adding improvements to cx_Oracle. (He's also been laying the groundwork for the next big release - more on that later in this post). I think you'll be impressed with cx_Oracle 5.3. At last year's Oracle OpenWorld, Anthony talked about some of the changes and got great positive feedback. Check out his presentation here. This is a big update. While a few of the changes are specific to Oracle Database 12c features (see release notes), you will appreciate all the things that will make your applications better even if you have older databases. And you are upgrading to Oracle Database 12.2, right? Don't forget you can get immediate access to Oracle Database 12.2 using the Oracle Database Cloud.

Key New Features

Support for Python 3.6. Note Python versions earlier than 2.7 are no longer supported. (You can still get cx_Oracle 5.2 from PyPI, if you really need Python 2.6 support). Support for Oracle client 12.2. Note Oracle clients earlier than 11.2 are no longer supported. Direct binding to SQL objects. One great use case is binding Python objects to Oracle Spatial SDO objects. See InsertGeometry.py PL/SQL records and collections can be bound. Interacting with PL/SQL stored logic has never been easier. Support for scrollable cursors. Go back and forth through your query results. See ScrollableCursors.py and the short sketch at the end of this post. Support for Advanced Queuing. Use database notifications to build micro-service applications. See AdvancedQueuing.py Fetch PL/SQL implicit results. Easily return query results from PL/SQL. See ImplicitResults.py Transaction Guard support. Verify transaction status after unplanned server outages. See TransactionGuard.py Support for Edition Based Redefinition. Easily switch applications to use updated PL/SQL logic. See Editioning.py Support for Statement Caching. Reduce costs of frequently executed statements. Now available with all connections. Support for setting application context during the creation of a connection. Make application metadata more accessible to the database, including in LOGON triggers. See AppContext.py

Other New Features

Continuous Query Notification use has been simplified.
See DatabaseChangeNotification.py and QueryChangeNotification.py. Support for pickling/unpickling error objects. Support for binding native integers. Support for setting the internal and external name for use in distributed transactions. Set the maximum lifetime of connections in a session pool. Larger row counts (greater than 2^32). Increased default query array size to 100.

Changes from cx_Oracle 5.2

In line with previous deprecations and announcements, the following items are no longer usable in cx_Oracle 5.3: The variable attribute maxlength has been dropped. Use bufferSize instead. The variable attribute allocelems has been dropped. Use numElements instead. The types UNICODE, FIXED_UNICODE and LONG_UNICODE have been dropped. Use NCHAR, FIXED_NCHAR and LONG_NCHAR instead. Callbacks on OCI functions are no longer supported.

Future Deprecation Announcements

In a future cx_Oracle 6 release, the following items will be removed: The module, action and clientinfo arguments to connection creation are deprecated and will be removed in cx_Oracle 6. Use the new application context instead. The cursor attribute numbersAsStrings is deprecated and will be removed in cx_Oracle 6. An output type handler should be used instead.

Things you could already do in cx_Oracle 5.2

All the new features in cx_Oracle 5.3 listed above are on top of a great history of database support. The high level features that you've already been using in the previous release are: SQL and PL/SQL Execution Extensive data type support Fetching of large result sets REF CURSORs Binding PL/SQL Arrays Large Objects: CLOBs and BLOBs Transaction Management Session Pooling Database Resident Connection Pooling (DRCP) Privileged Connections and Database startup/shutdown External Authentication Continuous Query Notification Row Prefetching Client Result Caching End-to-end tracing Oracle Database High Availability Features

A future cx_Oracle 6

I said you'd be impressed with cx_Oracle 5.3, but Anthony has already looked ahead to cx_Oracle 6, which aims to be even better. Over the past year, Anthony has been working hard on ODPI-C, an open source library of C code that simplifies and standardizes the use of common Oracle Call Interface (OCI) features, such as those used by cx_Oracle. ODPI-C was recently released on GitHub. As you can see from the cx_Oracle source code, the master branch of cx_Oracle code has been updated to use ODPI-C. If you want to test the current state of this future release, download a bundle from GitHub and build it. The release notes (so far) for user visible changes are here.

Summary

The move of cx_Oracle to under the umbrella of Oracle has given Anthony more time to focus on cx_Oracle and on making database access better for all developers. Overall cx_Oracle is the most comprehensive and popular scripting language driver for Oracle Database. It takes advantage of the great features built into the Oracle client libraries and into Oracle Database. And, of course, Python cx_Oracle applications can take advantage of Oracle Net features such as network data encryption. cx_Oracle 5.3 is a big release with support for some great Oracle Database features. Take a look.

cx_Oracle Resources

Home Page Documentation Mail List | GitHub Issues Source Code
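Since scrollable cursors are one of the features called out above, here is a minimal sketch of them in action. It is an illustration only: it assumes the classic SCOTT.EMP demo table and placeholder connection details, and follows the same approach as the ScrollableCursors.py sample rather than reproducing it.

import cx_Oracle

connection = cx_Oracle.connect("scott", "tiger", "localhost/orclpdb")

# Ask for a scrollable cursor so the application can move both forward
# and backward through the result set.
cursor = connection.cursor(scrollable=True)
cursor.execute("SELECT empno, ename FROM emp ORDER BY empno")

cursor.scroll(mode="last")           # position on the last row
print("Last row:", cursor.fetchone())

cursor.scroll(mode="first")          # back to the first row
print("First row:", cursor.fetchone())

cursor.scroll(2, mode="relative")    # skip forward two rows from here
print("A later row:", cursor.fetchone())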


Node.js

node-oracledb 1.12: Working with LOBs as String and Buffer. Connection Pinging.

Node-oracledb 1.12, the Node.js add-on for Oracle Database, is now on NPM. Top features: LOBs as JavaScript Strings and Buffers. Pool connection 'aliveness' checking. The two main features of node-oracledb 1.12 add functionality that aids usability. You may not even notice one working in the background, silently re-establishing pooled connections after network outages. The other is a 'no brainer' that makes working with large objects (LOBs) easier. You'll probably use it and think nothing of it. This release also contains a number of bug fixes and small changes. Check the CHANGELOG for all the details. LOB Enhancements Adding to the existing support for using CLOBs and BLOBs as Node.js Streams, now Strings and Buffers can be bound to CLOBs and BLOBs for IN, IN OUT and OUT binds. CLOBs in queries can also be returned directly as Strings. And streaming to and from Oracle's 'temporary LOBs' is supported. Also in this release, Writeable LOB Streams now conclude with a 'close' event. The node-oracledb LOB documentation is worth reading. There are runnable examples in the examples directory. To insert large data, just bind a Node.js String or Buffer to a CLOB or BLOB column, respectively. For example: var fs = require('fs'); var str = fs.readFileSync('example.txt', 'utf 8'); conn.execute( "INSERT INTO mylobs (id, myclobcol) VALUES (:idbv, :cbv)", { idbv: 1, cbv: str }, // type and direction are optional for IN binds function(err, result) { if (err) console.error(err.message); else console.log('CLOB inserted from example.txt'); . . . You can also bind Strings and Buffers to PL/SQL LOB parameters as IN, OUT and IN OUT binds. To fetch CLOBs as Strings from queries, use fetchAsString or fetchInfo: // return all CLOBs as Strings oracledb.fetchAsString = [ oracledb.CLOB ]; conn.execute( "SELECT mycol FROM mylobs WHERE id = 1", function(err, result) { if (err) { console.error(err.message); return; } if (result.rows.length === 0) console.error("No results"); else { var clob = result.rows[0][0]; console.log(clob); } }); Currently BLOBs in queries must be streamed; we didn't want to hold up the 1.12 release. Use the right tool for the job: use Strings and Buffer for 'small' LOBs, and use Streams for everything else. There is a theoretical limit just under 1 GB in node-oracledb for manipulating LOBs as Strings or Buffers. Luckily you wouldn't want to be allocating that much memory in Node.js applications - it's better to stream. And Node.js itself doesn't cope well with large data. It can get unstable with Strings past the several hundred megabyte mark. (Note that if node-oracledb is linked with Oracle client 11g libraries, than the upper size of Strings and Buffers is just under the 64 KB mark: the moral is use Oracle client 12c libraries - you can still connect to Oracle Database 10.2, 11g and 12c). For larger data you have multiple options for streaming. To insert into a table column, use the existing RETURNING INTO method where you get an Oracle LOB locator and use Node.js Streams functionality to put data into it. For fetching, use the existing LOB Streaming method and stream data out of the Lob. If data is too big to bind as a String or Buffer to a PL/SQL parameter, you can create a 'temporary LOB' in node-oracledb, stream into it, and then bind it. This is new in node-oracledb 1.12 First, use the new connection.createLob() method to create a temporary LOB, here of type CLOB: conn.createLob(oracledb.CLOB, function(err, templob) { if (err) { . . . } // ... 
else use templob }); This creates an instance of a node-oracledb Lob class. Once created, data can be inserted into the Lob. For example, to read a text file: templob.on('error', function(err) { somecallback(err); }); // The data was loaded into the temporary LOB, so use it templob.on('finish', function() { somecallback(null, templob); }); // copies the text from 'example.txt' to the temporary LOB var inStream = fs.createReadStream('example.txt'); inStream.on('error', function(err) { . . . }); inStream.pipe(templob); Now that the LOB has been populated, it can be bound in somecallback() to a PL/SQL IN parameter: // For PROCEDURE lobs_in(p_id IN NUMBER, c_in IN CLOB) conn.execute( "BEGIN lobs_in(:id, :c); END;", { id: 3, c: templob }, // type and direction are optional for IN binds function(err) { if (err) { return cb(err); } console.log("Call completed"); return cb(null, conn, templob); }); When the LOB is no longer needed, it must be closed with lob.close(): templob.close(function (err) { if (err) . . . else // success }); Connection Pool Enhancements The other main feature in node-oracledb 1.12 is connection pool session 'aliveness' checking. This is enabled by default. You may never notice it in action but it will improve application behavior where networks are unstable, or database services become temporarily unavailable. When pooled connections have been established to the database, but are idle in the connection pool not currently being used, there is a chance that a network outage will make those connections unusable. A common scenario is when a developer's laptop is closed for a while. Unless node-oracledb is linked with Oracle 12.2 client libraries, any node-oracledb pooled getConnection() call could return one of the unusable connections. For users of Oracle 11.2 or 12.1 client libraries, a new poolPingInterval setting will do a 'ping' to the database to check that the connection is OK before returning that connection to the application. If it isn't OK, then another connection is returned. There is the overhead of a 'round trip' to the database in doing a ping, so there could be some impact on scalability. But the ping setting is time based so there is no overhead on active pools. The ping will only happen when a connection has been idle in the pool for poolPingInterval seconds. If a connection has been recently used, it will be returned to the application without being checked. The default interval is 60 seconds. The feature can be configured to always force the ping, to disable it entirely, or to use whatever interval meets your performance and quality-of-service requirements. The value can be set when a pool is created, or turned on globally: oracledb.poolPingInterval = 60; Applications should continue to detect and handle execution-time errors, since execution errors may occur, or a network outage may occur between getConnection() and execute() calls, or connections may have been in use and not released back to the pool when the network dropped, or the idle time may have been shorter than the ping interval setting and no ping performed. The new feature is great at improving the reliability of connections in unstable environments but is just one of the options and tools that can be used to provide a highly available application.
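As a minimal sketch (not from the post above; the connection details are placeholders), the interval can also be set for an individual pool at creation time rather than globally:

var oracledb = require('oracledb');

oracledb.createPool(
  {
    user             : "hr",
    password         : "welcome",
    connectString    : "localhost/orcl",
    poolPingInterval : 60  // seconds a connection may sit idle before it is pinged on getConnection()
  },
  function(err, pool) {
    if (err) { console.error(err.message); return; }
    pool.getConnection(function(err, connection) {
      if (err) { console.error(err.message); return; }
      // ... use connection, then connection.release(...)
    });
  });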
The ping feature is a no-op when node-oracledb is linked with Oracle 12.2 client libraries (independent of the database version at the other end of the connection), since those libraries have an always-enabled, lightweight connection check that removes the need for node-oracledb to do its own ping. This adds even more certainty that a connection will be valid at time of first use after a getConnection() call. Oracle 12.2 is currently available in Oracle Cloud. The pool pinging documentation has more information. Documentation Updates If you haven't scanned the documentation for a while, take a look. With each release, more and more tips and information get added. A couple of community contributions from Leigh Schrandt and Nick Heiner regarding the README were gratefully received. Thank you! Testing Testing of node-oracledb is also something I wanted to mention. The test suite continues to grow, as you can see when you run it. We also have additional stress and load tests - overall these extra tests take days to complete. There are no guarantees, of course, but node-oracledb is more than solid. Resources Issues and questions about node-oracledb can be posted on GitHub. We value your input to help prioritize work on the add-on. Drop us a line! node-oracledb installation instructions are here. Node-oracledb documentation is here. Finally, contributions to node-oracledb are more than welcome, see CONTRIBUTING.


General

Fast Generation of CSV and JSON from Oracle Database

Want a fast way to spool CSV or JSON from Oracle Database? Check out the new SQL*Plus 12.2 SET MARKUP CSV option and the new Oracle Database 12.2 JSON_OBJECT operator. You can try Oracle Database 12.2 now using Oracle Cloud Database Service. Fast, Easy CSV with SQL*Plus and Oracle Database First, let's see CSV ("comma separated values") output in SQL*Plus 12.2. Start with this script, t.sql: set feedback off select department_id, department_name from departments where department_id < 110; exit Executed traditionally, you get formatted output: SQL> @t.sql DEPARTMENT_ID DEPARTMENT_NAME ------------- ------------------------------ 10 Administration 20 Marketing 30 Purchasing 40 Human Resources 50 Shipping 60 IT 70 Public Relations 80 Sales 90 Executive 100 Finance Running it with the new CSV mode: SQL> set markup csv on SQL> @t.sql "DEPARTMENT_ID","DEPARTMENT_NAME" 10,"Administration" 20,"Marketing" 30,"Purchasing" 40,"Human Resources" 50,"Shipping" 60,"IT" 70,"Public Relations" 80,"Sales" 90,"Executive" 100,"Finance" Simple! The full CSV syntax is: SET MARKUP CSV {ON|OFF} [DELIMI[TER] character] [QUOTE {ON|OFF}] You can see that the delimiter can be changed from a comma, and quoting of fields can be disabled. The SET MARKUP option can also be enabled from the command line with the -m option: $ sqlplus -s -m 'csv on' cj@'"localhost/pdb1"' @t.sql "DEPARTMENT_ID","DEPARTMENT_NAME" 10,"Administration" 20,"Marketing" 30,"Purchasing" 40,"Human Resources" 50,"Shipping" 60,"IT" 70,"Public Relations" 80,"Sales" 90,"Executive" 100,"Finance" (Pro tip: the -s silent option may hide the password prompt on some platforms making SQL*Plus appear to hang! Enter the password and all will be well.) CSV mode bypasses the traditional SQL*Plus pagination routines, making output faster to generate. And using the -m 'csv on' option additionally invokes a faster I/O subsystem, and also automatically sets some SET options to the values of another new option sqlplus -f. Two of the changes are increasing SET ARRAYSIZE to 100 and SET ROWPREFETCH to 2. Increasing SET ARRAYSIZE allows larger amounts of data to be returned in each low level request to the database, thus improving overall efficiency. Having SET ROWPREFETCH at 2 reduces the database 'round trips' required between SQL*Plus and the database for queries that return only single rows. You should adjust the values of these, and other, settings to suit your data size and performance goals. CSV mode can be used when connecting SQL*Plus 12.2 to Oracle Database 12.2 or earlier. Fast, Easy JSON with SQL*Plus and Oracle Database The Oracle Database 12.2 JSON_OBJECT function is a great way to convert relational table output into JSON. Combined with SQL*Plus 12.2's efficient CSV output - and with quoting of columns disabled - you can spool JSON very easily.
Here's a SQL*Plus script t2.sql to return JSON output: set heading off set feedback off select json_object ('deptId' is d.department_id, 'name' is d.department_name) department from departments d where department_id < 110; exit Running it generates the desired JSON output: $ sqlplus -s -m 'csv on quote off' cj@localhost/pdb1 @t2.sql {"deptId":10,"name":"Administration"} {"deptId":20,"name":"Marketing"} {"deptId":30,"name":"Purchasing"} {"deptId":40,"name":"Human Resources"} {"deptId":50,"name":"Shipping"} {"deptId":60,"name":"IT"} {"deptId":70,"name":"Public Relations"} {"deptId":80,"name":"Sales"} {"deptId":90,"name":"Executive"} {"deptId":100,"name":"Finance"} Summary SQL*Plus 12.2 and Oracle Database 12.2 allow efficient and easy access to CSV and JSON data. If you don't yet have 12.2, or you want to create web services and do more with the data, check out Oracle ORDS or roll your own solution using the node-oracledb driver for Node.js.
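If you do roll your own solution with node-oracledb, a minimal sketch along these lines could produce the same JSON documents. This is an illustration only (assumptions: a reachable 12.2 database with the HR sample schema, and placeholder connection details):

var oracledb = require('oracledb');

oracledb.getConnection(
  { user: "hr", password: "welcome", connectString: "localhost/pdb1" },
  function(err, connection) {
    if (err) { console.error(err.message); return; }
    connection.execute(
      "SELECT json_object('deptId' IS department_id, 'name' IS department_name) " +
      "FROM departments WHERE department_id < 110",
      function(err, result) {
        if (err) { console.error(err.message); return; }
        result.rows.forEach(function(row) { console.log(row[0]); });  // one JSON document per row
        connection.release(function(err) { if (err) console.error(err.message); });
      });
  });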


General

Application Development at Oracle OpenWorld, San Francisco, September 2016

Well, there is certainly a lot going on at Oracle OpenWorld this September. You can browse the session catalog for interesting talks. Update If you need just one link, use our "Focus on Document" to guide your session choice. Here are a few highlights in my area: Python and Oracle Database: Tips, Tricks, and the Best New Features [CON6543] Room: Park Central - Concordia Monday 19th September, 03:00pm - 03:45pm Best Practices for Application Performance and Scalability [CON6538] Park Central - Metropolitan I Monday 19th September, 05:30pm - 06:15pm Session Title: Meet the Oracle Programming and Scripting Experts [MTE7200] Room: Moscone South - 310 Tuesday 20th September, 06:15pm - 07:00pm Best Practices for Application High Availability [CON6539] Park Central - Metropolitan II Wednesday 21st September, 12:15pm - 01:00pm Node.js: Asynchronous JavaScript Application Development for Oracle Database 12c [CON6542] Room: Park Central - Concordia Thursday 22nd September, 10:45am - 11:30am Best Practices for Application Development with Oracle Database Cloud Service [CON6540] Moscone South - 104 Thursday 22nd September, 01:15pm - 02:00pm That's some great content there. The "Meet the Experts" session is the interactive session where you get to hear from, and ask questions to, our key developers and managers in the Scripting Language and .NET area. If you're shy, you don't have to speak - just come and learn. We'll also have a demo booth open on the exhibition floor so you can come and chat. (Its location is yet to be announced). I really hope to see you during the 2016 conference.


Node.js

Node-oracledb 1.11 Released to NPM

We've pushed out a release of node-oracledb to GitHub and NPM. Top feature: Connection Pool Cache The node-oracledb driver connects Node.js to Oracle Database for fast and functional applications. The changes in node-oracledb 1.11 are: Added a connection pool cache feature allowing pools to be given a string alias at creation. This makes pools easier to use. Pools are typically created in one file and later used in other files. Giving a pool a string alias allows these other modules to access the pool by knowing only its alias name. When pools are created with oracledb.createPool(), the pool attributes can now optionally contain a new poolAlias string. var hrPoolPromise = oracledb.createPool({ poolAlias: 'pool1', user: 'hr', password: 'welcome', connectString: 'localhost/orcl'}); Pools can be retrieved from the cache using a new oracledb.getPool() method, and then used as before to get connections. Multiple pools, each with a different alias, can be used. Interestingly, oracledb.getConnection() has been enhanced to take a pool alias specifying which pool to return a connection from. This bypasses the need to call oracledb.getPool(). And there's more! The first pool (in a normal code flow) created without a poolAlias attribute is given the alias "default". Connections can be retrieved from this pool by using oracledb.getConnection() without passing an alias at all. This means applications that create only a single pool can simply use oracledb.getConnection() anywhere to get a connection from the pool. More information and examples are in the documentation. It is worth checking this since oracledb.getConnection() now has several different behaviors, depending on the use of aliases (and whether callbacks or promises are used). We'd recommend using aliases all the time if you create more than one pool and want to access them by alias. Using aliases is optional. Don't mix un-aliased and aliased pools unless you want to confuse yourself. Improved the bootstrap error message when the node-oracledb binary cannot be loaded. Fixed memory leaks with DATE and TIMESTAMP bind values. Fixed external authentication which broke in 1.10. Fixed metadata scale and precision values on AIX. A couple of other internal improvements are shown in the CHANGELOG. Resources Issues and questions about node-oracledb can be posted on GitHub. Your input helps us schedule work on the add-on. Drop us a line! Node-oracledb installation instructions are here. Node-oracledb API and user documentation is here.
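To round out the alias description above, here is a minimal sketch (the alias and connection details are the placeholders used earlier) of retrieving connections in another module by alias:

var oracledb = require('oracledb');

// Look the pool up by its alias, then get a connection from it:
var pool = oracledb.getPool('pool1');
pool.getConnection(function(err, connection) {
  if (err) { console.error(err.message); return; }
  // ... use connection ...
  connection.release(function(err) { if (err) console.error(err.message); });
});

// Or skip getPool() and pass the alias straight to getConnection():
oracledb.getConnection('pool1', function(err, connection) {
  if (err) { console.error(err.message); return; }
  // ... use connection ...
  connection.release(function(err) { if (err) console.error(err.message); });
});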


php

PHP OCI8 2.1.2 and OCI8 2.0.12 released on PECL

PHP OCI8 2.1.2 (for PHP 7) and OCI8 2.0.12 (for PHP 5.2+) have been uploaded to PECL. The OCI8 extension allows PHP applications to connect to Oracle Database. Use pecl install oci8 to install PHP OCI8 2.1 for PHP 7. Use pecl install oci8-2.0.12 to install PHP OCI8 2.0 for PHP 5.2 - PHP 5.6. Note: PHP 5.6 is the oldest release that the PHP community is still maintaining. As well as being available from PECL, PHP OCI8 is also included in the base PHP bundles. OCI8 2.0.12 will be included in PHP 5.6.26. OCI8 2.1.2 will be included in PHP 7.0.11 and PHP 7.1. The changes are: Fixed an invalid handle error with Implicit Result Sets. (OCI8 2.0.12 and 2.1.2) Fixed bug #72524 (Binding null values triggers ORA-24816 error) (OCI8 2.1). This necessarily reverts a change in PHP 7 (OCI8 2.1) back to match PHP 5's (OCI8 2.0) behavior. If you had PHP 7 code that was not specifying a size for OUT binds, then you will need to modify your code. Passing a maxlength to oci_bind_by_name() for OUT binds remains the documented and recommended best practice. Several internal code tidy-ups (OCI8 2.1). More about PHP and OCI8 can be found in The Underground PHP and Oracle Manual and on the Oracle Technology Network.


Node.js

Node-oracledb 1.10 has Enhanced Metadata

Top feature: Enhanced Metadata The changes in node-oracledb 1.10 are: Enhanced query metadata thanks to a Pull Request from Leonardo. He kindly allowed us to take over and fine tune the implementation. Additional metadata for query and REF CURSOR columns is available in the metaData object when the new boolean oracledb.extendedMetaData attribute or corresponding execute() option attribute extendedMetaData is true. For example, if the DEPARTMENTS table is like: SQL> desc departments Name Null? Type ----------------------------------------- -------- ---------------------------- DEPARTMENT_ID NOT NULL NUMBER(4) DEPARTMENT_NAME NOT NULL VARCHAR2(30) MANAGER_ID NUMBER(6) LOCATION_ID NUMBER(4) Then a query in node-oracledb would give extended metadata: [ { name: 'DEPARTMENT_ID', fetchType: 2002, dbType: 2, precision: 4, scale: 0, nullable: false }, { name: 'DEPARTMENT_NAME', fetchType: 2001, dbType: 1, byteSize: 30, nullable: false }, { name: 'MANAGER_ID', fetchType: 2002, dbType: 2, precision: 6, scale: 0, nullable: true }, { name: 'LOCATION_ID', fetchType: 2002, dbType: 2, precision: 4, scale: 0, nullable: true } ] You can see that the available attributes vary with the database type. The attributes are described in the metaData documentation. The commonly used column name is always available in metaData regardless of the value of extendedMetaData. This is consistent with previous versions. The metadata dbType and fetchType attribute numbers are described in new DB_TYPE_* constants and the existing node-oracledb type constants, respectively. Your code should use these constants when checking metadata types. Why did we make the extra metadata optional and off by default? Why do the types use numbers instead of strings? We had a lot of debate about common use cases, out-of-box experience, performance etc. and this is the way the cookie crumbled. I know this enhancement will make your applications easier to maintain and more powerful. Fixed an issue preventing the garbage collector from cleaning up when a query with LOBs is executed but LOB data isn't actually streamed. Report an error earlier when a named bind object is used in a bind-by-position context. A new error NJS-044 is returned. Previously errors like ORA-06502 were given since the expected attributes were not found and bind settings ended up as defaults. You can still use unnamed objects for bind-by-position binds like: var sql = "begin myproc(:1, :2, :3); end;"; var binds = [ id, name, { type: oracledb.STRING, dir: oracledb.BIND_OUT } ]; Here the third array element is an unnamed object. Fixed a bug where an error event could have been emitted on a QueryStream instance prior to the underlying ResultSet having been closed. This would cause problems if the user tried to close the connection in the error event handler as the ResultSet could have prevented it. Fixed a bug where the public close method was invoked on the ResultSet instance that underlies the QueryStream instance if an error occurred during a call to getRows. The public method would have thrown an error had the QueryStream instance been created from a ResultSet instance via the toQueryStream method. Now the call to the C layer close method is invoked directly. Updated Pool._logStats to throw an error instead of printing to the console if the pool is not valid. Added GitHub Issue and Pull Request templates. Updated installation instructions for OS X using the new Instant Client 12.1 release. Added installation instructions for AIX and Solaris x64.
Some enhancements were made to the underlying DPI data access layer. These were developed in conjunction with a non-node-oracledb consumer of DPI, but a couple of changes lay groundwork for potential, user-visible, node-oracledb enhancements: Allow SYSDBA connections Allow session tagging Allow the character set and national character set to be specified via parameters to the DPI layer Support heterogeneous pools (in addition to existing homogeneous pools) To reiterate, these are not yet exposed in node-oracledb. Resources Issues and questions about node-oracledb can be posted on GitHub. Your input helps us schedule work on the add-on. Drop us a line! node-oracledb installation instructions are here. Node-oracledb documentation is here.
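As a footnote to the metadata feature above, here is a minimal sketch (the connection variable and table are placeholders) of requesting the extended metadata for a single query via the execute() option instead of the global attribute:

connection.execute(
  "SELECT department_id, department_name FROM departments",
  [],                              // no bind values
  { extendedMetaData: true },      // ask for dbType, precision, scale, nullable, etc.
  function(err, result) {
    if (err) { console.error(err.message); return; }
    console.log(result.metaData);  // the attributes available vary with each column's database type
  });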


Instant Client

Using Oracle on OS X? Instant Client 12.1 is here

Oracle Instant Client 12.1 for OS X was just released and is now available for free download from OTN for 32-bit and 64-bit applications. Update: the bundles were re-released 14 June 2016 with a connectivity fix. Instant Client provides libraries and tools for connecting to Oracle Database. Among other uses, languages such as C, Python, PHP, Ruby, Perl and Node.js can use Instant Client for database connectivity. In addition to having Oracle 12.1 client features like auto-tuning, new in this release is an ODBC driver. The install instructions have been updated to reflect the resolution of the linking issues caused by the OS X El Capitan changes with SIP to ignore DYLD_LIBRARY_PATH in sub processes. The ~/lib location required for Instant Client 11.2 on El Capitan is no longer needed with Instant Client 12.1. Note if you are creating your own apps, you should link with -rpath. This release of Instant Client supports Mavericks, Yosemite, and El Capitan. Applications can connect to Oracle Database 10.2 or more recent. You should continue using the older 11.2 client if you need to connect to Oracle Database 9.2. Update: Official installation doc and release notes are now on the doc portal: Oracle Database Online Documentation 12c Release 1 (12.1). Questions and comments can be posted to the OTN forum for whichever component or tool you are using. General questions about Instant Client are best posted to the OCI Forum. If you are interested in running Oracle Database itself on OS X, see my earlier post The Easiest Way to Install Oracle Database on Mac OS X.


Node.js

Node-oracledb 1.9.1 with Promises and Node 6 Support is on NPM

Top features: Promise support. Node 6 support. The new node-oracledb 1.9.1 release adds Promise support, it now supports the new Node 6 (as well as previous releases), and it has some welcome stability fixes. Other nice goodies are mentioned below. Thanks to everyone who gave feedback on our 1.9.0 development branch - we couldn't have done it without you. node-oracledb 1.9.1 is now available on NPM. Installation instructions are here. Documentation is here. The changes in node-oracledb 1.9.1 since 1.8 Promise support was added. All asynchronous functions can now optionally return Promises. When asynchronous functions are passed with a callback function parameter, the familiar callback flow is used. If the callback parameter is omitted, then a Promise is returned. There are some examples in the examples directory, see promises.js and webapppromises.js. Node Promises allow a programming paradigm that many Node users find comfortable. Some users had implemented their own Promise wrappers for node-oracledb; a few had even been published on NPM. Now the official implementation makes Promises available to everyone. The default Promise library is used for Node 0.12, 4, 5 and 6. It can be easily overridden if you wish to incorporate your own implementation. This is also useful for Node 0.10 where there is no native library. A new toQueryStream() method was added for ResultSets. It lets REF CURSORS be fetched via Readable Streams. See the example refcursortoquerystream.js It can also be used with ResultSets from top-level queries, allowing them to be converted to streams. However the existing connection.queryStream() method will probably be easier to use in this scenario. Aliases pool.close() and connection.close() have been added for pool.terminate() and connection.release() respectively. This should make it easier to remember which method to use for releasing a connection, terminating a connection pool, or closing a ResultSet: you can just use close(). An experimental query Stream _close() method was added. It allows query streams to be closed without needing to fetch all the data. It is marked experimental because there isn't good information about how safe interrupting the current Node Streams implementation is. Calling _close() invokes pause(), if necessary. On Node 0.10 you may get a pause event even if you had previously paused the stream. This is because Node 0.10 doesn't support the isPaused() call. Upgraded to NAN 2.3 for Node 6. This removes the deprecation warnings that Node 6 was generating with the older NAN 2.2. Older versions of Node can still be used, of course. Mitigated some annoying, seemingly random crashes if JavaScript objects were garbage collected too early. While many well programmed apps naturally hold the relevant JavaScript objects until no longer required, other apps don't. Premature garbage collection could affect these latter scripts if they finished quickly or when under load. A similar change was made to Lob buffers used during LOB insertion. Memory 'improvements': Fixed some memory leaks when using ResultSets. Fixed a memory leak with the Pool queue timer map. Fixed memory release logic when querying LOBs and an error occurs. Removed an extra call to getRows() made by queryStream() at end-of-fetch. Error changes: Some method parameter validation checks, such as the number or types of parameters, will now throw errors synchronously instead of returning errors via the callback. Improved validation for fetchInfo usage. 
Increased the internal buffer size for Oracle Database error messages. Altered some node-oracledb NJS-xyz error message text for consistency. The test suite is no longer automatically installed when installing with npm install oracledb from NPM. The test suite was (and will be) getting bigger and bigger. It is not being used by most people, so installing it automatically is wasteful. You can still run the tests using a GitHub clone. The updated test README has instructions for this. To allow tests to be moved and run anywhere, we left the tests' require('oracledb') calls without a path prefix so you may need to set NODE_PATH as described. Fixed a symbol name redefinition warning for DATA_BLOB when compiling on Windows. Overall node-oracledb 1.9.1 is a significant milestone with the addition of Promise support, the support for Node 6, and the general resiliency changes. I'm very happy with this release and would encourage upgrading to it. What's Next? What's next? There are still lots of enhancements for node-oracledb on the wish list. Please let us know your priorities. Issues and questions about node-oracledb can be posted on GitHub. Your input helps us schedule work on the add-on. Drop us a line!
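To close, here is a minimal sketch of the new Promise style (connection details are placeholders); omitting the callback makes each call return a Promise:

var oracledb = require('oracledb');

oracledb.getConnection({ user: "hr", password: "welcome", connectString: "localhost/orcl" })
  .then(function(connection) {
    return connection.execute("SELECT sysdate FROM dual")
      .then(function(result) {
        console.log(result.rows);
        return connection.release();   // also returns a Promise when no callback is given
      })
      .catch(function(err) {
        console.error(err.message);
        return connection.release();
      });
  })
  .catch(function(err) { console.error(err.message); });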


Node.js

Getting a C++11 compiler for Node 4, 5 and 6 on Oracle Linux 6

A newer compiler is needed on Oracle Linux 6 when you want to use add-ons like node-oracledb with Node 4 or later. This is because add-ons for those versions need to be built with a C++11 compatible compiler. The default compiler on OL 6 doesn't have this support. OL 7 does have such a compiler, so these instructions are not needed for that version. For OL 6 the easiest way to get a new compiler is from the Software Collection Library (SCL). You enable the software collection yum channel, run a yum install command, and then the compiler is immediately available to use. Detailed installation SCL instructions are in the manual. The steps below show how to install node-oracledb on Oracle Linux 6 for Node.js 4 or later. Enabling the Software Collection Library If you are using yum.oracle.com (formerly known as public-yum.oracle.com) then edit /etc/yum.repos.d/public-yum-ol6.repo and enable the ol6_software_collections channel: [ol6_software_collections] name=Software Collection Library release 1.2 packages for Oracle Linux 6 (x86_64) baseurl=http://yum.oracle.com/repo/OracleLinux/OL6/SoftwareCollections12/x86_64/ gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle gpgcheck=1 enabled=1 If necessary, you can get the latest channel list from http://yum.oracle.com/public-yum-ol6.repo and merge any updates from it into your existing /etc/yum.repos.d/public-yum-ol6.repo file. Alternatively, if your machine has a ULN support subscription, you can subscribe to the Oracle Software Collections 1.2 for Oracle Linux 6 channel in the Manage Subscription page on linux.oracle.com. Installing the Updated Compiler Once the channel is enabled, install the updated compiler with: yum install scl-utils devtoolset-3 This will install a number of packages that comprise the complete, updated tool set. Installing node-oracledb Installing node-oracledb on Node 4 (or later) is the same as in the install instructions, but using the new compiler. The Oracle Linux manual chapter Using the Software Collection Version of a Command shows various ways to enable the dev toolset. In summary, to install node-oracledb on Node 4 or later using Oracle Linux 6, first install an Oracle client such as Instant Client. If you have anything except the Instant Client RPM packages, tell the installer where the libraries and header files are located, for example: export OCI_LIB_DIR=$HOME/instantclient export OCI_INC_DIR=$HOME/instantclient/sdk/include If you are behind a firewall, set your proxy: export http_proxy=http://my.proxy.example.com:80/ In my development environments I often find some cleanup helps: which npm && rm -rf $(npm root)/oracledb $(npm root)/nan $HOME/.node-gyp $HOME/.npm \ && npm cache clean Now node-oracledb can be installed using the newer compiler: scl enable devtoolset-3 -- npm install oracledb You can also open a shell and use that to install: scl enable devtoolset-3 bash npm install oracledb Using Node Now you can use Node: $ node version.js node.js version: v4.4.3 node-oracledb version: 10900 node-oracledb text format: 1.9.0 oracle client library version: 1201000200 oracle client library text format: 12.1.0.2.0 oracle database version: 1201000200 oracle database text format: 12.1.0.2.0 $ cat /etc/oracle-release oracle linux server release 6.7


Node.js

node-oracledb 1.9.0-Dev Branch with Promises is on GitHub

Top features: Promise support. node-oracledb 1.9.0-Development is now available only as a development-only branch on GitHub. It adds Promise support and some other goodies mentioned below. The plan is to incorporate any user feedback, stabilize the features, improve documentation, improve test coverage, and run stress tests. Once this is all done, then a production release to npm will be made. Personally I think this will take 2-4 weeks, but it is totally dependent on what is uncovered by you, the user. Since this is a development branch, features are subject to change. Use GitHub Issue #410 to ask design questions and discuss the branch. Install by setting OCI_LIB_DIR and OCI_INC_DIR as normal, and running npm install oracle/node-oracledb.git#dev-1.9. Or install from GitHub by cloning the repository, checking out the dev-1.9 branch, setting OCI_LIB_DIR and OCI_INC_DIR, and installing with npm install. Anyone is welcome to report test results, contribute new tests or update documentation (or code!) to help us get a production release out faster. You do need to have your OCA accepted first before we can look at any pull requests. As you can see, most of the doc is ready, and there are a solid number of tests for new functionality. You can help us by testing your own favorite Promise library, as we are only planning on testing the default Promise implementation in Node 0.12, 4 and 5. Also check out how connections are handled as you may like a different style - let us know. In node-oracledb 1.9-dev: Promise support was added. All asynchronous functions can now return promises. The default Promise library is used for Node 0.12, 4 and 5. It can be easily overridden if you wish to incorporate your own implementation. The current implementation typically requires two promise chains. Let us know what you think. There are alternatives that could be made. What do you want? A new toQueryStream() method was added. It is for ResultSets. It lets REF CURSORS be transformed into Readable Streams. It can also be used to convert ResultSets from top-level queries to streams, however the existing connection.queryStream() method will probably be easier to use for these queries. An experimental query Stream _close() method was added. It allows query streams to be closed without needing to fetch all the data. It is marked experimental because there isn't good information about whether the current Node Streams implementation really allows interruptions. Open question: Should _close() invoke pause() internally? Aliases pool.close() and connection.close() have been added for pool.terminate() and connection.release() respectively. This should make it easier to remember which method to use for releasing a connection, terminating a connection pool, or closing a ResultSet: you can just use close(). Some method parameter validation checks, such as the number or types of parameters, will now throw errors synchronously instead of returning errors via the callback. Removed an extra call to getRows() made by queryStream() at end-of-fetch. Some annoying, seemingly random crashes caused by connections being garbage collected while still in use should no longer occur. These were more common in smaller scripts that finished quickly, but could be seen in systems under load. Resources Issues and questions about node-oracledb can be posted on GitHub. We value your input to help prioritize work on the add-on. Drop us a line! node-oracledb installation instructions are here. Node-oracledb documentation is here.
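As one example of the new toQueryStream() method, here is a minimal sketch (hedged: the PL/SQL procedure get_emp_rs, its parameters, and the connection variable are hypothetical) of turning a REF CURSOR OUT bind into a Readable Stream:

connection.execute(
  "BEGIN get_emp_rs(:maxid, :cursor); END;",
  {
    maxid:  110,
    cursor: { type: oracledb.CURSOR, dir: oracledb.BIND_OUT }
  },
  function(err, result) {
    if (err) { console.error(err.message); return; }
    var stream = result.outBinds.cursor.toQueryStream();
    stream.on('error', function(err) { console.error(err); });
    stream.on('data', function(row) { console.log(row); });
    stream.on('end', function() {
      connection.release(function(err) { if (err) console.error(err.message); });
    });
  });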


Node.js

Node-oracledb 1.8 has a streaming mode for queries (Node.js add-on for Oracle Database)

Node-oracledb 1.8.0, the Node.js add-on for Oracle Database, is on NPM. Top new changes: New Query Result Streaming. Use DRCP connect strings only with a connection pool. I want to start by saying thanks to all contributors past and current! It is the community that defines the product. Query Result Streaming Node Streams can now be used for queries after a pull request from Sagie Gur-Ari was merged. A new connection.queryStream() method returns a Readable Stream. Of course, if you prefer callbacks, the existing ResultSet feature can continue to be used. The new query stream feature is implemented as a wrapper around the ResultSet Class. In particular it uses resultset.getRows() to fetch a subset of data, each row of which will generate a data event. The only reason to understand this is that getRows() takes a parameter to specify how many rows to fetch. We had some (well, a lot of) debate about how to set this value and keep the API simple. For the moment, the value of oracledb.maxRows is used. Note the value does not affect how many rows are returned by queryStream() events because getRows() will be repeatedly called when more rows are needed. Instead, this parameter is used to tune stream performance. In the near future, when PR 361 is reviewed, we may introduce a specialized tuning parameter. We also want to look at ways of interrupting the streams early. But, in node-oracledb 1.8, make sure to read to the end of the query to avoid leaking a cursor. The ResultSet close() is executed internally; you do not need to call it to release ResultSet resources. An example of query streaming is: var stream = connection.queryStream( 'SELECT first_name, last_name FROM employees ORDER BY employee_id' ); stream.on('error', function (error) { console.error(error); return; }); stream.on('metadata', function (metadata) { console.log(metadata); }); stream.on('data', function (data) { console.log(data); }); stream.on('end', function () { connection.release( function(err) {if (err) { console.error(err.message);} }); }); There is a runnable example in examples/selectstream.js. Other useful examples are in the test file test/stream1.js. Use DRCP Connect Strings Only With a Connection Pool In node-oracledb 1.8 you must now use a connection pool if your connect string requests a DRCP connection. Previously this was just a best practice. Now it is enforced. Connect strings that request DRCP connections look either like: connectString : "mymachine/mydbservice:pooled" or connectString : "SALES" where the SALES connect identifier maps to a tnsnames.ora entry specifying SERVER=POOLED, for example: SALES=(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp) (HOST=mymachine) (PORT=1521))(CONNECT_DATA=(SERVICE_NAME=mydbservice) (SERVER=POOLED))) If you try oracledb.getConnection(...) you will get an error ORA-56609: Usage not supported with DRCP. Instead use a connection pool, see node-oracledb Connection Pooling documentation: oracledb.createPool ( { user : "hr", password : "welcome", connectString : "mymachine/mydbservice:pooled" }, function(err, pool) { pool.getConnection (function(err, connection){. . . // use connection}); }); In the unlikely case where the Node process is short-lived and you really, really just want a single connection, create a pool with a single session; the createPool() option attributes can be: poolMax: 1, poolMin: 1, poolIncrement: 0 Millisecond Precisions A pull request from Antonio Bustos has helped make some tests more portable by removing some time ambiguity.
The Oracle DATE data type does not contain milliseconds. If an application inserts a JavaScript date with milliseconds, the DATE tests will now only compare the non-millisecond date components to validate results. Windows Debug Builds Kubo Takehiro (who is the maintainer of the popular Ruby ruby-oci8 extension) spotted a hierarchy problem with node-oracledb Windows Debug build options. The binding.gyp entry has now been fixed. Other Changes The driver name is now set to "node-oracledb : 1.8.0". This is visible to DBAs, for example in the V$SESSION_CONNECT_INFO view. It lets DBAs see what apps, and what versions, are connecting to the database. The pool queue stats now show the start time of the pool. A couple of parameter check bugs were also fixed. See CHANGELOG for details. Resources Issues and questions about node-oracledb can be posted on GitHub. We value your input to help prioritize work on the add-on. Drop us a line! node-oracledb installation instructions are here. Node-oracledb documentation is here.


Instant Client

Using SQL*Plus Instant Client 11.2 on OS X El Capitan

The installation steps for using SQL*Plus 11.2 (Instant Client) and other OCI-based applications have necessarily changed since OS X El Capitan broke DYLD_LIBRARY_PATH. The updated instructions are given on the Instant Client Downloads for Mac OS X (Intel x86) page. Update: Instant Client 12.1 is out and installation is easier. Check out the above link for instructions Here are the 11.2 instructions repeated. The steps also work on earlier versions of OS X: Download the desired Instant Client ZIP files from OTN. All installations require the Basic or Basic Lite package. Open a terminal and unzip the packages into a single directory such as "~/instantclient_11_2". For example, to use SQL*Plus: cd ~ unzip instantclient-basic-macos.x64-11.2.0.4.0.zip unzip instantclient-sqlplus-macos.x64-11.2.0.4.0.zip Create the appropriate libclntsh.dylib link for the version of Instant Client. For example: cd ~/instantclient_11_2 ln -s libclntsh.dylib.11.1 libclntsh.dylib Note: OCCI programs will additionally need: ln -s libocci.dylib.11.1 libocci.dylib Add links to "~/lib" for required Basic package libraries. For example, to use OCI programs (including SQL*Plus, Python's cx_Oracle, PHP's OCI8, Node.js's node-oracledb, and Ruby's ruby-oci8 driver): mkdir ~/lib ln -s ~/instantclient_11_2/libclntsh.dylib.11.1 ~/lib/ ln -s ~/instantclient_11_2/{libnnz11.dylib,libociei.dylib} ~/lib/ To run SQL*Plus, add its libraries to "~/lib", and update PATH. For example: ln -s ~/instantclient_11_2/{libsqlplus.dylib,libsqlplusic.dylib} ~/lib/ export PATH=~/instantclient_11_2:$PATH


Node.js

node-oracledb 1.7.0 has a connection pool queue (Node.js add-on for Oracle Database)

Node-oracledb 1.7.0, the Node.js add-on for Oracle Database, is on NPM. Top features: a new connection pool queue to make apps more resilient, and "Bind by position" syntax for PL/SQL Index-by array binds. This release has a couple of interesting changes as well as some small bind fixes. A few reported build warnings with some compilers were also squashed. Extended PL/SQL Index-by Array Bind Syntax To start with, a followup PR from @doberkofler completes his PL/SQL Index-by array binding support project. In node-oracledb 1.7 he has added "bind by position" syntax to the already existing "bind by name" support. Thanks Dieter! The "bind by position" syntax looks like: connection.execute( "BEGIN mypkg.myinproc(:id, :vals); END;", [ 1234, { type: oracledb.NUMBER, dir: oracledb.BIND_IN, val: [1, 2, 23, 4, 10] } ], function (err) { . . . }); Personally I'd recommend using bind by name for clarity, but this PR makes the feature congruent with binding scalar values, which is always a good thing. Documentation is at PL/SQL Collection Associative Array (Index-by) Bind Parameters. New Transparent JavaScript Wrapper for Existing Classes The other major change in 1.7 is a new JavaScript wrapper over the current node-oracledb C++ API implementation, courtesy of some community discussion and the direction that users seemed to have been heading in: creating similar wrappers. It was also the result of some 'above and beyond' overtime from Dan McGhan who did the project. This wrapper should be transparent to most users. It gives a framework that will make it easier to extend node-oracledb in a consistent way and also let developers who know JavaScript better than C++ contribute to node-oracledb. New Connection Pool Queue Enabled by Default The layer has let Dan add his first new user feature: a request queue for connection pooling. It is enabled by a new Boolean pool attribute queueRequests. If a pool.getConnection() request is made but there are no free connections (aka sessions) in the pool, the request will now be queued until an in-use connection is released back to the pool. At this time the first request in the queue will be dequeued, and the underlying C++ implementation of pool.getConnection() will be called to return the now available connection to the waiting requester. A second new pool attribute queueTimeout uses setTimeout to automatically dequeue and return an error for any request that has been waiting in the queue too long. The default value is 60000 milliseconds, i.e. 60 seconds. In normal cases, when requests are dequeued because a connection does become available, the timer is stopped before the underlying C++ layer gets called to return the connection. The pool queue is enabled by default. If it is turned off, you get pre-1.7 behavior. For example if more requests are concurrently thrown at an app than the poolMax value, then some of the pool.getConnection() calls would likely return an error ORA-24418: Cannot open further sessions. When enabled, the new queue nicely stops this error occurring and lets apps be more resilient. The pool option attribute _enableStats turns on lightweight gathering of basic pool and queue statistics. It is false by default. If it is enabled, applications can output stats to the console by calling pool._logStats() whenever needed. I think it will be wise to monitor the queue statistics to make sure your pool configuration is suitable for the load. 
You don't want the queue to be an invisible bottleneck when too many pool.getConnection() requests end up in the queue for too long. Statistics and the API may change in future, so the attribute and method have an underscore prefix to indicate they are internal. Connection Queue Example To look at an example, I used ab to throw some load at an app based on examples/webapp.js. I used a load concurrency of 25 parallel requests. The pool had a maximum of 20 sessions. The extra load was nicely handled by the connection queue without the application experiencing any connection failures. I'd modified the app to check for a particular URL and dump statistics on request: . . . var hs = http.createServer ( function(request, response) { var urlparts = request.url.split("/"); var arg = urlparts[1]; if (arg === 'stats') { pool._logStats(); } . . . Here is a snapshot of the output from _logStats() at one point during the test:
Pool statistics:
...total connection requests: 26624147
...total requests enqueued: 5821874
...total requests dequeued: 5821874
...total requests failed: 0
...total request timeouts: 0
...max queue length: 6
...sum of time in queue (milliseconds): 13920717
...min time in queue (milliseconds): 0
...max time in queue (milliseconds): 1506
...avg time in queue (milliseconds): 2
...pool connections in use: 12
...pool connections open: 20
Related pool attributes:
...queueRequests: true
...queueTimeout (milliseconds): 0
...poolMin: 10
...poolMax: 20
...poolIncrement: 10
...poolTimeout: 0
...stmtCacheSize: 30
Related environment variables:
...process.env.UV_THREADPOOL_SIZE: undefined
The connection pool was semi-arbitrarily configured for testing. It started out with 10 sessions open (poolMin) and as soon as they were in use, the pool would have grown by another 10 sessions (poolIncrement) to the maximum of 20 (poolMax). What the stats show is that not all pool.getConnection() requests could get a pooled connection immediately. About 20% of requests ended up waiting in the queue. The connection pool poolMax is smaller than optimal for this load. The queue was never large; it never had more than 6 requests in it. This is within expectations since there are at least 5 more concurrent requests at a time than there are connections available in the pool. If this were a real app, I might decide to increase poolMax so no pool.getConnection() call ever waited. (I might also want to set poolTimeout so that when the pool was quiet, it could shrink, freeing up DB resources.) However the average wait time of 2 milliseconds is small. If I don't have DB resources to handle the extra sessions from a bigger pool, I might decide that a 2 millisecond wait is OK and that the pool size is fine. At least one connection spent 1.5 seconds in the queue. Since I know my test infrastructure I'm guessing this was when the pool ramped up in size and my small, loaded DB took some time to create the second set of 10 sessions. Maybe I should experiment with a smaller poolIncrement or bigger poolMin? Another important variable shown in the stats is UV_THREADPOOL_SIZE. I'd not set it so there were the default four worker threads in the Node process. Blindly increasing poolMax may not always help throughput. If DB operations take some time, you might find all threads get blocked waiting for their respective DB response. Increasing UV_THREADPOOL_SIZE may help improve application throughput. The best settings for pool configuration, UV_THREADPOOL_SIZE, and any DRCP pool size will depend on your application and environment.
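Before the statistics reference below, here is a minimal sketch (connection details are placeholders) of a pool created with the queue settings and statistics gathering discussed above:

oracledb.createPool(
  {
    user          : "hr",
    password      : "welcome",
    connectString : "mymachine/mydbservice",
    poolMin       : 10,
    poolMax       : 20,
    poolIncrement : 10,
    queueRequests : true,     // the default in node-oracledb 1.7
    queueTimeout  : 10000,    // fail a queued getConnection() after 10 seconds
    _enableStats  : true      // internal/experimental statistics gathering
  },
  function(err, pool) {
    if (err) { console.error(err.message); return; }
    // ... later, for example from a monitoring URL handler:
    pool._logStats();
  });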
Connection Pool Queue Statistics The table below shows the node-oracledb 1.7 pool statistics descriptions. These stats and the APIs to enable and log them may change in future versions of node-oracledb. I look forward to getting some PRs, for example to add a standard logging capability which the stats generation can be part of.
Connection Pool Metric: Description
total connection requests: Number of pool.getConnection() calls made for this pool.
total requests enqueued: Number of connection requests that couldn't be satisfied because every session in the pool was already being used, and so had to be queued waiting for a session to be returned to the pool.
total requests dequeued: Number of connection requests that were removed from the queue when a free connection became available. This is triggered when the app has finished with a connection and calls release() to return it to the pool.
total requests failed: Number of connection calls that invoked the underlying C++ pool.getConnection() callback with an error state. Does not include queue request timeout errors.
total request timeouts: Number of connection requests that were waiting in the queue but exceeded the queueTimeout setting. The timeout is triggered with a JavaScript setTimeout call.
max queue length: Maximum number of connection requests that were ever waiting at one time.
sum of time in queue: Total sum of time that connection requests have been waiting in the queue.
min time in queue: Smallest amount of time that any request waited in the queue.
max time in queue: Longest amount of time that any request waited in the queue.
avg time in queue: Derived from the sum of time value divided by the number of requests enqueued.
pool connections in use: A metric returned by the underlying Oracle C client session pool implementation. It is the number of currently active connections in the connection pool.
pool connections open: Also returned by the underlying library. It shows the number of currently open sessions in the underlying connection pool.
Note that the sum of time in queue, the min time in queue and the max time in queue values are calculated when queued requests are removed from the queue, so they don't record the amount of time for requests still waiting in the queue. Resources Issues and questions about node-oracledb can be posted on GitHub. We value your input to help prioritize work on the add-on. Drop us a line! node-oracledb installation instructions are here. Node-oracledb documentation is here.


Node.js

Node-oracledb: Avoiding "ORA-01000: maximum open cursors exceeded"

Developers starting out with Node have to get to grips with the 'different' programming style of JavaScript that seems to cause methods to be called when least expected! While you are still in the initial hacking-around-with-node-oracledb phase you may sometimes encounter the error ORA-01000: maximum open cursors exceeded. A cursor is "a handle for the session-specific private SQL area that holds a parsed SQL statement and other processing information". Here are things to do when you see an ORA-1000: Avoid having too many incompletely processed statements open at one time: Close ResultSets before releasing the connection. If cursors are opened with dbms_sql.open_cursor() in a PL/SQL block, close them before the block returns - except for REF CURSORS being passed back to node-oracledb. (And if a future node-oracledb version supports Oracle Database 12c Implicit Result Sets, these cursors should likewise not be closed in the PL/SQL block) Make sure your application is handling connections and statements in the order you expect. Choose the appropriate Statement Cache size. Node-oracledb has a statement cache per connection. When node-oracledb internally releases a statement it will be put into the statement cache of that connection, but its cursor will remain open. This makes statement re-execution very efficient. The cache size is settable with the stmtCacheSize attribute. The appropriate statement cache size you choose will depend on your knowledge of the locality of the statements, and of the resources available to the application: are statements re-executed; will they still be in the cache when they get executed; how many statements do you want to be cached? In rare cases when statements are not re-executed, or are likely not to be in the cache, you might even want to disable the cache to eliminate its management overheads. Incorrectly sizing the statement cache will reduce application efficiency. Luckily with Oracle 12.1, the cache can be automatically tuned using an oraaccess.xml file. More information on node-oracledb statement caching is here. Don't forget to use bind variables, otherwise each variant of the statement will have its own statement cache entry and cursor. With appropriate binding, only one entry and cursor will be needed. Set the database's open_cursors parameter appropriately. This parameter specifies the maximum number of cursors that each "session" (i.e. each node-oracledb connection) can use. When a connection exceeds the value, the ORA-1000 error is thrown. Documentation on open_cursors is here. Along with a cursor per entry in the connection's statement cache, any new statements that a connection is currently executing, or ResultSets that haven't been released (in neither situation are these yet cached), will also consume a cursor. Make sure that open_cursors is large enough to accommodate the maximum open cursors any connection may have. The upper bound required is stmtCacheSize + the maximum number of executing statements in a connection. Remember this is all per connection. Also, cache management happens when statements are internally released. The majority of your connections may use fewer than open_cursors cursors, but if one connection is at the limit and it then tries to execute a new statement, that connection will get ORA-1000: maximum open cursors exceeded.
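Putting two of those points together, here is a minimal sketch (the table, bind value and the conn variable are placeholders) of using a bind variable and closing the ResultSet before releasing the connection:

conn.execute(
  "SELECT first_name, last_name FROM employees WHERE department_id = :did",
  { did: 50 },                 // a bind variable, so one cached statement (and cursor) covers every value
  { resultSet: true },
  function(err, result) {
    if (err) { console.error(err.message); return; }
    var rs = result.resultSet;
    rs.getRows(100, function(err, rows) {
      if (err) { console.error(err.message); }
      else { console.log(rows); }
      rs.close(function(err) {              // close the ResultSet first...
        if (err) { console.error(err.message); }
        conn.release(function(err) {        // ...then release the connection
          if (err) { console.error(err.message); }
        });
      });
    });
  });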
