
The leading edge of scripting languages and Oracle Database brought to you by the Data Access Development team

Recent Posts

Python cx_Oracle 7 Introduces SODA Document Storage

cx_Oracle 7.0, the extremely popular Oracle Database interface for Python, is now Production on PyPI. cx_Oracle is an open source package that implements the Python Database API specification, with many additions to support advanced Oracle features.

Anthony Tuininga has just released cx_Oracle 7.0. This release brings some key technologies and new features to the Python developer:

- Oracle Simple Document Access (SODA) support - an exciting addition to the standard relational access model is the new set of APIs for Oracle SODA. See below.

- Added Connection.callTimeout to support call timeouts when cx_Oracle is using Oracle Client 18.1 and higher. This is a useful backstop to prevent out-of-control SQL and PL/SQL statement execution.

  The main code layer beneath cx_Oracle's implementation is Oracle Call Interface (OCI). This API handles all the network connectivity to Oracle Database. For each OCI function executed by cx_Oracle, zero or more 'round-trips' to the database can occur - calling the database and getting a response back.

  The callTimeout value applies to each round-trip individually, not to the sum of all round-trips. Time spent processing in cx_Oracle before or after the completion of each round-trip is not counted.

  - If the time from the start of any one round-trip to the completion of that same round-trip exceeds callTimeout milliseconds, then the operation is halted and an error is returned.
  - In the case where a cx_Oracle operation requires more than one round-trip and each round-trip takes less than callTimeout milliseconds, then no timeout will occur, even if the sum of all round-trip calls exceeds callTimeout.
  - If no round-trip is required, the operation will never be interrupted.

  After a timeout is triggered, cx_Oracle attempts to clean up the internal connection state. The cleanup is allowed to take another callTimeout milliseconds. If the cleanup was successful, a DPI-1067 error is returned and the application can continue to use the connection. For small values of callTimeout, the connection cleanup may not complete successfully within the additional callTimeout period. In this case an ORA-3114 is returned and the connection is no longer usable: it should be closed.

- Added support for closing a session pool via the function SessionPool.close(). This is useful for being 'nice' to the database and making sure that database sessions are not left dangling until the database cleans them up. In particular, the optional 'force' argument is handy when you need to suddenly halt a Python application and immediately free all the sessions in the database.

- Added support for getting the contents of a SQL collection object as a dictionary, where the keys are the indices of the collection and the values are the elements of the collection. See the function Object.asdict().

- On Windows, cx_Oracle will now attempt to load the Oracle client libraries from the same directory as the cx_Oracle module before doing the standard Windows library location search, e.g. the directories in the PATH environment variable. This new feature could be useful if you are bundling up applications and want to include the Oracle Instant Client. By putting the client in the same directory as the cx_Oracle library there is no need to set PATH, no need to worry about users changing PATH, and no need to worry about having multiple versions of Oracle client libraries in PATH.

- A change in cx_Oracle 7 is that when a DML RETURNING statement is executed, variables bound to it will return an array when calling Variable.getvalue(). Attempts to set cx_Oracle.__future__.dml_ret_array_val are now ignored.

- When a connection is used as a context manager, the connection is now closed when the block ends. Attempts to set cx_Oracle.__future__.ctx_mgr_close are now ignored.

The full release notes show the other new features and changes.
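The per-round-trip accounting of callTimeout can be illustrated without a database. This pure-Python sketch (the would_time_out() helper and the sample durations are mine, not cx_Oracle API) applies the rule described above: each round-trip is timed individually, so a call only times out when a single round-trip exceeds the limit, never because the total does:

```python
def would_time_out(round_trip_ms, call_timeout_ms):
    """Return True if any single round-trip exceeds the timeout,
    mirroring how callTimeout is applied per round-trip."""
    return any(rt > call_timeout_ms for rt in round_trip_ms)

# Three round-trips of 400 ms each: the total is 1200 ms, but no single
# round-trip exceeds a 500 ms callTimeout, so no timeout occurs.
print(would_time_out([400, 400, 400], 500))   # False

# One 600 ms round-trip exceeds the limit, so the call is interrupted.
print(would_time_out([100, 600], 500))        # True

# No round-trips at all: the operation is never interrupted.
print(would_time_out([], 500))                # False
```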
Review this list before you upgrade:

```
python -m pip install cx_Oracle --upgrade
```

SODA in Python cx_Oracle

Oracle Simple Document Access (SODA) support was originally introduced in Java and recently exposed to C. Python support for SODA is now available in cx_Oracle 7 when using Oracle Client 18.3 libraries and connecting to Oracle Database 18.1 or higher.

SODA is all hot and new and under rapid development. For this cx_Oracle release we're labelling SODA support as a 'preview'. With a future version of the Oracle Client libraries this will change.

SODA is typically used to store JSON documents in Oracle Database, but has the flexibility to let you store other types of content. Once a DBA has granted you the SODA_APP privilege, you can simply create collections and store documents in them. Some basic examples are:

```
# Create the parent object for SODA
soda = connection.getSodaDatabase()

# Create a new SODA collection
# This will open an existing collection, if the name is already in use.
collection = soda.createCollection("mycollection")

# Insert a document
# A system generated key is created by default.
content = {'name': 'Matilda', 'address': {'city': 'Melbourne'}}
doc = collection.insertOneAndGet(content)
key = doc.key
print('The key of the new SODA document is: ', key)
```

You can then get documents back via a key lookup, or by a search.
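If you want to experiment with the shape of these collection calls before you have an 18c database to hand, the flavour of the API can be mimicked in plain Python. The toy classes below are mine, not part of cx_Oracle; they only ape the method names and key behaviour used above:

```python
class ToyDoc:
    """Stand-in for a SodaDocument: a key plus content."""
    def __init__(self, key, content):
        self.key = key
        self._content = content

    def getContent(self):
        return self._content


class ToyCollection:
    """A dict-backed stand-in aping the SODA collection calls used above."""
    def __init__(self):
        self._docs = {}
        self._next_key = 1

    def insertOneAndGet(self, content):
        # Like SODA, generate a key automatically on insert.
        key = str(self._next_key)
        self._next_key += 1
        self._docs[key] = content
        return ToyDoc(key, content)

    def findByKey(self, key):
        content = self._docs.get(key)
        return ToyDoc(key, content) if content is not None else None


collection = ToyCollection()
doc = collection.insertOneAndGet({'name': 'Matilda', 'address': {'city': 'Melbourne'}})
print('The key of the new document is:', doc.key)           # 1
print(collection.findByKey(doc.key).getContent()['name'])   # Matilda
```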
A key lookup is straightforward:

```
# Fetch the document back
doc = collection.find().key(key).getOne()  # A SodaDocument
content = doc.getContent()                 # A dictionary
print('Retrieved SODA document dictionary is:')
print(content)
```

For documents that can be converted to JSON you can alternatively get them as a string:

```
content = doc.getContentAsString()         # A JSON string
print('Retrieved SODA document string is:')
print(content)
```

The find() method is an operation builder, with methods that allow progressive filtering criteria to be set, limiting the set of documents that are then operated on by a terminal method such as getOne(), getDocuments() and count(). With JSON documents, a complete filtering specification language can be used to pattern match documents. A brief example is:

```
# Find all documents with names like 'Ma%'
print("Names matching 'Ma%'")
documents = collection.find().filter({'name': {'$like': 'Ma%'}}).getDocuments()
for d in documents:
    content = d.getContent()
    print(content["name"])
```

A runnable example is in SodaBasic.py. Check out the cx_Oracle SODA manual and the Introduction to Simple Oracle Document Access (SODA) manual to see its power and simplicity. Check it out!

PS the photo is one I took last weekend on a beach in Australia, which is coming into Spring. I thought you'd like it better than corporate clip art.

cx_Oracle References

Home page: oracle.github.io/python-cx_Oracle/index.html
Installation instructions: cx-oracle.readthedocs.io/en/latest/installation.html
Documentation: cx-oracle.readthedocs.io/en/latest/index.html
Release Notes: cx-oracle.readthedocs.io/en/latest/releasenotes.html
Source Code Repository: github.com/oracle/python-cx_Oracle


node.js

So you want to use JSON in Oracle Database with Node.js?

The JavaScript JSON.parse() and JSON.stringify() methods make it easy to work with JavaScript objects in Node.js and store them in Oracle Database using the node-oracledb module. I'll start with some examples showing a simple, naive implementation which you can use with all versions of Oracle Database. Then I'll go on to show some of the great JSON functionality introduced in Oracle Database 12.1.0.2. The examples below use the async/await syntax available in Node 7.6, but they can be rewritten to use promises or callbacks if you have an older version of Node.js.

Storing JSON as character data in Oracle Database 11.2

At the simplest, you can store JSON as character strings, such as in the column C of MYTAB:

```
CREATE TABLE mytab (k NUMBER, c CLOB);
```

Using a CLOB means we don't need to worry about the length restrictions of a VARCHAR2. A JavaScript object like myContent can easily be inserted into Oracle Database with the node-oracledb module by stringifying it:

```
const oracledb = require('oracledb');

let connection, myContent, json, result;

async function run() {
  try {
    connection = await oracledb.getConnection(
      {user: "hr", password: "welcome", connectString: "localhost/orclpdb"});

    myContent = {name: "Sally", address: {city: "Melbourne"}};
    json = JSON.stringify(myContent);

    result = await connection.execute(
      'insert into mytab (k, c) values (:kbv, :cbv)',
      { kbv: 1, cbv: json });
    console.log('Rows inserted: ' + result.rowsAffected);
  } catch (err) {
    console.error(err);
  } finally {
    if (connection) {
      try {
        await connection.close();
      } catch (err) {
        console.error(err);
      }
    }
  }
}

run();
```

If you are just inserting one record you may want to autocommit, but make sure you don't unnecessarily commit, or break transactional consistency by committing a partial set of data:

```
myContent = {name: "Sally", address: {city: "Melbourne"}};
json = JSON.stringify(myContent);

result = await connection.execute(
  'insert into mytab (k, c) values (:kbv, :cbv)',
  { kbv: 1, cbv: json },
  { autoCommit: true });
console.log('Rows inserted: ' + result.rowsAffected);
```

The output is:

```
Rows inserted: 1
```

To retrieve the JSON content you have to use a SQL query. This is fine when you only need to look up records by their keys:

```
result = await connection.execute(
  'select c from mytab where k = :kbv',
  { kbv: 1 },  // the key to find
  { fetchInfo: {"C": {type: oracledb.STRING } }});
if (result.rows.length) {
  js = JSON.parse(result.rows[0]);
  console.log('Name is: ' + js.name);
  console.log('City is: ' + js.address.city);
} else {
  console.log('No rows fetched');
}
```

The fetchInfo clause is used to return the CLOB as a string. This is simpler and generally faster than the default, streamed access method for LOBs. (Streaming is great for huge data streams such as videos.) The JSON.parse() call converts the JSON string into a JavaScript object so fields can be accessed like 'js.address.city'. Output is:

```
Name is: Sally
City is: Melbourne
```

Code gets trickier if you need to match JSON keys in the query. You need to write your own matching functionality using LOB methods like dbms_lob.instr():

```
result = await connection.execute(
  'select c from mytab where dbms_lob.instr(c, \'"name":"\' || :cbv || \'"\') > 0',
  { cbv: 'Sally' },
  { fetchInfo: {"C": {type: oracledb.STRING } }});
if (result.rows.length) {
  js = JSON.parse(result.rows[0]);
  console.log('Name is: ' + js.name);
  console.log('City is: ' + js.address.city);
} else {
  console.log('No rows fetched');
}
```

You can see this could be slow to execute, error prone to do correctly, and very hard to work with when the JSON is highly nested. But there is a solution . . .

Oracle Database 12c JSON

With Oracle 12.1.0.2 onward you can take advantage of Oracle's JSON functionality. Data is stored as VARCHAR2 or LOB so the node-oracledb code is similar to the naive storage solution above.
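The fragility of that substring-matching approach can be seen without a database: a plain text search of the serialized JSON breaks as soon as the formatting changes. This standalone sketch (pure Node.js, no Oracle access; the matchNaive helper is mine, not any library API) mimics the pattern the dbms_lob.instr() query relies on:

```javascript
// Naive matching searches the raw JSON text for '"name":"<value>"',
// which is exactly what the dbms_lob.instr() query does on the CLOB.
function matchNaive(jsonString, value) {
  return jsonString.indexOf('"name":"' + value + '"') > -1;
}

const doc = {name: "Sally", address: {city: "Melbourne"}};

// JSON.stringify with no options produces compact output, so the match works...
const compact = JSON.stringify(doc);
console.log(matchNaive(compact, 'Sally'));   // true

// ...but the very same document pretty-printed no longer matches,
// because a space now separates the key and the value.
const pretty = JSON.stringify(doc, null, 2);
console.log(matchNaive(pretty, 'Sally'));    // false

// Parsing and comparing the field is robust regardless of formatting.
console.log(JSON.parse(pretty).name === 'Sally');  // true
```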
However, in the database, extensive JSON functionality provides tools for data validation, indexing and matching, for working with GeoJSON, and even for working with relational data. Check the JSON Developer's Guide for more information. You may also be interested in some of the JSON team's blog posts.

To start with, when you create a table, you can specify that a column should be validated so it can contain only JSON:

```
c CLOB CHECK (c IS JSON)) LOB (c) STORE AS (CACHE)
```

In this example I also take advantage of Oracle 12c's 'autoincrement' feature called 'identity columns'. This automatically creates a monotonically increasing sequence number for the key. The complete CREATE TABLE statement used for the following examples is:

```
CREATE TABLE myjsontab (
  k NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY(START WITH 1),
  c CLOB CHECK (c IS JSON))
LOB (c) STORE AS (CACHE);
```

Strictly speaking, since I know my application will insert valid JSON, I could have improved database performance by creating the table without the CHECK (c IS JSON) clause. However, if you don't know where your data is coming from, letting the database do validation is wise.

Inserting JavaScript object data uses the same stringification as the previous section. Since we don't need to supply a key now, we can use a DML RETURNING clause to get the new key's autoincremented value:

```
myContent = {name: "Sally", address: {city: "Melbourne"}};
json = JSON.stringify(myContent);

result = await connection.execute(
  'insert into myjsontab (c) values (:cbv) returning k into :kbv',
  { cbv: json,
    kbv: { type: oracledb.NUMBER, dir: oracledb.BIND_OUT } },
  { autoCommit: true });
console.log('Data key is: ' + result.outBinds.kbv);
```

This inserts the data and returns the key of the new record. The output is:

```
Data key is: 1
```

To extract data by the key, a standard SQL query can be used, identical to the naive CLOB implementation previously shown.
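If you do drop the CHECK (c IS JSON) constraint for performance, you may still want a cheap client-side guard before inserting. A minimal sketch (the isValidJson helper is my own, not a node-oracledb API) uses JSON.parse() as the validator, roughly the test the database constraint applies:

```javascript
// Returns true if the string parses as JSON, false otherwise -
// approximately what the CHECK (c IS JSON) constraint verifies.
function isValidJson(s) {
  try {
    JSON.parse(s);
    return true;
  } catch (e) {
    return false;
  }
}

console.log(isValidJson('{"name": "Sally"}'));  // true
console.log(isValidJson('{name: Sally}'));      // false: unquoted keys/values
console.log(isValidJson('not json at all'));    // false
```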
Oracle Database's JSON functionality really comes into play when you need to match attributes of the JSON string. You may even decide not to have a key column. Using Oracle 12.2's 'dotted' query notation you can do things like:

```
result = await connection.execute(
  'select c from myjsontab t where t.c.name = :cbv',
  { cbv: 'Sally' },
  { fetchInfo: {"C": {type: oracledb.STRING } }});
if (result.rows.length) {
  js = JSON.parse(result.rows[0]);
  console.log('Name is: ' + js.name);
  console.log('City is: ' + js.address.city);
} else {
  console.log('No rows fetched');
}
```

Output is:

```
Name is: Sally
City is: Melbourne
```

(If you use Oracle Database 12.1.0.2, then the dotted notation used in the example needs to be replaced with a path expression; see the JSON manual for the syntax.)

Other JSON functionality is usable, for example to find any records that have an 'address.city' field:

```
select c FROM myjsontab where json_exists(c, '$.address.city')
```

If you have relational tables, Oracle Database 12.2 has a JSON_OBJECT function that is a great way to convert relational table data to JSON:

```
result = await connection.execute(
  `select json_object('deptId' is d.department_id, 'name' is d.department_name) department
   from departments d
   where department_id < :did`,
  { did: 50 },
  { fetchInfo: {"DEPARTMENT": {type: oracledb.STRING } }});
if (result.rows.length) {
  for (var i = 0; i < result.rows.length; i++) {
    console.log("Department: " + result.rows[i][0]);
    js = JSON.parse(result.rows[i][0]);
    console.log('Department Name is: ' + js.name);
  }
} else {
  console.log('No rows fetched');
}
```

Output is:

```
Department: {"deptId":10,"name":"Administration"}
Department Name is: Administration
Department: {"deptId":20,"name":"Marketing"}
Department Name is: Marketing
Department: {"deptId":30,"name":"Purchasing"}
Department Name is: Purchasing
Department: {"deptId":40,"name":"Human Resources"}
Department Name is: Human Resources
```

If you are working with JSON tables that use BLOB storage instead of CLOB, for example:

```
CREATE TABLE myjsonblobtab (
  k NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY(START WITH 1),
  c BLOB CHECK (c IS JSON))
LOB (c) STORE AS (CACHE);
```

then you need to bind a Buffer for insert:

```
myContent = {name: "Sally", address: {city: "Melbourne"}};
json = JSON.stringify(myContent);
b = Buffer.from(json, 'utf8');

result = await connection.execute(
  'insert into myjsonblobtab (k, c) values (:kbv, :cbv)',
  { kbv: 1, cbv: b },
  { autoCommit: true });
console.log('Rows inserted: ' + result.rowsAffected);
```

Querying needs to return a Buffer too:

```
result = await connection.execute(
  'select c from myjsonblobtab t where t.c.name = :cbv',
  { cbv: 'Sally' },
  { fetchInfo: {"C": {type: oracledb.BUFFER } }});
if (result.rows.length) {
  js = JSON.parse(result.rows[0].toString('utf8'));
  console.log('Name is: ' + js.name);
  console.log('City is: ' + js.address.city);
} else {
  console.log('No rows fetched');
}
```

A final JSON tip

One final tip is to avoid JSON.parse() if you don't need it. An example is where you need to pass a JSON string to a web service or browser. You may be able to pass the JSON string returned from a query directly. In some cases the JSON string may need its own key, in which case simple string concatenation may be effective. In this example, the Oracle Locator method returns a GeoJSON string:

```
result = await connection.execute(
  `select sdo_util.to_geojson(
     sdo_geometry(2001, 8307, sdo_point_type(-90, 20, null), null, null)) as c
   from dual`,
  { },  // no binds
  { fetchInfo: {"C": {type: oracledb.STRING } }});

json = '{"geometry":' + result.rows[0][0] + '}';
console.log(json);
```

The concatenation above avoids the overhead of a parse and re-stringification:

```
js = JSON.parse(result.rows[0][0]);
jstmp = {geometry: js};
json = JSON.stringify(jstmp);
```

Summary

The JavaScript JSON.parse() and JSON.stringify() methods make it easy to work with JSON in Node.js and Oracle Database. Combined with node-oracledb's ability to work with LOBs as Node.js Strings, database access is very efficient.
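The equivalence of the two approaches can be checked without a database. Assuming the query returns a compact JSON string like the geoJson sample below (a stand-in for the sdo_util.to_geojson() result, not real Locator output), direct concatenation produces the same document the parse-and-restringify path does:

```javascript
// A stand-in for the GeoJSON string a query might return.
const geoJson = '{"type":"Point","coordinates":[-90,20]}';

// Cheap way: wrap the string by concatenation; no parsing involved.
const wrapped = '{"geometry":' + geoJson + '}';

// Expensive way: parse, wrap in an object, and stringify again.
const restrung = JSON.stringify({geometry: JSON.parse(geoJson)});

// Both produce identical JSON text when the source string is compact.
console.log(wrapped === restrung);  // true
```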
Oracle Database 12.1.0.2's JSON features make JSON operations in the database simple. Advances in Oracle Database 12.2 and 18c further improve the functionality and usability.

Resources

Node-oracledb installation instructions are here.
Node-oracledb documentation is here.
Issues and questions about node-oracledb can be posted on GitHub.
The Oracle JSON Developer's Guide is here.


Some New Features of Oracle Instant Client 18.3

We released Oracle Database 18.3 for Linux last week. It (and the "full" Oracle Client) are downloadable from here. Read this to find out about some of the new database features.

Many of the readers of my blog have an equal interest in the "client side". You'll be happy that Oracle Instant Client 18.3 for Linux 64-bit and 32-bit is also available. Instant Client is just a rebundling of the Oracle client libraries and some tools. They are the same ones available with an Oracle Database installation or the "full" Oracle Client installation, but installation is much simpler: you just unzip a file, or install an RPM package on Linux, and use them to connect your applications to Oracle Database.

The "Oracle Client", in whatever install footprint you choose, covers a number of technologies and provides a lot of language APIs. The Instant Client packages contain these APIs and selected tools like SQL*Plus and Data Pump. I'll let those teams blow their own trumpets about the new release. Here I'll talk about some of the Oracle Client functionality that benefits the Oracle Call Interface (OCI) API for C programs, and all the scripting languages that use OCI.

My wider group's most exciting project in 18.3 is the Connection Manager (CMAN) Traffic Director mode, whose sub-location in the Oracle manual is a sign of how transparent the feature is, not an indication of the huge engineering effort that went into it. CMAN in Traffic Director Mode is a proxy between the database clients and the database instances. Supported OCI clients from Oracle Database 11g Release 2 (11.2) and later can connect to CMAN to get improved high availability (HA) for planned and unplanned database server outages, connection multiplexing support, and load balancing.

Cherry picking some notable Oracle Client 18c features that are available via OCI:

- You probably know that Oracle Database 18c is really just a re-badged 12.2.0.2. Due to the major version number change and the new release strategy, there is a new OCIServerRelease2() call to get the database version number. The old OCIServerRelease() function gives just the base release information, so use the new function to get the actual DB patch level. Why? Let's just say there were robust discussions about the upgrade and release cycles, about handling the "accelerated" version change across the whole database product suite, and about how things like upgrade tools were affected.

- Extracting Instant Client 18.3 ZIP files now pre-creates symbolic links for the C and C++ client libraries on relevant operating systems. Yay! One fewer install step.

- Instant Client now also pre-creates a network/admin sub-directory to show where you can put any optional network and other configuration files such as tnsnames.ora, sqlnet.ora, ldap.ora, and oraaccess.xml. This directory will be used by default by any application that loads the related Oracle Client libraries.

- Support for Client Result Caching with dynamic binds where descriptors are not involved and the bind length is less than 32768. Since scripting languages tend to use dynamic binds for character data, this could be a big performance win for your lookup table queries.

- Unique ID generation improvements. One little old gotcha, particularly in some hosted or cloud environments, was errors when Oracle applications tried to generate a unique key for your client. This manifested itself as an Oracle error when you tried to start a program. Workarounds included adding a hostname to /etc/hosts. There were improvements in Oracle Client 18c for unique key generation, so the problem should be less common.

- A new call timeout parameter can be enabled for C applications. This applies to post-connection round-trips to the database, making it easier to interrupt long running calls and satisfy application quality of service requirements.
  After you connect, each OCI call may make one or more round-trips to Oracle Database:

  - If the time from the start of any one round-trip to the completion of that same round-trip exceeds the call timeout milliseconds, then the operation is halted and an Oracle error is returned.
  - In the case where an OCI call requires more than one round-trip and each round-trip takes less than the specified number of milliseconds, then no timeout will occur, even if the sum of all round-trip calls exceeds the call timeout value.
  - If no round-trip is required, the operation will never be interrupted.

  After a timeout has occurred, the connection must be cleaned up. This is allowed to run for the same amount of time as specified for the original timeout. For very small timeouts, if the cleanup fails, then an ORA-3114 is returned and the connection must be released. However, if the cleanup is successful, then an ORA-3136 is returned and the application can continue using the connection. You can see this will be most useful for interrupting SQL statements whose "execute" phase may take some time.

- The OCI Session Pool underlies many application connection pools (and if it doesn't underlie yours, then it should - ask me why). Improvements in 18c session pooling include some usability "do-what-I-mean" parameter size check tweaks, internal lock improvements, and a new attribute OCI_ATTR_SPOOL_MAX_USE_SESSION.

  One other change that was much debated during development is the OCISessionGet() behavior of OCI_SPOOL_ATTRVAL_NOWAIT mode when a pool has to grow. Prior to 18c, even though it was a 'no wait' operation, getting a connection would actually wait for the pool to grow. Some users didn't like this: since creating connections could take a few moments, they had no way to control the quality of service. Now in 18c the mode doesn't wait - if there's no free connection immediately available, then control is returned to the application with an error. If you are impacted by the new behavior, then look at using alternative session acquire modes like OCI_SPOOL_ATTRVAL_TIMEDWAIT. Or better, keep your pool a constant size so it doesn't need to grow, which is what Oracle's Real World Performance Group recommends.

- SODA support. Simple Oracle Document Access (SODA), previously only available via JDBC, is now available in OCI. Yum. Let's see what we can do with this now it's in C. More on this later.

I hope this has given you a taste of some Oracle Client 18c changes and given you links to explore more. Don't forget that much new database functionality is available to clients transparently or via SQL and PL/SQL. Finally, remember that Oracle has client-server version interoperability, so 18c OCI programs can connect to Oracle Database 11.2 or later. It's time to upgrade your client!


node.js

Demo: GraphQL with Oracle Database and node-oracledb

Some of our node-oracledb users recently commented they have moved from REST to GraphQL, so I thought I'd take a look at what it is all about. I can requote the GraphQL talking points with the best of them, but things like "Declarative Data Fetching" and "a schema with a defined type system is the contract between client and server" are easier to understand with examples.

In brief, GraphQL:

- Provides a single endpoint that responds to queries. No need to create multiple endpoints to satisfy varying client requirements.
- Has more flexibility and efficiency than REST. Being a query language, you can adjust which fields are returned by queries, so less data needs to be transferred. You can parameterize the queries, for example to alter the number of records returned - all without changing the API or needing new endpoints.

Let's look at the payload of a GraphQL query. This query with the root field 'blog' asks for the blog with id of 2. Specifically it asks for the id, the title and the content of that blog to be returned:

```
{
  blog(id: 2) {
    id
    title
    content
  }
}
```

The response from the server would contain the three requested fields, for example:

```
{
  "data": {
    "blog": {
      "id": 2,
      "title": "Blog Title 2",
      "content": "This is blog 2"
    }
  }
}
```

Compare that result with this query that does not ask for the title:

```
{
  blog(id: 2) {
    id
    content
  }
}
```

With the same data, this would give:

```
{
  "data": {
    "blog": {
      "id": 2,
      "content": "This is blog 2"
    }
  }
}
```

So, unlike REST, we can choose what data needs to be transferred. This makes clients more flexible to develop.

Let's look at some code. I came across this nice intro blog post today which shows a basic GraphQL server in Node.js. For simplicity its data store is an in-memory JavaScript object. I changed it to use an Oracle Database backend.

The heart of GraphQL is the type system. For the blog example, a type 'Blog' is created in our Node.js application with three obvious values and types:

```
type Blog {
  id: Int!,
  title: String!,
  content: String!
}
```

The exclamation mark means a field is required. The part of the GraphQL schema to query a blog post by id is specified in the root type 'Query':

```
type Query {
  blog(id: Int): Blog
}
```

This defines a capability to query a single blog post and return the Blog type we defined above. We may also want to get all blog posts, so we add a "blogs" field to the Query type:

```
type Query {
  blog(id: Int): Blog
  blogs: [Blog],
}
```

The square brackets indicate a list of Blogs is returned. A query to get all blogs would be like:

```
{
  blogs {
    id
    title
    content
  }
}
```

You can see that the queries include the 'blog' or 'blogs' field. We can pass all queries to the one endpoint and that endpoint will determine how to handle each. There is no need for multiple endpoints.

To manipulate data requires some 'mutations', typically making up the CUD of CRUD:

```
input BlogEntry {
  title: String!,
  content: String!
}

type Mutation {
  createBlog(input: BlogEntry): Blog!,
  updateBlog(id: Int, input: BlogEntry): Blog!,
  deleteBlog(id: Int): Blog!
}
```

To start with, the "input" type allows us to define input parameters that will be supplied by a client. Here a BlogEntry contains just a title and content. There is no id, since that will be automatically created when a new blog post is inserted into the database. In the mutations, you can see a BlogEntry type is in the argument lists for the createBlog and updateBlog fields. The deleteBlog field just needs to know the id to delete. The mutations all return a Blog. An example of using createBlog is shown later.

Combined, we represent the schema in Node.js like:

```
const typeDefs = `
  type Blog {
    id: Int!,
    title: String!,
    content: String!
  }
  type Query {
    blogs: [Blog],
    blog(id: Int): Blog
  }
  input BlogEntry {
    title: String!,
    content: String!
  }
  type Mutation {
    createBlog(input: BlogEntry): Blog!,
    updateBlog(id: Int, input: BlogEntry): Blog!,
    deleteBlog(id: Int): Blog!
  }`;
```

This is the contract, defining the data types and available operations.
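The field-selection behaviour that makes GraphQL responses so lean can be sketched without any GraphQL modules at all. In this standalone toy (the pick() helper, resolveBlog(), and the sample records are mine, not from the demo's code), the response contains exactly the fields the client listed and nothing else:

```javascript
// In-memory stand-in for the blog table.
const blogs = [
  {id: 1, title: "Blog Title 1", content: "This is blog 1"},
  {id: 2, title: "Blog Title 2", content: "This is blog 2"}
];

// Return a copy of obj containing only the requested fields -
// the essence of a GraphQL selection set.
function pick(obj, fields) {
  const out = {};
  for (const f of fields) {
    if (f in obj) out[f] = obj[f];
  }
  return out;
}

// A hand-rolled resolver for a query like: { blog(id: 2) { id content } }
function resolveBlog(id, fields) {
  const blog = blogs.find(b => b.id === id);
  return {data: {blog: blog ? pick(blog, fields) : null}};
}

console.log(JSON.stringify(resolveBlog(2, ['id', 'content'])));
// {"data":{"blog":{"id":2,"content":"This is blog 2"}}}
```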
In the backend, I decided to use Oracle Database 12c's JSON features. Needless to say, using JSON gives developers the power to modify and improve the schema during the life of an application:

```
CREATE TABLE blogtable (blog CLOB CHECK (blog IS JSON));

INSERT INTO blogtable VALUES (
  '{"id": 1, "title": "Blog Title 1", "content": "This is blog 1"}');
INSERT INTO blogtable VALUES (
  '{"id": 2, "title": "Blog Title 2", "content": "This is blog 2"}');
COMMIT;

CREATE UNIQUE INDEX blog_idx ON blogtable b (b.blog.id);
CREATE SEQUENCE blog_seq START WITH 3;
```

Each field of the JSON strings corresponds to the values of the GraphQL Blog type. (The 'dotted' notation syntax I'm using in this post requires Oracle DB 12.2, but can be rewritten for 12.1.0.2.)

The Node.js ecosystem has some powerful modules for GraphQL. The package.json is:

```
{
  "name": "graphql-oracle",
  "version": "1.0.0",
  "description": "Basic demo of GraphQL with Oracle DB",
  "main": "graphql_oracle.js",
  "keywords": [],
  "author": "christopher.jones@oracle.com",
  "license": "MIT",
  "dependencies": {
    "oracledb": "^2.3.0",
    "express": "^4.16.3",
    "express-graphql": "^0.6.12",
    "graphql": "^0.13.2",
    "graphql-tools": "^3.0.2"
  }
}
```

If you want to see the full graphql_oracle.js file it is here. Digging into it, the application has some 'resolvers' to handle the client calls. From Dhaval Nagar's demo, I modified these resolvers to invoke new helper functions that I created:

```
const resolvers = {
  Query: {
    blogs(root, args, context, info) {
      return getAllBlogsHelper();
    },
    blog(root, {id}, context, info) {
      return getOneBlogHelper(id);
    }
  },
  [ . . . ]
};
```

To conclude the GraphQL part of the sample, the GraphQL and Express modules hook up the schema type definition from above with the resolvers, and start an Express app:

```
const schema = graphqlTools.makeExecutableSchema({typeDefs, resolvers});

app.use('/graphql', graphql({
  graphiql: true,
  schema
}));

app.listen(port, function() {
  console.log('Listening on http://localhost:' + port + '/graphql');
})
```

On the Oracle side, we want to use a connection pool, so the first thing the app does is start one:

```
await oracledb.createPool(dbConfig);
```

The helper functions can get a connection from the pool. For example, the helper to get one blog is:

```
async function getOneBlogHelper(id) {
  let sql = 'SELECT b.blog FROM blogtable b WHERE b.blog.id = :id';
  let binds = [id];
  let conn = await oracledb.getConnection();
  let result = await conn.execute(sql, binds);
  await conn.close();
  return JSON.parse(result.rows[0][0]);
}
```

The JSON.parse() call nicely converts the JSON string stored in the database into the JavaScript object to be returned.

Starting the app and loading the endpoint in a browser gives a GraphiQL IDE. After entering the query on the left and clicking the 'play' button, the middle pane shows the returned data. The right hand pane gives the API documentation. To insert a new blog, the createBlog mutation can be used.

If you want to play around more, I've put the full set of demo-quality files for you to hack on here. [Update: there is also a more fully explored demo here from Stephen Black.] You may want to look at the GraphQL introductory videos, such as this comparison with REST.

To finish, GraphQL has the concept of real time updates with subscriptions, something that ties in well with the Continuous Query Notification feature of node-oracledb 2.3. Yay - something else to play with! But that will have to wait for another day. Let me know if you beat me to it.



Node-oracledb 2.3 with Continuous Query Notifications is on npm

Release announcement: Node-oracledb 2.3.0, the Node.js module for accessing Oracle Database, is on npm. Top features: Continuous Query Notifications. Heterogeneous connection pools.

Our 2.x release series continues with some interesting improvements: node-oracledb 2.3 is now available for your pleasure. Binaries for the usual platforms are available for Node.js 6, 8, and 10; source code is available on GitHub. We are not planning on releasing binaries for Node.js 4 or 9, due to the end of life of Node.js 4 and the release of Node.js 10.

The main new features in node-oracledb 2.3 are:

- Support for Oracle Database Continuous Query Notifications, allowing JavaScript methods to be called when database changes are committed. This is a cool feature, useful when applications want to be notified that some data in the database has been changed by anyone. I recently posted a demo showing CQN and Socket.IO keeping a notification area of a web page updated. Check it out.

  The new node-oracledb connection.subscribe() method is used to register a Node.js callback method and the SQL query that you want to monitor. It has two main modes: for object-level changes and for query-level changes. These allow you to get notifications whenever an object changes, or when the result set from the registered query would be changed, respectively. There are also a bunch of configuration options for the quality of service and other behaviors.

  It's worth noting that CQN requires the database to establish a connection back to your node-oracledb machine. Commonly this means that your node-oracledb machine needs a fixed IP address, but it all depends on your network setup. Oracle Database CQN was designed for infrequently modified tables, so make sure you test your system scalability.

- Support for heterogeneous connection pooling and for proxy support in connection pools. This allows each connection in the pool to use different database credentials. Some users migrating to node-oracledb had schema architectures that made use of this connection style for data encapsulation and auditing. Note that making use of the existing clientId feature may be a better fit for new code, or code that does mid-tier authentication.

- A pull request from Danilo Silva landed, making it possible for Windows users to build binaries for self-hosting. Thanks, Danilo! Previously this was only possible on Linux and macOS.

- Support for 'fetchAsString' and 'fetchInfo' to allow fetching RAW columns as hex-encoded strings.

See the CHANGELOG for the bug fixes and other changes.

Resources

- Node-oracledb installation instructions are here.
- Node-oracledb documentation is here.
- Node-oracledb change log is here.
- Issues and questions about node-oracledb can be posted on GitHub.

Finally, contributions to node-oracledb are more than welcome; see CONTRIBUTING.



ODPI-C 2.4 has been released

Release 2.4 of Oracle Database Programming Interface for C (ODPI-C) is now available on GitHub. ODPI-C is an open source library of C code that simplifies access to Oracle Database for applications written in C or C++. Top features: better database notification support. New pool timeout support.

I'll keep this brief. See the Release Notes for all changes.

- Support for Oracle Continuous Query Notification and Advanced Queuing notifications was improved. Notably, replacement subscribe and unsubscribe methods were introduced to make use more flexible. Support for handling AQ notifications was added, so now you can get notified there is a message to dequeue. And settings for the listening IP address, for notification grouping, and to let you check the registration status are now available.
- Some additional timeout options for connection pools were exposed.
- Some build improvements were made: the SONAME is set in the shared library on *ix platforms. There is also a new Makefile 'install' target that installs using a standard *ix footprint.

ODPI-C References

- Home page: https://oracle.github.io/odpi/
- Code: https://github.com/oracle/odpi
- Documentation: https://oracle.github.io/odpi/doc/index.html
- Release Notes: https://oracle.github.io/odpi/doc/releasenotes.html
- Installation Instructions: https://oracle.github.io/odpi/doc/installation.html
- Report issues and discuss: https://github.com/oracle/odpi/issues



Demo: Oracle Database Continuous Query Notification in Node.js

Native Oracle Database Continuous Query Notification (CQN) code has landed in the node-oracledb master branch on GitHub. If you want to play with it, but don't want to wait for the next binary node-oracledb release, you can compile node-oracledb yourself and play with this demo. [Update: node-oracledb 2.3 has been released and binaries are now available.]

Some of you may already be using CQN via its PL/SQL APIs. The new, native support in node-oracledb makes it all so much nicer. Check out the development documentation for connection.subscribe() and the 'user manual'. There are a couple of examples, cqn1.js and cqn2.js, available too.

CQN allows JavaScript methods to be called when database changes are committed by any transaction. You enable it in your node-oracledb app by registering a SQL query. CQN has two main modes: object-level and query-level. The former sends notifications (i.e. calls your nominated JavaScript method) when changes are made to database objects used in your registered query. The query-level mode only sends notifications when database changes are made that would impact the result set of the query, e.g. the WHERE clause is respected.

If you're not using CQN, then you might wonder when you would. For infrequently updated tables you can get CQN to generate notifications on any data or table change. I can see how query-level mode might be useful for proactive auditing, to send alerts when an unexpected, but valid, value is inserted or deleted from a table. For tables with medium levels of updates, CQN allows grouping of notifications by time, which is a way of reducing load by preventing too many notifications being generated in too short a time span. But, as my colleague Dan McGhan points out, if you know the table is subject to a lot of change, then your apps will be better off simply polling the table and avoiding any CQN overhead. Note that CQN was designed to be used for relatively infrequently updated tables.
DEMO APP

I've thrown together a little app that uses CQN and Socket.IO to refresh a message notification area on a web page. It's really just a simple smush of the Socket.IO intro example and the node-oracledb CQN examples. There is a link to all the code in the next section of this post; I'll just show snippets inline here. I'm sure Dan will update his polished 'Real-time Data' example soon, but until then here is my hack code. It uses Node.js 8's async/await style - you can rewrite it if you have an older Node.js version.

One thing about CQN is that the node-oracledb computer must be resolvable by the database computer; typically this means having a fixed IP address, which may be an issue with laptops and DHCP. Luckily plenty of other cases work too. For example, I replaced my Docker web service app with a CQN example and didn't need to do anything with ports or identifying IP addresses. I'll leave you to decide how to run it in your environment. There are CQN options to set the IP address and port to listen on, which may be handy.

The demo premise is a web page with a message notification area that always shows the five most recent messages in a database table. The messages are being inserted into that table by other apps (I'll just use SQL*Plus to do these inserts) and the web page needs to be updated with them only when there is a change.
I'm just using dummy data and random strings. To see how it fits together, look at this no-expense-spared character graphic showing the four components: SQL*Plus, the database, the browser and the Node.js app:

```
SQL*PLUS:                DATABASE:

insert into msgtable >-----> msgtable >------CQN-notification---------------+
commit                                                                      |
                                                                            |
BROWSER:                 NODE.JS APP:                                       |
+-----------+                                                               |
| 5 Message |            URL '/' serves index.html                          |
| 4 Message |                                                               |
| 3 Message |            CQN:                                               |
| 2 Message |              subscribe to msgtable with callback myCallback   |
| 1 Message |                                                               |
+-----------+            myCallback: <--------------------------------------+
      ^                    query msgtable
      +-----------------<  send rows to browser to update the DOM
```

The app (bottom right) serves the index page to the browser. It connects to the DB and uses CQN to register interest in msgtable. Any data change in the table from SQL*Plus (top left) triggers a CQN notification from the database to the application, and the callback is invoked. This callback queries the table and uses Socket.IO to send the latest records to the browser, which updates the index.html DOM.

The first thing is to get your DBA (i.e. log in as the SYSTEM user) to give you permission to get notifications:

```sql
GRANT CHANGE NOTIFICATION TO cj;
```

We then need a table that our app will get notifications about, and then query to get the latest messages:

```sql
CREATE TABLE cj.msgtable (
  k NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY(START WITH 1),
  message VARCHAR(100)
);
```

The column K is an Oracle Database 12c identity column that will automatically get a unique number inserted whenever a new message is inserted. In older database versions you would create a sequence and trigger to do the same.

The little SQL script I use to insert data (and trigger notifications) is:

```sql
INSERT INTO msgtable (message) VALUES (DBMS_RANDOM.STRING('A', 10));
COMMIT;
```

The Node.js app code is more interesting, but not complex.
Here is the code that registers the query:

```javascript
conn = await oracledb.getConnection();
await conn.subscribe('mysub', {
  callback: myCallback,
  sql: "SELECT * FROM msgtable"
});
console.log("CQN subscription created");
```

Although CQN has various options to control its behavior, here I keep it simple - I just want to get notifications when any data change to msgtable is committed. When the database sends a notification, the method 'myCallback' will get a message, the contents of which will vary depending on the subscription options. Since I know the callback is invoked when any table data has changed, I ignore the message contents and go ahead and query the table. The rows are then stringified and, by the magic of Socket.IO, sent to the web page:

```javascript
async function myCallback(message) {
  let rows = await getData();               // query the msgtable
  io.emit('message', JSON.stringify(rows)); // update the web page
}
```

The helper function to query the table is obvious:

```javascript
async function getData() {
  let sql = `SELECT k, message
             FROM msgtable
             ORDER BY k DESC
             FETCH NEXT :rowcount ROWS ONLY`;
  let binds = [5];  // get 5 most recent messages
  let options = { outFormat: oracledb.OBJECT };
  let conn = await oracledb.getConnection();
  let result = await conn.execute(sql, binds, options);
  await conn.close();
  return result.rows;
}
```

At the front end, the HTML for the web page contains a 'messages' element that is populated by jQuery code when a message is received by Socket.IO:

```html
<ul id="messages"></ul>
<script src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/2.1.1/socket.io.js"></script>
<script src="https://code.jquery.com/jquery-3.3.1.js"></script>
<script>
  $(function () {
    var socket = io();
    socket.on('message', function(msg){
      $('#messages').empty();
      $.each(JSON.parse(msg), function(idx, obj) {
        $('#messages').append($('<li>').text(obj.K + ' ' + obj.MESSAGE));
      });
    });
  });
</script>
```

You can see that the JSON string received from the app server is parsed, and the K and MESSAGE fields of each row object (corresponding to the table columns of the same names) are inserted into the DOM in an unordered list. That's it.

DEMO IN ACTION

To see it in action, extract the code and install the dependencies:

```
cjones@mdt:~/n/cqn-sockets$ npm install
npm WARN CQN-Socket-Demo@0.0.1 No repository field.
npm WARN CQN-Socket-Demo@0.0.1 No license field.
added 86 packages in 2.065s
```

I cheated a bit there and didn't show node-oracledb compiling. Once a production release of node-oracledb is made, you should edit the package.json dependency to use its pre-built binaries. Until then, node-oracledb code will be downloaded and compiled - check the instructions for compiling.

Edit server.js and set your database credentials - or set the referenced environment variables:

```javascript
let dbConfig = {
  user: process.env.NODE_ORACLEDB_USER,
  password: process.env.NODE_ORACLEDB_PASSWORD,
  connectString: process.env.NODE_ORACLEDB_CONNECTIONSTRING,
  events: true  // CQN needs events mode
}
```

Then start the app server:

```
cjones@mdt:~/n/cqn-sockets$ npm start

> CQN-Socket-Demo@0.0.1 start /home/cjones/n/cqn-sockets
> node server.js

CQN subscription created
Listening on http://localhost:3000
```

Then load http://localhost:3000/ in a browser. Initially the message pane is blank - I left bootstrapping it as an exercise for the reader.

Start SQL*Plus in a terminal window and create a message:

```sql
SQL> INSERT INTO msgtable (message) VALUES (DBMS_RANDOM.STRING('A', 10));
SQL> COMMIT;
```

Every time data is committed to msgtable, the message list on the web page is automatically updated.

If you don't see messages, review Troubleshooting CQN Registrations. The common problems will be network related: the node-oracledb machine must be resolvable, the port must be open, etc.

Try it out and let us know how you go. Remember you are using development code that just landed, so there may be a few rough edges.



A node-oracledb Web Service in Docker

This post shows how to run a node-oracledb application in a Docker container. For bonus points, the application connects to an Oracle Database running in a second container. The steps are the 'show notes' from a recent talk at Oracle Code. The demo app is a simple Express web service that accepts REST calls.

DOCKER

Oracle Docker images are available from https://store.docker.com/ and also mirrored on https://container-registry.oracle.com

If you're not familiar with Docker, it helps to know basic terminology:

- Image: a collection of software to be run as a container. Images are immutable. Changes to an image require a new image build.
- Registry: a place to store and download images.
- Container: a lightweight, standalone, executable piece of software that includes everything required to run it on a host. Containers are spun up from images. Containers are non-persistent. Once a container is deleted, all files inside that container are gone.
- Docker engine: the software engine running containers.
- Volumes: a place to persist data outside the container.

CONFIGURE A DOCKER HOST

For my host, I used Oracle Linux 7, which has the ol7_latest and ol7_uekr4 channels already enabled. Install the Docker engine as the root user by running 'sudo su -', or prefix each command with 'sudo':

```shell
# yum-config-manager --enable ol7_addons
# yum install docker-engine
# systemctl enable docker
# systemctl start docker
```

DOWNLOAD INSTANT CLIENT AND DATABASE DOCKER IMAGES

Sign in to the container registry https://container-registry.oracle.com/ with your (free) Oracle "single sign-on" (SSO) credentials. Accept the license on the container registry.

On your OL7 Docker host, log in to the registry. Remember to run Docker commands as 'root':

```shell
# docker login container-registry.oracle.com
```

This prompts for your Oracle SSO credentials.
Get the Oracle Database and Oracle Instant Client images:

```shell
# docker pull container-registry.oracle.com/database/enterprise:12.2.0.1
# docker pull container-registry.oracle.com/database/instantclient:12.2.0.1
```

This can take a while. For testing, you may want to pull the smaller, 'slim' version of the database.

View the installed images with:

```
# docker images
REPOSITORY                                            TAG       IMAGE ID      CREATED       SIZE
container-registry.oracle.com/database/enterprise     12.2.0.1  12a359cd0528  7 months ago  3.44GB
container-registry.oracle.com/database/instantclient  12.2.0.1  fda46de41de3  7 months ago  407MB
```

CREATE A DATABASE CONTAINER FROM THE DATABASE IMAGE

Start the database container:

```shell
# docker run -d --name demodb -P container-registry.oracle.com/database/enterprise:12.2.0.1
```

The '-P' option maps the ports used, allowing access to the database from outside the container.

Check for its health and wait until it shows 'healthy':

```
# docker ps
CONTAINER ID  IMAGE                 COMMAND                 STATUS                PORTS                         NAMES
9596bc2345d3  [...]/database/[...]  "/bin/sh -c '/bin/..."  Up 3 hours (healthy)  ...->1521/tcp, ...->5500/tcp  demodb
```

Find the database container's IP address:

```shell
# docker inspect -f "{{ .NetworkSettings.IPAddress }}" demodb
```

You will use this IP in database connect strings in your applications.

You can stop and start the container as desired:

```shell
# docker stop demodb
# docker start demodb
```

The data is persistent as long as the container exists. Use 'docker ps --all' to show all containers, running or not.
CREATE A NEW SCHEMA

Create a SQL file called createschema.sql:

```sql
SET ECHO ON
ALTER SESSION SET CONTAINER=orclpdb1;
DROP USER scott CASCADE;
CREATE USER scott IDENTIFIED BY tiger;
GRANT CONNECT, RESOURCE TO scott;
ALTER USER scott QUOTA UNLIMITED ON USERS;
DROP TABLE scott.bananas;
CREATE TABLE scott.bananas (shipment VARCHAR2(4000) CHECK (shipment IS JSON));
INSERT INTO scott.bananas VALUES ('{ "farmer": "Gita", "ripeness": "All Green", "kilograms": 100 }');
INSERT INTO scott.bananas VALUES ('{ "farmer": "Ravi", "ripeness": "Full Yellow", "kilograms": 90 }');
INSERT INTO scott.bananas VALUES ('{ "farmer": "Mindy", "ripeness": "More Yellow than Green", "kilograms": 92 }');
COMMIT;
EXIT
```

For this demo, you can see I used the Oracle Database 12.1.0.2 JSON data type.

Execute createschema.sql in your favorite tool, such as SQL*Plus. In my case I actually ran SQL*Plus on my Docker host machine. Cheating a bit on giving details here: I had downloaded the Instant Client Basic and SQL*Plus packages and unzipped them as shown in the Instant Client instructions. I then set my shell to use the SQL*Plus binary:

```shell
# export LD_LIBRARY_PATH=/home/cjones/instantclient_12_2
# export PATH=/home/cjones/instantclient_12_2:$PATH
```

Using the database IP address as shown earlier, you can now run the script in SQL*Plus against the container database. In my environment the database IP was 172.17.0.2:

```shell
# sqlplus -l sys/Oradoc_db1@172.17.0.2/orclpdb1.localdomain as sysdba @createschema.sql
```

The database password and service name shown are the defaults in the image.

CREATE A NODE.JS IMAGE

Let's add Node.js to the Instant Client image. Create a sub-directory nodejs-scripts:

```shell
# mkdir nodejs-scripts
```

Create a new file 'nodejs-scripts/Dockerfile'. This is the 'recipe' for building a Docker image. Here Node.js is added to the Instant Client image to create a new image usable by any Node.js application. The Node.js 8 package for Oracle Linux is handy.
The Dockerfile should contain:

```dockerfile
FROM container-registry.oracle.com/database/instantclient:12.2.0.1
ADD ol7_developer_nodejs8.repo /etc/yum.repos.d/ol7_developer_nodejs8.repo
RUN echo proxy=http://my-proxy.example.com:80 >> /etc/yum.conf
RUN yum -y update && \
    rm -rf /var/cache/yum && \
    yum -y install nodejs
```

The FROM line shows that we base our new image on the Instant Client image. If you are not behind a proxy, you can omit the proxy line. Or change the line to use your proxy. For quick testing, you may want to omit the 'yum -y update' command.

The Dockerfile ADD command copies 'ol7_developer_nodejs8.repo' from the host file system into the image's file system. Create 'nodejs-scripts/ol7_developer_nodejs8.repo' containing:

```
[ol7_developer_nodejs8]
name=Oracle Linux $releasever Node.js 8 Packages for Development and test ($basearch)
baseurl=https://yum.oracle.com/repo/OracleLinux/OL7/developer_nodejs8/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1
```

Now the new image with Oracle Instant Client and Node.js 8 can be built using this Dockerfile:

```shell
# docker build -t cjones/nodejs-image nodejs-scripts
```

The 'cjones/nodejs-image' is the image name, not a directory path.

You can see the new image has been created:

```
# docker images
REPOSITORY                                            TAG       IMAGE ID      CREATED         SIZE
cjones/nodejs-image                                   latest    e048b739bb63  29 minutes ago  1.51GB
container-registry.oracle.com/database/enterprise     12.2.0.1  12a359cd0528  7 months ago    3.44GB
container-registry.oracle.com/database/instantclient  12.2.0.1  fda46de41de3  7 months ago    407MB
```

CREATE A NODE.JS DEMO IMAGE

The new Node.js image is refined by installing our demo application. This creates another new image that we can later run whenever we want to use the application.
Create a sub-directory 'ws-demo-scripts':

```shell
# mkdir ws-demo-scripts
```

Create a new file 'ws-demo-scripts/Dockerfile' containing:

```dockerfile
FROM cjones/nodejs-image
ENV https_proxy=http://my-proxy.example.com:80
WORKDIR workdir
COPY package.json package.json
COPY server.js server.js
RUN npm install
CMD ["npm", "start"]
```

The first line shows the new image should be based on the Node.js image 'cjones/nodejs-image' created in the section above. Again, adjust the proxy line as needed by your network.

You can see the Dockerfile copies two files from our host file system into the image. These files are shown below. When the image is created, the RUN command will install the Node.js dependencies from package.json. When a container starts, the CMD action is taken, which runs 'npm start', in turn invoking the 'main' target in package.json. Looking below at the package.json content, you can see this means 'node server.js' is run.

Create a file 'ws-demo-scripts/package.json' containing:

```json
{
  "name": "banana-farmer",
  "version": "1.0.0",
  "description": "RESTful API using Node.js Express Oracle DB",
  "main": "server.js",
  "author": "Oracle",
  "license": "Apache",
  "dependencies": {
    "body-parser": "^1.18.2",
    "express": "^4.16.0",
    "oracledb": "^2.2.0"
  }
}
```

As you can see, this application installs the body-parser module, the node-oracledb module, and also Express. This demo is an Express web service application. And yes, it is a banana farmer web service.

The default run target of package.json is the application file 'server.js'. Create the application file 'ws-demo-scripts/server.js' containing the contents from here. The demo application is just this one file.
Build the demo image:

```shell
# docker build -t cjones/ws-demo ws-demo-scripts
```

We now have our fourth image, which contains our runnable application:

```
# docker images
REPOSITORY                                            TAG       IMAGE ID      CREATED         SIZE
cjones/ws-demo                                        latest    31cbe6d2ea4e  21 seconds ago  1.51GB
cjones/nodejs-image                                   latest    e048b739bb63  29 minutes ago  1.51GB
container-registry.oracle.com/database/enterprise     12.2.0.1  12a359cd0528  7 months ago    3.44GB
container-registry.oracle.com/database/instantclient  12.2.0.1  fda46de41de3  7 months ago    407MB
```

DEMO APPLICATION OVERVIEW

The banana farmer scenario is that shipments of bananas from farmers are recorded. They can have a farmer name, ripeness, and weight. Shipments can be inserted, queried, updated or deleted.

Let's look at a couple of snippets from ws-demo-scripts/server.js. A connection helper creates a pool of database connections:

```javascript
oracledb.createPool({
  user: process.env.NODE_ORACLEDB_USER,
  password: process.env.NODE_ORACLEDB_PASSWORD,
  connectString: process.env.NODE_ORACLEDB_CONNECTIONSTRING
}, . . .
```

The credentials are taken from environment variables. When we run the app container we will pass values for those environment variables into the container.

The application has Express routes for REST GET, POST, PUT and DELETE calls.
The code to handle a GET request looks like:

```javascript
// HTTP method: GET
// URI        : /bananas/FARMER
// Get the banana shipment for FARMER
app.get('/bananas/:FARMER', function (req, res) {
  doGetConnection(res, function(err, connection) {
    if (err)
      return;
    connection.execute(
      "SELECT b.shipment FROM bananas b WHERE b.shipment.farmer = :f",
      { f: req.params.FARMER },
      function (err, result) {
        if (err) {
          res.set('Content-Type', 'application/json');
          res.status(500).send(JSON.stringify({
            status: 500,
            message: "Error getting the farmer's profile",
            detailed_message: err.message
          }));
        } else if (result.rows.length < 1) {
          res.set('Content-Type', 'application/json');
          res.status(404).send(JSON.stringify({
            status: 404,
            message: "Farmer doesn't exist",
            detailed_message: ""
          }));
        } else {
          res.contentType('application/json');
          res.status(200).send(JSON.stringify(result.rows));
        }
        doRelease(connection, "GET /bananas/" + req.params.FARMER);
      });
  });
});
```

Express makes it easy. It handles the routing to this code when a GET request with the URL '/bananas/<name>', e.g. '/bananas/Gita', is made. This simply binds the URL route parameter containing the farmer's name into the SELECT statement. Binding is important for security and scalability, as you know. The SQL syntax used is the JSON 'dot' notation of Oracle Database 12.2, but it could be rewritten to work with 12.1.0.2.

The bulk of the code is error handling, looking after the cases where there was a processing error or no rows were returned. It sends back HTTP status codes 500 or 404, respectively. The success code path sends back the query output 'result.rows' as a JSON string, with the HTTP success status code 200.

START THE DEMO CONTAINER

Let's run the application. Create a file 'ws-demo-scripts/envfile.list' with the credentials for the application. Use the IP address of your database container found with the 'docker inspect' command shown previously.
In my environment, the database IP address was '172.17.0.2':

```
NODE_ORACLEDB_USER=scott
NODE_ORACLEDB_PASSWORD=tiger
NODE_ORACLEDB_CONNECTIONSTRING=172.17.0.2/orclpdb1.localdomain
```

Start the Node.js web service container:

```shell
# docker run -d --name nodejs -P --env-file ws-demo-scripts/envfile.list cjones/ws-demo
```

STATUS CHECK

To recap what's happened, the Docker images are:

```
# docker images
REPOSITORY                                            TAG       IMAGE ID      CREATED         SIZE
cjones/ws-demo                                        latest    25caede29b17  12 minutes ago  1.51GB
cjones/nodejs-image                                   latest    138f2b76ffe7  13 minutes ago  1.51GB
container-registry.oracle.com/database/enterprise     12.2.0.1  12a359cd0528  7 months ago    3.44GB
container-registry.oracle.com/database/instantclient  12.2.0.1  fda46de41de3  7 months ago    407MB
```

Two base images were downloaded. An image with Node.js was created from the Instant Client image. Finally, a fourth image 'cjones/ws-demo' with Node.js, Instant Client and the application code was created.

We have started the database ('demodb') and application ('nodejs') containers:

```
# docker ps
CONTAINER ID  IMAGE                 COMMAND                 STATUS                PORTS                         NAMES
2924e1225290  cjones/ws-demo        "npm start"             Up 3 hours                                          nodejs
9596bc2345d3  [...]/database/[...]  "/bin/sh -c '/bin/..."  Up 3 hours (healthy)  ...->1521/tcp, ...->5500/tcp  demodb
```

We found the IP address of the database container, and knew (by reading the container registry documentation) the default credentials of the SYS user. We created a schema SCOTT on the database, with a table containing some JSON data. An application container was started, with the database application credentials and connection string specified in an environment file outside the container.

SUBMIT REST REQUESTS

Now we can call our application, and it will access the database. Install the browser extension HttpRequester (in Firefox) or Postman (in Chrome).

Find the IP of the demo web service container:

```shell
# docker inspect -f "{{ .NetworkSettings.IPAddress }}" nodejs
```

In my environment, it was '172.17.0.3'.
Use this with the port (3000) and the various endpoints (e.g. '/bananas/<farmer>') defined in server.js for REST requests. In the HttpRequester or Postman extensions you can make various REST calls.

Get all shipments:

```
GET http://172.17.0.3:3000/bananas
```

Get one farmer's shipment(s):

```
GET http://172.17.0.3:3000/bananas/Gita
```

New data:

```
POST http://172.17.0.3:3000/bananas

{ "farmer" : "CJ", "ripeness" : "Light Green", "kilograms" : 50 }
```

Update data:

```
PUT http://172.17.0.3:3000/bananas/CJ

{ "farmer" : "CJ", "ripeness" : "50% Green, 50% Yellow", "kilograms" : 45 }
```

Here's a screenshot of HttpRequester in action doing a GET request to get all banana shipments. On the left, the red boxes show the URL for the '/bananas' endpoint was executed as a GET request. On the right, the response shows the success HTTP status code of 200 and the returned data from the request.

When you are finished with the containers you can stop them:

```shell
# docker stop demodb
# docker stop nodejs
```

If you haven't tried Docker yet, now is the perfect time! Containers make deployment and development easy. Oracle's Docker images let you get started with Oracle products very quickly.

If you want to learn more about Node.js and REST, check out the series Creating a REST API with Node.js and Oracle Database. And finally, if you've read this far, you can download all the files used from here.
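If you would rather script the REST calls than use a browser extension, a few lines of Python from the standard library do the same job. This is my sketch, not part of the demo; the IP, port and endpoints are the ones shown above, and will differ in your environment:

```python
import json
import urllib.request

BASE = "http://172.17.0.3:3000"  # IP from 'docker inspect', port from server.js


def make_request(method, path, payload=None):
    # Build a REST request matching the demo's endpoints; sending it is a
    # separate step, so this also works without the containers running.
    body = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(BASE + path, data=body, method=method)
    if body is not None:
        req.add_header("Content-Type", "application/json")
    return req


req = make_request("POST", "/bananas",
                   {"farmer": "CJ", "ripeness": "Light Green", "kilograms": 50})
print(req.method, req.full_url)   # POST http://172.17.0.3:3000/bananas
# urllib.request.urlopen(req)     # uncomment to actually send the request
```

The same helper covers the GET, PUT and DELETE examples by changing the method and path arguments.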



Efficient and Scalable Batch Statement Execution in Python cx_Oracle

Today's guest post is by Oracle's Anthony Tuininga, creator and lead maintainer of cx_Oracle, the extremely popular Oracle Database interface for Python.

Introduction

This article shows how batch statement execution in the Python cx_Oracle interface for Oracle Database can significantly improve performance and make working with large data sets easy. In many cx_Oracle applications, executing SQL and PL/SQL statements using the method cursor.execute() is perfect. But if you intend to execute the same statement repeatedly for a large set of data, your application can incur significant overhead, particularly if the database is on a remote network. The method cursor.executemany() gives you the ability to reduce network transfer costs and database load, and can significantly outperform repeated calls to cursor.execute().

SQL

To help demonstrate batch execution, the following tables and data will be used:

```sql
create table ParentTable (
    ParentId            number(9) not null,
    Description         varchar2(60) not null,
    constraint ParentTable_pk primary key (ParentId)
);

create table ChildTable (
    ChildId             number(9) not null,
    ParentId            number(9) not null,
    Description         varchar2(60) not null,
    constraint ChildTable_pk primary key (ChildId),
    constraint ChildTable_fk foreign key (ParentId)
            references ParentTable
);

insert into ParentTable values (10, 'Parent 10');
insert into ParentTable values (20, 'Parent 20');
insert into ParentTable values (30, 'Parent 30');
insert into ParentTable values (40, 'Parent 40');
insert into ParentTable values (50, 'Parent 50');

insert into ChildTable values (1001, 10, 'Child A of Parent 10');
insert into ChildTable values (1002, 20, 'Child A of Parent 20');
insert into ChildTable values (1003, 20, 'Child B of Parent 20');
insert into ChildTable values (1004, 20, 'Child C of Parent 20');
insert into ChildTable values (1005, 30, 'Child A of Parent 30');
insert into ChildTable values (1006, 30, 'Child B of Parent 30');
insert into ChildTable values (1007, 40, 'Child A of Parent 40');
insert into ChildTable values (1008, 40, 'Child B of Parent 40');
insert into ChildTable values (1009, 40, 'Child C of Parent 40');
insert into ChildTable values (1010, 40, 'Child D of Parent 40');
insert into ChildTable values (1011, 40, 'Child E of Parent 40');
insert into ChildTable values (1012, 50, 'Child A of Parent 50');
insert into ChildTable values (1013, 50, 'Child B of Parent 50');
insert into ChildTable values (1014, 50, 'Child C of Parent 50');
insert into ChildTable values (1015, 50, 'Child D of Parent 50');

commit;
```

Simple Execution

To insert a number of rows into the parent table, the following naive Python script could be used:

```python
data = [
    (60, "Parent 60"),
    (70, "Parent 70"),
    (80, "Parent 80"),
    (90, "Parent 90"),
    (100, "Parent 100")
]
for row in data:
    cursor.execute("""
            insert into ParentTable (ParentId, Description)
            values (:1, :2)""", row)
```

This works as expected, and five rows are inserted into the table. Each execution, however, requires a "round-trip" to the database. A round-trip is defined as the client (i.e. the Python script) making a request to the database and the database sending back its response to the client. In this case, five round-trips are required. As the number of executions increases, the cost increases in a linear fashion based on the average round-trip cost. This cost is dependent on the configuration of the network between the client (Python script) and the database server, as well as on the capability of the database server.

Batch Execution

Performing the same inserts using executemany() would be done as follows:

```python
data = [
    (60, "Parent 60"),
    (70, "Parent 70"),
    (80, "Parent 80"),
    (90, "Parent 90"),
    (100, "Parent 100")
]
cursor.executemany("""
        insert into ParentTable (ParentId, Description)
        values (:1, :2)""", data)
```

In this case there is only one round-trip to the database, not five. In fact, no matter how many rows are processed at the same time there will always be just one round-trip.
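Since executemany() is part of the standard Python DB API, the pattern above can be tried without an Oracle database at hand. Here is a small runnable sketch using the stdlib sqlite3 module (my substitution for illustration; sqlite3 uses ? placeholders where cx_Oracle uses :1, :2, and being an embedded database it has no network round-trips, so only the call shape carries over, not the performance effect):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cursor = conn.cursor()
cursor.execute(
    "create table ParentTable (ParentId integer primary key, Description text)")

data = [
    (60, "Parent 60"),
    (70, "Parent 70"),
    (80, "Parent 80"),
    (90, "Parent 90"),
    (100, "Parent 100")
]

# One executemany() call replaces five execute() calls; with cx_Oracle this
# is also one round-trip to the database instead of five.
cursor.executemany(
    "insert into ParentTable (ParentId, Description) values (?, ?)", data)

cursor.execute("select count(*) from ParentTable")
print(cursor.fetchone()[0])   # prints 5
```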
As the number of rows processed increases, the performance advantage of cursor.executemany() skyrockets. For example, on my machine inserting 1,000 rows into the same table in a database on the local network using cursor.execute() takes 410 ms, whereas using cursor.executemany() requires only 20 ms. Increasing the number to 10,000 rows requires 4,000 ms for cursor.execute() but only 60 ms for cursor.executemany()!

For really huge data sets there may be external buffer or network limits to how many rows can be processed at one time. These limits are based on both the number of rows being processed as well as the "size" of each row that is being processed. The sweet spot can be found by tuning your application. Repeated calls to executemany() are still better than repeated calls to execute().

As mentioned earlier, execution of PL/SQL statements is also possible. Here is a brief example demonstrating how to do so:

data = [[2], [6], [4]]
var = cursor.var(str, arraysize = len(data))
data[0].append(var)     # OUT bind variable ':2'
cursor.executemany("""
        declare
            t_Num       number := :1;
            t_OutValue  varchar2(100);
        begin
            for i in 1..t_Num loop
                t_OutValue := t_OutValue || 'X';
            end loop;
            :2 := t_OutValue;
        end;""", data)
print("Result:", var.values)

This results in the following output:

Result: ['XX', 'XXXXXX', 'XXXX']

Using executemany()

With the significant performance advantages that can be seen by performing batch execution of a single statement, it would seem obvious to use cursor.executemany() whenever possible. Let's look at some of the other features of executemany() useful for common data handling scenarios.

Scenario 1: Getting Affected Row Counts

One scenario that may arise is the need to determine how many rows are affected by each row of data that is passed to cursor.executemany().
Consider this example:

for parentId in (10, 20, 30):
    cursor.execute("delete from ChildTable where ParentId = :1", [parentId])
    print("Rows deleted for parent id", parentId, "are", cursor.rowcount)

This results in the following output:

Rows deleted for parent id 10 are 1
Rows deleted for parent id 20 are 3
Rows deleted for parent id 30 are 2

Since each delete is performed independently, determining how many rows are affected by each delete is easy to do. But what happens if we use cursor.executemany() in order to improve performance, as in the following rewrite?

data = [[10], [20], [30]]
cursor.executemany("delete from ChildTable where ParentId = :1", data)
print("Rows deleted:", cursor.rowcount)

This results in the following output:

Rows deleted: 6

You'll note this is the sum of all of the rows that were deleted in the prior example, but the information on how many rows were deleted for each parent id is missing. Fortunately, that can be determined by enabling the Array DML Row Counts feature, available in Oracle Database 12.1 and higher:

data = [[10], [20], [30]]
cursor.executemany("delete from ChildTable where ParentId = :1", data,
        arraydmlrowcounts = True)
for ix, rowsDeleted in enumerate(cursor.getarraydmlrowcounts()):
    print("Rows deleted for parent id", data[ix][0], "are", rowsDeleted)

This results in the same output as was shown for the simple cursor.execute():

Rows deleted for parent id 10 are 1
Rows deleted for parent id 20 are 3
Rows deleted for parent id 30 are 2

Scenario 2: Handling Bad Data

Another scenario is handling bad data. When processing large amounts of data, some of that data may not fit the constraints imposed by the database.
Using cursor.execute(), such processing may look like this:

data = [
    (1016, 10, 'Child B of Parent 10'),
    (1017, 10, 'Child C of Parent 10'),
    (1018, 20, 'Child D of Parent 20'),
    (1018, 20, 'Child D of Parent 20'),     # duplicate key
    (1019, 30, 'Child C of Parent 30'),
    (1020, 30, 'Child D of Parent 40'),
    (1021, 600, 'Child A of Parent 600'),   # parent does not exist
    (1022, 40, 'Child F of Parent 40'),
]
for ix, row in enumerate(data):
    try:
        cursor.execute("""
                insert into ChildTable (ChildId, ParentId, Description)
                values (:1, :2, :3)""", row)
    except cx_Oracle.DatabaseError as e:
        print("Row", ix, "has error", e)

This results in the following output:

Row 3 has error ORA-00001: unique constraint (EMDEMO.CHILDTABLE_PK) violated
Row 6 has error ORA-02291: integrity constraint (EMDEMO.CHILDTABLE_FK) violated - parent key not found

If you make use of cursor.executemany(), however, execution stops at the first error that is encountered:

data = [
    (1016, 10, 'Child B of Parent 10'),
    (1017, 10, 'Child C of Parent 10'),
    (1018, 20, 'Child D of Parent 20'),
    (1018, 20, 'Child D of Parent 20'),     # duplicate key
    (1019, 30, 'Child C of Parent 30'),
    (1020, 30, 'Child D of Parent 40'),
    (1021, 600, 'Child A of Parent 600'),   # parent does not exist
    (1022, 40, 'Child F of Parent 40'),
]
try:
    cursor.executemany("""
            insert into ChildTable (ChildId, ParentId, Description)
            values (:1, :2, :3)""", data)
except cx_Oracle.DatabaseError as e:
    errorObj, = e.args
    print("Row", cursor.rowcount, "has error", errorObj.message)

This results in the following output:

Row 3 has error ORA-00001: unique constraint (EMDEMO.CHILDTABLE_PK) violated

Fortunately there is an option to help here as well, using the Batch Errors feature available in Oracle Database 12.1 and higher.
This can be seen using the following code:

data = [
    (1016, 10, 'Child B of Parent 10'),
    (1017, 10, 'Child C of Parent 10'),
    (1018, 20, 'Child D of Parent 20'),
    (1018, 20, 'Child D of Parent 20'),     # duplicate key
    (1019, 30, 'Child C of Parent 30'),
    (1020, 30, 'Child D of Parent 40'),
    (1021, 600, 'Child A of Parent 600'),   # parent does not exist
    (1022, 40, 'Child F of Parent 40'),
]
cursor.executemany("""
        insert into ChildTable (ChildId, ParentId, Description)
        values (:1, :2, :3)""", data, batcherrors = True)
for errorObj in cursor.getbatcherrors():
    print("Row", errorObj.offset, "has error", errorObj.message)

This results in the following output, which is identical to the example that used cursor.execute():

Row 3 has error ORA-00001: unique constraint (EMDEMO.CHILDTABLE_PK) violated
Row 6 has error ORA-02291: integrity constraint (EMDEMO.CHILDTABLE_FK) violated - parent key not found

In both the execute() and executemany() cases, rows that were inserted successfully open a transaction which will need to be either committed or rolled back with connection.commit() or connection.rollback(), depending on the needs of your application. Note that if you use autocommit mode, the transaction is committed only when no errors are returned; otherwise, a transaction is left open and will need to be explicitly committed or rolled back.

Scenario 3: DML RETURNING Statements

The third scenario that I will consider is that of DML RETURNING statements. These statements allow you to bundle a DML statement (such as an INSERT, UPDATE, DELETE or MERGE statement) along with a query to return some data at the same time.
With cursor.execute() this is done easily enough using something like the following code:

childIdVar = cursor.var(int)
cursor.setinputsizes(None, childIdVar)
for parentId in (10, 20, 30):
    cursor.execute("""
            delete from ChildTable
            where ParentId = :1
            returning ChildId into :2""", [parentId])
    print("Child ids deleted for parent id", parentId, "are",
            childIdVar.values)

This produces the following output:

Child ids deleted for parent id 10 are [1001]
Child ids deleted for parent id 20 are [1002, 1003, 1004]
Child ids deleted for parent id 30 are [1005, 1006]

Support for DML RETURNING in cursor.executemany() was introduced in cx_Oracle 6.3. Because it was supported only in execute() prior to cx_Oracle 6.3, the cx_Oracle.__future__ object must have the attribute "dml_ret_array_val" set to True to allow multiple values to be returned by executemany(). Failing to set this to True when calling executemany() will result in an error. Finally, the variable created to accept the returned values must have an array size large enough to accept the rows that are returned (one array of output data is returned for each of the input records that are provided). The following code shows the new executemany() support in cx_Oracle 6.3:

cx_Oracle.__future__.dml_ret_array_val = True
data = [[10], [20], [30]]
childIdVar = cursor.var(int, arraysize = len(data))
cursor.setinputsizes(None, childIdVar)
cursor.executemany("""
        delete from ChildTable
        where ParentId = :1
        returning ChildId into :2""", data)
for ix, inputRow in enumerate(data):
    print("Child ids deleted for parent id", inputRow[0], "are",
            childIdVar.getvalue(ix))

This results in the same output as was seen with cursor.execute():

Child ids deleted for parent id 10 are [1001]
Child ids deleted for parent id 20 are [1002, 1003, 1004]
Child ids deleted for parent id 30 are [1005, 1006]

Note that setting "dml_ret_array_val" to True also causes execute() to return arrays for each bind record.
In a future cx_Oracle 7 release this will become the only behavior available.

Scenario 4: Variable Data Lengths

When multiple rows of data are being processed, there is the possibility that the data is not uniform in type and size. cx_Oracle makes some effort to accommodate such differences. For example, type determination is deferred until a value that is not None is found in the data. If all values in a particular column are None, then cx_Oracle assumes the type is a string and has a length of 1. cx_Oracle will also adjust the size of the buffers used to store strings and bytes when a longer value is encountered in the data. These sorts of operations, however, incur overhead as cx_Oracle has to reallocate memory and copy all of the data that has been processed thus far. To eliminate this overhead, the method cursor.setinputsizes() should be used to tell cx_Oracle about the type and size of the data that is going to be used. For example:

data = [
    (110, "Parent 110"),
    (2000, "Parent 2000"),
    (30000, "Parent 30000"),
    (400000, "Parent 400000"),
    (5000000, "Parent 5000000")
]
cursor.setinputsizes(None, 20)
cursor.executemany("""
        insert into ParentTable (ParentId, Description)
        values (:1, :2)""", data)

In this example, without the call to cursor.setinputsizes(), cx_Oracle would perform five allocations of increasing size as it discovered each new, larger string. The value 20, however, tells cx_Oracle that the maximum size of the strings that will be processed is 20 characters. Since cx_Oracle allocates memory for each row based on this value, it is best not to oversize it. Note that if the type and size are uniform (as they are for the first column in the data being inserted), the type does not need to be specified and None can be provided, indicating that the default type (in this case cx_Oracle.NUMBER) should be used.

Conclusion

As can be seen from the preceding examples, cursor.executemany() lets you manage data easily and enjoy high performance at the same time!
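For really huge data sets, the advice above is to repeat executemany() over moderate batches rather than fall back to row-by-row execute(). A minimal chunking sketch (the helper name and the batch size of 10,000 are illustrative choices, not part of cx_Oracle; the database calls are shown only in comments since they need a live connection):

```python
def batches(rows, batch_size):
    """Yield successive slices of 'rows', each at most 'batch_size' long."""
    for start in range(0, len(rows), batch_size):
        yield rows[start:start + batch_size]

# Hypothetical usage with an open cx_Oracle cursor and connection:
# for batch in batches(all_rows, 10000):
#     cursor.executemany("""
#             insert into ParentTable (ParentId, Description)
#             values (:1, :2)""", batch)
# connection.commit()

# Demonstrate the slicing on a small list: 25 rows in batches of 10.
print([len(b) for b in batches(list(range(25)), 10)])
```

The right batch size depends on row width, network configuration, and memory, so it is a tuning parameter rather than a fixed rule.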



Python cx_Oracle 6.3 Supports DML RETURNING for Batch Statement Execution

cx_Oracle 6.3, the extremely popular Oracle Database interface for Python, is now Production on PyPI. cx_Oracle is an open source package that covers the Python Database API specification with many additions to support Oracle advanced features.

Top feature: Cursor.executemany() support for OUT bind variables in DML RETURNING statements.

This release contains a number of bug fixes and useful improvements. For the full list, see the Release Notes, but I wanted to highlight a few features:

- Support for binding integers and floats as cx_Oracle.NATIVE_FLOAT.
- Support for true heterogeneous session pools that use different username/password combinations for each session acquired from the pool.
- All exceptions raised by cx_Oracle now produce a cx_Oracle._Error object.
- Support for getting the OUT values of bind variables bound to a DML RETURNING statement when calling Cursor.executemany(). For technical reasons, this requires setting a new attribute in cx_Oracle.__future__. As an example:

cx_Oracle.__future__.dml_ret_array_val = True
data = [[10], [20], [30]]
childIdVar = cursor.var(int, arraysize = len(data))
cursor.setinputsizes(None, childIdVar)
cursor.executemany("""
        delete from ChildTable
        where ParentId = :1
        returning ChildId into :2""", data)
for ix, inputRow in enumerate(data):
    print("Child ids deleted for parent id", inputRow[0], "are",
            childIdVar.getvalue(ix))

Want to know what this displays? Stay tuned to this blog site for an upcoming post on using executemany() in cx_Oracle!

cx_Oracle References

Home page: oracle.github.io/python-cx_Oracle/index.html
Installation instructions: cx-oracle.readthedocs.io/en/latest/installation.html
Documentation: cx-oracle.readthedocs.io/en/latest/index.html
Release Notes: cx-oracle.readthedocs.io/en/latest/releasenotes.html
Source Code Repository: github.com/oracle/python-cx_Oracle



Node-oracledb 2.2 with Batch Statement Execution (and more) is out on npm

Release announcement: Node-oracledb 2.2, the Node.js module for accessing Oracle Database, is on npm.

Top feature: Batch Statement Execution

In the six-or-so weeks since 2.1 was released, a bunch of new functionality landed in node-oracledb 2.2. This shows how much engineering went into the refactored lower abstraction layer we introduced in 2.0, just to make it easy to expose Oracle features to languages like Node.js. The top features in node-oracledb 2.2 are:

- Added oracledb.edition to support Edition-Based Redefinition (EBR). The EBR feature of Oracle Database allows multiple versions of views, synonyms, PL/SQL objects and SQL Translation profiles to be used concurrently. This lets database logic be updated and tested while production users are still accessing the original version. The new edition property can be set at the global level, when creating a pool, or when creating a standalone connection. This removes the need to use an ALTER SESSION command or the ORA_EDITION environment variable.

- Added oracledb.events to allow the Oracle client library to receive Oracle Database service events, such as those for Fast Application Notification (FAN) and Runtime Load Balancing (RLB). The new events property can be set at the global level, when creating a pool, or when creating a standalone connection. This removes the need to use an oraaccess.xml file to enable event handling, making it easier to use Oracle high availability features, and makes it available for the first time to users who are linking node-oracledb with version 11.2 Oracle client libraries.

- Added connection.changePassword() for changing passwords. Passwords can also be changed when calling oracledb.getConnection(), which is the only way to connect when a password has expired.

- Added connection.executeMany() for efficient batch execution of DML (e.g. INSERT, UPDATE and DELETE) and PL/SQL execution with multiple records. See the example below.
- Added connection.getStatementInfo() to find information about a SQL statement without executing it. This is most useful for finding the column types of queries and for finding bind variable names. It does require a 'round-trip' to the database, so don't use it without reason. Also there are one or two quirks because the library underneath that provides the implementation has some 'historic' behavior. Check the manual for details.

- Added connection.ping() to support system health checks. This verifies that a connection is usable and that the database service or network have not gone down. This requires a round-trip to the database, so you wouldn't use it without reason. Although it doesn't replace error handling in execute(), sometimes you don't want to be running a SQL statement just to check the connection status, so it is useful in the arsenal of features for keeping systems running reliably.

See the CHANGELOG for all changes.

One infrastructure change we recently made was to move the canonical home for documentation to GitHub 'pages'. This will be kept in sync with the current production version of node-oracledb. If you update your bookmarks to the new locations, it will allow us to update the source code repository documentation mid-release without confusing anyone about available functionality.

Home page: https://oracle.github.io/node-oracledb
Installation: https://oracle.github.io/node-oracledb/INSTALL.html
Documentation: https://oracle.github.io/node-oracledb/doc/api.html

Batch Statement Execution

The new connection.executeMany() method allows many sets of data values to be bound to one DML or PL/SQL statement for execution. It is like calling connection.execute() multiple times for one statement, but requires fewer round-trips overall. This is an efficient way to handle batch changes, for example when inserting or updating multiple rows, because the reduced cost of round-trips has a significant effect on performance and scalability.
Depending on the number of records, their sizes, and on the network speed to the database, the performance of executeMany() can be significantly faster than the equivalent use of execute(). In one little test I did between Node.js on my laptop and a database running on my adjacent desktop, I saw that executeMany() took 16 milliseconds whereas execute() took 2.3 seconds to insert 1000 rows, each consisting of a number and a very short string. With larger data sizes and slower (or faster!) networks the performance characteristics will vary, but the overall benefit is widespread.

The executeMany() method supports IN, IN OUT and OUT variables. Binds from RETURNING INTO clauses are supported, making it easy to insert a number of rows and find, for example, the ROWIDs of each.

With an optional batchErrors mode, you can insert 'noisy' data easily. Batch Errors allows valid rows to be inserted and invalid rows to be rejected. A transaction will be started but not committed, even if autocommit mode is enabled. The application can examine the errors, find the bad data, take action, and explicitly commit or roll back as desired.

To give one example, let's look at the use of batchErrors when inserting data:

var sql = "INSERT INTO childtab VALUES (:1, :2, :3)";

// There are three values in each nested array since there are
// three bind variables in the SQL statement.
// Each nested array will be inserted as a new row.
var binds = [
    [1016, 10, "apples"],
    [1017, 10, "bananas"],
    [1018, 20, "cherries"],
    [1018, 20, "damson plums"],      // duplicate key
    [1019, 30, "elderberry"],
    [1020, 40, "fig"],
    [1021, 75, "golden kiwifruit"],  // parent does not exist
    [1022, 40, "honeydew melon"]
];

var options = {
    autoCommit: true,   // autocommit if there are no batch errors
    batchErrors: true,  // identify invalid records; start a transaction for valid ones
    bindDefs: [         // describes the data in 'binds'
        { type: oracledb.NUMBER },
        { type: oracledb.NUMBER },
        { type: oracledb.STRING, maxSize: 16 }  // size of the largest string, or as close as possible
    ]
};

connection.executeMany(sql, binds, options, function (err, result) {
    if (err)
        console.error(err);
    else
        console.log("Result is:", result);
});

Assuming appropriate data exists in the parent table, the output might be like:

Result is:
{ rowsAffected: 6,
  batchErrors:
   [ { Error: ORA-00001: unique constraint (CJ.CHILDTAB_PK) violated
       errorNum: 1, offset: 3 },
     { Error: ORA-02291: integrity constraint (CJ.CHILDTAB_FK) violated - parent key not found
       errorNum: 2291, offset: 6 } ] }

This shows that 6 records were inserted, but the records at offset 3 and 6 (using a 0-based index into the 'binds' array) were problematic. Because of these batch errors, the other records were not committed, despite autoCommit being true. However, they were inserted and the transaction could be committed or rolled back.

We know some users are inserting very large data sets, so executeMany() will be very welcome. At the very huge end of the data spectrum you may want to call executeMany() with batches of data to avoid size limitations in various layers of the Oracle and operating system stack. Your own testing will determine the best approach.

See Batch Execution in the manual for more information about the modes of executeMany() and how to use it in various cases. There are runnable examples in the GitHub examples directory. Look for the files prefixed 'em_'.
There are two variants of each sample: one uses callback style, and the other uses the async/await interface available with Node.js 8.

Resources

Node-oracledb installation instructions are here. Node-oracledb documentation is here. Node-oracledb change log is here. Issues and questions about node-oracledb can be posted on GitHub. Finally, contributions to node-oracledb are more than welcome, see CONTRIBUTING.



Python cx_Oracle 6.2 is out on PyPI

cx_Oracle 6.2, the extremely popular Oracle Database interface for Python, is now Production on PyPI. cx_Oracle is an open source package that covers the Python Database API specification with many additions to support Oracle advanced features.

This release:

- Adds support for creating temporary CLOBs, BLOBs or NCLOBs via a new method Connection.createlob().
- Adds support for binding a LOB value directly to a cursor.
- Adds support for closing the connection when reaching the end of a 'with' code block controlled by the connection as a context manager. See cx_Oracle.__future__ for more information.
- Was internally updated to the newest ODPI-C data access layer, which brings numerous stability fixes and code improvements, including:
  - Open statements and LOBs are tracked and automatically closed when the related connection is closed; this eliminates the need for users of cx_Oracle to track them, and removes the error "DPI-1054: connection cannot be closed when open statements or LOBs exist".
  - Errors during implicit rollback at connection close are ignored - but if an error does occur, ensure the connection is dropped from the connection pool. This reduces app errors in cases like where a DBA has killed a session.
  - An unnecessary round trip to the database is avoided when a connection is released back to the pool, by preventing a rollback from being called when no transaction is in progress.

There was also an internal code restructure to simplify maintenance and consolidate transformations to/from Python objects. See the Release Notes for all the fixes.

To upgrade to cx_Oracle 6.2, most users will be able to run:

python -m pip install cx_Oracle --upgrade

Spread the word!
cx_Oracle References

Home page: oracle.github.io/python-cx_Oracle/index.html
Installation instructions: cx-oracle.readthedocs.io/en/latest/installation.html
Documentation: cx-oracle.readthedocs.io/en/latest/index.html
Release Notes: cx-oracle.readthedocs.io/en/latest/releasenotes.html
Source Code Repository: github.com/oracle/python-cx_Oracle



ODPI-C 2.2 Release: Powering Oracle Database Access

ODPI-C 2.2.1 has been tagged for release. Oracle Database Programming Interface for C (ODPI-C) is an open source library of C code that simplifies the use of common Oracle Call Interface (OCI) features for Oracle Database drivers and user applications. The ODPI-C project is open source and maintained by Oracle Corp. ODPI-C is used as a data access layer in drivers for Node.js, Python, Ruby, Go, Rust, Haskell and more.

Changes in ODPI-C 2.2 from 2.1 include:

- Open statements and LOBs are tracked and automatically closed when the related connection is closed; this eliminates the need for users of the driver to do so and removes the error "DPI-1054: connection cannot be closed when open statements or LOBs exist".
- Errors during implicit rollback at connection close are ignored - but if an error does occur, ensure the connection is dropped from the connection pool. This reduces app errors in cases like where a DBA has killed a session.
- A round trip to the database is avoided when a connection is released back to the pool, by preventing a rollback from being called when there is no transaction in progress.
- A new, optional way of including the source code in your projects was added: embed/dpi.c. This simply includes all other source files. You can reliably link with just dpi.c and not have to update your projects if, and when, new ODPI-C versions have new source files.
- Many stability fixes, code improvements, new tests, and documentation updates.

See the release notes for all changes. In my opinion, the stability fixes justify upgrading immediately.

The eagle-eyed will note that today is a 2.2.1 release but we actually tagged 2.2.0 a few weeks ago. ODPI-C 2.2.0 was tagged solely to give an identifiable base for node-oracledb 2.2 to use. However Anthony had some ODPI-C fixes queued up in areas of code not used by node-oracledb, hence today's "official" ODPI-C 2.2.1 announcement.
ODPI-C References

Home page: oracle.github.io/odpi
Code: github.com/oracle/odpi
Documentation: oracle.github.io/odpi/doc/index.html
Release Notes: oracle.github.io/odpi/doc/releasenotes.html
Report issues and discuss: github.com/oracle/odpi/issues
Installation instructions: oracle.github.io/odpi/doc/installation.html



Installing the Oracle ODBC Driver on macOS

A bonus for today is a guest post by my colleague Senthil Dhamotharan. He shares the steps to install the Oracle Instant Client ODBC driver and the unixODBC Driver Manager on macOS.

ODBC is an open specification for accessing databases. The Oracle ODBC driver for Oracle Database enables ODBC applications to connect to Oracle Database. In addition to standard ODBC functions, users can leverage Oracle specific features for high performance data access.

Install the unixODBC Driver Manager

Download unixODBC from ftp.unixodbc.org/pub/unixODBC. I used unixODBC-2.3.1.tar.gz.

Extract the package:

tar -zxvf unixODBC-2.3.1.tar.gz

Configure unixODBC:

cd unixODBC-2.3.1
./configure

Note that if you use the configure option "--prefix" to install into a location other than the default directory (/usr/local), then macOS's SIP features may prevent the unixODBC libraries from being located correctly by the ODBC driver.

Build and install unixODBC:

make
sudo make install

Install the Oracle ODBC Driver

Download the Oracle 12.2 Instant Client Basic and ODBC packages from Instant Client Downloads for macOS (Intel x86). To reduce the installation size, the Basic Light package can be used instead of Basic, if its character sets and languages are sufficient.

Extract both ZIP files:

unzip instantclient-basic-macos.x64-12.2.0.1.0-2.zip
unzip instantclient-odbc-macos.x64-12.2.0.1.0-2.zip

This will create a subdirectory instantclient_12_2.

The Oracle Instant Client libraries need to be in the macOS library search path, generally either in /usr/local/lib or in your home directory under ~/lib. I did:

mkdir ~/lib
cd instantclient_12_2
ln -s $(pwd)/libclntsh.dylib.12.1 $(pwd)/libclntshcore.dylib.12.1 ~/lib

With version 12.2, a small patch to the driver name in instantclient_12_2/odbc_update_ini.sh is required on macOS.
I changed line 101 from:

SO_NAME=libsqora.so.12.1

to:

SO_NAME=libsqora.dylib.12.1

Run the configuration script:

cd instantclient_12_2
sudo odbc_update_ini.sh /usr/local
sudo chown $USER ~/.odbc.ini

This creates a default DSN of "OracleODBC-12c".

Edit the new ~/.odbc.ini configuration file and add the Oracle Database connection string. My database is running on the same machine as ODBC (inside a VirtualBox VM) and has a service name of 'orclpdb', so my connection string is 'localhost/orclpdb'. I changed:

ServerName =

to:

ServerName = localhost/orclpdb

Verify the installation

Run the isql utility to verify the installation. Pass in the DSN name, and an existing database username and password:

$ isql OracleODBC-12c scott tiger
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
SQL>

You can execute SQL statements and quit when you are done.

Test Program

To test a program that makes ODBC calls, download odbcdemo.c. Edit odbcdemo.c and set the USERNAME and PASSWORD constants to the database credentials. Build it:

gcc -o odbcdemo -g -lodbc odbcdemo.c

Run it:

./odbcdemo

The output will be like:

Connecting to the DB .. Done
Executing SQL ==> SELECT SYSDATE FROM DUAL
Result ==> 2018-02-21 02:53:47

Summary

ODBC is a popular API for accessing databases. The Oracle ODBC Driver is the best way to access Oracle Database.

Resources

Using the Oracle ODBC Driver
Oracle ODBC Drivers Discussion Forum
Oracle Instant Client ODBC Release Notes
Instant Client Downloads



Installing XAMPP on Windows for PHP and Oracle Database

Today's guest post comes from Tianfang Yang, who's been working with the Oracle Database extensions for PHP. This post shows how to install XAMPP on Windows to run PHP applications that connect to a remote Oracle Database.

XAMPP is an open source package that contains Apache, PHP and many PHP 'extensions'. One of these extensions is PHP OCI8, which connects to Oracle Database.

To install XAMPP:

Download "XAMPP for Windows" and follow the installer wizard. I installed into my D: drive.

Start the Apache server via the XAMPP control panel.

Visit http://localhost/dashboard/phpinfo.php via your browser to see the architecture and thread safety mode of the installed PHP. Please note this is the architecture of the installed PHP and not the architecture of your machine. It's possible to run an x86 PHP on an x64 machine.

[Optional] Oracle OCI8 is pre-installed in XAMPP, but if you need a newer version you can download an updated OCI8 PECL package from pecl.php.net. Pick an OCI8 release and select the DLL according to the architecture and thread safety mode. For example, if PHP is x86 and thread safety enabled, download "7.2 Thread Safe (TS) x86". Then replace "D:\xampp\php\ext\php_oci8_12c.dll" with the new "php_oci8_12c.dll" from the OCI8 PECL package.

Edit "D:\xampp\php\php.ini" and uncomment the line "extension=oci8_12c". Make sure "extension_dir" is set to the directory containing the PHP extension DLLs. For example:

extension=oci8_12c
extension_dir="D:\xampp\php\ext"

Download the Oracle Instant Client Basic package from OTN. Select the correct architecture to align with PHP's. For Windows x86 download "instantclient-basic-nt-12.2.0.1.0.zip" from the Windows 32-bit page. Extract the file in a directory such as "D:\Oracle". A subdirectory "D:\Oracle\instantclient_12_2" will be created. Add this subdirectory to the PATH environment variable.
You can update PATH in Control Panel -> System -> Advanced System Settings -> Advanced -> Environment Variables -> System Variables -> PATH. In my example I set it to "D:\Oracle\instantclient_12_2".

Restart the Apache server and check the phpinfo.php page again. It shows the OCI8 extension is loaded successfully. If you also run PHP from a terminal window, make sure to close and reopen the terminal to get the updated PATH value.

To run your first OCI8 application, create a new file in the XAMPP document root "D:\xampp\htdocs\test.php". It should contain:

<?php

error_reporting(E_ALL);
ini_set('display_errors', 'On');

$username = "hr";                 // Use your username
$password = "welcome";            // and your password
$database = "localhost/orclpdb";  // and the connect string to connect to your database

$query = "select * from dual";

$c = oci_connect($username, $password, $database);
if (!$c) {
    $m = oci_error();
    trigger_error('Could not connect to database: '. $m['message'], E_USER_ERROR);
}

$s = oci_parse($c, $query);
if (!$s) {
    $m = oci_error($c);
    trigger_error('Could not parse statement: '. $m['message'], E_USER_ERROR);
}
$r = oci_execute($s);
if (!$r) {
    $m = oci_error($s);
    trigger_error('Could not execute statement: '. $m['message'], E_USER_ERROR);
}

echo "<table border='1'>\n";
$ncols = oci_num_fields($s);
echo "<tr>\n";
for ($i = 1; $i <= $ncols; ++$i) {
    $colname = oci_field_name($s, $i);
    echo "  <th><b>".htmlspecialchars($colname, ENT_QUOTES|ENT_SUBSTITUTE)."</b></th>\n";
}
echo "</tr>\n";
while (($row = oci_fetch_array($s, OCI_ASSOC+OCI_RETURN_NULLS)) != false) {
    echo "<tr>\n";
    foreach ($row as $item) {
        echo "<td>";
        echo $item !== null ? htmlspecialchars($item, ENT_QUOTES|ENT_SUBSTITUTE) : "&nbsp;";
        echo "</td>\n";
    }
    echo "</tr>\n";
}
echo "</table>\n";

?>

You need to edit this file and set your database username, password and connect string. If you are using Oracle Database XE, then the connect string should be "localhost/XE". The SQL query can also be changed.
Currently it queries the special DUAL table, which every user has. Load the test program in a browser using http://localhost/test.php. The output will be the single value "X" in the column called "DUMMY". You can read more about PHP OCI8 in the PHP manual, and in the free Underground PHP and Oracle Manual from Oracle. Enjoy your coding with OCI8!


node.js

node-oracledb 2.1 is now available from npm

Release announcement: Node-oracledb 2.1.0, the Node.js module for accessing Oracle Database, is on npm.

Top features: privileged connections such as SYSDBA, and a new 'queryStream()' Stream 'destroy()' method.

After the big refactoring of node-oracledb 2.0, we're pleased to push node-oracledb forward with the release of 2.1.0. Changes in node-oracledb 2.1.0 include:

Support for SYSDBA, SYSOPER, SYSASM, SYSBACKUP, SYSDG, SYSKM, and SYSRAC privileges in standalone connections. You can now connect like:

    oracledb.getConnection(
      {
        user: 'sys',
        password: 'secret',
        connectString: 'localhost/orclpdb',
        privilege: oracledb.SYSDBA
      },
      function(err, connection) {
        if (err)
          console.error(err);
        else
          console.log('I have power');
      }
      // . . .
    );

The internal implementations of 'direct fetches' and 'queryStream()' have been simplified by reusing the ResultSet changes of 2.0. You might notice speedups in some cases.

The 'queryStream()' result Stream now supports Node 8's destroy() method, which obsoletes the previous experimental '_close()' method of node-oracledb. If you are using '_close()' in Node 8 or later, you should: change the method name from '_close()' to 'destroy()'; stop passing a callback; and, optionally, pass an error.

Improved the Error object with new 'errorNum' and 'offset' properties. The properties will be included when you get errors from Oracle Database, such as this:

    connection.execute(
      "select wrong_col from departments",
      function(err, result) {
        if (err)
          console.error(err);
        else
          console.log(result.rows);
      });

The output is:

    { Error: ORA-00904: "WRONG_COL": invalid identifier
      errorNum: 904,
      offset: 7 }

The offset is relative to the start of the SQL statement. For non-SQL errors, the offset will be 0.

New 'versionSuffix' and 'versionString' properties on the oracledb object to aid showing the release status and version. The 'versionSuffix' attribute will be an empty string for production releases, but may be something like '-dev' or '-beta.1' for other builds.
The existing 'version' attribute is great for runtime comparisons, but not as handy as the new 'versionString' attribute for displaying to users.

With a code contribution from Sagie Gur-Ari, an annoying little gotcha was fixed: 'connectionString' is now an alias for 'connectString'.

    oracledb.getConnection(
      {
        user: 'hr',
        password: 'welcome',
        connectionString: 'localhost/orclpdb',
      },
      function(err, connection) {
        if (err)
          console.error(err);
        else
          console.log('I am HR');
      }
    );

The underlying data access layer ODPI-C had some improvements which flow through to node-oracledb. Connections can now be closed when ResultSets and Lobs are open. This removes the DPI-1054 error you might have seen if you closed resources in the 'wrong' order. Also, at connection close there is no longer an unconditional rollback sent to the database; instead, ODPI-C makes use of some internal state to avoid the overhead of the 'round trip' to the database when it is known that there is no open transaction.

Node-oracledb 2.1 no longer compiles with the long-obsolete Node 0.10 or 0.12 versions. We'd stopped testing and claiming support for these versions a while back, but technically compilation had been possible until now. Updates to NAN API usage and other cleanups have led to this change.

Our testing and documentation improved, as always. See the CHANGELOG for other changes in node-oracledb 2.1.

Finally, you may be interested to know about these cool new resources: the yum.oracle.com Node.js RPM Packages for Oracle Linux, and the Oracle mirror of github.com/oracle.

Resources: Node-oracledb installation instructions are here. Node-oracledb documentation is here. Node-oracledb change log is here. For information on migrating see Migrating from node-oracledb 2.0 to node-oracledb 2.1. Issues and questions about node-oracledb can be posted on GitHub. Finally, contributions to node-oracledb are more than welcome, see CONTRIBUTING.
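Since the 'offset' property is relative to the start of the SQL statement, it can be used to point a user at the offending token. Here is a minimal standalone sketch; markErrorPosition is a hypothetical helper for illustration, not part of node-oracledb:

```javascript
// Hypothetical helper (not a node-oracledb API): insert a marker at the
// character position an Oracle error 'offset' refers to.  The offset is
// counted from the start of the SQL text, so offset 7 in the ORA-00904
// example above lands exactly on the invalid column name.
function markErrorPosition(sql, offset) {
  return sql.slice(0, offset) + '>>>' + sql.slice(offset);
}

const sql = 'select wrong_col from departments';
console.log(markErrorPosition(sql, 7));
// select >>>wrong_col from departments
```

An application could use this kind of marker when logging failed statements, making it obvious which identifier the database rejected.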


node.js

Node-oracledb v2 Query Methods and Fetch Tuning

For our Node.js node-oracledb v2 add-on for Oracle Database we revamped its lower data access layer and made it into the standalone project ODPI-C, which is being reused by several other language APIs. ODPI-C gave us a reason and opportunity to simplify some of the internal query handling code in node-oracledb.

To recap, node-oracledb has four ways to execute queries against an Oracle Database. These are the same in version 1 and version 2:

- Direct Fetches - non-ResultSet, non-queryStream() fetches. All rows are returned in one big array, limited to maxRows (v2 allows an unlimited array size).
- ResultSet getRow() - returns one row on each call until all rows are returned.
- ResultSet getRows(numRows) - returns batches of rows in each call until all rows are returned.
- queryStream() - streams rows until all rows are returned.

The changes in node-oracledb v2 are:

- Direct fetches now allow an unlimited number of rows to be fetched, and this is the default. It occurs when maxRows = 0.
- prefetchRows (previously used for internal fetch buffering and tuning) has been replaced by a new property, fetchArraySize; the default size is 100. fetchArraySize affects direct fetches, ResultSet getRow() and queryStream().
- getRows(numRows, ...) internal fetch buffering is now tuned only by the numRows value. Previously prefetchRows could also affect internal buffering.
- queryStream() now uses fetchArraySize for internal buffer sizing.
- getRow() is now implemented in JavaScript for better performance, and uses fetchArraySize to tune internal buffer sizes.

The change in v2 that I want to discuss is how a 'direct fetch' does its internal buffering. To make fetching an unlimited number of rows feasible for direct fetches, data is now internally fetched from Oracle Database in batches of size fetchArraySize and then concatenated to form one big array of results that is returned in the callback. This lets you use fetchArraySize to tune fetch performance.
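To make that buffering concrete, here is a small standalone model (plain JavaScript, no database required) of how a v2 direct fetch accumulates rows: each round-trip returns up to fetchArraySize rows, and the batches are concatenated into the final array. The function name is illustrative, not a node-oracledb API:

```javascript
// Illustrative only - not node-oracledb code.  Models how a v2 direct fetch
// gathers rows: each database round-trip returns up to fetchArraySize rows,
// and the batches are concatenated into one big result array.
function simulateDirectFetch(totalRows, fetchArraySize) {
  let rows = [];
  let roundTrips = 0;
  let fetched = 0;
  while (fetched < totalRows) {
    const batchSize = Math.min(fetchArraySize, totalRows - fetched);
    // one round-trip produces one batch of rows
    const batch = new Array(batchSize).fill(0).map((_, i) => fetched + i);
    rows = rows.concat(batch);   // this concatenation is the v2 memory cost
    roundTrips++;
    fetched += batchSize;
  }
  return { rowCount: rows.length, roundTrips: roundTrips };
}

console.log(simulateDirectFetch(50000, 100));   // { rowCount: 50000, roundTrips: 500 }
console.log(simulateDirectFetch(50000, 1000));  // { rowCount: 50000, roundTrips: 50 }
```

With fetchArraySize 1000 the same 50000 rows need a tenth of the round-trips, which is the main reason the larger setting is faster in the timings shown in this post.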
In node-oracledb v1, one big array of size maxRows was allocated before the data was fetched from the database. (With node-oracledb v2 you can get the same behavior, if you really want it, by setting fetchArraySize = maxRows, where maxRows > 0.)

Let's look at two different scenarios that fetch 50000 rows with a fetchArraySize of 100 (the default) and 1000. The code is at the end of the post.

    Direct fetch: rows: 50000, batch size: 100, seconds: 4.054
    Direct fetch: rows: 50000, batch size: 1000, seconds: 1.643

You can see in this case (with a local database) that increasing fetchArraySize improved performance. There may be various factors for this, but the common one is reduced network cost and reduced load on the database, because there were fewer 'round trips' between Node.js and the database to get the batches of records. Each query and environment is different and requires its own tuning.

The benefits of using fetchArraySize for direct fetches are:

- Performance of batching and network transfer of data can be tuned.
- Memory can grow incrementally when the number of query rows is unknown, or varies from execution to execution. A single large chunk of memory (based on maxRows in v1) doesn't need to be pre-allocated to handle the 'worst case' of a large number of rows.

There are two drawbacks with direct fetches:

- One big array of results is needed. This is the same in v1 and v2.
- The concatenation of batches of records can use more memory than the final array requires, and can cause fragmentation.

Let's look at timings for all query methods. This is one run; there was expected variability each time I ran the scripts. The 'batch size' number is numRows for getRows(numRows) calls, or fetchArraySize for the other fetch methods.
    Direct fetch:        rows: 50000, batch size: 100, seconds: 4.054
    ResultSet getRow():  rows: 50000, batch size: 100, seconds: 1.625
    ResultSet getRows(): rows: 50000, batch size: 100, seconds: 1.586
    queryStream():       rows: 50000, batch size: 100, seconds: 1.691

    Direct fetch:        rows: 50000, batch size: 1000, seconds: 1.643
    ResultSet getRow():  rows: 50000, batch size: 1000, seconds: 1.471
    ResultSet getRows(): rows: 50000, batch size: 1000, seconds: 1.327
    queryStream():       rows: 50000, batch size: 1000, seconds: 1.415

The ResultSet and queryStream() methods don't have to store all rows in memory at once, so there is less memory management involved. The obvious outlier is the first result: the cost of concatenating many small chunks of memory is significant. We have some ideas on what we can do inside node-oracledb to improve this a bit, but that is a future project to be investigated; it can't eliminate all costs of concatenation, and can't prevent all rows having to be held in memory together. [Update: improvements were made to direct fetches in node-oracledb 2.1, and this particular case shows significantly improved performance.]

The conclusion is to use ResultSets or streaming for large numbers of rows. This is the same recommendation we gave for v1. For small numbers of rows, the various query methods perform pretty much the same. The timings are so short that you can treat the differences in the one run shown below as noise.
Here each query only returned 1 row:

    Direct fetch:        rows: 1, batch size: 100, seconds: 0.011
    ResultSet getRow():  rows: 1, batch size: 100, seconds: 0.012
    ResultSet getRows(): rows: 1, batch size: 100, seconds: 0.013
    queryStream():       rows: 1, batch size: 100, seconds: 0.013

    Direct fetch:        rows: 1, batch size: 1, seconds: 0.012
    ResultSet getRow():  rows: 1, batch size: 1, seconds: 0.012
    ResultSet getRows(): rows: 1, batch size: 1, seconds: 0.013
    queryStream():       rows: 1, batch size: 1, seconds: 0.013

Although the timings are small, I suggest using a small fetchArraySize or numRows if you are querying a small handful of rows, particularly if the number of rows is known (such as 1). This reduces the amount of memory that needs to be allocated throughout node-oracledb, the Oracle client libraries, and the database.

References

Node-oracledb documentation is here. If you are currently using node-oracledb v1, you may be interested in the documentation on Migrating from node-oracledb 1.13 to node-oracledb 2.0.

Code

Here are the rudimentary test scripts I used. The ResultSet code originated in the v1 examples from https://jsao.io/2015/07/an-overview-of-result-sets-in-the-nodejs-driver/. The config.js file is at the end; the dbconfig.js file is the same as in the examples. The timings include statement execution in the database, though this is not controlled by node-oracledb. With direct fetches there isn't a way in JavaScript to distinguish the query execute cost from the data fetch costs that fetchArraySize controls.
Direct Fetch

    // direct fetch

    var oracledb = require('oracledb');
    var dbConfig = require('./dbconfig.js');
    var config = require('./config.js');

    oracledb.getConnection(
      dbConfig,
      function(err, connection) {
        if (err)
          throw err;

        var rowsProcessed = 0;
        var startTime = Date.now();

        connection.execute(
          'select * from all_objects where rownum <= :mr',
          [ config.maxRows ],
          { fetchArraySize: config.batchSize },
          function(err, results) {
            if (err)
              throw err;

            rowsProcessed = results.rows.length;
            // do work on the rows here

            var t = ((Date.now() - startTime)/1000);
            console.log('Direct fetch: rows: ' + rowsProcessed +
                        ', batch size: ' + config.batchSize + ', seconds: ' + t);

            connection.release(function(err) {
              if (err)
                console.error(err.message);
            });
          });
      });

ResultSet getRow()

    // ResultSet getRow()

    var oracledb = require('oracledb');
    var dbConfig = require('./dbconfig.js');
    var config = require('./config.js');

    oracledb.getConnection(
      dbConfig,
      function(err, connection) {
        if (err)
          throw err;

        var rowsProcessed = 0;
        var startTime = Date.now();

        connection.execute(
          'select * from all_objects where rownum <= :mr',
          [ config.maxRows ],
          { resultSet: true, fetchArraySize: config.batchSize },
          function(err, results) {
            if (err)
              throw err;

            function processResultSet() {
              results.resultSet.getRow(function(err, row) {
                if (err)
                  throw err;
                if (row) {
                  rowsProcessed++;
                  // do work on the row here
                  processResultSet();  // try to get another row from the result set
                  return;              // exit recursive function prior to closing result set
                }

                var t = ((Date.now() - startTime)/1000);
                console.log('ResultSet getRow(): rows: ' + rowsProcessed +
                            ', batch size: ' + config.batchSize + ', seconds: ' + t);

                results.resultSet.close(function(err) {
                  if (err)
                    console.error(err.message);
                  connection.release(function(err) {
                    if (err)
                      console.error(err.message);
                  });
                });
              });
            }

            processResultSet();
          }
        );
      }
    );

ResultSet getRows()

    // ResultSet getRows()

    var oracledb = require('oracledb');
    var dbConfig = require('./dbconfig.js');
    var config = require('./config.js');

    oracledb.getConnection(
      dbConfig,
      function(err, connection) {
        if (err)
          throw err;

        var rowsProcessed = 0;
        var startTime = Date.now();

        oracledb.fetchArraySize = 1;

        connection.execute(
          'select * from all_objects where rownum <= :mr',
          [ config.maxRows ],
          { resultSet: true },
          function(err, results) {
            if (err)
              throw err;

            function processResultSet() {
              results.resultSet.getRows(config.batchSize, function(err, rows) {
                if (err)
                  throw err;
                if (rows.length) {
                  rows.forEach(function(row) {
                    rowsProcessed++;
                    // do work on the row here
                  });
                  processResultSet();  // try to get more rows from the result set
                  return;              // exit recursive function prior to closing result set
                }

                var t = ((Date.now() - startTime)/1000);
                console.log('ResultSet getRows(): rows: ' + rowsProcessed +
                            ', batch size: ' + config.batchSize + ', seconds: ' + t);

                results.resultSet.close(function(err) {
                  if (err)
                    console.error(err.message);
                  connection.release(function(err) {
                    if (err)
                      console.error(err.message);
                  });
                });
              });
            }

            processResultSet();
          });
      });

queryStream()

    // queryStream()

    var oracledb = require('oracledb');
    var dbConfig = require('./dbconfig.js');
    var config = require('./config.js');

    oracledb.getConnection(
      dbConfig,
      function(err, connection) {
        if (err)
          throw err;

        var rowsProcessed = 0;
        var startTime = Date.now();

        var stream = connection.queryStream(
          'select * from all_objects where rownum <= :mr',
          [ config.maxRows ],
          { fetchArraySize: config.batchSize }
        );

        stream.on('data', function (data) {
          // do work on the row here
          rowsProcessed++;
        });

        stream.on('end', function () {
          var t = ((Date.now() - startTime)/1000);
          console.log('queryStream(): rows: ' + rowsProcessed +
                      ', batch size: ' + config.batchSize + ', seconds: ' + t);

          connection.close(function(err) {
            if (err)
              console.error(err.message);
          });
        });
      });

The Configuration File

    // config.js

    var maxRows;    // number of rows to query
    var batchSize;  // batch size for fetching rows

    maxRows = 50000;
    batchSize = 1000;

    exports.maxRows = maxRows;
    exports.batchSize = batchSize;


node.js

node-oracledb 2.0 with pre-built binaries is on npm

Release announcement: Node-oracledb 2.0.15, the Node.js add-on for Oracle Database, is now on npm for general use.

Top features: pre-built binaries, and query fetch improvements.

It's time to shake things up. Node-oracledb version 1 has been stable for a while. Our tight, hard-working team is now proud to deliver node-oracledb 2 to you. With improvements throughout the code and documentation, this release is looking great. There are now over 3000 functional tests, as well as solid stress tests we run in various environments under Oracle's internal testing infrastructure. Review the CHANGELOG for all changes. For information on migrating see Migrating from node-oracledb 1.13 to node-oracledb 2.0.

node-oracledb v2 highlights

node-oracledb 2.0 is the first release to have pre-built binaries. These are provided for convenience and will make life a lot easier, particularly for Windows users. Binaries for Node 4, 6, 8 and 9 are available for Windows 64-bit, macOS 64-bit, and Linux 64-bit (built on Oracle Linux 6). Simply add oracledb to your package.json dependencies or manually install with:

    npm install oracledb

(Yes, Oracle Client libraries are still required separately - these do all the heavy lifting.)

We've got great error messages for when a binary isn't available, and improved the messages when require('oracledb') fails. However, Linux users with older glibc libraries may get Linux runtime errors - in this case you should continue building node-oracledb from source, see below. There is support for hosting binary packages on your own internal web server, which will be great for users with a large number of deployments. See package/README.

This is the first release to use the ODPI-C abstraction layer, which is also used by Python's cx_Oracle 6.x API, as well as 3rd party APIs for other languages. Using ODPI-C is the main change that allowed node-oracledb 2.0 binaries to be distributed.
As another consequence of ODPI-C, any node-oracledb 2 binary will run with any of the Oracle client 11.2, 12.1 or 12.2 libraries without needing recompilation. This improves portability when node-oracledb builds are copied between machines. Since the available Oracle functionality will vary with different Oracle Client (and Oracle Database!) versions, it's important to test your applications using your intended deployment environment.

The driver can still be built from source code if you need, or want, to do so. Compiling is now simpler than in version 1, since you no longer need Oracle header files, and no longer need the OCI_INC_DIR or OCI_LIB_DIR environment variables. To build from source you need to pull the code from a GitHub branch or tag - generally the most recent release tag is what you want. Make sure you have Python 2.7, the 'git' utility, and a compiler, and add oracle/node-oracledb.git#v2.0.15 to your package.json dependencies. Or manually run the install:

    npm install oracle/node-oracledb.git#v2.0.15

Users without 'git', or with older versions of Node that won't pull the ODPI submodule code, can use the source package:

    npm install https://github.com/oracle/node-oracledb/releases/download/v2.0.15/oracledb-src-2.0.15.tgz

I've noticed GitHub can be a little slow to download the source before compilation can begin, so bear with it.

Improved query handling: direct fetches have been enhanced to allow an unlimited number of rows to be fetched, and unlimited is now the default for this method. The existing ResultSet and streaming methods are still recommended for large numbers of rows. Since ODPI-C internally uses 'array fetches' instead of 'prefetching' (both are underlying methods for buffering/tuning, differing only in where the buffering takes place - both are transparent to applications), we've replaced prefetchRows with a new, almost equivalent property, fetchArraySize.
We've moved the buffering of rows for getRow() into JavaScript for better performance, so it no longer needs to call down into lower layers as often. Check my blog post Node-oracledb v2 Query Methods and Fetch Tuning for some tips.

We tightened up some resource handling to make sure applications don't leak resources. If you inadvertently try to close a connection when a LOB or ResultSet is open, you will see a new error, DPI-1054.

The node-oracledb size restrictions on LOB fetchAsString and fetchAsBuffer queries, and also on LOB binds, have been removed. In node-oracledb 1 these were particularly low when Oracle 11gR2 client libraries were used, so this should be welcome for people who have not updated the Oracle client. Node.js and V8 will still limit sizes, so continue to use the Stream interface for large LOBs.

Added support for ROWID and UROWID. Data is fetched as a String.

Added support for fetching column types LONG (as String) and LONG RAW (as Buffer).

Added support for the TIMESTAMP WITH TIME ZONE date type. These are mapped to a Date object in node-oracledb using LOCAL TIME ZONE. The TIME ZONE component is not available in the Date object.

Added query support for NCHAR, NVARCHAR2 and NCLOB columns. Binding for DML may not insert data correctly, depending on the database character set and the database national character set.

Plans for Version 1

Our stated plan was to cease formal support for version 1 when Node 4 LTS maintenance ended in April 2018. We're pleased 1.13.1 has been stable for some time, and we are not anticipating any further node-oracledb 1 releases unless exceptional circumstances occur.

Plans for Version 2

We are now testing with Node 4, 6, 8 and 9. This list will, of course, change as new Node versions are released. The pre-built binaries will also change and availability is not guaranteed. ODPI-C forms a solid base to extend node-oracledb.
Users of Python cx_Oracle 6, which is also based on ODPI-C, are appreciating all the advanced Oracle features that are available. Many of these features have also been requested by node-oracledb users. As with any open source project, there are no hard and fast guarantees for node-oracledb, but you can see the general direction we'd like to head in. Pull Requests are welcome.

One little thing you might be unaware of is that along the way we have been testing (and helping create) the next major release of Oracle Database, so sometimes we've had to take detours from direct node-oracledb work in order to move the whole of Oracle along. Whether we work on the "client" or the "server", we look forward to bringing you great things in future releases.

Resources

Node-oracledb installation instructions are here. Node-oracledb documentation is here. Node-oracledb change log is here. For information on migrating see Migrating from node-oracledb 1.13 to node-oracledb 2.0. Issues and questions about node-oracledb can be posted on GitHub. Finally, contributions to node-oracledb are more than welcome, see CONTRIBUTING.
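One practical consequence of the TIMESTAMP WITH TIME ZONE mapping mentioned above is worth spelling out: a JavaScript Date stores only an instant in time, so the original time zone offset is not recoverable from it. A quick standalone illustration (plain JavaScript, no database needed):

```javascript
// A JavaScript Date holds a single instant (milliseconds since the epoch);
// it has no time zone component of its own.  So when a TIMESTAMP WITH TIME
// ZONE value is mapped to a Date, the instant survives but the original
// offset is lost.
const d = new Date('2018-01-15T10:30:00-05:00');

// The instant is preserved: 10:30 at UTC-5 is 15:30 UTC.
console.log(d.toISOString());  // 2018-01-15T15:30:00.000Z

// Nothing on 'd' records that the source offset was -05:00;
// getTimezoneOffset() reports the local environment's offset instead.
console.log(typeof d.getTimezoneOffset());  // number
```

If your application needs the source time zone, store it separately (for example, in another column) rather than relying on the fetched Date.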


python

cx_Oracle 6.1 with Oracle Database Sharding Support is now Production on PyPI

cx_Oracle 6.1, the extremely popular Oracle Database interface for Python, is now Production on PyPI. cx_Oracle is an open source package that covers the Python Database API specification with many additions to support Oracle advanced features.

In the words of the creator and maintainer, Anthony Tuininga: The cx_Oracle 6.1 release has a number of enhancements building upon the release of 6.0 made a few months ago. Topping the list is support for accessing sharded databases via new shardingkey and supershardingkey parameters for connections and session pools. Support for creating connections using the SYSBACKUP, SYSDG, SYSKM and SYSRAC roles was also added, as was support for identifying the id of the transaction which spawned a subscription message. For those on Windows, improved error messages were created for when the wrong architecture Oracle Client is in the PATH environment variable. Improvements were also made to the debugging infrastructure and a number of bugs were squashed. The test suite has also been expanded. See the full release notes for more information.

cx_Oracle References

Home page: https://oracle.github.io/python-cx_Oracle/index.html
Installation instructions: http://cx-oracle.readthedocs.io/en/latest/installation.html
Documentation: http://cx-oracle.readthedocs.io/en/latest/index.html
Release Notes: http://cx-oracle.readthedocs.io/en/latest/releasenotes.html
Source Code Repository: https://github.com/oracle/python-cx_Oracle


General

ODPI-C 2.1 is now available for all your Oracle Database Access Needs

Anthony Tuininga has just released version 2.1 of the Oracle Database Programming Interface for C (ODPI-C) on GitHub. ODPI-C is an open source library of C code that simplifies access to Oracle Database for applications written in C or C++. Key new feature: support for accessing sharded databases.

In the words of Anthony: This release has a number of small enhancements intended to build upon the release of 2.0 made a few months ago. Topping the list is support for accessing sharded databases, a new feature in Oracle Database 12.2. Support for creating connections using the SYSBACKUP, SYSDG, SYSKM and SYSRAC roles was also added, as was support for identifying the id of the transaction which spawned a subscription message. For those on Windows, improved error messages were created for when the wrong architecture Oracle Client is in the PATH environment variable. Improvements were also made to the debugging infrastructure. The test suite has also been expanded considerably. A number of bugs were squashed to improve ODPI-C and in preparation for the upcoming releases of cx_Oracle 6.1 and node-oracledb 2.0, both of which use ODPI-C. See the full release notes for more information.

ODPI-C References

Home page: https://oracle.github.io/odpi/
Code: https://github.com/oracle/odpi
Documentation: https://oracle.github.io/odpi/doc/index.html
Release Notes: https://oracle.github.io/odpi/doc/releasenotes.html
Report issues and discuss: https://github.com/oracle/odpi/issues


General

What's on at Oracle OpenWorld for Developers using Python, Node.js, PHP, R, C and C++

Oracle OpenWorld in San Francisco is approaching and it's time to plan your schedule. The sessions that I'm tracking are all listed on the official schedule page Application Development with Node.js, Python, PHP, R, C and C++. Bookmark that link! And make sure to visit our demo booth ("SOA-042") in Moscone West.

Conference Sessions

Oracle Net Services 12c: Best Practices for Database Performance and Scalability [CON6718]
Monday, Oct 02, 4:45 p.m. - 5:30 p.m. | Moscone West - Room 3011
This session is always a gold mine of useful information about network configuration and tuning. The tips can help improve performance and scalability of applications.

Best Practices for Application Development with Oracle Database Cloud Services [CON6717]
Monday, Oct 02, 5:45 p.m. - 6:30 p.m. | Moscone West - Room 3008
Configuration, deployment, and optimization of database applications for Oracle Cloud are covered by the VP most intimately involved with making it possible for programs and applications to connect to (and from) Oracle Database cloud services.

Building Python Messaging Apps with Oracle Database [CON7344]
Tuesday, Oct 03, 12:15 p.m. - 1:00 p.m. | Moscone West - Room 2002
This is a practical, code-heavy session at the Oracle Code part of JavaOne. Oracle Database Advanced Queuing features usable directly in Python are explained via an application.

Node.js: JavaScript Application Development for Oracle Database [CON6712]
Tuesday, Oct 03, 5:45 p.m. - 6:30 p.m. | Moscone West - Room 3008
See what you can do with Node.js and Oracle Database, and find out how the node-oracledb driver is evolving with Node.js.

Building Real-Time Data in Web Applications with Node.js [CON6738]
Wednesday, Oct 04, 2:00 p.m. - 2:45 p.m. | Moscone West - Room 3014
(Repeated as [CON1240] 2:45 p.m. - 3:30 p.m. | Moscone West - Room 2003)
Dan McGhan is giving his popular talk on Node.js and the node-oracledb driver. It's so popular that he's giving it twice - back to back!
Best Practices for Application High Availability [CON6711]
Wednesday, Oct 04, 3:30 p.m. - 4:15 p.m. | Moscone West - Room 3012
High availability is a topic everyone is interested in. Learn when and where to use Oracle Database HA features to make applications resilient during planned and unplanned database outages.

Python and Oracle Database: Tips, Tricks, and the Best New Features [CON6714]
Wednesday, Oct 04, 4:30 p.m. - 5:15 p.m. | Moscone West - Room 3012
This session is packed full of tips on how to connect Python applications to Oracle Database.

Performance and Scalability Techniques for Oracle Database Applications [CON6710]
Wednesday, Oct 04, 5:30 p.m. - 6:15 p.m. | Moscone West - Room 3012
This session from the development group responsible for Oracle Database client APIs is great for anyone trying to write code that connects to Oracle Database. There are some great tips here.

Hands-on Lab Session

Python and Oracle Database 12c: Scripting for the Future [HOL7605]
Wednesday, Oct 04, 9:45 a.m. - 10:45 a.m. | Hilton San Francisco Union Square (Ballroom Level) - Continental Ballroom 6
Sit down and start typing: we've got machines set up, ready for you to learn how to use Python to connect to Oracle Database.

DevLive Interviews

Come and hear these informal chats at the DevLive Stage, Moscone West Level 1. The discussion is bound to be broad-reaching and lively.

Node.js: JavaScript Application Development for Oracle Database
Monday, Oct 02, 3:00 p.m. - 3:30 p.m. | Developer Lounge, Moscone West

Building Python Messaging Apps with Oracle DB12c
Tuesday, Oct 03, 3:00 p.m. - 3:30 p.m. | Developer Lounge, Moscone West

What Else?

Also check out the best Java sessions and .NET sessions!


General

Ruby and Ruby on Rails Recognized by Oracle Technology Network

I'm really pleased that two key contributors to the Ruby and Ruby on Rails communities for Oracle Database have been honored. Takehiro Kubo and Yasuo Honda have both become Oracle ACEs. The Oracle Technology Network's Oracle ACE program describes itself as recognizing Oracle evangelists and enthusiasts. Both Takehiro and Yasuo fit this description exactly. To me, Takehiro Kubo is most visible in his work on the ruby-oci8 gem. Ruby-oci8 is a Ruby interface for Oracle Database. Takehiro created ruby-oci8 and is actively maintaining it and helping users. He also willingly contributes his knowledge to Oracle Database interfaces for other programming languages, helping both maintainers and users. An eager adopter of new technology, he is currently developing ruby-odpi, a rewrite of the interface that is based on the strengths of Oracle's new ODPI-C wrapper. Most Oracle Ruby and JRuby developers use the Ruby on Rails web application framework. Here Yasuo Honda is the key person. He has been the lead maintainer of the Ruby on Rails Oracle ActiveRecord Adapter for some years now. He has nurtured an active community of users and contributors, keeping up with both framework and library improvements. He has contributed enormously to its status as a very popular development environment. He freely contributes his knowledge and expertise. Both Takehiro and Yasuo personify the ideal open source maintainers. They are able to create useful, beautiful software components that other people want to use. They take their roles seriously, and have shown long term commitment to their projects' successes. Congratulations!


General

Oracle Instant Client 12.2 now has SQL*Loader and Data Pump

This is a guest post by Jim Stenoish, Senior Director of Software Development, whose products include all tools used for the high speed movement, transformation, and loading of data and metadata for Oracle Database.

Oracle Instant Client 12.2 now includes SQL*Loader as well as the Data Pump command line utilities expdp and impdp, and the traditional Export/Import utilities exp and imp. Previously, these tools were only available with a 'full' client installation, or in a database installation. Being in Instant Client allows users to access these tools with a smaller footprint on disk and a much quicker install than a full client requires. The OTN license is also more convenient for many users. Now that these tools are part of Instant Client, it's easy to load data from a file on a system that doesn't already have Oracle software installed.

Installing the Tools

You can install the tools by unzipping the Oracle Instant Client 'Basic' (or 'Basic Light') package and the 'Tools' package to the same location. See the Instant Client home page for links to downloads for each platform. Follow the platform-specific instructions given on the download page. For example, on Linux set LD_LIBRARY_PATH and PATH to the Instant Client directory. On Windows set PATH to the Instant Client directory and also install the Microsoft Visual Studio 2013 Redistributable.

Using the Tools

SQL*Loader allows you to load data from files, such as comma-separated value (CSV) files, into database tables. The expdp and impdp utilities allow you to move metadata and data between Oracle databases. The expdp utility unloads metadata and data from the database into Data Pump files on the database server. The impdp utility recreates metadata defined in a Data Pump file and loads table data stored in the files on the database server. It also provides a way to extract metadata and data over a DB link (no files involved) from one database to another.
Documentation and examples for the tools are in the Database Utilities manual. If you have questions about SQL*Loader or Data Pump you can ask them on OTN.
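As a sketch of how these tools can be driven from an Instant Client installation, the following hypothetical example loads a CSV file with SQL*Loader and then exports a schema with expdp. All of the names here (the emp table, the file names, and the scott/tiger@localhost/orclpdb credentials) are made up for illustration:

```shell
# Hypothetical SQL*Loader control file describing the CSV layout
cat > emp.ctl <<'EOF'
LOAD DATA
INFILE 'emp.csv'
INTO TABLE emp
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(empno, ename, job, sal)
EOF

# Load the data; sqlldr comes from the Instant Client 'Tools' package
sqlldr userid=scott/tiger@localhost/orclpdb control=emp.ctl log=emp.log

# Export the schema's metadata and data into a Data Pump file
expdp scott/tiger@localhost/orclpdb schemas=scott dumpfile=scott.dmp
```

Remember that, as described above, expdp writes its dump files on the database server side, not on the client machine running the command.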


python

Python cx_Oracle 6 Production Release: Hitting A Six

cx_Oracle 6.0, the extremely popular Oracle Database interface for Python, is now Production on PyPI. cx_Oracle is an open source package that covers the Python Database API specification with many additions to support Oracle advanced features. It was begun by Anthony Tuininga in 1998 in the era of Oracle Database 8i and Python 1.5. Since then there have been over 30 releases covering up to Oracle Database 12c and Python 3.

Anthony joined Oracle relatively recently. In his time with us, he has been hard at work in the language API space, leading cx_Oracle changes as well as creating the ODPI-C interface, and also making contributions to node-oracledb.

cx_Oracle 6.0 is the latest and greatest release from our group. It can connect to both on-premises databases and to Oracle Database Cloud Services. It can also be run in the cloud itself. The use cases for cx_Oracle are endless. Both small and big projects make use of cx_Oracle for all kinds of data manipulation and application development projects. cx_Oracle has extensive API documentation, solid samples, and a clean test suite. (We also run stress tests for cx_Oracle in the Oracle Database development environment.) To complete the link dump, the home page is here and the source code is hosted on GitHub.

cx_Oracle 6 Features

cx_Oracle 6's support for Oracle Database is excellent:
- Easily installed from PyPI with Wheels.
- Support for Python 2 and 3.
- Support for Oracle Client 11.2, 12.1 and 12.2. Oracle's standard cross-version interoperability allows easy upgrades and connectivity to different Oracle Database versions.
- Connect to Oracle Database 9.2, 10, 11 or 12 (depending on the Oracle Client version used).
- SQL and PL/SQL execution. The underlying Oracle Client libraries have significant optimizations including compressed fetch, pre-fetching, client and server result set caching, and statement caching with auto-tuning.
- Full use of Oracle Network Service infrastructure, including encrypted network traffic, authentication, and security features.
- Extensive Oracle data type support, including large object support (CLOB and BLOB).
- Direct binding to SQL objects. One great use case is binding Python objects to Oracle Spatial SDO objects.
- Array operations for efficient INSERTs and UPDATEs.
- Array row counts and batch error handling for array operations.
- Fetching of large result sets.
- REF CURSOR support.
- Support for scrollable cursors. Go back and forth through your query results.
- Fetch PL/SQL Implicit Results. Easily return query results from PL/SQL.
- Row prefetching. Efficient use of the network.
- Client Result Caching. Improve performance of frequently executed look-up statements.
- Support for Advanced Queuing. Use database notifications to build micro-service applications.
- Continuous Query Notification. Get notified when data changes.
- Support for Edition Based Redefinition. Easily switch applications to use updated PL/SQL logic.
- Support for setting application context during the creation of a connection, making application metadata more accessible to the database, including in LOGON triggers.
- End-to-end monitoring and tracing.
- Transaction Management.
- Session Pooling.
- Database Resident Connection Pooling (DRCP).
- Privileged Connections.
- External Authentication.
- Database startup and shutdown.
- Oracle Database High Availability Features, such as FAN notifications and Transaction Guard support.

Changes since cx_Oracle 5.3

The main changes in cx_Oracle 6 are:
- Re-implemented to use our new ODPI-C abstraction layer for Oracle Database. The cx_Oracle API is unchanged.
- The cx_Oracle design, build and linking process has improved because of ODPI-C.
- Python Wheels are now available for install. This is made possible by the ODPI-C architecture.
- Less code executes under Python's Global Interpreter Lock, giving better scalability.
- Support for DML RETURNING of multiple rows.
- Support for Universal ROWIDs.
LOB locators are now associated with LOB objects so they are not overwritten on database round trips.

As you can see, the use of ODPI-C was a significant change, leading to code refactoring and simplification. It gives us a springboard to make future improvements to cx_Oracle using a code base that has multiple consumers (and hence testers), including node-oracledb 2. See What's New in the cx_Oracle documentation for the full change list. With the release of cx_Oracle 6, no new features are planned for cx_Oracle 5.

Installing cx_Oracle

With the new cx_Oracle 6 Wheels, installing is as straightforward as:

python -m pip install cx_Oracle --upgrade

If a binary is not available, the cx_Oracle source will be compiled. Oracle Client 11.2, 12.1 or 12.2 libraries are needed in your system library path (such as PATH on Windows, or LD_LIBRARY_PATH on Linux). These libraries provide many features such as connection management, query and statement cache management, as well as high availability features. The libraries can be easily installed from the free Oracle Instant Client, an Oracle Database installation, or a full Oracle Client installation. Oracle's standard client-server interoperability applies, e.g. Oracle Client 12.2 can connect to Oracle Database 11.2 onward.

Thanks to ODPI-C, you do not need to update cx_Oracle if you decide to upgrade your Oracle Client from 11.2 or 12.1 to a newer version. You can simply change your library loading path, e.g. PATH or LD_LIBRARY_PATH, to point to the new version of the Oracle Client libraries. Once you have cx_Oracle installed, you can connect to your database.
For example:

from __future__ import print_function

import cx_Oracle

connection = cx_Oracle.connect("hr", "welcome", "localhost/orclpdb")
cursor = connection.cursor()
cursor.execute("""
        SELECT first_name, last_name
        FROM employees
        WHERE department_id = :did
        AND employee_id > :eid""",
        did=50, eid=190)
for fname, lname in cursor:
    print("Values:", fname, lname)

cx_Oracle References

Home page: https://oracle.github.io/python-cx_Oracle/index.html
Installation instructions: http://cx-oracle.readthedocs.io/en/latest/installation.html
Documentation: http://cx-oracle.readthedocs.io/en/latest/index.html
Source Code Repository: https://github.com/oracle/python-cx_Oracle

Last Words

cx_Oracle is the most popular scripting-language interface for Oracle Database. It has a long history, is widely used, and exposes features for building all kinds of database applications. The release of cx_Oracle 6 brings a refactored implementation that is more scalable and makes it easier to bring you great, future Oracle functionality. Thank you to all our users and testers who are part of a very big & happy cx_Oracle community. Keep in touch!


General

Open Source Library Drives Powerful Oracle Database Applications

ODPI-C

The production release of Oracle Database Programming Interface for C (ODPI-C) is now available on GitHub. ODPI-C is an open source library of C code that simplifies access to Oracle Database for applications written in C or C++.

ODPI-C:
- exposes a C API in a readily consumable way to C and C++ developers writing applications for Oracle Database.
- is aimed at language interfaces and applications where speed of development is important, and where special-case Oracle features are not needed.
- is already being used for Python, Node.js, Go and Rust interfaces, as well as for custom applications.
- allows faster application implementation with less code.
- makes memory and resource management simpler, particularly for 'binding' and 'defining' data. A reference counting mechanism adds resiliency by preventing applications from destroying in-use resources.
- is available as source code on GitHub under the Apache 2.0 and/or the Oracle UPL licenses, for direct inclusion into your own code bases.

ODPI-C is a wrapper over Oracle Call Interface. OCI is Oracle Database's main C API and is widely used by our products and user applications, so it is stable, fast, scalable and supported. OCI's API is extremely flexible and gives fine-grained control to the developer for a very wide range of use cases. ODPI-C is also flexible but is aimed primarily at language interface creators. These creators are programming within the confines of a scripting language's type system and semantics. The languages often expose simplified data access to users through cross-platform, common-denominator APIs. Therefore it makes sense for ODPI-C to provide easy-to-use functionality for common data access, while still allowing the power of Oracle Database to be used.

Of course, ODPI-C isn't restricted to language interface usage. Since it provides a simple programming experience, if ODPI-C has the functionality you need for accessing Oracle Database, then you can add it to your own projects.
ODPI-C is developed by the Oracle Database Data Access team, who also look after OCI and other language APIs for Oracle Database. Anthony Tuininga has been leading the ODPI-C effort. He has made full use of his extensive knowledge as creator and maintainer of the extremely popular, and full featured, Python cx_Oracle interface. A big thank you to all the users who have given feedback on ODPI-C pre-releases.

ODPI-C Features

The ODPI-C feature list currently includes all the normal calls you'd expect to manage connections and to execute SQL and PL/SQL efficiently. It also has such gems as SQL and PL/SQL object support, scrollable cursors, Advanced Queuing, and Continuous Query Notification. ODPI-C does extra validation and has a reference counting mechanism for resiliency, to help stop applications destroying in-use OCI resources. Writing threaded-mode applications is easier.

The Oracle feature list in this initial release, in no particular order, is:
- 11.2, 12.1 and 12.2 Oracle Client support
- 9.2 and higher Oracle Database support (depending on Oracle Client version)
- SQL and PL/SQL execution
- Character data types (CHAR, VARCHAR2, NCHAR, NVARCHAR2, CLOB, NCLOB, LONG)
- Numeric data types (NUMBER, BINARY_FLOAT, BINARY_DOUBLE)
- Dates, Timestamps, Intervals
- Binary types (BLOB, BFILE, RAW, LONG RAW)
- PL/SQL data types (PLS_INTEGER, BOOLEAN, Collections, Records)
- JSON
- User Defined Types
- REF CURSOR, Nested cursors, Implicit Result Sets
- Array fetch
- Array bind/execute
- Session pools, with connection validation
- Standalone connections
- Database Resident Connection Pooling (DRCP)
- External authentication
- Statement caching
- End-to-end tracing, mid-tier authentication and auditing
- Edition Based Redefinition
- Scrollable cursors
- DML RETURNING
- Privileged connection support (SYSDBA, SYSOPER, SYSASM, PRELIM_AUTH)
- Database Startup/Shutdown
- Session Tagging
- Proxy authentication
- Batch Errors
- Array DML Row Counts
- Query Result Caching
- Application Continuity
- Query Metadata
- Password Change
- Two Phase Commit
- Continuous Query Notification
- Advanced Queuing

In case you want to access other OCI functionality without having to modify ODPI-C code, there is a call to get the underlying OCI Service Context handle. ODPI-C applications can take advantage of OCI features which don't require API access, such as the oraaccess.xml configuration for enabling statement cache auto-tuning. Similarly, Oracle Database features controlled by SQL and PL/SQL, such as partitioning, can be used in applications, as you would expect. Since communication to the database is handled by Oracle's Network Services, features such as encrypted communication and LDAP can easily be configured.

Using ODPI-C

See ODPI-C Installation for detailed instructions. ODPI-C code can be included in your C or C++ applications and compiled like any OCI application. Or, if you want to use ODPI-C as a shared library, use the provided example Makefile. To try out ODPI-C, build it as a shared library and then compile the sample programs. These show a number of ODPI-C features. You can also view the test suite.

ODPI-C makes it easy to build and distribute application code or binaries because it is open source, and OCI headers and libraries are not needed at compilation. Applications need only standard Oracle client libraries at run time. Oracle client libraries must be installed separately. Version 11.2, 12.1 or 12.2 is required. These allow applications to connect to Oracle Database 9.2 or later (depending on the client version). ODPI-C uses the shared library loading mechanism available on each supported platform to load the Oracle Client library at run time. This allows code using ODPI-C to be built only once, and then run using the available Oracle Client libraries. Oracle client libraries are available in the free, easily installed Oracle Instant Client "Basic" and "Basic Light" packages. Client libraries are also available in any Oracle Database installation, or in a full Oracle Client installation.
ODPI-C has been tested on Windows, macOS and Linux. Compilation on other platforms such as AIX and Solaris x64 is possible, but these have not been fully tested.

Wrap Up

ODPI-C provides an API to Oracle Database that enables functional, flexible application and scripting-language API development. ODPI-C is being used by us for the Python cx_Oracle 6 interface, and in node-oracledb 2 for Node.js. Third-party developers have also started Go and Rust interfaces. These code bases are handy, bigger examples of how to use ODPI-C. It's also pleasing to have gotten positive feedback from internal Oracle projects that have adopted ODPI-C for a variety of other application development purposes.

ODPI-C will continue to be enhanced, bringing great Oracle Database technologies to developers. Scripting language drivers will similarly see enhancements and get the benefits of the common ODPI-C code base. I think you'll be pleased with our direction and plans for developers and scripting languages in 2017.

ODPI-C References

Home page: https://oracle.github.io/odpi/
Code: https://github.com/oracle/odpi
Documentation: https://oracle.github.io/odpi/doc/index.html
Report issues and discuss: https://github.com/oracle/odpi/issues

Footnote Trivia

Why is this first production release called version 2? Because ODPI-C is a significantly updated, refactored, and standalone release of code which had its origins in a node-oracledb 1.x abstraction layer.


General

Python cx_Oracle 6.0 RC 2 is on PyPI

Python cx_Oracle is the Python interface for Oracle Database. Anthony Tuininga has released the second (and probably final) Release Candidate of Python cx_Oracle 6.0 on PyPI. It's now full steam ahead towards the production release, so keep on testing. Issues can be reported on GitHub or the mailing list.

To take a look, use the '--pre' option to pip to install this pre-release:

python -m pip install cx_Oracle --upgrade --pre

You will also need Oracle client libraries, such as those from the free Oracle Instant Client.

I want to highlight a few of the changes, focusing on usability which, as readers of my blog will know, I favor. This release picks up the latest ODPI-C Oracle DB abstraction layer, which provides some nice fixes. In particular, one fix resolved a confusing 'UnicodeDecodeError' message that was displayed on Windows when cx_Oracle was imported. Now the actual underlying Windows error message is displayed, letting you see what the root problem is.

The installation notes have been tidied up and made into a new Installation chapter of the documentation, complete with troubleshooting tips. Some more introductory samples have been added, and the sample and test schema creation scripts improved. The scripts now reference a common file to set credentials, making it easier to play with them without needing to edit every single one.

The full cx_Oracle release notes are here. Let us know what you find.


General

Ever Evolving SQL*Plus 12.2.0.1 Adds New Performance Features

This is a guest post by Luan Nim, Senior Development Manager at Oracle.

SQL*Plus 12.2.0.1 has introduced a number of features to improve performance and ease of use in general. These features can be enabled with SET commands, or via the command line. New Oracle SQL*Plus 12.2.0.1 features include:

SET MARKUP CSV

This option lets you generate output in CSV format. It also lets you choose the delimiter character to use and turn quoting of data ON or OFF. The benefit of using CSV format is that it is fast. This option improves performance when querying large amounts of data where formatted output is not needed.

Syntax:

SET MARKUP CSV ON [DELIMI[TER] character] [QUOTE {ON|OFF}]

Example:

SQL> set markup csv on
SQL> select * from emp;
"EMPNO","ENAME","JOB","MGR","HIREDATE","SAL","COMM","DEPTNO"
7369,"SMITH","CLERK",7902,"17-DEC-80",800,,20
7499,"ALLEN","SALESMAN",7698,"20-FEB-81",1600,300,30
7521,"WARD","SALESMAN",7698,"22-FEB-81",1250,500,30
7566,"JONES","MANAGER",7839,"02-APR-81",2975,,20
7654,"MARTIN","SALESMAN",7698,"28-SEP-81",1250,1400,30
7698,"BLAKE","MANAGER",7839,"01-MAY-81",2850,,30
7782,"CLARK","MANAGER",7839,"09-JUN-81",2450,,10
7788,"SCOTT","ANALYST",7566,"19-APR-87",3000,,20
7839,"KING","PRESIDENT",,"17-NOV-81",5000,,10
7844,"TURNER","SALESMAN",7698,"08-SEP-81",1500,0,30
7876,"ADAMS","CLERK",7788,"23-MAY-87",1100,,20
7900,"JAMES","CLERK",7698,"03-DEC-81",950,,30
7902,"FORD","ANALYST",7566,"03-DEC-81",3000,,20
7934,"MILLER","CLERK",7782,"23-JAN-82",1300,,10

14 rows selected.

This option is also available from the command line with the "-m csv" argument:

$ sqlplus -m "csv on" scott/tiger @emp.sql

SQL*Plus: Release 12.2.0.2.0 Development on Wed Jul 5 23:12:14 2017

Copyright (c) 1982, 2017, Oracle. All rights reserved.
Last Successful login time: Wed Jul 05 2017 23:11:46 -07:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.2.0 - 64bit Development

"EMPNO","ENAME","JOB","MGR","HIREDATE","SAL","COMM","DEPTNO"
7369,"SMITH","CLERK",7902,"17-DEC-80",800,,20
7499,"ALLEN","SALESMAN",7698,"20-FEB-81",1600,300,30
7521,"WARD","SALESMAN",7698,"22-FEB-81",1250,500,30
7566,"JONES","MANAGER",7839,"02-APR-81",2975,,20
7654,"MARTIN","SALESMAN",7698,"28-SEP-81",1250,1400,30
7698,"BLAKE","MANAGER",7839,"01-MAY-81",2850,,30
7782,"CLARK","MANAGER",7839,"09-JUN-81",2450,,10
7788,"SCOTT","ANALYST",7566,"19-APR-87",3000,,20
7839,"KING","PRESIDENT",,"17-NOV-81",5000,,10
7844,"TURNER","SALESMAN",7698,"08-SEP-81",1500,0,30
7876,"ADAMS","CLERK",7788,"23-MAY-87",1100,,20
7900,"JAMES","CLERK",7698,"03-DEC-81",950,,30
7902,"FORD","ANALYST",7566,"03-DEC-81",3000,,20
7934,"MILLER","CLERK",7782,"23-JAN-82",1300,,10

14 rows selected.

SET FEEDBACK ONLY

The new ONLY option to SET FEEDBACK displays the number of rows selected without displaying the data. This is useful for users who are interested in measuring the time taken to fetch data from the database, without actually displaying that data.

Example:

SQL> set feedback only
SQL> select * from emp;

14 rows selected.

SET STATEMENTCACHE

This option caches executed statements in the current session. The benefit of this setting is that it reduces unnecessary parsing time for the same query, so it improves performance when repeatedly executing a query in a session.

Example:

SQL> set statementcache 20
SQL> select * from emp;
SQL> select * from emp;

SET LOBPREFETCH

This option improves access to smaller LOBs, where LOB data is prefetched and cached. The benefit of this setting is that it reduces the number of network round trips to the server, allowing LOB data to be fetched in one round trip when it is within the defined LOBPREFETCH size.
Example:

SQL> set lobprefetch 2000
SQL> select * from lob_tab;

SET ROWPREFETCH

This option minimizes server round trips in a query: rows are prefetched into the result set when a query is executed. The number of rows to prefetch is set with SET ROWPREFETCH. This option can reduce round trips by allowing Oracle to transfer query results on return from its internal OCI execute call, removing the need for a subsequent internal OCI fetch call to make another round trip to the database.

Example:

SQL> set rowprefetch 20
SQL> select * from emp;

If, for example, you expect only a single row returned, set ROWPREFETCH to 2, which allows Oracle to get the row efficiently and to confirm no other rows need fetching.

Command line -FAST option

This command line option improves performance in general. When it is used, it changes the following SET options to new values:

ARRAYSIZE 100
LOBPREFETCH 16384
PAGESIZE 50000
ROWPREFETCH 2
STATEMENTCACHE 20

Once logged in, these settings can also be changed manually.

Syntax:

$ sqlplus -f @emp.sql

I hope the new features described above are helpful to you. For more information, please refer to the SQL*Plus Users Guide and Reference. If you have questions about SQL, PL/SQL or SQL*Plus, post them in the appropriate OTN space.
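Because SET MARKUP CSV emits standard CSV, spooled output can be consumed directly by ordinary tools. As a small stand-alone sketch, here is Python's csv module parsing two rows of the emp output shown earlier (the data is embedded in a string purely for illustration; in practice you would read a spool file):

```python
import csv
import io

# Two data rows of the SET MARKUP CSV output from the emp query above
spool = io.StringIO(
    '"EMPNO","ENAME","JOB","MGR","HIREDATE","SAL","COMM","DEPTNO"\n'
    '7369,"SMITH","CLERK",7902,"17-DEC-80",800,,20\n'
    '7499,"ALLEN","SALESMAN",7698,"20-FEB-81",1600,300,30\n'
)

# The header row produced by SQL*Plus supplies the column names
reader = csv.DictReader(spool)
rows = list(reader)
for row in rows:
    print(row["ENAME"], row["SAL"])
```

Note that unquoted empty fields, such as the NULL COMM values, arrive as empty strings.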


php

Updated PHP 7.2 PDO_OCI install 'configure' syntax

Frank Yang at Oracle has updated the PHP 7.2 'configure' option for PDO_OCI and added some of the heuristics previously only in the PHP OCI8 extension's configure option. This has allowed the two extensions' option syntaxes to be aligned. PDO_OCI is PHP's PDO driver for Oracle Database. PHP 7.2 is in Alpha status. In common with most other database PDO drivers, the PDO_OCI driver is part of the PHP bundle and is not on PECL.

The new PHP 7.2 PDO_OCI 'configure' syntax is:

--with-pdo-oci[=DIR]
    PDO: Oracle OCI support. DIR defaults to $ORACLE_HOME.
    Use --with-pdo-oci=instantclient,/path/to/instant/client/lib
    for an Oracle Instant Client installation.

So now, for example, you could use:

./configure --with-pdo-oci=instantclient,/usr/lib/oracle/12.2/client64/lib \
            --with-oci8=instantclient,/usr/lib/oracle/12.2/client64/lib

Prior to PHP 7.2, configuring PDO_OCI with the Oracle Instant Client libraries required something like:

./configure --with-pdo-oci=instantclient,/usr,12.1

The version number on the end of the option was only partially validated. In many cases it was possible to use an 'incorrect' number and still get the desired libraries to be used. (Also, each new Oracle Database version required PDO_OCI code changes to update a whitelist.) When building with PHP 7.1 (and earlier) and using Instant Client, this old syntax is still needed.

If you know the PHP OCI8 'configure' options well, you may wonder why the install simplification heuristic on Linux to automatically use the highest installed version of Instant Client RPMs wasn't ported to PDO_OCI's 'configure'. Well, I still dream that bigger improvements to PDO_OCI will occur. The current incarnation of this dream is that PDO_OCI could one day be rewritten to use ODPI-C, like cx_Oracle 6 and node-oracledb 2. If this dream ever became reality, the configure option would become simply '--with-pdo-oci', since ODPI-C doesn't need Oracle headers or libraries at build time.
ODPI-C only needs Oracle client libraries in LD_LIBRARY_PATH (or PATH on Windows) at run time. But I've had the dream of various PDO_OCI improvements for at least 10 years.... For the moment Oracle still recommends using PHP OCI8 in preference to PDO_OCI because OCI8 has superior features and can also take advantage of various Oracle client library connection and caching functionality.


node.js

Node-oracledb 2.0.13-Development is now on GitHub

Node-oracledb 2.0.13-Development is now on GitHub. Node-oracledb is the Node.js interface for Oracle Database.

Top features:
- Version 2 is based on the new ODPI-C abstraction layer.
- Additional data type support.

The full Changelog is here. The node-oracledb 2.0.13-Development documentation is here. I'd recommend testing and reporting any issues as early as possible during the 2.0 Development release cycle. This is a development release, so we are aware of some rough edges. I'll start a GitHub issue to track them.

Full installation instructions are here, but note that node-oracledb 2.0 is not yet on npm, so you need to install from GitHub with:

npm install oracle/node-oracledb.git#dev-2.0

All you then need are Oracle client 12.2, 12.1 or 11.2 libraries (e.g. from the Oracle Instant Client 'Basic' or 'Basic Light' package) in your PATH or LD_LIBRARY_PATH or equivalent. Users of macOS must put the Oracle client libraries in ~/lib or /usr/local/lib.

The use of ODPI-C makes installation a bit easier. Oracle header files are no longer needed. The OCI_LIB_DIR and OCI_INC_DIR environment variables are not needed. A compiler with C++11 support, and Python 2.7, are still needed, but a single node-oracledb binary now works with any of the Oracle client 11.2, 12.1 or 12.2 releases, improving portability when node-oracledb builds are copied between machines. You will get run-time errors if you try to use a new Oracle Database feature that isn't supported by older client libraries, so make sure you test in an environment that resembles your deployment one.

Other changes in this release are:
- Lob.close() now marks LOBs invalid immediately rather than during the asynchronous portion of the close() method, so that all other attempts are no-ops.
- Incorrect application logic in version 1 that attempted to close a connection while certain LOB, ResultSet or other database operations were still occurring gave an NJS-030, NJS-031 or NJS-032 "connection cannot be released" error. Now in version 2 the connection will be closed, but any operation that relied on the connection being open will fail.
- Some NJS and DPI error messages and numbers have changed. This is particularly true of DPI errors, due to the use of ODPI-C.
- Stated compatibility is now for Node.js 4, 6 and 8.
- Added support for fetching columns of types LONG (as String) and LONG RAW (as Buffer). There is no support for streaming these types, so the value stored in the DB may not be able to be completely fetched if Node.js and V8 memory limits are reached. You should convert applications to use LOBs, which can be streamed.
- Added support for the TIMESTAMP WITH TIME ZONE data type. These are mapped to a Date object in node-oracledb using LOCAL TIME ZONE. The TIME ZONE component is not available in Node.js's Date object.
- Added support for ROWID without needing an explicit fetchAsString. Data is now fetched as a String by default.
- Added support for UROWID. Data is fetched as a String.
- Added query support for NCHAR and NVARCHAR2 columns. Binding for DML may not insert data correctly, depending on the database character set and the database national character set.
- Added query support for NCLOB columns. NCLOB data can be streamed or fetched as String. Binding for DML may not insert data correctly, depending on the database character set and the database national character set.
- Removed node-oracledb size restrictions on LOB fetchAsString and fetchAsBuffer queries, and also on LOB binds. Node.js and V8 memory restrictions will still prevent large LOBs being manipulated in single chunks. The v1 limits really only affected users who linked node-oracledb with 11.2 client libraries.
- Statements that generate errors are now dropped from the statement cache. Applications running while table definitions change will no longer end up with unusable SQL statements due to stale cache entries. Applications may still get one error, but that will trigger the now-invalid cache entry to be dropped, so subsequent executions will succeed. ODPI-C has some extra smarts in there to make it even better than I describe. I can bore you with them if you ask - or you can check the ODPI-C source code. Note that Oracle best practice is never to change table definitions while applications are executing. I know some test frameworks do it, but ....

All these improvements are courtesy of ODPI-C's underlying handling. The use of ODPI-C is a great springboard for future features, since it already has support for a number of things we can expose to Node.js. The ODPI-C project was an expansion of the previous DPI layer used solely by node-oracledb. Now ODPI-C is in use in Python cx_Oracle 6, and is being used in various other projects. For example, Tamás Gulácsi has been working on a Go driver using ODPI-C. (Check out his branch here.) Kubo Takehiro started an Oracle Rust driver too - before he decided that he preferred programming languages other than Rust!

Our stated plan for node-oracledb is that maintenance of node-oracledb 1.x will end on 1st April 2018, coinciding with the end-of-life of Node 4, so start testing node-oracledb 2.0.13-Development now.
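The TIMESTAMP WITH TIME ZONE caveat mentioned in the change list comes from JavaScript itself: a Date stores only an instant in time, not the original zone offset. A quick stand-alone illustration, with no database involved (the timestamp value here is made up):

```javascript
// A timestamp with an explicit +05:00 offset, as might come from
// a TIMESTAMP WITH TIME ZONE column
const d = new Date("2017-08-01T10:00:00+05:00");

// The instant is preserved, but the original +05:00 offset is not:
// a Date can only render itself in UTC or in the process's local zone.
console.log(d.toISOString());   // 2017-08-01T05:00:00.000Z
```

This is why node-oracledb cannot surface the column's TIME ZONE component through a plain Date object.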


General

ODPI-C 2.0.0-beta.4 released: More & Better-er

ODPI-C 2.0.0-beta.4 has been released to GitHub. ODPI-C is an abstraction layer for Oracle Call Interface (OCI). This release has incremental improvements, based on user feedback and as a result of continued testing. There was some discussion around best-practice number handling, and how to make use of ODPI-C 'variable' structures without requiring a DB connection. These have resulted in a few tweaks. See the release notes for all changes. The highlights in this release are:

Added support for getting/setting attributes of objects or element values in collections that contain LOBs, BINARY_FLOAT values, BINARY_DOUBLE values, and NCHAR and NVARCHAR2 values.

Enabled temporary LOB caching in order to avoid disk I/O.

Changed the default native type to DPI_ORACLE_TYPE_INT64 if the column metadata indicates that the values are able to fit inside a 64-bit integer.

Added function dpiStmt_defineValue(), which gives the application the opportunity to specify the data type to use for fetching without having to create a variable.

In addition, a new macro DPI_DEBUG_LEVEL can be enabled during compilation to display logging and trace information. Personally I'd set it to DPI_DEBUG_LEVEL_FREES in this initial Beta phase to check whether we've made any mistakes with resource handling. We'd like to wrap up the Beta phase soon, so please continue pounding on ODPI-C and send us feedback. If you want to see ODPI-C in a bigger project, check out Python cx_Oracle 6.0b2, which has been updated to use this new release of ODPI-C.
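The "fits inside a 64-bit integer" check mentioned above follows the usual Oracle NUMBER rule of thumb: an integer column (scale 0) with at most 18 digits of precision is always representable as a signed 64-bit value, because 10^18 - 1 < 2^63 - 1, while 19-digit values may overflow. A short Python illustration of that decision - my own sketch, not ODPI-C's source; the function names and string return values are assumptions standing in for the DPI_ORACLE_TYPE_* constants:

```python
def fits_in_int64(precision, scale):
    """True when every value of NUMBER(precision, scale) is an int64.

    NUMBER holds integers only when scale is 0; up to 18 decimal digits
    always fit, since 10**18 - 1 < 2**63 - 1.  Nineteen digits may
    overflow, so those columns fall back to a wider representation.
    """
    return scale == 0 and 0 < precision <= 18


def choose_native_type(precision, scale):
    # Hypothetical names mirroring ODPI-C's DPI_ORACLE_TYPE_* constants.
    return "INT64" if fits_in_int64(precision, scale) else "NUMBER"


assert choose_native_type(4, 0) == "INT64"    # NUMBER(4): small integer
assert choose_native_type(18, 0) == "INT64"   # 18 digits always fit
assert choose_native_type(19, 0) == "NUMBER"  # 19 digits may overflow
assert choose_native_type(6, 2) == "NUMBER"   # fractional values
```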


General

ODPI-C 2.0.0 Beta 2 is out with runtime client linking

Anthony Tuininga just released ODPI-C 2.0.0-beta.2 on GitHub. The release notes list all changes. ODPI-C is an open source library of C code that simplifies the use of common Oracle Call Interface (OCI) features for Oracle Database drivers and user applications. It sits on top of OCI and requires Oracle client libraries. The big change in this release is the addition of runtime dynamic linking of the Oracle client libraries. It was initiated by a comment from Vincent Rogier, author of OCILIB, when he took a look at ODPI-C. Once you have downloaded the ODPI-C source (and written something using its API, or copied the samples) you can compile your code without needing the Oracle header files (e.g. you don't need the Oracle Instant Client SDK). All you need are the Oracle client libraries from Oracle Database 11.2, 12.1 or 12.2 in your path when you run your application. These client libraries can be found in the Instant Client Basic or Basic Light packages, in a database ORACLE_HOME, or in a full Oracle Client install. Your ODPI-C-based application will use the first Oracle client libraries in your load path. More details about configuring the Oracle client are in the documentation. In summary, you only need to build your application once and it will run using Oracle 11.2, 12.1 or 12.2 client libraries. If you want to see ODPI-C in action in a bigger 'example', look at the master branch of cx_Oracle.
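Runtime dynamic linking means the library and its symbols are resolved when the application starts (via dlopen()/dlsym() on Linux, for instance) rather than at build time, which is why no SDK headers are needed to compile. The same pattern can be demonstrated from Python with ctypes - shown here against the C math library rather than the Oracle client, since the latter may not be installed; this is an analogy for the mechanism, not ODPI-C code:

```python
import ctypes
import ctypes.util

# Locate a shared library on the load path at run time - the same idea
# ODPI-C uses to find the Oracle client libraries. libm stands in here
# so the sketch runs without Oracle installed.
path = ctypes.util.find_library("m")
libm = ctypes.CDLL(path) if path else ctypes.CDLL(None)

# Resolve a symbol and declare its C signature before calling it.
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

assert libm.sqrt(9.0) == 3.0  # the symbol was bound at run time, not build time
```

The trade-off is the one the post implies: compilation no longer needs headers, but the right library version must be on the load path when the program runs.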


node.js

Node-oracledb 1.13.0 is available on npm

Node-oracledb 1.13.0, the Node.js add-on for Oracle Database, is on npm. Top node-oracledb feature: a fetchAsBuffer mode for fetching BLOBs. A side note: Oracle Database 12.2 is also here! And Oracle Instant Client 12.2 is available too. Node-oracledb works with 12.2 client libraries (as well as 11.2 and 12.1). With a 12.2 client, you can connect to 11.2, 12.1 and 12.2 databases, using Oracle's cross-version interoperability which allows Oracle clients to connect to older (or newer) databases. One feature I like in the 12.2 client is its internal Session Pool connection checking, which transparently improves application connection reliability in unstable network environments.

BLOB Enhancements in node-oracledb

The final big piece of LOB support has been added to node-oracledb 1.13 with the addition of a new fetchAsBuffer option. This allows queries to return small BLOBs directly as Node.js Buffers. The new option is ideal for the many applications where BLOBs are kilobyte-sized or, at most, a few megabytes. Fetching as a Buffer is an alternative to the previous method of streaming, which should continue to be used for larger BLOBs that can't (or shouldn't) be one chunk of Node.js memory. There is an example of fetchAsBuffer in examples/lobselect.js:

oracledb.fetchAsBuffer = [ oracledb.BLOB ];

connection.execute(
  "SELECT b FROM mylobs WHERE id = :idbv", [2],
  function(err, result) {
    if (err) { . . . }
    if (result.rows.length === 0) { . . . }  // no results
    var blob = result.rows[0][0];            // This is a Buffer
    fs.writeFile('output.jpg', blob, "binary", function(err) {
      return cb(err, connection);
    });
  });

An alternative to the global oracledb.fetchAsBuffer is to use a fetchInfo option for the column at execute() time:

{ fetchInfo: {"B": {type: oracledb.BUFFER}} }, [ . . . ]

Pull Requests

A small improvement was made to PL/SQL Index-by array binding error messages in 1.13. This was based on PR #470 submitted by Hariprasad Kulkarni.
In an array bind situation, if a value with an unexpected data type is used, then messages NJS-037 and NJS-052 now give the bind variable name (or position), and the index of the unexpected data in the input data array. This makes it easier to find and fix the problem. The PR is much appreciated.

Bug fixes

Some bugs were happily squashed in node-oracledb 1.13: Fixed several crashes and a memory leak using CLOBs with fetchAsString. Fixed several issues including a crash using NULLs and empty strings for LOB BIND_INOUT binds. Automatically clean up sessions in the connection pool when they become unusable after an ORA-56600 occurs. Overall, this is a good release for users working with CLOBs and BLOBs.

Plans for node-oracledb version 2

We are now going to work on a version 2 branch that incorporates the new ODPI-C layer. The code is mostly ready, though testing will take some time. We'll push a development release to GitHub soonish so you can check it out and comment during the stabilization phase. Initially node-oracledb 2.x will have the same functionality as 1.x, with a few small additions made possible by ODPI-C. The plan is for node-oracledb 1.x to go into maintenance mode. Maintenance of node-oracledb 1.x will end on 1st April 2018, coinciding with the end-of-life of Node 4. Some more details are in the earlier announcement.

Resources

Issues and questions about node-oracledb can be posted on GitHub. We value your input to help prioritize work on the add-on. Drop us a line! node-oracledb installation instructions are here. Node-oracledb documentation is here. Finally, contributions to node-oracledb are more than welcome, see CONTRIBUTING.


python

Python cx_Oracle 5.3 for Oracle Database Released

Today we are pleased to announce the release of cx_Oracle 5.3, the extremely popular Python interface for Oracle Database. Binary and source bundles can be installed from PyPI, as normal. cx_Oracle 5.3 works with Python version 2.7, and with versions 3.4 and higher. You can use cx_Oracle with Oracle 11.2, 12.1 and 12.2 client libraries, allowing connection to multiple Oracle Database versions. Oracle's standard client-server version interoperability allows connection to both older and newer databases. For example, Oracle 11.2 client libraries can connect to Oracle Database 10.2 or later. This release marks the move of the source code repository and home page to GitHub. User documentation remains on readthedocs. The cx_Oracle mail list also continues to operate, but you may find it easier to transition to GitHub for questions and issues.

The New cx_Oracle 5.3

Ever since Anthony Tuininga joined Oracle, he has been hard at work adding improvements to cx_Oracle. (He's also been laying the groundwork for the next big release - more on that later in this post). I think you'll be impressed with cx_Oracle 5.3. At last year's Oracle OpenWorld, Anthony talked about some of the changes and got great positive feedback. Check out his presentation here. This is a big update. While a few of the changes are specific to Oracle Database 12c features (see release notes), you will appreciate all the things that will make your applications better even if you have older databases. And you are upgrading to Oracle Database 12.2, right? Don't forget you can get immediate access to Oracle Database 12.2 using the Oracle Database Cloud.

Key New Features

Support for Python 3.6. Note that Python 2.6 and earlier are no longer supported. (You can still get cx_Oracle 5.2 from PyPI, if you really need Python 2.6 support). Support for Oracle client 12.2. Note that Oracle clients earlier than 11.2 are no longer supported. Direct binding to SQL objects.
One great use case is binding Python objects to Oracle Spatial SDO objects. See InsertGeometry.py.

PL/SQL records and collections can be bound. Interacting with PL/SQL stored logic has never been easier.

Support for scrollable cursors. Go back and forth through your query results. See ScrollableCursors.py.

Support for Advanced Queuing. Use database notifications to build micro-service applications. See AdvancedQueuing.py.

Fetch PL/SQL implicit results. Easily return query results from PL/SQL. See ImplicitResults.py.

Transaction Guard support. Verify transaction status after unplanned server outages. See TransactionGuard.py.

Support for Edition Based Redefinition. Easily switch applications to use updated PL/SQL logic. See Editioning.py.

Support for Statement Caching. Reduce costs of frequently executed statements. Now available with all connections.

Support for setting application context during the creation of a connection. Make application metadata more accessible to the database, including in LOGON triggers. See AppContext.py.

Other New Features

Continuous Query Notification use has been simplified. See DatabaseChangeNotification.py and QueryChangeNotification.py. Support for pickling/unpickling error objects. Support for binding native integers. Support for setting the internal and external name for use in distributed transactions. Set the maximum lifetime of connections in a session pool. Larger row counts (greater than 2^32). Increased default query array size to 100.

Changes from cx_Oracle 5.2

In line with previous deprecations and announcements, the following items are no longer usable in cx_Oracle 5.3: The variable attribute maxlength has been dropped; use bufferSize instead. The variable attribute allocelems has been dropped; use numElements instead. The types UNICODE, FIXED_UNICODE and LONG_UNICODE have been dropped; use NCHAR, FIXED_NCHAR and LONG_NCHAR instead. Callbacks on OCI functions are no longer supported.
Future Deprecation Announcements

In a future cx_Oracle 6 release, the following items will be removed: The module, action and clientinfo arguments to connection creation are deprecated and will be removed in cx_Oracle 6; use the new application context instead. The cursor attribute numbersAsStrings is deprecated and will be removed in cx_Oracle 6; an output type handler should be used instead.

Things you could already do in cx_Oracle 5.2

All the new features in cx_Oracle 5.3 listed above are on top of a great history of database support. The high-level features that you've already been using in the previous release are: SQL and PL/SQL execution; extensive data type support; fetching of large result sets; REF CURSORs; binding PL/SQL arrays; large objects (CLOBs and BLOBs); transaction management; session pooling; Database Resident Connection Pooling (DRCP); privileged connections and database startup/shutdown; external authentication; Continuous Query Notification; row prefetching; client result caching; end-to-end tracing; and Oracle Database High Availability features.

A future cx_Oracle 6

I said you'd be impressed with cx_Oracle 5.3, but Anthony has already looked ahead to cx_Oracle 6, which aims to be even better. Over the past year, Anthony has been working hard on ODPI-C, an open source library of C code that simplifies and standardizes the use of common Oracle Call Interface (OCI) features, such as those used by cx_Oracle. ODPI-C was recently released on GitHub. As you can see from the cx_Oracle source code, the master branch of cx_Oracle code has been updated to use ODPI-C. If you want to test the current state of this future release, download a bundle from GitHub and build it. The release notes (so far) for user-visible changes are here.

Summary

The move of cx_Oracle to under the umbrella of Oracle has given Anthony more time to focus on cx_Oracle and on making database access better for all developers.
Overall, cx_Oracle is the most comprehensive and popular scripting language driver for Oracle Database. It takes advantage of the great features built into the Oracle client libraries and into Oracle Database. And, of course, Python cx_Oracle applications can take advantage of Oracle Net features such as network data encryption. cx_Oracle 5.3 is a big release with support for some great Oracle Database features. Take a look.

cx_Oracle Resources

Home Page Documentation Mail List | GitHub Issues Source Code


node.js

node-oracledb 1.12: Working with LOBs as String and Buffer. Connection Pinging.

Node-oracledb 1.12, the Node.js add-on for Oracle Database, is now on NPM. Top features: LOBs as JavaScript Strings and Buffers. Pool connection 'aliveness' checking. The two main features of node-oracledb 1.12 add functionality that aids usability. You may not even notice one working in the background, silently re-establishing pooled connections after network outages. The other is a 'no brainer' that makes working with large objects (LOBs) easier. You'll probably use it and think nothing of it. This release also contains a number of bug fixes and small changes. Check the CHANGELOG for all the details.

LOB Enhancements

Adding to the existing support for using CLOBs and BLOBs as Node.js Streams, now Strings and Buffers can be bound to CLOBs and BLOBs for IN, IN OUT and OUT binds. CLOBs in queries can also be returned directly as Strings. And streaming to and from Oracle's 'temporary LOBs' is supported. Also in this release, Writeable LOB Streams now conclude with a 'close' event. The node-oracledb LOB documentation is worth reading. There are runnable examples in the examples directory. To insert large data, just bind a Node.js String or Buffer to a CLOB or BLOB column, respectively. For example:

var fs = require('fs');
var str = fs.readFileSync('example.txt', 'utf8');

conn.execute(
  "INSERT INTO mylobs (id, myclobcol) VALUES (:idbv, :cbv)",
  { idbv: 1, cbv: str },  // type and direction are optional for IN binds
  function(err, result) {
    if (err)
      console.error(err.message);
    else
      console.log('CLOB inserted from example.txt');
    . . .

You can also bind Strings and Buffers to PL/SQL LOB parameters as IN, OUT and IN OUT binds.
To fetch CLOBs as Strings from queries, use fetchAsString or fetchInfo:

// return all CLOBs as Strings
oracledb.fetchAsString = [ oracledb.CLOB ];

conn.execute(
  "SELECT mycol FROM mylobs WHERE id = 1",
  function(err, result) {
    if (err) { console.error(err.message); return; }
    if (result.rows.length === 0)
      console.error("No results");
    else {
      var clob = result.rows[0][0];
      console.log(clob);
    }
  });

Currently BLOBs in queries must be streamed; we didn't want to hold up the 1.12 release. Use the right tool for the job: use Strings and Buffers for 'small' LOBs, and use Streams for everything else. There is a theoretical limit just under 1 GB in node-oracledb for manipulating LOBs as Strings or Buffers. Luckily you wouldn't want to be allocating that much memory in Node.js applications - it's better to stream. And Node.js itself doesn't cope well with large data. It can get unstable with Strings past the several-hundred-megabyte mark. (Note that if node-oracledb is linked with Oracle client 11g libraries, then the upper size of Strings and Buffers is just under the 64 KB mark: the moral is use Oracle client 12c libraries - you can still connect to Oracle Database 10.2, 11g and 12c). For larger data you have multiple options for streaming. To insert into a table column, use the existing RETURNING INTO method where you get an Oracle LOB locator and use Node.js Streams functionality to put data into it. For fetching, use the existing LOB Streaming method and stream data out of the Lob. If data is too big to bind as a String or Buffer to a PL/SQL parameter, you can create a 'temporary LOB' in node-oracledb, stream into it, and then bind it. This is new in node-oracledb 1.12. First, use the new connection.createLob() method to create a temporary LOB, here of type CLOB:

conn.createLob(oracledb.CLOB, function(err, templob) {
  if (err) { . . . }
  // ... else use templob
});

This creates an instance of a node-oracledb Lob class. Once created, data can be inserted into the Lob.
For example, to read a text file:

templob.on('error', function(err) { somecallback(err); });

// The data was loaded into the temporary LOB, so use it
templob.on('finish', function() { somecallback(null, templob); });

// copies the text from 'example.txt' to the temporary LOB
var inStream = fs.createReadStream('example.txt');
inStream.on('error', function(err) { . . . });
inStream.pipe(templob);

Now the LOB has been populated, it can be bound in somecallback() to a PL/SQL IN parameter:

// For PROCEDURE lobs_in(p_id IN NUMBER, c_in IN CLOB)
conn.execute(
  "BEGIN lobs_in(:id, :c); END;",
  { id: 3, c: templob },  // type and direction are optional for IN binds
  function(err) {
    if (err) { return cb(err); }
    console.log("Call completed");
    return cb(null, conn, templob);
  });

When the LOB is no longer needed, it must be closed with lob.close():

templob.close(function (err) {
  if (err)
    . . .
  else
    // success
});

Connection Pool Enhancements

The other main feature in node-oracledb 1.12 is connection pool session 'aliveness' checking. This is enabled by default. You may never notice it in action, but it will improve application behavior where networks are unstable, or database services become temporarily unavailable. When pooled connections have been established to the database, but are idle in the connection pool not currently being used, there is a chance that a network outage will make those connections unusable. A common scenario is when a developer's laptop is closed for a while. Unless node-oracledb is linked with Oracle 12.2 client libraries, any node-oracledb pooled getConnection() call could return one of the unusable connections. For users of Oracle 11.2 or 12.1 client libraries, a new poolPingInterval setting will do a 'ping' to the database to check the connection is OK before returning that connection to the application. If it isn't OK, then another connection is returned.
There is the overhead of a 'round trip' to the database in doing a ping, so there could be some impact on scalability. But the ping setting is time based, so there is no overhead on active pools. The ping will only happen when a connection has been idle in the pool for poolPingInterval seconds. If a connection has been recently used, it will be returned to the application without being checked. The default interval is 60 seconds. The feature can be configured to always force the ping, to totally disable it, or to be whatever interval meets your performance and quality-of-service requirements. The value can be set when a pool is created, or turned on globally:

oracledb.poolPingInterval = 60;

Applications should continue to detect and handle execution-time errors, since execution errors may occur, or a network outage may occur between getConnection() and execute() calls, or connections may have been in use and not released back to the pool when the network dropped, or the idle time may have been shorter than the ping interval setting and no ping performed. The new feature is great at improving the reliability of connections in unstable environments, but is just one of the options and tools that can be used to provide a highly available application. The ping feature is a no-op when node-oracledb is linked with Oracle 12.2 client libraries (independent of the database version at the other end of the connection), since those libraries have an always-enabled, lightweight connection check that removes the need for node-oracledb to do its own ping. This adds even more certainty that a connection will be valid at the time of first use after a getConnection() call. Oracle 12.2 is currently available in Oracle Cloud. The pool pinging documentation has more information.

Documentation Updates

If you haven't scanned the documentation for a while, take a look. Each release more and more tips and information gets added.
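The time-based decision described above - ping only when a connection has sat idle longer than the interval - amounts to a small predicate. A rough Python sketch of that logic (my own illustration of the documented behavior, not node-oracledb's implementation), using the documented semantics where a negative interval disables pinging and 0 forces a ping on every checkout:

```python
def should_ping(idle_seconds, pool_ping_interval=60):
    """Decide whether to ping before handing a pooled connection out.

    Mirrors the documented poolPingInterval semantics: a negative value
    disables pinging, 0 forces a ping on every checkout, and a positive
    value pings only connections idle at least that many seconds.
    """
    if pool_ping_interval < 0:
        return False
    if pool_ping_interval == 0:
        return True
    return idle_seconds >= pool_ping_interval


assert should_ping(5) is False         # recently used: skip the round-trip
assert should_ping(120) is True        # long idle: verify before returning
assert should_ping(120, -1) is False   # pinging disabled entirely
assert should_ping(0, 0) is True       # always ping
```

This is why an active pool pays no ping overhead: busy connections never accumulate enough idle time to trigger the check.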
A couple of community contributions from Leigh Schrandt and Nick Heiner regarding the README were gratefully received. Thank you!

Testing

Testing of node-oracledb is also something I wanted to mention. The test suite continues to grow, as you can see when you run it. We also have additional stress and load tests - overall these extra tests take days to complete. There are no guarantees, of course, but node-oracledb is more than solid.

Resources

Issues and questions about node-oracledb can be posted on GitHub. We value your input to help prioritize work on the add-on. Drop us a line! node-oracledb installation instructions are here. Node-oracledb documentation is here. Finally, contributions to node-oracledb are more than welcome, see CONTRIBUTING.


General

Fast Generation of CSV and JSON from Oracle Database

Want a fast way to spool CSV or JSON from Oracle Database? Check out the new SQL*Plus 12.2 SET MARKUP CSV option and the new Oracle Database 12.2 JSON_OBJECT operator. You can try Oracle Database 12.2 now using Oracle Cloud Database Service.

Fast, Easy CSV with SQL*Plus and Oracle Database

First, let's see CSV ("comma separated values") output in SQL*Plus 12.2. Start with this script, t.sql:

set feedback off
select department_id, department_name from departments where department_id < 110;
exit

Executed traditionally you get formatted output:

SQL> @t.sql

DEPARTMENT_ID DEPARTMENT_NAME
------------- ------------------------------
           10 Administration
           20 Marketing
           30 Purchasing
           40 Human Resources
           50 Shipping
           60 IT
           70 Public Relations
           80 Sales
           90 Executive
          100 Finance

Running it with the new CSV mode:

SQL> set markup csv on
SQL> @t.sql

"DEPARTMENT_ID","DEPARTMENT_NAME"
10,"Administration"
20,"Marketing"
30,"Purchasing"
40,"Human Resources"
50,"Shipping"
60,"IT"
70,"Public Relations"
80,"Sales"
90,"Executive"
100,"Finance"

Simple! The full CSV syntax is:

SET MARKUP CSV {ON|OFF} [DELIMI[TER] character] [QUOTE {ON|OFF}]

You can see the delimiter can be changed from a comma, and quoting of fields can be disabled. The SET MARKUP option can also be enabled from the command line with the -m option:

$ sqlplus -s -m 'csv on' cj@localhost/pdb1 @t.sql

"DEPARTMENT_ID","DEPARTMENT_NAME"
10,"Administration"
20,"Marketing"
30,"Purchasing"
40,"Human Resources"
50,"Shipping"
60,"IT"
70,"Public Relations"
80,"Sales"
90,"Executive"
100,"Finance"

(Pro tip: the -s silent option may hide the password prompt on some platforms, making SQL*Plus appear to hang! Enter the password and all will be well.) CSV mode bypasses the traditional SQL*Plus pagination routines, making output faster to generate. And using the -m 'csv on' option additionally invokes a faster I/O subsystem, and also automatically sets some SET options to the values of another new option, sqlplus -f.
Two of the changes are increasing SET ARRAYSIZE to 100 and SET ROWPREFETCH to 2. Increasing SET ARRAYSIZE allows larger amounts of data to be returned in each low-level request to the database, thus improving overall efficiency. Having SET ROWPREFETCH at 2 reduces the database 'round trips' required between SQL*Plus and the database for queries that return only single rows. You should adjust the values of these, and other, settings to suit your data size and performance goals. CSV mode can be used when connecting SQL*Plus 12.2 to Oracle Database 12.2 or earlier.

Fast, Easy JSON with SQL*Plus and Oracle Database

The Oracle Database 12.2 JSON_OBJECT function is a great way to convert relational table output into JSON. Combined with SQL*Plus 12.2's efficient CSV output - and with quoting of columns disabled - you can spool JSON very easily. Here's a SQL*Plus script t2.sql to return JSON output:

set heading off
set feedback off
select json_object ('deptId' is d.department_id, 'name' is d.department_name) department
from departments d
where department_id < 110;
exit

Running it generates the desired JSON output:

$ sqlplus -s -m 'csv on quote off' cj@localhost/pdb1 @t2.sql

{"deptId":10,"name":"Administration"}
{"deptId":20,"name":"Marketing"}
{"deptId":30,"name":"Purchasing"}
{"deptId":40,"name":"Human Resources"}
{"deptId":50,"name":"Shipping"}
{"deptId":60,"name":"IT"}
{"deptId":70,"name":"Public Relations"}
{"deptId":80,"name":"Sales"}
{"deptId":90,"name":"Executive"}
{"deptId":100,"name":"Finance"}

Summary

SQL*Plus 12.2 and Oracle Database 12.2 allow efficient and easy access to CSV and JSON data. If you don't yet have 12.2, or you want to create web services and do more with the data, check out Oracle ORDS or roll your own solution using the node-oracledb driver for Node.js.
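For comparison, the two output shapes above are easy to reproduce in client code as well. A small Python sketch that formats rows the way SET MARKUP CSV does (quoted character data, bare numbers) and the way the JSON_OBJECT query does - purely illustrative; the column names and rows are hardcoded stand-ins for a real query result:

```python
import json

rows = [(10, "Administration"), (20, "Marketing"), (30, "Purchasing")]
cols = ["DEPARTMENT_ID", "DEPARTMENT_NAME"]


def csv_line(values):
    # SET MARKUP CSV quotes character data but leaves numbers bare.
    return ",".join('"%s"' % v if isinstance(v, str) else str(v) for v in values)


csv_out = [csv_line(cols)] + [csv_line(r) for r in rows]
assert csv_out[0] == '"DEPARTMENT_ID","DEPARTMENT_NAME"'
assert csv_out[1] == '10,"Administration"'

# One compact JSON object per row, like the JSON_OBJECT query with quote off.
json_out = [json.dumps({"deptId": d, "name": n}, separators=(",", ":"))
            for d, n in rows]
assert json_out[0] == '{"deptId":10,"name":"Administration"}'
```

The point of the SQL*Plus approach, of course, is that the database and client tool do this formatting for you, without any application code.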


General

Application Development at Oracle OpenWorld, San Francisco, September 2016

Well, there is certainly a lot going on at Oracle OpenWorld this September. You can browse the session catalog for interesting talks. Update: If you need just one link, use our "Focus on Document" to guide your session choice. Here are a few highlights in my area:

Python and Oracle Database: Tips, Tricks, and the Best New Features [CON6543] - Park Central - Concordia, Monday 19th September, 03:00pm - 03:45pm

Best Practices for Application Performance and Scalability [CON6538] - Park Central - Metropolitan I, Monday 19th September, 05:30pm - 06:15pm

Meet the Oracle Programming and Scripting Experts [MTE7200] - Moscone South - 310, Tuesday 20th September, 06:15pm - 07:00pm

Best Practices for Application High Availability [CON6539] - Park Central - Metropolitan II, Wednesday 21st September, 12:15pm - 01:00pm

Node.js: Asynchronous JavaScript Application Development for Oracle Database 12c [CON6542] - Park Central - Concordia, Thursday 22nd September, 10:45am - 11:30am

Best Practices for Application Development with Oracle Database Cloud Service [CON6540] - Moscone South - 104, Thursday 22nd September, 01:15pm - 02:00pm

That's some great content there. The "Meet the Experts" session is the interactive session where you get to hear from, and ask questions to, our key developers and managers in the Scripting Language and .NET area. If you're shy, you don't have to speak - just come and learn. We'll also have a demo booth open on the exhibition floor so you can come and chat. (Its location is yet to be announced). I really hope to see you during the 2016 conference.


node.js

Node-oracledb 1.11 Released to NPM

We've pushed out a release of node-oracledb to GitHub and NPM. Top feature: Connection Pool Cache. The node-oracledb driver connects Node.js to Oracle Database for fast and functional applications. The changes in node-oracledb 1.11 are: Added a connection pool cache feature allowing pools to be given a string alias at creation. This makes pools easier to use. Pools are typically created in one file and later used in other files. Giving a pool a string alias allows these other modules to access the pool by knowing only its alias name. When pools are created with oracledb.createPool(), the pool attributes can now optionally contain a new poolAlias string:

var hrPoolPromise = oracledb.createPool({
  poolAlias: 'pool1',
  user: 'hr',
  password: 'welcome',
  connectString: 'localhost/orcl'
});

Pools can be retrieved from the cache using a new oracledb.getPool() method, and then used as before to get connections. Multiple pools, each with a different alias, can be used. Interestingly, oracledb.getConnection() has been enhanced to take a pool alias specifying which pool to return a connection from. This bypasses the need to call oracledb.getPool(). And there's more! The first pool (in a normal code flow) created without a poolAlias attribute is given the alias "default". Connections can be retrieved from this pool by using oracledb.getConnection() without passing an alias at all. This means applications that create only a single pool can simply use oracledb.getConnection() anywhere to get a connection from the pool. More information and examples are in the documentation. It is worth checking this since oracledb.getConnection() now has several different behaviors, depending on the use of aliases (and whether callbacks or promises are used). We'd recommend using aliases all the time if you create more than one pool - and want to access them via aliases. Using aliases is optional. Don't mix un-aliased and aliased pools unless you want to confuse yourself.
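The alias mechanics described above boil down to a registry keyed by name, with one name reserved for the default pool. A toy Python model of that cache - illustrative only; the class and method names here are mine, not node-oracledb's API:

```python
class PoolCache:
    """Toy registry of connection pools keyed by alias."""

    DEFAULT_ALIAS = "default"

    def __init__(self):
        self._pools = {}

    def create_pool(self, alias=None, **attrs):
        # A pool created without an alias becomes the "default" pool,
        # mirroring node-oracledb's behavior for the first such pool.
        alias = alias or self.DEFAULT_ALIAS
        if alias in self._pools:
            raise ValueError("pool alias already in use: " + alias)
        pool = dict(attrs, alias=alias)
        self._pools[alias] = pool
        return pool

    def get_pool(self, alias=None):
        # No alias means "give me the default pool".
        return self._pools[alias or self.DEFAULT_ALIAS]


cache = PoolCache()
cache.create_pool(alias="pool1", user="hr")
cache.create_pool(user="app")             # no alias: becomes "default"

assert cache.get_pool("pool1")["user"] == "hr"
assert cache.get_pool()["user"] == "app"  # default pool, no alias needed
```

This is why modules elsewhere in an application only need to know an alias string, not the pool object itself, to obtain connections.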
Improved the bootstrap error message when the node-oracledb binary cannot be loaded. Fixed memory leaks with DATE and TIMESTAMP bind values. Fixed external authentication, which broke in 1.10. Fixed metadata scale and precision values on AIX. A couple of other internal improvements are shown in the CHANGELOG.

Resources

Issues and questions about node-oracledb can be posted on GitHub. Your input helps us schedule work on the add-on. Drop us a line! Node-oracledb installation instructions are here. Node-oracledb API and user documentation is here.


node.js

Node-oracledb 1.10 has Enhanced Metadata

Top feature: Enhanced Metadata. The changes in node-oracledb 1.10 are: Enhanced query metadata, thanks to a Pull Request from Leonardo. He kindly allowed us to take over and fine-tune the implementation. Additional metadata for query and REF CURSOR columns is available in the metaData object when the new boolean oracledb.extendedMetaData attribute or the corresponding execute() option attribute extendedMetaData is true. For example, if the DEPARTMENTS table is like:

SQL> desc departments
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 DEPARTMENT_ID                             NOT NULL NUMBER(4)
 DEPARTMENT_NAME                           NOT NULL VARCHAR2(30)
 MANAGER_ID                                         NUMBER(6)
 LOCATION_ID                                        NUMBER(4)

Then a query in node-oracledb would give extended metadata:

[ { name: 'DEPARTMENT_ID',
    fetchType: 2002,
    dbType: 2,
    precision: 4,
    scale: 0,
    nullable: false },
  { name: 'DEPARTMENT_NAME',
    fetchType: 2001,
    dbType: 1,
    byteSize: 30,
    nullable: false },
  { name: 'MANAGER_ID',
    fetchType: 2002,
    dbType: 2,
    precision: 6,
    scale: 0,
    nullable: true },
  { name: 'LOCATION_ID',
    fetchType: 2002,
    dbType: 2,
    precision: 4,
    scale: 0,
    nullable: true } ]

You can see that the available attributes vary with the database type. The attributes are described in the metaData documentation. The commonly used column name is always available in metaData regardless of the value of extendedMetaData. This is consistent with previous versions. The metadata dbType and fetchType attribute numbers are described in the new DB_TYPE_* constants and the existing node-oracledb type constants, respectively. Your code should use these constants when checking metadata types. Why did we make the extra metadata optional and off by default? Why do the types use numbers instead of strings? We had a lot of debate about common use cases, out-of-box experience, performance etc. and this is the way the cookie crumbled. I know this enhancement will make your applications easier to maintain and more powerful.
Fixed an issue preventing the garbage collector from cleaning up when a query with LOBs is executed but the LOB data isn't actually streamed.

An error is now reported earlier when a named bind object is used in a bind-by-position context. A new error NJS-044 is returned. Previously, errors like ORA-06502 were given, since the expected attributes were not found and bind settings ended up as defaults. You can still use unnamed objects for bind-by-position binds like:

var sql = "begin myproc(:1, :2, :3); end;";
var binds = [ id, name, { type: oracledb.STRING, dir: oracledb.BIND_OUT } ];

Here the third array element is an unnamed object.

Fixed a bug where an error event could have been emitted on a QueryStream instance prior to the underlying ResultSet having been closed. This would cause problems if the user tried to close the connection in the error event handler, as the open ResultSet could have prevented it.

Fixed a bug where the public close method was invoked on the ResultSet instance that underlies the QueryStream instance if an error occurred during a call to getRows. The public method would have thrown an error had the QueryStream instance been created from a ResultSet instance via the toQueryStream method. Now the call to the C layer close method is invoked directly.

Updated Pool._logStats to throw an error instead of printing to the console if the pool is not valid.

Added GitHub Issue and Pull Request templates.

Updated installation instructions for OS X using the new Instant Client 12.1 release.

Added installation instructions for AIX and Solaris x64.

Some enhancements were made to the underlying DPI data access layer.
These were developed in conjunction with a non-node-oracledb consumer of DPI, but a couple of changes lay the groundwork for potential, user-visible, node-oracledb enhancements: allow SYSDBA connections; allow session tagging; allow the character set and national character set to be specified via parameters to the DPI layer; support heterogeneous pools (in addition to the existing homogeneous pools). To reiterate, these are not exposed to node-oracledb.

Resources

Issues and questions about node-oracledb can be posted on GitHub. Your input helps us schedule work on the add-on. Drop us a line! node-oracledb installation instructions are here. Node-oracledb documentation is here.
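To contrast the two bind styles behind the NJS-044 change above, here is a small sketch of mine (not from the release notes). The myproc call is the post's example; the numeric values are stand-ins for the oracledb constants so the snippet runs without the driver installed:

```javascript
// Stand-ins for driver constants so this sketch runs without node-oracledb;
// in a real app use oracledb.STRING and oracledb.BIND_OUT directly.
var oracledb = { STRING: 2001, BIND_OUT: 3003 };

var id = 101, name = "Sales";

// Bind by position: an array, with an unnamed object for the OUT bind.
var sqlByPosition = "begin myproc(:1, :2, :3); end;";
var bindsByPosition = [ id, name, { type: oracledb.STRING, dir: oracledb.BIND_OUT } ];

// Bind by name: an object keyed by placeholder name - often easier to maintain.
var sqlByName = "begin myproc(:id, :name, :msg); end;";
var bindsByName = {
  id:   id,
  name: name,
  msg:  { type: oracledb.STRING, dir: oracledb.BIND_OUT }
};

// Either pair would be passed to connection.execute(sql, binds, callback).
console.log(bindsByPosition.length, Object.keys(bindsByName).length);
```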


General

Using Oracle on OS X? Instant Client 12.1 is here

Oracle Instant Client 12.1 for OS X was just released and is now available for free download from OTN for 32-bit and 64-bit applications. Update: the bundles were re-released 14 June 2016 with a connectivity fix.

Instant Client provides libraries and tools for connecting to Oracle Database. Among other uses, languages such as C, Python, PHP, Ruby, Perl and Node.js can use Instant Client for database connectivity. In addition to Oracle 12.1 client features like auto-tuning, this release also includes an ODBC driver.

The install instructions have been updated to reflect the resolution of the linking issues caused by the OS X El Capitan SIP change that makes DYLD_LIBRARY_PATH be ignored in subprocesses. The ~/lib location required for Instant Client 11.2 on El Capitan is no longer needed with Instant Client 12.1. Note that if you are creating your own apps, you should link with -rpath.

This release of Instant Client supports Mavericks, Yosemite, and El Capitan. Applications can connect to Oracle Database 10.2 or more recent. You should continue using the older 11.2 client if you need to connect to Oracle Database 9.2.

Update: Official installation doc and release notes are now on the doc portal: Oracle Database Online Documentation 12c Release 1 (12.1).

Questions and comments can be posted to the OTN forum for whichever component or tool you are using. General questions about Instant Client are best posted to the OCI Forum. If you are interested in running Oracle Database itself on OS X, see my earlier post The Easiest Way to Install Oracle Database on Mac OS X.


node.js

Node-oracledb 1.9.1 with Promises and Node 6 Support is on NPM

Top features: Promise support. Node 6 support.

The new node-oracledb 1.9.1 release adds Promise support, supports the new Node 6 (as well as previous releases), and has some welcome stability fixes. Other nice goodies are mentioned below. Thanks to everyone who gave feedback on our 1.9.0 development branch - we couldn't have done it without you. node-oracledb 1.9.1 is now available on NPM. Installation instructions are here. Documentation is here.

The changes in node-oracledb 1.9.1 since 1.8:

Promise support was added. All asynchronous functions can now optionally return Promises. When asynchronous functions are passed a callback function parameter, the familiar callback flow is used. If the callback parameter is omitted, then a Promise is returned. There are some examples in the examples directory; see promises.js and webapppromises.js. Node Promises allow a programming paradigm that many Node users find comfortable. Some users had implemented their own Promise wrappers for node-oracledb; a few had even been published on NPM. Now the official implementation makes Promises available to everyone. The default Promise library is used for Node 0.12, 4, 5 and 6. It can be easily overridden if you wish to incorporate your own implementation. This is also useful for Node 0.10, where there is no native library.

A new toQueryStream() method was added for ResultSets. It lets REF CURSORS be fetched via Readable Streams. See the example refcursortoquerystream.js. It can also be used with ResultSets from top-level queries, allowing them to be converted to streams. However, the existing connection.queryStream() method will probably be easier to use in this scenario.

Aliases pool.close() and connection.close() have been added for pool.terminate() and connection.release() respectively. This should make it easier to remember which method to use for releasing a connection, terminating a connection pool, or closing a ResultSet: you can just use close().
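As a hedged illustration of the callback-or-Promise choice described above, the sketch below uses a small stub connection of my own (standing in for one obtained from oracledb.getConnection()) so it runs without a database; the chaining pattern is the same with the real add-on:

```javascript
// Stub connection: execute() returns a Promise when no callback is passed,
// mirroring the node-oracledb 1.9.1 behavior described above.
function makeStubConnection() {
  return {
    execute: function (sql, binds, callback) {
      var result = { rows: [["Chris"], ["Sam"]] };  // canned data for the sketch
      if (typeof callback === 'function') {
        setImmediate(function () { callback(null, result); });  // callback flow
        return undefined;
      }
      return Promise.resolve(result);                           // Promise flow
    },
    close: function () { return Promise.resolve(); }
  };
}

var connection = makeStubConnection();

// Promise flow: omit the callback and chain .then()/.catch().
connection.execute("SELECT first_name FROM employees", [])
  .then(function (result) {
    console.log(result.rows.length + " rows");
    return connection.close();   // the new close() alias
  })
  .catch(function (err) {
    console.error(err.message);
  });
```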
An experimental query stream _close() method was added. It allows query streams to be closed without needing to fetch all the data. It is marked experimental because there isn't good information about how safe interrupting the current Node Streams implementation is. Calling _close() invokes pause(), if necessary. On Node 0.10 you may get a pause event even if you had previously paused the stream; this is because Node 0.10 doesn't support the isPaused() call.

Upgraded to NAN 2.3 for Node 6. This removes the deprecation warnings that Node 6 was generating with the older NAN 2.2. Older versions of Node can still be used, of course.

Mitigated some annoying, seemingly random crashes that occurred if JavaScript objects were garbage collected too early. While many well-programmed apps naturally hold the relevant JavaScript objects until no longer required, other apps don't. Premature garbage collection could affect these latter scripts if they finished quickly or when under load. A similar change was made to Lob buffers used during LOB insertion.

Memory improvements: Fixed some memory leaks when using ResultSets. Fixed a memory leak with the Pool queue timer map. Fixed memory release logic when querying LOBs and an error occurs. Removed an extra call to getRows() made by queryStream() at end-of-fetch.

Error changes: Some method parameter validation checks, such as the number or types of parameters, now throw errors synchronously instead of returning errors via the callback. Improved validation for fetchInfo usage. Increased the internal buffer size for Oracle Database error messages. Altered some node-oracledb NJS-xyz error message text for consistency.

The test suite is no longer automatically installed when installing with npm install oracledb from NPM. The test suite was (and will be) getting bigger and bigger. It is not being used by most people, so installing it automatically is wasteful. You can still run the tests using a GitHub clone. The updated test README has instructions for this. To allow tests to be moved and run anywhere, we left the tests' require('oracledb') calls without a path prefix, so you may need to set NODE_PATH as described.

Fixed a symbol name redefinition warning for DATA_BLOB when compiling on Windows.

Overall, node-oracledb 1.9.1 is a significant milestone with the addition of Promise support, the support for Node 6, and the general resiliency changes. I'm very happy with this release and would encourage upgrading to it.

What's Next?

There are still lots of enhancements for node-oracledb on the wish list. Please let us know your priorities. Issues and questions about node-oracledb can be posted on GitHub. Your input helps us schedule work on the add-on. Drop us a line!


node.js

Getting a C++11 compiler for Node 4, 5 and 6 on Oracle Linux 6

A newer compiler is needed on Oracle Linux 6 when you want to use add-ons like node-oracledb with Node 4 or later. This is because add-ons for those versions need to be built with a C++11-compatible compiler. The default compiler on OL 6 doesn't have this support. OL 7 does have such a compiler, so these instructions are not needed for that version.

For OL 6 the easiest way to get a new compiler is from the Software Collection Library (SCL). You enable the software collection yum channel, run a yum install command, and then the compiler is immediately available to use. Detailed SCL installation instructions are in the manual. The steps below show how to install node-oracledb on Oracle Linux 6 for Node.js 4 or later.

Enabling the Software Collection Library

If you are using yum.oracle.com (formerly known as public-yum.oracle.com), then edit /etc/yum.repos.d/public-yum-ol6.repo and enable the ol6_software_collections channel:

[ol6_software_collections]
name=Software Collection Library release 1.2 packages for Oracle Linux 6 (x86_64)
baseurl=http://yum.oracle.com/repo/OracleLinux/OL6/SoftwareCollections12/x86_64/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1

If necessary, you can get the latest channel list from http://yum.oracle.com/public-yum-ol6.repo and merge any updates from it into your existing /etc/yum.repos.d/public-yum-ol6.repo file. Alternatively, if your machine has a ULN support subscription, you can subscribe to the Oracle Software Collections 1.2 for Oracle Linux 6 channel in the Manage Subscription page on linux.oracle.com.

Installing the Updated Compiler

Once the channel is enabled, install the updated compiler with:

yum install scl-utils devtoolset-3

This will install a number of packages that comprise the complete, updated tool set.

Installing node-oracledb

Installing node-oracledb on Node 4 (or later) is the same as in the install instructions, but using the new compiler.
The Oracle Linux manual chapter Using the Software Collection Version of a Command shows various ways to enable the dev toolset. In summary, to install node-oracledb on Node 4 or later using Oracle Linux 6, first install an Oracle client such as Instant Client. If you have anything except the Instant Client RPM packages, tell the installer where the libraries and header files are located, for example:

export OCI_LIB_DIR=$HOME/instantclient
export OCI_INC_DIR=$HOME/instantclient/sdk/include

If you are behind a firewall, set your proxy:

export http_proxy=http://my.proxy.example.com:80/

In my development environments I often find some cleanup helps:

which npm && rm -rf $(npm root)/oracledb $(npm root)/nan $HOME/.node-gyp $HOME/.npm \
  && npm cache clean

Now node-oracledb can be installed using the newer compiler:

scl enable devtoolset-3 -- npm install oracledb

You can also open a shell and use that to install:

scl enable devtoolset-3 bash
npm install oracledb

Using Node

Now you can use Node:

$ node version.js
node.js version: v4.4.3
node-oracledb version: 10900
node-oracledb text format: 1.9.0
oracle client library version: 1201000200
oracle client library text format: 12.1.0.2.0
oracle database version: 1201000200
oracle database text format: 12.1.0.2.0

$ cat /etc/oracle-release
Oracle Linux Server release 6.7


node.js

node-oracledb 1.9.0-Dev Branch with Promises is on GitHub

Top features: Promise support

node-oracledb 1.9.0-Development is now available as a development-only branch on GitHub. It adds Promise support and some other goodies mentioned below. The plan is to incorporate any user feedback, stabilize the features, improve documentation, improve test coverage, and run stress tests. Once this is all done, a production release to npm will be made. Personally I think this will take 2-4 weeks, but it is totally dependent on what is uncovered by you, the user. Since this is a development branch, features are subject to change. Use GitHub Issue #410 to ask design questions and discuss the branch.

Install by setting OCI_LIB_DIR and OCI_INC_DIR as normal, and running npm install oracle/node-oracledb.git#dev-1.9. Or install from GitHub by cloning the repository, checking out the dev-1.9 branch, setting OCI_LIB_DIR and OCI_INC_DIR, and installing with npm install.

Anyone is welcome to report test results, contribute new tests or update documentation (or code!) to help us get a production release out faster. You do need to have your OCA accepted first before we can look at any pull requests. As you can see, most of the doc is ready, and there are a solid number of tests for new functionality. You can help us by testing your own favorite Promise library, as we are only planning on testing the default Promise implementation in Node 0.12, 4 and 5. Also check out how connections are handled, as you may like a different style - let us know.

In node-oracledb 1.9-dev:

Promise support was added. All asynchronous functions can now return promises. The default Promise library is used for Node 0.12, 4 and 5. It can be easily overridden if you wish to incorporate your own implementation. The current implementation typically requires two promise chains. Let us know what you think. There are solutions that could be made. What do you want?

A new toQueryStream() method was added for ResultSets. It lets REF CURSORS be transformed into Readable Streams. It can also be used to convert ResultSets from top-level queries to streams; however, the existing connection.queryStream() method will probably be easier to use for these queries.

An experimental query stream _close() method was added. It allows query streams to be closed without needing to fetch all the data. It is marked experimental because there isn't good information about whether the current Node Streams implementation really allows interruptions. Open question: should _close() invoke pause() internally?

Aliases pool.close() and connection.close() have been added for pool.terminate() and connection.release() respectively. This should make it easier to remember which method to use for releasing a connection, terminating a connection pool, or closing a ResultSet: you can just use close().

Some method parameter validation checks, such as the number or types of parameters, now throw errors synchronously instead of returning errors via the callback.

Removed an extra call to getRows() made by queryStream() at end-of-fetch.

Some annoying, seemingly random crashes caused by connections being garbage collected while still in use should no longer occur. These were more common in smaller scripts that finished quickly, but could be seen in systems under load.

Resources

Issues and questions about node-oracledb can be posted on GitHub. We value your input to help prioritize work on the add-on. Drop us a line! node-oracledb installation instructions are here. Node-oracledb documentation is here.


node.js

Node-oracledb 1.8 has a streaming mode for queries (Node.js add-on for Oracle Database)

Node-oracledb 1.8.0, the Node.js add-on for Oracle Database, is on NPM. Top new changes: new query result streaming; DRCP connect strings can now be used only with a connection pool.

I want to start by saying thanks to all contributors past and current! It is the community that defines the product.

Query Result Streaming

Node Streams can now be used for queries, after a pull request from Sagie Gur-Ari was merged. A new connection.queryStream() method returns a Readable Stream. Of course, if you prefer callbacks, the existing ResultSet feature can continue to be used.

The new query stream feature is implemented as a wrapper around the ResultSet class. In particular, it uses resultset.getRows() to fetch a subset of data, each row of which will generate a data event. The only reason to understand this is that getRows() takes a parameter to specify how many rows to fetch. We had some (well, a lot of) debate about how to set this value and keep the API simple. For the moment, the value of oracledb.maxRows is used. Note the value does not affect how many rows are returned by queryStream() events, because getRows() will be repeatedly called when more rows are needed; instead, this parameter is used to tune stream performance. In the near future, when PR 361 is reviewed, we may introduce a specialized tuning parameter. We also want to look at ways of interrupting the streams early. But in node-oracledb 1.8, make sure to read to the end of the query to avoid leaking a cursor. The ResultSet close() is executed internally; you do not need to call it to release ResultSet resources.
An example of query streaming is:

var stream = connection.queryStream(
  'SELECT first_name, last_name FROM employees ORDER BY employee_id'
);

stream.on('error', function (error) {
  console.error(error);
  return;
});

stream.on('metadata', function (metadata) {
  console.log(metadata);
});

stream.on('data', function (data) {
  console.log(data);
});

stream.on('end', function () {
  connection.release(function (err) {
    if (err) { console.error(err.message); }
  });
});

There is a runnable example in examples/selectstream.js. Other useful examples are in the test file test/stream1.js.

Use DRCP Connect Strings Only With a Connection Pool

In node-oracledb 1.8 you must now use a connection pool if your connect string requests a DRCP connection. Previously this was just a best practice; now it is enforced. Connect strings that request DRCP connections look either like:

connectString : "mymachine/mydbservice:pooled"

or

connectString : "SALES"

where the SALES connect identifier maps to a tnsnames.ora entry specifying SERVER=POOLED, for example:

SALES=(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=mymachine)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=mydbservice)(SERVER=POOLED)))

If you try oracledb.getConnection(...) you will get an error ORA-56609: Usage not supported with DRCP. Instead use a connection pool; see the node-oracledb Connection Pooling documentation:

oracledb.createPool(
  {
    user          : "hr",
    password      : "welcome",
    connectString : "mymachine/mydbservice:pooled"
  },
  function (err, pool) {
    pool.getConnection(function (err, connection) {
      . . . // use connection
    });
  });

In the unlikely case where the Node process is short-lived and you really, really just want a single connection, create a pool with a single session; the createPool() option attributes can be:

poolMax: 1, poolMin: 1, poolIncrement: 0

Millisecond Precisions

A pull request from Antonio Bustos has helped make some tests more portable by removing some time ambiguity. The Oracle DATE data type does not contain milliseconds.
If an application inserts a JavaScript date with milliseconds, the DATE tests will now only compare the non-millisecond date components to validate results.

Windows Debug Builds

Kubo Takehiro (who is the maintainer of the popular Ruby ruby-oci8 extension) spotted a hierarchy problem with the node-oracledb Windows Debug build options. The binding.gyp entry has now been fixed.

Other Changes

The driver name is now set to "node-oracledb : 1.8.0". This is visible to DBAs, for example in the V$SESSION_CONNECT_INFO view. It lets DBAs see what apps, and what versions, are connecting to the database. The pool queue stats now show the start time of the pool. A couple of parameter check bugs were also fixed. See CHANGELOG for details.

Resources

Issues and questions about node-oracledb can be posted on GitHub. We value your input to help prioritize work on the add-on. Drop us a line! node-oracledb installation instructions are here. Node-oracledb documentation is here.


General

Using SQL*Plus Instant Client 11.2 on OS X El Capitan

The installation steps for using SQL*Plus 11.2 (Instant Client) and other OCI-based applications have necessarily changed since OS X El Capitan broke DYLD_LIBRARY_PATH. The updated instructions are given on the Instant Client Downloads for Mac OS X (Intel x86) page.

Update: Instant Client 12.1 is out and installation is easier. Check out the above link for instructions.

Here are the 11.2 instructions repeated. The steps also work on earlier versions of OS X:

Download the desired Instant Client ZIP files from OTN. All installations require the Basic or Basic Lite package.

Open a terminal and unzip the packages into a single directory such as "~/instantclient_11_2". For example, to use SQL*Plus:

cd ~
unzip instantclient-basic-macos.x64-11.2.0.4.0.zip
unzip instantclient-sqlplus-macos.x64-11.2.0.4.0.zip

Create the appropriate libclntsh.dylib link for the version of Instant Client. For example:

cd ~/instantclient_11_2
ln -s libclntsh.dylib.11.1 libclntsh.dylib

Note: OCCI programs will additionally need:

ln -s libocci.dylib.11.1 libocci.dylib

Add links to "~/lib" for required Basic package libraries. For example, to use OCI programs (including SQL*Plus, Python's cx_Oracle, PHP's OCI8, Node.js's node-oracledb, and Ruby's ruby-oci8 driver):

mkdir ~/lib
ln -s ~/instantclient_11_2/libclntsh.dylib.11.1 ~/lib/
ln -s ~/instantclient_11_2/{libnnz11.dylib,libociei.dylib} ~/lib/

To run SQL*Plus, add its libraries to "~/lib", and update PATH. For example:

ln -s ~/instantclient_11_2/{libsqlplus.dylib,libsqlplusic.dylib} ~/lib/
export PATH=~/instantclient_11_2:$PATH


node.js

node-oracledb 1.7.0 has a connection pool queue (Node.js add-on for Oracle Database)

Node-oracledb 1.7.0, the Node.js add-on for Oracle Database, is on NPM. Top features: a new connection pool queue to make apps more resilient, and "bind by position" syntax for PL/SQL Index-by array binds. This release has a couple of interesting changes as well as some small bind fixes. A few reported build warnings with some compilers were also squashed.

Extended PL/SQL Index-by Array Bind Syntax

To start with, a follow-up PR from @doberkofler completes his PL/SQL Index-by array binding support project. In node-oracledb 1.7 he has added "bind by position" syntax to the already existing "bind by name" support. Thanks Dieter! The "bind by position" syntax looks like:

connection.execute(
  "BEGIN mypkg.myinproc(:id, :vals); END;",
  [
    1234,
    { type: oracledb.NUMBER, dir: oracledb.BIND_IN, val: [1, 2, 23, 4, 10] }
  ],
  function (err) { . . . });

Personally I'd recommend using bind by name for clarity, but this PR makes the feature congruent with binding scalar values, which is always a good thing. Documentation is at PL/SQL Collection Associative Array (Index-by) Bind Parameters.

New Transparent JavaScript Wrapper for Existing Classes

The other major change in 1.7 is a new JavaScript wrapper over the current node-oracledb C++ API implementation, courtesy of some community discussion and the direction that users seemed to have been heading in: creating similar wrappers. It was also the result of some 'above and beyond' overtime from Dan McGhan, who did the project. This wrapper should be transparent to most users. It gives a framework that will make it easier to extend node-oracledb in a consistent way, and also lets developers who know JavaScript better than C++ contribute to node-oracledb.

New Connection Pool Queue Enabled by Default

The new layer has let Dan add his first new user feature: a request queue for connection pooling. It is enabled by a new Boolean pool attribute queueRequests.
If a pool.getConnection() request is made but there are no free connections (aka sessions) in the pool, the request will now be queued until an in-use connection is released back to the pool. At that time the first request in the queue will be dequeued, and the underlying C++ implementation of pool.getConnection() will be called to return the now-available connection to the waiting requester.

A second new pool attribute, queueTimeout, uses setTimeout to automatically dequeue and return an error for any request that has been waiting in the queue too long. The default value is 60000 milliseconds, i.e. 60 seconds. In normal cases, when requests are dequeued because a connection does become available, the timer is stopped before the underlying C++ layer gets called to return the connection.

The pool queue is enabled by default. If it is turned off, you get pre-1.7 behavior. For example, if more requests are concurrently thrown at an app than the poolMax value, then some of the pool.getConnection() calls would likely return an error ORA-24418: Cannot open further sessions. When enabled, the new queue nicely stops this error occurring and lets apps be more resilient.

The pool option attribute _enableStats turns on lightweight gathering of basic pool and queue statistics. It is false by default. If it is enabled, applications can output stats to the console by calling pool._logStats() whenever needed. I think it will be wise to monitor the queue statistics to make sure your pool configuration is suitable for the load. You don't want the queue to be an invisible bottleneck when too many pool.getConnection() requests end up in the queue for too long. Statistics and the API may change in future, so the attribute and method have an underscore prefix to indicate they are internal.

Connection Queue Example

To look at an example, I used ab to throw some load at an app based on examples/webapp.js. I used a load concurrency of 25 parallel requests.
The pool had a maximum of 20 sessions in its pool. The extra load was nicely handled by the connection queue without the application experiencing any connection failures. I'd modified the app to check for a particular URL and dump statistics on request:

. . .
var hs = http.createServer(function (request, response) {
  var urlparts = request.url.split("/");
  var arg = urlparts[1];
  if (arg === 'stats') {
    pool._logStats();
  }
. . .

Here is a snapshot of the output from _logStats() at one point during the test:

Pool statistics:
...total connection requests: 26624147
...total requests enqueued: 5821874
...total requests dequeued: 5821874
...total requests failed: 0
...total request timeouts: 0
...max queue length: 6
...sum of time in queue (milliseconds): 13920717
...min time in queue (milliseconds): 0
...max time in queue (milliseconds): 1506
...avg time in queue (milliseconds): 2
...pool connections in use: 12
...pool connections open: 20
Related pool attributes:
...queueRequests: true
...queueTimeout (milliseconds): 0
...poolMin: 10
...poolMax: 20
...poolIncrement: 10
...poolTimeout: 0
...stmtCacheSize: 30
Related environment variables:
...process.env.UV_THREADPOOL_SIZE: undefined

The connection pool was semi-arbitrarily configured for testing. It started out with 10 sessions open (poolMin) and, as soon as they were in use, the pool would have grown by another 10 sessions (poolIncrement) to the maximum of 20 (poolMax).

What the stats show is that not all pool.getConnection() requests could get a pooled connection immediately. About 20% of requests ended up waiting in the queue. The connection pool poolMax is smaller than optimal for this load. The queue was never large; it never had more than 6 requests in it. This is within expectations, since there are at least 5 more concurrent requests at a time than there are connections available in the pool. If this were a real app, I might decide to increase poolMax so no pool.getConnection() call ever waited.
(I might also want to set poolTimeout so that when the pool was quiet, it could shrink, freeing up DB resources.) However, the average wait time of 2 milliseconds is small. If I don't have DB resources to handle the extra sessions from a bigger pool, I might decide that a 2 millisecond wait is OK and that the pool size is fine. At least one connection spent 1.5 seconds in the queue. Since I know my test infrastructure, I'm guessing this was when the pool ramped up in size and my small, loaded DB took some time to create the second set of 10 sessions. Maybe I should experiment with a smaller poolIncrement or bigger poolMin?

Another important variable shown in the stats is UV_THREADPOOL_SIZE. I'd not set it, so there were the default four worker threads in the Node process. Blindly increasing poolMax may not always help throughput. If DB operations take some time, you might find all threads get blocked waiting for their respective DB response. Increasing UV_THREADPOOL_SIZE may help improve application throughput. The best settings for pool configuration, UV_THREADPOOL_SIZE, and any DRCP pool size will depend on your application and environment.

Connection Pool Queue Statistics

The list below describes the node-oracledb 1.7 pool statistics. These stats and the APIs to enable and log them may change in future versions of node-oracledb. I look forward to getting some PRs, for example to add a standard logging capability which the stats generation can be part of.

total connection requests: Number of pool.getConnection() calls made for this pool.

total requests enqueued: Number of connection requests that couldn't be satisfied because every session in the pool was already being used, and so had to be queued waiting for a session to be returned to the pool.

total requests dequeued: Number of connection requests that were removed from the queue when a free connection became available. This is triggered when the app has finished with a connection and calls release() to return it to the pool.

total requests failed: Number of connection calls that invoked the underlying C++ pool.getConnection() callback with an error state. Does not include queue request timeout errors.

total request timeouts: Number of connection requests that were waiting in the queue but exceeded the queueTimeout setting. The timeout is triggered with a JavaScript setTimeout call.

max queue length: Maximum number of connection requests that were ever waiting at one time.

sum of time in queue: Total sum of time that connection requests have been waiting in the queue.

min time in queue: Smallest amount of time that any request waited in the queue.

max time in queue: Longest amount of time that any request waited in the queue.

avg time in queue: Derived from the sum of time value divided by the number of requests enqueued.

pool connections in use: A metric returned by the underlying Oracle C client session pool implementation. It is the number of currently active connections in the connection pool.

pool connections open: Also returned by the underlying library. It shows the number of currently open sessions in the underlying connection pool.

Note that the sum of time in queue, the min time in queue and the max time in queue values are calculated when queued requests are removed from the queue, so they don't record the amount of time for requests still waiting in the queue.

Resources

Issues and questions about node-oracledb can be posted on GitHub. We value your input to help prioritize work on the add-on. Drop us a line! node-oracledb installation instructions are here. Node-oracledb documentation is here.
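As a worked check of the derived statistic, the avg time in queue figure in the snapshot above can be reproduced from the raw counters. The helper below is my own sketch of the calculation the description implies, not code from the add-on:

```javascript
// Average queue wait as described above: sum of time in queue divided by
// the number of requests enqueued (0 if nothing was ever queued).
function avgTimeInQueue(stats) {
  if (stats.totalRequestsEnqueued === 0) { return 0; }
  return stats.sumTimeInQueue / stats.totalRequestsEnqueued;
}

// The figures from the _logStats() snapshot shown earlier:
var snapshot = {
  totalRequestsEnqueued: 5821874,
  sumTimeInQueue: 13920717   // milliseconds
};

console.log(Math.round(avgTimeInQueue(snapshot)) + " ms average wait");
// Matches the reported "avg time in queue (milliseconds): 2".
```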


node.js

Node-oracledb: Avoiding "ORA-01000: maximum open cursors exceeded"

Developers starting out with Node have to get to grips with the 'different' programming style of JavaScript that seems to cause methods to be called when least expected! While you are still in the initial hacking-around-with-node-oracledb phase you may sometimes encounter the error ORA-01000: maximum open cursors exceeded. A cursor is "a handle for the session-specific private SQL area that holds a parsed SQL statement and other processing information". Here are things to do when you see an ORA-1000:

Avoid having too many incompletely processed statements open at one time:
- Close ResultSets before releasing the connection.
- If cursors are opened with dbms_sql.open_cursor() in a PL/SQL block, close them before the block returns - except for REF CURSORS being passed back to node-oracledb. (And if a future node-oracledb version supports Oracle Database 12c Implicit Result Sets, these cursors should likewise not be closed in the PL/SQL block.)
- Make sure your application is handling connections and statements in the order you expect.

Choose an appropriate statement cache size. Node-oracledb has a statement cache per connection. When node-oracledb internally releases a statement it is put into the statement cache of that connection, and its cursor remains open. This makes statement re-execution very efficient. The cache size is settable with the stmtCacheSize attribute. The appropriate statement cache size will depend on your knowledge of the locality of the statements, and of the resources available to the application: are statements re-executed; will they still be in the cache when they get executed; how many statements do you want to be cached? In rare cases when statements are not re-executed, or are likely not to be in the cache, you might even want to disable the cache to eliminate its management overheads. Incorrectly sizing the statement cache will reduce application efficiency. Luckily with Oracle 12.1, the cache can be automatically tuned using an oraaccess.xml file. More information on node-oracledb statement caching is here.

Don't forget to use bind variables, otherwise each variant of the statement will have its own statement cache entry and cursor. With appropriate binding, only one entry and cursor will be needed.

Set the database's open_cursors parameter appropriately. This parameter specifies the maximum number of cursors that each "session" (i.e. each node-oracledb connection) can use. When a connection exceeds the value, the ORA-1000 error is thrown. Documentation on open_cursors is here. Along with a cursor per entry in the connection's statement cache, any new statements that a connection is currently executing, or ResultSets that haven't been released (in neither situation are these yet cached), will also consume a cursor. Make sure that open_cursors is large enough to accommodate the maximum open cursors any connection may have. The upper bound required is stmtCacheSize plus the maximum number of executing statements in a connection. Remember this is all per connection. Also, cache management happens when statements are internally released. The majority of your connections may use fewer than open_cursors cursors, but if one connection is at the limit and it then tries to execute a new statement, that connection will get ORA-1000: maximum open cursors exceeded.
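The open_cursors sizing rule above can be sketched as simple arithmetic. The helper name is hypothetical, and the default stmtCacheSize of 30 is per the node-oracledb docs; treat this as an illustration, not an API:

```javascript
// Illustrative helper: the upper bound on cursors one connection may hold is
// its statement cache size plus the statements (and unreleased ResultSets)
// executing concurrently on that connection.
function maxCursorsPerConnection(stmtCacheSize, maxConcurrentStatements) {
  return stmtCacheSize + maxConcurrentStatements;
}

// With the default stmtCacheSize of 30 and up to 5 statements in flight,
// the database's open_cursors should be at least 35 for that connection.
console.log(maxCursorsPerConnection(30, 5));  // 35
```

Remember the bound is per connection: a pool of 10 such connections does not need open_cursors of 350, because open_cursors itself is a per-session limit.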


node.js

node-oracledb 1.6.0 is on NPM (Node.js add-on for Oracle Database)

Node-oracledb 1.6.0, the Node.js add-on for Oracle Database, is on NPM. In this release a comprehensive pull request by Dieter Oberkofler adds support for binding PL/SQL Collection Associative Array (Index-by) types. Strings and numbers can now be bound and passed to and from PL/SQL blocks. Dieter tells us that nowadays he only gets to code for a hobby - keep it up Dieter! Using PL/SQL Associative Arrays can be a very efficient way of transferring data between an application and the database because it can reduce the number of 'round trips' between the two. As an example, consider this table and PL/SQL package:

CREATE TABLE mytab (numcol NUMBER);

CREATE OR REPLACE PACKAGE mypkg IS
  TYPE numtype IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
  PROCEDURE myinproc(p IN numtype);
END;
/

CREATE OR REPLACE PACKAGE BODY mypkg IS
  PROCEDURE myinproc(p IN numtype) IS
  BEGIN
    FORALL i IN INDICES OF p
      INSERT INTO mytab (numcol) VALUES (p(i));
  END;
END;
/

With this schema, the following JavaScript will result in mytab containing five rows:

connection.execute(
  "BEGIN mypkg.myinproc(:bv); END;",
  {
    bv: { type: oracledb.NUMBER,
          dir: oracledb.BIND_IN,
          val: [1, 2, 23, 4, 10] }
  },
  function (err) { . . . });

There is a fuller example in examples/plsqlarray.sql and check out the documentation. Other changes in node-oracledb 1.6 are:

- @KevinSheedy sent a GitHub Pull Request for the README to help the first time reader have the right pre-requisites and avoid the resulting pitfalls.
- Fixed a LOB problem causing an uncaught error to be generated.
- Removed the 'close' event that was being generated for LOB Writable Streams. The Node.js Streams doc specifies it only for Readable Streams.
- Updated the LOB examples to show connection release.
- Extended the OS X install section with a way to install on El Capitan that doesn't need root access for Instant Client 11.2. Thanks to @raymondfeng for pointing this out.
- Added RPATH to the link line when building on OS X in preparation for a future client.
TypeScript users will be happy to hear that Richard Natal recently had a node-oracledb TypeScript type definition file added to the DefinitelyTyped project. This is not part of node-oracledb itself, but Richard later mentioned he found a way it could be incorporated. Hopefully he will submit a pull request and it will make it directly into the project so it can be kept in sync. Thanks to everyone who has worked on this release and kept the momentum going. What's coming up for the next release? There is discussion about adding a JavaScript layer. This was kicked off by a pull request from Sagie Gur-Ari, which has led to some work by Oracle's Dan McGhan. See the discussion and let us know what you think. Having this layer could make it quicker and easier for JavaScript coders to contribute to node-oracledb and do things like reduce API inconsistency, make it easier to add a promise API in future, and of course provide a place to directly add Sagie's streaming query result suggestion that started the whole thing. I know a few contributors have recently submitted the Oracle Contributor Agreement, ready to do big and small things - every bit counts. I look forward to being able to incorporate your work. I've heard a couple of reports that Node LTS 4.2.6 on Windows is having some issues building native add-ons. 0.10, 0.12, 5.x and 4.2.5 don't have issues. Drop me a line if you encounter a problem. Issues and questions about node-oracledb can be posted on GitHub. We value your input to help prioritize work on the add-on. Drop us a line! node-oracledb installation instructions are here. Node-oracledb documentation is here.


node.js

node-oracledb 1.5.0 is on NPM (Node.js add-on for Oracle Database)

Node-oracledb 1.5.0, the Node.js add-on for Oracle Database, is on NPM. A number of bugs have been squashed in this release.

We now treat Oracle Database 'Success With Info' warnings as success. Thanks to Francisco Trevino for his pull request. After investigating and discussing, we decided for 1.5 to pick up the straightforward fix proposed. In a future release we will revisit allowing these warnings to be caught and handled.

Extended rollback-on-connection-release with 11g Oracle Clients to occur for all non-query executions. The natural behavior of OCI is to commit when a connection is released. This is the opposite of node-oracledb, which therefore has to determine whether to rollback or not. When node-oracledb is linked with an 11g client, a heuristic is used to guess whether to rollback when a connection is released. This heuristic needed to be changed to cover more cases. The result is that there will sometimes be some unnecessary rollbacks issued. The bug didn't occur when node-oracledb was linked with 12c client libraries, because that code path uses a new API available in 12c to indicate whether a connection has a transaction open. Bottom line: use Oracle 12c client libraries if possible to get optimal behavior.

Updated OS X install instructions to work on El Capitan. The instructions now use symbolic links in /usr/local/lib for the Oracle client libraries. This removes the need to set DYLD_LIBRARY_PATH, which has some restrictions on it introduced in El Capitan.

Display an error and prevent connection release while database calls are in progress. This was a bigger change that 'fixed' a number of seemingly random crashes which occurred when applications released connections that were in fact still in use. Node-oracledb will now print an error and not release the connection, thus preventing a crash. Note that since the release fails, connection pools can max out in this scenario. If you experience the errors NJS-030, NJS-031 or NJS-032 you should fix your app so the connection release occurs after all database operations have concluded. The new messages are:

"NJS-030: Connection cannot be released because Lob operations are in progress"
"NJS-031: Connection cannot be released because ResultSet operations are in progress"
"NJS-032: Connection cannot be released because a database call is in progress"

Fixed an intermittent crash while selecting data from a CLOB column. We had an incorrect buffer expansion ratio in use. This has been fixed.

Fixed a crash when trying to set invalid values for connection properties. Enough said.

Work for node-oracledb 1.6 will begin. We are trying to reproduce and understand some reported LOB issues and memory growth reports. We're also looking forward to evaluating a big pull request from Dieter Oberkofler that adds PL/SQL bind support. Issues and questions about node-oracledb can be posted on GitHub. We value your input to help prioritize work on the add-on. Drop us a line! node-oracledb installation instructions are here. Node-oracledb documentation is here.
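To illustrate the condition behind these errors, here is a mock-only sketch (entirely hypothetical, not node-oracledb's implementation) of a connection that refuses release while a database call is in flight:

```javascript
// Mock illustration only: the real add-on tracks in-progress database calls
// internally and refuses to release a connection mid-call rather than crash.
function MockConnection() {
  this.callInProgress = false;
}

MockConnection.prototype.beginCall = function () { this.callInProgress = true; };
MockConnection.prototype.endCall = function () { this.callInProgress = false; };

MockConnection.prototype.release = function () {
  if (this.callInProgress) {
    throw new Error("NJS-032: Connection cannot be released because a database call is in progress");
  }
  return "released";
};

var conn = new MockConnection();
conn.beginCall();             // a database call starts
// calling conn.release() here would throw the NJS-032 error
conn.endCall();               // the call completes
console.log(conn.release());  // safe to release now
```

The practical fix is the same as the advice above: structure your callbacks so release() runs only after every execute, Lob and ResultSet operation has finished.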


node.js

node-oracledb 1.4.0 supports Node 0.10, 0.12, 4.2 and 5 (Node.js add-on for Oracle Database)

Version 1.4 of node-oracledb, the add-on for Node.js that powers high performance Oracle Database applications, is available on NPM. Since the recent releases of Node 4 LTS and Node 5, after the reconciliation and merge of the io.js and Node.js forks, there has been strong interest in a compatible node-oracledb driver. This is now possible. Node-oracledb 1.4 works with Node.js 0.10, 0.12, 4.2, and 5.0. Thanks to everyone for their perseverance. The code change making this possible was a patch contributed by Richard Natal bumping the dependency on NAN from v1 to v2. Note: a compiler with support for C++11 is required to build with Node 4.2 and 5. (Oracle Linux 6 users will need to move to Oracle Linux 7 or install a newer compiler, such as from the Software Collection Library for Oracle Linux.) Other changes in this release are: Fixed a statement cursor leak occurring when statements failed. Fixed a crash accessing Pool properties on Windows. A new testWindows target to help run the tests on Windows was added to package.json. Instructions on how to run tests are in test/README.md. Fixed compilation warnings seen on some platforms with newer compilers. Issues and questions about node-oracledb can be posted on GitHub. node-oracledb installation instructions are here. node-oracledb documentation is here.


node.js

node-oracledb 1.2.0 is on NPM (Node.js add-on for Oracle Database)

Version 1.2 of node-oracledb, the add-on for Node.js that powers high performance Oracle Database applications, is available on NPM. A lot of good changes have been made. Our thanks to Bruno Jouhier from Sage for his work on adding RAW support and for fixes for LOB stability. Thanks also go to Bill Christo for pushing us on some installation topics - look out for his full article on Windows installation that OTN will be publishing soon. An annotated list of the changes in this release:

- Added support for the RAW data type. Bruno contributed a patch to add support for the Oracle RAW datatype. This data type maps to a JavaScript Buffer for inserts, queries and for binding to PL/SQL. Binding RAW for DML RETURNING is not supported. There is an example showing inserting and querying in examples/raw1.js.
- Added a type property to the Lob class to distinguish CLOB and BLOB types. This small change will allow introspection on Lob instances so applications can more easily decide how to handle the data.
- Changed write-only attributes of Connection objects to work with console.log(). The Connection object had three write-only attributes (action, module, clientId) used for end-to-end tracing and mid-tier authentication. Because they were write-only, anyone doing a simple console.log() on the connection object got a confusing message, often leading to the impression that the connection had failed. The attributes are write-only for the reasons described in the documentation. With the change in v1.2, a Connection object can now be displayed. The three attributes will show as null (see the doc) while the non-write-only attribute stmtCacheSize will show an actual value. With hindsight the three attributes should have been set via a setter, but they aren't.
- Added a check to make sure maxRows is greater than zero for non-ResultSet queries. If you want to get metaData for a query without getting rows, specify resultSet: true and prefetchRows: 0 in the query options (and remember to close the ResultSet).
- Improved installer messages for Oracle client header and library detection on Linux, OS X and Solaris. Some upfront checks now aid detection of invalid environments earlier.
- Optimized CLOB memory allocation to account for different database-to-client character set expansions. In line with the optimization for string buffers in v1.1, users of AL32UTF8 databases will see reduced memory consumption when fetching CLOBs.
- Fixed a crash while reading a LOB from a closed connection.
- Fixed a crash when selecting multiple rows with LOB values. Another fix by Bruno.
- Corrected the order of Stream 'end' and 'close' events when reading a LOB. Bruno was busy this release and sent in a pull request for this too.
- Fixed AIX-specific REF CURSOR related failures.
- Fixed an intermittent crash while setting fetchAsString, and incorrect output while reading the value.
- Added a check to return an NJS error when an invalid DML RETURN statement does not give an ORA error.
- Removed non-portable memory allocation for queries that return NULL.
- Fixed encoding issues with several files that caused compilation warnings in some Windows environments.
- Made installation halt sooner for Node.js versions currently known to be unusable.
- Fixed a typo in examples/dbmsoutputgetline.js.

Issues and questions about node-oracledb can be posted on GitHub. We value your input to help prioritize work on the add-on. Drop us a line! Installation instructions are here. Node-oracledb documentation is here.
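Since RAW maps to a Node.js Buffer, here is a quick standalone look at the Buffer side (no database needed; Buffer.from is the modern spelling - code of this era used new Buffer() - and the actual bind follows the node-oracledb docs):

```javascript
// RAW columns map to Node.js Buffers: the bytes you put in the Buffer are
// the bytes stored in the RAW column.
var data = Buffer.from("A722BC", "hex");  // three bytes: 0xA7 0x22 0xBC

console.log(data.length);           // 3
console.log(data.toString("hex"));  // 'a722bc'
```

A Buffer like this can then be passed as a bind value for an INSERT into a RAW column, or bound to a PL/SQL RAW parameter, as the release notes describe.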


node.js

node-oracledb 1.1.0 is on NPM (Node.js add-on for Oracle Database)

Version 1.1 of node-oracledb, the add-on for Node.js that powers high performance Oracle Database applications, is available on NPM. This is a stabilization release, with one improvement to the behavior of the local connection pool. The add-on now checks whether pool.release() should automatically drop sessions from the connection pool. This is triggered by conditions where the connection is deemed to have become unusable. A subsequent pool.getConnection() will, of course, create a new, replacement session if the pool needs to grow. Just as we were about to release, we identified an issue with lobPrefetchSize. Instead of delaying the release, we have temporarily made setting this attribute a no-op. The changes in this release are:

- Enhanced pool.release() to drop the session if it is known to be unusable, allowing a new session to be created.
- Optimized query memory allocation to account for different database-to-client character set expansions.
- Fixed build warnings on Windows with VS 2015.
- Fixed a truncation issue while fetching numbers as strings.
- Fixed AIX-specific failures with queries and RETURNING INTO clauses.
- Fixed a crash with NULL or uninitialized REF CURSOR OUT bind variables.
- Fixed a potential memory leak when connecting throws an error.
- Added a check to throw an error sooner when a CURSOR type is used for IN or IN OUT binds. (Support is pending.)
- Temporarily disabled setting lobPrefetchSize.

Issues and questions about node-oracledb can be posted on GitHub or OTN. We need your input to help us prioritize work on the add-on. Drop us a line! Installation instructions are here.


node.js

Using DBMS_OUTPUT with Node.js and node-oracledb

The DBMS_OUTPUT package is the standard way to "print" output from PL/SQL. The way DBMS_OUTPUT works is like a buffer. Your Node.js application code turns on DBMS_OUTPUT buffering, calls some PL/SQL code that puts text into the buffer, and then later fetches from that buffer. Note: any PL/SQL code that calls DBMS_OUTPUT runs to completion before any output is available to the user. Also, other database connections cannot access your buffer. A basic way to fetch DBMS_OUTPUT with node-oracledb is to bind an output string when calling the PL/SQL dbms_output.get_line() procedure, print the string, and then repeat until there is no more output. Another way that I like is to wrap the dbms_output.get_line() call into a pipelined function and fetch the DBMS_OUTPUT using a SQL query. The following code shows both methods.

/*
  NAME
    dbmsoutput.js

  DESCRIPTION
    Shows two methods of displaying PL/SQL DBMS_OUTPUT in node-oracledb.
    The second method depends on these PL/SQL objects:

      create or replace type dorow as table of varchar2(32767);
      /
      show errors

      create or replace function mydofetch return dorow pipelined is
        line varchar2(32767);
        status integer;
      begin
        loop
          dbms_output.get_line(line, status);
          exit when status = 1;
          pipe row (line);
        end loop;
        return;
      end;
      /
      show errors
*/

'use strict';

var async = require('async');
var oracledb = require('oracledb');
var dbconfig = require('./dbconfig.js');

oracledb.createPool(
  dbconfig,
  function(err, pool) {
    if (err)
      console.error(err.message);
    else
      doit(pool);
  });

var doit = function(pool) {
  async.waterfall(
    [
      function(cb) {
        pool.getConnection(cb);
      },

      // Tell the DB to buffer DBMS_OUTPUT
      enableDbmsOutput,

      // Method 1: Fetch a line of DBMS_OUTPUT at a time
      createDbmsOutput,
      fetchDbmsOutputLine,

      // Method 2: Use a pipelined query to get DBMS_OUTPUT
      createDbmsOutput,
      function(conn, cb) {
        executeSql(conn, "select * from table(mydofetch())", [], { resultSet: true }, cb);
      },
      printQueryResults
    ],
    function (err, conn) {
      if (err) {
        console.error("In waterfall error cb: ==>", err, "<==");
      }
      if (conn) {   // guard added: conn is undefined if getConnection failed
        conn.release(function (err) {
          if (err)
            console.error(err.message);
        });
      }
    });
};

var enableDbmsOutput = function (conn, cb) {
  conn.execute(
    "begin dbms_output.enable(null); end;",
    function(err) { return cb(err, conn); });
};

var createDbmsOutput = function (conn, cb) {
  conn.execute(
    "begin "
    + "dbms_output.put_line('Hello, Oracle!');"
    + "dbms_output.put_line('Hello, Node!');"
    + "end;",
    function(err) { return cb(err, conn); });
};

var fetchDbmsOutputLine = function (conn, cb) {
  conn.execute(
    "begin dbms_output.get_line(:ln, :st); end;",
    { ln: { dir: oracledb.BIND_OUT, type: oracledb.STRING, maxSize: 32767 },
      st: { dir: oracledb.BIND_OUT, type: oracledb.NUMBER } },
    function(err, result) {
      if (err) {
        return cb(err, conn);
      } else if (result.outBinds.st == 1) {
        return cb(null, conn);  // no more output
      } else {
        console.log(result.outBinds.ln);
        return fetchDbmsOutputLine(conn, cb);
      }
    });
};

var executeSql = function (conn, sql, binds, options, cb) {
  conn.execute(
    sql, binds, options,
    function (err, result) {
      if (err)
        cb(err, conn);
      else
        cb(null, conn, result);
    });
};

var printQueryResults = function(conn, result, cb) {
  if (result.resultSet) {
    fetchOneRowFromRS(conn, result.resultSet, cb);
  } else if (result.rows && result.rows.length > 0) {
    console.log(result.rows);
    return cb(null, conn);
  } else {
    console.log("No results");
    return cb(null, conn);
  }
};

function fetchOneRowFromRS(conn, resultSet, cb) {
  resultSet.getRow(   // note: getRows would be more efficient
    function (err, row) {
      if (err) {
        cb(err, conn);
      } else if (row) {
        console.log(row);
        fetchOneRowFromRS(conn, resultSet, cb);
      } else {
        cb(null, conn);
      }
    });
}

The output is:

Hello, Oracle!
Hello, Node!
[ 'Hello, Oracle!' ]
[ 'Hello, Node!' ]

I used resultSet.getRow() for simplicity, but you will probably want to use resultSet.getRows() for efficiency.
If you want to buffer all the output in the Node.js application, Bruno Jouhier has a nice implementation to build up an array of query output in his GitHub gist query-all.js.


node.js

Node-oracledb goes 1.0: The Node.js add-on for Oracle Database

Announcement

Today Oracle released node-oracledb 1.0, the Node.js add-on to enable high performance Oracle Database applications. Node-oracledb is available from npmjs.com and GitHub. Each month or so, since our first code bundle was pushed to GitHub earlier this year, we released a node-oracledb update with new functionality. The adoption has been exciting, with important applications already in production. This is our eighth release of node-oracledb and promises to be our best received so far. The node-oracledb 1.0 add-on for Node.js supports standard and advanced features:

- SQL and PL/SQL execution
- Fetching of large result sets
- REF CURSORs
- Large Objects: CLOBs and BLOBs
- Smart mapping between JavaScript and Oracle types, with manual mapping also available
- Query results as JavaScript objects or arrays
- Data binding using JavaScript objects or arrays
- Transaction management
- Inbuilt connection pooling
- Database Resident Connection Pooling (DRCP)
- External authentication
- Row prefetching
- Statement caching
- Client Result Caching
- End-to-end tracing
- High availability features: Fast Application Notification (FAN), Runtime Load Balancing (RLB), Transparent Application Failover (TAF)

Oracle enhances, maintains and supports node-oracledb via open source channels (i.e. GitHub), similar to Oracle Database drivers for other open source languages. The add-on is under the Apache 2.0 license.

Where to get node-oracledb

The Oracle Technology Network Node.js Developer Center has all the links and information you need to start using node-oracledb. To jump start, follow these instructions to install node-oracledb.

Changes since the previous release

The major changes in node-oracledb 1.0 since the previous release are: The Stream interface for CLOB and BLOB types was implemented, adding support for LOB queries, inserts, and PL/SQL LOB bind variables.
As well as being needed for working with many legacy schemas, having LOB support lets application developers use Oracle Database 12.1.0.2's JSON data type without running into the length limitation of VARCHAR2 storage. Customers have been contacting me, what seems like every day, asking when LOB support would be available, and pleading for early access. Here it is, and it looks great. We'll be continuing to run load tests, benchmark it, and to enhance it. To see how to use LOBs with node-oracledb, check out the node-oracledb Lob documentation and LOB examples. General information about Oracle Database JSON support can be found in the documentation or on the JSON team blog. Added Oracledb.fetchAsString and a new execute() property fetchInfo to allow numbers, dates, and ROWIDs to be fetched as strings. These features, available at the application level (for dates and numbers), and per-statement level (for dates, numbers and ROWIDs), can help overcome JavaScript limitations of representation and conversion. Added support for binding DATE, TIMESTAMP, and TIMESTAMP WITH LOCAL TIME ZONE as the DATE type for DML RETURNING (aka RETURNING INTO). You can also bind these types as STRING. The internal Oracle client character set is now always set to AL32UTF8. There's no longer a need to set it externally via NLS_LANG. A related bug with multibyte data reported by users was fixed by correcting the allocation of some internal buffers. Overall the NLS experience is much more consistent. The test suite's and example database credentials can now be set via environment variables. A small change to help testing in automatically provisioned environments. Our test suite already has great coverage numbers, and will continue to be enhanced in future releases. Bug fixes to node-oracledb. These are listed in the CHANGELOG. What next?
Being an open source project in a dynamically changing environment, our statement of direction has been a brief, flexible goal: We are actively working on supporting Oracle Database features, and on functionality requests from users involved in the project. Our priority list is re-evaluated for each point release. So now we have version 1.0, what next? This is just the start. There are plenty of important and interesting tasks in front of us. We will begin with a review of the project, from our development processes, the driver functionality, right through to distribution. This review will determine our next tasks. Hearing from users is crucial for prioritization, so don't hesitate to comment at GitHub. Node.js is undergoing a surge of change at the moment, with the io.js re-merger, and the formation of the Node.js Foundation. As the merged Node.js code base stabilizes and the Foundation's LTS plans solidify, we will be able to be more formal about node-oracledb's schedule. We will work with Node.js and with partners to bring you the best experience. (On a technical note, the V2 release of the compatibility layer NAN was made in the last few days, too late for us to incorporate in node-oracledb 1.0. So, support of the latest, bleeding edge io.js will be in a future node-oracledb version.) Let me wrap up this announcement by appreciating the growing node-oracledb community, particularly those who have contributed to node-oracledb with code, suggestions and discussions.
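The fetchAsString and fetchInfo features mentioned in the release notes exist because JavaScript numbers are IEEE-754 doubles; this standalone snippet (no database needed) shows the representation limit they work around:

```javascript
// JavaScript integers lose precision beyond Number.MAX_SAFE_INTEGER (2^53 - 1),
// which is why large Oracle NUMBER values are safer fetched as strings.
var big = "9007199254740993";  // 2^53 + 1, exact when kept as a string

console.log(Number(big) === 9007199254740992);  // true: the converted value was rounded
console.log(big === "9007199254740993");        // true: the string keeps every digit
```

Fetching such values as strings, either globally via Oracledb.fetchAsString or per statement via fetchInfo, sidesteps the rounding entirely.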


node.js

Installing node-oracledb on OS X with Oracle Instant Client 11.2.0.4

----> This note and script is obsolete. The current node-oracledb OS X install instructions contain nicer solutions

I've been hacking an Apple OS X shell script to install node-oracledb. You tell it where your Instant Client libraries and headers ZIP packages are. It then installs node-oracledb, resulting in an instantclient directory and a node_modules directory. This automates the instructions Node-oracledb Installation on OS X with Instant Client. My osxinstall.sh script can be seen here. [Now removed] I was investigating how to avoid needing to set DYLD_LIBRARY_PATH. I wanted to find how to replicate the use of rpath, which is available for node-oracledb on Linux. A standard install on OS X needs DYLD_LIBRARY_PATH set, otherwise Node.js will fail with the error:

cjones@cjones-mac:~/n$ node select1.js

/Users/cjones/n/node_modules/oracledb/lib/oracledb.js:28
    throw err;
          ^
Error: dlopen(/Users/cjones/n/node_modules/oracledb/build/Release/oracledb.node, 1): Library not loaded: /ade/b/3071542110/oracle/rdbms/lib/libclntsh.dylib.11.1
  Referenced from: /Users/cjones/n/node_modules/oracledb/build/Release/oracledb.node
  Reason: image not found
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
    at Module.require (module.js:364:17)
    at require (module.js:380:17)
    at Object.<anonymous> (/Users/cjones/n/node_modules/oracledb/lib/oracledb.js:23:15)
    at Module._compile (module.js:456:26)
    at Object.Module._extensions..js (module.js:474:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
    at Module.require (module.js:364:17)

So, I was playing with osxinstall.sh to see how to circumvent this. Before running osxinstall.sh, edit it and set the paths to where the Instant Client 11.2.0.4 'basic' and 'sdk' ZIP files are located on your filesystem, see IC_BASIC_ZIP and IC_SDK_ZIP. (You can download Instant Client from OTN. Use the 64-bit packages.)
You also specify the target application directory you are using, see TARGET_DIR. This is where the components are installed into. Update https_proxy if you are behind a firewall, otherwise comment it out. If you have various node_modules directories around, then npm might end up installing oracledb in an unexpected place and the script will error. The key bit of osxinstall.sh that I was interested in is:

# For Oracle Instant Client 11.2.0.4: these are the default paths we will change
IC_DEF1=/ade/b/3071542110/oracle/rdbms/lib
IC_DEF2=/ade/dosulliv_ldapmac/oracle/ldap/lib
. . .
# Warning: work in progress - may not be optimal
chmod 755 $OCI_LIB_DIR/*dylib $OCI_LIB_DIR/*dylib.11.1
install_name_tool -id libclntsh.dylib.11.1 $OCI_LIB_DIR/libclntsh.dylib.11.1
install_name_tool -change $IC_DEF2/libnnz11.dylib $OCI_LIB_DIR/libnnz11.dylib \
    $OCI_LIB_DIR/libclntsh.dylib.11.1
install_name_tool -id libnnz11.dylib $OCI_LIB_DIR/libnnz11.dylib
install_name_tool -change $IC_DEF1/libclntsh.dylib.11.1 \
    $OCI_LIB_DIR/libclntsh.dylib.11.1 $OCI_LIB_DIR/libociei.dylib
install_name_tool -change $IC_DEF1/libclntsh.dylib.11.1 \
    $OCI_LIB_DIR/libclntsh.dylib.11.1 $NODE_ORACLEDB_LIB
chmod 555 $OCI_LIB_DIR/*dylib $OCI_LIB_DIR/*dylib.11.1

This changes the library install and identification names using install_name_tool. Note this tool cannot allocate more space for path names than currently exists. My code is a work in progress; I may work out a better way, perhaps using libtool. Comments & suggestions welcome. The script does more than most people probably need. In future even I might only run parts extracted from it. If you are new to node-oracledb, check out its install and API documentation on GitHub. You may also be interested in reading The Easiest Way to Install Oracle Database on Mac OS X.

node.js

node-oracledb 0.7.0 now supports Result Sets and REF CURSORS

A new release of the Node.js driver for Oracle Database is now on npmjs.com and GitHub. node-oracledb 0.7 connects Node.js 0.10, Node.js 0.12, and io.js to Oracle Database. It runs on a number of platforms. For more information about node-oracledb see the node-oracledb GitHub page.

The changes in 0.7 are:

Added result set support for fetching large data sets. Rows from queries can now be fetched in batches using a ResultSet class. This allows large query results to be fetched without requiring all values to be in memory at once. New getRow() and getRows() methods can be called repeatedly to scroll through the query results. The original node-oracledb behavior of returning all rows at once remains the default. To return a ResultSet, use the new execute() option { resultSet: true }. For example:

```javascript
// (See the full code in examples/resultset2.js)
. . .
var numRows = 10;  // number of rows to return from each call to getRows()

connection.execute(
  "SELECT employee_id, last_name FROM employees ORDER BY employee_id",
  [],                   // no bind variables
  { resultSet: true },  // return a result set.  Default is false
  function(err, result) {
    if (err) { . . . }
    fetchRowsFromRS(connection, result.resultSet, numRows);
  });
. . .

function fetchRowsFromRS(connection, resultSet, numRows)
{
  resultSet.getRows(  // get numRows rows
    numRows,
    function (err, rows) {
      if (err) {
        . . .  // close the result set and release the connection
      } else if (rows.length == 0) {  // no rows, or no more rows
        . . .  // close the result set and release the connection
      } else if (rows.length > 0) {
        console.log(rows);
        fetchRowsFromRS(connection, resultSet, numRows);  // get next set of rows
      }
    });
}
```

It's important to use the new ResultSet close() method to close the result set when no more data is available or required. There is more information on Result Sets in the manual.

Added REF CURSOR support for returning query results from PL/SQL.
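The batched-fetch pattern above can be exercised without a database. The sketch below is only an illustration: makeFakeResultSet, drain, and their data are invented stand-ins, not part of the node-oracledb API; only the getRows(numRows, callback) calling pattern mirrors the real example.

```javascript
// A toy ResultSet with the same getRows(numRows, callback) shape as
// node-oracledb's ResultSet.  All names here are illustrative only.
function makeFakeResultSet(allRows) {
  var pos = 0;
  return {
    getRows: function (numRows, callback) {
      var batch = allRows.slice(pos, pos + numRows);
      pos += batch.length;
      callback(null, batch);  // no error; return next batch (may be empty)
    },
    close: function (callback) { callback(null); }
  };
}

// The same recursive drain loop as fetchRowsFromRS in the post.
function drain(resultSet, numRows, acc, done) {
  resultSet.getRows(numRows, function (err, rows) {
    if (err) {
      resultSet.close(function () { done(err); });
    } else if (rows.length === 0) {          // no more rows: close and finish
      resultSet.close(function () { done(null, acc); });
    } else {
      acc.push.apply(acc, rows);             // accumulate this batch
      drain(resultSet, numRows, acc, done);  // get next set of rows
    }
  });
}

var rs = makeFakeResultSet([[1, 'KING'], [2, 'CLARK'], [3, 'SCOTT']]);
drain(rs, 2, [], function (err, rows) {
  console.log(rows.length);  // prints 3: all rows fetched, two per batch
});
```

The key property the recursion relies on is that an empty batch, not an error, signals the end of the data, which is why the rows.length check comes before the non-empty branch.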
PL/SQL code that returns REF CURSOR results via bind parameters can now bind a new node-oracledb type Oracledb.CURSOR and fetch the results using the new ResultSet class.

```javascript
// (See the full code in examples/refcursor.js)
var oracledb = require('oracledb');
. . .
var numRows = 10;  // number of rows to return from each call to getRows()

var bindvars = {
  sal:    6000,
  cursor: { type: oracledb.CURSOR, dir: oracledb.BIND_OUT }
}

connection.execute(
  "BEGIN get_emp_rs(:sal, :cursor); END;",  // The PL/SQL has an OUT bind of type SYS_REFCURSOR
  bindvars,
  function(err, result) {
    if (err) { . . . }
    fetchRowsFromRS(connection, result.outBinds.cursor, numRows);
  });
. . .

function fetchRowsFromRS(connection, resultSet, numRows)
{
  resultSet.getRows(  // get numRows rows
    numRows,
    function (err, rows) {
      if (err) {
        . . .  // close the result set and release the connection
      } else if (rows.length == 0) {  // no rows, or no more rows
        . . .  // close the result set and release the connection
      } else if (rows.length > 0) {
        console.log(rows);
        fetchRowsFromRS(connection, resultSet, numRows);  // get next set of rows
      }
    });
}
```

There is more information on using REF CURSORS in the manual.

Added row prefetching support. The new ResultSet class supports prefetching via a new attribute oracledb.prefetchRows and a new execute() option prefetchRows. Each time the application fetches query or REF CURSOR rows in a ResultSet from Oracle Database, prefetching allows the underlying Oracle libraries to transfer extra rows. This allows better use of database and network resources, improving performance and scalability. Regardless of the prefetch size, the number of rows returned to the application does not change. Buffering is handled by the underlying Oracle client library. The default prefetch size is 100 extra rows. Applications should tune the prefetch size used by each execute() for desired performance and/or to avoid allocating and initializing unused memory. There are some more tips in the manual.
With node-oracledb 0.7.0, non-ResultSet queries now use prefetching with a fixed size of 2. This should reduce the number of round-trips required for these queries.

Added a test suite. Yay! See the README in the tests directory for how to run the tests. When you run the test suite, you'll notice each test has a unique number for ease of identification. The numbers are not necessarily sequential. We do most testing on Linux and Windows. If you see test output differences due to environment or version differences, please sign the OCA and submit a pull request with the fix and an explanation of why it is needed. See CONTRIBUTING. If you submit new tests (after signing the OCA), assign each one a unique number in the documented range that applies to the area being tested.

Fixed error handling for SQL statements using RETURNING INTO. A bug causing all errors with DML RETURNING statements to report the same error message has been fixed.

Fixed INSERT of a date when the SQL has a RETURNING INTO clause. When inserting a date or timestamp, and the SQL had a RETURNING INTO clause for character or number columns, an error was being thrown. This has been fixed.

Renumbered the values used by the Oracledb constants. If your application uses constant names such as Oracledb.OBJECT or Oracledb.BIND_INOUT then you won't notice the change. However, if for some reason code has hardcoded numbers like 2, you will have to update it to use the new numbers (see lib/oracledb.js). Or, better, change the code to use the constants' names.
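The renumbering risk can be pictured without the driver itself. In this toy sketch, the two constant tables and their numeric values are entirely invented; the point is only that code comparing against a symbolic name keeps working across a renumbering, while code comparing against a hardcoded literal silently breaks.

```javascript
// Hypothetical constant tables standing in for two driver versions.
// The names mirror node-oracledb's style; the numeric values are made up.
var v06 = { ARRAY: 1,    OBJECT: 2 };     // old numbering (illustrative)
var v07 = { ARRAY: 4001, OBJECT: 4002 };  // new numbering (illustrative)

function isObjectFormat(lib, outFormat) {
  return outFormat === lib.OBJECT;  // symbolic comparison: version-proof
}

// Using the constant's name works against either table:
console.log(isObjectFormat(v06, v06.OBJECT));  // prints true
console.log(isObjectFormat(v07, v07.OBJECT));  // prints true

// A hardcoded literal only matches one numbering:
console.log(v06.OBJECT === 2);  // prints true under the old numbering
console.log(v07.OBJECT === 2);  // prints false: the literal no longer matches
```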


python

Python cx_Oracle 5.2 driver for Oracle Database has been released

Anthony Tuininga just released an updated Python cx_Oracle 5.2 driver for Oracle Database. This release brings a number of enhancements, many of them for Oracle Database 12c features such as longer VARCHARs. cx_Oracle 5.2 is the first release Oracle has contributed code for (thanks Avinash!) so we're pretty happy all around. And a big thank you to all the other contributors and users who have made this release possible.

The new code features are:

- Support for Oracle Database 12c strings up to 32k characters.
- Support for LOB values larger than 4 GB.
- Support for Oracle Database 12c array DML row counts.
- Support for fetching batch errors.
- Support for connections as SYSASM.
- Added types NCHAR, FIXED_NCHAR and LONG_NCHAR to replace the types UNICODE, FIXED_UNICODE and LONG_UNICODE (which are now deprecated). These types are available in Python 3 as well, so they can be used to specify the use of NCHAR type fields when binding or using setinputsizes().
- Support for building without any configuration changes to the machine when using Instant Client RPMs on Linux.
- Fixed session releasing to the pool when calling connection.close() (Issue #2).
- Fixed binding of booleans in Python 3.x.
- Added __version__ attribute to conform with PEP 396.
- Fixed handling of datetime intervals (Issue #7).

The complete release notes are here.

My favorite feature is the installation improvement. (Disclaimer: I contributed the initial implementation!) With this change, Instant Client RPMs on Linux can now be used. The best bit is cx_Oracle will automatically locate Instant Client and will then also automatically build using rpath. The installation of cx_Oracle on Linux is now as simple as installing the Instant Client Basic and SDK RPMs, and running 'pip install cx_Oracle'. No need to set ORACLE_HOME during installation. No need to set LD_LIBRARY_PATH at runtime. If you have a Linux ULN support subscription you can install Instant Client via yum, which makes it even simpler.
Users of Database Resident Connection Pooling will like the connection.close() fix, since it allows DRCP to be used effectively without requiring a cx_Oracle session pool. In summary, the cx_Oracle 5.2 release adds great features to the already impressive Oracle Database support available to Python applications. Application development and deployment just got better.

Resources:

- cx_Oracle documentation
- cx_Oracle source code repository
- cx_Oracle mail list

Thanks again to Anthony, Avinash and all the contributors who have made cx_Oracle so good.


node.js

node-oracledb 0.5.0 is on NPM (Node.js driver for Oracle Database)

It's been a long time between drinks, so we are bringing out the 0.5.0 Preview Release version of the Node.js driver for Oracle Database, warts and all.

- Changed the isAutoCommit attribute name to autoCommit.
- Changed the isExternalAuth attribute name to externalAuth.

These two small attribute name changes break backwards compatibility with node-oracledb 0.4.2. A simple search and replace in your code base is all that is needed to migrate to 0.5.0. We wanted to make these changes while the driver is still technically in a Preview Release status. I warned about the change in a GitHub post.

- Fixed outBinds array counting to not give empty array entries for IN binds. When a bind-by-position-style array of values is passed in as the execute() function bind parameter, the resulting outBinds array used to contain empty positions corresponding to each IN bind. Now it just has the OUT binds. Your code may need updating.
- Added support for "DML RETURNING" bind variables, such as UPDATE test SET NAME = 'abc' RETURNING id INTO :idbv. See the documentation for details. Make sure that maxSize is big enough for any STRING OUT binds, otherwise you will get an error. We're continuing to investigate some edge cases with this code but wanted to get it out for early adopters to look at. Again, this is a preview release.
- Rectified the error message for invalid type properties.
- To help you keep track of changes, a CHANGELOG file was added.

For more information about node-oracledb see the node-oracledb GitHub page.
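The maxSize advice for STRING OUT binds can be sketched with a small validation helper. Everything here is illustrative: checkOutBind and the BIND_OUT marker are invented names, not driver APIs; only the { dir, type, maxSize } bind-object shape echoes the node-oracledb style described in the post.

```javascript
// Illustrative only: a helper that checks an OUT-bind specification of the
// kind used with DML RETURNING, mirroring the maxSize advice in the post.
var BIND_OUT = 'out';  // stand-in marker; the real driver uses a constant

function checkOutBind(spec) {
  if (spec.dir === BIND_OUT && spec.type === 'STRING' &&
      (!spec.maxSize || spec.maxSize <= 0)) {
    throw new Error('maxSize must be set for STRING OUT binds');
  }
  return spec;
}

// A RETURNING-INTO style bind set: one IN bind and two checked OUT binds.
var binds = {
  name: 'abc',                                                      // IN bind
  idbv: checkOutBind({ dir: BIND_OUT, type: 'NUMBER' }),            // ok
  msg:  checkOutBind({ dir: BIND_OUT, type: 'STRING', maxSize: 40 }) // ok
};
console.log(Object.keys(binds).length);  // prints 3
```

Omitting maxSize on the STRING OUT bind is exactly the case the helper rejects, which is the error condition the post warns about.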


node.js

Running Carsten Czarski's node-oracledb WebSocket Example

My colleague Carsten Czarski recently presented on the node-oracledb driver for Node.js. One of his demos used WebSockets. It was a live demo, not captured in slides. I thought I'd explain how I got it to run in my Oracle Linux 64-bit environment.

Download and extract the Node 0.10.36 bundle from here. (At time of writing, the node-oracledb driver requires Node.js 0.10). Add the bin directory to your PATH, for example:

```
$ export PATH=/opt/node-v0.10.36-linux-x64/bin:$PATH
```

Download and install the 'basic' and 'devel' Instant Client RPMs from OTN:

```
# rpm -ivh oracle-instantclient12.1-basic-12.1.0.2.0-1.x86_64.rpm
# rpm -ivh oracle-instantclient12.1-devel-12.1.0.2.0-1.x86_64.rpm
```

Download Carsten's demo code from here and extract it:

```
$ cd /home/cjones
$ mkdir wsdemo
$ cd wsdemo
$ mv $HOME/Downloads/nodejs-beispielprogramme.zip .
$ unzip nodejs-beispielprogramme.zip
```

Create a new package.json file:

```
{
  "name": "ccwebsockets",
  "version": "1.0.0",
  "description": "Carsten's WebSocket Demo application using node-oracledb 0.3.1.",
  "scripts": {
    "start": "node 05-websockets.js"
  },
  "dependencies": {
    "oracledb": "oracle/node-oracledb#v0.3.1",
    "websocket": "^1.0",
    "express": "^4.11"
  }
}
```

Edit 05-websockets.js and change the database credentials at line 111. The schema needs to have the EMP table.
```
oracledb.createPool(
  {
    user          : "scott",
    password      : "tiger",
    connectString : "localhost/pdborcl",
    poolMin       : 5,
    poolMax       : 10
  },
```

Also in 05-websockets.js, change the path name at line 65 to your current directory name:

```
filename = path.join("/home/cjones/wsdemo", uri);
```

Use npm to automatically install the node-oracledb driver and the "websocket" and "express" dependencies listed in package.json:

```
$ npm install
```

To run the demo, use the package.json script "start" target to load 05-websockets.js:

```
$ npm start
```

The server will start:

```
> ccwebsockets@1.0.0 start /home/cjones/wsdemo
> node 05-websockets.js
Websocket Control Server listening at http://0.0.0.0:9000
Database connection pool established
```

Open a couple of browser windows to http://127.0.0.1:9000/html/websocket.html. These are the clients listening for messages. The output is the starting point of the demo.

Let's send a message to those clients. Open a third browser window for the URL http://127.0.0.1:9000/update/CLARK. The two listening windows will be updated with the "message" containing the query result payload. My screenshot shows this, and also has evidence that I had previously visited http://127.0.0.1:9000/update/KING:

You might have noticed the screenshots were made on OS X. If you are not on Linux, refer to INSTALL to see how to install Node.js and node-oracledb. The package.json file I created will download node-oracledb 0.3.1 so you don't need to manually get it from GitHub. You will have to set OCI_LIB_DIR and OCI_INC_DIR during installation, and then set LD_LIBRARY_PATH, DYLD_LIBRARY_PATH or PATH when you want to run node.

You can follow Carsten at @cczarski.


node.js

node-oracledb 0.3.1 is on GitHub (Node.js driver for Oracle Database)

On behalf of the development team, I have merged some new features and fixes to node-oracledb. Updates for node-oracledb 0.3.1:

- Added Windows build configuration. See Node-oracledb Installation on Windows. Thanks to Rinie Kervel for submitting a pull request, and thanks to all those who commented and tested.
- Added Database Resident Connection Pooling (DRCP) support. See the API Documentation for the Oracle Database Node.js Driver: "Database Resident Connection Pooling enables database resource sharing for applications that run in multiple client processes or run on multiple middle-tier application servers. DRCP reduces the overall number of connections that a database must handle." DRCP is distinct from node-oracledb's local connection pool. The two pools can be used separately, or together.
- Made an explicit connection release() do a rollback, to be consistent with the implicit release behavior.
- Made install on Linux look for Oracle libraries in a search order: using install-time environment variables $OCI_LIB_DIR and $OCI_INC_DIR; in the highest version Instant Client RPMs installed; in $ORACLE_HOME; in /opt/oracle/instantclient.
- Added RPATH support on Linux, so LD_LIBRARY_PATH doesn't always need to be set. See Advanced installation on Linux. The directory name used by the installer for the final attempt at locating an Instant Client directory is now /opt/oracle/instantclient or C:\oracle\instantclient. This path may be used if OCI_LIB_DIR and OCI_INC_DIR are not set and the installer has to guess where the libraries are.
- Added a compile error message "Oracle 11.2 or later client libraries are required for building" if attempting to build with older Oracle client libraries. This helps developers self-diagnose this class of build problem.
- Fixed setting the isAutoCommit property.
- Fixed a crash using pooled connections on Windows.
- Fixed a crash querying object types.
- Fixed a crash doing a release after a failed terminate.
(The pool is still unusable - this will be fixed later.)
- Clarified documentation that terminate() doesn't release connections. Doing an explicit release() of each pooled connection that is no longer needed is recommended to avoid resource leaks and maximize pool usage.
- Updated version to 0.3.1 (surprise!)
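The release-before-terminate advice can be sketched with a stand-in pool. makePool and its open-connection bookkeeping are invented for illustration and are not the node-oracledb API; only the calling pattern - release each connection you obtained, then terminate the pool - reflects what the post recommends.

```javascript
// Toy pool mirroring node-oracledb's callback shapes (illustrative only).
function makePool() {
  var open = 0;  // count of connections handed out and not yet released
  return {
    getConnection: function (cb) {
      open++;
      cb(null, { release: function (cb2) { open--; cb2(null); } });
    },
    terminate: function (cb) {
      // terminate() does not release connections for you
      cb(open === 0 ? null : new Error('connections still open'));
    }
  };
}

var pool = makePool();
pool.getConnection(function (err, conn) {
  // ... use the connection ...
  conn.release(function (err2) {        // explicit release when done
    pool.terminate(function (err3) {    // now the pool can be terminated
      console.log(err3 === null);       // prints true: nothing leaked
    });
  });
});
```

Skipping the release() call would leave the toy pool's counter non-zero and make terminate() report an error, which is the resource-leak situation the documentation clarification warns about.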


General

A Personal Victory: Oracle Database Sample Schemas are on GitHub

For anyone who ever deleted a row from a table in Oracle's Sample HR schema and wanted it back, help is nearby. You no longer have to download the full "Oracle Database 12c Release 1 Examples" zip (499,228,127 bytes worth for the Linux bundle) and run the Oracle installer. Now you can clone our GitHub db-sample-schema repository and run the creation SQL scripts in SQL*Plus.

This new repository installs these six sample schemas:

- HR: Human Resources
- OE: Order Entry
- PM: Product Media
- IX: Information Exchange
- SH: Sales History
- BI: Business Intelligence

Because of the widespread use of these schemas, we made minimal changes to the bundle. The install, as it is given, installs all schemas and needs to be done on a database server, since file system access is needed from the database. But now, if you want, you can fork the repo and modify it to install just the HR schema from a client machine. Or change your fork to install the HR schema into an arbitrary user name of your choice so multiple people can test the same data set. And what about modifying the script to do DROP TRIGGER SECURE_EMPLOYEES, getting rid of that annoying time-based trigger which yells 'You may only make changes during normal office hours' if you try to make changes after 6pm or on weekends? This may be a great teaching tool about triggers, but it is not useful when you are configuring demonstrations for big conferences late into the night!

And why is this a personal victory? Because, as a client tools person, the difficulty of finding these schema creation scripts has irked me in the past. The HR schema replaced SCOTT/TIGER in the Oracle documentation a long time ago but was not easily available to use. I've written a lot of examples using HR but never had a good way to instruct how to install the schema. I'm glad to have helped (being partially modest here about the legal and administrative things it required) getting this small set of scripts out on GitHub.
If it makes it easier for someone to talk about features or issues by reference to a common data set, then my job is done. Having the scripts readily available is also a reminder to the Oracle community to share information and knowledge efficiently. Even as we head to a world of cloneable databases and snapshots, sometimes it is just easier to run a SQL script. This repo is a piece of a jigsaw, and it can be used where it fits. The schemas could be now considered "traditional". In future, Oracle Database teams will continue to create fresh data sets to show off newer and upcoming database features, such as these analytical-sql-examples that you might be interested in.

