Wednesday May 06, 2015

Schemaless Application Development with ORDS, JSON and SODA

There's a lot to talk about when it comes to JSON support in Oracle Database 12c. A big part of that story is Oracle REST Data Services 3.0, which was just released on May 4th. But there's more to it. Oracle Database 12c also includes a key set of APIs: Simple Oracle Document Access (SODA). And it's important.

Here's the deal:

Oracle Database supports storing, indexing and querying JSON documents in the database. But the picture is completed by document-centric APIs for accessing those JSON documents.

Introducing Simple Oracle Document Access (SODA)

SODA is a set of APIs designed specifically to support schemaless application development.

There are two SODA implementations:

  1. SODA for Java-- a programmatic document-store interface for Java developers that uses JDBC to communicate with the database. SODA for Java consists of a set of simple classes that represent a database, a document collection and a document. Methods on these classes provide all the functionality required to manage and query collections and work with JSON documents stored in an Oracle Database.

  2. SODA for REST-- a REST-based document store interface implemented as a Java servlet and delivered as part of Oracle REST Data Services (ORDS) 3.0. Applications based on SODA for REST use HTTP to communicate with the Java Servlet. The SODA for REST Servlet can also be run under the database's native HTTP Server. HTTP verbs such as PUT, POST, GET, and DELETE map to operations over JSON documents. Because SODA for REST can be invoked from any programming or scripting language that is capable of making HTTP calls, it can be used with all modern development environments and frameworks.

Want to see it in action?

Check out the oracle/json-in-db repository on GitHub. You'll find downloadable and installable demonstrations of the JSON capabilities of Oracle Database 12c and later.

And get more details about Oracle as a Document Store from OTN.

Join the ORDS discussion space on the OTN Community Platform here.

Ciao for Now!

Wednesday Dec 31, 2014

Top 5 Oracle Database Tech Releases of 2014

It was a great year for Oracle Technology... with so many great new features and products for Oracle DBAs, data scientists and developers. Here are the Top 5 new technologies that we rolled out, with the resources you need to get started... from your friends here at the OTN Watercooler.

  • Oracle Database 12c -- ...with the long awaited In-Memory option, plus 21 new features. Oracle Database 12c Release supports Linux and Oracle Solaris (SPARC and x86 64 bit). See the full list of features with direct links to docs and resources. 
And if you're ready to start your upgrade journey to Oracle Database 12c, be sure to check out this Upgrade Starter's Guide and follow the Oracle Upgrade NOW blog.
    • Oracle Database In-Memory -- Oracle Database In-Memory delivers leading-edge in-memory performance without the need to restrict functionality or accept compromises, complexity and risk. Deploying Oracle Database In-Memory with virtually any existing Oracle Database compatible application is as easy as flipping a switch--no application changes are required. It is fully integrated with Oracle Database's scale-up, scale-out, storage tiering, availability and security technologies making it the most industrial-strength offering in the industry.

      This whitepaper Oracle Database In-Memory gives you more detail about the new option available only in Oracle Database 12c, and if you want to get started, check out this article series:

    • Oracle Key Vault is a software appliance designed to securely manage encryption keys and credential files in the enterprise data center. It provides secure, centralized management of encryption keys and credential files, including Oracle wallet files, Java KeyStores, Kerberos keytab files, SSH key files, and SSL certificate files.  Want to get started?  Here's what you need to know.

    • Oracle Big Data SQL -- Oracle's unique approach to providing unified query over data in Oracle Database, Hadoop, and select NoSQL datastores. Read more about why SQL is becoming the "GoTo" language for Big Data Analysis.

    • Oracle Zero Data Loss Recovery Appliance -- Launched in October at OOW2014, the Oracle Zero Data Loss Recovery Appliance is a terrific addition to the Oracle Engineered Systems Portfolio. With its support for real-time redo transport, the appliance is designed to bring "Data Guard-like", robust data protection to all the Oracle Databases in the data center. It's a sweet ride... Get the full story here.

      But as with most "Top" lists, there are many more items to include. So Special Mention goes to:

        JSON Support in Oracle 12c -- Oracle Database 12c supports JSON natively with relational database features, including transactions, indexing, declarative querying, and views. You can project JSON data relationally, making it available for relational processes and tools. You can also query, from within the database, JSON data that is stored outside the database, in an external table. Get more information about JSON support in Oracle Database 12c. You can start with the XML DB Developer's Guide.

      Happy New Year and

      Ciao for Now!


      Monday Aug 25, 2014

      Oracle Key Vault Option is Here.

      Finally, a centralized way to manage all the encryption keys and credential files in the data center.   

      Critical credential files such as Oracle wallet files, Java KeyStores, Secure Shell (SSH) key files, and Secure Sockets Layer (SSL) certificate files are often widely distributed across servers and server clusters that use error-prone synchronization and backup mechanisms. As organizations increasingly encrypt data at rest and on the network, securely managing all the encryption keys and credential files in the data center has become a major challenge.

      How do you comply with stringent regulatory requirements for managing keys and certificates and ensure that keys are routinely rotated, properly destroyed, and accessed solely by authorized entities?

      Oracle Key Vault is a software appliance designed to securely manage encryption keys and credential files in the enterprise data center. It provides secure, centralized management of encryption keys and credential files in the data center, including Oracle wallet files, Java KeyStores, Kerberos keytab files, SSH key files, and SSL certificate files.

      Want to get started?  Here's what you need to know:

      Q: Where can I download the software for Oracle Key Vault?

      A: Oracle Key Vault can be downloaded from Oracle Software Delivery Cloud.

      Go to Oracle Software Delivery Cloud and select Product Pack: Oracle Key Vault Media Pack v1.

      Q: What are the recommended hardware specifications?

      A: CPU: minimum 2 x86-64 cores; recommended 2+ cores with cryptographic acceleration support (Intel® AES-NI). Memory: minimum 4 GB of RAM. Disk: minimum 500 GB hard disk.

      Hardware Compatibility: Refer to the hardware compatibility list (HCL) for Oracle Linux Release 5 Update 10. The HCL is available at

      Q: How does the software appliance install work?

      A: Oracle Key Vault is packaged as a software appliance, which means it contains everything, including the operating system, needed to install the product on bare hardware.

      During installation, the installer completely takes over the hardware. In addition to partitioning and formatting the disks, it installs the base OS, user-space libraries, Oracle Database, Oracle Key Vault software, etc. It configures all of the software (OS, networking, database) automatically and with minimal user involvement. It hardens the operating system, network, database, and more according to hardening best practices. It removes unnecessary packages and software and disables unused services and ports.

      Q: Can I deploy the Oracle Key Vault software appliance on Windows or Solaris?

      A: Oracle Key Vault can only be deployed on bare metal. Any existing OS and software, including Windows or Solaris, will be removed by the install process. Note that this applies only to the Oracle Key Vault appliance and is independent of the OS for the server endpoint.

      Q: Can I run Oracle Key Vault on Oracle Virtual Machine?

      A: For testing or proof of concept purposes, Oracle Key Vault can be run in Oracle VM or Oracle VirtualBox. However, for production deployment, Oracle Key Vault should be installed on dedicated physical hardware; otherwise VM administrators may be able to gain access to underlying keys and secrets stored inside Oracle Key Vault.

      Q: Can I install Oracle Key Vault on Oracle Database Appliance (ODA) or Exadata?

      A: No, at this time Oracle Key Vault is not certified with the Oracle Database Appliance or Exadata. Oracle Key Vault can however be used to manage keys used by ODA or Exadata.

      Find out more on the Oracle Key Vault page on OTN.

      Ciao for Now!


      Thursday Aug 14, 2014

      Did You Say "JSON Support" in Oracle?

      Yes, we did.  Here's why:

      JSON is practically a subset of the object literal notation of JavaScript, so it can be used to represent JavaScript object literals. This means JSON can serve as a data-interchange language. Although it was defined in the context of JavaScript, JSON is in fact a language-independent data format. A variety of programming languages can parse and generate JSON data.

      Additionally, JSON can often be used in JavaScript programs without requiring parsing or serializing. It is a text-based way of representing JavaScript object literals, arrays, and scalar data. JSON is easy for software to parse and generate. It is often used for serializing structured data and exchanging it over a network, typically between a server and web applications.

      JSON data has often been stored in NoSQL databases such as Oracle NoSQL Database and Oracle Berkeley DB. These allow for storage and retrieval of data that is not based on any schema, but they do not offer the rigorous consistency models of relational databases. You can get around this by using a relational database in parallel with a NoSQL database, but applications using JSON data stored in the NoSQL database must then ensure data integrity themselves.

      So for these reasons (and maybe a few more) Oracle Database 12c supports JSON natively with relational database features, including transactions, indexing, declarative querying, and views. Oracle Database queries are declarative, so you can join JSON data with relational data. And you can project JSON data relationally, making it available for relational processes and tools. You can also query, from within the database, JSON data that is stored outside the database, in an external table.
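      To make that concrete, here is a minimal sketch of native JSON storage and relational projection. The table and column names (po, doc) are hypothetical, not from any Oracle sample schema:

        -- The IS JSON check constraint makes the database enforce JSON syntax:
        CREATE TABLE po (id NUMBER, doc VARCHAR2(4000) CHECK (doc IS JSON));
        INSERT INTO po VALUES (1, '{"PONumber": 1600, "User": "ABULL"}');

        -- Project JSON fields relationally, via dot notation or JSON_VALUE:
        SELECT p.doc.PONumber FROM po p;
        SELECT JSON_VALUE(doc, '$.User') FROM po;

      Because the result is an ordinary SQL column, it can be joined, indexed, and fed to any relational tool.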

      And, it's good to know you can access JSON data stored in the database the same way you access other database data, including using OCI, .NET, and JDBC.

      Get more information about JSON support in Oracle Database 12c. You can start with the XML DB Developer's Guide (I DID!).

      Ciao for Now!


      Thursday Jul 24, 2014

      Oracle Database 12c Release is Here!

      ...with the long awaited In-Memory option, plus 21 new features. Oracle Database 12c Release supports Linux and Oracle Solaris (SPARC and x86 64 bit).

      Get the download here.

      And for those of us who just LOVE *sparknotes, here's a quick index of those new features with quick links from good friends and scribes in the Oracle Doc group.  But first, a cool picture. 

      Sea Dragon photo taken at the Birch Aquarium in San Diego by my friend Joel Broude, a veteran of the Sun Microsystems documentation team. 

      Oracle Database 12c Release New Features 

      Advanced Index Compression

      Advanced Index Compression works well on all supported indexes, including those indexes that are not good candidates for the existing prefix compression feature, such as indexes with no, or few, duplicate values in the leading columns of the index.

      Advanced Index Compression improves the compression ratios significantly while still providing efficient access to the index.

      Approximate Count Distinct

      The new and optimized SQL function, APPROX_COUNT_DISTINCT(), provides approximate count distinct aggregation. Processing of large volumes of data is significantly faster than the exact aggregation, especially for data sets with a large number of distinct values, with negligible deviation from the exact result.

      The need to count distinct values is a common operation in today's data analysis. Optimizing the processing time and resource consumption by orders of magnitude while providing almost exact results speeds up any existing processing and enables new levels of analytical insight.
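      Switching from the exact aggregate to the approximate one is a one-word change. The sales table and cust_id column below are hypothetical:

        -- Exact, but expensive on large data sets:
        SELECT COUNT(DISTINCT cust_id) FROM sales;

        -- Approximate, significantly faster, with negligible deviation:
        SELECT APPROX_COUNT_DISTINCT(cust_id) FROM sales;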

      Attribute Clustering

      Attribute clustering is a table-level directive that clusters data in close physical proximity based on the content of certain columns. This directive applies to any kind of direct path operation, such as a bulk insert or a move operation.

      Storing data that logically belongs together in close physical proximity can greatly reduce the amount of data to be processed and can lead to better compression ratios.

      Automatic Big Table Caching

      In previous releases, in-memory parallel query did not work well when multiple scans contended for cache memory. This feature implements a new cache called big table cache for table scan workload.

      This big table cache provides significant performance improvements for full table scans on tables that do not fit entirely into the buffer cache.

      FDA Support for CDBs

      Flashback Data Archive (FDA) is supported for multitenant container databases (CDBs) in this release.

      Customers can now use Flashback Data Archive in databases that they are consolidating using Oracle Multitenant, providing the benefits of easy history tracking to applications using pluggable databases (PDB) in a multitenant container database.

      Full Database Caching

      Full database caching can be used to cache the entire database in memory. It should be used when the buffer cache size of the database instance is greater than the whole database size. In Oracle RAC systems, for well-partitioned applications, this feature can be used when the combined buffer caches of all instances, with some extra space to handle duplicate cached blocks between instances, is greater than the database size.

      Caching the entire database provides significant performance benefits, especially for workloads that were previously limited by I/O throughput or response time. More specifically, this feature improves the performance of full table scans by forcing all tables to be cached. This is a change from the default behavior in which larger tables are not kept in the buffer cache for full table scans.

      In-Memory Aggregation

      In-Memory Aggregation optimizes queries that join dimension tables to fact tables and aggregate data (for example, star queries) using CPU and memory efficient KEY VECTOR and VECTOR GROUP BY aggregation operations. These operations may be automatically chosen by the SQL optimizer based on cost estimates.

      In-Memory Aggregation improves performance of star queries and reduces CPU usage, providing faster and more consistent query performance and supporting a larger number of concurrent users. As compared to alternative SQL execution plans, performance improvements are significant. Greater improvements are seen in queries that include more dimensions and aggregate more rows from the fact table. In-Memory Aggregation eliminates the need for summary tables in most cases, thus simplifying maintenance of the star schema and allowing access to real-time data.

      In-Memory Column Store

      In-Memory Column Store enables objects (tables or partitions) to be stored in memory in a columnar format. The columnar format enables scans, joins and aggregates to perform much faster than the traditional on-disk formats for analytical style queries. The in-memory columnar format does not replace the on-disk or buffer cache format, but is an additional, transaction-consistent copy of the object. Because the column store has been seamlessly integrated into the database, applications can use this feature transparently without any changes. A DBA simply has to allocate memory to In-Memory Column Store. The optimizer is aware of In-Memory Column Store, so whenever a query accesses objects that reside in the column store and would benefit from its columnar format, they are sent there directly. The improved performance also allows more ad-hoc analytic queries to be run directly on the real-time transaction data without impacting the existing workload.

      The last few years have witnessed a surge in the concept of in-memory database objects to achieve improved query response times. In-Memory Column Store allows seamless integration of in-memory objects into an existing environment without having to change any application code. By allocating memory to In-Memory Column Store, you can instantly improve the performance of their existing analytic workload and enable interactive ad-hoc data extrapolation.
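      "Flipping the switch" looks roughly like this sketch; the memory size and table name are illustrative:

        -- Allocate memory to the In-Memory Column Store
        -- (INMEMORY_SIZE is a static parameter, so a restart is required):
        ALTER SYSTEM SET INMEMORY_SIZE = 20G SCOPE=SPFILE;

        -- Mark an object for population in the column store:
        ALTER TABLE sales INMEMORY;

      No application change is needed; the optimizer routes qualifying queries to the column store automatically.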

      JSON Support

      This feature adds support for storing, querying and indexing JavaScript Object Notation (JSON) data to Oracle Database and allows the database to enforce that JSON stored in the Oracle Database conforms to the JSON rules. This feature also allows JSON data to be queried using a PATH based notation and adds new operators that allow JSON PATH based queries to be integrated into SQL operations.

      Companies are adopting JSON as a way of storing unstructured and semi-structured data. As the volume of JSON data increases, it becomes necessary to be able to store and query this data in a way that provides similar levels of security, reliability and availability as are provided for relational data. This feature allows information represented as JSON data to be stored, queried, and managed inside the Oracle database with those same guarantees.

      See: Oracle XML DB Developer's Guide for details.

      New FIPS 140 Parameter for Encryption

      The new database parameter, DBFIPS_140, provides the ability to turn on and off the Federal Information Processing Standards (FIPS) 140 cryptographic processing mode inside the Oracle database.

      Use of FIPS 140 validated cryptographic modules is increasingly required by government agencies and other industries around the world. Customers who have FIPS 140 requirements can turn on the DBFIPS_140 parameter.

      See: Oracle Database Security Guide for details.
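      As a sketch, enabling FIPS 140 mode is a single parameter change; DBFIPS_140 is a static parameter, so the instance must be restarted for it to take effect:

        ALTER SYSTEM SET DBFIPS_140 = TRUE SCOPE = SPFILE;
        -- Restart the instance for the new setting to take effect.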


      CONTAINERS Clause

      The CONTAINERS clause is a new way of looking at multitenant container databases (CDBs). With this clause, data can be aggregated from a single identical table or view across many pluggable databases (PDBs) from the root container. The CONTAINERS clause accepts a table or view name as an input parameter that is expected to exist in all PDBs in that container. Data from a single PDB or a set of PDBs can be included with the use of CON_ID in the WHERE clause. For example:

      SELECT ename FROM CONTAINERS(scott.emp) WHERE CON_ID IN (45, 49);

      This feature enables an innovative way to aggregate user-created data in a multitenant container database. Reports that require aggregation of data across many regions or other attributes can leverage the CONTAINERS clause and get data from one single place.

      PDB File Placement in OMF

      The new parameter, CREATE_FILE_DEST, allows administrators to set a default location for Oracle Managed Files (OMF) data files in the pluggable database (PDB). When not set, the PDB inherits the value from the root container.

      If a file system directory is specified as the default location, then the directory must already exist; Oracle does not create it. The directory must have appropriate permissions that allow Oracle to create files in it. Oracle generates unique names for the files. A file created in this manner is an Oracle-managed file.

      The CREATE_FILE_DEST parameter allows administrators to structure the PDB files independently of the multitenant container database (CDB) file destination. This feature helps administrators to plug or to unplug databases from one container to another in a shared storage environment.
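      A sketch of setting a PDB-local default file destination; the PDB name and directory path are hypothetical, and the directory must already exist with appropriate permissions:

        ALTER SESSION SET CONTAINER = salespdb;
        ALTER SYSTEM SET CREATE_FILE_DEST = '/u02/oradata/salespdb';
        -- New data files in salespdb now default to this Oracle-managed location.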

      PDB Logging Clause

      The PDB LOGGING or NOLOGGING clause can be specified in a CREATE or ALTER PLUGGABLE DATABASE statement to set or modify the logging attribute of the pluggable database (PDB). This attribute is used to establish the logging attribute of tablespaces created within the PDB if the LOGGING clause was not specified in the CREATE TABLESPACE statement.

      If a PDB LOGGING clause is not specified in the CREATE PLUGGABLE DATABASE statement, the logging attribute of the PDB defaults to LOGGING.

      This new clause improves the manageability of PDBs in a multitenant container database (CDB).
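      For example, a PDB can be created with NOLOGGING and switched back later; the names and password are illustrative:

        CREATE PLUGGABLE DATABASE devpdb
          ADMIN USER pdbadmin IDENTIFIED BY password
          NOLOGGING;

        -- Later, restore the default logging attribute:
        ALTER PLUGGABLE DATABASE devpdb LOGGING;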

      PDB Metadata Clone

      An administrator can now create a clone of a pluggable database only with the data model definition. The dictionary data in the source is copied as is but all user-created table and index data from the source is discarded.

      This feature enhances cloning functionality and facilitates rapid provisioning of development environments.
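      In SQL, a metadata-only clone is requested with the NO DATA clause; the PDB names here are hypothetical:

        -- Copy the data model but discard user-created table and index data:
        CREATE PLUGGABLE DATABASE devpdb FROM prodpdb NO DATA;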

      PDB Remote Clone

      The new release of Oracle Multitenant fully supports remote full and snapshot clones over a database link. A non-multitenant container database (CDB) can be adopted as a pluggable database (PDB) simply by cloning it over a database link. Remote snapshot cloning is also supported across two CDBs sharing the same storage.

      This feature further improves rapid provisioning of pluggable databases. Administrators can spend less time on provisioning and focus more on other innovative operations.
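      A remote clone is a single statement over a database link; in this sketch, clone_link is a hypothetical database link pointing at the remote source:

        CREATE PLUGGABLE DATABASE newpdb FROM remotepdb@clone_link;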

      PDB Snapshot Cloning Additional Platform Support

      With the initialization parameter CLONEDB set to true, snapshot clones of a pluggable database are supported on any local, Network File Storage (NFS) or clustered file systems with Oracle Direct NFS (dNFS) enabled. The source of the clone must remain read-only while the target needs to be on a file system that supports sparseness.

      Snapshot cloning support is now extended to other third party vendor systems.

      This feature eases the requirement of specific file systems for snapshot clones of pluggable databases. With file system agnostic snapshot clones, pluggable databases can be provisioned even faster than before.

      PDB STANDBYS Clause

      The STANDBYS clause allows a user to specify whether a pluggable database (PDB) needs to be part of the existing standby databases. The STANDBYS clause takes two values: ALL and NONE, with ALL as the default. When a PDB is created with STANDBYS=NONE, the PDB's data files are taken offline and marked as UNNAMED on all of the standby databases. Any new standby database instantiated after the PDB has been created must explicitly disable the PDB for recovery to exclude it from the standby database. If a PDB later needs to be enabled on a standby database where it was excluded, the PDB data files must be copied to the standby database from the primary database, and the control file must be updated to reflect their paths. A user then connects to the PDB on that standby database and issues ALTER PLUGGABLE DATABASE ENABLE RECOVERY, which automatically brings online all of the data files belonging to the PDB.

      This feature increases consolidation density by supporting different service-level agreements (SLAs) in the same multitenant container database (CDB).
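      The flow described above, sketched with hypothetical names:

        -- On the primary: exclude the new PDB from all existing standbys:
        CREATE PLUGGABLE DATABASE devpdb
          ADMIN USER pdbadmin IDENTIFIED BY password
          STANDBYS=NONE;

        -- Later, on a standby (after copying the PDB data files from the
        -- primary and updating the control file):
        ALTER SESSION SET CONTAINER = devpdb;
        ALTER PLUGGABLE DATABASE ENABLE RECOVERY;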

      PDB State Management Across CDB Restart

      The SAVE STATE clause and DISCARD STATE clause are now available with the ALTER PLUGGABLE DATABASE SQL statement to preserve the open mode of a pluggable database (PDB) across multitenant container database (CDB) restarts.

      If SAVE STATE is specified, open mode of specified PDB is preserved across CDB restart on instances specified in the INSTANCES clause. Similarly, with the DISCARD STATE clause, the open mode of specified PDB is no longer preserved.

      These new SQL clauses provide the flexibility to choose the automatic startup of application PDBs when a CDB undergoes a restart. This feature enhances granular control and effectively reduces downtime of an application in planned or unplanned outages.
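      For instance, with a hypothetical PDB named salespdb:

        ALTER PLUGGABLE DATABASE salespdb OPEN;
        -- Preserve the open mode across CDB restarts:
        ALTER PLUGGABLE DATABASE salespdb SAVE STATE;

        -- Stop preserving it:
        ALTER PLUGGABLE DATABASE salespdb DISCARD STATE;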

      PDB Subset Cloning

      The USER_TABLESPACES clause allows a user to specify which tablespaces need to be available in the new pluggable database (PDB). An example of the application of this clause is a case where a customer is migrating from a non-multitenant container database (CDB), where schema-based consolidation was used to separate data belonging to multiple tenants, to a CDB where data belonging to each tenant is kept in a separate PDB. The USER_TABLESPACES clause helps to create one PDB for each schema in the non-CDB.

      This powerful clause helps convert cumbersome schema-based consolidations to more agile and efficient pluggable databases.
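      A sketch of carving one schema's tablespace into its own PDB; the PDB name, tablespace name, and database link are hypothetical:

        -- Bring only the hr schema's tablespace into the new PDB:
        CREATE PLUGGABLE DATABASE hrpdb FROM noncdb@clone_link
          USER_TABLESPACES=('hr_data');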

      Rapid Home Provisioning

      Rapid Home Provisioning allows deploying of Oracle homes based on gold images stored in a catalog of pre-created homes.

      Provisioning time for Oracle Database is significantly improved through centralized management while the updating of homes is simplified to linkage. Oracle snapshot technology is used internally to further improve the sharing of homes across clusters and to reduce storage space.

      Zone Maps

      For full table access, zone maps allow I/O pruning of data based on the physical location of the data on disk, acting like an anti-index.

      Accessing only the relevant data optimizes the I/O necessary to satisfy a query, increasing the performance significantly and reducing the resource consumption.

      See: Oracle Database Data Warehousing Guide for details.

      Ciao for now!



      The OTN DBA/DEV Watercooler is your official source of news covering Oracle Database technology topics and community activities from throughout the OTN Database and Developer Community. Find tips and in-depth technology information you need to master Oracle Database Administration or Application Development here. This Blog is compiled by @oracledbdev, the Oracle Database Community Manager for OTN, and features insights, tech tips and news from throughout the OTN Database Community.

