Monday Sep 08, 2014

What, you don't have a RAC implementation?

...but you need business continuity, high availability, scalability, combined with automatic management for your IT infrastructure and cloud deployments. Oracle Database with the Real Application Clusters (RAC) option is probably what you're looking for.

Originally focused on providing improved database services, Oracle RAC has evolved over the years and is now based on a comprehensive high availability (HA) stack that can be used as the foundation of a database cloud system as well as a shared infrastructure that can ensure high availability, scalability, flexibility and agility for any application in your data center.

Features in the Oracle Database 12c RAC option, including Application Continuity and Oracle Flex ASM (plus noteworthy enhancements in the areas of policy-managed database and cluster management, comprehensive database consolidation, and Oracle Multitenant support), make Oracle RAC a compelling approach to database virtualization.

And there's a lot more to RAC implementations. This technical paper gives a quick overview of key enhancements in each layer of the Oracle RAC stack that contribute to an even higher level of application availability using an Oracle RAC database.

Find out how Oracle RAC delivers:

  • Business Continuity and High Availability
  • Scalability and Agility
  • Cost-Effective Workload Management
  • Standardized Deployment and System Management

Check it out, and if you'll be at Oracle OpenWorld in San Francisco, OTN will be hosting a RAC ATTACK hands-on lab session on Sunday from 9:00 am to 2:00 pm at the OTN Lounge. Come meet the Ninjas and learn all about the Oracle RAC Stack.

Ciao For Now!
LKR

Monday Aug 25, 2014

Oracle Key Vault Option is Here.

Finally, a centralized way to manage all the encryption keys and credential files in the data center.   

Critical credential files such as Oracle wallet files, Java KeyStores, Secure Shell (SSH) key files, and Secure Sockets Layer (SSL) certificate files are often widely distributed across servers and server clusters that use error-prone synchronization and backup mechanisms. As organizations increasingly encrypt data at rest and on the network, securely managing all the encryption keys and credential files in the data center has become a major challenge.

How do you comply with stringent regulatory requirements for managing keys and certificates and ensure that keys are routinely rotated, properly destroyed, and accessed solely by authorized entities?

Oracle Key Vault is a software appliance designed to securely manage encryption keys and credential files in the enterprise data center. It provides secure, centralized management of encryption keys and credential files, including Oracle wallet files, Java KeyStores, Kerberos keytab files, SSH key files, and SSL certificate files.

Want to get started?  Here's what you need to know:

Q: Where can I download the software for Oracle Key Vault?

A: Oracle Key Vault can be downloaded from Oracle Software Delivery Cloud.

Go to https://edelivery.oracle.com;

Select Product Pack: Oracle Key Vault (12.1.0.0.0) Media Pack v1.

Q: What are the recommended hardware specifications?

A: The recommended hardware specifications are:

  • CPU: Minimum two x86-64 cores; recommended 2+ cores with cryptographic acceleration support (Intel® AES-NI)
  • Memory: Minimum 4 GB of RAM
  • Disk: Minimum 500 GB hard disk

Hardware Compatibility: Refer to the hardware compatibility list (HCL) for Oracle Linux Release 5 Update 10. The HCL is available at https://linux.oracle.com/hardware-certifications.

Q: How does the software appliance install work?

A: Oracle Key Vault is packaged as a software appliance, which means it contains everything, including the operating system, needed to install the product on bare hardware.

During installation, the installer completely takes over the hardware. In addition to partitioning and formatting the disks, it installs the base OS, user-space libraries, Oracle Database, Oracle Key Vault software, and more. It configures all of the software (OS, networking, database) automatically and with minimal user involvement. It hardens the operating system, network, and database according to hardening best practices, removes unnecessary packages and software, and disables unused services and ports.

Q: Can I deploy the Oracle Key Vault software appliance on Windows or Solaris?

A: Oracle Key Vault can only be deployed on bare metal. Any existing OS (including Windows or Solaris) and software will be removed by the install process. Note that this applies only to the Oracle Key Vault appliance and is independent of the OS on the server endpoints.

Q: Can I run Oracle Key Vault on Oracle Virtual Machine?

A: For testing or proof of concept purposes, Oracle Key Vault can be run in Oracle VM or Oracle VirtualBox. However, for production deployment, Oracle Key Vault should be installed on dedicated physical hardware; otherwise VM administrators may be able to gain access to underlying keys and secrets stored inside Oracle Key Vault.

Q: Can I install Oracle Key Vault on Oracle Database Appliance (ODA) or Exadata?

A: No, at this time Oracle Key Vault is not certified with the Oracle Database Appliance or Exadata. Oracle Key Vault can however be used to manage keys used by ODA or Exadata.

Find out more on the Oracle Key Vault page on OTN.

Ciao for Now!

LKR 

Thursday Aug 14, 2014

Did You Say "JSON Support" in Oracle 12.1.0.2?

Yes, we did. Here's why:

JSON is practically a subset of the object literal notation of JavaScript, so it can be used to represent JavaScript object literals. This means JSON can serve as a data-interchange language. Although it was defined in the context of JavaScript, JSON is in fact a language-independent data format. A variety of programming languages can parse and generate JSON data.

Additionally, JSON can often be used in JavaScript programs without requiring parsing or serializing. It is a text-based way of representing JavaScript object literals, arrays, and scalar data. JSON is easy for software to parse and generate. It is often used for serializing structured data and exchanging it over a network, typically between a server and web applications.

JSON data has often been stored in NoSQL databases such as Oracle NoSQL Database and Oracle Berkeley DB. These allow for storage and retrieval of data that is not based on any schema, but they do not offer the rigorous consistency models of relational databases. You can get around this by using a relational database in parallel with a NoSQL database, but applications using JSON data stored in the NoSQL database must then ensure data integrity themselves.

So for these reasons (and maybe a few more) Oracle Database 12c supports JSON natively with relational database features, including transactions, indexing, declarative querying, and views. Oracle Database queries are declarative, so you can join JSON data with relational data. And you can project JSON data relationally, making it available for relational processes and tools. You can also query, from within the database, JSON data that is stored outside the database, in an external table.

And, it's good to know you can access JSON data stored in the database the same way you access other database data, including using OCI, .NET, and JDBC.
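To make that concrete, here's a minimal sketch of native JSON storage and querying in 12.1.0.2. The table and column names are hypothetical; the `IS JSON` check and the SQL/JSON functions are the documented mechanism.

```sql
-- Store JSON in an ordinary CLOB column; the IS JSON check constraint
-- makes the database reject documents that are not well-formed JSON.
CREATE TABLE orders (
  id   NUMBER PRIMARY KEY,
  doc  CLOB CHECK (doc IS JSON)
);

INSERT INTO orders VALUES (1, '{"customer":{"name":"Scott"},"total":42.50}');

-- Query with a SQL/JSON path expression; results can be joined
-- with relational data like any other column.
SELECT JSON_VALUE(o.doc, '$.customer.name') AS customer_name
FROM   orders o
WHERE  JSON_EXISTS(o.doc, '$.total');
```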

Get more information about JSON support in Oracle Database 12c. You can start with the XML DB Developer's Guide (I DID!).

Ciao for Now!

LKR 

Thursday Jul 24, 2014

Oracle Database 12c Release 12.1.0.2 is Here!

...with the long awaited In-Memory option, plus 21 new features. Oracle Database 12c Release 12.1.0.2 supports Linux and Oracle Solaris (SPARC and x86, 64-bit).

Get the download here.

And for those of us who just LOVE *sparknotes, here's a quick index of those new features, with links from good friends and scribes in the Oracle Doc group.  But first, a cool picture. 

Sea Dragon photo taken at the Birch Aquarium in San Diego by my friend Joel Broude, a veteran of the Sun Microsystems documentation team. 

Oracle Database 12c Release 12.1.0.2 New Features 

Advanced Index Compression

Advanced Index Compression works well on all supported indexes, including those that are not good candidates for the existing prefix compression feature, such as indexes with no, or few, duplicate values in the leading columns of the index.

Advanced Index Compression improves the compression ratios significantly while still providing efficient access to the index.
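As a quick sketch of the syntax (table and index names are made up), the new compression level is enabled with the COMPRESS ADVANCED LOW clause:

```sql
-- Create an index with Advanced Index Compression; unlike prefix
-- compression, it helps even when leading columns have few duplicates.
CREATE INDEX emp_name_ix ON employees (last_name, first_name)
  COMPRESS ADVANCED LOW;

-- Existing indexes can be rebuilt with the new compression:
ALTER INDEX emp_name_ix REBUILD COMPRESS ADVANCED LOW;
```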

Approximate Count Distinct

The new and optimized SQL function, APPROX_COUNT_DISTINCT(), provides approximate count distinct aggregation. Processing of large volumes of data is significantly faster than the exact aggregation, especially for data sets with a large number of distinct values, with negligible deviation from the exact result.

The need to count distinct values is a common operation in today's data analysis. Optimizing the processing time and resource consumption by orders of magnitude while providing almost exact results speeds up any existing processing and enables new levels of analytical insight.
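Usage is a drop-in swap for the exact aggregate (the table and column here are hypothetical):

```sql
-- Exact, but potentially slow over large data sets:
SELECT COUNT(DISTINCT cust_id) FROM sales;

-- Approximate, significantly faster, with negligible deviation:
SELECT APPROX_COUNT_DISTINCT(cust_id) FROM sales;
```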

Attribute Clustering

Attribute clustering is a table-level directive that clusters data in close physical proximity based on the content of certain columns. This directive applies to any kind of direct path operation, such as a bulk insert or a move operation.

Storing data that logically belongs together in close physical proximity can greatly reduce the amount of data to be processed and can lead to better compression ratios.
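A minimal sketch of the table-level directive (column names are illustrative):

```sql
-- Rows written via direct path operations (bulk insert, move) are
-- physically ordered by (region, category), improving I/O pruning
-- and compression ratios.
CREATE TABLE sales (
  region    VARCHAR2(20),
  category  VARCHAR2(20),
  amount    NUMBER
)
CLUSTERING BY LINEAR ORDER (region, category);
```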

Automatic Big Table Caching

In previous releases, in-memory parallel query did not work well when multiple scans contended for cache memory. This feature implements a new cache called big table cache for table scan workload.

This big table cache provides significant performance improvements for full table scans on tables that do not fit entirely into the buffer cache.
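The big table cache is sized as a percentage of the buffer cache via an initialization parameter; the value below is just an example:

```sql
-- Reserve 40% of the buffer cache for the big table cache,
-- used for full scans of tables that don't fit in the buffer cache.
ALTER SYSTEM SET DB_BIG_TABLE_CACHE_PERCENT_TARGET = 40;
```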

FDA Support for CDBs

Flashback Data Archive (FDA) is supported for multitenant container databases (CDBs) in this release.

Customers can now use Flashback Data Archive in databases that they are consolidating using Oracle Multitenant, providing the benefits of easy history tracking to applications using pluggable databases (PDB) in a multitenant container database.

Full Database Caching

Full database caching can be used to cache the entire database in memory. It should be used when the buffer cache size of the database instance is greater than the whole database size. In Oracle RAC systems, for well-partitioned applications, this feature can be used when the combined buffer caches of all instances, with some extra space to handle duplicate cached blocks between instances, exceed the database size.

Caching the entire database provides significant performance benefits, especially for workloads that were previously limited by I/O throughput or response time. More specifically, this feature improves the performance of full table scans by forcing all tables to be cached. This is a change from the default behavior in which larger tables are not kept in the buffer cache for full table scans.
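The feature is switched on at the database level; as a sketch, the mode must be enabled while the database is mounted but not open:

```sql
-- Enable force full database caching (database must be in MOUNT mode):
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE FORCE FULL DATABASE CACHING;
ALTER DATABASE OPEN;
```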

In-Memory Aggregation

In-Memory Aggregation optimizes queries that join dimension tables to fact tables and aggregate data (for example, star queries) using CPU and memory efficient KEY VECTOR and VECTOR GROUP BY aggregation operations. These operations may be automatically chosen by the SQL optimizer based on cost estimates.

In-Memory Aggregation improves performance of star queries and reduces CPU usage, providing faster and more consistent query performance and supporting a larger number of concurrent users. As compared to alternative SQL execution plans, performance improvements are significant. Greater improvements are seen in queries that include more dimensions and aggregate more rows from the fact table. In-Memory Aggregation eliminates the need for summary tables in most cases, thus simplifying maintenance of the star schema and allowing access to real-time data.

In-Memory Column Store

In-Memory Column Store enables objects (tables or partitions) to be stored in memory in a columnar format. The columnar format enables scans, joins and aggregates to perform much faster than the traditional on-disk formats for analytical style queries. The in-memory columnar format does not replace the on-disk or buffer cache format, but is an additional, transaction-consistent copy of the object. Because the column store has been seamlessly integrated into the database, applications can use this feature transparently without any changes. A DBA simply has to allocate memory to In-Memory Column Store. The optimizer is aware of In-Memory Column Store, so whenever a query accesses objects that reside in the column store and would benefit from its columnar format, they are sent there directly. The improved performance also allows more ad-hoc analytic queries to be run directly on the real-time transaction data without impacting the existing workload.

The last few years have witnessed a surge in the concept of in-memory database objects to achieve improved query response times. In-Memory Column Store allows seamless integration of in-memory objects into an existing environment without having to change any application code. By allocating memory to In-Memory Column Store, you can instantly improve the performance of your existing analytic workload and enable interactive ad-hoc data extrapolation.
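As a sketch of the two DBA steps involved (the table name is hypothetical):

```sql
-- Step 1: allocate memory to the column store (requires a restart):
ALTER SYSTEM SET INMEMORY_SIZE = 4G SCOPE=SPFILE;

-- Step 2: after restart, mark objects for population in the column store;
-- the optimizer then routes qualifying analytic queries to it automatically.
ALTER TABLE sales INMEMORY PRIORITY HIGH;
```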

JSON Support

This feature adds support for storing, querying and indexing JavaScript Object Notation (JSON) data to Oracle Database and allows the database to enforce that JSON stored in the Oracle Database conforms to the JSON rules. This feature also allows JSON data to be queried using a PATH based notation and adds new operators that allow JSON PATH based queries to be integrated into SQL operations.

Companies are adopting JSON as a way of storing unstructured and semi-structured data. As the volume of JSON data increases, it becomes necessary to be able to store and query this data in a way that provides similar levels of security, reliability and availability as are provided for relational data. This feature allows information represented as JSON data to be treated inside the Oracle database.

See: Oracle XML DB Developer's Guide for details.

New FIPS 140 Parameter for Encryption

The new database parameter, DBFIPS_140, provides the ability to turn on and off the Federal Information Processing Standards (FIPS) 140 cryptographic processing mode inside the Oracle database.

Use of FIPS 140 validated cryptographic modules is increasingly required by government agencies and other industries around the world. Customers who have FIPS 140 requirements can turn on the DBFIPS_140 parameter.

See: Oracle Database Security Guide for details.
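As a sketch, the parameter is a simple boolean; setting it in the server parameter file (and restarting) is the assumed workflow:

```sql
-- Enable FIPS 140 cryptographic processing mode:
ALTER SYSTEM SET DBFIPS_140 = TRUE SCOPE=SPFILE;
```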

PDB CONTAINERS Clause

The CONTAINERS clause is a new way of looking at multitenant container databases (CDBs). With this clause, data can be aggregated from a single identical table or view across many pluggable databases (PDBs) from the root container. The CONTAINERS clause accepts a table or view name as an input parameter that is expected to exist in all PDBs in that container. Data from a single PDB or a set of PDBs can be included with the use of CON_ID in the WHERE clause. For example:

SELECT ename FROM CONTAINERS(scott.emp) WHERE CON_ID IN (45, 49);

This feature enables an innovative way to aggregate user-created data in a multitenant container database. Reports that require aggregation of data across many regions or other attributes can leverage the CONTAINERS clause and get data from one single place.

PDB File Placement in OMF

The new parameter, CREATE_FILE_DEST, allows administrators to set a default location for Oracle Managed Files (OMF) data files in the pluggable database (PDB). When not set, the PDB inherits the value from the root container.

If a file system directory is specified as the default location, then the directory must already exist; Oracle does not create it. The directory must have appropriate permissions that allow Oracle to create files in it. Oracle generates unique names for the files. A file created in this manner is an Oracle-managed file.

The CREATE_FILE_DEST parameter allows administrators to structure the PDB files independently of the multitenant container database (CDB) file destination. This feature helps administrators to plug or to unplug databases from one container to another in a shared storage environment.

PDB Logging Clause

The PDB LOGGING or NOLOGGING clause can be specified in a CREATE or ALTER PLUGGABLE DATABASE statement to set or modify the logging attribute of the pluggable database (PDB). This attribute is used to establish the logging attribute of tablespaces created within the PDB if the LOGGING clause was not specified in the CREATE TABLESPACE statement.

If a PDB LOGGING clause is not specified in the CREATE PLUGGABLE DATABASE statement, the logging attribute of the PDB defaults to LOGGING.

This new clause improves the manageability of PDBs in a multitenant container database (CDB).
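A minimal sketch of the clause (PDB and admin names are made up):

```sql
-- Create a PDB whose tablespaces default to NOLOGGING when the
-- CREATE TABLESPACE statement omits a LOGGING clause:
CREATE PLUGGABLE DATABASE reportpdb
  ADMIN USER pdbadmin IDENTIFIED BY pw
  NOLOGGING;

-- The attribute can be modified later:
ALTER PLUGGABLE DATABASE reportpdb LOGGING;
```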

PDB Metadata Clone

An administrator can now create a clone of a pluggable database only with the data model definition. The dictionary data in the source is copied as is but all user-created table and index data from the source is discarded.

This feature enhances cloning functionality and facilitates rapid provisioning of development environments.
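The metadata-only clone is requested with the NO DATA clause; the PDB names below are illustrative:

```sql
-- Clone the data model only: dictionary data is copied,
-- user-created table and index data is discarded.
CREATE PLUGGABLE DATABASE devpdb FROM prodpdb NO DATA;
```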

PDB Remote Clone

The new release of Oracle Multitenant fully supports remote full and snapshot clones over a database link. A non-multitenant container database (CDB) can be adopted as a pluggable database (PDB) simply by cloning it over a database link. Remote snapshot cloning is also supported across two CDBs sharing the same storage.

This feature further improves rapid provisioning of pluggable databases. Administrators can spend less time on provisioning and focus more on other innovative operations.
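As a sketch of a remote clone over a database link (all names here are hypothetical, and in 12.1 the source PDB is assumed to be open read-only during the clone):

```sql
-- On the target CDB, point a database link at the source CDB:
CREATE DATABASE LINK src_cdb_link
  CONNECT TO c##clone_user IDENTIFIED BY pw
  USING 'source_cdb';

-- Clone the remote PDB into this CDB:
CREATE PLUGGABLE DATABASE newpdb FROM srcpdb@src_cdb_link;
```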

PDB Snapshot Cloning Additional Platform Support

With the initialization parameter CLONEDB set to true, snapshot clones of a pluggable database are supported on any local, Network File Storage (NFS) or clustered file systems with Oracle Direct NFS (dNFS) enabled. The source of the clone must remain read-only while the target needs to be on a file system that supports sparseness.

Snapshot cloning support is now extended to other third party vendor systems.

This feature eases the requirement of specific file systems for snapshot clones of pluggable databases. With file system agnostic snapshot clones, pluggable databases can be provisioned even faster than before.

PDB STANDBYS Clause

The STANDBYS clause allows a user to specify whether a pluggable database (PDB) needs to be a part of the existing standby databases. The STANDBYS clause takes two values: ALL (the default) and NONE. When a PDB is created with STANDBYS=NONE, the PDB's data files are taken offline and marked as UNNAMED on all of the standby databases. Any new standby database instantiated after the PDB has been created needs to explicitly disable the PDB for recovery to exclude it from the standby database. If a previously excluded PDB later needs to be enabled on a standby database, the PDB's data files must be copied to the standby database from the primary database and the control file updated to reflect their paths; a user then connects to the PDB on that standby database and issues ALTER PLUGGABLE DATABASE ENABLE RECOVERY, which automatically brings online all of the data files belonging to the PDB.

This feature increases consolidation density by supporting different service-level agreements (SLAs) in the same multitenant container database (CDB).
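The two ends of the workflow described above can be sketched as follows (PDB and user names are made up):

```sql
-- On the primary: exclude the new PDB from all existing standbys.
CREATE PLUGGABLE DATABASE paygo
  ADMIN USER pdbadmin IDENTIFIED BY pw
  STANDBYS=NONE;

-- Later, on a standby, after copying the PDB's data files from the
-- primary and updating the control file to reflect their paths:
ALTER PLUGGABLE DATABASE paygo ENABLE RECOVERY;
```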

PDB State Management Across CDB Restart

The SAVE STATE clause and DISCARD STATE clause are now available with the ALTER PLUGGABLE DATABASE SQL statement to preserve the open mode of a pluggable database (PDB) across multitenant container database (CDB) restarts.

If SAVE STATE is specified, the open mode of the specified PDB is preserved across a CDB restart on the instances specified in the INSTANCES clause. Similarly, with the DISCARD STATE clause, the open mode of the specified PDB is no longer preserved.

These new SQL clauses provide the flexibility to choose the automatic startup of application PDBs when a CDB undergoes a restart. This feature enhances granular control and effectively reduces downtime of an application in planned or unplanned outages.
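A minimal sketch of the clauses in action (the PDB name is hypothetical):

```sql
ALTER PLUGGABLE DATABASE salespdb OPEN;
-- Preserve the open mode across CDB restarts:
ALTER PLUGGABLE DATABASE salespdb SAVE STATE;

-- To stop preserving the open mode:
ALTER PLUGGABLE DATABASE salespdb DISCARD STATE;
```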

PDB Subset Cloning

The USER_TABLESPACES clause allows a user to specify which tablespaces need to be available in the new pluggable database (PDB). An example of the application of this clause is a case where a customer is migrating from a non-multitenant container database (CDB), where schema-based consolidation was used to separate data belonging to multiple tenants, to a CDB where data belonging to each tenant is kept in a separate PDB. The USER_TABLESPACES clause helps to create one PDB for each schema in the non-CDB.

This powerful clause helps convert cumbersome schema-based consolidations to more agile and efficient pluggable databases.
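As a sketch of the migration pattern described above (PDB and tablespace names are invented):

```sql
-- Carve one tenant's tablespace out into its own PDB; tablespaces
-- not listed are excluded from the new pluggable database.
CREATE PLUGGABLE DATABASE hrpdb FROM legacy_pdb
  USER_TABLESPACES=('HR_DATA');
```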

Rapid Home Provisioning

Rapid Home Provisioning allows deploying of Oracle homes based on gold images stored in a catalog of pre-created homes.

Provisioning time for Oracle Database is significantly improved through centralized management while the updating of homes is simplified to linkage. Oracle snapshot technology is used internally to further improve the sharing of homes across clusters and to reduce storage space.

Zone Maps

For full table access, zone maps allow I/O pruning of data based on the physical location of the data on disk, acting like an anti-index.

Accessing only the relevant data optimizes the I/O necessary to satisfy a query, increasing the performance significantly and reducing the resource consumption.

See: Oracle Database Data Warehousing Guide for details.
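A minimal sketch of the DDL (table and column names are illustrative):

```sql
-- A zone map records min/max values of the listed columns per zone
-- of contiguous blocks, letting full scans skip zones that cannot
-- match a query predicate -- hence "anti-index".
CREATE MATERIALIZED ZONEMAP sales_zmap
  ON sales (region, state);
```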

Ciao for now!

LKR 

Wednesday Jul 16, 2014

Oracle Critical Patch Advisory -- July 15, 2014

ATTN: Oracle DBAs: Oracle has published its Critical Patch Advisory dated July 15, 2014.  Get the advisory here.

Here's what you need to know:

This Critical Patch Update provides 113 new security fixes across a wide range of product families, including Oracle Database, Oracle Fusion Middleware, Oracle Java SE, Oracle Linux and Virtualization, Oracle MySQL, Oracle Hyperion, Oracle Enterprise Manager Grid Control, Oracle E-Business Suite, Oracle Business Applications, and more.

REMEMBER: Critical Patch Update fixes are intended to address significant security vulnerabilities in Oracle products and also include code fixes that are prerequisites for the security fixes. As a result, Oracle recommends that this Critical Patch Update be applied as soon as possible. 

Read more about the specific fixes here.

Get the Critical Patch Advisory on  OTN here.

Ciao for Now!

LKR 


About

The OTN DBA/DEV Watercooler is your official source of news covering Oracle Database technology topics and community activities from throughout the OTN Database and Developer Community. Find tips and in-depth technology information you need to master Oracle Database Administration or Application Development here. This Blog is compiled by @oracledbdev, the Oracle Database Community Manager for OTN, and features insights, tech tips and news from throughout the OTN Database Community.
