
All Things Database: Education, Best Practices, Use Cases & More

Recent Posts

A Simple Guide for Migrating Data to Autonomous Database Using EM

Moving your on-premises database workloads to Oracle Cloud gives you a competitive advantage today. Databases in Oracle Cloud leverage Oracle's Generation 2 infrastructure, providing maximum performance, better pricing, and the most secure environment for your enterprise workloads. OCI also provides the Oracle Autonomous Database, the industry's first and only self-driving, self-securing, and self-repairing database. Oracle continues to extend the capabilities of our industry-leading Enterprise Manager offering to support our customers' journey to cloud. We are excited to introduce the Database Migration Workbench, which helps you migrate on-premises databases to Oracle Autonomous Database. This blog will guide you through the steps of the migration process.

Overview of Database Migration Workbench

Webinar: Automating Database Migration to Oracle Cloud, including Autonomous Database, Eliminates Operational Risk
Register: December 10, 2020 11:00 a.m. PT/2:00 p.m. ET

Database Migration Workbench simplifies the migration of an on-premises database to an Autonomous Database with a unified guided flow. Figure 1 (below) shows an overview of the Database Migration Workbench. It uses Data Pump to migrate data to an Autonomous Database; a DBA can use Data Pump regardless of the endian format and character set of the on-premises database. The source database can be a Single-Instance Database (SIDB), a RAC Database (RACDB), or a Pluggable Database (PDB), and the destination database can be an ADW or an ATP. As part of the migration, EM takes an export dump of the source data, uploads it to an object storage bucket, and imports it into the Autonomous Database.

Figure 1 - Overview of Database Migration Workbench

Prerequisites for Database Migration

- Connectivity between the on-premises datacenter and OCI services must be in place; it can be established with a FastConnect or an IPSec VPN connection.

- For a full database export, or for exporting any schema other than the user's own, create an on-premises database user with privileges to export data, i.e., GRANT DATAPUMP_EXP_FULL_DATABASE TO <user_name>. For example, execute the following SQL if the source database is an SIDB or a RACDB:

    CREATE USER MIGUSER IDENTIFIED BY MPass_123;
    GRANT CREATE SESSION TO MIGUSER;
    GRANT CONNECT TO MIGUSER;
    GRANT SELECT ANY TABLE TO MIGUSER;
    GRANT DATAPUMP_EXP_FULL_DATABASE TO MIGUSER;
    ALTER USER MIGUSER QUOTA UNLIMITED ON USERS;

  Execute the following SQL if the source database is a PDB:

    ALTER SESSION SET CONTAINER=SALESPDB;
    CREATE USER MIGUSER IDENTIFIED BY MPass_123 CONTAINER=current;
    GRANT CREATE SESSION TO MIGUSER;
    GRANT CONNECT TO MIGUSER;
    GRANT SELECT ANY TABLE TO MIGUSER;
    GRANT DATAPUMP_EXP_FULL_DATABASE TO MIGUSER;
    ALTER USER MIGUSER QUOTA UNLIMITED ON USERS;

- Create a directory object with enough disk space to accommodate the data to be migrated:

    CREATE OR REPLACE DIRECTORY MIG_DMP_DIR AS '/u01/mig_data';
    GRANT READ, WRITE ON DIRECTORY MIG_DMP_DIR TO MIGUSER;

- Create an "Auth Token" based credential in the Autonomous Database and set it as the default credential; it is required for authentication between the Autonomous Database and OCI object storage. Follow these steps:

  a. Log in to your OCI console and generate an Auth Token.
  b. Log in to your Autonomous Database and create a credential. Following is example SQL for the same.

  For Autonomous Database - Dedicated:

    SET DEFINE OFF
    BEGIN
      DBMS_CREDENTIAL.CREATE_CREDENTIAL(
        credential_name => 'ADB_CRED',
        username        => 'john_doe',
        password        => 'auth_token');
    END;
    /

  For Autonomous Database - Serverless:

    SET DEFINE OFF
    BEGIN
      DBMS_CLOUD.CREATE_CREDENTIAL(
        credential_name => 'ADB_CRED',
        username        => 'john_doe',
        password        => 'auth_token');
    END;
    /
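  To set the credential as the default, one option is the documented DEFAULT_CREDENTIAL database property; a minimal sketch, assuming the credential was created in the ADMIN schema under the name used above:

    ALTER DATABASE PROPERTY SET DEFAULT_CREDENTIAL = 'ADMIN.ADB_CRED';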
- Create a new, or identify an existing, object storage bucket in the same region of your OCI tenancy where the Autonomous Database resides.

  a. The "Storage" tier setting for the bucket must be Standard.
  b. The "Visibility" setting for the bucket must be Private.

- If not already discovered, discover both the source and destination databases in Enterprise Manager.

- cURL must be available on the host from which the export dumps will be uploaded to Object Storage.

Creating a Data Migration Activity

Navigate to the Database Migration Workbench to create a migration activity. You can get there either via the global context or via a target context.

- To launch Database Migration Workbench from the global context, navigate to Enterprise > Migration and Consolidation > Database Migration Workbench.
- To launch Database Migration Workbench from a target context, go to the target home page (the target can be an SIDB, RACDB, PDB, or an Autonomous Database) and navigate to Oracle Database > Migration.

Figure 2 - Navigating to Database Migration Workbench

From the Database Migration Workbench dashboard, click the "Create Migration Activity" button (as shown in Figure 3).

Figure 3 - Database Migration Workbench Dashboard

Source, Destination, and Object Storage

Selecting a source and destination database: you can either perform a full database migration or migrate specific schemas. Select the source and destination databases. The source database can be an SIDB, RACDB, or a PDB; the destination database can be an ATP or ADW.

Figure 4a - Select source and destination for migration

Selecting targets: source and destination database targets can be selected by clicking the magnifying glass icon in the Select Source/Destination Database pane. Figure 4b (below) shows the target selector for the source database.

Figure 4b - Select source database

Upload Migration Tools

Database Migration Workbench makes use of two tools: Oracle Instant Client and the Autonomous Database Schema Analyzer. The latest version of these tools is downloaded automatically via the "Migration Prerequisite Download Job" and saved in the Software Library. If these utilities are not available in the Software Library for your operating system, you will need to download them manually; this is usually required when the latest version of the migration tools is not compatible with your operating system version. The following binaries need to be uploaded:

- Instant Client files:
  - Instant Client basic zip package
  - Instant Client SQL*Plus zip package
  - Instant Client tools zip package
- Autonomous Database Schema Advisor file

**Enterprise Manager automatically downloads the tools when it has a direct internet connection or a MOS proxy setup.
Adding Source, Destination, and Object Storage Details

Figure 5 - Adding source and destination details

Source Database Credentials - Select a named credential of an on-premises database user with Data Pump export privileges. This region is auto-populated if the preferred credential is set. To create database credentials, navigate to Setup > Security > Named Credentials and click Create. Enter a credential name, select the authenticating target type "Database Instance" and the credential type "Database Credentials", enter the username, password, and role, and click Save. The database credential can also be created via EMCLI as follows:

    emcli create_named_credential \
      -auth_target_type=oracle_database \
      -cred_type=DBCreds \
      -cred_scope=Global \
      -cred_name=SRC_DB_CREDS \
      -attributes="DBUserName:db_user;DBPassword:Db_password;DBRole:Normal"

Host Credentials - Select a named credential for the database host user. This region is auto-populated if the preferred credential is set. To create host credentials, navigate to Setup > Security > Named Credentials and click Create. Enter a credential name, select the authenticating target type "Host" and the credential type "Host Credentials", enter the username and password, and click Save. The host credential can also be created via EMCLI as follows:

    emcli create_named_credential \
      -auth_target_type=host \
      -cred_type=HostCreds \
      -cred_scope=Global \
      -cred_name=SRC_HOST_CREDS \
      -attributes="HostUserName:host_user;HostPassword:Host_password"

Destination Database Credentials - Select a named credential of an Autonomous Database user with Data Pump import privileges. This region is auto-populated if the preferred credential is set. To create database credentials, navigate to Setup > Security > Named Credentials and click Create. Enter a credential name, select the authenticating target type "Autonomous Transaction Processing" or "Autonomous Data Warehouse" and the credential type "Database Credentials", enter the username, password, and role, and click Save. The database credential can also be created via EMCLI as follows:

    emcli create_named_credential \
      -auth_target_type=oracle_cloud_atp \
      -cred_type=DBCreds \
      -cred_scope=Global \
      -cred_name=DEST_ADB_CREDS \
      -attributes="DBUserName:adb_user;DBPassword:Adb_pass123;DBRole:Normal"

**Provide auth_target_type=oracle_cloud_adw if the destination is Autonomous Data Warehouse.

Agent Host Credentials - Select a named credential for the destination ADB's monitoring agent user. This region is auto-populated if the preferred credential is set. To create host credentials, navigate to Setup > Security > Named Credentials and click Create. Enter a credential name, select the authenticating target type "Host" and the credential type "Host Credentials", enter the username and password, and click Save. The host credential can also be created via EMCLI as follows:

    emcli create_named_credential \
      -auth_target_type=host \
      -cred_type=HostCreds \
      -cred_scope=Global \
      -cred_name=DEST_HOST_CREDS \
      -attributes="HostUserName:host_user;HostPassword:Host_password"

Service Name - Select the Autonomous Database service name to be used for the migration. It is recommended to use the "high" service for data migration.

Action

Migration Method - The migration method for on-premises to Autonomous Database migration is always "Data Pump".

Recompile invalid objects after migration - When this checkbox is selected, all invalid objects in the migrated schemas are recompiled after migration (a verification query appears at the end of this section).

Compare performance after migration - When this checkbox is selected, SQL Performance Analyzer is used to compare performance before and after migration. You can see a detailed SQL Performance Analyzer report once the migration activity is complete.
Source Data Pump Directory - Select a directory object of the source database on which the selected database user has read and write privileges. If the source database is a RAC target, the Data Pump directory must be shared across all RAC nodes. The Data Pump directory cannot be an ASM-based location.

Encryption Password - Provide an encryption password for the export dumps. The export dumps generated from the source database are always encrypted.

Cloud Storage URL - Provide the object storage URL for the region where your Autonomous Database resides. The Cloud Storage URL has the following format:

    objectstorage.[Region Identifier].oraclecloud.com

For example, a tenancy in the Ashburn region would use objectstorage.us-ashburn-1.oraclecloud.com. You can get the region identifier from the OCI console: open the "Regions" menu and click "Manage Regions"; you will find the "Region Identifier" in the list of regions.

Figure 6 - Finding the region identifier from the OCI console

Bucket Name - Enter the object storage bucket name (this bucket must reside in the same region as the destination ADB).

OCI Credential - Select a named credential of type Oracle Cloud Infrastructure for the Autonomous Database. To create OCI credentials, navigate to Setup > Security > Named Credentials and click Create. Enter a credential name, select the authenticating target type "Autonomous Transaction Processing" or "Autonomous Data Warehouse" and the credential type "Oracle Cloud Infrastructure Credential", then enter the Tenancy OCID, User OCID, Private Key, Fingerprint, and Passphrase. The OCI credential can also be created via EMCLI as follows:

    emcli create_named_credential \
      -cred_name=OCI_CRED \
      -auth_target_type="oracle_cloud_atp" \
      -cred_type="OCICredential" \
      -cred_scope=Global \
      -target_type="Autonomous Transaction Processing" \
      -attributes="TenancyOCID#<OCI Tenancy OCID>;UserOCID#<OCI User OCID>;PrivateKey#<Private_key>;FingerPrint#<Fingerprint>;PassPhrase#<Pass_phrase>" \
      -subseparator=attributes="#"

**Provide -auth_target_type=oracle_cloud_adw if the destination is Autonomous Data Warehouse.
**Provide -target_type="Autonomous Data Warehouse" if the destination is Autonomous Data Warehouse.
**Steps to set up SSH keys can be found here.
**The new lines of <Private_key> should be delimited as \r\n when used in EMCLI.

Database OCI Auth Credential - Select the Auth Token based credential created in the Autonomous Database.

Named credentials may be scoped as target-specific or global. All the examples above show named credential creation with global scope. More details on named credential creation can be found here.
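As referenced under "Recompile invalid objects after migration" above, you can also verify the result yourself after the import completes; a hedged sketch using the standard dictionary views (the schema filter is a placeholder for your migrated schema):

    SELECT owner, object_type, object_name
    FROM   dba_objects
    WHERE  status = 'INVALID'
    AND    owner = 'MIGUSER'
    ORDER  BY owner, object_type, object_name;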
Customizing the Activity

Figure 7 - Customizing export/import options, STS, and pre/post custom scripts

Export and Import Options - You can provide the applicable parameters for the export and import commands used during the migration. The defaults are pre-populated with values that are required for migration to Autonomous Database. A few options are added implicitly by Database Migration Workbench, as shown in Figure 8 below.

Figure 8 - Implicitly added options for export and import

Compare Performance After Migration - SQL Performance Analyzer is used to compare performance before and after migration. You can choose an existing SQL Tuning Set (STS), or Database Migration Workbench will create a new STS from AWR.
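If you would rather assemble the STS yourself instead of letting the Workbench create one from AWR, here is a minimal sketch using the standard DBMS_SQLTUNE APIs (the STS name and the AWR snapshot range are placeholders):

    DECLARE
      cur DBMS_SQLTUNE.SQLSET_CURSOR;
    BEGIN
      -- Create an empty SQL Tuning Set
      DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'MIG_STS');

      -- Load it with statements captured in AWR between two snapshots
      OPEN cur FOR
        SELECT VALUE(p)
        FROM   TABLE(DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY(
                       begin_snap => 100,
                       end_snap   => 110)) p;
      DBMS_SQLTUNE.LOAD_SQLSET(sqlset_name => 'MIG_STS', populate_cursor => cur);
    END;
    /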
Pre/Post Script - Executes pre- or post-migration scripts stored in the Software Library. The scripts should be uploaded to the Software Library at <ROOT>/Database Migration/User Defined Scripts as a Directive or Generic Component. The following custom scripts are supported:

- Pre-Migration Script - a shell (SH) script executed before migration on the on-premises database host.
- Post-Migration Script - a shell (SH) script executed after migration on the ADB monitoring agent's host.
- Post-Migration SQL Script - SQL executed after migration on the ADB.

Upload Options - Database Migration Workbench uses system cURL to upload exported dumps to Object Storage. By default, the source database host is used for the upload, but you can opt to select a different agent host from which the connection to Object Storage will be made. If the dump location is not shared with the upload agent host, the dump files are copied to a staging directory.

Figure 9a - Flow of data from an on-premises database to Oracle Autonomous Database when data is uploaded from the source host

Figure 9b - Flow of data from an on-premises database to Oracle Autonomous Database when data is uploaded from a different host

Figure 10 - Inputs for uploading data from a different host

Analyze Source Data

Based on the provided inputs, Analyze Source produces a report describing the on-premises database objects that will not be successfully migrated to Autonomous Database. Analyze Source makes use of the Schema Analyzer tool uploaded in the first step. Click "Analyze Source" on the Review & Submit page. You must select either the named credential of a user where the Schema Advisor is installed, or a credential with SYSDBA privileges. If a SYSDBA credential is selected, a temporary Schema Advisor user is created and then deleted after the advisor report is generated.

Figure 11 - Report generated by Analyze Source

Validate Inputs Before Submission

On the Review & Submit page, click the "Validate" button to verify all the inputs concerning the source, destination, object storage, and export/import options provided in the activity. Validation is not a mandatory step, but it is advisable to validate the inputs before submitting the activity. Figure 12 shows the list of validations executed as part of this step.

Figure 12 - Validations

Post-Migration Activities

Once the migration activity is complete, you can compare performance, view an analysis of the activity, view detailed logs, or clean up dumps from the source and object storage. All these operations are available from the Actions menu of an activity.

Figure 13 - Various options of the Actions Menu

- Cleanup - Deletes export dumps from the source location and object storage.
- View Details - Redirects to the deployment activity page, where you get step-by-step progress details and logs for the corresponding migration activity.
- View Analysis - Analyzes the validations, export, and import operations executed by the migration activity. It summarizes the job steps and then drills down into the details of individual object errors.

Figure 14 - Analyzing a Migration Activity

- Compare Performance - Shows the SQL Performance Analyzer report for the migration activity.
- Marking an activity as "Complete" - Importing data into an Autonomous Database might result in a few import errors, in which case the migration activity ends with the status "Completed with errors". You can analyze the migration (using View Analysis), and if the errors reported during import are ignorable, you can choose to mark the activity as complete.

Figure 15 - Marking an activity as complete

Conclusion

The Database Migration Workbench provides a wide variety of benefits to automate and de-risk your migrations. It lets you select the source databases and schemas of interest and analyze them before migration. It provides a unified guided workflow for migrating an on-premises database, with the ability to customize the migration with various export/import options and custom scripts. Database Migration Workbench can schedule a migration for a later date, minimizing the impact on the application. Post-migration, it identifies objects needing attention as well as objects that were migrated successfully, and it shows insights on performance before and after migration using SPA. Database Migration Workbench is available starting with Oracle Enterprise Manager 13c Release 4 Update 7 (13.4.0.7).

Related Links

- Official documentation
- FastConnect design
- Networking Setup to Manage On-Premises Targets with an Enterprise Manager hosted in OCI
- Advanced compression with Data Pump
- Webinar: Automating Database Migration to Oracle Cloud, including Autonomous Database, Eliminates Operational Risk. Register: December 10, 2020 11:00 a.m. PT/2:00 p.m. ET


How to Create a Database Link from an Autonomous Data Warehouse to a Database Cloud Service Instance

Autonomous Data Warehouse (ADW) now supports outgoing database links to any database that is accessible from an ADW instance, including Database Cloud Service (DBCS) and other ADW/ATP instances. To use database links with ADW, the target database must be configured to use TCP/IP with SSL (TCPS) authentication. Since both ADW and ATP use TCPS authentication by default, setting up a database link between these services is pretty easy and takes only a few steps. On the other hand, enabling TCPS authentication in a database that doesn't have it configured (e.g. in DBCS) requires some additional steps that need to be followed carefully. In this blog post, I will demonstrate how to create a database link from an ADW instance to a DBCS instance, including the steps to enable TCPS authentication.

Here is an outline of the steps that we are going to follow:

- Enable TCPS Authentication in DBCS
- Connect to DBCS Instance from Client via TCPS
- Create a DB Link from ADW to DBCS

Enable TCPS Authentication in DBCS

A DBCS instance uses the TCP/IP protocol by default. Configuring TCPS in DBCS involves several steps that need to be performed manually. Since we are going to modify the default listener to use TCPS, and it is configured under the grid user, we will be using both the oracle and grid users. Here are the steps needed to enable TCPS in DBCS:

- Create wallets with self-signed certificates for server and client
- Exchange certificates between server and client wallets (export/import certificates)
- Add the wallet location in the server and the client network files
- Add a TCPS endpoint to the database listener

Create wallets with self-signed certificates for server and client

As part of enabling TCPS authentication, we need to create individual wallets for the server and the client. Each of these wallets has its own certificate, and the two sides will exchange certificates with one another. For the sake of this example, I will be using self-signed certificates. The client wallet and certificate can be created on the client side; however, I'll be creating my client wallet and certificate on the server and moving them to my local system later on. See Configuring Secure Sockets Layer Authentication for more information. Let's start...

Set up wallet directories with the root user:

    [root@dbcs0604 u01]$ mkdir -p /u01/server/wallet
    [root@dbcs0604 u01]$ mkdir -p /u01/client/wallet
    [root@dbcs0604 u01]$ mkdir /u01/certificate
    [root@dbcs0604 /]# chown -R oracle:oinstall /u01/server
    [root@dbcs0604 /]# chown -R oracle:oinstall /u01/client
    [root@dbcs0604 /]# chown -R oracle:oinstall /u01/certificate

Create a server wallet with the oracle user:

    [oracle@dbcs0604 ~]$ cd /u01/server/wallet/
    [oracle@dbcs0604 wallet]$ orapki wallet create -wallet ./ -pwd Oracle123456 -auto_login
    Oracle PKI Tool Release 18.0.0.0.0 - Production
    Version 18.1.0.0.0
    Copyright (c) 2004, 2017, Oracle and/or its affiliates. All rights reserved.
    Operation is successfully completed.

Create a server certificate with the oracle user:

    [oracle@dbcs0604 wallet]$ orapki wallet add -wallet ./ -pwd Oracle123456 -dn "CN=dbcs" -keysize 1024 -self_signed -validity 3650 -sign_alg sha256
    Oracle PKI Tool Release 18.0.0.0.0 - Production
    Version 18.1.0.0.0
    Copyright (c) 2004, 2017, Oracle and/or its affiliates. All rights reserved.
    Operation is successfully completed.
Create a client wallet with the oracle user:

    [oracle@dbcs0604 wallet]$ cd /u01/client/wallet/
    [oracle@dbcs0604 wallet]$ orapki wallet create -wallet ./ -pwd Oracle123456 -auto_login
    Oracle PKI Tool Release 18.0.0.0.0 - Production
    Version 18.1.0.0.0
    Copyright (c) 2004, 2017, Oracle and/or its affiliates. All rights reserved.
    Operation is successfully completed.

Create a client certificate with the oracle user:

    [oracle@dbcs0604 wallet]$ orapki wallet add -wallet ./ -pwd Oracle123456 -dn "CN=ctuzla-mac" -keysize 1024 -self_signed -validity 3650 -sign_alg sha256
    Oracle PKI Tool Release 18.0.0.0.0 - Production
    Version 18.1.0.0.0
    Copyright (c) 2004, 2017, Oracle and/or its affiliates. All rights reserved.
    Operation is successfully completed.

Exchange certificates between server and client wallets (export/import certificates)

Export the server certificate with the oracle user:

    [oracle@dbcs0604 wallet]$ orapki wallet export -wallet ./ -pwd Oracle123456 -dn "CN=dbcs" -cert /tmp/server.crt
    Oracle PKI Tool Release 18.0.0.0.0 - Production
    Version 18.1.0.0.0
    Copyright (c) 2004, 2017, Oracle and/or its affiliates. All rights reserved.
    Operation is successfully completed.

Export the client certificate with the oracle user:

    [oracle@dbcs0604 wallet]$ orapki wallet export -wallet ./ -pwd Oracle123456 -dn "CN=ctuzla-mac" -cert /tmp/client.crt
    Oracle PKI Tool Release 18.0.0.0.0 - Production
    Version 18.1.0.0.0
    Copyright (c) 2004, 2017, Oracle and/or its affiliates. All rights reserved.
    Operation is successfully completed.

Import the client certificate into the server wallet with the oracle user:

    [oracle@dbcs0604 wallet]$ cd /u01/server/wallet/
    [oracle@dbcs0604 wallet]$ orapki wallet add -wallet ./ -pwd Oracle123456 -trusted_cert -cert /tmp/client.crt
    Oracle PKI Tool Release 18.0.0.0.0 - Production
    Version 18.1.0.0.0
    Copyright (c) 2004, 2017, Oracle and/or its affiliates. All rights reserved.
    Operation is successfully completed.

Import the server certificate into the client wallet with the oracle user:

    [oracle@dbcs0604 wallet]$ cd /u01/client/wallet/
    [oracle@dbcs0604 wallet]$ orapki wallet add -wallet ./ -pwd Oracle123456 -trusted_cert -cert /tmp/server.crt
    Oracle PKI Tool Release 18.0.0.0.0 - Production
    Version 18.1.0.0.0
    Copyright (c) 2004, 2017, Oracle and/or its affiliates. All rights reserved.
    Operation is successfully completed.

Change permissions for the server wallet with the oracle user. We need to set the permissions on the server wallet so that it can be accessed when we restart the listener after enabling the TCPS endpoint:

    [oracle@dbcs0604 wallet]$ cd /u01/server/wallet
    [oracle@dbcs0604 wallet]$ chmod 640 cwallet.sso

Add wallet location in the server and the client network files

Creating server and client wallets with self-signed certificates and exchanging certificates were the initial steps of the TCPS configuration. We now need to modify both the server and client network files so that they point to their corresponding wallet locations and are ready to use the TCPS protocol. Here's how those files look in my case:
Server-side $ORACLE_HOME/network/admin/sqlnet.ora under the grid user:

    # sqlnet.ora Network Configuration File: /u01/app/18.0.0.0/grid/network/admin/sqlnet.ora
    # Generated by Oracle configuration tools.
    NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)
    wallet_location =
      (SOURCE=
        (METHOD=File)
        (METHOD_DATA=
          (DIRECTORY=/u01/server/wallet)))
    SSL_SERVER_DN_MATCH=(ON)

Server-side $ORACLE_HOME/network/admin/listener.ora under the grid user:

    wallet_location =
      (SOURCE=
        (METHOD=File)
        (METHOD_DATA=
          (DIRECTORY=/u01/server/wallet)))
    LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))    # line added by Agent
    ASMNET1LSNR_ASM=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=ASMNET1LSNR_ASM))))    # line added by Agent
    ENABLE_GLOBAL_DYNAMIC_ENDPOINT_ASMNET1LSNR_ASM=ON    # line added by Agent
    VALID_NODE_CHECKING_REGISTRATION_ASMNET1LSNR_ASM=SUBNET    # line added by Agent
    ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON    # line added by Agent
    VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET    # line added by Agent

Server-side $ORACLE_HOME/network/admin/tnsnames.ora under the oracle user:

    # tnsnames.ora Network Configuration File: /u01/app/oracle/product/18.0.0.0/dbhome_1/network/admin/tnsnames.ora
    # Generated by Oracle configuration tools.

    LISTENER_CDB1 =
      (ADDRESS = (PROTOCOL = TCPS)(HOST = dbcs0604)(PORT = 1521))

    CDB1_IAD1W9 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCPS)(HOST = dbcs0604)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = cdb1_iad1w9.sub05282047220.vcnctuzla.oraclevcn.com)
        )
        (SECURITY=
          (SSL_SERVER_CERT_DN="CN=dbcs"))
      )

    PDB1 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCPS)(HOST = dbcs0604)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = pdb1.sub05282047220.vcnctuzla.oraclevcn.com)
        )
        (SECURITY=
          (SSL_SERVER_CERT_DN="CN=dbcs"))
      )

Add TCPS endpoint to the database listener

Now that we are done configuring our wallets and network files, we can move on to the next step: adding the TCPS endpoint to the database listener. Since our listener is configured under grid, we will use the srvctl command to modify and restart it. Here are the steps:

    [grid@dbcs0604 ~]$ srvctl modify listener -p "TCPS:1521"
    [grid@dbcs0604 ~]$ srvctl stop listener
    [grid@dbcs0604 ~]$ srvctl start listener
    [grid@dbcs0604 ~]$ srvctl stop database -database cdb1_iad1w9
    [grid@dbcs0604 ~]$ srvctl start database -database cdb1_iad1w9
    [grid@dbcs0604 ~]$ lsnrctl status

    LSNRCTL for Linux: Version 18.0.0.0.0 - Production on 05-JUN-2019 16:07:24
    Copyright (c) 1991, 2018, Oracle. All rights reserved.
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
    STATUS of the LISTENER
    ------------------------
    Alias                     LISTENER
    Version                   TNSLSNR for Linux: Version 18.0.0.0.0 - Production
    Start Date                05-JUN-2019 16:05:50
    Uptime                    0 days 0 hr. 1 min. 34 sec
    Trace Level               off
    Security                  ON: Local OS Authentication
    SNMP                      OFF
    Listener Parameter File   /u01/app/18.0.0.0/grid/network/admin/listener.ora
    Listener Log File         /u01/app/grid/diag/tnslsnr/dbcs0604/listener/alert/log.xml
    Listening Endpoints Summary...
      (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
      (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=10.0.0.4)(PORT=1521)))
    Services Summary...
    Service "867e3020a52702dee053050011acf8c0.sub05282047220.vcnctuzla.oraclevcn.com" has 1 instance(s).
      Instance "cdb1", status READY, has 2 handler(s) for this service...
    Service "8a8e0ea41ac27e2de0530400000a486a.sub05282047220.vcnctuzla.oraclevcn.com" has 1 instance(s).
      Instance "cdb1", status READY, has 2 handler(s) for this service...
    Service "cdb1XDB.sub05282047220.vcnctuzla.oraclevcn.com" has 1 instance(s).
      Instance "cdb1", status READY, has 1 handler(s) for this service...
    Service "cdb1_iad1w9.sub05282047220.vcnctuzla.oraclevcn.com" has 1 instance(s).
Instance "cdb1", status READY, has 2 handler(s) for this service... Service "pdb1.sub05282047220.vcnctuzla.oraclevcn.com" has 1 instance(s). Instance "cdb1", status READY, has 2 handler(s) for this service... The command completed successfully Please note that in the first step we added the TCPS endpoint to the port 1521 of the default listener. It's also possible to keep the port 1521 as is and add TCPS endpoint to a different port (e.g. 1523). Connect to DBCS Instance from Client via TCPS We should have TCPS authentication configured now. Before we move onto testing, let's take a look at the client-side network files (Please note the public IP address of the DBCS instance in tnsnames.ora): Client-side tnsnames.ora CDB1 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCPS)(HOST = 132.145.151.208)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = cdb1_iad1w9.sub05282047220.vcnctuzla.oraclevcn.com) ) (SECURITY= (SSL_SERVER_CERT_DN="CN=dbcs")) ) PDB1 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCPS)(HOST = 132.145.151.208)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = pdb1.sub05282047220.vcnctuzla.oraclevcn.com) ) (SECURITY= (SSL_SERVER_CERT_DN="CN=dbcs")) ) Client-side sqlnet.ora WALLET_LOCATION = (SOURCE = (METHOD = FILE) (METHOD_DATA = (DIRECTORY = /Users/cantuzla/Desktop/wallet) ) ) SSL_SERVER_DN_MATCH=(ON) In order to connect to the DBCS instance from the client, you need to add an ingress rule for the port that you want to use (e.g. 1521) in the security list of your virtual cloud network (VCN) in OCI as shown below: We can now try to establish a client connection to PDB1 in our DBCS instance (CDB1): ctuzla-mac:~ cantuzla$ cd Desktop/InstantClient/instantclient_18_1/ ctuzla-mac:instantclient_18_1 cantuzla$ ./sqlplus /nolog SQL*Plus: Release 18.0.0.0.0 Production on Wed Jun 5 09:39:56 2019 Version 18.1.0.0.0 Copyright (c) 1982, 2018, Oracle. All rights reserved. SQL> connect c##dbcs/DBcs123_#@PDB1 Connected. SQL> select * from dual; D - X Create a DB Link from ADW to DBCS We now have a working TCPS authentication in our DBCS instance. Here are the steps from the documentation that we will follow to create a database link from ADW to DBCS: Copy your target database wallet (the client wallet cwallet.sso that we created in /u01/client/wallet) for the target database to Object Store. Create credentials to access your Object Store where you store the cwallet.sso. See CREATE_CREDENTIAL Procedure for details. Upload the target database wallet to the data_pump_dir directory on ADW using DBMS_CLOUD.GET_OBJECT: SQL> BEGIN DBMS_CLOUD.GET_OBJECT( credential_name => 'OBJ_STORE_CRED', object_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/adwctraining8/b/target-wallet/o/cwallet.sso', directory_name => 'DATA_PUMP_DIR'); END; / PL/SQL procedure successfully completed. On ADW create credentials to access the target database. The username and password you specify with DBMS_CLOUD.CREATE_CREDENTIAL are the credentials for the target database that you use to create the database link. Make sure the username consists of all uppercase letters. For this example, I will be using the C##DBCS common user that I created in my DBCS instance: SQL> BEGIN DBMS_CLOUD.CREATE_CREDENTIAL( credential_name => 'DBCS_LINK_CRED', username => 'C##DBCS', password => 'DBcs123_#'); END; / PL/SQL procedure successfully completed. 
5. Create the database link to the target database using DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK:

    SQL> BEGIN
           DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
             db_link_name => 'DBCSLINK',
             hostname => '132.145.151.208',
             port => '1521',
             service_name => 'pdb1.sub05282047220.vcnctuzla.oraclevcn.com',
             ssl_server_cert_dn => 'CN=dbcs',
             credential_name => 'DBCS_LINK_CRED');
         END;
         /

    PL/SQL procedure successfully completed.

6. Use the database link you created to access data on the target database:

    SQL> select * from dual@DBCSLINK;

    D
    -
    X

That's it! In this blog post, we covered how to enable TCPS authentication in DBCS and how to create an outgoing database link from ADW to our DBCS instance. Even though we focused on the DBCS configuration, these steps can be applied when setting up a database link between ADW and any other Oracle database.


Autonomous Database

Autonomous Database Newsletter - December 9, 2020

December 9, 2020

Latest Newsletter for Autonomous Database on Shared Exadata Infrastructure

Welcome to our latest customer newsletter for Autonomous Database on Shared Exadata Infrastructure. This newsletter covers the following new features and topics:

- Release of APEX 20.2
- Accessing Object Store Files
- Operations Insights
- Webcast: Win With Data
- Replay: Global Leaders APAC Fall Meeting

Don't forget to check out our "What's New" page for a comprehensive archive of all the new features we have added to Autonomous Database - click here. To find out how to subscribe to this newsletter, click here.

(Shared Infrastructure Only) UPDATE - APEX 20.2 Now Available in Autonomous Database-Shared

The version of APEX within Autonomous Database has been upgraded to version 20.2. This release of APEX introduces several new features and enhancements to help developers be more productive than ever before: the all-new Cards component, more automations, REST data synchronization, charts in faceted search, REST connector plug-ins, report printing improvements, and the Redwood Light theme style.

LEARN MORE

- DOC: Creating Applications with Oracle APEX in ADB, click here
- DOC: APEX Release Notes, click here
- BLOG: Announcing APEX 20.2, click here
- WEB: Visual Guide to What's New in APEX 20.2, click here

DO MORE

- Visit apex.oracle.com for a list of all available tutorials - click here.

(Shared Infrastructure Only) UPDATE - Changes to Access Methods for Files in S3 and Azure Blob Storage

It is now possible to use a pre-signed URL in any DBMS_CLOUD procedure that takes a URL to access files in Amazon S3, without the need to create a credential. Likewise, it is now possible to use a Shared Access Signature (SAS) URL in any DBMS_CLOUD procedure that takes a URL to access files in Azure Blob Storage, without the need to create a credential.
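As a sketch of what this enables (the S3 URL below is a truncated, hypothetical pre-signed URL, and the bucket and object names are placeholders), a DBMS_CLOUD call can now reference the file without any credential:

    BEGIN
      DBMS_CLOUD.GET_OBJECT(
        object_uri     => 'https://mybucket.s3.us-west-2.amazonaws.com/sales.csv?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Signature=...',
        directory_name => 'DATA_PUMP_DIR');
    END;
    /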
LEARN MORE

- DOC: DBMS_CLOUD Package File URI Formats in ADW, click here
- DOC: DBMS_CLOUD Package File URI Formats in ATP, click here
- DOC: DBMS_CLOUD Package File URI Formats in AJD, click here

(Shared Infrastructure Only) NEW - Oracle Cloud Infrastructure Operations Insights

Oracle Cloud Infrastructure Operations Insights provides 360-degree insight into database resource utilization and capacity. It is FREE to use with ADW and ATP. Note that Autonomous JSON Database is not currently supported by Operations Insights, but support is planned for a future release. Operations Insights consists of two integrated applications:

- Capacity Planning - enables IT administrators and capacity planners to understand how critical resources, including CPU and storage, are used. Capacity planning ensures that enterprise databases have sufficient resources to meet future business needs.
- Oracle SQL Warehouse - makes it easier to manage SQL at scale. It analyzes SQL performance issues and identifies trends and key insights to help avoid potential future problems.

LEARN MORE

- DOC: About Oracle Cloud Infrastructure Operations Insights, click here
- BLOG: Announcing the general availability of Oracle Cloud Infrastructure Operations Insights, click here
- VIDEO: Introduction to Oracle Cloud Infrastructure Operations Insights Capacity Planning, click here
- VIDEO: Introduction to Oracle Cloud Infrastructure Operations Insights SQL Warehouse Service, click here

(Shared Infrastructure Only) WEBCAST - Win With Data

Are you looking to swiftly turn a growing mountain of data into business insights? Attend this session to discover how to get the deep, data-driven insights needed to make quick decisions with the Oracle Departmental Data Warehouse, in four live sessions. We invite you to join us for a virtual event on Dec. 17 from 9:00 - 11:00 a.m. PT: The New Self-Service Data Warehouse: The No Hassle Data Engine for Next-Level Insights. Register once and you can stay for all four sessions or just one! The agenda and registration link are available here.

(Shared Infrastructure Only) REPLAY - Oracle Global Leaders APAC Meeting

Watch the recordings of real-life Oracle Cloud deployments from 12 customers and partners speaking in 3 live panels (covering Cloud Infrastructure, Autonomous Data Management & Analytics), with Q&A moderated by Reiner Zimmermann, VP Product Management. Each customer panel is followed by a product direction update from Oracle product management. The videos and presentations for the event are now available, click here.


Autonomous Database

Autonomous Database Newsletter - October 28, 2020

October 28, 2020

Latest Newsletter for Autonomous Database on Shared Exadata Infrastructure

Welcome to our latest customer newsletter for Autonomous Database on Shared Exadata Infrastructure. This newsletter covers the following new features and topics:

- Autonomous Data Guard for Autonomous JSON Database
- Private Endpoint Support for Tools
- Database Modes
- PL/SQL SDK for OCI
- OCI Events Integration
- Changes to Password Rules
- Machine Learning Workshop
- Global Leaders APAC Meeting

Note: some new features are only available with 19c. To get the most from Autonomous Database, upgrade to 19c as soon as possible. There is more information about how to upgrade existing 18c instances to 19c here, for ADW, and here, for ATP. Don't forget to check out our "What's New" page for a comprehensive archive of all the new features we have added to Autonomous Database - click here. To find out how to subscribe to this newsletter, click here.

(Shared Infrastructure Only) NEW - Autonomous Data Guard Now Available for Autonomous JSON Database

It is now possible to enable Autonomous Data Guard on AJD. This allows the following, depending on the state of the primary AJD database:

- If the primary database goes down, Autonomous Data Guard converts the standby database to the primary database with minimal interruption. After failover completes, Autonomous Data Guard automatically creates a new standby database.
- Admins/developers can perform a manual switchover operation, where the primary database becomes the standby database, and the standby database becomes the primary database.

Note: Autonomous JSON Database does not provide access to the standby database.

Simplified animation showing basic steps to create an ADG standby instance for AJD

LEARN MORE

- DOC: Key features of Autonomous Data Guard for ATP/AJD, click here
- DOC: Autonomous Data Guard notes for ATP/AJD, click here

DO MORE

- BLOG: Announcing Autonomous Data Guard! click here
- BLOG: Autonomous Data Guard: Disaster Recovery Protection with a Couple Clicks in the Cloud! click here

(Shared Infrastructure Only) NEW - ADB Tools Now Support Private Endpoints

Oracle Application Express, Oracle SQL Developer Web, and Oracle REST Data Services are now supported in Autonomous Databases with private endpoints. It is possible to connect Oracle Analytics Cloud to an ADB that has a private endpoint via the Data Gateway; see the OAC documentation on configuring and registering Data Gateway for more information. Oracle Machine Learning Notebooks are not yet supported in databases with private endpoints (note: support is coming shortly!).

LEARN MORE

- DOC: Configure Private Endpoints with ADW, click here
- DOC: Configure Private Endpoints with ATP, click here
- BLOG: Announcing Private Endpoints in Autonomous Database on Shared Exadata Infrastructure, click here

DO MORE

The documentation includes two sample network scenarios for configuring private endpoints.

- DOC: Private Endpoint Configuration Examples for ADW, click here
- DOC: Private Endpoint Configuration Examples for ATP/AJD, click here
- BLOG: Getting Up to Speed on Using Private Endpoints for Autonomous Database, click here

(Shared Infrastructure Only) NEW - Change ADB Mode to Read/Write or Read-Only

It is now possible to select an Autonomous Database operation mode; the default mode is Read/Write. If you select Read-Only mode, users can only run queries. In addition, for both modes it is possible to restrict access so that only users with the RESTRICTED SESSION privilege can connect to the database (the ADMIN user has this privilege). A typical use case for restricted access mode is to allow for administrative tasks such as indexing, data loads, or other planned activities.

Illustration shows the new UI for setting the database mode.

LEARN MORE

- DOC: Change Autonomous Database Mode to Read/Write or Read-Only for ADW, click here
- DOC: Change Autonomous Database Mode to Read/Write or Read-Only for ATP, click here
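For reference, the privilege that gates connections in restricted access mode is granted with plain SQL, and you can check which users already hold it from the data dictionary (the grantee below is a hypothetical example):

    GRANT RESTRICTED SESSION TO ops_admin;

    SELECT grantee
    FROM   dba_sys_privs
    WHERE  privilege = 'RESTRICTED SESSION';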
(Shared Infrastructure Only) NEW - Oracle Cloud Infrastructure SDK for PL/SQL

The Oracle Cloud Infrastructure SDK for PL/SQL enables you to write code to interact with Oracle Cloud Infrastructure resources, including Autonomous Database. The latest version of the SDK is pre-installed by Oracle on all Autonomous Databases using shared Exadata infrastructure.

LEARN MORE

- DOC: PL/SQL SDK, click here
- DOC: DBMS_CLOUD_OCI_DB_DATABASE Functions, click here
- DOC: DBMS_CLOUD_OCI_DB_DATABASE Types, click here
- VIDEO: PL/SQL and the Art of OCI Management - Robert Pastijn, a member of the Platform Technology Solutions group, discusses how he built a set of PL/SQL APIs to help manage Oracle Cloud accounts and services, click here

DO MORE

- BLOG: Autonomous Database adds PL/SQL to its arsenal of OCI native SDKs, click here
- BLOG: How to level up and invoke an Oracle Function (or any Cloud REST API) from within Autonomous Database, click here

(Shared Infrastructure Only) ENHANCEMENT - Oracle Cloud Infrastructure Events Integration

You can now use OCI Events to subscribe to, and be notified of, Autonomous Database events. Using OCI Events, you can create automation and receive notifications based on state changes for Autonomous Database. You can now subscribe to the following categories of events for Autonomous Database:

- Critical Events
- Information Events
- Individual Events

Illustration shows how to use the OCI Events service.

LEARN MORE

- DOC: Using OCI Events with ADW, click here
- DOC: Using OCI Events with ATP, click here

DO MORE

- Visit the LiveLabs Using Events and Notification Workshop - click here.

(Shared Infrastructure Only) NEW - Database Actions in SQL Developer Web

The SQL Developer Web UI is now based around the concept of "Database Actions". The hamburger menu provides access to additional actions and administration features. Under "Administration" there is a new UI for managing database users.

Illustration shows the user management console within SQL Developer Web. This includes the ability to create new users.

Illustration shows creating a new user within SQL Developer Web.

LEARN MORE

- DOC: Create Users with SQL Developer Web on ADW, click here
- DOC: Create Users with SQL Developer Web on ATP, click here

(Shared Infrastructure Only) UPDATE - Changes to Password Rules

For non-admin users, the minimum password length has been lowered from 12 to 8 characters. The password rules for the ADMIN user are unchanged.
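For example, a non-admin user can now be created with an 8-character password, provided the remaining complexity rules are still met (the user name and password below are hypothetical placeholders, and DWROLE is the standard role for Autonomous Database users):

    CREATE USER report_dev IDENTIFIED BY "Dev8Pass";
    GRANT DWROLE TO report_dev;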
(Shared Infrastructure Only) EVENT - Make Machine Learning Work for You

NOVEMBER 18, 2020, 9:00AM - 11:00AM PT (GMT -8)

Register for a series of webcasts designed to help you and your customers/partners understand how to harness machine learning. In four live sessions, product experts will highlight use cases, best practices, and supporting technology that can help customers unleash the power of their data. Share the following event details and registration link with your customers and partners, click here.

(Shared Infrastructure Only) EVENT - Oracle Global Leaders APAC Meeting

DECEMBER 3, 2020, 1:00PM SGT (GMT +8)

Listen to real-life Oracle Cloud deployments, with 12 customers and partners speaking in 3 live panels (covering Cloud Infrastructure, Autonomous Data Management & Analytics), with Q&A moderated by Reiner Zimmermann, VP Product Management. Each panel will be followed by a product direction update from Oracle product management. Share the following event details and registration link with your customers and partners, click here.


Database

Announcing Oracle APEX 20.2

Oracle Application Express (APEX) 20.2 is now generally available! Oracle APEX is the world's most popular low-code platform for enterprise apps; it enables you to build scalable, secure enterprise apps, with world-class features, that can be deployed anywhere. This release of APEX introduces several new features and enhancements to help developers be more productive than ever before. From the all-new Cards component, Automations, and REST Data Synchronization, to charts in Faceted Search, REST Connector Plug-ins, Report Printing improvements, the Redwood Light theme style, and an exceptional code editing experience across all of App Builder, APEX 20.2 is an epic release that has something for everyone. Here are some of the feature highlights:

All New Cards Component

The new Cards component unlocks a powerful new way to present data and provides developers with the ultimate control and flexibility over the user interface. The Cards region provides easy customization of nearly every aspect of the card UI, including the layout, appearance, icon, badge, media, advanced HTML expressions that support client-side templating, and, for the first time, the ability to define multiple actions per card. This means that you can include any number of actions, such as links or buttons, on a single card, all without writing a single line of code. You can visit the Cards page in the Universal Theme Sample App to see them in action. You can also visit the Sample Cards application to see advanced usage of the new Cards component.

Automations

Automations are a conditional set of actions that are automatically triggered by changes that occur within database tables or remote data sources. They are used to monitor data and then perform the appropriate action (examples are auto-approving specific requests and sending email alerts). An automation can be triggered on schedule or on demand, by invoking the APEX_AUTOMATION API.

Faceted Search Enhancements

- Bar or pie charts of facet value counts: quickly display a chart of facet value counts in a dialog or 'dashboard' area.
- Groups of checkbox facets for Boolean columns: checking the facet finds records that match the 'yes' or 'true' value of the column.
- Input Field facet type: supports comparing a user-entered value with the facet column. This enables faceted searches such as finding stores within a user-entered number of miles, or records where a column contains the user-entered text.
- Performance optimization for distinct value facets.

Report Printing

- Built-in PDF printing and Excel download: built-in APIs to print PDF and download Excel in interactive reports and classic reports.
- Interactive Report - Send E-Mail: all download formats can now be attached.
- Enhanced integration with Oracle BI Publisher.
- New APIs to generate files: the new APEX_REGION.EXPORT_DATA and APEX_DATA_EXPORT APIs programmatically generate PDF, CSV, Excel, HTML, JSON, and XML files (a short sketch follows this list).
- More language support: built-in PDF now supports Chinese, Japanese, and Korean languages.
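To give a flavor of the new printing APIs, here is a minimal sketch of a programmatic PDF export, adapted from the documented APEX_DATA_EXPORT pattern (it must run inside an APEX session, and the page and region IDs below are hypothetical placeholders; check the APEX API reference for the full set of options):

    DECLARE
      l_context apex_exec.t_context;
      l_export  apex_data_export.t_export;
    BEGIN
      -- Open a query context for a report region (IDs are placeholders)
      l_context := apex_region.open_query_context(
                     p_page_id   => 1,
                     p_region_id => 12345);

      -- Render the region's result set as a PDF document
      l_export := apex_data_export.export(
                    p_context => l_context,
                    p_format  => apex_data_export.c_format_pdf);

      apex_exec.close(l_context);

      -- Hand the generated file to the browser
      apex_data_export.download(p_export => l_export);
    END;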
REST Data Source Synchronization

APEX now supports data synchronization from a REST Data Source (formerly known as Web Source Modules) to a local table. Synchronization can run either on schedule or on demand, by calling the APEX_REST_SOURCE_SYNC package. Developers don't need to build custom PL/SQL code to copy data from REST services to local tables; APEX provides this as a declarative option.

- More control over REST Source Data: REST Source Data can be appended or merged to the local table. Replacing all local data is also supported.
- Local or REST Source Data: APEX components using the REST Data Source can be configured to use the local table instead.
- Configure Technical Details: technical details like HTTP request limits, commit intervals, or delete methods for the Replace mode are configurable.
- Automatic Table Generation.

REST Data Source Connector Plug-Ins

The APEX plug-in infrastructure has been extended to support connector plug-ins for external REST APIs. This enables APEX to fully leverage REST API features like result pagination or server-side filtering, often used with third-party REST services (note: APEX natively supports ORDS and Oracle Fusion SaaS Services today!).

- REST Service Handling: the plug-in code handles REST service-specific implementation details like the pagination style or how filters are passed to the REST API.
- Automatically Pass Relevant Information: when APEX invokes a REST Data Source (e.g. to render a report), the engine invokes the plug-in code and passes all relevant context information.
- HTTP Requests: the plug-in code executes one or multiple HTTP requests and passes results back to the APEX engine.
- Automatic Processing: APEX processes the REST response received from the plug-in.

New Web Credential Types

APEX 20.2 introduces new URL Query String and HTTP Header types for Web Credentials. This allows developers to use the highly secure and encrypted credential storage also for REST services that expect an element (like an API key) as part of the URL. APEX makes sure that such sensitive parts are not written to debug or execution logs. A web credential can now be protected by providing a URL pattern: APEX will only use the web credential for URLs starting with the given pattern; otherwise, an error message is raised. To change the URL pattern, the secret part of the web credential needs to be entered again.

Redwood Light Theme Style

Universal Theme now provides a new Redwood Light theme style for your apps, available via Theme Roller. This new theme style aligns with Oracle's new user experience design system and provides the new design, color, and textures that extend across all of Universal Theme. Refresh your existing apps to uptake the latest version of Universal Theme and this new theme style.

Developer Experience

- Multiple Tabs in Property Editor: Page Designer has been enhanced to support multiple tabs in the Property Editor pane, making it more efficient to access the attributes of a region.
- Monaco Editor: Oracle APEX now uses the Monaco Editor to provide a vastly improved coding experience throughout the development environment. The new editor provides in-context code completion, syntax highlighting, and superior accessibility.
- Inspect the SQL, PL/SQL, and JavaScript code with ease: the Embedded Code utility allows developers to inspect the SQL, PL/SQL, and JavaScript contained within an APEX application. The ability to view the embedded code makes tasks such as code reviews, security evaluations, and application tuning far more convenient. Code can be saved to the file system from the App Builder, or by using the APEXExport utility.
- Quick SQL Enhancements: Quick SQL now enables you to save and load data models, provides automatic primary key column defaults, and delivers a far better coding experience with improved syntax highlighting and autocompletion for table and column directives.

New and Improved Items

- New Checkbox: this single checkbox offers an alternative to the Switch item type for Boolean columns. It works in Interactive Grid as well, even when not in edit mode. The previous Checkbox type has been renamed to Checkbox Group.
- File Browse: the File Browse item type has been enhanced to support rendering as a drop zone, supporting drag & drop of a file to be uploaded.
- Rich Text Editor: the Rich Text Editor item type has been upgraded to use CKEditor 5 and now supports markdown output.
- Text Field: the Text Field item type has a new Text Case setting to optionally transform the user-entered text to upper or lower case. The Text Field Trim Spaces and Text Case settings, and the Textarea Trim Spaces setting, are now applied on the client as well as the server.

Additional Features

Below are a number of additional features you should familiarize yourself with:

- Tree Region Enhancements: the Tree region type has been enhanced to support lazy loading and refresh without having to reload the full page.
- New Interactive Grid Saved Report Static ID Support: linking to Interactive Grid saved reports should use the saved report static ID instead of the report name. With APEX 20.2, the APEX_IG API has been updated to require the saved report static ID, rather than the report name, when linking to Interactive Grids.
- Web Source Modules are now named REST Data Sources.

As in every APEX release, a number of enhancements and bug fixes have been made to the productivity and sample applications. Additionally, numerous bugs have been fixed in APEX 20.2, resulting in even higher quality and stability for APEX environments. For more information about many of the other exciting new features in APEX 20.2, please see the Release Notes or visit the What's New page.

Additional Information

Application Express (APEX) is the low-code application development platform that is included with every Oracle Database and is also provided with every Oracle Database Cloud Service, including Oracle Autonomous Database and the Oracle Cloud Free Tier. APEX, combined with the Oracle Database, provides a fully integrated environment to build, deploy, maintain, and monitor data-driven business applications that look great on mobile and desktop devices. To learn more about Oracle Application Express, visit apex.oracle.com. To learn more about Oracle Autonomous Database, visit oracle.com/autonomous.


Database

Exadata Cloud Service X8M—No Database Workload is Too Large

The fastest cloud database service is now the most scalable.

Oracle Exadata Cloud Service, the world's fastest cloud database service, is now even faster, supporting the latest Exadata X8M hardware and bringing the benefits of Intel® Optane™ Persistent Memory (PMEM) and Remote Direct Memory Access over Converged Ethernet (RoCE) to the Oracle Cloud. On top of that, Exadata Cloud Service X8M redefines elasticity in the cloud, with configurations that scale to over 1,600 cores and 25 PB of HCC compressed data.

How did Exadata Cloud become even faster? With smart use of persistent memory and the RoCE networking fabric, Exadata Cloud now has 50x lower storage access latency than AWS databases. With typical storage access latency as low as 19 microseconds, Exadata X8M is up to 10x faster than before, boosting the performance of a wide range of OLTP and analytical applications. I've written about what PMEM and RoCE can do in the Exadata platform before; check out my previous blog, "Exadata Cloud@Customer X8M: The Best In-Datacenter Database Cloud Platform", for the details.

What are you going to do with all that speed? You can consolidate more workloads with less infrastructure and fewer CPUs, and ultimately lower your total cost of ownership. To boost Exadata's consolidation capabilities, ExaCS X8M has almost double the usable memory (1,390 GB per server) and double the local disk space for Oracle Homes.

But Exadata Cloud Service X8M is not just catching up to its sibling, Exadata Cloud@Customer; it is redefining elasticity in the cloud. Exadata Cloud Service now supports independent scaling of compute and storage, up to 32 compute servers and 64 storage cells. This provides three key benefits to customers:

- No workload is too large: customers can scale to support huge workloads with up to 1,600 cores, 44 TB memory, 96 TB PMEM, 1.6 PB flash, and 2.5 PB of usable data (25 PB after 10x HCC compression).
- Right-sized systems: customers can choose the optimum mix of storage and compute. They are no longer required to deploy compute and storage in fixed ratios, eliminating the need to pay for unnecessary infrastructure. They can choose what is best for their workload.
- On-demand elasticity: grow your system at any time by adding additional compute and storage servers, and pay for that capacity only when you need it.

To make this work, we have redefined the concept of an Exadata in the cloud. Gone are the fixed rack sizes (Quarter, Half, and Full). Now, you start with a minimal elastic configuration (which happens to be the same as a quarter rack) of two database servers and three storage servers, and you add database and storage servers one or more at a time as required. The Oracle cloud takes care of provisioning these servers from pools of RoCE-interconnected compute and storage, building the ideal system as you go. Start with a small quarter rack and grow it over time as your business grows. If you need more storage, add more. If your system needs additional compute to remain balanced, add compute. Pay for this additional infrastructure only when you need it. With maximum configurations of 32 compute servers and 64 storage cells, you can be confident that the system can scale to meet future growth.

Exadata Cloud Service X8M will be available globally in the next few weeks. Pricing is unchanged from Exadata Cloud Service X8: you get persistent memory and the new, faster RoCE interconnect at no additional charge.
You can try it out with little investment, as we allow you to provision a system for as little as 48 hours. More details are available at www.oracle.com/engineered-systems/exadata/

Bob Thome is a Vice President at Oracle responsible for product management for Database Engineered Systems and Cloud Services, including Exadata, Exadata Cloud Service, Exadata Cloud@Customer, RAC on OCI-C, VM DB (RAC and SI) on OCI, and Oracle Database Appliance. He has over 30 years of experience in the information technology industry. With experience at both hardware and software companies, he has managed databases, clusters, systems, and support services.


Autonomous Database

Machine Learning Performance on Autonomous Database

In many organizations, a data science project likely involves the data scientist pulling data to a separate analytics server, then analyzing and preparing the data and building machine learning models locally. As enterprises grow their data science teams and data volumes expand, common access to data and the ability to analyze that data in place can dramatically reduce time to project deployment and overall complexity. Building models and scoring data at scale is a hallmark of Oracle's in-database machine learning, Oracle Machine Learning. Combine this with Oracle Autonomous Database, the converged database with auto-scale capabilities, and a team of data scientists can work comfortably in the same environment. In this blog post, we look at the factors affecting machine learning model building performance, as well as numbers illustrating the performance and scalability possible with Oracle Machine Learning. In a subsequent post, we'll discuss scoring performance.

Top 7 factors affecting performance

Many factors affect machine learning performance, including:

Data volume – whether building models or scoring data, the most obvious factor is the amount of data involved, typically measured in the number of rows and columns, or simply gigabytes.

Data movement and loading – related to data volume is the performance impact of moving data from one environment to another, or from disk into the analytics processing engine. This time needs to be considered when comparing machine learning tools and processes.

Choice of algorithm – different algorithms can have vastly different computational requirements; e.g., Naïve Bayes and Decision Tree algorithms have low computational demands compared with those of Generalized Linear Models or Support Vector Machines.

Data complexity – some patterns in data are easily identified by an algorithm and result in a model converging quickly. Other patterns, e.g., non-linear ones, can require many more iterations. In other cases, the cardinality of categorical variables or the density/sparsity of the data can significantly impact performance.

Algorithm implementation – open source and even proprietary algorithms are often implemented in a non-parallel, single-threaded manner, meaning that, even when run on multi-processor hardware, no performance benefit is realized. Such traditional single-threaded algorithms can often be redesigned to take advantage of multi-processor and multi-node hardware through parallelized and distributed algorithm implementations. Enabling parallelism is often fundamental to improving performance and scalability.

Concurrent users – one data scientist working on a single model on a dedicated machine may or may not see adequate performance relative to the factors identified above. However, when multiple users try to work concurrently to build and evaluate models or score data, overall performance may degrade significantly, or jobs may even fail after exhausting memory or other system resources. The ability of an environment to scale up resources to meet demand alleviates such impact.

Load on the system – while the number of concurrent machine learning users impacts performance, non-ML load (interactive users and scheduled jobs) can both impact and be impacted by ML work.
Compute environments that can manage and balance such uses can provide better overall performance.

Enter Oracle Autonomous Database

Oracle Autonomous Database includes the in-database algorithms of Oracle Machine Learning (OML), which address the factors cited above. By virtue of being in the database, OML algorithms operate on the data where it resides, so no data movement is required, effectively eliminating the latency of loading data into a separate analytical engine from disk or of extracting it from the database. The OML in-database algorithm implementations are also parallelized (they can take advantage of multiple CPUs) and distributed (they can take advantage of multiple nodes, as found with Oracle Exadata and Oracle Autonomous Database). Oracle Autonomous Database further supports performance through its service levels, which manage the load on the system by controlling the degree of parallelism jobs can use, and through auto-scaling, which adds compute resources on demand (up to 3x CPU and memory) to accommodate both ML and non-ML uses.

Performance results

To illustrate how Oracle Autonomous Database with Oracle Machine Learning performs, we conducted tests on a 16-CPU environment, involving a range of data sizes, algorithms, degrees of parallelism, and numbers of concurrent users. Oracle Autonomous Database supports three service levels: high, medium, and low. 'High' limits the number of concurrent jobs to three, each of which can use up to the number of CPUs allocated to the database instance (here, up to 16 CPUs). 'Medium' allows more concurrent users but limits the number of CPUs each job can consume (here, 4 CPUs). 'Low' allows even more concurrent use, but only single-threaded execution, i.e., no parallelism.

Let's begin by comparing the single-user experience with four popular in-database classification algorithms: Decision Tree (DT), Generalized Linear Model (GLM), Naive Bayes (NB), and Support Vector Machine (SVM). We start with the medium service level, which caps the number of CPUs per job at 4 and, for a 16-CPU environment, the number of concurrent jobs at 20 (1.25 * 16). We use the ONTIME data set with 8 columns (equating to 70 coefficients when accounting for categorical variables). Notice the linear scalability across the range of data set sizes, i.e., doubling the number of rows roughly doubles the run time. While there is some variation in individual job execution times, the plot depicts the average execution time.

Next, we look at the effect of the high service level, which enables a job to use up to the number of CPUs allocated to the database instance, in this case 16 CPUs. As we noted earlier, different algorithms respond differently not only to the data, but also to the number of available resources. In some cases, increased parallelism can actually hurt performance, as we see with SVM above, due to the overhead of introducing parallelism for "smaller" data sets. At higher data volumes, however, the additional CPU resources clearly improve performance, by about 50% for 800M rows. The remaining algorithms saw performance improvements across the range. As with the medium service level, we see effectively linear scalability across data sizes.

Let's now turn our attention to concurrent users. We start with the medium service level using the Generalized Linear Model (GLM) algorithm. It is interesting to note that, since each run is limited to 4 CPUs, auto-scale had its most significant impact at 8 concurrent users.
At 4 concurrent users with 4 CPUs each, all 16 CPUs were already consumed, so this should not be surprising. When we turn auto-scale on, more CPUs become available for more concurrent users. This illustrates how a team of data scientists can work in the same environment with modest impact on one another, and that this impact can be further mitigated with auto-scale.

Here, we look at the Support Vector Machine (SVM) algorithm for regression. The performance benefits with auto-scale enabled are particularly noticeable for 4 and 8 concurrent users, with a ~30% reduction in execution time for 400M rows.

As discussed above, Oracle Autonomous Database with Oracle Machine Learning provides scalability and performance for data science teams, while providing powerful machine learning tools and autonomous data management capabilities.

Thanks to Jie Liu and Marcos Arancibia for their contributions to these performance results.
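For readers who want a feel for what an in-database model build looks like, here is a minimal sketch that trains a GLM classifier with the DBMS_DATA_MINING package. The table and column names (ONTIME_S, FLIGHT_ID, CANCELLED, ONTIME_NEW) are illustrative placeholders, not the exact setup used in these tests:

-- Settings table that selects the GLM algorithm
CREATE TABLE glm_settings (
  setting_name  VARCHAR2(30),
  setting_value VARCHAR2(4000)
);

BEGIN
  -- Package constants resolve to 'ALGO_NAME' / 'ALGO_GENERALIZED_LINEAR_MODEL'
  INSERT INTO glm_settings VALUES
    (DBMS_DATA_MINING.ALGO_NAME, DBMS_DATA_MINING.ALGO_GENERALIZED_LINEAR_MODEL);
  COMMIT;

  -- Build the model where the data lives; nothing is exported to a client
  DBMS_DATA_MINING.CREATE_MODEL(
    model_name          => 'ONTIME_GLM',
    mining_function     => DBMS_DATA_MINING.CLASSIFICATION,
    data_table_name     => 'ONTIME_S',
    case_id_column_name => 'FLIGHT_ID',
    target_column_name  => 'CANCELLED',
    settings_table_name => 'GLM_SETTINGS');
END;
/

-- Score new rows in place with the SQL PREDICTION operator
SELECT flight_id, PREDICTION(ONTIME_GLM USING *) AS predicted_cancelled
FROM ontime_new;

Because both the build and the scoring run inside the database, the degree of parallelism they receive is governed by the service level (high, medium, or low) of the connection that issues them.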


Database Cloud Services

Sharding Oracle Database Cloud Service

Oracle Sharding is now available in Oracle Cloud with Oracle Database Cloud Service as well as with Kubernetes and Docker containers (OKE). Oracle Sharding enables hyperscale, globally distributed, converged databases. It supports applications that require linear scalability, elasticity, fault isolation, and geographic distribution of data for data sovereignty. It does so by distributing chunks of a data set across independent Oracle databases (shards). Shards can be deployed in the cloud or on-premises and require no specialized hardware or software. The following figure shows a table horizontally partitioned across three shards.

Figure 1-1 Horizontal Partitioning of a Table Across Shards

Benefits of Sharding

Linear Scalability. Sharding eliminates performance bottlenecks and makes it possible to linearly scale performance and capacity by adding shards.

Fault Containment. Sharding is a shared-nothing hardware infrastructure that eliminates single points of failure, such as shared disk, SAN, and clusterware, and provides strong fault isolation: the failure or slowdown of one shard does not affect the performance or availability of other shards.

Geographical Distribution of Data. Sharding makes it possible to store particular data close to its consumers and to satisfy regulatory requirements when data must be located in a particular jurisdiction.

Rolling Upgrades. Applying configuration changes on one shard at a time does not affect other shards, and it allows administrators to first test the changes on a small subset of data.

Unlike NoSQL solutions, Oracle Sharding provides strong data consistency, the full power of SQL, support for structured and unstructured data, and the Oracle Database ecosystem. A minimal DDL sketch follows the resource list below.

Additional Information

Please review the following links for additional information.

Sharding Oracle Database Cloud Service: This is the listing for sharding automation available in Oracle Cloud Infrastructure Marketplace. It automates provisioning and management of an Oracle Sharded Database. The automation includes the following features:

Supports system-managed sharding on Oracle Database Enterprise Edition Extreme Performance.

Automatically and uniformly distributes shards, the catalog, and shard directors across Availability Domains (ADs) in a region and across Fault Domains within each AD.

Supports Data Guard-based replication for shards and the catalog, providing high availability within a region.

Provides the ability to easily scale the number of shards, for horizontal scalability.

Provides the ability to easily scale the number of shard directors, for high availability and load balancing of requests.

Sharding with Kubernetes and Docker containers in Oracle Cloud (OKE): This GitHub repository has deployment procedures for automating provisioning of Oracle Sharded Databases on Oracle Kubernetes Engine (OKE) using Oracle Cloud Infrastructure Ansible Modules and Helm charts.

Reference Architecture Patterns: These are customer-inspired Oracle Cloud deployment architecture patterns with best practices for scalability, availability, and security.

Oracle Database Sharding: This is the Oracle Sharding product page, which has the latest information, customer case studies, and links to various resources.
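To make the horizontal partitioning concrete, here is a minimal sketch of a system-managed sharded table, issued from the shard catalog database. The tablespace set, table, and column names are illustrative:

-- Run on the shard catalog, with shard DDL enabled so the statements
-- propagate to all shards
ALTER SESSION ENABLE SHARD DDL;

-- A tablespace set creates corresponding tablespaces on every shard
CREATE TABLESPACE SET ts1 USING TEMPLATE (DATAFILE SIZE 100M);

-- Rows are spread across shards by consistent hash of the sharding key
CREATE SHARDED TABLE customers
( cust_id  NUMBER NOT NULL,
  name     VARCHAR2(50),
  region   VARCHAR2(20),
  CONSTRAINT cust_pk PRIMARY KEY (cust_id)
)
PARTITION BY CONSISTENT HASH (cust_id)
PARTITIONS AUTO
TABLESPACE SET ts1;

Applications then connect through the shard directors and supply the sharding key, so each request is routed directly to the shard holding the matching chunk.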


Strategy

Standard Edition High Availability Released - See What's New

The "Standard Edition 2 – We Heard You! Announcing: Standard Edition High Availability" blog post published approximately two months ago resulted in a lot of interest in this new feature - Thank you! It is therefore with great pleasure that I can announce the general availability of Standard Edition High Availability (SEHA) on Linux, Microsoft Windows and Solaris with Oracle Database 19c, Release Update (RU) 19.7. Additional operating systems are planned to be supported later this year. What's New As the Oracle Database 19c New Features Guide under the "RAC and Grid section" states, Standard Edition High Availability provides cluster-based failover for single-instance Standard Edition Oracle Databases using Oracle Clusterware. It benefits from the cluster capabilities and storage solutions that are already part of Oracle Grid Infrastructure, such as Oracle Clusterware, Oracle Automatic Storage Management (Oracle ASM) and Oracle ASM Cluster File System (Oracle ACFS). Standard Edition High Availability is fully integrated with Oracle Grid Infrastructure starting with Oracle Grid Infrastructure 19c, Release Update 19.7. The prerequisites for SEHA database systems are therefore largely the same as for all Grid Infrastructure-based database systems, as discussed under Requirements for Installing Standard Edition High Availability. Standard Edition High Availability databases are not Real Application Clusters (RAC)-enabled. Oracle RAC One Node is a RAC-enabled database in that the RAC-option needs to be enabled in the Oracle Database home from which the database runs. This is not the case for Standard Edition High Availability databases. While both solutions provide cluster-based failover for the Oracle Database, RAC One node supports additional high availability features further reducing planned maintenance related downtime that are not part of the Standard Edition High Availability offering. For more information on how Standard Edition High Availability, Oracle Restart, Oracle RAC and Oracle RAC One Node compare, see the High Availability Options Overview for Oracle Databases using Oracle Clusterware. Standard Edition High Availability databases can be licensed using the “10-day failover rule”, which is described in this document. This rule includes the right to run the licensed program(s) [here the Standard Edition High Availability database] on an unlicensed spare computer in a failover environment for up to a total of ten separate days in any given calendar year. This right only applies when a number of machines are arranged in a cluster and share one disk array, which is the case for Standard Edition High Availability databases by default. In addition, SEHA databases are subject to all licensing regulations that generally apply to a Standard Edition 2 (SE2) single-instance Oracle Database. Note that SEHA databases are not subject to a per-cluster socket limitation, but need to adhere to the per-server socket limitation that applies to any Standard Edition 2 Oracle Database.  Standard Edition High Availability databases can either be freshly installed or configured using an existing single-instance Standard Edition Oracle Database. There is no direct upgrade path from either a single-instance or a pre-19c Oracle RAC Standard Edition 2 database. 
While the database can be upgraded, configuring it as a Standard Edition High Availability database requires additional manual steps, explained in the Managing Standard Edition High Availability with Oracle Databases section of the Oracle Database Administrator's Guide (a brief srvctl sketch follows at the end of this post). Standard Edition 2 Oracle RAC databases need to be converted to single-instance Standard Edition Oracle Databases prior to upgrading to Oracle Database 19c, as described in My Oracle Support note 2504078.1.

In conclusion, Standard Edition High Availability provides fully integrated, cluster-based failover for Standard Edition Oracle Databases using Oracle Clusterware. Oracle's Standard Edition 2 (SE2) customers thereby benefit from the high availability capabilities and storage management solutions that are already part of Oracle Grid Infrastructure, such as Oracle Automatic Storage Management (ASM) and the Oracle ASM Cluster File System (ACFS), free of charge.

Start testing here: Deploying Standard Edition High Availability. And please let me know your feedback.
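To give a feel for those management steps, here is a minimal srvctl sketch that registers an existing single-instance SE2 database with Oracle Clusterware and then relocates it manually. The database name, Oracle home path, spfile location, and node names are illustrative placeholders; consult the Administrator's Guide for the authoritative procedure:

# Register the database, listing the cluster nodes it may run on
srvctl add database -db SE2DB -oraclehome /u01/app/oracle/product/19.0.0/dbhome_1 -dbtype SINGLE -spfile +DATA/SE2DB/spfileSE2DB.ora -node node1,node2

# Start it on the first available candidate node
srvctl start database -db SE2DB

# Planned maintenance: move the instance to the other node
srvctl relocate database -db SE2DB -node node2

With this configuration in place, Oracle Clusterware automatically restarts the database on a surviving candidate node if the node it is running on fails.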

The "Standard Edition 2 – We Heard You! Announcing: Standard Edition High Availability" blog post published approximately two months ago resulted in a lot of interest in this new feature - Thank you! It...

Autonomous Database

Freedom to Build - Announcing Oracle Cloud Free Tier with New Always Free Services and Always Free Oracle Autonomous Database

On Monday, September 16th, 2019, Oracle Cloud Infrastructure (OCI) received a major update that enables students, professional developers, and anyone looking to jumpstart their next big idea to build, test, and deploy applications on Oracle Cloud and Oracle Autonomous Database free of charge for an unlimited time. This update adds a new set of Always Free services to OCI, including Always Free Oracle Autonomous Database, and converges them with Oracle's existing $300, 30-day Free Trial program into a new unified Free Tier, now available worldwide.

How to Sign Up

Getting started with Free Tier, which includes Always Free services and the Free Trial, is easy via a single streamlined sign-up. Recognized Oracle users who sign up for Free Tier will see an accelerated onboarding flow with fewer verification steps and no requirement to enter a credit card. For example, students and educators in Oracle Academy; attendees at Oracle OpenWorld and Oracle Code One; designated Oracle Groundbreaker Ambassadors; and prospective customers engaged with Oracle Sales can all sign up without a credit card. Other new users must provide a credit card during sign-up, for identification purposes only.

You can sign up and get started with Always Free services in less than 5 minutes. Simply open the Free Tier webpage at oracle.com/cloud/free and click on the prominent sign-up button. Note that while you are signing up, the second step asks you to select a "Home Region". This is important because Always Free services run only in your home region, and you cannot change it after the account is created. Make sure to click the displayed Regions link, review the global data centers where your desired Always Free services are available, and select the home region that works best.

Current paying customers who use Oracle Universal Credits and run in a supported commercial OCI data region will see the new Always Free services added to their accounts automatically; no new sign-up is required.

Always Free Services - What's Included

The new Always Free services that are part of Free Tier cover essential IaaS and PaaS technologies, providing generous resources for free for as long as they are used. A summary of the Always Free service shapes and specifications available at launch time is below. Not only do these Always Free services run on Oracle's high-performance, mission-critical cloud infrastructure, they also incorporate Oracle's premier cloud database: Oracle Autonomous Database.

What You Can Do with Always Free Oracle Autonomous Database

Always Free Oracle Autonomous Database delivers a comprehensive database experience, plus built-in tools for developers, administrators, and data scientists, now entirely free. It is a multi-model database for virtually any type of data (including relational, JSON, spatial, graph, multimedia, XML, files, and more) and any interface (full SQL, REST data access, and drivers for popular programming languages). The service supports both Autonomous Transaction Processing and Autonomous Data Warehouse workload types. It includes development tools such as SQL Developer Web for SQL and PL/SQL scripting, along with powerful command-line utilities. Developers also get a fully managed instance of Oracle APEX for rapidly building and deploying low-code apps with a modern aesthetic for mobile and desktop browsers. For data scientists, an included SQL notebook assists with creating machine learning models.
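As a quick illustration of the "full SQL" interface, here is a minimal sketch of connecting to an Always Free database from a laptop. It assumes a database named MYADB and a client credentials (wallet) zip downloaded from the OCI console and unzipped to a local directory; the path is a placeholder:

# Point Oracle clients at the unzipped wallet directory
export TNS_ADMIN=/opt/adb_wallet

# Connect as the ADMIN user via one of the predefined connection services
sqlplus admin@myadb_high

The wallet's tnsnames.ora predefines several services per database (for example, myadb_high, myadb_medium, and myadb_low) that trade off parallelism against concurrency, so choosing a service is how you pick a performance profile for the session.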
What You Can Do with Always Free Oracle Cloud Infrastructure Services

Always Free Oracle Cloud Infrastructure services include general-purpose compute VMs; block, object, and archive storage; and a load balancer. You also get an allocation of free outbound data transfer, monitoring, and notifications. Together with Always Free Oracle Autonomous Database, users can leverage this bundle of free services to develop and deploy applications in diverse languages and frameworks, and to test-drive enterprise infrastructure capabilities such as load balancing and storage cloning.

In Conclusion

Adding it all up, Oracle Cloud Free Tier, with new Always Free services and Always Free Oracle Autonomous Database, is a breakthrough for developers of all kinds. With easy sign-up, a range of useful services, and the full Oracle Autonomous Database, developers have the resources to build enterprise-class solutions in the cloud quickly and at no cost. And best of all, because it is Always Free, they can do it without the worry of losing work at a deadline set by their cloud provider. These new Always Free services greatly expand developers' freedom to build.

To learn more, visit oracle.com/cloud/free, or speak with the experts at Oracle OpenWorld 2019. Join our session on Wednesday, September 18th, 2019 at 10:00 a.m. PT in Moscone South, Room 203: "Oracle Cloud's New Free Tier and Always Free Oracle Autonomous Database" (Speaker: Todd Bottger; Session ID: PRO6695).


Autonomous Database

Enabling the Autonomous Enterprise

This post was contributed by Senior Principal Product Marketing Director, Ron Craig.

Background – data overflow

The ability of enterprises to generate data is increasingly outpacing their ability to realize real value from that data. As a result, opportunities for innovation, driven by customer, market, and competitive intelligence, are being left on the table. And given that only a subset of the avalanche of data is being put to good use, it is entirely possible that the use of inadequate data is leading to bad decisions.

A key source of this problem is that the productivity of human beings simply hasn't kept pace with the technologies we have developed to help improve our business processes. IDC has predicted that by 2025, 6 billion consumers will have one data interaction every 18 seconds. At that point, the volume of global data will be 175 ZB (175,000,000,000,000,000,000,000 bytes), and ~30% of that will be real-time data, a 6x increase vs. 2018. The exponential increase in effort required to clean, arrange, secure, maintain, and process the data from all those customers means less effort can be dedicated to insights. As a consequence, enterprises are not truly seeing the benefits of their own success in becoming data companies.

Abstraction as a game-changer

So what's needed in response? Sometimes it is good to look to other areas for inspiration, and innovation in the semiconductor industry provides some useful insights. That industry, since its early days, has had to deal with the fact that user productivity has struggled to keep pace with advances in technology, and it has surmounted those issues with innovations that address those productivity limitations head on.

Digital designs, the creations behind everything from the silicon chips that run a timer on a microwave oven all the way up to the supercomputers that forecast the weather, are at their essence built from logical components known as gates. These logic gates perform fairly routine Boolean operations, effectively allowing decisions or calculations to be made based on sets of inputs and propagating those calculations in real time. Chip designers working at the gate level could be expected to produce verified designs (effectively combinations of connected gates) at a rate of ~50 gates per day, a productivity level that has remained pretty constant over time. The processors in today's high-end cellphones may contain around 100 million gates, so a team of 100 chip designers working at the gate level would take 80 years to put such a chip together. In reality, though, such chips are today often developed in two years or less, as a result of innovations introduced into the chip design flow over the last twenty years.

For the purposes of this blog, and since it provides a useful analogy, the innovation we'll focus on is the introduction of the hardware description language (HDL). An HDL effectively works like software, allowing the chip designer to describe logic in a way that resembles what it does, as opposed to how it is built, hence freeing the designer from the details of how that logic operation is implemented in hardware. HDL-based design goes hand in hand with automated synthesis algorithms, which translate those higher-level descriptions into the equivalent gates that perform the same function, and which ultimately can be realized in silicon.
As a result of these innovations, the semiconductor industry has enabled designers to keep up with the capacity of silicon chips by allowing them to be less and less concerned with the lower-level implementation details of the chips they are designing, and to put their focus on what those chips actually do. Chip designers take care of the 'what', where they can bring their creativity and experience to bear, and automation takes care of the 'how' in a reliable and repeatable fashion.

Oracle Autonomous Database - Automating a path to innovation

The semiconductor industry experience provides a useful blueprint for a path that the data industry must also take, demonstrating why automation is the key to unlocking the potential of today's data, in the same way that innovation in the semiconductor industry has allowed designers to fully exploit the capacity of silicon chips. Across a range of industries, corporations differentiate themselves by what they do with the data they generate and collect, not by the effort they expend to manage and secure that data. To have maximum impact, database experts need to be able to maximize their focus on what their data is telling them (the 'what'), and rely on automation to keep that data available and secure (the 'how').

95% of the respondents to a recent Oracle user survey noted that they are having difficulty keeping up with the growth in their data, and the majority of data managers are performing multiple major database updates per year. In addition to simply keeping the database up and running, the survey noted that significant manual effort continues to be dedicated to general troubleshooting and tuning, backup/recovery tasks, and provisioning to handle usage peaks and troughs. Data security also stands out as an area that can benefit significantly from automation, not only because automation can reduce manual effort, but because it can reduce risk. In an age where managers of on-premises database setups must continuously juggle the urgency of patch installation against the cost of the downtime needed to install those patches, it comes as no surprise that a recent Verizon survey noted that 85% of successful data breaches exploited vulnerabilities for which patches had been available for up to a year before the attack occurred. It makes perfect sense to instead make use of Oracle Autonomous Database to automatically apply security patches with no downtime.

In total, these automated capabilities reduce administrative costs by 80%, meaning that the Autonomous Enterprise taking advantage of these advances can dedicate significantly more effort to innovation. Coming back to our semiconductor analogy, innovations in how design is done didn't make chip designers less necessary; rather, they made designers significantly more productive and enabled them to make more innovative use of advances in technology. We expect the Oracle Autonomous Database to have the same impact for DBAs and data managers in the Autonomous Enterprise.
Learn more at Oracle OpenWorld 2019

To learn more from enterprises that have already become autonomous, visit the sessions below at the 2019 Oracle OpenWorld event:

Drop Tank: A Cloud Journey Case Study, Tuesday, September 17, 11:15 AM – 12:00 PM

Oracle Autonomous Data Warehouse: Customer Panel, Tuesday, September 17, 1:45 PM – 2:30 PM

Oracle Autonomous Transaction Processing Dedicated Deployment: The End User's Experience, Tuesday, September 17, 5:15 PM – 6:00 PM

Managing One of the Largest IoT Systems in the World with Autonomous Technologies, Wednesday, September 18, 9:00 AM – 9:45 AM
