
All Things Database: Education, Best Practices,
Use Cases & More

Recent Posts

Autonomous Database

Autonomous Database Newsletter - March 8, 2021

March 8, 2021 Newsletter For Autonomous Database on Shared Exadata Infrastructure

Welcome to our latest customer newsletter for Autonomous Database on Shared Exadata Infrastructure. This newsletter covers the following new features and topics:
- Graph Studio for ADB
- OML for Python
- AES256 encryption
- Enhancements for manual backups
- Private endpoint support for tools
- New OCI Events
- Global Leaders Update

Don't forget to check out our "What's New" page for a comprehensive archive of all the new features we have added to Autonomous Database - click here. To find out how to subscribe to this newsletter click here.

(Shared Infrastructure Only) COMING SOON - Graph Studio In Autonomous Database

Graph Studio is now built into Autonomous Database, with a launch button on the Tools panel of the ADB console. This integration provides comprehensive tooling for working with graphs:
- Graph modeling tool to map relational data to graphs
- Browser-based notebooks for interactive analysis and collaboration
- Integrated graph visualization
- Uses the proven capabilities of the Graph Analytics Engine

This feature is currently in Limited Availability and GA is expected within the next couple of weeks.

LEARN MORE - WEBCASTS: What Graph can do for you + how to get started, click here. Demystifying Graph Analytics for the Non-Expert, click here. AskTOM Office Hours - recordings click here, and for slide decks click here.
DO MORE - VIDEO: Autonomous Graph Database: A tour of the Graph Studio interface, click here. VIDEO: Simplify Graph Analytics with Autonomous Database, click here. BACK TO TOP

(Shared Infrastructure Only) NEW - Oracle Machine Learning for Python

We are pleased to announce the general availability of Oracle Machine Learning for Python (OML4Py) on Autonomous Database. OML4Py leverages the database as a high-performance computing environment to explore, transform, and analyze data faster and at scale, while allowing the use of familiar Python. The in-database parallelized machine learning algorithms are exposed through a well-integrated Python interface. In addition, data scientists and other Python users can create user-defined Python functions managed in the database. Python objects can also be stored directly in the database - as opposed to being managed in flat files. OML4Py also supports automated machine learning - or AutoML - which not only enhances data scientist productivity, but also enables non-experts to use and benefit from machine learning. AutoML can help produce more accurate models faster, through automated algorithm and feature selection, and model tuning and selection.

LEARN MORE - VIDEO: OML for Python (2m video) click here. VIDEO: OML4Py Introduction (17m video) click here. DOC: User's Guide, click here. DOC: REST API for Embedded Python Execution, click here. PDF: OML for Python briefing note, click here. BLOG: Introducing Oracle Machine Learning for Python click here.
DO MORE - OFFICE HOURS: OML4Py Hands-on Lab, click here. CODE: GitHub Repository with Python notebooks click here. BACK TO TOP

(Shared Infrastructure Only) NEW - ADB Now Uses AES256 Encryption On Tablespaces

ADB now uses AES256 encryption for tablespaces. BACK TO TOP

(Shared Infrastructure Only) NEW - Enhancements To Manual Backup Process

You can now choose the bucket used to store manual backups, and the steps to configure manual backups have been simplified. Set the database property DEFAULT_BACKUP_BUCKET to specify the manual backup bucket on Oracle Cloud Infrastructure Object Storage.
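As a rough illustration (not taken from the newsletter itself), setting this property from a SQL worksheet might look like the sketch below; the credential name, region, namespace, and bucket are placeholders, so check the documentation linked below for the exact requirements:

-- Assumes a cloud credential (e.g. ADMIN.DEF_CRED_NAME) has already been created
ALTER DATABASE PROPERTY SET DEFAULT_CREDENTIAL = 'ADMIN.DEF_CRED_NAME';

-- Point manual backups at a specific Object Storage bucket (region, namespace, and bucket are placeholders)
ALTER DATABASE PROPERTY SET DEFAULT_BACKUP_BUCKET = 'https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/my_namespace/my_backup_bucket';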
DOC: Manual Backups on Autonomous Database, click here. BACK TO TOP

(Shared Infrastructure Only) NEW - Private Endpoint Support For Built-In Tools

ADB within a private endpoint now supports access to APEX, SQL Developer Web, ORDS, and OML Notebooks. To connect an Oracle Analytics Cloud instance to an Autonomous Database that has a private endpoint, use Data Gateway, just as you would for an on-premises database. See Configure and Register Data Gateway for Data Visualization for more information.

DOC: Configuring Private Endpoints, click here. BACK TO TOP

(Shared Infrastructure Only) NEW - Additional Information Events Now Available

Autonomous Database generates events that you can subscribe to with OCI Events. Two new Information events have been recently added:
- ScheduledMaintenanceWarning
- WalletExpirationWarning

LEARN MORE - DOC: Using OCI Events with ADB click here. BACK TO TOP

Oracle Global Leaders Spring Webinar Series

Date: Tues March 17, Time: 9:00am PT / 12:00pm ET

Join us for this webcast to hear from Ray Owens, CEO of DX Marketing, about how they achieve effective ROI for their clients' marketing spend using ADW, OAC, Oracle Machine Learning and more. Learn about their growth from the early days of Oracle cloud-based solutions and the integration of more data sets from SAS and their new parent company, Vision Integrated Graphics. Please use the following link to review the agenda and register for the event: click here. BACK TO TOP


Autonomous Database

Using Oracle Machine Learning for Python - building predictive models

In my previous post, we introduced some basic features of OML4Py. In this post, I'll illustrate using some in-database machine learning algorithms. Creating in-database predictive models In this example, we use the Random Forest algorithm and the optional cost matrix input to predict Species from the IRIS table. First, we get a DataFrame proxy object to the IRIS table, use the overloaded split function to obtain training and test data sets, which are also proxy objects (data remains in the database).  We then create and display a demo cost matrix - just for illustration - before building a Random Forest model. Note that the cost matrix is also a named database table, RF_COST. IRIS = oml.sync(table = "IRIS") # obtain proxy object IRIS_TRAIN, IRIS_TEST = IRIS.split() # split data using default ratio TRAIN_x = IRIS_TRAIN[0].drop('Species') TRAIN_y = IRIS_TRAIN[0]['Species'] # Create a cost matrix table in the database cost_matrix = [['setosa', 'setosa', 0], ['setosa', 'virginica', 0.2], ['setosa', 'versicolor', 0.8], ['virginica', 'virginica', 0], ['virginica', 'setosa', 0.5], ['virginica', 'versicolor', 0.5], ['versicolor', 'versicolor', 0], ['versicolor', 'setosa', 0.4], ['versicolor', 'virginica', 0.6]] COST = oml.create(pd.DataFrame(cost_matrix, columns = ['ACTUAL_TARGET_VALUE', 'PREDICTED_TARGET_VALUE', 'COST']), "RF_COST") COST.pull().pivot(index='ACTUAL_TARGET_VALUE', columns='PREDICTED_TARGET_VALUE',values='COST') With the output: PREDICTED_TARGET_VALUE setosa versicolor virginica ACTUAL_TARGET_VALUE setosa 0.0 0.8 0.2 versicolor 0.4 0.0 0.6 virginica 0.5 0.5 0.0 To build our Random Forest model, we create an rf object, specifying a maximum tree depth of 4, and invoke fit on our training data with the cost matrix. To display model details, we print the object. Note that the Random Forest model also reports attribute importance. from oml import rf rf_mod = rf(tree_term_max_depth = '4') rf_mod = rf_mod.fit(TRAIN_x, TRAIN_y, COST) rf_mod # Show model details With the output: Algorithm Name: Random Forest Mining Function: CLASSIFICATION Target: Species Settings: setting name setting value 0 ALGO_NAME ALGO_RANDOM_FOREST 1 CLAS_COST_TABLE_NAME "PYQUSER"."RF_COST" 2 CLAS_MAX_SUP_BINS 32 3 CLAS_WEIGHTS_BALANCED OFF 4 ODMS_DETAILS ODMS_ENABLE 5 ODMS_MISSING_VALUE_TREATMENT ODMS_MISSING_VALUE_AUTO 6 ODMS_RANDOM_SEED 0 7 ODMS_SAMPLING ODMS_SAMPLING_DISABLE 8 PREP_AUTO ON 9 RFOR_NUM_TREES 20 10 RFOR_SAMPLING_RATIO .5 11 TREE_IMPURITY_METRIC TREE_IMPURITY_GINI 12 TREE_TERM_MAX_DEPTH 4 13 TREE_TERM_MINPCT_NODE .05 14 TREE_TERM_MINPCT_SPLIT .1 15 TREE_TERM_MINREC_NODE 10 16 TREE_TERM_MINREC_SPLIT 20 Computed Settings: setting name setting value 0 RFOR_MTRY 2 Global Statistics: AVG_DEPTH AVG_NODECOUNT MAX_DEPTH MAX_NODECOUNT MIN_DEPTH \ 0 3.25 5.6 4.0 6.0 4.0 MIN_NODECOUNT NUM_ROWS 0 4.0 102.0 Attributes: SEPAL_LENGTH SEPAL_WIDTH PETAL_LENGTH PETAL_WIDTH Partition: NO Importance: ATTRIBUTE_NAME ATTRIBUTE_SUBNAME ATTRIBUTE_IMPORTANCE 0 PETAL_LENGTH None 0.584287 1 PETAL_WIDTH None 0.512901 2 SEPAL_LENGTH None 0.136424 3 SEPAL_WIDTH None 0.038918 Use this Random Forest in-database model to make predictions The rf_mod object is itself a proxy object to its corresponding in-database model. We predict using the IRIS_TEST proxy object and optionally specify supplemental columns from the original data. Supplemental columns are useful to compare actual target values against predicted values, or to include unique keys for subsequent row identification or joining with other tables. 
You may recall that row order is not guaranteed in results from relational databases unless a sort (i.e., ORDER BY) is specified by the user. Why? Sorting is often unnecessary and can be an expensive operation depending on data size. The predicted results in pred are also a DataFrame proxy object, since the result could be large (e.g., millions or billions of rows). pred = rf_mod.predict(IRIS_TEST.drop('Species'), supplemental_cols = IRIS_TEST[:, ['SEPAL_LENGTH', 'SEPAL_WIDTH', 'PETAL_LENGTH', 'Species']]) print("Shape:",pred.shape) pred.head(10) With the output: Shape: (48, 5) Out[16]: SEPAL_LENGTH SEPAL_WIDTH PETAL_LENGTH Species PREDICTION 0 4.9 3.0 1.4 setosa setosa 1 4.9 3.1 1.5 setosa setosa 2 4.8 3.4 1.6 setosa setosa 3 5.8 4.0 1.2 setosa setosa 4 5.1 3.5 1.4 setosa setosa 5 5.0 3.4 1.6 setosa setosa 6 5.2 3.5 1.5 setosa setosa 7 5.4 3.4 1.5 setosa setosa 8 4.9 3.1 1.5 setosa setosa 9 5.0 3.2 1.2 setosa setosa Next, let's compute a confusion matrix for the model predictions to assess model accuracy. Using the overloaded crosstab function, we compute the confusion matrix between the actual values in Species and the predicted result in the PREDICTION column directly in the database. res_ct = pred.crosstab('Species','PREDICTION',pivot=True) print("Type:",type(res_ct)) print("Columns:",res_ct.columns) res_ct.sort_values(by='Species') With the output: Type: Columns: ['Species', 'count_(setosa)', 'count_(versicolor)', 'count_(virginica)'] Species count_(setosa) count_(versicolor) count_(virginica) 0 setosa 16 0 0 1 versicolor 0 13 2 2 virginica 0 0 17 In my next post, we'll look at the embedded Python execution features of OML4Py.
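As an optional follow-up, since the test result here is small, you could also pull it to the client and compute an overall accuracy figure locally. This is just a sketch using the pred proxy object from above; pull() materializes the result as a pandas DataFrame, so only use it when the result set is known to be small:

# Bring the (small) prediction result to the client as a pandas DataFrame
local_pred = pred.pull()

# Fraction of test rows where the actual species matches the prediction
accuracy = (local_pred['Species'] == local_pred['PREDICTION']).mean()
print("Test set accuracy:", round(accuracy, 3))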


Autonomous Database

Getting started with Oracle Machine Learning for Python

As noted in Introducing Oracle Machine Learning for Python, OML4Py is included with Oracle Autonomous Database, making the open source Python scripting language and environment ready for the enterprise and big data. To get started with OML4Py, log into your Oracle Machine Learning Notebooks account and create a new notebook. If you don't have one yet, you can create an Autonomous Database account using your Oracle Always Free Services and follow this OML Notebooks tutorial. Load the OML package In the initial paragraph, specify %python as your interpreter. At this point, you can invoke Python code. However, to use OML4Py, import the package oml. Click the "run this paragraph" button. You can optionally invoke oml.isconnected to verify your connection, which should return True. %python import oml oml.isconnected() Load a Pandas DataFrame to the database There are several ways to load data into Oracle Autonomous Database. In this first example, we create a table using the sklearn iris data set. We combine the target and predictors into a single Pandas DataFrame and load this DataFrame object into an Oracle Autonomous Database table using the create function. from sklearn.datasets import load_iris import pandas as pd iris = load_iris() x = pd.DataFrame(iris.data, columns = ["SEPAL_LENGTH", "SEPAL_WIDTH", "PETAL_LENGTH", "PETAL_WIDTH"]) y = pd.DataFrame(list(map(lambda x: {0:'setosa', 1: 'versicolor', 2:'virginica'}[x], iris.target)), columns = ['Species']) iris_df = pd.concat([x,y], axis=1) IRIS = oml.create(iris_df, table="IRIS") print("Shape:",IRIS.shape) print("Columns:",IRIS.columns) IRIS.head(4) The script above produces the following output. Note that we access shape and columns properties on the proxy object, just as we would with a Pandas DataFrame. Similarly, we invoke the overloaded head function on the IRIS proxy object. Shape: (150, 5) Columns: ['SEPAL_LENGTH', 'SEPAL_WIDTH', 'PETAL_LENGTH', 'PETAL_WIDTH', 'Species'] Out[6]: SEPAL_LENGTH SEPAL_WIDTH PETAL_LENGTH PETAL_WIDTH Species 0 5.1 3.5 1.4 0.2 setosa 1 4.9 3.0 1.4 0.2 setosa 2 4.7 3.2 1.3 0.2 setosa 3 4.6 3.1 1.5 0.2 setosa This table is also readily available in the user schema under the name IRIS, just as any other database table. Using overloaded functions Using the numeric columns, we compute the correlation matrix on the in-database table IRIS using the overloaded corr function. Here, we see that petal length and petal width are highly correlated. IRIS.corr() With the output: SEPAL_LENGTH SEPAL_WIDTH PETAL_LENGTH PETAL_WIDTH SEPAL_LENGTH 1.000000 -0.109369 0.871754 0.817954 SEPAL_WIDTH -0.109369 1.000000 -0.420516 -0.356544 PETAL_LENGTH 0.871754 -0.420516 1.000000 0.962757 PETAL_WIDTH 0.817954 -0.356544 0.962757 1.000000 OML4Py overloads graphics functions as well. Here, we use boxplot to show the distribution of the numeric columns. In such overloaded functions, the statistical computations take place in the database - avoiding data movement and leveraging Autonomous Database as a high performance compute engine - returning only the summary statistics needed to produce the plot. import matplotlib.pyplot as plt plt.style.use('seaborn') plt.figure(figsize=[10,5]) oml.graphics.boxplot(IRIS[:, :4], notch=True, showmeans = True, labels=IRIS.columns[:4]) plt.title('Distribution of IRIS Attributes') plt.ylabel('cm'); In-database attribute importance Let's rank the relative importance of each variable (a.k.a., attribute or predictor) to predict the target 'Species' from the IRIS table.
We define the ai (attribute importance) object, compute the result, and show the attribute importance ranking. In the result, notice that petal width is most predictive of the target species. The importance value produced by this algorithm provides a relative ranking to be used to distinguish importance among variables. from oml import ai # here we use sync to get handle to existing table IRIS = oml.sync(table = "IRIS") IRIS_x = IRIS.drop('Species') IRIS_y = IRIS['Species'] ai_obj = ai() # Create attribute importance object ai_obj = ai_obj.fit(IRIS_x, IRIS_y) ai_obj With the output: Algorithm Name: Attribute Importance Mining Function: ATTRIBUTE_IMPORTANCE Settings: setting name setting value 0 ALGO_NAME ALGO_AI_MDL 1 ODMS_DETAILS ODMS_ENABLE 2 ODMS_MISSING_VALUE_TREATMENT ODMS_MISSING_VALUE_AUTO 3 ODMS_SAMPLING ODMS_SAMPLING_DISABLE 4 PREP_AUTO ON Global Statistics: attribute name attribute value 0 NUM_ROWS 150 Attributes: PETAL_LENGTH PETAL_WIDTH SEPAL_LENGTH SEPAL_WIDTH Partition: NO Importance: variable importance rank 0 PETAL_WIDTH 1.050935 1 1 PETAL_LENGTH 1.030633 2 2 SEPAL_LENGTH 0.454824 3 3 SEPAL_WIDTH 0.191514 4 Change your service level In your notebook, you can change the service level of your connection to Oracle Autonomous Database to take advantage of different parallelism options. Available parallelism is relative to your autonomous database compute resource settings. Click the gear icon in the upper right (as indicated by the arrow in the figure), then click individual interpreters to turn them on or off, and click and drag each interpreter box to change the default service level. The 'low' binding runs your functions and queries without parallelism, 'medium' allows limited parallelism, and 'high' allows your functions and queries to use up to the maximum number of compute resources allocated to your Autonomous Database. In my next post, we'll look at building predictive models.  


Autonomous Database

Introducing Oracle Machine Learning for Python

Data scientists and developers know the power of Python, and Python's widespread adoption is a testament to its success. Now, Python users can extend this power when analyzing data in Oracle Autonomous Database. Oracle Machine Learning for Python (OML4Py) makes the open source Python scripting language and environment ready for the enterprise and big data. Designed for problems involving both large and small data volumes, Oracle Machine Learning for Python integrates Python with Oracle Autonomous Database, allowing users to run Python commands and scripts for data exploration, statistical analysis, and machine learning on database tables and views using Python syntax. Familiar Python functions are overloaded to translate Python functionality into SQL for in-database processing - achieving performance and scalability - transparently. Python users can take advantage of parallelized in-database algorithms to enable scalable model building and data scoring - eliminating costly data movement. Further, Python users can develop and deploy user-defined Python functions that leverage the parallelism and scalability of Autonomous Database, and deploy those same user-defined Python functions using environment-managed Python engines through a REST API. Oracle Machine Learning for Python also introduces automated machine learning (AutoML), which consists of: automated algorithm selection to select the algorithm most appropriate for the provided data, automated feature selection to enhance model accuracy and performance, and automated model tuning to improve model quality. AutoML enhances data scientist productivity by automating repetitive and time-consuming tasks, while also enabling non-experts to produce models without needing detailed algorithm-specific knowledge. Access Oracle Machine Learning for Python in Autonomous Database using Oracle Machine Learning Notebooks, where you can use Python and SQL in the same Apache Zeppelin-based notebook - allowing you to choose the most appropriate API for each task. Take advantage of team collaboration and job scheduling features to further your data science project goals. Oracle Machine Learning for Python has a range of template example notebooks included with Oracle Autonomous Database that highlight various features. Zeppelin notebooks illustrating OML4Py features are also available in the Oracle Machine Learning GitHub repository. Learn more about Oracle Machine Learning or try it today using your Always Free Services from Oracle.
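To make the transparency layer concrete, here is a minimal sketch of the style of code this enables in an OML notebook (the IRIS table is just an illustrative assumption; each proxy operation is translated to SQL and evaluated inside Autonomous Database, with only the results returned to Python):

import oml

# Create a proxy object for an existing database table; no data is pulled to the client
iris = oml.sync(table = "IRIS")

# These calls are pushed down to the database as SQL; only small results come back
print(iris.shape)    # row and column counts
print(iris.head(3))  # first few rows
print(iris.corr())   # correlation matrix computed in-database

The same proxy objects can then be passed to the in-database machine learning algorithms, as illustrated in the related OML4Py posts on this page.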


A Simple Guide for Migration of data to Autonomous Database using EM

Moving the workloads of your on-premises database to an Oracle Cloud gives you a competitive advantage today. Databases in Oracle Cloud leverage Oracle's Generation 2 infrastructure, providing maximum performance, better pricing, and the most secure environment for your enterprise workloads. OCI also provides the Oracle Autonomous Database, the industry’s first and only self-driving, self-securing, and self-repairing database. Oracle continues to extend the capabilities of our industry-leading Enterprise Manager offering to support our customers' journey to cloud. We are excited to introduce you to the Database Migration Workbench, which will help you migrate on-premises databases to Oracle Autonomous Database. This blog will guide you through the various steps of the migration process.   Overview of Database Migration Workbench Webinar: Automating Database Migration to Oracle Cloud, including Autonomous Database, Eliminates Operational Risk Register: December 10, 2020 11:00 a.m. PT/2:00 p.m. ET Database Migration Workbench simplifies the migration of an on-premises database to an Autonomous Database with a unified guided flow. Figure 1 (below) shows an overview of the Database Migration Workbench. It uses Data Pump to migrate data to an Autonomous Database. A DBA can use Data Pump regardless of the endian format and character set of the on-premises database. The source database can be a Single-Instance Database (SIDB), RAC Database (RACDB) or a Pluggable Database (PDB), and the destination database can be an ADW or ATP. As part of database migration, EM takes an export dump of source data files, uploads it to an object storage bucket, and imports it to an Autonomous Database. Figure 1 - Overview of Database Migration Workbench   Prerequisites for Database Migration Connectivity between the on-premises datacenter and OCI services should be present, which can be established by having a FastConnect or an IPSec VPN connection. For a full database export or exporting any schema other than the user schema, create an on-premises database user having privileges to export data, i.e., GRANT DATAPUMP_EXP_FULL_DATABASE TO <user_name>. For example: Execute the following SQL if the source database is an SIDB or a RACDB: CREATE USER MIGUSER IDENTIFIED BY MPass_123; GRANT CREATE SESSION TO MIGUSER; GRANT CONNECT TO MIGUSER; GRANT SELECT ANY TABLE TO MIGUSER; GRANT DATAPUMP_EXP_FULL_DATABASE TO MIGUSER; ALTER USER MIGUSER QUOTA UNLIMITED ON USERS; Execute the following SQL if the source database is a PDB: ALTER SESSION SET CONTAINER=SALESPDB; CREATE USER MIGUSER IDENTIFIED BY MPass_123 CONTAINER=current; GRANT CREATE SESSION TO MIGUSER; GRANT CONNECT TO MIGUSER; GRANT SELECT ANY TABLE TO MIGUSER; GRANT DATAPUMP_EXP_FULL_DATABASE TO MIGUSER; ALTER USER MIGUSER QUOTA UNLIMITED ON USERS; Create a directory object, which should have enough disk space to accommodate the data to be migrated. CREATE OR REPLACE DIRECTORY MIG_DMP_DIR AS '/u01/mig_data'; GRANT READ, WRITE ON DIRECTORY MIG_DMP_DIR TO MIGUSER; Create an "Auth Token" based credential in the Autonomous Database and set it as a default credential that will be required for authentication between the Autonomous Database and OCI object storage. Follow the below steps:   a.  Log in to your OCI console and generate an Auth Token. b.  Log in to your Autonomous Database and create a credential. 
Following is an example of SQL for the same: For Autonomous Database - Dedicated SET DEFINE OFF BEGIN DBMS_CREDENTIAL.CREATE_CREDENTIAL( credential_name => 'ADB_CRED', username => 'jhon_doe', password => 'auth_token'); END; / For Autonomous Database - Serverless SET DEFINE OFF BEGIN DBMS_CLOUD.CREATE_CREDENTIAL( credential_name => 'ADB_CRED', username => 'jhon_doe', password => 'auth_token'); END; / Create a new or identify an existing object storage bucket in the same region of your OCI tenancy where the Autonomous Database is residing. a.  The "Storage" tier setting for the bucket must be Standard. b.  The "Visibility" setting for the bucket must be Private. If not already discovered, discover both the source and destination databases in Enterprise Manager.   cURL should be available on the host from where the export dumps will be uploaded to Object Storage.   Creating a Data Migration activity Navigate to the Database Migration Workbench to create a migration activity. You can traverse either via a global context or via a target context. Launch Database Migration Workbench from a global context: Navigate to Enterprise > Migration and Consolidation > Database Migration Workbench. Launch Database Migration Workbench from a target context: Go to the target home page (the target can be an SIDB, RACDB, PDB, or an Autonomous Database). Navigate to Oracle Database > Migration. Figure 2 - Navigating to Database Migration Workbench   From the Database Migration Workbench dashboard, click on the button "Create Migration Activity" (as shown in figure 3). Figure 3 - Database Migration Workbench Dashboard   Source, Destination, and Object Storage Selecting a source and destination database You can either perform a full database migration or migrate specific schemas. Select the source database and destination database. The source database can be a SIDB, RACDB or a PDB. The destination database can be ATP or ADW. Figure 4a - Select source and destination for migration   Selecting targets Source and destination database targets can be selected by clicking the magnifying glass icon in the Select Source/Destination Database pane. Figure 4b (below) shows the target selector for the source database. Figure 4b - Select source database   Upload Migration Tools Database Migration Workbench makes use of the tools Oracle Instant Client and Autonomous Database Schema Analyzer. The latest version of these tools is downloaded automatically via the job "Migration Prerequisite Download Job" and saved in the Software Library. If these utilities are not available from the Software Library for your operating system, you will need to download them manually. Manual downloads are usually required when the latest version of the Migration Tools is not compatible with your current operating system version. The following binaries need to be uploaded: Instant Client Files Instant client basic zip package Instant client SQL*Plus zip package Instant client tools zip package Autonomous Database Schema Advisor file **Enterprise Manager automatically downloads the tools when it has a direct internet connection or has MOS proxy setup.   Adding Source, Destination, and Object Storage details Figure 5 - Adding source and destination details   Source Database Credentials - Select a named credential of an on-premises database user with Data Pump export privileges. This region gets auto-populated if the preferred credential is set. To create database credentials, navigate to Setup > Security > Named Credentials and click on create. 
Enter credential name, select authenticating target type as "Database Instance", credential type as "Database Credentials", enter username, password, role and click on save. Database Credential can also be created via EMCLI as follows: emcli create_named_credential \ -auth_target_type=oracle_database \ -cred_type=DBCreds \ -cred_scope=Global \ -cred_name=SRC_DB_CREDS \ -attributes="DBUserName:db_user;DBPassword:Db_password;DBRole:Normal" Host Credentials - Select a named credential for database host user. This region gets auto-populated if the preferred credential is set. To create host credentials, navigate to Setup > Security > Named Credentials and click on create. Enter credential name, select authenticating target type as "Host", credential type as "Host Credentials", enter the username, password, and click on save. Host Credential can also be created via EMCLI as follows: emcli create_named_credential \ -auth_target_type=host \ -cred_type=HostCreds \ -cred_scope=Global \ -cred_name=SRC_HOST_CREDS \ -attributes="HostUserName:host_user;HostPassword:Host_password" Destination Database Credentials - Select a named credential of Autonomous Database user with Data Pump import privileges. This region gets auto-populated if the preferred credential is set. To create database credentials, navigate to Setup > Security > Named Credentials and click on create. Enter the credential name, select authenticating target type as "Autonomous Transaction Processing" or "Autonomous Data Warehouse", credential type as "Database Credentials", enter username, password, role and click on save. Database credential can also be created via EMCLI as follows: emcli create_named_credential \ -auth_target_type=oracle_cloud_atp \ -cred_type=DBCreds \ -cred_scope=Global \ -cred_name=DEST_ADB_CREDS \ -attributes="DBUserName:adb_user;DBPassword:Adb_pass123;DBRole:Normal" **Provide auth_target_type=oracle_cloud_adw if destination is Autonomous Data Warehouse Agent Host Credentials - Select a named credential for destination ADB's monitoring agent user. This region gets auto-populated if the preferred credential is set. To create host credentials, navigate to Setup > Security > Named Credentials and click on create. Enter credential name, select authenticating target type as "Host", credential type as "Host Credentials", enter the username, password, and click on save. Host Credential can also be created via EMCLI as follows: emcli create_named_credential \ -auth_target_type=host \ -cred_type=HostCreds \ -cred_scope=Global \ -cred_name=DEST_HOST_CREDS \ -attributes="HostUserName:host_user;HostPassword:Host_password" Service Name - Select the Autonomous Database service name, which will be used for migration. It is recommended to use the service name "high" for data migration. Action Migration Method - Migration Method for On-premises to Autonomous Database migration will always be "Data Pump". Recompile invalid objects after migration - On selecting this checkbox, all invalid objects from the migrated schema will be recompiled. Compare performance after migration - On selecting this checkbox, SQL Performance Analyzer will be used to compare performance before and after migration. You can see a detailed SQL Performance Analyzer report once the migration activity is complete. Source Data Pump Directory - Select a directory object of the source database, on which the selected database user is having read and write privileges. 
If the source database is a RAC target, then the Data Pump directory must be shared across all RAC nodes. Data Pump directory cannot be an ASM based location. Encryption Password - Provide encryption password for export dumps. The export dumps generated from the source database are always encrypted. Cloud Storage URL - Provide object storage URL for the region where your Autonomous Database exists. Cloud Storage URL is of the following format: objectstorage.[Region Identifier].oraclecloud.com You can get the value of region identifier from the OCI console. Open the "Regions" menu and click on "Manage Regions". You will find the "Region Identifier" in the list of regions. Figure 6 - Finding region identifier from OCI console Bucket Name - Enter Object storage bucket name (this bucket should reside in the same region where destination ADB is residing). OCI Credential - Select a named credential for the Autonomous Database of type Oracle Cloud Infrastructure. To create OCI credentials, navigate to Setup > Security > Named Credentials and click on create. Enter credential name, select authenticating target type as "Autonomous Transaction Processing" or "Autonomous Data Warehouse", credential type as "Oracle Cloud Infrastructure Credential", enter Tenancy OCID, User OCID, Private Key, Fingerprint and Passphrase. Database Credential can also be created via EMCLI as follows: emcli create_named_credential \ -cred_name=OCI_CRED \ -auth_target_type="oracle_cloud_atp" \ -cred_type="OCICredential" \ -cred_scope=Global \ -target_type="Autonomous Transaction Processing" \ -attributes="TenancyOCID#<OCI Tenancy OCID>;UserOCID#<OCI User OCID>;PrivateKey#<Private_key>;FingerPrint#<Fingerprint>;PassPhrase#<Pass_phrase>" \ -subseparator=attributes="#" **Provide -auth_target_type=oracle_cloud_adw if destination is Autonomous Data Warehouse **Provide -target_type="Autonomous Data Warehouse" if destination is Autonomous Data Warehouse **Steps to setup SSH keys can be found here **The new lines of <Private_key> should be delimited as \r\n while using in emcli Database OCI Auth Credential - Select the Auth Token based credential created in Autonomous Database.   Named credentials may be scoped as target-specific or global. All the examples above show named credential creation with global scope. More details of named credential creation can be found here.   Customizing the Activity Figure 7 - Customizing export/import options, STS, and pre/post custom scripts   Export and Import Options - You can provide the applicable parameters for export and import command, which will be used during migration. The defaults are pre-populated with values that are required for migration to Autonomous Database. There are a few options that get added by Database Migration Workbench implicitly, as shown in Figure 8 below. Figure 8 - Implicitly added options for export and import   Compare Performance After Migration - SQL Performance Analyzer will be used to compare performance before and after migration. You can choose an existing SQL Tuning Set (STS), or Database Migration Workbench will create a new STS from AWR. Pre/Post Script - Executes pre- or post- migration scripts stored in the software library. The scripts should be uploaded to the software library at <ROOT>/Database Migration/User Defined Scripts as a Directive or Generic Component. Following are the supported custom scripts: Pre-Migration Script - SH script executed before migration on the on-premises database host. 
Post-Migration Script- SH script executed after migration on the ADB monitoring agent's host. Post-Migration SQL Script - SQL executed after migration on ADB.   Upload Options - Database Migration Workbench uses system cURL to upload exported dumps to Object Storage. By default, the source database host is used to upload to Object Storage. But a user can opt to select a different agent host from which connectivity to object storage will be made for uploading export dumps. If the dump location is not shared with the upload agent host, then the dump files are copied to a staging directory. Figure 9a - Flow of data from an on-premises database to Oracle Autonomous Database when data is uploaded from the source host   Figure 9b - Flow of data from an on-premises database to Oracle Autonomous Database when data is uploaded from a different host   Figure 10 - Inputs for uploading data from different host   Analyze Source Data Based on the provided inputs, Analyze Source will provide a report describing the on-premises database objects which will not get successfully migrated to Autonomous Database. Analyze schema makes use of the schema analyzer tool, which has been uploaded in the first step. Click on "Analyze Source" on Review & Submit page. You have to either select the named credential of a user where schema advisor is installed or credential with sysdba privileges. If a sysdba credential is selected, then a temporary schema advisor user is created, which will be deleted post advisor report generation. Figure 11 - Report generated by Analyze Source   Validate Inputs Before Submission On the Review & Submit page, click on the "Validate" button to verify all the inputs concerning the source, destination, object storage, and export/import options provided in the activity. Validation is not a mandatory step, but it is advised to validate the inputs before submitting the activity. Figure 12 shows a list of validations that are executed as part of this step. Figure 12 - Validations   Post Migration Activities Once the migration activity is complete, you can Compare Performance, View Analysis of the Activity, View detailed logs of the activity, or cleanup dumps from source and object storage. You can perform all these operations by clicking on the Action Menu of an activity. Figure 13 - Various options of Actions Menu   Cleanup - Deletes export dumps from the source location and object storage. View Details - Redirects to the deployment activity page where you will get step by step progress details and logs for the corresponding migration activity. View Analysis - This page analyzes validations, export, and import operation executed by migration activity. It will summarize the job steps and then deep dives into details of individual Object errors. Figure 14 - Analyzing a Migration Activity   Compare Performance - Shows the SQL Performance Analyzer report of the migration activity. Marking an activity as "Complete" - Importing data to an Autonomous Database might result in a few import errors. In such situations, the migration activity ends with a status as "Completed with errors". Users can analyze the migration (using View Analysis), and if the errors reported during import are ignorable, the user can choose to Mark the Activity as complete. Figure 15 - Marking an activity as complete   Conclusion The Database Migration Workbench provides a wide variety of benefits to automate and de-risk your migrations. 
It helps you select the source databases or schemas of interest and analyze them before migration. It also provides a unified guided workflow for migrating the on-premises database, with the ability to customize the migration with various export/import options and custom scripts. Database Migration Workbench can schedule a migration for a later date, minimizing the impact on the application. Post migration, it identifies objects needing attention and objects that were migrated successfully. It also shows insights on performance comparison before and after migration using SPA. Database Migration Workbench is available starting in Oracle Enterprise Manager 13c Release 4 Update 7 (13.4.0.7).   Related Links Official documentation FastConnect design Networking Setup to Manage On-Premises Targets with an Enterprise Manager hosted in OCI Advanced compression with Data Pump Webinar: Automating Database Migration to Oracle Cloud, including Autonomous Database, Eliminates Operational Risk Register: December 10, 2020 11:00 a.m. PT/2:00 p.m. ET


How to Create a Database Link from an Autonomous Data Warehouse to a Database Cloud Service Instance

Autonomous Data Warehouse (ADW) now supports outgoing database links to any database that is accessible from an ADW instance including Database Cloud Service (DBCS) and other ADW/ATP instances. To use database links with ADW, the target database must be configured to use TCP/IP with SSL (TCPS) authentication. Since both ADW and ATP use TCPS authentication by default, setting up a database link between these services is pretty easy and takes only a few steps. On the other hand, enabling TCPS authentication in a database that doesn't have it configured (e.g. in DBCS) requires some additional steps that need to be followed carefully. In this blog post, I will try to demonstrate how to create a database link from an ADW instance to a DBCS instance including the steps to enable TCPS authentication. Here is an outline of the steps that we are going to follow: Enable TCPS Authentication in DBCS Connect to DBCS Instance from Client via TCPS Create a DB Link from ADW to DBCS Enable TCPS Authentication in DBCS A DBCS instance uses TCP/IP protocol by default. Configuring TCPS in DBCS involves several steps that need to be performed manually. Since we are going to modify the default listener to use TCPS and it's configured under the grid user, we will be using both oracle and grid users. Here are the steps needed to enable TCPS in DBCS: Create wallets with self signed certificates for server and client Exchange certificates between server and client wallets (Export/import certificates) Add wallet location in the server and the client network files Add TCPS endpoint to the database listener Create wallets with self signed certificates for server and client As part of enabling TCPS authentication, we need to create individual wallets for the server and the client. Each of these wallets has to have its own certificate that they will exchange with one another. For the sake of this example, I will be using a self signed certificate. The client wallet and certificate can be created on the client side; however, I'll be creating my client wallet and certificate on the server and moving them to my local system later on. See Configuring Secure Sockets Layer Authentication for more information. Let's start... Set up wallet directories with the root user [root@dbcs0604 u01]$ mkdir -p /u01/server/wallet [root@dbcs0604 u01]$ mkdir -p /u01/client/wallet [root@dbcs0604 u01]$ mkdir /u01/certificate [root@dbcs0604 /]# chown -R oracle:oinstall /u01/server [root@dbcs0604 /]# chown -R oracle:oinstall /u01/client [root@dbcs0604 /]# chown -R oracle:oinstall /u01/certificate Create a server wallet with the oracle user [oracle@dbcs0604 ~]$ cd /u01/server/wallet/ [oracle@dbcs0604 wallet]$ orapki wallet create -wallet ./ -pwd Oracle123456 -auto_login Oracle PKI Tool Release 18.0.0.0.0 - Production Version 18.1.0.0.0 Copyright (c) 2004, 2017, Oracle and/or its affiliates. All rights reserved. Operation is successfully completed. Create a server certificate with the oracle user [oracle@dbcs0604 wallet]$ orapki wallet add -wallet ./ -pwd Oracle123456 -dn "CN=dbcs" -keysize 1024 -self_signed -validity 3650 -sign_alg sha256 Oracle PKI Tool Release 18.0.0.0.0 - Production Version 18.1.0.0.0 Copyright (c) 2004, 2017, Oracle and/or its affiliates. All rights reserved. Operation is successfully completed. 
Create a client wallet with the oracle user [oracle@dbcs0604 wallet]$ cd /u01/client/wallet/ [oracle@dbcs0604 wallet]$ orapki wallet create -wallet ./ -pwd Oracle123456 -auto_login Oracle PKI Tool Release 18.0.0.0.0 - Production Version 18.1.0.0.0 Copyright (c) 2004, 2017, Oracle and/or its affiliates. All rights reserved. Operation is successfully completed. Create a client certificate with the oracle user [oracle@dbcs0604 wallet]$ orapki wallet add -wallet ./ -pwd Oracle123456 -dn "CN=ctuzla-mac" -keysize 1024 -self_signed -validity 3650 -sign_alg sha256 Oracle PKI Tool Release 18.0.0.0.0 - Production Version 18.1.0.0.0 Copyright (c) 2004, 2017, Oracle and/or its affiliates. All rights reserved. Operation is successfully completed. Exchange certificates between server and client wallets (Export/import certificates) Export the server certificate with the oracle user [oracle@dbcs0604 wallet]$ orapki wallet export -wallet ./ -pwd Oracle123456 -dn "CN=dbcs" -cert /tmp/server.crt Oracle PKI Tool Release 18.0.0.0.0 - Production Version 18.1.0.0.0 Copyright (c) 2004, 2017, Oracle and/or its affiliates. All rights reserved. Operation is successfully completed. Export the client certificate with the oracle user [oracle@dbcs0604 wallet]$ orapki wallet export -wallet ./ -pwd Oracle123456 -dn "CN=ctuzla-mac" -cert /tmp/client.crt Oracle PKI Tool Release 18.0.0.0.0 - Production Version 18.1.0.0.0 Copyright (c) 2004, 2017, Oracle and/or its affiliates. All rights reserved. Operation is successfully completed. Import the client certificate into the server wallet with the oracle user [oracle@dbcs0604 wallet]$ cd /u01/server/wallet/ [oracle@dbcs0604 wallet]$ orapki wallet add -wallet ./ -pwd Oracle123456 -trusted_cert -cert /tmp/client.crt Oracle PKI Tool Release 18.0.0.0.0 - Production Version 18.1.0.0.0 Copyright (c) 2004, 2017, Oracle and/or its affiliates. All rights reserved. Operation is successfully completed. Import the server certificate into the client wallet with the oracle user [oracle@dbcs0604 wallet]$ cd /u01/client/wallet/ [oracle@dbcs0604 wallet]$ orapki wallet add -wallet ./ -pwd Oracle123456 -trusted_cert -cert /tmp/server.crt Oracle PKI Tool Release 18.0.0.0.0 - Production Version 18.1.0.0.0 Copyright (c) 2004, 2017, Oracle and/or its affiliates. All rights reserved. Operation is successfully completed. Change permissions for the server wallet with the oracle user We need to set the permissions for the server wallet so that it can be accessed when we restart the listener after enabling TCPS endpoint. [oracle@dbcs0604 wallet]$ cd /u01/server/wallet [oracle@dbcs0604 wallet]$ chmod 640 cwallet.sso Add wallet location in the server and the client network files Creating server and client wallets with self signed certificates and exchanging certificates were the initial steps towards the TCPS configuration. We now need to modify both the server and client network files so that they point to their corresponding wallet location and they are ready to use the TCPS protocol. Here's how those files look in my case: Server-side $ORACLE_HOME/network/admin/sqlnet.ora under the grid user # sqlnet.ora Network Configuration File: /u01/app/18.0.0.0/grid/network/admin/sqlnet.ora # Generated by Oracle configuration tools. 
NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT) wallet_location = (SOURCE= (METHOD=File) (METHOD_DATA= (DIRECTORY=/u01/server/wallet))) SSL_SERVER_DN_MATCH=(ON) Server-side $ORACLE_HOME/network/admin/listener.ora under the grid user wallet_location = (SOURCE= (METHOD=File) (METHOD_DATA= (DIRECTORY=/u01/server/wallet))) LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))) # line added by Agent ASMNET1LSNR_ASM=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=ASMNET1LSNR_ASM)))) # line added by Agent ENABLE_GLOBAL_DYNAMIC_ENDPOINT_ASMNET1LSNR_ASM=ON # line added by Agent VALID_NODE_CHECKING_REGISTRATION_ASMNET1LSNR_ASM=SUBNET # line added by Agent ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON # line added by Agent VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET # line added by Agent Server-side $ORACLE_HOME/network/admin/tnsnames.ora under the oracle user # tnsnames.ora Network Configuration File: /u01/app/oracle/product/18.0.0.0/dbhome_1/network/admin/tnsnames.ora # Generated by Oracle configuration tools. LISTENER_CDB1 = (ADDRESS = (PROTOCOL = TCPS)(HOST = dbcs0604)(PORT = 1521)) CDB1_IAD1W9 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCPS)(HOST = dbcs0604)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = cdb1_iad1w9.sub05282047220.vcnctuzla.oraclevcn.com) ) (SECURITY= (SSL_SERVER_CERT_DN="CN=dbcs")) ) PDB1 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCPS)(HOST = dbcs0604)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = pdb1.sub05282047220.vcnctuzla.oraclevcn.com) ) (SECURITY= (SSL_SERVER_CERT_DN="CN=dbcs")) ) Add TCPS endpoint to the database listener Now that we are done with configuring our wallets and network files, we can move onto the next step, which is configuring the TCPS endpoint for the database listener. Since our listener is configured under grid, we will be using srvctl command to modify and restart it. Here are the steps: [grid@dbcs0604 ~]$ srvctl modify listener -p "TCPS:1521" [grid@dbcs0604 ~]$ srvctl stop listener [grid@dbcs0604 ~]$ srvctl start listener [grid@dbcs0604 ~]$ srvctl stop database -database cdb1_iad1w9 [grid@dbcs0604 ~]$ srvctl start database -database cdb1_iad1w9 [grid@dbcs0604 ~]$ lsnrctl status LSNRCTL for Linux: Version 18.0.0.0.0 - Production on 05-JUN-2019 16:07:24 Copyright (c) 1991, 2018, Oracle. All rights reserved. Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))) STATUS of the LISTENER ------------------------ Alias LISTENER Version TNSLSNR for Linux: Version 18.0.0.0.0 - Production Start Date 05-JUN-2019 16:05:50 Uptime 0 days 0 hr. 1 min. 34 sec Trace Level off Security ON: Local OS Authentication SNMP OFF Listener Parameter File /u01/app/18.0.0.0/grid/network/admin/listener.ora Listener Log File /u01/app/grid/diag/tnslsnr/dbcs0604/listener/alert/log.xml Listening Endpoints Summary... (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER))) (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=10.0.0.4)(PORT=1521))) Services Summary... Service "867e3020a52702dee053050011acf8c0.sub05282047220.vcnctuzla.oraclevcn.com" has 1 instance(s). Instance "cdb1", status READY, has 2 handler(s) for this service... Service "8a8e0ea41ac27e2de0530400000a486a.sub05282047220.vcnctuzla.oraclevcn.com" has 1 instance(s). Instance "cdb1", status READY, has 2 handler(s) for this service... Service "cdb1XDB.sub05282047220.vcnctuzla.oraclevcn.com" has 1 instance(s). Instance "cdb1", status READY, has 1 handler(s) for this service... Service "cdb1_iad1w9.sub05282047220.vcnctuzla.oraclevcn.com" has 1 instance(s). 
Instance "cdb1", status READY, has 2 handler(s) for this service... Service "pdb1.sub05282047220.vcnctuzla.oraclevcn.com" has 1 instance(s). Instance "cdb1", status READY, has 2 handler(s) for this service... The command completed successfully Please note that in the first step we added the TCPS endpoint to the port 1521 of the default listener. It's also possible to keep the port 1521 as is and add TCPS endpoint to a different port (e.g. 1523). Connect to DBCS Instance from Client via TCPS We should have TCPS authentication configured now. Before we move onto testing, let's take a look at the client-side network files (Please note the public IP address of the DBCS instance in tnsnames.ora): Client-side tnsnames.ora CDB1 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCPS)(HOST = 132.145.151.208)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = cdb1_iad1w9.sub05282047220.vcnctuzla.oraclevcn.com) ) (SECURITY= (SSL_SERVER_CERT_DN="CN=dbcs")) ) PDB1 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCPS)(HOST = 132.145.151.208)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = pdb1.sub05282047220.vcnctuzla.oraclevcn.com) ) (SECURITY= (SSL_SERVER_CERT_DN="CN=dbcs")) ) Client-side sqlnet.ora WALLET_LOCATION = (SOURCE = (METHOD = FILE) (METHOD_DATA = (DIRECTORY = /Users/cantuzla/Desktop/wallet) ) ) SSL_SERVER_DN_MATCH=(ON) In order to connect to the DBCS instance from the client, you need to add an ingress rule for the port that you want to use (e.g. 1521) in the security list of your virtual cloud network (VCN) in OCI as shown below: We can now try to establish a client connection to PDB1 in our DBCS instance (CDB1): ctuzla-mac:~ cantuzla$ cd Desktop/InstantClient/instantclient_18_1/ ctuzla-mac:instantclient_18_1 cantuzla$ ./sqlplus /nolog SQL*Plus: Release 18.0.0.0.0 Production on Wed Jun 5 09:39:56 2019 Version 18.1.0.0.0 Copyright (c) 1982, 2018, Oracle. All rights reserved. SQL> connect c##dbcs/DBcs123_#@PDB1 Connected. SQL> select * from dual; D - X Create a DB Link from ADW to DBCS We now have a working TCPS authentication in our DBCS instance. Here are the steps from the documentation that we will follow to create a database link from ADW to DBCS: Copy your target database wallet (the client wallet cwallet.sso that we created in /u01/client/wallet) for the target database to Object Store. Create credentials to access your Object Store where you store the cwallet.sso. See CREATE_CREDENTIAL Procedure for details. Upload the target database wallet to the data_pump_dir directory on ADW using DBMS_CLOUD.GET_OBJECT: SQL> BEGIN DBMS_CLOUD.GET_OBJECT( credential_name => 'OBJ_STORE_CRED', object_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/adwctraining8/b/target-wallet/o/cwallet.sso', directory_name => 'DATA_PUMP_DIR'); END; / PL/SQL procedure successfully completed. On ADW create credentials to access the target database. The username and password you specify with DBMS_CLOUD.CREATE_CREDENTIAL are the credentials for the target database that you use to create the database link. Make sure the username consists of all uppercase letters. For this example, I will be using the C##DBCS common user that I created in my DBCS instance: SQL> BEGIN DBMS_CLOUD.CREATE_CREDENTIAL( credential_name => 'DBCS_LINK_CRED', username => 'C##DBCS', password => 'DBcs123_#'); END; / PL/SQL procedure successfully completed. 
Create the database link to the target database using DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK: SQL> BEGIN DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK( db_link_name => 'DBCSLINK', hostname => '132.145.151.208', port => '1521', service_name => 'pdb1.sub05282047220.vcnctuzla.oraclevcn.com', ssl_server_cert_dn => 'CN=dbcs', credential_name => 'DBCS_LINK_CRED'); END; / PL/SQL procedure successfully completed. Use the database link you created to access data on the target database: SQL> select * from dual@DBCSLINK; D - X That's it! In this blog post, we covered how to enable TCPS authentication in DBCS and create an outgoing database link from ADW to our DBCS instance. Even though we focused on the DBCS configuration, these steps can be applied when setting up a database link between ADW and any other Oracle database.
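As a final, purely illustrative note (the SALES table name below is made up), once the link exists, remote objects can be referenced directly in queries or copied locally for analysis:

-- Query a remote table over the database link (SALES is a placeholder table name)
SELECT COUNT(*) FROM sales@DBCSLINK;

-- Or materialize a local copy in ADW for further analysis
CREATE TABLE sales_local AS SELECT * FROM sales@DBCSLINK;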


Autonomous Database

Autonomous Database Newsletter - December 9, 2020

December 9, 2020 Latest Newsletter For Autonomous Database on Shared Exadata Infrastructure

Welcome to our latest customer newsletter for Autonomous Database on Shared Exadata Infrastructure. This newsletter covers the following new features and topics:
- Release of APEX 20.2
- Accessing Object Store Files
- Operations Insights
- Webcast: Win With Data
- Replay: Global Leaders APAC Fall Meeting

Don't forget to check out our "What's New" page for a comprehensive archive of all the new features we have added to Autonomous Database - click here. To find out how to subscribe to this newsletter click here.

(Shared Infrastructure Only) UPDATE - APEX 20.2 Now Available In Autonomous Database-Shared

The version of APEX within Autonomous Database has been upgraded to version 20.2. This release of APEX introduces several new features and enhancements to help developers be more productive than ever before, from the all-new Cards component, more automations, and REST data synchronization, to charts in faceted search, REST connector plug-ins, report printing improvements and the Redwood Light Theme Style.

LEARN MORE - DOC: Creating Applications with Oracle APEX in ADB, click here. DOC: APEX Release Notes, click here. BLOG: Announcing APEX 20.2, click here. WEB: Visual Guide To What's New in APEX 20.2, click here.
DO MORE - Visit apex.oracle.com for a list of all available tutorials - click here. BACK TO TOP

(Shared Infrastructure Only) UPDATE - Changes To Access Methods for Files In S3 And Azure Blob Storage

It is now possible to use a pre-signed URL in any DBMS_CLOUD procedure that takes a URL to access files in Amazon S3, without the need to create a credential. It is now possible to use a Shared Access Signatures (SAS) URL in any DBMS_CLOUD procedure that takes a URL to access files in Azure Blob Storage, without the need to create a credential.

LEARN MORE - DOC: DBMS_CLOUD Package File URI Formats in ADW, click here. DOC: DBMS_CLOUD Package File URI Formats in ATP, click here. DOC: DBMS_CLOUD Package File URI Formats in AJD, click here. BACK TO TOP

(Shared Infrastructure Only) NEW - Oracle Cloud Infrastructure Operations Insights

Oracle Cloud Infrastructure Operations Insights provides 360-degree insight into database resource utilization and capacity. It is FREE to use with ADW and ATP - note that Autonomous JSON Database is not currently supported by Operations Insights but will be supported in a future release. Operations Insights consists of two integrated applications: Capacity Planning and Oracle SQL Warehouse.

Capacity Planning - enables IT administrators or capacity planners to understand how critical resources, including CPU and storage, are used. Capacity planning ensures that enterprise databases have sufficient resources to meet future business needs.
Oracle SQL Warehouse - makes it easier to manage SQL at scale. It analyzes SQL performance issues, and identifies trends and key insights to help avoid potential future problems.

LEARN MORE - DOC: About Oracle Cloud Infrastructure Operations Insights, click here. BLOG: Announcing the general availability of Oracle Cloud Infrastructure Operations Insights, click here. VIDEO: Introduction to Oracle Cloud Infrastructure Operations Insights Capacity Planning, click here. VIDEO: Introduction to Oracle Cloud Infrastructure Operations Insights SQL Warehouse Service, click here. BACK TO TOP

(Shared Infrastructure Only) WEBCAST - Win With Data

Are you looking to swiftly turn a growing mountain of data into business insights?
Attend this session to discover how to get the deep, data-driven insights needed to make quick decisions with the Oracle Departmental Data Warehouse in four live sessions. We invite you to join us for a virtual event on Dec. 17th from 9:00 - 11:00 a.m. PT: The New Self-Service Data Warehouse: The No Hassle Data Engine for Next-Level Insights. Register once and you can stay for all four sessions or just one! The agenda and registration link are available here. BACK TO TOP

(Shared Infrastructure Only) REPLAY - Oracle Global Leaders APAC Meeting

Watch the recordings of real-life Oracle Cloud deployments from 12 customers and partners speaking in 3 live panels (covering Cloud Infrastructure, Autonomous Data Management & Analytics), with Q&A moderated by Reiner Zimmermann, VP Product Management. Each customer panel is followed by a product direction update from Oracle product management. The videos and presentations for the event are now available, click here. BACK TO TOP


Database Cloud Services

Fine-grained Network Access Control for Oracle Autonomous Database on Exadata Cloud@Customer

Network perimeter security is the primary method for securing cloud resources. This is generally done by creating virtual networks/subnets, security lists and firewalls. For Oracle multi-tenant databases deployed on Exadata Cloud@Customer, there is now an additional layer of network security available: Access Control Lists at the pluggable database level. This means that while the Exadata Infrastructure may be deployed in a private customer subnet, with access governed by the security rules associated with that subnet, each ADB or pluggable database can have its own set of access rules. This is done by defining an Access Control List at the time of provisioning an ADB, or at a later stage if desired. An Access Control List can contain one or more IP addresses or CIDR blocks.

Typically, an Autonomous Container Database (ACD) may host multiple Autonomous Databases (ADBs), providing a higher degree of consolidation and cost efficiency by leveraging service features such as online auto-scaling. By defining Access Control Lists (ACLs) for each ADB, you now have much better control over which specific SQL clients or users can access the database. As an example, take an Exadata Cloud@Customer deployed in a customer data center in a private network with CIDR 10.22.0.0/16. The network security rules may be set up to allow traffic from any host within this corporate network, but by setting up ACLs at the database level you can now allow or disallow traffic from specific applications or client users. This has other security benefits such as: a) blocking ad-hoc users from accessing production databases directly (even if they have the right credentials); b) allowing client connections to certain databases from office locations only (although I cannot imagine such an ACL rule in these times); c) if you are providing Database as a Service to your organization using Autonomous (which is a great use case, by the way), then each database can be associated with a line of business, application, project team, etc.

Let's take a look at how to go about setting up ACLs. We'll look at doing this from the database console in Oracle Cloud Infrastructure, but you can easily automate the process using the robust set of REST APIs available for each action.

1. Set up ACLs at the time of your Autonomous Database provisioning. You are given the choice to enable network ACLs on the provisioning page as shown below. Next, you simply provide your list of IP addresses or CIDR ranges to allow connections from. Make sure you do not leave this list empty once you enable ACLs, or else your database may be inaccessible (although you can always come back and edit this).

2. Set up or modify ACLs for existing ADBs. If you wish to set up ACLs for your existing databases or change a rule later, you can easily do that from the database console. And in the event you clone your database (a nifty feature in Autonomous), your ACLs carry over to the clone, where once again you may edit them as desired.

I'm sure you will find this pretty intuitive when you see it yourself, but nevertheless, technical documentation is available here and a step-by-step video tutorial here ... no audio though, so you don't wake up your spouse ;-)


Autonomous Database

Autonomous Database Newsletter - October 28, 2020

October 28, 2020 Latest Newsletter For Autonomous Database on Shared Exadata Infrastructure

Welcome to our latest customer newsletter for Autonomous Database on Shared Exadata Infrastructure. This newsletter covers the following new features and topics: Autonomous Data Guard For Autonomous JSON Database, Private Endpoint Support For Tools, Database Modes, PL/SQL SDK for OCI, OCI Events Integration, Changes to Password Rules, Machine Learning Workshop, Global Leaders APAC Meeting. Note: some new features are only available with 19c. To get the most from Autonomous Database, upgrade to 19c as soon as possible. There is more information about how to upgrade existing 18c instances to 19c here, for ADW and here, for ATP. Don't forget to check out our "What's New" page for a comprehensive archive of all the new features we have added to Autonomous Database - click here. To find out how to subscribe to this newsletter click here.

(Shared Infrastructure Only) NEW - Autonomous Data Guard Now Available For Autonomous JSON Database

It is now possible to enable Autonomous Data Guard on AJD. This allows the following, depending on the state of the primary AJD database: If the primary database goes down, Autonomous Data Guard converts the standby database to the primary database with minimal interruption. After failover completes, Autonomous Data Guard automatically creates a new standby database. Admins/developers can perform a manual switchover operation, where the primary database becomes the standby database, and the standby database becomes the primary database. Note: Autonomous JSON Database does not provide access to the standby database. Simplified animation showing basic steps to create an ADG standby instance for AJD.

LEARN MORE
DOC: Key features of Autonomous Data Guard for ATP/AJD, click here.
DOC: Autonomous Data Guard notes for ATP/AJD, click here.

DO MORE
BLOG: Announcing Autonomous Data Guard! click here.
BLOG: Autonomous Data Guard: Disaster Recovery Protection with a Couple Clicks in the Cloud! click here.

BACK TO TOP

(Shared Infrastructure Only) NEW - ADB Tools Now Support Private Endpoints

Oracle Application Express, Oracle SQL Developer Web, and Oracle REST Data Services are now supported in Autonomous Databases with private endpoints. It is possible to connect Oracle Analytics Cloud to an ADB that has a private endpoint via the Data Gateway. See the OAC documentation on configuring and registering Data Gateway for more information. Oracle Machine Learning Notebooks are not supported in databases with private endpoints (note: support is coming shortly!).

LEARN MORE
DOC: Configure Private Endpoints with ADW, click here
DOC: Configure Private Endpoints with ATP, click here
BLOG: Announcing Private Endpoints in Autonomous Database on Shared Exadata Infrastructure, click here.

DO MORE
The documentation includes two sample network scenarios for configuring PEs.
DOC: Private Endpoint Configuration Examples for ADW: click here
DOC: Private Endpoint Configuration Examples for ATP/AJD: click here.
BLOG: Getting Up to Speed on Using Private Endpoints for Autonomous Database, click here.

BACK TO TOP

(Shared Infrastructure Only) NEW - Change ADB Mode to Read/Write or Read-Only

It is now possible to select an Autonomous Database operation mode - the default mode is Read/Write. If you select Read-Only mode, users can only run queries.
In addition, for both modes it is possible to restrict access so that only users with the RESTRICTED SESSION privilege can connect to the database. The ADMIN user has this privilege. A typical use case for restricted access mode is to allow for administrative tasks such as indexing, data loads, or other planned activities. (A short SQL sketch of granting this privilege follows at the end of this newsletter.) Illustration shows the new UI for setting the database mode.

LEARN MORE
DOC: Change Autonomous Database Mode to Read/Write or Read-Only for ADW, click here.
DOC: Change Autonomous Database Mode to Read/Write or Read-Only for ATP, click here.

BACK TO TOP

(Shared Infrastructure Only) NEW - Oracle Cloud Infrastructure SDK for PL/SQL

The Oracle Cloud Infrastructure SDK for PL/SQL enables you to write code to interact with Oracle Cloud Infrastructure resources, including Autonomous Database. The latest version of the SDK is pre-installed by Oracle for all Autonomous Databases using shared Exadata infrastructure.

LEARN MORE
DOC: PL/SQL SDK, click here.
DOC: DBMS_CLOUD_OCI_DB_DATABASE Functions, click here.
DOC: DBMS_CLOUD_OCI_DB_DATABASE Types, click here.
VIDEO: PL/SQL and the Art of OCI Management - Robert Pastijn, a member of the Platform Technology Solutions group, discusses how he built a set of PL/SQL APIs to help manage Oracle Cloud accounts and services, click here.

DO MORE
BLOG: Autonomous Database adds PL/SQL to its arsenal of OCI native SDKs, click here.
BLOG: How to level up and invoke an Oracle Function (or any Cloud REST API) from within Autonomous Database, click here.

BACK TO TOP

(Shared Infrastructure Only) ENHANCEMENT - Oracle Cloud Infrastructure Events Integration

You can now use OCI Events to subscribe to, and be notified of, Autonomous Database events. Using OCI Events you can create automation and receive notifications based on state changes for Autonomous Database. You can now subscribe to the following categories of events for Autonomous Database: Critical Events, Information Events, Individual Events. Illustration shows how to use the OCI Events service.

LEARN MORE
DOC: Using OCI Events with ADW, click here.
DOC: Using OCI Events with ATP, click here.

DO MORE
Visit the LiveLabs Using Events and Notification Workshop - click here.

BACK TO TOP

(Shared Infrastructure Only) NEW - Database Actions in SQL Developer Web

The SQL Developer Web UI is now based around the concept of "Database Actions". The hamburger menu provides access to additional actions and administration features. Under "Administration" there is a new UI for managing database users. Illustration shows the user management console within SQL Developer Web. This includes the ability to create new users. Illustration shows creating a new user within SQL Developer Web.

LEARN MORE
DOC: Create Users With SQL Developer Web on ADW, click here.
DOC: Create Users With SQL Developer Web on ATP, click here.

BACK TO TOP

(Shared Infrastructure Only) UPDATE - Changes to Password Rules

For non-admin users the minimum password length has been lowered from 12 to 8 characters. Rules covering passwords for the ADMIN user are unchanged.

BACK TO TOP

(Shared Infrastructure Only) EVENT - Make Machine Learning Work for You

NOVEMBER 18, 2020 9:00AM - 11:00AM PT (GMT -8). Register for a series of webcasts designed to help you and your customers/partners understand how to harness machine learning. In four live sessions product experts will highlight use cases, best practices, and supporting technology that can help customers unleash the power of their data.
Share the following event details and registration link with your customers and partners, click here.

BACK TO TOP

(Shared Infrastructure Only) EVENT - Oracle Global Leaders APAC Meeting

DECEMBER 3, 2020 1:00PM SGT (GMT +8). Listen to real-life Oracle Cloud deployments, with 12 customers and partners speaking in 3 live panels (covering Cloud Infrastructure, Autonomous Data Management & Analytics) with Q&A moderated by Reiner Zimmermann, VP Product Management. Each panel will be followed by a product direction update by Oracle product management. Share the following event details and registration link with your customers and partners, click here.

BACK TO TOP
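As a small illustration of the restricted access mode and the relaxed password rules described above, here is a minimal SQL sketch run as the ADMIN user. The user name and password are placeholders only; the password must still satisfy the database's complexity rules.

-- Create an application user under the new 8-character minimum for non-admin users
-- (placeholder name and password; adjust to your own standards).
CREATE USER app_ops IDENTIFIED BY "Op8Pass123";

-- Allow the user to connect normally...
GRANT CREATE SESSION TO app_ops;

-- ...and also while the database is in restricted access mode,
-- for example during planned maintenance such as indexing or data loads.
GRANT RESTRICTED SESSION TO app_ops;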


Database

Announcing Oracle APEX 20.2

Oracle Application Express (APEX) 20.2 is now generally available! Oracle APEX is the world's most popular low-code platform for enterprise apps and enables you to build scalable, secure enterprise apps, with world-class features, that can be deployed anywhere. This release of APEX introduces several new features and enhancements to help developers be more productive than ever before. From the all-new Cards component, Automations, and REST Data Synchronization, to charts in Faceted Search, REST Connector Plug-ins, Report Printing improvements, Redwood Light Theme Style, and an exceptional code editing experience across all of App Builder, APEX 20.2 is an epic release that has something for everyone. Here are some of the feature highlights:

All New Cards Component
The new Cards component unlocks a powerful new way to present data and provides developers with the ultimate control and flexibility over the user interface. The Cards region provides easy customization of nearly every aspect of the card UI, including the layout, appearance, icon, badge, media, advanced HTML expressions that support client-side templating, and, for the first time, the ability to define multiple actions per card. This means that you can include any number of actions, such as links or buttons, on a single card, all without writing a single line of code. You can visit the Cards page on the Universal Theme Sample App to see them in action. You can also visit the Sample Cards application to see advanced usage of the new Cards component.

Automations
Automations are a conditional set of actions that are automatically triggered by changes that occur within database tables or remote data sources. They are used to monitor data and then perform the appropriate action (examples are auto-approving specific requests and sending email alerts). An automation can be triggered on Schedule or on Demand, by invoking the APEX_AUTOMATION API.

Faceted Search Enhancements
Bar or pie charts of facet value counts: quickly display a chart of facet value counts in a dialog or 'dashboard' area. Groups of checkbox facets for Boolean columns: checking the facet will find records that match the 'yes' or 'true' value of the column. Input Field facet type: the Input Field facet type supports comparing a user-entered value with the facet column. This enables faceted searches such as finding stores within a user-entered number of miles, or records where a column contains the user-entered text. Performance optimization for distinct value facets.

Report Printing
Built-in PDF printing and Excel download: built-in APIs to print PDF and download Excel in interactive reports and classic reports. Interactive Report - Send E-Mail: all download formats can now be attached. Enhanced integration with Oracle BI Publisher. New APIs to generate files: the new APEX_REGION.EXPORT_DATA and APEX_DATA_EXPORT APIs programmatically generate PDF, CSV, Excel, HTML, JSON and XML files (a minimal sketch appears at the end of this post). More language support: built-in PDF now supports Chinese, Japanese and Korean languages.

REST Data Source Synchronization
APEX supports data synchronization from a REST Data Source (formerly known as Web Source Modules) to a local table. Synchronization can run either on Schedule or on Demand, by calling the APEX_REST_SOURCE_SYNC package. Developers don't need to build custom PL/SQL code in order to copy data from REST services to local tables; APEX provides this as a declarative option. More control over REST Source Data: REST Source Data can be appended or merged to the local table.
Replacing all local data is also supported. Local or REST Source Data APEX components using the REST Data Source can be configured to use the local table instead. Configure Technical Details Technical details like HTTP request limits, commit intervals or delete methods for the Replace mode are configurable. Automatic Table Generation   REST Data Source Connector Plug-Ins The APEX Plug-In infrastructure has been extended to support Connector Plug-Ins for external REST APIs. This enables APEX to fully leverage REST API features like result pagination or server-side filtering, often used with 3rd Party REST Services (note: APEX natively supports ORDS and Oracle Fusion SaaS Services today!) REST Service Handling The Plug-In code handles REST service-specific implementation details like the pagination style or how filters are passed to the REST API Automatically Pass Relevant Information When APEX invokes a REST Data Source (e.g. to render a report), the engine will invoke the Plug-In code and pass all relevant context information. HTTP Requests The Plug-In code executes one or multiple HTTP requests and passes results back to the APEX engine. Automatic Processing APEX processes the REST response received from the Plug-In.     New Web Credential Types APEX 20.2 introduces new URL Query String and HTTP Header types for Web Credentials. This allows developers to use the highly secure and encrypted credential storage also for REST Services which expect an element (like an API key) as part of the URL. APEX makes sure that such sensitive parts are not written to debug or execution logs. A web credential can now be protected by providing a URL pattern. APEX will only use the Web Credential for URLs starting with the given pattern; otherwise an error message will be raised. To change the URL pattern, the secret part of the Web Credential needs to be entered again.   Redwood Light Theme Style Universal Theme now provides a new Redwood Light theme style for your apps, available via Theme Roller. This new theme style aligns with Oracle's new user experience design system, and provides the new design, color, and textures that extend across all of Universal Theme. Refresh your existing apps to uptake the latest version of Universal Theme and this new theme style.     Developer Experience Multiple Tabs in Property Editor Page Designer has been enhanced to support multiple tabs in the Property Editor pane, making it more efficient to access the attributes of a region. Monaco Editor Oracle APEX now uses Monaco Editor to provide a vastly improved coding experience throughout the development environment. The new editor provides in-context code completion, syntax highlighting, and superior accessibility. Inspect the SQL, PL/SQL and JavaScript code with Ease The Embedded Code utility allows developers to inspect the SQL, PL/SQL and JavaScript contained within an APEX application. Having the ability to view the embedded code makes conducting tasks such as code reviews, security evaluations or application tuning, far more convenient. Code can be saved to the file system from the App Builder, or using the APEXExport utility. Quick SQL Enhancements Quick SQL now enables you to save and load data models, provides automatic primary key column defaults, and delivers far better coding experience with improved syntax highlighting and autocompletion for table and column directives.   New and Improved Items New Checkbox This single checkbox offers an alternative to the Switch item type for Boolean columns. 
This works in Interactive Grid as well, even when not in edit mode. The previous Checkbox type has been renamed to Checkbox Group. File Browse: the File Browse item type has been enhanced to support rendering as a drop zone, supporting drag & drop of a file to be uploaded. Rich Text Editor: the Rich Text Editor item type has been upgraded to use CKEditor 5 and now supports markdown output. Text Field: the Text Field item type has a new Text Case setting to optionally transform the user-entered text to upper or lower case. The Text Field Trim Spaces and Text Case settings and the Textarea Trim Spaces setting are now applied on the client as well as the server.

Additional Features
Below are a number of additional features you should familiarize yourself with. Tree Region Enhancements: the Tree region type has been enhanced to support lazy loading and refresh without having to reload the full page. New Interactive Grid Saved Report Static ID Support: linking to Interactive Grid saved reports should use the saved report static ID instead of the report name. With APEX 20.2, the APEX_IG API has been updated to require the saved report static ID rather than the report name when linking to Interactive Grids. Web Source Modules are now named REST Data Sources.

As in every APEX release, a number of enhancements and bug fixes have been made to the productivity and sample applications. Additionally, numerous bugs have been fixed in APEX 20.2, resulting in even higher quality and stability for APEX environments. For more information about many of the other exciting new features in APEX 20.2, please see the Release Notes or visit the What's New page.

Additional Information
Application Express (APEX) is the low-code application development platform which is included with every Oracle Database and is also provided with every Oracle Database Cloud Service, including Oracle Autonomous Database and the Oracle Cloud Free Tier. APEX, combined with the Oracle Database, provides a fully integrated environment to build, deploy, maintain and monitor data-driven business applications that look great on mobile and desktop devices. To learn more about Oracle Application Express, visit apex.oracle.com. To learn more about Oracle Autonomous Database, visit oracle.com/autonomous.
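To give a flavor of the new report printing APIs mentioned above, here is a minimal PL/SQL sketch following the documented APEX_DATA_EXPORT pattern. It assumes it runs inside an APEX session (for example from a page process); the query is a placeholder, and the full set of options is in the APEX 20.2 API reference.

DECLARE
    l_context apex_exec.t_context;
    l_export  apex_data_export.t_export;
BEGIN
    -- Open a query context against the local database (placeholder query).
    l_context := apex_exec.open_query_context(
        p_location  => apex_exec.c_location_local_db,
        p_sql_query => 'select empno, ename, sal from emp' );

    -- Export the result set as PDF; other constants cover CSV, Excel, HTML, JSON and XML.
    l_export := apex_data_export.export(
        p_context => l_context,
        p_format  => apex_data_export.c_format_pdf );

    apex_exec.close( l_context );

    -- Stream the generated file to the client.
    apex_data_export.download( p_export => l_export );

    apex_application.stop_apex_engine;
END;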


Database

Exadata Cloud Service X8M—No Database Workload is Too Large

The fastest cloud database service is now the most scalable Oracle Exadata Cloud Service, the world’s fastest cloud database service, is now even faster, supporting the latest Exadata X8M hardware and bringing the benefits of Intel® Optane™ Persistent Memory (PMEM) and Remote Direct Memory Access over Converged Ethernet (RoCE) to the Oracle Cloud. On top of that, Exadata Cloud Service X8M redefines elasticity in the cloud, with configurations that scale to over 1,600 cores and 25 PB of HCC compressed data. How did Exadata Cloud become even faster?  With smart use of persistent memory and the RoCE networking fabric, Exadata Cloud now has 50x lower storage access latency than AWS databases.  With typical storage access latency as low as 19usec, Exadata X8M is now up to 10x faster than before, boosting performance of a wide range of OLTP and Analytical applications.  I’ve written about what PMEM and RoCE can do in the Exadata platform before—so check out my previous blog, “Exadata Cloud@Customer X8M: The Best In-Datacenter Database Cloud Platform” for the details.  What are you going to do with all that speed?  You can consolidate more workloads with less infrastructure and fewer CPUs, and ultimately lower your total cost of ownership.  To boost Exadata’s consolidation capabilities, ExaCS X8M will have almost double the usable memory (1,390 GB per server) and double the local disk space for Oracle Homes. But Exadata Cloud Service X8M is not just catching up to its sibling Exadata Cloud@Customer, it is redefining elasticity in the cloud.  Exadata Cloud Service now supports independent scaling of compute and storage up to 32 compute servers and 64 storage cells.  This provides three key benefits to customers: No workload is too large:  Customers can scale to support huge workloads with up to 1,600 cores, 44 TB memory, 96 TB PMEM, 1.6 PB Flash, and 2.5 PB of usable data (25 PB after 10x HCC Compression). Right-sized systems:  Customers can choose the optimum mix of storage and compute.  They are no longer required to deploy compute and storage in fixed ratios, eliminating the need to pay for unnecessary infrastructure.  They can choose what is best for their workload. On-demand Elasticity:  Grow your system at any time by adding additional compute and storage servers.  Pay for that capacity only when you need it.  To make this work, we have redefined the concept of an Exadata in the cloud.  Gone are the fixed rack sizes (Quarter, Half, and Full).  Now, you start out with a minimal elastic configuration (happens to be the same as a quarter rack) of two database servers and three storage servers, and you add database and storage servers one or more at a time as required.  The Oracle cloud takes care of provisioning these servers from pools of RoCE interconnected compute and storage, building the ideal system as you go. Start with a small quarter rack and grow it over time as your business grows.  If you need more storage, add more.  If your system needs additional compute to remain balanced, add compute.  Pay for this additional infrastructure only when you need it.  With maximum configurations of 32 compute servers and 64 storage cells, be confident that the system can scale to meet future growth. Exadata Cloud Service X8M will be available globally in the next few weeks.  Pricing is unchanged from Exadata Cloud Service X8—you get persistent memory and the new faster RoCE interconnect for no additional charge.  
You can try it out with little investment as we allow you to provision a system for as little as 48 hours. More details are available at www.oracle.com/engineered-systems/exadata/

Bob Thome is a Vice President at Oracle responsible for product management for Database Engineered Systems and Cloud Services, including Exadata, Exadata Cloud Service, Exadata Cloud@Customer, RAC on OCI-C, VM DB (RAC and SI) on OCI, and Oracle Database Appliance. He has over 30 years of experience working in the information technology industry. With experience in both hardware and software companies, he has managed databases, clusters, systems, and support services.


Autonomous Database

Machine Learning Performance on Autonomous Database

In many organizations, a data science project likely involves the data scientist pulling data to a separate analytics server - analyzing and preparing data and building machine learning models locally. As enterprises grow their data science teams and data volumes expand, common access to data and the ability to analyze that data in place can dramatically reduce time-to-project-deployment and overall complexity. Building models and scoring data at scale is a hallmark of Oracle's in-database machine learning - Oracle Machine Learning. Combine this with Oracle Autonomous Database - the converged database with auto-scale capabilities - and a team of data scientists can work comfortably in the same environment. In this blog post, we take a look at factors affecting machine learning model building performance as well as performance numbers illustrating the performance and scalability possible with Oracle Machine Learning. In a subsequent post, we'll discuss scoring performance.

Top 7 factors affecting performance
Many factors affect machine learning performance, including:

Data volume – whether building models or scoring data, the most obvious factor is the amount of data involved – typically measured in the number of rows and columns, or simply gigabytes.

Data movement and loading – related to data volume is the performance impact of moving data from one environment to another, or from disk into the analytics processing engine. This time needs to be considered when comparing machine learning tools and processes.

Choice of algorithm – different algorithms can have vastly different computational requirements, e.g., Naïve Bayes and Decision Tree algorithms have low computational demands compared with those of Generalized Linear Models or Support Vector Machines.

Data complexity – some patterns in data are easily identified by an algorithm and result in a model converging quickly. Other patterns, e.g., non-linear, can require many more iterations. In other cases, the cardinality of categorical variables or the density/sparsity of data can significantly impact performance.

Algorithm implementation – open source and even proprietary algorithms are often implemented in a non-parallel or single-threaded manner, meaning that, even when run on multi-processor hardware, no performance benefit is realized. Such traditional single-threaded algorithms can often be redesigned to take advantage of multi-processor and multi-node hardware through parallelized and distributed algorithm implementations. Enabling parallelism is often fundamental for improving performance and scalability.

Concurrent users – one data scientist working on a single model on a dedicated machine may or may not see adequate performance relative to the factors identified above. However, when multiple users try to work concurrently to build and evaluate models or score data, overall performance for these users may degrade significantly, or work may even fail due to exhausting memory or other system resources. The ability of an environment to scale up resources to meet demand alleviates such impact.

Load on the system – while the number of concurrent machine learning users impacts performance, non-ML workloads (interactive users and scheduled jobs) can both impact and be impacted by ML workloads.
Compute environments that can manage and balance such uses can provide better overall performance.

Enter Oracle Autonomous Database
Oracle Autonomous Database includes the in-database algorithms of Oracle Machine Learning (OML), which address the factors cited above that impact machine learning performance. By virtue of being in the database, OML algorithms operate on data in the database such that no data movement is required, effectively eliminating the latency of loading data into a separate analytical engine from disk or of extracting it from the database. The OML in-database algorithm implementations are also parallelized—they can take advantage of multiple CPUs—and distributed—they can take advantage of multiple nodes as found with Oracle Exadata and Oracle Autonomous Database. Oracle Autonomous Database further supports performance by enabling different service levels to both manage the load on the system, by controlling the degree of parallelism jobs can use, and auto-scaling, which adds compute resources on demand—up to 3X for CPU and memory—to accommodate both ML and non-ML uses. (A minimal sketch of building an in-database model appears at the end of this post.)

Performance results
To illustrate how Oracle Autonomous Database with Oracle Machine Learning performs, we conducted tests on a 16 CPU environment, involving a range of data sizes, algorithms, parallelism, and concurrent users. Oracle Autonomous Database supports three service levels: high, medium, and low. 'High' limits the number of concurrent jobs to three, each of which can use up to the number of CPUs allocated to the database instance (here, up to 16 CPUs). 'Medium' allows more concurrent users but limits the number of CPUs each job can consume (here, 4 CPUs). 'Low' allows even more concurrent use but only single-threaded execution, i.e., no parallelism.

Let's begin by comparing the single user experience with four popular in-database classification algorithms: Decision Tree (DT), Generalized Linear Model (GLM), Naive Bayes (NB), and Support Vector Machine (SVM). We start with the medium service level, which caps the number of CPUs at 4 and, for a 16 CPU environment, the number of concurrent jobs at 20 (1.25 * 16). We use the ONTIME data set with 8 columns (equating to 70 coefficients when accounting for categorical variables). Notice the linear scalability across the range of data set sizes, i.e., doubling the number of rows roughly doubles the run time. While there is some variation in the individual job execution times, this plot depicts the average execution time.

Next we look at the effect of the high service level, which enables a job to use up to the number of CPUs allocated to the database instance, in this case 16 CPUs. As we noted earlier, different algorithms respond differently not only to the data, but also to the number of available resources. In some cases, increased parallelism can actually adversely impact performance, as we see with SVM above, due to the overhead of introducing parallelism for "smaller" data sets. However, at higher data volumes, the additional CPU resources clearly improve performance, by about 50% for 800M rows. The remaining algorithms saw performance improvements across the range. As with the medium service level, we see effectively linear scalability across data sizes.

Let's now turn our attention to concurrent users. We start with the medium service level using the Generalized Linear Model (GLM) algorithm. It is interesting to note that since each run is limited to 4 CPUs, auto-scale had its most significant impact at 8 concurrent users.
At 4 concurrent users and 4 CPUs each, this consumes all 16 CPUs, so this should not be surprising. When we turn auto-scale on, there are more CPUs available for more concurrent users. This illustrates how a team of data scientists can work in the same environment with modest impact on one another, and that this impact can be further mitigated with auto-scale.

Here, we look at the Support Vector Machine (SVM) algorithm for regression. The benefits of auto-scale are particularly evident for 4 and 8 concurrent users, with a ~30% reduction in execution time for 400M rows. As discussed above, Oracle Autonomous Database with Oracle Machine Learning provides scalability and performance for data science teams, while providing powerful machine learning tools and autonomous data management capabilities. Thanks to Jie Liu and Marcos Arancibia for their contributions to these performance results.
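As a concrete illustration of building one of the in-database models discussed above, here is a minimal PL/SQL sketch using the DBMS_DATA_MINING package to train a GLM classification model. The table, column and model names are placeholders; the actual ONTIME data set and the exact settings used for the benchmarks are not shown in the post.

-- Settings table: choose the GLM algorithm and let OML do automatic data preparation.
CREATE TABLE glm_settings (
  setting_name  VARCHAR2(30),
  setting_value VARCHAR2(4000)
);

BEGIN
  INSERT INTO glm_settings VALUES (DBMS_DATA_MINING.ALGO_NAME, DBMS_DATA_MINING.ALGO_GENERALIZED_LINEAR_MODEL);
  INSERT INTO glm_settings VALUES (DBMS_DATA_MINING.PREP_AUTO, DBMS_DATA_MINING.PREP_AUTO_ON);
  COMMIT;

  -- Build the classification model in-database; the data never leaves the database.
  DBMS_DATA_MINING.CREATE_MODEL(
    model_name          => 'FLIGHT_DELAY_GLM',   -- placeholder model name
    mining_function     => DBMS_DATA_MINING.CLASSIFICATION,
    data_table_name     => 'FLIGHTS',            -- placeholder training table
    case_id_column_name => 'FLIGHT_ID',
    target_column_name  => 'CANCELLED',
    settings_table_name => 'GLM_SETTINGS');
END;
/

Because the algorithm runs inside the database, the degree of parallelism it can use is governed by the service (high, medium or low) the session is connected to, as described above.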


Database Cloud Services

Autonomous Database: Customer Controlled Encryption Keys

There are two options to manage your database encryption keys for Autonomous Database on Dedicated Exadata Infrastructure: 1. Automatic lifecycle management of encryption keys via autonomous operations, i.e. Oracle automation-controlled key management. 2. Customer controlled key management.

Option #1 is self-managing and there is not much for the service user to do. Let's talk about what customer controlled keys mean and how to go about using that option. Option #2 is for customers whose corporate security policies mandate tighter control of encryption keys. The customer controlled key management option provides 3 key features: 1. Master Encryption Key (MEK) generation on a separate HSM-based key management service in Oracle Cloud Infrastructure. 2. Customer control of MEK lifecycle management and key rotation. 3. Access to the MEK can be controlled using IAM policy-based role separation.

Oracle Cloud Infrastructure (OCI) offers an HSM-based service called the Vault to manage all your keys, certificates and other secrets. OCI Vault runs on a FIPS 140-2 Security Level 3 certified HSM device to meet the highest security standard. Autonomous Database on dedicated Exadata infrastructure provides integration with the OCI Vault service so that database master encryption keys may be generated and stored in the vault rather than on the database host.

How does database encryption work? All data in the Oracle autonomous database is encrypted in memory before being stored on disk or other persistent memory. This includes all data files, redo logs, backups and replication streams. The data encryption keys (DEKs) are part of the data dictionary and are accessed frequently as data is read and written to the database. These keys are themselves encrypted, and the master encryption key may be stored either in a PKCS#12 software wallet, or externally on an HSM device in the Vault service. In a Cloud@Customer deployment, master keys may be stored on-premises in Oracle Key Vault, a security-hardened key management appliance.

To set up encryption key management in OCI Vault there are 3 things you need to do: create a Vault and Key in the OCI Vault Service; set up an IAM policy that defines which users and resources have access to the key; and create a network path between your Exadata infrastructure resource and the Vault service. While this may seem like a lot to do before you can start using externally managed keys, the idea is to provide enterprise customers the fine-grained access control that most security policies mandate. Nothing is assumed. You, the customer, decide which specific Exadata infrastructure has access to a vault or keys.

Note that the concept of a 'Key' in the OCI Vault service is similar to that of a 'wallet', i.e. each wallet can store numerous MEKs. Therefore, as a best practice, it is recommended that each Autonomous Container Database have its own OCI Vault Key (think wallet), which becomes the holder of all MEKs for the container database (ACD) itself and all associated Autonomous Databases (ADBs). Once the OCI Vault is set up with a Key, the IAM policies are in place, and a network path is established, you are ready to deploy container databases with customer managed keys in the OCI Vault. The choice between Oracle managed and customer managed keys is made only at the ACD level. The PDB / ADB consumer does not need to worry about key management.
This allows for a simpler end-user experience for an ADB user or developer who is simply interested in deploying applications and basic database lifecycle operations. This video tutorial and step-by-step lab guide provide all the details you need to start using customer managed keys with your autonomous database on dedicated infrastructure. A quick way to confirm that a database is using an external (HSM-backed) keystore is sketched below.
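As a small, hedged illustration (not part of the original post): assuming your database user has access to the V$ENCRYPTION_WALLET view, the following query is one way to check where the TDE master key is held.

-- Show where the TDE keystore lives and whether it is open.
-- A WRL_TYPE of FILE indicates a software wallet; HSM indicates an external keystore.
SELECT wrl_type,
       wallet_type,
       status
  FROM v$encryption_wallet;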


Database Cloud Services

Sharding Oracle Database Cloud Service

Oracle Sharding is now available in Oracle Cloud with Oracle Database Cloud Service as well as with Kubernetes and Docker containers (OKE). Oracle Sharding enables hyperscale, globally distributed, converged databases. It supports applications that require linear scalability, elasticity, fault isolation and geographic distribution of data for data sovereignty. It does so by distributing chunks of a data set across independent Oracle databases (shards); a minimal DDL sketch appears at the end of this post. Shards can be deployed in the cloud or on-premises and require no specialized hardware or software. The following figure shows a table horizontally partitioned across three shards. Figure 1-1 Horizontal Partitioning of a Table Across Shards

Benefits of Sharding
Linear Scalability. Sharding eliminates performance bottlenecks and makes it possible to linearly scale performance and capacity by adding shards. Fault Containment. Sharding is a shared-nothing hardware infrastructure that eliminates single points of failure, such as shared disk, SAN, and clusterware, and provides strong fault isolation—the failure or slow-down of one shard does not affect the performance or availability of other shards. Geographical Distribution of Data. Sharding makes it possible to store particular data close to its consumers and satisfy regulatory requirements when data must be located in a particular jurisdiction. Rolling Upgrades. Applying configuration changes on one shard at a time does not affect other shards, and allows administrators to first test the changes on a small subset of data. Unlike NoSQL solutions, Oracle Sharding provides strong data consistency, the full power of SQL, support for structured and unstructured data, and the Oracle Database ecosystem.

Additional Information
Please review the following links for additional information. Sharding Oracle Database Cloud Service: this is the listing for sharding automation available in Oracle Cloud Infrastructure Marketplace. It automates provisioning and management of an Oracle Sharded Database. The automation includes the following features: it supports system-managed sharding on the Enterprise Edition - Extreme Performance database edition; automatic and uniform distribution of shards, the catalog and shard directors across ADs in a region and fault domains within each AD; Data Guard-based data replication for shards and the catalog, providing high availability within a region; the ability to easily scale the number of shards, providing horizontal scalability; and the ability to easily scale the number of shard directors for high availability and load balancing of requests. Sharding with Kubernetes and Docker containers in Oracle Cloud (OKE) - this GitHub repository has deployment procedures for automating provisioning of Oracle Sharded Databases on Oracle Kubernetes Engine (OKE) using Oracle Cloud Infrastructure Ansible Modules and Helm/Chart. Reference Architecture Patterns - these are customer-inspired Oracle Cloud deployment architecture patterns with best practices for scalability, availability and security. Oracle Database Sharding - this is the Oracle Sharding product page, which has the latest information, customer case studies and links to various resources.
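To make the idea of horizontally partitioning a table across shards more concrete, here is a minimal sketch of system-managed sharding DDL. It assumes a shard catalog and a tablespace set named ts1 already exist; the table and column names are placeholders, and the statements are issued from a session connected to the shard catalog.

-- Run against the shard catalog so the DDL is propagated to all shards.
ALTER SESSION ENABLE SHARD DDL;

-- A sharded table: rows are distributed across shards by consistent hash
-- of the sharding key (cust_id), into chunks stored in tablespace set ts1.
CREATE SHARDED TABLE customers
( cust_id   NUMBER        NOT NULL,
  name      VARCHAR2(100),
  region    VARCHAR2(20),
  CONSTRAINT customers_pk PRIMARY KEY (cust_id)
)
PARTITION BY CONSISTENT HASH (cust_id)
PARTITIONS AUTO
TABLESPACE SET ts1;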


Strategy

Standard Edition High Availability Released - See What's New

The "Standard Edition 2 – We Heard You! Announcing: Standard Edition High Availability" blog post published approximately two months ago resulted in a lot of interest in this new feature - Thank you! It is therefore with great pleasure that I can announce the general availability of Standard Edition High Availability (SEHA) on Linux, Microsoft Windows and Solaris with Oracle Database 19c, Release Update (RU) 19.7. Additional operating systems are planned to be supported later this year. What's New As the Oracle Database 19c New Features Guide under the "RAC and Grid section" states, Standard Edition High Availability provides cluster-based failover for single-instance Standard Edition Oracle Databases using Oracle Clusterware. It benefits from the cluster capabilities and storage solutions that are already part of Oracle Grid Infrastructure, such as Oracle Clusterware, Oracle Automatic Storage Management (Oracle ASM) and Oracle ASM Cluster File System (Oracle ACFS). Standard Edition High Availability is fully integrated with Oracle Grid Infrastructure starting with Oracle Grid Infrastructure 19c, Release Update 19.7. The prerequisites for SEHA database systems are therefore largely the same as for all Grid Infrastructure-based database systems, as discussed under Requirements for Installing Standard Edition High Availability. Standard Edition High Availability databases are not Real Application Clusters (RAC)-enabled. Oracle RAC One Node is a RAC-enabled database in that the RAC-option needs to be enabled in the Oracle Database home from which the database runs. This is not the case for Standard Edition High Availability databases. While both solutions provide cluster-based failover for the Oracle Database, RAC One node supports additional high availability features further reducing planned maintenance related downtime that are not part of the Standard Edition High Availability offering. For more information on how Standard Edition High Availability, Oracle Restart, Oracle RAC and Oracle RAC One Node compare, see the High Availability Options Overview for Oracle Databases using Oracle Clusterware. Standard Edition High Availability databases can be licensed using the “10-day failover rule”, which is described in this document. This rule includes the right to run the licensed program(s) [here the Standard Edition High Availability database] on an unlicensed spare computer in a failover environment for up to a total of ten separate days in any given calendar year. This right only applies when a number of machines are arranged in a cluster and share one disk array, which is the case for Standard Edition High Availability databases by default. In addition, SEHA databases are subject to all licensing regulations that generally apply to a Standard Edition 2 (SE2) single-instance Oracle Database. Note that SEHA databases are not subject to a per-cluster socket limitation, but need to adhere to the per-server socket limitation that applies to any Standard Edition 2 Oracle Database.  Standard Edition High Availability databases can either be freshly installed or configured using an existing single-instance Standard Edition Oracle Database. There is no direct upgrade path from either a single-instance or a pre-19c Oracle RAC Standard Edition 2 database. 
While the database can be upgraded, configuring it to be a Standard Edition High Availability database requires additional manual steps, explained in the Managing Standard Edition High Availability with Oracle Databases section of the Oracle Database Administrator's Guide. Standard Edition 2 Oracle RAC databases need to be converted to single-instance Standard Edition Oracle Databases prior to upgrading to Oracle Database 19c, as described in My Oracle Support note 2504078.1.

Concluding, Standard Edition High Availability provides fully integrated, cluster-based failover for Standard Edition Oracle Databases using Oracle Clusterware. Oracle's Standard Edition 2 (SE2) customers thereby benefit from the high availability capabilities and storage management solutions that are already part of Oracle Grid Infrastructure, such as Oracle Automatic Storage Management (ASM) and the Oracle ASM Cluster File System (ACFS), free of charge. Start testing here: Deploying Standard Edition High Availability, and please let me know your feedback.

The "Standard Edition 2 – We Heard You! Announcing: Standard Edition High Availability" blog post published approximately two months ago resulted in a lot of interest in this new feature - Thank you! It...