
Information, tips, tricks and sample code for Big Data Warehousing in an autonomous, cloud-driven world

Recent Posts

Data Warehousing

Using role Amazon Resource Names (ARNs) to securely access AWS Resources from your Autonomous Database

As Oracle continues its journey toward being the most interoperable cloud platform out there, we recently released support for accessing your Amazon Web Services (AWS) resources (such as S3 object storage) from Autonomous Database on Shared Infrastructure (ADB-S) using AWS IAM policies and roles, instead of dealing with security credentials containing secret access keys. This makes security simpler and better, allowing you to limit access to AWS resources by roles granted to a database, rather than individual user credentials, as well as removing any risk of exposing secret access keys in a script. Below I walk through a start-to-finish example and explanation of how you may set up such a role Amazon Resource Name (ARN) based credential for your ADB instance. Contrast this with my previous post showing how you may access AWS S3 object storage from ADB-S using a secret access key; this method has a few additional steps, but they are initial, one-time steps that can then be used across several different databases and database users. Follow the linked URLs in this post to the relevant documentation if you would like to learn more about each topic.

Note: If you already have an AWS role containing a policy that grants access to your AWS resources, you may skip Steps 1 and 2 and jump directly to Step 3. I assume below that you have an Autonomous Database instance provisioned. If not, and you are starting from scratch, follow this ADB Quickstart lab to help you get provisioned and familiar with the database console.

Step 1: Create an AWS Policy to allow access to the required AWS Resources

In your AWS console, your account administrator must define a policy that allows access to AWS resources (such as an S3 bucket). Navigate to Identity and Access Management (IAM) and hit Create Policy. Select the AWS resources to be granted access to, as well as the access level. In this example, I am granting All Access to the S3 object storage in the policy. Follow the wizard and finish up with a name and description for your policy. In my example, I've named the policy ARNDemoPolicy. Note that this policy can be granted to several different roles, if required.

Step 2: Create an AWS Role containing the policy

Next, we will create an AWS Role, which is a secure grouping containing the policies that will grant the necessary access to your AWS resources. Once again under IAM, select Create Role. Since we will be using our new role to trust our Oracle Cloud ADB instances (which are third party to the AWS Cloud), select Another AWS Account. We will be updating the Account ID and External ID with Oracle Cloud Infrastructure (OCI) values in Step 3; for now, simply use your existing AWS Account ID as a placeholder (which you can find as shown in the screen below), along with any placeholder value for the External ID. Here, I've used 0000. Follow the wizard, select the policies you would like to include in your role (I include the ARNDemoPolicy created in Step 1) and then click Create Role.

Step 3: Edit the AWS Role's Trust Relationship to include Oracle's User ARN, and an External ID for additional security

Under your newly created role on the AWS Console, as in the screen below, you will now see a Role ARN; in AWS, the role ARN is the role's identifier to be used for access to AWS resources. Note your Role ARN; we will use it in a later step. Click on Edit Trust Relationship. The trust relationship is a configuration in your role that informs your AWS resources about which external AWS account and external ID they can trust. You may read more about trust relationships here.
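To give you an idea of where we are headed, here is a rough sketch of what the edited trust policy will look like once this step is complete. The Oracle user ARN and the OCID used as the external ID below are placeholder values; we will query the real values from the database next.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/oracle-user"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEqualsIgnoreCase": {
          "sts:ExternalId": "ocid1.autonomousdatabase.oc1..exampleocid"
        }
      }
    }
  ]
}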
Next, let us switch over to query our Autonomous Database for the information needed in our AWS role's trust relationship. From your Autonomous Database console, go to Tools -> Database Actions -> SQL. For the trust relationship, we will need Oracle Cloud's user ARN and an External ID, which will be an Oracle Cloud Identifier (OCID).

Run the following query in your database and copy down the Oracle user ARN:

SELECT param_value FROM CLOUD_INTEGRATIONS WHERE param_name = 'aws_user_arn';

Also run the following query over the v$pdbs view in your ADB, and note down your database, compartment or tenancy OCID to use as your role's External ID. If you choose to use your database_ocid as the external ID, your AWS role will trust only your individual database. If you choose to use your compartment_ocid, your AWS role can trust all your ADB instances in that entire Oracle Cloud compartment. If you choose to use your tenancy_ocid, your AWS role can trust all ADB instances in your Oracle Cloud tenancy.

SELECT cloud_identity FROM v$pdbs;

Now, head back over to your AWS role's trust relationship. Paste in the Oracle user ARN in place of your Account ID, and paste in the External ID you chose above in place of your dummy External ID. Note: When you set the value for ExternalID, by default the value is expected to be in upper case. If you are supplying any part of the OCID in lower case, set the condition "StringEqualsIgnoreCase" instead of "StringEquals" in the JSON configuration of the trust relationship. This completes the setup required on AWS.

Step 4: Enable ARN credentials in your Autonomous Database

Now that we are done with our AWS setup, let us jump back into our ADB instance's SQL worksheet and run the following script to enable the use of ARN credentials for the database user that you want to give access to your AWS resource. Replace "adb_user" with your database user's username; in my example, I am providing access to the Admin user. Note: Only an Admin user with privileges to access the DBMS_CLOUD_ADMIN package may run this procedure.

BEGIN
   DBMS_CLOUD_ADMIN.ENABLE_AWS_ARN(username => 'adb_user');
END;
/

Step 5: Create an ARN credential with your AWS Role ARN

After ARN usage is enabled for the Autonomous Database instance and the necessary role is configured in AWS, we may now create a credential object with ARN parameters by running the following script, which calls the DBMS_CLOUD package. This credential will provide the database user access to Amazon resources. You may enter any credential name of your choosing. Paste in the Role ARN we copied from the role information on the AWS IAM console as the value for the 'aws_role_arn' parameter. Specify the 'external_id_type' parameter as 'database_ocid', 'compartment_ocid' or 'tenancy_ocid', depending on the type of OCID you specified in the trust relationship in Step 3; based on the type specified here, your ADB will internally send across the corresponding OCID for authorization when connecting to your AWS resource. You may repeat Steps 5 and 6 for any database user that you wish to grant access to the same AWS resources.
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
         credential_name => 'DEF_CRED_ARN',
         params =>
            JSON_OBJECT(
              'aws_role_arn' value
                  'arn:aws:iam::123456:role/AWS_ROLE_ARN',
              'external_id_type' value 'database_ocid'
                 )
  );
END;
/

Step 6: You may now access your AWS Resource!

Now, with the ARN credential set up, our ADB user has access to the AWS S3 object store. We can verify this by running the following example script, passing in the "credential_name" of the credential created above and the S3 object store bucket URI, to list all the objects present in the S3 bucket!

SELECT object_name FROM DBMS_CLOUD.LIST_OBJECTS(
               credential_name => 'DEF_CRED_ARN',
               location_uri => 'https://my-bucket.s3.us-west-2.amazonaws.com/');

Conclusion

Refer to the official documentation for more information about the use of ARN credentials in Autonomous Database. We are continuing to work on releasing such one-time, repeatable and simplified methods for ADB authorization, both for Oracle Cloud (see Can's post on authorization using Resource Principals) as well as for other cloud platforms.
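One last note: the same ARN-based credential works with the other DBMS_CLOUD procedures that accept a credential_name. As a minimal sketch, here is how loading a file from the bucket into an existing table might look; the table name "SALES_DATA" and the file "sales.csv" are hypothetical and used purely for illustration.

-- Load a CSV file from the S3 bucket using the ARN-based credential
-- "SALES_DATA" and "sales.csv" are hypothetical names for illustration
BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name      => 'SALES_DATA',
    credential_name => 'DEF_CRED_ARN',
    file_uri_list   => 'https://my-bucket.s3.us-west-2.amazonaws.com/sales.csv',
    format          => json_object('type' value 'csv', 'skipheaders' value '1')
  );
END;
/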


Autonomous

VIDEO: What's New In Autonomous Database For May 2021

Get our regular customer newsletter for Autonomous Database in video format via our YouTube Channel. You can read the latest newsletter and sign up for new editions by visiting our dedicated newsletter page - https://blogs.oracle.com/datawarehousing/newsletter. Oracle Data Warehouse Solutions YouTube Channel What's New in ADW - May 2021 (12:32) Listen to Scott Wiesner (Senior Director, Data Management Business Development) and Keith Laker (Senior Principal Product Manager for Autonomous Database and Analytic SQL) discuss this month's latest and greatest new features in Oracle Autonomous Database Shared. To watch previous recordings of our new feature webcasts, head over to our OTube channel! And don't forget to sign up for our regular new features newsletter via our blog's RSS feed - https://blogs.oracle.com/datawarehousing/newsletter.


Autonomous

Autonomous Database Newsletter - June 7, 2021

June 7, 2021

The Autonomous Database Shared development and product management teams are pleased to announce the following updates: Customer Managed Keys for TDE, Graph Studio Now GA, PCI Compliance Attestation for ADB, Support for Transparent Application Continuity, Support For OCI Resource Principals, Support For Amazon Resource Names, ADB As An RMAN Recovery Catalog, Global Leaders Update. To watch recordings of our new feature webcasts, head over to our OTube channel!

CUSTOMER MANAGED ENCRYPTION KEYS
NEW - ADB Now Supports Use Of Customer-Managed Encryption Keys
If you have security policies that require the use of customer-managed encryption keys, it is now possible to configure ADB to use an OCI Vault master encryption key. NOTE: The customer-managed encryption key is stored in Oracle Cloud Infrastructure Vault, external to the database host. If the customer-managed encryption key is disabled or deleted, the database will be inaccessible.
MORE INFORMATION DOC: About Master Encryption Key Management on Autonomous Database, click here

ORACLE GRAPH STUDIO
Oracle Graph Studio Is Now Fully GA with Autonomous Database. To download, click here. The documentation has been updated to provide more information about using Oracle Graph with ADB, and this includes details about specific restrictions that apply to ADB.
MORE INFORMATION DOC: Using Oracle Graph with Autonomous Database, click here WEBCASTS: What Graph can do for you + how to get started, click here Demystifying Graph Analytics for the Non-Expert, click here

PCI COMPLIANCE
NEW: PCI Compliance Attestation For ADB-Shared
ADB-Shared now has PCI Compliance Attestation. The OCI Compliance page on oracle.com is available via this link here

DATA SAFE
NEW: Data Safe Audit Retention Time Increased
Originally, Data Safe audit retention was a maximum of 12 months. The maximum retention is now 84 months (7 years), with an online retention period of 12 months supplemented by an additional archive retention period of 72 months (6 years). The screenshot below shows the Data Safe console where you can configure the retention period.
MORE INFORMATION DOC: Audit Data Lifecycle Management - Audit Data Archival, click here

TRANSPARENT APPLICATION CONTINUITY
NEW: ADB-Shared Now Supports Transparent Application Continuity
Application Continuity hides outages for thin Java-based applications, Oracle Call Interface (OCI) applications, and ODP.NET based applications, along with support for open-source drivers such as Node.js and Python. TAC transparently tracks and records session and transactional state so the database session can be recovered following recoverable outages. This is done with no reliance on application knowledge or application code changes.
MORE INFORMATION DOC: Using Application Continuity on Autonomous Database, click here DOC: Ensuring Application Continuity, click here BLOG: Application Continuity - A Database Feature for Developers, click here

OCI RESOURCE PRINCIPAL
NEW: Accessing OCI Resources using Resource Principal
ADB-Shared now supports OCI resource principal authentication. OCI resource principal is a part of Oracle Identity and Access Management (IAM) that eliminates the need to create and configure OCI user credential objects. The resource principal authorizes an OCI resource to make API calls to certain OCI services (e.g. Object Storage), and this authorization is established through dynamic groups and IAM policies.
MORE INFORMATION DOC: Use Resource Principal to Access Oracle Cloud Infrastructure Resources, click here BLOG: Accessing Oracle Cloud Infrastructure Resources from Your Autonomous Database using Resource Principal, click here

AMAZON RESOURCE NAMES
NEW: Use Amazon Resource Names (ARNs) to Access AWS Resources
When you use ARN role-based authentication with Autonomous Database, you can securely access AWS resources without creating and saving credentials based on long-term AWS IAM access keys. For example, you may want to load data from an AWS S3 bucket into your Autonomous Database, perform some operation on the data, and then write the modified data back to the S3 bucket. You can do this without using an ARN if you have AWS user credentials to access the S3 bucket.
MORE INFORMATION DOC: Use Amazon Resource Names (ARNs) to Access AWS Resources, click here

RMAN
NEW: Autonomous Database As An RMAN Recovery Catalog
You can use Oracle Autonomous Database as a Recovery Manager (RMAN) recovery catalog. A recovery catalog is a database schema that RMAN uses to store metadata about one or more Oracle databases. The RMAN recovery catalog is preinstalled in Autonomous Database in the RMAN$CATALOG schema. The preinstalled catalog version is based on the latest version of Oracle Database and is compatible with all supported Oracle Database versions.
MORE INFORMATION DOC: Autonomous Database RMAN Recovery Catalog, click here.

GLOBAL LEADERS
REGISTER: UK & IE VIRTUAL MEETING
Registration is now open for the UK and Ireland virtual event, click here. At this event you will hear directly from our key customers about their real-life experiences on projects with Oracle Cloud Infrastructure, Autonomous Database and Oracle Analytics Cloud. JULY 8, 2 - 5 PM (UK Time)
REGISTER: CUSTOMER PANEL EVENT FOR N. AMERICA
The event will provide a platform for a strategic discussion on data management solutions using Oracle Autonomous Data Warehouse, Oracle Cloud Infrastructure, Data Integration and Oracle Analytics. To register, click here. JULY 22, 12 - 2 PM ET

To watch recordings of our webcasts, head over to our OTube channel!

Copyright © 2021 Oracle and/or its affiliates. All rights reserved.


Autonomous

Accessing Oracle Cloud Infrastructure Resources from Your Autonomous Database using Resource Principal

As we covered in a previous blog post, Autonomous Database on Shared Infrastructure (ADB-S) users often interact with other Oracle Cloud Infrastructure (OCI) resources to perform various common operations such as loading data from Object Storage or creating an external table. The main goal of that blog post was to explain and demonstrate the available methods to create a credential object, which is a requirement for ADB-S to access Object Storage. Using an auth token or OCI native authentication is still fully supported, and if you'd like to learn more about them, you might want to take a look here. However, in this blog post we are going to be talking about a new concept that makes things a bit more interesting and easier, called OCI resource principal! I'm excited to announce that ADB-S now supports OCI resource principal authentication. What does this mean for you?

Let's first start with understanding what a resource principal is. An OCI resource principal is a principal type in Oracle Identity and Access Management (IAM) that eliminates the need to create and configure OCI user credential objects in the database. In other words, a resource principal uses a certificate that is frequently refreshed to sign the API calls to certain OCI services (e.g. Object Storage, Vault), and the authorization is established through dynamic groups and IAM policies. A dynamic group is a logical group of OCI resources that you create, while a policy is a statement (or list of statements) that specifies who can access which OCI resources that you have. If you'd like to learn more about dynamic groups, policies, or IAM in general, you can check out the documentation.

Let's get started with the fun part... a quick demo! In the remainder of this blog post, I'm going to demonstrate how to load data into my ADB-S instance from Object Storage using OCI resource principal. Here are the steps that we are going to follow: create a dynamic group and a policy, enable resource principal in ADB-S, and load data from Object Storage using resource principal.

Create a dynamic group and policy

As we covered earlier, we first need to create a dynamic group and a policy to be able to use resource principal authentication. This is basically how we will be able to tell IAM that a given Autonomous Database should be able to read from the Object Storage buckets and objects that are in a given compartment. In the OCI console, go to 'Identity and Security' -> 'Dynamic Groups' -> 'Create Dynamic Group'. Since I want to include only my ADB-S instance in this dynamic group, I need to add the OCID of my instance in the following rule:

resource.id = 'ocid1.autonomousdatabase.oc1.iad.osbgdthsnmakytsbnjpq7n37q'

Here's how my dynamic group looks after creation. Please note that you can add multiple resources in your dynamic groups (e.g. all Autonomous Databases in the tenancy or in a given compartment). You can check out our documentation here for other examples. Now that we have created a dynamic group that includes our ADB-S instance, we can go ahead and create a policy to allow this resource to access our Object Storage bucket that resides in a given compartment. In the OCI console, go to 'Identity and Security' -> 'Policies' -> 'Create Policy'. Add your policy statement in plain text or use the Policy Builder.
Allow dynamic-group ctuzlaDynamicGroup to read buckets in compartment ctuzlaRPcomp
Allow dynamic-group ctuzlaDynamicGroup to read objects in compartment ctuzlaRPcomp

Please note that my policy only allows read access to the Object Storage buckets and objects that I specified. It's also possible to allow higher levels of access as described in the documentation.

Enable resource principal in ADB-S

Resource principal is not enabled by default in ADB-S. In order to be able to use resource principal in our ADB-S instance, we need to enable it using the DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL procedure. As the ADMIN user, execute the following statement:

EXEC DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL();

PL/SQL procedure successfully completed.

Verify that resource principal is enabled as follows:

SELECT owner, credential_name FROM dba_credentials WHERE credential_name = 'OCI$RESOURCE_PRINCIPAL' AND owner = 'ADMIN';

OWNER CREDENTIAL_NAME
----- ----------------------
ADMIN OCI$RESOURCE_PRINCIPAL

The first step above enables resource principal authentication for the ADMIN user. If you'd like other database users to call DBMS_CLOUD APIs using resource principal, the ADMIN user can enable resource principal authentication for those database users as well:

EXEC DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL(username => 'ADB_USER');

PL/SQL procedure successfully completed.

Load data from Object Storage using resource principal

As the final step of our demonstration, let's create a table and load data from our Object Storage bucket:

CREATE TABLE CHANNELS
   (channel_id CHAR(1),
    channel_desc VARCHAR2(20),
    channel_class VARCHAR2(20)
   );

Table CHANNELS created.

BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name => 'CHANNELS',
    credential_name => 'OCI$RESOURCE_PRINCIPAL',
    file_uri_list => 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/adwc4pm/b/ctuzlaBucket/o/chan_v3.dat',
    format => json_object('ignoremissingcolumns' value 'true', 'removequotes' value 'true')
  );
END;
/

PL/SQL procedure successfully completed.

As you might have noticed, OCI$RESOURCE_PRINCIPAL is the credential_name we need to specify in DBMS_CLOUD APIs whenever we want to use resource principal authentication. To summarize, resource principal is a really neat Oracle IAM capability that enables your OCI resources to access various OCI services through dynamic groups and policies. Creating dynamic groups and policies can potentially be a one-time operation, since you can define your dynamic group such that it includes all existing and future ADB-S instances in a given compartment. Whenever you provision a new ADB-S instance, all you have to do is enable resource principal for that instance via the DBMS_CLOUD_ADMIN API if the instance needs to access other OCI services or resources. Much simpler and easier than creating credential objects via auth tokens or OCI native authentication!


Autonomous

How To: Download all files from an export to object store job in Autonomous Database using cURL

A good portion of Autonomous Database (ADB) users export and store their data in Oracle Cloud Object Storage. Last year, I had written this guide to help users export database dump files, using the Data Pump tool, to Oracle's object store. Once completed, an export, depending on the export filesize parameters, may have created several dump file parts. You may then download your dump file parts using a tool with Swift API support such as the popular cURL tool; most newer operating systems have cURL preinstalled. Recently, we have had users that use cURL to download their files reach out to ask how they may download all their exported dump file parts at once. cURL lacks support for wildcard or substitution characters in its URL, so it isn't a simple one-line command. Since my job is to make your life easier, below I provide a sample, easy-to-invoke bash script to download the multiple dump file parts of an export job from the object store, using cURL.

Note: The example below is run on a Mac and works similarly on Linux. If you are on Windows, you may need to install PowerShell, Cygwin or similar to run shell scripts.

Step 1: Download the bash script

Once you have run your export job and the dump file parts are in the object store, download the shell script "get_objects.sh" by clicking this link and unzipping it. You may open this script in a text editor and read the detailed notes and description of what this script does and the parameters it accepts.

Step 2: Run the script with your object store URL and credential parameters

Once you have downloaded the get_objects.sh script, open the Terminal and navigate to the folder where you downloaded the script above. Run the script with the bash command below, replacing in: the Swift object store URL that you used in your export job, following the "-f" file parameter (notice that we also use the same substitution character "%U" in that URL to download all the dump files from an export job; in this example, the dump file part names begin with "exp" and end in ".dmp", the dump file extension); and your user credentials, following the "-a" authorization parameter, to access the object store, which is your username and Swift auth token, as used in your export job. If you do not have this information, you may create a new credential by following Step 7 in this ADB workshop.

bash get_objects.sh -f 'https://swiftobjectstorage.<region identifier>.oraclecloud.com/v1/<object store namespace>/<bucketname>/exp%U.dmp' -a '<username>':'<SWIFT auth token>' --verbose

Once executed, the script will begin downloading all the dump files that match the name pattern in the URL provided! Follow the notes in the bash script if you need to use additional parameters in this bash invocation, for things like specifying your proxy or downloading to a specific folder. I hope this time-saving example helps many of you easily download your exported dump files from the object store in one easy go!
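One last tip: if you are not sure of the exact dump file part names to plug into the URL pattern, you can list them from the database first. This is a minimal sketch assuming you already have a DBMS_CLOUD credential object in your database; the credential name, namespace and bucket name below are placeholders.

-- List the exported dump file parts in the bucket before downloading them
SELECT object_name, bytes
FROM   DBMS_CLOUD.LIST_OBJECTS(
          credential_name => 'OBJ_STORE_CRED',
          location_uri    => 'https://objectstorage.<region identifier>.oraclecloud.com/n/<object store namespace>/b/<bucketname>/o/')
WHERE  object_name LIKE 'exp%.dmp';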


Autonomous

Autonomous Database Newsletter - March 8, 2021

March 8, 2021

Newsletter For Autonomous Database on Shared Exadata Infrastructure

Welcome to our latest customer newsletter for Autonomous Database on Shared Exadata Infrastructure. This newsletter covers the following new features and topics: Graph Studio for ADB, OML for Python, AES256 encryption, Enhancements for manual backups, Private endpoint support for tools, New OCI Events, Global Leaders Update. Don't forget to check out our "What's New" page for a comprehensive archive of all the new features we have added to Autonomous Database - click here. To find out how to subscribe to this newsletter click here.

(Shared Infrastructure Only) COMING SOON - Graph Studio In Autonomous Database
The Graph console is now built into Autonomous Database, with a launch button on the Tools panel of the ADB console. This integration provides comprehensive tooling for working with graphs: a graph modeling tool to map relational data to graphs, browser-based notebooks for interactive analysis and collaboration, integrated graph visualization, and the proven capabilities of the Graph Analytics Engine. This feature is currently in Limited Availability and GA is expected within the next couple of weeks.
LEARN MORE WEBCASTS: What Graph can do for you + how to get started, click here Demystifying Graph Analytics for the Non-Expert, click here AskTOM Office Hours - recordings click here, and for slide decks click here
DO MORE VIDEO: Autonomous Graph Database: A tour of the Graph Studio interface, click here VIDEO: Simplify Graph Analytics with Autonomous Database, click here
BACK TO TOP

(Shared Infrastructure Only) NEW - Oracle Machine Learning for Python
We are pleased to announce the general availability of Oracle Machine Learning for Python (OML4Py) on Autonomous Database. OML4Py leverages the database as a high-performance computing environment to explore, transform, and analyze data faster and at scale, while allowing the use of familiar Python. The in-database parallelized machine learning algorithms are exposed through a well-integrated Python interface. In addition, data scientists and other Python users can create user-defined Python functions managed in the database. Python objects can also be stored directly in the database, as opposed to being managed in flat files. OML4Py also supports automated machine learning, or AutoML, which not only enhances data scientist productivity, but also enables non-experts to use and benefit from machine learning. AutoML can help produce more accurate models faster, through automated algorithm and feature selection, and model tuning and selection.
LEARN MORE VIDEO: OML for Python (2m video) click here VIDEO: OML4Py Introduction (17m video) click here DOC: User's Guide, click here DOC: REST API for Embedded Python Execution, click here PDF: OML for Python briefing note, click here BLOG: Introducing Oracle Machine Learning for Python click here.
DO MORE OFFICE HOURS: OML4Py Hands-on Lab, click here. CODE: GitHub Repository with Python notebooks click here.
BACK TO TOP

(Shared Infrastructure Only) NEW - ADB Now Uses AES256 Encryption On Tablespaces
Note that ADB now uses AES256 encryption for tablespaces.
BACK TO TOP

(Shared Infrastructure Only) NEW - Enhancements To Manual Backup Process
You can now choose the bucket name used to store manual backups, and the steps to configure manual backups have been simplified. Set the database property DEFAULT_BACKUP_BUCKET to specify the manual backup bucket on Oracle Cloud Infrastructure Object Storage.
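For example, a sketch of setting that property from a SQL worksheet; the Swift bucket URL below is a placeholder, and the full steps (including setting the default credential) are in the linked documentation below.

-- Point manual backups at your own Object Storage bucket (placeholder URL)
ALTER DATABASE PROPERTY SET default_backup_bucket = 'https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/<namespace>/<bucketname>';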
DOC: Manual Backups on Autonomous Database, click here.
BACK TO TOP

(Shared Infrastructure Only) NEW - Private Endpoint Support For Built-In Tools
ADB within a private endpoint now supports access to APEX, SQL Developer Web, ORDS, and OML Notebooks. To connect an Oracle Analytics Cloud instance to an Autonomous Database that has a private endpoint, use the Data Gateway as you would for an on-premises database. See Configure and Register Data Gateway for Data Visualization for more information.
DOC: Configuring Private Endpoints, click here.
BACK TO TOP

(Shared Infrastructure Only) NEW - Additional Information Events Now Available
Autonomous Database generates events that you can subscribe to with OCI Events. Two new Information events have recently been added: ScheduledMaintenanceWarning and WalletExpirationWarning.
LEARN MORE DOC: Using OCI Events with ADB click here.
BACK TO TOP

Oracle Global Leaders Spring Webinar Series
Date: Tues March 17. Time: 9:00am PT / 12:00pm ET. Join us for this webcast to hear from Ray Owens, CEO of DX Marketing, about how they achieve effective ROI for their clients' marketing spend using ADW, OAC, Oracle Machine Learning and more. Learn about their growth from the early days of Oracle cloud-based solutions and the integration of more data sets from SAS and their new parent company Vision Integrated Graphics. Please use the following link to review the agenda and register for the event: click here.
BACK TO TOP


Autonomous

Introducing Oracle Machine Learning for Python

Data scientists and developers know the power of Python, and Python's widespread adoption is a testament to its success. Now, Python users can extend this power when analyzing data in Oracle Autonomous Database. Oracle Machine Learning for Python (OML4Py) makes the open source Python scripting language and environment ready for the enterprise and big data. Designed for problems involving both large and small data volumes, Oracle Machine Learning for Python integrates Python with Oracle Autonomous Database, allowing users to run Python commands and scripts for data exploration, statistical analysis, and machine learning on database tables and views using Python syntax. Familiar Python functions are overloaded to translate Python functionality into SQL for in-database processing, transparently achieving performance and scalability. Python users can take advantage of parallelized in-database algorithms to enable scalable model building and data scoring, eliminating costly data movement. Further, Python users can develop and deploy user-defined Python functions that leverage the parallelism and scalability of Autonomous Database, and deploy those same user-defined Python functions using environment-managed Python engines through a REST API. Oracle Machine Learning for Python also introduces automated machine learning (AutoML), which consists of: automated algorithm selection to select the algorithm most appropriate for the provided data, automated feature selection to enhance model accuracy and performance, and automated model tuning to improve model quality. AutoML enhances data scientist productivity by automating repetitive and time-consuming tasks, while also enabling non-experts to produce models without needing detailed algorithm-specific knowledge. Access Oracle Machine Learning for Python in Autonomous Database using Oracle Machine Learning Notebooks, where you can use Python and SQL in the same Apache Zeppelin-based notebook, allowing you to use the most appropriate API for each task. Take advantage of team collaboration and job scheduling features to further your data science project goals. Oracle Machine Learning for Python includes a range of template example notebooks with Oracle Autonomous Database that highlight various features. Zeppelin notebooks illustrating OML4Py features are also available in the Oracle Machine Learning GitHub repository. Learn more about Oracle Machine Learning or try it today using your Always Free Services from Oracle.


Autonomous

Autonomous Database Newsletter - January 8, 2021

January 7, 2021

Newsletter For Autonomous Database on Shared Exadata Infrastructure

Welcome to our latest customer newsletter for Autonomous Database on Shared Exadata Infrastructure. This newsletter covers the following new features and topics: Connect ADB To Data In Hadoop, GoldenGate Capture For ADB, Global Leaders Americas Winter Meeting. Don't forget to check out our "What's New" page for a comprehensive archive of all the new features we have added to Autonomous Database - click here. To find out how to subscribe to this newsletter click here.

(Shared Infrastructure Only) NEW - Analyze Data in Hadoop with Autonomous Database
Autonomous Database can now query data in Big Data Service's Hadoop clusters. Big Data Service includes the Cloud SQL Query Server - an Oracle-based SQL on Hadoop query execution engine. From Autonomous Database, you simply create a database link to Query Server, and your ADB queries will be distributed across the nodes of the cluster for scalable, fast performance. This has been made easier with the latest release of Big Data Service. This not only benefits Autonomous Database on Shared Infrastructure, it also benefits any other client tools that want to analyze data in Hadoop using Oracle SQL.
LEARN MORE BLOG: Analyze Data in Hadoop with Autonomous Database, click here LAB: Getting Started with Oracle Big Data Service, click here
BACK TO TOP

(Shared Infrastructure Only) NEW - GoldenGate Capture for Oracle Autonomous Database
Using Oracle GoldenGate you can capture changes from an Oracle Autonomous Database and replicate them to any target database or platform that Oracle GoldenGate supports, including another Oracle Autonomous Database. Oracle GoldenGate Capture for Oracle Autonomous Database supports the following: replication for different use cases (report offloading, active-active, cloud to cloud, and cloud to on premises); inter-region and cross-region replication (replicate data between different Oracle Cloud data centers around the world); and replication between targets (replicate from an Autonomous Database to any target database or platform that Oracle GoldenGate supports, including other Oracle Autonomous Database environments).
LEARN MORE DOC: Oracle GoldenGate Capture for Oracle Autonomous Database, click here DOC: Using Oracle GoldenGate with Autonomous Database, click here
BACK TO TOP

2 DAY CUSTOMER EVENT - Oracle Global Leaders Winter Americas Meeting 2021
Date: Tues Feb 23 - Wed Feb 24. Time: 1 - 6 PM EST. Invite your customers to register for this 2-day event. On Day 1 we will host 12 customers and partners sharing their insights in panels on cloud architecture, autonomous, integration techniques and analytics, with a product direction update by Oracle product management following each section. On Day 2, we will have a special Day with Development class on data management architecture, tailored for our Global Leaders members. Please use the following link to review the agenda and register for the event: click here.
BACK TO TOP


Autonomous

Analyze Data in Hadoop with Autonomous Database

There is incredible value to being able to go to a single place and query data across relational, Hadoop, and object storage. When you access that data, you don't need to be concerned about its source. You can have a single, integrated database to correlate information from multiple data stores. It's even better when that database you are querying is Autonomous Database on Shared Exadata Infrastructure (ADB-S); you get all of the great management, security and performance characteristics in addition to the best analytic SQL.

ADB-S lets you query any data in Big Data Service's Hadoop clusters. Big Data Service includes the Cloud SQL Query Server - an Oracle-based SQL on Hadoop query execution engine. From ADB-S, you simply create a database link to Query Server, and your ADB-S queries will be distributed across the nodes of the cluster for scalable, fast performance. This has been made easier with the latest release of Big Data Service. Kerberos is the authentication mechanism for Hadoop - highly secure, but also a bit cumbersome. In addition to making a Kerberos connection to Query Server, you can now make a standard JDBC connection. This not only benefits ADB-S, it also benefits any other client that wants to analyze data in Hadoop using Oracle SQL.

Here's a step-by-step on how to set up ADB-S to access Query Server. There are a few steps, but this is because we'll also be showing you how to use network encryption between ADB-S and Query Server.

Prerequisite

As a prerequisite, a secure Big Data Service cluster is created and sample data is uploaded to HDFS and Hive. See this tutorial for steps in creating the environment. After creating the secure cluster, you will follow the steps below to: add a user to the cluster and to Query Server; add sample data to HDFS and Hive; set up a secure connection between ADB-S and Query Server; and create a database link and start running queries!

Let's start. As root:

#
# Add an admin user to the cluster: OS & Kerberos
#
# OS user
dcli -C "groupadd supergroup"
dcli -C "useradd -g supergroup -G hdfs,hadoop,hive admin"
# Kerberos
kadmin.local
kadmin.local: addprinc admin
kadmin.local: exit

Add sample data to the cluster following the HDFS section of the data upload lab.

#
# Download Sample Data and Add to Hive
#
# get a kerberos ticket for the newly added "admin" user
kinit admin
# download and run the scripts required to create sample data and hive tables
wget https://objectstorage.us-phoenix-1.oraclecloud.com/n/oraclebigdatadb/b/workshop-data/o/bds-livelabs/env.sh
wget https://objectstorage.us-phoenix-1.oraclecloud.com/n/oraclebigdatadb/b/workshop-data/o/bds-livelabs/download-all-hdfs-data.sh
chmod +x *.sh
./download-all-hdfs-data.sh

Connect to the Query Server and add a new database user - also called admin. Note: the name is case sensitive! If the name is lowercased when added as an OS user, then the database user should also be lowercased. Next, sync the Hive database metadata with Query Server:

# Add a database user
# connect as sysdba to the query server
# Notice that the database username is lowercased in order to match the OS and kerberos user
sudo su - oracle
sqlplus / as sysdba

SQL> alter session set container=bdsqlusr;
Session altered.

SQL> exec dbms_bdsqs_admin.add_database_users('"admin"');
SQL> alter user "admin" identified by "your-password" account unlock;

Make sure that Query Server is synchronized with Hive. From the CM Home page, select the Big Data SQL Query Server service.
Next, select Synchronize Hive Databases from the Actions drop-down menu.

Accessing Query Server from Autonomous Database

Autonomous Database provides support for database links, a capability that allows queries to span multiple databases. You can use database links to query Big Data Service. Although the query from ADB-S to Query Server is serial, the Query Server will parallelize the processing against Hadoop data - scanning, filtering and aggregating data. Only the filtered, aggregated results are returned to ADB-S.

Security Considerations

Set up the user that will authenticate ADB-S to Query Server and encrypt data traffic over the connection: open port 1521 to access Query Server; make the Oracle wallet on Query Server available to ADB-S to enable TLS encryption; use DBMS_CLOUD.CREATE_CREDENTIAL to specify the user that will connect to Query Server; and use DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK to create the connection.

Open port 1521 to Access Query Server

In the subnet used by Big Data Service, ensure that an ingress rule is defined for port 1521: Networking > Virtual Cloud Networks > your-network > Security List Details

Set Up Data Encryption

The connectivity between ADB-S and Query Server uses TLS encryption. You must copy the Query Server wallet to the ADB-S instance to set up the secure connection. To copy the wallet, upload the wallet file from Query Server to an object storage bucket (we'll call it wallet), then download it to ADB-S. To access object storage from ADB-S, create a credential. Below, the credential is using an auth token. After creating the credential, you can query the bucket containing the wallet file from ADB-S.

-- In ADB-S, create credential for accessing Object Storage
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'OBJECT_STORE_CRED',
    username => 'your-name',
    password => 'your-password');
END;
/

-- View the contents of the wallet bucket
SELECT * FROM DBMS_CLOUD.LIST_OBJECTS('OBJECT_STORE_CRED', 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/oraclebigdatadb/b/wallet/o/');

Use SQL Developer to view the output of the object listing in the wallet bucket. Then, create a directory to store the wallet and download the wallet to that directory. View the contents of the files in the local directory:

-- create a directory for storing the wallet
create directory QS_WALLET_DIR as 'qs_wallet_dir';

-- download the wallet from the object store
BEGIN
  DBMS_CLOUD.GET_OBJECT(
    credential_name => 'OBJECT_STORE_CRED',
    object_uri => 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/oraclebigdatadb/b/wallet/o/cwallet.sso',
    directory_name => 'QS_WALLET_DIR',
    file_name => 'cwallet.sso');
END;
/
show errors;

-- verify that the wallet was downloaded successfully
SELECT * FROM DBMS_CLOUD.LIST_FILES('QS_WALLET_DIR');

Create the Database Link

Set up the connection from ADB-S to Query Server. First, create the credential that captures the database user identity used for the connection to Query Server:

-- create credential
-- NOTE: The username is case sensitive.
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'QS_CRED',
    username => '"admin"',
    password => 'ComplexPassword1234%');
END;
/

-- verify that the credential exists
SELECT * FROM ALL_CREDENTIALS WHERE CREDENTIAL_NAME='QS_CRED';

Then, create the database link that uses the credential.
The bdsqlusr service name and the SSL distinguished name can be found on the Query Server node in /opt/oracle/bigdatasql/bdsqs/wallets/client/tnsnames.ora:

[oracle@mgadwqs0 ~]$ cat /opt/oracle/bigdatasql/bdsqs/wallets/client/tnsnames.ora
# tnsnames.ora Network Configuration File: /opt/oracle/bigdatasql/bdsqs/edgedb/product/199000/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.
LISTENER_BDSQL = (ADDRESS = (PROTOCOL=TCPS)(HOST = xyz.bdsdevclus01iad.oraclevcn.com)(PORT = 1521))
BDSQL = (DESCRIPTION = (ADDRESS = (PROTOCOL=TCPS)(HOST = xyz.bdsdevclus01iad.oraclevcn.com)(PORT = 1521)) (SECURITY=(SSL_SERVER_CERT_DN="CN=xyz.bdsadw.oraclevcn.com"))(CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = xyz.bdsdevclus01iad.oraclevcn.com) ) )
bdsqlusr=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)(HOST=xyz.bdsadw.oraclevcn.com)(PORT=1521))(SECURITY=(SSL_SERVER_CERT_DN="CN=xyz.bdsadw.oraclevcn.com"))(CONNECT_DATA=(SERVICE_NAME=xyz.bdsadw.oraclevcn.com)))

Use the information for the bdsqlusr entry to create the database link:

-- create database link
BEGIN
  DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
    db_link_name => 'QUERY_SERVER',
    hostname => '129.159.99.999',
    port => 1521,
    service_name => 'xyz.bdsadw.oraclevcn.com',
    ssl_server_cert_dn => 'CN=xyz.bdsadw.oraclevcn.com',
    credential_name => 'QS_CRED',
    directory_name => 'QS_WALLET_DIR');
END;
/

-- select the current user on the Query Server using the database link
select user from dual@query_server;

USER
-----
ADMIN

Elapsed: 00:00:00.018
1 rows selected.

The database link was successful - the result of the query shows the current database user. You were able to connect to a Query Server on a kerberized cluster without a Kerberos client.

Run your query!

You can now query any of the tables available through Query Server. Below, we'll query the weather data sourced from Hive:

-- Query Weather data on Hive
select * from weather_ext@query_server where rownum < 10;

LOCATION                  REPORTED_DATE WIND_AVG PRECIPITATION SNOW SNOWDEPTH TEMP_MAX TEMP_MIN
------------------------- ------------- -------- ------------- ---- --------- -------- --------
NEWARK-NJ-LIBERTY-AIRPORT 2019-01-01    13       0.07          0    0         49       34
NEWARK-NJ-LIBERTY-AIRPORT 2019-01-02    7        0             0    0         39       33
NEWARK-NJ-LIBERTY-AIRPORT 2019-01-03    10       0             0    0         54       38
NEWARK-NJ-LIBERTY-AIRPORT 2019-01-04    8        0             0    0         50       32
NEWARK-NJ-LIBERTY-AIRPORT 2019-01-05    10       0.57          0    0         44       38
NEWARK-NJ-LIBERTY-AIRPORT 2019-01-06    13       0             0    0         43       26
NEWARK-NJ-LIBERTY-AIRPORT 2019-01-07    9        0             0    0         31       22
NEWARK-NJ-LIBERTY-AIRPORT 2019-01-08    2        0.13          0    0         39       30
NEWARK-NJ-LIBERTY-AIRPORT 2019-01-09    16       0.04          0    0         41       30

Elapsed: 00:00:01.098
9 rows selected.
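Because the Hive data is exposed through an ordinary database link, you can also join it with local tables in your Autonomous Database in a single statement - which is the whole point of having one place to query across data stores. Here is a minimal sketch; the local_sales table and its columns are hypothetical, used purely for illustration:

-- Join Hadoop (Hive) data accessed over the database link with a local ADB table
-- "local_sales" and its columns are hypothetical
select w.location, w.reported_date, sum(s.amount_sold) as total_sold
from   weather_ext@query_server w
join   local_sales s
on     s.sale_date = w.reported_date
where  w.temp_max > 80
group by w.location, w.reported_date;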


Autonomous

Autonomous Database Newsletter - December 9, 2020

December 9, 2020

Latest Newsletter For Autonomous Database on Shared Exadata Infrastructure

Welcome to our latest customer newsletter for Autonomous Database on Shared Exadata Infrastructure. This newsletter covers the following new features and topics: Release of APEX 20.2, Accessing Object Store Files, Operations Insights, Webcast: Win With Data, Replay: Global Leaders APAC Fall Meeting. Don't forget to check out our "What's New" page for a comprehensive archive of all the new features we have added to Autonomous Database - click here. To find out how to subscribe to this newsletter click here.

(Shared Infrastructure Only) UPDATE - APEX 20.2 Now Available In Autonomous Database-Shared
The version of APEX within Autonomous Database has been upgraded to version 20.2. This release of APEX introduces several new features and enhancements to help developers be more productive than ever before, from the all-new Cards component, more automations, and REST data synchronization, to charts in faceted search, REST connector plug-ins, report printing improvements and the Redwood Light theme style.
LEARN MORE DOC: Creating Applications with Oracle APEX in ADB, click here DOC: APEX Release Notes, click here BLOG: Announcing APEX 20.2, click here WEB: Visual Guide To What's New in APEX 20.2, click here.
DO MORE Visit apex.oracle.com for a list of all available tutorials - click here.
BACK TO TOP

(Shared Infrastructure Only) UPDATE - Changes To Access Methods for Files In S3 And Azure Blob Storage
It is now possible to use a pre-signed URL in any DBMS_CLOUD procedure that takes a URL to access files in Amazon S3, without the need to create a credential. It is also now possible to use a Shared Access Signature (SAS) URL in any DBMS_CLOUD procedure that takes a URL to access files in Azure Blob Storage, without the need to create a credential.
LEARN MORE DOC: DBMS_CLOUD Package File URI Formats in ADW, click here DOC: DBMS_CLOUD Package File URI Formats in ATP, click here DOC: DBMS_CLOUD Package File URI Formats in AJD, click here.
BACK TO TOP

(Shared Infrastructure Only) NEW - Oracle Cloud Infrastructure Operations Insights
Oracle Cloud Infrastructure Operations Insights provides 360-degree insight into database resource utilization and capacity. It is FREE to use with ADW and ATP. Note that Autonomous JSON Database is not currently supported by Operations Insights, but will be supported in a future release. Operations Insights consists of two integrated applications: Capacity Planning and Oracle SQL Warehouse. Capacity Planning enables IT administrators or capacity planners to understand how critical resources, including CPU and storage, are used; capacity planning ensures that enterprise databases have sufficient resources to meet future business needs. Oracle SQL Warehouse makes it easier to manage SQL at scale; it analyzes SQL performance issues, and identifies trends and key insights to help avoid potential future problems.
LEARN MORE DOC: About Oracle Cloud Infrastructure Operations Insights, click here BLOG: Announcing the general availability of Oracle Cloud Infrastructure Operations Insights click here. VIDEO: Introduction to Oracle Cloud Infrastructure Operations Insights Capacity Planning, click here VIDEO: Introduction to Oracle Cloud Infrastructure Operations Insights SQL Warehouse Service, click here.
BACK TO TOP

(Shared Infrastructure Only) WEBCAST - Win With Data
Are you looking to swiftly turn a growing mountain of data into business insights?
Attend this session to discover how to get the deep, data-driven insights needed to make quick decisions with the Oracle Departmental Data Warehouse in four live sessions. We invite you to join us for a virtual event on Dec. 17th from 9:00 - 11:00 a.m. PT: The New Self-Service Data Warehouse - The No Hassle Data Engine for Next-Level Insights. Register once and you can stay for all four sessions or just one! The agenda and registration link are available here.
BACK TO TOP

(Shared Infrastructure Only) REPLAY - Oracle Global Leaders APAC Meeting
Watch the recordings of real-life Oracle Cloud deployments from 12 customers and partners speaking in 3 live panels (covering Cloud Infrastructure, Autonomous Data Management & Analytics), with Q&A moderated by Reiner Zimmermann, VP Product Management. Each customer panel is followed by a product direction update by Oracle product management. The videos and presentations for the event are now available, click here.
BACK TO TOP


Autonomous

Loading XML data from your object store into Autonomous Database

The Oracle Autonomous Database (ADB) is truly a multi-model database, making it the ideal place to store and retrieve all types of data your business may use. Last year, I wrote a post about data loading, walking through examples of moving various types of unstructured and semi-structured data, lying in the Oracle object store, into your ADB. In recent weeks, we have had users reach out asking for similar examples loading XML data into ADB via the object store, so here we go...

The steps for loading semi-structured XML data are easy and similar to our previous example for loading JSON data into Autonomous Database using the DBMS_CLOUD package. Before we get into the example, the prerequisites to follow along are: Make sure you have a running ADB instance with a little storage space and a user with object store access; if you haven't done this already, you can follow Lab 1 in the ADB Quickstart workshop. Click here to download the example XML datafile we will use below, and upload this file to a bucket in your object store; if you need exact steps on how to upload files to the Oracle Object Store, follow Lab 3 Step 4 in the ADB Quickstart workshop. You may also use AWS or Azure object stores if you prefer, and may refer to the documentation for more information on this. You will provide the URL of the XML file lying in your object store to the DBMS_CLOUD function call. If you already noted your object store bucket's URL in the lab you may use that, or else you may copy the URL from your "Object Details" in the object store bucket's UI console. The format, if you would prefer to construct the URL yourself, is below; replace the placeholders <region_name>, <tenancy_name> and <bucket_name> with your object store bucket's region, tenancy and bucket names.

https://objectstorage.<region_name>.oraclecloud.com/n/<tenancy_name>/b/<bucket_name>/o/

Since my previous post, we have released SQL Developer Web, which makes it quick and easy to connect to your database and start running scripts. On your database's UI console, go to the "Tools" tab, click SQL Developer Web and log in to your database as we proceed.

Step 1 - Create credential for Object Store access

We begin by authenticating your database to connect to the object store with a user credential, which we create using the DBMS_CLOUD.CREATE_CREDENTIAL procedure. Please refer to this article for details on where to generate an auth token for your user.

--Please fill in your OCI Object Store username and auth token below
begin
  DBMS_CLOUD.create_credential(
    credential_name => 'OBJ_STORE_CRED',
    username => 'your.username@oracle.com',
    password => 'm4F;.]:2JxJO..wepofwMUvg'
  );
end;
/

Step 2 - Create an external table over your XML datafile

Have a look at the "xmlfile.xml" datafile we uploaded to our object store bucket (in the prerequisites above). Notice that it contains a purchase order document of a user "Sam" and several line items. We use the DBMS_CLOUD package to create an external table (here we name it "STAGING_TABLE"), with a column of type CLOB, over our datafile in the object store. We input the object store URL in the "file_uri_list" parameter. If your file has multiple XML documents, each document should be separated by a unique delimiter; in our script below we use the unique recorddelimiter "|" (pipe).
set define on
BEGIN
  DBMS_CLOUD.CREATE_EXTERNAL_TABLE (
    table_name => 'STAGING_TABLE',
    credential_name => 'OBJ_STORE_CRED',
    format => json_object('recorddelimiter' value '''|'''),
    file_uri_list => 'https://objectstorage.ca-toronto-1.oraclecloud.com/n/adwc4pm/b/xmldata/o/xmlfile.xml',
    column_list => 'xml_document clob',
    field_list => 'xml_document CHAR(80000)'
  );
END;
/

We can now immediately query our XML file lying in the object store via the external table CLOB column, using native database XML features such as XPath expressions.

SELECT EXTRACTVALUE(XMLTYPE(xml_document),'/PurchaseOrder/Actions/Action/User') as Users from STAGING_TABLE;

Step 3 - Copy and convert data from the external table into a native XMLTYPE column

Lastly, we can proceed to move the XML data into a column in ADB, by simply extracting each XML document into a table with an XMLTYPE (or CLOB) column, and using the powerful XML features in the database to parse and query them.

--Saving the XML data into an XMLTYPE column
CREATE TABLE xml_table (xml_document xmltype);
INSERT INTO xml_table (SELECT XMLTYPE(xml_document) from STAGING_TABLE);

Note: We currently support XMLType columns, not tables, in ADB on Shared Infrastructure. Registering XML schemas is not supported. Refer to the XML DB restrictions here.

Concluding

This example should give you the steps you need to quickly ingest and query your XML data in ADB. The Autonomous Database also supports spatial data and Oracle Text, allowing you to full-text index, optimize and analyze your semi-structured data and get the business value you want out of it. See you in the next one!

Like what I write? Follow me on Twitter! 🐦
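P.S. Once the documents are in the XMLTYPE column, you can also shred them into relational rows with XMLTABLE. Here is a minimal sketch; the LineItems/LineItem/Description paths below are an assumption about the sample document's structure, so adjust the XPath expressions to match your actual file.

-- Shred line items from each purchase order into relational rows
-- The LineItems/LineItem/Description paths are assumptions; adjust to your document
SELECT li.description
FROM   xml_table x,
       XMLTABLE('/PurchaseOrder/LineItems/LineItem'
                PASSING x.xml_document
                COLUMNS description VARCHAR2(100) PATH 'Description') li;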


Autonomous

Autonomous Database Newsletter - October 28, 2020

  October 28, 2020 Latest Newsletter For Autonomous Database on Shared Exadata Infrastructure       Welcome to our latest customer newsletter for Autonomous Database on Shared Exadata Infrastructure. This newsletter covers the following new features and topics: Autonomous Data Guard For Autonomous JSON Database Private Endpoint Support For Tools Database Modes PL/SQL SDK for OCI OCI Events Integration Changes to Password Rules Machine Learning Workshop Global Leaders APAC Meeting Note: some new features are only available with 19c. To get the most from Autonomous Database, upgrade to 19c as soon as possible. There is more information about how to upgrade existing 18c instances to 19c here, for ADW and here, for ATP. Don't forget to checkout our "What's New" page for a comprehensive archive of all the new features we have added to Autonomous Database - click here. To find out how to subscribe to this newsletter click here.   (Shared Infrastructure Only) NEW - Autonomous Data Guard Now Available For Autonomous JSON Database It is now possible to enable Autonomous Data Guard on AJD. This allows the following, depending on the state of the primary AJD database: If the primary database goes down, Autonomous Data Guard converts the standby database to the primary database with minimal interruption. After failover completes, Autonomous Data Guard automatically creates a new standby database. Admins/developers can perform a manual switchover operation, where the primary database becomes the standby database, and the standby database becomes the primary database. Note: Autonomous JSON Database does not provide access to the standby database. Simplified animation showing basic steps to create an ADG standby instance for AJD   LEARN MORE   DOC: Key features of Autonomous Data Guard for ATP/AJD, click here. DOC: Autonomous Data Guard notes for ATP/AJD, click here.     DO MORE   BLOG: Announcing Autonomous Data Guard! click here. BLOG: Autonomous Data Guard: Disaster Recovery Protection with a Couple Clicks in the Cloud! click here.   BACK TO TOP   (Shared Infrastructure Only) NEW - ADB Tools Now Support Private Endpoints Oracle Application Express, Oracle SQL Developer Web, and Oracle REST Data Services are now supported in Autonomous Databases with private endpoints. It is possible to connect Oracle Analytics Cloud to ADB that has a private endpoint via the Data Gateway. See the OAC documentation on configuring and registering Data Gateway for more information. Oracle Machine Learning Notebooks are not supported in databases with private endpoints (Note:support is coming shortly!).   LEARN MORE   DOC: Configure Private Endpoints with ADW, click here DOC: Configure Private Endpoints with ATP, click here BLOG: Announcing Private Endpoints in Autonomous Database on Shared Exadata Infrastructure, click here.     DO MORE The documentation includes two sample network scenarios for configuring PEs. DOC: Private Endpoint Configuration Examples for ADW: click here DOC: Private Endpoint Configuration Examples for ATP/AJD: click here. BLOG: Getting Up to Speed on Using Private Endpoints for Autonomous Database, click here.   BACK TO TOP     (Shared Infrastructure Only) NEW - Change ADB Mode to Read/Write or Read-Only It is now possible to select an Autonomous Database operation mode - the default mode is Read/Write. If you select Read-Only mode users can only run queries. 
In addition, for both modes it is possible to restrict access to only allow users with the RESTRICTED SESSION privilege to connect to the database. The ADMIN user has this privilege. A typical use case for restricted access mode is to allow for administrative tasks such as indexing, data loads, or other planned activities. Illustration shows the new UI for setting the database mode.   LEARN MORE   DOC: Change Autonomous Database Mode to Read/Write or Read-Only for ADW, click here. DOC: Change Autonomous Database Mode to Read/Write or Read-Only for ATP, click here.     BACK TO TOP   (Shared Infrastructure Only) NEW - Oracle Cloud Infrastructure SDK for PL/SQL The Oracle Cloud Infrastructure SDK for PL/SQL enables you to write code to interact with Oracle Cloud Infrastructure resources, including Autonomous Database. The latest version of the SDK is pre-installed by Oracle for all Autonomous Databases using shared Exadata infrastructure.   LEARN MORE DOC: PL/SQL SDK click here. DOC: DBMS_CLOUD_OCI_DB_DATABASE Functions, click here. DOC: DBMS_CLOUD_OCI_DB_DATABASE Types, click here. VIDEO: PL/SQL and the Art of OCI Management- Robert Pastijn, a member of Platform Technology Solutions group, discusses how he built a set of PL/SQL APIs to help manage Oracle Cloud accounts and services, click here.   DO MORE BLOG: Autonomous Database adds PL/SQL to its arsenal of OCI native SDKs, click here. BLOG: How to level up and invoke an Oracle Function (or any Cloud REST API) from within Autonomous Database, click here.   BACK TO TOP   (Shared Infrastructure Only) ENHANCEMENT - Oracle Cloud Infrastructure Events Integration You can now use OCI Events to subscribe to, and be notified of, Autonomous Database events. Using OCI Events you can create automation and receive notifications based on state changes for Autonomous Database. You can now subscribe to the following categories of events for Autonomous Database: Critical Events Information Events Individual Events   Illustration shows how to use OCI Events service.   LEARN MORE DOC: Using OCI Events with ADW click here. DOC: Using OCI Events with ATP, click here.     DO MORE   Visit the LiveLabs Using Events and Notification Workshop - click here.   BACK TO TOP   (Shared Infrastructure Only) NEW - Database Actions in SQL Developer Web SQL Developer Web UI is now based around the concept of "Database Actions". The hamburger menu provides access to additional actions and administration features. Under "Administration" there is a new UI for managing database users: Illustration shows user management console within SQL Developer Web. this includes the ability to create new users: Illustration shows creating a new user within SQL Developer Web.   LEARN MORE DOC: Create Users With SQL Developer Web on ADW, click here. DOC: Create Users With SQL Developer Web on ATP, click here.     BACK TO TOP   (Shared Infrastructure Only) UPDATE - Changes to Password Rules For non-admin users the password length has been lowered from 12 to 8 characters. Rules for covering passwords for ADMIN user are unchanged.   BACK TO TOP   (Shared Infrastructure Only) EVENT - Make Machine Learning Work for You     NOVEMBER 18, 2020 9:00AM - 11:00AM PT (GMT -8) Register for a series of webcasts designed to help you and your customers/partners understand how to harness machine learning. In four live sessions product experts will highlight use cases, best practices, and supporting technology that can help customers unleash the power of their data. 
Share the following event details and registration link with your customers and partners, click here.   BACK TO TOP   (Shared Infrastructure Only) EVENT - Oracle Global Leaders APAC Meeting     DECEMBER 3, 2020 1:00PM SGT (GMT +8) Listen to real-life Oracle Cloud deployments, with 12 Customers and Partners speaking in 3 live panels (covering Cloud Infrastructure, Autonomous Data Management & Analytics) with Q&A moderated by Reiner Zimmermann, VP Product Management. Each panel will be followed by a product direction update by Oracle product management. Share the following event details and registration link with your customers and partners, click here.   BACK TO TOP   oracle.com ADW ATP Documentation ADW ATP TCO Calculator Shared Dedicated Cloud Cost Estimator Autonomous Database ADB New Features Autonomous Database Schema Advisor Autonomous Database       Customer Forums ADW ATP       BACK TO TOP  


Autonomous

Autonomous Database adds PL/SQL to its arsenal of OCI native SDKs

Earlier this year I introduced our new functionality in Autonomous Database on Shared infrastructure (ADB-S) which made it possible to call most cloud REST APIs by simply running scripts right in your database, without the need to stand up your own server. We now went the extra mile and built a familiar, native OCI PL/SQL SDK over the previous generic DBMS_CLOUD.SEND_REQUEST procedures. This adds PL/SQL to the long list of native SDKs for Oracle Cloud Infrastructure (OCI). This SDK enables a user to call PL/SQL procedures, as you would in your database scripts, that invoke any OCI REST API. This is powerful as it allows you to consolidate parts of data pipeline and business logic, including creating and managing object storage data, spinning up virtual machines and autonomous databases, invoking serverless Oracle Functions, managing your data streams and everything in between! Let's jump into our SQL Developer Web worksheet and have a look at what this looks like when put into action. In my examples below, we will create a bucket in the object store that can hold data, and later call an Oracle Function, all using simple PL/SQL procedures. If you don't already have an Autonomous Database created, follow Lab 1 in our quickstart tutorial. It only takes a few minutes!   Create a native OCI credential For authorized secure calls to OCI REST APIs, we must first create a native OCI credential using a public and private key. If you are unfamiliar with creating credentials, follow my colleague Can's easy guide to creating a secure access OCI credential. Here, I create an OCI native credential "SDK_CRED". begin   DBMS_CLOUD.CREATE_CREDENTIAL (     credential_name => 'SDK_CRED',     user_ocid       => 'ocid1.user.oc1....',     tenancy_ocid    => 'ocid1.tenancy.oc1....',     private_key     => 'MIIE.....',     fingerprint     => 'f2:db:d9:18:a4:aa:fc:83:f4:f6:6c:39:96:16:aa:27'   ); end; /   Call the relevant DBMS_CLOUD_OCI_*   PL/SQL SDK package for OCI Object Storage When using this SDK over OCI REST APIs, a general guideline for your block of code will be: Identify the named SDK package that relates to the OCI resource on which you want to perform a REST API call Declare a response object as defined in the package, and it's corresponding response body object with type as specified in the response object type definition. Declare and set any request object and its parameters, if one needs to be sent along with your intended SDK function call ("bucketdetails" needs to be set, in our example below) Call the relevant SDK function for the action you want to perform, passing in values for the required parameters. You will pass in any request objects created in Step 3 over here. Set the response object to retrieve the response from your SDK function call. Query this response object for the status and response information from your function call. With this, we proceed to use types and functions in the DBMS_CLOUD_OCI_OBS_OBJECT_STORAGE package from the PL/SQL SDK. In the script below, we will create a new bucket in our object store named "examplebucketfordata" in the Toronto region. If you are unfamiliar with storing data in the cloud: A bucket is a container that can store your small or large data files (analogous to a folder on your computer where you store files). Read more about cloud object storage and buckets here. Refer to the screens below for information on where to find your necessary resource details.   
set serveroutput on

declare
  resp_body      dbms_cloud_oci_obs_object_storage_bucket_t;
  response       dbms_cloud_oci_obs_object_storage_create_bucket_response_t;
  bucket_details dbms_cloud_oci_obs_object_storage_create_bucket_details_t;
  l_json_obj     json_object_t;
  l_keys         json_key_list;
begin
  bucket_details := dbms_cloud_oci_obs_object_storage_create_bucket_details_t();
  bucket_details.name := 'examplebucketfordata';
  bucket_details.compartment_id := 'ocid1.compartment.oc1...';

  --Note the use of the native SDK function create_bucket
  response := dbms_cloud_oci_obs_object_storage.create_bucket(
                namespace_name => 'adwc4pm',
                opc_client_request_id => 'xxxxxxxxx',
                create_bucket_details => bucket_details,
                credential_name => 'SDK_CRED',
                region => 'ca-toronto-1');

  resp_body := response.response_body;

  -- Response headers
  dbms_output.put_line('Headers: ' || CHR(10) || '------------');
  l_json_obj := response.headers;
  l_keys := l_json_obj.get_keys;
  for i in 1..l_keys.count loop
    dbms_output.put_line(l_keys(i) || ':' || l_json_obj.get(l_keys(i)).to_string);
  end loop;

  -- Response status code
  dbms_output.put_line('Status Code: ' || CHR(10) || '------------' || CHR(10) ||
    response.status_code);
  dbms_output.put_line(CHR(10));
end;
/

You can find your compartment OCID by sliding out the left pane menu and going to Identity -> Compartments -> Your compartment. Your namespace can be found by visiting your tenancy details in the left pane menu at Administration -> Tenancy Details. You can look up your region's identifier here.

I have simplified the output below; however, notice that you receive a complete response header and body, as is expected in a REST API call, which you can use to navigate for status codes, resource location and other necessary information for your application's logic.

After running this script and getting a success response code (200), we now have a newly created bucket in our object store! You can now proceed to write similar scripts with the DBMS_CLOUD package to upload data and manage the files in your bucket.

Conclusion: Revisiting our friendly ol' Oracle Function example

As we conclude, let us rewrite our previous Oracle Function example, in which we had used the generic DBMS_CLOUD.SEND_REQUEST for our REST API call, to invoke a simple Oracle Function that prints "Hello <Name>". We use the DBMS_CLOUD_OCI_FNC_FUNCTIONS_INVOKE package and provide our Oracle Function "fndemo" OCID, which can be found on your Function's console page, alongside its invoke endpoint. While the generic function is still of good use for calling other cloud platforms' REST APIs, notice that the new SDK uses native PL/SQL objects for parameters and responses, improving your code quality and reducing the margin for error. This is a great example of how things are constantly evolving toward becoming simpler in the world of Autonomous!
declare
  resp_body blob;
  response  dbms_cloud_oci_fnc_functions_invoke_invoke_function_response_t;
begin
  -- Note the use of the native SDK function invoke_function
  response := dbms_cloud_oci_fnc_functions_invoke.invoke_function(
                function_id => 'ocid1.fnfunc.oc1....',   -- your Oracle Function's OCID
                invoke_function_body => UTL_RAW.cast_to_raw('Nilay'),
                credential_name => 'SDK_CRED',
                region => 'ca-toronto-1');

  resp_body := response.response_body;

  -- Response status code
  dbms_output.put_line('Status Code: ' || CHR(10) || '------------' || CHR(10) ||
    response.status_code);
  dbms_output.put_line(CHR(10));
end;
/

Like what I write? Follow me on the Twitter! 🐦
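P.S. Once the bucket exists, you don't need the raw REST layer to put files into it - the familiar DBMS_CLOUD procedures accept the same native credential. A minimal sketch, reusing the namespace, region and bucket names from the example above (the object name and contents here are just placeholders of my own):

begin
  DBMS_CLOUD.PUT_OBJECT(
    credential_name => 'SDK_CRED',
    object_uri      => 'https://objectstorage.ca-toronto-1.oraclecloud.com/n/adwc4pm/b/examplebucketfordata/o/hello.txt',
    contents        => TO_BLOB(UTL_RAW.cast_to_raw('hello object storage'))
  );
end;
/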


Big Data Service. What's new. October 2020 update

It's no exaggeration to say that Oracle has one of the most innovative cloud platforms. Big Data Service (BDS) is one of the PaaS services you will find there. It provides an easy way to deploy and manage Cloudera-based Hadoop clusters in Oracle Cloud. The service comes with a bunch of functionality, including:

- Cluster creation (fully secure and highly available), allowing users to spin up fully managed, ready-to-use Hadoop clusters in minutes
- Growing cluster compute (adding worker nodes) to catch up with an increasing workload
- Increasing storage only, for cases when data volumes keep growing while processing needs remain the same

In this blog post, I want to share what Oracle has added to Big Data Service over the past few months.

Be flexible

When we launched the service, many customers were happy to start using it. They found that it's really easy to create clusters and add new nodes and block storage, but one of the main concerns since day one was that the service wasn't flexible enough. In other words: I want to pay only for what I use. For example, during the daytime I may want a 5-worker-node cluster with 16 cores on each node. But why should I pay for that overnight or during the weekend? You shouldn't. Instead, you can use the change shape feature. It's documented here, and rather than re-writing the doc, I will just show how you can use it. Go to the service console and choose change shape. Next, choose a new shape for the desired group of nodes. Wait a bit (the cluster will perform a rolling restart) and you are good to go!

Another use case is "start low, grow big". Indeed, this is the cloud with its own benefits, and flexibility/time-to-market is one of the key ones. Imagine you have an idea which you need to test or prove. Do you need a big cluster for development? Apparently not. So start with something very basic and cheap, and if you prove your idea, add more nodes and change the shape (increase it).

Notebook access

Hadoop is a technology that allows you to store data and process it. But you need some interface to develop code on top of it, right? Oracle recently announced a new capability - Oracle Notebook. Oracle Notebook allows users to:

- Import existing notebooks from Jupyter Notebooks and Apache Zeppelin
- Use the environment as a single interface to run different interpreters, like Spark, PySpark, Python, R, JDBC and Graph (PGX)
- Leverage the scalability and performance of the Big Data cluster
- Collaborate with data scientists, developers and business users, sharing notebooks among teams
- Create specific tags on notebooks to facilitate the search for relevant documents
- Explore the data with an advanced visualization engine that allows many chart types not available in Zeppelin by default
- Use Cloud SQL and Oracle SQL to explore data using the JDBC interpreter

Get access from everywhere

For security reasons, we do recommend using FastConnect or a VPN to reach your cluster UIs (like Notebook, Cloudera Manager, Hue), but sometimes you want to expose your endpoint on the public internet (for development convenience). Here we recommend using OCI Load Balancers. The detailed step-by-step guide can be found here.

Quick start

Setting up all the security rules and network configuration can seem hard for a new user who has just signed up for OCI free credits. The good news is that there are Terraform scripts that will spin up the stack in minutes. Happy playing!


Autonomous

Use any AWS S3 compatible object store with Autonomous Database

In keeping with our mission to be an open and interoperable cloud platform, recently we added the ability to use any object store of your choosing that supports AWS S3 compatible URLs, in Autonomous Database on Shared Infrastructure (ADB-S). While I recommend using Oracle Object Storage since it is fast, affordable and optimized for Oracle Cloud, this feature enables you to load data into or from your ADB instance using object storage options on the market that you may already be using -  Wasabi, Google Cloud Object Storage, Digital Ocean Spaces and many more! For those less familiar with S3 compatibility, it simply means your cloud storage authentication method and APIs are compliant with the AWS standardized S3 interface format. This minimizes any changes required to your existing applications or tools if they are already using S3 compatible keys and URLs. Check your object store's documentation if you are unaware of its S3 compatibility. The Oracle Object Store provides S3 compatible URLs too, making it easy to switch it in as your underlying object store as your storage needs change!   Below, I present a great example using Wasabi's object storage to load some data into Autonomous Database using the DBMS_CLOUD package. If you do not have a Wasabi account and would like to follow along, you may sign up for a free Wasabi trial account. I do assume below that you already have an ADB instance created and ready to use. If not, follow this quick workshop for a walkthrough on how to get started from scratch with Autonomous Database.   Step 1: Create a bucket in your cloud object store to upload your datafiles A bucket is simply a container to hold your datafiles (termed as objects). Once you have logged into your Wasabi account, from the left navigation pane, select "Buckets" and hit "Create Bucket". Here I create a bucket named doctrial in the West region.        Step 2: Upload datafiles to your object storage bucket I proceed to upload some data into the object storage bucket we just created. Here, I upload some sales channels data. You may, of course, follow along with your own datafiles; if you would like to follow along with my example download and unzip the chan_v3.dat datafile here.   Step 3: Find or create your file's S3 compatible URL Next, let's find or create our S3 compatible URL to point our database script to. In Wasabi, clicking on the file in the bucket UI opens up a panel displaying the file's URL which, conveniently, happens to be an S3 compatible URL (notice "s3" as part of the standard sub-domain of the URL). Some object stores may not be quite as simple, but referring to your object store's documentation will provide the format for its S3 compatible URL, where you may have to substitute in your region, bucket name and datafile (ie. object) name.       Step 4: Create an S3 Access/Secret key pair for authenticating your ADB Instance For your ADB instance to be able to talk to your object storage, we must set up authentication between them. To do this you will need to create an access key. In our Wasabi example below, navigate to Access Keys and click "Create New Access Key".     Along with each new Access Key you generate, you will receive a Secret Key. This is a private key that you will be able to see here only once, so note it down somewhere safe and retrievable. We will be using this key pair in our script to load data into ADB. Note: For security, we always recommend using private buckets (this is the default with most object stores). 
If you do happen to use a public bucket for non-sensitive data, you may skip the use of keys (as well as the credential creation in the next step).   Step 5: Create a credential in your database instance  Now that your object store and data are set up, let us switch from the Wasabi console to our Autonomous Database. We jump right into our SQL Developer Web worksheet to run scripts on our database (you may use the desktop version of SQL Developer or your PL/SQL tool of choice connected to your database). I run the following script to create a credential named "wasabi_cred", which your ADB instance will use to authenticate itself while connecting to the object store.     begin   DBMS_CLOUD.CREATE_CREDENTIAL (       'wasabi_cred',       '<Paste your object store access key here>',       '<Paste your object store secret key here>'); end;       Step 6: Create table and copy data into it from your object store With the necessary credential created, I now create a table with the structure that I know matches my channels dataset. CREATE TABLE channels (     channel_id                  NUMBER(6)          NOT NULL,     channel_desc                VARCHAR2(20)    NOT NULL,     channel_class               VARCHAR2(20)    NOT NULL,     channel_class_id            NUMBER(6)          NOT NULL,     channel_total               VARCHAR2(13)    NOT NULL,     channel_total_id            NUMBER(6)          NOT NULL);   Next comes what we are all here for - I run this script which uses the DBMS_CLOUD package to copy the data from my datafile (object) lying in my object store into my database. You will copy your object's URL we created in Step 3 into the "file_uri_list" parameter, and your credential and table names, if you used names different than the scripts above. To recap, the credential "wasabi_cred" we created in Step 5, using our keys, is what authenticates your database to access the object storage bucket.   begin  dbms_cloud.copy_data(     table_name =>'CHANNELS',     credential_name =>'wasabi_cred',     file_uri_list =>'https://s3.us-west-1.wasabisys.com/doctrial/chan_v3.dat',     format => json_object('ignoremissingcolumns' value 'true', 'removequotes' value 'true')  ); end;     Finito And just like that, we're done! Having successfully run the "copy_data" script above, your data has been copied from your object store into your database and can now be queried, transformed or analyzed. You may also use this method to create external tables over your object store, if you prefer to query your data directly, without physically moving it into your database. As always, refer to the ADB documentation for more details. This new functionality provides excellent flexibility when it comes to your object storage options that work with Autonomous Database!     select * from channels;     Like what I write? Follow me on the Twitter! 🐦  
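P.S. If you would rather query the file in place than copy it, the external table route mentioned above uses exactly the same credential and URL. A minimal sketch (the table name CHANNELS_EXT is my own choice; the column list mirrors the CHANNELS table created earlier):

begin
  DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
    table_name      => 'CHANNELS_EXT',
    credential_name => 'wasabi_cred',
    file_uri_list   => 'https://s3.us-west-1.wasabisys.com/doctrial/chan_v3.dat',
    column_list     => 'channel_id NUMBER(6), channel_desc VARCHAR2(20), channel_class VARCHAR2(20), channel_class_id NUMBER(6), channel_total VARCHAR2(13), channel_total_id NUMBER(6)',
    format          => json_object('ignoremissingcolumns' value 'true', 'removequotes' value 'true')
  );
end;
/

-- the data stays in the object store and is read at query time
select count(*) from CHANNELS_EXT;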


Autonomous

SQL Macros Have Arrived in Autonomous Database

If you saw last week's Autonomous Database newsletter you will have noticed that one of the new features that I announced was support for SQL Macros (see here), and if you want to sign up for the Autonomous Database newsletter then see here. Whilst the newsletter included a link to the SQL documentation, I thought it would be useful to write a blog post that provides some more insight into how this new feature can make your SQL code both simpler and faster, and give you yet another great reason to move your workloads to the cloud. Let's get started...

What is a SQL Macro

It's a new, simpler way to encapsulate complex processing logic directly within SQL. SQL Macros allow developers to encapsulate complex processing within a new structure called a "macro" which can then be used within a SQL statement. Essentially there are two types of SQL Macros: SCALAR and TABLE. What's the difference?

- SCALAR expressions can be used in the SELECT list, WHERE/HAVING, GROUP BY/ORDER BY clauses
- TABLE expressions are used in a FROM clause

Why Do We Need SQL Macros When We Have PL/SQL?

In the past, if you wanted to extend the capabilities of SQL, then PL/SQL was usually the best way to add additional, more complex processing logic. However, using PL/SQL created some underlying complexities at execution time since we needed to keep swapping between the SQL context and the PL/SQL context. This tended to have an impact on performance and most people tried to avoid doing it whenever possible.

Why Is A SQL Macro Better Than A PL/SQL Function?

SQL Macros have an important advantage over ordinary PL/SQL functions in that they make the reusable SQL code completely transparent to the Optimizer – and that brings big benefits! It makes it possible for the optimizer to transform the original code for efficient execution because the underlying query inside the macro function can be merged into the outer query. That means there is no context switching between PL/SQL and SQL, and the query inside the macro function is executed under the same snapshot as the outer query. So we get both simplicity and faster execution.

How Does Autonomous Database Support SQL Macros

As of today, Autonomous Database only supports TABLE macros, with support for SCALAR macros coming later. I provided a gentle introduction to SQL Macros at OpenWorld 2019 and you can download the presentation from here: A New Approach To Simplifying Complex SQL. This covers both types of SQL macros so you understand how they can be used today and what you will be able to do shortly when SCALAR macros are supported. So what can we do today with a TABLE macro?

Two Types of Table Macros

There are two types of table macros:

1. Parameterized Views: the tables used in the query are fixed inside the definition of the macro and arguments are passed in to select all or specific rows from those tables. The "shape" of the queries returned from these tables is, in most cases, fixed. The most common use of these parameterized views is when you need to pass in arguments to select a subset of the rows that are then aggregated. The first code example below is a good example where I pass in a specific zip code and get the total sales revenue for that zip code.

2. Polymorphic Views: These are more interesting (at least I find them more interesting) because they have one or more table arguments (of course they can also have scalar-valued arguments as well). The passed-in tables are used inside the query returned by the macro, which gives them a lot of flexibility.
The second set of code examples below shows this type of macro, and I can see this being of real interest to DBAs because of the overall flexibility and power it provides to create useful utilities. This is probably going to need a follow-up post at some point.

How Do I Create A Table Macro?

We define a SQL macro using something like the following syntax:

CREATE OR REPLACE FUNCTION total_sales(zip_code VARCHAR2) RETURN VARCHAR2 SQL_MACRO.....

The keyword is 'SQL_MACRO' and the return type is a text string containing SQL that will be substituted into the original statement at execution time. There is a lot that can be done with SQL Macros, simply because they provide so much flexibility; however, my aim here is to try and keep things simple just to get us started. Maybe I will go deeper in subsequent blog posts. Let's create a simple table macro...

My First Table Macro

For our first table macro let's keep things relatively simple. Suppose I want to have a function that returns the total sales revenue from my fact table SALES for a specific zip code in my CUSTOMERS table. I need to join the two tables together (SALES and CUSTOMERS), find the matching rows and sum the result. In effect, we are creating a parameterized view. The SQL Macro will look like this:

create or replace function total_sales(zip_code varchar2)
  return varchar2 SQL_MACRO(TABLE) is
begin
  return q'{
    select cust.cust_postal_code as zip_code,
           sum(amount_sold) as revenue
    from sh.customers cust, sh.sales s
    where cust.cust_postal_code = total_sales.zip_code
      and s.cust_id = cust.cust_id
    group by cust.cust_postal_code
    order by cust.cust_postal_code
  }';
end;
/

Essentially I am encapsulating a parameter-driven SQL query within a PL/SQL function which is of type SQL_MACRO. To execute a query using the macro I do this:

select * from total_sales('60332');

and the output is as follows: So far so good... the explain plan is very interesting because what you see is not an explain plan for TOTAL_SALES but the SQL within the macro, which was transparently inserted by the Optimizer...

Obviously the SQL statement within the macro can be as simple or complex as needed. But this macro only allows you to query the SALES fact table and CUSTOMERS dimension table. What if we want a more dynamic approach where I could pass in any table to my macro for processing?

Going A Little Deeper...

In 12c we introduced a new set of keywords to return a subset of rows from a table. Mr Optimizer (aka Nigel Bayliss) wrote a post about this which you can read here. This allowed you to return the first 10 rows of a result set by using the syntax FETCH FIRST 10 ROWS ONLY. Let's take this new feature and wrap it up in a table macro:

create or replace function keep_n(n_rows number, t DBMS_TF.Table_t)
  return varchar2 SQL_MACRO(TABLE) is
begin
  return 'select *
          from t
          order by 1
          fetch first keep_n.n_rows rows only';
end;
/

This may look complicated but it is relatively simple to explain. The mysterious part for many readers will be the datatype for t, which is DBMS_TF.Table_t. Why not simply use a varchar2 string to capture the table name? Because that could lead to SQL injection, so we force the input to be a valid table name by enforcing the type via DBMS_TF.Table_t. If you want to learn more about why and how to use DBMS_TF.Table_t then I would recommend watching Chris Saxon's webcast from the recent Developer Live event - "21st Century SQL".
The above SQL Macro allows me to set the number of rows I want returned from any table, and the output is completely dynamic in that all columns in the specified table are returned:

select * from keep_n(10, sh.customers);

returns the first 10 rows from my table SH.CUSTOMERS. The following statement returns the first 2 rows from my table SH.CHANNELS:

select * from keep_n(2, sh.channels);

and the explain plan for the above query is transformed by the Optimizer to the actual query inside the macro:

select *
from sh.channels
order by 1
fetch first 2 rows only

Going Even Deeper...

What if we wanted to extend our table macro to return a random sample of rows from any table? That's a very simple thing to do with a macro! We just need to tweak the FETCH syntax to use PERCENT ROWS and insert an order by clause that uses the DBMS_RANDOM function. Now the macro looks like this...

CREATE OR REPLACE FUNCTION row_sampler(t DBMS_TF.Table_t, pct number DEFAULT 5)
  RETURN VARCHAR2 SQL_MACRO(TABLE) AS
BEGIN
  RETURN q'{SELECT *
            FROM t
            order by dbms_random.value
            fetch first row_sampler.pct percent rows only}';
END;
/

which allows me to randomly sample 15% of the rows from my CUSTOMERS table:

SELECT * FROM row_sampler(sh.customers, pct=>15);

or my PRODUCTS table:

SELECT * FROM row_sampler(sh.products, pct=>15);

and again the explain plan looks as if we had written the SQL statement in full as:

SELECT *
FROM sh.products
ORDER BY dbms_random.value
FETCH FIRST 15 PERCENT ROWS ONLY

Conclusion

Hopefully this has given you a little bit of a peek into the exciting new world of SQL Macros. This new feature is extremely powerful and I would recommend watching Chris Saxon's video from the recent Developer Live event - "21st Century SQL" - for more information on how to use SQL Macros with other features such as SQL pattern matching (MATCH_RECOGNIZE). Don't forget to take a look at the documentation, see here. LiveSQL will shortly be updated to the same patchset release as Autonomous Database and once that happens I will start posting some code samples that you can test and play with on that platform. Don't forget you can always sign up for a 100% free account to run your own Autonomous Database: https://www.oracle.com/cloud/free/ Have fun with SQL Macros!
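P.S. If you ever need to check which functions in a schema are actually flagged as macros, my understanding is that from 19.7 onwards the *_PROCEDURES dictionary views expose a SQL_MACRO column for exactly this purpose (verify against your own patch level). A quick sanity check for the three macros created in this post might look like:

select object_name, sql_macro
from   user_procedures
where  object_name in ('TOTAL_SALES', 'KEEP_N', 'ROW_SAMPLER');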


Autonomous

Keep your clone's data up-to-date with Refreshable Clones in Autonomous Database

One of the most widely used features in Autonomous Database on Shared Infrastructure (ADB-S) today is the ability to clone your database, no matter how big or small, with little to no effort. This week, we are souping up ADB's cloning abilities to include Refreshable Clones! A refreshable clone is a read-only clone that stays "connected" to, and has the ability to pull in data (ie. refresh) from, its source database with a simple click of a button. Until now, if you needed to update your clone's data from its source, you had two options: Move new data from the source database to the clone (via data pump, database links etc.) Create a new clone from the source database   Refreshable clones take away the friction of these above options by enabling you to refresh your clone's data by simply hitting a refresh button and providing the source database's timestamp to refresh to (aptly termed the refresh point); the clone automatically looks at your source database's logs and pulls in all the data up to your inputted timestamp. Here are some example use cases where this clone type can be useful to your team: Providing a routinely updated clone to a different business unit within your organization for reporting and analysis  Creating billing or workload separation for your databases between business units within the organization Providing up-to-date read-only test database environments to internal teams Of course, I look forward to learning about how our users make use of this functionality in various different scenarios.   With this context, let me walk you through a simple example of using a refreshable clone showing how powerful this feature can be!   Step 1: Setting up an Autonomous Database After logging into my Oracle Cloud account, I navigate to my existing ADB instance conveniently named "sourceDB". If you missed the memo and haven't created your first Autonomous Database yet, here is a quickstart tutorial on ADB to get you up to speed. To insert a line of data before we clone, I navigate to my SQL Developer Web (SDW) worksheet via the Tools tab my database's OCI console.   I then create a table named "refreshclonetests" with a single row of data in it, before we proceed to clone the database. Note that I perform this action at 12:20 am UTC.     Step 2: Creating a refreshable clone from the Autonomous Database instance I now jump into my source database's list of actions to select create clone.   I select the new Refreshable Clone option. Notice the text describing it; A refreshable clone must be refreshed every 7 days or less, else it falls too far out of sync from the source and can no longer be refreshed. For this example, I name my clone "refreshclone". As easy to remember as it gets.   I proceed to select the number of OCPUs for my refreshable clone; there is no storage selection necessary. Since this is a read-only clone that only brings in data from its source database, the amount of storage selected in TB is automatically the same as that of the source. There is also no Admin password option for the refreshable clone, as that is taken from the source when refreshed.   Clicking the create clone option starts provisioning the clone. Once provisioned, you can see useful clone information on the OCI console, including the source database that is attached to and the refresh point timestamp of the source to which the clone was refreshed.   
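For reference, the walkthrough above does everything through the SQL Developer Web UI; the statements behind Step 1 and the check we are about to run on the clone would look roughly like this minimal sketch (only the table name refreshclonetests comes from the post - the columns and values are placeholders of my own):

-- on the source database (Step 1): assumed DDL and the first row
CREATE TABLE refreshclonetests (
  id   NUMBER,
  note VARCHAR2(100)
);

INSERT INTO refreshclonetests (id, note) VALUES (1, 'inserted before cloning');
COMMIT;

-- on the refreshable clone (read-only): verify the data arrived
SELECT * FROM refreshclonetests;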
Opening up SQL Developer Web once again, via the refreshable clone's OCI console, and querying the database shows the table refreshclonetests that I created in the source, with the single row of data that I inserted.   Step 3: Inserting additional data into the source database Switching back to the source database SDW worksheet, I insert and commit an additional row into the source database. We now have 2 rows in the source but only a single row in the refreshable clone. Notice the second row I inserted at 12:38 AM UTC.     Step 4: Refreshing the clone to view new data Now here's the fun bit; while in my refreshable clone I hit the "Refresh" button in the banner. This banner also displays the date and time before which we must refresh the clone (which is 7 days after the last refresh was performed), before it would lose the ability to sync with the source. Also, notice the "Disconnect Clone from Source Database" option under "More Actions" here. At any point, you may choose to disconnect your clone from its source making it a regular, standalone, read/write database. This, of course, is a one-way operation and after disconnecting you will no longer be able to refresh data into the clone. If 7 days have gone by and you can no longer refresh your clone, you may still disconnect your clone from its source.   The popup asks for a "refresh point" timestamp of the source database to which we want to refresh. This makes refreshes consistent and intelligible, as it definitively refreshes to an exact timestamp of the source. Since I inserted the second row into the source at about 12:38 AM UTC, I input 12:39 AM UTC as my refresh point and hit refresh clone.   While the clone is refreshing it goes into the "Updating" state. During a refresh, connections are not dropped and any running queries on the clone simply wait until the refresh is completed. The refresh may take several seconds to several minutes depending on how long it has been since the last refresh and the number of changes that have come in since then.   Once the refresh is completed, we can see exactly what timestamp of the source the clone has been refreshed to in the clone's information.   In the clone's SDW worksheet, we can now run a select query on the "refreshclonetests" table and instead of a single row, we now see both rows of data from the source! The data in the clone has been seamlessly updated to reflect that which is in the source.   Wrapping Up With this Refreshable Clones feature, you can now keep cloned databases updated without any tedious manual export process. As new data comes into your source database each day, it can easily be refreshed into all its connected refreshable clones with a few button clicks. For more on Refreshable Clones, refer to the complete documentation. As with everything OCI, your refreshable clone can also be refreshed via simple REST API calls (see API documentation). Better still, with the ability to call cloud REST APIs from a controlling ADB instance, you can quickly schedule automate your data refreshes to fit right into your data pipeline, without having to deploy any servers!     Like what I write? Follow me on the Twitter! 🐦


Autonomous

Autonomous Database Newsletter - September 03, 2020

  September 03, 2020 Latest Newsletter For Autonomous Database on Shared Exadata Infrastructure       Welcome to our latest customer newsletter for Autonomous Database on Shared Exadata Infrastructure. This newsletter covers the following new features and topics: Refreshable Clones SQL Developer Web Enhancements SQL Macros Note: some new features are only available with 19c. To get the most from Autonomous Database, upgrade to 19c as soon as possible. There is more information about how to upgrade existing 18c instances to 19c here, for ADW and here, for ATP. Don't forget to checkout our "What's New" page for a comprehensive archive of all the new features we have added to Autonomous Database - click here. To find out how to subscribe to this newsletter click here.   (Shared Infrastructure Only) NEW - Creating A Refreshable Clone Autonomous Database provides cloning where you can choose to create a full clone of the active instance, create a metadata clone, or create a refreshable clone. With a refreshable clone the system creates a clone that can be easily updated with changes from the source database. A refreshable clone allows the following: Maintain one or more copies of the source database for use as read-only databases. A clone database is available when needed, and when the data needs to be updated, simply refresh the clone from its source database. Share copies of a production database with multiple business units. For example, one business unit might use the source database for ongoing transactions and another business unit could at the same time use the refreshable clone database for read-only operations. Refreshable clones allow customers to spread the cost of database usage across multiple business units. They can bill the different units separately, based on their usage of one or more refreshable clones. Illustration shows a click-through demo for creating a new refreshable clone. If an Autonomous Database is the source for a refreshable clone or clones then you can view the list of refreshable clones via the Resources area, select "Refreshable Clones" as shown below. Illustration shows the OCI console view of refreshable clones associated with a specific ADB instance.   LEARN MORE   PDF: Overview of Refreshable Clones, including click-through demo, click here. DOC: About Refreshable Clones for ADW click here. DOC: About Refreshable Clones for ATP click here.     DO MORE   Visit the LiveLabs home page for a list of all available Autonomous Database labs - click here.       BACK TO TOP     (Shared Infrastructure Only) ENHANCEMENTS - Updates to SQL Developer Web UI Autonomous Database comes with a built-in SQL Developer Web. The UI for this web console has just been updated. Autonomous Database uses pre-set consumer groups/resource plans (high, medium, low, tp and tpurgent) which determine memory, CPU, and the degree of parallelism for queries and scripts. In the past SQL Developer Web connections were defaulted to the LOW consumer group. With this latest update it is now possible to select any of the available consumer groups when using the worksheet. Illustration shows SQL Developer Web UI for selecting a consumer group.   The latest UI also provides a simpler experience for managing users: create, edit, grant roles & privileges, REST Enable accounts, view accounts where the password is about to expire and unlock an account. Illustration shows SQL Developer Web UI for managing ADB users.   
LEARN MORE   BLOG: Exploring the latest version of SQL Developer Web in the Oracle Autonomous Database, click here.         BACK TO TOP   (Shared Infrastructure Only) NEW - SQL Macros now in ADB What is a SQL Macro? It is a new and unique way for SQL developers to encapsulate complex processing which can then be reused within SQL statements. Unlike bespoke PL/SQL functions, SQL Macro code is transparent to SQL Optimizer – which brings big benefits since there is no overhead for context switching! There are two types of SQL Macros: TABLE expressions used in a FROM-clause SCALAR expressions used in SELECT list, WHERE/HAVING, GROUP BY/ORDER BY clauses Initially only TABLE macros will be available within Autonomous Database. Support for SCALAR macros is coming soon!   LEARN MORE PDF: Overview presentation for SQL Macros, click here VIDEO: 21st Century SQL: using SQL pattern matching and SQL macros - at the same time! click here. MOS: How To Identify the SQL Macros in Oracle Data Dictionary 19.7 Onwards (Doc ID 2678637.1). DOC: SQL_MACRO Clause click here.         BACK TO TOP   oracle.com ADW ATP Documentation ADW ATP TCO Calculator Shared Dedicated Cloud Cost Estimator Autonomous Database ADB New Features Autonomous Database Schema Advisor Autonomous Database       Customer Forums ADW ATP       BACK TO TOP  


Autonomous

Optimize the Orchestration of Your DevTest Lifecycle with the New Autonomous Database Rename Feature

One of the key advantages of Autonomous Database on Shared Exadata Infrastructure (ADB-S) is to be able to provision or clone a database with the click of a button. This is especially important in the context of agile development and testing (DevTest) in which the teams often need a fresh copy of the production environment in order to make sure the new projects, enhancements, or bug fixes solve the existing problems and don’t cause new issues in the production environment. Many ADB-S customers do this by frequently cloning their production databases for their DevTest teams because cloning an ADB-S instance requires very little effort and fits perfectly in the agile development lifecycle. In this blog post, we are going to explore how to further accelerate this cycle with the help of a new ADB-S capability that enables you to rename your existing ADB-S instance. Let’s put aside this new feature for a moment and see how several ADB-S customers used to take advantage of the cloning capability to keep their DevTest environments up-to-date until today. For the following example, let's assume I have a production database called adwsales and a test database called adwsalestest. In order to create a new clone of adwsales that shares the same name as adwsalestest, I will need to first terminate adwsalestest. Here's why... My new clone has to share the same name as the old one because I don’t want to change the connection string in my application code, and I have to terminate the old clone first because I cannot have two databases with the same name in the same region and tenancy. This has been a valid use case for many of our customers. However, creating a new clone and having access to your DevTest environment are serialized operations with no overlap in this approach. This is where the new rename capability comes into play. You probably can already tell how it can help improve this use case but to further explain, we can now start the new clone preparation with an intermediate name (e.g. adwsalesstage) in parallel while the old clone is still being used. Once the new clone is ready, we can rename it after terminating the old one. Again, the main motivation behind using the same name for every new clone is to avoid any application changes. The key difference between the two scenarios is that the rename capability allows replacing your existing DevTest database with a fresh copy of your production database thanks to a quick and simple metadata operation.If we go with the example above, when our new clone adwsalesstage is ready, we can terminate adwsalestest and use its name to rename adwsalesstage. Now it’s time to demonstrate it in a few simple steps: Download the regional wallet and have the production and test instances ready (adwsales and adwsalestest) Insert some data in the production (Now we need a new copy for DevTest!) Create a new clone of the production instance (adwsalesstage) Terminate the old test instance (adwsalestest) Rename adwsalesstage to adwsalestest Connect to adwsalestest Download the regional wallet and have the production and test instances ready (adwsales and adwsalestest) In this demonstration, I have two ADB-S instances: adwsales is my production database and adwsalestest is my test database that I cloned from adwsales. We are going to first download our regional wallet. 
The reason for using the regional wallet instead of the instance wallet is to avoid downloading a new wallet every time we create a new clone of the production since regional wallet serves all the instances that belong to the same tenancy in a given region. Let’s connect and run a simple query in our production and test instances: ctuzla-mac:instantclient_19_3 ctuzla$ ./sqlplus ADMIN/************@adwsales_low SQL*Plus: Release 19.0.0.0.0 - Production on Thu Aug 13 15:40:50 2020 Version 19.3.0.0.0 Copyright (c) 1982, 2019, Oracle. All rights reserved. Last Successful login time: Thu Aug 13 2020 15:32:06 -07:00 Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.5.0.0.0 SQL> select count(*) from sales; COUNT(*) ---------- 19 ctuzla-mac:instantclient_19_3 ctuzla$ ./sqlplus ADMIN/************@adwsalestest_low SQL*Plus: Release 19.0.0.0.0 - Production on Thu Aug 13 16:29:57 2020 Version 19.3.0.0.0 Copyright (c) 1982, 2019, Oracle. All rights reserved. Last Successful login time: Thu Aug 13 2020 15:32:06 -07:00 Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.5.0.0.0 SQL> select count(*) from sales; COUNT(*) ---------- 19 Insert some data in the production (Now we need a new copy for DevTest!) As we covered earlier, production databases may evolve rapidly and it’s important to have an up-to-date development and test environment. In order to simulate this scenario, let’s insert some data into our production database: SQL> insert into sales select * from sh.sales where rownum<60 order by time_id desc; 59 rows created. SQL> commit; Commit complete. SQL> select count(*) from sales; COUNT(*) ---------- 78 Create a new clone of the production instance (adwsalesstage) Since we want to maintain a fresh copy of our production database, let’s create a new clone called adwsalesstage, soon to be our new test database: Terminate the old test instance (adwsalestest) After verifying our new clone has been successfully created, we can terminate adwsalestest: Rename adwsalesstage to adwsalestest Now that the old clone is successfully terminated, we can go ahead and rename adwsalesstage to adwsalestest. We are doing this to avoid any connect string changes in the application code: Connect to adwsalestest Let's connect to our new test database and run the same query. Please note that the connect string hasn't changed and the query result reflects the insert operation we performed couple steps above: ctuzla-mac:instantclient_19_3 ctuzla$ ./sqlplus ADMIN/************@adwsalestest_low SQL*Plus: Release 19.0.0.0.0 - Production on Thu Aug 13 17:48:53 2020 Version 19.3.0.0.0 Copyright (c) 1982, 2019, Oracle. All rights reserved. Last Successful login time: Thu Aug 13 2020 16:31:04 -07:00 Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.5.0.0.0 SQL> select count(*) from sales; COUNT(*) ---------- 78 It’s important to note that in this demonstration I didn’t even have to download a new wallet after creating my second clone since I’m using a regional wallet and the service names for the test database stay the same after the rename operation. To summarize, ADB-S offers quick and easy provisioning which is an integral part of agile development and testing. In this blog post, we have seen how the rapid provisioning combined with the new database rename capability can further improve the DevTest lifecycle. 
If you would like to learn more about the rename feature, you can check out the documentation here.
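One closing thought: the rename in this walkthrough was done from the OCI console, but since Autonomous Database now ships the OCI PL/SQL SDK (including a DBMS_CLOUD_OCI_DB_DATABASE package for the Database service), the same step should be scriptable from another ADB instance as part of an automated DevTest pipeline. The sketch below is an illustration only - the type and parameter names are my assumptions based on the SDK's naming pattern, so check them against the DBMS_CLOUD_OCI_DB_DATABASE reference before relying on it:

declare
  -- Hypothetical type/function names following the SDK naming convention; verify in the docs.
  upd_details dbms_cloud_oci_db_database_update_autonomous_database_details_t;
  response    dbms_cloud_oci_db_database_update_autonomous_database_response_t;
begin
  upd_details := dbms_cloud_oci_db_database_update_autonomous_database_details_t();
  upd_details.db_name := 'adwsalestest';  -- the name taken over from the terminated clone

  response := dbms_cloud_oci_db_database.update_autonomous_database(
                autonomous_database_id             => 'ocid1.autonomousdatabase.oc1...',  -- OCID of adwsalesstage
                update_autonomous_database_details => upd_details,
                credential_name                    => 'OCI_NATIVE_CRED',  -- an API-key credential
                region                             => 'us-phoenix-1');    -- your region

  dbms_output.put_line('Status: ' || response.status_code);
end;
/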


Autonomous

Autonomous Database Newsletter - August 20 2020

August 20, 2020 Latest Newsletter For Autonomous Database on Shared Exadata Infrastructure Welcome to our latest customer newsletter for Autonomous Database on Shared Exadata Infrastructure. This newsletter covers the following new features and topics: Launch of Autonomous JSON Database Rename Autonomous Database Data Safe Now Supports Private Endpoints Analytics Cloud 5.7 and ADB ADW Training Plans for LiveLabs Data Warehouse Blog Compilation Note: some new features are only available with 19c. To get the most from Autonomous Database, upgrade to 19c as soon as possible. There is more information about how to upgrade existing 18c instances to 19c here, for ADW and here, for ATP. Don't forget to check out our "What's New" page for a comprehensive archive of all the new features we have added to Autonomous Database - click here. To find out how to subscribe to this newsletter click here. (Shared Infrastructure Only) NEW - Autonomous JSON Database - Autonomous Service For Application Developers Oracle Autonomous JSON Database is an Oracle Cloud service that is specialized for developing NoSQL-style applications that use JavaScript Object Notation (JSON) documents. Like other Oracle Autonomous Database services, it delivers: auto-scaling, automated patching, upgrading, and tuning. Autonomous JSON Database provides all of the same features as Autonomous Transaction Processing and, unlike MongoDB, allows developers to also store up to 20 GB of non-JSON data. There is no storage limit for JSON collections. Simplified animation showing basic steps to create an AJD instance LEARN MORE DOC: About Autonomous JSON Database, click here. DOC: Work with JSON Documents in Autonomous Database, click here. Q&A: Post questions about AJD on einstein.oracle.com, click here. O.COM: home page on oracle.com, click here. VIDEO: launch video from Developer Live event, click here. DO MORE Visit the LiveLabs home page for a list of all available Autonomous Database demo labs - click here. LAB: Loading, indexing and querying JSON Data - click here. VIDEO: Using ATP as a JSON document store - click here. NOTE - both demos use an ATP instance, but they also work with the new Autonomous JSON Database service.   BACK TO TOP (Shared Infrastructure Only) NEW - Change Database Name For An Autonomous Database Autonomous Database offers quick and easy provisioning which is an integral part of an agile development framework. This agility can be further accelerated with the help of a new feature - database rename - which makes it significantly easier and faster to deploy dev/test instances, cloned from production, by avoiding the need to change connect strings in supporting application code (blog post about a real-world use case coming soon on blogs.oracle.com/datawarehousing). The rename pop-up form from the OCI ADB instance details page End-to-end flow of renaming an ADB instance LEARN MORE DOC: Rename Autonomous Data Warehouse Instance, click here. DOC: Rename Autonomous Transaction Processing Instance, click here.   BACK TO TOP (Shared Infrastructure Only) NEW - Data Safe Now Supports ADBs Using Private Endpoints You can now register an Autonomous Database on Shared Exadata Infrastructure within a private virtual cloud network (VCN). For Oracle Data Safe to communicate with your database, you need to create an Oracle Data Safe private endpoint on the same VCN as your database's private endpoint. You can use any subnet; however, Oracle recommends that you use the same subnet as your database.
Illustration shows two private endpoints communicate with each other, allowing Oracle Data Safe to communicate with your database. LEARN MORE DOC: Register ADB on Shared Exadata Infrastructure with Private VCN Access, click here. PPT: Oracle OpenWorld 2019 Data Safe Overview Presentation click here BRIEFING NOTE: Secure Critical Data with Oracle Data Safe click here FAQ: Data Safe FAQ click here DOC: Getting Started click here PORTAL: Data Safe internal home page click here   BACK TO TOP (SharedInfrastructureOnly) NEW - Democratize Machine Learning with Oracle Analytics Cloud 5.7 Machine learning models in Oracle Autonomous Database can now be registered in Oracle Analytics Cloud. These can then be applied to datasets and data connections through the Oracle Analytics Cloud interface. Not only does this reduce the cost of switching between environments, it opens the door for business users to tap into the expertise of an organization’s data scientists. Illustration shows registration process within Analytics Cloud 5.7 for enabling access to Oracle Machine Learning models. LEARN MORE DOC: How Can I Use Oracle Machine Learning Models in Oracle Analytics? click here. DOC: What's New in Analytics Cloud click here   BACK TO TOP (SharedInfrastructureOnly) NEW - ADW Training Plans For LiveLabs We have launched a new page on oracle.com to help you navigate all the data warehousing workshops available within LiveLabs. Select the main topic area, then either choose to start a specific workshop now or reserve a date and time to work through it. Customers won't need to sign up for anything - all accounts and access will be provided! Illustration shows new data warehousing training plans for content in Oracle LiveLabs LEARN MORE Training plan home page on o.com, click here   BACK TO TOP (SharedInfrastructureOnly) NEW - Compilation of Blogs For Modern Data Warehousing Capabilities Here is a compilation (in PDF format) of the most important blogs that cover our modern data warehousing capabilities, such as: Autonomous Data Warehouse, big data, query optimization, machine learning, spatial, graph, SQL and our Global Leaders program. Note: the document will be refreshed on a regular basis to include the very latest content. Illustration shows the cover of the document Compilation of Blogs For Modern Data Warehousing Capabilities & Beyond To download the PDF file, click here. BACK TO TOP oracle.com ADW ATP Documentation ADW ATP TCO Calculator Shared Dedicated Cloud Cost Estimator Autonomous Database ADB New Features Autonomous Database Schema Advisor Autonomous Database       Customer Forums ADW ATP       BACK TO TOP


Big Data

Continuous ETL, Realtime Analytics, and Realtime Decisions in Oracle Big Data Service using GoldenGate Stream Analytics

Many thanks to Prabhu Thukkaram from the GoldenGate Streaming Analytics team for this post :). GoldenGate Stream Analytics is a Spark-based platform for building stream processing applications. One of the key differentiators is productivity, or time-to-value, which is a direct result of the following:
- Support for diverse messaging platforms – Kafka, OCI Streaming Service, JMS, Oracle AQ, MQTT, etc.
- Natively integrated with Oracle GoldenGate to process and analyze transaction streams from relational databases
- Interactive pipeline designer with live results to instantly validate your work
- Zero-code environment to build continuous ETL and analytics workflows
- Pattern library for advanced data transformation and real-time analytics
- Extensive support for processing geospatial data
- Secured connectivity to diverse data sources and sinks
- Built-in support for real-time visualizations and dashboards
- Automatic application state management
- Automatic configuration of pipelines for high availability and reliability
- Automatic configuration of pipelines for lower latency and higher throughput
- Automatic log management of pipelines for better disk space utilization
- Built-in cache (choice of Coherence and Apache Ignite) for better pipeline performance and sharing of data across pipelines. Caching is seamlessly integrated, with no need to run additional cache clusters
- Continuously refresh your data mart, data warehouse, and data lake (ADW, OCI Object Store, HDFS/Hive, Oracle NoSQL, MongoDB, AWS S3, Azure Data Lake, Google BigQuery, Redshift, etc.). Please note some of this will be arriving in 19.1.0.0.4, which is the next version.
- Alerts via Email, PagerDuty, Slack, etc., using OCI Notification Service
This video discusses industry use cases and demonstrates the ease with which data pipelines can be built and deployed using GoldenGate Stream Analytics. If you are aware of the value proposition and interested in deploying GGSA pipelines to Big Data Service, then please continue reading. The Marketplace instance of GoldenGate Stream Analytics embeds Apache Spark and Apache Kafka so data pipelines can be built and tested without other dependencies. While this is good for developers to quickly build and test pipelines, it is not ideal for enterprise deployments and production. For production we will need a runtime platform that can scale with increasing workloads, is highly available, and one that can be secured to the highest standards. This blog describes how to run GGSA pipelines in the Big Data Service runtime environment. Topology In this example, the marketplace instance of GoldenGate Stream Analytics is created in the same regional subnet as the BDS cluster. A marketplace instance of Stream Analytics can be provisioned from here. You can also refer to the GGSA Marketplace user guide. Big Data Service can be provisioned from here. The topology is shown in the diagram below and the GGSA instance is accessed using its public IP address. Please note your GoldenGate Stream Analytics instance will also act as a Bastion Host to your BDS cluster. Once you ssh to the GGSA instance, you will also be able to ssh to all your BDS nodes. To do this you will need to copy your ssh private key to the GGSA instance. You will also be able to access your Cloudera Manager using the same public IP of the GGSA instance by following the steps below. The default security list for the Regional Subnet must allow bidirectional traffic to the edge/OSA node, so create a stateful rule for destination port 443.
Also create a similar Ingress rule for port 7183 to access the BDS Cloudera Manager via the OSA edge node. An example Ingress rule is shown below. SSH to the GGSA box and run the following port-forwarding commands so you can access the Cloudera Manager via the GGSA instance. sudo firewall-cmd --add-forward-port=port=7183:proto=tcp:toaddr=<IP address of Utility Node running the Cloudera Manager console> sudo firewall-cmd --runtime-to-permanent sudo sysctl net.ipv4.ip_forward=1 sudo iptables -t nat -A PREROUTING -p tcp --dport 7183 -j DNAT --to-destination <IP address of Utility Node running the Cloudera Manager console>:7183 sudo iptables -t nat -A POSTROUTING -j MASQUERADE You should now be able to access the Cloudera Manager using the URL https://<Public IP of GGSA>:7183 Prerequisites Retrieve the IP addresses of the BDS cluster nodes from the OCI console as shown in the screenshot below. Alternatively, you can get the FQDN for BDS nodes from the Cloudera Manager as shown in the next screenshot. Patch spark-osa.jar in $OSA_HOME/osa-base/resources/modules/spark-osa.jar so dfs.client.use.datanode.hostname is set to true. Please make a copy of spark-osa.jar before replacing it with the patch. This will be automatic in the next OSA version, 19.1.0.0.4, and none of this workaround will be necessary. For now you can obtain the patch from here; the steps to patch are:
- ssh to your OSA instance and run "wget https://objectstorage.us-phoenix-1.oraclecloud.com/p/cGDU0NMTEV0sF7uqgzH_94Ni54dP08mVwCAEwwtYab8/n/paasdevehcs/b/TEMP-GGSA/o/spark-osa.jar"
- Stop OSA by running “sudo systemctl stop osa”
- Copy /u01/app/osa/osa-base/resources/modules/spark-osa.jar to ~/spark-osa.jar.backup
- Copy the downloaded ~/spark-osa.jar to /u01/app/osa/osa-base/resources/modules/spark-osa.jar
- Start OSA by running “sudo systemctl start osa”
Reconfigure YARN virtual cores using Cloudera Manager as shown below. This will allow many pipelines to run in the cluster without being bound by actual physical cores. Container Virtual CPU Cores :- This is the total virtual CPU cores available to YARN Node Manager for allocation to Containers. Please note this is not limited by physical cores and you can set this to a high number, say 32, even for VM standard 2.1. Container Virtual CPU Cores Minimum :- This is the minimum vcores that will be allocated by the YARN scheduler to a Container. Please set this to 2 since the CQL engine is a long-running task and will require a dedicated vcore. Container Virtual CPU Cores Maximum :- This is the maximum vcores that will be allocated by the YARN scheduler to a Container. Please set this to a number higher than 2, say 4. Note: this change will require a restart of the YARN cluster from Cloudera Manager. If using Kafka from the OSA Marketplace instance, set advertised listeners to the private IP of the OSA node and port 9092 in /u01/app/kafka/config/server.properties. E.g. advertised.listeners=PLAINTEXT://10.0.0.13:9092. Please restart the Kafka service after this change by running “sudo systemctl restart kafka”. This will be automatic in the next OSA version 19.1.0.0.4. Deploying GGSA Pipelines to Non-Kerberized Dev/Test BDS Cluster In the GGSA system settings dialog, configure the following: Set Kafka Zookeeper Connection to the Private IP of the OSA node. E.g. 10.0.0.59:2181 Set Runtime Server to Yarn Set Yarn Resource Manager URL to the Private IP or Hostname of the BDS Master node (E.g. xxxxx.devintnetwork.oraclevcn.com) Set storage type to HDFS Set Path to <PrivateIP or Host Of Master><HDFS Path>. E.g.
10.x.x.x/user/oracle/ggsapipelines or xxxxx.devintnetwork.oraclevcn.com/user/oracle/ggsapipelines.  Set HA Namenode to Private IP or Hostname of the BDS Master node. E.g. xxxxx.devintnetwork.oraclevcn.com Set Yarn Master Console port to 8088 or as configured in BDS Set Hadoop authentication to Simple and leave Hadoop protection policy at authentication if available Set username to "oracle" and click Save.  A sample screenshot is shown below: Deploying GGSA Pipelines to Kerberized Production BDS Cluster In GGSA system settings dialog, configure the following: Set Kafka Zookeeper Connection to Private IP of the OSA node. E.g. 10.0.0.59:2181 Set Runtime Server to Yarn Set Yarn Resource Manager URL to Private IPs of all master nodes starting with the one running active Yarn Resource Manager. In the next version of OSA, the ordering will not be needed. E.g. xxxxx.devintnetwork.oraclevcn.com, xxxxx.devintnetwork.oraclevcn.com Set storage type to HDFS Set Path to <NameNode Nameservice><HDFS Path>. E.g. bdsggsasec-ns/user/oracle/ggsapipelines. The path will automatically be created if it does not exist but the Hadoop User must have write permissions. NameNode Nameservice can be obtained from HDFS configuration as shown in the screenshot below. Set HA Namenode to Private IPs or hostnames of all master nodes (comma separated) starting with the one running active NameNode server. In the next version of OSA, the ordering will not be needed. E.g. xxxxx.devintnetwork.oraclevcn.com, xxxxx.devintnetwork.oraclevcn.com Set Yarn Master Console port to 8088 or as configured in BDS Set Hadoop authentication to Kerberos Set Kerberos Realm to BDACLOUDSERVICE.ORACLE.COM Set Kerberos KDC to private IP or hostname of one of the BDS master nodes. E.g. xxxxx.devintnetwork.oraclevcn.com Set principal to bds@BDACLOUDSERVICE.ORACLE.COM.  Please see this documentation to create a Kerberos principal (e.g. bds) and add it to hadoop admin group, starting with step "Connect to Cluster's First Master Node" and  through the step "Update HDFS Supergroup". Note, you can hop/ssh to the master node using your GGSA node as the Bastion. You will need your ssh private key to be available on GGSA node though. Also, do not forget to restart your BDS cluster as instructed in the documentation. [opc@bdsggsa ~]$ ssh -i id_rsa_prabhu opc@xxxxx.devintnetwork.oraclevcn.com Make sure the newly created principal is added to Kerberos keytab file on the master node as shown  bdsmn0 # sudo kadmin.local kadmin.local: ktadd -k /etc/krb5.keytab bds@BDACLOUDSERVICE.ORACLE.COM Fetch the keytab file using sftp, etc and set Keytab field in system settings by uploading the same Set Yarn Resource Manager principal which should be in the format “yarn/<FQDN of BDS MasterNode running Active Resource Manager>@KerberosRealm”  E.g. yarn/xxxxx.devintnetwork.oraclevcn.com@BDACLOUDSERVICE.ORACLE.COM Sample screenshot shown below: Running Samples on BDS To import a sample, click “import” on any of the six samples in screenshot below and switch to Catalog or just click on the circle.  To run the sample on BDS change the hostname in Kafka connection of the sample pipeline from “localhost” to the private IP address (run hostname -i to get IP address) of the GGSA instance node. After this change you can just open the pipeline or you can publish the pipeline and then open the associated dashboard. Please see screenshots below.


Autonomous

How To Manually Upgrade Your Autonomous Database from 18c to 19c

In a recent newsletter (April 08, 2020) I mentioned that Oracle Database 19c is now available on Autonomous Database for all regions. The default version for all new databases is 19c and going forward, most new Autonomous Database features will only be available for databases using Oracle Database 19c. Around this time, you should have noticed some new messages appearing on your ADW console if your ADW instance was using Database version 18c. At the top of the screen a new information banner appeared: and next to the Database version information there was another information message: This was to let everyone know that it's possible to manually upgrade to Database 19c right now. Of course, we will be automatically upgrading instances during September and email notifications from the Cloud Ops team will be sent shortly that will outline the plans for upgrading instances. But as my recent newsletter pointed out - most new features are only available if you are using 19c. Therefore, to get the most from your Autonomous Database it really does make sense to manually upgrade to 19c now and not wait for the automatic upgrade in September. For example, if you are using the built-in APEX environment, then you definitely want to be using the latest version 20.1 which has a lot of great new features (see July 15 newsletter) BUT your ADB instance has to be 19c. So, how do you manually upgrade your ADB instance from 18c to 19c? Fortunately it only takes a few mouse clicks. Here is the general flow: 1) Go to the console page for your ADW instance. You will see the upgrade announcement next to the database version information and the banner at the top of the page: 2) Click on either of the upgrade links to start the upgrade process: 3) This will pop a form over your console page. It's similar to the other pop-up forms, so to confirm you really do want to start the upgrade you need to enter the name of your ADB instance in the field shown below: 4) Click the blue "Upgrade" button to start the upgrade process: That's it! Honestly, this is the easiest upgrade you will ever do. It's simpler than upgrading your iPhone! Whilst the upgrade process is running your console page will look like this: ...and you will notice the status box is now orange and the status message says "UPGRADING". There is an additional banner message that says "The database remains available during the upgrade". No other database vendor is able to do this; everyone else forces their database to go offline during an upgrade, which means all connections are terminated and all on-going workloads are lost. Can you track the upgrade process? Yes. The Work Request tab provides real-time status information for the upgrade process. What happens when the upgrade process is complete? As soon as the upgrade process has finished the Work Request screen will show the big green SUCCEEDED message: That's it, it's all done and you are now running Database 19c. If you check the console page again you will notice that the banner message has now gone and the Database Version now says 19c. Conclusion Now you should be in a position to manually upgrade your own 18c ADB instances to 19c. Then you will be able to take advantage of all the new features we have recently rolled out such as the high availability feature, Autonomous Data Guard. The whole end-to-end migration process is shown here: If you have any questions then please post them on our customer forum which is here.
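Once the upgrade completes, a quick way to double-check the result from any SQL client connected to the instance is to query the version banner (a minimal, hedged example; the exact banner text varies by release):

-- Confirm the database is now reporting 19c
SELECT banner FROM v$version;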


Autonomous

How to Access Non-Oracle Databases from Autonomous Database using Oracle Database Gateway

Being able to create database links from Autonomous Database on Shared Infrastructure (ADB-S) to other ADB-S instances or Oracle databases has been one of the most sought-after features that we introduced. ADB-S now supports creating database links to Oracle Database Gateways in order to access non-Oracle databases. In this blog post, we are going to explore how to access a Microsoft SQL Server database from an ADB-S instance using a database link that we will create to a gateway. Before we jump into the detailed steps, let’s do a quick recap of database links in ADB-S. As you may already know, ADB-S requires the target database to be configured with TCP/IP with SSL (TCPS) authentication for outgoing database links since it uses TCPS authentication by default. What this means is that the gateway we want to connect to needs to be configured with TCPS authentication as well. We already covered how to configure an Oracle database, such as DBCS, with TCPS authentication in one of my earlier blog posts. The good news is that configuring Oracle Database Gateway with TCPS is very similar and requires a couple of minor additional steps. Let’s start! The non-Oracle database that I will be using for this demonstration is Microsoft SQL Server 2019 Express running on Windows Server 2019. For simplicity, I chose to install Oracle Database Gateway 19c on the same Windows VM. The installation of the gateway using the Oracle Universal Installer is extremely quick and simple. All you need to provide is a Windows user that has no admin privileges (the gateway owner), which non-Oracle database you are planning to connect to, and the details of that database such as host name, port number, database name, etc. The installer also partially configures a listener for the gateway during the installation and it’s possible to choose TCPS authentication in the UI (I will explain what I mean by 'partially' in a moment). Here’s what my listener.ora looked like after the installation (note the TCPS protocol in the listener description): # listener.ora Network Configuration File: C:\app\GW\product\19.0.0\tghome_1\NETWORK\ADMIN\listener.ora # Generated by Oracle configuration tools. LISTENER = (DESCRIPTION_LIST = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCPS)(HOST = ctuzlaWindows)(PORT = 1522)) ) ) SID_LIST_LISTENER = (SID_LIST= (SID_DESC= (SID_NAME=dg4msql) (GLOBAL_DBNAME=dg4msql_svc) (ORACLE_HOME=C:\app\GW\product\19.0.0\tghome_1) (PROGRAM=dg4msql) ) ) In the listener.ora above, LISTENER was automatically created by the installer; however, I had to manually add the SID_LIST_LISTENER variable. It’s important to note that SID_NAME is the SID of the gateway and GLOBAL_DBNAME is the service name of the gateway. The value of the GLOBAL_DBNAME variable can be anything you specify and this is the value that we will be using as the target database service name when we create our database link from our Autonomous Database. Additionally, as you can tell from the ORACLE_HOME path, GW is our Windows user who is also the gateway owner. Even though our listener already has the TCPS endpoint, we still need to perform a couple of additional steps so that our ADB-S instance can successfully communicate with the gateway.
Here are the steps needed to complete the TCPS configuration in our gateway: Create wallets with self signed certificates for server and client Exchange certificates between server and client wallets (Export/import certificates) Add wallet location in the server network files Create wallets with self signed certificates for server and client As part of enabling TCPS authentication, we need to create individual wallets for the server and the client. Each of these wallets has to have their own certificates that they will exchange with one another. For the sake of this example, I will be using a self signed certificate. The client wallet and certificate can be created in the client side; however, I'll be creating my client wallet and certificate in the server and moving them to Object Store later on. See Configuring Secure Sockets Layer Authentication for more information. Directories to be used for the wallets and certificates C:\Users\GW\Desktop\server\wallet C:\Users\GW\Desktop\client\wallet C:\Users\GW\Desktop\tmp Create a server wallet with the GW user C:\Users\GW\Desktop\server\wallet>orapki wallet create -wallet ./ -pwd ************ -auto_login Oracle PKI Tool Release 19.0.0.0.0 - Production Version 19.3.0.0.0 Copyright (c) 2004, 2019, Oracle and/or its affiliates. All rights reserved. Operation is successfully completed. Create a server certificate with the GW user C:\Users\GW\Desktop\server\wallet>orapki wallet add -wallet ./ -pwd ************ -dn "CN=windows" -keysize 1024 -self_signed -validity 3650 -sign_alg sha256 Oracle PKI Tool Release 19.0.0.0.0 - Production Version 19.3.0.0.0 Copyright (c) 2004, 2019, Oracle and/or its affiliates. All rights reserved. Operation is successfully completed. Create a client wallet with the GW user C:\Users\GW\Desktop\client\wallet>orapki wallet create -wallet ./ -pwd ************ -auto_login Oracle PKI Tool Release 19.0.0.0.0 - Production Version 19.3.0.0.0 Copyright (c) 2004, 2019, Oracle and/or its affiliates. All rights reserved. Operation is successfully completed. Create a client certificate with the GW user C:\Users\GW\Desktop\client\wallet>orapki wallet add -wallet ./ -pwd ************ -dn "CN=ctuzla" -keysize 1024 -self_signed -validity 3650 -sign_alg sha256 Oracle PKI Tool Release 19.0.0.0.0 - Production Version 19.3.0.0.0 Copyright (c) 2004, 2019, Oracle and/or its affiliates. All rights reserved. Operation is successfully completed. Exchange certificates between server and client wallets (Export/import certificates) Export the server certificate with the GW user C:\Users\GW\Desktop\server\wallet>orapki wallet export -wallet ./ -pwd ************ -dn "CN=windows" -cert C:\Users\GW\Desktop\tmp\server.crt Oracle PKI Tool Release 19.0.0.0.0 - Production Version 19.3.0.0.0 Copyright (c) 2004, 2019, Oracle and/or its affiliates. All rights reserved. Operation is successfully completed. Export the client certificate with the GW user C:\Users\GW\Desktop\client\wallet>orapki wallet export -wallet ./ -pwd ************ -dn "CN=ctuzla" -cert C:\Users\GW\Desktop\tmp\client.crt Oracle PKI Tool Release 19.0.0.0.0 - Production Version 19.3.0.0.0 Copyright (c) 2004, 2019, Oracle and/or its affiliates. All rights reserved. Operation is successfully completed. 
Import the client certificate into the server wallet with the GW user C:\Users\GW\Desktop\server\wallet>orapki wallet add -wallet ./ -pwd ************ -trusted_cert -cert C:\Users\GW\Desktop\tmp\client.crt Oracle PKI Tool Release 19.0.0.0.0 - Production Version 19.3.0.0.0 Copyright (c) 2004, 2019, Oracle and/or its affiliates. All rights reserved. Operation is successfully completed. Import the server certificate into the client wallet with the GW user C:\Users\GW\Desktop\client\wallet>orapki wallet add -wallet ./ -pwd ************ -trusted_cert -cert C:\Users\GW\Desktop\tmp\server.crt Oracle PKI Tool Release 19.0.0.0.0 - Production Version 19.3.0.0.0 Copyright (c) 2004, 2019, Oracle and/or its affiliates. All rights reserved. Operation is successfully completed. Add wallet location in the server network files We now need to modify the server network files so that they point to the server wallet location and they are ready to use the TCPS protocol. Here's how those files look in my case: Server-side $ORACLE_HOME/network/admin/sqlnet.ora # sqlnet.ora Network Configuration File: C:\app\GW\product\19.0.0\tghome_1\NETWORK\ADMIN\sqlnet.ora # Generated by Oracle configuration tools. SSL_SERVER_DN_MATCH= (ON) WALLET_LOCATION = (SOURCE = (METHOD = File) (METHOD_DATA = (DIRECTORY = C:\Users\GW\Desktop\server\wallet) ) ) Server-side $ORACLE_HOME/network/admin/listener.ora # listener.ora Network Configuration File: C:\app\GW\product\19.0.0\tghome_1\NETWORK\ADMIN\listener.ora # Generated by Oracle configuration tools. WALLET_LOCATION = (SOURCE = (METHOD = File) (METHOD_DATA = (DIRECTORY = C:\Users\GW\Desktop\server\wallet) ) ) LISTENER = (DESCRIPTION_LIST = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCPS)(HOST = ctuzlaWindows)(PORT = 1522)) ) ) SID_LIST_LISTENER = (SID_LIST= (SID_DESC= (SID_NAME=dg4msql) (GLOBAL_DBNAME=dg4msql_svc) (ORACLE_HOME=C:\app\GW\product\19.0.0\tghome_1) (PROGRAM=dg4msql) ) ) Notes About the Target Environment If your target database is hosted on a Windows VM, make sure the firewall is turned off or configured in a way that does not block the incoming traffic. As you can see in my listener description above, my listener is configured on port 1522. Since my VM is hosted in OCI, I needed to add an ingress rule for that port in the security list of my virtual cloud network (VCN). Without this step, I wouldn’t be able to reach the listener from my Autonomous Database. Here’s what the HS parameters in my gateway agent init file look like: # This is a customized agent init file that contains the HS parameters # that are needed for the Database Gateway for Microsoft SQL Server # # HS init parameters # HS_FDS_CONNECT_INFO=ctuzlaWindows:8000//mssfinance HS_FDS_TRACE_LEVEL=ODBC HS_FDS_RECOVERY_ACCOUNT=RECOVER HS_FDS_RECOVERY_PWD=RECOVER Please note that my SQL Server database (mssfinance) is configured to have TCP enabled on port 8000 as shown above. Confirm the listener status: C:\Users\GW>lsnrctl status LSNRCTL for 64-bit Windows: Version 19.0.0.0.0 - Production on 24-JUL-2020 09:27:30 Copyright (c) 1991, 2019, Oracle. All rights reserved. Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)(HOST=ctuzlaWindows)(PORT=1522))) STATUS of the LISTENER ------------------------ Alias LISTENER Version TNSLSNR for 64-bit Windows: Version 19.0.0.0.0 - Production Start Date 17-JUL-2020 20:35:35 Uptime 6 days 12 hr. 51 min.
59 sec Trace Level off Security ON: Local OS Authentication SNMP OFF Listener Parameter File C:\app\GW\product\19.0.0\tghome_1\network\admin\listener.ora Listener Log File C:\app\GW\diag\tnslsnr\ctuzlaWindows\listener\alert\log.xml Listening Endpoints Summary... (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=ctuzlaWindows)(PORT=1522))) Services Summary... Service "dg4msql_svc" has 1 instance(s). Instance "dg4msql", status UNKNOWN, has 1 handler(s) for this service... The command completed successfully Create a Database Link from ADB-S to the Gateway We now have a working TCPS authentication in the gateway. Here are the steps from the documentation that we will follow to create a database link from ADB-S to the gateway: Copy the client wallet (cwallet.sso) that we created in C:\Users\GW\Desktop\client\wallet to Object Store. Create credentials to access your Object Store where you store the cwallet.sso. See CREATE_CREDENTIAL Procedure for details. Create a directory to store the target database wallet: SQL> create directory wallet_dir as 'walletdir'; Directory WALLET_DIR created. Upload the wallet to the wallet_dir directory on ADB-S using DBMS_CLOUD.GET_OBJECT: SQL> BEGIN DBMS_CLOUD.GET_OBJECT( credential_name => 'OBJ_STORE_CRED', object_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/adbstraining/b/target-wallet/o/cwallet.sso', directory_name => 'WALLET_DIR'); END; / PL/SQL procedure successfully completed. On ADB-S, create credentials to access the target database. The username and password you specify with DBMS_CLOUD.CREATE_CREDENTIAL are the credentials for the target database that you use to create the database link: BEGIN DBMS_CLOUD.CREATE_CREDENTIAL( credential_name => 'SSFINANCE_LINK_CRED', username => 'mssadmin', password => '************'); END; / PL/SQL procedure successfully completed. Create the database link to the target database using DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK: BEGIN DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK( db_link_name => 'FINANCELINK', hostname => '149.343.90.113', port => '1522', service_name => 'dg4msql_svc', ssl_server_cert_dn => 'CN=windows', credential_name => 'SSFINANCE_LINK_CRED', directory_name => 'WALLET_DIR', gateway_link => TRUE); END; / PL/SQL procedure successfully completed. Note that DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK procedure has a new parameter called gateway_link that needs to be set to TRUE when creating a database link to an Oracle Database Gateway. Use the database link you created to access the data on the target database: select count(*) from CUSTOMERS@FINANCELINK; COUNT(*) -------- 4 This is all you need in order to access a non-Oracle database from your Autonomous Database! In this post, we have gone through gateway installation, TCPS configuration in gateways, target environment tips and database link creation in order to access a SQL Server database. The same steps apply to other non-Oracle databases that are supported by Oracle Database Gateway as well.
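As an optional sanity check (a hedged example reusing the link and table names from this walkthrough; adjust them to your own environment), you can confirm the link exists and sample a few rows through it:

-- List the database links owned by the current user
SELECT db_link, host, created FROM user_db_links;

-- Pull a few rows from the remote SQL Server table over the gateway link
SELECT * FROM CUSTOMERS@FINANCELINK WHERE ROWNUM <= 5;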


Autonomous

Using OCI Resource Manager to deploy a complete end-to-end data warehouse

A while back, I announced in the Autonomous Database Newsletter (June 24 edition) that we now had a series of reference architecture patterns for Autonomous Data Warehouse. Currently, the following patterns are available (note: more patterns are being built and will be added shortly): Departmental data warehousing - an EBS integration example Departmental data warehousing/data marts - consolidate spreadsheets Enterprise data warehousing - an integrated data lake example Set up a data science environment that uses Oracle Machine Learning Each reference architecture pattern comes with the following: a best practices framework; a set of recommendations, which are a starting point for implementing the pattern; and a Terraform script for deploying the end-to-end set of cloud services on OCI. That was great news for everyone because you now had a very precise system architecture plan for each data warehouse use case that could be expanded to incorporate additional use cases as your data warehouse project evolved over time. These patterns were the result of some amazing work by our Enterprise Architecture, OCI and ADB PM teams. There are a lot more patterns to come and I will let you know as new ones get added to the list. I have used a couple of the patterns myself and, if I am honest, while the Terraform scripts are great it was hard work to set up all the required keys etc. and get everything into place so I could run one of the pre-built Terraform scripts. But then again, Terraform is a completely new area for me! Then this week our OCI team announced they have added one of our data warehouse patterns to the list of pre-built stacks within OCI Resource Manager. Here's a screen grab of the list of built-in stacks that are ready for deploying: Right now, we have the "Departmental Data Warehouse" pattern available as a stack and this deploys an instance of Autonomous Database and an instance of Oracle Analytics Cloud for you. So how do you get started? Here is a walk-through of the main screens: Step 1 - I have logged into my Always Free cloud account and arrived at the main OCI home page: Step 2 - I have opened the main menu tree and scrolled down to Resource Manager and then selected "Stacks" Step 3 - As this is the first stack I have built my home page is currently empty. Obviously we click on the blue Create Stack button. Step 4 - I can build my own stack deployment process or I can select to use a pre-built stack and this is what we need to use in this case... Step 5 - this pops a form which lists all the available pre-built stacks as I showed above. In this case I want to build a departmental data warehouse so let's select that box Step 6 - now I can set the name and description for my deployment "stack" along with the compartment I want to use in my deployment (if you are not sure what a compartment is or does then spend a few minutes looking at this page in the documentation). Once you are happy with the information on this page, click the Next button in the bottom left corner. Step 7 - this brings us to the form which is going to capture the information about our ADW and OAC instances. In the screen below you can see that I have set the number of CPUs for my ADW instance as 1, I have set the password for my ADMIN user, I have changed the name and display name for my new database, I have elected to build a 19c autonomous data warehouse and I have enabled auto-scaling. For storage, I am going to start with 1 TB.
Just to point out... both the number of OCPUs and the amount of storage can be scaled independently and with zero downtime. Step 8 - scrolling down... I can set my license type. We allow you to bring your on-premise licenses to the cloud or you can simply buy additional licenses as required. With this departmental warehouse project I know I am going to include personally identifiable information about some of our employees so I need to keep that data totally safe and secure. Therefore, I am going to register my ADW instance with our Data Safe service (which is free to use with Autonomous Database). ...and to further protect my ADW instance this pre-built stack automatically uses Access Control Lists (ACLs). If you want more information about how to use ACLs then click here. This process within Resource Manager will automatically configure the ACLs so that OAC can connect to my ADW instance, so that's all taken care of, but I need to allow my desktop computer to access my ADW instance as well. Therefore, I have to enter the public IP address for my desktop computer. If you don't include this information then you will not be able to use any of the desktop tools or built-in tools that come with ADW. Step 9 - Set up the details for my OAC instance. Again, I have opted for 1 OCPU and I am going to take the ENTERPRISE_ANALYTICS deployment option since this gives me access to all the features I need. If you use the pulldown menu you will see there are two options: 1) Self-service analytics deploys an instance with data visualization, and 2) Enterprise analytics deploys an instance with enterprise modeling, reporting, and data visualization. Step 10 - and now we are at the end of the process and ready to deploy our end-to-end departmental data warehouse. The review page gives us a chance to make sure everything is correct before we hit the Create button. The Jobs tab allows you to monitor progress of your deployment and at the end of this process you should have a new ADW instance and a new OAC instance within your tenancy. Congratulations You should now have your new departmental data warehouse ready for use. What's Next Next you can start loading data into your ADW instance. Take a look at Chapter 3 in the ADW documentation which covers loading data with Autonomous Data Warehouse - https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/user/load-data.html#GUID-1351807C-E3F7-4C6D-AF83-2AEEADE2F83E (a minimal, hypothetical loading example is sketched at the end of this post). Then you can launch the Oracle Analytics home page in your browser, connect to your ADW instance and start building reports and dashboards. The OAC product management team has put together a great series of labs that will get you started with OAC and ADW, see here: https://www.oracle.com/uk/business-analytics/data-visualization/tutorials.html Hope this was useful and I will let you know as we add more patterns to the solution library and more patterns into the pre-built stacks within OCI Resource Manager.
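Here is that loading sketch. It is a minimal, hypothetical example using DBMS_CLOUD: the credential, Object Storage URL, file name and target table (SALES) are placeholders that are not created by the stack, so substitute your own values and make sure the target table already exists:

BEGIN
  -- One-time step: store an OCI username and auth token as a credential
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'OBJ_STORE_CRED',
    username        => 'your_oci_user',
    password        => 'your_auth_token');

  -- Load a CSV file from Object Storage into the existing SALES table
  DBMS_CLOUD.COPY_DATA(
    table_name      => 'SALES',
    credential_name => 'OBJ_STORE_CRED',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/yournamespace/b/yourbucket/o/sales.csv',
    format          => '{"type":"csv", "skipheaders":"1"}');
END;
/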


Autonomous

Autonomous Database Newsletter - July 22 2020

Autonomous Database on Shared Exadata Infrastructure July 22, 2020 Welcome to our latest customer newsletter for Autonomous Database on Shared Exadata Infrastructure. This newsletter covers the following new features and topics: Using Autonomous Data Guard MDW Patterns in OCI Resource Manager Note: some new features are only available with 19c. To get the most from Autonomous Database, upgrade to 19c as soon as possible. There is more information about how to upgrade existing 18c instances to 19c here, for ADW and here, for ATP. Don't forget to check out our "What's New" page for a comprehensive archive of all the new features we have added to Autonomous Database - click here. To find out how to subscribe to this newsletter click here. (Shared Infrastructure Only) NEW - Creating a Standby Database with ADG When you enable Autonomous Data Guard on ADB, the system creates a standby database that stays "connected" to the primary database and automatically refreshes its data from the primary database. This allows the following, depending on the state of the primary database: If your primary database goes down, Autonomous Data Guard converts the standby database to the primary database with minimal interruption. After failover completes, Autonomous Data Guard automatically creates a new standby database. You can perform a switchover operation, where the primary database becomes the standby database, and the standby database becomes the primary database. Note: Autonomous Database does not provide access to the standby database. Simplified animation showing basic steps to create an ADG standby instance for ADW LEARN MORE DOC: Key features of Autonomous Data Guard for ADW, click here. DOC: Autonomous Data Guard notes for ADW, click here. DOC: Key features of Autonomous Data Guard for ATP, click here. DOC: Autonomous Data Guard notes for ATP, click here. BACK TO TOP (Shared Infrastructure Only) NEW - Using OCI Resource Manager to Deploy Data Warehouse Reference Architecture Patterns The list of pre-built Stacks within OCI Resource Manager has been expanded to include some of the new reference architecture patterns. Currently, the Departmental Data Warehouse pattern is available as a pre-built stack and the other Modern Data Warehouse patterns will be available shortly.
Simplified animation showing basic steps for using Resource Manager pre-built stacks to deploy a departmental data warehouse. LEARN MORE DOC: Architecture Center, click here. DOC: Pattern for Departmental Data Warehouse, click here. DOC: Pattern for Enterprise Data Warehouse with HCM data enrichment, click here. DOC: Pattern for Departmental Data Warehouse with EBS integration click here. DOC: Pattern for Enterprise Data Warehouse with an integrated data lake click here. DOC: Pattern for data science environment using Oracle Machine Learning, click here. BACK TO TOP oracle.com ADW ATP Documentation ADW ATP TCO Calculator Shared Dedicated Cloud Cost Estimator Autonomous Database ADB New Features Autonomous Database Schema Advisor Autonomous Database Customer Forums ADW ATP BACK TO TOP


Announcing Autonomous Data Guard!

Just over a week ago, you may have heard Larry mention Autonomous Data Guard at the live-streamed Oracle Live event. Today, I am happy to announce this much-awaited disaster recovery (DR) feature in Autonomous Database on Shared Infrastructure (ADB-S). Autonomous Data Guard (ADG) gives users the ability to enable a standby database for each ADB instance in just one click; as always in the Autonomous world, it is completely automated! Note that this is an advanced feature available only for ADB instances on 19c and above and not available in the Always-Free tier. If you are still running database version 18c, consider upgrading to 19c in one click as well. While ADB already runs on highly available Exadata infrastructure, this feature further protects your business' data against unforeseen disaster scenarios (think earthquakes, fires, floods, major network outages etc.) by automatically failing over to a standby database when the primary database goes down. Currently, ADB-S supports ADG within the same region (physically separated across Availability Domains) and aims to support ADG across regions later this year. Before we jump right into using this feature, let's brush up on some (simplified) disaster recovery terminology as it relates to ADG: Peer Databases: Two or more databases that are peered (linked and replicated) with each other. They consist of a Primary database and Standby (copy of the primary) databases. Primary or Source Database: The main database that is actively being used to read from and write to by a user or application. Standby Database: A replica of the primary database which is constantly and passively refreshing (i.e. replicating) data from the primary. This standby database is used in case of failure of the primary. In the case of ADG, we keep this standby on a different physical Exadata machine (in a different Availability Domain in regions that have more than one) for the highest level of protection. Recovery Point Objective (RPO): An organization's tolerance for data loss, after which business operations start to get severely impacted, usually expressed in minutes. We want this to be as low as possible. Recovery Time Objective (RTO): An organization's tolerance for the unavailability (or downtime) of a service after which business operations start to get severely impacted, usually expressed in minutes. We want this to be as low as possible. Armed with the above knowledge, let's dive right into using Autonomous Data Guard. You already know how to navigate to your Autonomous Database console in OCI, by clicking on your database name in your list of database instances. As before, you should be using a 19c version database. Like magic, you now see Autonomous Data Guard as an option on your console. Click the "Enable" button on your console, followed by clicking the "Enable Autonomous Data Guard" confirmation button. (Okay, I lied... It's 2 clicks, not 1). You're done, that's all you need to do to protect your database! You will now see your peered standby database in a Provisioning state until it becomes Available; depending on the size of your primary database this may take several minutes. Note that when you switch your standby to make it your primary as described below, you will not need a new wallet - your existing wallet will continue to work seamlessly!
Everything that follows describes features you will use either in a disaster-type scenario, or to test your applications / mid-tiers against your Autonomous Database with ADG. Failover - When your Primary database goes down, you are automatically protected by your Standby database If a disaster were to occur and your primary database is brought down, you can "Failover" to your standby. A failover is a role change, switching from the primary database to the standby database when the primary is down and Unavailable, while the standby is Available. An Automatic Failover is automatically triggered (no user action needed) by the Autonomous Database when a user is unable to connect to their primary database for a few minutes. Since this is an automated action, we err on the side of caution and allow auto-failovers to succeed only when we can guarantee no data loss will occur. For automatic failover, RTO is 2 minutes and RPO is 0 minutes. In the rare case when your primary is down and Automatic Failover was unsuccessful, the Manual Failover button may be triggered by you, the user. During a manual failover, the system automatically recovers as much data as possible, minimizing any potential data loss; there may be a few seconds or minutes of data loss. You would usually only perform a Manual Failover in a true disaster scenario, accepting the few minutes of potential data loss to ensure getting your database back online as soon as possible. For manual failover, the RTO is 2 minutes and RPO is 5 minutes. After a failover, a new standby for your primary will automatically be provisioned. Switchover - Test your applications with Autonomous Data Guard when both databases are healthy When your standby is provisioned, you will see a "Switchover" option on the console while your primary and standby databases are both healthy (i.e. in the Available or Stopped states). Clicking the Switchover button performs a role change, switching from the primary database to the standby database, which may take a couple of minutes, with zero data loss guaranteed. You would usually perform a Switchover to test your applications against this role change behaviour of the primary. We talked above about the buttons on the UI console. Of course, as with the rest of Oracle's cloud platform, there are Autonomous Database APIs for all actions including Switchover and Failover that can be called at any time. You may also subscribe to Events to be notified of your standby's operations. Wrapping Up While we ideated over Autonomous Data Guard, our primary focus has been that it should just work, with nothing for a user to do. My hope is that every user with a production database out there takes advantage of this simplicity and is able to further minimize their business' operational downtime and data loss. For all the details about Autonomous Data Guard features please refer to the official documentation and for updated pricing refer to the service description. Stay tuned for Part 2 of this post later this year, once we are ready to announce Autonomous Data Guard across regions!


Autonomous

Autonomous Database Newsletter - July 15 2020

Autonomous Database on Shared Exadata Infrastructure July 15, 2020 Welcome to our latest customer newsletter for Autonomous Database on Shared Exadata Infrastructure. This newsletter covers the following new features and topics: Connecting to non-Oracle Databases Modify Network Configuration Timezone Support Modifying User Profiles Updated Quick Start Guides APEX 20.1 is coming to ADB Global Leaders Update Don't forget to check out our "What's New" page for a comprehensive archive of all the new features we have added to Autonomous Database - click here. To find out how to subscribe to this newsletter click here. (Shared Infrastructure Only) NEW - Connecting to Non-Oracle Databases Autonomous Database now supports using connections that are enabled via Oracle Database Gateway. Gateways are designed for accessing a specific non-Oracle database such as DB2, Teradata, SQL Server, etc. Oracle Database Gateways provide queries and applications with access to data anywhere in a distributed database system without knowing either the location of the data or how it is stored. This removes the need to customize applications to access data from non-Oracle systems. LEARN MORE DOC: Create Database Links to an Oracle Database Gateway to Access Non-Oracle Databases (ADW), click here. DOC: Create Database Links to an Oracle Database Gateway to Access Non-Oracle Databases (ATP), click here. DOC: Introduction to Heterogeneous Connectivity, click here. DOC: Install guides for Database Gateways, click here. BACK TO TOP (Shared Infrastructure Only) NEW - Modify Network Settings for ADB Instances Autonomous Database now supports switching network access type. This feature enables customers to perform an in-place switch from public endpoint to private endpoint (and vice versa). The feature will reduce customer effort required for switching between public and private endpoints and make it easier for customers to adopt private endpoints. LEARN MORE DOC: Networking Management Tasks in OCI doc, click here. DOC: Change from Private to Public Endpoints with Autonomous Database (ADW), click here. DOC: Change from Private to Public Endpoints with Autonomous Database (ATP), click here. DO MORE VIDEO: Autonomous Database Shared - Switching from public to private endpoint, click here. BLOG: Getting Up to Speed on Using Private Endpoints for Autonomous Database with Shared Exadata Infrastructure, click here. DOC: Reference architecture pattern: Deploy an autonomous database with a private endpoint, click here. BACK TO TOP (Shared Infrastructure Only) NEW - Timezone Files Automatically Updated For time zone support, Oracle Database uses time zone files that store the list of all time zones. The time zone files for Autonomous Database are periodically updated to reflect the latest time zone specific changes. An Autonomous Database instance will automatically pick up updated time zone files depending on the lifecycle state of the instance: - Stopped: At the next start operation the update is automatically applied. - Available: After a restart the update is automatically applied. When a load or import operation results in a timezone-related error, restart the ADB instance and try again. LEARN MORE DOC: Manage Time Zone Data Updates on ADW, click here. DOC: Manage Time Zone Data Updates on ATP, click here. EINSTEIN: How Does Time Zone Upgrade Work in Autonomous Database on Shared Exadata Infrastructure (ADB-S), click here.
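For reference, a quick hedged check of which time zone file an instance is currently using (run from a SQL client as the ADMIN user):

-- Shows the time zone file name and version in use
SELECT filename, version FROM v$timezone_file;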
BACK TO TOP (Shared Infrastructure Only) NEW - Create/Modify User Profile and Set Password Rules Autonomous Database now supports the ability to create/alter user profiles and import existing user profiles from another environment with Oracle Data Pump Import. Cloud Admins can now manage the rules for password complexity on Autonomous Database. Administrators can create a Password Verify Function (PVF) and associate the PVF with a profile to manage the rules for password complexity. LEARN MORE DOC: Manage User Profiles with ADW, click here. DOC: Manage password complexity rules on ADW, click here. DOC: Manage User Profiles with ATP, click here. DOC: Manage password complexity rules on ATP, click here. BACK TO TOP (Shared Infrastructure Only) UPDATE - Quick Start Guides Quickstart Tutorial 1: Autonomous Database Quickstart Learn about Autonomous Database on Shared Infrastructure and learn how to create an instance in just a few clicks. Then load data into your database, query it, and visualize it. Quickstart Tutorial 2: Analyzing your data with Autonomous Database Connect using secure wallets and monitor your Autonomous Database instances, use Oracle Analytics to visualize data and then use Oracle Machine Learning (OML) to try your hand at predictive analytics. LEARN MORE DOC: Getting Started Guides for ADW, click here. DOC: Getting Started Guide for ATP, click here. BACK TO TOP NEW - Collateral For Autonomous Database Great news - Oracle APEX 20.1 is coming to Autonomous Database! Always Free customers received the upgrade on July 8, 2020 and other customers will receive APEX 20.1 beginning July 17, 2020. To receive the APEX 20.1 upgrade, customers must be on Oracle Database 19c. For more information about how to upgrade an existing ADB instance to 19c please follow the instructions in the documentation: - click here, for ADW - click here, for ATP. Can I Validate My APEX Applications Before Upgrading? Yes you can! There is a unique capability in APEX on Autonomous Database that enables customers to validate their applications prior to the APEX upgrade. Please see this blog post for detailed instructions. BACK TO TOP ORACLE GLOBAL LEADERS EMEA Summer 2020 Event - Replay Now Available We have just held our first Oracle Global Leaders EMEA Online Conference through Zoom with 300 registrations and an average of 130 participating each day. Over 20 customers and partners shared their data management and analytics project use-cases on six panels. We also had Mike Dietrich, Engin Şenel, Barb Lundhild, Philippe Lions, Mark Hornick and Joel Kallman provide updates and roadmaps for their products. Cetin Ozbutun delivered the Executive Keynote on our Data Management Roadmap. This is the first time we have managed to make the event material publicly available, so please share this link, click here, with your customers and prospects. (Note - some attendees' content is not included: PDF docs from Caixa Bank and ADNOC (pending)). Americas Summer 2020 Event - Sept 2 - 3 During this online event you will have the unique opportunity to hear from customers, Oracle product management and product development for Autonomous Database, big data, data warehousing, analytics and general data management strategies and implementations. Registration for this event is now open, so please share this link, click here, with your customers and prospects.
Recent Webcasts Forth Smart transforms Thai Finance with Autonomous Database and Oracle Analytics Cloud Learn how Forth Smart, a payment services provider, uses Oracle technology for easy-access, real-time data. Share this on-demand link, click here, with your customers and prospects. Sharda University future proofs student learning with Oracle Autonomous Database Find out how Sharda University uses Autonomous Database to manage data and uncover insights for student success. Share this on-demand link, click here, with your customers and prospects.     BACK TO TOP   oracle.com ADW ATP Documentation ADW ATP TCO Calculator Shared Dedicated Cloud Cost Estimator Autonomous Database ADB New Features Autonomous Database Schema Advisor Autonomous Database       Customer Forums ADW ATP       BACK TO TOP  


Autonomous

Autonomous Database Newsletter - June 24 2020

Autonomous Database on Shared Exadata Infrastructure June 24, 2020 Welcome to our latest customer newsletter for Autonomous Database on Shared Exadata Infrastructure. This newsletter covers the following new features and topics: Support for ORC files Automatic upgrade to 19c AWR Reports in Performance Hub Changes to file transfer limits into Object Store Procedures for using Database Real Application Security Reference architecture patterns for data warehousing Don't forget to check out our "What's New" page for a comprehensive archive of all the new features we have added to Autonomous Database - click here. To find out how to subscribe to this newsletter click here. (Shared Infrastructure Only) NEW - ADB now supports ORC files ADB has expanded object storage support in two important ways: it can now query ORC files as well as complex data types found in Parquet, ORC and Avro files. The Optimized Row Columnar (ORC) file is a highly efficient way to store Hive data. It was designed to overcome limitations of the other Hive file formats. Like Parquet, ORC data is stored in a compressed columnar format and provides metadata to enable efficient reads. ORC, Parquet and Avro file types provide the ability to capture fields containing complex data types, including arrays, maps, unions and objects (or structs). These complex types can now be queried in ADB using Oracle’s extensive JSON query support. Note: support for ORC and complex data types is only available in 19c. LEARN MORE DOC: DBMS_CLOUD Package ORC to Oracle Data Type Mapping for ADW, click here. DOC: DBMS_CLOUD Package ORC, Parquet and Avro Complex Types for ADW, click here. DOC: DBMS_CLOUD Package ORC, Parquet and Avro to Oracle Column Name Mapping for ADW, click here. DOC: DBMS_CLOUD Package ORC to Oracle Data Type Mapping for ATP, click here. DOC: DBMS_CLOUD Package ORC, Parquet and Avro Complex Types for ATP, click here. DOC: DBMS_CLOUD Package ORC, Parquet and Avro to Oracle Column Name Mapping for ATP, click here. DO MORE BLOG: Query ORC files and complex data types in Object Storage with Autonomous Database, click here. BLOG: Autonomous Database - Access Parquet Files in Object Stores, click here. BACK TO TOP (Shared Infrastructure Only) NOTE - Customer Announcement About Automatic Upgrade to 19c An email announcement about the automatic upgrade to Database 19c has been sent to ADB customers (see below). The automatic upgrade of Autonomous Databases will start in September. At the date of the scheduled upgrade, Autonomous Database instances will be automatically upgraded in-place to Oracle Database 19c. Please note that to better manage the customer experience during this upgrade process we are recommending that customers contact support by logging a service request if they have any questions about the upgrade process. To view the options for manually upgrading an existing ADW instance to Database 19c - click here. To view the options for manually upgrading an existing ATP instance to Database 19c - click here. BACK TO TOP (Shared Infrastructure Only) NEW - Download AWR Reports From Performance Hub The Automatic Workload Repository (AWR) report is a vital tool used by DBAs and system performance engineers to collect historical performance statistics of an Oracle Database. Performance Hub on the OCI console now allows customers to download an AWR report for a specific time period. LEARN MORE DOC: Using Performance Hub to Analyze Database Performance, click here.
DOC: Active Session History Reports, click here.     DO MORE   DOC: Interpreting Results from Active Session History Reports... After generating an ASH report, you can use the following sections to review its contents to help identify possible causes of transient performance problems: Top Events Top SQL Top PL/SQL Top Sessions Activity Over Time click here.   BACK TO TOP   (Shared Infrastructure Only) UPDATE - Maximum File Size Increased For Transfers to Object Store The maximum file size limit for the DBMS_CLOUD.PUT_OBJECT procedure when transferring files to OCI Object Storage has been increased to 50 GB. Note - there are different size restrictions when using other object stores such as Amazon S3 and Microsoft's Azure Blob Storage. See the doc links below for more details, and see the short PL/SQL sketch at the end of this newsletter for a usage example.   LEARN MORE   DOC: Details of PUT_OBJECT procedure for ADW, click here. DOC: Details of PUT_OBJECT procedure for ATP, click here.     BACK TO TOP   (Shared Infrastructure Only) UPDATE - Procedures For Using Database Real Application Security With ADB Real Application Security works the same on Autonomous Database as on an on-premises database, except that you need to perform some additional administrator tasks before using Real Application Security on Autonomous Database. These tasks are now detailed in the documentation.   LEARN MORE   DOC: Procedures for using Oracle Database Real Application Security with ADW, click here. DOC: Procedures for using Oracle Database Real Application Security with ATP, click here.     BACK TO TOP   (Shared Infrastructure Only) UPDATE - Reference Architecture Patterns and Solution Playbooks Now Available The Architecture Center has been updated to include a new series of data warehouse patterns. Each reference architecture pattern comes with the following: a best practices framework; a set of recommendations that provide a starting point for implementing the pattern; and a Terraform script for deploying the end-to-end set of cloud services on OCI. Currently, the following patterns are available (note: more patterns are being built and will be added shortly): Departmental data warehousing - an EBS integration example Departmental data warehousing/data marts - consolidate spreadsheets Enterprise data warehousing - an integrated data lake example Set up a data science environment that uses Oracle Machine Learning     BACK TO TOP   oracle.com ADW ATP Documentation ADW ATP TCO Calculator Shared Dedicated Cloud Cost Estimator Autonomous Database ADB New Features Autonomous Database Schema Advisor Autonomous Database       Customer Forums ADW ATP       BACK TO TOP
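For reference, here is a minimal PL/SQL sketch of the DBMS_CLOUD.PUT_OBJECT call mentioned in the file-transfer item above. The credential name, namespace, bucket and file names are placeholders; substitute your own values:

BEGIN
  -- Copy a file from the database directory DATA_PUMP_DIR into an Object Storage bucket
  DBMS_CLOUD.PUT_OBJECT(
    credential_name => 'OBJ_STORE_CRED',
    object_uri      => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/mynamespace/b/mybucket/o/export01.dmp',
    directory_name  => 'DATA_PUMP_DIR',
    file_name       => 'export01.dmp');
END;
/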


Autonomous

Query ORC files and complex data types in Object Storage with Autonomous Database

Apache ORC is a columnar file type that is common to the Hadoop ecosystem.  It is similar in concept to Apache Parquet; Hortonworks Hadoop clusters often leveraged ORC – while Cloudera clusters utilized Parquet.  Like Parquet, ORC is a database file designed for efficient reads. Files embed a schema, data is stored in a compressed columnar format, predicate pushdown enables data pruning – sound familiar?  See the Parquet blog post if you want a refresher :) In addition, ORC provides the ability to capture complex data types – including arrays, maps, unions and objects (or structs).  This capability too is similar to both Parquet and Avro. Autonomous Database now supports querying object store data that is captured in ORC format – in addition to text, Avro and Parquet.  And, across the structured file types – you can now also query complex data types.  Let’s take a look at an example.  We’ll extend the movie file that was used in our previous Avro post (we downloaded this data from Wikipedia) - this time using the ORC file type with an extra column. The movie file has the following schema: id int original_title string overview string poster_path string release_date string vote_count int runtime int popularity double genres array<struct<id:int,name:string>> Notice that each movie is categorized by multiple genres (an array of genres).  This array is an array of objects - or structs:  each genre has an id (integer) and a name (string).  Let's take a look at how to access this data in Autonomous Database.  After creating a credential object that enables access to Oracle Object Storage, create a table using dbms_cloud.create_external_table: begin   DBMS_CLOUD.create_credential (     credential_name => 'OBJ_STORE_CRED',     username => 'abc@oracle.com',     password => '12345'   ) ;     dbms_cloud.create_external_table (     table_name =>'movie_info',     credential_name =>'OBJ_STORE_CRED',     file_uri_list =>'https://objectstorage.us-phoenix-1.oraclecloud.com/n/abcd/b/movies/o/movie-info.orc',     format =>  '{"type":"orc",  "schema": "first"}'     ); end; / Notice you don’t have to specify the shape of the data.  The columns and data types are automatically derived by reading the metadata contained in the ORC file.  This created a table with the following DDL:  CREATE TABLE "ADMIN"."MOVIE_INFO"     ( "ID" NUMBER(10,0),       "ORIGINAL_TITLE" VARCHAR2(4000 BYTE) COLLATE "USING_NLS_COMP",       "OVERVIEW" VARCHAR2(4000 BYTE) COLLATE "USING_NLS_COMP",       "POSTER_PATH" VARCHAR2(4000 BYTE) COLLATE "USING_NLS_COMP",       "RELEASE_DATE" VARCHAR2(4000 BYTE) COLLATE "USING_NLS_COMP",       "VOTE_COUNT" NUMBER(10,0),       "RUNTIME" NUMBER(10,0),       "POPULARITY" BINARY_DOUBLE,       "GENRES" VARCHAR2(4000 BYTE) COLLATE "USING_NLS_COMP"    )  DEFAULT COLLATION "USING_NLS_COMP"     ORGANIZATION EXTERNAL      ( TYPE ORACLE_BIGDATA       DEFAULT DIRECTORY "DATA_PUMP_DIR"       ACCESS PARAMETERS       ( com.oracle.bigdata.credential.name=OBJ_STORE_CRED         com.oracle.bigdata.fileformat=ORC   )       LOCATION        ( 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/abcd/b/movies/o/movie-info.orc'        )     )    REJECT LIMIT UNLIMITED    PARALLEL ; So, how will you handle the complex data?  Simple… data will be returned as JSON.  Thus, despite the fact that different file types store the same data in different ways, your query can be totally agnostic of this fact.  You see the same JSON output for all common complex types.
Use Oracle's rich JSON query processing capabilities against the table. Running a simple query against this table yields the following: select original_title, release_date, genres from movie_info where release_date > '2000' order by original_title; original_title release_date genres (500) Days of Summer 2009 [{"id":3,"name":"Drama"},{"id":6,"name":"Comedy"},{"id":17,"name":"Horror"},{"id":19,"name":"Western"},{"id":18,"name":"War"},{"id":15,"name":"Romance"}] 10,000 BC 2008 [{"id":6,"name":"Comedy"}] 11:14 2003 [{"id":9,"name":"Thriller"},{"id":14,"name":"Family"}] 127 Hours 2010 [{"id":6,"name":"Comedy"},{"id":3,"name":"Drama"}] 13 Going on 30 2004 [{"id":6,"name":"Comedy"},{"id":3,"name":"Drama"},{"id":18,"name":"War"},{"id":15,"name":"Romance"}] 1408 2007 [{"id":45,"name":"Sci-Fi"},{"id":6,"name":"Comedy"},{"id":17,"name":"Horror"},{"id":6,"name":"Comedy"},{"id":18,"name":"War"}] Notice that the genres returned as a json array.  Let's make that JSON data more useful.  The JSON can be transformed using Oracle's JSON functions - including the simple "." notation as well as the more powerful transform functions like JSON_TABLE.  The example below queries the table - turning each value of the array into a row in the result set: select original_title,        release_date,        m.genre_name,        genres from movie_info mi,      json_table(mi.genres, '$.name[*]'        COLUMNS (genre_name VARCHAR2(25) PATH '$')                                  ) AS m where rownum < 10; original_title release_date genre_name genres (500) Days of Summer 2009 Drama [{"id":3,"name":"Drama"},{"id":6,"name":"Comedy"},{"id":17,"name":"Horror"},{"id":19,"name":"Western"},{"id":18,"name":"War"},{"id":15,"name":"Romance"}] (500) Days of Summer 2009 Comedy [{"id":3,"name":"Drama"},{"id":6,"name":"Comedy"},{"id":17,"name":"Horror"},{"id":19,"name":"Western"},{"id":18,"name":"War"},{"id":15,"name":"Romance"}] (500) Days of Summer 2009 Horror [{"id":3,"name":"Drama"},{"id":6,"name":"Comedy"},{"id":17,"name":"Horror"},{"id":19,"name":"Western"},{"id":18,"name":"War"},{"id":15,"name":"Romance"}] (500) Days of Summer 2009 Western [{"id":3,"name":"Drama"},{"id":6,"name":"Comedy"},{"id":17,"name":"Horror"},{"id":19,"name":"Western"},{"id":18,"name":"War"},{"id":15,"name":"Romance"}] (500) Days of Summer 2009 War [{"id":3,"name":"Drama"},{"id":6,"name":"Comedy"},{"id":17,"name":"Horror"},{"id":19,"name":"Western"},{"id":18,"name":"War"},{"id":15,"name":"Romance"}] (500) Days of Summer 2009 Romance [{"id":3,"name":"Drama"},{"id":6,"name":"Comedy"},{"id":17,"name":"Horror"},{"id":19,"name":"Western"},{"id":18,"name":"War"},{"id":15,"name":"Romance"}] 10,000 BC 2008 Comedy [{"id":6,"name":"Comedy"}] 11:14 2003 Family [{"id":9,"name":"Thriller"},{"id":14,"name":"Family"}] 11:14 2003 Thriller [{"id":9,"name":"Thriller"},{"id":14,"name":"Family"}] 127 Hours 2010 Comedy [{"id":6,"name":"Comedy"},{"id":3,"name":"Drama"}] 127 Hours 2010 Drama [{"id":6,"name":"Comedy"},{"id":3,"name":"Drama"}] 13 Going on 30 2004 Romance [{"id":6,"name":"Comedy"},{"id":3,"name":"Drama"},{"id":18,"name":"War"},{"id":15,"name":"Romance"}] 13 Going on 30 2004 Comedy [{"id":6,"name":"Comedy"},{"id":3,"name":"Drama"},{"id":18,"name":"War"},{"id":15,"name":"Romance"}] 13 Going on 30 2004 War [{"id":6,"name":"Comedy"},{"id":3,"name":"Drama"},{"id":18,"name":"War"},{"id":15,"name":"Romance"}] 13 Going on 30 2004 Drama [{"id":6,"name":"Comedy"},{"id":3,"name":"Drama"},{"id":18,"name":"War"},{"id":15,"name":"Romance"}]   JSON_TABLE was used in this case to 1) 
create a row for each value of the array (think outer join) and 2) parse the struct to extract the name of the genre. Autonomous Database continues to expand its ability to effectively query external sources.  Look for more to come in this area soon!
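Because the GENRES column surfaces the complex type as JSON text, any of Oracle's JSON functions can be applied to it. As a small additional sketch (assuming the movie_info table created above), the query below keeps only the movies tagged with a particular genre by using a JSON path filter expression:

-- Return movies that carry the "Comedy" genre anywhere in the genres array
select original_title, release_date
from   movie_info
where  json_exists(genres, '$[*]?(@.name == "Comedy")')
order  by original_title;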


Autonomous

Oracle Database Vault on Autonomous Database: Quickly and Easily

We often talk about how to protect our data from external threats by implementing various practices such as encrypting our data, only allowing connections via TCPS authentication or even further, deploying our databases in a private network. However, these security practices might not be sufficient to eliminate the threats that can originate from unauthorized privileged user access or unauthorized database changes. The good news is that Oracle Database Vault offers comprehensive access control capabilities to prevent those internal threats. What’s better is that you can use it in your Oracle Autonomous Database on Shared Exadata Infrastructure (ADB-S) as well! In this blog post, we are going to focus on how to take advantage of Database Vault in ADB-S by just following a few simple steps: Enable Database Vault Control Privileged User Access Create a Realm to Protect Your Data Disable Database Vault Enable Database Vault To configure and enable Oracle Database Vault on Autonomous Database, we will follow the steps described in our documentation: Create two users: one to be the Database Vault owner and the other to be the Database Vault account manager. This is the first step towards implementing separation of duties. ctuzla$ ./sqlplus ADMIN/************@tuzladv_low SQL*Plus: Release 19.0.0.0.0 - Production on Wed Jun 3 21:07:29 2020 Version 19.3.0.0.0 Copyright (c) 1982, 2019, Oracle. All rights reserved. Last Successful login time: Wed Jun 03 2020 20:55:08 -07:00 Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.5.0.0.0 SQL> create user mydvowner identified by ************; User created. SQL> create user mydvacctmgr identified by ************; User created. Configure Database Vault: SQL> EXEC DBMS_CLOUD_MACADM.CONFIGURE_DATABASE_VAULT('mydvowner', 'mydvacctmgr'); PL/SQL procedure successfully completed. Enable Database Vault: SQL> EXEC DBMS_CLOUD_MACADM.ENABLE_DATABASE_VAULT; PL/SQL procedure successfully completed. Restart your ADB-S instance. Confirm Database Vault is enabled: SQL> SELECT * FROM DBA_DV_STATUS; NAME STATUS ------------------ ------------------- DV_APP_PROTECTION NOT CONFIGURED DV_CONFIGURE_STATUS TRUE DV_ENABLE_STATUS TRUE Control Privileged User Access In the previous section, we saw how to set up separate users to be the designated Database Vault owner and account manager. This was an important step in order to achieve separation of duties; however, we are not done yet. In Autonomous Database, the ADMIN user has the DV_OWNER and DV_ACCTMGR roles by default. This means that even after configuring and enabling Database Vault with different users, the ADMIN user can still perform certain operations, such as user management, thanks to these roles. Even though ADMIN can be considered a backup account for both the Database Vault owner and account manager, it’s fairly easy to revoke these roles from ADMIN as we will see below. Connect to your ADB-S instance as the designated Database Vault account manager and revoke the DV_ACCTMGR role from ADMIN: ctuzla$ ./sqlplus MYDVACCTMGR/************@tuzladv_low SQL*Plus: Release 19.0.0.0.0 - Production on Wed Jun 3 22:42:00 2020 Version 19.3.0.0.0 Copyright (c) 1982, 2019, Oracle. All rights reserved. Last Successful login time: Wed Jun 03 2020 21:18:48 -07:00 Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.5.0.0.0 SQL> revoke dv_acctmgr from ADMIN; Revoke succeeded.
Connect to your ADB-S instance as the designated Database Vault owner and revoke DV_OWNER role from ADMIN: ctuzla$ ./sqlplus MYDVOWNER/************@tuzladv_low SQL*Plus: Release 19.0.0.0.0 - Production on Wed Jun 3 22:43:30 2020 Version 19.3.0.0.0 Copyright (c) 1982, 2019, Oracle. All rights reserved. Last Successful login time: Sun May 31 2020 21:38:21 -07:00 Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.5.0.0.0 SQL> revoke dv_owner from ADMIN; Revoke succeeded. Confirm ADMIN user cannot perform any user operations: ctuzla$ ./sqlplus ADMIN/************@tuzladv_low SQL*Plus: Release 19.0.0.0.0 - Production on Wed Jun 3 23:00:05 2020 Version 19.3.0.0.0 Copyright (c) 1982, 2019, Oracle. All rights reserved. Last Successful login time: Wed Jun 03 2020 21:30:55 -07:00 Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.5.0.0.0 SQL> create user testuser identified by ************; create user testuser identified by ************ * ERROR at line 1: ORA-01031: insufficient privileges SQL> alter user dvdemo identified by ************; alter user dvdemo identified by ************ * ERROR at line 1: ORA-01031: insufficient privileges SQL> drop user dvdemo; drop user dvdemo * ERROR at line 1: ORA-01031: insufficient privileges Create a Realm to Protect Your Data In this section, we will explore how to create a realm to protect tables of a schema. For this demonstration, I will be using my DVDEMO schema that contains a table named PRODUCTS. Check if ADMIN can access the DVDEMO.PRODUCTS table: ctuzla$ ./sqlplus ADMIN/************@tuzladv_low SQL*Plus: Release 19.0.0.0.0 - Production on Wed Jun 3 23:00:05 2020 Version 19.3.0.0.0 Copyright (c) 1982, 2019, Oracle. All rights reserved. Last Successful login time: Wed Jun 03 2020 21:30:55 -07:00 Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.5.0.0.0 SQL> select count(*) from dvdemo.products; COUNT(*) ---------- 72 As you can see, ADMIN can access the data that belongs to another schema since it’s not protected by a realm yet. Create a realm for all tables of the DVDEMO schema as Database Vault owner: SQL> begin DBMS_MACADM.CREATE_REALM ( realm_name => 'DVDEMO Realm', description => 'Realm for DVDEMO schema', enabled => DBMS_MACUTL.G_YES, audit_options => DBMS_MACUTL.G_REALM_AUDIT_OFF, realm_type => 1); end; / PL/SQL procedure successfully completed. SQL> begin DBMS_MACADM.ADD_OBJECT_TO_REALM( realm_name => 'DVDEMO Realm', object_owner => 'DVDEMO', object_name => '%', object_type => 'TABLE'); end; / PL/SQL procedure successfully completed. Authorize DVDEMO to access the realm: SQL> begin DBMS_MACADM.ADD_AUTH_TO_REALM( realm_name => 'DVDEMO Realm', grantee => 'DVDEMO', auth_options => DBMS_MACUTL.G_REALM_AUTH_OWNER); end; / PL/SQL procedure successfully completed. Try to access PRODUCTS table as ADMIN and DVDEMO users: ctuzla$ ./sqlplus ADMIN/************@tuzladv_low SQL*Plus: Release 19.0.0.0.0 - Production on Thu Jun 4 00:04:48 2020 Version 19.3.0.0.0 Copyright (c) 1982, 2019, Oracle. All rights reserved. 
Last Successful login time: Wed Jun 03 2020 23:00:06 -07:00 Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.5.0.0.0 SQL> select count(*) from dvdemo.products; select count(*) from dvdemo.products * ERROR at line 1: ORA-01031: insufficient privileges ctuzla$ ./sqlplus DVDEMO/************@tuzladv_low SQL*Plus: Release 19.0.0.0.0 - Production on Thu Jun 4 00:05:44 2020 Version 19.3.0.0.0 Copyright (c) 1982, 2019, Oracle. All rights reserved. Last Successful login time: Sun May 31 2020 20:57:16 -07:00 Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.5.0.0.0 SQL> select count(*) from dvdemo.products; COUNT(*) ---------- 72 As expected, only DVDEMO can access the PRODUCTS table. Disable Database Vault As the final step, let’s take a look at how to disable Database Vault. Disable Database Vault using the following API: ctuzla$ ./sqlplus MYDVOWNER/************@tuzladv_low SQL*Plus: Release 19.0.0.0.0 - Production on Thu Jun 4 00:13:10 2020 Version 19.3.0.0.0 Copyright (c) 1982, 2019, Oracle. All rights reserved. Last Successful login time: Thu Jun 04 2020 00:08:31 -07:00 Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.5.0.0.0 SQL> EXEC DBMS_CLOUD_MACADM.DISABLE_DATABASE_VAULT; PL/SQL procedure successfully completed. Restart your ADB-s instance. Confirm Database Vault is disabled: ctuzla$ ./sqlplus ADMIN/************@tuzladv_low SQL*Plus: Release 19.0.0.0.0 - Production on Thu Jun 4 00:16:50 2020 Version 19.3.0.0.0 Copyright (c) 1982, 2019, Oracle. All rights reserved. Last Successful login time: Thu Jun 04 2020 00:04:48 -07:00 Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.5.0.0.0 SQL> SELECT * FROM DBA_DV_STATUS; NAME STATUS ------------------- -------------------------- DV_APP_PROTECTION NOT CONFIGURED DV_CONFIGURE_STATUS TRUE DV_ENABLE_STATUS FALSE Check if ADMIN can access the DVDEMO.PRODUCTS table now: SQL> select count(*) from dvdemo.products; COUNT(*) ---------- 72 It is pretty simple, isn’t it? In just a few steps, we have gone through how to enable Database Vault, how to limit privileged user access via a realm and how to revert everything back to square one by disabling Database Vault. It’s important to note that this is just a small subset of all the capabilities Database Vault offers. If you’d like to learn more about Database Vault, please check out Database Vault Administrator’s Guide.
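If you also want to revert the demo configuration completely, a possible cleanup sketch is shown below. It assumes the realm and user names used in this post, should be run as the designated Database Vault owner (and account manager for the second grant), and depending on your configuration you may need to drop the realm while Database Vault is still enabled:

-- Drop the demo realm created earlier
BEGIN
  DBMS_MACADM.DELETE_REALM(realm_name => 'DVDEMO Realm');
END;
/
-- Optionally hand the Database Vault roles back to ADMIN
GRANT DV_OWNER TO ADMIN;    -- run as MYDVOWNER
GRANT DV_ACCTMGR TO ADMIN;  -- run as MYDVACCTMGR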


Autonomous

Autonomous Database Newsletter - June 4-2020

  Autonomous Database on Shared Exadata Infrastructure June 04, 2020       Welcome to our latest customer newsletter for Autonomous Database on Shared Exadata Infrastructure. This newsletter covers the following new features and topics: Customer Managed Oracle REST Data Services Workload metrics on performance Hub Timezone selector on Perf Hub Latency metrics now available Easy access to tenancy details New ADB collateral Global Leaders Update Don't forget to checkout our "What's New" page for a comprehensive archive of all the new features we have added to Autonomous Database - click here. To find out how to subscribe to this newsletter click here.   (Shared Infrastructure Only) NEW - ADB now supports Customer Managed Oracle REST Data Services When you use the default ORDS on Autonomous Database, you cannot modify any of the ORDS configuration options. For example, with the default configuration connections for ORDS are preconfigured to use the LOW database service. You can now use a customer managed environment if you want manual control of the configuration and management of Oracle REST Data Services.   LEARN MORE   DOC: About Customer Managed Oracle REST Data Services on ADW, click here. DOC: About Customer Managed Oracle REST Data Services on ATP, click here. DOC: Installing and Configuring Customer Managed ORDS on Autonomous Database, click here. VIDEO: ORDS 101, click here.     DO MORE   BLOG: Examples of how to use REST, click here (see section marked "Examples"). GITHUB: Sample code repository, click here.     BACK TO TOP   (Shared Infrastructure Only) NEW - Workload Metrics on Performance Hub Performance Hub has been expanded to include a new tab, "Workload". This can be used to monitor the workload in the database and helps the user identify performance spikes and bottlenecks. There are 4 regions in this tab: CPU Statistics, Wait Time Statistics, Workload Profile, and Sessions. Each region contains a group of charts that indicate the characteristics of the workload and the distribution of the resources, as shown below. IMAGE - view the screenshot of the new Workload tab containing the four regions and their associated charts are displayed - click here.   LEARN MORE   DOC: See section on "To navigate to Performance Hub in the Oracle Cloud Infrastructure Console interface of an Autonomous Database", click here   BACK TO TOP   (Shared Infrastructure Only) NEW - Performance Hub Now Has a Timezone Selector A time zone control has been added to Performance Hub. This will allow users to view data using an alternate time zone to UTC. They will have the choice of viewing data using either UTC (default), the time zone of their browser client, or the time zone of the database.     BACK TO TOP   (Shared Infrastructure Only) NEW - Latency Metrics Now Available in Service Metrics The Database Service Metrics information help Cloud Admins measure useful quantitative data about their Autonomous Databases. This list of available metrics now includes two new latency metrics: 1) Connection latency - Time taken (in milliseconds) to connect to an Autonomous Database that uses shared Exadata infrastructure in each region from a Compute service virtual machine in the same region. 2) Query latency/response time - Time taken (in milliseconds) to display the results of a simple query on the user's screen. Note - the tests to measure connection latency and query latency are not run against any customer instances. 
Within the Service Metrics console, it is possible to define and publish alarms against these new metrics. IMAGE - view the screenshot of the new latency metrics graphs on Service Metrics console- click here.   LEARN MORE   Overview of Events Service, click here. List of all the Autonomous Database events, click here.     BACK TO TOP   (Shared Infrastructure Only) NEW - SQL access to tenancy details for ADB When you file a service request for Autonomous Database, you need to provide the tenancy details for your instance. Tenancy details for the instance are available on the Oracle Cloud Infrastructure console. However, if you are connected to the database, you can now obtain these details by querying the CLOUD_IDENTITY column of the V$PDBS view. For example: SELECT cloud_identity FROM v$pdbs; ...will generate something similar to the following: {"DATABASE_NAME" : "DBxxxxxxxxxxxx", "REGION" : "us-phoenix-1", "TENANT_OCID" : "OCID1.TENANCY.REGION1..ID1", "DATABASE_OCID" : "OCID1.AUTONOMOUSDATABASE.OC1.SEA.ID2", "COMPARTMENT_OCID" : "ocid1.tenancy.region1..ID3"}   LEARN MORE   DOC: Obtain tenancy details for ADW to File a Service Request, click here. DOC: Obtain tenancy details for ATP to File a Service Request, click here.     BACK TO TOP   NEW - Collateral For Autonomous Database TRAINING: Autonomous Database Workshops on GitHub 1) Autonomous Database Quickstart - Learn about Autonomous Database on Shared Infrastructure and learn how to create an instance in just a few clicks. To run the workshop click here. 2) Analyzing Your Data with Autonomous Database - Connect using secure wallets, monitor your instance, use Analytics Desktop to visualize data and then use machine learning to try your hand at predictive analytics. To run the workshop click here. VIDEO: Oracle Analytics Channel - Oracle Autonomous Database Experience George Lumpkin discusses getting value from an analytics system and leveraging end-to-end solutions in an ever-changing market. To watch the video click here. VIDEO: Using ATP as a JSON Document Store This video demonstrates how to use ATP as a JSON document store. The video is divided into the following sections: Overview, Accessing JSON collections using a document-store API, querying JSON collections using SQL, loading JSON from object storage and viewing and analyzing data from JSON collections using a notebook. To watch the video click here.     BACK TO TOP     ORACLE GLOBAL LEADERS   EVENT - Global Leaders EMEA 2020 Online Summit: June 23 - June 24   For June the Oracle Global Leaders program is going online, with the goal of keeping you connected with your peers so you can hear the latest Oracle customer and product stories. Gain from valuable real life experience on projects with Autonomous Database, data warehousing, big data, analytics and Oracle Cloud Infrastructure. The focus for this live online event will be a series of customer panels based on the following topics: Oracle Cloud Infrastructure Oracle Autonomous Database Oracle Exadata for Analytics Oracle Analytics Oracle Machine Learning Oracle Database Development To register for this event, click here and to view the agenda, click here. If you have any questions, email Reiner Zimmermann In association with intel® WEBCAST: The MineSense Story: Smart Shovelling with ADW and APEX - June 30 Hear how Oracle Autonomous Data Warehouse + APEX is powering smart and efficient Mining and enabling the “Scale Up” of startup company MineSense. 
Learn how MineSense reduced implementation time from weeks or months to days, experienced over 2x performance improvement and implemented a secure, scalable infrastructure. To register for this Zoom webinar, click here.     BACK TO TOP   oracle.com ADW ATP Documentation ADW ATP TCO Calculator Shared Dedicated Cloud Cost Estimator Autonomous Database ADB New Features Autonomous Database Schema Advisor Autonomous Database       Customer Forums ADW ATP       BACK TO TOP  
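Since the tenancy-details query covered earlier in this newsletter returns a JSON document, individual fields can also be pulled out with standard SQL/JSON functions when you only need, say, the region or the database OCID. A small sketch using the keys shown in the sample output above:

-- Extract individual tenancy details from the CLOUD_IDENTITY JSON document
SELECT JSON_VALUE(cloud_identity, '$.REGION')        AS region,
       JSON_VALUE(cloud_identity, '$.DATABASE_OCID') AS database_ocid
FROM   v$pdbs;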


Big Data

Oracle Big Data SQL 4.1 is now available

I'm pleased to announce that Oracle Big Data SQL 4.1 is now available.  This is a significant release that enables you to take advantage of new Oracle Database 19c capabilities plus core Big Data SQL enhancements. Big Data SQL enables organizations to easily analyze data across Apache Hadoop, Apache Kafka, NoSQL, object stores and Oracle Database with extreme performance, leveraging their existing Oracle SQL skills, security policies and applications.  Oracle Database 19c introduces the following features to make this support even better: Enhanced Information Lifecycle Management with Hybrid Partitioned Tables: a single partitioned table can now source data from both Oracle data files and external stores (object storage, Hive, HDFS and local text files).  Store the external data in native big data formats (Parquet, ORC, CSV, Avro, etc.) - enabling that data to be shared with Spark and other data lake applications (a rough DDL sketch follows below). In-Memory External Tables: load frequently accessed external data in-memory to dramatically improve performance. Core Big Data SQL reader functionality for object storage has also been enhanced - offering support for new sources, file types and complex columns: ORC columnar file type support has been added to existing Parquet, Avro and text file support; complex data types (e.g. arrays, records, etc.) can now be queried using JSON functions; vastly improved support for text file attributes; and Azure Blob Storage has been added to existing Oracle Object Storage and Amazon S3 support. Apache Kafka support has also been enhanced with Oracle SQL Access to Kafka.  This new capability removes the requirement for the Oracle Kafka Hive storage handler and provides a more systematic approach to querying Kafka streams. Enjoy the new release!
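As a rough illustration of the hybrid partitioned table capability mentioned above, the DDL sketch below keeps the current partition in the database while an older partition is read from object storage. Table, column and URI values are made up, and the exact access parameters (for example, the credential setting) depend on your configuration:

-- Sketch of a 19c hybrid partitioned table: one external partition, one internal partition
CREATE TABLE sales_hybrid (
  time_id  DATE,
  cust_id  NUMBER,
  amount   NUMBER
)
EXTERNAL PARTITION ATTRIBUTES (
  TYPE ORACLE_BIGDATA
  DEFAULT DIRECTORY data_pump_dir
  ACCESS PARAMETERS (com.oracle.bigdata.fileformat=parquet)
  REJECT LIMIT UNLIMITED
)
PARTITION BY RANGE (time_id) (
  PARTITION sales_2018 VALUES LESS THAN (DATE '2019-01-01')
    EXTERNAL LOCATION ('https://objectstorage.us-phoenix-1.oraclecloud.com/n/mynamespace/b/sales/o/sales_2018.parquet'),
  PARTITION sales_2020 VALUES LESS THAN (DATE '2021-01-01')
);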


Autonomous

Autonomous Database Newsletter - April 30-2020

  Autonomous Database on Shared Exadata Infrastructure April 30, 2020       Welcome to our latest customer newsletter for Autonomous Database on Shared Exadata Infrastructure. This newsletter covers the following new features and topics: Service Level Agreement for ADB Per-second billing Support for SODA documents and collections ADB Extensions for IDEs Stateful Rule Support in Private Endpoints Enhancements for Data Pump Global Leaders Program - Update Quick Links Don't forget to checkout our "What's New" page for a comprehensive archive of all the new features we have added to Autonomous Database - click here. To find out how to subscribe to this newsletter click here.   (Shared Infrastructure Only) NEW - Autonomous Database now comes with an SLA The service level agreement for ADB has just been published and it applies only to the deployment of an Autonomous Database on shared infrastructure. The pillar document listed below in the section headed LEARN states that: Oracle will use commercially reasonable efforts to have the Oracle Cloud Infrastructure – Autonomous Database Services on Shared Infrastructure with a Monthly Uptime Percentage of at least 99.95% during any calendar month (the "Service Commitment"). In the event an Autonomous Database Service on Shared Infrastructure does not meet the "Service Commitment" then customers will be eligible to receive Service Credits. Note - there are specific definitions of key terms in the pillar document (listed below in the section headed LEARN) that you should read and understand.   LEARN MORE PDF: Oracle PaaS and IaaS Public Cloud Services Pillar Document (April 2020),  click here. Autonomous Database information is covered in Category 7, which starts on page 4 with ADB covered on pages 17 and 18.   BACK TO TOP   (Shared Infrastructure Only) NEW - Per-Second Billing For Autonomous Database Autonomous Database instance CPU and storage usage is now billed by the second, with a minimum usage period of one minute. Previously Autonomous Database billed in one-hour minimum increments and partial usage of an hour was rounded up to a full hour. This change benefits the following types of use cases: Where you only run your ADB instances during work hours and stop instances at end of working day Where you start-stop your instances to run overnight batch jobs Where you have ADB instances which typically have a short lifecycle – sandboxes, QA, training, integration testing instances.   LEARN MORE DOC: Per-second billing announcement for ADW,  click here DOC: Per-second billing announcement for ATP,  click here DOC: OCI Per-second billing general announcement which covers ADB,  click here BLOG: Review of per-second billing on OCI,  click here   BACK TO TOP   (Shared Infrastructure Only) NEW - Support for SODA Documents and Collections   Autonomous Database now supports loading and using Simple Oracle Document Architecture (SODA) documents and collections. SODA allows developers to store, search, and retrieve document collections, typically JSON documents, without using SQL. It is possible to also access document collections with SQL/JSON operators which means that ADB now combines the simplicity of a document database with the power of a relational database. The Oracle Cloud Infrastructure console has been updated with a link to access the SODA drivers for several languages and frameworks including: Java, Node.js, Python, Oracle Call Interface, and PL/SQL, as shown below.   
LEARN MORE DOC: Using JSON Documents with ADW,  click here DOC: Using JSON Documents with ATP,  click here Documentation covers the following key topics: Work with Simple Oracle Document Access (SODA) in Autonomous Database SODA Collection Metadata on Autonomous Database Work with JSON Documents Using SQL and PL/SQL APIs on Autonomous Database Load JSON Documents with Autonomous Database   DO MORE   For examples of using SODA see the following whitepaper:  Developing schemaless applications using the Simple Oracle Document Access APIs.     BACK TO TOP   (Shared Infrastructure Only) NEW - ADB Extensions for IDEs ADB extensions enable developers to connect to, browse, and manage Autonomous Databases directly from their IDEs. There are extensions for the following: Eclipse, Microsoft Visual Studio and Visual Studio Code. Users with permissions to manage the databases can perform a number of actions (depending on their IDE), such as: Sign up for Oracle Cloud Connect to a cloud account using a simple auto-generated config file and key file Create new or clone existing Always Free Autonomous Database, Autonomous Database Dedicated, and Autonomous Database Shared databases Automatically download credentials files (including wallets) and quickly connect, browse, and operate on Autonomous Database schemas Change compartments and regions without reconnecting Start, stop, or terminate Autonomous Databases Scale up/down Autonomous Database resources Restore from backup Update instance credentials and/or license type used Rotate wallets Convert Always Free ADBs into paid ADBs   LEARN MORE   DOC: IDE Extensions in ADW,  click here DOC: IDE Extensions in ATP,  click here   BACK TO TOP   (Shared Infrastructure Only) NEW - Stateful Rule Support in Private Endpoints When configuring a private endpoint one of the steps is to define a security rule(s) in a Network Security Group. This creates a virtual firewall for the Autonomous Database and allows connections to the Autonomous Database instance. The Network Security Group can now be configured with stateful rules.   LEARN MORE   DOC: Configure Private Endpoints with ADW, click here DOC: Configure Private Endpoints with ATP, click here BLOG: Announcing Private Endpoints in Autonomous Database on Shared Exadata Infrastructure, click here.     BACK TO TOP   (Shared Infrastructure Only) UPDATED - Enhancements to Data Pump Support Two specific improvements relating to how Data Pump dump files can be used with ADB. It is now possible to: Query Data Pump dump files in the Cloud by creating an external table Use Data Pump dump files in the Cloud as source files to load data   LEARN MORE DOC: Query external Data Pump dump files with ADW, click here DOC: Load Data Pump dump files into an existing table with ADW, click here.   DOC: Query external Data Pump dump files with ATP, click here DOC: Load Data Pump dump files into an existing table with ATP, click here.     BACK TO TOP     ORACLE GLOBAL LEADERS PROGRAM   WEBCAST - How to Scale Your Services with Autonomous Database To Infinity 29 April 2020 at 15:00 CET Join this Oracle Global Leaders webcast to discover directly from Peter Merkert, CTO & Co-Fouder, Retraced GmbH how they are using Oracle Autonomous Database and the benefit they are getting. To register click here. To learn more, also see the video: How Autonomous & Blockchain Provide A Sustainable Transparent Clothing Supply Chain?     
WEBCAST - From BillBoards to Dashboards using ADW and OAC 13 May 2020 at 12:00 PM ET Join this Oracle Global Leaders webcast to discover directly from Derek Hayden, SVP, Data Strategy and Analytics at OUTFRONT Media, about the role of Autonomous Data Warehouse in the new data platform for OUTFRONT Media moving in the digital age. Derek will discuss initial success with ADW, OAC and how this platform is being used for OUTFRONT's growth, efficiencies and data management in the out of home advertising industry. To register click here.       WEBCAST - Building low code application using Oracle Autonomous Database and Application Express 19 May 2020 at 15:00 CET Join this Oracle Global Leaders webcast to discover directly from Dr. Holger Friedrich, CEO, SumIT how they are using Oracle Autonomous Database and the benefit they are getting. Oracle Application Express is a powerful low-code web application development framework built-in and ready-to-use with Oracle Autonomous Database. It is instantly available in every provisioned service instance. In this webcast we will explain and demonstrate how sumIT and partner document2relation employ APEX and ATP to provide quick migration services and application development for migrating legacy Lotus Notes applications to the Cloud. To register click here.     WEBCAST - Smart Shovelling with ADW & APEX 13 May 2020 at 12:00 PM ET Join our Global Leaders webinar to hear from Frank Hoogendoorn, CDO of MineSense: a technology solution provider for the Mining Industry that combines IoT and machine learning to create "Smart Shovel" technologies that optimize mine operations, all powered by Oracle solutions! In this Webinar, Frank will explain how MineSense has transitioned from "start-up" to a "scale-up", and how Autonomous Data Warehouse with built-in APEX is enabling rapid business growth with international Mining customers that include some of the world’s largest Copper and Zinc producers. Backed by APEX and ADW, their disruptive ‘smart shovel’ technology collects and analyzes orebody data in real-time, where machine learning based algorithms drive processing decisions to boost mine productivity while reducing energy consumption, all of which leads to more sustainable mining operations. With the flexibility and low cost setup of ADW, the ease of application development and deployment with APEX, and superb on-demand data processing performance, MineSense will talk about how Oracle Solutions, (some even running right on mining shovels!), have been key in setting up a pivotal digital platform, where IT is no longer seen as a business cost but rather a key driver for growth. To register click here.     BACK TO TOP   oracle.com ADW ATP Documentation ADW ATP TCO Calculator Shared Dedicated Cloud Cost Estimator Autonomous Database ADB New Features Autonomous Database Schema Advisor Autonomous Database       Customer Forums ADW ATP       BACK TO TOP  
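Following up on the SODA announcement covered in this newsletter, here is a minimal PL/SQL sketch of creating a collection and inserting a JSON document. The collection name and document contents are made up:

DECLARE
  collection SODA_COLLECTION_T;
  doc        SODA_DOCUMENT_T;
  status     NUMBER;
BEGIN
  -- Create (or open) a document collection; a backing table is created automatically
  collection := DBMS_SODA.create_collection('movies');
  -- Insert a JSON document into the collection
  doc    := SODA_DOCUMENT_T(b_content => UTL_RAW.cast_to_raw('{"title":"Interstellar","year":2014}'));
  status := collection.insert_one(doc);
  COMMIT;
END;
/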


The simplest guide to exporting data from Autonomous Database directly to Object Storage

One of the best ways to export data out of your Autonomous Database (ADB), whether to transfer data to a database lying in a different region or even keep a copy of the data in a single place for other teams in your organization to access, is via Oracle Data Pump. Recently we released a much requested feature enabling the ability to export data from your ADB instance directly to your Oracle Object Storage bucket. For this walkthrough, I assume you have:  An Oracle Cloud Account (Sign up for a trial if you don't have one. It's totally free!) An Autonomous Database instance with some data in it (Follow Labs 1 & 3 in this ADB Workshop if you don't have one) Once you have logged in to your Oracle Cloud account with an ADB instance, let's dive right into the steps below to export your data.   Step 1: Create an Object Storage bucket Using the hamburger menu on the left, navigate to your Object Storage. Select a compartment in which you would like to put your data and hit "Create Bucket". The default options work well; in the popup, enter a bucket name (we will use 'examplebucket') and continue on to create the bucket. This bucket will hold your exported dump files. Note: Data Pump Export supports both OCI Object Storage and OCI Classic Object Storage.   Step 2: Create a Credential: Authorization between your ADB instance and Object Storage You will need to generate an auth token password for a new credential. Navigate via the left-side menu to Identity -> Users -> Your User. In your selected user, on the bottom left, you will see "Auth Tokens". When clicked, you will see an option to generate a token. Note it down somewhere secure, since you cannot view this later. Note: You must use a Swift auth token based credential here, and cannot use an OCI native Signing Key credential. Navigate to your existing Autonomous Database instance (it may be ADW or ATP) via the left-hand menu. In your instance's console, go to the "Tools" tab and navigate to SQL Developer Web. Log in with your ADMIN database user. Copy the following script into your worksheet, fill in your Cloud User's username and Auth Token as the password, and run it to create your user credential. BEGIN   DBMS_CLOUD.CREATE_CREDENTIAL(     credential_name => 'EXPORT_CRED',     username => 'someone@oracle.com',     password => 'erioty6434900z……'   ); END; /   Step 3: Set your credential as the Default Credential (for Data Pump v19.6 and below) Next, in SQL Developer Web, run the following command after filling in your database user: ALTER DATABASE PROPERTY SET DEFAULT_CREDENTIAL = '<Database User Name>.EXPORT_CRED'; You have now set your database's default credential. Note: I am assuming here you are running Data Pump version 19.6 or below since that is what is currently available on most platforms. In versions 19.7 and above, you will not need to set a default credential and will be able to run your exports with a credential parameter!   Step 4: Install Oracle Data Pump If you're ahead in the game and already have a recent installation of Oracle Data Pump, skip to Step 5. If not, the easiest method to install Data Pump is via Instant Client. Download that as well as the Tools Package, which includes Oracle Data Pump, for your platform from Oracle Instant Client Downloads. Follow the documentation for step-by-step installation details.   Step 5: Run Oracle Data Pump Export The Data Pump utility is standalone from the database. Once installed, in your terminal / command line we will run the command below.
Notice, we are using the "default_credential" from Step 3 as the first argument in the 'dumpfile' parameter.  The second argument points to your Object Storage bucket's URI. You may use a Swift URI if you have one, instead of an OCI native URI as in the example below. In this URI, replace: <regionID> with the region you are using. You can identify this by looking at the top right corner of your Oracle Cloud page. Go into "Manage Regions" or look at this page to get your region's identifier. <namespace> with your Object Storage namespace. This can be found in Profile -> Tenancy (top right corner of the Oracle Cloud page). Look for your "Object Storage Namespace"; it may be the same as your tenancy name or a different ID. Here we used the bucket name "examplebucket"; if you used a different bucket name, also replace that in the URI. With encryption enabled, you will be prompted for a password. For more information on the export parameters and file size limits have a look at the documentation. expdp admin/password@ADWC1_high \ filesize=5GB \ dumpfile=default_credential:https://objectstorage.<regionID>.oraclecloud.com/n/<namespace>/b/examplebucket/o/exampleexport%U.dmp \ parallel=16 \ encryption_pwd_prompt=yes \ logfile=export.log \ directory=data_pump_dir Of course, depending on the size of your database, this database export can take minutes or hours.   Final Step & Thoughts That's a wrap! Once completed successfully, you can navigate back to the Object Storage bucket we created in Step 1 and you will see your newly exported dump files. The file names begin with "exampleexport". Since exporting to object store uses chunks for optimum performance, you may see additional files other than your dump ".dmp" files in your bucket. Don't delete them, but you may ignore them and should only have to interact with your dump files to access your data. Note: If you followed this guide to a T but are seeing an authorization or a URI-not-found error, you (or your Cloud Admin) may need to grant object storage access to your Cloud User or Group.   Here, we looked at exporting your data to the object store using the popular Data Pump utility running on your machine or server, which makes for a simple workflow to include in your data pipeline. In coming weeks, we will also support running Data Pump import/export (using the ORACLE_DATAPUMP driver) from within the database in your PL/SQL scripts using DBMS_CLOUD, so keep your eyes peeled for more Data Pump goodness!
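To bring those dump files back into another Autonomous Database, the same credential approach works on the import side. A hedged sketch is below; the connect string, credential name and URI are placeholders, and the exact parameters (for example, whether you pass a credential or rely on a default credential) depend on your client version:

impdp admin/password@targetdb_high \
  credential=EXPORT_CRED \
  dumpfile=https://objectstorage.<regionID>.oraclecloud.com/n/<namespace>/b/examplebucket/o/exampleexport%U.dmp \
  parallel=16 \
  encryption_pwd_prompt=yes \
  directory=data_pump_dir \
  logfile=import.log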


Autonomous

Autonomous Database Newsletter - April 08-2020

  Autonomous Database on Shared Exadata Infrastructure April 08, 2020       Welcome to our latest customer newsletter for Autonomous Database on Shared Exadata Infrastructure. This newsletter covers the following new features and topics: Don't forget to checkout our "What's New" page for a comprehensive archive of all the new features we have added to Autonomous Database - click here. Options for upgrading to Database 19c Enhancements to Data Pump New menu option to restart ADB instances Enhacements to wallets Global Leaders Program - Update Day-with-Development Program Webcast Program Quick Links     (Shared Infrastructure Only) NEW - Upgrade an Autonomous Database Instance to Oracle Database 19c Oracle Database 19c is now available on Autonomous Database for all regions. In select regions, customers can choose between Oracle Database 18c and Oracle Database 19c for new databases. The default version for all new databases is 19c and going forward, most new Autonomous Database features will only be available for databases using Oracle Database 19c. Please start planning for the upgrade of your applications to Database 19c. For most applications, there should not be any changes required. However, it is recommended you should test your applications with Database 19c. You can easily test your applications by creating a clone of your current database to Oracle Database 19c, and testing your application using this clone. Oracle Database 18c is scheduled to be available in Autonomous Database until September 2020. You can choose to upgrade your database at any time during this period. After that time, all existing databases will be automatically upgraded to Oracle Database 19c. There are several approaches to upgrading your database which are described in the documentation, see links below. Note: Autonomous Database plans to offer a new upgrade path starting in April, with a single button that provides a one-way, in-place upgrade.   LEARN MORE   DOC: Upgrade an ADW instance to Database 19c, click here DOC: Upgrade an ATP instance to Database 19c, click here   DO MORE   BLOG: More information about ADB cloning feature is here     BACK TO TOP   (Shared Infrastructure Only) NEW - Enhancements to Data Pump 1) Support For Pre-Authenticated URLs This enhancement allows Oracle Data Pump to use Oracle Cloud Infrastructure pre-authenticated URIs for source files on Oracle Cloud Infrastructure Object Storage. Note: Customers should carefully assess the business requirement for and the security ramifications of pre-authenticated access. There is more guidance available in the documentation.   LEARN MORE   DOC: Import Data Using Oracle Data Pump on ADW, click here DOC: Import Data Using Oracle Data Pump on ATP, click here   DO MORE   VIDEO: Database Migration with MV2ADB - a tool that provides support for moving existing databases to Oracle Autonomous Database, click here O.COM: Oracle Database Cloud Migration Solutions page, click here     (Shared Infrastructure Only) 2) Data Pump Export to Object Store It is now possible for Oracle Data Pump to export directly to Oracle Object Store. This simplifies the process of moving data between Autonomous Data Warehouse and other Oracle databases. This export method is supported with Oracle Cloud Infrastructure Object Storage and Oracle Cloud Infrastructure Object Storage Classic.   
LEARN MORE   DOC: Move Data with Data Pump Export to Object Store on ADW, click here DOC: Move Data with Data Pump Export to Object Store on ATP, click here   BACK TO TOP   (Shared Infrastructure Only) NEW - Menu Option to Restart ADB Instance A new menu option has been added to the OCI console to automate the process of stopping and then starting an ADB instance. The new menu option is labelled "Restart" as shown below:   LEARN MORE   DOC: How to restart an ADW instance, click here DOC: How to restart an ATP instance, click here   BACK TO TOP   (Shared Infrastructure Only) NEW - README file added to Wallet The Oracle client credentials wallet zip file now contains a README file. This file provides information about the wallet expiry date. An example of the contents of the new readme file is shown below:   Wallet Expiry Date ----------------------- This wallet was downloaded on 2020-04-03 10:19:43.4 UTC. The SSL certificates provided in this wallet will expire on 2025-03-31 21:26:18.928 UTC. In order to avoid any service interruptions due to an expired SSL certificate, you must re-download the wallet before this date.   LEARN MORE   DOC: Download client credentials (wallets) for an ADW instance, click here DOC: Download client credentials (wallets) for an ATP instance, click here     BACK TO TOP   VIDEO Oracle Global Leaders - Day With Development A recording of a recent Oracle Global Leaders Day with Development workshop is now available for viewing. The workshop covers general architecture overview of Data Management Services on OCI and how the following services work together: Big Data Service Autonomous Data Warehouse Data Integration Service Streaming Service, Data Flow Oracle Analytics Cloud To watch the video click here.     WEBCAST - Global Leaders Customers: Autonomous Database Use Cases Our Global Leaders team will be hosting a series of webcasts over the next six months where various customers will talk publicly about how they are using Autonomous Database. The schedule for April is: April 29, 2020 at 15:00 CET Oracle Global Leaders Webcast: Oracle Autonomous Database from the view of a CIO. Join this webcast to discover directly from Rok Planinsec, CIO of Unior, SI how they are using Oracle Autonomous Database and the benefit they are getting. At Unior Oracle Autonomous Data Warehouse and Oracle Analytics enable real-time operational reporting from months to seconds for more strategic decision-making with zero database administration.     BACK TO TOP   oracle.com ADW ATP Documentation ADW ATP TCO Calculator Shared Dedicated Cloud Cost Estimator Autonomous Database ADB New Features Autonomous Database Schema Advisor Autonomous Database       Customer Forums ADW ATP       BACK TO TOP  
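For the "query Data Pump dump files via an external table" enhancement covered in this newsletter, a minimal sketch of the DBMS_CLOUD call is shown below. The credential, file URI and column list are placeholders; the column list must match the shape of the exported table:

BEGIN
  -- External table over Data Pump dump files sitting in object storage
  DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
    table_name      => 'sales_dump_ext',
    credential_name => 'DEF_CRED_NAME',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/mynamespace/b/mybucket/o/sales_exp%U.dmp',
    format          => json_object('type' value 'datapump', 'rejectlimit' value 'UNLIMITED'),
    column_list     => 'sale_id NUMBER, sale_date DATE, amount NUMBER');
END;
/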


Big Data

Now Available! Oracle Big Data Service

After a successful controlled GA, we are excited to announce general availability for Oracle Big Data Service in the following regions: Ashburn, Sao Paulo, Frankfurt, Tokyo, Seoul, London and Phoenix.  Oracle Big Data Service is an automated cloud platform service designed for a diverse set of big data use cases and workloads. From agile, short-lived clusters used to tackle specific tasks to long-lived clusters that manage large data lakes, Big Data Service scales to meet all big data requirements at a low cost and with the highest levels of security. Create new big data clusters or efficiently extend your on-premises big data solutions – and leverage the full capabilities of Cloudera Enterprise Data Hub along with Oracle Big Data analytics capabilities. Take advantage of Oracle Cloud SQL to enable new and existing applications to gain insights from data across the big data landscape using Oracle’s advanced SQL dialect – including data sourced from Hadoop, Object Storage, NoSQL and Kafka (a rough external table sketch follows below). And, use the languages of your choice – including Python, Scala, R and more – for machine learning, graph and spatial analytics. Go to Big Data Service on oracle.com for more information about the new service.
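To give a feel for the Cloud SQL capability mentioned above, the sketch below maps a Hive table into the database as an external table. The table names, directory and access parameters are illustrative only; the exact setup depends on how your cluster and query server are configured:

-- External table over a Hive table, queried through Cloud SQL / Big Data SQL
CREATE TABLE movie_log_ext (
  click_time TIMESTAMP,
  user_id    NUMBER,
  movie_id   NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_HIVE
  DEFAULT DIRECTORY DEFAULT_DIR
  ACCESS PARAMETERS (com.oracle.bigdata.tablename=default.movie_log)
)
REJECT LIMIT UNLIMITED;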


Newsletter

Autonomous Database Newsletter - February 26-2020

Autonomous Database on Shared Exadata Infrastructure February 26, 2020 Welcome to our latest customer newsletter for Autonomous Database on Shared Exadata Infrastructure. This newsletter covers the following new features and topics: Support for Private Endpoints Global Leaders Program - Update Quick Links Don't forget to check out our "What's New" page for a comprehensive archive of all the new features we have added to Autonomous Database - click here. (Shared Infrastructure Only) NEW - Autonomous Database Now Supports Private Endpoints It is now possible to restrict access to Autonomous Database by specifying a private endpoint within a Virtual Cloud Network (VCN). Configuration of private access is done when provisioning or cloning an Autonomous Database - allowing all traffic to and from an Autonomous Database to be kept off the public internet. During provisioning or cloning, it is possible to specify private access for an Autonomous Database by selecting Virtual cloud network within the Choose network access section of the create/clone dialog. The configuration process for establishing private endpoints involves three steps which must be done before provisioning or cloning an Autonomous Database: Create a VCN within the region that will contain your Autonomous Database. Configure a subnet within the VCN, configured with default DHCP options. Specify at least one network security group (NSG) within the VCN - used to specify the ingress and egress rules for the Autonomous Database. Note: Private Endpoints functionality is currently rolling out across all our data centers. As of today, Feb 26, it is live in the following data centers: Amsterdam Ashburn Frankfurt Jeddah London Melbourne Osaka Phoenix Seoul Tokyo Toronto It will be available shortly in the remaining data centers. LEARN MORE   DOC: Configuring Private Endpoints with ADW: click here DOC: Configuring Private Endpoints with ATP: click here. BLOG: Announcing Private Endpoints in Autonomous Database on Shared Exadata Infrastructure: click here.   DO MORE The documentation includes two sample network scenarios: Sample 1: Connecting from Inside Oracle Cloud Infrastructure VCN Sample 2: Connecting from Your Data Center to Autonomous Database DOC: Private Endpoint Configuration Examples on ADW: click here DOC: Private Endpoint Configuration Examples on ATP: click here. BACK TO TOP EVENTS Oracle Global Leaders - Day With Development As part of our Global Leaders program, Oracle Product Management will be hosting a series of workshops for Architects, DBAs, Application Developers and for those interested in learning more about Oracle Information Management. Content will cover a general architecture overview, various cloud service capabilities (such as Autonomous Data Warehouse, Oracle Data Integration and Oracle Analytics Cloud) and how these services work together. All services will be shown live so you will experience the look and feel of each product along with how to work with our complete range of data warehouse services.
The schedule for this series of "Day With Development" events in the US is:   March 26        Irving   April 20       Nashville   April 21       Louisville   May 21       New York The schedule for this series of "Day With Development" events in the EMEA is:   March 3       Madrid, Spain   March 5       Roma, Italy   March 9 (Morning)       Dubai, United Arab Emirates   March 9 (Afternoon)       Dubai, United Arab Emirates   March 12       Istanbul, Turkey   March 16       Warszawa, Poland   March 19       Colombes cedex, France   March 23       Moscow, Russian Federation2 Please use the above links to register for an event in your area and get more information the general Terms and Conditions.   VIDEO - Rosendin Electric Rosendin Electric uses Oracle Autonomous Data Warehouse with Oracle Analytics Cloud to capture data in various formats and sources, making that data available to the right people to extract meaningful insights and drive smarter decision making. Note - click on the above image to watch the video or click here. Thanks to our Global Leaders team for helping to deliver this great customer video WEBCAST - Global Leaders Customers: Autonomous Database Use Cases Our Global Leaders team will be hosting a series of webcasts over the next six months where various customers will talk publicly about how they are using Autonomous Database. The schedule for March and April is: March 4, 2020 at 15:00 CET Oracle Global Leaders Webcast: Managing 1 PB of data with Oracle Autonomous DW. Join this webcast to discover directly from Manuel Marquez, Openlab Coordinator at CERN how they are using Oracle Autonomous Database and the benefit they are getting. Since 1982, CERN has been successfully adopting Oracle technology. Now, CERN is investigating Autonomous Data Warehouse to improve the performance of its research infrastructures. The Partnership with Oracle is key in monitoring large quantities of data. April 29, 2020 at 15:00 CET Oracle Global Leaders Webcast: Oracle Autonomous Database from the view of a CIO. Join this webcast to discover directly from Rok Planinsec, CIO of Unior, SI how they are using Oracle Autonomous Database and the benefit they are getting. At Unior Oracle Autonomous Data Warehouse and Oracle Analytics enable real-time operational reporting from months to seconds for more strategic decision-making with zero database administration. BACK TO TOP oracle.com ADW ATP Documentation ADW ATP TCO Calculator Serverless Dedicated Cloud Cost Estimator Autonomous Database ADB New Features Autonomous Database Schema Advisor Autonomous Database Customer Forums ADW ATP BACK TO TOP Oracle Autonomous Database BACK TO TOP


Announcing Private Endpoints in Autonomous Database on Shared Exadata Infrastructure

Access to a database using private IP addresses has been one of the most common requests for Autonomous Database on Shared Exadata Infrastructure, especially from enterprise customers. Today, we are announcing support for private IP addresses in Autonomous Database on Shared Exadata Infrastructure. With the Private Endpoints functionality Autonomous Database customers will now be able to assign a private IP address and a private hostname to their database in their Virtual Cloud Network. This completely disables the public endpoint for the database, ensuring that no client can access the database from the public internet. For a detailed explanation of this feature including configuration examples, please see this blog post. Private Endpoints functionality further enhances the security of the Autonomous Database for customers with on-premises and Virtual Cloud Network connectivity requirements. We believe the Private Endpoints functionality will be especially beneficial for customers whose security standards mandate private IP addresses for their applications and databases. Autonomous Database continues to offer access via public endpoints as well. Note that private endpoints and public endpoints are mutually exclusive. If you want your databases to be accessible from the public internet in addition to your Virtual Cloud Network and your on-premises network, you should choose to use a public endpoint. Even with public endpoints, you can configure your database so that it is only accessible from trusted clients or networks. You can also ensure the network traffic between the database and your clients in your Virtual Cloud Network or your on-premises network stays private and does not traverse the public internet. To configure databases with public endpoints, you can use the following features: The Service Gateway for connecting your clients running in your Virtual Cloud Network to the database privately without going through the public internet. FastConnect or VPN Connect for connecting your on-premises clients to the database privately without going through the public internet. Network access control lists (ACLs) for restricting access to your database from only specific client IP addresses or networks so that untrusted clients cannot reach the database from the public internet. Note that Autonomous Database on Shared Exadata Infrastructure always uses SSL authentication and encryption between clients and the database - for both public and private endpoints. All users of a database must have a wallet containing the required connectivity files like SSL certificates, SQL*Net and JDBC configuration files. Any user who does not have access to these files and a database username/password is not able to connect to the database. SSL certificates ensure that, even with a public endpoint, only authorized users are able to attempt to connect to a database. Stay tuned for more posts on networking and connectivity with Autonomous Database!


How to level up and invoke an Oracle Function (or any Cloud REST API) from within your Autonomous Database

  October '20 Edit: We recently released a pre-installed PL/SQL native SDK that makes calling Oracle Cloud (OCI) REST APIs even simpler. I recommend using that SDK for calls to OCI; continue with the method below if you need to call REST APIs in other cloud platforms. Recently, we released functionality in the Autonomous Database Shared Infrastructure (ADB-S) to enable a user to call REST API Endpoints using simple PL/SQL scripts that run directly in the database. ADB supports REST API calls from the 3 major cloud providers, Oracle Cloud, AWS and Azure. This feature gets rid of the need to deploy a running server someplace and then call a cloud REST API via a programmable tool such as oci-curl; instead, you simply call the DBMS_CLOUD.SEND_REQUEST procedure with familiar PL/SQL syntax and it runs straight out of your database server. This procedure makes use of the underlying UTL_HTTP database package. This also means you can invoke your REST API call script with existing in-built database features like triggers and job schedulers that your existing code is likely already using! Below, I give an example of using the DBMS_CLOUD package to invoke an Oracle Function. Oracle Functions are fully managed, highly scalable, Functions-as-a-Service platform, available in the Oracle Cloud  (OCI) and powered by the Fn Project open-source engine. An Oracle Function is intended to be a single unit of work deployed as a serverless, callable function in OCI, which is billed only for the resources consumed during the function's execution.   1)  Create and deploy an Oracle Function If you already have an Oracle Function deployed and ready to go in Oracle Cloud, jump to (2) Before we can jump right into deploying an Oracle Function, we must perform the following steps to get set up. Since there are several steps, we will defer to the well-structured Functions documentation: Configure your Tenancy to create an Oracle Function Configure your Client Environment for Oracle Function Development   Next, we will be deploying a basic Oracle Function that accepts a parameter during invocation and returns "Hello <parameter>!" as a response. This involves registering a Docker image with the helloworld-func function code, and then deploying the function to an application in Oracle Functions. Create and deploy your Oracle Function     2) Invoke your Oracle Function using DBMS_CLOUD   Next, we will use the new DBMS_CLOUD functionality in ADB to invoke the function we just deployed. Open up SQL Developer Web from ADB instance Service Console and run the following scripts as directed. If you don't yet have an ADB instance and need a guide on how to set one up click here. Create a user credential that is required for authentication to call OCI APIs by filling in your user_ocid, tenancy_ocid, private_key and fingerprint. Click here if you are unsure where to find this information. BEGIN DBMS_CLOUD.CREATE_CREDENTIAL (        credential_name => 'OCI_KEY_CRED',        user_ocid       => 'ocid1.user.oc1..aaaaaaaam2...',        tenancy_ocid    => 'ocid1.tenancy.oc1..aaaaaaaakc...',        private_key     => 'MIIEogIBAAKCAQEAtU...',        fingerprint     => 'f2:db:d9:18:a4:aa:fc:83:f4:f6..'); END; /   And finally, we use the SEND_REQUEST procedure to invoke the deployed function using the function endpoint. 
You may identify your function's invoke endpoint using this CLI command or simply copying it from your Oracle Cloud UI under Developer Services     Replace the uri parameter below with your Function's invoke endpoint and, if you like, your own custom name in the body parameter.   SET SERVEROUTPUT ON   DECLARE     resp DBMS_CLOUD_TYPES.resp;   BEGIN     --HTTP POST Request     resp := DBMS_CLOUD.send_request(                credential_name => 'OCI_KEY_CRED',                uri => 'https://5pjfkzq5fhq.ca-toronto-...actions/invoke',                method => DBMS_CLOUD.METHOD_POST,                body => UTL_RAW.cast_to_raw('Nilay')             );         -- Response Body in TEXT format   DBMS_OUTPUT.put_line('Body: ' || '------------' || CHR(10) ||   DBMS_CLOUD.get_response_text(resp) || CHR(10));      -- Response Headers in JSON format   DBMS_OUTPUT.put_line('Headers: ' || CHR(10) || '------------' || CHR(10) ||   DBMS_CLOUD.get_response_headers(resp).to_clob || CHR(10));     -- Response Status Code   DBMS_OUTPUT.put_line('Status Code: ' || CHR(10) || '------------' || CHR(10) ||   DBMS_CLOUD.get_response_status_code(resp)); END; /  If all went according to plan, you should see a 200 OK Response Code and the response text "Hello Nilay!" (or the name you passed into the function) as in the screen below. Note: While invoking Oracle Functions, use the troubleshooting guide to help resolve issues. In addition to that, here are two places I stumbled so you won't have to: If you are still seeing access errors after already uploading your public key to your User, or creating the policies to give Functions access to your user, you may want to wait a few minutes. It sometimes may take a little while for the changes to propagate. If you are seeing a 502 error and are using a public subnet in your VCN, you may need to create an internet gateway and set up its routing table (you can use CIDR 0.0.0.0/0 as default) to give public internet access to your gateway. Click here for more information about networking or to use the simplified Virtual Networking Wizard.     While this is a simple example to walk you through the necessary steps, Oracle Functions is an extremely powerful service and the ability to call Functions from the database expands Autonomous Database functionality to essentially any custom functionality you desire. You may of course also use the SEND_REQUEST procedure to call any other Cloud Service REST API, such as object storage operations, instance scaling operations, Oracle Streams, and many more!
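Since the post mentions that you can drive these REST calls from in-built database features like job schedulers, here is a minimal, hedged sketch of wrapping the same SEND_REQUEST call in a DBMS_SCHEDULER job so the function is invoked automatically; the job name, repeat interval and invoke endpoint below are placeholders you would replace with your own values.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'INVOKE_HELLOWORLD_FUNC_JOB',   -- hypothetical job name
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'[
      DECLARE
        resp DBMS_CLOUD_TYPES.resp;
      BEGIN
        -- Same call as in the example above: POST to the function's invoke endpoint
        resp := DBMS_CLOUD.send_request(
                  credential_name => 'OCI_KEY_CRED',
                  uri             => 'https://<your-function-invoke-endpoint>/actions/invoke',
                  method          => DBMS_CLOUD.METHOD_POST,
                  body            => UTL_RAW.cast_to_raw('Nilay'));
      END;]',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; BYHOUR=6',          -- run once a day at 06:00
    enabled         => TRUE);
END;
/
Once created, the job runs entirely inside the database, so there is still no external server to manage for the scheduled invocations.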


Newsletter

Autonomous Database Newsletter - February 18-2020

  Autonomous Database on Shared Exadata Infrastructure February 18, 2020       Welcome to our first customer newsletter for Autonomous Database on Shared Exadata Infrastructure. This first newsletter covers the following new features and topics: Database Vault now available APEX 19.2 now available in Always Free ADB Multiple DB support New Graph Server supports ADB Easier migrations New data centers Global Leaders Program - Day With Development Quick Links   Don't forget to checkout our "What's New" page for Autonomous Database on oracle.com, click here.   BACK TO TOP   (Shared Infrastructure Only) NEW - Using Oracle Database Vault with Autonomous Database Oracle Database Vault implements powerful security controls for your Autonomous Database. These unique security controls restrict access to application data by privileged database users, reducing the risk of insider and outside threats and addressing common compliance requirements.     LEARN MORE   DOC: Using Oracle Database Vault with ADW: click here DOC: Using Oracle Database Vault with ATP: click here. PPT: OOW2018 - Autonomous and Beyond: Security in the Age of the Autonomous Database: click here DOC: Database 19c documentation for Database Vault, click here.     SEE MORE   VIDEO: Database Vault Overview, click here. PDF: Database Vault FAQ - click here PDF: Database Vault Data Sheet - click here PDF: Oracle Database Vault Overview - click here PDF: Oracle Database Vault Best Practices - click here PDF: Oracle Database Vault DBA Best Practices - click here     DO MORE   VIDEO: Database Vault - Enforcing separation of duties, click here. VIDEO: Database Vault - Enforcing Trusted Path Access Control, click here. VIDEO: Database Vault Advanced Use Cases II - Operations Control, Database Vault Simulation Mode, click here.       BACK TO TOP   (Shared Infrastructure Only)   NEW - Always Free Autonomous Database now includes APEX 19.2 Oracle APEX 19.2 on Always Free Autonomous Database provides a preconfigured, fully managed and secured environment to both build and deploy world-class data-centric applications. Oracle APEX applications developed on-premise can be easily deployed to Oracle APEX on the free version Autonomous Database.     LEARN MORE   What is Application Express: click here. DOC: APEX 19c Getting Started Guide: click here. DOC: Creating Applications with Oracle Application Express in ADW: click here. DOC: Creating Applications with Oracle Application Express in ATP: click here.     DO MORE   Create a simple Human Resources (HR) application for the fictitious AnyCo Corp which manages departmental and employee information stored in database tables, click here. VIDEO: Learn how to create an APEX Application using a spreadsheet in just 2 minutes! click here. HOL: List of APEX hands-on labs: click here.       BACK TO TOP   (Shared Infrastructure Only) NEW - Multiple Database Versions and Availability by Region Depending on the region where you provision or clone your database, Autonomous Database now supports one or more Oracle Database versions. The version availability depends on the region - note that some regions do not support multiple Oracle Database versions. When multiple database versions are available, you choose an Oracle Database version when you provision or clone a database   LEARN MORE   DOC: Oracle Database versions and availability by region for ADW: click here. DOC: Oracle Database versions and availability by region for ATP: click here. 
If you are using ADB with Oracle Database 19c there are additional database features available in the database: DOC: ADW Oracle Database 19c features, click here. DOC: ATP Oracle Database 19c features, click here.     BACK TO TOP   (Shared Infrastructure Only) NEW - Availability of Oracle Graph Server and Client 20.1 Oracle Graph Server and Client 20.1 is a software package that works with Autonomous Database. It includes the in-memory analytics server (PGX) and client libraries required to work with the Property Graph feature in Autonomous Database. With graph analytics you can explore and discover connections and patterns in social networks, IoT, big data, data warehouses, and complex transaction data for applications such as fraud detection in banking, customer 360, and smart manufacturing.   LEARN MORE   Download is available on edelivery.oracle.com click here. DOC: Installation instructions click here. See section 1.7 Using Oracle Graph with the Autonomous Database. WEB: Graph Server page on oracle.com click here.     SEE MORE   PPT: Demystifying Graph Analytics for the Non-expert, click here PPT: Using Graph Analytics for New Insights, click here     BACK TO TOP   (Shared Infrastructure Only)   NEW - Features to Simplify Migrations To ADB 1. Database Resident Connection Pool (DRCP) Using DRCP provides you with access to a connection pool in your ADB that enables a significant reduction in key database resources required to support many client connections. See Use Database Resident Connection Pooling with Autonomous Database for more information. DOC: ADW click here. DOC: ATP click here.   2. Set MAX_STRING_SIZE value By default the Autonomous Data Warehouse database uses MAX_STRING_SIZE set to the value EXTENDED. To support migration from older Oracle Databases or applications you can set MAX_STRING_SIZE to the value STANDARD. See Checking and Setting MAX_STRING_SIZE for more information. DOC: ADW click here. DOC: ATP click here.   3. Number of concurrent statements The maximum number of concurrent statements is increased. See Predefined Database Service Names for Autonomous Data Warehouse for more information. DOC: ADW click here. DOC: ATP click here.       BACK TO TOP   (Shared Infrastructure Only) NEW - More Data Centers Now Online Autonomous Database is now available in the following data centers: Jeddah, Saudi Arabia Osaka, Japan Melbourne, Australia Amsterdam, Netherlands For more information on the status of services in each data center click here. For more information about how to subscribe to these new regions, click here. To switch to the new region, use the Region menu in the Console. See Switching Regions for more information, click here.     BACK TO TOP   EVENTS Oracle Global Leaders - Day With Development As part of our Global Leaders program, Oracle Product Management will be hosting a series of workshops for Architects, DBAs, Application Developers and for those interested in learning more about Oracle Information Management. Content will cover a general architecture overview, various cloud service capabilities (such as Autonomous Data Warehouse, Oracle Data Integration and Oracle Analytics Cloud) and how these services work together. All Services will be shown live so you will experience the look and feel of each product along with how to work with our complete range of data warehouse services. 
The schedule for this series of "Day With Development" events in the US is:
Feb 21 - Chesterfield/St Louis
Feb 28 - Vancouver
March 26 - Irving
April 20 - Nashville
April 21 - Louisville
May 21 - New York
The schedule for this series of "Day With Development" events in EMEA is:
March 3 - Madrid, Spain
March 5 - Roma, Italy
March 9 (Morning) - Dubai, United Arab Emirates
March 9 (Afternoon) - Dubai, United Arab Emirates
March 12 - Istanbul, Turkey
March 16 - Warszawa, Poland
March 18 - Colombes cedex, France
March 23 - Moscow, Russian Federation
Please use the above links to register for an event in your area and get more information on the general Terms and Conditions.
BACK TO TOP
oracle.com ADW ATP Documentation ADW ATP TCO Calculator Serverless Dedicated Cloud Cost Estimator Autonomous Database ADB New Features Autonomous Database Schema Advisor Autonomous Database Customer Forums ADW ATP
BACK TO TOP


Newsletter

Announcing Our New Customer Newsletter for Autonomous Database - Don't Miss Out!

Welcome I am happy to announce that I will be posting a regular customer newsletter for Autonomous Database (on Shared Exadata Infrastructure) on this blog. It will cover the latest features added to Autonomous Database with links to additional collateral such as specific doc pages, videos, whitepapers, etc. Typically we are delivering new features every 3-4 weeks so this newsletter will appear on a regular basis throughout the year. How Do I Subscribe? If you want to subscribe to an RSS feed that will automatically refresh when each newsletter is posted then here are the steps... 1) Goto blogs.oracle.com/datawarehousing 2) Click on the three horizontal lines next to a magnifying glass in the top-right corner of the page to open the menu showing the list of categories for this blog 3) Now select the category "Newsletter" This will take you to the following page...https://blogs.oracle.com/datawarehousing/newsletter (yes, you are quite correct - I could have simply given you this link at the top of the page but this way you get to view the different categories available on the data warehouse blog so you can easily register for other areas). 4) Now click on the last icon on the right in the group of four...this is the RSS feed icon 5) If you just want the RSS feed to include the newsletters then select the second option "Only This Category's Posts" All done! You will probably get a pop-up to authorise access to your local application for viewing RSS feeds.  When Is The First Edition The first edition will be out very soon! Yes, it's free. Hope you enjoy our newsletter.  


Newsletter

Announcing Our New Customer Newsletter for Autonomous Database - Don't Miss Out!

I am happy to announce that I will be posting a regular customer newsletter for Autonomous Database (on Shared Exadata Infrastructure) on this blog. It will cover the latest features added to Autonomous Database with links to additional collateral such as specific doc pages, videos, whitepapers, etc. Typically we are delivering new features every 3-4 weeks so this newsletter will appear on a regular basis throughout the year. How Do I Subscribe? If you want to subscribe to an RSS feed that will automatically refresh when each newsletter is posted then here are the steps... 1) Go to blogs.oracle.com/datawarehousing 2) Click on the three horizontal lines next to a magnifying glass in the top-right corner of the page to open the menu showing the list of categories for this blog 3) Now select the category "Newsletter" This will take you to the following page...https://blogs.oracle.com/datawarehousing/newsletter (yes, you are quite correct - I could have simply given you this link at the top of the page but this way you get to view the different categories available on the data warehouse blog so you can easily register for other areas). 4) Now click on the last icon on the right in the group of four...this is the RSS feed icon 5) If you just want the RSS feed to include the newsletters then select the second option "Only This Category's Posts" All done! You will probably get a pop-up to authorise access to your local application for viewing RSS feeds.  When Is The First Edition Out? The first edition is out now, see here: https://blogs.oracle.com/datawarehousing/autonomous-database-newsletter-february-18-2020. Yes, it's free. Hope you enjoy our newsletter.


Autonomous

How to Send an Email using UTL_SMTP in Autonomous Database

Autonomous Database now supports UTL_HTTP and UTL_SMTP PL/SQL packages. You may already be familiar with these as they are commonly used in various scenarios. In this blog post, we will focus on how to send an email using the UTL_SMTP package. Before we dive into the details, it's worth noting that both packages are subject to certain restrictions. Even though we'll cover some of those here, you might still want to see PL/SQL Packages with Restrictions in the documentation. The UTL_SMTP package is designed for sending emails over Simple Mail Transfer Protocol (SMTP) and it provides numerous interfaces to the SMTP commands (See UTL_SMTP for more details). Thanks to these interfaces, it's in fact quite simple to send an email from within the database. However, as mentioned earlier, there are some restrictions that we need to be aware of. For example, the only supported email provider currently is Oracle Cloud Infrastructure (OCI) Email Delivery service. In other words, we need to have a working Email Delivery configuration before we can start sending emails on Autonomous Database (ADB). Let's start! Here's the list of steps that we are going to follow to successfully send an email on ADB: Configure Email Delivery Service Allow SMTP Access for ADMIN via an Access Control Entry (ACE) Create a PL/SQL Procedure to Send Email Send a Test Email Configure Email Delivery Service Oracle Cloud Infrastructure Email Delivery is an email sending service that provides a fast and reliable managed solution for sending high-volume emails (See Overview of the Email Delivery Service for more details). In this step, we are going to configure Email Delivery in OCI console as shown below: Generate SMTP credentials for a user. Open the navigation menu. Under Governance and Administration, go to Identity and click Users. Locate the user in the list that has permissions to manage email, and then click the user's name to view the details. Click SMTP Credentials. Click Generate SMTP Credentials. Enter a Description of the SMTP Credentials in the dialog box. Click Generate SMTP Credentials. A user name and password is displayed. Note: Whether you create a new user (See Adding Users) or choose to use an existing user for these steps, you need to make sure the user is assigned to a group with permissions to manage approved-senders and suppressions (See Set Up Permissions for more details). For example, our user in this example is assigned to a group that has the following policies for approved-senders and suppressions: Allow group <Your Group Name> to manage approved-senders in tenancy Allow group <Your Group Name> to manage suppressions in tenancy Create an approved sender for Email Delivery. We need to do this for all email addresses we use as the "From" with UTL_SMTP.MAIL (See Managing Approved Senders for more information). Open the navigation menu. Under Solutions and Platform, go to Email Delivery and click Email Approved Senders. Click Create Approved Sender within the Approved Senders view. Enter the email address you want to list as an approved sender in the Create Approved Sender dialog box. Click Create Approved Sender. The email address is added to your Approved Senders list. 
Allow SMTP Access for ADMIN via an Access Control Entry (ACE) Now we need to append an access control entry (ACE) using the DBMS_NETWORK_ACL_ADMIN package for ADMIN user to access SMTP for a specific host and port: Note: You can find your SMTP endpoint (host) and eligible ports by opening the navigation menu following Email Delivery --> Email Configuration. begin -- Allow SMTP access for user ADMIN dbms_network_acl_admin.append_host_ace( host =>'smtp.us-ashburn-1.oraclecloud.com', lower_port => 587, upper_port => 587, ace => xs$ace_type( privilege_list => xs$name_list('SMTP'), principal_name => 'ADMIN', principal_type => xs_acl.ptype_db)); end; / Create a PL/SQL Procedure to Send Email CREATE OR REPLACE PROCEDURE SEND_MAIL ( msg_to varchar2, msg_subject varchar2, msg_text varchar2 ) IS mail_conn utl_smtp.connection; username varchar2(1000):= 'ocid1.user.oc1.username'; passwd varchar2(50):= 'password'; msg_from varchar2(50) := 'adam@example.com'; mailhost VARCHAR2(50) := 'smtp.us-ashburn-1.oraclecloud.com'; BEGIN mail_conn := UTL_smtp.open_connection(mailhost, 587); utl_smtp.starttls(mail_conn); UTL_SMTP.AUTH(mail_conn, username, passwd, schemes => 'PLAIN'); utl_smtp.mail(mail_conn, msg_from); utl_smtp.rcpt(mail_conn, msg_to); UTL_smtp.open_data(mail_conn); UTL_SMTP.write_data(mail_conn, 'Date: ' || TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS') || UTL_TCP.crlf); UTL_SMTP.write_data(mail_conn, 'To: ' || msg_to || UTL_TCP.crlf); UTL_SMTP.write_data(mail_conn, 'From: ' || msg_from || UTL_TCP.crlf); UTL_SMTP.write_data(mail_conn, 'Subject: ' || msg_subject || UTL_TCP.crlf); UTL_SMTP.write_data(mail_conn, 'Reply-To: ' || msg_to || UTL_TCP.crlf || UTL_TCP.crlf); UTL_SMTP.write_data(mail_conn, msg_text || UTL_TCP.crlf || UTL_TCP.crlf); UTL_smtp.close_data(mail_conn); UTL_smtp.quit(mail_conn); EXCEPTION WHEN UTL_smtp.transient_error OR UTL_smtp.permanent_error THEN UTL_smtp.quit(mail_conn); dbms_output.put_line(sqlerrm); WHEN OTHERS THEN UTL_smtp.quit(mail_conn); dbms_output.put_line(sqlerrm); END; / Notes: username: Specifies the SMTP credential username. passwd: Specifies the SMTP credential password. msg_from: Specifies one of the approved senders. mailhost: Specifies the SMTP connection endpoint. Send a Test Email In order to verify that we configured everything accurately and created a working procedure (SEND_MAIL), we will send a test email: execute send_mail('taylor@example.com', 'Email from Oracle Autonomous Database', 'Sent using UTL_SMTP'); To summarize, UTL_SMTP package is now supported in Autonomous Database and we just explored how to send an email by taking advantage of this package the OCI Email Delivery service. If you'd like to learn more about Email Delivery or UTL_SMTP in ADB, please follow the documentation links referenced above.  
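One extra troubleshooting tip: if the test email never arrives, a quick sanity check (a hedged example; the view is a standard dictionary view, but your host and principal values will differ) is to confirm that the access control entry from the earlier step was actually recorded for the SMTP endpoint:
SELECT host, lower_port, upper_port, principal, privilege
FROM   dba_host_aces
WHERE  host = 'smtp.us-ashburn-1.oraclecloud.com';
You should see one row granting the SMTP privilege to ADMIN on port 587; if it is missing, re-run the DBMS_NETWORK_ACL_ADMIN block above before debugging the procedure itself.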


How to Launch a Virtual Cloud Network Using the Networking Quickstart Wizard and Connect to Your Autonomous Database

A virtual cloud network (VCN) is a virtual, private network you set up in Oracle data centers. It is very similar to a traditional network with firewall rules and various gateways. When you work with Oracle Cloud Infrastructure (OCI), setting up a VCN for your cloud resources is usually one of the first things that you might be doing. However, configuring a VCN can be a bit involving when you think about all the sub-components that need attention such as subnets, route tables, gateways, security lists, etc. (See OCI Networking documentation for more details). The good news is that you can now launch a VCN with connectivity to the internet and Oracle Services Network in just a couple steps thanks to the new OCI Networking Quickstart wizard.  The wizard basically creates a VCN with regional public and private subnets, a NAT gateway, service gateway and an internet gateway along with the necessary route table and security list rules (including SSH access). It only prompts you to specify the IP CIDR block for the VCN and subnets. This reduces the number of steps and amount of time it takes to setup your network to 1-2 minutes. In this blog post, we are going to explore how to take advantage of this wizard to quickly launch a VCN as well as creating a compute instance in this VCN to connect to our Autonomous Database. Here's the outline of the steps that we are going to follow: Create a VCN Using the Networking Quickstart Wizard Provision a Compute Instance (Virtual Machine) Connect to our ADW Instance Create a VCN Using the Networking Quickstart Wizard We have two options to launch the wizard:  From Oracle Cloud Console home page under 'Quick Actions': In the navigation menu, follow 'Networking' --> 'Virtual Cloud Networks': In the wizard dialog, we will select 'VCN with Internet Connectivity' and click 'Start Workflow': In the next page, we will enter the VCN name and specify the compartment, VCN and subnet CIDR blocks: We will click 'Next' to review our configuration: As the final step, we'll hit 'Create' and watch all the components being configured: Provision a Compute Instance (Virtual Machine) In the previous step, we have seen how easy it is to launch a VCN in just a couple minutes. As you may remember, our end goal is to access our ADW instance from within that VCN and we are almost there! Now, all we have to do is to provision a virtual machine (VM) in the VCN that we just created. Our VM can either be on a public subnet or a private subnet. A VM on a public subnet has the option to have a public IP address; on the other hand, a VM on a private subnet only has a private IP address that can be accessed within the same VCN. Just to demonstrate how we can access to our ADW instance from both public and private subnets, we will create two VMs as shown below. On public subnet: On private subnet: Connect to our ADW Instance So far we launched our VCN and created two VMs in it. At this point it's important to remember that one of our VMs (ctuzla-vcnpublic) is on a public subnet while the other VM (ctuzla-vcnprivate) is on a private subnet, meaning it doesn't have an assigned public IP address and cannot be accessed via the internet. Connecting to ADW from our VM on the public subnet is fairly easy. After copying our ADW wallet into our VM, we can just SSH into the VM using the public IP address and connect to the ADW instance in SQL Plus. In order to connect to ADW from our VM on the private subnet, we need to first connect to the VM itself and it requires couple additional steps. 
Since ctuzla-vcnprivate is on a private subnet, we will first connect to ctuzla-vcnpublic and ssh into ctuzla-vcnprivate using its private IP address (please note that we need to copy the private SSH key of ctuzla-vcnprivate into ctuzla-vcnpublic to be able to do this). Let's see all these steps in action (The steps below assume that we already have Oracle Instant Client set up and our ADW wallet available in both VMs): Connection from a public subnet (ctuzla-vcnpublic): ctuzla-mac$ ssh -i /Users/ctuzla/id_rsa opc@129.143.200.127 Last login: Wed Dec 11 18:43:03 2019 [opc@ctuzla-vcnpublic ~]$ [opc@ctuzla-vcnpublic instantclient_19_5]$ ./sqlplus ADMIN/************@adw_high SQL*Plus: Release 19.0.0.0.0 - Production on Wed Dec 11 19:51:33 2019 Version 19.5.0.0.0 Copyright (c) 1982, 2019, Oracle. All rights reserved. Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production Version 18.4.0.0.0 SQL> select * from dual; D - X Connection from a private subnet (ctuzla-vcnprivate): Please note that we will first connect to ctuzla-vcnpublic then jump into ctuzla-vcnprivate ctuzla-mac$ ssh -i /Users/ctuzla/id_rsa opc@129.143.200.127 Last login: Wed Dec 11 19:38:41 2019 [opc@ctuzla-vcnpublic ~]$ [opc@ctuzla-vcnpublic ~]$ ssh -i /home/opc/id_rsa opc@10.0.1.3 Last login: Wed Dec 11 19:33:30 2019 [opc@ctuzla-vcnprivate ~]$ [opc@ctuzla-vcnprivate instantclient_19_5]$ ./sqlplus ADMIN/************@adw_high SQL*Plus: Release 19.0.0.0.0 - Production on Wed Dec 11 20:13:40 2019 Version 19.5.0.0.0 Copyright (c) 1982, 2019, Oracle. All rights reserved. Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production Version 18.4.0.0.0 SQL> select * from dual; D - X In this blog post, we have explored how to launch a VCN using the Networking Quickstart wizard, create a compute instance (on both public and private subnet), and connect to an ADW instance from within this VCN. As we have seen, the wizard turns the VCN creation into a much simpler and quicker process. If you would like to learn more about the new Networking Quickstart wizard, please check out the documentation here.


Announcing: Big Data Appliance X8-2

Big Data Appliance X8-2 is the 7th hardware generation of Oracle's leading Big Data platform, continuing the platform evolution from Hadoop workloads to Big Data, SQL, Analytics and Machine Learning workloads. Big Data Appliance combines dense IO with dense Compute in a single server form factor. The single form factor enables our customers to build a single data lake, rather than replicating data across more specialized lakes. What is New? The current X8-2 generation is based on the latest Oracle Sun X8-2L servers, and leverages that infrastructure to deliver enterprise-class hardware for big data workloads. The latest generation sports more cores, more disk space and the same level of memory per server. Big Data Appliance retains its InfiniBand internal network, supporting a multi-homed Cloudera CDH cluster setup. Why a Single Form Factor? Many customers are embarking on a data unification effort, and the main data management concept used in that effort is the data lake. Within this data lake, we see and recommend a set of workloads to be run, as is shown in this logical architecture: In essence what we are saying is that the data lake will host the Innovation or Discovery Lab workloads as well as the Execution or production workloads on the same systems. This means that we need an infrastructure that can both deal with large data volumes in a cost-effective manner and handle high compute volumes on a regular basis. Leveraging the hardware footprint in BDA enables us to run both these workloads. The servers come with 2 * 24 cores AND 12 * 14TB drives, enabling very large volumes of data and CPUs spread across a number of workloads. So rather than dealing with various form factors, and copying data from the main data lake to a sideshow Discovery Lab, BDA X8-2 consolidates these workloads. The other increasingly important data set in the data lake is streaming into the organization, typically via Apache Kafka. Both the CPU counts and the memory footprints can provide a great Kafka cluster, connecting it over InfiniBand to the main HDFS data stores. Again, while these nodes are very IO dense for Kafka, the simplicity of using the same nodes for any of the workloads makes Big Data Appliance a great Big Data platform choice. What is in the Box? Apart from the hardware specs, the software that is included in Big Data Appliance enables the data lake creation in a single software & hardware combination. Big Data Appliance comes with the full Cloudera stack, enabling the data lake as drawn above, with Kafka, HDFS and Spark all included in the cost of the system. The specific licensing for Big Data Appliance makes the implementation cost effective, and added to the simplicity of a single form factor makes Big Data Appliance an ideal platform on which to implement and grow the data lake into a successful venture. Where can I do test and development for BDA? Quite frequently our customers need to run some tests once and don't need to retain the environment afterwards. Another use case is a customizable environment, where customers need to try some Hadoop reconfiguration, but the risk of a mistake is quite high and it is challenging to redeploy the whole BDA environment if something fails. Cloud solutions solve these problems, and our team is working hard to soon release Big Data Service, which will help our customers set up hybrid environments (with BDA on the ground and Big Data Service in the cloud).


Autonomous

Is your data in parts and lying in other object stores? No worries, Autonomous Database has you covered!

Whether you personally believe in the multi-cloud lifestyle that’s in vogue or not, the reality is you may be dealing with different departments or orgs within your company that store their data in their cloud of choice, and partitioned in various ways. Autonomous DB doesn't judge, we support querying or ingesting your data whether it lies in Oracle Cloud Object Store, Azure, or AWS!  Let me walk you through a simple example of how you may query your multi-part data lying in any of these object stores. We will be using the new partitioned external table functionality in the DBMS_CLOUD package, which creates a partitioned table structure over flat files lying in your external storage. (ie. a query on this table fetches data directly from the flat files in external storage, instead of the database's own datafiles). You may use the other functions in this package if you need to apply this to a non-partitioned external table, or to actually copy data into your database. Let's begin with some cloud concepts we will be using further in the post: In general, an Object Store is a cloud-based, scalable, serverless storage platform that offers storage of “objects”. These objects can be unstructured data of any content type, including analytic data and rich content, like images and videos. Oracle’s Object Storage, AWS S3 and Azure Blob Storage are all examples of object stores. A private object store URL is a URL that requires authentication via a credential to be accessed. Pre-Authenticated URLs and Public URLs don’t require any credentials to be accessed. Having the complete URL will give someone access to that data. Next, download the 2 part file in a zip here. In keeping with some of my previous posts, we will use the small weather history data set from Charlotte, NC in Comma Separated Values (CSV) format. The file which has "Part 1" in its name has July 2014 weather data, and that with "Part 2" will be the following month’s, Aug 2014 weather data.   Now select the cloud object store that you will be using to jump to the appropriate section:               .                 Using Oracle Cloud Object Storage   Step 1: Unzip and upload the two downloaded files to your Oracle Cloud Object Storage bucket Create an Object Store bucket and unzip and upload the two Weather history files to your object store. You can refer to this hands-on lab for a detailed walkthrough of how to do this in your Oracle Cloud tenancy.     Keep note of the "Visibility" (you can use Private or Public) of the bucket to which you are uploading the files. This will come in handy in Step 3.   Step 2: Create a credential to access your object store for your Private URLs Dive into SQL Developer that is connected to your Autonomous Database, and create a credential for your object store like below. The username is your Oracle Cloud Infrastructure username. The password is your Oracle Cloud Infrastructure generated auth token: BEGIN   DBMS_CLOUD.CREATE_CREDENTIAL (     credential_name => 'OBJ_STORE_CRED',     username => '<OCI username>',     password => '<OCI auth token>'   ); END; / Note: If you are unsure about how to create an auth token, refer to Step 8 in in the same lab tutorial mentioned above. While we have used a username and password for simplicity here, we recommend that you use native authentication for your production systems added security and versatility.   
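As an optional check before Step 3, you can confirm that the credential and bucket are reachable from the database by listing the uploaded files with DBMS_CLOUD.LIST_OBJECTS. This is a hedged sketch; the namespace and bucket names in the URI are placeholders for your own values:
SELECT object_name, bytes
FROM   DBMS_CLOUD.LIST_OBJECTS(
         credential_name => 'OBJ_STORE_CRED',
         location_uri    => 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/<your-namespace>/b/<your-bucket>/o/');
You should see the two Charlotte_NC_Weather_History CSV files in the output before moving on.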
Step 3: Create a Partitioned External Table on top of your two-part Weather History data Now that we have a credential ready, run the PL/SQL script below using your Object Store URLs in the location parameters. You may either construct the URLs yourself by looking at this URL format or find the URL from the details of your file in the object store console as in the following screens.       Since we have a month's worth of data in each file, we call the DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE function to create the partitioned external table with 2 partitions using the REPORT_DATE column. Note: We are using a credential parameter in this call to DBMS_CLOUD to illustrate the use of files that lie in a private object storage bucket (ie. that have private URLs). If we use public bucket or pre-authenticated URLs, we can omit the credential parameter entirely. We can also use a mix of private and public URLs from the same object store if required.   BEGIN  DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE(     table_name =>'WEATHER_REPORT_EXTERNAL_PART',     credential_name =>'OBJ_STORE_CRED',     format => json_object('type' value 'csv', 'skipheaders' value '1',    'dateformat' value 'mm/dd/yy'),   column_list => 'REPORT_DATE DATE,   ACTUAL_MEAN_TEMP NUMBER,   ACTUAL_MIN_TEMP NUMBER,                      ACTUAL_MAX_TEMP NUMBER,   AVERAGE_MIN_TEMP NUMBER,                      AVERAGE_MAX_TEMP NUMBER,   AVERAGE_PRECIPITATION NUMBER',   partitioning_clause =>     'partition by range (REPORT_DATE)           (partition p1 values less than (to_date(''01-AUG-2014'',''DD-MON-YYYY'')) location               ( ''https://objectstorage.us-ashburn-1.oraclecloud.com/                   n/adwctraining5/b/partitiontest/o/Charlotte_NC_Weather_History_Part1.csv'')            ,            partition p2 values less than (to_date(''01-SEP-2014'',''DD-MON-YYYY'')) location               ( ''https://objectstorage.us-ashburn-1.oraclecloud.com/                   n/adwctraining5/b/partitiontest/o/Charlotte_NC_Weather_History_Part2.csv'')            )'   ); END; /   ↓ Click here to Jump to Step 4     |    Click here to Jump to object store selection ↑   Using Microsoft Azure Blob Storage   Step 1: Unzip and upload the two downloaded files to your Azure Blob Storage container Navigate to your Azure Blob Storage account, create a container and unzip and upload the two Weather history files. (Refer to these detailed steps on how to do this if necessary).     Keep note of the "Access Level" of your container (you can use a Private or Public container) in which you uploaded the files. This will come in handy in Step 3.   Step 2: Create a credential to access your Blob storage for your Private URLs Dive into SQL Developer that is connected to your Autonomous Database, and create a credential for your blob store like below. The username is your Azure Storage account name. The password is your Azure storage account access key: BEGIN   DBMS_CLOUD.CREATE_CREDENTIAL (     credential_name => 'OBJ_STORE_CRED',     username => '<Azure Storage account name>',     password => '<Azure Storage account access key>'   ); END; / Note: If you are still unsure about how to find your Azure storage account access key, follow detailed steps here.   Step 3: Create a Partitioned External Table on top of your two-part Weather History data Now that we have a credential ready, run the PL/SQL script below using your Azure Blob store URLs in the location parameters. 
You may either construct the URLs yourself by looking at the URL format or find the URL from the details of your file in the blob store console as in the following screen:     Since we have a month's worth of data in each file, we call the DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE function to create the partitioned external table with 2 partitions using the REPORT_DATE column. Note:  We are using a credential parameter in this call to DBMS_CLOUD to illustrate the use of files that lie in a private Azure blob storage container (ie. that have private URLs). If we use a public access container, we can omit the credential parameter entirely. We can also use a mix of private and public URLs from the same object store if required. We don't currently support pre-authenticated URLs for Azure, that's coming soon. Feel free to comment below if your business relies heavily on this.   BEGIN  DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE(     table_name =>'WEATHER_REPORT_EXTERNAL_PART',     credential_name =>'OBJ_STORE_CRED',     format => json_object('type' value 'csv', 'skipheaders' value '1',    'dateformat' value 'mm/dd/yy'),   column_list => 'REPORT_DATE DATE,   ACTUAL_MEAN_TEMP NUMBER,   ACTUAL_MIN_TEMP NUMBER,                      ACTUAL_MAX_TEMP NUMBER,   AVERAGE_MIN_TEMP NUMBER,                      AVERAGE_MAX_TEMP NUMBER,   AVERAGE_PRECIPITATION NUMBER',   partitioning_clause =>     'partition by range (REPORT_DATE)           (partition p1 values less than (to_date(''01-AUG-2014'',''DD-MON-YYYY'')) location               ( ''https://nilaysobjectstore.blob.core.windows.net/                   externaltest/Charlotte_NC_Weather_History_Part1.csv'')            ,            partition p2 values less than (to_date(''01-SEP-2014'',''DD-MON-YYYY'')) location               ( ''https://nilaysobjectstore.blob.core.windows.net/                   externaltest/Charlotte_NC_Weather_History_Part2.csv'')            )'   ); END; /   ↓ Click here to Jump to Step 4     |    Click here to Jump to object store selection ↑   Using AWS S3 Storage   Step 1: Unzip and upload the two downloaded files to your AWS S3 Storage container Navigate to your AWS S3 storage account, create a bucket and unzip and upload the two Weather history files. (Refer to these detailed steps on how to do this if necessary).     Keep note of the Permissions of your bucket (you can use a Private or Public access) in which you uploaded the files. This will come in handy in Step 3.   Step 2: Create a credential to access your AWS S3 storage for your Private URLs Dive into SQL Developer that is connected to your Autonomous Database, and create a credential for your S3 store like below. The username is your AWS Access key ID. The password is your AWS secret access key: BEGIN   DBMS_CLOUD.CREATE_CREDENTIAL (     credential_name => 'OBJ_STORE_CRED',     username => '<AWS Access key ID>',     password => '<AWS secret access key>'   ); END; / Note: If you are still unsure about how to find your AWS access key ID and secret access key, follow detailed steps here.   Step 3: Create a Partitioned External Table on top of your two-part Weather History data Now that we have a credential ready, run the PL/SQL script below using your AWS S3 URLs in the location parameters. 
You may either construct the URLs yourself by looking at the URL format or find the URL from the details of your file in the S3 bucket console as in the following screen:     Since we have a month's worth of data in each file, we call the DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE function to create the partitioned external table with 2 partitions using the REPORT_DATE column. Note:  We are using a credential parameter in this call to DBMS_CLOUD to illustrate the use of files that lie in a private AWS S3 storage bucket (ie. that have private URLs). If we use a public access bucket, we can omit the credential parameter entirely. We can also use a mix of private and public URLs from the same object store if required. We don't currently support pre-authenticated URLs for AWS, but that's coming soon. Feel free to comment below if your business relies heavily on this. BEGIN  DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE(     table_name =>'WEATHER_REPORT_EXTERNAL_PART',     credential_name =>'OBJ_STORE_CRED',     format => json_object('type' value 'csv', 'skipheaders' value '1',    'dateformat' value 'mm/dd/yy'),   column_list => 'REPORT_DATE DATE,   ACTUAL_MEAN_TEMP NUMBER,   ACTUAL_MIN_TEMP NUMBER,                      ACTUAL_MAX_TEMP NUMBER,   AVERAGE_MIN_TEMP NUMBER,                      AVERAGE_MAX_TEMP NUMBER,   AVERAGE_PRECIPITATION NUMBER',   partitioning_clause =>     'partition by range (REPORT_DATE)           (partition p1 values less than (to_date(''01-AUG-2014'',''DD-MON-YYYY'')) location              (''https://nilaystests.s3-us-west-1.amazonaws.com/Charlotte_NC_Weather_History_Part1.csv'')            ,            partition p2 values less than (to_date(''01-SEP-2014'',''DD-MON-YYYY'')) location              (''https://nilaystests.s3-us-west-1.amazonaws.com/Charlotte_NC_Weather_History_Part2.csv'')            )'   ); END; / Step 4: Validate and Query your data We're done already! You may run the "DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE" function to validate that your partitioned external table was created correctly and can access its underlying data. (Validation is an optional but recommended step)   EXEC DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE('WEATHER_REPORT_EXTERNAL_PART'); If things went awry and you receive an error, you may look at the logfile and badfile to easily troubleshoot any errors you see. (Refer to this lab tutorial's Step 12 for more detail on this). If you sailed through with no errors, you may now query the table as a whole or by partition (ie. in this case, by month) with the script below. Notice that in the third query we select data between July 5th and 21st 2014, all of which lies in partition P1. This query therefore makes use of the performance benefits of partitioned external tables, and only needs to physically access the file named "Part 1" file under the hood!   SELECT REPORT_DATE, AVERAGE_PRECIPITATION FROM WEATHER_REPORT_EXTERNAL_PART; SELECT REPORT_DATE, AVERAGE_MIN_TEMP FROM WEATHER_REPORT_EXTERNAL_PART PARTITION (P2); SELECT * FROM WEATHER_REPORT_EXTERNAL_PART where REPORT_DATE between '05-JUL-2014' and '21-JUL-2014';     So, there you have it! Autonomous Database aims to not only simplify the provisioning and management of your database in the cloud, but also aims to provide easy to use functions to make your entire interaction with the database seamless and intuitive.   ↑ Click here to jump back to object store selection to try a different one
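A final troubleshooting footnote on Step 4: in case it helps, here is a hedged example of locating the logfile and badfile tables that DBMS_CLOUD generates for a validation run. The USER_LOAD_OPERATIONS view records one row per load or validate operation; the exact column list may vary slightly between database versions:
SELECT id, type, status, table_name, logfile_table, badfile_table
FROM   user_load_operations
ORDER  BY id DESC;
-- The LOGFILE_TABLE and BADFILE_TABLE values are the names of tables you can then
-- query directly (SELECT * FROM <logfile_table>) to see the errors and rejected rows.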


Autonomous

Keeping Your Autonomous Data Warehouse Secure with Data Safe - Part 2

In part 1 of this series we looked at how to get your ADW and OCI environment ready for using Data Safe. If you missed part 1 or need a quick refresher then the blog post is here: https://blogs.oracle.com/datawarehousing/keeping-your-autonomous-data-warehouse-secure-with-data-safe-part-1. In this post we are going to explore the process of connecting an Autonomous Data Warehouse instance to our newly deployed Data Safe environment. Remember that you deploy your Data Safe control center within a specific OCI regional data center - as is the case with all our other cloud services. Therefore, if you switch to a different data center then you will need to deploy a new Data Safe environment. Hope that makes sense! Launching the Data Safe Service Console In part 1 we got to the point of enabling Data Safe in the Frankfurt data center. Now when we log in to Oracle Cloud using our newly created OCI credentials we can pop open the hamburger menu and select Data Safe: and arrive on the Data Safe landing pad. The next step is to launch the Service Console (you may be wondering...why doesn't the Service Console just open automatically since the landing pad page is empty, apart from the Service Console button! Great question and we will come back to this towards the end of the series of posts when the landing pad page will show a lot more information). After clicking on the Service Console button a new window pops open which looks like this:   Right now, there is no information showing on any of our graphs or any of the other pages. This is because we have not registered our data warehouse instance so that's the next step. Registering an ADW with Data Safe We need to register our ADW with Data Safe before we can generate any of the reports that are part of the Data Safe library. To do that we need to go to the tab marked "Target" in the horizontal menu at the top of the page: Clicking on the "Register" button will pop open a form where we can input the connection details for our ADW... Data Safe is not limited to just working with Autonomous Databases and we could register any of the following: Note that Data Safe supports only serverless deployments for Autonomous Database. "Dedicated" is not currently supported. There is more information here: https://docs.oracle.com/en/cloud/paas/data-safe/udscs/supported-target-databases.html For ADW (and ATP) we first need to change the connection type to TLS which will add some additional fields to the form - this will be most recognisable if you have spent time configuring connections to ADW from DI/ETL or BI tools: It looks as if a lot of information is now required to register our ADW instance but the good news is that just about all the information we need is contained within a small zip file which we can download from our OCI ADB console. Essentially we need the wallet file for our instance. But first let's quickly complete the fields in the top part of the form:   Ok, now we need the information about our ADW instance and here's how you get it: Collecting Connection Information For ADW If we flip over to the OCI console page for our ADW instance we can see that there is a line for something called "OCID" which is the first piece of information we need to collect. There are two links next to it: "Show" and "Copy". Click on Copy and then flip over to our Data Safe page and paste in the OCID reference. Now we need things like hostname, port, service name and target distinguished name along with various secure wallet files.
To get this information we need to download the wallet file which can be accessed by clicking on the "DB Connection" button. On the pop-up form click the "Download Wallet" button and enter a password...note this down because we are going to need it again shortly... Once the file has been downloaded, find the file on your filesystem and unzip it. The result will be a folder containing the following files: Ok, back to our Target registration form on Data Safe....the data for the next four fields can all be found in the tnsnames.ora file. We are going to use the "low service" for this connection because running Data Safe reports is not an urgent, rush-rush workload. If you have no idea what a "low service" is then it might be a good idea to quickly read through the section on "Managing Concurrency and Priorities on Autonomous Data Warehouse" in section 12 of the ADW documentation. In simple terms...when we connect to an ADW instance we need to select a service (low, medium or high). These services map to LOW, MEDIUM, and HIGH consumer groups which have the following characteristics: HIGH: Highest resources, lowest concurrency. Queries run in parallel. MEDIUM: Fewer resources, higher concurrency. Queries run in parallel. LOW: Least resources, highest concurrency. Queries run serially. Anyway....as long as the jobs run, then we are going to be happy. Therefore, we need to find the details for our low-service connection in the tnsnames.ora file which will look something like this: adwdemo_low = (description=(address=(protocol=tcps)(port=1522)(host=xxxxxx.oraclecloud.com))(connect_data=(service_name=xxxxx_adwdemo_low.xxxxxxx))(security=(ssl_server_cert_dn="CN=xxxxxx.oraclecloud.com,OU=Oracle,O=Oracle Corporation,L=Redwood City,ST=California,C=US"))) You can copy & paste the host, port, service_name and ssl_server_cert_dn into the four fields below the "TLS" pulldown menu entry. So now our form looks like this... Now for the last few steps...make sure the wallet type is set to "JKS Wallet". For the Certificate/Wallet find the "truststore.jks" file from our downloaded and unzipped connection file. In the same directory/folder we can pick "keystore.jks" for the "Keystore Wallet". The next field needs the password we used on the OCI ADW console page when we downloaded the connection zip file so paste that in... Lastly, add the ADW instance username/password that we created in Part 1 of this series of blog posts - our user was called DATASAFE.   Before you click on the "Test Connection" button we need to run a PL/SQL script to give our new DATASAFE database user some privileges that will allow Data Safe to run through its library of checks... Click on the download button then search for the PL/SQL script dscs_privileges.sql. Using SQL Developer (or any other tool) we need to log in as our standard ADMIN user and run that script (copy & paste will do the trick). Check the log for the script and you should see something like this: Enter value for USERNAME (case sensitive matching the username from dba_users) Setting USERNAME to DATASAFE Enter value for TYPE (grant/revoke) Setting TYPE to GRANT Enter value for MODE (audit_collection/audit_setting/data_discovery/masking/assessment/all) Setting MODE to ALL Granting AUDIT_COLLECTION privileges to "DATASAFE" ...  Granting AUDIT_SETTING privileges to "DATASAFE" ...  Granting DATA_DISCOVERY role to "DATASAFE" ...  Granting MASKING role to "DATASAFE" ...  Granting ASSESSMENT role to "DATASAFE" ...  Done. 
Now we can test the connection... and everything should work and we should see a nice big green tick.

Finally we are ready to click on "Register Target". Now our target page shows the details of our newly registered database... Of course that's all we have done - register our ADW as a new target - so all the other pages are still empty, including the home page:

Wrap up for Part 2

In part 1 we set up our environment ready for using Data Safe and we enabled Data Safe within our regional data center - in this case the Frankfurt Data Center. In this post, Part 2, we have successfully added our existing ADW instance as a new target database in Data Safe.

Coming up in Part 3

In the next post we will start to explore some of the data discovery and data masking features that are part of Data Safe.

Learn More...

Our security team has created a lot of great content to help you learn more about Data Safe so here are my personal bookmarks:
Documentation - start here: https://docs.oracle.com/en/cloud/paas/data-safe/udscs/oracle-data-safe-overview.html
Data Safe page on Oracle.com - https://www.oracle.com/database/technologies/security/data-safe.html
Database Security Blog: https://blogs.oracle.com/cloudsecurity/db-sec
https://blogs.oracle.com/cloudsecurity/keep-your-data-safe-with-oracle-autonomous-database-today
https://blogs.oracle.com/cloudsecurity/keeping-your-data-safe-part-4-auditing-your-cloud-databases


Autonomous

Keeping Your Autonomous Data Warehouse Secure with Data Safe - Part 1

One of the big announcements at OpenWorld 2019 in San Francisco was Oracle Data Safe - a totally new cloud-based security control center for your Autonomous Data Warehouse (ADW) and it's completely FREE!

So what exactly does it do? Well, in summary Data Safe delivers essential data security capabilities as a service on Oracle Cloud Infrastructure. Essentially, it helps you understand the sensitivity of your data, evaluate risks to data, mask sensitive data, implement and monitor security controls, assess user security, monitor user activity, and address data security compliance requirements. Maybe a little video will help...

Data Safe Console

The main console dashboard page for Data Safe looks like this: giving you a fantastic window directly into the types of data sets sitting inside Autonomous Data Warehouse. It means you can...
Assess if your database is securely configured
Review and mitigate risks based on GDPR Articles/Recitals, Oracle Database STIG Rules, and CIS Benchmark recommendations
Assess user risk by highlighting critical users, roles and privileges
Configure audit policies and collect user activity to identify unusual behavior
Discover sensitive data and understand where it is located
Remove risk from non-production data sets by masking sensitive data
So let's see how easy it is to connect an existing Autonomous Data Warehouse to Data Safe and learn about the types of security reviews you can run on your data sets...

Getting ADW ready to work with Data Safe

To make this more useful to everyone I am going to take an existing ADW instance and create a new user called LOCAL_SH. Then I am going to copy the supplementary demographics, countries and customers tables from the read-only sales history demo schema to my new local_sh schema. This will give me some "sensitive" data points for Data Safe to discover when I connect my ADW to Data Safe.

CREATE USER local_sh IDENTIFIED BY "Welcome1!Welcome1";
GRANT DWROLE TO local_sh;
CREATE TABLE local_sh.supplementary_demographics AS SELECT * FROM sh.supplementary_demographics;
CREATE TABLE local_sh.customers AS SELECT * FROM sh.customers;
CREATE TABLE local_sh.countries AS SELECT * FROM sh.countries;

So what does the customers table look like? As you can see, there are definitely some columns that would help to personally identify someone. Those types of columns need to be hidden or masked from our development teams and business users... Now we know that we have some very sensitive data! If you are not quite as lucky as me and you are starting from a completely clean ADW and need to load some of your own data then check out the steps in our documentation guide that explains how to load your own data into ADW: https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/tasks_load_data.html

The next step is to create a user within my ADW instance that I will use in my Data Safe connection process so that my security review process is not tied to one of my application users.

CREATE USER datasafe IDENTIFIED BY "Welcome1!DataSafe1";
GRANT DWROLE TO datasafe;

What you are going to see later is that we have to run an installation script, which is available from the Data Safe console when we register a database. This user is going to own the required roles and privileges for running Data Safe, which is why I don't want to tie it to one of my existing application users.
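Before going any further it is worth a quick sanity check that the new users and the copied tables are all in place. A minimal sketch using standard data dictionary views (nothing here is Data Safe specific; run it as the ADMIN user):

-- both users should show up with DWROLE
SELECT grantee, granted_role
FROM   dba_role_privs
WHERE  grantee IN ('LOCAL_SH', 'DATASAFE');

-- the three copied tables should be present, and the column names give a hint
-- of which ones look personally identifying
SELECT table_name, column_name
FROM   all_tab_columns
WHERE  owner = 'LOCAL_SH'
ORDER  BY table_name, column_id;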
Do I have the right type of Cloud Account?

If you are new to Oracle Cloud or maybe created your account within the last 12 months then you can probably skip this section. For those of you who have had cloud accounts on Oracle Cloud for over two years, when you try to access Data Safe you may get an interesting warning message about federated accounts...

Accessing Data Safe

After you log into your cloud account click on the hamburger (three horizontal lines) menu in the top left corner. The pop-out menu will have Data Safe listed underneath Autonomous Transaction Processing... Click on Data Safe and this screen might appear! If this message doesn't appear then jump ahead a few paragraphs to "Enabling Data Safe".

Don't panic, all is not lost at this point. All you need to do is create a new OCI user. In the same hamburger menu list scroll down to the section for "Governance and Administration", select "Identity" and then select "Users"... Click on the big blue "Create User" button at the top of the screen and then fill in the boxes to create a new OCI user which we will use as the owner of your Data Safe environment. After you create the new user a welcome email should arrive with a link to reset the password...something similar to this:

Having created a completely new user it's important to enable all the correct permissions so that Data Safe can access the resources and autonomous database instances within your tenancy. For my user, called DataSafe, I already have an "administrators" group within my tenancy that contains all the privileges needed for Data Safe. In the real world it's probably prudent to set up a new group just for Data Safe administrators and then assign OCI privileges to just that group. There is more information about this process here: https://docs.oracle.com/en/cloud/paas/data-safe/udscs/required-permission-enabling-oracle-data-safe.html

Quick Recap

We have an existing Autonomous Data Warehouse instance set up. I have used SQL Developer to create a new user to own a small data set containing potentially sensitive information, copied some tables from an existing schema that I know has sensitive data points so I now have a working data set, and we have set up a new OCI user to own our Data Safe deployment.

Enabling Data Safe

Now we are ready to start working with Data Safe. From the hamburger menu select "Data Safe" and you should see this screen if it's the first time you have used it in your tenancy-region. You can see below that I am working in our Frankfurt Data Center and this is the first time I have logged into Data Safe: so to get to the next stage all we need to do is click on the big blue button to enable Data Safe. At which point we get the usual "working..." screen followed by the "ready to get to work" screen...

Wrap-up for Part 1

This is a wrap for this particular post. In the next instalment we will look at how to register an Autonomous Data Warehouse instance and then run some of the security reports that can help us track down those objects that contain sensitive data about our customers.

Learn More...
Our security team has created a lot of great content to help you learn more about Data Safe so here are my personal bookmarks: Documentation - start here: https://docs.oracle.com/en/cloud/paas/data-safe/udscs/oracle-data-safe-overview.html Data Safe page on Oracle.com - https://www.oracle.com/database/technologies/security/data-safe.html Database Security Blog: https://blogs.oracle.com/cloudsecurity/db-sec https://blogs.oracle.com/cloudsecurity/keep-your-data-safe-with-oracle-autonomous-database-today https://blogs.oracle.com/cloudsecurity/keeping-your-data-safe-part-4-auditing-your-cloud-databases        


Autonomous

Key Highlights from an Autonomous OpenWorld 2019

It's taken longer than expected (too many things to do post-OpenWorld!) but I have finally finished and published my review of OpenWorld 2019 and it is now available in the Apple iBooks store.

Why do you need this book? Well as usual there were so many sessions and hands-on labs at this year's conference it really is hard to know where to start when you click on the link (https://events.rainfocus.com/widget/oracle/oow19/catalogow19?) to access the content catalog guide. There are some filters that help you narrow down the huge list but it can take time to search and download all the most important content linked to Autonomous Database. To save you all that time searching, I have put together a complete review in beautiful iBook format! And if you didn't manage to get to San Francisco then here is the perfect way to learn about the key messages, announcements, roadmaps, features and hands-on training that will help you get the most from your Autonomous Database experience.

So what's in the book? The guide includes all the key sessions, labs and key announcements from this year's Oracle OpenWorld conference broken down into the following sections:
Chapter 1 - Welcome and key video highlights
Chapter 2 - List of key sessions, labs and videos with links to download the related presentations
Chapter 3 - Details of all the links you need to keep up to date on Oracle's strategy and products for Data Warehousing and Big Data. This covers all our websites, blogs and social media pages.
Chapter 4 - Everything you need to justify being at OpenWorld 2020

Where can I get it? If you have an Apple device then you will definitely want to get the iBook version which is available here: http://books.apple.com/us/book/id1483761470 If you are still stuck on Windows or stuck using an Android device then the PDF version is the way to go. The download link for this version is here: https://www.dropbox.com/s/7ejlmhmwhwpbdpw/ADB-Review-oow19.pdf?dl=0

Feedback If you think anything is missing then let me know via this blog by leaving a comment or send me an email (keith.laker@oracle.com). Hope you find the review useful and look forward to seeing you all at Moscone Center next year for OpenWorld 2020.


Autonomous

Getting A Sneak Preview into the Autonomous World Of 19c

If you have been using Autonomous Data Warehouse for a while now you will know that when you create a new data warehouse your instance will be based on Database 18c. But you probably also spotted that Database 19c is available via some of our other cloud services and LiveSQL also runs Database 19c. Which probably makes you wonder when Autonomous Data Warehouse will be upgraded to 19c? The good news is that we are almost there! We have taken the first step by releasing a "19c Preview" mode so you can test your applications and tools against this latest version of the Oracle Database in advance of ADW being autonomously upgraded to 19c. So if you are using Autonomous Data Warehouse today then now is the time to start testing your data warehouse tools and apps using the just released "19c Preview" feature.

Where is "19c Preview" available?

The great news is that you can enable this feature in any of our data centers! We are rolling it out right now so if you don't see the exact flow outlined below when you select your data center then don't panic because it just means we haven't got to your data center yet but we will, just give us a bit more time! If you can see the option to enable preview (I will show you how to do this in the next section) then let's work through the options to build your first 19c Preview instance.

How to Enable 19c Preview

Scenario 1 - creating a new instance based on 19c

Let's assume that in this case you want to create a completely new instance for testing your tools and apps. The great news is that the process is almost identical to the existing process for creating a new data warehouse. We have just added two additional mouse clicks. So after you login to the main OCI console you will navigate to your autonomous database console: Note that in this screenshot I am using our US, Ashburn data center. This is my default data center. Since this feature is available across all our data centers it doesn't matter which data center you use. Click on the big blue create "Autonomous Database" button to launch the pop-up create form... this should look familiar...I have added a display name for my new instance "ADW 19c Preview" and then I set the database name: in this case "ADW19CP". Looks nice and simple so far! Next we pick the workload type; in my example I selected the "Data Warehouse" workload... ...now this brings us to one of our other recent additions to ADW - you can now opt for a "Serverless" vs. "Dedicated" configuration. So what's the difference? Essentially...
Serverless is a simple and elastic deployment choice. Oracle autonomously operates all aspects of the database lifecycle from database placement to backup and updates.
Dedicated is a private cloud in public cloud deployment choice. A completely dedicated compute, storage, network and database service for only a single tenant. You get customizable operational policies to guide Autonomous Operations for workload placement, workload optimization, update scheduling, availability level, over provisioning and peak usage.
There is more information in one of my recent blog posts "There's a minor tweak to our UI - DEDICATED". In this demo I am going to select "Serverless"... and we are almost there...just note that as you scroll down the "Auto Scaling" box has been automatically selected which is now the default behaviour for all new data warehouse instances. Of course if you don't want auto scaling enabled simply untick the box and move on to the next step... Finally we get to the most important tick box on the form!
The text above the tick box says "New Database Preview Version 19c Available" and all you need to do is tick the "Enable Preview Mode" box. Of course as with everything in life you need to pay attention to the small print so carefully read the text in the yellow information box: you can't actually move forward until you agree to the T&Cs related to using preview mode. The most important part is that preview mode is time boxed and ends on December 15th 2019. NOTE: Obviously, we reserve the right to extend the date based on customer demand, however, the console will always show the correct date. Once you confirm agreement to the T&Cs you can scroll down, add your administrator password and finally click on the usual big blue "Create Autonomous Database" button. Notice that the instance page on the service console now has a yellow banner telling you when your preview period will end. There is a marker on the OCI console page so you can easily spot a "preview" instance in your list of existing autonomous database instances...

Now let's move on to scenario 2 and create a 19c clone - if you have no idea what a "clone" is then this blog post might help: "What is cloning and what does it have to do with Autonomous Data Warehouse?"

Scenario 2 - cloning an existing instance to 19c

You may already have an existing data warehouse instance and want to check that everything that's working today (ETL jobs, scripts, reports etc) will still work when ADW moves to Database 19c. The easiest way to do this is to simply clone your existing instance and transform it into a 19c instance during the cloning process. Let's assume that you are on the service console page for your instance... click on the "Actions" button and select "Create Clone". The first step is to select the type of clone you want to create. For testing purposes it's likely that you will want to have the same data in your 19c clone as your original data warehouse, as this will make it easier to test reports and scripts. This is what I have done below by selecting "Full Clone"... Of course if you just want to make sure that your existing tools and applications can connect to your new 19c ADW then a metadata clone could well be sufficient. The choice is yours! The rest of the form is as per the usual cloning process until you get towards the bottom where you will spot the new section to enable "19c Preview Mode". Click to enable the preview mode, agree to the T&Cs and you're done! Simply add your administrator password and finally click on the usual big blue "Create Autonomous Database" button. That's it! Welcome to the new world of Autonomous Database 19c. Happy testing! If you want more information about preview versions for Autonomous Database then check out the overview page in the documentation which is here: https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/user/autonomous-preview.html

What's New in 19c Autonomous Database

Here is a quick summary of some of the default settings in Oracle Database Preview Version 19c for Autonomous Database:
Real-Time Statistics: Enabled by default. Real-Time Statistics enables the database to automatically gather real-time statistics during conventional DML operations. Fresh statistics enable the optimizer to produce more optimal plans. See Real-Time Statistics for more information.
High-Frequency Automatic Optimizer Statistics Collection: Enabled by default. This enables the database to gather optimizer statistics every 15 minutes for objects that have stale statistics. See About High-Frequency Automatic Optimizer Statistics Collection for more information.
High-Frequency SQL Plan Management Evolve Advisor Task: Enabled by default. When enabled, the database will assess the opportunity for automatic SQL plan changes to improve the performance for known statements every hour. Frequent execution means that the optimizer has more opportunities to find and evolve better performing plans. See Managing the SPM Evolve Advisor Task for more information.
Automatic Indexing: Disabled by default. When enabled, automatic indexing automates the index management tasks in an Oracle database: it automatically creates, rebuilds, and drops indexes based on changes in the application workload, thus improving database performance. See Managing Auto Indexes for more information.
Enjoy the autonomous world of Database 19c.
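As a quick post-script, if you want to try automatic indexing in your 19c preview instance, the on/off switch lives in the DBMS_AUTO_INDEX package. A minimal sketch of the three modes (check the 19c documentation for the full list of configuration parameters before relying on this in your own testing):

-- create automatic indexes and make them available to the optimizer
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE', 'IMPLEMENT');
-- or just report candidate indexes without making them usable by the optimizer
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE', 'REPORT ONLY');
-- and switch it back off again
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE', 'OFF');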


Autonomous

Autonomous Wednesday at OpenWorld 2019 - List of Must See Sessions

Here you go folks...OpenWorld is so big this year so to help you get the most from Wednesday I have put together a cheat sheet listing all the best sessions. Enjoy your Wednesday and make sure you drink lots of water! If you want this agenda on your phone (iPhone or Android) then check out our smartphone web app by clicking here.

AGENDA - WEDNESDAY

09:00AM Moscone South - Room 207/208 SOLUTION KEYNOTE: A New Vision for Oracle Analytics T.K. Anand, Senior Vice President, Analytics, Oracle 09:00 AM - 10:15 AM SOLUTION KEYNOTE: A New Vision for Oracle Analytics 09:00 AM - 10:15 AM Moscone South - Room 207/208 In this session learn about the bright new future for Oracle Analytics, where customers and partners benefit from augmented analytics working together with Oracle Autonomous Data Warehouse, automating the delivery of personalized insights to fuel innovation without limits. SPEAKERS:T.K. Anand, Senior Vice President, Analytics, Oracle Moscone South - Room 152B Managing One of the Largest IoT Systems in the World with Autonomous Technologies Manuel Martin Marquez, Senior Project Leader, Cern Organisation Européenne Pour La Recherche Nucléaire Sebastien MASSON, Oracle DBA, CERN 09:00 AM - 09:45 AM Managing One of the Largest IoT Systems in the World with Autonomous Technologies 09:00 AM - 09:45 AM Moscone South - Room 152B CERN’s particle accelerator control systems produce more than 2.5 TB of data per day from more than 2 million heterogeneous signals. This IoT system and data is used by scientists and engineers to monitor magnetic field strengths, temperatures, and beam intensities among many other parameters to determine if the equipment is operating correctly. These critical data management and analytics tasks represent important challenges for the organization, and key technologies including big data, machine learning/AI, IoT, and autonomous data warehouses, coupled with cloud-based models, can radically optimize the operations. Attend this session to learn from CERN’s experience with IoT systems, Oracle’s cloud, and autonomous solutions. SPEAKERS:Manuel Martin Marquez, Senior Project Leader, Cern Organisation Européenne Pour La Recherche Nucléaire Sebastien MASSON, Oracle DBA, CERN Moscone South - Room 213 Strategy and Roadmap for Oracle Data Integrator and Oracle Enterprise Data Quality Jayant Mahto, Senior Software Development Manager, Oracle 09:00 AM - 09:45 AM Strategy and Roadmap for Oracle Data Integrator and Oracle Enterprise Data Quality 09:00 AM - 09:45 AM Moscone South - Room 213 This session provides a detailed look into Oracle Data Integrator and Oracle Enterprise Data Quality, Oracle’s strategic products for data integration and data quality. See product overviews, highlights from recent customer implementations, and future roadmap plans, including how the products will work with ADW. SPEAKERS:Jayant Mahto, Senior Software Development Manager, Oracle Moscone South - Room 203 The Hidden Data Economy and Autonomous Data Management Paul Sonderegger, Senior Data Strategist, Oracle 09:00 AM - 09:45 AM The Hidden Data Economy and Autonomous Data Management 09:00 AM - 09:45 AM Moscone South - Room 203 Inside every company is a hidden data economy. But because there are no market prices for data inside a single firm, most executives don’t think of it this way. They should.
Seeing enterprise data creation, use, and management in terms of supply, demand, and transaction costs will enable companies to compete more effectively on data. In this session learn to see data economy hiding in your company and see Oracle’s vision for helping you get the most out of it. SPEAKERS:Paul Sonderegger, Senior Data Strategist, Oracle Moscone West - Room 3021 Hands-on Lab: Oracle Machine Learning Mark Hornick, Senior Director Data Science and Big Data, ORACLE Marcos Arancibia Coddou, Product Manager, Data Science and Big Data, Oracle Charlie Berger, Sr. Director Product Management, Data Science and Big Data, Oracle 09:00 AM - 10:00 AM Hands-on Lab: Oracle Machine Learning 09:00 AM - 10:00 AM Moscone West - Room 3021 In this introductory hands-on-lab, try out the new Oracle Machine Learning Zeppelin-based notebooks that come with Oracle Autonomous Database. Oracle Machine Learning extends Oracle’s offerings in the cloud with its collaborative notebook environment that helps data scientist teams build, share, document, and automate data analysis methodologies that run 100% in Oracle Autonomous Database. Interactively work with your data, and build, evaluate, and apply machine learning models. Import, export, edit, run, and share Oracle Machine Learning notebooks with other data scientists and colleagues. Share and further explore your insights and predictions using the Oracle Analytics Cloud. SPEAKERS:Mark Hornick, Senior Director Data Science and Big Data, ORACLE Marcos Arancibia Coddou, Product Manager, Data Science and Big Data, Oracle Charlie Berger, Sr. Director Product Management, Data Science and Big Data, Oracle 10:00AM Moscone South - Room 214 Oracle Essbase 19c: Roadmap on the Cloud Raghuram Venkatasubramanian, Product Manager, Oracle Ashish Jain, Product Manager, Oracle 10:00 AM - 10:45 AM Oracle Essbase 19c: Roadmap on the Cloud 10:00 AM - 10:45 AM Moscone South - Room 214 In this session learn about new analytic platform on Oracle Autonomous Data Warehouse and the role of Oracle Essbase as an engine for data analysis at the speed of thought. Learn how you can perform analysis on data that is stored in Oracle Autonomous Data Warehouse without having to move it into Oracle Essbase. Learn about zero footprint Oracle Essbase and how it provides an architecture that is efficient both in terms of performance and resource utilization. The session also explores new innovations in the future Oracle Essbase roadmap. SPEAKERS:Raghuram Venkatasubramanian, Product Manager, Oracle Ashish Jain, Product Manager, Oracle Moscone West - Room 3000 The Autonomous Trifecta: How a University Leveraged Three Autonomous Technologies Erik Benner, VP Enterprise Transformation, Mythics, Inc. Carla Steinmetz, Senior Principal Consultant, Mythics, Inc. 10:00 AM - 10:45 AM The Autonomous Trifecta: How a University Leveraged Three Autonomous Technologies 10:00 AM - 10:45 AM Moscone West - Room 3000 The challenges facing organizations are often more complex than what one technology can solve. Most solutions require more than just a fast database, or an intelligent analytics tool. True solutions need to store data, move data, and report on data—ideally with all of the components being accelerated with machine learning. In this session learn how Adler University migrated to the cloud with Oracle Autonomous Data Warehouse, Oracle Data Integration, and Oracle Analytics. 
Learn how the university enhanced the IT systems that support its mission of graduating socially responsible practitioners, engaging communities, and advancing social justice. SPEAKERS:Erik Benner, VP Enterprise Transformation, Mythics, Inc. Carla Steinmetz, Senior Principal Consultant, Mythics, Inc. Moscone South - Room 152C Graph Databases and Analytics: How to Use Them Melli Annamalai, Senior Principal Product Manager, Oracle Hans Viehmann, Product Manager EMEA, Oracle 10:00 AM - 10:45 AM Graph Databases and Analytics: How to Use Them 10:00 AM - 10:45 AM Moscone South - Room 152C Graph databases and graph analysis are powerful new tools that employ advanced algorithms to explore and discover relationships in social networks, IoT, big data, data warehouses, and complex transaction data for applications such as fraud detection in banking, customer 360, public safety, and manufacturing. Using a data model designed to represent linked and connected data, graphs simplify the detection of anomalies, the identification of communities, the understanding of who or what is the most connected, and where there are common or unnatural patterns in data. In this session learn about Oracle’s graph database and analytic technologies for Oracle Cloud, Oracle Database, and big data including new visualization tools, PGX analytics, and query language. SPEAKERS:Melli Annamalai, Senior Principal Product Manager, Oracle Hans Viehmann, Product Manager EMEA, Oracle Moscone South - Room 213 Data Architect's Dilemma: Many Specialty Databases or One Multimodel Database? Tirthankar Lahiri, Senior Vice President, Oracle Juan Loaiza, Executive Vice President, Oracle 10:00 AM - 10:45 AM Data Architect's Dilemma: Many Specialty Databases or One Multimodel Database? 10:00 AM - 10:45 AM Moscone South - Room 213 The most fundamental choice for an enterprise data architect to make is between using a single multimodel database or different specialized databases for each type of data and workload. The decision has profound effects on the architecture, cost, agility, and stability of the enterprise. This session discusses the benefits and tradeoffs of each of these alternatives and also provides an alternative solution that combines the best of the multimodal architecture with a powerful multimodel database. Join this session to find out what is the best choice for your enterprise. SPEAKERS:Tirthankar Lahiri, Senior Vice President, Oracle Juan Loaiza, Executive Vice President, Oracle 10:30AM Moscone West - Room 3021 Hands-on Lab: Oracle Big Data SQL Martin Gubar, Director of Product Management, Big Data and Autonomous Database, Oracle Eric Vinck, Principal Sales Consultant, EMEA Oracle Solution Center, Oracle, Oracle 10:30 AM - 11:30 AM Hands-on Lab: Oracle Big Data SQL 10:30 AM - 11:30 AM Moscone West - Room 3021 Modern data architectures encompass streaming data (e.g. Kafka), Hadoop, object stores, and relational data. Many organizations have significant experience with Oracle Databases, both from a deployment and skill set perspective. This hands-on lab on walks through how to leverage that investment. Learn how to extend Oracle Database to query across data lakes (Hadoop and object stores) and streaming data while leveraging Oracle Database security policies. 
SPEAKERS:Martin Gubar, Director of Product Management, Big Data and Autonomous Database, Oracle Eric Vinck, Principal Sales Consultant, EMEA Oracle Solution Center, Oracle, Oracle Moscone West - Room 3023 Hands-on Lab: Oracle Multitenant John Mchugh, Senior Principal Product Manager, Oracle Thomas Baby, Architect, Oracle Patrick Wheeler, Senior Director, Product Management, Oracle 10:30 AM - 11:30 AM Hands-on Lab: Oracle Multitenant 10:30 AM - 11:30 AM Moscone West - Room 3023 This is your opportunity to get up close and personal with Oracle Multitenant. In this session learn about a very broad range of Oracle Multitenant functionality in considerable depth. Warning: This lab has been filled to capacity quickly at every Oracle OpenWorld that it has been offered. It is strongly recommended that you sign up early. Even if you're only able to get on the waitlist, it's always worth showing up just in case there's a no-show and you can grab an available seat. SPEAKERS:John Mchugh, Senior Principal Product Manager, Oracle Thomas Baby, Architect, Oracle Patrick Wheeler, Senior Director, Product Management, Oracle Moscone West - Room 3019 HANDS-ON LAB: RESTful Services with Oracle REST Data Services and Oracle Autonomous Database Jeff Smith, Senior Principal Product Manager, Oracle Ashley Chen, Senior Product Manager, Oracle Colm Divilly, Consulting Member of Technical Staff, Oracle Elizabeth Saunders, Principal Technical Staff, Oracle 10:30 AM - 11:30 AM HANDS-ON LAB: RESTful Services with Oracle REST Data Services and Oracle Autonomous Database 10:30 AM - 11:30 AM Moscone West - Room 3019 In this session learn to develop and deploy a RESTful service using Oracle SQL Developer, Oracle REST Data Services, and Oracle Autonomous Database. Then connect these services as data sources to different Oracle JavaScript Extension Toolkit visualization components to quickly build rich HTML5 applications using a free and open source JavaScript framework. SPEAKERS:Jeff Smith, Senior Principal Product Manager, Oracle Ashley Chen, Senior Product Manager, Oracle Colm Divilly, Consulting Member of Technical Staff, Oracle Elizabeth Saunders, Principal Technical Staff, Oracle 10:45AM The Exchange - Ask Tom Theater Scaling Open Source R and Python for the Enterprise Marcos Arancibia Coddou, Product Manager, Oracle Data Science and Big Data, Oracle 10:45 AM - 11:05 AM Scaling Open Source R and Python for the Enterprise 10:45 AM - 11:05 AM The Exchange - Ask Tom Theater Open source environments such as R and Python offer tremendous value to data scientists and developers. Scalability and performance on large data sets, however, is not their forte. Memory constraints and single-threaded execution can significantly limit their value for enterprise use. With Oracle Advanced Analytics’ R and Python interfaces to Oracle Database, users can take their R and Python to the next level, deploying for enterprise use on large data sets with ease of deployment. In this session learn the key functional areas of Oracle R Enterprise and Oracle Machine Learning for Python and see how to get the best combination of open source and Oracle Database. 
SPEAKERS:Marcos Arancibia Coddou, Product Manager, Oracle Data Science and Big Data, Oracle Moscone South - Room 156B Demystifying Graph Analytics for the Non-expert Peter Jeffcock, Big Data and Data Science, Cloud Business Group, Oracle Sherry Tiao, Oracle 11:15 AM - 12:00 PM Demystifying Graph Analytics for the Non-expert 11:15 AM - 12:00 PM Moscone South - Room 156B This session is aimed at the non-expert: somebody who wants to know how it works so they can ask the technical experts to apply it in new ways to generate new kinds of value for the business. Look behind the curtain to see how graph analytics works. Learn how it enables use cases, from giving directions in your car, to telling the tax authorities if your business partner’s first cousin is conspiring to cheat on payments. SPEAKERS:Peter Jeffcock, Big Data and Data Science, Cloud Business Group, Oracle Sherry Tiao, Oracle Moscone South - Room 214 Oracle Autonomous Data Warehouse: How to Connect Your Tools and Applications George Lumpkin, Vice President, Product Management, Oracle Keith Laker, Senior Principal Product Manager, Oracle 11:15 AM - 12:00 PM Oracle Autonomous Data Warehouse: How to Connect Your Tools and Applications 11:15 AM - 12:00 PM Moscone South - Room 214 What exactly do you need to know to connect your existing on-premises and cloud tools to Oracle Autonomous Data Warehouse? Come to this session and learn about the connection architecture of Autonomous Data Warehouse. For example, learn how to set up and configure Java Database Connectivity connections, how to configure SQLNet, and which Oracle Database Cloud driver you need. Learn how to use Oracle wallets and Java key store files, and what to do if you have a client that is behind a firewall and your network configuration requires an HTTP proxy. All types of connection configurations are explored and explained. SPEAKERS:George Lumpkin, Vice President, Product Management, Oracle Keith Laker, Senior Principal Product Manager, Oracle Moscone South - Room 152B 11 Months with Oracle Autonomous Transaction Processing Eric Grancher, Head of Database Services group, IT department, CERN 11:15 AM - 12:00 PM 11 Months with Oracle Autonomous Transaction Processing 11:15 AM - 12:00 PM Moscone South - Room 152B Oracle Autonomous Transaction Processing and Oracle Autonomous Data Warehouse represent a new way to deploy applications, with the platform providing a performant environment with advanced and automated features. In this session hear one company’s experience with the solution over the past 11 months, from setting up the environment to application deployment, including how to work with it on a daily basis, coordinating with maintenance windows, and more.
SPEAKERS:Eric Grancher, Head of Database Services group, IT department, CERN Moscone South - Room 155B Oracle Data Integration Cloud: Database Migration Service Deep Dive Alex Kotopoulis, Director, Product Management – Data Integration Cloud, Oracle Chai Pydimukkala, Senior Director of Product Management – Data Integration Cloud, Oracle 11:15 AM - 12:00 PM Oracle Data Integration Cloud: Database Migration Service Deep Dive 11:15 AM - 12:00 PM Moscone South - Room 155B Oracle Database Migration Service is a new Oracle Cloud service that provides an easy-to-use experience to migrate databases into Oracle Autonomous Transaction Processing, Oracle Autonomous Data Warehouse, or databases on Oracle Cloud Infrastructure. Join this Oracle Product Management–led session to see how Oracle Database Migration Service makes life easier for DBAs by offering an automated means to address the complexity of enterprise databases and data sets. Use cases include offline migrations for batch database migrations, online migrations requiring minimized downtime, and schema conversions for heterogeneous database migrations. The session also shows how it provides the added value of monitoring, auditability, and data validation to deliver real-time progress. SPEAKERS:Alex Kotopoulis, Director, Product Management – Data Integration Cloud, Oracle Chai Pydimukkala, Senior Director of Product Management – Data Integration Cloud, Oracle 11:30AM Moscone South - Room 301 Cloud Native Data Management Gerald Venzl, Master Product Manager, Oracle Maria Colgan, Master Product Manager, Oracle 11:30 AM - 12:15 PM Cloud Native Data Management 11:30 AM - 12:15 PM Moscone South - Room 301 The rise of the cloud has brought many changes to the way applications are built. Containers, serverless, and microservices are now commonplace in a modern cloud native architecture. However, when it comes to data persistence, there are still many decisions and trade-offs to be made. But what if you didn’t have to worry about how to structure data? What if you could store data in any format, independent of having to know if a cloud service could handle it? What if performance, web-scale, or security concerns no longer held you back when you were writing or deploying apps? Sound like a fantasy? This session shows you how you can combine the power of microservices with the agility of cloud native data management to make this fantasy a reality. SPEAKERS:Gerald Venzl, Master Product Manager, Oracle Maria Colgan, Master Product Manager, Oracle 12:00PM Moscone West - Room 3019 HANDS-ON LAB: Low-Code Development with Oracle Application Express and Oracle Autonomous Database David Peake, Senior Principal Product Manager, Oracle Marc Sewtz, Senior Software Development Manager, Oracle 12:00 PM - 01:00 PM HANDS-ON LAB: Low-Code Development with Oracle Application Express and Oracle Autonomous Database 12:00 PM - 01:00 PM Moscone West - Room 3019 Oracle Application Express is a low-code development platform that enables you to build stunning, scalable, secure apps with world-class features that can be deployed anywhere. In this lab start by initiating your free trial for Oracle Autonomous Database and then convert a spreadsheet into a multiuser, web-based, responsive Oracle Application Express application in minutes—no prior experience with Oracle Application Express is needed. Learn how you can use Oracle Application Express to solve many of your business problems that are going unsolved today. 
SPEAKERS:David Peake, Senior Principal Product Manager, Oracle Marc Sewtz, Senior Software Development Manager, Oracle Moscone West - Room 3021 Hands-on Lab: Oracle Autonomous Data Warehouse Hermann Baer, Senior Director Product Management, Oracle Keith Laker, Senior Principal Product Manager, Oracle Nilay Panchal, Product Manager, Oracle Yasin Baskan, Senior Principal Product Manager, Oracle 12:00 PM - 01:00 PM Hands-on Lab: Oracle Autonomous Data Warehouse 12:00 PM - 01:00 PM Moscone West - Room 3021 In this hands-on lab discover how you can access, visualize, and analyze lots of different types of data using a completely self-service, agile, and fast service running in the Oracle Cloud: oracle Autonomous Data Warehouse. See how quickly and easily you can discover new insights by blending, extending, and visualizing a variety of data sources to create data-driven briefings on both desktop and mobile browsers—all without the help of IT. It has never been so easy to create visually sophisticated reports that really communicate your discoveries, all in the cloud, all self-service, powered by Oracle Autonomous Data Warehouse. Oracle’s perfect quick-start service for fast data loading, sophisticated reporting, and analysis is for everybody. SPEAKERS:Hermann Baer, Senior Director Product Management, Oracle Keith Laker, Senior Principal Product Manager, Oracle Nilay Panchal, Product Manager, Oracle Yasin Baskan, Senior Principal Product Manager, Oracle 12:30PM Moscone South - Room 152C Oracle Autonomous Data Warehouse and Oracle Analytics Cloud Boost Agile BI Reiner Zimmermann, Senior Director, DW & Big Data Product Management, Oracle Christian Maar, CEO, 11880 Solutions AG Pawarit Ruengsuksilp, Business Development Officer, Forth Corporation PCL 12:30 PM - 01:15 PM Oracle Autonomous Data Warehouse and Oracle Analytics Cloud Boost Agile BI 12:30 PM - 01:15 PM Moscone South - Room 152C 11880.com is a midsize media company in Germany that used to do traditional BI using Oracle Database technology and Oracle BI. In this session, learn how the switch to Oracle Autonomous Data Warehouse and Oracle Analytics Cloud enabled 11880.com to give more responsibility directly to business users, who are now able to do real self-service BI. They can start their own database, load data, and analyze it the way they want and need it without the need to ask IT and wait for days or weeks to have a system set up and running. SPEAKERS:Reiner Zimmermann, Senior Director, DW & Big Data Product Management, Oracle Christian Maar, CEO, 11880 Solutions AG Pawarit Ruengsuksilp, Business Development Officer, Forth Corporation PCL Moscone South - Room 214 JSON in Oracle Database: Common Use Cases and Best Practices Beda Hammerschmidt, Consulting Member of Technical Staff, Oracle 12:30 PM - 01:15 PM JSON in Oracle Database: Common Use Cases and Best Practices 12:30 PM - 01:15 PM Moscone South - Room 214 JSON is a popular data format in modern applications (web/mobile, microservices, etc.) that brings about increased demand from customers to store, process, and generate JSON data using Oracle Database. JSON supports a wide range of requirements including real-time payment processing systems, social media analytics, and JSON reporting. This session offers common use cases, explains why customers pick JSON over traditional relational or XML data models, and provides tips to optimize performance. 
Discover the Simple Oracle Document Access (SODA) API that simplifies the interaction with JSON documents in Oracle Database and see how a self-tuning application can be built over Oracle Documents Cloud. SPEAKERS:Beda Hammerschmidt, Consulting Member of Technical Staff, Oracle 02:00PM Moscone North - Hall F Main Keynote Oracle Executives 2:00 p.m. – 3:00 p.m. Main Keynote 2:00 p.m. – 3:00 p.m. Moscone North - Hall F Main OpenWorld Keynote SPEAKERS:Oracle Executives 03:45PM Moscone South - Room 214 Top 10 SQL Features for Developers/DBAs in the Latest Generation of Oracle Database Keith Laker, Senior Principal Product Manager, Oracle 03:45 PM - 04:30 PM Top 10 SQL Features for Developers/DBAs in the Latest Generation of Oracle Database 03:45 PM - 04:30 PM Moscone South - Room 214 SQL is at the heart of every enterprise data warehouse running on Oracle Database, so it is critical that your SQL code is optimized and makes use of the latest features. The latest generation of Oracle Database includes a lot of important new features for data warehouse and application developers and DBAs. This session covers the top 10 most important features, including new and faster count distinct processing, improved support for processing extremely long lists of values, and easier management and optimization of data warehouse and operational queries. SPEAKERS:Keith Laker, Senior Principal Product Manager, Oracle Moscone West - Room 3023 Hands-on Lab: Oracle Database In-Memory Andy Rivenes, Product Manager, Oracle 03:45 PM - 04:45 PM Hands-on Lab: Oracle Database In-Memory 03:45 PM - 04:45 PM Moscone West - Room 3023 Oracle Database In-Memory introduces an in-memory columnar format and a new set of SQL execution optimizations including SIMD processing, column elimination, storage indexes, and in-memory aggregation, all of which are designed specifically for the new columnar format. This lab provides a step-by-step guide on how to get started with Oracle Database In-Memory, how to identify which of the optimizations are being used, and how your SQL statements benefit from them. The lab uses Oracle Database and also highlights the new features available in the latest release. Experience firsthand just how easy it is to start taking advantage of this technology and its performance improvements. SPEAKERS:Andy Rivenes, Product Manager, Oracle 04:00PM Moscone South - Room 306 Six Technologies, One Name: Flashback—Not Just for DBAs Connor Mcdonald, Database Advocate, Oracle 04:00 PM - 04:45 PM Six Technologies, One Name: Flashback—Not Just for DBAs 04:00 PM - 04:45 PM Moscone South - Room 306 There is a remarkable human condition where you can be both cold and sweaty at the same time. It comes about three seconds after you press the Commit button and you realize that you probably needed to have a WHERE clause on that “delete all rows from the SALES table” SQL statement. But Oracle Flashback is not just for those “Oh no!” moments. It also enables benefits for developers, ranging from data consistency and continuous integration to data auditing. Tucked away in Oracle Database, Enterprise Edition, are six independent and powerful technologies that might just save your career and open up a myriad of other benefits as well. Learn more in this session.
SPEAKERS:Connor Mcdonald, Database Advocate, Oracle 04:15PM Moscone South - Room 213 The Changing Role of the DBA Maria Colgan, Master Product Manager, Oracle Jenny Tsai-Smith, Vice President, Oracle 04:15 PM - 05:00 PM The Changing Role of the DBA 04:15 PM - 05:00 PM Moscone South - Room 213 The advent of the cloud and the introduction of Oracle Autonomous Database presents opportunities for every organization, but what's the future role for the DBA? In this session explore how the role of the DBA will continue to evolve, and get advice on key skills required to be a successful DBA in the world of the cloud. SPEAKERS:Maria Colgan, Master Product Manager, Oracle Jenny Tsai-Smith, Vice President, Oracle 04:45PM Moscone South - Room 214 Remove Silos and Query the Data Warehouse, Data Lake, and Streams with Oracle SQL Martin Gubar, Director of Product Management, Big Data and Autonomous Database, Oracle 04:45 PM - 05:30 PM Remove Silos and Query the Data Warehouse, Data Lake, and Streams with Oracle SQL 04:45 PM - 05:30 PM Moscone South - Room 214 The latest data architectures take the approach of using the right tool for the right job. Data lakes have become the repository for capturing and analyzing raw data. Data warehouses continue to manage enterprise data, with Oracle Autonomous Data Warehouse greatly simplifying optimized warehouse deployments. Kafka is a key capability for capturing real-time streams. This architecture makes perfect sense until you need answers to questions that require correlations across these sources. And you need business users—using their familiar tools and applications—to be able to find the answers themselves. This session outlines how to break down these data silos to query across these sources with security and without moving mountains of data. SPEAKERS:Martin Gubar, Director of Product Management, Big Data and Autonomous Database, Oracle Moscone South - Room 152B Creating a Multitenant Sandbox with Oracle Cloud Infrastructure Andrew Westwood, VP, Principal Engineer - Innovation, Bank of the West 04:45 PM - 05:30 PM Creating a Multitenant Sandbox with Oracle Cloud Infrastructure 04:45 PM - 05:30 PM Moscone South - Room 152B Bank of the West needed a multitenanted sandbox database environment in the cloud to work with multiple potential partners and with internal customers simultaneously and independently of each other. The environment needed to provide rapid deployment, data segregation and versioning, data isolation that ensured each partner could access only the information it specifically needed, and self-service that enabled each team to query and build their own database objects and create their own restful services on the sandbox data. In this session learn why Bank of the West chose Oracle Cloud Infrastructure, Oracle Application Container Cloud, and Oracle Application Express. SPEAKERS:Andrew Westwood, VP, Principal Engineer - Innovation, Bank of the West 06:30PM Chase Center Mission Bay Blocks 29-32 San Francisco, CA Oracle CloudFest.19 John Mayer 6:30 p.m.–11 p.m. Oracle CloudFest.19 6:30 p.m.–11 p.m. Chase Center Mission Bay Blocks 29-32 San Francisco, CA You’ve been energized by the fresh ideas and brilliant minds you’ve engaged with at Oracle OpenWorld. Now cap off the event with an evening of inspiration and celebration with John Mayer. John is many things—a guitar virtuoso, an Instagram live host, a storyteller—and a seven-time Grammy Award-winning performer.
Here’s your chance to savor his distinctive and dynamic body of work that’s touched millions worldwide. * Included in full conference pass. Can be purchased for an additional $375 with a Discover pass. SPEAKERS: John Mayer


Autonomous

Autonomous Data Warehouse + Oracle Analytics Cloud Hands-on Lab at #OOW19

If you are at Moscone Center for this week's OpenWorld conference then don't miss your chance to get some free hands-on time with Oracle Analytics Cloud querying Autonomous Data Warehouse. We have three sessions left to go this week:
Tuesday, September 17, 03:45 PM - 04:45 PM | Moscone West - Room 3021
Wednesday, September 18, 12:00 PM - 01:00 PM | Moscone West - Room 3021
Thursday, September 19, 10:30 AM - 11:30 AM | Moscone West - Room 3021
Each session is being led by Philippe Lions, Oracle's own analytics guru and master of data visualization. Philippe and his team built a special version of their standard ADW+OAC workshop which walks you through connecting OAC to ADW, getting immediate valuable insight out of your ADW data, deepening analysis by mashing-up additional data and leveraging OAC interactive visualization features. At a higher level the aim is to take a data set which you have never seen before and quickly and easily discover insights into brand and category performance over time, highlighting major sales weaknesses within a specific category. All in just under 45 minutes! Here is Philippe explaining the workshop flow... What everyone ends up with is the report shown below which identifies the sales weaknesses in the mobile phone category, specifically android phones. All this is done using the powerful data interrogation features of Oracle Analytics Cloud. Make sure you sign up for an ADW hands-on lab running tomorrow, Wednesday and Thursday and learn from our experts in data visualization and analytics. If you want to try this workshop at home then everything you need is here: https://www.oracle.com/solutions/business-analytics/data-visualization/tutorials.html


Autonomous

Need to Query Autonomous Data Warehouse directly in Slack?

We have one of THE coolest demos ever in the demogrounds area at OpenWorld (head straight to the developer tools area and look for the Chatbot and Digital Assistant demo booths). My colleagues in the Oracle Analytics team have combined Oracle Analytics Cloud with the new Digital Assistant tool to create an amazing experience for users who want to get data out using what I would call slightly unusual channels! First step is to create a connection to ADW or ATP using the pre-built connection wizards - as shown below there are dozens of different connection wizards to help you get started... Then once the connection is set up and the data sets/projects are built you just swap over to your favorite front end collaboration tool such as Slack or Microsoft Teams and start writing your natural language query, which is translated into SQL and the results are returned directly to your collaboration tool where, depending on what the APIs support, you will get a tabular report, a summary analysis and a graphical report. The Oracle Analytics Cloud experts on the demo booth kindly walked me through their awesome demo. Need to do the same on your mobile phone? Then this has you covered as well (apologies, I had my thumb over the lens for a while in this video - complete amateur). So if you are at Moscone this week and want to see quite possibly the coolest demo at the show then head to the demogrounds and make for the developer tools area which is clearly marked with a huge overhead banner. If you get stuck then swing by the ADW booth which is right next to the escalators as you come down to the lower area in Moscone South. Enjoy OpenWorld 2019 folks!


Autonomous

Autonomous Tuesday at OpenWorld 2019 - List of Must See Sessions

Here you go folks...OpenWorld is so big this year so to help you get the most from Tuesday I have put together a cheat sheet listing all the best sessions. Enjoy your Tuesday and make sure you drink lots of water! If you want this agenda on your phone (iPhone or Android) then check out our smartphone web app by clicking here.

AGENDA - TUESDAY

08:45AM Moscone South - Room 204 Using Graph Analytics for New Insights Melliyal Annamalai, Senior Principal Product Manager, Oracle 08:45 AM - 10:45 AM Using Graph Analytics for New Insights 08:45 AM - 10:45 AM Moscone South - Room 204 Graph is an emerging data model for analyzing data. Graphs enable navigation of large and complex data warehouses and intuitive detection of complex relationships for new insights into your data. Powerful algorithms for Graph models such as ranking, centrality, community identification, and path-finding routines support fraud detection, recommendation engines, social network analysis, and more. In this session, learn how to load a graph; insert nodes and edges with Graph APIs; and traverse a graph to find connections and do high-performance Graph analysis with PGQL, a SQL-like graph query language. Also learn how to use visualization tools to work with Graph data. SPEAKERS: Melliyal Annamalai, Senior Principal Product Manager, Oracle Moscone South - Room 301 Fraud Detection with Oracle Autonomous Data Warehouse and Oracle Machine Learning Charlie Berger, Sr. Director Product Management, Machine Learning, AI and Cognitive Analytics, Oracle 08:45 AM - 10:45 AM Fraud Detection with Oracle Autonomous Data Warehouse and Oracle Machine Learning 08:45 AM - 10:45 AM Moscone South - Room 301 Oracle Machine Learning is packaged with Oracle Autonomous Data Warehouse. Together, they can transform your cloud into a powerful anomaly- and fraud-detection cloud solution. Oracle Autonomous Data Warehouse’s Oracle Machine Learning extensive library of in-database machine learning algorithms can help you discover anomalous records and events that stand out—in a multi-peculiar way. Using unsupervised learning techniques (SQL functions), Oracle Autonomous Data Warehouse and Oracle Machine Learning SQL notebooks enable companies to build cloud solutions to detect anomalies, noncompliance, and fraud (taxes, expense reports, people, claims, transactions, and more). In this session, see example scripts, Oracle Machine Learning notebooks, and best practices and hear customer examples. SPEAKERS: Charlie Berger, Sr. Director Product Management, Machine Learning, AI and Cognitive Analytics, Oracle

11:15AM Moscone South - Room 215/216 Oracle Database: What's New and What's Coming Next Dominic Giles, Master Product Manager, Oracle 11:15 AM - 12:00 PM Oracle Database: What's New and What's Coming Next 11:15 AM - 12:00 PM Moscone South - Room 215/216 In this informative session learn about recent Oracle Database news and developments, and take a sneak peek into what's coming next from the Oracle Database development team.
SPEAKERS: Dominic Giles, Master Product Manager, Oracle     Moscone South - Room 214 Oracle Partitioning: What Everyone Should Know Hermann Baer, Senior Director Product Management, Oracle   11:15 AM - 12:00 PM Oracle Partitioning: What Everyone Should Know 11:15 AM - 12:00 PM Moscone South - Room 214 Oracle Partitioning has proven itself over the course of decades and ensures that tens of thousands of systems run successfully day in and out. But do you really know Oracle Partitioning? Whether you are using it already or are planning to leverage it, this is the time for you to check your knowledge. Attend this session to learn about all the things you already should know about Oracle Partitioning, including the latest innovations of this most widely used functionality in the Oracle Database. SPEAKERS: Hermann Baer, Senior Director Product Management, Oracle     Moscone South - Room 156B Drop Tank: A Cloud Journey Case Study Timothy Miller, CTO, Drop Tank LLC Shehzad Ahmad, Oracle   11:15 AM - 12:00 PM Drop Tank: A Cloud Journey Case Study 11:15 AM - 12:00 PM Moscone South - Room 156B Drop Tank, based out of Burr Ridge, Illinois, was started by a couple of seasoned execs from major oil and fuel brands. Seeing the need for POS connectivity in an otherwise segmented industry, they designed and created proprietary hardware devices. Fast forward a couple years, and Drop Tank is now a full-fledged loyalty program provider for the fuel industry, allowing for an end-to-end solution. Attend this session to learn more. SPEAKERS: Timothy Miller, CTO, Drop Tank LLC Shehzad Ahmad, Oracle     Moscone West - Room 3022C HANDS-ON LAB: Migrate Databases into Oracle Cloud with Oracle Database Migration Service Alex Kotopoulis, Director, Product Management – Data Integration Cloud, Oracle Chai Pydimukkala, Senior Director of Product Management – Data Integration, Oracle Julien Testut, Senior Principal Product Manager – Data Integration Cloud, Oracle Sachin Thatte, Senior Director, Software Development, Data Integration Cloud, Oracle David Allan, Architect - Data Integration Cloud, Oracle Shubha Sundar, Software Development Director, Data Integration Cloud, Oracle   11:15 AM - 12:15 PM HANDS-ON LAB: Migrate Databases into Oracle Cloud with Oracle Database Migration Service 11:15 AM - 12:15 PM Moscone West - Room 3022C Join this hands-on lab to gain experience with Oracle Database Migration Service, which provides an easy-to-use experience to assist in migrating databases into Oracle Autonomous Transaction Processing, Oracle Autonomous Data Warehouse, and databases on Oracle Cloud Infrastructure. The service provides an automated means to address the complexity of enterprise databases and data sets. Use cases include offline migrations for batch migration of databases, online migrations needing minimized database downtime, and schema conversions for heterogeneous migrations from non-Oracle to Oracle databases. 
Learn how Oracle Database Migration Service delivers monitoring, auditability, and data validation to ensure real-time progress and activity monitoring of processes SPEAKERS: Alex Kotopoulis, Director, Product Management – Data Integration Cloud, Oracle Chai Pydimukkala, Senior Director of Product Management – Data Integration, Oracle Julien Testut, Senior Principal Product Manager – Data Integration Cloud, Oracle Sachin Thatte, Senior Director, Software Development, Data Integration Cloud, Oracle David Allan, Architect - Data Integration Cloud, Oracle Shubha Sundar, Software Development Director, Data Integration Cloud, Oracle     Moscone South - Room 211 Security Architecture for Oracle Database Cloud Tammy Bednar, Sr. Director of Product Management, Database Cloud Services, Oracle   11:15 AM - 12:00 PM Security Architecture for Oracle Database Cloud 11:15 AM - 12:00 PM Moscone South - Room 211 Oracle enables enterprises to maximize the number of mission-critical workloads they can migrate to the cloud while continuing to maintain the desired security posture and reducing the overhead of building and operating data center infrastructure. By design, Oracle provides the security of cloud infrastructure and operations (cloud operator access controls, infrastructure security patching, and so on), and you are responsible for securely configuring your cloud resources. This provides unparalleled control and transparency with applications running on Oracle Cloud. Security in the cloud is a shared responsibility between you and Oracle. Join this session to gain a better understanding of the Oracle Cloud security architecture. SPEAKERS: Tammy Bednar, Sr. Director of Product Management, Database Cloud Services, Oracle     Moscone West - Room 3023 Hands-on Lab: Oracle Database In-Memory Andy Rivenes, Product Manager, Oracle   11:15 AM - 12:15 PM Hands-on Lab: Oracle Database In-Memory 11:15 AM - 12:15 PM Moscone West - Room 3023 Oracle Database In-Memory introduces an in-memory columnar format and a new set of SQL execution optimizations including SIMD processing, column elimination, storage indexes, and in-memory aggregation, all of which are designed specifically for the new columnar format. This lab provides a step-by-step guide on how to get started with Oracle Database In-Memory, how to identify which of the optimizations are being used, and how your SQL statements benefit from them. The lab uses Oracle Database and also highlights the new features available in the latest release. Experience firsthand just how easy it is to start taking advantage of this technology and its performance improvements. SPEAKERS: Andy Rivenes, Product Manager, Oracle     Moscone West - Room 3021 Hands-on Lab: Oracle Machine Learning Mark Hornick, Senior Director Oracle Data Science and Big Data Marcos Arancibia Coddou, Product Manager, Oracle Data Science and Big Data, Oracle Charlie Berger, Sr. Director Product Management, Oracle Data Science and Big Data, Oracle   11:15 AM - 12:15 PM Hands-on Lab: Oracle Machine Learning 11:15 AM - 12:15 PM Moscone West - Room 3021 In this introductory hands-on-lab, try out the new Oracle Machine Learning Zeppelin-based notebooks that come with Oracle Autonomous Database. Oracle Machine Learning extends Oracle’s offerings in the cloud with its collaborative notebook environment that helps data scientist teams build, share, document, and automate data analysis methodologies that run 100% in Oracle Autonomous Database. 
Interactively work with your data, and build, evaluate, and apply machine learning models. Import, export, edit, run, and share Oracle Machine Learning notebooks with other data scientists and colleagues. Share and further explore your insights and predictions using the Oracle Analytics Cloud. SPEAKERS: Mark Hornick, Senior Director Oracle Data Science and Big Data Marcos Arancibia Coddou, Product Manager, Oracle Data Science and Big Data, Oracle Charlie Berger, Sr. Director Product Management, Oracle Data Science and Big Data, Oracle   12:00PM The Exchange - Ask Tom Theater Ask TOM: SQL Pattern Matching Chris Saxon, Developer Advocate, Oracle   12:00 PM - 12:20 PM Ask TOM: SQL Pattern Matching 12:00 PM - 12:20 PM The Exchange - Ask Tom Theater SQL is a powerful language for accessing data. Using analytic functions, you can gain lots of insights from your data. But there are many problems that are hard or outright impossible to solve using them. Introduced in Oracle Database 12c, the row pattern matching clause, match_recognize, fills this gap. With it, it's easy to write efficient SQL to answer many previously tricky questions. This session introduces the match_recognize clause. See worked examples showing how it works and how it's easier to write and understand than traditional SQL solutions. This session is for developers, DBAs, and data analysts who need to do advanced data analysis. SPEAKERS: Chris Saxon, Developer Advocate, Oracle   12:30PM Moscone West - Room 2002 The DBA's Next Great Job Rich Niemiec, Chief Innovation Officer, Viscosity North America   12:30 PM - 01:15 PM The DBA's Next Great Job 12:30 PM - 01:15 PM Moscone West - Room 2002 What's the next role for the DBA as the autonomous database and Oracle enhancements free up time? This session explores the DBA role (managing more databases with autonomous database) and the integration of AI and machine learning. Topics covered include what's next for the DBA with the autonomous database, important skills for the future such as machine learning and AI, how the merging of tech fields causes both pain and opportunity, and future jobs and the world ahead where Oracle will continue to lead. SPEAKERS: Rich Niemiec, Chief Innovation Officer, Viscosity North America   12:45PM Moscone West - Room 3021 Hands-on Lab: Oracle Essbase Eric Smadja, Oracle Ashish Jain, Product Manager, Oracle Mike Larimer, Oracle   12:45 PM - 01:45 PM Hands-on Lab: Oracle Essbase 12:45 PM - 01:45 PM Moscone West - Room 3021 The good thing about the new hybrid block storage option (BSO) is that many of the concepts that we learned over the years about tuning a BSO cube still have merit. Knowing what a block is, and why it is important, is just as valuable today in hybrid BSO as it was 25 years ago when BSO was introduced. In this lab, learn best practices for performance optimization, how to manage data blocks, how dimension ordering still impacts things such as calculation order, the reasons why the layout of a report can impact query performance, how logs will help the Oracle Essbase developer debug calculation flow and query performance, and how new functionality within Smart View will help developers understand and modify the solve order.
SPEAKERS: Eric Smadja, Oracle Ashish Jain, Product Manager, Oracle Mike Larimer, Oracle   01:30PM The Exchange - Ask Tom Theater Extracting Real Value from Data Lakes with Machine Learning Marcos Arancibia Coddou, Product Manager, Oracle Data Science and Big Data, Oracle   01:30 PM - 01:50 PM Extracting Real Value from Data Lakes with Machine Learning 01:30 PM - 01:50 PM The Exchange - Ask Tom Theater In this session learn how to interface with and analyze large data lakes using the tools provided by the Oracle Big Data platform (both on-premises and in the cloud), which includes the entire ecosystem of big data components, plus several tools and interfaces ranging from data loading to machine learning and data visualization. Using notebooks for machine learning makes the environment easy and intuitive, and because the platform is open users can choose the language they feel most comfortable with, including R, Python, SQL, Java, etc. Customer stories of how to achieve success on the Oracle Big Data platform are shared, and the proven architecture of the solutions and techniques should help anyone starting a big data project today. SPEAKERS: Marcos Arancibia Coddou, Product Manager, Oracle Data Science and Big Data, Oracle   01:45PM Moscone South - Room 214 Choosing and Using the Best Performance Tools for Every Situation Mike Hallas, Architect, Oracle Juergen Mueller, Senior Director, Oracle John Zimmerman, Oracle   01:45 PM - 02:30 PM Choosing and Using the Best Performance Tools for Every Situation 01:45 PM - 02:30 PM Moscone South - Room 214 How do you investigate the performance of Oracle Database? The database is highly instrumented with counters and timers to record important activity, and this instrumentation is enabled by default. A wealth of performance data is stored in the Automatic Workload Repository, and this data underpins various performance tools and reports. With so many options available, how do you choose and use the tools with the best opportunity for improvement? Join this session to learn how real-world performance engineers apply different tools and techniques, and see examples based on real-world situations. SPEAKERS: Mike Hallas, Architect, Oracle Juergen Mueller, Senior Director, Oracle John Zimmerman, Oracle     Moscone South - Room 213 Oracle Multitenant: Best Practices for Isolation Can Tuzla, Senior Product Manager, Oracle Patrick Wheeler, Senior Director, Product Management, Oracle   01:45 PM - 02:30 PM Oracle Multitenant: Best Practices for Isolation 01:45 PM - 02:30 PM Moscone South - Room 213 The dramatic efficiencies of deploying databases in cloud configurations are worthless if they come at the expense of isolation. Attend this session to learn about the technologies that deliver the isolation behind Oracle Autonomous Data Warehouse. SPEAKERS: Can Tuzla, Senior Product Manager, Oracle Patrick Wheeler, Senior Director, Product Management, Oracle     Moscone South - Room 211 Oracle Autonomous Data Warehouse: Customer Panel Holger Friedrich, CTO, sumIT AG Reiner Zimmermann, Senior Director, DW & Big Data Product Management, Oracle Joerg Otto, Head of IT, IDS GmbH - Analysis and Reporting Services Manuel Martin Marquez, Senior Project Leader, Cern Organisation Européenne Pour La Recherche Nucléaire James Anthony, CTO, Data Intensity Ltd   01:45 PM - 02:30 PM Oracle Autonomous Data Warehouse: Customer Panel 01:45 PM - 02:30 PM Moscone South - Room 211 In this panel session, hear customers discuss their experiences using Oracle Autonomous Data Warehouse. 
Topics include both the business and use cases for Oracle Autonomous Data Warehouse and why this service is the perfect fit for their organization. The panel includes customers from Oracle's Global Leaders program, including Droptank, QLX, Hertz, 11880.com, Unior, Deutsche Bank, Caixa Bank, and many more. SPEAKERS: Holger Friedrich, CTO, sumIT AG Reiner Zimmermann, Senior Director, DW & Big Data Product Management, Oracle Joerg Otto, Head of IT, IDS GmbH - Analysis and Reporting Services Manuel Martin Marquez, Senior Project Leader, Cern Organisation Européenne Pour La Recherche Nucléaire James Anthony, CTO, Data Intensity Ltd     Moscone South - Room 215/216 Oracle Autonomous Database: Looking Under the Hood Yasin Baskan, Director Product Manager, Oracle   01:45 PM - 02:30 PM Oracle Autonomous Database: Looking Under the Hood 01:45 PM - 02:30 PM Moscone South - Room 215/216 This session takes you under the hood of Oracle Autonomous Database so you can really understand what makes it tick. Learn about key database features and how these building-block features put the “autonomous” into Autonomous Database. See how to manage and monitor an Autonomous Database using all the most popular tools including Oracle Cloud Infrastructure’s management console, Oracle SQL Developer, Oracle Application Express, REST, and more. Take a deep-dive into what you really need to know to make your DBA career thrive in an autonomous-driven world. SPEAKERS: Yasin Baskan, Director Product Manager, Oracle     Moscone South - Room 152C How Oracle Autonomous Data Warehouse/Oracle Analytics Cloud Can Help FinTech Marketing Charlie Berger, Sr. Director Product Management, Machine Learning, AI and Cognitive Analytics, Oracle Pawarit Ruengsuksilp, Business Development Officer, Forth Corporation PCL   01:45 PM - 02:30 PM How Oracle Autonomous Data Warehouse/Oracle Analytics Cloud Can Help FinTech Marketing 01:45 PM - 02:30 PM Moscone South - Room 152C Forth Smart has a unique FinTech business based in Thailand. It has more than 120,000 vending machines, and customers can top-up their mobile credit with cash, as well as sending money to bank accounts, ewallets, and more. It uses Oracle Database and Oracle Analytics Cloud for segmenting its customer base, improving operational efficiency, and quantifying the impact of marketing campaigns. This has allowed Forth Smart to identify cross-selling opportunities and utilize its marketing budget. In this session learn about Forth Smart's journey to introduce Oracle Analytics platforms. Hear about its recent trials to introduce technologies, such as Oracle Autonomous Data Warehouse, to help it stay ahead of competitors. SPEAKERS: Charlie Berger, Sr. Director Product Management, Machine Learning, AI and Cognitive Analytics, Oracle Pawarit Ruengsuksilp, Business Development Officer, Forth Corporation PCL   03:15PM Moscone South - Room 207/208 Solution Keynote: Oracle Cloud Infrastructure Clay Magouyrk, Senior Vice President, Software Development, Oracle   03:15 PM – 05:00 PM Solution Keynote: Oracle Cloud Infrastructure 03:15 PM – 05:00 PM Moscone South - Room 207/208 In this session join Oracle Cloud Infrastructure senior leadership for the latest news and updates about Oracle's new class of cloud. 
SPEAKERS: Clay Magouyrk, Senior Vice President, Software Development, Oracle     Moscone South - Room 214 Oracle Machine Learning: Overview of New Features and Roadmap Mark Hornick, Senior Director, Data Science and Machine Learning, Oracle Marcos Arancibia Coddou, Product Manager, Oracle Data Science and Big Data, Oracle Charlie Berger, Sr. Director Product Management, Machine Learning, AI and Cognitive Analytics, Oracle   03:15 PM – 04:00 PM Oracle Machine Learning: Overview of New Features and Roadmap 03:15 PM – 04:00 PM Moscone South - Room 214 Oracle extended its database into one that discovers insights and makes predictions. With more than 30 algorithms as parallelized SQL functions, Oracle Machine Learning and Oracle Advanced Analytics eliminate data movement, preserve security, and leverage database parallelism and scalability. Oracle Advanced Analytics 19c delivers unequaled performance and a library of in-database machine learning algorithms. In Oracle Autonomous Database, Oracle Machine Learning provides collaborative notebooks for data scientists. Oracle Machine Learning for Python, Oracle R Enterprise, Oracle R Advanced Analytics for Hadoop, cognitive analytics for images and text, and deploying models as microservices round out Oracle’s ML portfolio. In this session hear the latest developments and learn what’s next. SPEAKERS: Mark Hornick, Senior Director, Data Science and Machine Learning, Oracle Marcos Arancibia Coddou, Product Manager, Oracle Data Science and Big Data, Oracle Charlie Berger, Sr. Director Product Management, Machine Learning, AI and Cognitive Analytics, Oracle   03:45PM Moscone West - Room 3022C Introduction to Oracle Data Integration Service, a Fully Managed Serverless ETL Service Julien Testut, Senior Principal Product Manager – Data Integration Cloud, Oracle Sachin Thatte, Senior Director, Software Development, Data Integration Cloud, Oracle Denis Gray, Senior Director of Product Management – Data Integration Cloud, Oracle David Allan, Architect - Data Integration Cloud, Oracle Shubha Sundar, Software Development Director, Data Integration Cloud, Oracle Abhiram Gujjewar, Director Product Management, OCI, Oracle SOMENATH DAS, Senior Director, Software Development - Data Integration Cloud, Oracle   03:45 PM - 04:45 PM Introduction to Oracle Data Integration Service, a Fully Managed Serverless ETL Service 03:45 PM - 04:45 PM Moscone West - Room 3022C Cloud implementations need data integration to be successful. To maximize value from data, you must decide how to transform and aggregate data as it is moved from source to target. Join this hands-on lab to have firsthand experience with Oracle Data Integration Service, a key component of Oracle Cloud Infrastructure that provides fully managed data integration and extract/transform/load (ETL) capabilities. Learn how to implement processes that will prepare, transform, and load data into data warehouses or data lakes and hear how Oracle Data Integration Service integrates with Oracle Autonomous Databases and Oracle Cloud Infrastructure. 
SPEAKERS: Julien Testut, Senior Principal Product Manager – Data Integration Cloud, Oracle Sachin Thatte, Senior Director, Software Development, Data Integration Cloud, Oracle Denis Gray, Senior Director of Product Management – Data Integration Cloud, Oracle David Allan, Architect - Data Integration Cloud, Oracle Shubha Sundar, Software Development Director, Data Integration Cloud, Oracle Abhiram Gujjewar, Director Product Management, OCI, Oracle Somenath Das, Senior Director, Software Development - Data Integration Cloud, Oracle     Moscone West - Room 3019 Hands-on Lab: Low-Code Development with Oracle Application Express and Oracle Autonomous Database David Peake, Senior Principal Product Manager, Oracle Marc Sewtz, Senior Software Development Manager, Oracle   03:45 PM - 04:45 PM Hands-on Lab: Low-Code Development with Oracle Application Express and Oracle Autonomous Database 03:45 PM - 04:45 PM Moscone West - Room 3019 Oracle Application Express is a low-code development platform that enables you to build stunning, scalable, secure apps with world-class features that can be deployed anywhere. In this lab start by initiating your free trial for Oracle Autonomous Database and then convert a spreadsheet into a multiuser, web-based, responsive Oracle Application Express application in minutes—no prior experience with Oracle Application Express is needed. Learn how you can use Oracle Application Express to solve many of your business problems that are going unsolved today. SPEAKERS: David Peake, Senior Principal Product Manager, Oracle Marc Sewtz, Senior Software Development Manager, Oracle   Moscone West - Room 3021 Hands-on Lab: Oracle Autonomous Data Warehouse Hermann Baer, Senior Director Product Management, Oracle Keith Laker, Senior Principal Product Manager, Oracle Nilay Panchal, Product Manager, Oracle Yasin Baskan, Senior Principal Product Manager, Oracle   03:45 PM - 04:45 PM Hands-on Lab: Oracle Autonomous Data Warehouse 03:45 PM - 04:45 PM Moscone West - Room 3021 In this hands-on lab discover how you can access, visualize, and analyze lots of different types of data using a completely self-service, agile, and fast service running in the Oracle Cloud: Oracle Autonomous Data Warehouse. See how quickly and easily you can discover new insights by blending, extending, and visualizing a variety of data sources to create data-driven briefings on both desktop and mobile browsers—all without the help of IT. It has never been so easy to create visually sophisticated reports that really communicate your discoveries, all in the cloud, all self-service, powered by Oracle Autonomous Data Warehouse. Oracle's perfect quick-start service for fast data loading, sophisticated reporting, and analysis is for everybody. SPEAKERS: Hermann Baer, Senior Director Product Management, Oracle Keith Laker, Senior Principal Product Manager, Oracle Nilay Panchal, Product Manager, Oracle Yasin Baskan, Senior Principal Product Manager, Oracle   04:15PM Moscone South - Room 214 JSON Document Store in Oracle Autonomous Database: A Developer Perspective Roger Ford, Product Manager, Oracle   04:15 PM - 05:00 PM JSON Document Store in Oracle Autonomous Database: A Developer Perspective 04:15 PM - 05:00 PM Moscone South - Room 214 JSON Document Store is a service that works alongside Oracle Autonomous Data Warehouse and Oracle Autonomous Transaction Processing.
In this session see how JSON Document Store makes it easy for developers to create straightforward document-centric applications utilizing the power of Oracle Database without needing to write SQL code or to plan schemas in advance. Learn about the full development lifecycle, from JSON source to an end-user application running on Oracle Cloud. SPEAKERS: Roger Ford, Product Manager, Oracle   05:00PM The Exchange - Ask Tom Theater Ask TOM: SQL Pattern Matching Chris Saxon, Developer Advocate, Oracle   05:00 PM - 05:20 PM Ask TOM: SQL Pattern Matching 05:00 PM - 05:20 PM The Exchange - Ask Tom Theater SQL is a powerful language for accessing data. Using analytic functions, you can gain lots of insights from your data. But there are many problems that are hard or outright impossible to solve using them. Introduced in Oracle Database 12c, the row pattern matching clause, match_recognize, fills this gap. With this it's easy-to-write efficient SQL to answer many previously tricky questions. This session introduces the match_recognize clause. See worked examples showing how it works and how it's easier to write and understand than traditional SQL solutions. This session is for developers, DBAs, and data analysts who need to do advanced data analysis. SPEAKERS: Chris Saxon, Developer Advocate, Oracle   05:15PM Moscone South - Room 211 What’s New in Oracle Optimizer Nigel Bayliss, Senior Principal Product Manager, Oracle   05:15 PM - 06:00 PM What’s New in Oracle Optimizer 05:15 PM - 06:00 PM Moscone South - Room 211 This session covers all the latest features in Oracle Optimizer and provides an overview of the most important differences you can expect to see as you upgrade from earlier releases. Discover all the new, enhanced, and fully automated features relating to Oracle Optimizer. For example, learn how Oracle Database delivers better SQL execution plans with real-time statistics, protects you from query performance regression with SQL quarantine and automatic SQL plan management, and self-tunes your workload with automatic indexing. SPEAKERS: Nigel Bayliss, Senior Principal Product Manager, Oracle     Moscone South - Room 214 Oracle Advanced Compression: Essential Concepts, Tips, and Tricks for Enterprise Data Gregg Christman, Product Manager, Oracle   05:15 PM - 06:00 PM Oracle Advanced Compression: Essential Concepts, Tips, and Tricks for Enterprise Data 05:15 PM - 06:00 PM Moscone South - Room 214 Oracle Database 19c provides innovative row and columnar compression technologies that improve database performance while reducing storage costs. This enables organizations to utilize optimal compression based on usage. Automatic policies enable dynamic, lifecycle-aware compression including lower compression for faster access to hot/active data and higher compression for less frequently accessed inactive/archival data. In this session learn about the essential concepts, as well as compression best practices, and tips and tricks to ensure your organization is achieving the best possible compression savings, and database performance, for your enterprise data. 
SPEAKERS: Gregg Christman, Product Manager, Oracle     Moscone South - Room 152C Billboards to Dashboards: OUTFRONT Media Uses Oracle Analytics Cloud to Analyze Marketing Tim Vlamis, Vice President & Analytics Strategist, VLAMIS SOFTWARE SOLUTIONS INC Dan Vlamis, CEO - President, VLAMIS SOFTWARE SOLUTIONS INC Derek Hayden, VP - Data Strategy & Analytics, OUTFRONT Media   05:15 PM - 06:00 PM Billboards to Dashboards: OUTFRONT Media Uses Oracle Analytics Cloud to Analyze Marketing 05:15 PM - 06:00 PM Moscone South - Room 152C As one of the premier outdoor and out-of-home media organizations in the world, OUTFRONT Media (formerly CBS Outdoor) serves America's largest companies with complex marketing and media planning needs. In this session learn how OUTFRONT's analytics team migrated from Oracle Business Intelligence Cloud Service to Oracle Analytics Cloud with an Oracle Autonomous Data Warehouse backend. Hear what lessons were learned and what best practices to follow for Oracle Autonomous Data Warehouse and Oracle Analytics Cloud integration and see how sales analytics and location analysis are leading the organization to new analytics insights. Learn how a small team produced gigantic results in a short time. SPEAKERS: Tim Vlamis, Vice President & Analytics Strategist, VLAMIS SOFTWARE SOLUTIONS INC Dan Vlamis, CEO - President, VLAMIS SOFTWARE SOLUTIONS INC Derek Hayden, VP - Data Strategy & Analytics, OUTFRONT Media     Moscone South - Room 212 Oracle Autonomous Transaction Processing Dedicated Deployment: The End User's Experience Robert Greene, Senior Director, Product Management, Oracle Jim Czuprynski, Enterprise Data Architect, Viscosity NA   05:15 PM - 06:00 PM Oracle Autonomous Transaction Processing Dedicated Deployment: The End User's Experience 05:15 PM - 06:00 PM Moscone South - Room 212 Oracle Autonomous Transaction Processing dedicated deployment is the latest update to Oracle Autonomous Database. In this session see the service in action and hear a preview customer describe their experience using a dedicated deployment. Walk through the process of setting up and using HammerDB to run some quick performance workloads and then put the service through its paces to test additional functionality in the areas of private IP, fleet administrator resource isolation, service setup, bulk data loading, patching cooperation controls, transparent application continuity, and more. Hear directly from one of the earliest end users about their real-world experience using Autonomous Database. SPEAKERS: Robert Greene, Senior Director, Product Management, Oracle Jim Czuprynski, Enterprise Data Architect, Viscosity NA     Moscone West - Room 3019 Hands-on Lab: RESTful Services with Oracle REST Data Services and Oracle Autonomous Database Jeff Smith, Senior Principal Product Manager, Oracle Ashley Chen, Senior Product Manager, Oracle Colm Divilly, Consulting Member of Technical Staff, Oracle Elizabeth Saunders, Principal Technical Staff, Oracle   05:15 PM - 06:15 PM Hands-on Lab: RESTful Services with Oracle REST Data Services and Oracle Autonomous Database 05:15 PM - 06:15 PM Moscone West - Room 3019 In this session learn to develop and deploy a RESTful service using Oracle SQL Developer, Oracle REST Data Services, and Oracle Autonomous Database. Then connect these services as data sources to different Oracle JavaScript Extension Toolkit visualization components to quickly build rich HTML5 applications using a free and open source JavaScript framework.
SPEAKERS: Jeff Smith, Senior Principal Product Manager, Oracle Ashley Chen, Senior Product Manager, Oracle Colm Divilly, Consulting Member of Technical Staff, Oracle Elizabeth Saunders, Principal Technical Staff, Oracle     Moscone West - Room 3023 Hands-on Lab: Oracle Multitenant John Mchugh, Senior Principal Product Manager, Oracle Thomas Baby, Architect, Oracle Patrick Wheeler, Senior Director, Product Management, Oracle   05:15 PM - 06:15 PM Hands-on Lab: Oracle Multitenant 05:15 PM - 06:15 PM Moscone West - Room 3023 This is your opportunity to get up close and personal with Oracle Multitenant. In this session learn about a very broad range of Oracle Multitenant functionality in considerable depth. Warning: This lab has been filled to capacity quickly at every Oracle OpenWorld that it has been offered. It is strongly recommended that you sign up early. Even if you're only able to get on the waitlist, it's always worth showing up just in case there's a no-show and you can grab an available seat. SPEAKERS: John Mchugh, Senior Principal Product Manager, Oracle Thomas Baby, Architect, Oracle Patrick Wheeler, Senior Director, Product Management, Oracle     Moscone West - Room 3021 Hands-on Lab: Oracle Big Data SQL Martin Gubar, Director of Product Management, Big Data and Autonomous Database, Oracle Eric Vinck, Principal Sales Consultant, EMEA Oracle Solution Center, Oracle   05:15 PM - 06:15 PM Hands-on Lab: Oracle Big Data SQL 05:15 PM - 06:15 PM Moscone West - Room 3021 Modern data architectures encompass streaming data (e.g. Kafka), Hadoop, object stores, and relational data. Many organizations have significant experience with Oracle Databases, both from a deployment and skill set perspective. This hands-on lab walks through how to leverage that investment. Learn how to extend Oracle Database to query across data lakes (Hadoop and object stores) and streaming data while leveraging Oracle Database security policies. SPEAKERS: Martin Gubar, Director of Product Management, Big Data and Autonomous Database, Oracle Eric Vinck, Principal Sales Consultant, EMEA Oracle Solution Center, Oracle   06:00PM Moscone South - Room 309 Building Apps with an Autonomous Multi-model Database in the Cloud Keith Laker, Senior Principal Product Manager, Oracle Nilay Panchal, Product Manager, Oracle   06:00 PM - 06:45 PM Building Apps with an Autonomous Multi-model Database in the Cloud 06:00 PM - 06:45 PM Moscone South - Room 309 Oracle Autonomous Database provides full support for relational data and non-relational data such as JSON, XML, text, and spatial. It brings new capabilities, delivers dramatically faster performance, moves more application logic and analytics into the database, provides cloud-ready JSON and REST services to simplify application development, and enables analysis on dramatically larger datasets—making it ideally suited for the most advanced multi-model cloud-based applications. Learn more in this session. SPEAKERS: Keith Laker, Senior Principal Product Manager, Oracle Nilay Panchal, Product Manager, Oracle   Oracle Autonomous Database Product Management Team Designed by: Keith Laker Senior Principal Product Manager | Analytic SQL and Autonomous Database Oracle Database Product Management Scotscroft, Towers Business Park, Wilmslow Road, Didsbury. M20 2RY.


Autonomous

Where to find us in the demogrounds at #oow19....

If you are at OpenWorld tomorrow then I expect that you have lots of questions that need answering. Well, the demogrounds is the place for that! Our development teams will be at the demogrounds all week ready to answer all your questions. The demo area is located in the base of Moscone South and it will formally open at 10:00 am tomorrow (Monday) morning. The full list of opening times is: Monday 10:00am, Tuesday 10:30am, Wednesday 10:00am. The full agenda for #OOW19 is here: https://www.oracle.com/openworld/agenda.html. So where can you find us? And by "us" I mean the analytic SQL, parallel execution, PL/SQL and Autonomous Data Warehouse development teams. Well, here is some information to guide you to the right area: Got questions about Analytics SQL, parallel execution and PL/SQL? So where do you go to get answers to questions about Analytics SQL, parallel execution and PL/SQL? You need to find demo booth ADB-014. Which probably isn't that helpful really! So here is a short video showing you how to find the Analytics SQL, parallel execution and PL/SQL demo booth once you have got yourself to Moscone South and come down the moving stairway/escalator...and then follow in my footsteps and we will be ready and waiting for you! [video]   Got questions about Autonomous Data Warehouse and Analytics Cloud? So where do you go to get answers to questions about Autonomous Data Warehouse and Oracle Analytics Cloud? Well, these booths are in a different area, so you need to take a slightly different route, which is outlined here: [video]   Hopefully that's all clear now! Enjoy OpenWorld - I am looking forward to answering your questions.


Autonomous

Autonomous Monday at OpenWorld 2019 - List of Must See Sessions

Here you go folks...OpenWorld is so big this year, so to help you get the most from Monday I have put together a cheat sheet listing all the best sessions. Enjoy your Monday and make sure you drink lots of water! If you want this agenda on your phone (iPhone or Android) then check out our smartphone web app by clicking here. AGENDA - MONDAY 08:30AM Moscone West - Room 3019 RESTful Services with Oracle REST Data Services and Oracle Autonomous Database Jeff Smith, Senior Principal Product Manager, Oracle Ashley Chen, Senior Product Manager, Oracle Colm Divilly, Consulting Member of Technical Staff, Oracle Elizabeth Saunders, Principal Technical Staff, Oracle 08:30 AM - 09:30 AM RESTful Services with Oracle REST Data Services and Oracle Autonomous Database 08:30 AM - 09:30 AM Moscone West - Room 3019 In this session learn to develop and deploy a RESTful service using Oracle SQL Developer, Oracle REST Data Services, and Oracle Autonomous Database. Then connect these services as data sources to different Oracle JavaScript Extension Toolkit visualization components to quickly build rich HTML5 applications using a free and open source JavaScript framework. SPEAKERS:Jeff Smith, Senior Principal Product Manager, Oracle Ashley Chen, Senior Product Manager, Oracle Colm Divilly, Consulting Member of Technical Staff, Oracle Elizabeth Saunders, Principal Technical Staff, Oracle 09:00AM Moscone South - Room 201 Oracle Database 19c for Developers Chris Saxon, Developer Advocate, Oracle 09:00 AM - 09:45 AM Oracle Database 19c for Developers 09:00 AM - 09:45 AM Moscone South - Room 201 Another year brings another new release: Oracle Database 19c. As always, this brings a host of goodies to help developers. This session gives an overview of the best features for developers in recent releases. It covers JSON support, SQL enhancements, and PL/SQL improvements. It also shows you how to use these features to write faster, more robust, more secure applications with Oracle Database. If you're a developer or a DBA who regularly writes SQL or PL/SQL and wants to keep up to date, this session is for you. SPEAKERS:Chris Saxon, Developer Advocate, Oracle Moscone South - Room 156B A Unified Platform for All Data Peter Jeffcock, Big Data and Data Science, Cloud Business Group, Oracle Aali Masood, Senior Director, Big Data Go-To-Market, Oracle 09:00 AM - 09:45 AM A Unified Platform for All Data 09:00 AM - 09:45 AM Moscone South - Room 156B Seamless access to all your data, no matter where it's stored, should be a given. But the proliferation of new data storage technologies has led to new kinds of data silos. In this session learn how to build an information architecture that will stand the test of time and make all data available to the business, whether it's in Oracle Database, object storage, Hadoop, or elsewhere.
SPEAKERS:Peter Jeffcock, Big Data and Data Science, Cloud Business Group, Oracle Aali Masood, Senior Director, Big Data Go-To-Market, Oracle Moscone South - Room 210 Oracle Data Integration Cloud: The Future of Data Integration Now Chai Pydimukkala, Senior Director of Product Management – Data Integration Cloud, Oracle Arun Patnaik, Vice President, Architect - Data Integration Cloud, Oracle 09:00 AM - 09:45 AM Oracle Data Integration Cloud: The Future of Data Integration Now 09:00 AM - 09:45 AM Moscone South - Room 210 The rapid adoption of enterprise cloud–based solutions brings new data integration challenges as data moves from ground-to-cloud and cloud-to-cloud. Join this session led by Oracle Product Management to hear about the Oracle Data Integration Cloud strategy, vision, and roadmap to solve data needs now and into the future. Explore Oracle's next-generation data integration cloud services and learn about the evolution of data integration in the cloud with a practical path forward for customers using Oracle Data Integration Platform Cloud, Oracle Data Integrator, Oracle GoldenGate, Oracle Enterprise Metadata Management, and more. Learn the crucial role of data integration in successful cloud implementations. SPEAKERS:Chai Pydimukkala, Senior Director of Product Management – Data Integration Cloud, Oracle Arun Patnaik, Vice President, Architect - Data Integration Cloud, Oracle Moscone West - Room 3016 All Analytics, All Data: No Nonsense Shyam Varan Nath, Director IoT & Cloud, BIWA User Group Dan Vlamis, CEO - President, VLAMIS SOFTWARE SOLUTIONS INC Charlie Berger, Sr. Director Product Management, Machine Learning, AI and Cognitive Analytics, Oracle 09:00 AM - 09:45 AM All Analytics, All Data: No Nonsense 09:00 AM - 09:45 AM Moscone West - Room 3016 In this session learn from people who have implemented real-world solutions using Oracle's business analytics, machine learning, and spatial and graph technologies. Several Oracle BI, Data Warehouse, and Analytics (BIWA) User Community customers explain how they use Oracle's analytics and machine learning, cloud, autonomous, and on-premises applications to extract more information from their data. Come join the community and learn from people who have been there and done that. SPEAKERS:Shyam Varan Nath, Director IoT & Cloud, BIWA User Group Dan Vlamis, CEO - President, VLAMIS SOFTWARE SOLUTIONS INC Charlie Berger, Sr. Director Product Management, Machine Learning, AI and Cognitive Analytics, Oracle 10:00AM YBCA Theater SOLUTION KEYNOTE: Autonomous Data Management Andrew Mendelsohn, Executive Vice President Database Server Technologies, Oracle 10:00 AM - 11:15 AM SOLUTION KEYNOTE: Autonomous Data Management 10:00 AM - 11:15 AM YBCA Theater Oracle is proven to be the database of choice for managing customers' operational and analytical workloads on-premises and in the cloud, with truly unique and innovative autonomous database services. In his annual Oracle OpenWorld address, Oracle Executive Vice President Andy Mendelsohn discusses what's new and what's coming next from the Database Development team.
SPEAKERS:Andrew Mendelsohn, Executive Vice President Database Server Technologies, Oracle Moscone South - Room 210 Oracle Data Integration Cloud: Data Catalog Service Deep Dive Denis Gray, Senior Director of Product Management – Data Integration Cloud, Oracle Abhiram Gujjewar, Director Product Management, OCI, Oracle 10:00 AM - 10:45 AM Oracle Data Integration Cloud: Data Catalog Service Deep Dive 10:00 AM - 10:45 AM Moscone South - Room 210 The new Oracle Data Catalog enables transparency, traceability, and trust in enterprise data assets in Oracle Cloud and beyond. Join this Oracle Product Management–led session to learn how to get control of your enterprise's data, metadata, and data movement. The session includes a walkthrough of enterprise metadata search, business glossary, data lineage, automatic metadata harvesting, and native metadata integration with Oracle Big Data and Oracle Autonomous Database. Learn the importance of a metadata catalog and overall governance for big data, data lakes, and data warehouses as well as metadata integration for PaaS services. See how Oracle Data Catalog enables collaboration across enterprise data personas. SPEAKERS:Denis Gray, Senior Director of Product Management – Data Integration Cloud, Oracle Abhiram Gujjewar, Director Product Management, OCI, Oracle Moscone West - Room 3021 Hands-on Lab: Oracle Big Data SQL Martin Gubar, Director of Product Management, Big Data and Autonomous Database, Oracle Eric Vinck, Principal Sales Consultant, EMEA Oracle Solution Center, Oracle 10:00 AM - 11:00 AM Hands-on Lab: Oracle Big Data SQL 10:00 AM - 11:00 AM Moscone West - Room 3021 Modern data architectures encompass streaming data (e.g. Kafka), Hadoop, object stores, and relational data. Many organizations have significant experience with Oracle Databases, both from a deployment and skill set perspective. This hands-on lab walks through how to leverage that investment. Learn how to extend Oracle Database to query across data lakes (Hadoop and object stores) and streaming data while leveraging Oracle Database security policies. SPEAKERS:Martin Gubar, Director of Product Management, Big Data and Autonomous Database, Oracle Eric Vinck, Principal Sales Consultant, EMEA Oracle Solution Center, Oracle Moscone West - Room 3023 Hands-on Lab: Oracle Multitenant John Mchugh, Senior Principal Product Manager, Oracle Thomas Baby, Architect, Oracle Patrick Wheeler, Senior Director, Product Management, Oracle 10:00 AM - 11:00 AM Hands-on Lab: Oracle Multitenant 10:00 AM - 11:00 AM Moscone West - Room 3023 This is your opportunity to get up close and personal with Oracle Multitenant. In this session learn about a very broad range of Oracle Multitenant functionality in considerable depth. Warning: This lab has been filled to capacity quickly at every Oracle OpenWorld that it has been offered. It is strongly recommended that you sign up early. Even if you're only able to get on the waitlist, it's always worth showing up just in case there's a no-show and you can grab an available seat.
SPEAKERS:John Mchugh, Senior Principal Product Manager, Oracle Thomas Baby, Architect, Oracle Patrick Wheeler, Senior Director, Product Management, Oracle Moscone West - Room 3019 Low-Code Development with Oracle Application Express and Oracle Autonomous Database David Peake, Senior Principal Product Manager, Oracle Marc Sewtz, Senior Software Development Manager, Oracle 10:00 AM - 11:00 AM Low-Code Development with Oracle Application Express and Oracle Autonomous Database 10:00 AM - 11:00 AM Moscone West - Room 3019 Oracle Application Express is a low-code development platform that enables you to build stunning, scalable, secure apps with world-class features that can be deployed anywhere. In this lab start by initiating your free trial for Oracle Autonomous Database and then convert a spreadsheet into a multiuser, web-based, responsive Oracle Application Express application in minutes—no prior experience with Oracle Application Express is needed. Learn how you can use Oracle Application Express to solve many of your business problems that are going unsolved today. SPEAKERS:David Peake, Senior Principal Product Manager, Oracle Marc Sewtz, Senior Software Development Manager, Oracle 11:30AM Moscone West - Room 3023 Hands-on Lab: Oracle Database In-Memory Andy Rivenes, Product Manager, Oracle 11:30 AM - 12:30 PM Hands-on Lab: Oracle Database In-Memory 11:30 AM - 12:30 PM Moscone West - Room 3023 Oracle Database In-Memory introduces an in-memory columnar format and a new set of SQL execution optimizations including SIMD processing, column elimination, storage indexes, and in-memory aggregation, all of which are designed specifically for the new columnar format. This lab provides a step-by-step guide on how to get started with Oracle Database In-Memory, how to identify which of the optimizations are being used, and how your SQL statements benefit from them. The lab uses Oracle Database and also highlights the new features available in the latest release. Experience firsthand just how easy it is to start taking advantage of this technology and its performance improvements. SPEAKERS:Andy Rivenes, Product Manager, Oracle Moscone West - Room 3021 Hands-on Lab: Oracle Autonomous Data Warehouse Hermann Baer, Senior Director Product Management, Oracle Keith Laker, Senior Principal Product Manager, Oracle Nilay Panchal, Product Manager, Oracle Yasin Baskan, Senior Principal Product Manager, Oracle 11:30 AM - 12:30 PM Hands-on Lab: Oracle Autonomous Data Warehouse 11:30 AM - 12:30 PM Moscone West - Room 3021 In this hands-on lab discover how you can access, visualize, and analyze lots of different types of data using a completely self-service, agile, and fast service running in the Oracle Cloud: oracle Autonomous Data Warehouse. See how quickly and easily you can discover new insights by blending, extending, and visualizing a variety of data sources to create data-driven briefings on both desktop and mobile browsers—all without the help of IT. It has never been so easy to create visually sophisticated reports that really communicate your discoveries, all in the cloud, all self-service, powered by Oracle Autonomous Data Warehouse. Oracle’s perfect quick-start service for fast data loading, sophisticated reporting, and analysis is for everybody. 
SPEAKERS:Hermann Baer, Senior Director Product Management, Oracle Keith Laker, Senior Principal Product Manager, Oracle Nilay Panchal, Product Manager, Oracle Yasin Baskan, Senior Principal Product Manager, Oracle 12:15PM The Exchange - Ask Tom Theater Using Machine Learning and Oracle Autonomous Database to Target Your Best Customers Charlie Berger, Sr. Director Product Management, Machine Learning, AI and Cognitive Analytics, Oracle 12:15 PM - 12:35 PM Using Machine Learning and Oracle Autonomous Database to Target Your Best Customers 12:15 PM - 12:35 PM The Exchange - Ask Tom Theater Oracle Machine Learning extends Oracle's offerings in the cloud with its collaborative notebook environment that helps data scientist teams build, share, document, and automate data analysis methodologies that run 100% in Oracle Autonomous Database. In this session learn how to interactively work with your data, and build, evaluate, and apply machine learning models. Import, export, edit, run, and share Oracle Machine Learning notebooks with other data scientists and colleagues, all on Oracle Autonomous Database. SPEAKERS:Charlie Berger, Sr. Director Product Management, Machine Learning, AI and Cognitive Analytics, Oracle Moscone West - Room 3024C Node.js SODA APIs on Oracle Autonomous Database - BYOL Dan Mcghan, Developer Advocate, Oracle 12:30 PM - 02:30 PM Node.js SODA APIs on Oracle Autonomous Database - BYOL 12:30 PM - 02:30 PM Moscone West - Room 3024C PLEASE NOTE: YOU MUST BRING YOUR OWN LAPTOP (BYOL) TO PARTICIPATE IN THIS HANDS-ON LAB. Oracle Database has many features for different types of data, including spatial and graph, XML, text, and SecureFiles. One of the latest incarnations is Simple Oracle Document Access (SODA), which provides a set of NoSQL-style APIs that enable you to create collections of documents (often JSON), retrieve them, and query them—all without any knowledge of SQL. In this hands-on lab, create an Oracle Autonomous Database instance and learn how to connect to it securely. Then use SODA APIs to complete a Node.js-based REST API powering the front end of a "todo" tracking application. Finally learn how to use the latest JSON functions added to the SQL engine to explore and project JSON data relationally. The lab includes an introduction to more-advanced usage and APIs. SPEAKERS:Dan Mcghan, Developer Advocate, Oracle 01:00PM Moscone West - Room 3021 Hands-on Lab: Oracle Essbase Eric Smadja, Oracle Ashish Jain, Product Manager, Oracle Mike Larimer, Oracle 01:00 PM - 02:00 PM Hands-on Lab: Oracle Essbase 01:00 PM - 02:00 PM Moscone West - Room 3021 The good thing about the new hybrid block storage option (BSO) is that many of the concepts that we learned over the years about tuning a BSO cube still have merit. Knowing what a block is, and why it is important, is just as valuable today in hybrid BSO as it was 25 years ago when BSO was introduced. In this lab, learn best practices for performance optimization, how to manage data blocks, how dimension ordering still impacts things such as calculation order, the reasons why the layout of a report can impact query performance, how logs will help the Oracle Essbase developer debug calculation flow and query performance, and how new functionality within Smart View will help developers understand and modify the solve order.
SPEAKERS:Eric Smadja, Oracle Ashish Jain, Product Manager, Oracle Mike Larimer, Oracle 01:45PM Moscone South - Room 213 Oracle Multitenant: Seven Sources of Savings Patrick Wheeler, Senior Director, Product Management, Oracle 01:45 PM - 02:30 PM Oracle Multitenant: Seven Sources of Savings 01:45 PM - 02:30 PM Moscone South - Room 213 You've heard that the efficiencies of Oracle Multitenant help reduce capital expenses and operating expenses. How much can you expect to save? Attend this session to understand the many ways Oracle Multitenant can help reduce costs so that you can arrive at an answer to this important question. SPEAKERS:Patrick Wheeler, Senior Director, Product Management, Oracle Moscone South - Room 215/216 Rethink Database IT with Autonomous Database Dedicated Robert Greene, Senior Director, Product Management, Oracle Juan Loaiza, Executive Vice President, Oracle 01:45 PM - 02:30 PM Rethink Database IT with Autonomous Database Dedicated 01:45 PM - 02:30 PM Moscone South - Room 215/216 Larry Ellison recently launched a new dedicated deployment option for Oracle Autonomous Transaction Processing. But what does this mean for your organization and how you achieve your key data management goals? This session provides a clear understanding of how dedicated deployment works and illustrates how it can simplify your approach to data management and accelerate your transition to the cloud. SPEAKERS:Robert Greene, Senior Director, Product Management, Oracle Juan Loaiza, Executive Vice President, Oracle Moscone South - Room 211 Oracle Autonomous Data Warehouse: Update, Strategy, and Roadmap George Lumpkin, Vice President, Product Management, Oracle Keith Laker, Senior Principal Product Manager, Oracle 01:45 PM - 02:30 PM Oracle Autonomous Data Warehouse: Update, Strategy, and Roadmap 01:45 PM - 02:30 PM Moscone South - Room 211 The session provides an overview of Oracle's product strategy for Oracle Autonomous Data Warehouse. Learn about the capabilities and associated business benefits of recently released features. There is a focus on themes for future releases of the product roadmap as well as the key capabilities being planned. This is a must-attend session for Oracle DBAs, database developers, and cloud architects. SPEAKERS:George Lumpkin, Vice President, Product Management, Oracle Keith Laker, Senior Principal Product Manager, Oracle 01:00PM Moscone West - Room 3019 Building Microservices Using Oracle Autonomous Database Jean De Lavarene, Software Development Director, Oracle Kuassi Mensah, Director, Product Management, Oracle Simon Law, Product Manager, Oracle Pablo Silberkasten, Software Development Manager, Oracle 01:00 PM - 02:00 PM Building Microservices Using Oracle Autonomous Database 01:00 PM - 02:00 PM Moscone West - Room 3019 In this hands-on lab build a Java microservice connecting to Oracle Autonomous Database using the reactive streams ingestion Java library, Helidon SE, Oracle's JDBC driver, and the connection pool library (UCP). See how to containerize the service with Docker and perform container orchestration using Kubernetes.
SPEAKERS:Jean De Lavarene, Software Development Director, Oracle Kuassi Mensah, Director, Product Management, Oracle Simon Law, Product Manager, Oracle Pablo Silberkasten, Software Development Manager, Oracle 02:30PM Moscone South - Room 203 Ten Amazing SQL Features Connor Mcdonald, Database Advocate, Oracle 02:30 PM - 03:15 PM Ten Amazing SQL Features 02:30 PM - 03:15 PM Moscone South - Room 203 Sick and tired of writing thousands of lines of middle-tier code and still having performance problems? Let’s become fans once again of the database by being reintroduced to just how powerful the SQL language really is! Coding is great fun, but we do it to explore complex algorithms, build beautiful applications, and deliver fantastic solutions for our customers, not just to do boring data processing. By expanding our knowledge of SQL facilities, we can let all the boring work be handled via SQL rather than a lot of middle-tier code—and get performance benefits as an added bonus. This session highlights some SQL techniques for solving problems that would otherwise require a lot of complex coding. SPEAKERS:Connor Mcdonald, Database Advocate, Oracle Moscone West - Room 3021 Hands-on Lab: Oracle Machine Learning Mark Hornick, Senior Director Data Science and Big Data, Oracle Marcos Arancibia Coddou, Product Manager, Data Science and Big Data, Oracle Charlie Berger, Sr. Director Product Management, Data Science and Big Data, Oracle 02:30 PM - 03:30 PM Hands-on Lab: Oracle Machine Learning 02:30 PM - 03:30 PM Moscone West - Room 3021 In this introductory hands-on-lab, try out the new Oracle Machine Learning Zeppelin-based notebooks that come with Oracle Autonomous Database. Oracle Machine Learning extends Oracle’s offerings in the cloud with its collaborative notebook environment that helps data scientist teams build, share, document, and automate data analysis methodologies that run 100% in Oracle Autonomous Database. Interactively work with your data, and build, evaluate, and apply machine learning models. Import, export, edit, run, and share Oracle Machine Learning notebooks with other data scientists and colleagues. Share and further explore your insights and predictions using the Oracle Analytics Cloud. SPEAKERS:Mark Hornick, Senior Director Data Science and Big Data, Oracle Marcos Arancibia Coddou, Product Manager, Data Science and Big Data, Oracle Charlie Berger, Sr. Director Product Management, Data Science and Big Data, Oracle 02:45PM Moscone South - Room 213 Choosing the Right Database Cloud Service for Your Application Tammy Bednar, Sr. Director of Product Management, Database Cloud Services, Oracle 02:45 PM - 03:30 PM Choosing the Right Database Cloud Service for Your Application 02:45 PM - 03:30 PM Moscone South - Room 213 Oracle Cloud provides automated, customer-managed Oracle Database services in flexible configurations to meet your needs, large or small, with the performance of dedicated hardware. On-demand and highly available Oracle Database, on high-performance bare metal servers or Exadata, is 100% compatible with on-premises Oracle workloads and applications: seamlessly move between the two platforms. Oracle Database Cloud allows you to start at the cost and capability level suitable to your use case and then gives you the flexibility to adapt as your requirements change over time. Join this session to learn about Oracle Database Cloud services. SPEAKERS:Tammy Bednar, Sr. 
Director of Product Management, Database Cloud Services, Oracle Moscone South - Room 211 Roadmap for Oracle Big Data Appliance and Oracle Big Data Cloud Martin Gubar, Director of Product Management, Big Data and Autonomous Database, Oracle Alexey Filanovskiy, Senior Product Manager, Oracle 02:45 PM - 03:30 PM Roadmap for Oracle Big Data Appliance and Oracle Big Data Cloud 02:45 PM - 03:30 PM Moscone South - Room 211 Driven by megatrends like AI and the cloud, big data platforms are evolving. But does this mean it’s time to throw away everything being used right now? This session discusses the impact megatrends have on big data platforms and how you can ensure that you properly invest for a solid future. Topics covered include how platforms such as Oracle Big Data Appliance cater to these trends, as well as the roadmap for Oracle Big Data Cloud on Oracle Cloud Infrastructure. Learn how the combination of these products enable customers and partners to build out a platform for big data and AI, both on-premises and in Oracle Cloud, leveraging the synergy between the platforms and the trends to solve actual business problems. SPEAKERS:Martin Gubar, Director of Product Management, Big Data and Autonomous Database, Oracle Alexey Filanovskiy, Senior Product Manager, Oracle Oracle Autonomous Database Product Management Team Designed by: Keith Laker Senior Principal Product Manager | Analytic SQL and Autonomous Database Oracle Database Product Management Scotscroft, Towers Business Park, Wilmslow Road, Didsbury. M20 2RY.


Autonomous Database Now Supports Accessing the Object Storage with OCI Native Authentication

Loading and querying external data from Oracle Cloud Object Storage are amongst the most common operations performed in Autonomous Database. Accessing Object Storage requires users to have credentials that they can create via the CREATE_CREDENTIAL procedure of the DBMS_CLOUD package. Object Storage credentials are based on the user's OCI username and an Oracle-generated token string (also known as an 'auth token'). While auth token based credentials are still supported, DBMS_CLOUD now supports creating OCI native credentials as well! In this blog post, we are going to cover how to create a native credential and use it in an operation that requires Object Storage authentication. Let's start…

As you may already be familiar, the syntax to create a credential with a username and an auth token in ADB is as follows:

DBMS_CLOUD.CREATE_CREDENTIAL (
  credential_name IN VARCHAR2,
  username        IN VARCHAR2,
  password        IN VARCHAR2 DEFAULT NULL);

The CREATE_CREDENTIAL procedure is now overloaded to provide native authentication with the following syntax:

DBMS_CLOUD.CREATE_CREDENTIAL (
  credential_name IN VARCHAR2,
  user_ocid       IN VARCHAR2,
  tenancy_ocid    IN VARCHAR2,
  private_key     IN VARCHAR2,
  fingerprint     IN VARCHAR2);

In native authentication, the username and password parameters are replaced with the user_ocid, tenancy_ocid, private_key, and fingerprint parameters. user_ocid and tenancy_ocid are pretty self-explanatory: they correspond to the user's and tenancy's OCIDs respectively (check out "Where to Get the Tenancy's OCID and User's OCID" for more details). The private_key parameter specifies the generated private key in PEM format. When it comes to the private_key parameter, there are a couple of important details worth mentioning. Currently, a private key that is created with a passphrase is not supported; therefore, you need to make sure you generate a key with no passphrase (check out "How to Generate an API Signing Key" for more details on how to create a private key with no passphrase). Additionally, the private key that you provide for this parameter should only contain the key itself, without any header or footer (e.g. '-----BEGIN RSA PRIVATE KEY-----', '-----END RSA PRIVATE KEY-----'). Lastly, the fingerprint parameter specifies the fingerprint that can be obtained either after uploading the public key to the console (see "How to Upload the Public Key") or via OpenSSL commands (see "How to Get the Key's Fingerprint").
Once you gather all the necessary info and generate your private key, your CREATE_CREDENTIAL procedure should look similar to this: BEGIN DBMS_CLOUD.CREATE_CREDENTIAL ( credential_name => 'OCI_NATIVE_CRED', user_ocid => 'ocid1.user.oc1..aaaaaaaatfn77fe3fxux3o5lego7glqjejrzjsqsrs64f4jsjrhbsk5qzndq', tenancy_ocid => 'ocid1.tenancy.oc1..aaaaaaaapwkfqz3upqklvmelbm3j77nn3y7uqmlsod75rea5zmtmbl574ve6a', private_key => 'MIIEogIBAAKCAQEAsbNPOYEkxM5h0DF+qXmie6ddo95BhlSMSIxRRSO1JEMPeSta0C7WEg7g8SOSzhIroCkgOqDzkcyXnk4BlOdn5Wm/BYpdAtTXk0sln2DH/GCH7l9P8xC9cvFtacXkQPMAXIBDv/zwG1kZQ7Hvl7Vet2UwwuhCsesFgZZrAHkv4cqqE3uF5p/qHfzZHoevdq4EAV6dZK4Iv9upACgQH5zf9IvGt2PgQnuEFrOm0ctzW0v9JVRjKnaAYgAbqa23j8tKapgPuREkfSZv2UMgF7Z7ojYMJEuzGseNULsXn6N8qcvr4fhuKtOD4t6vbIonMPIm7Z/a6tPaISUFv5ASYzYEUwIDAQABAoIBACaHnIv5ZoGNxkOgF7ijeQmatoELdeWse2ZXll+JaINeTwKU1fIB1cTAmSFv9yrbYb4ubKCJuYZJeC6I92rT6gEiNpr670Pn5n43cwblszcTryWOYQVxAcLkejbPA7jZd6CW5xm/vEgRv5qgADVCzDCzrij0t1Fghicc+EJ4BFvOetnzEuSidnFoO7K3tHGbPgA+DPN5qrO/8NmrBebqezGkOuOVkOA64mp467DQUhpAvsy23RjBQ9iTuRktDB4g9cOdOVFouTZTnevN6JmDxufu9Lov2yvVMkUC2YKd+RrTAE8cvRrn1A7XKkH+323hNC59726jT57JvZ+ricRixSECgYEA508e/alxHUIAU9J/uq98nJY/6+GpI9OCZDkEdBexNpKeDq2dfAo9pEjFKYjH8ERj9quA7vhHEwFL33wk2D24XdZl6vq0tZADNSzOtTrtSqHykvzcnc7nXv2fBWAPIN59s9/oEKIOdkMis9fps1mFPFiN8ro4ydUWuR7B2nM2FWkCgYEAxKs/zOIbrzVLhEVgSH2NJVjQs24S8W+99uLQK2Y06R59L0Sa90QHNCDjB1MaKLanAahP30l0am0SB450kEiUD6BtuNHH8EIxGL4vX/SYeE/AF6tw3DqcOYbLPpN4CxIITF0PLCRoHKxARMZLCJBTMGpxdmTNGyQAPWXNSrYEKFsCgYBp0sHr7TxJ1WtO7gvvvd91yCugYBJAyMBr18YY0soJnJRhRL67A/hlk8FYGjLW0oMlVBtduQrTQBGVQjedEsepbrAcC+zm7+b3yfMb6MStE2BmLPdF32XtCH1bOTJSqFe8FmEWUv3ozxguTUam/fq9vAndFaNre2i08sRfi7wfmQKBgBrzcNHN5odTIV8l9rTYZ8BHdIoyOmxVqM2tdWONJREROYyBtU7PRsFxBEubqskLhsVmYFO0CD0RZ1gbwIOJPqkJjh+2t9SH7Zx7a5iV7QZJS5WeFLMUEv+YbYAjnXK+dOnPQtkhOblQwCEY3Hsblj7Xz7o=', fingerprint => '4f:0c:d6:b7:f2:43:3c:08:df:62:e3:b2:27:2e:3c:7a'); END; / PL/SQL procedure successfully completed. We should now be able to see our new credential in the dba_credentials table: SELECT owner, credential_name FROM dba_credentials WHERE credential_name LIKE '%NATIVE%'; OWNER CREDENTIAL_NAME ----- --------------- ADMIN OCI_NATIVE_CRED Let’s go ahead and create an external table using our new credential: BEGIN DBMS_CLOUD.CREATE_EXTERNAL_TABLE( table_name =>'CHANNELS_EXT', credential_name =>'OCI_NATIVE_CRED', file_uri_list =>'https://objectstorage.us-phoenix-1.oraclecloud.com/n/adb/b/bucket_testpdb/o/channels.txt', format => json_object('delimiter' value ','), column_list => 'CHANNEL_ID NUMBER, CHANNEL_DESC VARCHAR2(20), CHANNEL_CLASS VARCHAR2(20), CHANNEL_CLASS_ID NUMBER, CHANNEL_TOTAL VARCHAR2(13), CHANNEL_TOTAL_ID NUMBER'); END; / PL/SQL procedure successfully completed. SELECT count(*) FROM channels_ext; COUNT(*) -------- 5 To summarize, in addition to the auth token based authentication, you can now also have OCI native authentication and CREATE_CREDENTIAL procedure is overloaded to accommodate both options as we demonstrated above.
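If you want a quick sanity check that a native credential can actually reach your bucket before wiring it into external tables, one option is to list the bucket contents with DBMS_CLOUD.LIST_OBJECTS. The sketch below is a minimal example: the credential name matches the one created above, and the bucket URI reuses the example bucket from this post, so substitute your own region, namespace and bucket.

-- List the objects in a bucket using the OCI native credential created above.
-- The object storage URI is a placeholder; replace it with your own bucket.
SELECT object_name, bytes
FROM   DBMS_CLOUD.LIST_OBJECTS(
         credential_name => 'OCI_NATIVE_CRED',
         location_uri    => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/adb/b/bucket_testpdb/o/');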


Big Data

Oracle Big Data SQL 4.0 – Query Server

One of the popular new Big Data SQL features is its Query Server. You can think of Query Server as an Oracle Database 18c query engine that uses the Hive metastore to capture table definitions. Data isn't stored in Query Server; it allows you to access data in Hadoop, NoSQL, Kafka and Object Stores (Oracle Object Store and Amazon S3) using Oracle SQL.

Installation and Configuration
Architecturally, here's what a Big Data SQL deployment with Query Server looks like. There are two parts to the Big Data SQL deployment:
Query Server is deployed to an edge node of the cluster (eliminating resource contention with services running on the data nodes).
"Cells" are deployed to data nodes. These cells are responsible for scanning, filtering and processing of data and returning summarized results to Query Server.
Query Server setup is handled by Jaguar – the Big Data SQL install utility. As part of the installation, update the installer configuration file – bds-config.json – to simply specify that you want to use Query Server and the host that it should be deployed to (that host should be a "gateway" server). Also, include the Hive databases that should synchronize with Query Server (here we're specifying all):
{
  "edgedb": {
    "node": "your-host.com",
    "enabled": "true",
    "sync_hive_db_list": "*"
  }
}
Jaguar will automatically detect the Hive source and the Hadoop cluster security configuration information and configure Query Server appropriately. Hive metadata will be synchronized with Query Server (either full metadata replacement or incremental updates) using the PL/SQL API (dbms_bdsqs.sync_hive_databases) or through the cluster management framework (see picture of Cloudera Manager below). For secure clusters, you will log into Query Server using Kerberos – just like you would access other Hadoop cluster services. Similar to Hive metadata, Kerberos principals can be synchronized through your cluster admin tool (Cloudera Manager or Ambari), Jaguar (jaguar sync_principals) or PL/SQL (DBMS_BDSQS_ADMIN.ADD_KERBEROS_PRINCIPALS and DBMS_BDSQS_ADMIN.DROP_KERBEROS_PRINCIPALS).

Query Your Data
Once your Query Server is deployed, query your data using Oracle SQL. There is a bdsql user that is automatically created, and data is accessible through the bdsqlusr PDB:
sqlplus bdsql@bdsqlusr
You will see schemas defined for all your Hive databases – and external tables within those schemas that map to your Hive tables. The full Oracle SQL language is available to you (queries – not inserts/updates/deletes). Authorization will leverage the underlying privileges set up on the Hadoop cluster; there are no authorization rules to replicate. You can create new external tables using the Big Data SQL drivers:
ORACLE_HIVE – to leverage tables using Hive metadata (note, you probably don't need to do this because the external tables are already available)
ORACLE_HDFS – to create tables over HDFS data for which there is no Hive metadata
ORACLE_BIGDATA – to create tables over object store sources
Query Server provides a limited-use Oracle Database license. This allows you to create external tables over sources – but not internal tables. Although there is nothing physically stopping you from creating internal tables, you will find that any internal table created will be deleted when Query Server restarts. The beauty of Query Server is that you get to use the powerful Oracle SQL language and the mature Oracle Database optimizer.
It means that your existing query applications will be able to use Query Server as they would any other Oracle Database.  No need to change your queries to support a less rich query engine.  Correlate near real-time Kafka data with information captured in your data lake.  Apply advanced SQL functions like Pattern Matching and time series analyses to gain insights from all your data – and watch your insights and productivity soar :-).
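To make the ORACLE_BIGDATA driver mentioned above a bit more concrete, here is a minimal sketch of an external table over a delimited file in an object store. The table name, columns and object URI are hypothetical, and only the basic fileformat access parameter is shown; for a private bucket you would typically also supply a credential access parameter (com.oracle.bigdata.credential.name), so check the Big Data SQL documentation for the full set of options.

-- Hypothetical external table over a CSV file in an object store,
-- created from Query Server using the ORACLE_BIGDATA driver.
CREATE TABLE sales_obj_store (
  order_id NUMBER,
  channel  VARCHAR2(20),
  amount   NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_BIGDATA
  ACCESS PARAMETERS (
    com.oracle.bigdata.fileformat = csv
  )
  LOCATION ('https://objectstorage.us-phoenix-1.oraclecloud.com/n/mynamespace/b/mybucket/o/sales.csv')
)
REJECT LIMIT UNLIMITED;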


Big Data

Oracle Big Data SQL 4.0 - Great New Performance Feature

Big Data SQL 4.0 introduces a data processing enhancement that can have a dramatic impact on query performance:  distributed aggregation using in-memory capabilities. Big Data SQL has always done a great job of filtering data on the Hadoop cluster.  It does this using the following optimizations:  1) column projection, 2) partition pruning, 3) storage indexes, 4) predicate pushdown. Column projection is the first optimization.  If your table has 200 columns – and you are only selecting one – then only a single column’s data will be transferred from the Big Data SQL Cell on the Hadoop cluster to the Oracle Database.  This optimization is applied to all file types – CSV, Parquet, ORC, Avro, etc. The image below shows the other parts of the data elimination steps.  Let’s say you are querying 100TB data set. Partition Pruning:  Hive partitions data by a table’s column(s).  If you have two years of data and your table is partitioned by day – and the query is only selecting 2 months – then in this example, 90% of the data will be “pruned” – or not scanned Storage Index:  SIs are a fine-grained data elimination technique.  Statistics are collected for each file’s data blocks based on query usage patterns – and these statistics are used to determine whether or not it’s possible that data for the given query is contained within that block.  If the data does not exist in that block, then the block is not scanned (remember, a block can represent a significant amount of data - oftentimes 128MB). This information is automatically maintained and stored in a lightweight, in-memory structure. Predicate Pushdown:  Certain file types – like Parquet and ORC – are really database files.  Big Data SQL is able to push predicates into those files and only retrieve the data that meets the query criteria Once those scan elimination techniques are applied, Big Data SQL Cells will process and filter the remaining data - returning the results to the database. In-Memory Aggregation In-memory aggregation has the potential to dramatically speed up queries.  Prior to Big Data SQL 4.0, Oracle Database performed the aggregation over the filtered data sets that were returned by Big Data SQL Cells.  With in-memory aggregation, summary computations are run across the Hadoop cluster data nodes.  The massive compute power of the cluster is used to perform aggregations. Below, detailed activity is captured at the customer location level; the query is asking for a summary of activity by region and month. When the query is executed, processing is distributed to each data node on the Hadoop cluster.  Data elimination techniques and filtering is applied – and then each node will aggregate the data up to region/month.  This aggregated data is then returned to the database tier from each cell - and the database then completes the aggregation and applies other functions. Big Data SQL is using an extension to the in-memory aggregation functionality offered by Oracle Database.  Check out the documentation for details on the capabilities and where you can expect a good performance gain. The results can be rather dramatic, as illustrated by the chart found below: This test compares running the same queries with aggregation offload disabled and then enabled.  It shows 1) a simple, single table “count(*)” query, 2) a query against a single table that performs a group by and 3) a query that joins a dimension table to a fact table.  The second and third examples also show increasing the number of columns accessed by the query.  
In this simple test, performance improved from 13x to 36x :-). Lots of great new capabilities in Big Data SQL 4.0.  This one may be my favorite :-).
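As a simple illustration of the kind of statement that benefits from aggregation offload, consider a query that rolls detailed, location-level activity up to region and month, similar to the scenario described above. The table and column names below are hypothetical; the point is that the partial GROUP BY work can now be performed by the Big Data SQL cells on the data nodes, with the database tier merging the partial results.

-- Hypothetical fact and dimension tables exposed through Big Data SQL.
-- With aggregation offload, the cells compute partial sums per region/month
-- and the database merges them, instead of aggregating everything centrally.
SELECT r.region,
       TO_CHAR(f.activity_date, 'YYYY-MM') AS activity_month,
       SUM(f.activity_amount)              AS total_activity
FROM   customer_activity f
JOIN   region_dim r
  ON   r.location_id = f.location_id
GROUP  BY r.region, TO_CHAR(f.activity_date, 'YYYY-MM');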


Autonomous

SQL Developer Web comes to Autonomous Data Warehouse - oh YES!

If you log into your cloud console and create a new autonomous data warehouse, or if you have an existing data warehouse instance, then there is great news - you can now launch SQL Developer Web directly from the service console. There is no need to download and install the full desktop version of SQL Developer anymore. If you want a quick overview of this feature then there is a great video by Jeff Smith (Oracle Product Manager for SQL Developer) on YouTube: https://www.youtube.com/watch?v=asHlUW-Laxk. In the video Jeff gives an overview and a short demonstration of this new UI.

ADMIN-only Access
First off, straight out of the box, only the ADMIN user can access SQL Developer Web - which makes perfect sense when you think about it! Therefore, the ADMIN user is always going to be the first person to connect to SQL Dev Web, and then they enable access for other users/schemas as required. A typical autonomous workflow will look something like this:
Create a new ADW instance
Open Service Console
Connect to SQL Dev Web as ADMIN user
Enable each schema/user via the ords_admin.enable_schema procedure
Send schema-specific URL to each developer

Connecting as the ADMIN user
From the Administration tab on the service console you will see that we added two new buttons - one to access APEX (more information here) and one to access SQL Developer Web. As this is on the Administration tab, the link for SQL Developer Web, not surprisingly, provides a special admin-only URL which, once you are logged in as the ADMIN user, brings you to the home screen. The ADMIN user also has some additional features enabled for monitoring their autonomous data warehouse via the hamburger menu in the top left corner.

The Dashboard view displays general status information about the data warehouse:
Database Status: Displays the overall status of the database.
Alerts: Displays the number of Error alerts in the alert log.
Database Storage: Displays how much storage is being used by the database.
Sessions: Displays the status of open sessions in the database.
Physical IO Panel: Displays the rates of physical reads and writes of database data.
Waits: Displays how many wait events are occurring in the database for various reasons.
Quick Links: Provides buttons to open the Worksheet and Data Modeler. It also provides a button to open the Oracle Application Express sign-in page for the current database.

Home Page View
This page has some cool features - there is a timeline that tracks when objects got added to the database, and there is an associated quick-glance view that shows the status of those objects, so for a table you can see whether it has been automatically analyzed and its stats are up to date.

Enabling the users/schemas
To allow a developer to access their schema and log in, the ADMIN user has to run a small PL/SQL script to enable the schema, and that process is outlined here: https://docs.oracle.com/en/database/oracle/sql-developer-web/19.1/sdweb/about-sdw.html#GUID-A79032C3-86DC-4547-8D39-85674334B4FE. Once that's done, the ADMIN user can provide the developer with their personal URL to access SQL Developer Web. Essentially, this developer URL is the same as the URL the ADMIN user gets from the service console, but with the /admin/ segment of the URL replaced by the /schema-alias/ specified during the "enable-user-access" step (see the sketch at the end of this post). The doc lays this out very nicely.

Guided Demo

Overall, adding SQL Dev Web to Autonomous Data Warehouse is going to make life so much easier for DBAs and developers.
SQL Developer Web can now be the go-to interface for most in-database tasks, which means you don't have to download and install a desktop tool (which in most corporate environments creates all sorts of problems due to locked-down Windows and Mac desktops).

Where to get more information
When it comes to SQL Developer there is only one URL you need, and it belongs to Jeff Smith, who is the product manager for SQL Developer: https://www.thatjeffsmith.com/. Jeff's site contains everything you could ever want to know about using SQL Developer Desktop and SQL Developer Web: overview videos, tutorial videos, feature videos, tips & tricks, and more. Have fun with SQL Developer Web and Autonomous Data Warehouse!
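For reference, here is a minimal sketch of the enable-schema step described above, run as the ADMIN user. The schema name and URL mapping pattern are hypothetical placeholders; the parameters shown follow the ORDS_ADMIN.ENABLE_SCHEMA documentation, so double-check them against the docs for your version.

-- Run as ADMIN: enable SQL Developer Web access for a hypothetical SALES_DEV schema.
-- The p_url_mapping_pattern value becomes the /schema-alias/ segment of the
-- developer's personal URL.
BEGIN
  ords_admin.enable_schema(
    p_enabled             => TRUE,
    p_schema              => 'SALES_DEV',
    p_url_mapping_type    => 'BASE_PATH',
    p_url_mapping_pattern => 'sales_dev',
    p_auto_rest_auth      => TRUE);
  COMMIT;
END;
/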


Autonomous

APEX comes to Autonomous Data Warehouse - oh YES!

A big "Autonomous Welcome" to all our APEX developers because your favorite low-code development environment is now built in Autonomous Data Warehouse. And before you ask - YES, if you have existing autonomous data warehouse instances you will find an APEX launch button got added to the Admin tab on your service console (see the screen capture below).  APEX comes to ADW. YES! APEX is now bundled with Autonomous Data Warehouse (even existing data warehouse instances have been updated). What does this mean? It means that you now have free access to  Oracle’s premiere low-code development platform: Application Express (APEX). This provides a low-code development environment that enables customers and partners to build stunning, scalable, secure apps with world-class features fully supported by Autonomous Database.   As an application developer you can now benefit from a simple but very powerful development platform powered by an autonomous database. It’s the perfect combination of low-code development meets zero management database. You can focus on building rich, sophisticated applications with APEX and the database will take care of itself. There are plenty of great use cases for APEX combined with Autonomous Database and from a data warehouse perspective 2 key ones stand out: 1) A replacement for a data mart built around spreadsheets We all done it at some point in our careers - used spreadsheets to build business critical applications and reporting systems. We all know this approach is simply a disaster waiting to happen! Yet almost every organization utilizes spreadsheets to share and report on data. Why? Because spreadsheets are so easy to create - anyone can put together a spreadsheet once they have the data. Once created they often send it out to colleagues who then tweak the data and pass it on to other colleagues, and so forth. This inevitably leads to numerous copies with different data and very a totally flawed business processes. A far better solution is to have a single source of truth stored in a fully secured database with a browser-based app that everyone can use to maintain the data. Fortunately, you now have one! Using Autonomous Data Warehouse and APEX any user can go from a spreadsheet to web app in a few clicks.  APEX provides a very powerful but easy to use wizard that in just a few clicks can transform your spreadsheet into a fully-populated table in Oracle Autonomous Data Warehouse, complete with a fully functioning app with a report and form for maintaining the data. One of the key benefits of switching to APEX is that your data becomes completely secure. The Autonomous Data Warehouse automatically encrypts data at rest and in transit, you can apply data masking profiles on any sensitive data that you share with others and Oracle takes care of making sure you have all the very latest security patches applied. Lastly, all your data is automatically backed. 2) Sharing external data with partners and customers. Many data warehouses make it almost impossible to share data with partners. This can make it very hard to improve your business processes. Providing an app to enable your customers to interact with you and see the same data sets can greatly improve customer satisfaction and lead to repeat business. However, you don't want to expose your internal systems on the Internet, and you have concerns about security, denial of service attacks, and web site uptime. By combining Autonomous Data Warehouse with APEX you can now safely develop public facing apps. 
Getting Started with APEX!
Getting started with APEX is really easy. Below you will see that I have put together a quick animation which guides you through the process of logging in to your APEX workspace from Autonomous Data Warehouse. What you see above is the process of logging in to APEX for the first time. In this situation you connect as the ADMIN user to the reserved workspace called "INTERNAL". Once you log in you will be required to create a new workspace and assign a user to that workspace to get things set up. In the above screenshots a new workspace called GKL is created for the user GKL. From that point on everything becomes fully focused on APEX and your Autonomous Data Warehouse just fades into the background, taking care of itself. It could not be simpler!

Learn More about APEX
If you are completely new to APEX then I would recommend jumping over to the dedicated Application Express website - apex.oracle.com. On this site you will find the APEX PM team has put together a great 4-step process to get you up and running with APEX: https://apex.oracle.com/en/learn/getting-started/ - quick note: obviously, you can skip step 1, which covers how to request an environment on our public APEX service, because you have your dedicated environment within your very own Autonomous Data Warehouse. Enjoy your new, autonomous APEX-enabled environment!

A big "Autonomous Welcome" to all our APEX developers because your favorite low-code development environment is now built in Autonomous Data Warehouse. And before you ask - YES, if you have existing...

Autonomous

There is a minor tweak to our UI - DEDICATED

You may have spotted from all the recent online news headlines and social media activity that we launched a new service for transactional workloads - ATP Dedicated. It allows an organization to rethink how it delivers Database IT, enabling a customizable private database cloud in the public cloud. Obviously this does not affect you if you are using Autonomous Data Warehouse, but it does have a subtle impact because our UI has had to change slightly. You will notice in the top left corner of the main console page we now have three types of services:
Autonomous Database
Autonomous Container Database
Autonomous Exadata Infrastructure
From a data warehouse perspective you are only interested in the first one in that list: Autonomous Database. In the main table that lists all your instances you can see there is a new column headed "Dedicated Infrastructure". For ADW, this will always show "No", as you can see below. If you create a new ADW you will notice that the pop-up form has now been replaced by a full-width page to make it easier to focus on the fields you need to complete. The new auto-scaling feature is still below the CPU Core Count box (for more information about auto scaling with ADW see this blog post). …and that's about it for this modest little tweak to our UI. So nothing major, just a subtle change visible when you click on the "Transaction Processing" box. Moving on...


Autonomous

How to Create a Database Link from an Autonomous Data Warehouse to a Database Cloud Service Instance

Autonomous Data Warehouse (ADW) now supports outgoing database links to any database that is accessible from an ADW instance including Database Cloud Service (DBCS) and other ADW/ATP instances. To use database links with ADW, the target database must be configured to use TCP/IP with SSL (TCPS) authentication. Since both ADW and ATP use TCPS authentication by default, setting up a database link between these services is pretty easy and takes only a few steps. We covered the ADB-to-ADB linking process in the first of this two part series of blog posts about using database links, see Making Database Links from ADW to other Databases. That post explained the simplest use case to configure and use. On the other hand, enabling TCPS authentication in a database that doesn't have it configured (e.g. in DBCS) requires some additional steps that need to be followed carefully. In this blog post, I will try to demonstrate how to create a database link from an ADW instance to a DBCS instance including the steps to enable TCPS authentication. Here is an outline of the steps that we are going to follow: Enable TCPS Authentication in DBCS Connect to DBCS Instance from Client via TCPS Create a DB Link from ADW to DBCS Create a DB Link from DBCS to ADW (Optional) Enable TCPS Authentication in DBCS A DBCS instance uses TCP/IP protocol by default. Configuring TCPS in DBCS involves several steps that need to be performed manually. Since we are going to modify the default listener to use TCPS and it's configured under the grid user, we will be using both oracle and grid users. Here are the steps needed to enable TCPS in DBCS: Create wallets with self signed certificates for server and client Exchange certificates between server and client wallets (Export/import certificates) Add wallet location in the server and the client network files Add TCPS endpoint to the database listener Create wallets with self signed certificates for server and client As part of enabling TCPS authentication, we need to create individual wallets for the server and the client. Each of these wallets has to have their own certificates that they will exchange with one another. For the sake of this example, I will be using a self signed certificate. The client wallet and certificate can be created in the client side; however, I'll be creating my client wallet and certificate in the server and moving them to my local system later on. See Configuring Secure Sockets Layer Authentication for more information. Let's start... Set up wallet directories with the root user [root@dbcs0604 u01]$ mkdir -p /u01/server/wallet [root@dbcs0604 u01]$ mkdir -p /u01/client/wallet [root@dbcs0604 u01]$ mkdir /u01/certificate [root@dbcs0604 /]# chown -R oracle:oinstall /u01/server [root@dbcs0604 /]# chown -R oracle:oinstall /u01/client [root@dbcs0604 /]# chown -R oracle:oinstall /u01/certificate Create a server wallet with the oracle user [oracle@dbcs0604 ~]$ cd /u01/server/wallet/ [oracle@dbcs0604 wallet]$ orapki wallet create -wallet ./ -pwd Oracle123456 -auto_login Oracle PKI Tool Release 18.0.0.0.0 - Production Version 18.1.0.0.0 Copyright (c) 2004, 2017, Oracle and/or its affiliates. All rights reserved. Operation is successfully completed. Create a server certificate with the oracle user [oracle@dbcs0604 wallet]$ orapki wallet add -wallet ./ -pwd Oracle123456 -dn "CN=dbcs" -keysize 1024 -self_signed -validity 3650 -sign_alg sha256 Oracle PKI Tool Release 18.0.0.0.0 - Production Version 18.1.0.0.0 Copyright (c) 2004, 2017, Oracle and/or its affiliates. 
All rights reserved. Operation is successfully completed. Create a client wallet with the oracle user [oracle@dbcs0604 wallet]$ cd /u01/client/wallet/ [oracle@dbcs0604 wallet]$ orapki wallet create -wallet ./ -pwd Oracle123456 -auto_login Oracle PKI Tool Release 18.0.0.0.0 - Production Version 18.1.0.0.0 Copyright (c) 2004, 2017, Oracle and/or its affiliates. All rights reserved. Operation is successfully completed. Create a client certificate with the oracle user [oracle@dbcs0604 wallet]$ orapki wallet add -wallet ./ -pwd Oracle123456 -dn "CN=ctuzla-mac" -keysize 1024 -self_signed -validity 3650 -sign_alg sha256 Oracle PKI Tool Release 18.0.0.0.0 - Production Version 18.1.0.0.0 Copyright (c) 2004, 2017, Oracle and/or its affiliates. All rights reserved. Operation is successfully completed. Exchange certificates between server and client wallets (Export/import certificates) Export the server certificate with the oracle user [oracle@dbcs0604 wallet]$ cd /u01/server/wallet/ [oracle@dbcs0604 wallet]$ orapki wallet export -wallet ./ -pwd Oracle123456 -dn "CN=dbcs" -cert /tmp/server.crt Oracle PKI Tool Release 18.0.0.0.0 - Production Version 18.1.0.0.0 Copyright (c) 2004, 2017, Oracle and/or its affiliates. All rights reserved. Operation is successfully completed. Export the client certificate with the oracle user [oracle@dbcs0604 wallet]$ cd /u01/client/wallet/ [oracle@dbcs0604 wallet]$ orapki wallet export -wallet ./ -pwd Oracle123456 -dn "CN=ctuzla-mac" -cert /tmp/client.crt Oracle PKI Tool Release 18.0.0.0.0 - Production Version 18.1.0.0.0 Copyright (c) 2004, 2017, Oracle and/or its affiliates. All rights reserved. Operation is successfully completed. Import the client certificate into the server wallet with the oracle user [oracle@dbcs0604 wallet]$ cd /u01/server/wallet/ [oracle@dbcs0604 wallet]$ orapki wallet add -wallet ./ -pwd Oracle123456 -trusted_cert -cert /tmp/client.crt Oracle PKI Tool Release 18.0.0.0.0 - Production Version 18.1.0.0.0 Copyright (c) 2004, 2017, Oracle and/or its affiliates. All rights reserved. Operation is successfully completed. Import the server certificate into the client wallet with the oracle user [oracle@dbcs0604 wallet]$ cd /u01/client/wallet/ [oracle@dbcs0604 wallet]$ orapki wallet add -wallet ./ -pwd Oracle123456 -trusted_cert -cert /tmp/server.crt Oracle PKI Tool Release 18.0.0.0.0 - Production Version 18.1.0.0.0 Copyright (c) 2004, 2017, Oracle and/or its affiliates. All rights reserved. Operation is successfully completed. Change permissions for the server wallet with the oracle user We need to set the permissions for the server wallet so that it can be accessed when we restart the listener after enabling TCPS endpoint. [oracle@dbcs0604 wallet]$ cd /u01/server/wallet [oracle@dbcs0604 wallet]$ chmod 640 cwallet.sso Add wallet location in the server and the client network files Creating server and client wallets with self signed certificates and exchanging certificates were the initial steps towards the TCPS configuration. We now need to modify both the server and client network files so that they point to their corresponding wallet location and they are ready to use the TCPS protocol. Here's how those files look in my case: Server-side $ORACLE_HOME/network/admin/sqlnet.ora under the grid user # sqlnet.ora Network Configuration File: /u01/app/18.0.0.0/grid/network/admin/sqlnet.ora # Generated by Oracle configuration tools. 
NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT) wallet_location = (SOURCE= (METHOD=File) (METHOD_DATA= (DIRECTORY=/u01/server/wallet))) SSL_SERVER_DN_MATCH=(ON) Server-side $ORACLE_HOME/network/admin/listener.ora under the grid user wallet_location = (SOURCE= (METHOD=File) (METHOD_DATA= (DIRECTORY=/u01/server/wallet))) LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))) # line added by Agent ASMNET1LSNR_ASM=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=ASMNET1LSNR_ASM)))) # line added by Agent ENABLE_GLOBAL_DYNAMIC_ENDPOINT_ASMNET1LSNR_ASM=ON # line added by Agent VALID_NODE_CHECKING_REGISTRATION_ASMNET1LSNR_ASM=SUBNET # line added by Agent ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON # line added by Agent VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET # line added by Agent Server-side $ORACLE_HOME/network/admin/tnsnames.ora under the oracle user # tnsnames.ora Network Configuration File: /u01/app/oracle/product/18.0.0.0/dbhome_1/network/admin/tnsnames.ora # Generated by Oracle configuration tools. LISTENER_CDB1 = (ADDRESS = (PROTOCOL = TCPS)(HOST = dbcs0604)(PORT = 1521)) CDB1_IAD1W9 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCPS)(HOST = dbcs0604)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = cdb1_iad1w9.sub05282047220.vcnctuzla.oraclevcn.com) ) (SECURITY= (SSL_SERVER_CERT_DN="CN=dbcs")) ) PDB1 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCPS)(HOST = dbcs0604)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = pdb1.sub05282047220.vcnctuzla.oraclevcn.com) ) (SECURITY= (SSL_SERVER_CERT_DN="CN=dbcs")) ) Add TCPS endpoint to the database listener Now that we are done with configuring our wallets and network files, we can move onto the next step, which is configuring the TCPS endpoint for the database listener. Since our listener is configured under grid, we will be using srvctl command to modify and restart it. Here are the steps: [grid@dbcs0604 ~]$ srvctl modify listener -p "TCPS:1521/TCP:1522" [grid@dbcs0604 ~]$ srvctl stop listener [grid@dbcs0604 ~]$ srvctl start listener [grid@dbcs0604 ~]$ srvctl stop database -database cdb1_iad1w9 [grid@dbcs0604 ~]$ srvctl start database -database cdb1_iad1w9 [grid@dbcs0604 ~]$ lsnrctl status LSNRCTL for Linux: Version 18.0.0.0.0 - Production on 05-JUN-2019 16:07:24 Copyright (c) 1991, 2018, Oracle. All rights reserved. Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))) STATUS of the LISTENER ------------------------ Alias LISTENER Version TNSLSNR for Linux: Version 18.0.0.0.0 - Production Start Date 05-JUN-2019 16:05:50 Uptime 0 days 0 hr. 1 min. 34 sec Trace Level off Security ON: Local OS Authentication SNMP OFF Listener Parameter File /u01/app/18.0.0.0/grid/network/admin/listener.ora Listener Log File /u01/app/grid/diag/tnslsnr/dbcs0604/listener/alert/log.xml Listening Endpoints Summary... (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER))) (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=10.0.0.4)(PORT=1521)))   (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.0.0.4)(PORT=1522))) Services Summary... Service "867e3020a52702dee053050011acf8c0.sub05282047220.vcnctuzla.oraclevcn.com" has 1 instance(s). Instance "cdb1", status READY, has 2 handler(s) for this service... Service "8a8e0ea41ac27e2de0530400000a486a.sub05282047220.vcnctuzla.oraclevcn.com" has 1 instance(s). Instance "cdb1", status READY, has 2 handler(s) for this service... Service "cdb1XDB.sub05282047220.vcnctuzla.oraclevcn.com" has 1 instance(s). Instance "cdb1", status READY, has 1 handler(s) for this service... 
Service "cdb1_iad1w9.sub05282047220.vcnctuzla.oraclevcn.com" has 1 instance(s). Instance "cdb1", status READY, has 2 handler(s) for this service... Service "pdb1.sub05282047220.vcnctuzla.oraclevcn.com" has 1 instance(s). Instance "cdb1", status READY, has 2 handler(s) for this service... The command completed successfully Please note that in the first step we added the TCPS endpoint to the port 1521 and TCP endpoint to the port 1522 of the default listener. It's also possible to keep the port 1521 as is and add TCPS endpoint to a different port (e.g. 1523). Connect to DBCS Instance from Client via TCPS We should have TCPS authentication configured now. Before we move onto testing, let's take a look at the client-side network files (Please note the public IP address of the DBCS instance in tnsnames.ora): Client-side tnsnames.ora CDB1 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCPS)(HOST = 132.145.151.208)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = cdb1_iad1w9.sub05282047220.vcnctuzla.oraclevcn.com) ) (SECURITY= (SSL_SERVER_CERT_DN="CN=dbcs")) ) PDB1 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCPS)(HOST = 132.145.151.208)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = pdb1.sub05282047220.vcnctuzla.oraclevcn.com) ) (SECURITY= (SSL_SERVER_CERT_DN="CN=dbcs")) ) Client-side sqlnet.ora WALLET_LOCATION = (SOURCE = (METHOD = FILE) (METHOD_DATA = (DIRECTORY = /Users/cantuzla/Desktop/wallet) ) ) SSL_SERVER_DN_MATCH=(ON) In order to connect to the DBCS instance from the client, you need to add an ingress rule for the port that you want to use (e.g. 1521) in the security list of your virtual cloud network (VCN) in OCI as shown below: We can now try to establish a client connection to PDB1 in our DBCS instance (CDB1): ctuzla-mac:~ cantuzla$ cd Desktop/InstantClient/instantclient_18_1/ ctuzla-mac:instantclient_18_1 cantuzla$ ./sqlplus /nolog SQL*Plus: Release 18.0.0.0.0 Production on Wed Jun 5 09:39:56 2019 Version 18.1.0.0.0 Copyright (c) 1982, 2018, Oracle. All rights reserved. SQL> connect c##dbcs/DBcs123_#@PDB1 Connected. SQL> select * from dual; D - X Create a DB Link from ADW to DBCS We now have a working TCPS authentication in our DBCS instance. Here are the steps from the documentation that we will follow to create a database link from ADW to DBCS: Copy your target database wallet (the client wallet cwallet.sso that we created in /u01/client/wallet) for the target database to Object Store. Create credentials to access your Object Store where you store the cwallet.sso. See CREATE_CREDENTIAL Procedure for details. Create a directory to store the target database wallet: SQL> create directory wallet_dir as 'walletdir'; Directory WALLET_DIR created. Upload the target database wallet to the wallet_dir directory on ADW using DBMS_CLOUD.GET_OBJECT: SQL> BEGIN DBMS_CLOUD.GET_OBJECT( credential_name => 'OBJ_STORE_CRED', object_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/adwctraining8/b/target-wallet/o/cwallet.sso', directory_name => 'WALLET_DIR'); END; / PL/SQL procedure successfully completed. On ADW create credentials to access the target database. The username and password you specify with DBMS_CLOUD.CREATE_CREDENTIAL are the credentials for the target database that you use to create the database link. Make sure the username consists of all uppercase letters. 
For this example, I will be using the C##DBCS common user that I created in my DBCS instance: SQL> BEGIN DBMS_CLOUD.CREATE_CREDENTIAL( credential_name => 'DBCS_LINK_CRED', username => 'C##DBCS', password => 'DBcs123_#'); END; / PL/SQL procedure successfully completed. Create the database link to the target database using DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK: SQL> BEGIN DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK( db_link_name => 'DBCSLINK', hostname => '132.145.151.208', port => '1521', service_name => 'pdb1.sub05282047220.vcnctuzla.oraclevcn.com', ssl_server_cert_dn => 'CN=dbcs', credential_name => 'DBCS_LINK_CRED',   directory_name => 'WALLET_DIR'); END; / PL/SQL procedure successfully completed. Use the database link you created to access the data on the target database: SQL> select * from dual@DBCSLINK; D - X Create a DB Link from DBCS to ADW (Optional) Although the previous section concludes the purpose of this blog post, here's something extra for those who are interested. In just couple additional steps, we can also create a DB link from DBCS to ADW: Download your ADW wallet. Upload the wallet to your DBCS instance using sftp or an FTP client. Unzip the wallet: [oracle@dbcs0604 ~]$ cd /u01/targetwallet [oracle@dbcs0604 targetwallet]$ unzip Wallet_adwtuzla.zip Archive: Wallet_adwtuzla.zip inflating: cwallet.sso inflating: tnsnames.ora inflating: truststore.jks inflating: ojdbc.properties inflating: sqlnet.ora inflating: ewallet.p12 inflating: keystore.jks Set GLOBAL_NAMES parameter to FALSE. This step is very important. If you skip this, your DB link will not work. SQL> alter system set global_names=FALSE; System altered. SQL> sho parameter global NAME TYPE VALUE ---------------------- ----------- ----------- allow_global_dblinks boolean FALSE global_names boolean FALSE global_txn_processes integer 1 Create a DB link as follows (notice the my_wallet_directory clause pointing to where we unzipped the ADW wallet): create database link ADWLINK connect to ADMIN identified by ************ using '(description= (retry_count=20)(retry_delay=3)(address=(protocol=tcps)(port=1522)(host=adb.us-ashburn-1.oraclecloud.com)) (connect_data=(service_name=ctwoqpkdfcuwpsd_adwtuzla_high.adwc.oraclecloud.com)) (security=(my_wallet_directory=/u01/targetwallet)(ssl_server_cert_dn="CN=adwc.uscom-east-1.oraclecloud.com,OU=Oracle BMCS US,O=Oracle Corporation,L=Redwood City,ST=California,C=US")))'; Database link created. Use the database link you created to access the data on the target database (your ADW instance in this case): SQL> select * from dual@ADWLINK; D - X That's it! In this blog post, we covered how to enable TCPS authentication in DBCS and create an outgoing database link from ADW to our DBCS instance. As a bonus content, we also explored how to create a DB link in the opposite direction, that is from DBCS to ADW. Even though we focused on the DBCS configuration, these steps can be applied when setting up a database link between ADW and any other Oracle database.
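As a small housekeeping note, the links created above are regular schema objects, so you can list them from the data dictionary, and on the ADW side the documented way to remove one is DBMS_CLOUD_ADMIN.DROP_DATABASE_LINK. Here is a quick sketch, reusing the DBCSLINK name from this post:

-- List the database links owned by the current user.
SELECT db_link, host, created FROM user_db_links;

-- On ADW, drop a link created with DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK.
BEGIN
  DBMS_CLOUD_ADMIN.DROP_DATABASE_LINK(
    db_link_name => 'DBCSLINK');
END;
/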


Autonomous

Making Database Links from ADW to other Databases

Autonomous Database now fully supports database links. What does this mean? It means that from within your Autonomous Data Warehouse you can make a connection to any other database (on-premise or in the cloud), including other Autonomous Data Warehouse instances and/or Autonomous Transaction Processing instances. Before I dive into an example, let's take a small step backwards and get a basic understanding of what a database link is.

What Are Database Links?
A database link is a pointer that defines a one-way communication path from, in this case, an Autonomous Data Warehouse instance to another database. The link is one-way in the sense that a client connected to Autonomous Data Warehouse A can use a link stored in Autonomous Data Warehouse A to access information (schema objects such as tables, views, etc.) in remote database B; however, users connected to database B cannot use the same link to access data in Autonomous Data Warehouse A. If local users on database B want to access data on Autonomous Data Warehouse A, then they must define their own link to Autonomous Data Warehouse A. There is more information about database links in the Administrator's Guide.

Why Are Database Links Useful?
In a lot of situations it can be really useful to have access to the very latest data without having to wait for the next run of the ETL processing. Being able to reach directly into other databases using a database link can be the fastest way to get an up-to-the-minute view of what's happening with sales orders, or expense claims, or trading positions, etc. Another use case is to make use of database links within the ETL processing itself, pulling data from remote databases into staging tables for further processing. This imposes a minimal processing overhead on the remote databases since all that is typically being executed is a basic SQL SELECT statement. There are additional security benefits as well. For example, consider a scenario where employees submit expense reports to an Accounts Payable (A/P) application and that information needs to be viewed within a financial data mart. The data mart users should be able to connect to the A/P database and run queries to retrieve the desired information. The mart users do not need to be A/P application users to do their analysis or run their ETL jobs; they should only be able to access A/P information in a controlled, secured way.

Setting Up A Database Link in ADW
There are not many steps involved in creating a new database link since all the hard work happens under the covers. The first step is to check that you can actually access the target database - i.e. you have a username and password along with all the connection information. To use database links with Autonomous Data Warehouse, the target database must be configured to use TCP/IP with SSL (TCPS) authentication. Fortunately, if you want to connect to another Autonomous Data Warehouse or Autonomous Transaction Processing instance then everything is already in place, because ADBs use TCP/IP with SSL (TCPS) authentication by default. For other cloud and on-premise databases you will most likely have to configure them to use TCP/IP with SSL (TCPS) authentication; I will try and cover this topic in a separate blog post. Word of caution here… don't forget to check your Network ACL settings if you are connecting to another ATP or ADW instance, since your attempt to connect might get blocked! There is more information about setting up Network ACLs here.
Scenario 1 - Connecting an Autonomous Data Warehouse to your Autonomous Transaction Processing instance
Let's assume that I have an ATP instance running a web store application that contains information about sales orders, distribution channels, customers, products etc. I want to access some of that data in real time from within my sales data mart. The first step is to get hold of the secure connection information for my ATP instance - essentially I need the cwallet.sso file that is part of the client credential file. If I click on the "APDEMO" link above I can access the information about that autonomous database, and in the list of "management" buttons is the facility to download the client credentials file. This gets me a zip file containing a series of files, two of which are needed to create a database link: cwallet.sso contains all the security credentials and tnsnames.ora contains all the connection information that I am going to need.

Uploading the wallet file…
Next I go to my Object Storage page and create a new bucket to store my wallet file. In this case I have just called it "wallet". In reality you will probably name your buckets to identify the target database, such as "atpdemo_wallet", simply because every wallet for each database will have exactly the same name - cwallet.sso - so you will need a way to identify the target database each wallet is associated with and avoid over-writing each wallet. Within my bucket I click on the blue "Upload" button to find the cwallet.sso file and move it to my Object Storage bucket. Once my wallet file is in my bucket I then need to set up my autonomous data warehouse to use that file when it makes a connection to my ATP instance. This is where we step out of the cloud GUI and switch to a client tool like SQL Developer. I have already defined my SQL Developer connection to my Autonomous Data Warehouse, which means I can start building my new database link.

Step 1 - Moving the wallet file
To allow Autonomous Data Warehouse to access the wallet file for my target ATP database I need to put it in a special location - the DATA_PUMP_DIR directory. This is done by using DBMS_CLOUD.GET_OBJECT as follows:
BEGIN
  DBMS_CLOUD.GET_OBJECT(
    credential_name => 'DEF_CRED_NAME',
    object_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/adwc/b/adwc_user/o/cwallet.sso',
    directory_name => 'DATA_PUMP_DIR');
END;
/
If you execute the above command all you will get back in the console is a message along the lines of "PL/SQL procedure successfully completed". To find out if the file actually got moved, you can query the DATA_PUMP_DIR directory:
SELECT *
FROM table(dbms_cloud.list_files('DATA_PUMP_DIR'))
WHERE object_name LIKE '%.sso'
which hopefully returns a result within SQL Developer confirming that my wallet file is now available to my Autonomous Data Warehouse.

Step 2 - Setting up authentication
When my database link connects to my target ATP instance it obviously needs a valid username and password on that instance. I could rely on the current user in my Autonomous Data Warehouse if it matches an account in my ATP instance, but chances are you will want to use a specific account on the target database, so a credential is required.
This can be set up relatively quickly using the following command:
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'ATP_DB_LINK_CRED',
    username => 'scott',
    password => 'tiger');
END;
/

Step 3 - Defining the new database link
For this step I am going to need access to the tnsnames.ora file to extract specific pieces of information about my ATP instance. Don't forget that for each autonomous instance there is a range of connections identified by resource group IDs such as "low", "medium", "high", "tp_urgent" etc. When defining your database link make sure you select the correct information from your tnsnames file. You will need to find the following identifiers: hostname, port, service name and ssl_server_cert_dn. In the example below I am using the "low" resource group connection:
BEGIN
  DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
    db_link_name => 'SHLINK',
    hostname => 'adb.us-phoenix-1.oraclecloud.com',
    port => '1522',
    service_name => 'example_low.adwc.example.oraclecloud.com',
    ssl_server_cert_dn => 'CN=adwc.example.oraclecloud.com,OU=Oracle BMCS PHOENIX,O=Oracle Corporation,L=Redwood City,ST=California,C=US',
    credential_name => 'ATP_DB_LINK_CRED');
END;
/
(I could also configure the database link to authenticate using the current user within my Autonomous Data Warehouse, assuming that I had a corresponding account in my Autonomous Transaction Processing instance.)

That's all there is to it! Everything is now in place, which means I can directly query my transactional data from my data warehouse. For example, if I want to see the table of distribution channels for my tp_app_orders then I can simply query the channels table as follows:
SELECT
  channel_id,
  channel_desc,
  channel_class,
  channel_class_id,
  channel_total,
  channel_total_id
FROM channels@SHLINK;
which now returns the distribution channel rows, and if I query my tp_app_orders table I can see the live data in my Autonomous Transaction Processing instance.

All Done!
That's it. It's now possible to connect your Autonomous Data Warehouse to any other database running on-premise or in the cloud, including other Autonomous Database instances. This makes it even quicker and easier to pull data from existing systems into your staging tables (see the sketch below) or even just query data directly from your source applications to get the most up-to-date view. In this post you will have noticed that I have created a new database link between an Autonomous Data Warehouse and an Autonomous Transaction Processing instance. Whilst this is a great use case, I suspect that many of you will want to connect your Autonomous Data Warehouse to an on-premise database. Well, as I mentioned at the start of this post there are some specific requirements related to using database links with Autonomous Data Warehouse where the target instance is not an autonomous database, and we will deal with those in the next post: How to Create a Database Link from an Autonomous Data Warehouse to a Database Cloud Service Instance. For more information about using database links with ADW click here.
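Since pulling remote data into staging tables was one of the use cases mentioned above, here is a minimal sketch of that pattern using the SHLINK link created in this post. The staging table name is a hypothetical placeholder.

-- Pull a one-off snapshot of the remote CHANNELS table into a local staging table.
CREATE TABLE stg_channels AS
SELECT * FROM channels@SHLINK;

-- Refresh the staging copy later, for example from a scheduled ETL run.
TRUNCATE TABLE stg_channels;
INSERT INTO stg_channels SELECT * FROM channels@SHLINK;
COMMIT;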


Autonomous

Autonomous Data Warehouse - Now with Spatial Intelligence

We are pleased to announce that Oracle Autonomous Data Warehouse now comes with spatial intelligence! If you are completely new to Oracle Autonomous Data Warehouse (where have you been for the last 18 months?) then here is a quick recap of the key features:

What is Oracle Autonomous Data Warehouse
Oracle Autonomous Data Warehouse provides a self-driving, self-securing, self-repairing cloud service that eliminates the overhead and human errors associated with traditional database administration. Oracle Autonomous Data Warehouse takes care of configuration, tuning, backup, patching, encryption, scaling, and more. Additional information can be found at https://www.oracle.com/database/autonomous-database.html.

Special Thanks...
This post has been prepared by David Lapp, who is part of the Oracle Spatial and Graph product management team. He is extremely well known within our spatial and graph community. If you want to follow David's posts on the Spatial and Graph blog then use this link; the Spatial and Graph blog is here.

Spatial Features
The core set of Spatial features has been enabled on Oracle Autonomous Data Warehouse. Highlights of the enabled features are: native storage and indexing of point/line/polygon geometries; spatial analysis and processing, such as proximity, containment, combining geometries, and distance/area calculations; geofencing to monitor objects entering and exiting areas of interest; and linear referencing to analyze events and activities located along linear networks such as roads and utilities. For details on enabled Spatial features, please see the Oracle Autonomous Data Warehouse documentation.

Loading Your Spatial Data into ADW
In Oracle Autonomous Data Warehouse, data loading is typically performed using either Oracle Data Pump or Oracle/3rd party data integration tools. There are a few different ways to load and configure your spatial data sets:
Load existing spatial data.
Load GeoJSON, WKT, or WKB and convert to Spatial using SQL.
Load coordinates and convert to Spatial using SQL.
Obviously the files containing your spatial data sets can be located in your on-premise data center or maybe your desktop computer, but for the fastest data loading performance Oracle Autonomous Data Warehouse also supports loading from files stored in Oracle Cloud Infrastructure Object Storage and other cloud file stores. Details can be found here: https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/user/load-data.html.

Configuring Your Spatial Data
Routine Spatial data configuration is performed using Oracle SQL Developer GUIs or SQL commands for:
Insertion of Spatial metadata
Creation of a Spatial index
Validation of Spatial data
A sketch of these configuration steps appears later in this post.

Example Use Case
The Spatial features enabled for Oracle Autonomous Data Warehouse support the most common use cases in data warehouse contexts. Organizations such as insurance, finance, and public safety require data warehouses to perform a wide variety of analytics. These data warehouses provide the clues to answer questions such as: What are the major risk factors for a potential new insurance policy? What are the patterns associated with fraudulent bank transactions? What are the predictors of various types of crimes? In all of these data warehouse scenarios, location is an important factor, and the Spatial features of Oracle Autonomous Data Warehouse enable building and analyzing the dimensions of geographic data.
Using the insurance scenario as an example, the major steps for location analysis are:

- Load historical geocoded policy data, including outcomes such as claims and fraud
- Load geospatial reference data for proximity, such as businesses and transportation features
- Use Spatial to calculate location-based metrics

For example, let's find the number of restaurants within 5 miles of each policy, and the distance to the nearest restaurant:

-- Count within distance
-- Use a SQL statement with SDO_WITHIN_DISTANCE
-- and DML to build the result data
SELECT policy_id, count(*) AS no_restaurant_5_mi
FROM policies, businesses
WHERE businesses.type = 'RESTAURANT'
AND SDO_WITHIN_DISTANCE(
        businesses.geometry,
        policies.geometry,
        'distance=5 UNIT=mile') = 'TRUE'
GROUP BY policy_id;

POLICY_ID  NO_RESTAURANT_5_MI
81902842   5
86469385   1
36378345   3
36323540   3
36225484   2
40830185   5
40692826   1
...

Next, we can use the SDO_NN function to find the distance from each policy location to the nearest restaurant. Something like the following:

-- Distance to nearest
-- The SDO_NN function does not perform an implicit join,
-- so use PL/SQL with dynamic SQL to build the result data
DECLARE
  distance_mi NUMBER;
BEGIN
  FOR item IN (SELECT * FROM policies)
  LOOP
    EXECUTE IMMEDIATE
      'SELECT sdo_nn_distance(1) FROM businesses '||
      'WHERE businesses.type = ''RESTAURANT'' '||
      'AND SDO_NN(businesses.geometry, :1, '||
      '''sdo_batch_size=10 unit=mile'', 1) = ''TRUE'' '||
      'AND ROWNUM = 1'
      INTO distance_mi USING item.geometry;
    DBMS_OUTPUT.PUT_LINE(item.policy_id||' '||distance_mi);
  END LOOP;
END;
/

POLICY_ID  RESTAURANT_MI
81902842   4.100
86469385   1.839
36378345   4.674
36323540   3.092
36225484   1.376
40830185   2.237
40692826   4.272
44904642   2.216
...

Generate the desired spectrum of location-based metrics by stepping through combinations of proximity targets (e.g., restaurants, convenience stores, schools, hospitals, police stations, ...) and distances (e.g., 0.25 mi, 0.5 mi, 1 mi, 3 mi, 5 mi, ...). Combine these location-based metrics with traditional metrics (e.g., value of property, age of policy holder, household income, ...) for analytics to identify predictors of outcomes.

To enable geographic aggregation, start with a geographic hierarchy with geometry at the most detailed level. For example, a geographic hierarchy where ZONE rolls up to SUB_REGION, which rolls up to REGION:

DESCRIBE geo_hierarchy
Name          Type
------------  -------------
ZONE          VARCHAR2(30)
GEOMETRY      SDO_GEOMETRY
SUB_REGION    VARCHAR2(30)
REGION        VARCHAR2(30)

Use Spatial to calculate containment (things found within a region) at the detailed level, which by extension associates the location with all levels of the geo-hierarchy for aggregations:

-- Calculate containment
--
-- The SDO_ANYINTERACT function performs an implicit join,
-- so use a SQL statement with DML to build the result data
--
SELECT policy_id, zone
FROM policies, geo_hierarchy
WHERE SDO_ANYINTERACT(policies.geometry, geo_hierarchy.geometry) = 'TRUE';

POLICY_ID  ZONE
81902842   A23
86469385   A21
36378345   A23
36323540   A23
36225484   B22
40830185   C05
40692826   C10
44904642   B16
...

With these and similar operations, analytics may be performed, including the calculation of additional location-based metrics and aggregation by geography.
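As a sketch of that final aggregation step, the containment join above can be extended to roll policy counts up to any level of the geo-hierarchy. This builds on the same illustrative POLICIES and GEO_HIERARCHY tables used earlier; REGION could equally be swapped for SUB_REGION or ZONE:

-- Aggregate policies by geography using the containment join
SELECT gh.region, COUNT(*) AS policy_count
FROM policies p, geo_hierarchy gh
WHERE SDO_ANYINTERACT(p.geometry, gh.geometry) = 'TRUE'
GROUP BY gh.region
ORDER BY policy_count DESC;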
Summary

For important best practices and further details on the use of these and many other Spatial operations, please refer to the Oracle Autonomous Data Warehouse documentation: https://www.oracle.com/database/autonomous-database.html.
