X

A blog about Oracle's Database Cloud Service Technology

Connecting Azure Web Apps to Oracle Autonomous Database

Interest in using Oracle Autonomous Database (ADB) is high. In fact, many developers ask how to connect their Azure Web Apps to ADB. It’s straightforward when using Oracle Data Provider for .NET (ODP.NET). I will show you how to configure and connect an ASP.NET web application hosted on Azure Web Apps to Oracle ADB in the Oracle Cloud. It’s actually quite simple! If you’re already familiar with deploying to Azure Web Apps, there is no special configuration needed for ODP.NET with ADB versus any other app. All you need is a dedicated (non-shared) web environment, which means the Azure App Service Basic Plan or higher. You will also need to configure the web server to load a user profile. That’s it! To help you get started, below is a step-by-step guide for deploying a sample on-premises ADB web app to Azure.

Developing an ASP.NET App for ADB (On-Premises)

Let’s first build a basic ASP.NET web application in Visual Studio that connects to ADB. I’ll be using Visual Studio 2017, but the steps in other Visual Studio versions should be similar. We will use managed ODP.NET for data access between the application and the database, but ODP.NET Core works with these instructions as well.

1. In Visual Studio, create a new ASP.NET Web Application (.NET Framework) → Empty Template project.
2. Copy the following three files from the Oracle .NET GitHub sample code site into your web project root directory:
   autonomous-managed-odp.aspx
   autonomous-managed-odp.aspx.cs
   autonomous-managed-odp.aspx.designer.cs
   These files constitute a web page that connects to ADB and returns the database version it is using, to demonstrate basic connectivity. Add these files to your project as “Existing items”.
3. Using NuGet, import the managed ODP.NET assembly (Oracle.ManagedDataAccess). I recommend using version 18.6 or higher.
4. In the autonomous-managed-odp.aspx.cs file, modify the User Id and Password credentials in the connection string specific to your ADB instance.
We will first test connectivity locally before deploying the web app to Azure. On the next connection string line, modify the following settings for your ADB instance:

   Port (e.g. 1521)
   Host name or IP (e.g. hostname.oraclecloud.com)
   Service name (e.g. servicename.adb.oracle.com)
   Wallet directory (e.g. d:\wallets)

You will find these entries in the tnsnames.ora file from the downloaded ADB client credentials zip file. The wallet directory must be set to the local machine directory where your ADB wallet file will reside. For this tutorial, we will copy the file to the web project’s root directory. You can thus set “MY_WALLET_DIRECTORY = .” (i.e. the website’s current working directory), which will be where the wallet is located when running this website both on-premises and on Azure.

Add the wallet file, cwallet.sso, from the credentials zip file to the web project root directory, in line with the MY_WALLET_DIRECTORY setting. When you create a Web Deploy package of this project later for Azure deployment, the wallet file will automatically be included. If it’s not already set, configure your local IIS attribute Load User Profile to "true" so that you can use the local wallet.

Set autonomous-managed-odp.aspx as the start page and run your web application. You should see something similar to the following: Congratulations! You are connected to Oracle Autonomous Database.

Deploying an ASP.NET App for ADB to Azure Web Apps

Now, let’s deploy to Azure Web Apps and connect to the same ADB instance. We will package up the application, deploy it to your Azure Web Apps account, and then connect to ADB from Azure. Using the existing web project you’ve just tested, create a new Azure App Service profile for publishing your app. Choose an Azure App Service Basic Plan level or higher. Publish the application to Azure via Visual Studio to start the service. You will see a web page that indicates your App Service is up and running.
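The four settings above feed the connect descriptor portion of the connection string. As a hedged illustration (the host, service and wallet values are placeholders, not real endpoints), here is a small Python sketch that assembles such a descriptor from those pieces:

```python
def adb_connect_descriptor(host, port, service, wallet_dir):
    """Assemble an ADB-style TNS connect descriptor.

    Values are placeholders; the real entries come from tnsnames.ora in the
    credentials zip. MY_WALLET_DIRECTORY points at the wallet folder
    ('.' means the website's current working directory).
    """
    return (
        "(description="
        "(address=(protocol=tcps)(port={port})(host={host}))"
        "(connect_data=(service_name={service}))"
        "(security=(my_wallet_directory={wallet})))"
    ).format(port=port, host=host, service=service, wallet=wallet_dir)

descriptor = adb_connect_descriptor(
    "hostname.oraclecloud.com", 1521, "servicename.adb.oracle.com", ".")
```

Setting the wallet directory to "." is what lets the same descriptor work unchanged both locally and on Azure, since the wallet travels with the site.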
We will now configure the App Service to enable ADB connectivity. Open a new browser tab, connect to the Azure Portal, and navigate to the just-deployed website’s management dashboard.

Under Settings, click the Configuration link, then click the Application settings tab. Add a new application setting, WEBSITE_LOAD_USER_PROFILE, and set it to 1. This setting enables you to use the Oracle wallet on Azure.

Click the Default documents tab and add autonomous-managed-odp.aspx as the default page to load for your website. Be sure to also delete hostingstart.html from the list to prevent Azure from loading this document by default. Save these settings before you leave the configuration page so that they take effect for your Azure website.

Setup is now complete. Refresh the browser tab connected to your web app to verify that your Azure Web App is now connected to ADB. Congratulations! You have just deployed an Azure ODP.NET web app for Oracle Autonomous Database. I told you it was simple.
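The Azure-side configuration boils down to a tiny checklist: one application setting plus the default-document swap. A minimal sketch encoding that checklist (setting and document names as used above; the validation function itself is purely illustrative):

```python
# The one app setting the wallet requires, and the page that should load first.
REQUIRED_APP_SETTINGS = {"WEBSITE_LOAD_USER_PROFILE": "1"}
DEFAULT_DOCUMENT = "autonomous-managed-odp.aspx"

def config_ok(app_settings, default_documents):
    """True when the user profile is loaded, the sample page is a default
    document, and Azure's placeholder page has been removed."""
    settings_ok = all(app_settings.get(k) == v
                      for k, v in REQUIRED_APP_SETTINGS.items())
    docs_ok = (DEFAULT_DOCUMENT in default_documents
               and "hostingstart.html" not in default_documents)
    return settings_ok and docs_ok
```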


Building Microservices on Oracle Autonomous Transaction Processing Service

Containers allow us to package apps along with all their dependencies and provide a lightweight runtime environment with isolation similar to a virtual machine, but without the added overhead of a full-fledged operating system. The topic of containers and microservices is a subject on its own, but suffice to say that breaking up large, complex software into more manageable pieces that run isolated from each other has many advantages. It’s easier to deploy and diagnose, and it provides better availability, since the failure of any one microservice limits downtime to a portion of the application.

It’s important to have a similar strategy in the backend for the database tier. But if you run a separate database for each application, you end up having to maintain a large fleet of databases, and the maintenance costs can go through the roof. Add to that having to manage security and availability for all of them. This is where Oracle’s autonomous database cloud service comes in. It is based on a pluggable architecture, similar to application containers, where one container database holds multiple pluggable databases. Each of these pluggable databases, or PDBs, is completely isolated from the others, can be deployed quickly, and can be managed as a whole, so you incur the cost of managing a single database while deploying multiple microservices onto these PDBs.

The Autonomous cloud service takes it a step further. It is self-managing, self-securing and highly available. There is no customer involvement in backing it up, patching it or, for the most part, even tuning it. You simply provision, connect and run your apps. Oracle even provides a 99.995% SLA. That is a maximum of 2 minutes, 11.5 seconds of downtime per month.

Note there are two Dockerfiles in the repository. That’s because we have two different applications, ATPnodeapp and aOne. Both are node.js applications which serve as microservices in our case.
ATPnodeapp simply makes a connection to the ATP database and does not require any schema setup. aOne, on the other hand, is a sample marketplace application and requires schema and seed data to be deployed in the backend. If you plan to use that app, you will need to first run the create_schema.sql script on the database. By the way, the app is called aOne since it’s built on the AngularJS, Oracle, Node and Express stack … yeah baby!

Let’s provision an ATP instance to start with. Log into your Oracle Cloud Infrastructure account. Once logged in, from the top left hamburger menu, select Autonomous Transaction Processing. Oracle Cloud Infrastructure allows logical isolation of users within a tenant through Compartments. This allows multiple users and business units to share a tenant account while being isolated from each other.

Once you have chosen the compartment assigned to you, click to create your ATP instance. On the Create ATP form, enter a display name, DB name and an admin password. You can also choose an instance shape, specified by the CPU count and storage size. The default is 1 CPU core and 1 TB of storage, which is also the minimum. Hit the Create Autonomous Transaction Processing button at the bottom and your database will be provisioned in a few minutes. That is all it takes. Of course, if you do not like logging on to web consoles, you can use the REST API to programmatically deploy ATP instances along with your application stack.

Now, if you wish to deploy the aOne app, you need to connect to your database using a SQL client and run the create_schema script in the default admin schema, or create a suitable user schema for the application: https://github.com/kbhanush/ATPDocker/blob/master/aone/create_schema.sql

You will also need to download the secure connectivity credentials file from the database admin console. Unzip and store the wallet folder in the same folder as your application under /wallet_NODEAPPDB2.
This folder is copied into your container image when you run the Dockerfile. Alternatively, you can remove that command from your Dockerfile and manually copy the wallet into the image when you run your container; if you build and store your Docker image in a public Docker repository, you certainly don’t want your credential files in there.

In your wallet folder, edit sqlnet.ora and replace the contents of that file with the following text:

WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY=$TNS_ADMIN)))

This tells the driver to look for the wallet in the path set up in the variable TNS_ADMIN.

Assuming you have downloaded the Dockerfile and database wallet and created the backend schema, you can now build your Docker image. Ensure you are in a folder that contains the wallet folder /wallet_NODEAPPDB2 when you run this command:

$ docker build -t aone .

The -t option gives your image the tag ‘aone’. Don’t forget to include the period at the end; it means ‘use the Dockerfile in the current folder’. You can also specify your Dockerfile explicitly:

$ docker build -t atpnodeapp -f Dockerfile2 .

[Note: this will build another app, ATPnodeapp, in the image]

If all goes well, your image should be ready in less than a minute. Note that the entire image is about 400 MB. Also note that Docker creates multiple image files as it builds each layer. Your final image will show at the top of the list and will have the tag you chose.
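As an aside, the sqlnet.ora edit above is mechanical enough to script if you prepare many wallets. A small sketch (file handling left to you) that swaps in the $TNS_ADMIN-based wallet location; it assumes the WALLET_LOCATION directive fits on one line, as it does in the downloaded wallet:

```python
WALLET_LINE = ("WALLET_LOCATION = (SOURCE = (METHOD = file) "
               "(METHOD_DATA = (DIRECTORY=$TNS_ADMIN)))")

def rewrite_sqlnet(text):
    """Replace any single-line WALLET_LOCATION directive with the
    $TNS_ADMIN form, leaving other sqlnet.ora directives untouched."""
    kept = [line for line in text.splitlines()
            if not line.strip().startswith("WALLET_LOCATION")]
    return "\n".join(kept + [WALLET_LINE])
```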
$ docker images -a

REPOSITORY              TAG                 IMAGE ID            CREATED             SIZE
aone                    latest              b8db0c7d015e        10 days ago         408MB
<none>                  <none>              44af88220a69        10 days ago         408MB
<none>                  <none>              ed6b184684cb        10 days ago         408MB
<none>                  <none>              d192d0f45d3f        10 days ago         402MB
<none>                  <none>              c9fe8df71a18        10 days ago         402MB
<none>                  <none>              04b8fcac90ea        10 days ago         402MB
<none>                  <none>              251c11bebdb1        10 days ago         402MB
<none>                  <none>              5801d93521ba        10 days ago         402MB
<none>                  <none>              86f9bbbd6de2        10 days ago         402MB

Finally, launch your Docker image; you will need to change the user, password and connect string in dbconfig.js to match your database.

$ docker run aone

Once inside the image:

# cd /opt/oracle/lib/ATPDocker/aone/scripts
# cat dbconfig.js
module.exports = {
  user: "nodeuser",
  password: "password",
  connectString: "nodeappdb2_high"
}

Replace the user, password and connectString to match the database and user you just created. To run your node.js app:

$ node server.js &

Node responds with this message: aOne listening on port 3050

Now, to check out the app in a browser, you will need to bridge port 3050 on the container to your local host. Exit Node.js (make sure you kill the process), then shut down and exit the container.
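Since dbconfig.js is the only file you edit inside the image, you can also generate it up front and copy it in at build time. A sketch emitting the same module shape (credentials here are placeholders):

```python
def render_dbconfig(user, password, connect_string):
    """Emit a dbconfig.js module body matching the sample app's shape.
    Arguments are placeholders for your real database credentials."""
    return (
        "module.exports = {\n"
        '  user: "%s",\n'
        '  password: "%s",\n'
        '  connectString: "%s"\n'
        "}\n" % (user, password, connect_string)
    )
```

Generating the file instead of editing it by hand keeps credentials out of the image layers if you write it at container start rather than at build time.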
Re-run the container with the following Docker command:

$ docker run -i -p 3050:3050 -t <tagname>

Open a browser on your local host and go to http://localhost:3050. This is what you see if your app ran successfully.

You just built and provisioned an entire application stack consisting of a microservice and an enterprise-grade, self-managing database. You can push your Docker image to a public/private Docker repository, and it can be deployed to any container orchestration service, either on-prem or with any cloud provider. Your database is autonomous: it provisions quickly, and it backs up, patches and tunes itself as needed.
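As mentioned earlier, ATP instances can also be provisioned programmatically through the REST API rather than the web console. As a hedged sketch only, here is what such a create request body might look like as a plain dict; the field names mirror Oracle's CreateAutonomousDatabaseDetails model, but treat them as assumptions and verify against the current OCI API reference before use:

```python
def make_atp_payload(compartment_id, db_name, display_name,
                     admin_password, cpu_cores=1, storage_tbs=1):
    """Build an illustrative create-ATP request body.

    Field names follow the OCI CreateAutonomousDatabaseDetails model
    (an assumption to verify against the API docs). Defaults match the
    console minimums from the post: 1 CPU core, 1 TB of storage.
    """
    return {
        "compartmentId": compartment_id,
        "dbName": db_name,
        "displayName": display_name,
        "adminPassword": admin_password,
        "cpuCoreCount": cpu_cores,
        "dataStorageSizeInTBs": storage_tbs,
    }
```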


Provision an Oracle Autonomous Transaction Processing database service

Oracle's newest autonomous database services take away the mystery and overhead of deploying and managing high performance databases. On August 7th, Oracle announced the Autonomous Transaction Processing service, comparing it to a self-driving car. For a database service this means easy provisioning; self-management in terms of backups, patching, availability and performance; and zero-downtime scaling. Let's take a quick look at deploying an ATP instance and connecting to it via SQL Developer.

Log in to your Oracle Cloud account at https://cloud.oracle.com using your tenant, username and password. Once logged in, select Autonomous Transaction Processing from the top left hamburger menu. Oracle Cloud Infrastructure allows logical isolation of users within a tenant through Compartments. This allows multiple users and business units to share a tenant account while being isolated from each other. Select a compartment you wish to provision an ATP database instance in, or create one. Here I've picked a pre-provisioned compartment called DBPM. Needless to say, you then hit the Create Autonomous Transaction Processing button. Quite a mouthful; I just like to call it ATP.

On the Create ATP page, you need some basic info for provisioning: a name, the size of your instance in cores and storage, and an admin password. Scroll down the form and you will see a couple of licensing options. With the first, Oracle allows you to bring your unused on-prem licenses to the cloud and your instances are billed at a discounted rate; this is the default option, so ensure you have the right license type for this subscription. With the second, your cloud service instance cost includes the database license; this is an all-inclusive cost and you do not need to bring any additional licenses to the cloud.

Finally, you can create tags for your instances to group them together for easier search and management, e.g. Dev_DB, Tier1_appDB or Fin_DB. Hit the Create Autonomous Transaction Processing button at the bottom and your database will be provisioned in a few minutes.
Let's now connect to this database. Select the ATP instance you just provisioned, click the Service Console button, and provide the username (admin) and password you chose at the time of provisioning. From the Admin panel, select ‘Download Client Credentials’. Provide a keystore password and save the file to your local machine. You will need this file and the keystore password to connect to the database later. The credentials zip file contains the encryption wallet, Java keystore and other relevant files needed to make a secure TLS 1.2 connection to your database from client applications. Store this file in a secure location.

Next, we will connect to the nodeAppDB database using Oracle SQL Developer. Launch SQL Developer and select Add Connection at the top left. Enter a connection name, the username (admin) and the password provided at the time of provisioning. Select the connection type ‘Cloud PDB’. The configuration file is the connection wallet downloaded from the ATP console. Enter the keystore password provided at the time of the wallet download from the admin console. Finally, select a service name from the drop-down. The service name is the database name followed by the suffix low, medium or high. These suffixes determine the degree of parallelism used and are relevant for a DSS workload. For OLTP workloads it's safe to select any of them. We select nodeappdb_medium. Test your connection and save. You now have a secure connection to your cloud database. As simple as that.
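The three service names follow a fixed pattern, the database name plus a suffix, so you can derive them rather than remember them. A small sketch:

```python
def service_names(db_name):
    """Return the low/medium/high service names ADB exposes for a database,
    e.g. 'nodeappdb' -> nodeappdb_low, nodeappdb_medium, nodeappdb_high."""
    return ["%s_%s" % (db_name.lower(), level)
            for level in ("low", "medium", "high")]
```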


Cross region replication in Oracle Database Cloud Service using DataGuard

Oracle Database Cloud Service (DBCS) provides a Hybrid DataGuard option in the cloud tooling that can be creatively used to deploy a DataGuard-based HA environment between any two Oracle database instances. This applies to:
1. Primary database instances on-premises that need a standby in the Oracle Cloud
2. Primary database instances in AWS EC2 or Azure compute with standby in Oracle Cloud
3. Primary database instances in the Oracle Cloud (compute or PaaS Database service) with standby in the same region or across regions

Basically, anywhere. If you have an Oracle 11.2 or 12.1 database instance anywhere and you want to configure a DataGuard standby, you can use the cloud tooling to achieve this as long as there is network connectivity between the primary and standby. This tutorial covers Oracle cloud to Oracle cloud replication, but the steps can be applied to any of the configurations listed above. Just make sure your primary DB meets the required checks so a successful backup can be taken to the Oracle storage cloud using the Hybrid DataGuard tooling that you download to the primary host. Here we will deploy a fresh 12.1 primary and then use cloud tooling to deploy a DataGuard standby.

Here are the high-level steps to follow:
1. Deploy a 12.1 Enterprise Edition database using Oracle DBCS tooling (skip this step if you are creating a DataGuard standby for an existing instance)
2. Check for hybrid DG readiness on your primary
3. Take an RMAN backup of your primary DB using command line tooling to Oracle Storage Service
4. Deploy standby through DBCS UI using the Hybrid DataGuard option

Here are the steps in detail with screenshots.

1. Deploy a 12.1 Enterprise Edition database using Oracle DBCS tooling

Log in to your Oracle Database Cloud Service account and click 'Create Service' on the cloud console. On the next screen, fill in the details of your primary database setup. Note I've chosen to deploy in region uscom-central-1 in a pre-created IP Network.
Make sure you pick an OCI-C region. This configuration has not been tested in OCI regions such as Ashburn, Phoenix, Slough or Amsterdam at the time of this writing.

Hit next and enter the remaining details. Note I've selected the backup location as 'None' since we will do a backup by downloading a setup script later on. Note down the Instance Name on step 1 and the DB Name on step 2, as you will need them later. Hit next, validate the configuration and hit Create.

In about 20 minutes your 12.1 primary database instance should be ready. Make a note of its public IP address.

A few things to remember before you move on to configure the primary and take a backup to OSS. You need to make sure ports 22 (ssh) and 1521 (SQL*Net) allow ingress traffic from your database standby host. This means you need to set up an access rule to allow traffic on 1521 from a particular IP (your DataGuard standby); port 22 is open by default for ssh traffic. First make an IP reservation for setting up the access rule and make a note of it. You will need it later while configuring your primary for the Hybrid DG backup and for provisioning the standby. Now create an access rule using the reserved IP.

2.
Check for hybrid DG readiness on your primary

Note: Many of these checks may be unnecessary on an Oracle cloud instance, but they are mentioned here in case your primary DB is not an Oracle DBCS instance.

ssh into your primary host as user opc:

$ ssh -i <keyfile> opc@ipaddress

Check that user oracle belongs to the right groups and has DBA privileges:

[opc@hdg-primary121 ~]$ id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba)
[opc@hdg-primary121 ~]$

Check that flashback and force logging are turned on in the database:

[opc@hdg-primary121 ~]$ sudo su -
[root@hdg-primary121 ~]# su - oracle
[oracle@hdg-primary121 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jul 5 17:04:30 2018

Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP, Advanced Analytics and Real Application Testing options

SQL> select flashback_on, force_logging from v$database;

FLASHBACK_ON       FORCE_LOGGING
------------------ ---------------------------------------
YES                YES

SQL>

Ensure the DB is in archivelog mode:

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     3
Next log sequence to archive   5
Current log sequence           5
SQL>

Check that listener.ora is configured for static service registration:

[oracle@hdg-primary121 ~]$ cat /u01/app/oracle/product/12.1.0/dbhome_1/network/admin/listener.ora
# listener.ora Network Configuration File: /u01/app/oracle/product/12.1.0/dbhome_1/network/admin/listener.ora
# Generated by Oracle configuration tools.

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = hdg-primary121.compute-5923450477.oraclecloud.internal)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

VALID_NODE_CHECKING_REGISTRATION_LISTENER=ON
SSL_VERSION = 1.2
[oracle@hdg-primary121 ~]$

Edit sqlnet.ora to ensure all sqlnet traffic is encrypted.
This is what my sqlnet.ora looks like after the changes:

SQLNET.ENCRYPTION_SERVER = requested
SQLNET.ENCRYPTION_CLIENT = requested
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA1)
SQLNET.CRYPTO_CHECKSUM_SERVER = required
ENCRYPTION_WALLET_LOCATION = (SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/u01/app/oracle/admin/ORCL/tde_wallet)))
SQLNET.ENCRYPTION_TYPES_SERVER = (RC4_256, AES256)
SQLNET.ENCRYPTION_TYPES_CLIENT = (RC4_256, AES256)
NAMES.DIRECTORY_PATH = (TNSNAMES, EZCONNECT)
SQLNET.EXPIRE_TIME = 10
SQLNET.WALLET_OVERRIDE = FALSE
ADR_BASE = /u01/app/oracle
WALLET_LOCATION = (SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/u01/app/oracle/admin/ORCL/db_wallet)))
SSL_VERSION = 1.2

Reload the listener for the changes to take effect:

[oracle@hdg-primary121 ~]$ lsnrctl reload

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 05-JUL-2018 17:22:56

Copyright (c) 1991, 2014, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=hdg-primary121.compute-596600477.oraclecloud.internal)(PORT=1521)))
The command completed successfully
[oracle@hdg-primary121 ~]$

Ensure you have the right Oracle RDBMS rpm installed; in this case it's 12.1:

[oracle@hdg-primary121 ~]$ rpm -qa|grep oracle-rdbms-server
oracle-rdbms-server-12cR1-preinstall-1.0-14.el6.x86_64
[oracle@hdg-primary121 ~]$

Ensure the netcat rpm is installed (nc-1.84-24.el6.x86_64 or higher):

[oracle@hdg-primary121 ~]$ rpm -qa|grep nc
nc-1.84-24.el6.x86_64

Edit your /etc/hosts file to add the standby host IP. This is the NAT IP you reserved on the database console earlier. This is what my /etc/hosts file looks like after the edits.
Note that I've picked 'hdg-standby121' as my standby hostname.

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

12.12.0.3 hdg-primary121.compute-596600477.oraclecloud.internal hdg-primary121
129.23.43.134 hdg-standby121 hdg-standby121

Ensure all TCP socket size maximum kernel parameters are set to 10 MB (10485760) for optimal transport performance:

[root@hdg-primary121 ~]# sysctl -w net.core.rmem_max=10485760
net.core.rmem_max = 10485760
[root@hdg-primary121 ~]# sysctl -w net.core.wmem_max=10485760
net.core.wmem_max = 10485760
[root@hdg-primary121 ~]#

Ensure your python, perl and Java versions are 2.6, 5.1 and 1.8 or higher respectively:

[root@hdg-primary121 ~]# python --version
Python 2.6.6
[root@hdg-primary121 ~]# perl -version

This is perl, v5.10.1 (*) built for x86_64-linux-thread-multi

[root@hdg-primary121 ~]# java -version
java version "1.8.0_121"
Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
[root@hdg-primary121 ~]#

Check your dbaastools version.
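As a quick aside, the python/perl/java minimums above reduce to a numeric tuple comparison on the version strings the tools print (e.g. Java's "1.8.0_121"). A small sketch:

```python
import re

def meets_minimum(version, minimum):
    """Compare dotted/underscored version strings numerically,
    e.g. meets_minimum('1.8.0_121', '1.8') -> True."""
    def parse(v):
        return tuple(int(x) for x in re.split(r"[._]", v) if x.isdigit())
    return parse(version) >= parse(minimum)
```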
[root@hdg-primary121 ~]# rpm -qa | grep dbaastools
dbaastools-1.0-1+18.2.3.0.0_180413.0807.x86_64
[root@hdg-primary121 ~]#

At the time of this writing, 18.2.3 is the latest release of dbaastools. If your version isn't the latest release, download the rpm and upgrade to the latest dbaastools as shown below:

[root@hdg-primary121 ~]# wget https://storage.us2.oraclecloud.com/v1/dbcsswlibp-usoracle29538/hdg/18.2.3/OracleCloud_HybridDR_Setup.zip
--2018-07-05 18:13:15--  https://storage.us2.oraclecloud.com/v1/dbcsswlibp-usoracle29538/hdg/18.2.3/OracleCloud_HybridDR_Setup.zip
Resolving storage.us2.oraclecloud.com... 129.152.172.3, 129.152.172.4
Connecting to storage.us2.oraclecloud.com|129.152.172.3|:443... connected.
HTTP request sent, awaiting response...
200 OK
Length: 138247232 (132M) [application/zip]
Saving to: “OracleCloud_HybridDR_Setup.zip”

100%[=================================================================================================================>] 138,247,232 71.9M/s   in 1.8s

2018-07-05 18:13:18 (71.9 MB/s) - “OracleCloud_HybridDR_Setup.zip” saved [138247232/138247232]

[root@hdg-primary121 ~]#

Unzip and install both rpms:

[root@hdg-primary121 ~]# unzip OracleCloud_HybridDR_Setup.zip
Archive:  OracleCloud_HybridDR_Setup.zip
  inflating: dbaastools.rpm
  inflating: perl-JSON-2.15-5.el6.noarch.rpm
  inflating: README
[root@hdg-primary121 ~]# rpm -Uvh perl-JSON-2.15-5.el6.noarch.rpm
Preparing...                ########################################### [100%]
    package perl-JSON-2.15-5.el6.noarch is already installed
[root@hdg-primary121 ~]# rpm -Uvh dbaastools.rpm
Preparing...                ########################################### [100%]
    package dbaastools-1.0-1+18.2.3.0.0_180413.0807.x86_64 is already installed
[root@hdg-primary121 ~]#

With all checks complete, you are now ready to send a backup of your primary database to the Oracle storage cloud. But first, you need to create a storage cloud container. Log on to your storage cloud service and create a container as shown below. Note down the name and URL of the container.

3. Take an RMAN backup of your primary DB using command line tooling to Oracle Storage Service

The tooling required to take a backup is available in /var/opt/oracle/hdg. A word of caution: if you have previously made a failed attempt to take a hybrid DG backup on the source instance, do the following before you move on:
1. Delete the folder db_wallet and the file hdgonpreminfo* from /var/opt/oracle/hdg
2.
Remove all objects from the backup OSS container

OK, we are now ready to send a backup to OSS. Let's configure /var/opt/oracle/hdg/setupdg.cfg with the required parameters before firing the Hybrid DG utility. This is what my setupdg.cfg looks like. cloud_ipaddr is the reserved IP of the standby. The encryption wallet needs to be autologin, hence no password. sys_passwd is what you will provide as the admin password while provisioning the standby. The OSS password is the password of your OSS account, typically your cloud account.

[dg]
cloud_ipaddr=129.153.17.81
oss_url=https://oradbclouducm.us.storage.oraclecloud.com/v1/Storage-oradbclouducm/hdg-primary121-backup
onprem_ipaddr=129.107.126.21
tde=Y
dbname=ORCL
wallet_passwd=
sys_passwd=abcd1234#
oss_passwd=abcd1234#
oss_user=xxxxxxxx@acme.com
firewall_acl=Y
cloud_shost=hdg-standby121

Save setupdg.cfg and fire off the Hybrid DG utility.
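setupdg.cfg is plain INI, so you can sanity-check it before firing the utility. A sketch using Python's configparser (key names as in the file above; which keys your tooling version actually requires is an assumption to verify):

```python
import configparser

# Keys the example cfg populates; wallet_passwd is intentionally excluded
# because an autologin wallet leaves it empty.
REQUIRED_KEYS = ("cloud_ipaddr", "oss_url", "onprem_ipaddr",
                 "dbname", "sys_passwd", "oss_passwd", "oss_user")

def missing_dg_keys(cfg_text):
    """Return required [dg] keys that are absent or empty."""
    cfg = configparser.ConfigParser(strict=False)  # tolerate duplicate keys
    cfg.read_string(cfg_text)
    section = cfg["dg"] if cfg.has_section("dg") else {}
    return [k for k in REQUIRED_KEYS if not section.get(k)]
```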
[oracle@hdg-primary121 ~]$ cd /var/opt/oracle/hdg/
[oracle@hdg-primary121 hdg]$ ./setupdg.py -b

#################### DG-readiness check ####################

OK: On-premises Firewall ACLs configured
OK: Will use TDE to encrypt the standby database in cloud environment
OK: sys & system passwords are identical
OK: supported java version
OK: supported db
OK: supported database edition for Data Guard
OK: supported os
WARNING: APEX is not installed; Recommended minimum apex version : 5.0.0.00.31
OK: socket_size ok
OK: Data Vault is not enabled
OK: Flashback mode of the database is enabled
OK: The log mode is Archive Mode and the archive destination is /u03/app/oracle/fast_recovery_...
OK: Database ORCL is not part of an existing DG configuration
OK: DBID of ORCL database is 1508430918
OK: Size of ORCL database
    data(MB) |  temp(MB) |  redo(MB) |  archive(MB) |  control(MB) |  total(MB)
    --------------------------------------------------------------------------
    4075         317        3072           79             35           7578
OK: sqlnet.ora has the following information
    SQLNET.ENCRYPTION_SERVER = requested
    SQLNET.ENCRYPTION_TYPES_SERVER = (RC4_256, AES256)
    SQLNET.ENCRYPTION_CLIENT = requested
    SQLNET.ENCRYPTION_TYPES_CLIENT = (RC4_256, AES256)
OK: Oracle Net Listener Configuration file listener.ora is present
OK: perl-JSON rpm is installed
OK: netcat (nc) rpm is installed
OK: /etc/hosts entries are verified fine
OK: Domain information available for hdg-primary121
    All checks passed. Database backup can be performed

100% of checks completed successfully
running opc installer
creating config file for rman backup
generating rman.bkup
initiating backup
backup in progress. Takes a while ...
database backup to oss complete.

New set of priv/pub ssh keypair is created for oracle user. The keypair is under /home/oracle/.ssh, please use this to access this VM. Backup of original /home/oracle/.ssh is saved as /home/oracle/.ssh.bak

Created hdgonpreminfo.tgz. Uploaded to OSS

############################## DG-readiness check completed ##############################

[oracle@hdg-primary121 hdg]$

Your primary-side configuration and backup are now complete. Next, we provision the Data Guard standby using the cloud UI.

4. Deploy the standby through the DBCS UI using the Hybrid Data Guard option

On the DBCS provisioning UI, enter your standby database details.
Ensure your Instance Name matches the hostname you picked in the primary's /etc/hosts, and that the Software Release and Edition match the primary.

On screen 2, ensure the DB Name, admin password, IP reservation, and Cloud Storage container URL match what you specified in setupdg.cfg on the primary.

On the final step, confirm everything looks OK and hit Create. Your Data Guard standby should be ready soon.
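Since provisioning expects the standby Instance Name to match the hostname you placed in the primary's /etc/hosts, a quick manual check on the primary can save a failed deployment. This is only a sketch; the hostnames below are the ones used in this walkthrough:

```shell
# Verify both the primary and standby hostnames appear in /etc/hosts:
for h in hdg-primary121 hdg-standby121; do
  if grep -q "$h" /etc/hosts; then
    echo "$h: present"
  else
    echo "$h: missing"
  fi
done
```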

Oracle Database Cloud Service (DBCS) provides a Hybrid Data Guard option in the cloud tooling that can be creatively used to deploy a Data Guard-based HA environment between any two Oracle database...

Promote Yourself in Exadata Express Using a Custom Domain Name for APEX Applications and REST Endpoints

Starting in March 2018, customers of Oracle Database Exadata Express Cloud Service can promote their brand by exposing Application Express (APEX) applications and REST endpoints using their own domain name and SSL/TLS certificate. This new "vanity URL" functionality enables application end users to see a familiar name in the address bar of their browser, and developers who use Oracle Database REST access to see a familiar name in HTTPS APIs. It also supports IP whitelists for improved security and privacy in front of APEX- and REST-based apps. Vanity URLs can optionally replace or complement the default oraclecloudapps.com domain and Oracle SSL certificate that are included in Exadata Express out of the box.

Exadata Express implements an innovative approach to providing vanity URLs. It integrates with other cloud services, including Oracle Cloud Infrastructure Load Balancing Classic (LBaaS) and the customer's own DNS service, stitching these technologies together with a straightforward UI that makes vanity URLs easy to implement. To learn more about vanity URLs in Exadata Express, take a look at this recent PM Office Hours session recording. The video demonstrates configuring and using vanity URLs with Oracle LBaaS Classic (note: skip ahead to minute 09:00).

------------

Exadata Express delivers a full Oracle Database experience as a managed cloud service running on Exadata and provisioned in minutes - all at an affordable entry-level price. It is ideal for production and non-production environments with medium-sized data and workloads, and it is packed with features for building and hosting modern data-driven applications. Exadata Express is a great way to get started building on top of Oracle Database running in Oracle Cloud. Learn more about Exadata Express at cloud.oracle.com/database, and sign up for your US$300 free credits today!


New Monthly PM Office Hours Sessions Deliver In-Person Access to Oracle's Exadata Express Service Team

If you are new to Oracle Database Exadata Express Cloud Service and want to ask a few questions of the Oracle experts, then attending an Exadata Express PM Office Hours session may be just what you need. These new online sessions with Oracle's Exadata Express service team recur monthly and are open for anyone to join (note: registration required). Each session includes a short presentation on a topic of interest related to Exadata Express or its predecessor Schema Service, followed by time for informal audience Q&A. Sessions are recorded and available for playback from the PM Office Hours homepage.

This is a great opportunity not only to get your questions answered but also to pick up best practices, request enhancements, learn about the service roadmap, hear from other customers, and provide general feedback on Exadata Express. Click below to read more about PM Office Hours, and please join us for the next session. We look forward to hearing from you!

------------

Exadata Express delivers a full Oracle Database experience as a managed cloud service running on Exadata and provisioned in minutes - all at an affordable entry-level price. It is ideal for production and non-production environments with medium-sized data and workloads, and it is packed with features for building and hosting modern data-driven applications. Exadata Express is a great way to get started building on top of Oracle Database running in Oracle Cloud. Learn more about Exadata Express at cloud.oracle.com/database, and sign up for your US$300 free credits today!


Independent Article on Exadata Express Highlights Its Benefits

ACE Director Jim Czuprynski’s recent OTN article about Exadata Express contains good insights about this database cloud service. It also provides interesting performance testing results from heavy analytic workloads that he ran on Exadata Express with the In-Memory Column Store enabled. Below are my favorite quotes and highlights from Jim. Head over to OTN to see the full article.

A Compelling Value Proposition
“…Exadata Express is intended to be the entry-level service from Oracle providing low prices, reasonable ceilings on the specs, and certain functionality disabled as you might expect in a fully managed offering. Despite what it's missing compared to the other Oracle services, I must say Exadata Express is a compelling proposition for the price.”

Responsive and Simple to Access
“…I found that the Exadata Express X50IM instance was essentially always available, responsive—or to use a technical term, ‘snappy’—and simple to access using just about every method I tried, including SQL*Plus, Oracle SQL Developer, and even my old standby third-party RDBMS monitoring tool…”

Easy to Connect Applications
“It's easy to connect an application to it. I only need to add a few entries into a host's SQLNET.ORA file and (if required) TNSNAMES.ORA…”

High-Speed Data Loading
“I loaded 25 GB worth of TPC-DS data over my meager home office internet bandwidth in under four hours—and that includes index creation and statistics regathering.”

Fast Analytic Queries
“…of the 14 TPC-DS queries that typically ran for more than 1,800 seconds [without In-Memory Column Store], only three still exhibited no improvement in execution time [when using In-Memory Column Store]. For those that did improve, the improvement was dramatic, including some whose performance improved by two or more orders of magnitude…”

Powerful and Hassle-Free
“Though I've mentioned some of the unexpected limitations of my Exadata Express X50IM instance, I found them to be simply minor annoyances when compared to the power and hassle-free configuration that Exadata Express provided for my extended testing.”


Larger Shapes and Hourly Credits Now Available in Exadata Express

Starting in October 2017, Oracle Database Exadata Express Cloud Service supports usage by the hour, purchasing via Oracle Cloud Universal Credits, and free credit-based trials. The service also adds new larger shapes to meet strong customer demand for increased processing power and expanded database storage.

Support for Universal Credits in Exadata Express gives customers maximum flexibility and durability in how they pay for and consume the service. Universal Credits customers can start or stop Exadata Express at any time, spending their credits only for hours when the service is actively in use. These flexible credits can be applied to Exadata Express, other Oracle Cloud services, and future Oracle Cloud services yet to be released. Using Exadata Express X20 for one day (8 hours) costs less than USD $3 of credits. An ongoing promotion gives prospective customers USD $300 of free credits so they can "try before you buy" with Exadata Express and other services.

Additionally, new larger Exadata Express shapes are now available in direct response to customer requests. As customer use cases for a fully managed cloud Oracle Database continue to expand and the data being managed continues to grow, customers want to execute ever more demanding database workloads in Exadata Express. The new larger shapes will deliver greater ability for Exadata Express customers to scale. They provide up to a maximum of 4 cores (bare metal Exadata compute), 40GB of RAM (Oracle Database SGA and PGA), and 1TB of Exadata storage. The shapes can be purchased by the hour using Universal Credits. New purchases also include a free bundled instance of Oracle Developer Cloud Service. A summary of current shapes and pricing is below.

Exadata Express delivers a full Oracle Database experience as a managed cloud service running on Exadata and provisioned in minutes - all at an affordable entry-level price. It is ideal for production and non-production environments with medium-sized data and workloads, and it is packed with features for building and hosting modern data-driven applications. Exadata Express is a great way to get started building on top of Oracle Database running in Oracle Cloud. Learn more about Exadata Express at cloud.oracle.com/database, and sign up for your US$300 free credits today.


Database Cloud Service

Refreshed Service Console Makes Exadata Express Cloud Service Even Easier to Use

Starting on Friday, August 18th, 2017, Oracle Database Exadata Express Cloud Service receives a major refresh to its service console user interface. This refresh divides the broad capabilities of Exadata Express into three simple navigation tabs: one for developers, one for service administrators, and one that provides a jumping-off point for anyone to get started quickly.

A new “Welcome Tour” video series available on the Home screen walks users through step-by-step instructions on how to use Exadata Express. A new dashboard displays the exact Application Express (APEX) and Oracle Database versions, while a new graphical meter shows the current database storage consumed. The console also incorporates new contextual documentation links and other usability improvements. In combination, these enhancements make the console easier to use than ever before. To see the new design for yourself, watch the Exadata Express service console in action in a new video playlist on the Oracle Learning Library YouTube channel, or simply sign in to your existing account and navigate to the console.

Exadata Express delivers a full Oracle Database experience as a managed cloud service running on Exadata and provisioned in minutes - all at an affordable entry-level price starting at US$175 per month. It is ideal for small to medium-sized data and packed with features for modern application development. It is a great way to get started with Oracle Database running in Oracle Cloud. Note that Exadata Express is available for purchase directly from Oracle Store. Visit the service homepage at cloud.oracle.com/database to learn more.


Installing a trusted SSL certificate on Oracle Database Cloud Service for APEX

As many of you know, Oracle REST Data Services (ORDS) front-ends APEX to provide http(s) connectivity to your APEX instance running inside an Oracle database. Here's how you would go about installing a signed SSL certificate on your ORDS instance. I am using Comodo as the CA here.

In an Oracle Database Cloud Service instance, the ORDS configuration is in /u01/app/oracle/product/ords/conf/ords/standalone by default. The Jetty configuration for certificates is held in 'standalone.properties'. If ORDS is started without a specific certificate and key, it generates its own self-signed certificate for 'localhost'. To replace this with a valid, trusted certificate, follow the steps below.

Requesting and Installing the Certificate

1) Generate a new RSA private key and a PKCS#10 CSR using the key:

$ sudo openssl req -new -newkey rsa:2048 -nodes -keyout comodokey.pem -out comodorequest.csr

Note that during this process you are asked for 'Common Name (eg, your name or your server's hostname) []:'. This should be a valid Fully Qualified Domain Name (FQDN) that you point to the IP address of your Oracle Cloud instance. Using the public IP directly will take much longer to validate and issue your certificate, and using a non-public name like 'localhost' or 'myoracle.local' will not work.

2) Take the contents of the CSR ('comodorequest.csr') and purchase a certificate with it on Comodo's website. You may get a 90-day free trial for test purposes - https://ssl.comodo.com/free-ssl-certificate.php?track=8177

$ sudo cat comodorequest.csr

3) Once you have received your signed certificate, extract two files to your server: 'your.fqdn.crt' and 'COMODORSADomainValidationSecureServerCA.crt'. These need to be concatenated into a single file:

$ sudo cat <your.fqdn.crt> COMODORSADomainValidationSecureServerCA.crt > comodocert.crt

4) Convert the PEM private key into a format Jetty uses (PKCS8, in DER format):

$ sudo openssl pkcs8 -topk8 -inform PEM -outform DER -in comodokey.pem -out comodokey.key -nocrypt

5) Ensure the permissions of all of the required files are correct:

$ sudo chmod 644 comodokey.key comodocert.crt

6) Edit the configuration file to use the new certificate and key:

$ sudo nano standalone.properties

Edit the following lines:

ssl.cert=/u01/app/oracle/product/ords/conf/ords/standalone/comodocert.crt
ssl.cert.key=/u01/app/oracle/product/ords/conf/ords/standalone/comodokey.key

7) Restart ORDS:

$ sudo /etc/init.d/ords restart

Your certificate is now installed and will function with no errors or warnings on: https://your.fqdn.here/
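Before restarting ORDS, a quick local sanity check can save a round of debugging. This is a sketch assuming the comodocert.crt and comodokey.key file names produced in steps 3 and 4; it confirms the certificate's subject and validity window, and that the converted key actually pairs with the certificate:

```shell
# Inspect the certificate's subject and validity window:
openssl x509 -in comodocert.crt -noout -subject -dates

# The certificate and the DER-format key must share the same public key;
# the two digests printed below should be identical:
openssl x509 -in comodocert.crt -noout -pubkey | openssl md5
openssl pkey -inform DER -in comodokey.key -pubout | openssl md5
```

If the digests differ, the key and certificate are mismatched and Jetty will fail to serve the site.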


Exadata Express Cloud Service Now Available in Europe

In late April 2017, Exadata Express Cloud Service expanded its global footprint by enabling deployment in Oracle's continental Europe data center located in The Netherlands. This European expansion complements existing Exadata Express availability in Oracle's central-USA data center. It also upgrades the service to Oracle Database 12.2.0.1 (GA release) and Application Express (APEX) 5.1.1. The upgraded version of APEX brings enhancements such as an all-new interactive grid control, an advanced data visualization engine based on Oracle JET, and more. A list of enhancements in APEX 5.1.1 is available here.

Exadata Express delivers a full Oracle Database experience as a managed cloud service running on Exadata and provisioned in minutes - all at an affordable entry-level price starting at US$175 / month. It is ideal for small to medium-sized data and packed with features for modern application development. Click below to watch a short video about how Exadata Express can help you get started with Oracle Databases running in Oracle Cloud. Note that Exadata Express is available for purchase directly from Oracle Store. Visit the service homepage at cloud.oracle.com/database to learn more.


Database Cloud Service

Oracle Database High Availability in the Cloud Era!

The Cloud doesn’t change everything. Application-agnostic data protection can offer some level of availability to databases, but database-aware high availability and disaster recovery features are vital for mission-critical database deployments in the cloud. Database-aware techniques provide superior continual database service availability during planned and unplanned outages.

The introduction of the cloud has changed the way many IT solutions are built. Gone is the need to fully control every aspect of the process to get great security, performance, and availability. Customers today are demonstrating that yes, it is possible to run even mission-critical applications in the cloud. However, not all techniques for deploying a mission-critical database in the cloud are created equal. Due to infrastructure limitations, application-agnostic storage-level snapshots, replication, and even virtual machine failover have become commonly accepted techniques to improve availability for generic cloud deployments. These fit well within most cloud infrastructures, where shared storage and redundant networks are not common and replicated storage across availability domains exists. They may be acceptable for non-critical application deployments, but do these replication solutions really do as good a job at high availability as the sophisticated clustering and data protection solutions offered by Oracle?

On the surface, all may look good. If your VM fails, you simply restart the database and service after a blackout of a couple of minutes. As is common practice in many clouds, you can replicate the database to storage in the same or a different availability domain. This protects from both server and instance failure, as well as from an availability domain becoming unavailable due to network- or power-related failures. It appears simple, but is it superior to a cluster solution which offers both great scalability to distribute your workload and superior online failure protection? What happens if there is block corruption and storage faithfully replicates that block to the standby? And under the surface, the reality is that minutes of downtime for your database often result in hours of downtime while restarting your entire solution stack.

Take clustering, for example. Traditional clustering solutions rely on cold failover, where the database is restarted on another node after a failure. These solutions provide simple high availability and don’t rely on replication. However, like replication solutions, it can take time to restart the database instance and the entire solution stack above it after a failure, plus the secondary resource is idle or cold when not in use.

Real Application Clusters builds upon cluster failover by running multiple active instances simultaneously against the same database files, providing both improved availability and scalability across nodes in the cluster. Because the instance and database service are already running on the surviving servers, there is no need to restart the database and mount the database files, thus maintaining continual database service availability. Recovery after a failover is fast, as existing connections stay connected and failed connections can automatically get notified and reconnect. More importantly, RAC provides a mechanism to maintain full database service availability for periodic software updates that include critical security fixes, operating system updates, and database software updates (PSUs). Instead of taking up to 2 hours of downtime per month for software updates, RAC in the Oracle cloud can enable zero-downtime software maintenance for almost all software updates.

Figure 1: A four-node RAC database cluster

Real Application Clusters has another benefit over storage replication and VM failover solutions that is not strictly related to high availability: RAC enables workloads to scale out over multiple servers. The largest servers in the Oracle Cloud provide 36 cores of power, which is equivalent to 72 vCPUs. Using 2-node RAC, you’ve now introduced fast physical failover and online patching, plus a compute capability that maxes out at 72 cores or 144 vCPUs. Adding more servers to the cluster allows you to increase processing capability even further. This extreme scalability eliminates the need to resort to other distributed database techniques to scale beyond a single server. Some of those techniques require custom application design, while in contrast, RAC scales off-the-shelf applications across servers.

Real Application Clusters can also combine with other Oracle database-integrated data protection techniques to provide a Maximum Availability Architecture (MAA). MAA extends the RAC benefits of elevated availability for local failures to larger systemic failures affecting the availability domain, while also protecting against data corruption. Using all these techniques together provides differentiated SLAs and high-performance database capabilities that will benefit and support the most demanding workloads. In the next posting on this blog, we will discuss in detail best practices on how to use Oracle’s Maximum Availability Architecture to best protect from disasters and data corruptions while reducing downtime for major upgrades. We invite you to come back and learn what other cloud vendors may not be telling you.


Devops for the database? What?!!

If you feel like this reading the title of this blog, you are not alone. CI/CD isn't such a new concept and is really the central point of every devops implementation. However, having a full-stack CI pipeline is not so easy, and the speed bumps are generally around the database. Typical challenges I hear from customers are:

- Dev database prep needs DBA intervention to clone
- Schema changes are typically applied manually by the DBA
- Deployment requires DB isolation / app cycling
- Application is generally ‘waiting’ for the DB to be ready
- No automated delivery pipeline possible (except when there are no DB changes)

In this multi-part series we will discuss building a full-stack continuous integration pipeline in the Oracle public cloud. Once set up, complex changes that involve the application and database tiers can propagate to integration testing on commits. This allows a completely automated build process and improves release frequency. Fig. 1 shows our proposed architecture.

1. Create a test master

The first and foremost problem we solve here is rapid provisioning of a test master. This can be a clone of a production database or a test database with seed data. Having their own dev environment is every developer's desire. Oftentimes, if the application is deployed over multiple schemas, this becomes more of a necessity than a luxury. Let's see how we can clone a production/test database to create a test master and then provision thin clones on demand which serve as dev/test copies.

Log into your Oracle cloud service console through https://cloud.oracle.com. Here's what the dashboard looks like.

Once you pick the database service, the console shows all of your provisioned database instances along with summary information. In this case, let's say we wish to clone the 'prod' database to create a test master that will serve as a source for all of my thin clones.

We start by provisioning a new service through the 'Create Service' button. Let's fill in the service details.
I will call the service 'TestMaster' and pick options as shown below.

Hit 'Next' to fill in the database details. All of this is pretty standard service creation stuff. The magic happens in the bottom-right portion, where you specify the source database backup to restore from. The Initialize Data From Backup section is the place where you specify what source database you want restored and its backup location. Provide the database ID of your source database, in this case 'prod'. To find the database ID of your source instance, query it on the source: select dbid from v$database;

We also need to provide the storage container where the backup is located, and the credentials for your backup cloud service. Even if 'prod' exists on-premises, as long as you ship a backup to the Oracle storage cloud, you can use this method to restore it as a new service. Backup containers are named in the format StorageAPIEndPoint/DomainName-ContainerName. To find your Storage API endpoint, domain name, or backup container name, log into your storage cloud service account or talk to your storage cloud service or domain administrator.

And finally, you need to provide the decryption key. This is mandatory, since you can only store encrypted backups in the Oracle backup cloud service. If you have an RMAN-encrypted backup, you need to type the key into the text box. If your source was encrypted using TDE keys, then zip up your wallet files and upload them by clicking the Edit button. If your source database exists in the database cloud service, then most likely the wallet is located in /u01/app/oracle/product/12.1.0/dbhome_1/admin/<SID>/xdb_wallet. In this case I zipped up the xdb_wallet folder to wallet.zip, downloaded it to my local machine, and then uploaded it into the UI as shown above.

Confirm your submission and that's it! In about 20-30 minutes your test master will be ready.

2. Thin provision linked clones

OK, so now that we have our point-in-time test master, we'll use it to create thin clones for developers.
We will first create a point-in-time snapshot of the underlying storage volume to baseline our clones, and then use the snapshot to provision clone instances.

Go back to the DB console and pick TestMaster. Click 'Admin' and select the 'Snapshots' tab. Any guesses how we can create a snapshot? Yes, the UI is pretty intuitive: you click the Create Snapshot button, provide a name, and you have a thinly provisioned snapshot of the underlying storage volume. Once the snapshot is provisioned, click the hamburger menu to its right to reveal the Create Service option, and use it to provision a new thin-cloned database service that uses the storage snapshot. The steps are similar to provisioning a new service, so I won't provide any screenshots here.

That's it! You just provisioned your first dev copy. Multiple such dev copies of the test master can be provisioned using the snapshot.

In the next post we'll see how a developer orchestrates a Java app server instance using the cloud stack manager, pulls code from the master git repository in the Oracle Developer Cloud Service, and starts coding their app and database changes. Until then, have fun playing around with the database cloud service. If you don't have an Oracle cloud account, you can always get a free trial at https://cloud.oracle.com/tryit


Removing APEX from the CDB in Oracle's Database Cloud Service

As a proof of concept this morning, I decided to remove APEX from the CDB and install it in a PDB to see what works/breaks after the fact. (I used a brand new, just-created DBaaS instance with a 16.1.3 or 16.1.5 DBaaS version label.) Here is what I did, and some of the steps might be redundant.

First step: stop Glassfish. As the oracle user, run:

dbaascli glassfish stop

For APEX, I started by getting the APEX 5.0.3 patch from support.oracle.com and installed that into the CDB. I then downloaded the complete APEX multi-language zip from OTN. Once that was SFTPed onto the server, I unzipped it and replaced $ORACLE_HOME/apex with this new 5.0.3 multi-language home.

Next, we uninstall APEX. cd into the new APEX home and run this script as sys:

@apxremov.sql

OK, all set: APEX is removed from the CDB. Now sqlplus into the DB as sys and change the container to the PDB I want:

alter session set container = PDB1;

and then install APEX:

@apexins.sql SYSAUX SYSAUX TEMP /i/

A few minutes later APEX was installed. I then followed the install guide for 5.0 found here, changed the admin password, and ran the ORDS config scripts. Just to note, all passwords I used here for APEX_PUBLIC_USER and the ORDS scripts were the SAME passwords I created the DBaaS instance with.

After APEX was configured, I moved the old images directory in the Glassfish docroot to i_old and put the 5.0.3 images/JS into a new i directory. I then started up Glassfish:

dbaascli glassfish start

and went to the DBaaS monitor URL and the APEX PDB1 URL... and everything was working as expected. Run into an issue? Please let me know!
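The images swap near the end can be sketched as follows. This is only a sketch: the docroot and kit paths below are assumptions, so substitute your Glassfish domain's actual docroot and the directory where you unzipped the 5.0.3 kit:

```shell
# Assumed paths, for illustration only:
DOCROOT=/u01/app/oracle/gfhome/docroot   # your Glassfish docroot
APEX_KIT=/tmp/apex                       # where the 5.0.3 zip was unzipped

cd "$DOCROOT"
mv i i_old                   # keep the old images directory as a fallback
cp -r "$APEX_KIT/images" i   # the 5.0.3 images/JS become the new /i/ directory
```

Keeping i_old around makes it trivial to roll back if the APEX pages render without styling.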


SCRIPT execution errors when creating a DBaaS instance with local and cloud backups - FIX

Some special characters in your OSS password can cause the backup configuration to fail when creating an Oracle Database Cloud Service (Database as a Service) instance if you specify Both Cloud Storage and Block Storage as the Backup Destination. Such failures occur when configuring the bkup assistant, during the “Configuring Oracle Database Server” phase. The log for these failures includes an error message that begins with the text “SCRIPT execution errors”. To circumvent this problem, follow these steps:
1. Create a service instance and specify Block Store Only as the Backup Destination.
2. After the service instance is created, apply the patch to fix this issue (as the oracle user):
$ curl -O https://storage.us2.oraclecloud.com/v1/dbcsswlibp-usoracle29538/dbaas_patch/bkup/dbaas_patch-21866900.sh
$ chmod 755 dbaas_patch-21866900.sh
$ ./dbaas_patch-21866900.sh
You should see the output:
Patching /var/opt/oracle/perl_lib/DBAAS/opc_installer.pm
Patch applied successfully.
Now delete the patch file:
$ rm dbaas_patch-21866900.sh
3. Change the backup destination to Both Cloud Storage and Block Storage by following the instructions in Changing the Backup Configuration to a Different Backup Destination.
Lastly, you could also change the password you use to create storage containers so that it has no special characters. This won't fix the root issue as the patch will, but it will let you create a DBaaS instance with both local and cloud backups if working at the Linux prompt is uncomfortable.
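If you'd rather check a candidate password up front, here is a hedged sketch. The post doesn't say exactly which characters break the bkup assistant, so this conservatively flags anything outside letters and digits; the function name is my own invention:

```shell
# Hedged sketch: flag OSS passwords containing non-alphanumeric characters.
# Assumption: the post does not enumerate the troublesome characters, so we
# conservatively flag anything outside A-Za-z0-9.
check_oss_password() {
  case "$1" in
    *[!A-Za-z0-9]*) echo "contains special characters" ;;
    *)              echo "ok" ;;
  esac
}

check_oss_password 'Welcome1'     # -> ok
check_oss_password 'W3l#come!'    # -> contains special characters
```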


July 2015 PSU now available for patching Database as a Service instances

The July 2015 PSU (Patch Set Update) is now available for patching existing service instances of Oracle Database Cloud Service (Database as a Service).

Before you apply the patch
Before you apply this patch, you need to update the cloud tooling on the service instance by following the instructions in Updating the Cloud Tooling on an Oracle Database Cloud Service (Database as a Service) Instance.

When you apply the patch
You can precheck and apply the July 2015 PSU without any issues, unless the service instance is running Oracle Database 12c at the January 2015 PSU or April 2015 PSU software level. In that case, the operation fails with an error indicating that prechecking failed. You can safely ignore this error and force application of the patch:

Using the Oracle Database Cloud Service console: after choosing the Patch option from the menu for the July 2015 Patch Set Update, enable the Force apply patch option before you click the Patch button in the Patch Service window.

Using the dbpatchm subcommand of the dbaascli utility: before applying the patch, set the value of the ignore_patch_conflict key to 1 in the /var/opt/oracle/patch/dbpatchm.cfg patching configuration file; for example: ignore_patch_conflict=1
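If you want to script that config change rather than edit the file by hand, a minimal sketch follows. The real file lives at /var/opt/oracle/patch/dbpatchm.cfg; the sketch works on a throwaway copy (with a made-up starting value) so the technique can be tried anywhere:

```shell
# Hedged sketch: force ignore_patch_conflict=1 in the dbaascli patching config.
# /tmp/dbpatchm.cfg stands in for /var/opt/oracle/patch/dbpatchm.cfg.
CFG=/tmp/dbpatchm.cfg
printf 'ignore_patch_conflict=0\n' > "$CFG"   # pretend this is the existing file

# Replace the key if present, otherwise append it.
if grep -q '^ignore_patch_conflict=' "$CFG"; then
  sed -i 's/^ignore_patch_conflict=.*/ignore_patch_conflict=1/' "$CFG"
else
  echo 'ignore_patch_conflict=1' >> "$CFG"
fi

cat "$CFG"   # shows the forced value
```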


Upgrading Your 12c DBaaS Instance to APEX 5

APEX 5 is out, and I bet you want to start using it as soon as possible. This post will guide you through upgrading your 12c DBaaS instance to APEX 5. (Just follow the APEX 5 upgrade docs for 11gR2 cloud instances.) There are a few challenges to overcome but nothing too difficult; the APEX docs spell it out here. I am going to assume a few things for this guide. (We all know what assuming does.) 1. You are on 12.1.0.2.1 or 12.1.0.2.2. 2. You still have APEX installed in the CDB. If the above is true, you are going to need the following to start: 1. APEX 5, downloaded from otn.oracle.com. 2. Patch #20618595 (for a 12.1.0.2.2 database). (The following sections should be done as the oracle user.) OK, let's go. Start by using scp/sftp to get these 2 files to your DBaaS instance. We can put them into the /tmp directory on the cloud instance. Now, unzip the patch file to this directory: unzip p20618595_121020_Linux-x86-64.zip Then cd into the new patch directory: cd 20618595 To apply this patch, the database and its listeners have to be shut down. There are multiple ways to do this, so whichever way you feel most comfortable with is best. Once the database and listener are down, run the following: $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./ This is going to check for patch conflicts. If this does not return with Prereq "checkConflictAgainstOHWithDetail" passed. OPatch succeeded. you need to stop what you are doing. Continuing may do bad things to your database... bad things... Everything look OK? Then we can start patching. Run the following to start the patch: opatch apply If the patch applied successfully, start up the database and the listener. We now need to upgrade all PDBs with the new patch we applied. This is done with a utility called datapatch. You can simply run this utility by issuing the following command at the Linux prompt: datapatch OK: patch applied, database and listener up and running, and all PDBs patched up. We can now start on our APEX upgrade.
Start by unzipping APEX; this will result in an apex directory. Let's do some maintenance and move the old apex directory aside, replacing it with the new one. Change the Linux directory to the Oracle Home: cd $ORACLE_HOME Now rename the APEX 4.2 home to apex_old: mv apex apex_old Now move the APEX 5 home into the Oracle Home: mv /tmp/apex $ORACLE_HOME Move into the new apex directory: cd $ORACLE_HOME/apex Start up sqlplus as sys: sqlplus / as sysdba and now start the upgrade... but wait! More than likely your SSH session is going to time out during the upgrade. You can change the timeout parameters on the VM; check the end of this post for a suggestion. After extending the timeouts, continue with the upgrade: @apexins.sql SYSAUX SYSAUX TEMP /i/ Once the upgrade finishes, check the log files for errors by issuing the following commands at the Linux prompt: grep ORA- *.log grep PLS- *.log If you see no errors, the next step is to replace the images directory. Change directory to the GlassFish docroot: cd /u01/app/oracle/product/glassfish3/glassfish/domains/domain1/docroot Move the old images directory to i_old: mv i i_old Create an i directory: mkdir i Move into the i directory: cd i and copy the images from the apex directory to here: cp -rf $ORACLE_HOME/apex/images/* . And that's it: going to your APEX URL on the cloud will bring up the new APEX 5 UI. Upgrade complete! (Special thanks to Jason Straub for helping out with the upgrade steps.) ***** Extending your SSH session timeout ***** This should only be a temporary edit. Log into your cloud VM as opc, then change directory to the ssh directory: cd /etc/ssh Make a copy of the ssh config file: sudo cp sshd_config sshd_config.save Edit the file as root: sudo vi sshd_config Change the parameters at the end of the file, or replace them with the following: TCPKeepAlive yes ClientAliveCountMax 99999 ClientAliveInterval 30 Save the file and restart ssh: sudo service sshd restart


Connecting to a Database Cloud Service with SQL Developer and SSH

**You will need SQL Developer 4.1 for Connecting to a Cloud 12c DBaaS PDB** When you create a DBaaS instance, by default the only port open is 22, the SSH port. Remember, these instances are on the public internet, so security is a priority. Personally, the only 2 ports I open on my DBaaS instances are 22 and 443 (for APEX access). So if we need SQL*Net access, what do we do? We create a tunnel over SSH for SQL*Net in SQL Developer. Find your Public IP for a DBaaS Instance Let's go through creating a connection, but before this we need to gather some information. First, we need the public IP of our DBaaS instance. That address can be found on your database details page. Log into your Public Cloud account and open the Database Cloud Service page. On this page we can see all of our created instances. Click on the instance you want to connect to; this brings up the database details page. Here we can see the SID, the PDB name, and the resources used, as well as the public IP. Make note of the public IP and the PDB1 connect descriptor for our SQL Developer connection. Get the Key Now that we have the public IP, we need one more item for our secured connection: the private key. When you created a DBaaS instance, you created a public and private key pair. The public key was used on instance creation; the private key will be used to connect to the instance at the OS level. We need this key for our SQL Developer connection. We need to add an SSH host. In the toolbar across the top, choose View, then select SSH. A new SSH view should appear. Now right-click SSH Hosts and add a host. Fill in the details for your SSH host. Now add an SSH key: start by clicking the Use Key File checkbox, then click the file browse button and add a key. Next, continue in the modal and create a New Local Port Forward. Name your port forward and ensure the port is 1521. When done, click OK. The SSH view should look like the following. We now need to alter our SQL*Net connection.
Edit the connection details of your database connection. Start by changing the connection type to SSH. Then, in the Port Forward section, select your SSH connection. When finished, save the connection and hit the Test button to check it. You have now created a SQL Developer connection that uses SSH to tunnel into your DBaaS instance for SQL*Net.
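For reference, the tunnel SQL Developer builds here is equivalent to a plain OpenSSH local port forward. A sketch of the manual command, where the key file name and IP address are placeholders for your own values:

```shell
# Placeholders: privateKey is your key file, 129.152.x.x is the public IP
# from the database details page. Forwards local port 1521 to port 1521
# on the DBaaS VM over SSH, logging in as the opc OS user.
ssh -i privateKey -N -L 1521:localhost:1521 opc@129.152.x.x
```

With the tunnel up, any local tool can reach the PDB listener at localhost:1521.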


Getting Started with the Database Cloud Service

I thought I'd start off this blog with how to get started with your DBaaS account. So you just got the credentials for your brand new DBaaS account in your email. Super! Now what? To start, go to cloud.oracle.com and click the Sign In button in the upper right of the page. On the next page, in the upper left, is the My Services panel. Use the select list to select Public Cloud Services, then click the Sign in to My Services button just below it. Now we fill out the Identity Domain. It is usually something meaningful to you, like your company name followed by numbers, something like ORACLE1234567. It's found in your welcome email as well. Next you will be asked to sign in. Use the credentials in your email for username and password, then click Sign In. Next is the good stuff, what you have been waiting for: your cloud account! (Actually, you will probably be asked to change your password and answer some security questions, but after that is the good stuff.) You are now on the My Services Dashboard. Here we can see info about your cloud databases, backup service, Java service and compute resources. (Some of these may or may not be visible depending on what you have purchased.) On the Database Cloud Service panel, click the Open Service Console link (upper right) to get to the DBaaS console. Here we can create and delete cloud databases, view information about our created databases and even patch, yes patch, databases right from this console. Let's create a database. To start, click the Create Instance button on the right of the page. This brings up the Create Instance wizard with its steps laid out in a train. The first stop on the train is the Subscription step. Here we choose what type of database we want and the billing frequency. Choose Oracle Database Cloud Service and whatever billing frequency you wish. I'm going to choose Monthly.
The Oracle Database Cloud Service - Virtual Image option just sets up a VM for you: no database, no backup/recovery, no cool patching from the UI. We are going to ignore that option for now. After selecting the billing frequency, click Next (upper right). Next up, Software Release. 12c rocks, so there is no question which option to choose here. Just in case: choose 12c and click Next. On the Software Edition page, we choose what edition of the DBaaS 12c instance we want. To break it down a bit further for you, Enterprise Edition is just regular Enterprise Edition without the options. High Performance is nearly all the options, except In-Memory Database, Active Data Guard and RAC, with Extreme Performance being everything. After you have made your edition choice, click Next. Let's take the Service Details page one panel at a time. In the upper left is Instance Configuration. Here we provide the wizard with the instance name and a description. Next we select a compute shape. Compute shapes come in 2 flavors, regular and high memory. The high-memory options have twice as much memory as the regular options and the same number of OCPUs. Choose from 1 OCPU and 7.5 GB of memory all the way to 16 OCPUs and 240 GB of memory (can it play Doom?). Just a note: you can always change the shape in the DBaaS console later on, after the instance has been created. The last attribute here is the VM Public Key. Every DBaaS instance is created with a public key that needs the corresponding private key to connect. Upon instance creation, only port 22 (SSH) is open and you need the private key to connect. Keys are really easy to make; this link will step you through creating a key pair. We have also included a pair here for you to get started with, but I recommend creating your own. On the page, click the edit button, then for Key file name use the file browse button to find the key on your desktop and upload it. When finished, click the Enter button.
In the panel just below is Database Configuration. For the sake of time, let's leave most of the default values but provide a password here. This password will be used for all the services and logins for this instance (APEX admin, GlassFish admin, dbaas_monitor user, sys, etc.). Please don't use 123456 or manager. (Maybe 1234567 or manager1? I kid, don't use those either.) Also, as stated, failover database is coming soon. Once you have chosen a secure, hard-to-guess password, look at the Backup and Recovery Configuration panel in the upper right. The first attribute, Backup Destination, lets us choose where our backups go and whether we really want backups at all. This could be a development instance where you don't want backups, because you are not going to put anything into it that you'd need to recover later, or you are using it to test an application and all the data is pre-created with scripts you have. On the other hand, this could be an instance where you care what happens to the data and objects created within. We have the ability to take backups using block (local) storage only, or both block and cloud storage. If you have a cloud storage account (you can get a trial here), provide the storage container for the Cloud Storage Container attribute. The syntax is: <storage_service_name>-<identity_domain_name>/<container_name> So for my service, I would use mystorage-oracle1234567/myContainer. Creating a container can be done through curl via a REST service. I'll jump-start you by including the following curl commands to connect and create a container. To connect: curl -v -X GET -H 'X-Storage-User: storage_service_name-identity_domain_name:USERNAME' -H 'X-Storage-Pass: PASSWORD' https://storage.us2.oraclecloud.com/auth/v1.0 Remember to substitute your storage service name and identity domain, as well as the username and password, in that command.
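To tie the container naming rule together, here is a tiny sketch that composes the Cloud Storage Container value from its parts. The names below are the same example values used in this post, not real accounts:

```shell
# Compose <storage_service_name>-<identity_domain_name>/<container_name>.
# On a metered subscription, STORAGE_SERVICE would literally be "Storage".
STORAGE_SERVICE=mystorage       # example storage service name
IDENTITY_DOMAIN=oracle1234567   # example identity domain
CONTAINER=myContainer           # example container name

echo "${STORAGE_SERVICE}-${IDENTITY_DOMAIN}/${CONTAINER}"
# -> mystorage-oracle1234567/myContainer
```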
The response to that connect call will contain a token in the format AUTH_XXXXXXXXXXXXXX, with X being random characters. You will need to use this token in your subsequent calls. To create a container: curl -v -X PUT -H 'X-Auth-Token: AUTH_XXXXXXXXXXXXX' https://storage.us2.oraclecloud.com/v1/storage_service_name-identity_domain_name/myNewContainer Again, substitute the appropriate values above. Help using the REST APIs for the storage containers can be found here. The username and password are the ones for your storage account, or are the same as for the database cloud service; check your Services Dashboard for the presence of the Oracle Cloud Storage Service. A last note on this: storage_service_name might be the literal word Storage, depending on what type of storage you have (metered or non-metered). A metered subscription will use Storage; non-metered will use your storage_service_name. I have a metered subscription, so my form will look like the following. After filling out these 3 sections, click Next for a final review page. When ready, click the Create button in the upper right. Now take a break and have a snack; in about 30 minutes you will have a fully functional 12c database in the cloud with backup/recovery configured, SSH enabled, APEX installed, a specially created DBaaS monitor, and much, much more. Congratulations, you have just created your first 12c cloud database in Oracle's Public Cloud! Next up: connecting to the new cloud instance via SQL Developer over SSH.
